
Search Results


  • The Most Important Metric of Little's Law Isn't In The Equation

    This is post 6 of 9 in our Little's Law series. As we discussed in the previous post, a thorough understanding of what it means to violate each of the assumptions of Little's Law (LL) is key to the optimization of your delivery process. So let's take a minute to walk through each of those in a bit more detail. The first thing to observe about the assumptions is that #1 and #3 are logically equivalent. I'm not sure why Dr. Little calls these out separately because I've never seen a case where one is fulfilled but the other is not. Therefore, I think we can safely treat those two as the same. But more importantly, you'll notice what Little is not saying here with either #1 or #3. He is making no judgment about the actual amount of WIP that is required to be in the system. He says nothing of less WIP being better or more WIP being worse. In fact, Little couldn't care less. All he cares about is that WIP is stable over time. So while having arrivals match departures (and thus unchanging WIP over time) is important, that tells us *nothing* about whether we have too much WIP, too little WIP, or just the right amount of WIP. Assumptions #1 and #3, therefore, while important, can be ruled out as *the* most important. Assumption #2 is one that is frequently ignored. In your work, how often do you start something but never complete it? My guess is the number of times that has happened to you over the past few months is something greater than zero. Even so, while this assumption is again of crucial importance, it is usually the exception rather than the rule. Unless you find yourself in a context where you are always abandoning more work than you complete (in which case you have much bigger problems than LL), this assumption will also not be the dominant reason why you have a suboptimal workflow. This leaves us with assumption #4. Allowing items to age arbitrarily is the single greatest factor as to why you are not efficient, effective, or predictable at delivering customer value. Stated a different way, if you plan to adopt the use of flow metrics, the single most important aspect that you should be paying attention to is not letting work items age unnecessarily! More than limiting WIP, more than visualizing work, more than finding bottlenecks (which is not necessarily a flow thing anyway), the only question to ask of your flow system is, "Are you letting items age needlessly?" Get aging right and most of the rest of predictability takes care of itself. As this is a blog series about Little's Law, getting into the specifics of how to manage item aging is a bit beyond our remit, but thankfully Julia Wester has done an excellent job of giving us an intro to how you might use ActionableAgile Analytics for this goal. To me, one of the strangest results in all of flow theory is that the most important metric to measure is not really stated in any equation (much less Little's Law). While I always had an intuition that aging was important, I never really understood its relevance. It wasn't until I went back and read the original proofs and subsequent articles by Little and others that I grasped its significance. You'll note that other than the Kanban Guide, all other flow-based frameworks do not even mention work item aging at all. Kinda makes you wonder, doesn't it? Having now explored the real reasons to understand Little's Law (e.g., pay attention to aging and understand all the assumptions), let's now turn our attention to some ways in which Little's Law should NOT be used. 
Explore all entries in this series When an Equation Isn't Equal A (Very) Brief History of Little's Law The Two Faces of Little's Law One Law. Two Equations It's Always the Assumptions The Most Important Metric of Little's Law Isn't In the Equation (this article) How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
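Since the post above singles out Work Item Age as the one metric to watch, here is a minimal sketch of how you might compute it from nothing more than each in-progress item's start date, flagging anything that has aged past a chosen percentile of historical Cycle Times. The item IDs, dates, naive percentile helper, and 85% threshold are illustrative assumptions rather than anything prescribed by the post or by ActionableAgile.

```python
from datetime import date

# Illustrative data: start dates of items still in progress (assumed IDs and dates).
in_progress = {
    "ITEM-101": date(2023, 3, 1),
    "ITEM-102": date(2023, 3, 7),
    "ITEM-103": date(2023, 3, 9),
}

# Historical Cycle Times (days) of finished items, used only to pick a reference threshold.
historical_cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13]

def percentile(values, pct):
    """Naive inclusive percentile: smallest value with at least pct% of the data at or below it."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def work_item_age(start, today):
    """Age counts every calendar day the item has been in progress, including the start day."""
    return (today - start).days + 1

today = date(2023, 3, 15)
threshold = percentile(historical_cycle_times, 85)  # assumed policy: flag items older than the 85th percentile

for item, start in in_progress.items():
    age = work_item_age(start, today)
    flag = "REVIEW - aging past threshold" if age > threshold else "ok"
    print(f"{item}: age {age} days (threshold {threshold} days) -> {flag}")
```

The inclusive "+ 1" mirrors the Cycle Time convention described elsewhere on this blog, so an item started today already has an age of one day.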

  • It's Always The Assumptions

    This is post 5 of 9 in our Little's Law series. Not to get too morbid, but in police detective work, when a married woman is murdered, there are only three rules to determine who the killer is:
1. It's always the husband
2. It's always the husband
3. It's always the husband
The same thing is true when your flow metrics are murdered by your process:
1. It's always the assumptions
2. It's always the assumptions
3. It's always the assumptions
Think back to the first experiment I had you run at the start of this blog series. I had you look at your data, do some calculations, and determine if you get the results that Little's Law predicts. I even showed you some example data of a real process where the calculated metrics did not yield a valid Little's Law result. I asked you at the time, "What's going on here?" If you've read my last post, then you now have the answer. The problem isn't Little's Law. The problem is your process.
The Throughput form of Little's Law is based on five basic assumptions. Break any one or more of those assumptions at any one or more times, and the equation won't work. It's as simple as that. For convenience for the rest of this discussion, I'm going to re-list Little's assumptions for the Throughput form of his law. Also, for expediency, I am going to number them, though this numbering is arbitrary and is in no way meant to imply an order of importance (or anything else for that matter):
1. Average arrival rate equals average departure rate
2. All items that enter a workflow must exit
3. WIP should neither be increasing nor decreasing
4. Average age of WIP is neither increasing nor decreasing
5. Consistent units must be used for all measures
In that earlier post, I gave this example from a team that I had worked with (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3. For this data, WIP / TH is 16.99, not 10.3. What that tells us is that at one or more points during that 60-day time frame, this team violated one or more of Little's Law's assumptions at least one or more times. One of the first pieces of detective work is to determine which ones were violated and when.
Almost always, a violation of Little's Law comes down to your process's policies (whether those policies are explicit or not). For example, does your process call for expedites that are allowed to violate WIP limits and that take priority over other existing work? If so, for each expedited item you had during the 60 days, you violated at least assumptions #3 and #4. Did you have blockers that you ignored? If so, then you at least violated #4. Did you cancel work and just delete it off the board? If so, then you violated #2. And so on.
This was quite possibly the easiest post to write in this series -- but probably the most important one. A very quick and easy health check is to compare your calculated flow metrics with those that are calculated by Little's Law. Are they different? If so, then somewhere, somehow, you have violated an assumption. Now your detective work begins. Do you have process policies that are in direct contradiction to Little's Law's assumptions? If so, what changes can you make to improve stability/predictability? Do you have more ad hoc policies that contradict Little? If so, how do you make them more explicit so the team knows how to respond in certain situations? The goal is not to get your process perfectly in line with Little. The goal is to have a framework for continual improvement. Little is an excellent jumping-off point for that.
Speaking of continual improvement, when it comes to spotting improvement opportunities as soon as possible, there is one assumption above that is more important than all of the others. If you have followed my work up until now, then you know what that assumption is. If not, then read on to the next post... Explore all entries in this series When an Equation Isn't Equal A (Very) Brief History of Little's Law The Two Faces of Little's Law One Law. Two Equations It's Always the Assumptions (this article) The Most Important Metric of Little's Law Isn't In the Equation How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
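To make the "detective work" above a little more concrete, here is a minimal sketch of an assumption audit over a window of start/end dates: it looks for WIP drift between the start and end of the window (assumption #3), items that entered but never exited (assumption #2), and a growing average age of WIP (assumption #4). The record layout, dates, and simple day-counting conventions are assumptions made for illustration.

```python
from datetime import date

# Illustrative records: (start_date, end_date); an end_date of None means the item never finished.
records = [
    (date(2023, 1, 2), date(2023, 1, 6)),
    (date(2023, 1, 3), date(2023, 1, 10)),
    (date(2023, 1, 5), None),              # started but never completed -> violates assumption #2
    (date(2023, 1, 9), date(2023, 1, 20)),
    (date(2023, 1, 16), date(2023, 1, 27)),
    (date(2023, 1, 23), None),
]

window_start, window_end = date(2023, 1, 1), date(2023, 3, 1)

def wip_on(day):
    """Count items started on or before `day` that had not finished by `day`."""
    return sum(1 for s, e in records if s <= day and (e is None or e > day))

def avg_age_on(day):
    """Average age (days, inclusive) of the items in progress on `day`."""
    ages = [(day - s).days + 1 for s, e in records if s <= day and (e is None or e > day)]
    return sum(ages) / len(ages) if ages else 0.0

# Assumption #3: WIP should be roughly the same at the start and end of the window.
print("WIP at window start:", wip_on(window_start), "| WIP at window end:", wip_on(window_end))

# Assumption #2: every item that entered should eventually exit.
abandoned = sum(1 for s, e in records if e is None)
print("Items started but never finished:", abandoned)

# Assumption #4: average age of WIP should be neither growing nor shrinking.
midpoint = window_start + (window_end - window_start) // 2
print("Avg WIP age at window midpoint:", round(avg_age_on(midpoint), 1),
      "| at window end:", round(avg_age_on(window_end), 1))
```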

  • What's the Tallest Mountain On Earth?

    If, like most everyone else, you answered, "Mount Everest," then you are not quite right. But you are not quite wrong, either. The real answer has to do with a concept I wrote about in an earlier blog post. Scientists can all objectively agree where mountains "finish". That is, it's extremely hard to argue about where a mountain "peaks". But when measuring, we know that "finished" is only half the battle. Agreeing where a mountain "starts" is a whole other conversation altogether -- and not nearly as straightforward as it may sound. For example, more than half of the Mauna Kea volcano in Hawaii is underwater. Only 4,205 meters of the whole mountain is above sea level. But if we measure from the base to the summit of Mauna Kea, it is 10,211 meters -- that's about 20% taller than Everest's 8,848 meters. If you only want to talk about mountains on land, then, base-to-summit, Denali in Alaska is actually taller (5,900m) than Everest base-to-summit (4,650m).
So why does Everest get the crown? The reason is that most scientists choose to start their measurements of mountain heights from a concept known as sea level. But the problem with sea level is that anyone who has studied geography knows that the sea ain't so level. The physics of the earth are such that different densities of the earth's makeup at different locations cause different gravitational pulls, resulting in "hills and valleys" of sea level across the planet (the European Space Agency has an outstanding visualization of this). Add to that tides, storms, wind, and a bulge around the equator due to the earth's rotation, and there is no one true level for the sea. Scientists cheat to solve this problem by calculating a "mean" (arithmetic mean or average) sea level. This "average" sea level represents the zero starting point at which all land mountains are measured (cue the "Flaw of Averages"). You might ask, why don't we choose a more rigorous starting point like the center of the earth? The reason for that is... remember that bulge around the equator that I just alluded to? The earth itself is not quite spherical, and the distance from its center at the equator is longer than the distance from the center to either the north or south pole. In case you were wondering, if we were to measure from the center of the earth, then Mount Chimborazo in Ecuador would win.
It seems that geologists fall prey to the same syndrome that afflicts most Agile methodologies. A bias toward defining only when something is "done" ignores half of the equation -- and the crucial half at that. What's more, you have Agilists out there who actively rant against any notion of a defined "start" or "ready". What I hope to have proven here is that, in many instances, deciding where to start can be a much more difficult (and usually much more important) problem to solve, depending on what question you are trying to answer. At the risk of repeating myself, a metric is a measurement, and any measurement contains BOTH a start point AND a finish point. Therefore, begin your flow data journey by defining the start and end points in your process. Then consider updating those definitions as you collect data and as your understanding of your context evolves. Anything else is just theatre.
References PBS.org, "Be Smart", Season 10, Episode 9, 08/10/2022 The European Space Agency, https://www.esa.int/
About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?"
and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
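To connect the mountain analogy back to flow data, here is a tiny sketch showing how the same finish point yields very different Cycle Times depending on which start point you choose -- just as the same summit yields different heights depending on the chosen base. The stage names and dates are invented for illustration.

```python
from datetime import date

# One work item with several candidate "start" points and a single finish point (invented dates).
timestamps = {
    "created":   date(2023, 5, 1),
    "committed": date(2023, 5, 9),
    "started":   date(2023, 5, 12),
    "finished":  date(2023, 5, 18),
}

def elapsed_days(start, finish):
    """Inclusive day count, matching the CT = finished - started + 1 convention used elsewhere on this blog."""
    return (finish - start).days + 1

for start_point in ("created", "committed", "started"):
    ct = elapsed_days(timestamps[start_point], timestamps["finished"])
    print(f"Cycle Time measured from '{start_point}': {ct} days")
```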

  • The Deviance of Standard Deviation

    Before getting too far into this post, there are two references that do a far better job than I ever will at explaining the deficiency of the standard deviation statistic: "The Flaw of Averages" by Dr. Sam Savage (https://www.flawofaverages.com/), and pretty much anything written by Dr. Donald Wheeler (spcpress.com).
Why is the standard deviation so popular? Because that's what students are taught. It's that simple. Not because it is correct. Not because it is applicable in all circumstances. It is just what everyone learns. Even if you haven't taken a formal statistics class, somewhere along the line, you were taught that when presented with a set of data, the first thing you do is calculate an average (arithmetic mean) and a standard deviation. Why were we taught that? It turns out there's not a really good answer to that. An unsatisfactory answer, however, would involve the history of the normal distribution (Gaussian) and how over the past century or so, the Gaussian distribution has come to dominate statistical analysis (its applicability--or, rather, inapplicability--for this purpose would be a good topic for another blog, so please leave a comment letting us know your interest). To whet your appetite on that topic, please see Bernoulli's Fallacy by Aubrey Clayton.
Arithmetic means and standard deviations are what are known as descriptive statistics. An arithmetic mean describes the location of the center of a given dataset, while the standard deviation describes the data's dispersion. For example, say we are looking at Cycle Time data and we find that it has a mean of 12 and a standard deviation of 4.7. What does that really tell you? Well, actually, it tells you almost nothing--at least almost nothing that you really care about. The problem is that in our world, we are not concerned so much with describing our data as we are with doing proper analysis on it. Specifically, what we really care about is being able to identify possible process changes (signal) that may require action on our part. The standard deviation statistic is wholly unsuited to this pursuit. Why?
First and foremost, the nature of how the standard deviation statistic is calculated makes it very susceptible to extreme outliers. A classic joke I use all the time is: imagine that the world's richest person walks into a pub. The average wealth of everyone in the pub is somewhere in the billions, and the standard deviation of wealth in the pub is somewhere in the billions. However, you know that if you were to walk up to any other person in the pub, that person would not be a billionaire. So what have you really learned from those descriptive statistics?
This leads us to the second deficiency of the standard deviation statistic. Whenever you calculate a standard deviation, you are making a big assumption about your data (recall my earlier post about assumptions when applying theory?). Namely, you are making an assumption that all of your data has come from a single population. This assumption is not talked about much in statistical circles. According to Dr. Wheeler, "The descriptive statistics taught in introductory classes are appropriate summaries for homogeneous collections of data. But the real world has many ways of creating non-homogeneous data sets." (https://spcpress.com/pdf/DJW377.pdf). In our pub example above, is it reasonable to assume that we are talking about a single population of people's wealth that shares the same characteristics?
Or is it reasonable that some signal exists as evidence that one particular data point isn't routine? Take the clichéd probability example of selecting marbles from an urn. The setup usually concerns a single urn that contains two different coloured marbles--say red and white--in a given ratio. Then some question is asked, like if you select a single marble, what is the probability it will be red? The problem is that in the "real world," your data is not generated by choosing different coloured marbles from an urn. Most likely, you don't know if you are selecting from one urn or several urns. You don't know if your urns contain red marbles, white marbles, blue marbles, bicycles, or tennis racquets. Your data is generated by a process where things can--and do--change, go wrong, encompass multiple systems, etc. It is generated by potentially different influences under different circumstances with different impacts. In those situations, you don't need a set of descriptive statistics that assume a single population. What you need to do is analyze your data to find evidence of a signal of multiple (or changing) populations. In Wheeler's nomenclature, what you need to do is first determine if your data is homogeneous or not.
Now, here's where proponents of the standard deviation statistic will say that to find signal, all you do is take your arithmetic mean and start adding standard deviations to it or subtracting them from it. For example, they will say that roughly 99.7% of all data should fall within your mean plus or minus 3 standard deviations. Thus, if you get a point outside of that, you have found signal. Putting aside for a minute the fact that this type of analysis ignores the assumptions I just outlined, this example brings into play yet another dangerous assumption of the standard deviation. When starting to couple percentages with a standard deviation (like 68.2%, 95.5%, 99.7%, etc.), you are making another big assumption that your data is normally distributed. I'm here to tell you that most real-world process data is NOT normally distributed.
So what's the alternative? As a good first approximation, a great place to start is with the percentile approach that we utilize with ActionableAgile Analytics (see, for example, this blog post). This approach makes no assumptions about single populations, underlying distributions, etc. If you want to be a little more statistically rigorous (which at some point you will want to be), then you will need the Process Behaviour Chart advocated by Dr. Donald Wheeler as a continuation of Dr. Walter Shewhart's work. A deeper discussion of the Shewhart/Wheeler approach is a whole blog series on its own that, if you are lucky, may be coming to a blog site near you soon.
So, to sum up, the standard deviation statistic is an inadequate tool for data analysis because it: is easily influenced by outliers (which your data probably has); often assumes a normal distribution (which your data doesn't follow); and assumes a single population (which your data likely doesn't possess). Any analysis performed on top of these flaws is almost guaranteed to be invalid.
One last thing. Here's a quote from Atlassian's own website: "The standard deviation gives you an indication of the level of confidence that you can have in the data. For example, if there is a narrow blue band (low standard deviation), you can be confident that the cycle time of future issues will be close to the rolling average."
There are so many things wrong with this statement that I don't even know where to begin. So please help me out by leaving some of your own comments about this on the 55 Degrees community site. Happy Analysis! About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
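As a rough illustration of the outlier problem described above, here is a minimal sketch that contrasts mean ± 3 standard deviations with a simple percentile summary on a fabricated Cycle Time sample containing one extreme value. The data and the naive percentile helper are assumptions for illustration; they are not how ActionableAgile computes its percentiles.

```python
from statistics import mean, stdev

# Fabricated Cycle Times (days) with one extreme outlier.
cycle_times = [3, 4, 4, 5, 5, 6, 6, 7, 8, 9, 10, 11, 12, 90]

avg = mean(cycle_times)
sd = stdev(cycle_times)  # sample standard deviation; heavily pulled by the single 90-day item
print(f"mean = {avg:.1f} days, standard deviation = {sd:.1f} days")
print(f"mean + 3 sd = {avg + 3 * sd:.1f} days (an 'upper limit' far beyond what the team normally sees)")

def percentile(values, pct):
    """Naive inclusive percentile: smallest value with at least pct% of the data at or below it."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

# A percentile summary describes the same data without assuming a normal distribution or a single population.
for pct in (50, 85, 95):
    print(f"{pct}th percentile = {percentile(cycle_times, pct)} days")
```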

  • One Law. Two Equations.

    This is post 4 of 9 in our Little's Law series. In the previous post, we demonstrated how the two different forms of Little's Law (LL) can lead to two very different answers even when using the same dataset. How can one law lead to two answers? As was suggested, the applicability of any theory depends completely on one's understanding of the assumptions that need to be in place in order for that given theory to be valid. However, in the case of LL, we have two different equations that purport to express one single theory. Does having two equations require having two sets of assumptions (and potentially two types of applicability)? In a word, yes.
Recall that L = λW (the version based on arrival rate) came first, and in his 1961 proof, Little stated his assumptions for the formula to be correct: "if the three means are finite and the corresponding stochastic process strictly stationary, and, if the arrival process is metrically transitive with nonzero mean, then L = λW." There's a lot of mathematical gibberish in there that you don't need to know anyway because it turns out Little's initial assumptions were overly restrictive, as was demonstrated by subsequent authors (reference #1). All you really need to know is that--very generally speaking--LL is applicable to any process that is relatively stable over time [see note below]. For our earlier thought experiment, I took this notion of stability to an extreme in order to (hopefully) prove a point. In the example data I provided, you'll see that arrivals are infinitely stable in that they never change. In this ultra-stable world, you'll note that the arrivals form of LL works--quite literally--exactly the way that it should. That is to say, when you plug two numbers into the equation, you get the exact answer for the third.
Things change dramatically, however, when we start talking about the WIP = TH * CT version of the law. Most people assume--quite erroneously--that this latter form of LL only requires the same assumptions as the arrivals version. However, Dr. Little is very clear that changing the perspective of the equation from arrivals to departures has a very specific impact on the assumptions that are required for the law to be valid. Let's use Little's own words for this discussion: "At a minimum, we must have conservation of flow. Thus, the average output or departure rate (TH) equals the average input or arrival rate (λ). Furthermore, we need to assume that all jobs that enter the shop will eventually be completed and will exit the shop; there are no jobs that get lost or never depart from the shop...we need the size of the WIP to be roughly the same at the beginning and end of the time interval so that there is neither significant growth nor decline in the size of the WIP, [and] we need some assurance that the average age or latency of the WIP is neither growing nor declining." (reference #2)
Allow me to put these in a bulleted list that will be easier for your reference later. For a system being observed for an arbitrarily long amount of time:
- Average arrival rate equals average departure rate
- All items that enter a workflow must exit
- WIP should neither be increasing nor decreasing
- Average age of WIP is neither increasing nor decreasing
- Consistent units must be used for all measures
I added that last bullet point for clarity. It should make sense that if Cycle Time is measured in days, then Throughput cannot be measured in weeks.
And don't even talk to me about story points. If you have a system that obeys all of these assumptions, then you have a system in which the TH form of Little's Law will apply. Wait, what's that you say? Your system doesn't follow these assumptions? I'm glad you pointed that out because that will be the topic of our next post.
A note on stability
Most people have an incorrect notion of what stability means. "Stable" does not necessarily mean "not changing." For example, Little explicitly states aspects of a system that L = λW is NOT dependent on and, therefore, may reasonably change over time: size of items, order of items worked on, number of servers, etc. That means situations like adding or removing team members over time may not be enough to consider a process "unstable." However, to take an extreme example, it would be easy to see that all of the restrictions/changes imposed during the 2020 COVID pandemic would cause a system to be unstable. From an LL perspective, only when all 5 assumptions are met can a system reasonably be considered stable (assuming we are talking about the TH form of LL).
References
Whitt, W. 1991. A review of L = λW and extensions. Queueing Systems 9(3) 235–268.
Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science + Business Media LLC, New York.
Explore all entries in this series When an Equation Isn't Equal A (Very) Brief History of Little's Law The Two Faces of Little's Law One Law. Two Equations (this article) It's Always the Assumptions The Most Important Metric of Little's Law Isn't In the Equation How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
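Because the last bullet in the list above (consistent units) is the easiest one to break silently, here is a tiny sketch of what goes wrong when Throughput per week is multiplied by a Cycle Time in days, and how aligning the units fixes it. All the numbers, including the assumed 5-day working week, are invented for illustration.

```python
# Invented averages for a process observed over several weeks.
avg_wip = 14.0              # items
avg_cycle_time_days = 7.0   # days
avg_throughput_week = 10.0  # items per WEEK -- not per day

# Mixed units: WIP = TH * CT with weekly Throughput and daily Cycle Time does not reproduce the observed WIP.
print("Mixed units:   TH * CT =", avg_throughput_week * avg_cycle_time_days, "items (observed WIP was", avg_wip, ")")

# Aligned units: convert Throughput to items per day first (assuming a 5-day working week for illustration).
avg_throughput_day = avg_throughput_week / 5
print("Aligned units: TH * CT =", avg_throughput_day * avg_cycle_time_days, "items")
```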

  • Is your workflow hiding key signals?

    There are lots of signals that you can get from visualizing your work - especially on a Kanban board. You can see bottlenecks, blockers, and excess work-in-progress, but one signal you don't often get to see is the answer to the question, "How much longer from here?" Now, to get that signal, you have to have a process that models flow. By flow, I mean the movement of potential value through a system. Your workflow is intended to be a model of that system. When built in that way, your workflow allows you to visualize and manage how your potential value moves through your system.
Managing flow is managing liability and risk
A tip is to look at your workflow from a financial perspective. Work items you haven't started are options that, when exercised, could deliver value. Work items you have finished are (hopefully) assets delivering value. The remainder - all the work items that you've spent time and money on but haven't received any value in return yet (work-in-progress) - are your liabilities. What this helps us clearly demonstrate is that our work-in-progress is where most of our risk lies. Yes, we could have delivered things that don't add value (and hopefully, there are feedback loops to help identify those situations and learn from them). You can also have options that you really should be working on to maximize the long-term value they can provide. But, by far, the biggest risk we face is taking on too much liability and not managing that liability effectively - causing us to spend more time and money than we should to turn them into assets.
Expectations versus reality
We humans have a tendency to look at things with rose-colored glasses (OK, most of us do). So, when we start a piece of work, we think it will have a nice, straight, and effective trip through the workflow with no u-turns or roadblocks. More often than not, that's not the case, and there are many reasons for that. One of the biggest reasons is how we build our workflow. When you build your workflow to model the linear progression of work as it moves from an option to an asset, you're more likely to have that straight path. If you build your workflow to model anything else - especially the different groups of people that will work on it - then you end up with an erratic path. You can get a picture of how work moves between people (if you use tools like Inspekt). But what you don't get is a picture of how work moves through a lifecycle from option to asset. This is a problem if you think you're using your workflow to help optimize flow because you aren't seeing the signals you think you are. In a situation like this, what you have is a people flow -- not a work flow. That's great if you want to focus purely on managing resource efficiency (keeping people busy) but poor if you want to optimize flow and control your liabilities.
The signal you can only get from a true workflow
Once you can truly say that you have modeled the life cycle of turning options into assets, you can say that a card's position in the workflow reflects how close or far away it is from realizing its potential value. What this means is that when you move to the right in your workflow, then you're signaling you're closer to turning the liability into an asset, and when you move it to the left (backward) in your workflow, you're moving farther away from that outcome. (Does it make more sense now why we handle backward movement the way we do in ActionableAgile?)
Model your workflow so that how you move a work item is a signal of movement toward or away from realizing its potential value.
When you can say this, then you can start signaling how long an item is likely to take to become an asset. With tools like ActionableAgile's Cycle Time Scatterplot, you can see how long it's likely to take for an item to be completed from any workflow stage. It's like when you go to Disney World or someplace like it, and you're in line for a ride, and you see a sign that says your wait is 1 hour from this point. Each column of your workflow can have that metaphorical sign. Except you can also know the likelihood associated with that information.
Want to make a change?
Don't stress if you just learned that your workflow isn't all it's cracked up to be. You can make a change! It's all about board design and policies. If you want tips on how to change your board or process, check out my blog post on how to design your board to focus on flow, or watch my talk below on this topic from Lean Agile London 2022!
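Here is a minimal sketch of the "wait time from this point" sign described above: for each workflow column, take historically observed days remaining from the moment items entered that column until they finished, and report a percentile as the number on the sign. The column names, data, naive percentile helper, and 85% confidence level are illustrative assumptions; ActionableAgile's Cycle Time Scatterplot is the tool the post actually points to for this.

```python
# Historically observed days remaining from entering each column until finish (fabricated).
remaining_days_by_column = {
    "Analysis":    [9, 10, 12, 14, 15, 18, 21],
    "Development": [5, 6, 7, 8, 9, 11, 14],
    "Testing":     [1, 2, 2, 3, 3, 4, 6],
}

def percentile(values, pct):
    """Naive inclusive percentile: smallest value with at least pct% of the data at or below it."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

confidence = 85  # assumed risk level
for column, days in remaining_days_by_column.items():
    sign = percentile(days, confidence)
    print(f"From '{column}': {confidence}% of items historically finished within {sign} days of reaching this column")
```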

  • Probabilistic vs. deterministic forecasting

    Do you hear people throwing around words like probabilistic and deterministic forecasting, and you aren't sure exactly what they mean? Well, I'm writing this blog post specifically for you. Spoiler alert: it has to do with uncertainty vs. certainty. Forecasting is the process of making predictions based on past and present data (Wikipedia). Historically, the type of forecasting used for business planning was deterministic (or point) forecasting. Increasingly, however, companies are embracing probabilistic forecasting as a way to help understand risk.
What is deterministic forecasting?
Just like Fight Club, people don't really talk about deterministic forecasting. It is just what they do, and they don't question it - at least until recently. I mean, if it is all someone knows, why would they even think to question it or explore the pros and cons? But what is it really? Deterministic forecasting is when only one possible outcome is given without any context around the likelihood of that outcome occurring. Statements like these are deterministic forecasts:
- It will rain at 1 P.M.
- Seventy people will cross this intersection today.
- My team will finish ten work items this week.
- This project will be done on June 3rd.
For each of those statements, we know that something else could happen. But we have picked a specific possible outcome to communicate. Now, when someone hears or reads these statements, they do what comes naturally to humans... they fill in the gaps of information with what they want to be true. Usually, what they see or hear is that these statements are absolutely certain to happen. It makes sense. We've given them no alternative information. So, the problem with giving a deterministic forecast when more than one possible outcome really exists is that we aren't giving anyone, including ourselves, any information about the risk associated with the forecast we provided. How likely is it truly to happen? Deterministic forecasts communicate a single outcome with no information about risk. If there are factors that could come into play and change the outcome, say external risks or sick employees, then deterministic forecasting doesn't work for us. It doesn't allow us to give that information to others. Fortunately, there's an alternative - probabilistic forecasting.
What is probabilistic forecasting?
A probabilistic forecast is one that acknowledges the range of possible outcomes and assigns a probability, or likelihood of happening, to each. The image above is a histogram showing the range of possible outcomes from a Monte Carlo simulation I ran. The question I effectively asked it was "How many items can we complete in 13 days?" Now, there are a lot of possible answers to that question. In fact, each bar on the histogram represents a different option - anywhere from 1 to 90 or more. We can, and probably should, work to make that range tighter. But, in the meantime, we can create a forecast by understanding the risk we are willing to take on. In the image above, we see that in approximately 80% of the 10,000 trials, we finished at least 27 items in 13 days. This means we can say that, if our conditions stay roughly similar, there's an 80% chance that we can finish at least 27 items in 13 days. That means that there's a 20% chance we could finish 26 or fewer. Now I can discuss that with my team and my stakeholders and make decisions to move forward or to see what we can do to improve the likelihood of the answer we'd rather have.
Here are some more probabilistic forecasts:
- There is a 70% chance of rain between now and 1 P.M.
- There's an 85% chance that at least seventy people will cross this intersection today.
- There's a 90% chance that my team will finish ten or more work items this week.
- There's only a 50% chance that this project will be done on or before June 3rd.
Every probabilistic forecast has two components: a range and a probability, allowing you to make informed decisions.
Which should I use?
To answer this question, you have to answer another: Can you be sure that there's a single possible outcome or are there factors that could cause other possibilities? In other words, do you have certainty or uncertainty? If the answer is certainty, then deterministic forecasts are right for you. However, that is rarely, if ever, the case. It is easy to give in to the allure of the single answer provided by a deterministic forecast. It feels confident. Safe. Easy. Unfortunately, those feelings are an illusion. Deterministic forecasts are often created using qualitative information and estimates, but, historically, humans are really bad at estimating. Our brains just can't account for all the possible factors. Even if you were to use data to create a deterministic forecast, you still have to pick an outcome to use, and often people choose the average. Is it OK to be wrong half the time?
If the answer is uncertainty (like the rest of us), then probabilistic forecasts are the smart choice. By providing the range of outcomes and the probability of each (or a set) happening, you give significantly more information about the risk involved with any forecast, allowing people to make more informed decisions. Yes, it's not the tidy single answer that people want, but it's your truth. Carveth Read said it well: "It is better to be vaguely right than exactly wrong." Remember that the point of forecasting is to manage risk. So, use the technique that provides as much information about risk as possible.
How can I get started?
First, gather data about when work items start and finish. If you're using work management tools like Jira or Azure DevOps, then you are already capturing that data. With that information, you can use charts and simulations to forecast how long it takes to finish a single work item, how many work items you can finish in a fixed time period, or even how long it can take you to finish a fixed scope of work. These are things we get asked to do all the time. You don't even need a lot of data. If you have at least 10 work items, preferably a representative mix, then you have enough data to create probabilistic forecasts. Once you have the data you need, tools like ActionableAgile™️ and Portfolio Forecaster from 55 Degrees help you determine the forecast that matches your risk tolerance with ease. You can also use our tools to improve the predictability of your process. When you do that, you are happier with your forecasts because you get higher probability with a narrower range of outcomes. If you're interested in chatting with us or other users on this topic, join us in our community and create a post! See you there!
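The histogram the post describes comes from a Monte Carlo simulation; here is a minimal sketch of that style of forecast, sampling daily Throughput from a fabricated history and asking how many items might be finished in 13 days across 10,000 trials. The history, the seed, and the resulting numbers are invented, so they will not reproduce the post's "27 items at 80%" figure.

```python
import random

random.seed(7)  # reproducible illustration

# Fabricated history: items finished per day over a recent period.
daily_throughput_history = [0, 1, 0, 2, 3, 1, 0, 4, 2, 1, 0, 2, 5, 1, 0, 3, 2, 0, 1, 2]

def simulate_items_in(days, history):
    """One trial: for each future day, sample a past day at random and sum the items finished."""
    return sum(random.choice(history) for _ in range(days))

trials = [simulate_items_in(13, daily_throughput_history) for _ in range(10_000)]

def at_least_with_confidence(results, confidence):
    """Find N such that `confidence` of the trials completed N items or more."""
    ordered = sorted(results, reverse=True)
    return ordered[int(len(ordered) * confidence) - 1]

for confidence in (0.5, 0.8, 0.95):
    n = at_least_with_confidence(trials, confidence)
    print(f"{int(confidence * 100)}% of trials finished at least {n} items in 13 days")
```

Each probability level pairs a range ("at least N items") with a likelihood, which is exactly the two-component structure of a probabilistic forecast described above.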

  • The Two Faces of Little's Law

    This is post 3 of 9 in our Little's Law series. Having explained in an earlier post in the series that Little's Law (LL) comes in at least two flavors, it's time for another thought experiment. For this test, I'm going to ask you to fabricate some flow data over an arbitrarily long period of time. In order to keep the experiment as simple as possible, the requirements for our fabricated data are going to be quite specific, so please allow me to list them here: Your flow data must start with zero WIP. Trust me, the experiment works equally well if you start with non-zero WIP, but in order to eliminate the possibility of certain edge cases occurring, let's all start with zero WIP. For the whole period of time under consideration, the arrival rate of your data must be constant. For example, if the arrival rate for the first day is two items, then the arrival rate for the second day must be two items, as well as two items for the third day, etc., for the whole span of your dataset. Likewise, the departure rate (Throughput) for your data must be constant for the whole time period under consideration AND must be less than your arrival rate. This should make sense. If we start with zero WIP, it would be impossible to have a constant departure rate greater than your arrival rate--otherwise, your WIP would turn negative (which, of course, is impossible). So, for example, if the departure rate for the first day of your dataset is one item, then the departure rate for the second day must also be one item, as well as one item for the third day, and so on for the whole span of your dataset. Items must move through your process and complete in strict first-in-first-out (FIFO) order. Again, this need not be strictly necessary, but it makes conjuring your dataset easier. The length of time for your dataset is completely up to you, but make it realistic, say, the length of one or two Sprints, the length of one of your releases, or the like. Got it? (I'm hoping the reasons for the specificity of these requirements will become clear shortly.) You'll recall from this earlier post that to calculate flow metrics, all you need to have is the start date and end date of each item that moves through your system. Thus, the following (Figure 1) might be some example data that we might use for this experiment: Figure 1 - Sample data Please note (and please forgive) the use of American-style dates above. "3/1/2023" is 1 March 2023, not 3 January 2023. You'll further recall from that earlier post that it is rather straightforward to calculate flow metrics from our item date data: Figure 2 - Flow Metrics Calculated From Sample Data Figure 2 above shows the arrival rate, Throughput, Cycle Time, and WIP for every single day of the time period under consideration (again, using American-style dates). The astute reader will notice the mathematically correct nuance of how the averages were calculated, which I hope to address in a future post. Now that we have all of our flow metrics derived, we can do some LL calculations for comparisons. We left off last time by pointing out that we have two versions of LL to deal with. The first is L = λ * W Where L is the average queue length (WIP), λ is the average arrival rate, and W is the average wait time (Cycle Time). Plugging in numbers from Figure 2, we have L = 7, λ = 2, and W = 3.5, or 7 = 2 * 3.5 which, of course, is correct. 
However, in the case of the second form of LL: WIP = TH * CT where WIP is the average work in progress, TH is the average Throughput, and CT is the average Cycle Time; when we plug in numbers, we get WIP = 7, TH = 1, and CT = 3.5, or 7 = 1 * 3.5 which, of course, is NOT correct. So how is it that we can have two forms of a "law" where one is correct, and the other is incorrect? I can guarantee you that the problem isn't with LL. The problem is with us and our understanding of LL. Let me explain.
Any mathematical theorem you can think of comes with a set of assumptions that must be in place in order for the theorem to be valid. Violate any one (or more) of the assumptions at any time (or times), and the results you get in practice will not match the theory. For example, the fundamental theorem of calculus requires that you are dealing with (amongst other things) real-valued, continuous functions. Unless you are a mathematics geek, you may not know what any of that means but violate any one of those assumptions, and most of what you learned in your Calc 101 class becomes meaningless.
Little's Law is no different. It's worse, even. Because it is an equation, most people want to rush to plug numbers in to see what comes out the other side--without really understanding what they are doing. The prevailing Lean-Agile literature perpetuates this myth by suggesting you can do just that. (I'm loath to give any examples here lest I become part of the problem, but just search the interwebs on your own for Little's Law in Agile, and you will see what I mean). What's worse is that many of those "sources" will tell you that you can use Little's Law as a predictor for what will happen if you take specific action. In other words, Little's Law will tell you exactly what your Cycle Time will be if you cut your WIP in half (spoiler alert: it won't).
Another way of saying the above is that most people see L = λW, and they want to treat it like E = mc² or F = ma. That is to say, they want to plug two of the three parameters into the equations to see if they can predict what the third parameter will be in some future state of the system. So if our current WIP is 12 and our current Throughput is 2, then all we need to do to get our future Cycle Time down to 3 is to cut our WIP in half while keeping our Throughput at 2 at the same time. I'm sorry to say it doesn't work like that. At all.
The work to dispel these myths will start with the next post in this series. It will be a bit of a slog, and the minutia might seem tedious, but my hope is that if you stick with it, you will gain a much deeper appreciation for the law. That detailed discussion of what assumptions need to be in place for LL to be valid and how those assumptions apply to your own process data begins next.
Explore all entries in this series When an Equation Isn't Equal A (Very) Brief History of Little's Law The Two Faces of Little's Law (this article) One Law.
Two Equations It's Always the Assumptions The Most Important Metric of Little's Law Isn't In the Equation How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
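If you would rather not conjure the thought-experiment dataset by hand, here is a minimal sketch that fabricates start and end dates per the stated requirements: zero starting WIP, a constant arrival rate, a constant (and lower) departure rate, and strict FIFO completion. The two-per-day and one-per-day rates, the 14-day span, and the simple end-of-day accounting are arbitrary choices of this sketch, not something the post prescribes.

```python
from collections import deque

ARRIVALS_PER_DAY = 2    # constant arrival rate (arbitrary choice)
DEPARTURES_PER_DAY = 1  # constant departure rate, deliberately lower than arrivals
DAYS = 14               # roughly one or two Sprints

queue = deque()         # items in progress, oldest first (strict FIFO)
completed = []          # (item_id, start_day, end_day)
next_id = 1

for day in range(1, DAYS + 1):
    # New items arrive and start today.
    for _ in range(ARRIVALS_PER_DAY):
        queue.append((next_id, day))
        next_id += 1
    # The oldest items depart today (FIFO), up to the constant departure rate.
    for _ in range(min(DEPARTURES_PER_DAY, len(queue))):
        item_id, start_day = queue.popleft()
        completed.append((item_id, start_day, day))

print("item  started  finished")
for item_id, start_day, end_day in completed:
    print(f"{item_id:4}  day {start_day:3}  day {end_day:3}")
print("still in progress at the end of the window:", len(queue), "items")
```

Because departures never keep up with arrivals, WIP (and the age of the items in the queue) grows steadily, which is exactly the condition under which the two forms of the law start to disagree in the experiment above.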

  • A (Very) Brief History of Little's Law

    This post is part 2 in our Little's Law series. You might think that the history of the relationship L = λ * W (Eq. 1) would start with the publication of Dr. Little's seminal paper in 1961 [reference #1]. The reality is that we must begin by going back a bit further. What the symbols in the above figure (Eq. 1) mean will be discussed a little later.
Evidence points to queuing theorists applying (Eq. 1) in their work well before 1961--seemingly without ever providing a rigorous mathematical proof as to its validity. The earliest pre-1961 example that I could find (in a semi-exhaustive search) was a paper written in 1953 called "Priority Assignment in Waiting Line Problems" by Alan Cobham [reference #2]. Somewhat coincidentally (for those who know me), this paper applies (Eq. 1) to prove the dangers of prioritization schemes to the overall predictability of queuing systems. (As an interesting aside, a quote from that paper is, "any increase in the relative frequency of priority 1 units increases not only the expected delay for units of that priority level but for units of all other levels as well."--in other words, we knew about the dangers of classes of service at least as early as the 1950s!) It would seem that (Eq. 1) was not only acknowledged well in advance of 1953, but it was also widely accepted as true even then.
For the purposes of our story, however, the most important person before 1961 to recognize the need for a more rigorous proof of (Eq. 1) was Philip M. Morse. In 1958, Morse published an Operations Research (OR) textbook called "Queues, Inventories, and Maintenance." [reference #3] In that book, Morse provided heuristic proofs that (Eq. 1) holds for very specific queuing models but commented that it would be useful to have the relationship proved for the general case (i.e., for all queues, not just for specific, individual models). In Morse's words, "we have now shown that...the relation between the mean number [L] and mean delay [W] is via the factor λ, the arrival rate: L = λW, and we will find, in all the examples encountered in this chapter and the next, for a wide variety of service and arrival distributions, for one or for several channels, that this same relationship holds. Those readers who would like to experience for themselves the slipperiness of fundamental concepts in this field and the intractability of really general theorems might try their hand at showing under what circumstances this simple relationship between L and W does not hold."
Somewhat serendipitously, circa 1960, Dr. John Little was teaching an OR course at Case Institute of Technology in Cleveland (now Case Western Reserve University) and was using Morse's textbook as part of the curriculum. During one class, Little had introduced (Eq. 1) and commented (as Morse had) that it seemed to be a very general relationship. According to Little himself, "After class, I was talking to a number of students, and one of them (Sid Hess) asked, 'How hard would it be to prove it in general?' On the spur of the moment, I obligingly said, 'I guess it shouldn't be too hard.' Famous last words. Sid replied, 'Then you should do it!'" [reference #4] Little took up the challenge, went away for the summer in 1961 to come up with a general proof for (Eq. 1), wrote up his findings in a paper, submitted the proof to the periodical Operations Research, and had his submission accepted on the first round.
His paper has since become one of the most frequently referenced articles in Operations Research's history. [reference #5] As such, the relationship L = λ * W quickly became more commonly known as Little's Law (LL). The real beauty of Little's general proof--apart from not relying on any specific queuing model--was all of the other things you didn't need to know in order to apply the law. For instance, you didn't need to have any detailed knowledge about inter-arrival times, service times, number of servers, order of service, etc., that you normally needed for queuing theory. [This point will become of monumental importance when we talk about applying LL to Agile.]
In the years after its first publication, LL found applications far beyond OR. One such application was in the area of Operations Management (OM). OM is a bit different than OR because OM is generally more focused on output rather than input. Consider the perspective of an operations manager in a factory. A factory manager's primary focus is output because the whole reason a factory exists is to produce "things" (factories don't exist to take in "things"). Because of this potentially differing perspective, in the OM world, LL is usually stated in terms of throughput (TH or departures) rather than arrivals; work in progress (WIP) rather than queue length; and cycle time (CT) rather than wait time [reference #6]: WIP = TH * CT (Eq. 2)
It's fairly easy to see that (Eq. 1) and (Eq. 2) are equivalent; however, the change in focus from arrivals to departures will require a nontrivial amount of care that we will get into in a later post. The reason I mention (Eq. 2) is because this is the form of LL that the Agile community seems to have preferred, and so it is here that our brief history ends and the real story begins.
So why should you be concerned about any of this? There are a couple of reasons, really. First, practitioners should acknowledge that any doubts about the legitimacy of the theory have been settled for 70 years or more. There is simply no question about the validity of LL or its place in the management of flow. Second, because most agile practitioners have only seen LL in the form of (Eq. 2) and not (Eq. 1), it is important for them to understand where (Eq. 2) really comes from. It's not just a matter of simply substituting variable names, and Robert is your father's brother.
This brings us to the fact that we actually have two forms of Little's Law to consider: L = λ * W and WIP = TH * CT. But which one do we use and when? I'm glad you asked because that will be the topic of the next post in this series...
Explore all entries in this series When an Equation Isn't Equal A (Very) Brief History of Little's Law (this article) The Two Faces of Little's Law One Law. Two Equations It's Always the Assumptions The Most Important Metric of Little's Law Isn't In the Equation How NOT to use Little's Law Other Myths About Little's Law Little's Law - Why You Should Care About Daniel Vacanti, Guest Writer Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide.
When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
References
1. Little, J. D. C. 1961. A proof for the queuing formula: L = λW. Operations Research 9(3) 383–387.
2. Cobham, A. 1954. Priority Assignment in Waiting Line Problems. Journal of the Operations Research Society of America 2(1) 70–76.
3. Morse, P. M. 1958. Queues, Inventories and Maintenance. Publications in Operations Research, No. 1. John Wiley, New York.
4. Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science + Business Media LLC, New York.
5. Whitt, W. 1991. A review of L = λW and extensions. Queueing Systems 9(3) 235–268.
6. Hopp, W. J., M. L. Spearman. 2000. Factory Physics: Foundations of Manufacturing Management, 2nd ed. Irwin/McGraw-Hill, New York.
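For readers who want to see why the two forms are described as equivalent, here is the renaming spelled out under the conservation-of-flow condition that later posts in this series discuss; this is a sketch of the idea, not Little's formal proof.

```latex
% Arrivals form, plus the conservation-of-flow condition (departure rate equals arrival rate):
L = \lambda W, \qquad \lambda = \mathrm{TH}
% Rename queue length L as WIP and wait time W as Cycle Time CT:
L \equiv \mathrm{WIP}, \qquad W \equiv \mathrm{CT}
% Substituting gives the Throughput (Operations Management) form:
\mathrm{WIP} = \mathrm{TH} \times \mathrm{CT}
```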

  • When an Equation Isn't Equal

    This is post 1 of 9 in our Little's Law series. Try an experiment for me. Assuming you are tracking flow metrics for your process -- which, if you are reading this blog, you probably are -- calculate your average Cycle Time, your average Work in Progress (WIP), and your average Throughput for the past 60-ish days. [Note: what data to collect and how to turn that data into the four basic metrics of flow is covered in a previous blog post]. The exact number of days doesn't really matter as long as it is arbitrarily long enough for your context. That is, if you have the data, you could even try this experiment for longer or shorter periods of time. Now take your historical average WIP and divide it by your historical average Throughput. When you do that, do you get your historical average Cycle Time exactly?
Another quick disclaimer: for the purposes of this experiment, it is best if you don't pick a time period that starts with zero WIP and ends with zero WIP. For example, if you are one of the very few lucky Scrum teams that starts all of your Sprints with no PBIs already in progress, and all PBIs that you start within a Sprint finish by the end of the Sprint, then please don't choose the first day of the Sprint and the last day of the Sprint as the start and end point for your calculation. That's technically cheating, and we'll explain why in a later post.
You've probably realized by now that we are testing the equation commonly referred to as Little's Law (LL): CT = WIP / TH where CT is the average Cycle Time of your process over a given time period, WIP is the average Work In Progress of your process for the same time period, and TH is the average Throughput of your process for the same time period. It may seem obvious, but LL is an equation that relates three basic metrics of flow. Yes, you read that right. LL is an equation. As in equal. Not approximate. Equal.
In your above experiment, was your calculation equal? My guess is not. Here's an example of metrics from a team that I worked with recently (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3. In this example, WIP / TH is 16.99, not 10.3. For a different 60-day period, the numbers are: WIP: 13.18, TH: 1.03, CT: 9.1. This time, WIP / TH is 12.80, not 9.1. And one last example: WIP: 27.10, TH: 3.55, CT: 8.83. WIP / TH is 7.63, not 8.83. Better, but still not equal.
If you are currently using the ActionableAgile tool, then doing these calculations is relatively easy. Simply load your data, bring up the Cumulative Flow Diagram (not that I normally recommend you use the CFD), and select "Summary Statistics" from the right sidebar. Here is a screenshot from an arbitrary date range I chose using AA's preloaded example data: From the above image, you'll see that: WIP: 15.04, TH: 1.22, CT: 9.45. However, 15.04 / 1.22 is 12.33, not 9.45. As evidence that I didn't purposefully select a date range that proved my point, here's another screenshot: Where 27.09 / 3.47 equals 7.81, not 8.86. In fact, I'd be willing to bet that in this example data -- which is from a real team, by the way -- it would be difficult to find an arbitrarily long time period where Average Cycle Time actually equals Average WIP divided by Average Throughput. Just look at the summary stats for the whole date range of pre-loaded data to see what I'm talking about: 21.62 / 2.24 equals 9.65, not 9.37 -- still close, but no cigar. I'd be willing to bet that you had (or will have) similar results with your own data.
If you tried even shorter historical time periods, then the results might be even more dramatic. So what's going on here? How can something that professes to be an equation be anything but equal? We'll explore the exact reason why LL doesn't "work" with your data in an upcoming blog post, but for now, we'll need to take a step back and explore how we got into this mess to begin with. After all, it is very difficult to know where we are going if we don't even know where we came from...

Explore all entries in this series:
1. When an Equation Isn't Equal (this article)
2. A (Very) Brief History of Little's Law
3. The Two Faces of Little's Law
4. One Law. Two Equations
5. It's Always the Assumptions
6. The Most Important Metric of Little's Law Isn't In the Equation
7. How NOT to use Little's Law
8. Other Myths About Little's Law
9. Little's Law - Why You Should Care

About Daniel Vacanti, Guest Writer

Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.

  • In God We Trust. All Others Bring Data.

    Before proceeding, it would be worth reviewing Julia's excellent posts on the four basic metrics of Flow: Work Item Age, Cycle Time, Throughput, and WIP. The definitions are great but are, unfortunately, meaningless unless we know what data we need to capture to calculate them. In terms of data collection, this is where our harping on you to define started and finished points will finally pay off. Take a timestamp when a work item crosses your started point and take another timestamp when that same work item crosses your finished point. Do that for every work item that flows through your process, as shown below (forgive the American-style dates). That's it. To calculate any or all of the basic metrics of flow, the only data you need is the timestamp for when an item started and the timestamp for when an item finished. Even better, if you are using some type of work item tracking tool to help your team, then most likely your tool is already collecting this data for you. The downside of using a tracking tool, though, is that you may not be able to rely on any out-of-the-box metrics calculations it may give you. Why so many Agile tools cannot calculate flow metrics properly is one of the great secrets of the universe, but, for the most part, they cannot. Luckily for you, that's what this blog post is all about. To properly calculate each of the metrics from the data, proceed as follows.

WIP: WIP is the count of all work items that have a started timestamp but not a finished timestamp for a given time period. That last part is a bit difficult for people to grasp. Although technically WIP is an instantaneous metric -- at any time you could count all of the work items in your process to calculate WIP -- it is usually more helpful to talk about WIP over some time unit: days, weeks, Sprints, etc. Our strong recommendation -- and this is going to be our strong recommendation for all of these metrics -- is that you track WIP per day. Thus, if we want to know what our WIP was on a given day, we just count all the work items that had started but not finished by that date. For the example pictured, our WIP on January 5th is 3 (work items 3, 4, and 5 have all started before January 5th but have not been finished by that day).

Cycle Time: Cycle Time equals the finished date minus the started date plus one (CT = FD - SD + 1). If you are wondering where the "+ 1" comes from, it is because we count every day in which the item is worked on as part of the total. For example, when a work item starts and finishes on the same day, we would never say that it took zero time to complete. So we add one, effectively rounding the partial day up to a full day. What about items that don't start and finish on the same day? Let's say an item starts on January 1st and finishes on January 2nd. The above Cycle Time definition would give an answer of two days (2 - 1 + 1 = 2). We think this is a reasonable, realistic outcome. From the customer's perspective, if we communicate a Cycle Time of one day, they can reasonably expect to receive their item the same day; if we tell them two days, they can reasonably expect to receive it the next day, and so on.
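To make that concrete, here is a minimal sketch of the WIP and Cycle Time calculations in Python. The item names and dates are made up for illustration (the original figure isn't reproduced here); they are simply chosen so that items 3, 4, and 5 are the ones in progress on January 5th.

    from datetime import date

    # Hypothetical started / finished timestamps; finished is None while an item is in progress.
    items = {
        "Item 1": (date(2016, 1, 1), date(2016, 1, 3)),
        "Item 2": (date(2016, 1, 2), date(2016, 1, 4)),
        "Item 3": (date(2016, 1, 3), date(2016, 1, 7)),
        "Item 4": (date(2016, 1, 4), None),
        "Item 5": (date(2016, 1, 4), date(2016, 1, 6)),
    }

    def wip_on(day):
        # Count items that have started but have not finished by the given day.
        return sum(1 for s, f in items.values() if s <= day and (f is None or f >= day))

    def cycle_time(started, finished):
        # Finished date minus started date plus one: a same-day item counts as one day, not zero.
        return (finished - started).days + 1

    print("WIP on 2016-01-05:", wip_on(date(2016, 1, 5)))  # -> 3
    for name, (s, f) in items.items():
        if f is not None:
            print(name, "Cycle Time:", cycle_time(s, f), "days")

In practice you would read those timestamps out of your tracking tool's export rather than hard-coding them.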
You might be concerned that the above Cycle Time calculation is too biased toward measuring Cycle Time in terms of days. In reality, you can substitute whatever notion of "time" is relevant for your context (which is why, up until now, we have kept saying track a "timestamp" and not a "date"). Maybe weeks are more relevant for your specific situation. Or hours. Or even Sprints. [For Scrum, if you wanted to measure Cycle Time in terms of Sprints, the calculation would just be Finished Sprint - Started Sprint + 1 (assuming work items cross Sprint boundaries in your context).] The point here is that this calculation is valid for all contexts. However, as with WIP, our very strong recommendation is to calculate Cycle Time in terms of days. The reasons are too numerous to get into here, so when starting out, calculate Cycle Time in days and experiment with other time units later should you feel you need them (our guess is you won't).

Work Item Age: Work Item Age equals the current date minus the started date plus one (Age = CD - SD + 1). The "plus one" argument is the same as for Cycle Time above. Our apologies, but you will never have a work item with an Age of zero days. Again, our strong recommendation is to track Age in days.

Throughput: Let's take a look at a different set of data to make the Throughput calculation a bit clearer. To calculate Throughput, begin by noting the earliest date on which any item was completed and the latest date on which any item was completed, then enumerate every date in between. For each enumerated date, simply count the number of items that finished on that exact date. In our example data, that gives a Throughput of 1 item on 03/01/2016, 0 items the next day, 2 items the third day, and 2 items the last day. Note the Throughput of zero on 03/02/2016 -- nothing was finished that day. As stated above, you can choose whatever time units you want to calculate Throughput. If you are using Scrum, your first inclination might be to calculate Throughput per Sprint: "we got 14 work items done in the last Sprint". Let us very strongly advise against that and recommend instead that you measure Throughput in terms of days. Again, it would take a book in itself to explain why, but let us just offer two quick justifications: (1) using days will give you much better flexibility and granularity when we start doing things like Monte Carlo simulation; and (2) using consistent units across all of your metrics will save you a lot of headaches. So if you are tracking WIP, Cycle Time, and Age in days, then you will make your life a whole lot simpler if you track Throughput in days too. For Scrum, you can easily derive Throughput per Sprint from this same data if that still matters to you.
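Here is a similarly minimal sketch of the Age and Throughput calculations in Python. The dates are again made up, chosen only to reproduce the finish counts described above; the daily loop is what makes the zero-Throughput day explicit.

    from datetime import date, timedelta

    # Hypothetical finished items: (started, finished) timestamps.
    finished_items = [
        (date(2016, 2, 27), date(2016, 3, 1)),
        (date(2016, 2, 28), date(2016, 3, 3)),
        (date(2016, 3, 1),  date(2016, 3, 3)),
        (date(2016, 3, 2),  date(2016, 3, 4)),
        (date(2016, 3, 3),  date(2016, 3, 4)),
    ]
    # Hypothetical items still in progress: started timestamps only.
    in_progress = [date(2016, 3, 2), date(2016, 3, 4)]
    today = date(2016, 3, 5)

    # Work Item Age = current date - started date + 1 (an item is never zero days old).
    for started in in_progress:
        print(started, "Age:", (today - started).days + 1, "days")

    # Daily Throughput: enumerate every date from the earliest finish to the latest finish
    # and count the items finished on each date; days with nothing finished count as zero.
    day = min(f for _, f in finished_items)
    last = max(f for _, f in finished_items)
    while day <= last:
        print(day, "Throughput:", sum(1 for _, f in finished_items if f == day))
        day += timedelta(days=1)

Keeping everything in days here is what lets the same data later feed things like Monte Carlo simulation without any unit juggling.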
Randomness: We've saved the most difficult part for last. You now know how to calculate the four basic metrics of flow at the individual work item level, and all of those calculations are deterministic. That is, if we start a work item on Monday and finish it a few days later on Thursday, then we know that the work item had a Cycle Time of exactly four days. But what if someone asks us what our overall process Cycle Time is? What our overall process Throughput is? How do we answer those questions? Our guess is you immediately see the problem. If, say, we look at our team's Cycle Times for the past six weeks, we will see that work items finished in a wide range of times: some in one day, some in five days, some in more than 14 days, and so on. In short, there is no single deterministic answer to the question "What is our process Cycle Time?". Stated slightly differently, your process Cycle Time is not a unique number; rather, it is a distribution of possible values. That's because your process Cycle Time is really what's known as a random variable. [By the way, we've only been talking about Cycle Time in this section for illustrative purposes, but each of the basic metrics of flow (WIP, Cycle Time, Age, Throughput) is a random variable.] What random variables are and why you should care is one of those topics that is well beyond the scope of this post. What you do need to know is that your process is dominated by uncertainty and risk, which means all of the flow metrics you track will reflect that uncertainty and risk. Further, that uncertainty and risk will show up as randomness in all of your flow metric calculations. How variation affects the interpretation of flow metrics, and how it affects any action you might take to improve your process, will be the topic of a blog series coming later this year. For now, what you need to know is that the randomness in your process is what makes it stochastic. You don't necessarily need to understand what "stochastic" means, but you should understand that all stochastic processes behave according to certain "laws". One such law you may have heard of before...

About Daniel Vacanti, Guest Writer

Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.

  • The Four-Letter Word That Begins With F

    Many of our future planned posts will refer to a concept known as Flow. For as much as Flow is talked about in Lean-Agile circles, there really aren't many reliable definitions of what Flow actually is. Our inspiration for the definition we will use going forward is the definition of Flow found in the Kanban Guide (and by inspiration, I mean the document we will shamelessly steal from).

What Is Flow? The whole reason for the existence of your current job (team) is to deliver value for your customers/stakeholders. Value, however, doesn't just magically appear. Constant work must be done to turn product ideas into tangible customer value. The sum total of the activities needed to turn an idea into something concrete is called a process. Whether you know it or not, you and your team have built a value delivery process. That process may be explicit or implicit, but it exists. Having an understanding of your process is fundamental to the understanding of Flow. Once your process is established, Flow is simply defined as the movement of potential value through that process. Flow: the movement of potential value through a given process. Maybe you've heard of the other name for a process: workflow. There is a reason it is called workFLOW: for any process, what really matters is the flow of work. Note: in future posts, I will often use the words "process", "workflow", and "system" interchangeably. I will try my best to indicate a difference between these when a difference is warranted. For most contexts, however, any difference among these words is negligible, so they can easily be used synonymously. The reason you should care about Flow is that your ability to achieve Flow in your process will dictate how effective, efficient, and predictable you are as a team at delivering customer value -- which, as we stated at the beginning, is the whole reason you are here.

Setting Up To Measure Flow: As important as Flow is as a concept, it can really only act as a guide for improvements if you can measure it. Thankfully for us (and thankfully for ActionableAgile™️), Flow comes with a set of basic metrics that will give us such insight. But before we can talk about what metrics to use, we first need to talk about what must be in place in order to calculate those metrics. All metrics are measurements, and all measurements have the same two things in common: a start point and an end point. Measuring Flow is no different. To measure Flow, we must know what it means for work to have started in our process and what it means for work to have finished in our process. The decision around started and finished may seem trivial, but we can assure you it is not. How to set started and finished points in your process is beyond the scope of this post, but here are some decent references to check out if you need some help. It gets a little more complicated than that because it is perfectly allowed in Flow to have more than one started point and more than one finished point within a given workflow. Maybe you want to measure both from when a customer asks for an item and from when the team starts working on the item. Or maybe a team considers an item finished when it has been reviewed by Product Management, put into production, validated by the end user, or whatever. Any and all permutations of started and finished in your process are allowed.
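As a small illustration of why those boundary choices matter, here is a hypothetical sketch in Python: one work item with timestamps recorded at several candidate boundaries, where the measured elapsed time changes completely depending on which started/finished pair you pick. The boundary names and dates are invented for the example.

    from datetime import date

    # Hypothetical timestamps for a single work item as it crosses several candidate boundaries.
    item = {
        "requested":    date(2024, 3, 1),   # customer asked for it
        "work_started": date(2024, 3, 6),   # team pulled it
        "deployed":     date(2024, 3, 15),  # put into production
        "validated":    date(2024, 3, 18),  # confirmed by the end user
    }

    def elapsed_days(started_at, finished_at):
        # Finished minus started plus one, counting both the start day and the finish day.
        return (item[finished_at] - item[started_at]).days + 1

    # The same item yields very different measurements depending on the boundaries you choose.
    print("requested -> validated:", elapsed_days("requested", "validated"), "days")      # customer's view
    print("work_started -> deployed:", elapsed_days("work_started", "deployed"), "days")  # team's view

Swapping in a different pair of boundaries is all it takes to answer a different question about your process.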
Not only are the different permutations allowed, it is encouraged that you experiment with different started and finished points in your process to better understand your context. You will quickly learn that changing the definition of started/finished allows you to answer very different questions about flow in your process. If all goes well, expanding your started/finished points will get you down the path toward true business agility. The point is -- as you will see -- that the questions Little's Law will help you answer depend completely on your choices for started and finished.

Conclusion: Assuming you care about optimizing the value-delivery capabilities of your process, you should care about Flow. And it should be pointed out that it doesn't matter whether you are using Scrum, SAFe, Kanban, or something else for value delivery -- you should still care about Flow. Therefore, if you haven't already, you need to sit down and decide, for your process, what it means for work to have started and what it means for work to have finished. All other Flow conversations will depend on those boundary decisions. Once they are defined, the movement of potential value between your started and finished points is what we call Flow. The concept of movement is of crucial importance because the last thing we want as a team is to start a whole bunch of work that never gets finished. That is the antithesis of value delivery. What's more, as we do our work, our customers are constantly going to be asking (whether we like it or not) questions like "how long?" or "how many?" -- questions that will require an understanding of movement to answer. That's where Flow Metrics come in...

About Daniel Vacanti, Guest Writer

Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
