
Search Results


  • Schrodinger's Work Item and the Quest for Value

    This article is a guest contribution from Julie Starling, ActionableAgile customer, and was originally posted on her blog. Jump down to read more about Julie.

We're all familiar with Schrodinger's cat, right? The cat in a box that is both dead and alive whilst the box is closed… when the box is opened, it is one or the other. I can't help but see the parallels to work items in our system.

Schrodinger's Work Item

An active item in our system represents both potential value and waste… until we deliver it, we do not know which it is.

Potentially Valuable – In most instances, we engage with our customers to understand what is valuable to them. Even in cases where direct customer communication is limited, we often hold a genuine belief in the value of what we're delivering. However, complete certainty about its value remains elusive until we actually deliver the item and receive feedback. Only when our work item is in the hands of our customers can we truly determine whether the time invested has indeed been valuable.

Waste – Until we deliver the item, the time we are spending on it can also be considered waste: there is always a risk it won't be delivered, and the time spent up until now will have been for nothing. These situations happen all the time, for any number of reasons: a change in strategy due to a global pandemic, a change of requirements from our customers, and everything else in between. It can also have been waste if we deliver it and no one uses it, if it doesn't deliver the expected outcome, or if we don't get any valuable feedback.

Let the Cat Out of the Box

Whilst we understand that work in our system is potentially not valuable, we shouldn't use this as a reason not to be experimental with what we deliver! Instead, we should think about getting work items out of our system as efficiently as we can.
This way we can find out whether it was actually valuable as quickly as possible, learn from the answer, and move on with this new knowledge. Compromising quality is also not the answer! Two ways to get the cat out of the box:

1. Don't Start! If you haven't started working on an item, then you haven't started potentially wasting time. You can then put your efforts into keeping work that has started active and flowing.

2. Finish It! If you've started... then finish! One way to get an item out of a system is to finish it.

On their own, these may seem like two obvious and probably unhelpful points. However, if we look at the bigger picture, we shouldn't start items until we know they have the best chance of flowing through our system. When we do start, we should be managing that work in progress always with a goal of finishing. We want to keep our work flowing and keep the work as busy and active as possible. If we start items before they can flow, there can be a lot of sitting around in the system. The longer an item is in the system, the more the possibility of it being waste increases as the world around us changes or items become stale.

Don't Put the Cat in the Box, But If You Do, Don't Keep It in There Longer Than Necessary

In essence, we shouldn't start work until it's the right time for our system, and when we do start it, we should be managing the work in progress with the goal of finishing. There are a number of ways in which we can manage work in progress, including:

1. Limit the amount of Work In Progress. By not having too much in our system, we are able to focus on what is active, switch context less, and spend our efforts on keeping our work busy (keep work busy before people). If you have a team of busy people and a number of work items that aren't actively being worked on, then you probably need to start controlling your WIP.

2. Make items small. The smaller work items are, the more easily they will flow through your system.
We need to make sure our items are right-sized and represent the smallest possible chunk of potential value. This will help flow, but it will also help us get the feedback we need to know whether we need to pivot in the quest for value. With this approach, if the world around us changes and what we were delivering is no longer relevant, we've also minimized the amount of waste.

3. Take action on items that are unnecessarily aging. Any item that is staying in the system unnecessarily long needs action taken on it. This could range from splitting the work item down to resolving blockers or even kicking it out of the system! But how do we know if an item is unnecessarily aging? ... I'll be covering that in my next post.

Similar to the state of Schrodinger's Cat being unknown until perceived, our work items exist in a superposition of potential value and waste. That is, until they are delivered and observed by our customers. Actively managing the work in the system shortens the time to understand its fate!

TL;DR: We can't assume all work will be as valuable as we expect when we decide to do it. Unfinished work has a dual nature, both potentially valuable and waste, until we deliver and get feedback. To get the answer to 'was it valuable?' as quickly as possible, we should focus on flow. Keep items in our system for as short a time as possible. Keep inactive time to a minimum. Whilst work is in our system, we should actively manage it with the goal of getting it out (at high quality) as soon as we can. Techniques such as managing WIP, right-sizing items, and taking action on aging items help us do this.

About Julie Starling, Guest Writer

Julie is passionate about the efficient delivery of value to customers and avoiding the illusion of certainty. In recent years she has specialized in how data can be used to drive the right conversations to do this. She encourages teams to use data in actionable ways and adjust ways of working to maximize their potential.
She has spent over 15 years working in and alongside software delivery teams. In her spare time, she loves to travel, snowboard, and is obsessed with houseplants!

  • How do you use pace percentiles on ActionableAgile's aging chart?

    It is inevitable that there are ways that the software creator intends a feature to be used and there are ways that it ends up being used. 🤓 Sometimes these unintended uses can be even better than the initial idea, but other times they can end up causing harm. In a recent chat with Daniel Vacanti, we discussed this very thing about ActionableAgile™️ Analytics. I can say I was more than mildly surprised when one of my favorite features came up: the pace percentile feature on ActionableAgile's Work Item Aging chart. I love this feature because it helps you get early signals of slow work. However, after talking to and training many people, Dan saw that people very often misinterpret what this particular signal really tells us. How did he come to that conclusion? He talked to them about the decisions they would make because of the signals and saw that they weren't necessarily picking up what was intended. Instead, the decisions people were likely to make could lead to even worse outcomes than currently presented on the chart. What do you think? Are you interpreting the signals correctly? Watch this short video from Dan to find out. As always, feel free to leave your (appropriate) comments and questions on our YouTube channel!

  • Little's Law - Why You Should Care

    This is post 9 of 9 in our Little's Law series. I personally can't fathom how someone could call themselves a flow practitioner without a concerted effort to study Little's Law. However, the truth is that some of the posts in this series have gone into way more detail about LL than most people would ever practically need to know. Having said that, without an understanding of what makes Little's Law work, teams make decisions every day that are in direct contravention of established mathematical facts (and pay the consequences). To that end, here is my suggested reading list for anyone interested in learning more about Little's Law (in this particular order):

1. http://web.eng.ucsd.edu/~massimo/ECE158A/Handouts_files/Little.pdf Frank Vega and I call this "Little's Law Chapter 5", as it is a chapter taken from a textbook that Little contributed to. For me, this is hands down the best introduction to the law in its various forms. I am not lying when I say that I've read this paper 50 times (and probably closer to 100) and get something new from it with each sitting.

2. https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/OS/Littles-Law-50-Years-Later.pdf This is a paper Little wrote on the 50th anniversary of the law. It builds on the concepts of Chapter 5 and goes into more detail about the history of L=λW since its first publication in 1961. This paper, along with Chapter 5, should tell you 95% of what you need to know about LL.

3. http://fisherp.scripts.mit.edu/wordpress/wp-content/uploads/2015/11/ContentServer.pdf Speaking of the first publication of the proof of L=λW, there's no better teacher than going right to the source. This article is my third recommendation because it is a bit mathy, but its publication is one of the seminal moments in the history of queuing theory, and any queuing buff should be familiar with this proof.

For extra credit:

4.
http://www.columbia.edu/~ww2040/ReviewLlamW91.pdf This article is not for the faint of heart. I recommend it not only for its comprehensive review of L=λW but also (and mostly) for its exhaustive reference list. Work your way through all of the articles listed at the end of this paper, and you can truly call yourself an expert on Little's Law.

If you read all of these, then you can pretty much ignore any other blog or LinkedIn post (or Wikipedia article, for that matter) that references Little's Law. Regardless of the effort that you put in, however, expertise in LL is not the end goal. No, the end goal is altogether different.

Why You Really Should Care

If you are studying Little's Law, it is probably because you care about process improvement. Chances are the area of process improvement that you care most about is predictability. Remember that being predictable is not completely about making forecasts. The bigger part of predictability is operating a system that behaves in a way that we expect it to. By designing and operating a system that follows the assumptions set forth by Little's Law, we will get just that: a process that behaves the way we expect it to. That means we will have controlled the things that we can control, and the interventions that we take to make things better will result in outcomes more closely aligned with our expectations. That is to say, if you know how Little's Law works, then you know how flow works. And if you know how flow works, then you know how value delivery works. I hope you have enjoyed this series and would welcome any comments or feedback you may have. Thanks for going on this learning journey with me!

Explore all entries in this series:
1. When an Equation Isn't Equal
2. A (Very) Brief History of Little's Law
3. The Two Faces of Little's Law
4. One Law. Two Equations
5. It's Always the Assumptions
6. The Most Important Metric of Little's Law Isn't In the Equation
7. How NOT to use Little's Law
8. Other Myths About Little's Law
9. Little's Law - Why You Should Care (this article)

About Daniel Vacanti, Guest Writer

Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
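As a hedged companion to the reading list, the backward-looking identity L = λW can be verified on a tiny, completed (and entirely hypothetical) arrival/departure log:

```python
# Each tuple is (arrival_day, departure_day) for one completed item.
# The window [0, T] starts and ends empty, so the identity holds exactly.
log = [(0, 3), (1, 4), (2, 8), (5, 9), (6, 10)]
T = 10.0  # length of the observation window, in days

lam = len(log) / T                         # λ: average arrival rate
W = sum(d - a for a, d in log) / len(log)  # average time in system
L = sum(d - a for a, d in log) / T         # time-average number in system

print(f"L = {L}, λW = {lam * W}")  # both are 2.0
assert abs(L - lam * W) < 1e-12
```

If the window did not start and end empty, or if some items never departed, the two sides would generally disagree; that is exactly the assumption-violation territory the earlier posts in the series cover.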

  • Other Myths About Little's Law

    This is post 8 of 9 in our Little's Law series. In the previous blog post, we talked about the single biggest error people make when applying Little's Law. That's not to say there aren't others out there. Thankfully, Prateek Singh and I recorded an episode of our Drunk Agile podcast to go over some of these other myths in more detail. While a large portion of what we talk about below is a rehash of the forecasting debacle, we also get into lesser-known problems like:

1. Using LL to set WIP Limits
2. "Proving" LL using Cumulative Flow Diagrams
3. All items need to be the same size
4. Cycle Times must be normally distributed
5. FIFO queuing is required

BTW, you will recall from a previous post that I quoted Little as saying, "...but it is quite surprising what we do not require. We have not mentioned how many servers there are, whether each server has its own queue or a single queue feeds all servers, what the service time distributions are, what the distribution of inter-arrival times is, or what is the order of service of items, etc." (1). If Little himself says that these are myths, who are we to disagree? So grab your favourite whisky and enjoy!

References
1. Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. *Building Intuition: Insights from Basic Operations Management Models and Principles*. Springer Science + Business Media LLC, New York.
2. Drunk Agile YouTube channel: https://www.youtube.com/@drunkagile4780

  • How NOT to use Little's Law

    This is post 7 of 9 in our Little's Law series. You may or may not be surprised to hear me say that the Little's Law equation is indeed deterministic. But, as I have mentioned several times in the past, it is not deterministic in the way that you think it is. That is, the law is concerned with looking backward over a time period that has already been completed. It is not about looking forward; that is, it is not meant to be used to make deterministic predictions. As Dr. Little himself says about the law, "This is not all bad. It just says that we are in the measurement business, not the forecasting business." (1) In other words, the fundamental way to NOT use Little's Law is to use it to make a forecast. Let me explain, as this is a sticking point for many people (again, most interwebs blog posts get this wrong). The "law" part of Little's Law specifies an exact (deterministic) relationship between average WIP, average Cycle Time, and average Throughput, and this "law" part applies only when you are looking back over historical data. The law is not about, and was never designed for, making deterministic forecasts about the future. For example, let's assume a team that historically has had an average WIP of 20 work items, an average Cycle Time of 5 days, and an average Throughput of 4 items per day. You cannot say that you are going to increase average WIP to 40, keep average Cycle Time constant at 5 days, and, magically, Throughput will increase to 8 items per day, even if you add staff to keep the WIP-to-staff ratio the same in the two instances. You cannot assume that Little's Law will make that prediction. It will not. All Little's Law will say is that an increase in average WIP will result in a change to one or both of average Cycle Time and average Throughput.
It will further say that those changes will manifest themselves in ways such that the relationship among all three metrics will still obey the law. But what it does not say is that you can deterministically predict what those changes will be. You have to wait until the end of the time interval you are interested in and look back to apply the law. The reason for the above is that, as we saw in the last post, it is impossible to know which of Little's assumptions you will violate in the future (or how many times). As a point of fact, any violation of the assumptions will invalidate the law (regardless of whether you are looking backward or forward). But that restriction is not fatal. The proper application of Little's Law in our world is to understand the assumptions of the law and to develop process policies that match those assumptions. If the process we operate conforms, or mostly conforms, to all of the assumptions of the law, then we get to a world where we can start to trust the data that we are collecting from our system. It is at this point that our process is probabilistically predictable. Once there, we can start to use something like Monte Carlo simulation on our historical data to make forecasts, and, more importantly, we can have some confidence in the results we get by using that method. There are other, more fundamental reasons why you do not want to use Little's Law to make forecasts. For one thing, I have hopefully by now beaten home the point that Little's Law is a relationship of averages. I mention this again because even if you could use Little's Law as a forecasting tool (which you cannot), you would not want to, as you would be producing a forecast based on averages. Anytime you hear the word "average," you must immediately think "Flaw of Averages" (2). As a quick reminder, the Flaw of Averages (crudely) states that "plans based on average assumptions will fail on average."
So, if you were to forecast using LL, then you would only be right an average amount of the time (in other words, you would most likely be wrong just as often as you were right; that's not very predictable from a forecasting perspective). Having said all that, though, there is no reason why you cannot use the law for quick, back-of-the-envelope type estimations about the future. Of course, you can do that. I would not, however, make any commitments, WIP control decisions, staff hiring or firing decisions, or project cost calculations based on this type of calculation alone. I would further say that it is negligent for someone even to suggest doing so. But this simple computation might be useful as a quick gut check to decide if something like a project is worth any further exploration. While using Little's Law to forecast is a big faux pas, there are other myths that surround it, which we will cover very quickly in the next post in the series.

References
1. Little, J. D. C. *Little's Law As Viewed on Its 50th Anniversary*. https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/OS/Littles-Law-50-Years-Later.pdf
2. Savage, Sam L. *The Flaw of Averages*. John Wiley & Sons, Inc., 2009.
3. Vacanti, Daniel S. *Actionable Agile Metrics for Predictability*. ActionableAgile Press, 2014.
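To make the contrast concrete, here is a minimal sketch (all numbers hypothetical, not from the post) of a back-of-the-envelope gut check next to the kind of Monte Carlo forecast the post recommends trusting instead:

```python
import random

# Hypothetical data: contrast a quick Little's Law style gut check
# with a simple Monte Carlo forecast over historical throughput.
history = [3, 7, 5, 9, 4, 8, 6, 10, 5, 7]  # weekly throughput samples
backlog = 60                               # items left to deliver

# Back-of-the-envelope: backlog / average throughput. A gut check only;
# never a commitment, because it is a plan built on averages.
rough_weeks = backlog / (sum(history) / len(history))  # 9.375 weeks

# Monte Carlo: replay randomly chosen historical weeks until the backlog
# is exhausted, many times, then read percentiles off the outcomes.
random.seed(42)  # fixed seed so the sketch is repeatable
outcomes = []
for _ in range(10_000):
    done = weeks = 0
    while done < backlog:
        done += random.choice(history)
        weeks += 1
    outcomes.append(weeks)

outcomes.sort()
p85 = outcomes[int(0.85 * len(outcomes))]  # 85th-percentile finish time
print(f"gut check: ~{rough_weeks:.1f} weeks; 85th percentile: {p85} weeks")
```

The gut check gives a single average-based number; the simulation gives a distribution you can attach confidence to, which is the difference the post is drawing.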

  • The Most Important Metric of Little's Law Isn't In The Equation

    This is post 6 of 9 in our Little's Law series. As we discussed in the previous post, a thorough understanding of what it means to violate each of the assumptions of Little's Law (LL) is key to the optimization of your delivery process. So let's take a minute to walk through each of those in a bit more detail. The first thing to observe about the assumptions is that #1 and #3 are logically equivalent. I'm not sure why Dr. Little calls these out separately because I've never seen a case where one is fulfilled but the other is not. Therefore, I think we can safely treat those two as the same. But more importantly, you'll notice what Little is not saying here with either #1 or #3. He is making no judgment about the actual amount of WIP that is required to be in the system. He says nothing of less WIP being better or more WIP being worse. In fact, Little couldn't care less. All he cares about is that WIP is stable over time. So while having arrivals match departures (and thus unchanging WIP over time) is important, that tells us *nothing* about whether we have too much WIP, too little WIP, or just the right amount of WIP. Assumptions #1 and #3, therefore, while important, can be ruled out as *the* most important. Assumption #2 is one that is frequently ignored. In your work, how often do you start something but never complete it? My guess is the number of times that has happened to you over the past few months is something greater than zero. Even so, while this assumption is again of crucial importance, it is usually the exception rather than the rule. Unless you find yourself in a context where you are always abandoning more work than you complete (in which case you have much bigger problems than LL), this assumption will also not be the dominant reason why you have a suboptimal workflow. This leaves us with assumption #4. 
Allowing items to age arbitrarily is the single greatest factor in why you are not efficient, effective, or predictable at delivering customer value. Stated a different way, if you plan to adopt the use of flow metrics, the single most important aspect that you should be paying attention to is not letting work items age unnecessarily! More than limiting WIP, more than visualizing work, more than finding bottlenecks (which is not necessarily a flow thing anyway), the only question to ask of your flow system is, "Are you letting items age needlessly?" Get aging right, and most of the rest of predictability takes care of itself. As this is a blog series about Little's Law, getting into the specifics of how to manage item aging is a bit beyond our remit, but thankfully Julia Wester has done an excellent job of giving us an intro to how you might use ActionableAgile Analytics for this goal. To me, one of the strangest results in all of flow theory is that the most important metric to measure is not really stated in any equation (much less Little's Law). While I always had an intuition that aging was important, I never really understood its relevance. It wasn't until I went back and read the original proofs and subsequent articles by Little and others that I grasped its significance. You'll note that, other than the Kanban Guide, all other flow-based frameworks do not even mention work item aging at all. Kinda makes you wonder, doesn't it? Having now explored the real reasons to understand Little's Law (e.g., pay attention to aging and understand all the assumptions), let's now turn our attention to some ways in which Little's Law should NOT be used.
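As a hedged sketch of what acting on aging can look like in practice (the item names, dates, and 85th-percentile threshold here are all assumptions, not from the post):

```python
from datetime import date

# Flag in-progress items whose current age exceeds a percentile of
# historical Cycle Times. All data here is hypothetical.
historical_cycle_times = sorted([2, 3, 4, 4, 5, 6, 8, 9, 12, 20])  # days
in_progress = {"ITEM-101": date(2024, 5, 1), "ITEM-102": date(2024, 5, 20)}
today = date(2024, 5, 22)

# 85th percentile of historical Cycle Times (simple index method).
idx = min(int(0.85 * len(historical_cycle_times)),
          len(historical_cycle_times) - 1)
p85 = historical_cycle_times[idx]

for item, started in in_progress.items():
    age = (today - started).days  # age of the in-progress item, in days
    if age > p85:
        print(f"{item} is {age} days old, past the 85th percentile ({p85} days)")
```

An item crossing the threshold is the signal to intervene: split it, swarm on it, unblock it, or kick it out of the system.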

  • It's Always The Assumptions

    This is post 5 of 9 in our Little's Law series. Not to get too morbid, but in police detective work, when a married woman is murdered, there are only three rules to determine who the killer is:

1. It's always the husband
2. It's always the husband
3. It's always the husband

The same thing is true when your flow metrics are murdered by your process:

1. It's always the assumptions
2. It's always the assumptions
3. It's always the assumptions

Think back to the first experiment I had you run at the start of this blog series. I had you look at your data, do some calculations, and determine whether you get the results that Little's Law predicts. I even showed you some example data from a real process where the calculated metrics did not yield a valid Little's Law result. I asked you at the time, "What's going on here?" If you've read my last post, then you now have the answer. The problem isn't Little's Law. The problem is your process. The Throughput form of Little's Law is based on five basic assumptions. Break any one or more of those assumptions at any one or more times, and the equation won't work. It's as simple as that. For convenience for the rest of this discussion, I'm going to re-list Little's assumptions for the Throughput form of his law. Also, for expediency, I am going to number them, though this numbering is arbitrary and is in no way meant to imply an order of importance (or anything else for that matter):

1. Average arrival rate equals average departure rate
2. All items that enter a workflow must exit
3. WIP should neither be increasing nor decreasing
4. Average age of WIP is neither increasing nor decreasing
5. Consistent units must be used for all measures

In that earlier post, I gave this example from a team that I had worked with (60 days of historical data): WIP: 19.54, TH: 1.15, CT: 10.3. For this data, WIP / TH is 16.99, not 10.3.
What that tells us is that at one or more points during that 60-day time frame, this team violated one or more of Little's Law's assumptions. One of the first pieces of detective work is to determine which ones were violated and when. Almost always, a violation of Little's Law comes down to your process's policies (whether those policies are explicit or not). For example, does your process call for expedites that are allowed to violate WIP limits and that take priority over other existing work? If so, for each expedited item you had during the 60 days, you violated at least assumptions #3 and #4. Did you have blockers that you ignored? If so, then you at least violated #4. Did you cancel work and just delete it off the board? If so, then you violated #2. And so on. This was quite possibly the easiest post to write in this series, but probably the most important one. A very quick and easy health check is to compare your calculated flow metrics with those that are calculated by Little's Law. Are they different? If so, then somewhere, somehow, you have violated an assumption. Now your detective work begins. Do you have process policies that are in direct contradiction to Little's Law's assumptions? If so, what changes can you make to improve stability/predictability? Do you have more ad hoc policies that contradict Little? If so, how do you make them explicit so the team knows how to respond in certain situations? The goal is not to get your process perfectly in line with Little. The goal is to have a framework for continual improvement. Little is an excellent jumping-off point for that. Speaking of continual improvement, when it comes to spotting improvement opportunities as soon as possible, there is one assumption above that is more important than all of the others. If you have followed my work up until now, then you know what that assumption is. If not, then read on to the next post...
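That health check can be sketched in a few lines, using the post's own 60-day example numbers:

```python
# Compare the measured average Cycle Time with the one implied by
# Little's Law (CT = WIP / TH), using the 60-day example data above.
avg_wip = 19.54          # average Work In Progress over the window
avg_throughput = 1.15    # items finished per day
measured_avg_ct = 10.3   # average Cycle Time measured directly

implied_ct = avg_wip / avg_throughput  # ≈ 16.99 days
print(f"implied CT: {implied_ct:.2f} days vs measured: {measured_avg_ct} days")

# A material gap means at least one of Little's Law's assumptions was
# violated somewhere in the window; the detective work starts here.
if abs(implied_ct - measured_avg_ct) > 1:
    print("likely assumption violation; inspect your process policies")
```

The computation only tells you *that* an assumption was violated, not *which* one; mapping the gap back to specific policies (expedites, ignored blockers, deleted work) is the detective work described above.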

  • What's the Tallest Mountain On Earth?

    If, like most everyone else, you answered "Mount Everest," then you are not quite right. But you are not quite wrong, either. The real answer has to do with a concept I wrote about in an earlier blog post. Scientists can all objectively agree on where mountains "finish". That is, it's extremely hard to argue about where a mountain "peaks". But when measuring, we know that "finished" is only half the battle. Agreeing on where a mountain "starts" is a whole other conversation altogether, and not nearly as straightforward as it may sound. For example, more than half of the Mauna Kea volcano in Hawaii is underwater. Only 4,205 meters of the whole mountain is above sea level. But if we measure from the base to the summit of Mauna Kea, it is 10,211 meters; that's about 15% taller than Everest's 8,848 meters. If you only want to talk about mountains on land, then, base-to-summit, Denali in Alaska (5,900 m) is actually taller than Everest (4,650 m). So why does Everest get the crown? The reason is that most scientists choose to start their measurements of mountain heights from a concept known as sea level. But the problem with sea level is that anyone who has studied geography knows that the sea ain't so level. The physics of the earth are such that different densities of the earth's makeup at different locations cause different gravitational pulls, resulting in "hills and valleys" of sea level across the planet (the European Space Agency has an outstanding visualization of this). Add to that things like tides, storms, wind, and a bulge around the equator due to the earth's rotation, and there is no one true level for the sea. Scientists cheat to solve this problem by calculating a "mean" (arithmetic mean, or average) sea level. This "average" sea level represents the zero starting point from which all land mountains are measured (cue the "Flaw of Averages"). You might ask, why don't we choose a more rigorous starting point like the center of the earth?
The reason for that is... remember that bulge around the equator that I just alluded to? The earth itself is not quite spherical, and the distance from its center at the equator is longer than the distance from the center to either the north or south pole. In case you were wondering, if we were to measure from the center of the earth, then Mount Chimborazo in Ecuador would win.

It seems that geologists fall prey to the same syndrome that afflicts most Agile methodologies. A bias toward defining only when something is "done" ignores half of the equation -- and the crucial half at that. What's more, you have Agilists out there who actively rant against any notion of a defined "start" or "ready". What I hope to have proven here is that, in many instances, deciding where to start can be a much more difficult (and usually much more important) problem to solve, depending on what question you are trying to answer. At the risk of repeating myself, a metric is a measurement, and any measurement requires BOTH a start point AND a finish point. Therefore, begin your flow data journey by defining the start and end points in your process. Then consider updating those definitions as you collect data and as your understanding of your context evolves. Anything else is just theatre.

References
PBS.org, "Be Smart", Season 10, Episode 9, 08/10/2022
The European Space Agency, https://www.esa.int/

About Daniel Vacanti, Guest Writer
Daniel Vacanti is the author of the highly-praised books "When will it be done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. 
When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
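The same start/finish principle applies directly to flow metrics. As a minimal sketch (the work item and its dates are invented for illustration), here is how two different "start" definitions yield two different measurements of the very same item -- just as two different base points yield two different heights for the very same mountain:

```python
from datetime import date

# A hypothetical work item; the field names are assumptions for illustration.
item = {
    "created": date(2023, 3, 1),    # option added to the backlog
    "started": date(2023, 3, 10),   # crossed our agreed "start" point
    "finished": date(2023, 3, 17),  # crossed our agreed "finish" point
}

# Same finish point, two different start points, two different metrics
# (using the convention of counting both the first and last day).
lead_time = (item["finished"] - item["created"]).days + 1   # start = creation
cycle_time = (item["finished"] - item["started"]).days + 1  # start = work began

print(lead_time, cycle_time)  # 17 and 8
```

Neither number is "the" right one; each answers a different question, which is exactly why the start-point definition has to be explicit.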

  • The Deviance of Standard Deviation

Before getting too far into this post, there are two references that do a far better job than I ever will at explaining the deficiency of the standard deviation statistic:

"The Flaw of Averages" by Dr. Sam Savage (https://www.flawofaverages.com/)
Pretty much anything written by Dr. Donald Wheeler (spcpress.com)

Why is the standard deviation so popular? Because that's what students are taught. It's that simple. Not because it is correct. Not because it is applicable in all circumstances. It is just what everyone learns. Even if you haven't taken a formal statistics class, somewhere along the line, you were taught that when presented with a set of data, the first thing you do is calculate an average (arithmetic mean) and a standard deviation. Why were we taught that? It turns out there's not a really good answer to that. An unsatisfactory answer, however, would involve the history of the normal (Gaussian) distribution and how, over the past century or so, the Gaussian distribution has come to dominate statistical analysis (its applicability--or, rather, inapplicability--for this purpose would be a good topic for another blog, so please leave a comment letting us know your interest). To whet your appetite on that topic, please see Bernoulli's Fallacy by Aubrey Clayton.

Arithmetic means and standard deviations are what are known as descriptive statistics. An arithmetic mean describes the location of the center of a given dataset, while the standard deviation describes the data's dispersion. For example, say we are looking at Cycle Time data and we find that it has a mean of 12 and a standard deviation of 4.7. What does that really tell you? Well, actually, it tells you almost nothing--at least almost nothing that you really care about. The problem is that in our world, we are not concerned so much with describing our data as we are with doing proper analysis on it. 
Specifically, what we really care about is being able to identify possible process changes (signal) that may require action on our part. The standard deviation statistic is wholly unsuited to this pursuit. Why?

First and foremost, the nature of how the standard deviation statistic is calculated makes it very susceptible to extreme outliers. A classic joke I use all the time is: imagine that the world's richest person walks into a pub. The average wealth of everyone in the pub is somewhere in the billions, and the standard deviation of wealth in the pub is somewhere in the billions. However, you know that if you were to walk up to any other person in the pub, that person would not be a billionaire. So what have you really learned from those descriptive statistics?

This leads us to the second deficiency of the standard deviation statistic. Whenever you calculate a standard deviation, you are making a big assumption about your data (recall my earlier post about assumptions when applying theory?). Namely, you are assuming that all of your data has come from a single population. This assumption is not talked about much in statistical circles. According to Dr. Wheeler, "The descriptive statistics taught in introductory classes are appropriate summaries for homogeneous collections of data. But the real world has many ways of creating non-homogeneous data sets." (https://spcpress.com/pdf/DJW377.pdf). In our pub example above, is it reasonable to assume that we are talking about a single population of people's wealth that shares the same characteristics? Or is it more reasonable to treat that one extreme data point as a signal that something non-routine is going on?

Take the clichéd probability example of selecting marbles from an urn. The setup usually concerns a single urn that contains two different coloured marbles--say red and white--in a given ratio. Then some question is asked, like: if you select a single marble, what is the probability it will be red? 
The problem is that in the "real world," your data is not generated by choosing different coloured marbles from an urn. Most likely, you don't know if you are selecting from one urn or several urns. You don't know if your urns contain red marbles, white marbles, blue marbles, bicycles, or tennis racquets. Your data is generated by a process where things can--and do--change, go wrong, encompass multiple systems, etc. It is generated by potentially different influences under different circumstances with different impacts. In those situations, you don't need a set of descriptive statistics that assumes a single population. What you need is analysis of your data to find evidence (signal) of multiple, or changing, populations. In Wheeler's nomenclature, what you need to do is first determine whether your data is homogeneous or not.

Now, here's where proponents of the standard deviation statistic will say that to find signal, all you do is take your arithmetic mean and start adding or subtracting standard deviations to it. For example, they will say that roughly 99.7% of all data should fall within your mean plus or minus 3 standard deviations. Thus, if you get a point outside of that range, you have found signal. Putting aside for a minute the fact that this type of analysis ignores the assumptions I just outlined, this example brings into play yet another dangerous assumption of the standard deviation. When you start to couple percentages with a standard deviation (like 68.2%, 95.5%, 99.7%, etc.), you are making another big assumption: that your data is normally distributed. I'm here to tell you that most real-world process data is NOT normally distributed.

So what's the alternative? As a good first approximation, a great place to start is with the percentile approach that we utilize with ActionableAgile Analytics (see, for example, this blog post). This approach makes no assumptions about single populations, underlying distributions, etc. 
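To make the pub joke concrete, here is a small sketch (the numbers are invented) comparing the mean and standard deviation to a simple percentile on data containing one extreme outlier:

```python
import math
import statistics

# Hypothetical cycle times (in days) with one extreme outlier -- the
# "world's richest person walks into the pub" situation.
cycle_times = [3, 4, 5, 5, 6, 7, 8, 9, 10, 200]

mean = statistics.mean(cycle_times)    # 25.7 -- larger than 9 of the 10 values
stdev = statistics.stdev(cycle_times)  # ~61 -- dominated by the single outlier

# A percentile assumes nothing about populations or distributions:
# 85% of the items finished in this many days or fewer.
idx = math.ceil(0.85 * len(cycle_times)) - 1
p85 = sorted(cycle_times)[idx]         # 10 -- barely moved by the outlier

print(mean, round(stdev, 1), p85)
```

The mean and standard deviation describe almost nobody in this dataset, while the 85th percentile still tells you something actionable about the routine items.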
If you want to be a little more statistically rigorous (which at some point you will want to be), then you will need the Process Behaviour Chart advocated by Dr. Donald Wheeler as a continuation of Dr. Walter Shewhart's work. A deeper discussion of the Shewhart/Wheeler approach is a whole blog series on its own that, if you are lucky, may be coming to a blog site near you soon.

So, to sum up, the standard deviation statistic is an inadequate tool for data analysis because it:

• Is easily influenced by outliers (which your data probably has)
• Often assumes a normal distribution (which your data doesn't follow)
• Assumes a single population (which your data likely doesn't possess)

Any analysis performed on top of these flaws is almost guaranteed to be invalid.

One last thing. Here's a quote from Atlassian's own website: "The standard deviation gives you an indication of the level of confidence that you can have in the data. For example, if there is a narrow blue band (low standard deviation), you can be confident that the cycle time of future issues will be close to the rolling average." There are so many things wrong with this statement that I don't even know where to begin. So please help me out by leaving some of your own comments about this on the 55 Degrees community site. Happy Analysis!
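As a taste of that Shewhart/Wheeler approach: an XmR (individuals) chart derives natural process limits from the average moving range, using the standard 2.66 scaling constant, rather than from a standard deviation. A minimal sketch with invented data:

```python
# Hypothetical cycle times (in days), one per completed item, in time order.
values = [8, 6, 9, 7, 10, 8, 7, 9, 30, 8]

# Moving ranges: absolute difference between successive values.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
average = sum(values) / len(values)
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits; 2.66 is the standard XmR constant for
# individuals charts (Wheeler).
upper_limit = average + 2.66 * avg_mr
lower_limit = max(0.0, average - 2.66 * avg_mr)  # cycle times can't go negative

# Points outside the limits are signals of a possible process change.
signals = [v for v in values if v > upper_limit or v < lower_limit]
print(signals)  # the 30 shows up as a signal worth investigating
```

Note that the limits come from point-to-point variation, not from an assumed distribution, which is exactly why this analysis sidesteps the normality assumption criticized above.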

  • One Law. Two Equations.

This is post 4 of 9 in our Little's Law series. In the previous post, we demonstrated how the two different forms of Little's Law (LL) can lead to two very different answers even when using the same dataset. How can one law lead to two answers? As was suggested, the applicability of any theory depends completely on one's understanding of the assumptions that need to be in place in order for that given theory to be valid. However, in the case of LL, we have two different equations that purport to express one single theory. Does having two equations require having two sets of assumptions (and potentially two types of applicability)? In a word, yes.

Recall that L = λW (the version based on arrival rate) came first, and in his 1961 proof, Little stated his assumptions for the formula to be correct: "if the three means are finite and the corresponding stochastic process strictly stationary, and, if the arrival process is metrically transitive with nonzero mean, then L = λW." There's a lot of mathematical gibberish in there that you don't need to know anyway, because it turns out Little's initial assumptions were overly restrictive, as was demonstrated by subsequent authors (reference #1). All you really need to know is that--very generally speaking--LL is applicable to any process that is relatively stable over time [see note below]. For our earlier thought experiment, I took this notion of stability to an extreme in order to (hopefully) prove a point. In the example data I provided, you'll see that arrivals are infinitely stable in that they never change. In this ultra-stable world, you'll note that the arrivals form of LL works--quite literally--exactly the way that it should. That is to say, when you plug two numbers into the equation, you get the exact answer for the third. Things change dramatically, however, when we start talking about the WIP = TH * CT version of the law. 
Most people assume--quite erroneously--that this latter form of LL only requires the same assumptions as the arrivals version. However, Dr. Little is very clear that changing the perspective of the equation from arrivals to departures has a very specific impact on the assumptions that are required for the law to be valid. Let's use Little's own words for this discussion: "At a minimum, we must have conservation of flow. Thus, the average output or departure rate (TH) equals the average input or arrival rate (λ). Furthermore, we need to assume that all jobs that enter the shop will eventually be completed and will exit the shop; there are no jobs that get lost or never depart from the shop...we need the size of the WIP to be roughly the same at the beginning and end of the time interval so that there is neither significant growth nor decline in the size of the WIP, [and] we need some assurance that the average age or latency of the WIP is neither growing nor declining." (reference #2)

Allow me to put these in a bulleted list that will be easier for your reference later. For a system being observed for an arbitrarily long amount of time:

• Average arrival rate equals average departure rate
• All items that enter a workflow must exit
• WIP should neither be increasing nor decreasing
• Average age of WIP is neither increasing nor decreasing
• Consistent units must be used for all measures

I added that last bullet point for clarity. It should make sense that if Cycle Time is measured in days, then Throughput cannot be measured in weeks. And don't even talk to me about story points. If you have a system that obeys all of these assumptions, then you have a system in which the TH form of Little's Law will apply.

Wait, what's that you say? Your system doesn't follow these assumptions? 
I'm glad you pointed that out because that will be the topic of our next post.

A note on stability
Most people have an incorrect notion of what stability means. "Stable" does not necessarily mean "not changing." For example, Little explicitly states aspects of a system that L = λW is NOT dependent on and that, therefore, may reasonably change over time: size of items, order of items worked on, number of servers, etc. That means situations like adding or removing team members over time may not be enough to consider a process "unstable." However, to take an extreme example, it would be easy to see that all of the restrictions/changes imposed during the 2020 COVID pandemic would cause a system to be unstable. From a LL perspective, only when all 5 assumptions are met can a system reasonably be considered stable (assuming we are talking about the TH form of LL).

References
1. Whitt, W. 1991. A review of L = λW and extensions. Queueing Systems 9(3) 235–268.
2. Little, J. D. C., S. C. Graves. 2008. Little's Law. D. Chhajed, T. J. Lowe, eds. Building Intuition: Insights from Basic Operations Management Models and Principles. Springer Science + Business Media LLC, New York.

Explore all entries in this series
When an Equation Isn't Equal
A (Very) Brief History of Little's Law
The Two Faces of Little's Law
One Law. Two Equations (this article)
It's Always the Assumptions
The Most Important Metric of Little's Law Isn't In the Equation
How NOT to use Little's Law
Other Myths About Little's Law
Little's Law - Why You Should Care
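The assumptions above can be checked numerically. Here is a small sketch (the item log is invented) of an observation window where conservation of flow holds: every item that arrives also departs, and nothing is in progress at either boundary. Under those conditions, the TH form WIP = TH * CT balances exactly:

```python
# Hypothetical item log: (start_day, end_day) inclusive, all fully contained
# within the observation window -- so conservation of flow holds.
items = [(1, 3), (1, 4), (2, 5), (3, 6), (4, 8), (5, 8), (6, 9), (7, 10)]

window = range(1, 11)  # days 1..10

# Average WIP: count items in progress on each day, then average.
wip_per_day = [sum(1 for s, e in items if s <= d <= e) for d in window]
avg_wip = sum(wip_per_day) / len(wip_per_day)

# Throughput: completed items per day over the window.
throughput = len(items) / len(window)

# Average Cycle Time, counting both start and end day (consistent units: days).
avg_ct = sum(e - s + 1 for s, e in items) / len(items)

print(avg_wip, throughput * avg_ct)  # both sides agree
```

Break any assumption (say, leave two items unfinished at the end of the window) and the two sides of the equation drift apart, which previews the topic of the next post.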

  • Is your workflow hiding key signals?

There are lots of signals that you can get from visualizing your work - especially on a Kanban board. You can see bottlenecks, blockers, and excess work-in-progress, but one signal you don't often get to see is the answer to the question, "How much longer from here?" Now, to get that signal, you have to have a process that models flow. By flow, I mean the movement of potential value through a system. Your workflow is intended to be a model of that system. When built in that way, your workflow allows you to visualize and manage how your potential value moves through your system.

Managing flow is managing liability and risk
A tip is to look at your workflow from a financial perspective. Work items you haven't started are options that, when exercised, could deliver value. Work items you have finished are (hopefully) assets delivering value. The remainder - all the work items that you've spent time and money on but haven't yet received any value in return (your work-in-progress) - are your liabilities. What this helps us clearly demonstrate is that our work-in-progress is where most of our risk lies. Yes, we could have delivered things that don't add value (and hopefully, there are feedback loops to help identify those situations and learn from them). You can also have options that you really should be working on to maximize the long-term value they can provide. But, by far, the biggest risk we face is taking on too much liability and not managing that liability effectively - causing us to spend more time and money than we should to turn it into assets.

Expectations versus reality
We humans have a tendency to look at things through rose-colored glasses (ok, most of us do). So, when we start a piece of work, we think it will have a nice, straight, and effective trip through the workflow with no u-turns or roadblocks. More often than not, that's not the case, and there are many reasons for that. One of the biggest reasons is how we build our workflow. 
When you build your workflow to model the linear progression of work as it moves from an option to an asset, you're more likely to have that straight path. If you build your workflow to model anything else - especially the different groups of people that will work on it - then you end up with an erratic path. You can get a picture of how work moves between people (if you use tools like Inspekt). But what you don't get is a picture of how work moves through a lifecycle from option to asset. This is a problem if you think you're using your workflow to help optimize flow, because you aren't seeing the signals you think you are. In a situation like this, what you have is a people flow -- not a work flow. That's great if you want to focus purely on managing resource efficiency (keeping people busy) but poor if you want to optimize flow and control your liabilities.

The signal you can only get from a true workflow
Once you can truly say that you have modeled the life cycle of turning options into assets, you can say that a card's position in the workflow reflects how close or far away it is from realizing its potential value. This means that when you move a work item to the right in your workflow, you're signaling that you're closer to turning the liability into an asset, and when you move it to the left (backward), you're moving farther away from that outcome. (Does it make more sense now why we handle backward movement the way we do in ActionableAgile?)

Model your workflow so that how you move a work item is a signal of movement towards or away from realising its potential value.

When you can say this, then you can start signaling how long an item is likely to take to become an asset. With tools like ActionableAgile's Cycle Time Scatterplot, you can see how long it's likely to take for an item to be completed from any workflow stage. 
It's like when you go to Disney World or someplace like it, and you're in line for a ride, and you see a sign that says your wait is 1 hour from this point. Each column of your workflow can have that metaphorical sign. Except you can also know the likelihood associated with that information.

Want to make a change?
Don't stress if you just learned that your workflow isn't all it's cracked up to be. You can make a change! It's all about board design and policies. If you want tips on how to change your board or process, check out my blog post on how to design your board to focus on flow, or watch my talk below on this topic from Lean Agile London 2022!
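That "your wait is 1 hour from this point" sign can be sketched from historical data. In this illustration, the stage names and the day counts are entirely made up; the idea is simply: for each column, look at how long past items took to reach done from that column, and read off a percentile:

```python
import math

# Hypothetical history: for each workflow stage, the days past items took
# to reach "Done" after entering that stage.
history = {
    "Analysis":    [9, 11, 12, 14, 15, 16, 18, 20, 22, 25],
    "Development": [4, 5, 6, 6, 7, 8, 9, 10, 12, 14],
    "Testing":     [1, 2, 2, 3, 3, 3, 4, 5, 6, 8],
}

def wait_sign(stage, percentile=0.85):
    """The Disney-style 'your wait from this point' sign for one column."""
    durations = sorted(history[stage])
    idx = math.ceil(percentile * len(durations)) - 1
    return durations[idx]

for stage in history:
    print(f"{stage}: 85% of items finished within {wait_sign(stage)} days")
```

This only gives a meaningful signal when the board models a true work flow: if items ricochet between people-based columns, the per-column history stops corresponding to "distance from done."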

  • Probabilistic vs. deterministic forecasting

Do you hear people throwing around words like probabilistic and deterministic forecasting, and you aren't sure exactly what they mean? Well, I'm writing this blog post specifically for you. Spoiler alert: it has to do with uncertainty vs. certainty. Forecasting is the process of making predictions based on past and present data (Wikipedia). Historically, the type of forecasting used for business planning was deterministic (or point) forecasting. Increasingly, however, companies are embracing probabilistic forecasting as a way to help understand risk.

What is deterministic forecasting?
Just like Fight Club, people don't really talk about deterministic forecasting. It is just what they do, and they don't question it - at least until recently. I mean, if it is all someone knows, why would they even think to question it or explore the pros and cons? But what is it really? Deterministic forecasting is when only one possible outcome is given, without any context around the likelihood of that outcome occurring. Statements like these are deterministic forecasts:

• It will rain at 1 P.M.
• Seventy people will cross this intersection today.
• My team will finish ten work items this week.
• This project will be done on June 3rd.

For each of those statements, we know that something else could happen. But we have picked a specific possible outcome to communicate. Now, when someone hears or reads these statements, they do what comes naturally to humans... they fill in the gaps of information with what they want to be true. Usually, what they see or hear is that these statements are absolutely certain to happen. It makes sense. We've given them no alternative information. So, the problem with giving a deterministic forecast when more than one possible outcome really exists is that we aren't giving anyone, including ourselves, any information about the risk associated with the forecast we provided. How likely is it truly to happen? 
Deterministic forecasts communicate a single outcome with no information about risk. If there are factors that could come into play that could change the outcome, say external risks or sick employees, then deterministic forecasting doesn't work for us. It doesn't allow us to give that information to others. Fortunately, there's an alternative - probabilistic forecasting.

What is probabilistic forecasting?
A probabilistic forecast is one that acknowledges the range of possible outcomes and assigns a probability, or likelihood of happening, to each. The image above is a histogram showing the range of possible outcomes from a Monte Carlo simulation I ran. The question I effectively asked was, "How many items can we complete in 13 days?" Now, there are a lot of possible answers to that question. In fact, each bar on the histogram represents a different outcome - anywhere from 1 to 90 or more. We can, and probably should, work to make that range tighter. But, in the meantime, we can create a forecast by understanding the risk we are willing to take on. In the image above, we see that in approximately 80% of the 10,000 trials, we finished at least 27 items in 13 days. This means we can say that, if our conditions stay roughly similar, there's an 80% chance that we can finish at least 27 items in 13 days. That also means there's a 20% chance we finish 26 or fewer. Now I can discuss that with my team and my stakeholders and make decisions to move forward or to see what we can do to improve the likelihood of the answer we'd rather have. Here are some more probabilistic forecasts:

• There is a 70% chance of rain between now and 1 P.M.
• There's an 85% chance that at least seventy people will cross this intersection today.
• There's a 90% chance that my team will finish ten or more work items this week.
• There's only a 50% chance that this project will be done on or before June 3rd. 
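A Monte Carlo simulation like the one described is only a few lines of code. This is a minimal sketch, not the exact simulation behind the histogram: the daily throughput history is invented, and each trial simply samples one historical day's throughput for each of the 13 days:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical history: items finished per day over recent days.
daily_throughput = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 0, 2, 1, 5, 0]

def simulate_how_many(days=13, trials=10_000):
    """One trial = sample a historical daily throughput for each day and sum."""
    return [sum(random.choice(daily_throughput) for _ in range(days))
            for _ in range(trials)]

results = simulate_how_many()

# The 80% forecast: the item count met or exceeded in 80% of trials.
results.sort(reverse=True)
forecast_80 = results[int(0.80 * len(results)) - 1]
print(f"80% chance of finishing at least {forecast_80} items in 13 days")
```

Reading off a different index gives a different risk level, which is the whole point: the forecast is a range plus a probability, not a single number.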
Every probabilistic forecast has two components: a range and a probability, allowing you to make informed decisions.

Which should I use?
To answer this question, you have to answer another: can you be sure that there's a single possible outcome, or are there factors that could cause other possibilities? In other words, do you have certainty or uncertainty? If the answer is certainty, then deterministic forecasts are right for you. However, that is rarely, if ever, the case. It is easy to give in to the allure of the single answer provided by a deterministic forecast. It feels confident. Safe. Easy. Unfortunately, those feelings are an illusion. Deterministic forecasts are often created using qualitative information and estimates, but humans are historically really bad at estimating. Our brains just can't account for all the possible factors. Even if you were to use data to create a deterministic forecast, you still have to pick an outcome to use, and often people choose the average. Is being wrong half the time ok?

If the answer is uncertainty (like the rest of us), then probabilistic forecasts are the smart choice. By providing the range of outcomes and the probability of each (or a set) happening, you give significantly more information about the risk involved with any forecast, allowing people to make more informed decisions. Yes, it's not the tidy single answer that people want, but it's the truth. Carveth Read (1920) said it well: "It is better to be vaguely right than exactly wrong." Remember that the point of forecasting is to manage risk. So, use the technique that provides as much information about risk as possible.

How can I get started?
First, gather data about when work items start and finish. If you're using work management tools like Jira or Azure DevOps, then you are already capturing that data. 
With that information, you can use charts and simulations to forecast how long it takes to finish a single work item, how many work items you can finish in a fixed time period, or even how long it will take to finish a fixed scope of work. These are things we get asked to do all the time. You don't even need a lot of data. If you have at least 10 work items, preferably a representative mix, then you have enough data to create probabilistic forecasts. Once you have the data you need, tools like ActionableAgile™️ and Portfolio Forecaster from 55 Degrees help you determine the forecast that matches your risk tolerance with ease. You can also use our tools to improve the predictability of your process. When you do that, you are happier with your forecasts because you get higher probability with a narrower range of outcomes. If you're interested in chatting with us or other users on this topic, join us in our community and create a post! See you there!
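The "how long to finish a fixed scope" question above can be sketched with the same Monte Carlo idea, run in the other direction: instead of fixing the days and counting items, fix the items and count the days. Again, the throughput history is invented for illustration:

```python
import random

random.seed(11)  # fixed seed so the illustration is reproducible

# Hypothetical history: items finished per day over recent days.
daily_throughput = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 0, 2, 1, 5, 0]

def days_to_finish(backlog=30, trials=10_000):
    """One trial = sample daily throughputs until the backlog is burned down."""
    outcomes = []
    for _ in range(trials):
        remaining, days = backlog, 0
        while remaining > 0:
            remaining -= random.choice(daily_throughput)
            days += 1
        outcomes.append(days)
    return sorted(outcomes)

outcomes = days_to_finish()

# The 85% forecast: in 85% of trials, the 30 items were done within this many days.
forecast_85 = outcomes[int(0.85 * len(outcomes)) - 1]
print(f"85% chance of finishing 30 items within {forecast_85} days")
```

Note that a sampling approach like this assumes the future will behave roughly like the recorded history, which is why the forecasts are always conditioned on "if our conditions stay roughly similar."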

bottom of page