
Search Results


  • Other Myths About Little's Law

This is post 8 of 9 in our Little's Law series. In the previous blog post, we talked about the single biggest error people make when applying Little's Law. That's not to say there aren't others out there. Thankfully, Prateek Singh and I recorded an episode of our Drunk Agile podcast to go over some of these other myths in more detail. While a large portion of the episode is a rehash of the forecasting debacle, we also get into lesser-known problems like:

1. Using LL to set WIP limits
2. "Proving" LL using Cumulative Flow Diagrams
3. All items need to be the same size
4. Cycle Times must be normally distributed
5. FIFO queuing is required

BTW, you will recall that in a previous post I quoted Little as saying, "...but it is quite surprising what we do not require. We have not mentioned how many servers there are, whether each server has its own queue or a single queue feeds all servers, what the service time distributions are, what the distribution of inter-arrival times is, or what is the order of service of items, etc." (1). If Little himself says that these are myths, who are we to disagree? (The toy simulation at the end of this entry makes the point concrete.) So grab your favourite whisky and enjoy!

References

1. Little, J. D. C., and S. C. Graves. 2008. "Little's Law." In Building Intuition: Insights from Basic Operations Management Models and Principles, edited by D. Chhajed and T. J. Lowe. New York: Springer Science + Business Media LLC.
2. Drunk Agile YouTube channel: https://www.youtube.com/@drunkagile4780

Explore all entries in this series:

1. When an Equation Isn't Equal
2. A (Very) Brief History of Little's Law
3. The Two Faces of Little's Law
4. One Law. Two Equations
5. It's Always the Assumptions
6. The Most Important Metric of Little's Law Isn't In the Equation
7. How NOT to use Little's Law
8. Other Myths About Little's Law (this article)
9. Little's Law - Why You Should Care

About Daniel Vacanti, Guest Writer

Daniel Vacanti is the author of the highly praised books "When Will It Be Done?" and "Actionable Agile Metrics for Predictability" and the original mind behind the ActionableAgile™️ Analytics Tool. Recently, he co-founded ProKanban.org, an inclusive community where everyone can learn about Professional Kanban, and he co-authored their Kanban Guide. When he is not playing tennis in the Florida sunshine or whisky tasting in Scotland, Daniel can be found speaking on the international conference circuit, teaching classes, and creating amazing content for people like us.
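To make the quoted passage concrete, here is a toy simulation (all parameters invented, not taken from the podcast): items arrive randomly, have wildly different sizes, and are served in random order rather than FIFO, yet the long-run averages still come out approximately satisfying L = λW.

```python
import random

# Toy single-queue simulation: random arrivals, item sizes of 1-5 units of
# work, and service in RANDOM order (not FIFO). Despite all of that, the
# long-run averages should still approximately satisfy L = lambda * W.
random.seed(42)

T = 100_000          # time steps to simulate
queue = []           # [item_id, remaining_work] for items in the system
started_at = {}      # item_id -> arrival step
arrivals = completed = next_id = 0
area_L = total_time_in_system = 0

for t in range(T):
    if random.random() < 0.3:                 # ~0.3 arrivals per step
        queue.append([next_id, random.randint(1, 5)])
        started_at[next_id] = t
        arrivals += 1
        next_id += 1
    if queue:                                 # work one unit on a RANDOM item
        item = random.choice(queue)
        item[1] -= 1
        if item[1] == 0:
            queue.remove(item)
            total_time_in_system += t - started_at.pop(item[0]) + 1
            completed += 1
    area_L += len(queue)                      # accumulate WIP for the time-average

L = area_L / T                         # average number of items in the system
lam = arrivals / T                     # average arrival rate
W = total_time_in_system / completed   # average time in system

print(f"L = {L:.2f}, lambda * W = {lam * W:.2f}")  # approximately equal
```

Run it and the two numbers come out close (up to end-of-window effects), with no FIFO, no uniform item sizes, and no particular distributions anywhere in sight.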

  • Little's Law - Why You Should Care

This is post 9 of 9 in our Little's Law series. I personally can't fathom how someone could call themselves a flow practitioner without making a concerted effort to study Little's Law. However, the truth is that some of the posts in this series have gone into way more detail about LL than most people would ever practically need. Having said that, without an understanding of what makes Little's Law work, teams are making decisions every day that are in direct contravention of established mathematical facts (and paying the consequences). To that end, here is my suggested reading list for anyone interested in learning more about Little's Law (in this particular order):

1. http://web.eng.ucsd.edu/~massimo/ECE158A/Handouts_files/Little.pdf Frank Vega and I call this "Little's Law Chapter 5," as it is a chapter taken from a textbook that Little contributed to. For me, this is hands down the best introduction to the law in its various forms. I am not lying when I say that I've read this paper 50 times (and probably closer to 100) and get something new from it with each sitting.

2. https://people.cs.umass.edu/~emery/classes/cmpsci691st/readings/OS/Littles-Law-50-Years-Later.pdf This is a paper Little wrote on the 50th anniversary of the law. It builds on the concepts of Chapter 5 and goes into more detail about the history of L = λW since its first publication in 1961. This paper, along with Chapter 5, should tell you 95% of what you need to know about LL.

3. http://fisherp.scripts.mit.edu/wordpress/wp-content/uploads/2015/11/ContentServer.pdf Speaking of the first publication of the proof of L = λW, there's no better teacher than going right to the source. This article is only my third recommendation because it is a bit mathy, but its publication is one of the seminal moments in the history of queuing theory, and any queuing theory buff should be familiar with this proof.

For extra credit:

4. http://www.columbia.edu/~ww2040/ReviewLlamW91.pdf This article is not for the faint of heart. I recommend it not only for its comprehensive review of L = λW but also (and mostly) for its exhaustive reference list. Work your way through all of the articles listed at the end of this paper, and you can truly call yourself an expert on Little's Law.

If you read all of these, then you can pretty much ignore any other blog or LinkedIn post (or Wikipedia article, for that matter) that references Little's Law. Regardless of the effort you put in, however, expertise in LL is not the end goal. No, the end goal is altogether different.

Why You Really Should Care

If you are studying Little's Law, it is probably because you care about process improvement. Chances are the area of process improvement you care most about is predictability. Remember that being predictable is not completely about making forecasts. The bigger part of predictability is operating a system that behaves in a way we expect it to. By designing and operating a system that follows the assumptions set forth by Little's Law, we will get just that: a process that behaves the way we expect it to. That means we will have controlled the things we can control, and the interventions we take to make things better will result in outcomes more closely aligned with our expectations. That is to say, if you know how Little's Law works, then you know how flow works. And if you know how flow works, then you know how value delivery works.
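For quick reference, here is the law in the two forms this series has discussed, stated informally (see the readings above for the precise assumptions):

$$L = \lambda W$$

where L is the average number of items in the system, λ is the average arrival rate, and W is the average time an item spends in the system. The flow version, stated over a finite time window, is:

$$\text{Average Cycle Time} = \frac{\text{Average WIP}}{\text{Average Throughput}}$$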
I hope you have enjoyed this series and would welcome any comments or feedback you may have. Thanks for going on this learning journey with me!

Explore all entries in this series:

1. When an Equation Isn't Equal
2. A (Very) Brief History of Little's Law
3. The Two Faces of Little's Law
4. One Law. Two Equations
5. It's Always the Assumptions
6. The Most Important Metric of Little's Law Isn't In the Equation
7. How NOT to use Little's Law
8. Other Myths About Little's Law
9. Little's Law - Why You Should Care (this article)

  • What's the Tallest Mountain On Earth?

If, like most everyone else, you answered "Mount Everest," then you are not quite right. But you are not quite wrong, either. The real answer has to do with a concept I wrote about in an earlier blog post.

Scientists can all objectively agree where mountains "finish". That is, it's extremely hard to argue about where a mountain "peaks". But when measuring, we know that "finished" is only half the battle. Agreeing where a mountain "starts" is a whole other conversation altogether, and not nearly as straightforward as it may sound. For example, more than half of the Mauna Kea volcano in Hawaii is underwater. Only 4,205 meters of the whole mountain is above sea level. But if we measure from the base to the summit of Mauna Kea, it is 10,211 meters, about 15% taller than Everest's 8,848 meters. If you only want to talk about mountains on land, then, base-to-summit, Denali in Alaska (5,900m) is actually taller than Everest base-to-summit (4,650m).

So why does Everest get the crown? The reason is that most scientists choose to start their measurements of mountain heights from a concept known as sea level. The problem with sea level, as anyone who has studied geography knows, is that the sea ain't so level. Different densities in the earth's makeup at different locations cause different gravitational pulls, resulting in "hills and valleys" of sea level across the planet (the European Space Agency has an outstanding visualization of this). Add to that tides, storms, wind, and a bulge around the equator due to the earth's rotation, and there is no one true level for the sea. Scientists cheat to solve this problem by calculating a "mean" (arithmetic mean, or average) sea level. This "average" sea level represents the zero starting point from which all land mountains are measured (cue the "Flaw of Averages"). You might ask: why don't we choose a more rigorous starting point, like the center of the earth? Remember that bulge around the equator I just alluded to? The earth itself is not quite spherical, and the distance from its center to the equator is longer than the distance from its center to either the north or south pole. In case you were wondering, if we were to measure from the center of the earth, then Mount Chimborazo in Ecuador would win.

It seems that geologists fall prey to the same syndrome that afflicts most Agile methodologies. A bias toward defining only when something is "done" ignores half of the equation, and the crucial half at that. What's more, you have Agilists out there who actively rant against any notion of a defined "start" or "ready". What I hope to have proven here is that, in many instances, deciding where to start can be a much more difficult (and usually much more important) problem to solve, depending on what question you are trying to answer. At the risk of repeating myself: a metric is a measurement, and any measurement contains BOTH a start point AND a finish point. Therefore, begin your flow data journey by defining the start and end points in your process. Then consider updating those definitions as you collect data and as your understanding of your context evolves. Anything else is just theatre.

References

1. PBS.org, "Be Smart", Season 10, Episode 9, 08/10/2022
2. The European Space Agency, https://www.esa.int/

  • Applying Flow Metrics for Scrum

Are you using ActionableAgile™ in a Scrum context? Well, good news! Our friends at ProKanban.org have just published a class called "Applying Flow Metrics for Scrum." While the class is technically tool agnostic, you will learn much about how to get the most out of ActionableAgile™ while using Scrum. To learn more, please visit https://prokanban.org/applying-flow-metrics-for-scrum/ Happy Forecasting!

  • All Models Are Wrong. Some Are Random.

Disclaimer: This post is for those who really like to geek out on the inner workings of Monte Carlo simulations. If you are not interested in the inner workings of these simulations, hopefully you will find our other blog posts more to your liking!

Have you ever wondered why we implement Monte Carlo simulations (MCS) the way we do in ActionableAgile™️ Analytics (AA)? Before we get too deep into answering that question, it is worthwhile to take a step back and talk about the one big assumption all Monte Carlo simulations in AA make: that the future we are trying to predict roughly looks like the past we have data for. For example, in North America, is it reasonable to use December's data to forecast what can be done in January? Maybe not. In Europe, can we use August's data to predict what can be done in September? Again, probably not. The trick, then, is to find a time period in the past that we believe will accurately reflect the future we want to forecast. If you don't account for this assumption, then any Monte Carlo simulation you run will be invalid.

The big assumption: the future we are trying to predict roughly looks like the past we have data for.

Let's say we do account for this assumption, and we have a set of historical data that we are confident to plug into our simulation. The way AA works, then, is to say that ANY day in the past data can look like ANY day in the future that we are trying to forecast. So we randomly sample data from a day in the past (we treat each day in the past as equally likely) and assign that data value to a day in the future. We do this sampling thousands of times to understand the risk associated with all the outcomes that show up in our MCS results (there is a minimal sketch of this idea in code at the end of this post).

But let's think about this for a second. We are assigning a random day in the past to a random day in the future. Doesn't that violate the big assumption we just talked about? In other words, if any day from the past can look like any day in the future, then we could presumably (and almost certainly do) use data from a past Monday and assign it to a future Saturday. Or we use data from a past Sunday and assign it to a future Wednesday. Surely, Mondays in the past don't look like Saturdays in the future, and Sundays in the past don't look like Wednesdays in the future, right? Doesn't this mean that we should refine our sampling algorithm and make it a bit more sophisticated in order to eliminate these obvious mistakes? That is, shouldn't we have an algorithm that only assigns past Mondays to future Mondays and past Sundays to future Sundays? Or even just assign past weekdays to future weekdays and past weekends to future weekends?

Well, Prateek Singh did just that when he tried different sampling algorithms for different simulations, and the results may surprise you. I highly encourage you to read his blog, as it is the more scientific justification for why we use the sampling algorithm that we do in AA. I don't want to ruin the surprise for you, but (spoiler alert) with AA, we chose the best one.

Happy Forecasting!

P.S. For a much more robust treatment of the actual MCS algorithm, please see my book "When Will It Be Done?" or my self-paced video class on Metrics and Forecasting in the 55 Degrees Community.

  • In God We Trust. All Others Bring Data.

Before proceeding, it would be worth reviewing Julia's excellent posts on the four basic metrics of flow: Work Item Age, Cycle Time, Throughput, and WIP. The definitions are great but are, unfortunately, meaningless unless we know what data we need to capture to calculate them. In terms of data collection, this is where our harping on you to define started and finished points will finally pay off. Take a timestamp when a work item crosses your started point, and take another timestamp when that same work item crosses your finished point. Do that for every work item that flows through your process. That's it.

To calculate any or all of the basic metrics of flow, the only data you need is the timestamp for when an item started and the timestamp for when an item finished.

Even better, if you are using some type of work item tracking tool to help your team, then most likely your tool is already collecting all of this data for you. The downside of using a tracking tool, though, is that you may not be able to rely on any out-of-the-box metrics calculations it may give you. It is one of the great secrets of the universe as to why many Agile tools cannot calculate flow metrics properly, but, for the most part, they cannot. Luckily for you, that's what this blog post is all about. To properly calculate each of the metrics from the data, do as follows.

WIP

WIP is the count of all work items that have a started timestamp but not a finished timestamp for a given time period. That last part is a bit difficult for people to grasp. Although technically WIP is an instantaneous metric (at any time you could count all of the work items in your process to calculate WIP), it is usually more helpful to talk about WIP over some time unit: days, weeks, Sprints, etc. Our strong recommendation (and this is going to be our strong recommendation for all of these metrics) is that you track WIP per day. Thus, if we want to know what our WIP was for a given day, we just count all the work items that had started but not finished by that date. For example, if work items 3, 4, and 5 had all started before January 5th but none of them had finished by that day, then our WIP on January 5th would be 3.
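In code, the WIP count might look like this minimal sketch (made-up dates; we also assume, as one reasonable convention, that an item still counts as in progress on its finish day, matching the "+ 1" Cycle Time convention below):

```python
from datetime import date

# (started, finished) timestamps; finished=None means still in progress
items = [
    (date(2016, 1, 1), date(2016, 1, 3)),
    (date(2016, 1, 2), date(2016, 1, 6)),
    (date(2016, 1, 3), None),
    (date(2016, 1, 6), None),
    (date(2016, 1, 4), date(2016, 1, 9)),
]

def wip_on(day):
    # started on or before `day` and not finished before `day`
    return sum(1 for started, finished in items
               if started <= day and (finished is None or finished >= day))

print(wip_on(date(2016, 1, 5)))  # -> 3
```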
Cycle Time

Cycle Time equals the finished date minus the started date plus one (CT = FD - SD + 1). If you are wondering where the "+ 1" comes from, it is because we count every day in which the item is worked on as part of the total. When a work item starts and finishes on the same day, we would never say that it took zero time to complete. So we add one, effectively rounding the partial day up to a full day. What about items that don't start and finish on the same day? Let's say an item starts on January 1st and finishes on January 2nd. The above Cycle Time definition gives an answer of two days (2 - 1 + 1 = 2). We think this is a reasonable, realistic outcome. From the customer's perspective, if we communicate a Cycle Time of one day, they can realistically expect to receive their item the same day. If we tell them two days, they can realistically expect to receive their item the next day, and so on.

You might be concerned that the above calculation is too biased toward measuring Cycle Time in days. In reality, you can substitute whatever notion of "time" is relevant for your context (that is why, up until now, we have kept saying track a "timestamp" and not a "date"). Maybe weeks are more relevant for your specific situation. Or hours. Or even Sprints. [For Scrum, if you wanted to measure Cycle Time in terms of Sprints, the calculation would just be Finished Sprint - Started Sprint + 1 (assuming work items cross Sprint boundaries in your context).] The point here is that this calculation is valid for all contexts. However, as with WIP, our very strong recommendation is to calculate Cycle Time in terms of days. The reasons are too numerous to get into here, so when starting out, calculate Cycle Time in days and experiment with other time units later should you feel you need them (our guess is you won't).

Work Item Age

Work Item Age equals the current date minus the started date plus one (Age = CD - SD + 1). The "plus one" argument is the same as for Cycle Time above. Our apologies, but you will never have a work item with an Age of zero days. Again, our strong recommendation is to track Age in days.

Throughput

Let's look at a different set of data to make our Throughput calculation example a bit clearer. To calculate Throughput, begin by noting the earliest date that any item was completed and the latest date that any item was completed, and enumerate every date in between. Then, for each date, simply count the number of items that finished on that exact date. In our example data (the finished dates are listed in the code sketch below), we had a Throughput of 1 item on 03/01/2016, 0 items the next day, 2 items the third day, and 2 items the last day. Note the Throughput of zero on 03/02/2016: nothing was finished that day, and that zero is part of the data.

As stated above, you can choose whatever time unit you want to calculate Throughput. If you are using Scrum, your first inclination might be to calculate Throughput per Sprint: "we got 14 work items done in the last Sprint". Let us very strongly advise against that and recommend instead that you measure Throughput in terms of days. It would be a book in itself to explain why, but let us offer two quick justifications: (1) using days will give you much better flexibility and granularity when you start doing things like Monte Carlo simulation; and (2) using consistent units across all of your metrics will save you a lot of headaches. So if you are tracking WIP, Cycle Time, and Age in days, you will make your life a whole lot simpler if you track Throughput in days too. For Scrum, you can easily derive Throughput per Sprint from this same daily data if that still matters to you.
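Here is the same kind of sketch for the other three metrics, using the "+ 1" convention and the example Throughput data from above:

```python
from datetime import date, timedelta

def cycle_time_days(started, finished):
    # finished date minus started date plus one
    return (finished - started).days + 1

def age_days(started, today):
    # current date minus started date plus one
    return (today - started).days + 1

# finished dates for the Throughput example above (one entry per item)
finished_dates = [date(2016, 3, 1),
                  date(2016, 3, 3), date(2016, 3, 3),
                  date(2016, 3, 4), date(2016, 3, 4)]

def daily_throughput(finished_dates):
    # one count per calendar day from first to last completion,
    # explicitly keeping the zero-throughput days
    day, last = min(finished_dates), max(finished_dates)
    counts = []
    while day <= last:
        counts.append((day, finished_dates.count(day)))
        day += timedelta(days=1)
    return counts

print(cycle_time_days(date(2016, 1, 1), date(2016, 1, 2)))  # -> 2
print(age_days(date(2016, 3, 1), date(2016, 3, 4)))         # -> 4
for day, n in daily_throughput(finished_dates):
    print(day, n)   # 2016-03-02 shows 0: nothing finished that day
```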
Randomness

We've saved the most difficult part for last. You now know how to calculate the four basic metrics of flow at the individual work item level. Further, all of these calculations are deterministic. That is, if we start a work item on Monday and finish it a few days later on Thursday, then we know that the work item had a Cycle Time of *exactly* four days. But what if someone asks us what our overall process Cycle Time is? Or our overall process Throughput? How do we answer those questions? Our guess is you immediately see the problem. If, say, we look at our team's Cycle Times for the past six weeks, we will see that work items finished in a wide range of times. Some in one day, some in five days, some in more than 14 days, etc.

In short, there is no single deterministic answer to the question, "What is our process Cycle Time?" Stated slightly differently, your process Cycle Time is not a unique number; rather, it is a distribution of possible values. That's because your process Cycle Time is really what's known as a random variable. [By the way, we've only been talking about Cycle Time in this section for illustrative purposes, but each of the basic metrics of flow (WIP, Cycle Time, Age, Throughput) is a random variable.] What random variables are and why you should care is one of those topics that is way beyond the scope of this post. What you do need to know is that your process is dominated by uncertainty and risk, which means all the flow metrics you track will reflect that uncertainty and risk. Further, that uncertainty and risk will show up as randomness in all of your flow metric calculations. How variation impacts the interpretation of flow metrics, and how it impacts any action taken to improve your process, will be the topic of a blog series coming later this year. For now, what you need to know is that the randomness in your process is what makes it stochastic. You don't necessarily need to understand what stochastic means, but you should understand that all stochastic processes behave according to certain "laws". One such law you may have heard of before...
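To see what "a distribution of possible values" looks like, here is a tiny illustration (the Cycle Times are invented):

```python
from collections import Counter

# one Cycle Time (in days) per finished item; same process, many answers
cycle_times = [1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 9, 14, 21]

for days, count in sorted(Counter(cycle_times).items()):
    print(f"{days:>2} days: {'#' * count}")
```

Asking "what is our process Cycle Time?" against data like this has no single answer; the honest response is the whole shape, or a percentile drawn from it.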

  • The Four-Letter Word That Begins With F

Many of our planned future posts will refer to a concept known as Flow. For as much as Flow is talked about in Lean-Agile circles, there really aren't many reliable definitions of what Flow actually is. Our inspiration for the definition we will use going forward is the definition of Flow found in the Kanban Guide (and by inspiration, I mean the document that we will shamelessly steal from).

What Is Flow?

The whole reason for the existence of your current job (team) is to deliver value for your customers/stakeholders. Value, however, doesn't just magically appear. Constant work must be done to turn product ideas into tangible customer value. The sum total of the activities needed to turn an idea into something concrete is called a process. Whether you know it or not, you and your team have built a value delivery process. That process may be explicit or implicit, but it exists. An understanding of your process is fundamental to an understanding of Flow. Once your process is established, Flow is simply defined as the movement of potential value through that process.

Flow: the movement of potential value through a given process.

Maybe you've heard the other name for a process: workflow. There is a reason it is called workFLOW: for any process, what really matters is the flow of work. Note: in future posts, I will often use the words "process", "workflow", and "system" interchangeably. I will try my best to indicate a difference between these when a difference is warranted. For most contexts, however, any difference among these words is negligible, so they can safely be used synonymously.

The reason you should care about Flow is that your ability to achieve Flow in your process will dictate how effective, efficient, and predictable you are as a team at delivering customer value, which, as we stated at the beginning, is the whole reason you are here.

Setting Up To Measure Flow

As important as Flow is as a concept, it can really only act as a guide for improvements if you can measure it. Thankfully for us (and thankfully for ActionableAgile™️), Flow comes with a set of basic metrics that will give us such insight. But before we can talk about what metrics to use, we first need to talk about what must be in place in order to calculate those metrics. All metrics are measurements, and all measurements have the same two things in common: a start point and an end point. Measuring Flow is no different. To measure Flow, we must know what it means for work to have started in our process and what it means for work to have finished in our process. The decision around started and finished may seem trivial, but we can assure you it is not. How to set started and finished points in your process is beyond the scope of this post, but there are some decent references out there if you need help.

It gets a little more complicated than that, because it is perfectly allowed in Flow to have more than one started point and more than one finished point within a given workflow. Maybe you want to measure both from when a customer asks for an item and from when the team starts working on the item. Or maybe a team considers an item finished when it has been reviewed by Product Management, put into production, validated by the end user, or whatever. Any and all permutations of started and finished in your process are allowed. Not only are the different permutations allowed, we encourage you to experiment with different started and finished points in your process to better understand your context. You will quickly learn that changing the definition of started/finished allows you to answer very different questions about flow in your process. If all goes well, expanding your started/finished points will get you down the path toward true business agility. The point is, as you will see, that the questions Little's Law will help you answer depend completely on your choices for started and finished.

Conclusion

Assuming you care about optimizing the value-delivery capabilities of your process, you should care about Flow. And it doesn't matter whether you are using Scrum, SAFe, Kanban, or something else for value delivery: you should still care about Flow. Therefore, if you haven't already, you need to sit down and decide, for your process, what it means for work to have started and what it means for work to have finished. All other Flow conversations will depend on those boundary decisions. Once defined, the movement of potential value between your defined started and finished points is what is called Flow. The concept of movement is of crucial importance, because the last thing we want as a team is to start a whole bunch of work that never gets finished. That is the antithesis of value delivery. What's more, as we do our work, our customers are constantly going to be asking (whether we like it or not) questions like "how long?" or "how many?", questions that require an understanding of movement to answer. That's where Flow Metrics come in...

  • Want to succeed? Start by accepting uncertainty.

In business, the quest for predictability is universal. We all want to grab hold of the reality we face every day and, somehow, bend it to our will. When we are surprised by the unexpected, we often assume that we have failed in some way. We have this underlying belief that if we just do our job well enough, we can prevent any and all surprises and that success will follow. Unfortunately, that's nothing more than a nice fairy tale. In real life, we have no hope of overcoming all uncertainty. Zero. Instead, we must begin to accept it and learn how to operate, even thrive, within it. But we can't do any of that if we don't try to understand it.

Stephen Bungay, the author of The Art of Action, helps us understand the shape of our uncertainty by expressing it via something he calls "the three gaps." These gaps are places where uncertainty shows up:

The Knowledge Gap: the difference between what we'd like to know and what we actually know. This gap occurs when you're trying to plan, but often only manifests when you are trying to execute the plan. We often try to combat this gap not by doing something different than before, but by doubling down on what we've already done; in other words, we assume we just didn't do it well enough the first time. So, instead of accepting that we may never know everything we need to know up front, we double down on detailed plans and estimates.

The Alignment Gap: the difference between what we want people to do and what they actually do. This gap occurs during execution. As with the knowledge gap, we try to fix it by doubling down, in this case on providing more detailed instructions and requirements. We are quite arrogant in our thinking and believe that if we can just be more thoughtful and more detailed, we can prevent all surprises.

The Effects Gap: the difference between what we expect our actions to achieve and what they actually achieve. This gap occurs during verification. We don't often consider that, in a complex environment, you can do the same thing over and over and get different outcomes despite your best efforts. Instead, we think we just didn't have enough controls. We are stubborn to the point of stupidity and continue to think that we can manage our way to certainty.

(Cartoon by Jim Benson: a most excellent example of how we like to try to overcome the gaps!)

The ugly truth

By reinforcing the idea that you can control your way to certainty, you aren't teaching people how to be resilient and how to operate despite what comes their way. This means that when surprises do sneak through, people will be woefully unprepared and, more often than not, efforts will veer toward blaming the responsible party instead of figuring out a way forward. The ugly truth that we all must face is that, in complex environments like software development, healthcare, social work, product development, marketing, and more, we will never defeat uncertainty. To be honest, we wouldn't like what would happen if we did. It would be the end of learning and innovation.

So, what now, then?

While we have to accept that some uncertainty will always remain, we can try to tackle the low-hanging fruit. For instance, we don't abandon all research or planning. We just accept that things may not always go to plan and have an idea of how we'd react when uncertainty pops up.
When I managed the web development team for NBA.com, we would run drills for our major events like the Draft and walk through scenarios like "What happens if a team drafts someone we don't have a bio for?" and "What will we do if our stats engine breaks down?" We accepted that, because we can't control everything, the skill we really need to survive in business is resiliency. We needed to learn to anticipate, react, and recover. We learned how to think about resiliency and build it into our work processes, not just our technical systems.

So, if you are finally getting to the point of accepting that you can't conquer uncertainty, the next mission is to begin to build the skills of resiliency. There is no comprehensive list of ways to become resilient, but I'll share a few things I use while working in an uncertain environment.

The Agile Manifesto

The Agile Manifesto is an excellent embrace of uncertainty and a pushback against our natural tendencies when reacting to Bungay's three gaps. While there is a place for plans, documentation, contracts, and processes, they are not the only, or even the most important, things we need to excel in uncertain environments.

The Scrum Framework

One of the biggest benefits of the Scrum framework is that Sprints act as a forcing function to work in small batches. If you work in a smaller batch, you notice the gaps more quickly and, if you fall prey to those natural tendencies to double down on instruction and planning, you'll do so in a smaller way and, hopefully, learn more before the next piece of work starts. This is a perfect example of accepting uncertainty and trying to limit the potential damage.

Kanban and Limiting Work-In-Progress

Adopting Kanban forces you to limit the amount of work going on at one time. This has a similar benefit to the Scrum framework, but at an even more granular level. While Scrum limits how much you start in a Sprint, Kanban limits how much you have in progress at any one time. Thinking of your work-in-progress in economic terms can really help you understand the value of limiting it. My friend and generally awesome person, Cat Swetel, once said that you can think of your work falling into three buckets:

- Options: work not started
- Liabilities: work in progress
- Assets: work already finished

It is in our liabilities that we are subject to the effects of uncertainty. If we limit the potential impact to a manageable amount, we limit the possible damage and, more often than not, we turn liabilities into assets faster.

Probabilistic forecasting

Often, even though we know there are many potential outcomes, we still provide a single forecast. A better way, one that makes the existing uncertainty visible, is to give forecasts AND state the likelihood that each forecast will come true. You're very familiar with this whether you realize it or not: every weather forecast you've seen uses this approach. Doing this is easy. You can use your historical data to forecast probable outcomes with cycle time scatterplots (for single items) or Monte Carlo simulations (for a group of items), as in the small sketch below.
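For example, here is a minimal sketch of a probabilistic single-item forecast built from historical cycle times (numbers invented), the same idea a cycle time scatterplot's percentile lines express:

```python
import math

# historical cycle times in days, one per finished item (illustrative)
cycle_times = sorted([1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 9, 14, 21])

def forecast(likelihood_pct):
    # nearest-rank percentile: the smallest historical cycle time that at
    # least likelihood_pct% of past items finished within
    idx = math.ceil(len(cycle_times) * likelihood_pct / 100) - 1
    return cycle_times[idx]

for pct in (50, 85, 95):
    print(f"{pct}% likely to finish within {forecast(pct)} days")
```

Instead of one date, you hand over a set of options: fast but risky, or slower and safe, just like the weather report.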
Wrapping it up

No matter what you choose to do going forward, by far the most important choice you can make is to accept the inevitability of uncertainty and to commit to learning how to thrive in the face of it. Sharing stories of your successes and failures helps both yourself and those who hear or read them to widen their perspectives. Having to tell the story makes you synthesize the information and draw conclusions, so that you understand what happened well enough to tell the tale. And, while your context will likely not perfectly match that of your readers or listeners, it may provide them perspective and information that they can incorporate into their own hypotheses.

  • Contingency Planning During a Pandemic

Our customer, Redox, tells their story. This is part one of Redox's four-part "Contingency Planning During a Pandemic" series. It was authored by Morgen Donovan and is republished here with permission. Check out part two, part three, and part four on the Redox blog.

I had the good fortune of already working from home for a distributed company when COVID-19 struck. As the virus began to impact life as we knew it, I looked around and saw how other companies struggled to transition to a remote workforce. While all Redoxers felt the abrupt effects of COVID-19 social changes, because we were already working from home we had a leg up in responding to the impending crisis. We quickly began shifting our priorities to adjust to what may very well become a new normal. As we did this, we also realized we wanted to share our experiences, as other companies surely are, or soon will be, attempting similar efforts. My hope is that this series, from a remote-first perspective, will help some of you who are facing the same challenges. The focus will be to expose our process and results clearly, and whenever possible, my co-writers and I will offer suggestions for alternative tools or methods to replace some of our own.

Planning for the unexpected

As cities and states began going into lockdown and the world had to start thinking about what to do when schools closed or loved ones became sick, we realized we didn't have a solid system in place for contingency planning around individual workload. Who would take over for me if I was out unexpectedly for two weeks, and would they know what to do? we wondered. And how could we track the capacity demands of each team and watch as they shifted throughout this crisis, so we would know where to focus our attention and provide help?

There was a lot to do; we would need an army. So, in a joint effort between the People Operations and Operations teams, Dietke Fowler, our director of Business Operations, and I enlisted the help of every HR recruiter plus our Knowledge Manager to get the ball rolling. We packaged all of the projected work into a single project, which we launched on March 26 and concluded on April 15. We kicked the project off with some goals already identified. We needed to:

1. Implement a method for maintaining a current understanding of our capacity
2. Address potential capacity imbalances by:
   - Identifying who has capacity and can help out in other areas
   - Identifying who does not have capacity and needs help
   - Shifting help to where it's needed
3. Build resilience into teams through:
   - Ensuring backups are in place for all work functions
   - Documenting and storing our knowledge and processes in an easily accessible place

From these goals, we built out some key deliverables we thought would get us to our desired end state. The deliverables evolved as we worked through our project, some becoming absorbed by others or outscoped, and some new needs arising. Here's where we landed:

- Launch weekly capacity pulse checks that can be reported out by team and role.
- Create a virtual Help Desk that allows Redoxers to post their needs, help-wanted style, and lets other Redoxers offer up their help.
- Implement individual contingency plans that identify who is working on what, backups for that work, and whether the work does or does not have process documentation already in place.

In this post, I'll talk about our first deliverable, the work we did to achieve it, and its outcome. Each following post will dive into the remaining deliverables.
Deliverable №1: Implement regular capacity pulse checks

The work

We wanted to send out surveys every Monday, and because they had a three-day turnaround, they had to be short, with a low barrier to entry. We needed to be able to filter our results down by team and generate data on capacity, considering factors both internal and external to work, as well as cognitive load. We used a Google form* and kept it simple. The first component of the survey checked stress levels on a 5-point agree/disagree scale:

- My own stress, worry, or general cognitive load is negatively affecting my ability to complete my work.
- Situations external to Redox, such as caring for children or other family needs, are negatively affecting my work capacity.
- Situations internal to Redox, such as disruptions to my team or other Redoxer outages, are negatively affecting my work capacity.

The second component asked Redoxers to compare their current work capacity (availability and presence) to their capacity prior to the COVID-19 pandemic on a scale of 0 to 10 (where 0 represents "no capacity" and 10 represents "still at 100% capacity"). Finally, we asked Redoxers to compare the current demand for their time against the demand prior to the COVID-19 pandemic on a scale of 1 to 5 (where 1 represents "strongly decreased" and 5 represents "strongly increased").

We launched our first survey on March 30th via email to every Redoxer, with a follow-up post in our "important-only" Slack channel. That same evening, our CEO, Luke Bonney, asked all of us in his video address to complete the survey. This crucial piece of messaging effectively told everyone that this high-priority effort required everyone's participation.

As the results came in, we fed the data into Chartio, linking the email addresses of respondents from the Google form with demographic data from our human resources information system (department, subteam, coach, location). The resulting dataset served up various charts on a dashboard that displayed the survey results graphically as well as in plain language. The dashboard could be filtered down by team, coach (manager), and status of response (responded or not responded). While we used SQL in Chartio to join the survey data with our team directory, you could also use visual dataset builders or SQL in other data visualization tools, or even VLOOKUPs in MS Excel or Google Sheets.
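As an illustration, here is roughly what that join looks like in code (a sketch with hypothetical column names and emails, with pandas standing in for Chartio's SQL; the same shape of join works in any of the tools mentioned above):

```python
import pandas as pd

# survey responses keyed by email (values invented for illustration)
responses = pd.DataFrame({
    "email": ["a@redox.com", "b@redox.com"],
    "capacity_0_to_10": [7, 4],
    "demand_1_to_5": [3, 5],
})

# demographic data from the HR information system
directory = pd.DataFrame({
    "email": ["a@redox.com", "b@redox.com", "c@redox.com"],
    "department": ["Engineering", "Support", "Sales"],
    "coach": ["D", "E", "F"],
})

# left-join the directory onto responses so every answer carries its team
# info; joining the other way around would surface who has NOT yet responded
joined = responses.merge(directory, on="email", how="left")
print(joined.groupby("department")["capacity_0_to_10"].mean())
```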
The result

By Thursday of that first survey week, we had achieved 89% participation companywide, and we were pleasantly surprised by this level of engagement. The results were surprising as well: they told us that the average impact of COVID-19 on our capacity wasn't as drastic as we had originally estimated. Redoxers were reporting that situations internal and external to Redox were having a moderate impact on their work capacity, and their general cognitive load was adding an additional moderate level of impact. On the upside, there was only a little more work to do on average. For the first week, these results were reassuring and provided us with a good baseline to track against in the upcoming weeks and months as the pandemic's effects continued to evolve. In addition to company-wide metrics, we also created visualizations that compared departments across the company. This allowed us to pay special attention to the departments feeling the most impact.

The plan was to survey our team weekly, but Redoxers reported that they were already experiencing some survey fatigue (there'd been a few other surveys launched in the previous two weeks). After some consideration, we decided that biweekly surveys would get the job done just as well, and our CEO again communicated this new plan to the company. We are incredibly fortunate to have our Knowledge Manager, Jessica, working tirelessly to get each of us documenting our knowledge and work processes. Going forward, she will review the biweekly results and reach out to the teams most in need to determine if she or others can help them reach their knowledge management goals. For teams needing help beyond that scope, we created a brand-new resource to make it easy to ask for and receive assistance: the Help Desk, which Becky will talk about in our next post in this series, so stay tuned! As we all continue to work through a challenge most of us have never experienced in our lifetimes, let's keep up this culture of educating each other and continue to lean on one another for support.

*Note from Morgen @ Redox: We chose a Google form over our regular survey platform, Culture Amp, as we didn't want to overuse and diminish the future effectiveness of that powerful tool. Your organization might prefer a different way to manage surveying.

  • Uploading Data to ActionableAgile

ActionableAgile connects directly to Jira, Trello, and Azure DevOps Services so that you can connect to your data with zero hassle. However, if you want to analyze data from other systems, or simply don't want to use the built-in data loaders, you can manually upload your data! The options available to you depend on which version of ActionableAgile you're using. In the table below, click the links to read detailed specifications about file formats, get tips, and see templates and example files for each available option. No matter how you load data into ActionableAgile, you can feel secure knowing that we never store your data. All of your important information stays in your browser.

  • What is Throughput?

Throughput is the total number of items completed per unit of time. You might have a throughput of 2 per day, 10 per week, or even 17 per Sprint. Whatever your preferred time unit, this flow metric helps you understand how quickly you finish work. That understanding is critical for forecasting how long it will take to complete a collection of work items.

How do you calculate Throughput?

There are two things you need to define to calculate Throughput:

- Your finish line: the point in your process at which items are considered complete
- Your time unit: day, week, month, etc.

Defining the Finish Line

In order to define a finish line, you have to understand your process. A Kanban board is a great way to visualize your process and make sure that it is clearly understood by all. Take a look at the board below:

(Image: a Kanban board with a clearly marked finish line)

This team has defined the Done column as their finish line. So, any items that move into the Done column are counted as Throughput.

Choosing a Time Unit

You can use any time unit you desire to measure Throughput: per day, per week, per month, per Sprint, you get the idea. If you aren't sure, use days, as they are easy to measure and other time units are made up of days (unless you need to go smaller than that).

Why should I care about Throughput?

Looking at your Throughput allows you to analyze how consistently you deliver value. Consistency of Throughput, and how it compares to the rate at which you start work, is one indicator of how stable your process is. Perhaps the most common use for the Throughput metric is providing forecasts for completing multiple work items. You can use Cycle Time to forecast for single items, but you need a rate metric like Throughput to provide forecasts for groups of work items.

How do you use Throughput to forecast?

Traditionally, people use their Throughput to determine an average rate at which work is finished and then divide the total work by that average. However, forecasting based on averages will produce average results, so we don't suggest you do that. Fortunately, you can instead use Monte Carlo simulations that use your Throughput data to simulate probable outcomes based on the variation found there. It's a much more accurate, not to mention risk-aware, way to deliver forecasts (the short sketch at the end of this post contrasts the two approaches). Read more about Monte Carlo simulations and forecasting.

Getting started

Teams often start looking at Throughput around the same time they begin looking at Cycle Time, WIP, and Work Item Age. Focusing on building stability in these key flow metrics is a good start. The more stable your basic metrics are, the fewer outliers your forecasts have to account for, and the more your forecasts are perceived as acceptable and, most importantly, accurate. Interested in tracking flow metrics like this one? Try out ActionableAgile for free and reach out if you're interested in joining our customer success program!
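Here is a small sketch (made-up Throughput data) contrasting the average-based approach with a simple Monte Carlo forecast; it illustrates the idea rather than any particular tool's implementation:

```python
import math
import random

random.seed(1)

daily_throughput = [0, 2, 1, 0, 3, 1, 0, 0, 2, 4, 1, 0, 2, 1]  # items per day
backlog = 20   # items we want to finish

# average-based: divide and hope
average = sum(daily_throughput) / len(daily_throughput)
print(f"average-based forecast: about {math.ceil(backlog / average)} days")

# Monte Carlo: simulate many possible futures, then read off percentiles
days_needed = []
for _ in range(10_000):
    done = days = 0
    while done < backlog:
        done += random.choice(daily_throughput)  # sample a random past day
        days += 1
    days_needed.append(days)

days_needed.sort()
for pct in (50, 85, 95):
    idx = math.ceil(len(days_needed) * pct / 100) - 1
    print(f"{pct}% of simulated futures finished within {days_needed[idx]} days")
```

The average gives one number with unknown risk; the simulation gives you a range and lets you pick how much risk you can live with.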

  • Designing your board to focus on flow

We all want to design our Kanban boards to enable a consistent, smooth flow of work all the way from start to finish. Unfortunately, that capability doesn't come naturally, and the way we visualize our work processes can unknowingly cause dysfunction. In this post, we share some important things to consider (at least from our experience) when you want to design your board to enable great flow.

Focus on the work, not the workers

Do the column names of your board sound like the titles of the people on your team? Do you, or others on your team, often work in just one column of your board? If so, your board design might be built for resource efficiency instead of flow efficiency. Resource efficiency is making sure that every individual person (in this case) is always busy, often at the cost of completing work. Flow efficiency is a focus on making sure that the path is easy and clear for work moving through a workflow. A team that focuses on flow efficiency encourages collective ownership of work from start to finish. A team that focuses on resource efficiency tosses work over proverbial walls to the next column and moves on to something else without looking back (or forward, in this case). A desire for flow efficiency doesn't mean that everyone has to be assigned to everything all the time. Even when there is a primary assignee, the team is collectively responsible for getting work completed, and team members may need to work on items in multiple columns at one time. These practices prioritize team performance over individual performance.

Purposefully handle cyclical activities

If your board is built with too much granularity, team members can be confused about what to do when they hit the cyclical portions of your process. Let's tackle the most talked-about scenario: testing! Imagine the team has a column on their board called Executing followed by a Validating column. The team's policy is that an item moves from Executing to Validating when it is ready for external review. The team recognizes that bugs will be found during validation from time to time. If the team doesn't talk about how to handle the discovery of bugs while work is in the Validating column, they may think that the best course of action is to move the item back to the Executing column (and repeat this process as many times as it takes). Unfortunately, moving cards backwards causes us to lose visibility and data. (Read about the impact of backwards flow on data.) Instead, the team can take steps to better accommodate this expected cycle. One option is to keep all cards in Validating until all found bugs are fixed. If they want to separate the initial validation from subsequent fixes, they can create a new column to the right of Validating called Fixing. Now, when bugs are found, cards move there until all of them are fixed. The good thing about both of these choices is that data is preserved. We can still measure the original time spent in Executing AND tell how long it takes to complete an item from when it first entered Validating.

Separate activity from wait

In order to really focus on improving the flow of work through your system, you will want to visualize when work is actively being worked on versus when work is ready and waiting for attention.
This allows you to see important things like:

- bottlenecks in different stages of the workflow, revealed by large queues directly to the left
- unreasonable amounts of work sitting in active columns, masking additional wait time
- whether you should focus on reducing wait or on speeding up certain activities

As you might have realized from this post so far, column names matter! A good practice is to use verbs for columns representing active work, so they are not confused with waiting columns. Even better, break down your active columns into Doing and Done sub-columns! Visually separating active work from waiting periods allows you to use the Flow Efficiency chart more effectively.

Banish Blocked and On Hold columns

Our final tip for this post is to say goodbye to Blocked or On Hold columns, or any other column with the same purpose. While these columns do help us see wait, they do not represent a stage of a lifecycle that happens in a predictable place. Rather, they are fleeting attributes of work residing in a particular stage of its lifecycle. We use columns to display the lifecycle stages of work, so we need to use other visual cues to denote fleeting attributes like these. If you try to treat these attributes like a workflow stage by using columns, you are artificially picking a place in the workflow for them to reside. That wreaks all kinds of havoc! First, when you move an item into this kind of column, you can no longer see where in the workflow you experienced the wait. Did it get blocked in this stage or that one? Additionally, teams often use these pseudo-stages as a place to stash work so they can get around their WIP limits. This results in cycle times becoming longer and longer. Finally, when you are ready to put the item back into the real workflow, you have all of the data issues caused by backwards movement. A better practice is to visualize this kind of wait in the place where it is incurred. Yes, that may make it more painful, but... that's kind of the point. It can be the forcing function that causes you to face the problems instead of hiding them in forgotten columns.

Add columns representing handoffs

Handoffs are expected parts of the workflow, and when items come back from a handoff, they can move directly to the next column on the board. In this way, handoffs are different from blocked or on-hold situations, and it is a very good practice to represent them using columns.

The Key Takeaway

In the end, there are no board police to decide what is right and wrong. There is no external governing body to keep you from making certain decisions for your board. Instead, it is up to you to understand the consequences of the decisions that you make and how they impact your ability to meet your goals. Take a moment to consider your board against the points in this blog post and determine whether you can better enable flow!
