Posts Tagged ‘agile’

From Predicting Agile to Agile Predicting

The need for #NoEstimates fades when no estimate need be heavy-weight to be accurate.

Luke Woydziak, Director of Engineering at EMC

Agile teams often shun the notion of predicting a date. After all, for a team to make a prediction seems anti-Agile; there is even a #NoEstimates movement afoot. However, in business, and in my experience, giving some ballpark idea of when something will finish is not only beneficial but often required.

Estimating accurately is surprisingly difficult, as items have varying levels of complexity, certainty, size, and dependencies. At the #EMCDojo, we believe in providing a realistic date to a project’s various stakeholders. I want to show you a couple of the methods we use to do that.

Armed with these methods, I think you will find that the need for #NoEstimates fades when no estimate need be heavy-weight to be accurate.

Let’s get started. We follow the weekly XP practice of an Iteration Planning Meeting, and this is where the journey begins. During this meeting, we look at items on the backlog and do an engineering analysis to scope them, add detail, validate assumptions, and address open questions. After that analysis, we Roman Vote on the point value in less than 30 seconds per item.

A common three-point scale is often used to point stories (as a team we selected the Fibonacci scale, but in my experience, as long as the scale is held constant, the discrepancies are negligible). We enter these values into Pivotal Tracker, which is a great tool for Agile projects in that it automates the velocity calculation.

The week progresses with pairs of engineers pulling the top-priority stories off the backlog and working them to completion (with automated unit, integration, and system tests). The team’s Technical Product Owner accepts or rejects each story depending on whether the actual functionality matches the desired behavior. Throughout, Pivotal Tracker automatically tracks and calculates velocity and various other metrics.
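To make the velocity number concrete, here is a minimal sketch of the idea behind it: a trailing average of the points accepted per iteration. The velocity() helper and the three-iteration window are my own illustrative assumptions; Tracker’s actual averaging window is configurable.

# Illustrative sketch only: velocity as a trailing average of accepted
# story points. The three-iteration window is an assumption; Tracker's
# actual averaging window is configurable.

def velocity(points_per_iteration, window=3):
    """Average the points accepted over the last `window` iterations."""
    recent = points_per_iteration[-window:]
    return sum(recent) / len(recent)

print(velocity([8, 5, 6, 7]))  # -> 6.0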

Below is an analysis screenshot from a live project in progress with the CPT team. You can view the project at https://www.pivotaltracker.com/n/projects/1518687

[Screenshot: Pivotal Tracker analytics]

As you can see, the Tracker-reported velocity was 6 points during the month of March 2016.

[Screenshot: velocity panel]

While the velocity is interesting, the real value comes when you observe it over time. Again, Pivotal Tracker helps you here: by clicking on the velocity report, you can view a more detailed analysis.

[Screenshot: detailed velocity report]

The most interesting data point on this screen is the standard deviation (σ). We use this measurement to establish a high-confidence interval (CI) and a reasonable CI for a given prediction (assuming a normal distribution, which is arguably a very big assumption). If you are performing sophisticated business modeling, these are the numbers you can plug in to model with confidence.
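For intuition, here is a small sketch, assuming σ is simply the standard deviation of the per-iteration point totals (the numbers below are invented; Tracker computes its own figure):

# Invented numbers for illustration; Tracker reports its own sigma.
from statistics import mean, pstdev

iteration_points = [9, 4, 8, 3, 6]   # hypothetical weekly point totals
v = mean(iteration_points)           # velocity: 6.0
sigma = pstdev(iteration_points)     # ~2.3

# Under a normality assumption, these bands correspond to roughly
# 68% (1 sigma) and 95% (2 sigma) confidence.
print(f"reasonable (1 sigma) band: {v - sigma:.1f} to {v + sigma:.1f} points/week")
print(f"high (2 sigma) band: {v - 2 * sigma:.1f} to {v + 2 * sigma:.1f} points/week")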

[Screenshot: standard deviation report]

One word of caution: velocity is extremely susceptible to gaming, so I would strongly recommend against using it to compare across teams. It is far more useful as a predictive element than as a diagnostic one.

In our case, we extensively use both epics and releases. You can view and export both as spreadsheets.

Releases:

[Screenshot: releases list]

Epics:

[Screenshot: epics list]

When you have exported the data, you will get a spreadsheet like ours of releases (we use releases here, but you can apply the same analysis to epics):

[Screenshot: exported release spreadsheet]

Zooming in on the projected column shows us the projected completion date as calculated by Pivotal Tracker:

[Screenshot: projected-date column]

This is a raw prediction; you’ll need to adjust it with the standard deviation (σ) and some calculations. First, let’s calculate the σ adjustment to get a reasonable CI:

  1. Find the velocity (6)
  2. Find σ (2.2)
  3. Calculate the multiplication factor:
    1. v / (v – σ) = adjustment
    2. 6 / (6-2.2) = 1.579

You can apply this adjustment to the duration in days from now until the raw prediction to get a reasonable-CI projected date. For example (we took this measurement on 3/10/16): the first release, “V1 Volume Manager Complete,” was projected to finish 18 days from then, on 3/28/16. 18 days multiplied by the adjustment is ~28 days; add 28 days to 3/10/16 and you get 4/7/16.
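The same arithmetic in a few lines of Python, using the numbers above (the day count is rounded down, which matches the dates in this post):

# Reasonable-CI date: stretch the raw projection by v / (v - sigma).
from datetime import date, timedelta

now = date(2016, 3, 10)
projected = date(2016, 3, 28)   # Tracker's raw projection
v, sigma = 6, 2.2

adjustment = v / (v - sigma)                                 # ~1.579
days_out = (projected - now).days                            # 18
adjusted = now + timedelta(days=int(days_out * adjustment))  # ~28 days out
print(adjusted)                                              # 2016-04-07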

You can apply the same to get a high CI by using 2σ:

  1. Find the velocity (6)
  2. Find the 2σ (4.4)
  3. Calculate the multiplication factor:
    1. v / (v – 2σ) = adjustment
    2. 6 / (6-4.4) = 3.75

Now you can begin to see how certainty affects the calculation. The breakdown is as follows:

  • Projected: 3/28/2016
  • Reasonable: 4/7/2016
  • High: 5/16/2016

You can add the calculations to an Excel spreadsheet with the following formula: “=(((Projected-Now)*Adjustment)+Now)”
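Equivalently, here is that spreadsheet formula as a small Python function, with the σ multiplier as a parameter so the same code produces both the reasonable (1σ) and high (2σ) dates. This is a sketch; the adjusted_date name and signature are my own.

# ((Projected - Now) * Adjustment) + Now, where the adjustment is
# v / (v - k * sigma) and k is 1 (reasonable) or 2 (high).
from datetime import date, timedelta

def adjusted_date(projected, now, velocity, sigma, k=1):
    adjustment = velocity / (velocity - k * sigma)
    return now + timedelta(days=int((projected - now).days * adjustment))

now, projected = date(2016, 3, 10), date(2016, 3, 28)
print(adjusted_date(projected, now, 6, 2.2, k=1))  # 2016-04-07
print(adjusted_date(projected, now, 6, 2.2, k=2))  # 2016-05-16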

[Screenshot: spreadsheet with adjustment columns]

Zooming in, you can see the new columns:

[Screenshot: adjusted-date columns]

Now you have some options when communicating predictions and planning non-development but related activities. With the assumption that this is a normal distribution, the σ and 2σ adjustments represent 68% and 95% confidence respectively. I deliberately used “reasonable” and “high” in place of those numbers, as I think there is reasonable doubt regarding the shape of the distribution. Of course, you could manually track those numbers and increase measurement quality. Also, note that this is very project-specific.

Let’s see how this all worked out. The first release “V1 Volume Manager Complete” was renamed to “CF Summit Demo Day” and completed on 5/26/2016 versus the high confidence date of 5/16/2016.

[Screenshot: final finish date in Tracker]

We were 10 days past the high estimate. So what happened? Well, as with all projects, we discovered additional items/stories, and most likely it’s not a normal distribution.

So we learn that velocity alone will only get you so far and doesn’t account for the incoming rate of new stories, but for most projects a two-week variance is within the ballpark. We used this method to plan a demonstration of Legacy Application support functionality at Cloud Foundry Summit 2016.

Likewise, you can use this method to make lightweight and reasonably accurate estimates and go from worrying about predicting Agile to making Agile predictions.

Can we do better?

In my next post, I’ll discuss another method that, while a bit more complicated and labor-intensive, can decrease the variance and is flexible enough to apply easily across projects.

What If We Do A Little Less: Watch Dan Ward & The Simplicity Cycle

#EMCDojo Meetup with Agile Expert and Author

“Simplicity is not the point; the point is not to reduce complexity. Goodness is the point. Goodness and value.”

Last week we were lucky enough to have Dan Ward come to the #EMCDojo to discuss the principles of his new book The Simplicity Cycle: A Field Guide to Making Things Better Without Making Them Worse. Watch the meetup below! If you don’t have time to watch right this second, read the Q&A for a teaser.

Q: How do you sort through customers’ different definitions of simplicity? Isn’t one customer’s complexity another customer’s value?

A: Different customers are going to have different definitions of goodness. Different customers will also have different levels of tolerance for complexity. For some customers, a certain level of complexity makes a tool less good and harder to use; for a super user, that same level of complexity is still okay. So part of the challenge with a large user community is to understand what goodness means. How do we mitigate that when you have users who want a lot of complexity and users who tolerate complexity less well? When you optimize a tool for the least capable user, you improve its performance for all users, including the most capable ones. Simple tools tend to outperform their specs, and they satisfy a broader range of interests and user sets than the designers and engineers might have anticipated. Which isn’t to say we shouldn’t provide a more complicated alternative for those users who want it, for whom goodness and complexity are directly proportional. And you only find that out if you’re engaged with your customers to get a sense of how they value complexity and simplicity. You have to talk to your customers to figure out what goodness means for them.

Q: What about modularity?

A: Modularity is a great strategy for mitigating and managing complexity. A modular design is one where you can plug and play, and let users plug in simpler or more complex modules. A modular design approach gives your users that optionality: the chance to engage and customize without complexifying things for the simpler users. Modularity is a simplifier and a goodifier: it improves the quality and flexibility of your design and makes it robustly flexible. Complexity tends to increase fragility, and modularity is a way to combat that.

Q: What about machine learning and AI? Does it change the vector position on the goodness and complexity graph?

A: Automation and machine learning take some of the burden of processing complexity off the user and put it below the screen, below the surface. There’s a man who coined the term “the law of conservation of complexity”: complexity is neither created nor destroyed; we either move it above or below the surface, and we let the automation or the users take care of it. I don’t know if I agree with that, but he has a point. There are ways to hide complexity and not expose users to it, and automation and machine learning are a great way to simplify the user’s experience while still allowing the architecture to have some level of complexity. But all that being said, we still want to look for opportunities or instances where, below the skin, we’ve complexified things to the point where they become heavy and hard to debug and maintain. The back end matters too; it’s not just about the UX or UI.

Join our meetup group to hear about our other events!

TDD at the #EMCDojo

Test Driven Development, Brian Roche, Sr Director of Engineering

Brian Roche - Senior Director and Leader of Dell EMC’s Cloud Platform Team, based in Cambridge, Massachusetts, USA at the #EMCDojo.

I work on a team where we practice pair programming and TDD every day. Pairing alone isn’t the key to our success; another important element is Test Driven Development, or TDD.

Traditional Engineering Teams

Most engineering teams today make changes to their code, and often these changes break a whole bunch of ‘stuff’. If you’re lucky, you find out which functionality regressed before pushing the code to production. But more often than not, we’re not that lucky. The breaking change isn’t caught because test automation does not exist, so the code gets into the hands of the customer long after it’s been written and results in an escalation. This leads to frustration and increased customer dissatisfaction. And it doesn’t just lead to frustration and inefficiencies for our customers; it results in increased TCO for everybody involved. The cost of working this way is ENORMOUS.

(more…)

At the New EMC Dojo, App Developers Learn by Doing

EMC-Pivotal Dojo, Brian Roche, Sr Director of Engineering

Brian Roche - Senior Director and Leader of Dell EMC’s Cloud Platform Team, based in Cambridge, Massachusetts, USA at the #EMCDojo.

There is a secular shift at play within the IT industry. Traditional markets are eroding rapidly. As organizations seek to innovate using new digital business mediums, many are moving their infrastructures to the cloud. A major challenge they find is that building cloud apps not only means a whole new set of tools, but also a new mindset. To truly embrace any new way of doing things requires a commitment to learn by doing, which is why we are proud to announce the official opening of the new EMC – Pivotal Dojo in Cambridge, Massachusetts.

The Dojo is “the place of the way,” the new way that we develop software in today’s world. It’s where we contribute to open source Cloud Foundry. It’s where we work with customers to build software.  It’s where we practice lean software development and continuously innovate to solve customer needs.

Building Software for the Cloud

The cloud is a relatively new delivery model, but customers still need to maintain the same rigorous quality, security and up-time SLAs expected in the on-premise world.  Cloud Foundry represents the best-in-class technology that customers can trust to run their businesses in the cloud.  More than just technology, it is a development platform supported by lean practices and methodologies that lead to high quality and continuous rapid innovation.

(more…)
