Archive for the ‘TDD’ Category

Dell EMC Dojo at Hopkinton! Reviewing what we covered, and what we learned


Hey again everyone! We’re writing today to talk about a few topics we covered during our time at the Dell EMC Dojo Days in Hopkinton. We met a lot of great minds and took in plenty of input on how we work, and we gave passersby plenty of insight into what we do and how we do it.

Firstly, Brian Roche and Megan Murawski debuted the famous “Transformation Talk”. We normally give this presentation in preparation for an engagement with another team, but in this case it was given to open our methodology up to criticism and circulation across the rest of the company. In it we cover pairing and why it’s important to our process, why we use TDD (Test Driven Development) (and why you should too!), and our weekly meetings, including Retros, IPMs, and Feedback, to name a few. We heard plenty of great ideas and questions, as usual, and twenty minutes over time we realized we couldn’t get Brian off the stage.

Xuebin He eventually got Brian off stage for his own talk on CI/CD (Continuous Integration and Continuous Deployment). As one of the developers at the Dojo, Xuebin could be a bit more technical and cover some of the programming practices and tools we use to achieve this. Concourse is our tool for running our beautifully constructed tests, alongside standard mocking design patterns and the code quality that TDD produces.

We picked up again on Tuesday at 11:45 to talk about why a PaaS exists and why it’s important. That talk, given by yours truly, focused on some of the common technical roadblocks that keep developers, customers, and managers from working efficiently, as well as the ways a PaaS can solve those problems to build a better business.

To containerize applications for a PaaS, we need to start with basics like “What is a 12-factor application, and what’s a container?” Thinh Nguyen stepped in and gave a great description of how we use these guiding principles to make our application environment better for us and our customers.

Throughout all of our talks, we worked away on two pair stations very carefully brought from our lair in Cambridge. We gave away some free swag, some free candy, and raffled off some super giveaways. We thank everyone involved in preparing and executing these few days for their hard work. We also want to give a huge thanks to everyone who attended our talks (rambles) and participated in some mind-expanding conversations.

Finally, I want to close with a few notes. We always enjoy a fresh perspective. If you had more to say, or if you missed us and want to start a conversation, leave a comment in the comment section! If you’d rather not comment here, drop me a line by email. We’d love to hear from you.

Until next time, remember: Cloud Foundry, Open Source, The Way. #DellEMCDojo.

From Predicting Agile to Agile Predicting: The need for #NoEstimates fades when no estimate need be heavy-weight to be accurate.


Luke Woydziak
Director of Engineering at EMC

Often Agile teams shun the notion of predicting a date. After all, for a team to make a prediction seems anti-Agile, and there is even a #NoEstimates movement afoot. However, in business, and in my experience, giving some ballpark idea of when something will finish is not only beneficial but often required.

Estimating accurately is surprisingly difficult, as items have varying levels of complexity, certainty, size, and dependencies. At the #EMCDojo, we believe in providing a realistic date to a project’s various stakeholders. I want to show you a couple of methods we use to do that.

Armed with these methods, I think you will find that the need for #NoEstimates fades when no estimate need be heavy-weight to be accurate.

Let’s get started. We follow the weekly XP practice of an Iteration Planning Meeting (IPM), and this is where the journey begins. During this meeting, we look at items on the backlog and do an engineering analysis to scope them, add detail, validate assumptions, and address open questions. After that analysis, we Roman Vote on the point value in less than 30 seconds per item.

A simple scale of three point values is commonly used to point each story (as a team we selected Fibonacci, but in my experience, as long as the scale is held constant, the discrepancies are negligible). We enter these values into Pivotal Tracker, which is a great tool for Agile projects in that it automates the velocity calculation.

The week progresses with pairs of engineers pulling the top-priority stories off the backlog and working them to completion (with automated unit, integration, and system tests). The Technical Product Owner for the team accepts or rejects each story depending on whether the actual functionality matches the desired behavior. During this time, Pivotal Tracker automatically tracks and calculates velocity and various other metrics.

Below is an analysis screenshot from a live project in progress with the CPT team. You can view the project at https://www.pivotaltracker.com/n/projects/1518687

[Screenshot: Pivotal Tracker analytics for the project]

As you can see, the Tracker-reported velocity was 6 points during the month of March 2016.

[Screenshot: velocity panel]

While the velocity is interesting, the real value comes when you observe it over time. Again, Pivotal Tracker will help you with this: by clicking on the velocity report, you can view a more detailed analysis.

[Screenshot: detailed velocity report]

The most interesting data point on this screen is the standard deviation (σ). We use this measurement to establish a high confidence interval (CI) and a reasonable CI for a given prediction (assuming a normal distribution, which is arguably a very big assumption). If you are performing sophisticated business modeling, these are the numbers you can plug in to model with confidence.

[Screenshot: velocity report showing the standard deviation]
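
You don’t have to compute σ yourself (Tracker reports it), but the arithmetic is easy to sanity-check. Here is a minimal Java sketch using made-up weekly point totals; note it computes the population deviation, and Tracker’s exact formula may differ:

  import java.util.Arrays;

  public class VelocityStats {
    public static void main(String[] args) {
      // Hypothetical weekly point totals for five iterations.
      double[] weekly = {4, 6, 8, 5, 7};

      double mean = Arrays.stream(weekly).average().orElse(0);

      // Population standard deviation of the weekly velocities.
      double variance = Arrays.stream(weekly)
          .map(v -> (v - mean) * (v - mean))
          .average().orElse(0);
      double sigma = Math.sqrt(variance);

      System.out.printf("velocity = %.1f, sigma = %.1f%n", mean, sigma);
      // prints: velocity = 6.0, sigma = 1.4
    }
  }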

One word of caution:

Velocity is extremely susceptible to gaming, so I would strongly recommend against using it to compare teams. It is far more useful as a predictive element than as a diagnostic one.

In our case, we extensively use both epics and releases. You can view and export both as spreadsheets.

Releases:

[Screenshot: Releases panel]

Epics:

[Screenshot: Epics panel]

When you have exported the data, you will get a spreadsheet like ours of releases (we use releases, but you can apply the same analysis to epics):

[Screenshot: exported releases spreadsheet]

Zooming in on the projected column shows us the projected completion date as calculated by Pivotal Tracker:

[Screenshot: projected completion date column]

This is a raw prediction; you’ll need to adjust it with the standard deviation (σ) and some calculations. First, let’s calculate the σ adjustment to get a reasonable CI:

  1. Find the velocity (6)
  2. Find σ (2.2)
  3. Calculate the multiplication factor:
    1. v / (v – σ) = adjustment
    2. 6 / (6-2.2) = 1.579

You can apply this adjustment to the number of days from now until the raw prediction to get a reasonable-CI projected date. For example (we took this measurement on 3/10/16): the first release, “V1 Volume Manager Complete”, was projected to finish 18 days from then, on 3/28/16. 18 days multiplied by the adjustment is ~28 days; add 28 days to 3/10/16 and you get 4/7/16.

You can apply the same to get a high CI by using 2σ:

  1. Find the velocity (6)
  2. Find the 2σ (4.4)
  3. Calculate the multiplication factor:
    1. v / (v – 2σ) = adjustment
    2. 6 / (6-4.4) = 3.75

Now you can begin to see how certainty affects the calculation. The breakdown is as follows:

  • Projected: 3/28/2016
  • Reasonable: 4/7/2016
  • High: 5/16/2016

You can add the calculations to an Excel spreadsheet with the following formula: =(((Projected-Now)*Adjustment)+Now)
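
If you prefer code to a spreadsheet, here is a minimal Java sketch of the same calculation, using the velocity, σ, and dates from this post. It truncates fractional days, which matches the worked example above:

  import java.time.LocalDate;
  import java.time.temporal.ChronoUnit;

  public class AdjustedProjection {
    // Adjustment factor: v / (v - k * sigma), per the steps above.
    static double adjustment(double v, double sigma, int k) {
      return v / (v - k * sigma);
    }

    // Mirrors the spreadsheet formula =(((Projected-Now)*Adjustment)+Now).
    static LocalDate adjust(LocalDate now, LocalDate projected, double factor) {
      long rawDays = ChronoUnit.DAYS.between(now, projected); // 18 days here
      return now.plusDays((long) (rawDays * factor));         // truncates
    }

    public static void main(String[] args) {
      LocalDate now = LocalDate.of(2016, 3, 10);
      LocalDate projected = LocalDate.of(2016, 3, 28); // Tracker's raw date
      double v = 6.0, sigma = 2.2;

      System.out.println(adjust(now, projected, adjustment(v, sigma, 1)));
      // reasonable CI: 2016-04-07
      System.out.println(adjust(now, projected, adjustment(v, sigma, 2)));
      // high CI: 2016-05-16
    }
  }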

[Screenshot: spreadsheet with the adjustment formula added]

Zooming in, you can see the new columns:

[Screenshot: the new adjusted-date columns]

Now you have some options when communicating predictions and planning non-development but related activities. Assuming a normal distribution, the σ and 2σ adjustments represent 68% and 95% confidence respectively. I deliberately used “reasonable” and “high” in place of those numbers, as I think there is reasonable doubt regarding the shape of the distribution. Of course, you could manually track those numbers and improve the measurement quality. Also, note that this is very project-specific.

Let’s see how this all worked out. The first release “V1 Volume Manager Complete” was renamed to “CF Summit Demo Day” and completed on 5/26/2016 versus the high confidence date of 5/16/2016.

[Screenshot: final completion date in Tracker]

We were 10 days off the high estimate. So what happened? Well, as with all projects, we discovered additional items and stories along the way, and most likely the distribution is not normal.

So we learn that velocity alone will only get you so far, and it doesn’t account for the rate of incoming work; but for most projects a two-week variance is within the ballpark. We used this method to plan the demonstration of Legacy Application support functionality we presented at Cloud Foundry Summit 2016.

Likewise, you can use this method to make lightweight and reasonably accurate estimates and go from worrying about predicting Agile to making Agile predictions.

Can we do better?

In my next post, I’ll discuss another method that, while a bit more complicated and labor-intensive, can decrease the variance and is flexible enough to apply easily across projects.

Introducing Ginkgo4J: Ginkgo for Java


Paul Warren

Having been an engineer using Java since pretty much version 1, and having practiced TDD for the best part of 10 years, one thing that always bothered me was the lack of structure and context that Java testing frameworks like JUnit provide. On larger code bases, with many developers, this problem can become quite acute. When pair programming, I have sometimes even had to say to my pair:

“Give me 15 minutes to figure out what this test is actually doing!”

The fact of the matter is that a method name alone is simply not enough to convey the given, when, then semantics present in all tests.

I recently made a switch in job roles. Whilst I stayed with EMC, I left the Documentum division (ECD), where I had been for 17 years, and moved to the EMC Dojo & Cloud Platform Team, whose remit is to help EMC make the transition to the 3rd platform. As a result, I am now based in the Pivotal office in San Francisco, I pair program, and I work in Golang.

Golang has a testing framework called Ginkgo, created by one of Pivotal’s VPs, Onsi Fakhouri. It mirrors frameworks from other languages, like RSpec in Ruby. All of these frameworks provide a very simple DSL that the developer can use to build up a descriptive context with closures. Having practiced this for the last six months, I personally find this way of writing tests very useful, perhaps most of all when I pick up an existing test and try to modify it.

Java 8 includes its version of closures, called lambdas. Whilst they aren’t quite as flexible as some of their equivalents in other languages (all variable access must be to effectively final variables, for example), they are sufficient to build an equivalent testing DSL. So that’s what I decided to do with Ginkgo4J, pronounced “Ginkgo for Java”.

So let’s take a quick look at how it works.

In your Java 8 project, add a new test class called BookTests.java as follows:


  package com.github.paulcwarren.ginkgo4j.examples;

  import static com.github.paulcwarren.ginkgo4j.Ginkgo4jDSL.*;
  import org.junit.runner.RunWith;
  import com.github.paulcwarren.ginkgo4j.Ginkgo4jRunner;

  @RunWith(Ginkgo4jRunner.class)
  public class BookTests {
    {
      Describe("A describe", () -> {
      });
    }
  }

Let’s break this down:

  • The imports Ginkgo4jDSL.* and Ginkgo4jRunner add Ginkgo4J’s DSL and its JUnit runner. The JUnit runner allows this style of test to be run in any IDE that supports JUnit (basically all of them) and in build tools such as Ant, Maven, and Gradle.
  • We add a top-level Describe container using Ginkgo4J’s Describe(String title, ExecutableBlock block) method. The top-level braces {} trick (a Java instance initializer) lets us evaluate the Describe without having to wrap it in a method.
  • The 2nd argument to the Describe, () -> {}, is a lambda expression providing the implementation of the ExecutableBlock interface.

The lambda expression passed as the 2nd argument to Describe will contain our specs, so let’s add some now to test our Book class.


  // Requires the Hamcrest static imports:
  //   import static org.hamcrest.MatcherAssert.assertThat;
  //   import static org.hamcrest.CoreMatchers.is;

  private Book longBook;
  private Book shortBook;
  {
    Describe("Book", () -> {
      BeforeEach(() -> {
        longBook = new Book("Les Miserables", "Victor Hugo", 1488);
        shortBook = new Book("Fox In Socks", "Dr. Seuss", 24);
      });

      Context("Categorizing book length", () -> {
        Context("With more than 300 pages", () -> {
          It("should be a novel", () -> {
            assertThat(longBook.categoryByLength(), is("NOVEL"));
          });
        });

        Context("With fewer than 300 pages", () -> {
          It("should be a novella", () -> {
            assertThat(shortBook.categoryByLength(), is("NOVELLA"));
          });
        });
      });
    });
  }

Let’s break this down:

  • Ginkgo4J makes extensive use of lambdas to allow you to build descriptive test suites.
    You should make use of Describe and Context containers to expressively organize the behavior of your code.
  • You can use BeforeEach to set up state for your specs and It to specify a single spec.
    To share state between a BeforeEach and an It, you must use member variables.
  • In this case we use Hamcrest’s assertThat syntax to make expectations on the categoryByLength() method.

Assuming a Book model with this behavior, running this JUnit test in Eclipse (or IntelliJ) will yield:
[Screenshot: all specs passing in the JUnit view]

Success!
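
For completeness: the Book model itself isn’t shown in this post. A minimal sketch that satisfies these specs (the 300-page threshold and category names are taken straight from the Contexts above) might look like:

  public class Book {
    private final String title;
    private final String author;
    private final int pages;

    public Book(String title, String author, int pages) {
      this.title = title;
      this.author = author;
      this.pages = pages;
    }

    // Categorize by page count: more than 300 pages is a novel.
    public String categoryByLength() {
      return pages > 300 ? "NOVEL" : "NOVELLA";
    }
  }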

Focussed Specs

It is often convenient, when developing, to run just a subset of specs. Ginkgo4J allows you to focus individual specs or whole containers of specs programmatically by adding an F in front of your Describe, Context, or It:


FDescribe("some behavior", () -> { ... })
FContext("some scenario", () -> { ... })
FIt("some assertion", () -> { ... })

Doing so instructs Ginkgo4J to run only those specs. To run all specs again, remove the Fs.

Parallel Specs

Ginkgo4J has support for running specs in parallel. It does this by spawning separate threads and dividing the specs evenly among them. Parallelism is on by default and uses 4 threads. If you wish to change this, add the @Ginkgo4jConfiguration annotation to your test class:

@Ginkgo4jConfiguration(threads=1)

which will instruct Ginkgo4J to run a single thread.

Spring Support

Ginkgo4J also offers native support for Spring. To test a Spring application context, simply replace @RunWith(Ginkgo4jRunner.class) with @RunWith(Ginkgo4jSpringRunner.class) and initialize your test class’s Spring application context exactly as you normally would when using Spring’s SpringJUnit4ClassRunner.


  @RunWith(Ginkgo4jSpringRunner.class)
  @SpringApplicationConfiguration(classes = Ginkgo4jSpringApplication.class)
  public class Ginkgo4jSpringApplicationTests {

    @Autowired
    HelloService helloService;
    {
      Describe("Spring integration", () -> {
        It("should be able to use spring beans", () -> {
          assertThat(helloService, is(not(nullValue())));
        });

        Context("hello service", () -> {
          It("should say hello", () -> {
            assertThat(helloService.sayHello("World"), is("Hello World!"));
          });
        });
      });
    }

    @Test
    public void noop() {
    }
  }

The noop @Test method is required because Spring’s JUnit runner requires at least one test method.

Trying it out for yourself

Please feel free to try it out on your Java projects. For a Maven project, add:

<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>ginkgo4j</artifactId>
    <version>1.0.0</version>
</dependency>

or for a Gradle project add:

compile 'com.github.paulcwarren:ginkgo4j:1.0.0'

For other build systems, see the Ginkgo4J project page.

TDD at the #EMCDojo: Test Driven Development, by Brian Roche, Sr Director of Engineering


Brian Roche

Brian Roche - Senior Director, Cloud Platform Team at Dell EMC. He leads the team from the #EMCDojo in Cambridge, Massachusetts, USA.

I work on a team where we practice pair programming and TDD every day. Pairing alone isn’t the key to our success; another important element is Test Driven Development, or TDD.

Traditional Engineering Teams

Most engineering teams today make changes to their code, and often those changes break a whole bunch of ‘stuff’. If you’re lucky, you find out which functionality regressed before pushing the code to production. But more often than not, we’re not that lucky: the breaking change isn’t caught because automation does not exist, so the code gets into the hands of the customer long after it was written and results in an escalation. This leads to frustration and increased customer dissatisfaction. And it doesn’t just lead to frustration and inefficiencies for our customers; it results in increased TCO for everybody involved. The cost of working this way is ENORMOUS.

(more…)
