Archive for the ‘Uncategorized’ Category

Kubernetes and UDP Routing

Hey guys, Gary here.

With all of the fun stuff happening around Kubernetes and Cloud Foundry, we decided to play around with it ourselves! One of the (few) capabilities we don’t have with Cloud Foundry that we can get with Kubernetes is UDP routing.

To learn more about why UDP routing doesn’t work with containers in the Diego runtime yet (it will), check out Onsi’s proposal for the feature.

UDP routing: why would you use it? In short, for applications that continually post data where any individual message isn’t critical, or would soon be replaced by a more recent copy anyway, UDP packets can be a less intensive alternative to the TCP routing solution. Or, if you’re really hardcore, you could implement your own delivery verification on top of UDP, but that would be a blog post in itself 🙂

Overall, setting up Kubernetes and getting it to expose ports was very simple. If you are reading this without any Kubernetes setup, go check out minikube. Even better, you could set up a GCP cluster, vSphere, or (gasp) AWS and follow along. The kubectl commands should be about the same in any case.

Once you’ve got your instance set up, check out our kube-udp-tennis repo on GitHub. We use this repo to store very simple Python scripts that accept environment variables for ports and either send or receive messages depending on which script we execute. We also baked these into a Dockerfile so that Kubernetes can reference an image on Docker Hub.

Before you worry about deploying your own Docker images, know that you don’t need to for this example. If you deploy the listener, add the service, and then deploy the server, you’ll have a working UDP connection, because everything references our existing images already on Docker Hub. Before I give you the commands, I want to explain what they do.

from /udp_listen:

This command applies the udplisten-deployment.yaml file, which gives the specification for our udp-listen application. We spec this out so we can extend it with the udp-listen service.
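Roughly, assuming kubectl is pointed at your cluster and you’re in the repo’s /udp_listen directory, it’s along the lines of:

    kubectl create -f udplisten-deployment.yaml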

This command applies the udplisten-service.yaml file which, once the udplisten deployment is live, lets us talk to the port through the service functionality in Kubernetes. Here’s the documentation for services.
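Again, roughly, from the same directory:

    kubectl create -f udplisten-service.yaml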

At this point, we will have the Kubernetes udplisten service running, and we will be ready to deploy our dummy application to talk to it.

from /udp_server:

This will deploy the udpserver application, which should start sending messages to the udplisten-service; you should see them in the logs of the listener’s pod.
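A sketch of that command, with the manifest name being my guess rather than a confirmed file name from the repo:

    kubectl create -f udpserver-deployment.yaml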

The way the udp-server.py application finds and sends to the udplisten-service is by leveraging Kubernetes’ service functionality. Basically, when we start Kubernetes services, we can find those services using environment variables. From the documentation:

For example, the Service "redis-master" which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11 produces the following environment variables:
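The Kubernetes services documentation lists those variables as:

    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379
    REDIS_MASTER_PORT=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
    REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
    REDIS_MASTER_PORT_6379_TCP_PORT=6379
    REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11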

We therefore look up udplistener_service_host and udplistener_service_port to communicate with the udplistener pods directly. Since we defined UDP as the protocol for traffic into the service, this works right out of the box!

Thanks for reading, everyone. As always, reach out to us on Twitter @DellEMCDojo, or to me specifically @garypwhitejr, or comment on the blog to give us some feedback. Let us know what you think!

Until next time,

Cloud Foundry, Open Source, The Way. #DellEMCDojo

TCP Routing and SSL: A Walkthrough Using Spring Boot

An incredible guest blog by Ben Dalby, Advisory Consultant at DellEMC

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

Walkthrough: Cloud Foundry TCP Routing and SSL

Guest Blog by Ben Dalby, Advisory Consultant (Applications and Big Data) at DellEMC

Use Cloud Foundry’s TCP routing feature to terminate SSL directly in your application

Introduction

A common security requirement for customers in regulated industries such as banking and healthcare is that all traffic should be secured end-to-end with SSL.

Prior to Pivotal Cloud Foundry 1.8, inbound SSL connections would always terminate on the Gorouter, and further encryption could only be achieved between the Gorouter and running applications by installing Pivotal’s IPsec Add-on.

With the introduction in version 1.8 of TCP routing, it is now possible to terminate SSL right at your application – and this article will walk you through a working example of a Spring Boot application that is secured with SSL in this way.

Prerequisites

PCF Dev version 0.23.0 or later
JDK 1.8 or later
Gradle 2.3+ or Maven 3.0+
git (tested on 2.10.1)
A Linux-like environment (you will need to change the file paths for the directory commands to work on Windows)

How to do it

Step 1 – Create a Spring Boot application

We’re going to be lazy here, and simply make a couple of small modifications to the Spring Boot Getting Started application:
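The starting point would look roughly like this, assuming you begin from the published Getting Started guide’s repository (the exact modifications from the original walkthrough aren’t reproduced here):

    git clone https://github.com/spring-guides/gs-spring-boot.git
    cd gs-spring-boot/complete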

Step 2 – Create an SSL certificate
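One common way to do this is with the JDK’s keytool, generating a self-signed certificate into a PKCS12 keystore. The alias, keystore name, and validity below are placeholders; the flags in the original walkthrough may have differed:

    keytool -genkey -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 \
      -keystore keystore.p12 -validity 3650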

Step 3 – Configure Spring Boot to use SSL and the new certificate

(You can also retrieve the application.properties shown below from here)
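A minimal application.properties for this setup looks something like the following; the keystore location, password, and alias are placeholders and should match whatever you used in Step 2:

    # assumes keystore.p12 has been copied into src/main/resources
    server.port=8080
    server.ssl.key-store=classpath:keystore.p12
    server.ssl.key-store-password=changeit
    server.ssl.key-store-type=PKCS12
    server.ssl.key-alias=tomcat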

Step 4 – Package the application
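Depending on whether you’re using Gradle or Maven, packaging is the usual:

    ./gradlew build
    # or, with Maven:
    mvn package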

Step 5 – Push the application to PCF Dev (use default org and space)
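A sketch of the push, assuming the PCF Dev default credentials and org/space; the app name and jar path are placeholders, so adjust them to your build output. The --no-route flag skips the default HTTP route, since a TCP route is mapped in the next step:

    # user/pass and pcfdev-org/pcfdev-space are the PCF Dev defaults
    cf login -a https://api.local.pcfdev.io --skip-ssl-validation \
      -u user -p pass -o pcfdev-org -s pcfdev-space
    # app name and jar path are placeholders
    cf push ssl-demo --no-route -p build/libs/gs-spring-boot-0.1.0.jar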

Step 6 – Create a TCP route and map it to your application
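With the TCP domain that ships with PCF Dev (tcp.local.pcfdev.io), mapping a route with a random port looks something like this; again, the app name is a placeholder:

    cf map-route ssl-demo tcp.local.pcfdev.io --random-port
    # note the port the CLI reports back -- you'll use it in the next step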

Step 7 – Verify you can now connect directly to your application over SSL

Browse to https://tcp.local.pcfdev.io:61015/ (substitute your own port after the colon).
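Or, from the command line, something like this (substituting your own port; -k is needed because the certificate is self-signed, and -v prints the certificate details):

    curl -kv https://tcp.local.pcfdev.io:61015/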

View the details of the certificate to verify that it is the one you just generated (note that the procedure for viewing certificate details has recently changed if you are using Chrome).

Further Reading

Enabling TCP Routing
http://docs.pivotal.io/pivotalcf/1-9/adminguide/enabling-tcp-routing.html

How to tell application containers (running Java apps) to trust self-signed certs or a private/internal CA https://discuss.pivotal.io/hc/en-us/articles/223454928-How-to-tell-application-containers-running-Java-apps-to-trust-self-signed-certs-or-a-private-internal-CA

Enable HTTPS in Spring Boot
https://drissamri.be/blog/java/enable-https-in-spring-boot/

Dell EMC Dojo at Hopkinton!

Reviewing what we covered, and what we learned

Hey again everyone! We’re writing today to talk about a few topics that we covered during our time at the Dell EMC Dojo Days in Hopkinton. We met with a lot of great minds and took plenty of input into how we work. We also gave passersby a lot of insight into what we do and how we work.

First, Brian Roche and Megan Murawski debuted the famous “Transformation Talk”. We normally give this presentation in preparation for an engagement with another team, but in this case it was given to open our methodology up to criticism and circulation across the rest of the company. In this presentation we cover pairing and why it’s important to our process, why we use TDD (Test Driven Development) (and why you should too!), and our weekly meetings, including Retros, IPMs, and Feedback, to name a few. We had plenty of great ideas and questions, as usual, and twenty minutes past our time we realized we still couldn’t get Brian off the stage.

Xuebin He eventually got Brian off stage for a talk of his own on CI/CD (Continuous Integration and Continuous Deployment). As one of the developers at the Dojo, Xuebin was able to get a bit more technical and cover some of the programming practices and tools we use to achieve this, including Concourse, the tool we use to run our beautifully constructed tests, along with standard mocking design patterns and the code quality that TDD produces.

We picked up again on Tuesday at 11:45 to talk about why a PaaS exists and why it’s important. That talk, given by yours truly, focused on some of the common technical roadblocks that keep developers, customers, and managers from working efficiently, as well as the ways a PaaS can solve those problems to build a better business.

To containerize applications for a PaaS, we first need to cover basics like “What is a 12-factor application, and what’s a container?” Thinh Nguyen stepped in and gave a great description of how we use these guiding principles while developing our application environment to be better for us and our customers.

Throughout all of our talks, we worked away on two pair stations very carefully brought from our lair in Cambridge. We gave away some free swag, some free candy, and raffled off some super giveaways. We thank everyone involved in preparing and executing these few days for their hard work. We also want to give a huge thanks to everyone who attended our talks (rambles) and participated in some mind-expanding conversations.

Finally, I want to close with a few notes. We always enjoy a fresh perspective. If you had more to say, or if you missed us and want to start a conversation, leave a comment in the comment section! If you’d rather not comment here, drop me a line by email. We’d love to hear from you.

Until next time, remember: Cloud Foundry, Open Source, The Way. #DellEMCDojo.

Building a healthy Concourse CI pipeline for Bosh deployed products

The Cloud Foundry Diego Persistence team recently spent a fair amount of time and effort building and refactoring the CI pipeline for our Ceph filesystem, volume driver, and service broker.  The end state from this exercise, while not perfect, is nonetheless pretty darn good:  It deploys Cloud Foundry, Diego, and a Cephfs cluster, along with our volume driver and service broker.  It runs our code through unit tests, certification tests, and acceptance tests.  It keeps our deployment up to date with the latest releases of Cloud Foundry and the latest development branch changes to Diego.  It does all of this with minimal rework or delay; changes in our driver/broker bosh release typically flow through the pipeline in about 10 minutes.

But our first attempt at creating the pipeline did not work very well or very quickly, so we thought it would be worth documenting our initial assumptions, what was wrong about them, and some of what we learned while fixing them.

Our First Stab at It

We started with a set of assumptions about what we could run quickly and what would run slowly, and we tried to organize our pipeline around those assumptions to make sure that the quick stuff didn’t get blocked by the slow stuff.

Assumptions:

  • Cephfs cluster deployment is slow–it requires us to apt-get a largish list of parts and then provision a cluster.  This can take 20-30 minutes.
  • Since cluster deployment is slow, and we share a bosh release for the cephfs bosh job and our driver and broker bosh jobs, we should only trigger cephfs deployment nightly when nobody is waiting–we shouldn’t trigger it when our bosh release is updated.
  • Redeploying Cephfs is not safe–to make sure that it stays in a clean state, we should undeploy it before deploying it again.
  • Cloud Foundry deployment is slow–we should not automatically pick up new CF releases because it might paralyze our pipeline during the workday.
  • The pipeline should clean up on failure–bad deployments of cephfs should get torn down automatically.

 

What We Eventually Learned

Our first pass at the pipeline (mostly) worked, but it was slow and inefficient.  Because we structured it to deploy some of the critical components nightly or on demand, and because we tore down the Ceph filesystem VM before redeploying it, any time we needed an update we had to wait a long time.  In the case of cephfs, we also had to create a shadow pipeline just for manually triggering cephfs redeployment.  It turned out that most of the assumptions above were wrong, so let’s take another look at those:

 

Bad Assumptions:

  • Cephfs cluster deployment is slow. This is only partially true.  Because we installed cephfs using apt-get, we were doing an end-run around Bosh package management, effectively ensuring that we would re-do work in our install script whether it was necessary or not.  We switched from apt-get to Bosh-managed Debian packages and that sped things up a lot.  Bosh caches packages and only fetches things that have actually changed.
  • We should only trigger cephfs deployment nightly or we will repeat slow cephfs deployments whenever code changes.  This is totally untrue.  Bosh is designed to detect changes from one version to the next, so when the broker job or the driver job changes, but cephfs hasn’t changed, deploying the cephfs job will result in a no-op for bosh.  
  • Redeploying Cephfs is not safe.  This might be partially true. In theory our ceph filesystem could get corrupted in ways that would cause the pipeline to keep failing, but treating this operation as unsafe is somewhat antithetical to cloud operations.  Bosh jobs should as much as possible be safe to redeploy without removing them.
  • Cloud Foundry deployment is slow.  This is usually not true.  When there are new releases of Cloud Foundry, they deploy incrementally just like other bosh deployments, so only the changed jobs result in deployment changes.  The real culprit in slow deployment times is an update to the bosh stemcell, which bosh needs to download before it can deploy.  To keep that from slowing down our pipeline during the workday, we created a “nightly stemcell” task in the pipeline that doesn’t do anything, but can only run at night.  Using the latest passed stemcell from that task, and setting the stemcell as a trigger in our deploy tasks, ensures that when there is a stemcell change our pipeline will pick it up at night and redeploy with it, and that we will never have to wait for a stemcell download during the day (a minimal sketch of this pattern appears after this list).

  • The pipeline should clean up on failure.  This is generally a bad practice.  It means that we have no way of diagnosing failures in the pipeline.  Teardown after failure also doesn’t restore the health of the pipeline unless the deployments in question are re-deployed afterwards, but in the case of a deployment error, that could easily result in a tight loop of deployment and undeployment, so we never did that.
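Here is a minimal sketch of that nightly stemcell pattern. The resource names, stemcell, timings, and job layout are placeholders rather than our actual pipeline configuration:

    resources:
    - name: nightly
      type: time
      source:
        start: 1:00 AM
        stop: 5:00 AM
        location: America/New_York
    - name: aws-stemcell
      type: bosh-io-stemcell
      source:
        name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent

    jobs:
    - name: nightly-stemcell
      plan:
      - get: nightly
        trigger: true
      # fetch the stemcell only when the nightly timer fires
      - get: aws-stemcell

    - name: deploy-cephfs
      serial_groups: [cephfs-deploy]
      plan:
      # only use stemcells that passed the nightly job, but trigger a redeploy
      # whenever a new one shows up there
      - get: aws-stemcell
        passed: [nightly-stemcell]
        trigger: true
      # ... bosh deployment task goes here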

Where We Ended Up


After we corrected all of our wrong assumptions, our pipeline is in much better shape:

  • Bosh deployments are incremental and frequent.  We pick up new releases as soon as they happen, and we re-test against them, so we get early warning of failures even when we didn’t make the breaking changes.
  • Our bosh job install scripts are as much as possible idempotent.  The only undeploy jobs we have in the pipeline are manually triggered.
  • We trigger slow stemcell downloads at night when nobody is working, and stick to the same stemcells during the day to avoid slow downloads.
  • Since we share the same bosh release for 3 different deployments (broker, driver, and file system) we trigger deployment of all 3 things whenever our bosh release changes.  Since Bosh is clever about not doing anything for unchanged jobs, this is a much easier approach than trying to manage separate versions of the bosh release for different jobs.
  • We use concourse serial groups to force serialization between the tasks that deploy things and the tasks that rely on those deployments.  Serial groups are far from perfect–they operate as a simple mutex with no read/write lock semantics–but for our purposes they proved to be good enough, and they are far easier than implementing our own locks.

The yaml for our current pipeline is here for reference.

Housekeeping

In addition to our nightly job to download stemcells, we also run a nightly task to clean up bosh releases by invoking bosh cleanup.  This is a very good idea–otherwise bosh keeps everything that’s been uploaded to it, which can quickly use up available disk space.
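The nightly task itself essentially boils down to running the cleanup command against the director (the Concourse wrapping around it is omitted here); with the bosh CLI of that era it’s:

    # assumes the task container has the bosh CLI installed and is already
    # targeted and logged in to the director
    bosh -n cleanup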

At some point in the future, we will probably want to add additional tasks to the pipeline to clean out our Amazon S3 buckets, but so far we haven’t done that.

Thanks

A special thanks to Connor Braa, who recently joined our team from the Diego team, where he did a great deal of Concourse wrangling.  Connor is responsible for providing us with most of the insights in this post.

Overview of GoLang with Xuebin He

Brian Roche

Brian Roche - Senior Director, Cloud Platform Team at Dell EMC. He is based in Cambridge, Massachusetts, USA at the #EMCDojo.

Join us tonight for a special Meetup to talk about GoLang.

Digital Transformation – Learn by doing

Brian Gallagher, President EMC Cloud Foundry Dojo

Last week we held the official opening of the EMC Cloud Foundry Dojo in Cambridge, Massachusetts. The term ‘dojo’ is a Japanese word that translates to ‘the place of the way’. In our dojo, software developers learn and contribute to Cloud Foundry, the leading open source platform for Cloud Native Applications. The Cambridge dojo is also co-located with Pivotal Labs to help customers develop these applications via modern software practices. The pairing of cloud software and cloud platforms is a key ingredient in leading businesses through their digital transformation journey.

Why is the dojo opening a significant milestone for EMC? First, it underscores EMC’s commitment to open source software. Open source is a key purchasing criterion for 3rd platform applications and infrastructure. During the second half of 2015, EMC went from a non-participating company to one of the top contributors to the Cloud Foundry open source project. EMC is helping to enhance Cloud Foundry’s governance, risk, and compliance capabilities for enterprise businesses.

Second, it demonstrates EMC’s ability to transform itself via a DevOps model. Cloud Foundry’s methodology is a combination of the ‘best-of-the-best’ modern software development practices, including Agile, Lean, Extreme Programming, and CI/CD. All contributors to the open source community follow this ‘way’ of development every day.

