Deploying a Kafka Cluster on Kubernetes

Introduction


This blog will show you how to deploy an Apache Kafka cluster on Kubernetes. We assume you already have a Kubernetes cluster set up and running.

Apache Kafka is a distributed streaming platform that lets you publish and subscribe to streams of records, similar to an enterprise messaging system.

There are a few concepts we need to know:

  • Producer: an app that publishes messages to a topic in the Kafka cluster.
  • Consumer: an app that subscribes to a topic in the Kafka cluster for messages.
  • Topic: a stream of records.
  • Record: a data block containing a key, a value and a timestamp.

We borrowed some ideas from defuze.org and updated our cluster accordingly.

Pre-start


ZooKeeper is required to run a Kafka cluster.

To deploy ZooKeeper easily, we use a popular ZooKeeper image from Docker Hub, digitalwonderland/zookeeper. We can create a deployment file zookeeper.yml which will deploy one ZooKeeper server.

If you want to scale the ZooKeeper cluster, you can duplicate the deployment block within the same file and change the configuration to the correct values. You also need to add ZOOKEEPER_SERVER_2=zoo2 to the container env of zookeeper-deployment-1 when scaling to two servers, as sketched below.
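For example, the container env of a hypothetical second deployment (say zookeeper-deployment-2, exposed by a zoo2 service) would look like this, with every deployment listing all the servers:

env:
- name: ZOOKEEPER_ID
  value: "2"
- name: ZOOKEEPER_SERVER_1
  value: zoo1
- name: ZOOKEEPER_SERVER_2
  value: zoo2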

zookeeper.yml

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1

We can deploy this by:

kubectl create --filename zookeeper.yml

It’s good to have a service for the ZooKeeper cluster. We have a file zookeeper-service.yml to create one. If you scale up the ZooKeeper cluster, you also need to add a matching service (zoo2, and so on) for each new server.

zookeeper-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1

Deploy Kafka cluster


Service

We need to create a Kubernetes service first to sit in front of our Kafka cluster deployment. Kafka has no cluster-level leader broker (leadership is per partition), so we can talk to any of the brokers. Because of that, we can direct our traffic to any of the Kafka servers.

Let’s say we want to route all our traffic to our first Kafka server, the one with id: "1". We can write a file like this to create a service for Kafka.

kafka-service.yml

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    app: kafka
    id: "1"
  type: LoadBalancer

After the service is created, we can get the external IP of the Kafka service by:

kubectl get service kafka-service

Kafka Cluster

There is already a well-maintained Kafka image on Docker Hub. In this blog, we are going to use the image wurstmeister/kafka to simplify the deployment. We can put the broker deployment in a file kafka.yml:

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-broker1
spec:
  template:
    metadata:
      labels:
        app: kafka
        id: "1"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: $SERVICE_EXTERNAL_IP
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: topic1:3:3

If you want to scale up the Kafka cluster, you can always duplicate the deployment block in this file, changing KAFKA_BROKER_ID (and the id label) to another value. Note that $SERVICE_EXTERNAL_IP is a placeholder: Kubernetes will not interpolate it, so substitute the external IP reported by kubectl get service kafka-service before deploying.
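As a sketch, a duplicated deployment for a hypothetical second broker (kafka-broker2) would look like this; only the name, the id label and the broker id change:

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-broker2
spec:
  template:
    metadata:
      labels:
        app: kafka
        id: "2"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: $SERVICE_EXTERNAL_IP
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_BROKER_ID
          value: "2"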

KAFKA_CREATE_TOPICS is optional. If you set it to topic1:3:3, it will create topic1 with 3 partitions and 3 replicas.

Test Setup

We can test the Kafka cluster with a tool named kafkacat, which can act as both a producer and a consumer.
To publish system logs to topic1, we can type:

tail -f /var/log/system.log | kafkacat -b $EXTERNAL_IP:9092 -t topic1

To consume the same logs, we can type:

kafkacat -b $EXTERNAL_IP:9092 -t topic1
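By default, kafkacat infers whether to act as a producer or a consumer from stdin; you can also pass the mode explicitly. For example, to consume topic1 from the beginning of the log:

kafkacat -C -b $EXTERNAL_IP:9092 -t topic1 -o beginning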

Upgrade Kafka


Blue-Green update

Kafka itself supports rolling upgrades; you can find more detail on this page.

Since we can reach Kafka through any broker of the cluster, we can upgrade one pod at a time. Let’s say our Kafka service is routing traffic to broker1: we can upgrade all the other broker instances first, then change the service to route traffic to one of the upgraded brokers, and finally upgrade broker1.

We can upgrade a broker by replacing the image with the version we want, like:

image: wurstmeister/kafka:$NEW_VERSION

and then running:

kubectl replace --filename kafka.yml

After applying the same procedure to all other brokers, we can edit our service by:

kubectl edit service kafka-service

Change id: "1" to another upgraded broker, then save and quit. All new connections will be established to the new broker.
At the end, we can upgrade broker1 using the same steps. Note that this will kill existing producer and consumer connections to broker1.

Kubernetes and UDP Routing

Hey guys, Gary here.

With all of the fun stuff happening around Kubernetes and Cloud Foundry, we decided to play around with it ourselves! One of the (few) capabilities we don’t have with Cloud Foundry that we can get with Kubernetes is UDP routing.

To learn more about why UDP routing doesn’t work with the containers in Diego runtime (yet, but will), check out ONSI’s proposal for the feature.

UDP routing: why would you use it? In short, for applications that continually post data that isn’t critical enough to need delivery guarantees, or that would soon be replaced with a more recent copy anyway, UDP packets can be a less intensive alternative to the TCP routing solution. Or, if you’re really hardcore, you could implement your own verification on top of UDP, but that would be a blog post in itself 🙂

Overall, setting up Kubernetes and getting it to expose ports was very simple. If you are reading this without any Kubernetes setup, go check out minikube. Even better, you could set up a GCP cluster, vSphere, or (gasp) AWS and follow along. The kubectl commands should be about the same either way.

Once you’ve got your instance set up, check out our kube-udp-tennis repo on GitHub. We use this repo to store very simple Python scripts that accept environment variables for ports and will either send or receive messages based on which script we execute. We also baked these into a Dockerfile so that Kubernetes can reference an image on Docker Hub.

Before you worry about deploying your own Docker images, know that you are not required to for this example: the manifests reference our existing images already on Docker Hub. If you deploy the listener, add the service, and then deploy the server, you will have a working UDP connection. Before I go and give you the commands, I want to explain what they do.

from /udp_listen:

kubectl apply -f udplisten-deployment.yaml

This command applies the udplisten-deployment.yaml file, which gives the specification for our udp-listen application. We spec this out so we can extend it with the udp-listen service.

kubectl apply -f udplisten-service.yaml

This command applies the udplisten-service.yaml file which, once the udplisten deployment is live, will allow us to talk to the port through the service functionality in Kubernetes. Here’s the documentation for services.

At this point, we will have the Kubernetes udplisten service running, and we will be ready to deploy our dummy application to talk into it.
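For reference, a Service that accepts UDP traffic looks roughly like this (a sketch; the port number and selector label are illustrative, not the repo’s exact file):

apiVersion: v1
kind: Service
metadata:
  name: udplisten-service
spec:
  ports:
  - port: 5005      # illustrative port
    protocol: UDP   # UDP instead of the default TCP
  selector:
    app: udp-listen # must match the deployment’s pod labels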

from /udp_server:

kubectl apply -f udpserver-deployment.yaml

This will deploy the udpserver application, which should ping messages into the udplisten-service; you should see them in the logs of the service’s pod.

The way the udp-server.py application finds and pings the udplisten-service is by leveraging Kubernetes service functionality. Basically, when we start Kubernetes services, we can find those services using environment variables. From the documentation:

For example, the Service "redis-master" which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11 produces the following environment variables:

REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11

We search, therefore, for the udplisten service’s host and port environment variables to communicate with the udplisten pods directly. Since we defined UDP as the protocol for network traffic into the service, this works right out of the box!
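Applying the same convention to our service (Kubernetes upper-cases the service name and replaces dashes with underscores), you can check the injected variables from inside a pod; assuming the service is named udplisten-service, that would look like:

echo $UDPLISTEN_SERVICE_SERVICE_HOST
echo $UDPLISTEN_SERVICE_SERVICE_PORT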

Thanks for reading everyone! As always, reach out to us on Twitter @DellEMCDojo, or me specifically @garypwhitejr, or post on the blog to get some feedback. Let us know what you think!

Until next time,

Cloud Foundry, Open Source, The way. #DellEMCDojo

Spreading The Way

Announcing the Dojo in Bangalore!

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

It is with unbelievable excitement that we officially announce the opening of our third global branch with a Dell EMC Dojo in Bangalore! By sharing our DevOps and Extreme Programming culture, including but not limited to the practices of pair programming, test-driven development and lean product development at scale, we have the deepest confidence that Bangalore is the geographical mecca that will set the tone for the Digital Transformation we hope for in the larger company.

So what does this mean beyond the logistical rollercoaster that comes with opening a new office? Well, I’m glad you asked!

We are hiring! Over the next few weeks, we will be rapidly and qualitatively (only because how else would we operate?) seeking out and interviewing developers and product managers interested in becoming a part of this exciting new Dojo from its inception. So, if you know anyone in the area who may be interested, please point them in the direction of Sarv Saravanan (sarv.saravanan@emc.com), who will be handling the process on the ground.

Otherwise, stay tuned for our team’s impending growth, engagement (both here and in India), and overall adventure!

Until next time…

Running Legacy Apps on CloudFoundry with NFS

How to re-platform your apps and connect to existing shared volumes using CloudFoundry Volume Services

This week the Cloud Foundry Diego Persistence team released version 1.0 of our nfs-volume-release for existing NFS data volumes. This BOSH release provides the service broker and volume driver components necessary to quickly connect Cloud Foundry deployed applications to existing NFS file shares.

In this post, we will take a look at the steps required to add the nfs-volume-release to your existing Cloud Foundry deployment, and the steps required after that to move your existing file-system-based application to Cloud Foundry.

Deploying nfs-volume-release to Cloud Foundry

If you are using OSS Cloud Foundry, you’ll need to deploy the service broker and driver into your Cloud Foundry deployment. To do this, you will need to colocate the nfsv3driver on the Diego cells in your deployment, and then run the NFS service broker either as a Cloud Foundry application or as a BOSH deployment.

Detailed instructions for deploying the driver are here.

Detailed instructions for deploying the broker are here.

If you are using PCF, nfs-volume-release is built in. As of PCF 1.10, you can deploy the broker and driver through a simple checkbox configuration in the Advanced Features tab in Ops Manager. Details here.

Moving your application into Cloud Foundry

There is a range of issues you might hit when moving a legacy application from a single-server context into Cloud Foundry, and most of them are outside the scope of this article. See the last section of this article for a good reference discussing how to migrate more complex applications. For our purposes, we’ll focus on a relatively simple content application that’s already well suited to run in CF except that it requires a file system. We’ll use servocoder/RichFileManager as our example application. It supports a couple of different HTTP backends, but we’ll use the PHP backend in this example.

Once you have cloned the RichFileManager repository and followed the setup instructions, you should theoretically be able to run the application on Cloud Foundry’s PHP buildpack with a simple cf push from the RichFileManager root directory:

cf push -b php_buildpack rich-file-manager

But RichFileManager requires the gd package, which isn’t included by default in the PHP buildpack. If we push the application as-is, file upload operations will fail when RichFileManager dies trying to create thumbnail images for uploaded files. To fix this, we need to create a .bp-config directory in the root folder of our application and put a file named options.json in it with the following content:

{
  "PHP_EXTENSIONS": ["gd"]
}

Re-pushing the application fixes the problem. Now we are able to upload files and use all the features of RichFileManager.

But we aren’t done yet! By default, the RichFileManager application stores uploaded file content in a subdirectory of the application itself. As a result, any file data will be treated as ephemeral by Cloud Foundry and discarded when the application restarts. To see why this is a problem, upload some files, and then type:

cf restart rich-file-manager

When you refresh the application in your browser, you’ll see that your uploaded files are gone!  That’s why you need to bind a volume service to your application.

To do that, we first need to tweak the application a little to tell it to put files in an external folder. Inside the application, open connectors/php/config.php in your editor of choice, and change the value for “serverRoot” to false. Also set the value of “fileRoot” to “/var/vcap/data/content”. (As of today, Cloud Foundry has the limitation that volume services cannot create new root-level folders in the container. Soon that limitation will be lifted, but in the meantime, /var/vcap/data is a safe place to bind our storage directory to.)
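After the edit, the two settings should read roughly as follows (an illustrative excerpt only; keep the surrounding structure of config.php as it is):

"serverRoot" => false,
"fileRoot" => "/var/vcap/data/content",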

Now push the application again:

cf push rich-file-manager

When you go back to the application, you should see that it is completely broken and hangs waiting for content. That’s because we told it to use a directory that doesn’t exist yet. To fix that, we need to create a volume service and bind it to our application. You can follow the instructions in the nfs-volume-release to set up an NFS test server in your environment, or if you already have an NFS server available (for example, Isilon, ECS, NetApp or the like) you can skip the setup steps and go directly to the service broker registration step.
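Creating the service instance typically looks something like this (a sketch: the service name nfs, the plan Existing, and the share address all depend on how the broker was registered in your environment):

cf create-service nfs Existing myVolume -c '{"share":"nfsserver.example.com/export/vol1"}'

Once you have created a volume service instance, bind it to your application: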

cf bind-service rich-file-manager myVolume \
-c '{"uid":"1000","gid":"1000","mount":"/var/vcap/data/content"}'

If you are using an existing NFS server, you will likely need to specify different values for uid and gid.  Pick values that correspond to a user with write access to the share you’re using.

Now restage the application:

cf restage rich-file-manager

You should see that the application now works properly again. Furthermore, you can now “cf restart” your application, and “cf scale” it to run multiple instances, and it will continue to work and serve up the same files.

Caveats

Volume services enable filesystem-based applications to overcome a major barrier to cloud deployment, but they will not enable all applications to run seamlessly in the cloud. Applications that rely on transactions across HTTP requests, or that otherwise store state in memory, will still fail to run properly when scaled out to more than one instance in Cloud Foundry. CF provides best-effort session stickiness for any application that sets a JSESSIONID cookie, but no guarantee that traffic will not get routed to another instance.

More detail on steps to make complex applications run in the cloud can be found in this article.

Doers for Today, Visionaries for Tomorrow and Change Agents for Always!

We Truly Are The Trifecta

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

The Dell EMC Dojo has a mission that is two-fold; we contribute to open source Cloud Foundry, and we evangelize ‘the way’ (XP, Lean Startup, etc) by engaging with internal Dell EMC teams in a purely DevOps manner. Our mission is direct with a scope that some could argue is boundless. By practicing ‘the way,’ hours, days, and weeks fly by as we push code to production at times every few minutes. Not only is our push to market rapid, so is our overall productivity. Oftentimes teams working with us nearly guffaw when they come to our office in Cambridge, MA and are able to see the ‘wizard(s) behind the curtain.’ We are asked how we keep three to five projects on track while also engaging with internal teams, planning large technical conferences, and working in the realm of R&D in our greater TRIGr team with an east coast contingent of only eight people and a west coast contingent of five. The secret? We LOVE what we do!


For a team with empathy at its core, there is never a moment when a task seems impossible.

Truly, we could be featured on one of those billboards along the highway stating that there is no ‘I’ in ‘Team.’ Two baseball players carrying an opposing team member across Home because she/he has hurt themselves, a photo of all of the characters from Disney’s “The Incredibles,” The Dell EMC Dojo team… Take your pick… TEAMWORK. Pass It On.

In all seriousness, the pace at the Dojo can be absolutely exhausting, and with such a small team, the absence of one person (and let’s face it, vacation and life need to happen at points) could in theory be a huge deal. But because DevOps is what we live and breathe, any member of the team can fill the gap at any point, truly putting into practice the idea that there doesn’t have to be, and should never be, a single point of failure. Whatever the industry or sector, what more embodies the ‘Go Big, Win Big’ message than this? By continually pushing ourselves to pair and to up-level the knowledge of our entire team, we never wait until tomorrow to take action. There is no need or desire to.

Agility is not a term we just talk about; it is simply inherent in everything we do.

With the combination of a rapidly changing market (externally and internally) and the pace at which we work, we at the Dojo have learned that we must stay on our toes. For those reading this who are familiar with sports, one of the first lessons learned in soccer is to never plant your feet: holding such a stance allows the opposing team to outpace you when the unexpected happens, which is most of the time. The same goes here. Pivoting is now second nature for us, and it doesn’t come with the scares. Instead, it is actually exciting when we are able to take data and identify ways in which we can better align with the efficiency and effectiveness of our software and methodology; to truly keep the user omnipresent in everything we do. We are happier. The ‘customer’ is happier. It is a (Go Big) win-win (Big) game. The cool thing, too, is that the more we practice this, the more we feel like we can predict the future, because we begin to see trends before they are even a thing.

Doers for Today. Visionaries for Tomorrow. Change Agents for Always.

Cloud Foundry Certified Developer Program

Your Exclusive Chance to be in the BETA

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

It is with unbridled excitement that we continue to prepare for Cloud Foundry Summit in Santa Clara. Here at the DellEMC Dojo and the larger Technology, Research, Innovation Group of DellEMC, gears are constantly turning, and research and development is moving at an unprecedented pace. We are so excited to share some of these findings and demos with all those related to and involved with the Foundation.

While all of this is in the works, we want to ensure that we, and all those we care about, are taking every opportunity to continue developing ourselves as thought-provoking leaders in the industry and in the world. Part of this is keeping ourselves current on what exists, and then acting as promoters of the solutions we have found work best. Which brings me to the reason for writing this blog post.

If you have not yet heard, the Cloud Foundry Foundation has been working toward the launch of a “Cloud Foundry Certified Developer” program. The unveiling date is getting closer and closer, and it is now more pressing to share the intent of the program so that all who want to get involved take the opportunity to do so.

The intent of the program is to use a performance-based testing methodology to certify that individual developers and engineers have the skills necessary to be productive developing applications on top of the Cloud Foundry platform. This will allow for a more streamlined, data-driven approach to the way we contribute to the Open Source community and our more intimate communities as well. Being active leaders in this realm should be something we strive for daily. Being PRODUCTIVE active leaders who can then go forth and teach with a congruent and strong set of skills is the mark we should be making, reaching and expanding. So here is how:

The Foundation is currently accepting applications from individuals that may want to participate in the BETA or early access program. If you (or anyone in your organization) are interested in being among the first certified developers, please use the Google Form found here to register interest: https://docs.google.com/forms/d/e/1FAIpQLSeXtGUMLyJ3NkJQLnWhXCafh3SgziHr1fsSYM7mXi6JPcLaPw/viewform

A NOTE: All applications for participation should be filed by March 10th.

Space is limited, so the BETA program will be short-listed to 30 candidates, and the Foundation will communicate with everyone completing the form as to their acceptance into the program. Those who are not accepted will be offered an opportunity to enter the early access phase of the rollout. Developers who pass the exam, either during the BETA or early access, will also get a fancy CF Certified Dev sweatshirt… so no harm, no foul!

The BETA period will begin on March 20th and close on March 31st. Candidates will be asked to use the system to schedule a four-hour exam window during the two weeks of BETA testing.

Passing the exam requires practical hands-on skills in the following areas: CF basics, troubleshooting applications and CF configurations, application security, working with services, application management, cloud-native architectural principles, and container management within CF. Candidates should also be comfortable modifying simple Java, Node.js or Ruby applications.

You think you have what it takes? We think you do too. So apply today!

TCP Routing and SSL: A Walkthrough Using Spring Boot

An incredible guest blog by Ben Dalby, Advisory Consultant at DellEMC

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

Walkthrough: Cloud Foundry TCP Routing and SSL

Guest Blog by Ben Dalby, Advisory Consultant (Applications and Big Data) at DellEMC

Use Cloud Foundry’s TCP routing feature to terminate SSL directly in your application

Introduction

A common security requirement for customers in regulated industries such as banking and healthcare is that all traffic should be secured end-to-end with SSL.

Prior to Pivotal Cloud Foundry 1.8, inbound SSL connections would always terminate at the Gorouter, and further encryption could only be achieved between the Gorouter and running applications by installing Pivotal’s IPsec Add-on.

With the introduction in version 1.8 of TCP routing, it is now possible to terminate SSL right at your application – and this article will walk you through a working example of a Spring Boot application that is secured with SSL in this way.

Prerequisites

  • PCF Dev version 0.23.0 or later
  • JDK 1.8 or later
  • Gradle 2.3+ or Maven 3.0+
  • git (tested on 2.10.1)
  • A Linux-like environment (you will need to change the file paths for the directory commands to work on Windows)

How to do it

Step 1 – Create a Spring Boot application

We’re going to be lazy here, and simply make a couple of small modifications to the Spring Boot Getting Started application:

$ git clone https://github.com/spring-guides/gs-spring-boot.git

Step 2 – Create an SSL certificate

$ cd [GITHUB HOME]/gs-spring-boot/initial/src/main/resources
$ keytool -genkey -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 \
-keystore keystore.p12 -validity 3650 -keypass CHANGEME -storepass CHANGEME \
-dname "C=GB,ST=Greater London,L=London,O=Dell EMC,OU=Apps and Data,CN=abd.dell.com"  
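To sanity-check the keystore you just generated, you can list its contents:

$ keytool -list -storetype PKCS12 -keystore keystore.p12 -storepass CHANGEME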

Step 3 – Configure Spring Boot to use SSL and the new certificate

(You can also retrieve the application.properties shown below from here)

$ cd [GITHUB HOME]/gs-spring-boot/initial/src/main/resources
$ cat <<EOT >> application.properties  
server.port: 8080
server.ssl.key-store: classpath:keystore.p12
server.ssl.key-store-password: CHANGEME
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: tomcat
EOT  
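Before packaging, you can optionally verify the SSL setup locally; the guide’s project includes the Spring Boot Maven plugin, and curl needs --insecure because the certificate is self-signed:

$ mvn spring-boot:run &
$ curl --insecure https://localhost:8080/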

Step 4 – Package the application

$ cd [GITHUB HOME]/gs-spring-boot/initial
$ mvn clean package  

Step 5 – Push the application to PCF Dev (use default org and space)

$ cd [GITHUB HOME]/gs-spring-boot/initial
$ cf target -o pcfdev-org -s pcfdev-space
$ cf push gs-spring-boot -p target/gs-spring-boot-0.1.0.jar  

Step 6 – Create a TCP route and map it to your application

$ cf create-route pcfdev-space tcp.local.pcfdev.io --random-port
Creating route tcp.local.pcfdev.io for org pcfdev-org / space pcfdev-space as admin...
OK
Route tcp.local.pcfdev.io:61015 has been created
$ cf map-route gs-spring-boot tcp.local.pcfdev.io --port 61015

Step 7 – Verify you can now connect directly to your application over SSL

Browse to https://tcp.local.pcfdev.io:61015/ (substitute your own port after the colon).

View details of the certificate to verify it is the one you just generated (note that the procedure for doing this has recently changed if you are using Chrome).

Further Reading

Enabling TCP Routing
http://docs.pivotal.io/pivotalcf/1-9/adminguide/enabling-tcp-routing.html

How to tell application containers (running Java apps) to trust self-signed certs or a private/internal CA https://discuss.pivotal.io/hc/en-us/articles/223454928-How-to-tell-application-containers-running-Java-apps-to-trust-self-signed-certs-or-a-private-internal-CA

Enable HTTPS in Spring Boot
https://drissamri.be/blog/java/enable-https-in-spring-boot/

Do You Hear That? It’s the sound of Keyboards!

Call for Papers | Cloud Foundry Summit Silicon Valley 2017 is quickly approaching!

Emily Kaiser

Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

Our brains are on fire, our keyboards are hot, and the joke in the office the past few days has been over our extreme excitement for the eventual need to buy sunscreen since our Boston winter leaves us Vitamin D deprived. Why is this the case, you may or may not be asking? Well, I plan on telling you anyway because it is just too exciting not to share!

Our team is preparing for CLOUD FOUNDRY SUMMIT SILICON VALLEY! We felt a social duty to let all of those we care about, and want to be there with us for what’s sure to be the summit of the summer (how can it not be, when it is being held in June in Santa Clara?!), know that the last call for papers is quickly approaching (no, seriously, it’s this Friday, February 17th).

Just as a refresher for those on the fence, Cloud Foundry Summit is the premier event for enterprise app developers. This year the Foundation, through market research and feedback, found that interest and industry need are rooted in a focus on innovation and the streamlining of development pipelines. For this reason, Summit 2017 is honing in on microservices and continuous delivery in developers’ language and framework of choice. That is why the session tracks available will be Use Cases, Core Project Updates, Experiments, Extension Projects, and Cloud Native Java. Each session chosen for the conference is allowed one primary speaker and one co-speaker. The primary speaker receives a complimentary conference pass while the co-speaker receives a discounted conference pass. So what’s stopping us from getting involved? Absolutely NOTHING!

As a sneak peek at a few of the topics our team has submitted for approval, see below:

  • Adopting DevOps and Building a Cloud Foundry Dojo (Lessons Learned)
  • Lift & Shift Your Legacy Apps to Cloud Foundry
  • How to Develop Scalable Cloud Native Application with Cloud Foundry
  • Enabling GPU-as-a-Service in Cloud Foundry
  • Blockchain as a Service
  • Avoiding pitfalls while migrating BOSH deployments
  • Spring Content: Cloud-Native Content Services for Spring


So, now what’s stopping YOU from getting involved? Submit papers here: https://www.cloudfoundry.org/cfp-2017/ and/or register here: https://www.regonline.com/registration/Checkin.aspx?EventID=1908081&utm_source=flash&utm_campaign=summit_2017_sv&utm_medium=landing&utm_term=cloud%20foundry%20summit&_ga=1.199163247.1732851993.1460056335

Last but definitely not least, let us know if you plan on coming—we are more than happy to share sunscreen 🙂 We cannot wait to see you there!

Thriving in a world of disruption – the “Dojo Way”

Brian Roche, Lead of the @DellEMCDojo

Brian Roche

Brian Roche - Senior Director, Cloud Platform Team at Dell EMC. Brian Roche is the Leader of Dell EMC’s Cloud Platform Team. He is based in Cambridge, Massachusetts, USA at the #EMCDojo.

We live in a world where ideas can be dreamt up, implemented and delivered more rapidly than ever before. The pace of innovation is like nothing we’ve ever seen in our lifetime, and it will likely only get faster. Old patterns of software delivery, such as 12-month releases, are insufficient to meet the demands of today’s marketplace. As a result, many will face extinction if they do not change their work patterns and focus on two key objectives: customer needs, and delivering software rapidly to meet those needs. Iterate and repeat.

This disruption creates opportunity if we’re in the right position to take advantage of the constant changes. In fact, it is possible to thrive in this new world and lay the foundation for a successful future. At the @DellEMCDojo we have found that better way. The Dojo was created for two simple reasons: first, to adopt a DevOps culture to achieve lower-cost innovation, more rapid product delivery, and innovative solutions that meet the needs of customers; second, to contribute to open-source Cloud Foundry. All of this is important because we not only demonstrate to our customers that we walk the walk and talk the talk, but we can also respond to their market needs and deliver solutions that enable and empower them to compete effectively.

The dojo methodology is important, but it’s what we create with this new way of working that matters most. After all, this is a business: revenue is the scorecard, and we measure the success of our software by user adoption. Here is a brief run through the projects we’ve worked on in recent months.

Persistence in Cloud Foundry 

Cloud Foundry is WAY cool & 12 Factor Apps are WAY cool too, BUT you and I don’t just have to worry about 12 Factor Apps. We have legacy apps in our family that we still need to pay attention to. Our position is that there should be a place for these legacy apps in our cool new world called Cloud Foundry. That’s why we enabled container persistence in Cloud Foundry. Shipping in PCF 1.9, customers can take advantage of this functionality by mounting NFSv3 volumes from within their containers. The obvious first technology integration was with Isilon; customers can now mount Isilon volumes. As a result we have created a world where 12 Factor apps and non-12 Factor apps can live together and experience the benefits of Cloud Foundry. We are actively developing this technology for and with customers, and the response thus far has been very positive.

GPUaaS and Cloud Foundry 

We are seeing growing demand for big data processing for one simple reason: the ability to make intelligent decisions about data is as important as continuous delivery. Running big data workloads that require massive processing power is not easy for Cloud Foundry developers today. One obvious challenge is that they cannot choose where these application workloads will be executed. Leveraging the great work that Jack Harwood and the team in China have done on GPUaaS, we were able to integrate it with Cloud Foundry. The result is that application developers will now have a choice: they will be able to choose where their workloads run, leading to even greater efficiency and value from their information.

Blockchain and Cloud Foundry 

Blockchain technology comes up more and more when the larger Dell EMC talks to customers. The first question is often ‘what is it and how can we leverage it?’ Up until recently we didn’t have an opinion on this technology. So we kicked off a project with the following goals:

1. Understand the different Blockchain distributions.
2. Identify ways in which we could integrate Blockchain with Cloud Foundry.
3. Make this available to the community so we can all iterate on this together.

I’m pleased to say we now have a Blockchain implementation with Cloud Foundry and are happy to share not only the code/implementation but also our learnings.  Together as a community we can go far in developing this technology to solve real business problems.

The Dojo Effect

The Dojo team has come a long way in two years; now everyone wants to create a dojo. The brush fires have been lit and the ‘dojo way’ is spreading. A word of caution for those eager to get started quickly: while we’re happy with the enthusiasm and the willingness to change work patterns, there’s more to building a dojo and adopting DevOps than changing the physical space. This methodology is nuanced; you don’t know what you don’t know, especially early on. It can be easy, especially early in the adoption phase, to get lost and give up. To embrace this new way of working it’s a good idea to have help from your friends, as we did from Pivotal and Pivotal Labs. That’s where the dojo team comes in. By working with us in a six-week engagement, we will pair with you, teaching you to fish so you form a solid foundation to build on after you leave. We’re pretty strict in implementing Lean Startup and Running Lean to the letter. Why? Because we know you may relax after you leave, so we want to keep the standard high while you are here. There’s no better way to transform than to work with the @DellEMCDojo team.

Lastly, at the heart of our success is the dojo team: the people. The incredibly talented individuals who have adopted these new work patterns refine their art every single day by practicing at the dojo. I would like to acknowledge and thank an amazing dojo team for riding the waves of change and finding ways to be successful regardless of what obstacles are thrown in our path. You and your teams can find a way to thrive and enjoy the rapid pace of innovation in the same way the dojo team has.

Until next time, c ya.

Dell EMC Dojo at Hopkinton!

Reviewing what we covered, and what we learned

Hey again everyone! We’re writing today to talk about a few of the topics we covered during our time at the Dell EMC Dojo Days in Hopkinton. We met a lot of great minds and took plenty of input on how we work, and we gave passersby plenty of insight into what we do and how we work.

Firstly, Brian Roche and Megan Murawski debuted the famous “Transformation Talk”. We normally give this presentation in preparation for an engagement with another team, but in this case it was given to allow open criticism and circulation of our methodology to the rest of the company. In this presentation we cover: pairing, and why it’s important to our process; why we use TDD (Test-Driven Development) (and why you should too!); and our weekly meetings, including Retros, IPM, and Feedback, to name a few. We had plenty of great ideas and questions, as usual, and we realized twenty minutes over time that we couldn’t get Brian off the stage.

Xuebin He eventually got Brian off-stage for a talk he gave on CI/CD (Continuous Integration and Continuous Deployment). Being one of the developers at the Dojo allowed Xuebin to be a bit more technical in his talk and cover some of the programming practices and tools we use to achieve this at the Dojo: Concourse is our tool for running our beautifully constructed tests, along with standard mocking design patterns and the code quality produced by TDD.

We picked up again on Tuesday at 11:45 to talk about why a PaaS exists, and why it’s important. That talk, given by yours truly, focused on some of the common technical roadblocks that keep developers, customers, and managers from working efficiently, as well as the ways a PaaS can solve those problems to build a better business.

To containerize applications for a PaaS, we need to cover basics like “What is a 12 factor application, and what’s a container?” Thinh Nguyen stepped in and gave a great description of how we use these guiding principles while developing our application environment to be better for us and our customers.

Throughout all of our talks, we worked away on two pair stations very carefully brought from our lair in Cambridge. We gave away some free swag, some free candy, and raffled off some super giveaways. We thank everyone involved in preparing and executing these few days for their hard work. We also want to give a huge thanks to everyone who attended our talks (rambles) and participated in some mind-expanding conversations.

Finally, I want to close with a few notes. We always enjoy fresh perspective. If you had more to say, or you missed us during our time and want to start a conversation, leave a comment in the comment section! If you don’t want to comment here, then drop me a line by email. We’d love to hear from you.

Until next time, remember: Cloud Foundry, Open Source, The Way. #DellEMCDojo.
