Cloud Foundry on Kubernetes

Amanda Alvarez


Hello 🙂

The Dojo team is here again. Recently, we worked on a project called Cloud Foundry (CF) on Kubernetes (K8S) and we are thrilled to share with you how we did it.

Table of contents:
1. Why CF on K8S?
2. Architecture
3. Demo Video

1. Why CF on K8S?

Putting CF on K8S is a good idea because of its ease of deployment, resource utilization, and flexibility.

  • The overall deployment of CF on K8S is simpler than on other IaaSes because creating and destroying containers is quicker than creating and destroying VMs. We therefore save a significant amount of time and resources when deploying CF's components, which span more than ten VMs. Woah! 😮
  • Having CF on K8S helps utilize resources better. In a traditional deployment, components such as NATS or Consul each get their own VM, with resources assigned per VM. For example, the NATS and Consul VMs each need 2GB of RAM on top of the Diego Cell's 5GB, which is not an efficient use of resources and does not scale. Instead of deploying two VMs for NATS and Consul, we can deploy them as jobs in two containers sitting inside a node (or VM) that the Diego Cell has access to. Inside the node, the containers share the resources allocated to that VM.
  • K8S effectively acts as an IaaS for CF to run on top of. Since K8S can be deployed on any IaaS (including bare metal), CF can run in any environment that K8S runs in.
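The resource-sharing point above can be sketched as a single pod in which NATS and Consul run as co-located containers that share one node's allocation instead of claiming two dedicated 2GB VMs. Everything in this manifest (names, images, memory numbers) is illustrative, not our actual deployment:

```yaml
# Illustrative only: NATS and Consul sharing one node's resources
apiVersion: v1
kind: Pod
metadata:
  name: cf-support-jobs
spec:
  containers:
  - name: nats
    image: nats              # placeholder image
    resources:
      requests:
        memory: 256Mi        # placeholder sizing
  - name: consul
    image: consul            # placeholder image
    resources:
      requests:
        memory: 256Mi
```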

2. Architecture

Figure 1: CF on K8S on GCP architecture
There is a Kubernetes cluster of nodes (or VMs) that resides in GCP. Inside the Kubernetes cluster, the CF components run on K8S nodes. The Diego Cell nodes are separate from the main CF components because it is difficult to run containers within a container (Diego is itself a container orchestrator).

3. Demo Video

Special thanks to the Kubernetes BOSH CPI team from SAP for helping us. Please check out their repo at kubernetes_cpi.

A Week in the Dojo

Amanda Alvarez


Hello readers! My name is Amanda and I am the Dojo's newest member here in Cambridge. Words cannot describe how excited I am to be here! I am normally afraid of big changes, but I felt comfortable with this new beginning because I had a gut feeling this team would help me launch my career. It really helped my anxiety when Victor Fong gave everyone on the team a Lego Pokemon toy after returning from a trip. Here he is!

Once I met my team, I jumped right into standup with literally no time wasted getting my day started. Shortly after, I was paired to work on a UI project. Being exposed to the code without needing documentation for it really blew my mind: I got to see Ruby, HTML/CSS, git, and AngularJS all in one day! Later, I rotated with my team to work on deploying Kubernetes. It can be challenging to understand what is going on, but that is because this is my first time really using things like Ruby or cloud technologies. My week with this new team finished with retro on Friday, when we all get together and talk about the good, okay, and bad things that happened during the week. I admitted in the bad category that I felt ashamed for taking too long to learn, but the whole team reassured me that these things take time and that I will get better.

So what have I learned?

  • Ask lots of questions. People want you to learn! There is no such thing as a stupid question. 🙂
  • Things will break. Sometimes it is an easy fix like adding a missing parenthesis. Sometimes it is a challenge that takes a day or two to figure out.
  • Ruby is a weird language. That is all I have to say about that.
  • DevOps is a really efficient way of rapidly delivering code.
  • Test Driven Development and pair programming made my first week feel almost seamless. I say “almost” because I have so much learning to do in order to get familiar with this kind of environment.
  • Tools such as Diego, Kubernetes, and Bosh can do many “things.” You might ask, “What kind of things?” And I could probably tell you they help manage deployment of containers and VMs.
  • Don’t be afraid to make mistakes. They’re something to be learned from.
  • Everyone has something to bring to the table. Share your ideas, even if you might be disagreeing with others.

Hopefully this gives you an idea of how much I learned during my first week. I am fortunate to be working with intelligent individuals who make up an amazing team. In my past experiences, I have never worked so closely with people as I have since working here. No more contributing to one thing from the confines of the cube of solitude; instead I am working with people on multiple projects at any given moment. It is easy to ask anyone what is going on because everyone is familiar with the ongoing projects. Being able to pair with someone has definitely made my transition into this role feel easier. This team is passionate about what they do, and it really motivates me to do my best to get up to speed with their skills. At the time of this post, I have been working in this role for two weeks now, and the time feels like it elapsed in seconds. I wake up every day feeling energized and thrilled to be at the office. Hopefully I can share something more technical next time!

~$ whoami
Before I go, I should probably share a few things about me. My favourite hobbies include gardening, 3D printing, video games, and reading. This year I have successfully grown various herbs, such as basil and parsley, and I am an avid succulent/cactus collector. I like to 3D print miniatures and tiles that get painted for D&D, which I play occasionally when I find a good group to play with. I have always been a PS2 girl at heart, but I have been playing PC games for the last 4 years now. Lastly, I like to read mostly sci-fi books and I am currently reading through Stephen King's "The Dark Tower" series. This pretty much sums me up outside of work. So feel free to reach out if you ever want to talk about what I do for work or my hobbies! I would love to get to know more people in this community. 🙂

Why The Dojo Matters: a guide to digital transformation wrapped up in a SCIPAB


Emily Kaiser


Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

 

The Situation | Digital transformation has become a nuanced term. Somewhat like ice cream in the summer. When you see someone eating your favorite flavor on a hot summer day, it incites an immediate craving. You know that you want the cone, that it will bring you a sense of happiness not only in the taste, but also maybe in the social outing it will revolve around. But then you think about that bathing suit you hope to wear later in the day or the water that will then be needed to quench your thirst, and oftentimes there is hesitation in fulfilling the craving you know ultimately will bring no regret. Okay, maybe that's a little too distilled of an explanation, but I hope you get the point. Digital Transformation. We know it is important; in fact it is the impending future no matter how much you try to avoid or deny it. But then by embarking on the journey, you also know that it will create work and a probable disruption in your comfortable 'plan'. So you begin to question the value. And begin to cringe at the term, or try to validate your thought that the term has too much hype.

The Complication | This hesitancy and resistance to diving head first into the process actually hinders long-term success. Companies that are not investing fully, in money, support, and effort, are running the risk of falling behind competitors. In order to build higher-quality products at a more rapid speed, there comes a time when the company, each of its teams, and its employees need to embrace and see through true digital transformation. But it's hard. Really hard.

The Implication | As seen in the image attached, companies are running the risk of losing the opportunity to capture the expected 30% of revenue by 2020 that customers are investing to be a part of the movement. Not only this, but if companies don't move quickly they will miss the sweet spot in the maturity of digital transformation that their competitors are gaining as time lapses. This most immediately causes depreciation of Net Promoter Score, which is arguably now more important than any vanity metric (i.e. how many lines of code are being written, number of commits, etc.) ever was. And once trust is lost, it is close to impossible to rebuild, especially in the Fortune 500 customer base.

Position | Now more than ever, we need to look past the nuance and move our teams toward modernization. At the Dojo, our mission is two-fold: to practice modern software development methodology (XP, Lean Startup) and further evangelize 'The Way' to internal Dell EMC product teams, and to contribute to Cloud Foundry Foundation sanctioned OS projects. We are very lucky to work for a company that is investing in and fully understands that in order to stay alive, and most importantly thrive as IT leaders, we must continue to scale in this world of Digital Transformation. Our power at the Dojo lies in the buy-in from all levels.

Action | Our power as Dell EMC on the Digital Transformation world stage lies in the buy-in from every member of the company. It has been proven time and time again that customers LOVE the modern way in which we are building software. There are definitely challenges to rewiring the way that we work and the way that we measure the work we produce, but with hard work, comes not only a thrilling journey, but a highly productive one that produces amazingly positive results. There is no better time than now to jump on this Digital Transformation train.

Benefit | Use the Dojo as a testament and witness to all of the aforementioned sentiments and Digital Traction Metrics as seen in the attached image. Join us in paving the path to the Future. And eat an ice cream cone while you are at it.

Deploy Kubernetes on vSphere using BOSH – Kubo

Introduction


During CloudFoundry Summit 2017, Kubo was released. The name comes from combining Kubernetes and Bosh. We can now deploy Kubernetes on many different IaaSes using Bosh. It's the first step toward integrating Kubernetes into CloudFoundry.

In this post, we are going to deploy a Kubernetes instance on vSphere using Bosh.

Prerequisites


We assume you already have a Bosh Director running, with one public network and one private network ready on vSphere. Your cloud-config would look like this:

cloud-config.yml

All capitalized fields and IP fields should be replaced with correct values based on your vSphere settings.
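A minimal sketch of what that cloud-config might contain. The field names follow the BOSH v2 cloud-config schema for the vSphere CPI, but every datacenter, cluster, and network name, and every IP range, is a placeholder:

```yaml
# Sketch only -- replace all capitalized names and IPs with your values
azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: YOUR_DATACENTER
      clusters: [{YOUR_CLUSTER: {}}]

vm_types:
- name: default
  cloud_properties: {cpu: 2, ram: 4096, disk: 10240}

disk_types:
- name: default
  disk_size: 10240

networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.0.0/24          # your private subnet
    gateway: 10.0.0.1
    dns: [8.8.8.8]
    az: z1
    cloud_properties: {name: YOUR_PRIVATE_PORT_GROUP}
- name: public
  type: manual
  subnets:
  - range: 203.0.113.0/24       # your public subnet
    gateway: 203.0.113.1
    dns: [8.8.8.8]
    az: z1
    cloud_properties: {name: YOUR_PUBLIC_PORT_GROUP}

compilation:
  workers: 3
  reuse_compilation_vms: true
  az: z1
  vm_type: default
  network: private
```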

We use the Bosh Director as the private network's gateway by setting up iptables on the director, following this instruction.

Deploy


We are going to use kubo-release from the CloudFoundry Community. More deployment instructions can be found here.

1. Download releases

We need to download three releases: kubo, etcd and docker. Then upload them to bosh director.
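The commands would look something like the following. The release URLs here are placeholders, so take the current ones from the kubo-release documentation; note that bosh upload release accepts a URL directly, so no local download is needed:

```shell
# URLs are placeholders -- check the kubo-release docs for current ones
bosh upload release https://bosh.io/d/github.com/cloudfoundry-incubator/kubo-release
bosh upload release https://bosh.io/d/github.com/cloudfoundry-incubator/etcd-release
bosh upload release https://bosh.io/d/github.com/cloudfoundry-incubator/docker-release
```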

2. Generate certificates

Kubernetes requires certificates for communication between the API server and the kubelets, and also between clients and the API server. The following script will do the job for us. Replace API_PRIVATE_IP and API_PUBLIC_IP with the private IP and public IP of the Kubernetes API server.

key-generator.sh
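The original script is not reproduced here, so below is a minimal openssl-based sketch of what it has to do: create a CA, then issue an API server certificate whose subject alternative names include both IPs. Key sizes, validity periods, and CNs are assumptions:

```shell
#!/bin/bash
# Sketch of key-generator.sh: a CA plus an API server cert with both IPs as SANs
set -e

API_PRIVATE_IP=${1:-10.0.0.10}    # replace with the API server's private IP
API_PUBLIC_IP=${2:-203.0.113.10}  # replace with the API server's public IP

# 1. Self-signed CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes-ca"

# 2. API server key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"

# 3. Sign the CSR, adding both IPs as subject alternative names
printf 'subjectAltName=IP:%s,IP:%s\n' "$API_PRIVATE_IP" "$API_PUBLIC_IP" > san.cnf
openssl x509 -req -days 365 -in apiserver.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out apiserver.crt -extfile san.cnf
```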

3. Fill bosh deployment manifest

Replace the red fields with the correct values, and paste the contents of the certificate files generated above into the corresponding fields.

kubernetes.yml

In order to access the deployed Kubernetes instance, we need to create a config file:

~/.kube/config
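A sketch of a minimal kubeconfig for this setup. The server address, port, user name, and file paths are placeholders; use your API server's public IP and the certificates generated earlier:

```yaml
# Sketch only -- server address, names, and paths are placeholders
apiVersion: v1
kind: Config
clusters:
- name: kubo
  cluster:
    server: https://API_PUBLIC_IP:8443
    certificate-authority: /path/to/ca.crt
users:
- name: admin
  user:
    client-certificate: /path/to/admin.crt
    client-key: /path/to/admin.key
contexts:
- name: kubo
  context:
    cluster: kubo
    user: admin
current-context: kubo
```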

After your bosh deployment is done, you should be able to type kubectl cluster-info and see the addresses of the Kubernetes master and cluster services.

Test


We can test our Kubernetes cluster by creating a simple Redis deployment using the following deployment file:

redis.yml
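A minimal sketch of such a redis.yml. The pod name redis-master matches the kubectl describe command below; the labels are assumptions, and the API version reflects Kubernetes at the time this was written:

```yaml
# Sketch of a one-replica Redis deployment
apiVersion: extensions/v1beta1   # Deployments were still beta at the time
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379
```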

kubectl create --filename redis.yml will deploy Redis. If we type kubectl describe pods redis-master, we should not see any errors.

If you have any questions, leave a comment here or email xuebin.he@emc.com. Thank you!

Deploy Kafka cluster by Kubernetes

Introduction


This blog will show you how to deploy an Apache Kafka cluster on Kubernetes. We assume you already have Kubernetes set up and running.

Apache Kafka is a distributed streaming platform that lets you publish and subscribe to streams of records, similar to an enterprise messaging system.

There are a few concepts we need to know:

  • Producer: an app that publishes messages to a topic in the Kafka cluster.
  • Consumer: an app that subscribes to a topic for messages in the Kafka cluster.
  • Topic: a stream of records.
  • Record: a data block containing a key, a value, and a timestamp.

We borrowed some ideas from defuze.org and updated our cluster accordingly.

Pre-start


Zookeeper is required to run Kafka cluster.

In order to deploy Zookeeper in an easy way, we use a popular Zookeeper image from Docker Hub, digitalwonderland/zookeeper. We can create a deployment file zookeeper.yml which will deploy one Zookeeper server.

If you want to scale the Zookeeper cluster, you can basically duplicate the code block in the same file and change the configurations to the correct values. You also need to add ZOOKEEPER_SERVER_2=zoo2 to the container env for zookeeper-deployment-1 if scaling to 2 servers.

zookeeper.yml
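A sketch of a single-server zookeeper.yml along those lines. The deployment name matches the zookeeper-deployment-1 mentioned above; the ZOOKEEPER_* variables follow the digitalwonderland/zookeeper image's conventions, and the labels are assumptions:

```yaml
# Sketch of a one-server Zookeeper deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
```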

We can deploy this by:
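Presumably with the usual kubectl create against the manifest:

```shell
kubectl create -f zookeeper.yml
```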

It’s good to have a service for Zookeeper cluster. We have a file zookeeper-service.yml to create a service. If you need to scale up the Zookeeper cluster, you also need to scale up the service accordingly.

zookeeper-service.yml
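A sketch of the matching service, exposing the standard Zookeeper ports (2181 for clients, 2888 for followers, 3888 for leader election); the service name and selector labels are assumptions that match the deployment sketch above:

```yaml
# Sketch of the Zookeeper service
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  selector:
    app: zookeeper-1
  ports:
  - name: client
    port: 2181
  - name: followers
    port: 2888
  - name: election
    port: 3888
```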

Deploy Kafka cluster


Service

We need to create a Kubernetes service first to front our Kafka cluster deployment. Kafka has no cluster-wide leader at the server level (leadership is per partition), so clients can talk to any of the brokers. Because of that, we can redirect our traffic to any of the Kafka servers.

Let’s say we want to route all our traffic to our first Kafka server with id: "1". We can generate a file like this to create a service for Kafka.

kafka-service.yml
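A sketch of that kafka-service.yml: a LoadBalancer service whose selector pins traffic to the broker labeled id: "1". The service name and labels are assumptions:

```yaml
# Sketch: route all external Kafka traffic to broker 1
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  type: LoadBalancer
  selector:
    app: kafka
    id: "1"
  ports:
  - name: kafka-port
    port: 9092
```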

After the service is created, we can get the external IP of the Kafka service by:
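For example (the service name is an assumption that matches the sketch above); the address appears in the EXTERNAL-IP column:

```shell
kubectl get service kafka-service
```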

Kafka Cluster

There is already a well-defined Kafka image on Docker Hub. In this blog, we are going to use the image wurstmeister/kafka to simplify the deployment.

kafka-cluster.yml
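A sketch of a one-broker kafka-cluster.yml. The KAFKA_* environment variables follow the wurstmeister/kafka image's conventions; the names, labels, and the EXTERNAL_IP placeholder (the service IP from the previous step) are assumptions:

```yaml
# Sketch of a one-broker Kafka deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-broker-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        id: "1"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: EXTERNAL_IP          # the service's external IP
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_CREATE_TOPICS
          value: topic1:3:3
```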

If you want to scale up the Kafka cluster, you can always duplicate a deployment into this file, changing KAFKA_BROKER_ID to another value.

KAFKA_CREATE_TOPICS is optional. If you set it to topic1:3:3, it will create topic1 with 3 partitions and 3 replicas.

Test Setup

We can test the Kafka cluster with a tool named kafkacat, which can act as both a producer and a consumer.
To publish system logs to topic1, we can type:
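Assuming kafkacat is installed locally and KAFKA_IP stands in for the service's external IP, the producer side might look like:

```shell
tail -f /var/log/syslog | kafkacat -b KAFKA_IP:9092 -t topic1 -P
```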

To consume the same logs, we can type:
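With the same KAFKA_IP placeholder, the consumer side is:

```shell
kafkacat -b KAFKA_IP:9092 -t topic1 -C
```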

Upgrade Kafka


Blue-Green update

Kafka itself supports rolling upgrades; you can find more detail on this page.

Since we can access Kafka through any broker of the cluster, we can upgrade one pod at a time. Let's say our Kafka service routes traffic to broker1: we can upgrade all the other broker instances first, then change the service to route traffic to any upgraded broker, and finally upgrade broker1.

We can upgrade our broker by replacing the image to the version we want like:

image: wurstmeister/kafka:$NEW_VERSION, then do:
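Presumably by re-applying the edited manifest:

```shell
kubectl apply -f kafka-cluster.yml
```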

After applying the same procedure to all other brokers, we can edit our service by:
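Assuming the service name from earlier, that edit is:

```shell
kubectl edit service kafka-service
```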

Change id: "1" to another upgraded broker, save, and quit. All new connections will be established through the new broker.
At the end, we can upgrade broker1 using the steps above, but doing so will kill existing producer and consumer connections to broker1.

Kubernetes and UDP Routing

Hey Guys, Gary Here.

With all of the fun stuff happening around Kubernetes and Cloud Foundry, we decided to do some fun stuff and play around with it! One of the (few) capabilities we don't have with Cloud Foundry that we can get with Kubernetes is UDP routing.

To learn more about why UDP routing doesn't work with containers in the Diego runtime (yet, but it will), check out Onsi's proposal for the feature.

UDP Routing. Why would you use it? In short, for applications that continually post data that isn't individually important, or that would soon be replaced with a more recent copy anyway, UDP packets can be a less intensive alternative to the TCP routing solution. Or, if you're really hardcore, you could implement your own verification on top of UDP, but that would be a blog post in itself 🙂

Overall, setting up Kubernetes and getting it to expose ports was very simple. If you are reading this without any Kubernetes setup, go check out minikube. Even better, you could set up a GCP cluster, vSphere, or (gasp) AWS and follow along. The kubectl commands should be about the same either way.

Once you've got your instance set up, check out our kube-udp-tennis repo on GitHub. We use this repo to store very simple Python scripts that accept environment variables for ports and will either send or receive messages based on which script we execute. We also baked these into a Dockerfile so that Kubernetes can reference an image on Docker Hub.

Before you worry about deploying your own Docker images, know that you are not required to for this example. If you deploy the listener, add the service, and then deploy the server, you will have a working UDP connection, because the manifests reference our existing images already on Docker Hub. Before I go and give you the commands, I want to explain what they do.

from /udp_listen:
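Assuming the manifest file names referenced below, the first command is presumably:

```shell
kubectl create -f udplisten-deployment.yaml
```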

This command uses the udplisten-deployment.yaml file, which gives the specification for our udp-listen application. We spec this out so we can extend it with the udp-listen service.
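And the second, creating the service (file name again assumed from the repo):

```shell
kubectl create -f udplisten-service.yaml
```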

This command uses the udplisten-service.yaml file which, once the udplisten deployment is live, will allow us to talk to the port through the service functionality in Kubernetes. Here's the documentation for services.

At this point, we will have the kubernetes udplisten service running, and we will be ready to deploy our dummy application to talk into it.

from /udp_server:
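Again assuming the manifest name from the repo:

```shell
kubectl create -f udpserver-deployment.yaml
```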

This will deploy the udpserver application, and should ping messages into the udplisten-service, which you should see through the logs in the service’s pod.

The way that the udp-server.py application can find and ping into the udplisten-service is by leveraging the Kubernetes Service Functionality. Basically, when we start Kubernetes services, we will be able to find those services using environment variables. From the documentation:

For example, the Service "redis-master" which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11 produces the following environment variables:
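The variable listing from the Kubernetes documentation, shown here as shell-style assignments:

```shell
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```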

We therefore look up the udplistener_service_host and udplistener_service_port variables to communicate with the udplistener pods directly. Since we defined UDP as the protocol for network traffic into the service, this works right out of the box!

Thanks for reading everyone, as always, reach out to us on twitter @DellEMCDojo, or me specifically @garypwhitejr, or post on the blog to get some feedback. Let us know what you think!

Until next time,

Cloud Foundry, Open Source, The way. #DellEMCDojo

Spreading The Way: Announcing the Dojo in Bangalore!


Emily Kaiser


It is with unbelievable excitement that we are officially announcing the opening of our third global branch with a Dell EMC Dojo in Bangalore! By sharing our DevOps and Extreme Programming culture, including but not limited to the practices of pair programming, test-driven development, and lean product development at scale, we have the deepest confidence that Bangalore is the geographical mecca that sets the tone of the Digital Transformation we hope for in the larger company.

So what does this mean beyond the logistical rollercoaster that comes with opening a new office? Well, I’m glad you asked!

We are Hiring! Over the next few weeks, we will be rapidly and qualitatively (only because how else would we operate?) looking for and interviewing developers and product managers interested in becoming a part of this exciting new Dojo from its inception. So, if you know of anyone in the area that may be interested, please point them in the direction of Sarv Saravanan (sarv.saravanan@emc.com) who will be handling the process on the ground.

 

Otherwise, stay tuned on our team’s impending growth, engagement (both here and in India), and overall adventure!

Until next time…

 

Running Legacy Apps on CloudFoundry with NFS: How to re-platform your apps and connect to existing shared volumes using CloudFoundry Volume Services


This week the Cloud Foundry Diego Persistence team released the 1.0 version of our nfs-volume-release for existing NFS data volumes.  This Bosh release provides the service broker and volume driver components necessary to quickly connect Cloud Foundry deployed applications to existing NFS file shares.

In this post, we will take a look at the steps required to add the nfs-volume-release to your existing Cloud Foundry deployment, and the steps required after that to get your existing file system based application moved to Cloud Foundry.

Deploying nfs-volume-release to Cloud Foundry

If you are using OSS Cloud Foundry, you'll need to deploy the service broker and driver into your Cloud Foundry deployment. To do this, you will need to colocate the nfsv3driver on the Diego cells in your Cloud Foundry deployment, and then run the nfs service broker either as a Cloud Foundry application or a BOSH deployment.

Detailed instructions for deploying the driver are here.

Detailed instructions for deploying the broker are here.

If you are using PCF, nfs-volume-release is built in.  As of PCF 1.10, you can deploy the broker and driver through simple checkbox configuration in the Advanced features tab in Ops Manager.  Details here.

Moving your application into Cloud Foundry

There are a range of issues you might hit when moving a legacy application from a single server context into Cloud Foundry, and most of them are outside the scope of this article. See the last section of this article for a good reference discussing how to migrate more complex applications. For the purposes of this article we'll focus on a relatively simple content application that's already well suited to run in CF except that it requires a file system. We'll use servocoder/RichFileManager as our example application. It supports a couple different HTTP backends, but we'll use the PHP backend in this example.

Once you have cloned the RichFileManager repository and followed the set up instructions, you should theoretically be able to run the application in Cloud Foundry’s php buildpack with a simple cf push from the RichFileManager root directory:
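Assuming an app name (richfilemanager is a placeholder throughout this post), that push is just:

```shell
cf push richfilemanager
```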

But RichFileManager requires the gd package, which isn't included by default in the php buildpack. If we push the application as-is, file upload operations will fail when RichFileManager tries to create thumbnail images for uploaded files. To fix this, we need to create a .bp-config directory in the root folder of our application and put a file named options.json in it with the following content:
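A sketch of that options.json, assuming the php buildpack's PHP_EXTENSIONS option is what pulls in gd:

```json
{
  "PHP_EXTENSIONS": ["gd"]
}
```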

Re-pushing the application fixes the problem.  Now we are able to upload files and use all the features of RichFileManager:

But we aren't done yet! By default, the RichFileManager application stores uploaded file content in a subdirectory of the application itself. As a result, any file data will be treated as ephemeral by Cloud Foundry and discarded when the application restarts. To see why this is a problem, upload some files, and then type:
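That is, restart the app (name is a placeholder):

```shell
cf restart richfilemanager
```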

When you refresh the application in your browser, you'll see that your uploaded files are gone! That's why you need to bind a volume service to your application.

In order to do that, we first need to tweak the application a little to tell it that we want to put files in an external folder. Inside the application, open connectors/php/config.php in your editor of choice, and change the value of "serverRoot" to false. Also set the value of "fileRoot" to "/var/vcap/data/content". (As of today, Cloud Foundry has the limitation that volume services cannot create new root-level folders in the container. Soon that limitation will be lifted, but in the meantime, /var/vcap/data is a safe place to bind our storage directory to.)

Now push the application again:
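Again with the placeholder app name:

```shell
cf push richfilemanager
```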

When you go back to the application, you should see that it is completely broken and hangs waiting to get content. That's because we told it to use a directory that doesn't yet exist. To fix that, we need to create a volume service and bind it to our application. You can follow the instructions on the nfs-volume-release to set up an NFS test server in your environment, or if you already have an NFS server available (for example, Isilon, ECS, NetApp or the like) you can skip the setup steps and go directly to the service broker registration step. Once you have created a volume service instance, bind that service to your application:

If you are using an existing NFS server, you will likely need to specify different values for uid and gid. Pick values that correspond to a user with write access to the share you're using.
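A sketch of the bind step. The app and service instance names are placeholders, and the uid/gid/mount keys follow the nfs-volume-release bind configuration, with the mount path matching the fileRoot set above:

```shell
cf bind-service richfilemanager myVolume \
  -c '{"uid":"1000","gid":"1000","mount":"/var/vcap/data/content"}'
```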

Now restage the application:
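With the placeholder app name:

```shell
cf restage richfilemanager
```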

You should see that the application now works properly again. Furthermore, you can now "cf restart" your application, and "cf scale" it to run multiple instances, and it will continue to work and to serve up the same files.

Caveats

Volume services enable filesystem based applications to overcome a major barrier to cloud deployment, but they will not enable all applications to run seamlessly in the cloud.  Applications that rely on transactions across http requests, or otherwise store state in memory will still fail to run properly when scaled out to more than one instance in cloud foundry.  CF provides best-effort session stickiness for any application that sets a JSESSIONID cookie, but no guarantees that traffic will not get routed to another instance.

More detail on steps to make complex applications run in the cloud can be found in this article.

Doers for Today, Visionaries for Tomorrow, and Change Agents for Always! We Truly Are The Trifecta


Emily Kaiser


The Dell EMC Dojo has a mission that is two-fold; we contribute to open source Cloud Foundry, and we evangelize ‘the way’ (XP, Lean Startup, etc) by engaging with internal Dell EMC teams in a purely DevOps manner. Our mission is direct with a scope that some could argue is boundless. By practicing ‘the way,’ hours, days, and weeks fly by as we push code to production at times every few minutes. Not only is our push to market rapid, so is our overall productivity. Oftentimes teams working with us nearly guffaw when they come to our office in Cambridge, MA and are able to see the ‘wizard(s) behind the curtain.’ We are asked how we keep three to five projects on track while also engaging with internal teams, planning large technical conferences, and working in the realm of R&D in our greater TRIGr team with an east coast contingent of only eight people and a west coast contingent of five. The secret? We LOVE what we do!

 

A team with empathy at its core, there is never a moment when a task seems impossible.

Truly, we could be featured on one of those billboards along the highway stating that there is no ‘I’ in ‘Team.’ Two baseball players carrying an opposing team member across Home because she/he has hurt themselves, a photo of all of the characters from Disney’s “The Incredibles,” The Dell EMC Dojo team… Take your pick… TEAMWORK. Pass It On.

In all seriousness, the pace at the Dojo can be absolutely exhausting, and with such a small team, the absence of one person (and let's face it, vacation and life need to happen at points) could in theory be a huge deal. But because DevOps is what we live and breathe, any member of the team can fill the gap at any point, truly putting into practice the idea that there doesn't have to be, and should never be, a single point of failure. Whatever the industry or sector, what better embodies the 'Go Big, Win Big' message than this? By continually pushing ourselves to pair and to up-level the knowledge of the entire team, we never wait until tomorrow to take action. There is no need or desire to.

 

Agility is not a term we just talk about, but is simply inherent to everything we do.

With the combination of the rapidly changing market (externally and internally) and the pace in which we work, we at the Dojo have learned that we must stay on our toes. For those reading this that are familiar with sports, one of the first lessons learned in soccer is to never plant your feet. Holding such a stance allows for the opposing team to outpace you when the unexpected happens, which is most of the time. Same goes here. Pivoting is now second nature for us, and it doesn’t come with the scares. Instead, it is actually exciting when we are able to take data and identify ways in which we can better align with efficiency and effectiveness of software and methodology; to truly keep the user omnipresent in everything we do. We are happier. The ‘customer’ is happier. It is a (Go Big) win-win (Big) game. The cool thing too is that the more we practice this, the more we also feel somewhat like we can predict the future, because we begin to see trends before they are even a thing.

 

 

Doers for Today. Visionaries for Tomorrow. Change Agents for Always.

Cloud Foundry Certified Developer Program: Your Exclusive Chance to be in the BETA


Emily Kaiser


It is with unbridled excitement that we continue to prepare for Cloud Foundry Summit in Santa Clara. Here at the DellEMC Dojo and the larger Technology, Research, Innovation Group of DellEMC, gears are constantly turning and research and development is moving at an unprecedented pace. We are so excited to share some of these findings and demos with all those related to and involved with the Foundation.

While all is under works, we want to ensure that we and all those we care about are taking up every opportunity possible to continue to develop ourselves further as thought provoking leaders in the industry and in the world. Part of this is keeping ourselves relevant in what exists and then further acting as promoters for the solutions we have found work best. Which brings me to the reason for writing this blog post.

If you have not yet heard, The Cloud Foundry Foundation has been working toward the launch of a "Cloud Foundry Certified Developer" program. The date is getting closer and closer to its unveiling, and it is now more pressing to share the intent of the program so that all wanting and willing to get involved take the opportunity to do so.

The intent of the program is to use a performance-based testing methodology to certify that individual developers/engineers have the skills necessary to be productive developing applications on top of the Cloud Foundry platform. This will allow for a better streamlined data-driven approach to the way we contribute to the Open Source community and our more intimate communities as well. To be active leaders in this realm should be something we are striving for daily. To be PRODUCTIVE active leaders who can then go forth and teach with a congruent and strong set of skills is the mark we should be making, reaching and expanding. So here is how:

The Foundation is currently accepting applications from individuals that may want to participate in the BETA or early access program. If you (or anyone in your organization) are interested in being among the first certified developers, please use the Google Form found here to register interest: https://docs.google.com/forms/d/e/1FAIpQLSeXtGUMLyJ3NkJQLnWhXCafh3SgziHr1fsSYM7mXi6JPcLaPw/viewform

A NOTE: All applications for participation should be filed by March 10th.

Space is limited, so the BETA program will be short-listed to 30 candidates, and the Foundation will communicate with everyone completing the form as to their acceptance into the program. Those that are not accepted will be offered an opportunity to enter the early access phase of the rollout. Developers that pass the exam either during the BETA or early access will get a fancy CF Certified Dev sweatshirt, too... so no harm, no foul!

The BETA period will begin on March 20th and close on March 31st. Candidates will be asked to use the system to schedule a four hour exam window during the two weeks of BETA testing.

Passing the exam requires practical hands-on skills in the following: (1) CF Basics, (2) Troubleshooting Applications and CF Configurations, (3) Application Security, (4) Working with Services, (5) Application Management, (6) Cloud Native Architectural Principles, and (7) Container Management within CF. Candidates should also be comfortable modifying simple Java, Node.js, or Ruby applications.

You think you have what it takes? We think you do too. So apply today!
