
Deploy Kubernetes on vSphere using BOSH – Kubo

Introduction


During Cloud Foundry Summit 2017, Kubo was released. The name comes from combining Kubernetes and Bosh. With Kubo, we can deploy Kubernetes on many different IaaS platforms using Bosh. It’s the first step toward integrating Kubernetes into Cloud Foundry.

In this post, we are going to deploy a Kubernetes instance on vSphere using Bosh.

Prerequisite


We assume you already have a Bosh Director running, with one public network and one private network ready on vSphere. Your cloud-config would look like this:

cloud-config.yml
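The original cloud-config listing is not reproduced here; the sketch below shows the general shape such a file might take for vSphere. All capitalized names and the IP ranges are placeholders for your own environment.

```yaml
azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: DATACENTER_NAME
      clusters:
      - CLUSTER_NAME: {}

vm_types:
- name: default
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 10240

disk_types:
- name: default
  disk_size: 10240

networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns: [8.8.8.8]
    az: z1
    cloud_properties:
      name: PRIVATE_NETWORK_NAME
- name: public
  type: manual
  subnets:
  - range: 12.34.56.0/24
    gateway: 12.34.56.1
    dns: [8.8.8.8]
    az: z1
    cloud_properties:
      name: PUBLIC_NETWORK_NAME

compilation:
  workers: 3
  az: z1
  vm_type: default
  network: private
```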

All capitalized fields and IP fields should be replaced with the correct values for your vSphere environment.

We use our Bosh Director as the private network gateway by setting up iptables on it, following these instructions.

Deploy


We are going to use kubo-release from the Cloud Foundry community. More deployment instructions can be found here.

1. Download releases

We need to download three releases: kubo, etcd, and docker, then upload them to the Bosh Director.
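Uploading can be done with the bosh CLI. The bosh.io URLs below are illustrative; check the kubo-release documentation for the exact releases and versions to use.

```shell
# URLs are examples; pick the tarballs that match your kubo-release version
bosh upload-release https://bosh.io/d/github.com/cloudfoundry-incubator/kubo-release
bosh upload-release https://bosh.io/d/github.com/cloudfoundry-incubator/etcd-release
bosh upload-release https://bosh.io/d/github.com/cloudfoundry-incubator/docker-boshrelease
```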

2. Generate certificates

Kubernetes requires certificates for communication between the API server and the kubelets, and also between clients and the API server. The following script will do the job for us. Replace API_PRIVATE_IP and API_PUBLIC_IP with the private and public IPs of the Kubernetes API server.

key-generator.sh
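The original key-generator.sh is not shown here; below is a minimal openssl-based sketch of what such a script could do: create a self-signed CA, issue an API server certificate carrying both IPs as subject alternative names, and issue a client certificate. The file names and subjects are assumptions.

```shell
#!/bin/bash
set -e
# Placeholders: replace with the real API server IPs
API_PRIVATE_IP=${API_PRIVATE_IP:-10.0.0.10}
API_PUBLIC_IP=${API_PUBLIC_IP:-203.0.113.10}

# Self-signed CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=kubernetes-ca"

# API server certificate with both IPs as SANs
openssl req -newkey rsa:2048 -nodes \
  -keyout apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"
printf 'subjectAltName = IP:%s,IP:%s\n' "$API_PRIVATE_IP" "$API_PUBLIC_IP" > san.cnf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -extfile san.cnf -out apiserver.crt

# Client certificate for kubectl and the kubelets
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=kube-admin"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out client.crt
```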

3. Fill bosh deployment manifest

Replace the highlighted fields with the correct values, and paste the contents of the certificate files generated above into the corresponding fields.

kubernetes.yml

In order to access the deployed Kubernetes instance, we need to create a config file:

~/.kube/config
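The original config listing is not shown; a kubeconfig for this setup might look like the sketch below. The port, paths, and names are assumptions to replace with your own values.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubo
  cluster:
    certificate-authority: /path/to/ca.crt
    server: https://API_PUBLIC_IP:8443
users:
- name: admin
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: kubo
  context:
    cluster: kubo
    user: admin
current-context: kubo
```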

After your Bosh deployment is done, you should be able to type kubectl cluster-info and see information about your running cluster.

Test


We can test our Kubernetes cluster by creating a simple Redis deployment using the following deployment file:

redis.yml
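The original redis.yml is not shown; a minimal deployment of this shape (API version and labels reflect Kubernetes of that era and are assumptions) might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: redis
        ports:
        - containerPort: 6379
```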

Running kubectl create --filename redis.yml will deploy Redis. If we type kubectl describe pods redis-master, we should not see any errors.

If you have any questions, leave a comment here or email xuebin.he@emc.com. Thank you!

Deploy Kafka cluster by Kubernetes

Introduction


This blog will show you how to deploy an Apache Kafka cluster on Kubernetes. We assume you already have Kubernetes set up and running.

Apache Kafka is a distributed streaming platform that enables you to publish and subscribe to streams of records, similar to an enterprise messaging system.

There are a few concepts we need to know:

  • Producer: an app that publishes messages to a topic in the Kafka cluster.
  • Consumer: an app that subscribes to a topic in the Kafka cluster to receive messages.
  • Topic: a stream of records.
  • Record: a data block containing a key, a value, and a timestamp.

We borrowed some ideas from defuze.org and updated our cluster accordingly.

Pre-start


Zookeeper is required to run Kafka cluster.

In order to deploy Zookeeper in an easy way, we use a popular Zookeeper image from Docker Hub, digitalwonderland/zookeeper. We can create a deployment file zookeeper.yml which will deploy one Zookeeper server.

If you want to scale the Zookeeper cluster, you can duplicate the code block within the same file and change the configuration to the correct values. You also need to add ZOOKEEPER_SERVER_2=zoo2 to the container env of zookeeper-deployment-1 when scaling to two servers.

zookeeper.yml
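The original zookeeper.yml is not reproduced; a single-server deployment along these lines (names and API version are assumptions) might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-deployment-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
      - name: zoo1
        image: digitalwonderland/zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zoo1
```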

We can deploy this by:
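Assuming the deployment above is saved as zookeeper.yml, one way to create it is:

```shell
kubectl create -f zookeeper.yml
```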

It’s good to have a service for the Zookeeper cluster, so we have a file zookeeper-service.yml to create one. If you scale up the Zookeeper cluster, you also need to scale up the service accordingly.

zookeeper-service.yml
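The original service file is not shown; a sketch exposing the standard Zookeeper ports for the first server (names are assumptions) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-1
```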

Deploy Kafka cluster


Service

We need to create a Kubernetes service first to front our Kafka cluster deployment. Kafka has no cluster-wide leader at the broker level, so clients can talk to any broker. Because of that, we can redirect our traffic to any of the Kafka servers.

Let’s say we want to route all our traffic to our first Kafka server, the one with id: "1". We can create a service for Kafka with a file like this:

kafka-service.yml
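The original kafka-service.yml is not shown; a sketch that selects the broker with id: "1" and exposes it externally (names and the LoadBalancer type are assumptions) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  type: LoadBalancer
  ports:
  - port: 9092
    targetPort: 9092
    protocol: TCP
  selector:
    app: kafka
    id: "1"
```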

After the service is created, we can get the external IP of the Kafka service by:
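Assuming the service is named kafka-service:

```shell
kubectl get service kafka-service
# the EXTERNAL-IP column is populated once the load balancer is provisioned
```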

Kafka Cluster

There is already a well-defined Kafka image on Docker Hub. In this blog, we are going to use the image wurstmeister/kafka to simplify the deployment.

kafka-cluster.yml
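The original kafka-cluster.yml is not shown; a single-broker deployment along these lines (names, API version, and the advertised host placeholder are assumptions) might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-broker1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        id: "1"
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: KAFKA_EXTERNAL_IP   # external IP of kafka-service
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zoo1:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_CREATE_TOPICS
          value: topic1:3:3
```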

If you want to scale up the Kafka cluster, you can always duplicate a deployment in this file, changing KAFKA_BROKER_ID to another value.

KAFKA_CREATE_TOPICS is optional. If you set it to topic1:3:3, it will create topic1 with 3 partitions and 3 replicas.

Test Setup

We can test the Kafka cluster with a tool named kafkacat, which can act as both a producer and a consumer.
To publish system logs to topic1, we can type:
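With kafkacat installed, a producer invocation might look like this (the broker address is a placeholder for your Kafka service’s external IP):

```shell
tail -f /var/log/syslog | kafkacat -P -b KAFKA_EXTERNAL_IP:9092 -t topic1
```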

To consume the same logs, we can type:
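A matching consumer invocation, with the same placeholder broker address:

```shell
kafkacat -C -b KAFKA_EXTERNAL_IP:9092 -t topic1
```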

Upgrade Kafka


Blue-Green update

Kafka itself supports rolling upgrades; you can find more details on this page.

Since we can access Kafka through any broker of the cluster, we can upgrade one pod at a time. Let’s say our Kafka service is routing traffic to broker1: we can upgrade all the other broker instances first, then change the service to route traffic to any of the upgraded brokers, and finally upgrade broker1.

We can upgrade a broker by replacing the image with the version we want, for example:

image: wurstmeister/kafka:$NEW_VERSION, then do:
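Assuming the deployment lives in kafka-cluster.yml, applying the edited file rolls the pod:

```shell
kubectl apply -f kafka-cluster.yml
```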

After applying the same procedure to all other brokers, we can edit our service by:
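Assuming the service is named kafka-service, it can be edited in place with:

```shell
kubectl edit service kafka-service
```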

Change id: "1" to another upgraded broker, then save and quit. All new connections will be established to the new broker.
Finally, we can upgrade broker1 using the steps above, though this will kill existing producer and consumer connections to broker1.

Using Docker Container in Cloud Foundry


As we all know, we can push source code to CF directly, and CF will compile it and create a container to run our application. Life is so great with CF.

But sometimes, because our app needs a special setup or we want to run it on different platforms or infrastructures, we may already have a preconfigured container for it. This won’t block our way to CF at all. This post will show you how to push Docker images to CF.

Enable docker feature for CF

We can turn on Docker support with the following cf command:
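The relevant feature flag is named diego_docker:

```shell
cf enable-feature-flag diego_docker
```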

We can also turn it off with:
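Disabling uses the same flag:

```shell
cf disable-feature-flag diego_docker
```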

Push docker image to CF

Unlike the normal workflow, CF won’t try to build our code and run it inside the image we specified; CF assumes you already put everything you need into your Docker image. We have to rebuild the Docker image every time we push a change to our repository.

We also need to tell CF how to start our app inside the image by specifying the start command. We can either pass it as an argument to cf push or put it into manifest.yml as below.
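The original manifest is not shown; a manifest for a Docker-based app might look like the sketch below. The app name, image, repo URL, and start command are illustrative.

```yaml
applications:
- name: docker-demo
  memory: 512M
  instances: 1
  docker:
    image: python:3
  command: git clone https://github.com/our-org/demo.git && cd demo && python app.py
```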

In this example, we are using an official Docker image from Docker Hub. In the start command, we clone our demo repo from GitHub, do something, and run our code.

Update Diego with private docker registry

If you are in the EMC network, you may not be able to use Docker Hub due to certificate issues. In this case, you need to set up a private Docker registry. The registry needs to be V2 for now. Also, you have to redeploy your CF or Diego with the changes shown below.
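The original manifest diff is not shown. One common change is whitelisting the registry as insecure in the Diego/Garden deployment manifest; the exact property name varies between release versions, so treat this as a sketch and check your garden release spec:

```yaml
properties:
  garden:
    insecure_docker_registry_list:
    - 12.34.56.78:9000
```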

Replace 12.34.56.78:9000 with your own Docker registry IP and port.

Then, you need to create a security group so that apps can reach your private Docker registry. You can put the definition of this security group into docker.json as shown below.
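The original docker.json is not shown; a security group rule of this shape, reusing the registry address from above, might look like:

```json
[
  {
    "protocol": "tcp",
    "destination": "12.34.56.78",
    "ports": "9000"
  }
]
```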

And run
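The group name here is an example; binding it to running apps makes it take effect:

```shell
cf create-security-group docker docker.json
cf bind-running-security-group docker
```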

Now you can re-push to CF by
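The app name and image path below are examples; -o is the cf CLI’s flag for specifying a Docker image:

```shell
cf push my-app -o 12.34.56.78:9000/my-image:latest
```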

How to Set up a Concourse Pipeline

Xuebin He, Dojo Developer

The first step to continuous integration is setting up your own CI pipeline. The #EMCdojo uses Concourse for our own pipeline and we love it! Concourse (the official CI tool for Cloud Foundry) can pull committed code and run tests against it, and even create a release after the tests pass.

Before I tell you HOW, I’ll tell you WHY

In our workspace, our pipeline monitor is displayed on a wall right next to the team. A red box (aka failed task) is a glaring indicator that something went wrong. Usually the first person who notices shouts out “Ooh! What happened?” and then we roll up our sleeves and start debugging. Each job block can be clicked on to get output logs about what happened. The Concourse CLI lets you ‘hijack’ the container running the job for hands-on debugging. Combining these tools, it’s usually fairly quick to find a problem and fix it.

Having this automated setup, it’s easy to push small features one at a time into production and see their immediate effect on the product. We can see if the feature breaks any existing tests (unit, integration, lifecycle, etc). We also push new tests with the new feature and those are added to the pipeline. At the end of the pipeline, we know for sure if the feature is done, or still needs more work.

Step 1: Set up Concourse

Set up Server

The easiest way to set up Concourse is using Vagrant:
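The concourse/lite Vagrant box brings up a single-node Concourse:

```shell
vagrant init concourse/lite
vagrant up
```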

You can access your Concourse at 192.168.100.4:8080.

Download Concourse cli

You can only start, pause, and stop pipelines or tasks from the Concourse website. If you want to configure the pipeline, you have to download fly, the Concourse CLI, from your Concourse instance.
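One way to fetch fly is from the Concourse API itself; the URL below assumes the Vagrant address from earlier and a Linux workstation:

```shell
curl 'http://192.168.100.4:8080/api/v1/cli?arch=amd64&platform=linux' -o fly
chmod +x fly
sudo mv fly /usr/local/bin/
```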

Step 2: Configure Pipeline

Make a CI Folder

You can generate your CI folder under the root of your project. See code block below.
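A typical layout might be created like this; the file names are examples, not a requirement:

```shell
# Hypothetical CI folder layout under the project root
mkdir -p ci/tasks ci/docker
touch ci/pipeline.yml ci/secrets.yml
touch ci/tasks/unit-tests.yml ci/tasks/unit-tests.sh
touch ci/docker/Dockerfile
```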

pipeline.yml will define what your pipeline looks like.

pipeline.yml
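The original pipeline.yml is not reproduced; a minimal sketch with one group, one git resource, and one job might look like the following. The resource, job, and variable names are assumptions.

```yaml
groups:
- name: tests
  jobs: [unit-tests]

resources:
- name: repo
  type: git
  source:
    uri: {{git-uri}}
    branch: master

jobs:
- name: unit-tests
  plan:
  - get: repo
    trigger: true
  - task: unit-tests
    file: repo/ci/tasks/unit-tests.yml
```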

So now your pipeline should look like this:

pipeline

Using groups, we can make different combinations of jobs. Each job can have several tasks. The tasks are located in ci/tasks/*.yml.

task.yml
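The original task file is not shown; a sketch matching the pipeline above (image and script names are assumptions) might look like:

```yaml
platform: linux

image_resource:
  type: docker-image
  source:
    repository: {{docker-image}}

inputs:
- name: repo

run:
  path: repo/ci/tasks/unit-tests.sh
```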

This defines a task. A task is like a function from inputs to outputs that can succeed or fail. Each task runs in a separate container, so you need to give the address of the Docker image you want to use. You can put the Dockerfile under ci/docker/. Inputs are already defined in pipeline.yml; the duplication here makes it easy to run one-off tests. The outputs of a task can be reused by later tasks in the same job.

Make a Secret File

You have to create a secrets file that holds all of the variables required by the pipeline. The required variables appear in pipeline.yml wrapped in double curly braces.

secrets.yml
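The original secrets.yml is not shown; it would simply map each double-curly-brace variable to a value. The entries below are examples matching the hypothetical variables used earlier:

```yaml
git-uri: https://github.com/our-org/our-repo.git
docker-image: our-org/ci-image
```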

Set Pipeline
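With fly targeted at your Concourse instance, the pipeline can be set from the config and secrets files; the target and pipeline names here are examples:

```shell
fly -t lite login -c http://192.168.100.4:8080
fly -t lite set-pipeline -p my-pipeline -c ci/pipeline.yml -l ci/secrets.yml
```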

Start Pipeline

The initial state of the pipeline is paused. You have to start it by clicking the menu button on the Concourse website, or with fly:
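Assuming the target and pipeline names used when setting the pipeline:

```shell
fly -t lite unpause-pipeline -p my-pipeline
```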

Run One-off

You can run a one-off test for a specific job. This will not show up in the pipeline.

one-off.sh
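The original one-off.sh is not shown; a sketch of its shape, with environment variables above the fly execute call and inputs below it (the variable, task file, and input names are examples):

```shell
#!/bin/bash
# environment variables the task expects go above fly execute
DB_HOST=10.0.0.5 \
fly -t lite execute \
  --config ci/tasks/unit-tests.yml \
  --input repo=.   # map the local checkout to the task's input
```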

The lines above fly execute are the environment variables, and the lines below are the inputs of that task. Those are already defined in ci/tasks/*.yml.

Debug

You can hijack into the container that’s running the task you want to debug by:
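For a pipeline job, fly takes the pipeline/job pair and a build number; the names below are examples:

```shell
fly -t lite hijack -j my-pipeline/unit-tests -b 3
```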

If you run one-off, you can just run:
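With no job specified, fly prompts you to pick from the running containers:

```shell
fly -t lite hijack
```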

You can find the build number by clicking the top-right button on your pipeline page.

And you’re done! And remember: Continuous Integration = Continuous Confidence.

If you have any questions, please comment below.
