Xuebin He

Using Docker Container in Cloud Foundry

As we all know, we can push source code to CF directly, and CF will compile it and create a container to run our application. Life is so great with CF.

But sometimes we may already have a preconfigured container for our app, perhaps because the app needs a special setup or because we want to run it on different platforms or infrastructures. This won't block our way to CF at all. This post will show you how to push Docker images to CF.

Enable docker feature for CF

We can turn on Docker support with the following cf command:

  cf enable-feature-flag diego_docker

We can also turn it off by

  cf disable-feature-flag diego_docker
Push docker image to CF

  cf push cf-docker -o golang/alpine

Unlike the normal way, CF won't try to build our code and run it inside the image we specified. CF assumes that you have already put everything you need into your Docker image, so we have to rebuild the Docker image every time we push a change to our repository.
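Because of this, a common pattern is to bake the app and all of its dependencies into the image up front. A minimal sketch of such a Dockerfile (the image name and paths here are hypothetical, not from the original post):

```dockerfile
# bake everything the app needs into the image ahead of time;
# CF will run this image as-is and won't rebuild anything at push time
FROM golang:alpine
RUN apk add --no-cache git
COPY . /app
WORKDIR /app
RUN go build -o main .
```

With everything preinstalled like this, the start command only has to launch the binary.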

We also need to tell CF how to start our app inside the image by specifying the start command. We can either pass it as an argument to cf push or put it into manifest.yml as below.

applications:
- name: cf-docker
  command: git clone <demo-repo-url> && cd cf-docker && mkdir -p app/tmp && go run main.go

In this example, we are using an official Docker image from Docker Hub. In the start command, we clone our demo repo from GitHub, set up a few directories, and run our code.

Update Diego with private docker registry

If you are in the EMC network, you may not be able to use Docker Hub due to certificate issues. In this case, you need to set up a private Docker registry. The registry needs to be V2 for now. Also, you have to redeploy your CF or Diego with the changes shown below.


Replace the values with your own Docker registry IP and port.
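As a rough sketch of those manifest changes, diego-release exposes insecure-registry list properties along these lines (the exact property paths vary by release version, and the IP and port below are placeholders, so check your Diego release's job specs):

```yaml
properties:
  diego:
    garden-linux:
      insecure_docker_registry_list:
      - "10.244.2.6:5000"   # placeholder: your registry IP and port
    stager:
      insecure_docker_registry_list:
      - "10.244.2.6:5000"
```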

Then, you need to create a security group so that staging containers can reach your private Docker registry. Put the definition of this security group into docker.json as shown below, filling in destination with your registry's IP.

  [
    {
      "destination": "",
      "protocol": "all"
    }
  ]

And run

  cf create-security-group docker docker.json
  cf bind-staging-security-group docker

Now you can re-push to CF, pointing -o at the image in your private registry:

  cf push cf-docker -o <registry-ip>:<port>/<image-name>

How to Set up a Concourse Pipeline

Xuebin He, Dojo Developer

The first step to continuous integration is setting up your own CI pipeline. The #EMCdojo uses Concourse for our own pipeline and we love it! Concourse (the official CI tool for Cloud Foundry) can pull committed code and run tests against it, and even create a release after passing tests.

Before I tell you HOW, I’ll tell you WHY

In our workspace, our pipeline monitor is displayed on a wall right next to the team. A red box (aka failed task) is a glaring indicator that something went wrong. Usually the first person who notices shouts out “Ooh! What happened?” and then we roll up our sleeves and start debugging. Each job block can be clicked on to get output logs about what happened. The Concourse CLI lets you ‘hijack’ the container running the job for hands-on debugging. Combining these tools, it’s usually fairly quick to find a problem and fix it.

Having this automated setup, it’s easy to push small features one at a time into production and see their immediate effect on the product. We can see if the feature breaks any existing tests (unit, integration, lifecycle, etc). We also push new tests with the new feature and those are added to the pipeline. At the end of the pipeline, we know for sure if the feature is done, or still needs more work.

Step 1: Set up Concourse

Set up Server

The easiest way to set up Concourse is using Vagrant:

vagrant init concourse/lite
vagrant up

You can access your Concourse UI at the address Vagrant assigns; for the concourse/lite box this is typically http://192.168.100.4:8080.

Download the Concourse CLI

The Concourse website only lets you start, pause, and stop pipelines or tasks. If you want to configure the pipeline, you have to download fly from Concourse. Fly is the name of the Concourse CLI.
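For example, fly can be fetched from the Concourse web node's CLI download endpoint (the address below is the concourse/lite default; the endpoint path is an assumption, so verify it against your Concourse version):

```shell
# build the download URL for the linux/amd64 fly binary
CONCOURSE_URL="http://192.168.100.4:8080"   # concourse/lite default address
FLY_URL="${CONCOURSE_URL}/api/v1/cli?arch=amd64&platform=linux"
echo "$FLY_URL"
# then download and install it:
#   curl -o fly "$FLY_URL" && chmod +x fly && sudo mv fly /usr/local/bin/
```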

Step 2: Configure Pipeline

Make a CI Folder

You can create a CI folder under the root of your project, with a layout along these lines:

  ci/
  |____docker
  | |____bosh.rackhd-cpi-release
  | | |____Dockerfile
  |____pipeline.yml
  |____tasks

pipeline.yml will define what your pipeline looks like.


groups:
- name: bosh-rackhd-cpi
  jobs:
  - integration
  - lifecycle
  - setup-director
  - bats-centos
  - bats-ubuntu
  - promote-candidate

jobs:
- name: integration
  serial: true
  plan:
  - aggregate:
    - {trigger: true, get: bosh-cpi-release, resource: bosh-rackhd-cpi-release}
    - put: emccmd-env-ci
      params: {acquire: true}
  - task: test
    file: bosh-cpi-release/ci/tasks/integration.yml
    params:
      RACKHD_API_URL: {{rackhd_server_url}}
  - put: emccmd-env-ci
    params: {release: emccmd-env-ci}
- name: lifecycle
- name: setup-director
- name: bats-centos
- name: bats-ubuntu
- name: promote-candidate

resources:
- name: bosh-rackhd-cpi-release
  type: git
  source:
    branch: master
    private_key: {{github_key__bosh-rackhd-cpi-release}}
    ignore_paths:
    - .final_builds/**/*.yml
    - releases/**/*.yml

So now your pipeline should show up in the Concourse UI with the groups and jobs above.


Using groups, we can make different combinations of jobs. Each job can have several tasks. The tasks are located in ci/tasks/*.yml.


---
platform: linux
image: docker:///emccmd/rackhd-cpi

inputs:
- name: bosh-cpi-release
- name: release-version-semver

outputs:
- name: promote

run:
  path: bosh-cpi-release/ci/tasks/

params:
  S3_ACCESS_KEY_ID: replace-me
  S3_SECRET_ACCESS_KEY: replace-me

This defines a task. A task is like a function from inputs to outputs that succeeds or fails. Each task runs in a separate container, so you have to give the address of the Docker image that you want to use. You can put the Dockerfile under ci/docker/. Inputs are already defined in pipeline.yml; the duplication here makes it easy to run one-off tests. The outputs of a task can be reused by later tasks in the same job.

Make a Secret File

You have to generate a secrets file that contains all of the variables required by the pipeline. All required variables appear in pipeline.yml wrapped in double curly braces.


agent_public_key: c3NoLXJzYSBBQUFBQjNOemFDMYwo=
github_key__rackhd: |
  ...

Set Pipeline

fly set-pipeline -p pipeline-name -c repo/ci/pipeline.yml -l secrets.yml

Start Pipeline

The initial state of the pipeline is paused. You have to start it by clicking the menu button on the Concourse website OR with fly by:

fly unpause-pipeline -p pipeline-name

Run One-off

You can run a one-off test for a specific job. This will not show up in the pipeline.

#!/usr/bin/env bash
set -x -e

fly execute -c $HOME/workspace/repo/ci/tasks/bats.yml \
  -i bosh-cpi-release=$HOME/workspace/repo/

Environment variables go on the lines above fly execute, and the -i flags below it supply the inputs of that task. Both are already declared in ci/tasks/*.yml.


You can hijack into the container that's running the task that you want to debug by:

fly intercept -j pipeline-name/job-name -b build-no-of-job

If you ran a one-off, you can just run:

fly intercept -b build-no

You can find build number by clicking the top right button on your pipeline page.

And you’re done! And remember: Continuous Integration = Continuous Confidence.

If you have any questions, please comment below.
