
Do You Hear That? It’s the sound of Keyboards! Call for Papers | Cloud Foundry Summit Silicon Valley 2017 is quickly approaching!


Emily Kaiser


Head of Marketing @DellEMCDojo #CloudFoundry #OpenSource #TheWay #LeanPractices #DevOps #Empathy

Our brains are on fire, our keyboards are hot, and the joke in the office the past few days has been about our extreme excitement for the eventual need to buy sunscreen, since our Boston winter leaves us Vitamin D deprived. Why is this the case, you may or may not be asking? Well, I plan on telling you anyway, because it is just too exciting not to share!

Our team is preparing for CLOUD FOUNDRY SUMMIT SILICON VALLEY! We felt a social duty to let everyone we care about, and want there with us for what’s sure to be the summit of the summer (how can it not be, when it is being held in June in Santa Clara?!), know that the last call for papers is quickly approaching (no, seriously, it’s this Friday, February 17th).

Just as a refresher for those on the fence, Cloud Foundry Summit is the premier event for enterprise app developers. This year the Foundation found, through market research and feedback, that interest and industry need center on innovation and on streamlining development pipelines. For this reason, Summit 2017 is honing in on microservices and continuous delivery in developers’ language and framework of choice, and the session tracks will be Use Cases, Core Project Updates, Experiments, Extension Projects, and Cloud Native Java. Each session chosen for the conference is allowed one (1) primary speaker and one (1) co-speaker. The primary speaker receives a complimentary conference pass, while the co-speaker receives a discounted conference pass. So what’s stopping us from getting involved? Absolutely NOTHING!

As a sneak peek at a few of the topics our team has submitted for approval, see below:

  • Adopting DevOps and Building a Cloud Foundry Dojo (Lessons Learned)
  • Lift & Shift Your Legacy Apps to Cloud Foundry
  • How to Develop Scalable Cloud Native Application with Cloud Foundry
  • Enabling GPU-as-a-Service in Cloud Foundry
  • Blockchain as a Service
  • Avoiding pitfalls while migrating BOSH deployments
  • Spring Content: Cloud-Native Content Services for Spring


So, now what’s stopping YOU from getting involved? Submit papers here: https://www.cloudfoundry.org/cfp-2017/ and/or register here: https://www.regonline.com/registration/Checkin.aspx?EventID=1908081&utm_source=flash&utm_campaign=summit_2017_sv&utm_medium=landing&utm_term=cloud%20foundry%20summit&_ga=1.199163247.1732851993.1460056335

Last but definitely not least, let us know if you plan on coming—we are more than happy to share sunscreen 🙂 We cannot wait to see you there!

Continuous Integration at the #EMCDojo


Brian Roche

Brian Roche is Senior Director and Leader of Dell EMC’s Cloud Platform Team. He is based in Cambridge, Massachusetts, USA at the #EMCDojo.

If you have worked on a project for the last 10 years and you have a nicely written (and probably bound) Product Requirements Document, then you’re probably working on a Waterfall project.  If you’re too busy to even ponder the first sentence because you’re overly concerned with the ’testing phase’ of the project that’s coming in a few months, then you’re definitely on a Waterfall project.

The truth is most of the IT projects in-flight today use the Waterfall methodology of Plan, Do, Check, Release.  These projects measure release cycles in months and sometimes years – not in minutes or seconds like most agile projects.  For them, releasing software is a BIG DEAL.  After all, they’ve been working on this software for 12-18 months.  These teams don’t practice the art of releasing software very often.  So what happens when we don’t build the muscle memory to do something?  We’re not very good at it.  Most Waterfall teams are not very good at releasing software to their customers.  Consequently, the product they produce reflects the fact that they’re out of shape when it comes to creating installers, packages and the release process in general.


How to Set up a Concourse Pipeline

Xuebin He, Dojo Developer


The first step to continuous integration is setting up your own CI pipeline. The #EMCdojo uses Concourse for our own pipeline and we love it! Concourse (the official CI tool for Cloud Foundry) can pull committed code and run tests against it, and even create a release after the tests pass.

Before I tell you HOW, I’ll tell you WHY

In our workspace, our pipeline monitor is displayed on a wall right next to the team. A red box (aka failed task) is a glaring indicator that something went wrong. Usually the first person who notices shouts out “Ooh! What happened?” and then we roll up our sleeves and start debugging. Each job block can be clicked on to get output logs about what happened. The Concourse CLI lets you ‘hijack’ the container running the job for hands-on debugging. Combining these tools, it’s usually fairly quick to find a problem and fix it.

With this automated setup, it’s easy to push small features one at a time into production and see their immediate effect on the product. We can see if the feature breaks any existing tests (unit, integration, lifecycle, etc). We also push new tests with the new feature and those are added to the pipeline. At the end of the pipeline, we know for sure if the feature is done, or still needs more work.

Step 1: Set up Concourse

Set up Server

The easiest way to set up Concourse is with Vagrant:

vagrant init concourse/lite
vagrant up

You can access your Concourse at http://192.168.100.4:8080

Download the Concourse CLI

The Concourse web UI only lets you start, pause, and stop pipelines or tasks. To configure a pipeline, you have to download fly, the Concourse CLI, from the Concourse web page.
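Assuming the Vagrant box above, a first session with fly typically looks like this (the target alias lite is an arbitrary name of my choosing, not a requirement):

```shell
# Log in to the Concourse from the Vagrant box above and save it
# under the target alias "lite" (any name works)
fly -t lite login -c http://192.168.100.4:8080

# Make sure the local fly version matches the server version
fly -t lite sync
```

Every later fly command takes the same -t flag to pick which Concourse installation to talk to.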

Step 2: Configure Pipeline

Make a CI Folder

Create a CI folder under the root of your project, laid out like the tree below.

.
|____pipeline.yml
|____docker
| |____bosh.rackhd-cpi-release
| | |____Dockerfile
|____tasks
| |____bats.sh
| |____bats.yml
| |____integration.sh
| |____integration.yml
| |____lifecycle.sh
| |____lifecycle.yml
| |____promote-candidate.sh
| |____promote-candidate.yml
| |____utils.sh

pipeline.yml will define what your pipeline looks like.

pipeline.yml

---
groups:
- name: bosh-rackhd-cpi
  jobs:
  - integration
  - lifecycle
  - setup-director
  - bats-centos
  - bats-ubuntu
  - promote-candidate

jobs:
- name: integration
  serial: true
  plan:
  - aggregate:
    - {trigger: true, get: bosh-cpi-release, resource: bosh-rackhd-cpi-release}
    - put: emccmd-env-ci
      params: {acquire: true}
  - task: test
    file: bosh-cpi-release/ci/tasks/integration.yml
    config:
      params:
        RACKHD_API_URL: {{rackhd_server_url}}
    on_failure:
      put: emccmd-env-ci
      params: {release: emccmd-env-ci}
- name: lifecycle
- name: setup-director
- name: bats-centos
- name: bats-ubuntu
- name: promote-candidate

resources:
- name: bosh-rackhd-cpi-release
  type: git
  source:
    uri: git@github.com:cloudfoundry-incubator/bosh-rackhd-cpi-release.git
    branch: master
    private_key: {{github_key__bosh-rackhd-cpi-release}}
    ignore_paths:
    - .final_builds/**/*.yml
    - releases/**/*.yml

So now your pipeline should look like this:

[Image: the rendered pipeline in the Concourse web UI]

Using groups, we can display different combinations of jobs. Each job can have several tasks, and the tasks live in ci/tasks/*.yml.
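For example, a second group could collect just the BATS jobs. The group below is a hypothetical addition, reusing the job names from the pipeline.yml above:

```yaml
groups:
- name: bosh-rackhd-cpi   # the full group, as in pipeline.yml
  jobs: [integration, lifecycle, setup-director, bats-centos, bats-ubuntu, promote-candidate]
- name: bats-only         # hypothetical extra group: only the BATS jobs
  jobs: [bats-centos, bats-ubuntu]
```

Each group gets its own tab in the web UI, which keeps a big pipeline readable.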

task.yml

---
platform: linux
image: docker:///emccmd/rackhd-cpi

inputs:
- name: bosh-cpi-release
- name: release-version-semver

outputs:
- name: promote

run:
  path: bosh-cpi-release/ci/tasks/promote-candidate.sh

params:
  S3_ACCESS_KEY_ID: replace-me
  S3_SECRET_ACCESS_KEY: replace-me

This defines a task. A task is like a function from inputs to outputs that either succeeds or fails. Each task runs in a separate container, so you have to give the address of the Docker image you want to use; you can put the Dockerfile under ci/docker/. The inputs are already defined in pipeline.yml; the duplication here makes it easy to run one-off tests. The outputs of a task can be reused by later tasks in the same job.
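To get a feel for the task format in isolation, here is a minimal, self-contained task file. It is a made-up example (not part of the CPI pipeline) using the same image syntax as above:

```yaml
# hello.yml -- a hypothetical stand-alone task
---
platform: linux
image: docker:///busybox   # any small image works
run:
  path: echo
  args: ["hello from concourse"]
```

Running fly execute -c hello.yml should print the message, which is a quick way to verify your fly setup before wiring real tasks into a pipeline.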

Make a Secret File

You have to generate a secrets file that provides all of the environment variables required by the pipeline. Every required variable appears in pipeline.yml wrapped in double curly braces.

secrets.yml

---
gateway: 172.31.128.1
agent_static_ip1: 172.31.129.54
agent_public_key: c3NoLXJzYSBBQUFBQjNOemFDMYwo=
github_key__rackhd: |
  -----BEGIN RSA PRIVATE KEY-----
  MIIEpAIBAAKCAQEApfLnmvRuC+2mD+0XvVsRJdFq0FhtLkiLXJJs46JFBuM4H/GS
  efiz8FSZZ4suRVc7h4iazEEK/FnqFv1TAcGMD+0LDqqpz/yDnbnI/w==
  -----END RSA PRIVATE KEY-----

Set Pipeline

fly set-pipeline -p pipeline-name -c repo/ci/pipeline.yml -l secrets.yml

Start Pipeline

The initial state of the pipeline is paused. You have to start it by clicking the menu button on the Concourse website, or with fly:

fly unpause-pipeline -p pipeline-name

Run One-off

You can run a one-off test for a specific job. One-off builds are not shown in the pipeline view.

one-off.sh

#!/usr/bin/env bash
set -x -e

DIRECTOR_PRIVATE_KEY_DATA="$(cat <<-EOF
-----BEGIN RSA PRIVATE KEY-----
HUGq9lxl5e6FwMJYKIVYXPlD+zrgOk+UehGGnLaZhPs0XQ9f6kv1/Q==
-----END RSA PRIVATE KEY-----
EOF
)"
AGENT_PUBLIC_KEY=c3NRVAMtaU1hYwo=
BOSH_DIRECTOR_PUBLIC_IP=192.168.10.215
fly execute -c $HOME/workspace/repo/ci/tasks/bats.yml \
  -i bosh-cpi-release=$HOME/workspace/repo/

The lines above fly execute set the environment variables, and the -i flags supply the inputs of the task. Both are already declared in ci/tasks/*.yml.

Debug

You can hijack into the container that’s running the task you want to debug:

fly intercept -j pipeline-name/job-name -b build-no-of-job

If you ran a one-off, you can just run:

fly intercept -b build-no

You can find the build number by clicking the top-right button on your pipeline page.
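If you would rather stay in the terminal, fly can also list recent builds with their numbers (the target alias lite is assumed from the login step earlier):

```shell
# List recent builds; the first column is the build number
# you can pass to fly intercept -b
fly -t lite builds
```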

And you’re done! And remember: Continuous Integration = Continuous Confidence.

If you have any questions, please comment below.
