Posts Tagged ‘cloudfoundry’

Doers for Today, Visionaries for Tomorrow and Change Agents for Always!

We Truly Are The Trifecta

The Dell EMC Dojo has a two-fold mission: we contribute to open source Cloud Foundry, and we evangelize ‘the way’ (XP, Lean Startup, etc.) by engaging with internal Dell EMC teams in a purely DevOps manner. Our mission is direct, with a scope that some could argue is boundless. By practicing ‘the way,’ hours, days, and weeks fly by as we push code to production, at times every few minutes. Not only is our push to market rapid, so is our overall productivity. Oftentimes, teams working with us nearly guffaw when they come to our office in Cambridge, MA and see the ‘wizard(s) behind the curtain.’ We are asked how we keep three to five projects on track while also engaging with internal teams, planning large technical conferences, and working in the realm of R&D on our greater TRIGr team, with an east coast contingent of only eight people and a west coast contingent of five. The secret? We LOVE what we do!

 

For a team with empathy at its core, there is never a moment when a task seems impossible.

Truly, we could be featured on one of those billboards along the highway declaring that there is no ‘I’ in ‘Team.’ Two baseball players carrying an injured opposing player across home plate, a photo of all the characters from Disney’s “The Incredibles,” the Dell EMC Dojo team… Take your pick… TEAMWORK. Pass It On.

In all seriousness, the pace at the Dojo can be absolutely exhausting, and with such a small team, the absence of one person (and let’s face it, vacation and life need to happen at some point) could in theory be a huge deal. But because DevOps is what we live and breathe, any member of the team can fill that gap at any point, truly putting into practice the idea that there doesn’t have to be, and should never be, a single point of failure. Whatever the industry or sector, what better embodies the ‘Go Big, Win Big’ message than this? By continually pushing ourselves to pair and to up-level the knowledge of our entire team, we never wait until tomorrow to take action. There is no need or desire to.

 

Agility is not just a term we talk about; it is simply inherent in everything we do.

With the combination of a rapidly changing market (externally and internally) and the pace at which we work, we at the Dojo have learned that we must stay on our toes. For those readers familiar with sports, one of the first lessons learned in soccer is to never plant your feet. Planting your feet allows the opposing team to outpace you when the unexpected happens, which is most of the time. The same goes here. Pivoting is now second nature for us, and it no longer comes with the scares. Instead, it is actually exciting when we are able to take data and identify ways to better align with the efficiency and effectiveness of our software and methodology; to truly keep the user omnipresent in everything we do. We are happier. The ‘customer’ is happier. It is a (Go Big) win-win (Big) game. The cool thing, too, is that the more we practice this, the more we feel like we can predict the future, because we begin to see trends before they are even a thing.

 

 

Doers for Today. Visionaries for Tomorrow. Change Agents for Always.

NFS app support in under 5 minutes with CloudFoundry, Wahooo!

Peter Blum

Developers want quick and easy access to storage! They don’t care how they get it; they just want it now. Let me show you how a developer can go from 0 to 100 with persistence in CloudFoundry, starting with Dell EMC Isilon.

Cloud Foundry. Open Source. The Way. Dell EMC [⛩] Dojo.

Road trip to Persistence on CloudFoundry

Laying the framework with ScaleIO

Peter Blum

Over the past few months the Dojo has been working with all types of storage to enable persistence within CloudFoundry. Over the next few weeks we are going to road trip through how we enabled EMC storage on the CloudFoundry platform. For the first leg of the journey, we start laying the framework by building our motorcycle: a ScaleIO cluster, which will carry us through the journey. ScaleIO is a software-defined storage service that is flexible enough to allow dynamic scaling of storage nodes and reliable enough to provide enterprise-level confidence.

What is ScaleIO – SDS, SDC, & MDM!?

ScaleIO, as we already pointed out, is software-defined block storage. In layman’s terms, there are two huge benefits I see in using ScaleIO. First, the actual storage backing ScaleIO can be dynamically scaled up and down by adding and removing SDS (ScaleIO Data Server) nodes. Second, SDS nodes can run alongside the applications on a server, utilizing any free storage your applications are not using. These two points allow for a fully automated datacenter and a terrific starting point for block storage in CloudFoundry.

Throughout this article we will use the terms SDS, SDC, and MDM, so let’s define them for some deeper understanding!
All three of these are actually services running on a node. A node can be a hypervisor (in the case of vSphere), a VM, or a bare metal machine.

SDS – ScaleIO Data Server

This is the base of ScaleIO. SDS nodes store information locally on storage devices specified by the admin.

SDC – ScaleIO Data Client

If you intend to use a ScaleIO volume, you are required to become an SDC. To become an SDC, you need to install a kernel module (.ko) compiled specifically for your operating system version. These can all be found on EMC Support. In addition to the .ko that gets installed, there will also be a handy binary, drv_cfg. We will use this later on, so make sure you have it!
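
As a quick sanity check on an SDC node, you can verify that the pieces are in place. This is a rough sketch, assuming the default install paths used by the ScaleIO packages:

    # Check that the SDC kernel module (scini) is loaded
    lsmod | grep scini

    # Query this node's SDC GUID with the drv_cfg binary
    # (path assumes the default ScaleIO install location)
    /opt/emc/scaleio/sdc/bin/drv_cfg --query_guid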

MDM – Meta Data Manager

Think of the MDMs as the mothers of your ScaleIO deployment. They are the most important part of the deployment: they allow access to the storage (by mapping volumes from SDSs to SDCs), and most importantly they keep track of where all the data is living. Without the MDMs you lose access to your data, since “Mom” isn’t there to piece together the blocks you have written! Side note: make sure you have at least 3 MDM nodes. This is the smallest number allowed, since you need one MDM each for the Master, Slave, and Tiebreaker roles.

How to Install ScaleIO

There is no limit to the number of ways to install ScaleIO! In the Dojo we used two separate ways, each with its ups and downs. The first, “The MVP,” is simple and fast, and it will get you to a minimum viable product the quickest. The second, “For the Grownups,” will give you a start on a fully production-ready environment. Either one will suffice for the rest of our road-tripping blog series.

The MVP

This process uses a Vagrant box to deploy a ScaleIO cluster. Using the EMC {code} ScaleIO Vagrant GitHub repository, check out the ReadMe to install ScaleIO in less than an hour (depending on your internet, of course :smirk: ). Make sure to read through the Clusterinstall section of the ReadMe to understand the two different ways of installing the ScaleIO cluster.
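
For reference, the whole flow boils down to a couple of commands. This is a minimal sketch, assuming the repository URL below is the current home of the EMC {code} project (verify against the ReadMe):

    # Clone the EMC {code} ScaleIO Vagrant project (URL assumed; check the ReadMe)
    git clone https://github.com/emccode/vagrant-scaleio.git
    cd vagrant-scaleio

    # Bring up the ScaleIO cluster defined in the Vagrantfile
    vagrant up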

For the Grownups

This process will deploy ScaleIO on four separate Ubuntu machines/VMs.

Check out the ScaleIO 2.0 Deployment Guide for more information and help.

  • Go to EMC Support.
    • Search for ScaleIO 2.0.
    • Download the correct ScaleIO 2.0 software package for your OS/architecture type:
      • Ubuntu (currently the only OS we support in CloudFoundry)
      • RHEL 6/7
      • SLES 11 SP3/12
      • OpenStack
    • Download the ScaleIO Linux Gateway.
  • Extract the *.zip files you downloaded.

Prepare Machines For Deploying ScaleIO

  • Minimal requirements:
    • At least 3 machines for starting a cluster:
      • 3 MDMs
      • Any number of SDCs
    • Machines can be either virtual or physical.
    • The OS must be installed and configured before the cluster install, including the following (a prep sketch follows this list):
      • SSH must be installed and available for root. Double-check that passwords are properly provided to the configuration.
      • The libaio1 package should be installed as well. On Ubuntu: apt-get install libaio1
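
A minimal prep sketch for an Ubuntu node, assuming root SSH with password authentication is acceptable in your environment (adjust to your own security policy):

    # Install SSH and the async I/O library ScaleIO needs
    sudo apt-get update
    sudo apt-get install -y openssh-server libaio1

    # Allow root login over SSH so the Installation Manager can reach the node
    sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    sudo service ssh restart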

Prepare the IM (Installation Manager)

  • From the local machine, scp the Gateway zip file to the Ubuntu machine.
    scp ${GATEWAY_ZIP_FILE} ${UBUNTU_USER}@${UBUNTU_MACHINE}:${UBUNTU_PATH}
    
  • SSH into the machine that you intend to install the Gateway and Installation Manager on.
  • Install Java 8.0
    sudo apt-get install python-software-properties
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer
    
  • Install unzip and unzip the file
    sudo apt-get install unzip
    unzip ${UBUNTU_PATH}/${GATEWAY_ZIP_FILE}
    
  • Run the installer on the unzipped Debian package
    sudo GATEWAY_ADMIN_PASSWORD=<new_GW_admin_password> dpkg -i ${GATEWAY_FILE}.deb
    
  • Access the gateway installer GUI in a web browser using the Gateway machine’s IP: http://${GATEWAY_IP}
  • Log in using admin and the password you set when installing the Debian package earlier.
  • Read over the install process on the Home page and click Get Started.
  • Click Browse and select the following packages to upload from your local machine, then click Proceed to install:
    • XCache
    • SDS
    • SDC
    • LIA
    • MDM

    Installing ScaleIO is done through a CSV. For our demo environment we run the minimal ScaleIO install. We built the following install CSV from the minimal template you will see on the Install page. You might need to build your own version to suit your needs.

    IPs,Password,Operating System,Is MDM/TB,Is SDS,SDS Device List,Is SDC
    10.100.3.1,PASSWORD,linux,Master,Yes,/dev/sdb,No
    10.100.3.2,PASSWORD,linux,Slave,Yes,/dev/sdb,No
    10.100.3.3,PASSWORD,linux,TB,Yes,/dev/sdb,No
    
  • The ScaleIO cluster is managed through the MDM, so make sure that you set a password for the MDM and LIA services on the Credentials Configuration page.
  • NOTE: For our installation, we had no need to change the advanced installation options or configure the log server. Use these options at your own risk!
  • After submitting the installation form, a monitoring tab should become available to monitor the installation progress.
    • Once the query phase finishes successfully, select start upload phase. This phase uploads all the resources needed to the nodes listed in the CSV.
    • Once the upload phase finishes successfully, select start install phase.
    • The install phase is hopefully self-explanatory.
  • Once all steps have completed, the ScaleIO Cluster is now deployed.

Using ScaleIO

  • To start using the cluster with the ScaleIO CLI, you can follow the steps below, which are copied from the post-installation instructions. A concrete worked example follows after these steps.

    To start using your storage:
    Log in to the MDM:

    scli --login --username admin --password <password>

    Add SDS devices: (unless they were already added using a CSV file containing devices)
    You must add at least one device to at least three SDSs, with a minimum of 100 GB free storage capacity per device.

    scli --add_sds_device --sds_ip <IP> --protection_domain_name default --storage_pool_name default --device_path /dev/sdX or D,E,...

    Add a volume:

    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb <SIZE> --volume_name <NAME>

    Map a volume:

    scli --map_volume_to_sdc --volume_name <NAME> --sdc_ip <IP>
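
    As a concrete worked example against the demo cluster from the CSV above (the password, volume size, volume name, and SDC IP are all just illustrative placeholders):

    # Log in to the MDM
    scli --login --username admin --password MySecretPassword

    # Register a device on an SDS node (only needed if it was not already added via the install CSV)
    scli --add_sds_device --sds_ip 10.100.3.1 --protection_domain_name default \
         --storage_pool_name default --device_path /dev/sdb

    # Create an 8 GB volume and map it to a (hypothetical) SDC at 10.100.3.4
    scli --add_volume --protection_domain_name default --storage_pool_name default \
         --size_gb 8 --volume_name cf-demo-vol
    scli --map_volume_to_sdc --volume_name cf-demo-vol --sdc_ip 10.100.3.4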

Managing ScaleIO

When using ScaleIO with CloudFoundry, we will use the ScaleIO REST Gateway to manage the cluster. There are other ways to manage the cluster, such as the ScaleIO CLI and the ScaleIO GUI, but both are much harder for CloudFoundry to communicate with.
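
To get a feel for the gateway, here is a rough sketch of the login-then-query flow; the IP and password are placeholders, and the exact endpoints should be confirmed against the ScaleIO REST API documentation for your version:

    # Authenticate against the gateway to obtain a session token
    TOKEN=$(curl -sk --user admin:MySecretPassword https://10.100.3.1/api/login | tr -d '"')

    # Use the token as the password on subsequent calls, e.g. list all volumes
    curl -sk --user admin:${TOKEN} https://10.100.3.1/api/types/Volume/instances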


At this point you have a fully functional ScaleIO cluster that we can use with CloudFoundry and RexRay to deploy applications backed by ScaleIO storage! Stay tuned for our next blog post in which we will deploy a minimal CloudFoundry instance.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

 

 

Sneak Preview: Cloud Foundry Persistence Under the Hood

How Does Persistence Orchestration Work in Cloud Foundry?

A few weeks ago at the Santa Clara Cloud Foundry Summit we announced that Cloud Foundry will be supporting persistence. From a developer’s perspective, the user experience is smooth and very similar to using other existing services. Here is an article that describes the CF persistence user experience.

For those who are wondering how Cloud Foundry orchestrates persistence service, this article will provide a high level overview of the architecture and user experience.

How Does it Work when a Developer Creates a Persistence Service?

Like other Cloud Foundry services, before an application can gain access to the persistence service, the user needs to first sign up for a service plan from the Cloud Foundry marketplace.

Initially, our Service Broker uses an Open Source technology called RexRay, which is a persistence orchestration engine that works with Docker.


Creating Service

When the service broker receives a request to create a new service plan, it goes to its backing persistence provider, such as ScaleIO or Isilon, and creates a new volume.

For example, the user can use:

cf create-service scaleio small my-scaleio1

Deleting Service

When a user is done with the volume and no longer needs the service plan, the user can make a delete-service call to remove the volume. When the service broker receives the request to delete the service plan, it goes to its backing persistence provider to remove the volume and free up storage space.

For example, the user can use:

cf delete-service my-scaleio1

Binding Service

After a service plan is created, the user can then bind the service to one or more Cloud Foundry applications. When the service broker receives the request to bind an application, it includes a special flag in the JSON response to the Cloud Controller, so that the Cloud Controller and Diego know how to mount the directory at runtime. The runtime behavior is described in more detail below.
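
As a rough, hypothetical illustration (the exact field names evolved while volume services were experimental, so treat this as a sketch rather than the precise contract), the broker’s bind response carries a volume-mount block alongside the usual credentials:

    {
      "credentials": {},
      "volume_mounts": [
        {
          "driver": "scaleio",
          "container_path": "/var/vcap/data/my-scaleio1",
          "mode": "rw"
        }
      ]
    }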

 

How Does it Work at Runtime?

Cloud Foundry executes each application instance in a container-based runtime environment called Diego. For persistence orchestration, a new project called Volman (short for Volume Manager) is the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up the special flags from the Cloud Controller, invoking a volume driver to mount a volume into the Diego Cell, and then providing access to that directory from the runtime container.


 

How to Deploy Diego in a Hybrid Environment

Deploying Cloud Foundry Diego in a Virtual and Bare Metal Environment

In my last post, “Deploying Cloud Foundry in a Hybrid Environment“, I showed you how to deploy DEA runners in a bare-metal environment. However, Cloud Foundry now has the new and exciting Diego runtime! So now I will guide you through how to deploy Diego Cells on bare metal machines. When we’re all done, your environment will look like this:

[Diagram: Diego Cells running on bare metal alongside the vSphere-hosted Cloud Foundry deployment]


Creating Cloud Foundry Applications with Persistence

Traditional & Cloud Native Applications with Persistence

Cloud Foundry and 12-Factor applications are great for creating Cloud Native applications and have become the standard. But you and I don’t just have to worry about our new 12-Factor apps; we also have legacy applications in our family. Our position is that your traditional apps and your new cool 12-Factor apps should both be able to experience the benefits of running on Cloud Foundry. So, we have worked to create a world where your legacy apps and new apps can live together.


In the past, Cloud Foundry applications could not use any filesystem or block storage. That totally makes sense, given that Cloud Foundry apps are executed in elastic containers, which can go away at any time. And if that happens, any data written to the local filesystem of the container is wiped.

If we can externalize persistent storage to be a service – and bind and unbind those services to Cloud Foundry applications – a lot more apps can run in Cloud Foundry. For example, heavy data access apps, like databases and video editing software, can now access extremely fast storage AND at the same time experience the scalability and reliability provided by Cloud Foundry.

Allowing Cloud Foundry applications to have direct persistence access opens a lot of doors for developers. Traditional applications that require persistence can migrate to Cloud Foundry a lot easier. Big Data Analytics applications can now use persistence to perform indexing and calculation.

Traditionally, a lot of the data services consumed by Cloud Foundry applications, such as MySQL, Cassandra, etc., need to be deployed by BOSH as virtual machines. With persistence, we can start looking at bringing these services to run in Cloud Foundry, or at creating the next generation of Cloud Native data services.

What can Developers Expect?

When developers come to the Cloud Foundry marketplace by using cf marketplace, they will see services that can offer their applications persistence:

[Screenshot: cf marketplace output listing the ScaleIO service]

The details of the service plan can be seen by cf marketplace -s ScaleIO

[Screenshot: cf marketplace -s ScaleIO output showing the “small” plan]

This is a plan that would offer 2GB of storage for your Cloud Foundry applications. Let’s sign up for it by running cf create-service scaleio small my-scaleio-service1


When the service instance is created, the ScaleIO Service Broker goes into ScaleIO and creates a 2GB volume. We are now ready to bind this new service to our app. To demonstrate the functionality, we have created a very simple application that writes to and reads from the filesystem.

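The binding flow itself is the usual Cloud Foundry routine; a quick sketch with a hypothetical app name:

    # Push the demo app, bind the ScaleIO service instance, and restage
    cf push persistence-demo
    cf bind-service persistence-demo my-scaleio-service1
    cf restage persistence-demo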

After a cf bind-service call, the storage is mounted as a directory inside the container, and the path is exposed to the application through an environment variable. For example:

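A trimmed, hypothetical excerpt of what cf env shows for the bound app (the exact structure varies by release, but the mount path appears under the service entry):

    "VCAP_SERVICES": {
      "scaleio": [
        {
          "name": "my-scaleio-service1",
          "volume_mounts": [
            {
              "container_path": "/var/vcap/data/my-scaleio-service1",
              "mode": "rw"
            }
          ]
        }
      ]
    }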

Based on the container_path variable, the application can read and write as if it’s a local filesystem.

 

UniK: Build and Run Unikernels with Ease

UniK, Unikernels, Cloud Foundry, Docker, Kubernetes, Raspberry Pi, IoT

Idit Levine

Idit Levine (@Idit_Levine), Office of the CTO & CTO of Cloud Platform Team, EMC

Unikernels are lightweight, immutable operating systems compiled specifically to run a single application. Unikernel compilation combines source code with the specific device drivers and operating system libraries necessary to support the needs of the application. The result is a machine image that can run directly on a hypervisor or bare metal, eliminating the need for a host operating system (like Linux). The unikernel represents the smallest subset of code required to run the application, giving us portable applications with smaller footprints, less overhead, smaller attack surfaces, and faster boot times than traditional operating systems. Together, I believe unikernels have the potential to change the cloud-computing ecosystem as well as to dominate the emerging IoT market.

However, compiling a unikernel is a challenging assignment. It requires rare expertise that is often absent from an application developer’s toolkit. The difficulty of compiling unikernels may significantly hamper their widespread adoption. I believe the community will benefit from a straightforward way to build and manage unikernels.

This is why UniK was developed.

unik-logo

UniK (pronounced you-neek) is a tool for compiling application sources into unikernels — lightweight bootable disk images — rather than binaries. UniK runs and manages instances of compiled images across a variety of cloud providers as well as locally on VirtualBox. UniK utilizes a simple Docker-like command line interface, making building unikernels as easy as building containers. UniK is built to be easily extensible, allowing – and encouraging – the addition of support for more unikernel compilers and cloud providers.

UniK is fully controllable through a REST API to allow for seamless integration between UniK and orchestration tools such as Kubernetes or Cloud Foundry.

Docker Integration: Recognizing the open source community’s popular adoption of the Docker API, we extend UniK’s REST API to serve some of the same endpoints as Docker, allowing some Docker commands such as docker run, docker rm, and docker ps to control UniK, which we hope will make UniK easier to adopt for those already familiar with Docker.
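
If you already have the Docker client installed, the idea is that you can point it at a running UniK daemon and keep using the commands you know; the daemon address and image name below are placeholders:

    # Point the Docker client at the UniK daemon (address and port are placeholders)
    export DOCKER_HOST=tcp://localhost:3000

    # Familiar Docker commands are proxied to UniK
    docker ps
    docker run my-unikernel-image
    docker rm <instance-id>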

Kubernetes Integration: To demonstrate the value of cluster management of unikernels, we implemented a UniK runtime for Kubernetes, making Kubernetes the first cluster manager to support unikernels. This integration allows UniK to take advantage of core Kubernetes features like horizontal scaling, automated rollouts and rollbacks, storage orchestration, self-healing, service discovery, load balancing and batch execution.

Cloud Foundry Integration: To provide the user with a seamless PaaS experience, we added UniK as a backend to the Cloud Foundry runtime, positioning Cloud Foundry as the first platform to run applications as unikernels. This combines the lightweight scalability of unikernels with the security and sophistication of VMs (lightweight, immutable, performant), persistent storage, and the ability to run on bare metal.

ARM Integration: We believe the quintessential use case for unikernels is the advantage they give to smart devices in the Internet of Things. Their airtight security, immutable infrastructure, high performance and light footprint make them the ideal solution for deploying software on embedded devices. To demonstrate this vision for the future of unikernels, we implemented ARM processor support into UniK to run unikernels on the architecture used in most embedded devices such as the Raspberry Pi.

Today, we are thrilled to share UniK as open source under the Apache 2.0 license. We hope that the community will join us in making the unikernel a first-class citizen in the future cloud ecosystem and the emerging market of IoT devices. Please visit our repository at https://github.com/emc-advanced-dev and try our Getting Started with UniK tutorial: https://github.com/emc-advanced-dev/unik/blob/master/docs/getting_started.md

Vegas Baby! EMC Dojo at EMC World 2016

EMC Dojo at EMC World 2016

EMC Dojo is at EMC World!

Come check out our sessions and play #Dojosnake! Scroll to see more!

What’s up at our Booth?

Come to booth #1051 to see what the EMC Dojo is up to! Learn more about persistence on Diego, UniK unikernel support for Cloud Foundry + Docker, the RackHD CPI, & our other OSS Cloud Foundry projects!

Want to try pair programming? Give it a try at our booth! See how easy it is to cf push apps to Cloud Foundry & pair with us on #Dojosnake! Won’t be able to make it to Vegas this year? Then see if you can beat the high score and play #Dojosnake! Make sure you tweet and follow @EMCdojo to get on the leaderboard 🙂


If you will be in Vegas, come to one (or more!) of our sessions listed below. Hope to see you there!

How to Deploy Cloud Foundry in a Virtual & Bare Metal Hybrid Environment

Deploying Cloud Foundry on vSphere and Bare Metal

This is a guide for setting up Cloud Foundry in a hybrid environment with both virtualized and bare-metal infrastructure. Cloud Foundry is made up of many components, and it makes sense to run most of them in a virtual environment for resource sharing and separation reasons; these pieces might not each require an entire physical machine to themselves.

Computing components, such as DEA Runners or Diego Cells, should run on bare metal to make full use of the computing power without spending resources on a virtualization tier.

This tutorial will guide you step by step through setting up the Cloud Foundry components on vSphere and the DEA Runners on bare-metal machines. When you finish, your environment will look like this:

[Diagram: Cloud Foundry components on vSphere with DEA Runners on bare metal]


2 Heads Are Better Than 1: Pair Programming at the #EMCDojo

Pairing at the EMC Dojo, by Brian Roche, Sr. Director of Engineering

Brian Roche

Brian Roche is a Senior Director and the leader of Dell EMC’s Cloud Platform Team. He is based in Cambridge, Massachusetts, USA, at the #EMCDojo.

I work on a team where we practice ‘pairing’ and pair programming every day. Before joining this team I had only passing experience with pair programming. Now, after many months of pairing, I have a much better understanding of why we pair.


Pair Programming Explained
Pair programming is a technique in which two programmers work as a pair at one workstation. One, the driver, writes code and focuses on the tactical aspects of syntax and task completion. The other, the observer, considers the strategic direction of the code they’re writing together. In our case, each developer has their own monitor, keyboard, and mouse, but both are connected to one IDE. The two programmers switch roles often.

