Archive for the ‘Cloud Foundry’ Category

Using Docker Containers in Cloud Foundry

As we all know, we can push source code to CF directly, and CF will compile it and create a container to run our application. Life is so great with CF.

But sometimes, for one reason or another, such as our app needing a special setup or having to run on different platforms or infrastructures, we may already have a preconfigured container image for our app. That doesn't block our way to CF at all. This post will show you how to push Docker images to CF.

Enable docker feature for CF

We can turn on docker support with the following cf command

  cf enable-feature-flag diego_docker

We can also turn it off by

  cf disable-feature-flag diego_docker
Push docker image to CF
  cf push cf-docker -o golang/alpine

Unlike a normal push, CF won't try to build our code; it simply runs the image we specified. CF assumes that you have already put everything you need into your Docker image, so we have to rebuild the image every time we push a change to our repository.

We also need to tell CF how to start our app inside the image by specifying a start command. We can either pass it as an argument to cf push or put it into manifest.yml as below.

---
applications:
- name: cf-docker
  command: git clone https://github.com/kaleo211/cf-docker && cd cf-docker && mkdir -p app/tmp && go run main.go

In this example, we are using an official docker image from docker hub. In the start command, we clone our demo repo from Github, do something and run our code.
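
If you prefer not to use a manifest, the same start command can be passed to cf push with the -c flag. This is just the command from the example above folded into one line:

  cf push cf-docker -o golang/alpine -c "git clone https://github.com/kaleo211/cf-docker && cd cf-docker && mkdir -p app/tmp && go run main.go"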

Update Diego with private docker registry

If you are in the EMC network, you may not be able to use Docker Hub due to certificate issues. In this case, you need to set up a private Docker registry. The registry needs to be V2 for now. Also, you have to redeploy your CF or Diego with the changes shown below.

properties:
  garden:
    insecure_docker_registry_list:
    - 12.34.56.78:9000
  capi:
    stager:
      insecure_docker_registry_list:
      - 12.34.56.78:9000

Replace 12.34.56.78:9000 with your own Docker registry IP and port.

Then, you need to create a security group so that staging containers can reach your private Docker registry. You can put the definition of this security group into docker.json as shown below.

[
    {
        "destination": "12.34.56.78:9000",
        "protocol": "all"
    }
]

And run

  cf create-security-group docker docker.json
  cf bind-staging-security-group docker

Now you can re-push to CF by

  cf push -o 12.34.56.78:9000/your-image

Road trip to Persistence on CloudFoundry

Chapter 2 – Bosh

nguyen thinh

Bosh is a deployment tool that can provision VMs on different IaaSes such as AWS, OpenStack, vSphere, and even bare metal. It monitors the health of the VMs it creates and keeps track of all the processes it deploys to them. In this tutorial, we will focus on how to use Bosh with vSphere; however, you can apply the same technique to your own IaaS. The reason we talk about Bosh in this road trip is that we use it to deploy Cloud Foundry.


Table of Contents


1. How it works

2. Install Bosh

  1. Create a Bosh Director Manifest
  2. Install
  3. Verify Installation

3. Configure Bosh

  1. Write your Cloud Config
  2. Upload cloud config to bosh director

4. Use Bosh

  1. Create a deployment Manifest
  2. Upload stemcell and redis releases
  3. Set deployment manifest and deploy it
  4. Interact with your new deployed redis server

1. How it works

Let’s say you want to bring up a VM that contains Redis on vSphere, you provide Bosh a vSphere stemcell (Operating System image) and a Redis Bosh Release (Installation scripts). Bosh then does the job for you.

Are you excited yet? Let's get started on how to install Bosh on your IaaS and use it.

2. Install Bosh

In order to use Bosh on vSphere, you will need to deploy a Bosh Director VM.

1. Create a Bosh Director Manifest

This manifest describes the director VM's specs. Copy the example manifest from the Bosh docs and place it on your machine. Modify the networks section to match your vSphere environment.
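
For orientation, the networks section of a bosh-init director manifest typically looks something like the sketch below. The subnet range, gateway, DNS, and vSphere port group name are placeholders you must replace with your own values:

networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns: [8.8.8.8]
    cloud_properties: {name: VM Network}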

2. Install

After configuring your deployment manifest, install the bosh-init tool from the Bosh docs. Then type the following command to deploy a Bosh Director VM:

bosh-init deploy PATH_TO_YOUR_DIRECTOR_MANIFEST

3. Verify Installation

After completing the installation, install the Bosh CLI to interact with the Bosh Director. If you have any installation problems, please refer to the Bosh docs.

gem install bosh_cli --no-ri --no-rdoc

Then type

bosh target BOSH_DIRECTOR_URI

If the command succeeds, you now have a functioning Bosh Director.
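
As an extra sanity check with the same Ruby CLI, you can ask the director to describe itself:

bosh status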

3. Configure Bosh

To configure the Bosh Director, you pass it a configuration file called the cloud config. It allows you to define availability zones, VM types, disk types, and networks once, for all of your Bosh deployments.

1. Write your Cloud Config

To write your cloud config, copy the vSphere cloud config example from the Bosh docs and modify it accordingly.
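
To give you a feel for its shape, here is a trimmed sketch of a vSphere cloud config. The datacenter, cluster, and network names are placeholders, and the full list of fields is in the Bosh docs:

azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: my-dc
      clusters: [{my-cluster: {}}]
vm_types:
- name: medium
  cloud_properties: {cpu: 2, ram: 4096, disk: 20000}
disk_types:
- name: default
  disk_size: 10240
networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns: [8.8.8.8]
    az: z1
    cloud_properties: {name: VM Network}
compilation:
  workers: 3
  az: z1
  vm_type: medium
  network: private
  reuse_compilation_vms: true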

2. Upload cloud config to bosh director

bosh update cloud-config PATH_TO_YOUR_CLOUD_CONFIG

After defining a cloud config, you are then able to deploy VMs on Bosh.

4. Use Bosh

Let's deploy a simple Redis server VM on vSphere using Bosh.

1. Create a deployment Manifest

---
name: redis-deployment
director_uuid: cd0eb8bc-831e-447d-99c1-9658c76e7721
stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest
releases:
- name: redis
  version: latest
jobs:
- name: redis-job
  instances: 1
  templates:
  - {name: redis, release: redis}
  vm_type: medium
  stemcell: trusty
  azs: [z1]
  networks:
  - name: private
  properties:
    redis:
      password: REDIS_PASSWORD
      port: REDIS_PORT
update:
  canaries: 1
  max_in_flight: 3
  canary_watch_time: 30000-600000
  update_watch_time: 5000-600000

In this deployment manifest, I am deploying a Redis VM using the redis release and the Ubuntu stemcell. I want the VM type to be medium and the VM to be placed in availability zone z1.

2. Upload stemcell and redis releases

bosh upload stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent
bosh upload release https://bosh.io/d/github.com/cloudfoundry-community/redis-boshrelease

Alternatively, you can download them and run bosh upload locally.
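
For example, if the director has no outbound internet access, you can fetch the artifacts on your workstation first and upload the local files (the file names here are placeholders):

wget -O stemcell.tgz https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent
wget -O redis-release.tgz https://bosh.io/d/github.com/cloudfoundry-community/redis-boshrelease
bosh upload stemcell ./stemcell.tgz
bosh upload release ./redis-release.tgz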

3. Set deployment manifest and deploy it

bosh deployment PATH_TO_YOUR_DEPLOYMENT_MANIFEST
bosh deploy

4. Interact with your new deployed redis server

Find your vm IP using

bosh vms

Connect to your redis server

redis-cli -h YOUR_REDIS_VM_ADDRESS -a REDIS_PASSWORD -p REDIS_PORT

Test it

SET foo bar
GET foo

Road trip to Persistence on CloudFoundry

Laying the framework with ScaleIO

Peter Blum

Over the past few months the Dojo has been working with all types of storage to enable persistence within CloudFoundry. Over the next few weeks we are going to road trip through how we enabled EMC storage on the CloudFoundry platform. For the first leg of the journey, we start laying the framework by building our motorcycle: a ScaleIO cluster, which will carry us through the trip. ScaleIO is a software-defined storage service that is flexible enough to allow dynamic scaling of storage nodes and reliable enough to give enterprise-level confidence.

What is ScaleIO – SDS, SDC, & MDM!?

ScaleIO, as we already pointed out, is software-defined block storage. In layman's terms there are two huge benefits I see in using ScaleIO. Firstly, the actual storage backing ScaleIO can be dynamically scaled up and down by adding and removing SDS (ScaleIO Data Storage) servers/nodes. Secondly, SDS nodes can run alongside the applications on a server, utilizing any free storage your applications are not using. These two points allow for a fully automated datacenter and a terrific base for block storage in CloudFoundry.

Throughout this article we will use the terms SDS, SDC, and MDM, so let's define them for some deeper understanding!
All three of these are services running on a node. A node can be a hypervisor (in the case of vSphere), a VM, or a bare metal machine.

SDS – ScaleIO Data Storage

This is the base of ScaleIO. SDS nodes store information locally on storage devices specified by the admin.

SDC – ScaleIO Data Client

If you intend to use a ScaleIO volume, your node needs to become an SDC. To become an SDC you install a kernel module (.ko) compiled specifically for your operating system version. These can all be found on EMC Support. Alongside the kernel module, the installation also provides a handy binary, drv_cfg. We will use this later on, so make sure you have it!
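
As a quick sanity check once the SDC package is installed (the path below is the usual default install location, but it may differ in your environment), drv_cfg can confirm the kernel module is loaded and show which MDMs the client knows about:

    /opt/emc/scaleio/sdc/bin/drv_cfg --query_guid
    /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms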

MDM – Meta Data Manager

Think of the MDMs as the mothers of your ScaleIO deployment. They are its most important part: they allow access to the storage (by mapping volumes from SDSs to SDCs), and most importantly they keep track of where all the data is living. Without the MDMs you lose access to your data, since “Mom” isn’t there to piece together the blocks you have written! Side note: make sure you have at least 3 MDM nodes. This is the smallest number allowed, since you need one MDM each for the Master, Slave, and Tiebreaker roles.

How to Install ScaleIO

There are countless ways to install ScaleIO! In the Dojo we used two separate approaches, each with its ups and downs. The first, “The MVP”, is simple and fast, and it will get you to a minimum viable product quickest. The second, “For the Grownups”, will give you the start of a fully production-ready environment. Either of these will suffice for the rest of our road-tripping blog.

The MVP

This process uses Vagrant to deploy a ScaleIO cluster. Using the EMC {code} ScaleIO Vagrant GitHub repository, follow the README to install ScaleIO in less than an hour (depending on your internet of course :smirk: ). Make sure to read through the Clusterinstall section of the README to understand the two different ways of installing the ScaleIO cluster.

For the Grownups

This process will deploy ScaleIO on four separate Ubuntu machines/VMs.

Check out the ScaleIO 2.0 Deployment Guide for more information and help.

  • Go to EMC Support.
    • Search ScaleIO 2.0
    • Download the correct ScaleIO 2.0 software package for your OS/architecture type.
    • Ubuntu (We only support Ubuntu currently in CloudFoundry)
    • RHEL 6/7
    • SLES 11 SP3/12
    • OpenStack
    • Download the ScaleIO Linux Gateway.
  • Extract the *.zip files downloaded

Prepare Machines For Deploying ScaleIO

  • Minimal Requirements:
    • At least 3 machines for starting a cluster.
      • 3 MDM’s
      • Any number of SDC’s
    • Can use either a virtual or physical machine
    • OS must be installed and configured for use to install cluster including the following:
      • SSH must be installed and available for root. Double-check that the correct passwords are provided in the configuration.
      • libaio1 package should be installed as well. On Ubuntu: apt-get install libaio1

Prepare the IM (Installation Manager)

  • On the local machine SCP the Gateway Zip file to the Ubuntu Machine.
    scp ${GATEWAY_ZIP_FILE} ${UBUNTU_USER}@${UBUNTU_MACHINE}:${UBUNTU_PATH}
    
  • SSH into Machine that you intend to install the Gateway and Installation Manager on.
  • Install Java 8.0
    sudo apt-get install python-software-properties
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer
    
  • Install Unzip and Unzip file
    sudo apt-get install unzip
    unzip ${UBUNTU_PATH}/${GATEWAY_ZIP_FILE}
    
  • Run the Installer on the unzipped debian package
    sudo GATEWAY_ADMIN_PASSWORD=<new_GW_admin_password> dpkg -i ${GATEWAY_FILE}.deb
    
  • Access the gateway installer GUI in a web browser using the Gateway machine’s IP: http://${GATEWAY_IP}
  • Login using admin and the password you used to run the debian package earlier.
  • Read over the install process on the Home page and click Get Started
  • Click browse and select the following packages to upload from your local machine. Then click Proceed to install
    • XCache
    • SDS
    • SDC
    • LIA
    • MDM

    Installing ScaleIO is driven by a CSV file. For our demo environment we run the minimal ScaleIO install. We built the following install CSV from the minimal template you will see on the Install page. You might need to build your own version to suit your needs.

    IPs,Password,Operating System,Is MDM/TB,Is SDS,SDS Device List,Is SDC
    10.100.3.1,PASSWORD,linux,Master,Yes,/dev/sdb,No
    10.100.3.2,PASSWORD,linux,Slave,Yes,/dev/sdb,No
    10.100.3.3,PASSWORD,linux,TB,Yes,/dev/sdb,No
    
  • To manage the ScaleIO cluster you use the MDM, so make sure that you set a password for the MDM and LIA services on the Credentials Configuration page.
  • NOTE: For our installation, we had no need to change advanced installation options or configure log server. Use these options at your own risk!
  • After submitting the installation form, a monitoring tab should become available to monitor the installation progress.
    • Once the Query Phase finishes successfully, select start upload phase. This phase uploads all the correct resources needed to the nodes indicated in the CSVs.
    • Once the Upload Phase finishes successfully, select start install phase.
    • Installation phase is hopefully self-explanatory.
  • Once all steps have completed, the ScaleIO Cluster is now deployed.

Using ScaleIO

  • To start using the cluster with the ScaleIO CLI, you can follow the steps below, which are copied from the post-installation instructions.

    To start using your storage:
    Log in to the MDM:

    scli --login --username admin --password <password>

    Add SDS devices: (unless they were already added using a CSV file containing devices)
    You must add at least one device to at least three SDSs, with a minimum of 100 GB free storage capacity per device.

    scli --add_sds_device --sds_ip <IP> --protection_domain_name default --storage_pool_name default --device_path /dev/sdX or D,E,...

    Add a volume:

    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb <SIZE> --volume_name <NAME>

    Map a volume:

    scli --map_volume_to_sdc --volume_name <NAME> --sdc_ip <IP>

Managing ScaleIO

When using ScaleIO with CloudFoundry we will use the ScaleIO REST Gateway to manage the cluster. There are other ways to manage the cluster, such as the ScaleIO CLI and the ScaleIO GUI, but both are much harder for CloudFoundry to communicate with.
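
To give a feel for the REST Gateway (a rough sketch; check the endpoint paths against the REST API guide for your ScaleIO version), you log in once to get a token and then use that token as the password on subsequent calls:

    # log in with the gateway admin user and capture the session token
    TOKEN=$(curl -sk --user admin:${GATEWAY_ADMIN_PASSWORD} https://${GATEWAY_IP}/api/login | tr -d '"')

    # use the token to list the ScaleIO systems known to this gateway
    curl -sk --user admin:${TOKEN} https://${GATEWAY_IP}/api/types/System/instances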

At this point you have a fully functional ScaleIO cluster that we can use with CloudFoundry and RexRay to deploy applications backed by ScaleIO storage! Stay tuned for our next blog post in which we will deploy a minimal CloudFoundry instance.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

Creating a Cloud Foundry Service Broker

Megan Murawski

12-factor apps are cool and Cloud Foundry is cool, but we don't just have to worry about 12-factor apps. We also have legacy apps that need attention. We believe there should be a way for all of your apps to enjoy the benefits of Cloud Foundry, and we have enabled this by implementing a Service Broker that binds an external storage service. Before we talk about creating the Service Broker, we will describe the role of a Service Broker in Cloud Foundry.

Services are integrated with Cloud Foundry by implementing a documented API for which the Cloud Controller is the client; we call this the Service Broker API. Service brokers advertise a catalog of service offerings and service plans, and they handle calls to provision (create), bind, unbind, and deprovision (delete). Externalizing backend services from the application in a PaaS provides a clear separation that can improve application development: developers only need to connect to an external service to consume its APIs. In Cloud Foundry, this interface is “brokered” by the Service Broker. Some examples of services that are essential to 12-factor apps are MySQL, Redis, RabbitMQ, and now ScaleIO!

A service broker sits in between the Cloud Foundry Runtime and the service itself. To create a service broker, you only need to implement the 5 Service Broker APIs: catalog, provision, deprovision, bind and unbind. The service broker essentially translates between the Cloud Controller API and the service’s own API to orchestrate creating a service instance (maybe provisioning a new database or creating a new user), providing the credentials to connect to the service, and disconnecting and deleting the service instance.

The APIs

Catalog is used to fetch the service catalog. This describes the service offerings and plans that are available from the backend, along with any costs.

Provision is used to create the service in the backend. Based on a chosen service plan, this call will actually create the service. In the case of persistence, this is provisioning storage to be used by your applications.

Bind will provide Cloud Foundry runtime with the credentials and configuration to access the service. This allows the application to connect to the service.

Unbind will remove the access credentials from the application’s environment.

Deprovision is used to delete the service on the backend. This could remove an account, delete a database, or in the case of persistence, delete the volume created on provision.
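
As a rough illustration of the catalog call (the broker URL and credentials below are made up, and the API version header should match your Cloud Controller), fetching the catalog is a simple authenticated GET:

curl -u broker-user:broker-password -H "X-Broker-Api-Version: 2.8" http://scaleio-broker.example.com/v2/catalog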

 

Creating the Broker

To create the ScaleIO service broker, we made use of the EMC open source tool libStorage for communicating with the ScaleIO backend. libStorage is capable of managing many different storage backends, including ScaleIO, XtremIO, Isilon, and VMAX, in a client/server model. By adding an API layer on top of libStorage, we were able to quickly translate between Cloud Foundry API calls and storage APIs on the backend to provide persistence as a service within CF. Like much of the work we do at the EMC Dojo, we have open sourced the Service Broker. If you'd like to download it or get involved, check it out on GitHub!

Sneak Preview: Cloud Foundry Persistence Under the Hood

How Does Persistence Orchestration Work in Cloud Foundry?

A few weeks ago at the Santa Clara Cloud Foundry Summit we announced that Cloud Foundry will be supporting persistence. From a developer’s perspective, the user experience is smooth and very similar to using other existing services. Here is an article that describes the CF persistence user experience.

For those who are wondering how Cloud Foundry orchestrates persistence services, this article provides a high-level overview of the architecture and user experience.

How Does it Work when Developer Creates Persistence Service?

Like other Cloud Foundry services, before an application can gain access to the persistence service, the user needs to first sign up for a service plan from the Cloud Foundry marketplace.

Initially, our Service Broker uses an Open Source technology called RexRay, which is a persistence orchestration engine that works with Docker.

Creating Service

When the service broker receives a request to create a new service instance, it goes to its backing persistence provider, such as ScaleIO or Isilon, to create a new volume.

For example, the user can use:

cf create-service scaleio small my-scaleio1

Deleting Service

When a user is done with the volume and no longer needs the service, the user can make a delete-service call to remove it. When the service broker receives the request to delete the service instance, it goes to its backing persistence provider to remove the volume and free up storage space.

For example, the user can use:

cf delete-service my-scaleio1

Binding Service

After a service instance is created, the user can bind the service to one or more Cloud Foundry applications. When the service broker receives the request to bind an application, it includes a special flag in the JSON response to the Cloud Controller, so that the Cloud Controller and Diego know how to mount the directory at runtime. The runtime behavior is described in more detail below.
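
From the developer's point of view the bind itself is the standard CF workflow; for example, with a hypothetical app called my-app and the service instance created earlier:

cf bind-service my-app my-scaleio1
cf restage my-app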

 

How Does it work in Runtime?

Cloud Foundry executes each instance of an application in a container-based runtime environment called Diego. For persistence orchestration, a new project called Volman (short for Volume Manager) is the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up the special flags from the Cloud Controller, invoking a volume driver to mount a volume into the Diego Cell, and then providing access to that directory from the runtime container.

Introducing Ginkgo4J

Ginkgo for Java

Paul Warren

Having been an engineer using Java since pretty much version 1, and having practiced TDD for the best part of 10 years, one thing that always bothered me was the lack of structure and context that Java testing frameworks like JUnit provide. On larger code bases, with many developers, this problem can become quite acute. When pair programming I have sometimes even had to say to my pair:

“Give me 15 minutes to figure out what this test is actually doing!”

The fact of the matter is that the method name is simply not enough to convey the required given, when, then semantics present in all tests.

I recently made a switch in job roles. Whilst I stayed with EMC, I left the Documentum division (ECD), where I had been for 17 years, and moved to the EMC Dojo & Cloud Platform Team, whose remit is to help EMC make the transition to the 3rd platform. As a result I am now based in the Pivotal office in San Francisco, I pair program, and I am now working in Golang.

Golang has a testing framework called Ginkgo that was actually created by one of Pivotal's VPs, Onsi Fakhouri. It mirrors frameworks from other languages like RSpec in Ruby. All of these frameworks provide a very simple DSL that the developer can use in tests to build up a described context with closures. Having practiced this for the last six months, I personally find this way of writing tests very useful, perhaps most useful when I pick up an existing test and try to modify it.

Java 8 includes its version of closures, called lambdas. Whilst they aren't quite as flexible as some of their equivalents in other languages (all captured variables must be effectively final, for example), they are sufficient to build an equivalent testing DSL. So that's what I decided to do with Ginkgo4J, pronounced "Ginkgo for Java".

So let’s take a quick look at how it works.

In your Java 8 project, add a new test class called BookTests.java as follows:


  package com.github.paulcwarren.ginkgo4j.examples;

  import static com.github.paulcwarren.ginkgo4j.Ginkgo4jDSL.*;
  import org.junit.runner.RunWith;
  import com.github.paulcwarren.ginkgo4j.Ginkgo4jRunner;

  @RunWith(Ginkgo4jRunner.class)
  public class BookTests {
    {
      Describe("A describe", () -> {
      });
    }
  }

Let’s break this down:

  • The imports Ginkgo4jDSL.* and Ginkgo4jRunner add Ginkgo4J’s DSL and JUnit runner. The JUnit runner allows this style of test to be run in all IDEs supporting JUnit (basically all of them) and also in build tools such as Ant, Maven, and Gradle.
  • We add a top-level Describe container using Ginkgo4J’s Describe(String title, ExecutableBlock block) method. The top-level braces {} trick (an instance initializer block) allows us to evaluate the Describe without having to wrap it in a method.
  • The 2nd argument to the Describe, () -> {}, is a lambda expression providing the implementation of the ExecutableBlock functional interface.

The lambda expression passed as the 2nd argument to Describe will contain our specs. So let's add some now to test our Book class.


  private Book longBook;
  private Book shortBook;
  {
    Describe("Book", () -> {
      BeforeEach(() -> {
        longBook = new Book("Les Miserables", "Victor Hugo", 1488);
        shortBook = new Book("Fox In Socks", "Dr. Seuss", 24);
      });

      Context("Categorizing book length", () -> {
        Context("With more than 300 pages", () -> {
          It("should be a novel", () -> {
            assertThat(longBook.categoryByLength(), is("NOVEL"));
          });
        });

        Context("With fewer than 300 pages", () -> {
          It("should be a short story", () -> {
            assertThat(shortBook.categoryByLength(), is("NOVELLA"));
          });
        });
      });
    });
  }

Let’s break this down:

  • Ginkgo4J makes extensive use of lambdas to allow you to build descriptive test suites.
    You should make use of Describe and Context containers to expressively organize the behavior of your code.
  • You can use BeforeEach to set up state for your specs.  You use It to specify a single spec.
    In order to share state between a BeforeEach and an It you must use member variables.
  • In this case we use Hamcrest’s assertThat syntax to make expectations on the categoryByLength() method.

Assuming a Book model with this behavior, running this JUnit test in Eclipse (or IntelliJ) yields a green, passing test run.

Success!

Focussed Specs

It is often convenient, when developing, to be able to run a subset of specs.  Ginkgo4J allows you to focus individual specs or whole containers of specs programmatically by adding an F in front of your Describe, Context, and It:


FDescribe("some behavior", () -> { ... })
FContext("some scenario", () -> { ... })
FIt("some assertion", () -> { ... })

doing so instructs Ginkgo4J to only run those specs.  To run all specs, you’ll need to go back and remove all the Fs.

Parallel Specs

Ginkgo4J has support for running specs in parallel. It does this by spawning separate threads and dividing the specs evenly among those threads. Parallelism is on by default and will use 4 threads. If you wish to modify this, you can add an annotation to your test class:

@Ginkgo4jConfiguration(threads=1)

which will instruct Ginkgo4J to run a single thread.

Spring Support

Ginkgo4J also offers native support for Spring. To test a Spring application context, simply replace @RunWith(Ginkgo4jRunner.class) with @RunWith(Ginkgo4jSpringRunner.class) and initialize your test class's Spring application context in exactly the same way you normally would when using Spring's SpringJUnit4ClassRunner.


  @RunWith(Ginkgo4jSpringRunner.class)
  @SpringApplicationConfiguration(classes = Ginkgo4jSpringApplication.class)
  public class Ginkgo4jSpringApplicationTests {

  @Autowired
  HelloService helloService;
  {
      Describe("Spring intergation", () -> {
        It("should be able to use spring beans", () -> {
          assertThat(helloService, is(not(nullValue())));
        });

        Context("hello service", () -> {
          It("should say hello", () -> {
            assertThat(helloService.sayHello("World"), is("Hello World!"));
          });
        });
     });
  }

  @Test
  public void noop() {
  }
  }

The noop @Test method is required, as Spring's JUnit runner requires at least one test method.

Trying it out for yourself

Please feel free to try it out on your Java projects. For a maven project add:

<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>ginkgo4j</artifactId>
    <version>1.0.0</version>
</dependency>

or for a Gradle project add:

compile 'com.github.paulcwarren:ginkgo4j:1.0.0'

for others see here.

How to Deploy Diego in a Hybrid Environment

Deploying Cloud Foundry Diego in a Virtual and Bare Metal Environment

In my last post, “Deploying Cloud Foundry in a Hybrid Environment”, I showed you how to deploy DEA runners in a bare-metal environment. However, Cloud Foundry now has the new and exciting Diego runtime! So now I will guide you through deploying Diego Cells on bare metal machines. When we're all done, your environment will have Diego Cells running on bare metal alongside the rest of your Cloud Foundry deployment.

Creating Cloud Foundry Applications with Persistence

Traditional & Cloud Native Applications with Persistence

Cloud Foundry and 12-factor applications are great for creating cloud native applications, and they have become the standard. But you and I don't just have to worry about our new 12-factor apps; we also have legacy applications in our family. Our position is that your traditional apps and your new cool 12-factor apps should both be able to experience the benefits of running on Cloud Foundry. So, we have worked to create a world where your legacy apps and new apps can live together.

In the past, Cloud Foundry applications could not use any filesystem or block storage. That totally makes sense, given that Cloud Foundry apps are executed in elastic containers, which can go away at any time. And if that happens, any data written to the local filesystem of the container is wiped.

If we can externalize persistent storage to be a service – and bind and unbind those services to Cloud Foundry applications – a lot more apps can run in Cloud Foundry. For example, heavy data access apps, like databases and video editing software, can now access extremely fast storage AND at the same time experience the scalability and reliability provided by Cloud Foundry.

Allowing Cloud Foundry applications to have direct access to persistence opens a lot of doors for developers. Traditional applications that require persistence can migrate to Cloud Foundry much more easily. Big data analytics applications can now use persistence to perform indexing and calculation.

Traditionally, a lot of the data services consumed by Cloud Foundry applications, such as MySQL, Cassandra, and so on, need to be deployed by Bosh as virtual machines. With persistence, we can start looking at bringing these services into Cloud Foundry itself, or at creating the next generation of cloud native data services.

What can Developers Expect?

When developers browse the Cloud Foundry marketplace using cf marketplace, they will see services that can offer their applications persistence.

The details of the service plan can be seen by running cf marketplace -s ScaleIO.

This is a plan that offers 2GB of storage for your Cloud Foundry applications. Let's sign up for it by running cf create-service scaleio small my-scaleio-service1.

When we create the service instance, the ScaleIO Service Broker goes into ScaleIO and creates a 2GB volume. We are now ready to bind this new service to our app. To demonstrate the functionality, we have created a very simple application that writes to and reads from the filesystem.

After a cf bind-service call, the storage is mounted as a directory in the application container, and the path is exposed to the application through an environment variable in the service binding. For example:

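You can inspect that variable yourself with cf env. The volume_mounts and container_path names in the comment below are an illustrative sketch of what the binding entry can look like; the exact JSON layout depends on your broker and CF version:

cf env my-app

# Illustrative sketch only: the service binding includes an entry roughly like
#   "volume_mounts": [{"container_path": "/var/vcap/data/my-scaleio-service1", "mode": "rw"}]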

Based on the container_path variable, the application can read and write as if it’s a local filesystem.

 
