Archive for the ‘Persistence’ Category

Running Legacy Apps on CloudFoundry with NFS

How to re-platform your apps and connect to existing shared volumes using CloudFoundry Volume Services

This week the Cloud Foundry Diego Persistence team released the 1.0 version of our nfs-volume-release for existing NFS data volumes.  This Bosh release provides the service broker and volume driver components necessary to quickly connect Cloud Foundry deployed applications to existing NFS file shares.

In this post, we will take a look at the steps required to add the nfs-volume-release to your existing Cloud Foundry deployment, and the steps required after that to get your existing file system based application moved to Cloud Foundry.

Deploying nfs-volume-release to Cloud Foundry

If you are using OSS Cloud Foundry, you’ll need to deploy the service broker and driver into your Cloud Foundry deployment.  To do this, you will need to colocate the nfsv3driver on the Diego cells in your Cloud Foundry deployment, and then run the NFS service broker either as a Cloud Foundry application or as a BOSH deployment.

Detailed instructions for deploying the driver are here.

Detailed instructions for deploying the broker are here.

If you are using PCF, nfs-volume-release is built in.  As of PCF 1.10, you can deploy the broker and driver through simple checkbox configuration in the Advanced features tab in Ops Manager.  Details here.

Moving your application into Cloud Foundry

There are a range of issues you might hit when moving a legacy application from a single server context into Cloud Foundry, and most of them are outside the scope of this article.  See the last section of this article for a good reference discussing how to migrate more complex applications.  For the purposes of this article we’ll focus on a relatively simple content application that’s already well suited to run in CF except that it requires a file system.  We’ll use servocoder/RichFileManager as our example application.  It supports a couple different HTTP backends, but we’ll use the PHP backend in this example.

Once you have cloned the RichFileManager repository and followed the setup instructions, you should theoretically be able to run the application in Cloud Foundry’s PHP buildpack with a simple cf push from the RichFileManager root directory:
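Assuming you run the push from the repository root and pick an app name (richfilemanager is just a placeholder here), that looks something like:

    cf push richfilemanager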

But RichFileManager requires the gd extension, which isn’t included by default in the PHP buildpack.  If we push the application as-is, file upload operations will fail when RichFileManager tries to create thumbnail images for uploaded files.  To fix this, we need to create a .bp-config directory in the root folder of our application and put a file named options.json in it with the following content:
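A minimal options.json that adds gd to the list of compiled extensions might look like the sketch below; the exact extension list depends on your buildpack version, so adjust as needed:

    {
        "PHP_EXTENSIONS": ["bz2", "zlib", "curl", "mcrypt", "gd"]
    }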

Re-pushing the application fixes the problem.  Now we are able to upload files and use all the features of RichFileManager.

But we aren’t done yet! By default, the RichFileManager application stores uploaded file content in a subdirectory of the application itself.  As a result, any file data will be treated as ephemeral by Cloud Foundry and discarded when the application restarts.  To see why this is a problem, upload some files, and then type:
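For example, again using our placeholder app name:

    cf restart richfilemanager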

When you refresh the application in your browser, you’ll see that your uploaded files are gone!  That’s why you need to bind a volume service to your application.

In order to do that, we first need to tweak the application a little to tell it that we want to put files in an external folder.  Inside the application, open connectors/php/config.php in your editor of choice, and change the value for “serverRoot” to false.  Also set the value of “fileRoot” to “/var/vcap/data/content”.  (As of today, Cloud Foundry has the limitation that volume services cannot create new root-level folders in the container.  Soon that limitation will be lifted, but in the meantime, /var/vcap/data is a safe place to bind our storage directory to.)

Now push the application again:
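As before, with the placeholder app name:

    cf push richfilemanager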

When you go back to the application, you should see that it is completely broken and hangs waiting to get content.  That’s because we told it to use a directory that doesn’t yet exist.  To fix that, we need to create a volume service and bind it to our application.  You can follow the instructions in the nfs-volume-release documentation to set up an NFS test server in your environment, or if you already have an NFS server available (for example, Isilon, ECS, NetApp or the like) you can skip the setup steps and go directly to the service broker registration step.  Once you have created a volume service instance, bind that service to your application:
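The commands look roughly like this; the service name (nfs), plan name (Existing), share address, and mount path are examples, so substitute the values that match your broker, your NFS server, and the fileRoot we configured above:

    cf create-service nfs Existing myVolume -c '{"share":"nfsserver.example.com/export/vol1"}'
    cf bind-service richfilemanager myVolume -c '{"uid":"1000","gid":"1000","mount":"/var/vcap/data/content"}'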

If you are using an existing NFS server, you will likely need to specify different values for uid and gid.  Pick values that correspond to a user with write access to the share you’re using.

Now restage the application:
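Again with the placeholder app name:

    cf restage richfilemanager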

You should see that the application now works properly again.  Furthermore, you can now “cf restart” your application, and “cf scale” it to run multiple instances, and it will continue to work and to serve up the same files.

Caveats

Volume services enable filesystem-based applications to overcome a major barrier to cloud deployment, but they will not enable all applications to run seamlessly in the cloud.  Applications that rely on transactions across HTTP requests, or that otherwise store state in memory, will still fail to run properly when scaled out to more than one instance in Cloud Foundry.  CF provides best-effort session stickiness for any application that sets a JSESSIONID cookie, but no guarantee that traffic will not get routed to another instance.

More detail on steps to make complex applications run in the cloud can be found in this article.

Getting Started with Cloud Foundry Volume Services

Trying out Persistent File Systems in Your Cloud Foundry Applications with PCFDev

This week, the PCFDev team released a new version of PCFDev that includes local-volume services out of the box.  This new release gives us a way to get up and running with volume services that is an order of magnitude easier than anything we had before. We thought it would be a good time to write a post detailing the steps to try out volume services with your Cloud Foundry applications, now that the post doesn’t need to be quite so long.

1. Install PCFDev

(Time: 30-90 minutes, depending on your internet connection speed, and what you already have installed.)

Instructions for PCFDev installation can be found here.  Installation requires VirtualBox and the Cloud Foundry CLI, and you will need to have (or create) an account on Pivotal Network.  Don’t be deterred.  It’s free and easy.  If you already have PCFDev, make sure that you have version 0.22.0 or later.

Once you have installed PCFDev and successfully started it up, you should see output confirming that PCF Dev is running.


Now log in and select the pcfdev-org space:
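With a default PCFDev installation, that is something like the following; the API endpoint, credentials, and space name below are the PCFDev defaults, so adjust if yours differ:

    cf login -a https://api.local.pcfdev.io --skip-ssl-validation -u admin -p admin -o pcfdev-org -s pcfdev-space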

2. Create a New Volume

(Time: 1 minute)

If you have the right version of PCFDev, then the local-volume service broker should already be installed.  You can tell this by typing
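That is:

    cf marketplace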

You should see local-volume in the list of available services.


Use cf create-service to create a new file volume that you can bind to your application:
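For example; the plan name free-local-disk is what the local-volume broker offered at the time, so check cf marketplace for the exact plan name and pick any service instance name you like:

    cf create-service local-volume free-local-disk my-volume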

3. Push an Application

(Time: 5 minutes)

For the purposes of this blog post, we’ll use the “pora” test application that we use in our persi CI pipeline. “Pora” is the persistent version of the Cloud Foundry acceptance test “Dora” application.  It is a simple “hello world” app that writes a message into a file, and then reads it back out again.

To get the Pora app, first clone the persi-acceptance-tests github repo:
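For example (the repo lives in the cloudfoundry org):

    git clone https://github.com/cloudfoundry/persi-acceptance-tests.git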

Now change to the pora folder, and push the app to cf with the “no-start” option:
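At the time of writing the app lives under assets/pora in that repo; adjust the path if it has moved:

    cd persi-acceptance-tests/assets/pora
    cf push pora --no-start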

4. Bind the Service to the Application and Start the App

(Time: 5 minutes)

The cf “bind-service” command makes our new volume available to the Pora application.
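Using the service instance we created above:

    cf bind-service pora my-volume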

Now start the pora application:
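That is:

    cf start pora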

Once the app is started, you can use curl with the reported url to make sure it is reachable.  The default endpoint for the application will just report back the instance index for the application:
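For example, using the route reported by cf push (with PCFDev this will look something like pora.local.pcfdev.io):

    curl http://pora.local.pcfdev.io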

To test the persistence feature, add “/write” to the end of the url.  This will cause the pora app to pull the location of the shared volume from the VCAP_SERVICES environment variable, create a file, write a message into it, and then read the message back out:
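Again substituting your own route:

    curl http://pora.local.pcfdev.io/write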

5. Use Persistent Volumes with Your Own Application

By default, the volume service broker will generate a container mount path for your mounted volume, and it will pass that path into your application via the VCAP_SERVICES environment variable. You can see this when you type “cf env pora” which produces output like this:
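The exact output depends on your broker and CLI version, but the relevant piece of VCAP_SERVICES is sketched below; the container_path value is generated for you, and the surrounding field names may vary slightly:

    {
     "local-volume": [
      {
       "name": "my-volume",
       "volume_mounts": [
        {
         "container_path": "/var/vcap/data/<some-guid>",
         "mode": "rw"
        }
       ]
      }
     ]
    }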

In your application, you can parse the VCAP_SERVICES environment variable as JSON and determine the value of “container_path” to find the location of your shared volume in the container, or simply parse out the path with a regular expression.  You can see an example of this code flow in the Pora application here.

If it isn’t practical to connect to an arbitrary location in your application for some reason (e.g. the directory is hard coded everywhere, your code is precompiled, or you are reluctant to change it), then you can also tell the broker to put the volume in a specific location when you bind the application to the service.  To do that, unbind the service, and then rebind it using the -c option.  This will create any necessary directories on the container, and put the volume mount in the specific path you need:
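A sketch of that rebind, using the path referenced in the next paragraph:

    cf unbind-service pora my-volume
    cf bind-service pora my-volume -c '{"mount":"/my/specific/path"}'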

Type “cf restage pora” to restart the application in this new configuration, and you should observe that the application operates as before, except that it now accesses the shared volume from “/my/specific/path”.  You can also see the new path transmitted to the application by typing “cf env pora”.

6. If You Need to Copy Data

If you need to copy existing data into your new volume in order for your application to use it, you will need to get the data onto the PCFDev host, and then copy it into the source directory that the local-volume service uses to mimic a shared volume.  In a real-world scenario, these steps wouldn’t be necessary, because the volume would come from a real shared filesystem that you could mount and copy content to, but the local-volume service is a test service that mimics shared volumes for simple Cloud Foundry deployments.

The key that PCFDev uses for scp/ssh is located in ~/.pcfdev/vms/key.pem.  This file must be tweaked to reduce its permissions before it can be used by scp:
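That is:

    chmod 600 ~/.pcfdev/vms/key.pem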

Now invoke scp to copy your content across to the PCFDev VM.  The example below copies a file named “assets.tar” to the /var/vcap/data directory on the VM:
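The example assumes the PCFDev VM is reachable at its default address of 192.168.11.11 and accepts ssh as the vcap user; adjust both if your setup differs:

    scp -i ~/.pcfdev/vms/key.pem assets.tar vcap@192.168.11.11:/var/vcap/data/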

Now, use “cf dev ssh” to open an SSH session into the VM, and cd to /var/vcap/data; you should find your file there.

“ls volumes/local/_volumes” gives us a listing of the volumes that have been created.  If you are following this tutorial, there should be only one directory in there, with a long ugly name.  Move the file into that directory, and then exit the SSH session, as shown in the sketch below.
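Putting those steps together, the session looks something like this; your volume directory name will differ:

    cf dev ssh
    cd /var/vcap/data
    ls volumes/local/_volumes
    mv assets.tar volumes/local/_volumes/<long-ugly-directory-name>/
    exit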

Finally, invoke “cf ssh pora” to open an SSH session into the Cloud Foundry app.  This will allow you to verify that the data is available to your application.  If you used the -c option to specify a mount path, you should find your content there.  If not, you will find it under “/var/vcap/data/{some-guid}”:
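For example:

    cf ssh pora
    ls /my/specific/path     # if you bound with -c '{"mount": ...}'
    ls /var/vcap/data/*/     # otherwise, look under the generated guid directory
    exit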

7. Where to Find More Information

The best place to reach the CF Diego Persistence team is on our Slack channel here: https://cloudfoundry.slack.com/messages/persi/. We’re happy to hear from you with any questions you have, and to help steer you in the right direction.

If you don’t already have an account on CF slack, you can create an account here: https://cloudfoundry.slack.com/signup

For an introduction to Volume Services in Cloud Foundry, refer to this document: http://bit.ly/2gHPVJM

Or watch our various presentations this year:

CF Summit Frankfurt 2016: https://www.youtube.com/watch?v=zrQPns47xho

Spring One Platform 2016: https://www.youtube.com/watch?v=VisP5ebZoWw

CF Summit Santa Clara 2016: https://www.youtube.com/watch?v=ajNoPi1uMjQ

Road trip to Persistence on CloudFoundry

Chapter 2 – Bosh

Nguyen Thinh

Bosh is a deployment tool that can provision VMs on different IaaSes such as AWS, OpenStack, vSphere, and even bare metal. It monitors the health of the VMs and keeps track of all the processes that it deploys to them. In this tutorial, we will focus on how to use Bosh with vSphere; however, you can apply the same technique to your IaaS too. The reason we talk about Bosh in this road trip is that we use it to deploy Cloud Foundry.


Table of Contents


1. How it works

2. Install Bosh

  1. Create a Bosh Director Manifest
  2. Install
  3. Verify Installation

3. Configure Bosh

  1. Write your Cloud Config
  2. Upload cloud config to bosh director

4. Use Bosh

  1. Create a deployment Manifest
  2. Upload stemcell and redis releases
  3. Set deployment manifest and deploy it
  4. Interact with your newly deployed redis server

1. How it works

Let’s say you want to bring up a VM that contains Redis on vSphere. You provide Bosh a vSphere stemcell (an operating system image) and a Redis Bosh release (installation scripts), and Bosh then does the job for you.


Are you excited yet? Let’s get started on how to install Bosh on your IAAS and use it.

2. Install Bosh

In order to use Bosh with vSphere, you will need to deploy a Bosh Director VM.

1. Create a Bosh Director Manifest

This manifest describes the director VM’s specs.  Copy the example manifest from the docs and place it on your machine.  Modify the networks section to match that of your vSphere environment.

2. Install

After configuring your deployment manifest, install the bosh-init tool from the docs.  Then type the following command to deploy a Bosh Director VM:
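Assuming you saved the director manifest as bosh.yml:

    bosh-init deploy ./bosh.yml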

3. Verify Installation

After completing the installation, download the Bosh client to interact with the Bosh Director.  If you have any installation problems, please refer to the docs.

Then type:
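A sketch using the old Bosh CLI, with the director IP you assigned in the manifest:

    bosh target <DIRECTOR_IP>
    bosh login
    bosh status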

If these commands succeed, you now have a functioning Bosh Director.

3. Configure Bosh

To configure the Bosh Director, you pass it a configuration file called a Bosh cloud config.  It allows you to define the resources, networks, and VM types used by all of your Bosh VM deployments.


1. Write your Cloud Config

To write your cloud config, copy the vSphere cloud config example from the tutorial and modify it accordingly.

2. Upload cloud config to bosh director

After defining a cloud config, you are then able to deploy VMs on Bosh.
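Assuming you saved it as cloud-config.yml, the upload looks like:

    bosh update cloud-config cloud-config.yml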

4. Use Bosh

Let’s deploy a simple Redis server VM on vSphere using Bosh.

1. Create a deployment Manifest

In this deployment manifest, I am deploying a Redis VM using a Redis release and an Ubuntu stemcell.  I want the VM type to be medium and the VM to be placed in availability zone z1.
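A minimal sketch of such a manifest is shown below; the release and job names depend on the Redis release you use, and the az, vm_type, and network names must match your cloud config:

    name: redis-deployment

    releases:
    - name: redis
      version: latest

    stemcells:
    - alias: ubuntu
      os: ubuntu-trusty
      version: latest

    update:
      canaries: 1
      max_in_flight: 1
      canary_watch_time: 30000-240000
      update_watch_time: 30000-240000

    instance_groups:
    - name: redis
      azs: [z1]
      instances: 1
      vm_type: medium
      stemcell: ubuntu
      networks:
      - name: default
      jobs:
      - name: redis
        release: redis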

2. Upload stemcell and redis releases
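With the old Bosh CLI you can upload both directly from a URL or from a local file; the URLs below are placeholders, so grab the real ones from bosh.io:

    bosh upload stemcell <ubuntu-vsphere-stemcell-url>
    bosh upload release <redis-release-url>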

Alternatively, you can download them and run bosh upload locally.

3. Set deployment manifest and deploy it
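Assuming the manifest above is saved as redis.yml:

    bosh deployment redis.yml
    bosh deploy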

4. Interact with your newly deployed redis server

Find your VM’s IP using:
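That is:

    bosh vms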

Connect to your redis server
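Using the standard redis-cli, and assuming the release listens on the default Redis port 6379:

    redis-cli -h <REDIS_VM_IP> -p 6379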

Test it
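A quick smoke test from the redis-cli prompt:

    set roadtrip persistence
    get roadtrip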

Road trip to Persistence on CloudFoundry

Laying the framework with ScaleIO

Peter Blum

Over the past few months the Dojo has been working with all types of storage to enable persistence within CloudFoundry. Over the next few weeks we are going to be road tripping through how we enabled EMC storage on the CloudFoundry platform. For the first leg of the journey, we start laying the framework by building our motorcycle, a ScaleIO cluster, which will carry us through the journey. ScaleIO is a software-defined storage service that is flexible enough to allow dynamic scaling of storage nodes and reliable enough to provide enterprise-level confidence.

What is ScaleIO – SDS, SDC, & MDM!?

ScaleIO, as we already pointed out, is software-defined block storage. In layman’s terms, there are two huge benefits I see with using ScaleIO. Firstly, the actual storage backing ScaleIO can be dynamically scaled up and down by adding and removing SDS (ScaleIO Data Storage) servers/nodes. Secondly, SDS nodes can run alongside the applications running on a server, utilizing any additional free storage your applications are not using. These two points allow for a fully automated datacenter and a terrific base for block storage in CloudFoundry.

Throughout this article we will use SDS, SDC, and MDM, so let’s define them for some deeper understanding!
All three of these terms are actually services running on a node. These nodes can either be a hypervisor (in the case of vSphere), a VM, or a bare metal machine.

SDS – ScaleIO Data Storage

This is the base of ScaleIO. SDS nodes store information locally on storage devices specified by the admin.

SDC – ScaleIO Data Client

If you intend to use a ScaleIO volume, you are required to become an SDC. To become an SDC, you are required to install a kernel module (.ko) which is compiled specifically for your operating system version. These can all be found on EMC Support. In addition to the kernel module that gets installed, there will also be a handy binary, drv_cfg. We will use this later on, but make sure you have it!

MDM – Meta Data Manager

Think of the MDMs as the mothers of your ScaleIO deployment. They are the most important part of your ScaleIO deployment: they allow access to the storage (by mapping volumes from SDSs to SDCs), and most importantly they keep track of where all the data is living. Without the MDMs you lose access to your data, since “Mom” isn’t there to piece together the blocks you have written! Side note: make sure you have at least 3 MDM nodes. This is the smallest number allowed, since it is required to have one MDM each for Master, Slave, and Tiebreaker.

How to Install ScaleIO

The number of different ways to install ScaleIO is unlimited! In the Dojo we used two separate ways, each with their ups and downs. The first, “The MVP”, is simple and fast, and it will get you the quickest minimum viable product. The second method, “For the Grownups”, will provide you with a start for a fully production-ready environment. Either of these will suffice for the rest of our road tripping blog.

The MVP

This process uses a Vagrant box to deploy a ScaleIO cluster. Using the EMC {code} ScaleIO Vagrant GitHub repository, check out the ReadMe to install ScaleIO in less than an hour (depending on your internet connection, of course :smirk: ). Make sure to read through the Clusterinstall section of the ReadMe to understand the two different ways of installing the ScaleIO cluster.

For the GrownUps

This process will deploy ScaleIO on four separate Ubuntu machines/VMs.

Check out the ScaleIO 2.0 Deployment Guide for more information and help.

  • Go to EMC Support.
    • Search for ScaleIO 2.0.
    • Download the correct ScaleIO 2.0 software package for your OS/architecture type:
      • Ubuntu (we only support Ubuntu currently in CloudFoundry)
      • RHEL 6/7
      • SLES 11 SP3/12
      • OpenStack
    • Download the ScaleIO Linux Gateway.
  • Extract the *.zip files downloaded.

Prepare Machines For Deploying ScaleIO

  • Minimal Requirements:
    • At least 3 machines for starting a cluster.
      • 3 MDM’s
      • Any number of SDC’s
    • Can use either a virtual or physical machine
    • The OS must be installed and configured before installing the cluster, including the following:
      • SSH must be installed, and be available for root. Double-check that passwords are properly provided to configuration.
      • libaio1 package should be installed as well. On Ubuntu: apt-get install libaio1

Prepare the IM (Installation Manager)

  • On the local machine, SCP the Gateway zip file to the Ubuntu machine.
  • SSH into the machine that you intend to install the Gateway and Installation Manager on.
  • Install Java 8.0.
  • Install unzip and unzip the file.
  • Run the installer on the unzipped Debian package.
  • Access the gateway installer GUI on a web browser using the Gateway Machine’s IP. http://{$GATEWAY_IP}
  • Log in using admin and the password you used when running the Debian package earlier.
  • Read over the install process on the Home page and click Get Started
  • Click browse and select the following packages to upload from your local machine. Then click Proceed to install
    • XCache
    • SDS
    • SDC
    • LIA
    • MDM

    Installing ScaleIO is done through a CSV. For our demo environment we run the minimal ScaleIO install. We built our install CSV from the minimal template you will see on the Install page. You might need to build your own version to suit your needs.

  • To manage the ScaleIO cluster you utilize the MDM, so make sure that you set a password for the MDM and LIA services on the Credentials Configuration page.
  • NOTE: For our installation, we had no need to change advanced installation options or configure log server. Use these options at your own risk!
  • After submitting the installation form, a monitoring tab should become available to monitor the installation progress.
    • Once the Query Phase finishes successfully, select start upload phase. This phase uploads all the correct resources needed to the nodes indicated in the CSVs.
    • Once the Upload Phase finishes successfully, select start install phase.
    • Installation phase is hopefully self-explanatory.
  • Once all steps have completed, the ScaleIO Cluster is now deployed.

Using ScaleIO

  • To start using the cluster with the ScaleIO CLI, you can follow the steps below, which are copied from the post-installation instructions.

    To start using your storage:
    Log in to the MDM:

    scli --login --username admin --password <password>

    Add SDS devices: (unless they were already added using a CSV file containing devices)
    You must add at least one device to at least three SDSs, with a minimum of 100 GB free storage capacity per device.

    scli --add_sds_device --sds_ip <IP> --protection_domain_name default --storage_pool_name default --device_path /dev/sdX or D,E,...

    Add a volume:

    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb <SIZE> --volume_name <NAME>

    Map a volume:

    scli --map_volume_to_sdc --volume_name <NAME> --sdc_ip <IP>

Managing ScaleIO

When using ScaleIO with CloudFoundry we will use the ScaleIO REST Gateway to manage the cluster. There are other ways to manage the cluster, such as the ScaleIO CLI and ScaleIO GUI, both of which are much harder for CloudFoundry to communicate with.

EOF

At this point you have a fully functional ScaleIO cluster that we can use with CloudFoundry and RexRay to deploy applications backed by ScaleIO storage! Stay tuned for our next blog post in which we will deploy a minimal CloudFoundry instance.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

 

 

Creating a Cloud Foundry Service Broker

Megan Murawski


12-factor apps are cool and Cloud Foundry is cool, but we don’t just have to worry about 12-factor apps.  We have legacy apps that we also need to pay attention to.  We believe there should be a way for all of your apps to enjoy the benefits of Cloud Foundry.  We have enabled this by implementing a Service Broker that binds an external storage service.  Before we talk about the creation of the Service Broker, we will describe the role of a Service Broker in Cloud Foundry.

Services are integrated with Cloud Foundry by implementing a documented API for which the Cloud Controller is the client; we call this the Service Broker API.  Service brokers advertise a catalog of service offerings and service plans, and interpret calls for provision (create), bind, unbind, and deprovision (delete).  Externalizing the backend services from the application in a PaaS provides a clear separation that can improve application development: developers only need to be able to connect to an external service to consume its APIs. In Cloud Foundry, this interface is “brokered” by the Service Broker.  Some examples of services which are essential to the 12-factor app are MySQL, Redis, RabbitMQ, and now ScaleIO!


A service broker sits in between the Cloud Foundry Runtime and the service itself. To create a service broker, you only need to implement the 5 Service Broker APIs: catalog, provision, deprovision, bind and unbind. The service broker essentially translates between the Cloud Controller API and the service’s own API to orchestrate creating a service instance (maybe provisioning a new database or creating a new user), providing the credentials to connect to the service, and disconnecting and deleting the service instance.

The APIs

Catalog is used to fetch the service catalog. This describes the service plans that are available with the backend and any cost.

Provision is used to create the service in the backend. Based on a chosen service plan, this call will actually create the service. In the case of persistence, this is provisioning storage to be used by your applications.

Bind will provide Cloud Foundry runtime with the credentials and configuration to access the service. This allows the application to connect to the service.

Unbind will remove the access credentials from the application’s environment.

Deprovision is used to delete the service on the backend. This could remove an account, delete a database, or in the case of persistence, delete the volume created on provision.
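To make those five calls concrete, here is a rough sketch of how the Cloud Controller exercises a broker over HTTP, using the standard Service Broker API paths; the host, credentials, and GUIDs are placeholders:

    # Catalog
    curl http://broker.example.com/v2/catalog -u admin:password -H "X-Broker-API-Version: 2.10"

    # Provision a service instance
    curl -X PUT http://broker.example.com/v2/service_instances/<instance-guid> \
         -u admin:password -d '{"service_id":"...","plan_id":"..."}'

    # Bind it to an application
    curl -X PUT http://broker.example.com/v2/service_instances/<instance-guid>/service_bindings/<binding-guid> \
         -u admin:password -d '{"service_id":"...","plan_id":"..."}'

    # Unbind
    curl -X DELETE "http://broker.example.com/v2/service_instances/<instance-guid>/service_bindings/<binding-guid>?service_id=...&plan_id=..." -u admin:password

    # Deprovision
    curl -X DELETE "http://broker.example.com/v2/service_instances/<instance-guid>?service_id=...&plan_id=..." -u admin:password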

 

Creating the Broker

To create the ScaleIO service broker, we made use of the EMC open source tool Libstorage for communicating with the ScaleIO backend. Libstorage is capable of managing many different storage backends, including ScaleIO, XtremIO, Isilon and VMAX, in a client/server model.  By adding an API layer on top of Libstorage, we were able to quickly translate between Cloud Foundry API calls and storage APIs on the backend to provide persistence as a service within CF. Like much of the work we do at the EMC Dojo, we have open sourced the Service Broker.  If you’d like to download it or get involved, check it out on GitHub!

Sneak Preview: Cloud Foundry Persistence Under the Hood

How Does Persistence Orchestration Work in Cloud Foundry?

A few weeks ago at the Santa Clara Cloud Foundry Summit we announced that Cloud Foundry will be supporting persistence. From a developer’s perspective, the user experience is smooth and very similar to using other existing services. Here is an article that describes the CF persistence user experience.

For those who are wondering how Cloud Foundry orchestrates persistence services, this article will provide a high-level overview of the architecture and user experience.

How Does it Work when Developer Creates Persistence Service?

Like other Cloud Foundry services, before an application can gain access to the persistence service, the user needs to first sign up for a service plan from the Cloud Foundry marketplace.

Initially, our Service Broker uses an Open Source technology called RexRay, which is a persistence orchestration engine that works with Docker.


Creating Service

When the service broker receives a request to create a new service plan, it would go into its backing persistence service provider, such as ScaleIO or Isilon, to create a new volume.

For example, the user can use:

cf create-service scaleio small my-scaleio1

Deleting Service

When a user is done with the volume and no longer needs the service plan, the user can make a delete-service call to remove the volume. When the service broker receives the request to delete the service plan, it would go into its backing persistence service provider to remove the volume and free up storage space.

For example, the user can use:

cf delete-service my-scaleio1

Binding Service

After a service plan is created, the user can then bind the service to one or multiple Cloud Foundry applications. When the service broker receives the request to bind an application, it includes a special flag in the JSON response to the Cloud Controller, so that the Cloud Controller and Diego know how to mount the directory at runtime. The runtime behavior is described in more detail below.

 

How Does it Work in Runtime?

Cloud Foundry executes each instance of an application in a container-based runtime environment called Diego. For Persistence Orchestration, a new project called Volman (short for Volume Manager) has become the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up special flags from the Cloud Controller, invoking a Volume Driver to mount a volume into a Diego Cell, and then providing access to the directory from the runtime container.


 

Creating Cloud Foundry Applications with Persistence

Traditional & Cloud Native Applications with Persistence

Cloud Foundry and 12-factor applications are a great way to create cloud-native applications and have become the standard. But you and I don’t just have to worry about our new 12-factor apps; we also have legacy applications in our family.  Our position is that your traditional apps and your new cool 12-factor apps should be able to experience the benefits of running on Cloud Foundry.  So, we have worked to create a world where your legacy apps and new apps can live together.

cf-logo

In the past, Cloud Foundry applications could not use any filesystem or block storage. That totally makes sense, given that Cloud Foundry apps are executed in elastic containers, which can go away at any time. And if that happens, any data written to the local filesystem of the containers will be wiped.

If we can externalize persistent storage to be a service – and bind and unbind those services to Cloud Foundry applications – a lot more apps can run in Cloud Foundry. For example, heavy data access apps, like databases and video editing software, can now access extremely fast storage AND at the same time experience the scalability and reliability provided by Cloud Foundry.

Allowing Cloud Foundry applications to have direct persistence access opens a lot of doors for developers. Traditional applications that require persistence can migrate to Cloud Foundry a lot easier. Big Data Analytics applications can now use persistence to perform indexing and calculation.

Traditionally, a lot of data services consumed by Cloud Foundry applications, such as MySQL, Cassandra, etc., needed to be deployed by Bosh as virtual machines. With persistence, we can start looking at bringing these services to run in Cloud Foundry, or at creating the next generation of cloud-native data services.

What can Developers Expect?

When developers come to the Cloud Foundry marketplace by using cf marketplace, they will see services that can offer their applications persistence.


The details of the service plan can be seen by cf marketplace -s ScaleIO


This is a plan that would offer 2GB of storage for your Cloud Foundry applications. Let’s sign up for it by running cf create-service scaleio small my-scaleio-service1


By creating a service instance, the ScaleIO Service Broker goes into ScaleIO and creates a volume of 2GB. We are now ready to bind this new service to our app. To demonstrate the functionality, we have created a very simple application that will write to and read from the filesystem.


After a cf bind-service call, the storage will be mounted as a directory inside the application’s container, and the path will be reported in an environment variable. For example:

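An illustrative sketch of the relevant VCAP_SERVICES entry; the service label, instance name, generated path, and exact field names will differ by environment and release version:

    {
     "scaleio": [
      {
       "name": "my-scaleio-service1",
       "volume_mounts": [
        {
         "container_path": "/var/vcap/data/<volume-guid>",
         "mode": "rw"
        }
       ]
      }
     ]
    }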

Based on the container_path variable, the application can read and write as if it’s a local filesystem.

 
