This week the Cloud Foundry Diego Persistence team released version 1.0 of nfs-volume-release for existing NFS data volumes.  This BOSH release provides the service broker and volume driver components necessary to quickly connect Cloud Foundry deployed applications to existing NFS file shares.

In this post, we will look at the steps required to add nfs-volume-release to your existing Cloud Foundry deployment, and then at the steps required to move your existing file-system-based application into Cloud Foundry.

Deploying nfs-volume-release to Cloud Foundry

If you are using OSS Cloud Foundry, you’ll need to deploy the service broker and driver into your Cloud Foundry deployment.  To do this, you will need to colocate the nfsv3driver on the Diego cells in your Cloud Foundry deployment, and then run the NFS service broker either as a Cloud Foundry application or as a BOSH deployment.
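As a rough sketch, colocating the driver with BOSH might look like the following.  The deployment name, manifest path, and ops-file name are placeholders; in particular, the ops file shown is an assumption about what your cf-deployment version provides, so check your own operations directory:

```shell
# Upload the release, then redeploy with the nfsv3driver job added to the
# Diego cell instance group (deployment name and paths are placeholders).
bosh upload-release https://bosh.io/d/github.com/cloudfoundry/nfs-volume-release
bosh -d cf deploy cf-deployment.yml -o operations/enable-nfs-volume-service.yml
```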

Detailed instructions for deploying the driver are here.

Detailed instructions for deploying the broker are here.
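Once the broker is running, you register it with Cloud Foundry and enable access to its plans.  A sketch with placeholder credentials and route ("nfs" is the service name the broker advertises; confirm yours with cf service-access):

```shell
# Placeholder broker credentials and URL; substitute your broker's values.
cf create-service-broker nfsbroker broker-username broker-password https://nfsbroker.example.com
# Make the nfs service visible to developers.
cf enable-service-access nfs
```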

If you are using PCF, nfs-volume-release is built in.  As of PCF 1.10, you can enable the broker and driver through a simple checkbox in the Advanced Features tab of Ops Manager.  Details here.

Moving your application into Cloud Foundry

There are a range of issues you might hit when moving a legacy application from a single-server context into Cloud Foundry, and most of them are outside the scope of this article.  See the last section of this article for a good reference discussing how to migrate more complex applications.  For our purposes, we’ll focus on a relatively simple content application that is already well suited to run in CF, except that it requires a file system.  We’ll use servocoder/RichFileManager as our example application.  It supports a couple of different HTTP backends, but we’ll use the PHP backend in this example.

Once you have cloned the RichFileManager repository and followed the setup instructions, you should theoretically be able to run the application on Cloud Foundry’s PHP buildpack with a simple cf push from the RichFileManager root directory:

cf push -b php_buildpack rich-file-manager

But RichFileManager requires the gd extension, which isn’t included by default in the PHP buildpack.  If we push the application as-is, file upload operations will fail when RichFileManager tries to create thumbnail images for the uploaded files.  To fix this, we need to create a .bp-config directory in the root folder of our application and put a file named options.json in it with the following content:

{
  "PHP_EXTENSIONS": ["gd"]
}
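The steps above can be scripted from the application root directory (note that the PHP buildpack reads this file from the .bp-config directory):

```shell
# Create the buildpack configuration directory and the options file
# that enables the gd extension.
mkdir -p .bp-config
cat > .bp-config/options.json <<'EOF'
{
  "PHP_EXTENSIONS": ["gd"]
}
EOF
```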

Re-pushing the application fixes the problem.  Now we are able to upload files and use all the features of RichFileManager.

But we aren’t done yet! By default, the RichFileManager application stores uploaded file content in a subdirectory of the application itself.  As a result, any file data will be treated as ephemeral by Cloud Foundry and discarded when the application restarts.  To see why this is a problem, upload some files, and then type:

cf restart rich-file-manager

When you refresh the application in your browser, you’ll see that your uploaded files are gone!  That’s why you need to bind a volume service to your application.

In order to do that, we first need to tweak the application a little to tell it to put files in an external folder.  Inside the application, open connectors/php/config.php in your editor of choice and change the value of “serverRoot” to false.  Also set the value of “fileRoot” to “/var/vcap/data/content”.  (As of today, Cloud Foundry has the limitation that volume services cannot create new root-level folders in the container.  Soon that limitation will be lifted, but in the meantime, /var/vcap/data is a safe place to bind our storage directory.)

Now push the application again:

cf push rich-file-manager

When you go back to the application, you should see that it is completely broken and hangs waiting for content.  That’s because we told it to use a directory that doesn’t exist yet.  To fix that, we need to create a volume service and bind it to our application.  You can follow the instructions in the nfs-volume-release documentation to set up an NFS test server in your environment, or, if you already have an NFS server available (for example, Isilon, ECS, NetApp, or the like), you can skip the setup steps and go directly to the service broker registration step.  Once you have created a volume service instance, bind that service to your application:

cf bind-service rich-file-manager myVolume \
  -c '{"uid":"1000","gid":"1000","mount":"/var/vcap/data/content"}'

If you are using an existing NFS server, you will likely need to specify different values for uid and gid.  Pick values that correspond to a user with write access to the share you’re using.
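Putting the service steps together, the create-and-bind sequence looks roughly like this.  The “nfs” service and “Existing” plan names come from the nfs-volume-release broker (confirm them with cf marketplace), and the share address is a placeholder for your own NFS server and export path:

```shell
# Placeholder share address; point this at your NFS server's export.
cf create-service nfs Existing myVolume \
  -c '{"share":"nfsserver.example.com/export/vol1"}'
cf bind-service rich-file-manager myVolume \
  -c '{"uid":"1000","gid":"1000","mount":"/var/vcap/data/content"}'
```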

Now restage the application:

cf restage rich-file-manager

You should see that the application now works properly again.  Furthermore, you can now “cf restart” your application, or “cf scale” it to run multiple instances, and it will continue to work and to serve up the same files.
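For example, scaling out is a one-line command; because every instance mounts the same NFS share, uploads made through one instance are visible from the others:

```shell
# Run three instances of the application against the shared volume.
cf scale rich-file-manager -i 3
```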

Caveats

Volume services enable filesystem-based applications to overcome a major barrier to cloud deployment, but they will not enable all applications to run seamlessly in the cloud.  Applications that rely on transactions spanning HTTP requests, or that otherwise store state in memory, will still fail to run properly when scaled out to more than one instance in Cloud Foundry.  CF provides best-effort session stickiness for any application that sets a JSESSIONID cookie, but no guarantee that traffic will never be routed to another instance.

More detail on steps to make complex applications run in the cloud can be found in this article.
