Archive for the ‘Cloud Foundry’ Category

Oh look! It’s a Docker Container Running on CloudFoundry!

Peter Blum

Not many people know how simple it is to run and scale their containerized applications on CloudFoundry. Check out this simple and cool video showcasing how it can be done!

Cloud Foundry. Open Source. The Way. Dell EMC [⛩] Dojo.

 

A Fresh Developer’s take on Cloud Foundry

Why the third platform is the new way to iterate

Coming to the Dojo a little over two months ago for my first day of work at my first (out-of-college) job, I was mystified by how much understanding and knowledge I still had to gain about using the cloud platform as a development and deployment environment. After the first few weeks, we started developing a brand new application for internal use. There was no existing infrastructure, but there was a clear idea: a SQL database and a web portal for the UI. I was thrilled! I had worked on my fair share of web applications, so this would be a cakewalk.

Great! What were the IPs? What was the endpoint? What was the downtime for our environment on updates? What was our server technology? They said it was up to me. I began thinking of asking IT for a VM to spin up our database, scripts designed to back up the data with cron jobs, registering a DNS name for our website, and a potential duplicate set for our test environment; the technical aches and pains that are the first steps when building a web application. I then began asking what seemed the obvious question of who to go to for the resources, and I was told we would do it all ourselves. Wait, I thought, we were supposed to deploy and maintain the VMs, set up users for the database, download the applications, and handle the backup technology? That was easily another two or three days short term, and days of complications long term. Or so I thought.

Except…I hadn’t known that we were using Cloud Foundry, and the benefits that came with it. All of the pain points I used to face in setting up a functioning web app were moot. Instead of spending time configuring and maintaining environments, the Dojo, as a team, has put together a fully functioning web application with a production environment, architecture, an extensive automated testing harness, and automated deployment. All as painlessly as we might use GitHub.

Removing Development Roadblocks

When I thought of the web application, a typical architecture came to mind.

[Figure: A typical developer headache for a web application]

There are plenty of steps for our team, as developers, to maintain and care about here. But with Cloud Foundry, we can remove many of these concerns.

Let’s talk about databases first. Using service brokers provided through CF, we can simply bind a SQL service to our application. For extra sweetness, we can use a buildpack that detects that our server connects to a service-provided database. The buildpack will, at runtime, connect our application’s backend to a bound SQL database, no environment variables necessary.
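For example, provisioning and binding a database might look like this (a hedged sketch; the service and plan names depend on what your marketplace actually offers):

    cf marketplace                           # see which services and plans are offered
    cf create-service p-mysql 512mb my-db    # provision a database instance
    cf bind-service my-web-app my-db         # inject its credentials into the app's environment
    cf restage my-web-app                    # let the buildpack wire up the connection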

Servers and UI? No problem. Pushing applications to Cloud Foundry takes as few commands as pushing to git. We can also update and configure the application directly through a UI included in our environment. This means changing something we forgot to put in our three or four deployment commands doesn’t warrant another deployment. As far as firewalls go, using the proper security groups to avoid exposing sensitive endpoints is a few minutes behind the scenes of a UI, or a couple of commands in a terminal.
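To give a sense of how little ceremony is involved (the app name here is illustrative):

    cf push my-web-app         # stage, deploy, route, and start the app in one command
    cf scale my-web-app -i 3   # scale out to three instances just as easily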

Alright, so our development roadblocks are covered. What does all this mean for our Customers?

Rapid Integration and Feedback Loops

With our Test Driven Development and Continuous Integration mantras, we can rapidly iterate on our products. With Cloud Foundry, those rapid iterations allow us to take in customer feedback. Using our real product on something as quick to set up and easily accessible as the CF platform, we can respond to our customers’ concerns much more efficiently.

Allowing our customers a way to directly interact with our product like this is helpful not only for them, but for us, for a variety of reasons. Firstly, we’re able to get valuable feedback from them in a timely fashion, not after a feature has been done and untouched for two months (the less old code touching, the better). Additionally, we can test what our real runtime environment will be like for our production-level product, which means faster satisfaction for customers.

[Figure: Our development cycle, made simple with less setup behind the scenes]

Finally, we don’t have to worry about maintaining servers and VMs when they may go down. Cloud Foundry can sense when an app crashes and restart it for us. Same database, same DNS, and limited downtime. It doesn’t get much easier than that.

Final thoughts

Using Cloud Foundry also means we can simplify our development process by having a consistent API and a well-documented environment surrounding our application. When we rotate around our projects, people coming into our web application are using similar knowledge and practices to those they might use when deploying our Minecraft Docker containers.

It’s for the reasons in this article, and I’m sure for many more to come, that the cloud platform and Cloud Foundry have made development easier for us. They allow us to create better products, faster. We can get immediate feedback from customers and get stories accepted faster. For most developers, that’s a no-brainer.

Multi-Cloud MineCraft!

Minecraft + Docker + CloudFoundry + vSphere + RackHD + ScaleIO = Awesomeness

Peter Blum

If you didn’t know, VMworld begins next week! Here at the Dojo we thought we would show you a cool project we’ve been working on: MINECRAFT in the cloud!!

With all the clouds out there, both public and private, we wanted to bridge the gap between them. We took a public vSphere cloud and a private bare metal cloud, and layered CloudFoundry on top. Then, with a simple cf push, we spin up a Minecraft server from a Docker image. Hmm, I don’t know if I could fit more buzzwords into a sentence if I tried…anyhow, check it out!

Big shout out to VMware, RackHD, RexRay, Libstorage, and of course CloudFoundry for helping us make it possible. Also, here is a link to our GitHub (https://github.com/EMC-Dojo/CloudFoundry-MineCraft) where you can find the Docker image that you can run inside CloudFoundry for your own private Minecraft server.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

Using Docker Containers in Cloud Foundry

As we all know, we can push source code directly to CF, and CF will compile it and create a container to run our application. Life is so great with CF.

But sometimes, for some reason, such as our app needing a special setup or our wanting to run it on different platforms or infrastructures, we may already have a preconfigured container for our app. This won’t block our way to CF at all. This post will show you how to push Docker images to CF.

Enable docker feature for CF

We can turn on docker support with the following cf command:
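    cf enable-feature-flag diego_docker    # admin access required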

We can also turn it off with:
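    cf disable-feature-flag diego_docker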

Push docker image to CF

Unlike the normal way, CF won’t try to build our code and run it inside the image we specified. CF assumes that you have already put everything you need into your Docker image, so we have to rebuild the Docker image every time we push a change to our repository.

We also need to tell CF how to start our app inside the image by specifying the start command. We can either pass it as an argument to cf push or put it into manifest.yml, as below.
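A minimal sketch of such a manifest (the app name, image, and start command are placeholders; on older CLI versions the image is passed with cf push -o instead of in the manifest):

    ---
    applications:
    - name: my-docker-app
      memory: 512M
      instances: 1
      docker:
        image: ubuntu:14.04
      command: git clone <DEMO_REPO_URL> app && cd app && ./run.sh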

In this example, we are using an official Docker image from Docker Hub. In the start command, we clone our demo repo from GitHub, do some setup, and run our code.

Update Diego with private docker registry

If you are in the EMC network, you may not be able to use Docker Hub due to certificate issues. In this case, you need to set up a private Docker registry. The registry needs to be V2 for now. Also, you have to redeploy your CF or Diego with the changes shown below.
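The change is along these lines (a sketch only; the exact property path varies across Diego release versions):

    properties:
      garden-linux:
        insecure_docker_registry_list:
        - "12.34.56.78:9000"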

Replace 12.34.56.78:9000 with your own Docker registry IP and port.

Then, you need to create a security group so applications can reach your private Docker registry. You can put the definition of this security group into docker.json, as shown below.
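    [
      {
        "protocol": "tcp",
        "destination": "12.34.56.78",
        "ports": "9000"
      }
    ]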

And run:
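    cf create-security-group docker docker.json
    cf bind-running-security-group docker    # for running apps
    cf bind-staging-security-group docker    # for staging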

Now you can re-push to CF by running:
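    cf push my-docker-app -o 12.34.56.78:9000/my-image    # app and image names are illustrative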

Road trip to Persistence on CloudFoundry: Chapter 2

Nguyen Thinh

Road trip to Persistence on CloudFoundry

Chapter 2 – Bosh

Bosh is a deployment tool that can provision VMs on different IaaSes such as AWS, OpenStack, vSphere, and even bare metal. It monitors the health of the VMs and keeps track of all the processes it deploys to them. In this tutorial, we will focus on how to use Bosh with vSphere; however, you can apply the same techniques to your own IaaS. The reason we talk about Bosh in this road trip is that we use it to deploy Cloud Foundry.


Table of Contents


1. How it works

2. Install Bosh

  1. Create a Bosh Director Manifest
  2. Install
  3. Verify Installation

3. Configure Bosh

  1. Write your Cloud Config
  2. Upload cloud config to bosh director

4. Use Bosh

  1. Create a deployment Manifest
  2. Upload stemcell and redis releases
  3. Set deployment manifest and deploy it
  4. Interact with your newly deployed redis server

1. How it works

Let’s say you want to bring up a VM that contains Redis on vSphere. You provide Bosh a vSphere stemcell (an operating system image) and a Redis Bosh release (installation scripts), and Bosh does the job for you.


Are you excited yet? Let’s get started on how to install Bosh on your IaaS and use it.

2. Install Bosh

In order to use Bosh with vSphere, you will need to deploy a Bosh Director VM.

1. Create a Bosh Director Manifest

This manifest describes the director VM’s specs. Copy the example manifest from the docs and place it on your machine. Modify the networks section to match your vSphere environment.

2. Install

After configuring your deployment manifest, install the bosh-init tool from the docs. Then type the following command to deploy a Bosh Director VM:
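    bosh-init deploy ./bosh.yml    # bosh.yml is the director manifest created above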

3. Verify Installation

After completing the installation, download the Bosh client to interact with the Bosh Director. If you have any installation problems, please refer to the docs.

Then type
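    bosh target <DIRECTOR_IP>    # point the CLI at your new director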

If the command succeeds, you now have a functioning Bosh Director.

3. Configure Bosh

To configure the Bosh Director, you pass it a configuration file called the Bosh Cloud Config. It allows you to define the resources, networks, and VM types for all of your Bosh VM deployments.


1. Write your Cloud Config

To write your cloud config, copy the vSphere Cloud Config example here: tutorial and modify it accordingly.

2. Upload cloud config to bosh director

After defining a cloud config, upload it to the Bosh Director; you are then able to deploy VMs with Bosh.
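Assuming your file is named cloud-config.yml:

    bosh update cloud-config cloud-config.yml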

4. Use Bosh

Let’s deploy a simple Redis server VM on vSphere using Bosh.

1. Create a deployment Manifest

In this deployment manifest, I am deploying a Redis VM using the redis release and an Ubuntu stemcell. I want the VM type to be medium and the VM to be located in availability zone z1.
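A sketch of such a manifest (release, job, and network names depend on the redis release you use and on your cloud config; on older directors the instance_groups section is called jobs):

    ---
    name: redis-deployment
    director_uuid: <DIRECTOR_UUID>

    releases:
    - name: redis
      version: latest

    stemcells:
    - alias: trusty
      os: ubuntu-trusty
      version: latest

    update:
      canaries: 1
      max_in_flight: 1
      canary_watch_time: 30000-240000
      update_watch_time: 30000-240000

    instance_groups:
    - name: redis
      instances: 1
      azs: [z1]
      vm_type: medium
      stemcell: trusty
      networks:
      - name: default
      jobs:
      - name: redis
        release: redis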

2. Upload stemcell and redis releases
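Both can be uploaded to the director straight from their public URLs (illustrative URLs; check bosh.io for the current ones):

    bosh upload stemcell https://bosh.io/d/stemcells/bosh-vsphere-esxi-ubuntu-trusty-go_agent
    bosh upload release https://bosh.io/d/github.com/cloudfoundry-community/redis-boshrelease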

Alternatively, you can download them and run bosh upload locally.

3. Set deployment manifest and deploy it
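    bosh deployment redis.yml    # point the CLI at your manifest
    bosh deploy                  # provision the VM and start redis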

4. Interact with your newly deployed redis server

Find your VM’s IP using:
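    bosh vms    # lists each VM in the deployment along with its IP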

Connect to your redis server:
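    redis-cli -h <REDIS_VM_IP> -p 6379    # add -a <password> if your release configured one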

Test it:
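An illustrative session:

    <REDIS_VM_IP>:6379> SET dojo "hello"
    OK
    <REDIS_VM_IP>:6379> GET dojo
    "hello"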

Road trip to Persistence on CloudFoundry

Laying the framework with ScaleIO

Peter Blum

Over the past few months the Dojo has been working with all types of storage to enable persistence within CloudFoundry. Across the next few weeks we are going to be road tripping through how we enabled EMC storage on the CloudFoundry platform. For the first leg of the journey, we start laying the framework by building our motorcycle, a ScaleIO cluster, which will carry us through the journey. ScaleIO is a software-defined storage service that is flexible enough to allow dynamic scaling of storage nodes and reliable enough to deliver enterprise-level confidence.

What is ScaleIO – SDS, SDC, & MDM!?

ScaleIO, as we already pointed out, is software-defined block storage. In layman’s terms, there are two huge benefits I see with using ScaleIO. Firstly, the actual storage backing ScaleIO can be dynamically scaled up and down by adding and removing SDS (ScaleIO Data Storage) servers/nodes. Secondly, SDS nodes can run parallel to the applications running on a server, utilizing any free storage your applications are not using. These two points allow for a fully automated datacenter and a terrific base for block storage in CloudFoundry.

Throughout this article we will use the terms SDS, SDC, and MDM, so let’s define them for some deeper understanding! All three of these terms are actually services running on a node. These nodes can be a hypervisor (in the case of vSphere), a VM, or a bare metal machine.

SDS – ScaleIO Data Storage

This is the base of ScaleIO. SDS nodes store information locally on storage devices specified by the admin.

SDC – ScaleIO Data Client

If you intend to use a ScaleIO volume, you are required to become an SDC. To become an SDC, you install a kernel module (.ko) compiled specifically for your operating system version. These can all be found on EMC Support. In addition to the .ko that gets installed, there will also be a handy binary, drv_cfg. We will use this later on, so make sure you have it!

MDM – Meta Data Manager

Think of the MDMs as the mothers of your ScaleIO deployment. They are the most important part of your ScaleIO deployment: they allow access to the storage (by mapping volumes from SDSs to SDCs), and most importantly, they keep track of where all the data lives. Without the MDMs you lose access to your data, since “Mom” isn’t there to piece together the blocks you have written! Side note: make sure you have at least 3 MDM nodes. This is the smallest number allowed, since one MDM each is required for the Master, Slave, and Tiebreaker roles.

How to Install ScaleIO

There is no limit to the number of ways to install ScaleIO! In the Dojo we used two separate approaches, each with its ups and downs. The first, “The MVP”, is simple and fast, and will get you the quickest minimum viable product. The second, “For the Grownups”, will provide you with the start of a fully production-ready environment. Either will suffice for the rest of our road-tripping blog.

The MVP

This process uses a Vagrant box to deploy a ScaleIO cluster. Using the EMC {code} ScaleIO Vagrant GitHub repository, check out the ReadMe to install ScaleIO in less than an hour (depending on your internet, of course :smirk: ). Make sure to read through the Clusterinstall section of the ReadMe to understand the two different ways of installing the ScaleIO cluster.

For the GrownUps

This process will deploy ScaleIO on four separate Ubuntu machines/VMs.

Check out the ScaleIO 2.0 Deployment Guide for more information and help.

  • Go to EMC Support.
    • Search for ScaleIO 2.0.
    • Download the correct ScaleIO 2.0 software package for your OS/architecture type:
      • Ubuntu (we only support Ubuntu currently in CloudFoundry)
      • RHEL 6/7
      • SLES 11 SP3/12
      • OpenStack
    • Download the ScaleIO Linux Gateway.
  • Extract the downloaded *.zip files.

Prepare Machines For Deploying ScaleIO

  • Minimal requirements:
    • At least 3 machines for starting a cluster:
      • 3 MDMs
      • Any number of SDCs
    • Machines can be either virtual or physical.
    • The OS must be installed and configured for the cluster install, including the following:
      • SSH must be installed and available for root. Double-check that passwords are properly provided to the configuration.
      • The libaio1 package should be installed as well. On Ubuntu: apt-get install libaio1

Prepare the IM (Installation Manager)

  • From your local machine, SCP the Gateway zip file to the Ubuntu machine.
  • SSH into the machine on which you intend to install the Gateway and Installation Manager.
  • Install Java 8.0.
  • Install unzip and unzip the file.
  • Run the installer on the unzipped Debian package.
  • Access the Gateway installer GUI in a web browser using the Gateway machine’s IP: http://{$GATEWAY_IP}
  • Log in using admin and the password you used when running the Debian package earlier.
  • Read over the install process on the Home page and click Get Started.
  • Click Browse and select the following packages to upload from your local machine, then click Proceed to install:
    • XCache
    • SDS
    • SDC
    • LIA
    • MDM

    Installing ScaleIO is done through a CSV. For our demo environment we ran the minimal ScaleIO install. We built the following install CSV from the minimal template you will see on the Install page; you might need to build your own version to suit your needs.

  • To manage the ScaleIO cluster you utilize the MDM, so make sure that you set a password for the MDM and LIA services on the Credentials Configuration page.
  • NOTE: For our installation, we had no need to change the advanced installation options or configure the log server. Use these options at your own risk!
  • After submitting the installation form, a monitoring tab should become available to monitor the installation progress.
    • Once the Query Phase finishes successfully, select start upload phase. This phase uploads all the required resources to the nodes indicated in the CSV.
    • Once the Upload Phase finishes successfully, select start install phase.
    • The Installation Phase is hopefully self-explanatory.
  • Once all steps have completed, the ScaleIO cluster is deployed.

Using ScaleIO

  • To start using the cluster with the ScaleIO CLI, you can follow the steps below, which are copied from the post-installation instructions.

    To start using your storage:
    Log in to the MDM:

    scli --login --username admin --password <password>

    Add SDS devices: (unless they were already added using a CSV file containing devices)
    You must add at least one device to at least three SDSs, with a minimum of 100 GB free storage capacity per device.

    scli --add_sds_device --sds_ip <IP> --protection_domain_name default --storage_pool_name default --device_path /dev/sdX or D,E,...

    Add a volume:

    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb <SIZE> --volume_name <NAME>

    Map a volume:

    scli --map_volume_to_sdc --volume_name <NAME> --sdc_ip <IP>

Managing ScaleIO

When using ScaleIO with CloudFoundry we will use the ScaleIO REST Gateway to manage the cluster. There are other ways to manage the cluster, such as the ScaleIO CLI and the ScaleIO GUI, both of which are much harder for CloudFoundry to communicate with.

EOF

At this point you have a fully functional ScaleIO cluster that we can use with CloudFoundry and RexRay to deploy applications backed by ScaleIO storage! Stay tuned for our next blog post in which we will deploy a minimal CloudFoundry instance.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

 

 

Creating a Cloud Foundry Service Broker

Megan Murawski

12-factor apps are cool and Cloud Foundry is cool, but we don’t just have to worry about 12-factor apps. We have legacy apps that we also need to pay attention to. We believe there should be a way for all of your apps to enjoy the benefits of Cloud Foundry. We have enabled this by implementing a Service Broker that binds an external storage service. Before we talk about the creation of the Service Broker, we will describe the role of a Service Broker in Cloud Foundry.

Services are integrated with Cloud Foundry by implementing a documented API for which the Cloud Controller is the client; we call this the Service Broker API. Service brokers advertise a catalog of service offerings and service plans, and interpret calls for provision (create), bind, unbind, and deprovision (delete). Externalizing the backend services from the application in a PaaS provides a clear separation that can improve application development. Developers only need to be able to connect to an external service to consume its APIs. In Cloud Foundry, this interface is “brokered” by the Service Broker. Some examples of services that are essential to the 12-factor app are MySQL, Redis, RabbitMQ, and now ScaleIO!


A service broker sits between the Cloud Foundry runtime and the service itself. To create a service broker, you only need to implement the five Service Broker APIs: catalog, provision, deprovision, bind, and unbind. The service broker essentially translates between the Cloud Controller API and the service’s own API to orchestrate creating a service instance (perhaps provisioning a new database or creating a new user), providing the credentials to connect to the service, and disconnecting and deleting the service instance.

The APIs

Catalog is used to fetch the service catalog. This describes the service plans that are available from the backend, along with any associated costs.

Provision is used to create the service in the backend. Based on a chosen service plan, this call will actually create the service. In the case of persistence, this is provisioning storage to be used by your applications.

Bind will provide Cloud Foundry runtime with the credentials and configuration to access the service. This allows the application to connect to the service.

Unbind will remove the access credentials from the application’s environment.

Deprovision is used to delete the service on the backend. This could remove an account, delete a database, or in the case of persistence, delete the volume created on provision.
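To make the contract concrete, a minimal catalog response might look like this (the IDs, names, and descriptions are illustrative):

    {
      "services": [{
        "id": "<service-guid>",
        "name": "scaleio",
        "description": "Persistent block storage volumes backed by ScaleIO",
        "bindable": true,
        "plans": [{
          "id": "<plan-guid>",
          "name": "small",
          "description": "A small ScaleIO volume"
        }]
      }]
    }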

 

Creating the Broker

To create the ScaleIO service broker, we made use of the EMC open source tool Libstorage for communicating with the ScaleIO backend. Libstorage is capable of managing many different storage backends, including ScaleIO, XtremIO, Isilon, and VMAX, in a client/server model. By adding an API layer on top of Libstorage, we were able to quickly translate between Cloud Foundry API calls and storage APIs on the backend to provide persistence as a service within CF. Like much of the work we do at the EMC Dojo, we have open sourced the Service Broker. If you’d like to download it or get involved, check it out on GitHub!

Sneak Preview: Cloud Foundry Persistence Under the Hood

How Does Persistence Orchestration Work in Cloud Foundry?

A few weeks ago at the Santa Clara Cloud Foundry Summit we announced that Cloud Foundry will be supporting persistence. From a developer’s perspective, the user experience is smooth and very similar to using other existing services. Here is an article that describes the CF persistence user experience.

For those who are wondering how Cloud Foundry orchestrates persistence services, this article provides a high-level overview of the architecture and user experience.

How Does it Work when a Developer Creates a Persistence Service?

Like other Cloud Foundry services, before an application can gain access to the persistence service, the user needs to first sign up for a service plan from the Cloud Foundry marketplace.

Initially, our Service Broker uses an open source technology called RexRay, a persistence orchestration engine that works with Docker.


Creating Service

When the service broker receives a request to create a new service plan, it goes to its backing persistence service provider, such as ScaleIO or Isilon, to create a new volume.

For example, the user can use:

cf create-service scaleio small my-scaleio1

Deleting Service

When a user is done with the volume and no longer needs the service plan, the user can make a delete-service call to remove the volume. When the service broker receives the request to delete the service plan, it goes to its backing persistence service provider to remove the volume and free up storage space.

For example, the user can use:

cf delete-service my-scaleio1

Binding Service

After a service plan is created, the user can bind the service to one or multiple Cloud Foundry applications. When the service broker receives the request to bind an application, it includes a special flag in the JSON response to the Cloud Controller, so that the Cloud Controller and Diego know how to mount the directory at runtime. The runtime behavior is described in more detail below.
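For example, using the service instance created above (the app name is illustrative):

    cf bind-service my-app my-scaleio1
    cf restart my-app    # the volume is mounted when the app next starts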

 

How Does it work in Runtime?

Cloud Foundry executes each instance of an application in a container-based runtime environment called Diego. For persistence orchestration, a new project called Volman (short for Volume Manager) has become the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up special flags from the Cloud Controller, invoking a volume driver to mount a volume into a Diego Cell, and then providing access to the directory from the runtime container.

[Figure: Volman running inside a Diego Cell]

Introducing Ginkgo4J

Ginkgo for Java

Paul Warren

Having been an engineer using Java since pretty much version 1, and having practiced TDD for the best part of 10 years, one thing that always bothered me was the lack of structure and context that Java testing frameworks like JUnit provide. On larger code bases, with many developers, this problem can become quite acute. When pair programming, I have sometimes even had to say to my pair:

“Give me 15 minutes to figure out what this test is actually doing!”

The fact of the matter is that the method name alone is simply not enough to convey the required given, when, then semantics present in all tests.

I recently made a switch in job roles. Whilst I stayed with EMC, I left the Documentum division (ECD), with whom I had been for 17 years, and moved to the EMC Dojo & Cloud Platform Team, whose remit is to help EMC make the transition to the 3rd platform. As a result I am now based in the Pivotal office in San Francisco, I pair program, and I am now working in Golang.

Golang has a testing framework called Ginkgo that was created by one of Pivotal’s VPs, Onsi Fakhouri. It mirrors frameworks from other languages like RSpec in Ruby. All of these frameworks provide a very simple DSL that the developer can use in their tests to build up a described context with closures. Having practiced this for the last six months, I personally find this way of writing tests very useful, perhaps most useful when I pick up an existing test and try to modify it.

Java version 8 includes its own version of closures, called lambdas. Whilst they aren’t quite as flexible as some of their equivalents in other languages (all variable access must be to ‘finals’, for example), they are sufficient to build an equivalent testing DSL. So that’s what I decided to do with Ginkgo4J, pronounced “Ginkgo for Java”.

So let’s take a quick look at how it works.

In your Java 8 project, add a new test class called BookTests.java as follows:
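Something along these lines (a minimal sketch based on the Ginkgo4J README; package names may vary slightly by version):

    import static com.github.paulcwarren.ginkgo4j.Ginkgo4jDSL.*;

    import org.junit.runner.RunWith;

    import com.github.paulcwarren.ginkgo4j.Ginkgo4jRunner;

    @RunWith(Ginkgo4jRunner.class)
    public class BookTests {
        {
            Describe("A Book", () -> {
                // specs will go here
            });
        }
    }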

Let’s break this down:

  • The imports Ginkgo4jDSL.* and Ginkgo4jRunner add Ginkgo4J’s DSL and JUnit runner. The JUnit runner allows this style of test to be run in all IDEs supporting JUnit (basically all of them) and also in build tools such as Ant, Maven, and Gradle.
  • We add a top-level Describe container using Ginkgo4J’s Describe(String title, ExecutableBlock block) method. The top-level braces {} trick (an instance initializer block) allows us to evaluate the Describe at the top level without having to wrap it in a method.
  • The 2nd argument to the Describe, () -> {}, is a lambda expression defining an anonymous class that implements the ExecutableBlock interface.

The lambda expression passed as the 2nd argument to the Describe will contain our specs, so let’s add some now to test our Book class.
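A sketch along these lines (the Book constructor and the category names are assumptions about the model; Hamcrest’s assertThat and is are statically imported):

    import static org.hamcrest.CoreMatchers.is;
    import static org.hamcrest.MatcherAssert.assertThat;

    @RunWith(Ginkgo4jRunner.class)
    public class BookTests {

        // member variables, so state can be shared between BeforeEach and It
        private Book longBook;
        private Book shortBook;

        {
            Describe("A Book", () -> {
                BeforeEach(() -> {
                    longBook = new Book("Les Miserables", "Victor Hugo", 1488);
                    shortBook = new Book("Fox In Socks", "Dr. Seuss", 24);
                });

                Context("when categorized by length", () -> {
                    It("should categorize a book with more than 300 pages as a novel", () -> {
                        assertThat(longBook.categoryByLength(), is("NOVEL"));
                    });

                    It("should categorize a book with fewer than 300 pages as a short story", () -> {
                        assertThat(shortBook.categoryByLength(), is("SHORT STORY"));
                    });
                });
            });
        }
    }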

Let’s break this down:

  • Ginkgo4J makes extensive use of lambdas to allow you to build descriptive test suites.
    You should make use of Describe and Context containers to expressively organize the behavior of your code.
  • You can use BeforeEach to set up state for your specs.  You use It to specify a single spec.
    In order to share state between a BeforeEach and an It you must use member variables.
  • In this case we use Hamcrest’s assertThat syntax to make expectations on the categoryByLength() method.

Assuming a Book model with this behavior, running this JUnit test in Eclipse (or Intellij) will yield:
Screen Shot 2016-06-02 at 8.31.35 AM

 

Success!

Focussed Specs

It is often convenient, when developing, to be able to run a subset of specs. Ginkgo4J allows you to focus individual specs or whole containers of specs programmatically by adding an F in front of your Describe, Context, or It:
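    // focusing a single spec: change It to FIt (likewise FDescribe, FContext)
    FIt("should categorize a book with more than 300 pages as a novel", () -> {
        assertThat(longBook.categoryByLength(), is("NOVEL"));
    });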

Doing so instructs Ginkgo4J to run only those specs. To run all specs again, you’ll need to go back and remove all the Fs.

Parallel Specs

Ginkgo4J has support for running specs in parallel. It does this by spawning separate threads and dividing the specs evenly among them. Parallelism is on by default and will use 4 threads. If you wish to modify this, you can add an additional annotation to your test class:
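    @RunWith(Ginkgo4jRunner.class)
    @Ginkgo4jConfiguration(threads = 1)   // run specs on a single thread
    public class BookTests {
        ...
    }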

which will instruct Ginkgo4J to run a single thread.

Spring Support

Ginkgo4J also offers native support for Spring. To test a Spring application context, simply replace @RunWith(Ginkgo4jRunner.class) with @RunWith(Ginkgo4jSpringRunner.class) and initialize your test class’s Spring application context exactly the same way you normally would when using Spring’s SpringJUnit4ClassRunner.
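A sketch (the configuration class and injected bean are placeholders):

    @RunWith(Ginkgo4jSpringRunner.class)
    @ContextConfiguration(classes = MyAppConfiguration.class)   // configure the context as you normally would
    public class MySpringTests {

        @Autowired
        private MyService myService;   // beans are injected as usual

        {
            Describe("My Spring service", () -> {
                It("should have been injected", () -> {
                    assertThat(myService, is(not(nullValue())));
                });
            });
        }

        @Test
        public void noop() {
        }
    }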

The noop @Test method is required, as Spring’s JUnit runner requires at least one test method.

Trying it out for yourself

Please feel free to try it out on your Java projects. For a Maven project, add:
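    <dependency>
        <groupId>com.github.paulcwarren</groupId>
        <artifactId>ginkgo4j</artifactId>
        <version>1.0.0</version>
    </dependency>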

or for a Gradle project add:

compile 'com.github.paulcwarren:ginkgo4j:1.0.0'

For others, see here.

How to Deploy Diego in a Hybrid Environment

Deploying Cloud Foundry Diego in a Virtual and Bare Metal Environment

In my last post, “Deploying Cloud Foundry in a Hybrid Environment“, I showed you how to deploy DEA runners in a bare-metal environment. However, Cloud Foundry now has the new and exciting Diego runtime! So now I will guide you through deploying Diego Cells on bare metal machines. When we’re all done, your environment will look like this:

[Diagram: the resulting hybrid environment, with Diego Cells on bare metal]

