Posts Tagged ‘dojo’

Doers for Today, Visionaries for Tomorrow and Change Agents for Always!

We Truly Are The Trifecta

The Dell EMC Dojo has a mission that is two-fold: we contribute to open source Cloud Foundry, and we evangelize ‘the way’ (XP, Lean Startup, etc.) by engaging with internal Dell EMC teams in a purely DevOps manner. Our mission is direct, with a scope that some could argue is boundless. By practicing ‘the way,’ hours, days, and weeks fly by as we push code to production, at times every few minutes. Not only is our push to market rapid, so is our overall productivity. Oftentimes teams working with us nearly guffaw when they come to our office in Cambridge, MA and see the ‘wizard(s) behind the curtain.’ We are asked how we keep three to five projects on track while also engaging with internal teams, planning large technical conferences, and working in the realm of R&D within our greater TRIGr team, with an east coast contingent of only eight people and a west coast contingent of five. The secret? We LOVE what we do!

 

With empathy at its core, our team never hits a moment when a task seems impossible.

Truly, we could be featured on one of those billboards along the highway stating that there is no ‘I’ in ‘Team.’ Two baseball players carrying an injured opposing player across home plate, a photo of all of the characters from Disney’s “The Incredibles,” the Dell EMC Dojo team… Take your pick… TEAMWORK. Pass It On.

In all seriousness, the pace at the Dojo can be absolutely exhausting, and with such a small team, the absence of one person (and let’s face it, vacation and life need to happen at points) could in theory be a huge deal. But because DevOps is what we live and breathe, any member of the team can fill the gap at any point, truly putting into practice the idea that there doesn’t have to be, and should never be, a single point of failure. Whatever the industry or sector, what better embodies the ‘Go Big, Win Big’ message than this? By continually pushing ourselves to pair and to up-level the knowledge of the entire team, we never wait until tomorrow to take action. There is no need or desire to.

 

Agility is not just a term we talk about; it is inherent in everything we do.

With the combination of a rapidly changing market (externally and internally) and the pace at which we work, we at the Dojo have learned that we must stay on our toes. For those familiar with sports, one of the first lessons learned in soccer is to never plant your feet. Planting your feet lets the opposing team outpace you when the unexpected happens, which is most of the time. The same goes here. Pivoting is now second nature for us, and it no longer comes with scares. Instead, it is actually exciting when we can take data and identify ways to better align with the efficiency and effectiveness of our software and methodology; to truly keep the user omnipresent in everything we do. We are happier. The ‘customer’ is happier. It is a (Go Big) win-win (Big) game. The cool thing, too, is that the more we practice this, the more we feel like we can predict the future, because we begin to see trends before they are even a thing.

 

 

Doers for Today. Visionaries for Tomorrow. Change Agents for Always.

Do You Hear That? It’s the sound of Keyboards!

Call for Papers | Cloud Foundry Summit Silicon Valley 2017 is quickly approaching!

Our brains are on fire, our keyboards are hot, and the joke in the office the past few days has been over our extreme excitement for the eventual need to buy sunscreen since our Boston winter leaves us Vitamin D deprived. Why is this the case, you may or may not be asking? Well, I plan on telling you anyway because it is just too exciting not to share!

Our team is preparing for CLOUD FOUNDRY SUMMIT SILICON VALLEY! We felt a social duty to let all of those we care about, and want to be there with us for what’s sure to be the summit of the summer (how can it not be when it is being held in June in Santa Clara?!), know that the last call for papers is quickly approaching (no seriously, it’s this Friday, February 17th).

Just as a refresher for those on the fence, Cloud Foundry Summit is the premier event for enterprise app developers. This year the Foundation, through market research and feedback, found that industry interest and need center on innovation and the streamlining of development pipelines. For this reason, Summit 2017 is honing in on microservices and continuous delivery in developers’ language and framework of choice. That is why the session tracks available will be Use Cases, Core Project Updates, Experiments, Extension Projects, and Cloud Native Java. Each session chosen for the conference is allowed one (1) primary speaker and one (1) co-speaker. The primary speaker receives a complimentary conference pass while the co-speaker receives a discounted conference pass. So what’s stopping us from getting involved? Absolutely NOTHING!

As a sneak peek at a few of the topics our team has submitted for approval, see below:

  • Adopting DevOps and Building a Cloud Foundry Dojo (Lessons Learned)
  • Lift & Shift Your Legacy Apps to Cloud Foundry
  • How to Develop Scalable Cloud Native Application with Cloud Foundry
  • Enabling GPU-as-a-Service in Cloud Foundry
  • Blockchain as a Service
  • Avoiding pitfalls while migrating BOSH deployments
  • Spring Content: Cloud-Native Content Services for Spring

 

So, now what’s stopping YOU from getting involved? Submit papers here: https://www.cloudfoundry.org/cfp-2017/ and/or register here: https://www.regonline.com/registration/Checkin.aspx?EventID=1908081&utm_source=flash&utm_campaign=summit_2017_sv&utm_medium=landing&utm_term=cloud%20foundry%20summit&_ga=1.199163247.1732851993.1460056335

Last but definitely not least, let us know if you plan on coming—we are more than happy to share sunscreen 🙂 We cannot wait to see you there!

THE EVENT OF THE SEASON

Coming to Hopkinton December 5th and 6th

Before reading this blog post, I suggest that you and anyone that may be reading over your shoulder sit down. In your chairs? Great, because the excitement you are about to feel will surely leave you overwhelmed.

 

The event of the quarter is right around the corner! Please consider this your personal invitation to join us for DOJO DAYS next Monday and Tuesday, December 5th and 6th from 10 am to 4 pm in the 176 Café out at Dell EMC in Hopkinton.


DevOps is a hot word in the world of technology. It represents a way of working and a set of techniques that aim to make the customer omnipresent in everything that is created and delivered. Companies today consider this way of working the future.

For this reason, Dell EMC and Pivotal paired to create the first ever Dell EMC – Pivotal Cloud Foundry Dojo. Our goals are centered around personal, team, and company transformation. We spend our days contributing to Cloud Foundry Foundation sanctioned open source projects, practicing the modern R&D methodology known as ‘the way’ (XP, Lean Startup).

Many know us as the team in Cambridge with great food and beverage (no really, it’s amazing; come visit us to see for yourself). We realize, though, that the commute isn’t ideal for most. So we bring you Dojo Days to experience #adayinthelifeofthedojo, with a replication of our office environment and our team in action, representing the steps we took to transform.

In addition to seeing #adayinthelifeofthedojo, you will have the chance to pair with us, join our team for lightning sessions driven by the audience’s goals (schedule below), and win a Dell EMC Dojo branded Patagonia, S’well water bottles, t-shirts, and more!

 

The schedule of the lightning sessions is as follows. Please come by the Café on the day to sign up and/or just drop in:

Monday, December 5th:

11:45 am to 12:45 pm: Steps to Transformation starring Brian Roche and Megan Murawski

2:00 pm to 2:30 pm: TDD and CI/CD starring Xuebin He

 

Tuesday, December 6th:

11:45 am to 12:45 pm: PaaS, and Why your Developers Care starring Gary White

1:30 pm to 2:00 pm: Factor, Cloud Native, and Containers (oh my!) starring Thinh Nguyen

 

If you have any questions, please contact Emily Kaiser at Emily.Kaiser@dell.com. Otherwise, we cannot wait to see you, your team, and all of your friends out in Hopkinton next week!

Road trip to Persistence on CloudFoundry

Laying the framework with ScaleIO

Peter Blum

Over the past few months the Dojo has been working with all types of storage to enable persistence within CloudFoundry. Over the next few weeks we are going to be road tripping through how we enabled EMC storage on the CloudFoundry platform. For the first leg of the journey, we start laying the framework by building our motorcycle, a ScaleIO cluster, which will carry us through the trip. ScaleIO is a software-defined storage service that is flexible enough to allow dynamic scaling of storage nodes and reliable enough to provide enterprise-level confidence.

What is ScaleIO – SDS, SDC, & MDM!?

ScaleIO, as we already pointed out, is software-defined block storage. In layman’s terms, there are two huge benefits I see with using ScaleIO. Firstly, the actual storage backing ScaleIO can be dynamically scaled up and down by adding and removing SDS (ScaleIO Data Storage) servers/nodes. Secondly, SDS nodes can run in parallel with the applications on a server, utilizing any additional free storage your applications are not using. These two points allow for a fully automated datacenter and a terrific base for block storage in CloudFoundry.

Throughout this article we will use the terms SDS, SDC, and MDM, so let’s define them for some deeper understanding! All three of these are actually services running on a node. These nodes can be a hypervisor (in the case of vSphere), a VM, or a bare metal machine.

SDS – ScaleIO Data Storage

This is the base of ScaleIO. SDS nodes store information locally on storage devices specified by the admin.

SDC – ScaleIO Data Client

If you intend to use a ScaleIO volume, you are required to become an SDC. To become an SDC, you install a kernel module (.ko) compiled specifically for your operating system version. These can all be found on EMC Support. In addition to the kernel module, a handy binary, drv_cfg, also gets installed. We will use this later on, so make sure you have it!
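As a rough sketch (the package filename below is illustrative and the MDM IPs are the ones we use later in this post; adjust both for your environment), installing the SDC on Ubuntu and checking that it is alive looks something like this:

    # install the SDC package from EMC Support, pointing it at your MDM IPs (illustrative filename)
    sudo MDM_IP=10.100.3.1,10.100.3.2 dpkg -i EMC-ScaleIO-sdc-2.0-*.Ubuntu.14.04.x86_64.deb

    # the scini kernel module should now be loaded
    lsmod | grep scini

    # drv_cfg is installed alongside the module; use it to query this node's SDC GUID
    /opt/emc/scaleio/sdc/bin/drv_cfg --query_guid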

MDM – Meta Data Manager

Think of the MDMs as the mothers of your ScaleIO deployment. They are the most important part of your ScaleIO deployment: they allow access to the storage (by mapping volumes from SDSs to SDCs), and most importantly they keep track of where all the data lives. Without the MDMs you lose access to your data, since “Mom” isn’t there to piece together the blocks you have written! Side note: make sure you have at least 3 MDM nodes. This is the smallest number allowed, since you need one MDM each for the Master, Slave, and Tiebreaker roles.
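Once your cluster is installed (see the install steps below), a quick sanity check is to log in to the primary MDM and ask for the cluster state; a minimal sketch:

    # log in to the MDM and confirm that Master, Slave, and Tiebreaker are all present
    scli --login --username admin --password <password>
    scli --query_cluster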

How to Install ScaleIO

There is no limit to the number of different ways to install ScaleIO! In the Dojo we used two separate ways, each with its ups and downs. The first, “The MVP”, is simple and fast, and it will get you the quickest minimum viable product. The second, “For the Grownups”, will give you a start on a fully production-ready environment. Either one will suffice for the rest of our road-tripping blog.

The MVP

This process uses a Vagrant box to deploy a ScaleIO cluster. Using the EMC {Code} ScaleIO vagrant GitHub repository, check out the ReadMe to install ScaleIO in less than an hour (depending on your internet of course :smirk: ). Make sure to read through the Clusterinstall function of the ReadMe to understand the two different ways of installing the ScaleIO cluster.
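For reference, the flow is roughly the following (the repository URL and directory layout here are assumptions; treat the ReadMe as authoritative):

    # clone the EMC {code} vagrant repository and bring up the ScaleIO cluster
    git clone https://github.com/emccode/vagrant.git
    cd vagrant/scaleio
    vagrant up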

For the GrownUps

This process will deploy ScaleIO on four separate Ubuntu machines/VMs.

Check out the ScaleIO 2.0 Deployment Guide for more information and help.

  • Go to EMC Support.
    • Search ScaleIO 2.0.
    • Download the correct ScaleIO 2.0 software package for your OS/architecture type:
      • Ubuntu (We only support Ubuntu currently in CloudFoundry)
      • RHEL 6/7
      • SLES 11 SP3/12
      • OpenStack
    • Download the ScaleIO Linux Gateway.
  • Extract the downloaded *.zip files.

Prepare Machines For Deploying ScaleIO

  • Minimal requirements:
    • At least 3 machines to start a cluster:
      • 3 MDMs
      • Any number of SDCs
    • Machines can be either virtual or physical.
    • The OS must be installed and configured before the cluster is installed, including the following (a quick per-node check is sketched after this list):
      • SSH must be installed and available for root. Double-check that the root passwords provided in the configuration are correct.
      • The libaio1 package should be installed as well. On Ubuntu: apt-get install libaio1
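A minimal per-node sanity check might look like this (assuming Ubuntu; adjust for your distribution):

    # libaio1 is required by the ScaleIO packages
    sudo apt-get install -y libaio1

    # the installer connects over SSH as root, so make sure root login is permitted
    sudo grep -i permitrootlogin /etc/ssh/sshd_config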

Prepare the IM (Installation Manager)

  • On the local machine, SCP the Gateway zip file to the Ubuntu machine.
    scp ${GATEWAY_ZIP_FILE} ${UBUNTU_USER}@${UBUNTU_MACHINE}:${UBUNTU_PATH}
    
  • SSH into the machine on which you intend to install the Gateway and Installation Manager.
  • Install Java 8.0
    sudo apt-get install python-software-properties
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer
    
  • Install unzip and unzip the Gateway file
    sudo apt-get install unzip
    unzip ${UBUNTU_PATH}/${GATEWAY_ZIP_FILE}
    
  • Run the installer on the unzipped Debian package
    sudo GATEWAY_ADMIN_PASSWORD=<new_GW_admin_password> dpkg -i ${GATEWAY_FILE}.deb
    
  • Access the gateway installer GUI in a web browser using the Gateway machine’s IP: http://${GATEWAY_IP}
  • Log in using admin and the password you set when running the Debian package earlier.
  • Read over the install process on the Home page and click Get Started
  • Click browse and select the following packages to upload from your local machine. Then click Proceed to install
    • XCache
    • SDS
    • SDC
    • LIA
    • MDM

    Installing ScaleIO is done through a CSV file. For our demo environment we run the minimal ScaleIO install. We built the following install CSV from the minimal template you will see on the Install page. You may need to build your own version to suit your needs.

    IPs,Password,Operating System,Is MDM/TB,Is SDS,SDS Device List,Is SDC
    10.100.3.1,PASSWORD,linux,Master,Yes,/dev/sdb,No
    10.100.3.2,PASSWORD,linux,Slave,Yes,/dev/sdb,No
    10.100.3.3,PASSWORD,linux,TB,Yes,/dev/sdb,No
    
  • To manage the ScaleIO cluster you use the MDM; make sure that you set a password for the MDM and LIA services on the Credentials Configuration page.
  • NOTE: For our installation, we had no need to change the advanced installation options or to configure the log server. Use these options at your own risk!
  • After submitting the installation form, a monitoring tab should become available to monitor the installation progress.
    • Once the Query Phase finishes successfully, select start upload phase. This phase uploads all the needed resources to the nodes listed in the CSV.
    • Once the Upload Phase finishes successfully, select start install phase.
    • The Installation Phase is hopefully self-explanatory.
  • Once all steps have completed, the ScaleIO cluster is deployed.

Using ScaleIO

  • To start using the cluster with the ScaleIO CLI, you can follow the steps below, which are copied from the post-installation instructions; a concrete example follows them.

    To start using your storage:
    Log in to the MDM:

    scli --login --username admin --password <password>

    Add SDS devices: (unless they were already added using a CSV file containing devices)
    You must add at least one device to at least three SDSs, with a minimum of 100 GB free storage capacity per device.

    scli --add_sds_device --sds_ip <IP> --protection_domain_name default --storage_pool_name default --device_path /dev/sdX or D,E,...

    Add a volume:

    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb <SIZE> --volume_name <NAME>

    Map a volume:

    scli --map_volume_to_sdc --volume_name <NAME> --sdc_ip <IP>
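To make that concrete, here is an illustrative sequence using the IPs from the install CSV above (the volume name, size, and the SDC IP 10.100.3.4 are just examples):

    # log in to the cluster through the MDM
    scli --login --username admin --password <password>

    # register the /dev/sdb device on each SDS node (skip if your install CSV already listed the devices)
    scli --add_sds_device --sds_ip 10.100.3.1 --protection_domain_name default --storage_pool_name default --device_path /dev/sdb
    scli --add_sds_device --sds_ip 10.100.3.2 --protection_domain_name default --storage_pool_name default --device_path /dev/sdb
    scli --add_sds_device --sds_ip 10.100.3.3 --protection_domain_name default --storage_pool_name default --device_path /dev/sdb

    # carve out a 16 GB volume and map it to an SDC
    scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb 16 --volume_name cf-vol1
    scli --map_volume_to_sdc --volume_name cf-vol1 --sdc_ip 10.100.3.4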

Managing ScaleIO

When using ScaleIO with CloudFoundry we will use the ScaleIO REST Gateway to manage the cluster. There are other ways to manage the cluster, such as the ScaleIO CLI and the ScaleIO GUI, but both are much harder for CloudFoundry to communicate with.
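As a sketch of what that communication looks like (the endpoints follow the ScaleIO gateway REST API; the gateway IP and credentials are placeholders), a client first trades its credentials for a token and then uses that token on subsequent calls:

    # request an API token from the gateway (it is returned as a quoted JSON string)
    TOKEN=$(curl -sk --user admin:<password> https://${GATEWAY_IP}/api/login | tr -d '"')

    # use the token as the basic-auth password on later calls, e.g. to list volumes
    curl -sk --user admin:${TOKEN} https://${GATEWAY_IP}/api/types/Volume/instances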

EOF

At this point you have a fully functional ScaleIO cluster that we can use with CloudFoundry and RexRay to deploy applications backed by ScaleIO storage! Stay tuned for our next blog post in which we will deploy a minimal CloudFoundry instance.

Cloud Foundry. Open Source. The Way. EMC [⛩] Dojo.

 

 

Sneak Preview: Cloud Foundry Persistence Under the Hood

How Does Persistence Orchestration Work in Cloud Foundry?

A few weeks ago at the Santa Clara Cloud Foundry Summit we announced that Cloud Foundry will be supporting persistence. From a developer’s perspective, the user experience is smooth and very similar to using other existing services. Here is an article that describes the CF persistence user experience.

For those who are wondering how Cloud Foundry orchestrates persistence services, this article provides a high-level overview of the architecture and user experience.

How Does it Work when a Developer Creates a Persistence Service?

Like other Cloud Foundry services, before an application can gain access to the persistence service, the user needs to first sign up for a service plan from the Cloud Foundry marketplace.

Initially, our Service Broker uses an Open Source technology called RexRay, which is a persistence orchestration engine that works with Docker.


Creating Service

When the service broker receives a request to create a new service plan, it would go into its backing persistence service provider, such as ScaleIO or Isilon, to create a new volume.

For example, the user can use:

cf create-service scaleio small my-scaleio1

Deleting Service

When a user is done with the volume and no longer needs the service plan, the user can make a delete-service call to remove the volume. When the service broker receives the request to delete the service plan, it would go into its backing persistence service provider to remove the volume and free up storage space.

For example, the user can use:

cf delete-service my-scaleio1

Binding Service

After a service plan is created, the user can then bind the service to one or multiple Cloud Foundry applications. When the service broker receives the request to bind an application, it includes a special flag in the JSON response to the Cloud Controller, so that the Cloud Controller and Diego know how to mount the directory at runtime. The runtime behavior is described in more detail below.
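From the developer’s point of view this is just the usual bind-and-restage cycle; a minimal sketch (the app and service names are placeholders):

    # bind the volume service to an app and restage so the mount takes effect
    cf bind-service my-app my-scaleio1
    cf restage my-app

    # the mount path then shows up in the app's environment, in the service binding details
    cf env my-app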

 

How Does it Work at Runtime?

Cloud Foundry executes each application instance in a container-based runtime environment called Diego. For persistence orchestration, a new project called Volman (short for Volume Manager) is the newest addition to the Diego release. Volman is part of Diego and lives in a Diego Cell. At a high level, Volman is responsible for picking up the special flags from the Cloud Controller, invoking a volume driver to mount a volume into the Diego Cell, and then providing access to the directory from the runtime container.


 

Creating Cloud Foundry Applications with Persistence

Traditional & Cloud Native Applications with Persistence

Cloud Foundry and 12-Factor applications are a great way to create Cloud Native applications and have become the standard. But you and I don’t just have to worry about our new 12-Factor apps; we also have legacy applications in our family. Our position is that your traditional apps and your new cool 12-Factor apps should both be able to experience the benefits of running on Cloud Foundry. So, we have worked to create a world where your legacy apps and new apps can live together.


In the past, Cloud Foundry applications could not use any filesystem or block storage. That totally makes sense, given that Cloud Foundry apps are executed in elastic containers, which can go away at any time. And if that happens, any data written to the local filesystem of the containers is wiped.

If we can externalize persistent storage to be a service – and bind and unbind those services to Cloud Foundry applications – a lot more apps can run in Cloud Foundry. For example, heavy data access apps, like databases and video editing software, can now access extremely fast storage AND at the same time experience the scalability and reliability provided by Cloud Foundry.

Allowing Cloud Foundry applications to have direct persistence access opens a lot of doors for developers. Traditional applications that require persistence can migrate to Cloud Foundry a lot easier. Big Data Analytics applications can now use persistence to perform indexing and calculation.

Traditionally, a lot of data services consumed by Cloud Foundry applications, such as MySQL, Cassandra, etc., need to be deployed by BOSH as virtual machines. With persistence, we can start looking at bringing these services into Cloud Foundry, or at creating the next generation of Cloud Native data services.

What can Developers Expect?

When developers come to the Cloud Foundry marketplace by using cf marketplace, they will see services that can offer their applications persistence:


The details of the service plan can be seen by cf marketplace -s ScaleIO


This is a plan that would offer 2GB of storage for your Cloud Foundry applications. Let’s sign up for it by running cf create-service scaleio small my-scaleio-service1


By creating a service instance, the ScaleIO Service Broker goes into ScaleIO and creates a volume of 2GB. We are now ready to bind this new service to our app. To demonstrate the functionality, we have created a very simple application that writes to and reads from the filesystem.


After a cf bind-service call, the storage is mounted as a directory, and the path is exposed to the application through an environment variable in the service binding.


Based on the container_path variable, the application can read and write as if it’s a local filesystem.

 

What If We Do A Little Less: Watch Dan Ward & The Simplicity Cycle

#EMCDojo Meetup with Agile Expert and Author

“Simplicity is not the point, the point is not to reduce complexity. Goodness is the point. Goodness and value.”

 

Last week we were lucky enough to have Dan Ward come to the #EMCDojo to discuss the principles of his new book The Simplicity Cycle: A Field Guide to Making Things Better Without Making Them Worse. Watch the meetup below! If you don’t have time to watch right this second, read the Q&A for a teaser.

 

 

Q: How do you sort through customers’ different definitions of simplicity? Isn’t one customer’s complexity another customer’s value?

 

A: Different customers are going to have different definitions of goodness. Different customers will also have different levels of tolerance for complexity. For some customers, a certain level of complexity makes a tool less good and harder to use; for a super user, that same level of complexity is still okay. So part of the challenge is to understand, with a large user community, what goodness means. How do we mitigate that when you have users who want a lot of complexity and users who tolerate complexity less well? When you optimize a tool for the least capable user, you improve its performance for all the users, including the most capable ones. Simple tools tend to outperform their specs. Simple tools satisfy a broader range of interests and user sets than the designers and engineers might have anticipated. Simpler approaches tend to satisfy a broader range of users and customers. Which isn’t to say we shouldn’t provide a more complicated alternative for those users who want it, those for whom goodness and complexity are directly proportional. And you only find that out if you’re engaged with your customers to get a sense of how they value complexity and simplicity. You have to talk to your customers to figure out what goodness means for them.

Q: What about modularity?

 

A: Modularity is a great strategy for mitigating and managing complexity. A modular design is one where you can plug and play, and where users can plug in simpler or more complex modules. A modular design approach gives your users that optionality, the chance to engage and customize without complexifying things for the simpler users. Modularity is a simplifier and a goodifier. It improves the quality and flexibility of your design and makes it robustly flexible. Complexity tends to increase fragility, and modularity is a way to combat that.

 

Q: What about machine learning and AI? Does it change the vector position on the goodness and complexity graph?

 

A: Automation and machine learning take some of the burden of processing complexity away from the user and put it below the screen or surface. There’s a man who coined the term “the law of conservation of complexity”: complexity is neither created nor destroyed; we either move it above or below the surface, and we let the automation or the users take care of it. I don’t know if I agree with that, but he has a point. There are ways to hide complexity and not expose users to it, and automation and machine learning are a great way to simplify the user’s experience while still allowing the architecture to have some level of complexity. But all that being said, we still want to look for opportunities or instances where, below the skin, we’ve complexified things to the point where they become heavy and hard to debug and maintain. The back end matters too; it’s not just about the UX or UI.

 

Join our meetup group to hear about our other events!

 

2 Heads Are Better Than 1: Pair Programming at the #EMCDojo

Pairing at the EMC Dojo, Brian Roche Sr Director of Engineering

Brian Roche

Brian Roche - Senior Director, Cloud Platform Team at Dell EMC. Brian Roche is the Leader of Dell EMC’s Cloud Platform Team. He is based in Cambridge, Massachusetts, USA at the #EMCDojo.

I work on a team where we practice ‘pairing’ and pair programming every day.  Before joining this team I had only passing experience with pair programming. Now, after many months of pairing, I have a much better understanding of why we pair.


Pair Programming Explained
Pair programming is a technique in which two programmers work as a pair at one workstation.  One, the driver, writes code and focuses on the tactical aspects of syntax and task completion.  The other, the observer, considers the strategic direction of the code they’re writing together.  In our case, each developer has their own monitor, keyboard and mouse but is connected to one IDE.  The two programmers switch roles often.

(more…)

At the New EMC Dojo, App Developers Learn by Doing

EMC-Pivotal Dojo, Brian Roche Sr Director of Engineering

Brian Roche

Brian Roche - Senior Director, Cloud Platform Team at Dell EMC. Brian Roche is the Leader of Dell EMC’s Cloud Platform Team. He is based in Cambridge, Massachusetts, USA at the #EMCDojo.

There is a secular shift at play within the IT industry. Traditional markets are eroding rapidly. As organizations seek to innovate using new digital business mediums, many are moving their infrastructures to the cloud. A major challenge they find is that building cloud apps not only means a whole new set of tools, but also a new mindset. To truly embrace any new way of doing things requires a commitment to learn by doing, which is why we are proud to announce the official opening of the new EMC – Pivotal Dojo in Cambridge, Massachusetts.

The Dojo is “the place of the way,” the new way that we develop software in today’s world. It’s where we contribute to open source Cloud Foundry. It’s where we work with customers to build software.  It’s where we practice lean software development and continuously innovate to solve customer needs.

Building Software for the Cloud

The cloud is a relatively new delivery model, but customers still need to maintain the same rigorous quality, security and up-time SLAs expected in the on-premise world.  Cloud Foundry represents the best-in-class technology that customers can trust to run their businesses in the cloud.  More than just technology, it is a development platform supported by lean practices and methodologies that lead to high quality and continuous rapid innovation.

(more…)
