The problem: updating critical 3rd party software

A core element in delivering a high performance Identity & Access Management service is our ability to rapidly release new versions of the LibLynx platform. This allows us to roll out new features with ease, and to quickly apply updates and security patches to all the underlying systems.
Shibboleth SP is a good example of mission-critical 3rd party software. It’s a set of components that manage SAML-based single sign-on and require tight integration with the target application, so deploying updates to it can be a tricky proposition.
We thought you might be interested to know more about how we do it, and why we selected a technology called Docker to help us achieve this.

The traditional approach

Developers of software services often need to work in a more limited environment than the one that serves live customers. This is usually because of the large number of different types of servers that drive modern web applications. Virtualizing these servers on a developer’s laptop is typically too resource-intensive to be practical, so developers commonly work with a small subset of their live environment. This lightweight setup often has just enough capability to let a developer test what they are working on.
However, this approach also slows down testing and release cycles because a live environment test can reveal issues that did not crop up during development, and which require additional work to diagnose and fix.
At LibLynx, we adopted Docker because it allows us to use the same environment for development, testing and production.

What is Docker?

If you’re not already familiar with Docker, you can think of it as a way of providing extremely lightweight virtual machines. Each of these “virtual machines” is really just a highly isolated process, wrapped up in an efficient storage container.
A typical application is made up of many such Docker containers, providing application servers, databases, queues, caches and so on.
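As a rough illustration, a multi-container application like this might be described in a Compose file along the following lines. The service names and images here are hypothetical, not our actual configuration:

    # docker-compose.yml -- hypothetical sketch of a multi-container application
    services:
      app:
        build: ./app              # application server built from a local Dockerfile
        depends_on: [db, cache, queue]
        ports:
          - "8080:8080"
      db:
        image: postgres:16        # database in its own isolated container
        volumes:
          - dbdata:/var/lib/postgresql/data
      cache:
        image: redis:7            # cache
      queue:
        image: rabbitmq:3         # message queue
    volumes:
      dbdata: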

Why do we use Docker?

  • Our developers can run an accurate simulation of our live environment on modest hardware
  • We can try out third party updates in minutes
  • Each container, once built, is very fast to deploy and start up
  • Containerization forces architectural decisions that improve the scalability of the application
  • Docker is also one of the enablers behind our blue/green deployments, allowing us to release updates with zero downtime (see the sketch after this list)
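
To give a flavour of the last point, here is a simplified, hypothetical sketch of a blue/green switch using a reverse proxy in front of two identical container stacks. Our production tooling differs in the details, but the principle is the same:

    # Hypothetical blue/green switch -- stack and proxy names are illustrative only
    # 1. Bring up the new ("green") stack alongside the live ("blue") one
    docker compose -p green up -d

    # 2. Wait until the green stack reports healthy (hypothetical health endpoint)
    curl --fail --retry 10 --retry-delay 3 http://green.internal:8080/healthz

    # 3. Point the reverse proxy at the green stack and reload it gracefully
    sed -i 's/blue.internal/green.internal/' /etc/nginx/conf.d/upstream.conf
    nginx -s reload

    # 4. Once traffic is confirmed on green, retire the blue stack
    docker compose -p blue down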

Example – Shibboleth SP updates

A good example for publishers is keeping on top of security updates for software like Shibboleth SP. We’ve isolated Shibboleth SP into several Docker containers. When an update is released, we can rebuild these containers in a few minutes and try them out. If all is well, we run an automated test suite against our entire application to ensure everything else still works as expected. Once that passes, we can release the new containers into our live environment with zero downtime.
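In practice, the cycle looks something like the following sketch. The image names, tags and test commands are hypothetical placeholders rather than our real build scripts:

    # Hypothetical update cycle for a patched Shibboleth SP container
    # 1. Rebuild the affected image against the patched upstream release
    docker build -t liblynx/shibboleth-sp:patched ./shibboleth-sp

    # 2. Run the automated test suite against the complete application stack
    docker compose -f docker-compose.test.yml up --exit-code-from tests

    # 3. If the suite passes, push the image and roll it into production
    #    via the blue/green process described above
    docker push liblynx/shibboleth-sp:patched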

“Test as you fly, and fly as you test”

NASA have a mantra: “Test as you fly, and fly as you test”. In other words, they test spacecraft in exactly the way they mean to fly them, and then operate them in exactly the way they were tested. Mission failures have been traced back to departures from this principle.
Docker allows us to follow this principle too. Once built, a container image is immutable, and we can test it in an accurate simulation of the live environment. Once released, we can be confident the running container is *exactly* what was tested.
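One concrete way to hold ourselves to this principle (a hypothetical sketch, not our exact tooling) is to record the content digest of the image that passed testing and deploy by that digest, so the bytes running in production are provably the bytes that were tested:

    # Hypothetical example: deploy the exact image that was tested, by content digest
    DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' liblynx/shibboleth-sp:patched)

    # Pulling and running by digest guarantees the container is byte-for-byte
    # identical to the image that passed the test suite
    docker pull "$DIGEST"
    docker run -d --name shibboleth-sp "$DIGEST"
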
Please contact us if you’d like to learn more about how our high performance platform can help you increase the range of access scenarios you support, simplify your access architecture and free up valuable technical resources.

Credits

Image adapted from A pit-stop for Army Racing by the U.S. Army (used and adapted under the CC BY 2.0 licence)