How to Manage Staging Environments to Speed-up DevOps


Posted by Jessie A. Pincus - November 19, 2019

Staging environments are a challenge not only because they aim to simulate and replicate production, but also because they serve different teams with varying use cases and requirements. In a world where DevOps processes and public clouds (AWS, Azure, etc.) play a big role and innovative solutions keep emerging, it’s worth taking a fresh look at how we define our development process.

This past month, Quali’s Meni Besso (Product Manager) and Tomer Admon (Solution Architect) took on this topic 28 floors above Tel Aviv, at AWS’s shared working space headquarters. They discussed where staging environments fit in the DevOps era and how to streamline the integration process. This is the first of a two-part series where we share practical tips and best practices.

There are many sub-steps in the DevOps process, which makes it tricky to take the big leaps needed to construct a reliable application release pipeline. Staging environments are typically temporary environments that match production, ensuring that releases are well tested before they go live. But even today, DevOps teams find it very hard to create staging environments that truly simulate production, and in many cases this obstacle causes the entire CI/CD pipeline to break.

In this article, we aim to describe common patterns for creating staging environments and offer some best practices that will help you minimize the risks.

Redefining the Development Process with Staging Environments

What is an Environment?

Environments are made up of multiple components, and they are not defined by any single technology. Today, you can have VMs, load balancers, cloud services, or databases in your environment. Tomorrow, your application may encompass a whole different arrangement of applications, infrastructure, data, integrations, and monitoring systems.
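
To make this concrete, here is a minimal sketch (in Python, with hypothetical names) of how an environment could be modeled as a blueprint of components rather than as any single technology:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, minimal model of an environment "blueprint": the environment
# is defined by the set of components it contains, not by one technology.
@dataclass
class Component:
    name: str          # e.g. "web-vm", "orders-db", "public-lb"
    kind: str          # e.g. "vm", "database", "load_balancer", "monitoring"
    config: dict = field(default_factory=dict)

@dataclass
class EnvironmentBlueprint:
    name: str
    components: List[Component] = field(default_factory=list)

staging = EnvironmentBlueprint(
    name="staging",
    components=[
        Component("web-vm", "vm", {"instance_type": "t3.medium", "count": 2}),
        Component("public-lb", "load_balancer", {"listeners": [443]}),
        Component("orders-db", "database", {"engine": "postgres", "version": "12"}),
        Component("metrics", "monitoring", {"dashboards": ["latency", "errors"]}),
    ],
)
```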

When we talk about staging environments, several use cases stretch across different stakeholders in the organization:

  • Engineering: last-mile validation on a production-like environment and testing third-party integrations.
  • Product: reviewing the product for bugs and identifying potential errors in the running application.
  • DevOps: testing the production upgrade process and making sure monitoring, alerting, and other testing systems run smoothly.
  • Biz Dev / Sales: running company demos and proofs of concept (PoCs).

Common Patterns for Creating Staging Environments

Static Staging Environments

The traditional “static” staging reality of application development is bumpy, with many back-loops. Engineers frequently restart services, development teams suffer from corrupted databases, and troubleshooting ensues. This is expected, but it certainly isn’t linear.

Unfortunately, static staging environments are difficult to scale when many teams work concurrently on the same static infrastructure. But if static is what works best for your application at this time, then follow some of the strategies below:

  • Recreate the environment at the start of every sprint (see the sketch after this list)
  • Occasionally refresh the data
  • Rely on “cleaned-up” production data
  • Apply monitoring and alerting processes that closely match production
  • Assign owners and clearly define their responsibilities
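
As one illustration of the first strategy, here is a minimal sketch, assuming the static staging environment is defined as an AWS CloudFormation template; the stack name and template URL are hypothetical:

```python
import boto3

# A minimal sketch of "recreate the environment at the start of every sprint",
# assuming the staging environment is defined as an AWS CloudFormation
# template. The stack name and template URL below are hypothetical.
STACK_NAME = "staging-env"
TEMPLATE_URL = "https://s3.amazonaws.com/my-bucket/staging-template.yaml"

cf = boto3.client("cloudformation")

def recreate_staging():
    # Tear down the previous sprint's environment (a no-op if it doesn't exist).
    cf.delete_stack(StackName=STACK_NAME)
    cf.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)

    # Rebuild the environment from the same template so it stays close to production.
    cf.create_stack(
        StackName=STACK_NAME,
        TemplateURL=TEMPLATE_URL,
        Capabilities=["CAPABILITY_IAM"],
    )
    cf.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)

if __name__ == "__main__":
    recreate_staging()
```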

On-demand Staging Environments

So, if static staging environments limit your teams’ development, what are the alternatives? Here we enter the world of dynamic infrastructure offered by cloud providers such as AWS, Azure, or Google Cloud. With dynamic infrastructure and on-demand staging, we can give each development team its own resources to manage as needed; the result is fewer bottlenecks and less frustration.

Dynamic, self-service staging environments allow each team to treat its environment as a product. Now they can determine:

  • When specific environments start and end (which directly affects cost)
  • Who has access to the environment and can edit or view it (the sketch after this list shows one way to encode both)
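
One possible way to encode both decisions is to tag each environment with an owner and an expiry time, and let a scheduled job clean up anything that has outlived its purpose. The sketch below assumes each environment is an AWS CloudFormation stack; the names and template URL are hypothetical:

```python
import boto3
from datetime import datetime, timedelta, timezone

# A minimal sketch of self-service, on-demand staging environments, assuming
# each environment is a CloudFormation stack tagged with an owner and an
# expiry time. Stack names and the template URL are hypothetical.
cf = boto3.client("cloudformation")

def create_team_env(team: str, ttl_hours: int = 8):
    """Spin up an environment the team owns, with a built-in end time."""
    expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
    cf.create_stack(
        StackName=f"staging-{team}",
        TemplateURL="https://s3.amazonaws.com/my-bucket/staging-template.yaml",
        Tags=[
            {"Key": "owner", "Value": team},                       # who can edit/view it
            {"Key": "expires_at", "Value": expires.isoformat()},   # cost control
        ],
    )

def reap_expired_envs():
    """Tear down any environment whose expiry time has passed (pagination omitted)."""
    now = datetime.now(timezone.utc)
    for stack in cf.describe_stacks()["Stacks"]:
        tags = {t["Key"]: t["Value"] for t in stack.get("Tags", [])}
        if "expires_at" in tags and datetime.fromisoformat(tags["expires_at"]) < now:
            cf.delete_stack(StackName=stack["StackName"])
```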

Of course, nothing is perfect for every situation. Dynamic staging is challenging because governance and usage policies must be defined per team, which can be complicated, and security concerns and the integration of multiple cloud providers and teams add complexity. But the best parts of working with dynamic staging are that developers stay closer to production, environments become scalable and easy to consume, each team faces minimal work interruptions, and environments become automated and properly managed, providing the oil for a smooth-running machine.

The “Live” Staging Model

For those of you who want to practice continuous delivery and release new versions to production without manual gates, we recommend investing in the live staging model. Production is upgraded using the popular blue/green strategy, so there’s no need for a standalone staging environment; new versions reach users continuously and automatically.

Risk management and staging happen continuously as new versions are developed, tested, and put into production. A big part of the live staging model is the concept of canary releases and blue/green deployments; Quali Colony provides both.

Blue/green means two versions run simultaneously in production. Before switching to the newer version, we can test, monitor, and debug it against the real (live) production.

Via the CI/CD tool, teams can choose to deploy to production, get feedback, update and fix, and then redeploy the new parts while the stable version keeps serving application users. This reduces risk and enables continuous operation.
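
As an illustration of how such a traffic switch might look, here is a minimal sketch assuming the blue and green versions run behind an AWS Application Load Balancer with weighted target groups; the ARNs are placeholders, and the gradual weights give a canary-style rollout:

```python
import boto3

# A minimal sketch of shifting traffic from "blue" to "green", assuming both
# versions run behind an AWS Application Load Balancer with weighted target
# groups. The ARNs below are placeholders.
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-app/..."
BLUE_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."

elbv2 = boto3.client("elbv2")

def shift_traffic(green_weight: int):
    """Send green_weight% of production traffic to the new (green) version."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG_ARN, "Weight": 100 - green_weight},
                    {"TargetGroupArn": GREEN_TG_ARN, "Weight": green_weight},
                ]
            },
        }],
    )

# Canary-style rollout: start small, watch monitoring, then shift fully.
for weight in (5, 25, 50, 100):
    shift_traffic(weight)
    # ... check dashboards/alerts here; call shift_traffic(0) to roll back ...
```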

When using blue/green deployment, teams must design for backward/forward compatibility, handle state and version differences very carefully, emphasize the kind of testing that dynamic version changes demand, and plan the backlog around the entire blue/green cycle.

Although there’s added complexity, the benefits clearly show blue/green’s superpowers. Risks are mitigated because changes are rolled out slowly and corrected continuously during each iteration. Testing and monitoring of production are ongoing, like a system of checks and balances. Releases complete more quickly and feedback arrives faster. All these small iterations and packaged changes let the business run smoothly with significantly reduced downtime.

Exciting stuff, right? We’ve got a lot more up our sleeves, along with plenty of hands-on experience worth discussing. Tune in next month for the nitty-gritty of CI/CD practical tips and best practices.

Download our “Buyer’s Guide To Scaling DevOps” to learn more about what it takes to scale DevOps using an EaaS tool.


  • About Author: Jessie is a Technical Writer focused on the CloudShell Colony Help Center at Quali. After many years of geophysical field surveys and research to help people make strategic decisions about their resources, she transitioned to the world of software technology and efficient automation. In her spare time, you’ll find her running outside, training Brazilian Jiu-Jitsu, and having adventures.

Topics: DevOps, Continuous Integration/Delivery

