3 Reasons Your Optimized Cloud Deployment Is Still Too Expensive (and How to Overcome It)

As the initial excitement of the cloud revolution wanes and we look back, there is a growing realization that achieving the promised benefits is not as effortless as we once thought. Instead, it is becoming evident that realizing these advantages is a journey that demands a combination of knowledge, meticulous planning, and unwavering discipline.

In this three-part series, we will delve into the challenges many companies face in maximizing the ROI of their cloud deployments and provide practical insights to enhance the situation.

From our experience, there are three primary reasons your post-optimization cloud deployment is still too expensive:

1. You used FinOps to solve a technical problem

The typical approach of financial analysts is to “buy better,” which locks overprovisioned resources into long-term contracts. This negates one of the marquee benefits of cloud computing: flexibility. These well-intentioned analysts usually don’t have the skills to ensure each of your services is sized and scaled correctly, a process that should happen before you lock yourself in. Native cloud advisor tools can only spot the most obvious overprovisioning and cannot leverage important business context.

To overcome this, you need skilled infrastructure practitioners in and around your team who understand how to analyze your deployments and right-size them, even recommending re-platforming in some cases.
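The right-sizing analysis those practitioners perform can be reduced to a simple principle: compare what a workload actually uses at peak against what it is provisioned for, with a safety margin. Below is a minimal sketch of that logic; the instance names, vCPU counts, and utilization figures are hypothetical, and in practice the numbers would come from your monitoring stack rather than a hard-coded list.

```python
# Minimal right-sizing sketch: flag instances whose peak CPU utilization
# suggests a smaller size would suffice. All figures are illustrative.

def rightsize(instances, headroom=0.30):
    """Recommend downsizing when peak usage, plus `headroom` safety
    margin, needs less than half the provisioned vCPUs."""
    recommendations = []
    for name, vcpus, peak_cpu in instances:
        # vCPUs actually needed at peak, including safety headroom.
        needed = peak_cpu * vcpus * (1 + headroom)
        if needed < vcpus / 2:
            recommendations.append((name, vcpus, max(1, round(needed))))
    return recommendations

fleet = [
    ("web-1", 8, 0.15),   # 8 vCPUs, 15% peak CPU: heavily overprovisioned
    ("db-1", 16, 0.70),   # 70% peak CPU: well utilized, leave alone
]
print(rightsize(fleet))   # → [('web-1', 8, 2)]
```

The business context the native advisor tools miss lives in inputs like `headroom`: a latency-sensitive service may justify a larger margin than a batch queue, and only someone who knows the workload can set it.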

2. You’re using managed cloud services at scale

Managed cloud services are a great way for a small team to reach value quickly, or for larger organizations to avoid expanding support teams to cover a skills deficit as an application’s lifecycle extends into unfamiliar technologies. But that convenience carries a premium at scale.

The challenging part is determining when you’ve reached that scale, which differs for every organization. It takes careful analysis of your resources and goals to know when DIY becomes the right approach.
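One way to frame that analysis is as a break-even calculation: the managed service costs more per unit, while running it yourself adds a fixed operational cost (staff, tooling, on-call). The sketch below makes that trade-off explicit; all the dollar figures are illustrative assumptions, not real pricing.

```python
# Hedged break-even sketch: managed-service premium vs. the fixed cost
# of operating the same service yourself. Figures are assumptions.

def breakeven_units(managed_unit_cost, diy_unit_cost, diy_fixed_monthly):
    """Monthly workload volume at which DIY becomes cheaper.
    Managed: cost = managed_unit_cost * units
    DIY:     cost = diy_fixed_monthly + diy_unit_cost * units
    """
    premium = managed_unit_cost - diy_unit_cost
    if premium <= 0:
        return float("inf")  # managed never costs more per unit
    return diy_fixed_monthly / premium

# e.g. a managed database at a hypothetical $0.25/GB-month vs. self-managed
# at $0.10/GB-month plus ~$15,000/month of engineering time to operate it.
print(breakeven_units(0.25, 0.10, 15_000))  # → 100000.0 GB-months
```

The hard part, as the text notes, is estimating `diy_fixed_monthly` honestly: it must include the hiring, training, and on-call burden, which is exactly where organizations tend to undercount.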

3. Dark Data Hoarding

Cloud storage is effectively infinitely scalable, but that scalability has a significant downside. Traditional datacenter technologies impose natural barriers to unabated storage consumption: purchase events driven by capacity limits force an organization to review and rationalize how it uses storage.

This rarely happens in the cloud, where storage growth becomes like the proverbial frog in slowly heating water. Compounding the problem, organizational paranoia about the untapped value of data leads many companies to keep everything, and worse, to replicate it for DR, a process cloud providers make deceptively easy.

These problems are usually rooted in culture, and the solutions will likely cross organizational boundaries. Sunlight is the best disinfectant: the first step is a focused data strategy that lets you understand the true value of your bits and separate the wheat from the chaff. Then align your tooling and services to execute on that strategy, including data lakes and warehouses, lifecycle policies (archiving only what is necessary), and, finally, data destruction.
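Once the strategy exists, lifecycle policies are where it becomes enforceable. As one possible shape, here is a sketch of a tiered rule in the format Amazon S3 accepts: transition cold data to archival storage, then destroy it when retention lapses. The bucket, prefix, and day thresholds are hypothetical; the thresholds should come from your data strategy and GRC review, not from a blog post.

```python
# Sketch of a tiered storage lifecycle rule in S3's rule format.
# Prefix and day counts are illustrative placeholders.

lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-delete-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Move to cold storage once the data is no longer hot...
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # ...and destroy it once retention obligations lapse.
            "Expiration": {"Days": 365},
        }
    ]
}

# This structure would be applied (not run here) via boto3:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["Expiration"]["Days"])  # → 365
```

The `Expiration` rule is the piece most organizations omit, and it is the piece that actually stops the dark-data hoard from growing.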

Involve your GRC team and take the time to understand what regulatory requirements actually demand; unnecessarily conservative retention policies are a billion-dollar market, and your organization is likely a loyal consumer.

If you’re feeling stuck, mired in cultural sacred cows or restrictive politics, Cuesta’s experts can help by analyzing and reporting the true opportunity cost of inaction and tailoring your cloud deployments to recoup those costs, putting you on track for ongoing optimization.


Technology doesn’t wait. Let’s start the conversation.

You want to achieve all your goals. We want to hear about them. Let’s talk about the future of your technology.