I leave the oven on quite a bit. It’s an issue my significant other takes great umbrage at, and for good reason: it’s a waste of resources! It doesn’t seem like a lot, but it adds up to the tune of about $0.30 per hour. This is a great example of waste at the scale of an individual, as it probably costs me 5 scolds and maybe $1.50 per year.
All over the world, people are leaving the oven on. In this case, it is a metaphorical oven with an even greater cost — that metaphorical oven is servers. Waste in computing is a much bigger problem than my tendency to heat our house with our Kenmore. Organizations are paying for the equivalent of millions of ovens, and they’re running them 24×7. Let’s break down the problem.
Data and analytics are growing
We all know that data is growing at an accelerating rate. As a society, we are capturing more and more data points, from connected devices to clickstreams and beyond.
Organizations are now trying to harness that data and turn it into a competitive advantage. They’re applying AI and automation to put this data in the hands of more people, multiplying its value by enabling insights that were once out of reach for so many.
The gap between demand and hardware
Analytics demand is highly variable across two main cycles: the daily cycle (roughly 9am-5pm) and the monthly/quarterly cycle. Many organizations today are paying for hardware 24x7x365, while their demand for analytics has a curve like a piece of spaghetti.
Classic analytics deployments are generally fixed: someone from a vendor recommends how many servers you should deploy, and that is what you run. What you see below is hardware uptime and actual usage of that hardware, plotted over time for a classic deployment. The hardware doesn’t change (fixed) while the usage does (demand is variable). The section with red lines is waste, and it’s huge.
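To make the gap concrete, here is a minimal sketch with made-up numbers: a fixed cluster sized for peak demand, compared against a toy daily demand curve. The capacity, cost, and demand figures are illustrative assumptions, not measurements from any real deployment.

```python
# Toy model of a fixed deployment: capacity is constant while demand
# follows a daily curve. All numbers are made up for illustration.
HOURS = 24
FIXED_CAPACITY = 100        # provisioned units, sized for peak demand
COST_PER_UNIT_HOUR = 0.05   # hypothetical $ cost per unit per hour

def demand(hour):
    """Toy demand curve: busy 9am-5pm, nearly idle overnight."""
    return 90 if 9 <= hour < 17 else 10

paid = FIXED_CAPACITY * HOURS * COST_PER_UNIT_HOUR
used = sum(demand(h) for h in range(HOURS)) * COST_PER_UNIT_HOUR
print(f"paid:   ${paid:.2f} per day")
print(f"used:   ${used:.2f} per day")
print(f"wasted: ${paid - used:.2f} per day ({(paid - used) / paid:.0%})")
```

Even with these toy numbers, more than half of the spend goes to idle hardware. That idle spend is exactly the red-lined area in the graph above.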
But aren’t the tools I am using scalable?
Some might think, while reading this, that they too can capture this waste and turn it into cost savings. After all, their technology stack supports it, right?
Wrong. Most legacy analytics deployments look like the above for a reason.
With the advent of flexible, elastic cloud infrastructure, many analytics tools tried to adapt their architecture to take advantage of hardware that was easy to scale.
However, this meant adapting old architectures to new ideas, and legacy analytics tools don’t scale well. From visualization to data science, most platforms in place today are painful to manage for growth. There is no automation, performance often fails to scale linearly due to bottlenecks, and it takes serious chops to add or remove capacity even once, let alone continuously as demand changes.
Many analytics tools claim scalability, but they are only scalable in the sense that, with significant effort and investment, resources can be added or removed. Their waste graph looks like this:
Continuous automated scaling
Snowflake popularized the concept of continuous automated scaling: the idea that the hardware resources supporting a deployment should always match the demand for those resources. It is more than an idea, though. Continuous automated scaling is the technology, logic, and integrations that ensure your hardware always matches your usage. Here is our cost over time graph for continuous automated scaling:
That’s a pretty novel idea, right? Imagine if my oven always turned on when I needed it and off when I didn’t, without my ever having to think about turning the dial, hitting the switch, or setting the temperature.
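At its core, the control loop behind this idea is simple to state, even if it is hard to build well. Here is a minimal sketch of the logic, simulated over one day. The demand numbers, utilization target, and function names are all illustrative assumptions, not Tellius’s actual implementation.

```python
import math

# Toy simulation of continuous-autoscaling logic over one day. Demand
# figures and the utilization target are illustrative assumptions.
TARGET_UTILIZATION = 0.70  # aim to keep provisioned nodes ~70% busy

def demand(hour):
    """Toy load curve in node-equivalents: busy 9am-5pm, quiet overnight."""
    return 63 if 9 <= hour < 17 else 7

def desired_capacity(load):
    """Smallest node count that keeps utilization at or below the target."""
    return max(1, math.ceil(load / TARGET_UTILIZATION))

capacity = 1
for hour in range(24):
    wanted = desired_capacity(demand(hour))
    if wanted != capacity:
        print(f"{hour:02d}:00  scaling {capacity} -> {wanted} nodes")
        capacity = wanted
```

The loop itself is the easy part. The hard parts are everything around it: provisioning nodes fast enough that the 9am scale-up doesn’t hurt the first queries of the day, and draining in-flight work safely before scaling back down.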
The Tellius development team built continuous automated scaling technology that lets modern teams break free of the excess hardware they are paying for. Not only do we help companies reclaim that waste, we also help them put scaling-related performance issues behind them for good. How often has your MicroStrategy or Tableau configuration been the sticking point whenever scaling came up? No longer.
How did we do it?
It turns out this is really hard to build, which is why Snowflake is only available as a managed service running in Snowflake’s cloud. They tightly control all of the moving parts behind the scenes to make sure they deliver continuous automated scaling.
Our team spent 12 months building our own cluster manager, autoscaling logic, and integrations with common cloud platforms. This means that Tellius can run behind your firewall with continuous automated scaling. There is none of the sacrifice that comes with a hosted solution, as this is completely self-managed and on your premises.
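To give a flavor of what “integrations with common cloud platforms” means in practice, here is a generic illustration of resizing a cluster through AWS’s EC2 Auto Scaling API via boto3. The group name is hypothetical, and this shows the general pattern rather than Tellius’s integration code.

```python
import boto3

# Generic illustration only: resizing a cluster through AWS's EC2 Auto
# Scaling API. The group name is hypothetical; this is not Tellius's code.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def set_cluster_size(nodes):
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="analytics-cluster",  # hypothetical group name
        DesiredCapacity=nodes,
        HonorCooldown=True,  # respect the group's cooldown between changes
    )

set_cluster_size(10)  # e.g., scale up ahead of the 9am demand spike
```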
Want to learn more?
Taking advantage of continuous automated scaling can be a game changer for large and small organizations alike. Anyone can get started with just a few GB of data and grow to petabytes dynamically. To learn more about the technology, check out our blog. For a free trial of Tellius, visit www.tellius.com/free-trial.
Postscript
I promise I’m not being neglectful. When I’m cooking (and I love to cook), my mind is often focused on the task at hand and not the minute details. I’m scooping, stirring, weighing, measuring, mixing, and sometimes even making something great.
This is why continuous automated scaling is so great: people are more productive when they don’t have to sweat the small stuff.