Software-defined infrastructure is among the most noteworthy advances in data centre technology, providing new levels of flexibility in scale-out data infrastructures. Decoupling hardware and software has enabled a freedom that was once unavailable and seeded a scaling revolution that continues today. From this revolution, many software-defined storage (SDS) solutions were born.
Vendors worked to build storage software that simplifies storage management with hardware-agnostic solutions, empowering a “Lego effect” that has allowed firms to scale up and down. Hardware-agnostic storage promises unlimited scalability, greater efficiency, freedom, and mobility in the data centre. It’s a great vision, but is it the reality?
What are the facts of SDS?
The reality doesn’t match the vision and, increasingly, data centre managers are feeling the drag. While SDS has delivered a great many benefits, the truth is that it hasn’t been a panacea. Vendor lock-in remains the norm in the SDS ecosystem. Data that has landed is expensive to move, and switching between proprietary solutions is cost-prohibitive.
What’s more, a dangerous idea has emerged in the industry at large – the idea that the hardware doesn’t matter. This idea has stalled innovation and created a race to the bottom, with vendors looking at hardware as a place to cut corners for margins, relying on opaque commercial-off-the-shelf (COTS) solutions to deliver sophisticated storage software.
The idea is that storage software vendors can optimise their packages to eliminate the inefficiencies present in one-size-fits-all COTS hardware. But the dirty secret is that they can’t – especially not at scale. Not everything can be solved in software.
Clever workarounds that “optimise” a system might be good for a one-off release that gets an organisation’s customers a working solution in the meantime, but the truth is that storage vendors can’t code physics out of existence – and for every bottleneck you try to code around, you could end up coding in more power draw and more heat.
This creates more demand for cooling, which in turn requires even more power and more space. The truth is that inefficiencies in these systems end up creating a vicious cycle of waste that organisations can’t easily escape.
What are the reasons why hardware matters?
The first rule of data infrastructure is this: hardware matters. This will become increasingly apparent as so-called “core-to-edge data infrastructure” – the shift to building more infrastructure outside of the hyperscale data centre – matures.
There are three key reasons why:
COTS-based systems aren’t optimal for edge deployments
We put this to the test years ago working with Australian special forces, trying to create systems that could collect sensitive data in extreme environments like the Mariana Trench. We found that real-time data infrastructures run headlong into the reality of physics.
High-performance, low latency infrastructure necessitates that it is placed close to where the data is being created and used – and in edge use cases, space will always be a constraint. One-size-fits-all COTS-based systems are fundamentally inefficient for edge deployments.
It’s a vicious cycle: space constraints combined with inefficiencies created between the hardware and software lead to overheating. This creates a need for additional cooling, which requires additional real estate. Much innovation ends up going into the cooling infrastructure – hoses, liquid, cabinets, immersion – all of which take power and space. Wouldn’t it make more sense to build cooler-operating hardware that is optimised for the software it’s running?
COTS-based supply chains are opaque and increasingly unreliable
Virtually every country relies on foreign-manufactured chips and sub-assemblies, with most componentry coming from Southeast Asia. This has created unavoidable dependencies, posing both economic and security challenges. These become exacerbated during uncontrollable events – something the world knows all too well in the post-COVID era, where chip shortages and fragile global supply chains have become common.
Aside from these challenges, the industry faces a great contradiction as so-called “Zero Trust” security models take root in enterprises and government agencies around the globe.
Zero trust is required precisely because most vendors ask their customers to trust their black-box designs. In a world where the entire value chain – from design through sourcing, manufacturing, and delivery – is entirely transparent, you no longer have to trust.
This is the purest form of zero trust. The reality is that COTS-based hardware systems, at least as they currently exist, preclude sovereign resilience and deny mission-critical infrastructures the secure provenance that transparent audit makes possible.
COTS-based systems sabotage sustainability
The unfortunate reality is that software-defined infrastructure, while being a very good idea, has led to software bloat and an innovation malaise that has become increasingly detrimental to carbon reduction goals, especially as systems scale. A tremendous amount of waste exists in the current IT manufacturing ecosystem, making it difficult for organisations with large amounts of data to reduce their carbon footprints while keeping pace with growth.
Instead of innovating hardware to be more efficient, IT solutions companies have thrown more processing power at I/O problems and relied on “outside-in” approaches such as attempts at software optimisation. The result is inefficient, power-draining, heat-producing, expensive architectures that create as many problems as they solve in fast-growing data centres.
Energy reduction and carbon-footprint goals end up being clever exercises in greenwashing the numbers rather than genuine innovation in the data centre.
How can hardware be brought back into the picture?
This is not an argument against software-defined infrastructure in the least. The issue is that the industry has discarded the value of hardware in the quest to sell cheap systems at top prices. This has created the opposite of what the software-defined ethos is fundamentally trying to achieve. Ask yourself this question: who benefits more from software-defined infrastructure in your own racks: you or the vendor whose name is stamped on it?
Solving this comes down to adding a little more rigour to the purchasing and acquisition process. IT architects need to start asking questions that affect their firm’s future – especially in mission-critical systems. Does our SDS solution allow us to scale at the edge?
Does it enable us to switch suppliers with relative ease? If we’re applying zero trust principles to our networks, is that scrutiny being extended to the hardware?
Where is the hardware manufactured? Who assembled it? Can we prove the provenance of every component? Could we audit the source code if we wanted to? Can we scale without destroying our carbon reduction goals, or requiring new real estate to do it?
Questions like these will help firms keep the industry on a better path – a path that responds to what customers really need in a more holistic way. The software-defined paradigm has helped revolutionise the data centre and especially scalable storage, but it’s important that leaders remember that hardware still matters. This will only become more apparent as edge strategies grow more dominant and core data centre scalability reaches its physical limits.
When IT leaders start turning over more rocks looking for innovation at the hardware level, that’s when the true value of software-defined infrastructure will be found.
Phil Straw is the CEO and co-founder of ‘slightly controversial, quietly confident’ venture-backed company SoftIron. Founded in 2012 by a group of tech enthusiasts building purpose-designed systems for challenging special forces environments, SoftIron today is growing rapidly and making waves across the globe as a leader in purpose-built, performance-optimised data centre solutions.