A few years ago, measuring the growth of the data storage industry was easy. Industry watchers could simply check the latest quarterly IDC or Gartner reports to see revenue or capacity figures for the storage industry as a whole and for each of the top players in the market.
Today, it’s not so simple. Data may reside on a traditional dedicated on-premises storage array. It may be sitting on an industry-standard server configured via software to act as a storage array. It may be sitting in a public cloud, using either the cloud provider’s own technology or a traditional storage vendor’s cloud-native version of its array software. Or the data may be split between on-premises and cloud infrastructures, shifting between them, perhaps temporarily, based on an application’s needs.
All these changes have made data storage much more capable than in the past, said John Woodall, vice president and CTO of hybrid cloud at Dallas-based solution provider General Datatech.
“First there was file storage, then file and block,” Woodall said. “Then it was hybrid marketplace offerings, and then it was hybrid multi-cloud, and then a redefined ‘unified,’ which is file, block, object and cloud. Cisco recently reported that 82 percent of enterprises are operating in a hybrid cloud model, meaning on-prem and one or more hyperscalers. Of those in the cloud, I think it was 92 percent that operate in a multi-cloud model. So every time you have a different technology, a different set of APIs, a different set of services, even though they might be in the same storage category, they’re different.”
Now multiply that across compute, hypervisors, networking, security, on-premises, Amazon Web Services, Microsoft Azure, Google Cloud, and the idea of a hybrid cloud becomes the ‘Nirvana,’ Woodall said.
“The promise is simplified operations, easier-to-do infrastructure as code, a more consistent set of services, observability and all these other things,” he said. “To deliver on that compute, you can extend it using maybe VMware, VMware Cloud, or containers, to create a consistent model and operating model observability around the compute layer. But if you can’t expand your storage, meaning the operations thereof, and the APIs and automation and infrastructure as code capabilities to make on-prem and cloud storage the same and extend the services of snapshots, replication, quality, etc., then you really have only extended your fabric at the compute and network layer, but the storage layer is still left to more variability.”
Furthermore, users are now showing a preference for cloud-native technologies, Woodall said: they go to their Chrome or other browser, click into their cloud console and consume native services for everything. There are options for storage, but they are not as clean as those for compute and networking, he said.
“If we look back over the last 10, 20 years, storage vendors in general have responded with the ability to either provide observability and manageability of cloud-native storage resources, or provide their own version with primary or secondary storage either via a marketplace or via first-party technologies,” he said. “And so it’s a move in the right direction. It is an essential dynamic where we must see more maturity and less siloing. And that’s where third parties or established vendors are providing an overlay for command and control irrespective of the underlying technology.”
The storage industry continues to rapidly change, and many vendors are indeed looking to provide ways to better extend and manage storage regardless of where it exists.
Here are 100 vendors solution providers should have on their radar across software-defined storage; data recovery, observability and resiliency; and components.