It’s no secret DCIM didn’t live up to early market hype. Now, in a bid to improve things, terms such as ‘next-gen’ and ‘2.0’ are flying around the industry – but what’s changed? Is this ‘new and improved’ technology really capable not only of addressing the shortcomings of legacy DCIM, but of mitigating the unique challenges posed by hybrid environments? DCR spoke with Kevin Brown of Schneider Electric to get his take on re-inventing a platform for the complex hybrid world in which we find ourselves.
When the DCIM market didn’t take off as initially predicted, what were Schneider’s first thoughts?
Well, we had a serious conversation internally about whether we should continue investing in this area. That was what prompted some of the discussion about how we could make things better.
Why do you think DCIM failed to meet customer needs the first time round?
We came to the conclusion that a lot of the DCIM vendors, us included, had been focusing on the wrong things. We were all chasing very high-end features, so the tools were over-designed and difficult to use, which made it hard to get started. They were difficult to scale and maintain, and they were expensive – these were all pain points we had to solve. We were also focusing heavily on enterprise data centres, and ultimately all these features started driving us and driving the market.
We didn’t focus enough on the basics. So we took a step back and realised that the DCIM market is really no longer enterprise data centres – it’s this hybrid environment. Our thinking was to focus on how to get visibility everywhere, while mitigating the pain points from before. We also recognised the need for service and support, and that the ecosystem had a much greater role to play in this hybrid environment than you might have thought in the traditional enterprise.
This drove us to say, okay we’re going to throw out everything we did and we’re going to create a whole new architecture.
What spurred Schneider to start entirely from scratch?
Firstly, we concluded that in this hybrid environment there are always going to be challenges that customers face, and we needed to help them address those challenges. Secondly, there were new technologies available that were not available when we originally started DCIM. Namely, we wanted to rebuild from the ground up for the cloud.
Speaking of new technology, for DCIM to be truly ‘next-gen’ what in your opinion does that encompass?
In our view there are five attributes of a next-generation DCIM. First, it’s got to be cloud-based; without the cloud you’ve got no analytics – companies can’t say they’re using AI and machine learning without it. Second, in order to utilise AI and machine learning you’ve got to get the information into a data lake. Third, it should be mobile-first, with open APIs so that it can integrate more easily and customers can use it from anywhere. Fourth, we look at it as a compliance tool for security. And fifth, we’ve put a real focus on simplicity of use.
In our initial conversations surrounding DCIM years ago, we had people come to us saying that it was complex software and that we needed to provide training, installation services, customisation services, all these things, because that’s what high-end enterprise software requires. Reflecting on that today, DCIM is really a tool that you should just be able to use; it should just work, not be this big, complex thing.
These five ‘next-gen’ attributes are obviously unique to Schneider, how have your views been received so far?
Are we right? I don’t know, but we have a new white paper on this (#281) that anyone can download and read through. The industry needs a definition – we can’t just say ‘next-gen’ – so we’re at least trying to put our view forward, and I’m happy to debate with people whether it’s right or not.
But we’ve put forward our view as to the attributes of next-gen, and we’re very excited about the response we’re getting. Our primary focus was on people who have these distributed edge environments and weren’t managing them. The goal with the new architecture was to get them managing their stuff first – that was the target audience – and we’re getting a very good response and are super excited by the feedback from those customers.
But what we want to make clear is that we aren’t going to leave any customers behind. Yes, we’ve moved to this new architecture, but our intention is to continue to support both until customers want to move and we’re going to make that as easy as possible.
With great amounts of technology come great amounts of power consumption. How is Schneider helping to keep things efficient?
If you take a look at energy consumption, there’s a debate about how much of the grid data centres take as an overall percentage, but if you add in the telco and 5G buildout, along with the more traditional edge, we anticipate that’s going to grow twice as fast. So we understand this is going to involve significant energy consumption.
At Schneider, one of the things that motivates us is making sure that people are using energy responsibly. We anticipated people would be concerned about this, which is why we developed EcoStruxure IT. This tool allows our customers to see exactly what they are managing out on the edge, how much power it’s taking, whether they are managing it efficiently and, more importantly, the life cycle management of it.
A guy actually came up to me today and pulled up EcoStruxure IT on his phone and said, ‘I just wanted you to know, I had to go to Mexico City and we had a problem with our network and I was able to solve it from Mexico City.’ And that was all because he had EcoStruxure IT on his phone.
We are obviously making great progress with the type of analytics DCIM is capable of, but of course we can’t predict everything. Where are we at currently?
Whenever you install a device you get early failures due to engineering or manufacturing defects, mid-life failures during normal operation, and then end-of-life failures.
Take a car for example. Normal operating period failures are like hitting a nail, bursting a tyre or falling into a pothole. Wear-out failure is when your tyre wears down, and you can usually predict that pretty accurately when you take into account all aspects of the tyre.
Schneider is currently concentrating on this wear-out failure. We’re rolling out analytics that should be able to predict when every single device will wear out. For example, our wear-out model can predict when to replace batteries by looking at the type of battery, its age, cycling, temperature and so on. It can also tell you why a battery is failing.
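To make that concrete, here is a minimal, purely hypothetical sketch of how a wear-out estimate of this kind might combine a battery’s age, cycling and operating temperature into a remaining-life figure. It is not Schneider’s actual model – the field names, thresholds and the ‘halve life per +10°C’ derating rule are illustrative assumptions only.

```python
# Hypothetical battery wear-out estimator (illustrative only, not Schneider's model).
# Field names, thresholds and the temperature derating rule are assumptions.
from dataclasses import dataclass


@dataclass
class BatteryTelemetry:
    rated_life_years: float   # manufacturer's rated service life
    age_years: float          # time in service so far
    rated_cycles: int         # rated charge/discharge cycles
    cycles_used: int          # observed charge/discharge cycles
    avg_temp_c: float         # average operating temperature


def estimated_remaining_life_years(b: BatteryTelemetry) -> float:
    """Return a rough remaining-life estimate in years."""
    # Calendar wear: fraction of rated service life already consumed.
    age_fraction = b.age_years / b.rated_life_years
    # Cycling wear: fraction of rated cycles already consumed.
    cycle_fraction = b.cycles_used / b.rated_cycles
    # Temperature derating: assume sustained operation above ~25°C
    # accelerates ageing, roughly halving life per +10°C.
    temp_factor = 2 ** (max(b.avg_temp_c - 25.0, 0.0) / 10.0)
    # Take the dominant wear mechanism and apply the temperature penalty.
    wear = max(age_fraction, cycle_fraction) * temp_factor
    remaining_fraction = max(1.0 - wear, 0.0)
    return remaining_fraction * b.rated_life_years


if __name__ == "__main__":
    ups_battery = BatteryTelemetry(
        rated_life_years=5.0, age_years=3.0,
        rated_cycles=500, cycles_used=220, avg_temp_c=30.0,
    )
    print(f"Estimated remaining life: {estimated_remaining_life_years(ups_battery):.1f} years")
```

In practice a production model would be driven by fleet telemetry rather than fixed rules of thumb, which is where the data lake and machine learning mentioned earlier come in.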
This capability will be rolling out through an update in the next couple of weeks, and we’re looking at predicting early and mid-life failures in the future – we’re in the research stages now.
What are the key benefits or opportunities for a data centre using next-gen DCIM?
The scope of the analytics next-gen DCIM is capable of shouldn’t be underestimated. Analytics help to improve system availability, reduce downtime and maintenance costs, eliminate the need for scheduled visits, and predict when equipment will fail. You are also able to determine the root causes of failures and detect human error. Machine learning algorithms can also help dramatically reduce energy costs, as well as free up valuable resources for more worthwhile tasks.
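As a purely illustrative example of the kind of analytics described here – not a Schneider implementation – the sketch below flags power readings that deviate sharply from recent behaviour, the sort of signal that can point to an equipment fault or human error. The window size and threshold are assumptions.

```python
# Illustrative sketch only: a simple anomaly detector for device power readings.
# The rolling window size and z-score threshold are assumed values.
from statistics import mean, stdev


def flag_power_anomalies(readings_w, window=24, z_threshold=3.0):
    """Return (index, value) pairs where a reading deviates sharply from the recent trend.

    readings_w: list of power readings in watts, oldest first.
    """
    anomalies = []
    for i in range(window, len(readings_w)):
        recent = readings_w[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings_w[i] - mu) > z_threshold * sigma:
            anomalies.append((i, readings_w[i]))
    return anomalies


if __name__ == "__main__":
    # 48 hourly readings with one obvious spike at the end.
    history = [500 + (i % 3) for i in range(47)] + [900]
    print(flag_power_anomalies(history))  # flags the 900 W spike
```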
Finally, what in your view is the future of DCIM?
It’s hard to tell, as we’re still in the early stages – we’re not even at the middle stage yet. We’ve been talking about this for three years, and only now are customers coming to us asking the questions we’ve been raising over the last two or three years.
It’s only really over the last 12 months that we’ve felt this traction; customers are asking different questions than they have in the past. But overall, we’re very excited about the opportunities ahead and very pleased with the progress we’ve made.