Q&A: Kevin Brown, Schneider Electric


Kevin Brown is a man with two job titles (and two bosses) at Schneider Electric, the company known for its focus on energy efficiency and, perhaps less well known but undeservedly so, for its data centre services and solutions.

IT Pro caught up with the firm's senior vice president of innovation and CTO at its technology innovation centre in Missouri to talk tech.

Can you tell us a little bit about your role and what your key responsibilities are?

I started at APC in 1991 and was there until 2004. I left for a few years and then came back before Schneider acquired APC. I then had various roles and on 1 July, they asked me to become CTO of the IT division.

Structurally, Schneider is made up of different divisions and businesses within the group. I'm CTO of the IT business and I have a dotted line into Prith Banerjee, executive vice president (of the IT division) and CTO at Schneider Electric.

My role involves all the things you'd expect. I look at where the market trends are going and I have a team working for me called the data centre science centre, and all they do is go out and conduct this type of research. And, if they find things that are interesting, the vast majority of it is made open to the public for free.

A lot of it is stuff we invest in and use to drive our internal strategy, but then we also make it publicly available.

Can you give us an example of that research? Data centre availability seems to be a hot topic at the moment.

We had customers asking us about micro data centres and we did have things in our portfolio to respond to that but we thought it was interesting that customers were starting to ask the questions. We wanted to look at why and what was driving this trend.

So we created a whitepaper that goes through an analysis of what happens when you move to a hybrid cloud environment. Almost every company now has some level of public cloud computing. But it's also clear they can't move everything out to the cloud, so they have some things left over on-site. So we started to look at the implications of that.

The idea that everything is going to be in the cloud doesn't work because of latency, bandwidth, regulations and so on. Even Netflix in the US, for example, is distributed out because it was cheaper to buy a co-lo and put the equipment in than it was to pay for the bandwidth.

You have all these dynamics. It's really quite interesting. What started dawning on us is that we're about to see this big shift in the way people perceive the computing environment and what their expectations of it are.

For our industry, the culture tends to look at one data centre of thousands of square metres. Failure has been defined as losing power to an IT rack. That's the way the industry has been geared. But, if you combine the new expectations and the way millennials have been raised in terms of IT, it's a very different generation to any other.

Gaming is a great example. I was using the internet at home and it was just fine for my wife, a graphic designer, and me. We didn't have any problems. But my sons were playing this game and all they were doing was debating about the network. It turned out it wasn't necessarily the bandwidth, it was the latency.

Pokemon Go, for example, made people complain about the mobile phone signal they were getting. They're being raised to think about technology in the same way as we think about electricity. If I lose email for a day, I'm actually happy, whereas if they lose connectivity to a social network for even five minutes they go nuts.

When electricity was rolled out, people used it when it worked, but when it was out it wasn't a necessity. Now, if the power doesn't come on when I hit the switch, it means something really bad has happened. We're moving more into that environment with IT.

So instead of the focus being on power being cut off to the rack, it needs to be on the user experience. That's what will become important.

That change of thinking must present quite a few challenges?

If you go into a typical Tier 3 data centre, there are guards at the door. If I want to go in, I have to hand over my licence. They give me a badge, I get escorted around and, when I leave, I have to give the badge back. It's a highly secure environment.

But if you go into what people have in the office environment, the wiring closet or server room, anybody can get at it. That physical access is part of the cyber security risk.

Yesterday's wiring closets and server rooms are going to become data centres, localised data centres and micro data centres. It's going to become more critical. When you start putting maths behind that, some interesting things become apparent: you need to focus on user experience and you need to focus on availability.

If you take a typical Tier 1 data centre, that implies 99.67% availability. A Tier 4 would be 99.98%. It doesn't sound like a big difference, but it's the difference between 30 minutes and 30 hours of downtime, 60 times the difference.

So we've started saying: don't look at the percentage of availability, look at the amount of downtime in terms of the hours you lose.

What if you have 10 sites that are running at 99.67% and they're connected into a centralised data centre that's running at 99.98%? You actually just went down to 99.65%. So that 30 hours of downtime just went up.

You also need to think about the number of people at each site and calculate the man-hours of downtime. You might have a Tier 3 data centre where you think you only have 1.5 hours of downtime each year, but that could actually translate to 30,000 man-hours.
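The arithmetic behind these figures can be sketched in a few lines. This is an illustrative calculation, not Schneider's tool: the serial-chain model (a site is only useful while both it and the central data centre are up, so availabilities multiply) matches the 99.67% x 99.98% = 99.65% example above, and the 20,000-person headcount is an assumption inferred from the 1.5 hours x 30,000 man-hours figure.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(availability: float) -> float:
    """Annual downtime implied by an availability fraction."""
    return (1.0 - availability) * HOURS_PER_YEAR

def combined_availability(*availabilities: float) -> float:
    """Availability of a chain where every component must be up,
    so the individual availabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# A remote site at 99.67% that depends on a central DC at 99.98%:
site, central = 0.9967, 0.9998
chain = combined_availability(site, central)
print(f"combined availability: {chain:.2%}")            # ~99.65%
print(f"site downtime: {downtime_hours(site):.0f} h/yr")  # ~29 h

# Weight downtime by headcount to get man-hours
# (assumed 20,000 staff, as implied by the 30,000 man-hours example):
print(f"man-hours lost: {1.5 * 20_000:,.0f}")
```

The key point the sketch makes concrete is that adding a dependency can only lower the chain's availability, so the hours of downtime at the edge sites dominate the total.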

Are we in denial about the problems then?

It's not so much a case of being in denial, but there does need to be this shift in thinking away from centralised data centres into what the overall system is and how many people are affected.

So the first thing you need to think about is: What is the availability of the system of all the data centres connected together in this hybrid environment? And then, the second thing is how many people are impacted? The third thing you need to think about is how critical the business function is.

A call centre is really important, for example, so if that goes down it's a problem. If my network access goes down, for the most part it's not a problem and many people on my team would probably say their productivity goes up if I'm not available. If I lose connectivity, I joke that I would hope over a period of time it would become important, but over a short period of time it doesn't matter. But you can imagine with a call centre somebody not being connected is a disaster.

We just need to get more sophisticated in our thinking. We need to give data centre managers the tools to get the argument up the chain and into the CFO's office. Often, the way to present it is not the expense of the equipment, but the cost of the downtime, and to put a business case around it.

It's all being driven by two things: 1) the hybrid cloud environment and 2) people's expectations around the user experience of IT, as opposed to something simply being on or off. They're digital natives.

How much of what you do as a business is driven by customer feedback vs. your own research?

It's a combination. The only way I can really get a feel for what's going on in the industry is by talking to customers. But in my role I can also look at where technology is headed.

I don't want to make the solutions customers are asking for, I want to make the solutions they're not asking for that really solve their problems. If we do that really well, everything takes care of itself.

We never do technology for technology's sake. There is no shortage of technology for technology's sake in this industry. There always has to be a customer at the other end and a customer's problem we're solving.

So how do you prioritise in terms of focus and investment?

Most of it is looking at the next wave. And wherever that wave is you want to be on it. A lot of the time it can be difficult to predict as there might be four or five waves out there on the horizon, so how do you pick which one to bet on? We try to be on a few of them, see which one is really picking up and then put more focus behind that.

There's a lot of talk at the moment about edge computing. How important is this going to be?

I think we will become more industry focused. The way I would talk to a retail environment vs, say, oil and gas would be quite different. For oil and gas, their edge environment is HPC in some cases, where they're out doing exploration and the amount of compute they need and data being generated is really quite astonishing.

A lot of these things are remote and may have a satellite uplink. That's edge computing for them, and it's IoT.

Factory floors are another interesting area. Historically, industrial automation was all proprietary PCs. Now, it's all moving to more standard IT. If you go to the more classic verticals, yesterday's server room and wiring closet is becoming your data centre. That is the weakest link. Your downtime now is driven by those things. We have pictures of wiring closets that are like rat's nests!

We're also talking about trying to get better standardisation. We've got these edge environments, so how do we manage them, or get others to manage them for us?

As fast as this wave is hitting us, it's still early days. What 10 years from now will be accepted as fact is currently a big debate. So I tend to look at the problem in terms of what's keeping CIOs awake: am I going to have an outage? Am I going to be the next Target (in terms of data being exposed)?

There is so much focus in the industry on the big co-los, but the big blind spot for everyone is what's happening at the edge. If a black hat hacker can walk into a wiring closet and plug into a port, you've just given them complete access to everything. I have to give you so many details to get access to a data centre, but I can just go to a janitor and get keys to your wiring closet. That's insane!

If you're a CIO, at a minimum you should be looking at 'hardening' the edge.

Maggie Holland

Maggie has been a journalist since 1999, starting her career as an editorial assistant on then-weekly magazine Computing, before working her way up to senior reporter level. In 2006, just weeks before ITPro was launched, Maggie joined Dennis Publishing as a reporter. Having worked her way up to editor of ITPro, she was appointed group editor of CloudPro and ITPro in April 2012. In 2016, she became editorial director and took responsibility for ChannelPro.

Her areas of particular interest, aside from cloud, include management and C-level issues, the business value of technology, green and environmental issues and careers to name but a few.