Thursday 8 May 2014

Internet of Things Architecture

I've recently been thinking about the Internet of Things, or more specifically home automation and control.

I'm not happy with either of the two main methods for accessing Things remotely. The first method is to poke holes in your firewall and do port forwarding. This is a great and simple method for a single device. For example, if you have a WiFi thermostat to control your heating, it can connect to your home network and the router can be configured so that incoming connections to the appropriate port are forwarded to the device.
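
To make this concrete, here's a rough sketch of what accessing a port-forwarded thermostat might look like from outside the home. Everything here is invented for the example - the address, the forwarded port and the API paths - and it assumes the thermostat happens to expose a simple HTTP interface.

    import requests  # third-party HTTP client; any HTTP library would do

    # All of these values are hypothetical - substitute your own public IP or
    # dynamic DNS name, the port your router forwards, and whatever API your
    # thermostat actually exposes.
    HOME_ADDRESS = "my-home.example.com"
    FORWARDED_PORT = 8080

    def get_temperature():
        """Read the current temperature from the thermostat's (assumed) HTTP API."""
        url = "http://%s:%d/api/temperature" % (HOME_ADDRESS, FORWARDED_PORT)
        return requests.get(url).json()["celsius"]

    def set_target(celsius):
        """Set a new target temperature via an (assumed) POST endpoint."""
        url = "http://%s:%d/api/target" % (HOME_ADDRESS, FORWARDED_PORT)
        requests.post(url, json={"celsius": celsius})

    if __name__ == "__main__":
        print("Current temperature: %.1fC" % get_temperature())
        set_target(20.0)

The point is that the client talks straight to the home router, which quietly forwards the traffic to the device - no third party involved.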

This method is fine if you are technically strong enough to set it up.

There are some downsides to this.
  • If you have more than one device, you now need to manage multiple port mappings, since the external port can't be shared
  • Manufacturers are new to this and the software isn't always flexible enough to allow user-defined mappings
  • You are poking holes in your firewall - are the devices secure from hackers?
  • The home user needs to be technically competent to do the configuration

The alternative method is for the device to connect to the manufacturer's service and you access the device via the manufacturer. This method also has downsides:
  • Some manufacturers charge for the service. I personally distrust connections to the "mothership".
  • You are locked in to the manufacturer. What if they go bankrupt - do you lose your service?
  • You are reliant on the security of the manufacturer
  • They could be gathering personal data about you 

The other problem with the centralised method is scale. Firstly, the service provider needs to understand how to scale these systems. It would be really annoying not to be able to turn the heating on because they can't cope with the load - OK, maybe that's extreme, but it could happen. Secondly, these systems are proprietary - I end up with a login for my home thermostat, another for my home lighting and maybe another for the dishwasher. It also means it won't be possible for my LG dishwasher to chat with my Sony TV if it ever needed to.
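
For illustration, the "mothership" pattern usually looks something like the sketch below: the device makes an outbound connection to the manufacturer's cloud, reports its status and polls for commands, and the user only ever talks to the vendor's service. The cloud URL, device ID and API are all invented for the example - real vendors each do this their own proprietary way, which is exactly the problem.

    import json
    import time

    import requests  # any HTTP client would do; requests keeps the sketch short

    # Hypothetical values - a real device ships with its manufacturer's cloud
    # endpoint and credentials baked into the firmware.
    CLOUD = "https://cloud.example-manufacturer.com"
    DEVICE_ID = "dishwasher-1234"
    API_KEY = "factory-provisioned-secret"

    def report_status(state):
        """Push the device's state up to the vendor's service (outbound only,
        so no firewall holes are needed in the home router)."""
        requests.post("%s/devices/%s/status" % (CLOUD, DEVICE_ID),
                      headers={"Authorization": "Bearer " + API_KEY},
                      json={"state": state, "ts": time.time()})

    def poll_commands():
        """Ask the vendor's cloud whether the user has requested anything;
        the user never talks to the device directly."""
        resp = requests.get("%s/devices/%s/commands" % (CLOUD, DEVICE_ID),
                            headers={"Authorization": "Bearer " + API_KEY})
        return resp.json()

    while True:
        report_status("idle")
        for command in poll_commands():
            print("Command relayed by the manufacturer: %s" % json.dumps(command))
        time.sleep(60)

Convenient for the user, but every status report and every command goes through - and depends on - the manufacturer.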

Given we have had high-profile events like the Sony PlayStation Network being shut down for days due to hacking, I generally distrust the central control method. It's feasible a hacker could create a denial-of-service attack on the electricity grid by deciding to turn on all the dishwashers in the world at the same time, causing a huge surge in demand and triggering brown-outs.

I'm not sure what the solution is, but I can't help thinking Software Defined Networking (SDN) has a role to play here. SDN is good at address abstraction, and both of the scenarios above are basically address abstraction problems. The challenge is to be able to address the devices in the home in an open, non-proprietary manner.
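
As a toy illustration of what I mean by address abstraction: applications would address a device by a stable, friendly name, and something else would resolve that name to wherever the device actually is. The names and addresses below are made up, and in a real SDN-style solution the lookup would be answered by the network itself rather than a hard-coded table.

    # Toy sketch of address abstraction - all names and addresses are invented.
    DEVICE_REGISTRY = {
        "thermostat.hallway": "192.168.1.50:8080",
        "light.lounge":       "192.168.1.51:8080",
        "dishwasher.kitchen": "192.168.1.52:8080",
    }

    def resolve(device_name):
        """Return the current address for a device, however it is reachable today.
        In an SDN-style solution this lookup would be provided by the network,
        not by a static table maintained by the home user."""
        try:
            return DEVICE_REGISTRY[device_name]
        except KeyError:
            raise LookupError("No device registered under '%s'" % device_name)

    print(resolve("thermostat.hallway"))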

Wednesday 9 April 2014

The rise of DevOps

The software world has for a while embraced the idea that the programmers who wrote the software are the ones best placed to run the systems. This mindset is finding its way into the networking arena. It's always been the case in small enterprise networks that the network designers are also the network support team, but in large, complex networks there has always been a clear distinction (and difference in mentality) between the designers and operations.

The Operations mindset is clearly different to that of the designer. Operations resist change. Change = Risk. Risk = Problems. Operations are invisible and ignored when things are running well - but shouted at when something urgently needs fixing - it's a thankless job.

Designers like change and are likely to make experimental changes on a live network. Change = Risk.

The shift to SDN is an interesting one. Clearly the extreme Operations mindset of Change = Bad is not a great way to build a responsive business; however, there is clear value in the mindset of preserving quality and minimising risk to the services and revenue that the network enables.

So the shift to DevOps certainly presents some conflicts in behaviours.

So why is the shift happening? I think the key to this is the word Software. The Software in Software Defined Network (SDN). The change is one where the value and innovation lies not in dumb networking boxes but in the higher-level applications. It's likely, at least for the next few years, that these applications will be written internally by the business - in other words, software developers will be in control of the network.

The application domain is where there's opportunity for innovation, experimentation and potential new business value.

So does this mean that all these developers playing God with the network will create chaos? Maybe. There's clearly the opportunity for a new class of bug. Today's legacy networking issues may become less common, but a new breed of transient, application-specific networking bugs may emerge.

Now for the good news. Building a test network today which fully replicates the live production network is, for most businesses, either not possible or uneconomic. With the shift to SDN, however, it's possible to cheaply build a virtual replica of the current live network, including its behaviours. This is possible since the controller knows the exact state of the network. In legacy networks, it's a Plan, Build, Operate model. Someone designs the network, someone builds the network and someone operates the network. Often changes are made to the live network, so the original plan bears little resemblance to the live network, and the built network probably doesn't reflect the design either! The build engineer might find that a port designated in the design is already connected, so he uses his intelligence, connects to the next available port and doesn't correct the design documentation.

In an SDN network, the controller knows the actual "as-built" state. Since the controller has an accurate picture, planned changes to the network can be simulated, characterised and fully tested. This enables a new way of working.
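
As a sketch of what this could look like, the controller's topology could be pulled out and rebuilt as a virtual network in an emulator like Mininet, pointed at a test controller, so planned changes can be rehearsed against a faithful copy. The REST endpoint and the JSON shape below are assumptions on my part - substitute whatever topology export your controller actually provides.

    import requests

    from mininet.net import Mininet
    from mininet.node import RemoteController

    # Hypothetical topology export URL - real controllers each expose their own API.
    CONTROLLER_API = "http://sdn-controller.example.com:8080/topology"

    def fetch_topology():
        """Pull the live topology from the controller. Assumed JSON shape:
        {"switches": ["s1", "s2"], "hosts": {"h1": "s1"}, "links": [["s1", "s2"]]}"""
        return requests.get(CONTROLLER_API).json()

    def build_replica(topology):
        """Recreate the live topology as a Mininet network driven by a test controller."""
        net = Mininet(controller=RemoteController)
        net.addController("c0", ip="127.0.0.1", port=6633)

        switches = {name: net.addSwitch(name) for name in topology["switches"]}
        for host_name, switch_name in topology["hosts"].items():
            host = net.addHost(host_name)
            net.addLink(host, switches[switch_name])
        for a, b in topology["links"]:
            net.addLink(switches[a], switches[b])
        return net

    if __name__ == "__main__":
        net = build_replica(fetch_topology())
        net.start()
        net.pingAll()   # sanity-check the replica behaves before rehearsing changes on it
        net.stop()

Mininet needs to be installed and run with root privileges, but the important point is that the replica is derived from the controller's live state rather than from stale design documents.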

Today, making changes to a legacy network is done on a piecemeal basis. Engineers are issued work packs and implement the changes network element by network element. Humans are involved, so there's the opportunity to make mistakes at each stage. In high-risk networks, these changes are scheduled for night working, where the tired engineer might be more likely to make mistakes or may rush them through in his desire to get to bed.

By testing and simulating these changes off-line in an SDN network, they can be shown to be low risk. Implementation can be automated - the changes can be scheduled for implementation at night without humans around to make mistakes, and the tests that confirm the changes are successful can be embedded into the process, so that if there's a problem the network can automatically be rolled back to the known working state.
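
The workflow boils down to something like the sketch below. Every function here is a placeholder for whatever your controller and test tooling actually provide; the point is that the verify-and-rollback logic lives in code rather than in a tired engineer's head.

    # A minimal sketch of the apply / verify / roll-back loop described above.
    # In practice applying and rolling back would be calls to the SDN
    # controller's API, and the tests would be the ones embedded in the change record.

    def apply_change(change):
        print("Applying change: %s" % change["name"])
        # e.g. push the new flow rules / configuration via the controller's API

    def verify_network(tests):
        """Run the acceptance tests bundled with the change; True only if all pass."""
        return all(test() for test in tests)

    def rollback(snapshot):
        print("Verification failed - restoring known-good state %s" % snapshot)
        # e.g. re-apply the configuration captured before the change was made

    def run_change(change, tests, snapshot):
        """Apply the change, run its tests, and roll back automatically on failure.
        Scheduling this for the overnight window is left to whatever job
        scheduler (cron, the controller's own scheduler, etc.) is already in use."""
        apply_change(change)
        if verify_network(tests):
            print("Change applied and verified - nobody had to stay up.")
        else:
            rollback(snapshot)

    if __name__ == "__main__":
        # A hypothetical change with one trivial always-passing test, to show the flow.
        run_change({"name": "migrate VLAN 42"}, tests=[lambda: True],
                   snapshot="config-before-change")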

In the data centre environment, tools like Chef and Puppet have revolutionised server provisioning and automation of operational tasks.  These concepts and tools will find their way into the networking space and change the way of working.  Welcome to network DevOps.