Devices Gone Wild IV: Hacking Critical Infrastructure through the IoT
Critical infrastructure is, of course, infrastructure – communications, power, transportation – that we depend on to support how we live: not just our ‘lifestyle’ but often our life itself.
One may ask why infrastructure of any kind is connected to the notoriously vulnerable Internet at all, but it is (in some places) – for remote monitoring and control, coordination, and effectiveness. A good deal of effort goes into keeping it secure. Canada has a national strategy and action plan on the topic, updated this year to replace the former three-year plan. President Obama, who cared about such matters, issued wide-ranging Executive Orders and Policy Directives about it.
In light of all this attention, how will the bad guys break in? It may be through the little things: the add-ons and the non-critical, and thus non-criticized, infrastructure of the infrastructure operations – in short, the Internet of Things devices that keep the buildings working: the thermostat, the ‘smart’ light switches, whatever.
We have noted before that such devices are often very simple – so simple that they have no defences against intrusion, no means of being secured or upgraded, and no patches for long-known bugs and flaws.
Once something is connected to a system, it becomes a point of vulnerability for the whole system. Just as one can take over a car’s operations and steer it off the road through a tire-pressure valve or a stereo system, or hack into credit card records through a business’s heating and ventilation system, or access high-value gamblers’ records through a casino’s aquarium, so too one can get at the operations of critical national infrastructure through the little devices no one pays attention to.
No one? Well, the FBI is paying attention – and not just to critical infrastructure. The FBI warning here applies to business and personal settings alike, and the linked document focuses on how consumers’ connected devices can compromise home systems as well – including ‘critical’ personal systems like pacemakers and other medical devices.
In short, we all need to be vigilant, and sometimes our systems may be too ‘smart’ for our own good.
Devices Gone Wild V: Algorithms that Cheat
Artificial intelligence (AI) is all the rage these days – competing only against the blockchain as the New Big Thing in technology circles.
People have been working on AI for decades, but only in the past few years has it become credible as a label for anything that actually exists. Two factors combine to explain this progress: higher-speed computing and big data.
Computers can analyse data for many different variables and integrate the results, and with vast stocks of data they can consider far more examples (of anything) than humans ever could.
As a result, machines can ‘learn’ to recognize patterns more accurately than humans; computers have made better medical diagnoses than experienced physicians, for example, because access to big data has given them more ‘experience’.
Moreover, computers do not need to be told the answers they are looking for. They can be ‘turned loose’ on data and asked to find patterns in the data on their own, for a particular purpose. As a result, they can come up with conclusions that surprise the programmers who set them on the task – they can go beyond their ‘mandate’. Machine learning has produced new ‘information’.
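A small illustration of this kind of pattern-finding, as a sketch only (plain Python, with invented toy data, not drawn from any system mentioned in this column): a simple k-means clustering routine is handed a set of points and told only how many groups to look for – never which point belongs where – and discovers the groupings on its own.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group points into k clusters; no labels are supplied in advance."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest current centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                        (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # Move each centre to the mean of the points assigned to it.
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two obvious groupings that the algorithm is never told about.
data = [(1, 1), (1, 2), (2, 1), (2, 2), (8, 8), (8, 9), (9, 8), (9, 9)]
centers, clusters = kmeans(data, 2)
print(sorted(centers))  # → [(1.5, 1.5), (8.5, 8.5)]
```

The programmer supplies a purpose (find two groups) but not the answer; the pattern itself is the machine’s own output.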
It is sometimes thought that such exploits may take the computers beyond the legal responsibility of their owners, or make them autonomous actors in law. For reasons discussed elsewhere, this seems unlikely.
We have looked previously at the risk that such learning is subject to biases inherent in the data used to produce it. The data may be ‘big’, but they may also be incomplete, selective or misleading. So the use of AI algorithms to predict recidivism, or the likelihood of violating bail conditions, to take only two examples, can be problematic.
Recently a coalition of human rights organizations published the “Toronto Declaration” on how machine learning systems should be designed to avoid discriminatory results. The threat is being more widely recognized. One may hope that recognition will lead to avoidance.
The article on proprietary algorithms used in criminal matters linked to an analysis in The Guardian of algorithms that lie, i.e. produce ‘wrong’ answers through negligence or intent.
A more recent article in Wired deals not with lying but with cheating.
It is well known that computers have no common sense. They are completely literal. Great efforts over the years have been invested in making computers understand ‘fuzzy logic’, the language of more or less. But it all must be conveyed in binary code, ones and zeroes, on and off signals. So it is all still a matter of responding to instructions.
If one sets a computer to solve a problem by machine learning, i.e. by analysing large numbers of use cases or scenarios to draw a conclusion, one has to be extremely careful to set all the rules that matter. It is difficult to anticipate all the ways in which expectations might be … not broken but ignored. ‘Teach a learning algorithm to fish, and it might just drain the lake.’
As a researcher quoted in the article points out, the ‘algorithms do what you say, not what you meant’. In other words, they are computers as all computers have always been. They can’t tell a loophole from a rule. People still have to tell them.
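The fishing quip can be made concrete with a toy sketch (the setup – a lake of 100 fish, 10% daily regrowth, a ten-day season – is invented purely for illustration). A crude ‘learner’ tries every daily harvest rate and keeps whichever one maximizes the literal objective it was given: total fish caught.

```python
def total_catch(fraction, stock=100.0, days=10, regrowth=1.10):
    """Literal objective: total fish caught over the season.

    Each day the policy catches `fraction` of the current stock,
    and whatever remains regrows by 10% overnight.
    """
    caught = 0.0
    for _ in range(days):
        caught += fraction * stock
        stock = (stock - fraction * stock) * regrowth
    return caught, stock

# A crude 'learner': try every harvest fraction, keep the best score.
fractions = [i / 100 for i in range(101)]
best = max(fractions, key=lambda f: total_catch(f)[0])
score, stock_left = total_catch(best)
print(best, round(score, 1), round(stock_left, 1))  # → 0.25 122.0 14.6
```

Nothing in the objective says to leave fish for next season, so the top-scoring policy strips the lake to under 15% of its stock. That is a perfectly literal answer to the question asked – just not to the question meant.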
AI has great promise to solve a lot of problems, if it is carefully watched – for discrimination, for ‘lies’, and for ‘cheating’. But it is not going to take over the world unless we tell it to. On its own, it is too stupid.