Is that software running correctly?
Dystopias seem to be in fashion. Consider the demise of humanity through the rise of artificial intelligence – the fancily named ‘singularity’, the point at which machines become our equals, beyond which we will play second fiddle to them.
Fear of machines is hardly a new phenomenon. Back in the 1950s, Isaac Asimov wrote thousands of pages of science fiction exploring how humans would relate to robots superior to us puny humans but regulated by his Three Laws of Robotics, intended to safeguard us. The fear can be traced further back to Mary Shelley’s Frankenstein, written while the real-life Luddites were destroying weaving machinery for fear of losing their jobs.
With machines outperforming the best humans in many activities – playing chess, finding patterns to help you invest your money better, and so on – perhaps it is understandable that this fear has reached a new high. It is straightforward to extrapolate a dystopian future in which machines make us obsolete.
But this is fear of what is yet to come. The dangers of today’s machines, by contrast, have rarely gripped the public imagination. When we error-prone humans design them wrongly, the result is machines that do something other than what was intended. History is rife with such stories. There are financial disasters: engineers at Intel optimised the way their processors’ arithmetic units performed division, only to discover too late that their method gave an answer faster but, alas, sometimes wrong, resulting in losses of almost half a billion US dollars. And there are human tragedies: the wrongly programmed Therac-25 machines, used in hospitals to treat cancer patients, inadvertently delivered radiation overdoses, killing a number of patients.
With computers increasingly interacting with our lives, one would think that software engineers must have found ways of guaranteeing the correctness of these systems. And yet, the state of the art is still analogous to a civil engineer testing the design of a bridge by building it and randomly heaping cars on it: if it does not collapse, it is declared safe.
Luckily, civil engineers design bridges more meticulously, using mathematical and physical models to show that no configuration of cars can damage the bridge. Alas, practically no commercial software is built to such high specifications.
This is where computer science comes in. For more than half a century, computer scientists have been developing techniques to model software mathematically in order to prove that it can do nothing but what it was originally intended to do. However, even a small computer program’s complexity exceeds that of a bridge by many orders of magnitude.
And this is where the challenge lies – scaling up the techniques developed to work for large, real-life systems.
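To make the contrast between testing and verification concrete, here is a toy sketch in Python (not from the article – the function and property are invented purely for illustration). Testing samples a handful of inputs and hopes for the best; verification, shown here in miniature, establishes the intended property over an entire finite input space by exhaustive enumeration:

```python
# Illustrative sketch only: a made-up function and property.

def clamp(x, lo, hi):
    """Force x into the range [lo, hi] (assumes lo <= hi)."""
    return max(lo, min(x, hi))

# Testing: try a few inputs -- like heaping some cars on the bridge.
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(99, 0, 10) == 10

# Verification in miniature: check the intended property
# ("the result always lies within [lo, hi]") over every input
# in a finite space -- a proof by exhaustive enumeration.
for x in range(-100, 101):
    for lo in range(-10, 11):
        for hi in range(lo, 11):  # only ranges with lo <= hi
            assert lo <= clamp(x, lo, hi) <= hi

print("property holds for every input checked")
```

Real verification tools go further, using mathematical reasoning to cover infinite input spaces that no amount of enumeration could exhaust – and scaling that reasoning up is precisely the challenge.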
That way, if one day we try to embody Asimov’s Laws of Robotics into the machines we build to safeguard humanity, we can be certain that those laws are all the machine will ever obey.
Gordon Pace is a professor with the Department of Computer Science at the University of Malta.
Did you know?
• The first actual computer ‘bug’ was a dead moth found stuck in the Harvard Mark II computer in 1947.
• There is a Scandinavian company that is working on a way to translate what a dog is thinking using EEG-sensors and microcomputers.
• A computer as powerful as the human brain would be able to perform about 38 thousand trillion operations per second and hold about 3,584 terabytes of memory.
• In September 1956 IBM launched the 305 RAMAC, the first computer with a hard disk drive. The drive weighed over a ton and stored 5 MB of data.
• Your smartphone today is far more powerful than all the computers aboard the Apollo 11 mission that put two men on the moon and returned them safely home to Earth.
For more trivia see: www.um.edu.mt/think
• The UK government has launched a £20 million digital skills initiative designed to encourage greater interest in cybersecurity among teenagers. Known as the Cyber Discovery Programme, the scheme involves both online and offline challenges for those aged between 15 and 18, including tasks that pit participants against fictional hackers.
• Until now, assessing the extent and impact of network or computer system attacks has been largely a time-consuming manual process. A new software system being developed by cybersecurity researchers will largely automate that process, allowing investigators to quickly and accurately pinpoint how intruders entered the network, what data they took and which computer systems were compromised.