Computer security is a contradiction in terms. Consider the past year alone: Cyberthieves stole $81 million from the central bank of Bangladesh. The $4.8 billion takeover of Yahoo by Verizon was nearly derailed by two enormous data breaches. Russian hackers interfered in the American presidential election.
Away from the headlines, a black market in computerized extortion, hacking-for-hire and stolen digital goods is booming. The problem is about to get worse.
Computers increasingly deal not only with abstract data such as credit-card details and databases but also with the real world of physical objects and vulnerable human bodies. A modern car is a computer on wheels; an airplane is a computer with wings. The arrival of the “internet of things” will see computers baked into everything from road signs and MRI scanners to prosthetics and insulin pumps.
There is little evidence that these gadgets will be any more trustworthy than their desktop counterparts. Hackers already have proven that they can take remote control of connected cars and pacemakers.
It is tempting to believe that the security problem can be solved with yet more technical wizardry and a call for heightened vigilance. It certainly is true that many companies still fail to take security seriously enough. Doing so requires a kind of cultivated paranoia that does not come naturally to non-tech companies. Organizations of all stripes should embrace initiatives such as “bug bounty” programs, whereby companies reward ethical hackers for discovering flaws so that they can be fixed before they are abused.
There is no way to make computers completely safe, however. Software is hugely complex: across its products, Google must manage around 2 billion lines of source code. Errors are inevitable; the average program has 14 separate vulnerabilities, each of them a potential point of illicit entry. Such weaknesses are compounded by the history of the internet, in which security was an afterthought.
This is not a counsel of despair. The risk from fraud, car accidents and the weather can never be eliminated completely either. However, societies have developed ways of managing such risk — from government regulation to the use of legal liability and insurance to create incentives for safer behavior.
Start with regulation. Governments’ first priority is to refrain from making the situation worse. Terrorist attacks, such as the recent ones in St. Petersburg and London, often spark calls for encryption to be weakened so that the security services can better monitor what individuals are up to. It is impossible to weaken encryption for terrorists alone, however. The same protection that guards messaging programs such as WhatsApp also guards bank transactions and online identities. Computer security is best served by encryption that is strong for everyone.
The next priority is to set basic product regulations. A lack of expertise will always hamper computer users’ ability to protect themselves. Governments therefore should promote “public health” for computing. They could insist that internet-connected gizmos be updated with fixes when flaws are found. They could require that default usernames and passwords be changed before a device is used. Reporting laws, already in force in some American states, can oblige companies to disclose when they or their products are hacked. That encourages them to fix a problem instead of burying it.
Setting minimum standards gets you only so far, though. Users’ failure to protect themselves is only one instance of the general problem with computer security — that the incentives to take it seriously are too weak. Often the harm from a hack falls not on the owner of the compromised device but on others. Think of botnets — networks of computers, from desktops to routers to “smart” light bulbs, that are infected with malware and made to attack other targets.