Breaking & entering

(Photo: Getty Images)

The National Health Service’s recent WannaCry ransomware ordeal is the latest in a series of cyber incidents that have habitually come to be described as a “wake-up call”.

One might believe that a cyber wake-up call should result in behaviours that improve security. So does this happen, and what are the prospects for countering future malware?

Despite persistent advice to users, much malware is still contracted via “explicit user action”: clicking on a web link that causes a browser to download malicious code and run it with the user’s privileges, for example, or inserting a compromised USB stick into a system. Take-up of security behaviour advice is patchy, and effectively communicating good cyber-hygiene practices across diverse user communities remains a major challenge.


Many contracted infections are plainly avoidable. Anti-malware software, for example, does a good job but isn’t always deployed.

The WannaCry incident revealed how obsolete, unsupported operating systems such as Windows XP continue to be run in major organisations, even though Microsoft had already fixed the flaw in its more modern, supported operating systems. Application software is also persistently problematic, with the most common application vulnerabilities seemingly implemented time after time.

Injection attack vulnerabilities, for example, regularly head, or at least appear in, OWASP’s Top 10 list. Here inputs are carefully crafted to cause security breaches. For example, a hacker claiming to be john and supplying the decidedly odd password string wrong_password OR '1'='1' to an authentication system may cause that system to form a database query along the lines of STORED_PASSWORD[john] = wrong_password OR '1'='1'?

From a security point of view this should be interpreted as “Is the input string wrong_password OR '1'='1' equal to the stored password for user john?”

However, a typical database may interpret it as “Is the stored password equal to wrong_password, OR is 1 equal to 1?”

Since 1 is equal to 1, the query evaluates to TRUE and the hacker is logged in. There are many such database injection attacks (the above being among the simplest), with a range of corresponding countermeasures. Input validation, checking inputs for malicious crafting, is a widely recommended countermeasure against many of them.
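The attack, and the standard parameterised-query defence, can be sketched in a few lines. The following is a minimal illustration in Python using the built-in sqlite3 module; the table, user name and passwords are invented for the example, and the crafted string includes the quote characters needed to break out of the SQL literal:

```python
import sqlite3

# Hypothetical single-user credentials table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('john', 'secret')")

def login_unsafe(name, password):
    # VULNERABLE: user input is pasted straight into the SQL text.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # SAFER: parameterised query; input is treated as data, never as SQL.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

crafted = "wrong_password' OR '1'='1"
print(login_unsafe("john", crafted))  # True: the OR '1'='1' clause always holds
print(login_safe("john", crafted))    # False: the crafted string is just a wrong password
```

In the unsafe version the query text becomes `... AND password = 'wrong_password' OR '1'='1'`, and since AND binds tighter than OR the final clause makes the whole condition true for every row.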

Nevertheless, developers continue to build systems with plainly avoidable database injection vulnerabilities. Similarly, careful choice of the parameters supplied to a software function or procedure can violate memory constraints on the system and cause specific areas of memory to be overwritten with supplied data containing arbitrary malicious code.

With skill, this code can be located appropriately to ensure subsequent execution. Such “buffer overflows” are well-known and there are well-understood means of countering them.

Yet they occur as major vulnerabilities year on year. Educating a wider application developer base remains a critical task. We seem to have a patchy record on countering known technical problems.

The above gives just an indication. So what of the future? Some modern viruses and worms may radically change the way they and their progeny “look”, reformulating themselves and using encryption to evade detection of their characteristic structure. Such polymorphic malware use this shapeshifting to evade detectors, which typically look for characteristic patterns.

However, the behaviour of instances of such malware will often be constant, or very similar. This points to a shift towards detectors that monitor behaviour rather than form. Constant behaviour is not essential, though, and malware can also vary its behaviour to evade detection.
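The shapeshifting idea can be illustrated with a toy sketch (Python; the payload string and one-byte XOR scheme are invented for the example, and real polymorphic engines are far more sophisticated):

```python
import hashlib

# Toy stand-in for a malicious payload; just a marker string here.
PAYLOAD = b"do something bad"

def polymorphic_copy(payload, key):
    # Re-encode the payload under a one-byte XOR "encryption" key,
    # prepending the key so the copy can decode itself before running.
    assert 1 <= key <= 255  # key 0 would leave the payload unchanged
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(copy):
    key = copy[0]
    return bytes(b ^ key for b in copy[1:])

a = polymorphic_copy(PAYLOAD, 0x5A)  # two "generations" under different keys
b = polymorphic_copy(PAYLOAD, 0xC3)

# A signature scanner comparing byte patterns sees two unrelated files...
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False
# ...but the behaviour, i.e. the decoded payload, is identical.
print(decode(a) == decode(b) == PAYLOAD)  # True
```

Every copy carries the same behaviour yet presents a different byte pattern, which is why pattern-matching detectors struggle and behavioural monitoring becomes attractive.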

Some malware may remain difficult to detect. Detecting malware that uses a “covert channel” (one which uses, say, the very existence or non-existence of a file to signal one bit of information) may be a rather difficult affair, particularly when the required throughput is extraordinarily low: leaking a 256-bit crypto key at a rate of one bit per hour would take about 11 days, but that key may have vast significance.

Such low bandwidth will likely evade detection.
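The arithmetic behind that figure is easy to check (a quick Python sketch):

```python
# A covert channel signalling one bit per hour, leaking a 256-bit key.
key_bits = 256
bits_per_hour = 1

hours = key_bits / bits_per_hour  # 256 hours of patient signalling
days = hours / 24
print(round(days, 1))  # 10.7, i.e. roughly the 11 days quoted
```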

Detecting previously unseen malice and malware will remain a major technical goal. Provided new malware itself has characteristics similar to previously identified malware, behaves in a similar way, or has similar effects on the system, then technology for detection of new malware and attacks has some hope.

But a new attack may exhibit none of these. The deep integration of computation into the fabric of our increasingly smart society presents further challenges.

The Internet of Things, with its major interconnectivity of diverse components, will create a vastly increased threat surface. Furthermore, we are increasingly seeing systems whose safety depends on their security. Imagine a moving car or an in-use operating theatre system discovering it has been compromised by malware. What is it to do? Dealing with discovered compromise here may require considerable technical subtlety.

Finally, we have already glimpsed the probably inevitable rise of the resource-rich malware developer. The Stuxnet worm that targeted Iranian uranium enrichment centrifuges around 2010 used four previously unseen vulnerabilities, a substantial resource indeed. Some have estimated the development cost of Stuxnet at several million dollars, with state involvement suspected by some.

The recent WannaCry ransomware attack that hit the NHS exploited a vulnerability (EternalBlue) originally discovered by the US National Security Agency.

It is clear that various nation states are capable of deploying considerable resources to develop malware. This perhaps will have the greatest effect of all.

The writer is professor of computer and information security in the Department of Computer Science, University of Sheffield, UK
