I’m sceptical about the claims that Russian state intelligence was behind the hacks of the DNC servers. Maybe they were, but the process of attribution for attacks is a complicated one, and this association was claimed at the very outset.

For background, I investigate attacks on servers most weeks. Most of the time, you can’t say with any certainty who was behind them just from the attacks themselves – you might have, as a starting point, the concern that a particular party is having a go at you or your client, but that’s different, of course. You start out tracing them to machines that were themselves compromised, and rely on some cooperation from their administrators. The IP addresses or social media accounts used for the attacks are cut-outs, in the classic tradecraft sense. This is true even when the people behind them are probably just kids, or ordinary commercial competitors. There are tens of millions of compromised computers in the world, any of which can be used to front an attack.

So when you read that an attack has been attributed to Russian hackers, this does not often mean there’s been any sort of trace through the internet.

Instead, there will have been some analysis of the toolkits or techniques used. This is the identification technique used by the investigators of the DNC hacks. But toolkits get shared, sold and copied. This is true even of toolkits and malicious code first used by intelligence agencies. I don’t think there’s much doubt that national agencies were the origin of the Stuxnet trojan that affected centrifuges in Iran. It first appeared, in an early form, in 2009 (although there are claims of earlier forms four years beforehand), and the final form contained a timestamp from February 2010. Having been discovered in June 2010, it was reportedly being traded commercially on the black market by November 2010.

So a toolkit used in an attack that was likely the work of a state agency can, and will, turn up in other hands within months of being identified.

I’m not the only one who isn’t sure this was the Russian government. Fidelis Cybersecurity has become involved in the DNC response, and this is how they blogged it:

Over a 12-month period, the DNC was victim to not just one, but two intrusions from a nation-state actor, Russia.

[…]

Finally, if Russia is to blame, this breach marks the first time that a nation-state has used cyber espionage to influence a United States election.

The first claim is what’s being reported; the ‘if’ isn’t. There’s a worrying degree of certainty being displayed in many reports at a stage in the investigation so early that it can’t yet be possible to say who was responsible. But confirmation bias is a powerful thing.

CrowdStrike did say they thought phishing, and spearphishing in particular, played a part in these attacks. That amounts to saying that people were induced by deceptive emails, websites or other techniques to install malware themselves, unknowingly, on the DNC systems. That suggests they don’t think it was a remote exploit – some vulnerability in the internet-facing part of the systems that attackers could use to get in.

If malicious software could have been installed unknowingly, it could also have been installed knowingly. Rather as in a murder investigation, an actual penetration of a system casts suspicion on those closest to it – if you’re being an objective investigator.

I’ve been involved in electronic security since the late 1980s. Then, it was finding, and planting, listening devices and using other techniques to gather information. The most notorious thing I did was tap Darius Guppy’s telephone, and record the conversation he had with Boris Johnson about beating up a journalist, but most of the work I did was finding rather than planting. When you find an intrusion of some kind – and even then it could be external to the location being monitored – you need to consider who was behind it. You also need to consider whether it’s actually best to leave things in place, so that the intrusion that’s happening is a known quantity, rather than blowing it and leaving the road open to further, unknown ones.

When you try to figure out who was behind an intrusion, the first thing to think about is, who has a motive? Who benefits? And the first thing you need to think about when an attack is publicised, is why? Why not just watch it and gather intelligence?

So the cui bono question is worth considering here. Who benefited from these attacks, or who might have been the intended beneficiary? The main take-home was that the DNC favoured Clinton’s candidacy over that of Sanders. The releases of files came just before the Democratic Party’s convention. If you were a Sanders last-ditcher, that’s exactly what, and when, you’d have wanted.

Who benefits from the claim it was Russia behind the attacks? Clinton does. Her main line of attack has shifted from Trump’s alleged racism, which isn’t such a strong line in the wake of the BLM movement stopping ambulances and inspiring the murders of police officers, to Putin wanting Trump to win. She is repeating the claim that Russia was behind this, when with the best will in the world the most that could be said is that some of the software used is similar to that used in what was thought to have been a Russian assault on some German systems a few years ago.

Maybe Putin does want Trump to win, and maybe he was behind these leaks of data. But Putin hasn’t done badly under the Obama administration Clinton served in. Russia has become the most credible external power in the Middle East and has invaded two Eastern European countries. More of the same would suit Putin. The only real problem he has is that fracking in the USA has depressed the price of gas, which Russia relies on. Clinton has given out mixed messages on fracking, but she did say, in a debate with Sanders, that:

“By the time we get through all of my conditions, I do not think there will be many places in America where fracking will continue to take place.”

Trump has benefited from the hacking in one way. He’s trying to get disaffected Sanders voters to switch to him, and the idea that their candidate was stitched up by the DNC would help with that – even though it’s a stretch from what has actually been revealed, which was a preference rather than a manipulation of the process.

So it’s complicated, the more so because the earliest DNC penetration was dated to last summer – when, depending on what ‘summer’ means, Trump was polling as low as single figures – which makes it hard to believe an attack was started with the intended effect of helping him in his campaign.

It might be the case that Russia was behind this. It’s most likely, given the facts we know so far, that any definite attribution will be hard to make. But it is certainly true that if at the moment you think this is a Russian cyber attack designed to help Trump beat Clinton, you’re believing what you want to believe.

Out of interest, though, one of the techniques pioneered by one of the groups fingered for this is very cool. It uses Twitter accounts and steganography – which today is mainly the embedding of encrypted data in image files, but which was first described in 1499.
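
For the curious, here’s roughly what the image-file half of that looks like – a minimal sketch of least-significant-bit embedding in Python, assuming the Pillow imaging library. It illustrates the general principle only, not any particular group’s toolkit, and in practice the payload would be encrypted before embedding, as described above:

    # A minimal sketch of least-significant-bit (LSB) steganography, assuming
    # the Pillow imaging library is installed. It hides a payload in the lowest
    # bit of each colour channel of an image.
    from PIL import Image

    def embed(image_path, payload: bytes, out_path):
        img = Image.open(image_path).convert("RGB")
        pixels = list(img.getdata())
        # Prefix the payload with its length so a decoder knows when to stop.
        data = len(payload).to_bytes(4, "big") + payload
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        if len(bits) > len(pixels) * 3:
            raise ValueError("image too small for payload")
        bit_iter = iter(bits)
        new_pixels = []
        for r, g, b in pixels:
            channels = []
            for value in (r, g, b):
                bit = next(bit_iter, None)
                channels.append(value if bit is None else (value & ~1) | bit)
            new_pixels.append(tuple(channels))
        img.putdata(new_pixels)
        img.save(out_path, "PNG")   # a lossless format, so the hidden bits survive

A matching decoder just reads the low bits back out in the same order; to the eye, the carrier image looks unchanged.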


Tim has a piece up at Forbes talking about the relative costs of local storage and the cloud. I think it, and the comments that follow it at the time of writing, miss an important point about the cloud.

We’ve had remote servers in data centres for decades. We’ve had at least some integration between different types of client platforms for decades, though Microsoft has done its best to inhibit this interoperability. Neither of these things is cloud computing.

If it means anything, a computer ‘cloud’ is a network with more than one physical computer and more than one storage device, an integrated control system and a high degree of virtualisation and redundancy. You can string together pieces of hardware so they look and behave like a single logical system; you can operate multiple virtual machines on one hardware system (Amazon’s cloud uses Xen, for example). If you’re really feeling good you can combine these two approaches. And you can often do these things using a nice control interface. The physical reality of the hardware and the logical structure of the system have been separated. Adding new hardware adds to the pool from which the virtual, logical units are constructed.
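
As a small illustration of the virtualisation half of that, here’s a sketch using the libvirt Python bindings – assuming a host running a libvirt-managed hypervisor such as KVM/QEMU (Xen is driven the same way) – listing the virtual machines that share one physical box:

    # Assumes the libvirt Python bindings and a libvirt-managed hypervisor.
    import libvirt

    conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
    for dom in conn.listAllDomains():          # every VM defined on this host
        state, _reason = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
    conn.close()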

This means that cloud computing can’t be directly compared with a (more primitive) local computer and its hard drives. Cloud computing is intrinsically more robust and more expensive. It’s also far more flexible because you can add new nodes (computing units, storage units) or remove them as demand fluctuates. Many cloud services charge by the hour to reflect this flexibility.
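
To make the elasticity point concrete, here’s a hedged sketch using boto3, the AWS SDK for Python: add a node when demand rises, hand it back when demand falls, and pay for the hours in between. The region, AMI ID and instance type below are placeholders, not recommendations:

    # A sketch of adding and removing capacity on demand, assuming boto3 and
    # AWS credentials are configured. The image ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def add_node(ami_id="ami-12345678", instance_type="t2.micro"):
        # Provision one more virtual machine from the provider's hardware pool.
        resp = ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                                 MinCount=1, MaxCount=1)
        return resp["Instances"][0]["InstanceId"]

    def remove_node(instance_id):
        # Hand the capacity back when demand falls; billing stops with it.
        ec2.terminate_instances(InstanceIds=[instance_id])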

These techniques of high redundancy and virtualisation have been around for years. I was hosting on a network of FreeBSD servers using jails for virtualisation for years before any marketing executive dreamed up the label ‘cloud computing’. Like ‘data mining’ before it, this is more a marketing term than a technical one; virtualisation and redundancy have long been found in well-designed systems. As marketing terms tend to, ‘cloud’ has now stretched to the point where some smaller IT businesses offer their own ‘cloud’ services that are actually based on single servers and are not cloud computing at all. They are simply services housed in a data centre rather than onsite.

Tim’s point is that local storage has been getting cheaper at a faster rate than bandwidth. But then, these storage savings are also available to cloud providers. ‘Local’ storage can be made accessible from any connected device – all you need is a static IP address or a dynamic DNS service and you can host your files from your bedroom. But they’re not offsite, which matters when you’re burgled or your bedroom catches fire. Expanding the system to meet a temporary upsurge in demand means buying new hardware and being stuck with it when demand falls back again.
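
The dynamic DNS option amounts to telling a DNS provider your current home IP address whenever it changes. A minimal sketch, assuming the requests library and a provider that speaks the widely supported dyndns2 update protocol – the update URL, hostname and credentials are all placeholders:

    # Assumes a dynamic DNS provider that accepts dyndns2-style updates;
    # the URL, hostname, username and password here are placeholders.
    import requests

    def update_dns(hostname, username, password, current_ip,
                   update_url="https://dyndns.example.com/nic/update"):
        # An authenticated GET telling the provider the new address.
        resp = requests.get(
            update_url,
            params={"hostname": hostname, "myip": current_ip},
            auth=(username, password),
            timeout=10,
        )
        # The provider replies with a status word such as "good" or "nochg".
        return resp.text.strip()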

Google can offer very cheap access to highly redundant cloud-based services like GMail because they monetise in ways other than direct charging (though GMail is also available as a chargeable service). But if you want to run your own cloud-based system, hiring the components from a cloud provider, it will be more expensive than operating a local workstation. This says nothing about the future direction of computing. A proper cloud system simply isn’t comparable with a single computer.

For what it’s worth, my view is that the separation between physical hardware and logical systems will continue to increase.


For Android only, so far. But this is a good initiative:

To mitigate the risks of misappropriation of the user’s data by today’s Android applications, the researchers of the study have developed a system, called AppFence, that implements two privacy controls that (1) covertly substitute shadow data in place of data that the user wants to keep private and (2) block network transmissions that contain data the user made available to the application for on-device use only.
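
The two controls are easier to see in code than in prose. Here’s a rough, platform-agnostic sketch of the same ideas in Python – shadow data returned in place of private fields, and a refusal to transmit anything marked as on-device only. The field names and values are hypothetical, and AppFence itself does this inside Android rather than anything like this:

    # A conceptual sketch only; field names and shadow values are hypothetical.
    PRIVATE_FIELDS = {"imei", "contacts"}     # data the user wants kept private
    ON_DEVICE_ONLY = {"location"}             # usable locally, never transmitted
    SHADOW_VALUES = {"imei": "000000000000000", "contacts": []}   # fake stand-ins

    def read_field(store, field):
        # Control 1: covertly substitute plausible shadow data for private fields.
        if field in PRIVATE_FIELDS:
            return SHADOW_VALUES.get(field)
        return store[field]

    def send_to_network(payload: dict):
        # Control 2: block transmissions that contain on-device-only data.
        if ON_DEVICE_ONLY & payload.keys():
            raise PermissionError("blocked: payload contains on-device-only data")
        # ...otherwise hand the payload to the real network layer...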


Your next computer but one will look a bit like this (I need to widen this template…):

That’s a smartphone in a dock.

Combine that idea with this research:

Apple share falls less quickly as Google operating system [Android] takes over – but Windows Phone has barely sold half of the 2m handsets shipped, say new figures

And it becomes less insane than you’d think to suggest that Microsoft is in the process of experiencing the fastest and deepest collapse of market share in history, wars and catastrophes aside.

See also esr’s analysis.


Worth noting that Google was founded using not one, but two successive code bases that were very poor – first in Java(!), then in Python (which, together with a Django-like application development framework, remains the language of the Google App Engine platform).

Moral: implementing an idea with quick and dirty code is fine – if the idea is good, you can polish or rewrite later; if it isn’t, you’ve saved time.


Excellent essay by Bruce Schneier:

In the next 10 years, the traditional definition of IT security—­that it protects you from hackers, criminals, and other bad guys—­will undergo a radical shift. Instead of protecting you from the bad guys, it will increasingly protect businesses and their business models from you.


In the field of patent law:

Intellectual Ventures, which is based in a Seattle suburb and claims 30,000 patents and patent applications, is believed to have the largest portfolio among firms that don’t make or sell products. It claims to have earned nearly $2 billion from licensing its patents.

[…]

The threat posed by Intellectual Ventures helped prompt the rise of firms like RPX Corp. It is paid by companies to buy up potentially threatening patents; the companies receive licenses to those patents, and RPX pledges never to sue over them.
