A manifesto for cybersecurity
The recent ransomware attacks have focused a lot of minds on cyber security, but many of the solutions being proposed are little more than sticking plasters over the larger underlying issue - namely that systems are not secure by default. The trend in software has been to launch it, then fix it. This is a very attractive proposition for business, as it lets companies discover which ideas work and which don’t, and then iteratively improve them. Most of the gadgets we use in our lives today would not exist without this mentality. However, the dark side of this approach is that almost no software is secure. The evidence is that pretty much every system deployed has security flaws; the only question is who finds them first - bad people or good people.
This situation is not going to be viable in the long term. Technology is becoming a larger and larger part of our lives, and you cannot have an ecosystem of software collapsing every few months or years because someone has found a weakness. In 10-15 years all our transport, logistics, energy and entertainment will be dependent on such systems - an attack could literally kill millions and send us back to the dark ages. The UK emergency committee (amusingly named Cobra) has a saying: we’re nine meals from anarchy. A few years ago in the UK we had a ‘petrol’ strike, where the tanker drivers refused to deliver fuel; it very nearly broke the entire food supply chain in a few short days.
In IT this is solvable - the IT industry needs to stand up and take some responsibility for the mess we’ve created. Our obsession with innovation and speed now comes at too high a price, and if something doesn’t change, it will kill people. The sticking-plaster approaches to security get more complex and less effective over time, and this will all end in tears if action is not taken. It may take government regulation to force it - hopefully not, as regulations are blunt tools - but action must be taken. Therefore I’d like to propose a manifesto (it being election season in the UK) for us to take on board as an industry…
We will not ship software that does not have:
- Updatability - All software must be securely updatable. We know we make mistakes, and we know we have to be able to fix them.
- Integrity - All software should have integrity built in; if it has been tampered with, it should not just blindly charge ahead doing what the attacker wants.
- Security outcomes, not security features - A tick list of security features is not security. Focus on the outcome of a secure system, not on whether you’ve got all the ‘usual’ check boxes. This usually means you’ve done a threat analysis and have a plan for at least the known threats.
- Logging/Telemetry - If you’re being attacked, tell someone. If no-one knows, no-one can respond.
- A diverse ecosystem - If we all use the same tools, libraries and suppliers, one attacker can take us all out. We need diversity.
Software written to comply with this manifesto would still be vulnerable to attacks - that is never going to change - but the outcome of such an attack would be different. Let’s analyse the WannaCry ransomware attack and what could have changed…
WannaCry started by infecting a small number of PCs and then spreading.
- The first PCs it infected should have noticed that something had changed (integrity)
- The PCs should have reported it - possibly to their users, local admins or Microsoft (logging/telemetry)
- That would have allowed Microsoft to notice, warn people more widely and issue patches (updatability)
- Users would have been able to take action (patch or turn computer off)
- The computer itself could take action - preventing its integrity being further compromised
- If these organisations had a diverse range of technology (say some Macs or Linux), they would not have been totally stuck when their Windows PCs got infected. This is not to say Macs/Linux are perfect for security, but the diversity itself is a benefit.
- Many of these computers had anti-virus installed, but it didn’t work (outcomes not features). Anti-virus, although useful in some cases, is far from a panacea and gives a false sense of security. If the people running these computers had focused on the outcome, as opposed to just ticking boxes, they might have solved this another way (e.g. by disconnecting them from the internet)
The end result: in a compliant ecosystem such an attack would be a minor annoyance, not a disaster. It’s simply a matter of building in some resilience.
If we take another (unfortunately common) example - a website with a security blunder that allows attackers to gain access to a server. Often these attacks result in data loss, or worse, yet with these axioms there are many places where they can be stopped.
- The web application process suffers a code injection and runs the attacker’s code - it should notice (integrity) and report it (logging/telemetry)
- The first server gets infected and the attacker starts adding tools to gain remote access - it should notice (integrity) and report it (logging/telemetry)
- The system admin should notice, take action and update the app, or disable the server to prevent spreading. If the application estate is built with diversity, the same attack should not easily execute on other servers, preventing rapid spread.
- The credentials the hacked server uses to access other servers (databases etc.) should be locked to its application code (integrity) - preventing the attacker from pivoting onwards
- The network security should not just rely on the server being trusted and should insist on applications authenticating themselves (integrity)
- The system admins should have reviewed the security model, made sure that hackers can’t pivot (outcomes not features) and checked that all the expected attack trees are protected
We are not going to be able to remove security as a problem from IT systems; they are now sufficiently complex that we can’t fix everything. However, we can design a diverse ecosystem of software with integrity built in, that logs issues, allows updates and is designed to handle the threats it’s likely to encounter. There is a case that some standards are needed. Poorly designed software is analogous to the overuse of antibiotics - everyone who does it has little motivation to stop, but as a society we are all hurt by this behaviour, and over time it’s going to get really serious for us all.
WannaCrypt - was it good for the security industry?
This weekend we saw ‘the biggest cyber attack ever’, and a few people (who don’t work in IT) have asked me: will it be good for you? (I work for Irdeto, a digital platform security company.) It’s an interesting question to consider - these big attacks make a lot of noise, so you’d expect the business of cyber security to get easier come Monday morning. However, I think the reality is a bit more nuanced.
The first big impact of the attack is that everyone is talking about cyber security. This must be a good thing for the security industry; far too often, I find, people don’t take cyber security seriously. This is partly a product of how humans think - we are very bad at estimating and reacting to risks, and cyber security is one of those things that seems so big and scary it’s easier to ignore. It would be great if this morning the world were systematically analysing its cyber security risk, but I think what will actually happen is that sticking plasters will be applied, it will be noted down by many as one of those things that happens, and life will continue as it always did. We will probably find some people moving to address cyber security, and I think over time governments and regulators will start treating the overall IT security of their countries as something they have to worry about, but that will take several years to have any significant impact.
The next impact is that patching has now gone to the top of everyone’s agenda. This is a good thing, and we’ll see lots of IT teams running around patching everything in sight. However, in many cases this will last just a few weeks, until the fuss dies down and old (bad) habits return. Microsoft’s decision to issue a patch for Windows XP is an interesting example. On one hand Microsoft have been excellent corporate citizens, ‘vaccinating’ a chunk of the vulnerable PCs; on the other, the people owning those PCs may now have a false sense of security. Microsoft have not patched Windows XP against all its known vulnerabilities; they have only fixed the immediate one this worm was exploiting. I’ve seen a number of articles criticising Microsoft for their policy of discontinuing free support for Windows XP - however, there is one big point in their favour: they have patched all of them. The patch is called Windows 10, and for most of last year they gave it away free to anyone who wanted it (or not, in some cases).
Patching being at the top of everyone’s agenda is not necessarily good news for the IT security industry. Patching is part of a defense in depth strategy, but it is far from all of it. The danger of patching being at the top is that people consider the job done once they’ve patched, and then they’ll stop thinking about security until the next attack. As with this attack, I’d argue that people behaving that way are being negligent, but I can also see it’s a normal human reaction.
The final impact is that people may actually get more lax about IT security. It seems no-one is being ‘fired’ over this attack. The UK Government have been stressing this is a [global attack](http://www.bbc.co.uk/news/health-39906019), the implication being it’s not their fault. We need some really hard questions to be asked of all the IT managers who signed off running unpatchable/unpatched systems, and of the business managers who squeezed budgets to make that their only option. A cynical person would probably learn from this attack that cyber attacks are:
- Too hard to fix
- And if one happens I won’t get blamed
- Therefore I won’t try and stop them
I don’t agree with the above sentiment, as these attacks are very preventable at a reasonable cost, but I also know people are unlikely to stop their day-to-day activity to fix these issues unless there are real and visible consequences to not taking action.
WannaCryptor 2.0 ransomware and negligence
Yesterday the news rapidly filled up with reports of a ‘massive cyberattack’. As I’m in the UK, the press coverage focused on the NHS and initially was full of comments about ‘smart’ hackers. This reporting is, in my opinion, giving these organisations an excuse for their negligence. The reporting often implies the attack was some kind of ‘act of god’ that could not be avoided; in this case it was trivial to avoid - simply don’t connect out-of-date systems to the internet.
I did write an article recently pointing out the role of luck in cyber security, but there is also plenty of room for negligence. If we analyse this attack we find:
- The malware is ransomware - a family that is regularly updated to exploit newly known security flaws, mainly in common software such as Windows
- On 12th May at 01:24, MalwareHunterTeam first spotted this new malware
- The malware is spreading via a known Windows exploit patched on 14th March 2017
- This exploit is part of the leaked ‘Shadow Brokers’ tools that (allegedly) the NSA had been hoarding rather than reporting to vendors
- The NHS in the UK appears to have been particularly badly affected, as it is running a lot of Windows XP, which has been out of support since April 2014. This deadline was extremely well publicised - any organisation still running a Windows XP PC on an internet-connected network is negligent
- The worm spread very quickly - the NY Times has a good map showing quite how many unpatched systems there are
If we analyse these issues in a bit more detail we find a tale of negligence…
Ransomware is becoming a modern plague. For many years it was hard for hackers to monetise hacking: they could occasionally break into systems like banks or payment gateways and directly extract money, but it was difficult, and even more difficult to get away with the cash. Ransomware solved this. By using exploits to break into consumers’ PCs (their primary target), hackers collect lots of small amounts of pretty much untraceable money. This is a very profitable business - estimates of up to $1 billion have been made.
There are a few fixes to ransomware that could happen:
- Fix all the vulnerabilities - However this is difficult and unlikely to happen; computers are complex and it’s probably not within our power to fix this.
- Stop the money flow - If people stopped paying then ransomware would not pay, but if you’ve just lost your photos to it, it’s going to be very tempting to pay. I would advocate making it extremely hard for the criminals to be paid; for example, you could ‘taint’ any bitcoin balance paid to a ransomware address and ban any finance company from handling it. Or you could make it illegal to pay a ransom (it already is in some countries).
Paying ransomware is a bit like overusing antibiotics - it’s bad for humanity as a whole and has to stop!
Patching is not optional; it hasn’t been optional for 15 years, yet as this incident demonstrates, the message has still not got out. Microsoft introduced automatic updates in 2000; anyone caught out by not patching since then is simply negligent.
One of the main causes of this issue is IT teams choosing not to patch, either by delaying it or stopping it entirely. This is usually defended with an argument about compatibility. It’s true that some security updates have broken applications, but this is a case where the ‘cure is worse than the disease’. If you have a system that is important to you, you cannot let it go unpatched. The ‘standard’ process of reviewing patches is harmful - hackers won’t wait for your IT team to get round to reviewing and installing them. As soon as a vulnerability is known, they will start adding it to their ransomware. The only viable option is automatic updates.
Windows XP and ‘legacy’ systems
The NHS example with Windows XP is almost certainly down to money; the NHS, as we all know, is tight on resources and as a result will be ‘sweating’ old assets. This is, once again, simply negligence - I would not be surprised if the delays and chaos caused by this attack have killed or harmed patients; that will probably come out over the next few days. It’s a false economy to keep running old systems still connected to the internet. The cost in staff time and impact now will vastly exceed the cost of upgrading them. The managers who made these decisions should be held accountable.
There is a serious argument to be made that if any computer system is at all important to you, you cannot afford to let it fall into a ‘legacy’ state. If you do, you can guarantee that at some point it will fail and stop your hospital or business. This also raises serious questions of negligence - the new GDPR regulations and the associated fines should focus people’s attention. Just because something is legacy does not get you off the hook!
There has been a lot of commentary on the NSA exploits, but this is a fine proof point for why the government - or anyone - having a ‘back door’ is a bad idea. It’s blind luck in this case that patching actually prevents the ransomware: the Shadow Brokers leak gave Microsoft the chance to fix the flaw before it became a major issue. It could easily have been far worse - if the ransomware had appeared before the patch, every Windows PC would have been vulnerable.
What have we learnt? A tale of negligence
I think the main lessons to learn from this are:
- Patching is not and has never been optional - if you don’t patch you are simply negligent
- If you have a system that’s important to you, your customers (or patients) you can’t declare it ‘legacy’ and ignore it - if you do you are negligent
- If you find a vulnerability (looking at you - NSA) and you don’t tell the vendor - you are negligent
- If you pay a ransomware demand - you are encouraging this behaviour and you should be culpable
The argument that your limited budgets won’t stretch to securing your computers really doesn’t work: if an organisation can’t afford to use computer systems in a safe way, it can’t afford the computer systems at all. The organisation would be better off sticking to lower-tech methods, as the impact of these attacks is going to cost a huge sum in lost staff time and direct costs.
Are you feeling lucky?
How lucky do you feel today? It’s an important question as your IT security is probably mostly down to luck.
If we examine most ‘hacks’ we usually see the organisation hit issuing statements about ‘sophisticated hackers’, and the public image of hackers as lone geniuses wearing hoodies in darkened rooms is reinforced. In fact most attacks are perpetrated by far less skilled people and succeed by luck. That’s not to say there aren’t some super-skilled experts out there, but they are few and far between.
What I mean by luck here is simply a function of the complexity of IT systems. Every single IT system in service today has some form of known or unknown security weakness. My evidence for this claim is simply history - just look at the list of security issues found and you’ll discover pretty much everything has problems. There are lots of security improvements going on in the IT industry, but in parallel we’re building more and more features, which means the overall risk is at best flat, and possibly getting higher.
When we see a system being hacked, normally it’s because someone has found one of these weaknesses. The process of finding a specific weakness, against a specific organization is mainly luck. This is how phishing works, send a million emails, some of them bite and those are the people who get hacked/defrauded. The same is true for more sophisticated attacks - in many cases finding a certain website is vulnerable to an attack & exploiting it is down to the luck of the attacker.
Some people argue we can fix all these attacks with a combination of good tools and practices. I’d disagree. If we look at hacks on websites, the main ways to hack a website really haven’t changed for 15 years. All of them are solvable, yet developers keep shipping websites with these issues. Why? Because most software is too complex to understand, and thus to secure. Even with the best tools and processes, things get through - and most people building websites don’t have those; they simply have a limited budget and a deadline to get the website launched.
If you gave me an infinite security budget and the best engineers, security analysts etc., I still could not build a non-trivial system that was 100% guaranteed to be secure. We could build a system with great defense in depth and the best security tools, but there would still be some unknown bugs/defects in it, and with luck an attacker could beat it. However, I can build a system where an attacker has to be very lucky, and even if they are lucky there is a very high chance they’ll get caught. I’d argue that is what one should focus on when designing a solution.
That’s not to say you can’t make your own luck, both bad and good. You can certainly increase your risk by not being aware of security - take, for example, the UK bank using sequential card numbers that let people guess account details; that is simply negligent. You can reduce your risk by:
- Constructing an ‘attack tree’ detailing what you are trying to protect and how you are doing it
- Training development teams and ensuring they practice secure coding via processes & tools
- Assembling a defense in depth strategy - make sure you’re not relying on a single security system/approach to stop each attack
- Having a robust monitoring and incident response plan so that when the worst does happen, you can limit the damage
- Designing into your systems techniques to mitigate the damage from a successful attack, e.g. ‘break one, break only one’ - where you make sure hackers have to break each asset (account, database record etc.) separately. This means they don’t just have to be lucky, they have to be lucky repeatedly!
Overall, luck is a big factor in a hack succeeding. Next time you see that Bank XYZ or Shop ABC has been hacked, don’t just assume either that the hackers are super smart or that the victims have been incompetent. Instead remember that even the best systems can be beaten if the hacker gets a lucky break. However, you should judge very harshly those companies that aren’t ready and able to notice the breach, respond, communicate with customers and deal with it. Next time it could be your system - are you ready and prepared?
Computers are complex, so is protecting them
Computer systems are complex, and for quite a few years now the complexity has been past the point where any one person can understand ‘everything’ about a given system. There will often be people with a good understanding of the ‘building blocks’, but it’s pretty much impossible to understand all the detail of the code, libraries and platforms it depends on.
Complexity has massive implications for the security of computer systems. If no-one understands a system, how can you have any assurance that it’s secure? The developers of the system will have tried to design for ‘known’ security issues, and tried to assemble the ‘building blocks’ in such a way that they are secure, but as those blocks aren’t fully understood, it’s highly likely there will be some issues. This is not just an academic claim - simply look at the security patches for major building-block components like Java, .NET, Windows and Linux, all of which have regular security issues that could compromise any system built on them. On top of the building blocks, even a mid-size dev team will have a mixture of skills and abilities, and even with two-person reviews security bugs do get through. Add in that many systems depend on services supplied by other companies - things like SaaS, hosting, ISPs, certificate authorities and DNS - any or all of which are critical for security.
With this level of complexity, it becomes impossible to prevent all vulnerabilities. This is becoming a larger and larger problem in the real world. We hear about famous hacks like Yahoo, TalkTalk, Best Buy etc., but mostly those victims were unlucky. We know from the security patch lists that every system up and running on (for example) 1st Jan 2016 had vulnerabilities due to issues in the underlying infrastructure components.
There is hope however - although every system was vulnerable, not every system was exploitable. This is a key distinction to make. For example leaving my front door unlocked makes me vulnerable to someone walking into my house and stealing things, however this is not exploitable if I’m at home paying attention. The term for this is defense in depth - where you have multiple, overlapping security procedures. Defense in depth allows for one or more components to fail and still have a chance of stopping the attacker.
If we take the example of credit card theft on the web and how you secure against it, a defense in depth methodology would suggest you try to:
- Protect the connection to the web browser
- Protect the code running on the web browser
- Protect the code running on the server
- Protect the web server/API
- Train developers in secure coding practices
- Implement strong admin controls & audit logging
- Analyse the transaction pattern for signs of fraud
- Analyse the client profile for signs of fraud
None of these on its own is enough to stop an attack - however, combined, there is a good chance that one (or more) of them will delay or impede an attacker. That can often be enough to stop theft, if the alerts are (manually or automatically) monitored.
The challenge with deploying defense in depth is deciding which defenses to deploy. The list above in the credit card example is far from exhaustive, and it’s hard to choose what to pick. This is where modelling an attack tree comes in: with an attack tree you simply write down the goal (steal money) and then list all the ways to achieve it, and all the ways to stop them, as a tree, applying scores at each node. Bruce Schneier wrote a great article explaining it back in 1999. This is a very practical methodology for deciding what protection to use.
Computer systems are extremely complex, and the only way to protect them is with a layered defense consisting of multiple, overlapping solutions to try and prevent attacks. The best way to decide what you need is an attack tree: rigorous attack tree modelling gives you a good way to decide which attacks are feasible if a single security defense fails, and which are well protected by multiple layers of defense. Unfortunately very few systems today are modelled with attack trees to consider the threat level; instead many people rely on ‘tick box’ security, which is dangerous as it does not address all the threats, or assess the risk of someone making a mistake (and you can guarantee people sometimes make mistakes).
Once again a major CA (Symantec) has been ‘caught’ issuing certificates improperly. There is a great write-up on Ars Technica. This is really significant, as falsely issued CA certificates are one (of many) ways to MITM SSL.
This underlines the extreme difficulty of securing anything in IT. There are simply too many ‘moving parts’ and people involved. Your computer’s security depends on thousands of people and companies all doing everything correctly all of the time, and the simple law of averages suggests this is unlikely to ever happen!
There is a really good talk about some vulnerabilities found in the N26 banking app presented at the CCC congress this year.
The talk is worth a watch, and it highlights some key points:
- No certificate pinning was being used, which made it easy for the researchers to MITM the app
- that’s not to say Cert Pinning fixes all issues but doing it makes things a lot harder for attackers.
- The APIs exposed to the web were far too verbose and didn’t really care who was calling them
- I think (shameless plug) that Application Hardening techniques for both web and mobile are going to be needed to secure these things long term. You need to ensure the code calling your API is what you think it is. This is where products like Irdeto’s Cloakware API Protection come in.
- A lot of the exploit relied on coding/logic errors - but they were quite easy to exploit
- API protection techniques will mitigate the (inevitable) mistakes in your code’s logic and make them much harder to exploit
- That’s not to say you shouldn’t also work on fixing the logic!
- A number of the exploits relied on the engineers assuming IDs were secret when they were not (the ‘Mastercard ID’ in this case)
- This kind of assumption is quite common - if you think something is secret, you should not just document it; you need tests scanning logs/APIs for that data, to ensure it’s actually still secret.
- A good breach response helps you manage PR
- This was a pretty bad breach for N26 - but they handled it well. In particular they engaged with the researcher constructively and they fixed the issues in a reasonable time period.
- Many companies either ignore the issue or head straight for legal threats in these cases. This is a mistake, as doing so increases the likelihood of it being publicised before you have fixed it.
I suggest watching the whole talk - it’s well presented and shows a great real world example of how MITM can ruin your day as a bank or fintech.
Ouch - Kaspersky have been enabling MITM attacks on their customer base. The Register, citing a Chrome bug report, explains how this can be used to trick consumers into thinking a site is valid/safe when it is not.
This underlines the ease of MITM SSL/TLS - see my previous article for all the different ways this can be done!
I’ve been travelling quite a bit recently for work and have been reminded (again) how ‘human factors’ can defeat any attempt to improve security.
A good example of this is chip and PIN/contactless. Chip and PIN is common and popular in Europe, and as a result I never ‘give’ my card to members of staff to process. This reduces the risk of fraud substantially, as staff cannot easily clone or copy cards they have never handled.
Contrast this with the USA: even when shops have chip and PIN machines, it still seems common for staff to take the card and ‘swipe’ it first. In 95% of cases this seems to result in you signing for the transaction. As for why shops are not using the chip and PIN slots - I guess it’s human nature. People are used to the old method and there has been no incentive to force change. A good write-up of the issues is at http://qz.com/717876/the-chip-card-transition-in-the-us-has-been-a-disaster/
What’s even more worrying is how shops in the USA are handling contactless. Shop staff have taken my card and tapped it on the pad themselves. This is a logical extension of the current behaviour, but it is rife with fraud possibilities: I have no way to verify the amount, and the staff member could be wandering off to clone the card.
This just shows that dealing with the human factors is essential. People will accept less security for convenience (until it goes horribly wrong)!
Man in the middle is easier than you think
I’m often heard saying that it’s quite easy to MITM HTTPS (also called SSL/TLS), so I decided I should list all the methods I know of (there are quite a few).
The attacker has many options for getting in the middle between the user and the web server/API:
- Pure Technical Approaches
- Social Engineering Approaches
Pure Technical Approaches
The pure technical approaches rely on attacks that don’t require users to make any mistakes and anyone can be vulnerable.
Zero Day Vulnerabilities in browsers
Your web browser is pretty secure, but it’s not perfect: there is a continuous stream of exploits, known as zero-day vulnerabilities, being found in browsers. They are called zero-day because hackers find them before the vendor does, so the vendor has zero days to fix them. Take, for example, the annual Pwn2Own competition, where every year for the last few years security researchers have managed to break all of the major browsers. The browser vendors have definitely made their browsers more secure - but each year they get hacked, they fix the hacks, and more hacks are found.
The root issue here is that software is complicated. As an example, Firefox has 14 million lines of code and Chrome has 14.9 million. At that volume no one human can understand it all, and it gets incredibly hard to ensure there are no weaknesses. There are tools that try to solve this, but none of them can catch all bugs - it’s quite hard to define what a bug is until you see it! There are some approaches using machine learning (also called AI) that may help, but I suspect that for the foreseeable future we’ll continue to have zero-day flaws found in browsers at regular intervals.
Zero-days are very relevant for breaking HTTPS: typically, once a zero-day is found it will be used (often via advertising banners) to install malware on the consumer’s computer. The malware will do a range of things, including breaking HTTPS and logging keystrokes.
TLS/SSL is the cryptographic foundation of security on the internet, but it is definitely not flawless. In the last few years we’ve seen techniques discovered to break it, exploiting flaws in both designs and implementations.
Over the last few years there have been several attacks (all with odd names) on TLS/SSL.
If we extrapolate, this suggests we’re going to see a steady stream of such issues over the next few years. Once you have broken the encryption, it becomes possible to MITM the connection. It can be argued that over time TLS will get more and more secure, but so far we don’t seem to have got there!
Incorrectly issued ‘trusted’ certificate
TLS/SSL relies on certificate authorities (CAs) to work - these are the companies that certify, when you connect to ‘www.mybank.com’, that you are actually talking to your bank. Each web browser vendor (Microsoft, Apple, Google, Mozilla) has a list of approved certificate authorities it trusts by default - and requires them to only issue certificates to legitimate companies. In theory this should mean that when you see the green padlock in your browser you know who you are talking to - except this doesn’t work all the time.
The issue lies in how the CA verifies the company - there have been repeated cases of legitimate certificate authorities issuing certificates in error to third parties who then impersonate the site. This can happen due to weak verification processes, or in some cases it looks deliberate. When this is spotted the CA gets ‘told off’ by the browser vendors, but in most cases they are just made to apologise and told not to do it again.
The end result is that hackers have managed to get certificates that let them MITM legitimate websites and use them for profit. There are a number of upcoming web standards designed to make this stronger (e.g. HSTS, HPKP and Certificate Transparency) but many sites are not using them yet as they are hard to configure reliably.
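To make HSTS less abstract: it is simply an HTTP response header telling the browser to insist on HTTPS for a site. The header format is standardised (RFC 6797); the helper name below is my own, a minimal sketch of how a client might parse the policy:

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value into a policy dict."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # How long (in seconds) the browser must refuse plain HTTP.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# A typical site policy: force HTTPS for one year, subdomains included.
policy = parse_hsts("max-age=31536000; includeSubDomains")
```

Once a browser has seen this header over a valid HTTPS connection, it will refuse to downgrade to plain HTTP for that site, which removes one of the MITM attacker’s easiest tricks.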
Acquire a vendor-issued ‘trusted’ certificate
A variant on getting a CA to issue a certificate is to find the private key for a certificate that the user already trusts. Most users are not aware of it, but many computers have extra certificates installed and trusted by their IT department or by third-party software. For example, if your organization uses Microsoft Active Directory there is a good chance someone has installed a certificate authority for it and your local intranet; another example is Dell, who for several years shipped such a certificate on their machines.
This matters because if that certificate’s private key is not protected (and unfortunately it often isn’t) and a hacker gets hold of it, they can intercept all of your HTTPS traffic.
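To get a feel for how large this implicit trust list already is, Python’s standard ssl module can enumerate the platform’s default trust store. This is a sketch - the exact list and count vary by operating system and configuration, and some platforms expose the store differently:

```python
import ssl

# Load the platform's default trust store, exactly as an HTTPS client would.
ctx = ssl.create_default_context()

# Each entry is a root CA this machine will silently trust for any website.
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root certificates")

for cert in roots[:5]:
    # 'subject' is a tuple of relative distinguished names; pull out the
    # CA's organisation name where present.
    rdns = dict(pair[0] for pair in cert.get("subject", ()))
    print(rdns.get("organizationName", "(unknown)"))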
Social Engineering Approaches
Social engineering approaches combine technical ‘tricks’ with getting the computer user to do something that makes hacking them easier. It’s surprisingly easy to convince people to do things that let you hack their computer.
Convince user to install MITM certificate
Many legitimate Wifi hotspots require users to install either software or a certificate to access them. This is especially common in emerging markets (and relatively rare in Europe/North America). The purpose of this is to allow the wifi hotspot owner to inspect all your secure traffic e.g. to prevent misuse. However the same certificate also allows the hotspot owner to MITM all your connections and view any data you see and potentially trigger sites to perform actions as you.
With a ‘legitimate’ Wifi hotspot this probably won’t happen. However, there is a device known as a Wifi Pineapple, available for $99, that can be used either as a security testing tool or, more nefariously, to mimic a legitimate wifi network. The hacker simply has to set up a Wifi Pineapple via its easy-to-use GUI; you’ll be asked to install a MITM certificate from it, and they can then intercept and modify anything you see.
The user has had to do something ‘silly’ for this to happen - namely install the MITM SSL certificate (a simple double click) - but it’s generally quite easy to convince people this is legitimate. Relying on consumers’ IT education being sufficient to stop them is a strategy based on ‘hope’, which I’m pretty sure is not going to work!
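One partial defence that doesn’t rely on user education is certificate pinning: an application remembers the fingerprint of the certificate it expects, so a substituted MITM certificate stands out even if the user has been tricked into trusting it. A minimal sketch in Python - the function names are mine, and the pinned value would have to come from an earlier connection made over a network you trust:

```python
import hashlib
import ssl


def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def live_fingerprint(host, port=443):
    """Fingerprint of the certificate a server presents right now."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem))


def connection_looks_safe(host, pinned):
    """True only if the live certificate matches the one pinned earlier.

    A Wifi Pineapple interposing its own certificate will produce a
    different fingerprint, so this check fails and the app can refuse
    to send anything sensitive.
    """
    return live_fingerprint(host) == pinned
```

Real clients pin at handshake time rather than fetching the certificate separately, but the principle is the same: trust a specific key, not whatever the local network offers.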
Convince user to install software
Asking a user to install a MITM certificate is not the only way to ‘break’ HTTPS - another way is to ask them to install some software, typically in the form of a Trojan horse. For example, some wifi hotspots ask you to download a ‘security’ or ‘connection manager’ program. These can be harmless, or they can install the tools required to intercept your HTTPS.
Malicious Browser Extensions
Browser extensions have a lot of power - users typically install them from extension stores, and once installed they can often see all the pages you view. This allows them to bypass the protections of HTTPS and gather all the data served on the page.
The main protection against this is the browser vendors, who police the extension stores for malicious extensions. However, their policing is far from perfect: some malicious extensions do sneak through, and they can generally acquire data from consumers for a while before anyone notices.
A variant of this attack is to ‘buy’ an existing popular extension from its developer (remember, most are made as hobbies, so developers will consider selling for relatively small sums) and then issue an ‘upgrade’ containing the malicious code. This allows attackers to quickly distribute their code to many browsers.
So are we all doomed to never having secure websites?
Securing computers from attackers was described by John Oliver as ‘dancing on the edge of a volcano trying desperately not to fall in’, and to some degree this is true. Computer systems are now sufficiently complicated that there will probably never be a useful and totally secure system again.
However, we don’t need total security - it doesn’t exist in the physical world and we get by just fine. What we need is an assessment of the risks, mitigations, and active responses to limit the damage an attacker can do. Techniques are starting to appear that reduce these risks (e.g., shameless plug, Cloaked.JS from Irdeto), which will make it significantly less lucrative for hackers to try this. Combined with a defense-in-depth approach, this can get us to the point where we are controlling the risks and losses to an acceptable level.