While at BSides Detroit, I was able to catch some really great sessions. One in particular, Josh Little’s A Cascade of Pebbles: How Small Incident Response Mistakes Make for Big Compromises, was extremely informative, as it gave the audience a chance to see a company’s mitigation efforts from the attacker’s side of a targeted attack, with clear takeaways that any company can use to make its incident response process more effective.
Little’s team immediately tried a straightforward phishing email, to no avail. So they spent a few days building an intricate fake company: one that showed up in Google searches and had a news ticker and information on its ‘executives’. The email’s goal was to get users to fill out a survey and ‘log in’, handing over their usernames and passwords. One user did fill out the survey, giving the team credentials to the network. Meanwhile, someone in the IT department realized the email was bogus and warned everyone who had received it not to click. He also checked the logs and, seeing that no one had clicked, decided the matter was taken care of. What he didn’t know was that the user who did click was at home, so the form submission never appeared in the company’s logs.
With the credentials, and because there was no two-factor authentication on the VPN, they got into the network and found plenty of sensitive information on the file share (including the previous month’s vulnerability scans). Their movements triggered alerts, but when an antivirus scan of the system came up clean, the IT department wrote the alerts off as false positives and ignored them.
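One of the footholds here was a VPN protected by a password alone. A common second factor is a time-based one-time password (TOTP, RFC 6238). As a rough sketch of how the algorithm works — not something from Little’s talk, and no substitute for a vetted authentication library in production — a TOTP code can be computed from a shared secret with nothing but the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step.

    secret_b32 is the shared secret, base32-encoded (as in typical
    authenticator-app enrollment QR codes).
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This reproduces the RFC 6238 test vectors — for the standard test secret (ASCII "12345678901234567890", base32 "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ") at t=59 seconds, the 8-digit code is "94287082". With a factor like this on the VPN, the stolen password alone would not have been enough to log in.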
The IT department finally connected the computer that was the source of the continued suspicious activity with the user who had clicked on the email, and had him change his password. Unfortunately, he changed it following the same pattern he had used in the past (he kept a note of his passwords in Outlook, and because the attackers had been in his account for four days, they had already seen the list), so once it was changed, they logged right back in with the new password.
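A password-reset step can also check for this mechanically. As a hypothetical sketch (the function name, the 0.6 threshold, and the example passwords are my own, not from the talk), a change handler could reject a new password that is only a small edit away from the old one:

```python
from difflib import SequenceMatcher

def too_similar(old, new, threshold=0.6):
    """Flag a new password that merely tweaks the old one
    (e.g. incrementing a year or a trailing digit).

    Compares case-insensitively so a lowercased variant of the
    old password is still caught. Returns True if the similarity
    ratio (0.0..1.0) meets or exceeds the threshold.
    """
    ratio = SequenceMatcher(None, old.lower(), new.lower()).ratio()
    return ratio >= threshold

# Hypothetical examples:
# too_similar("Winter2013!", "Winter2014!")  -> rejected (one digit changed)
# too_similar("Winter2013!", "correct horse battery staple")  -> allowed
```

This only catches near-duplicates of a password the system already holds; it does nothing against a pattern the attacker has inferred from a list of old passwords, which is why the advice in the talk — pick something totally new — still matters.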
Through a loaner desktop, they gained access to other administrative accounts, then created a new user account for their own use. This triggered another alert, which prompted the IT department to ask the administrator in question whether he had created the account. He said no, and the issue was dropped anyway.
At this point, Little’s group got onto management’s computers and gained access to pretty much everything they needed for the test. Running low on time, they stopped being careful and stealthy, and were quickly caught and kicked off the system for good (after they had gotten all the information they wanted).
The whole attack lasted about two weeks. Little explained that with a little more stealth and a little more time, they could have stayed in the system, undetected, for months or even years. So, what could this company have done to strengthen its mitigation efforts? Here are some of Little’s biggest takeaways for any company looking to strengthen its security (i.e., everyone):
Education – When the compromised user changed his password, the IT department should have coached him on best practices for strong passwords and encouraged him to come up with something totally new. There are also many tools and methods for trying to mitigate an attack; antivirus is a great tool, but it isn’t a catch-all for malicious activity. An IT department that understands how its systems work, and all the tools and techniques that can catch an incident like this, can lower the amount a company stands to lose from a network compromise. More generally, responders need to think critically about an attack: a clean antivirus scan doesn’t mean the alert was worthless in the first place. It’s a cue to think harder about what may be going on within the system. Putting total reliance on the accuracy of a tool (or not enough reliance, in the case of ignoring alerts) can be detrimental.
Communication – The user should have spoken up when he found out the email was a scam and admitted that he had clicked. The IT responder should have individually called every recipient of the email to make sure they hadn’t clicked, and should have informed management that the email had happened at all, whether or not he thought someone had filled out the survey. Others in the IT department should have been made aware that there was a targeted attack against the company. All of these issues come down to people within the company communicating that something is going on that others need to know about, and better communication could have stopped the attack at the start. People often stay quiet out of concern that they’ll be fired if they self-report a mistake; making clear that this isn’t the case can get issues resolved well before the company takes losses. Having a process that requires management notification in cases like this also clears up any doubt about whether, and to whom, an IT responder should escalate.
Importance – At each step of the way, events that didn’t add up (alerts firing with nothing found in a scan, new accounts that the administrator who supposedly created them didn’t remember making…) were shrugged off as non-issues because the answer wasn’t obvious. Treating the things that just don’t make sense as important means workers will get to the bottom of them, and make sure they’re handled correctly and completely.
Josh Little is a Senior Security Consultant for VioPoint, an Auburn Hills-based security services company. He is also the chapter leader for OWASP Detroit and one of the founders of MiSec. Josh has over 14 years’ experience in the IT and security fields.