{"id":384,"date":"2018-01-27T13:31:00","date_gmt":"2018-01-27T18:31:00","guid":{"rendered":"https:\/\/domainsure.com.wp.easypress.ca\/?p=384"},"modified":"2019-02-27T13:39:04","modified_gmt":"2019-02-27T18:39:04","slug":"a-deep-dive-into-the-mirai-botnet-attack","status":"publish","type":"post","link":"https:\/\/domainsure.com\/news\/a-deep-dive-into-the-mirai-botnet-attack\/","title":{"rendered":"A Deep Dive into the Mirai Botnet Attack"},"content":{"rendered":"
As we all know, on Friday Oct 21, 2016 DNS provider Dynect was severely impacted by a big DDoS attack which has since been attributed to the Mirai Botnet. (Interestingly, "Mirai" means "future" in Japanese.)

Briefly: the Mirai Botnet is constructed by commandeering network-connected Internet of Things (IoT) devices such as remote cameras, or any other device somebody thought would be "neat" to connect to the Internet, albeit with crappy security like a default admin password. These devices aggregate into the tens of thousands, or potentially more, and can be coordinated to launch traffic at a target like a website (such as the possibly world-record-setting DDoS against security researcher Brian Krebs recently, also attributed to Mirai), or at the nameservers for a target, which is what happened to Dynect on Friday.

As we know too well, when you bring down a target's nameservers, you effectively disappear that target from the Internet, and unfortunately you also bring down every other domain name that is using the same set of nameservers (unless they have additional nameservers; see below).

DDoS attacks are nothing new, and neither are attacks against DNS infrastructure. God knows we've had our fair share here at easyDNS, and I still have the psychological scars from a few of them.

The fact is they are only getting worse as time goes on:

[Graph: DDoS attack sizes over time]

This graph is for the DDoS chapter in my upcoming O'Reilly book (which is almost finally done, thank gawd); the data points trace the growth in attack sizes over the years.

Not in the graph: days after the 2016 attack on Krebs, French ISP OVH reported an attack at "double the magnitude" of the one that hit Krebs. If true, we're in the 1.2 Tb/sec ballpark, a few years early.

The pattern is clear: this is only getting worse.

The traditional silver bullets won't work

If you are employing mitigation devices or using third-party scrubbing services, you are taking an "arms race" approach and you are always fighting the last war. There will always be a bigger botnet; there will always be a larger attack.

In the past there was debate that something like BCP38 could be a silver bullet against DDoS attacks. BCP38 is network ingress filtering: it drops traffic with forged source addresses at the edge of the network where it originates, before it can ever reach a DDoS target. That helps if the attack traffic is using forged headers and spoofing where the packets originate, which was a common feature of DDoS attack traffic in the past.

The uptake of BCP38 has been slow-to-none, given that the ROI for implementing it is "asymmetric" (to use Paul Vixie's phrase when he analyzes this issue), meaning that the party who makes the investment to implement BCP38 isn't the same party who reaps the direct benefits of it.

But since DNS attacks in particular have become mainly reflection and amplification attacks, BCP38 doesn't necessarily help. My understanding so far is that the Mirai attack is the same: the packets aren't forged or spoofed, so BCP38 wouldn't make a difference.

Even the problem of too many open resolvers being present on the Internet, a big factor in DNS amplification and reflection attacks, is less of an issue here.
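For a feel of why open resolvers and amplification are normally such a big factor, here is a minimal sketch. It assumes the third-party dnspython package is installed, and the resolver address and query name are placeholders for infrastructure you run yourself. It sends one small UDP query and compares its size to the size of the answer; that ratio is the multiplier a reflection attack gets for free when it spoofs the victim's address as the source of the query.

```python
# Rough illustration of DNS amplification: compare the size of a small UDP
# query with the size of the response it elicits.
# Assumes the third-party "dnspython" package. The resolver IP and query
# name are placeholders; only probe infrastructure you operate yourself.
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "192.0.2.53"   # placeholder: a resolver you run
QNAME = "example.com"     # placeholder zone

# A small query with EDNS0 so the server is allowed to send a large answer.
query = dns.message.make_query(
    QNAME, dns.rdatatype.TXT, use_edns=0, want_dnssec=True, payload=4096
)
response = dns.query.udp(query, RESOLVER, timeout=5)

query_size = len(query.to_wire())
response_size = len(response.to_wire())
print(f"query: {query_size} bytes, response: {response_size} bytes")
print(f"amplification factor: {response_size / query_size:.1f}x")
```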
Mirai attacks don't necessarily need or use open resolvers, because the bots are just dead-stupid devices generating queries from within their own networks; they'll simply hit their local resolver service (the ISP's, or whatever is configured).

(Those wanting to skip my possibly meandering analysis of the "deep politics" behind the attack may want to simply skip to "What Do I Do About It?", below.)

Who Did It? Was it the Russians?

The source code to Mirai was known to have been leaked into the wild prior to Friday's attack, so the hit on Dynect could have been quite literally... anybody with knowledge of the Internet's back alleys and the requisite skills to set it up and deploy it.

Predictably, I've already seen a lot of nonsense floating around the 'net about who it "could" have been and why they did it, like this:

"The DDOS Attacks Are Most Likely A FALSE FLAG By The Clinton Camp To Blame RUSSIA And WIKILEAKS, But WIKILEAKS Is Strangely Blaming Supporters"

In another example, Representative Marsha Blackburn (R-Tennessee) went on CNN and somehow drew a straight line to Friday's attack from the failure of SOPA to pass (remember that?). It's a strange allegation, akin to arguing that cancer could be eliminated if only Green Day won the World Series: the premise simply makes no sense. Perhaps the most frightening realization to emerge from Friday is that Blackburn (and presumably others like her) sit on the US House Communications and Technology Subcommittee.

Granted, I did find the timing "odd" that the attack happened a mere two days after I wrote here on the easyBlog about how deliberately provoking a cyberwar with Russia (as Joe Biden indicated the US would on "Meet The Press" the prior weekend) would end badly. I personally attribute that to coincidence as opposed to an opening salvo.

A hacker group called "New World Hacking" actually claimed responsibility for the attack, but it's hard to say if it was really them (some security types I'm in contact with are skeptical).

So Who Actually Did Do It Then?

Attention to the Mirai botnet emerged after Brian Krebs wrote an article exposing a shadowy group based in Israel called "vDos", which was billing itself as an "IP stresser service" (a service that is supposed to exist to "stress test" your network against DDoS attacks). It turned out they weren't really running an "IP stresser" service but a full-on DDoS-for-hire service, and his exposé earned him the receiving end of the largest attack on record, forcing his mitigation service, Prolexic (now owned by Akamai), to dump him (in their defense, that's what happens anywhere when you get hit with a DDoS you cannot fully handle; you don't have much of a choice).
After that happened, the Mirai botnet source leaked, so the field has massively expanded in terms of who could do it. As Krebs delved deeper into what happened, he also discovered that a DDoS mitigation firm called BackConnect had employed a questionable tactic (hijacking the BGP prefixes of an external entity) in claimed self-defense against an attack. His further analysis was that the company had a history of doing it.

The emergence of "defensive BGP attacks" became a vigorous discussion on the North American Network Operators Group (NANOG) mailing list, and Dyn security researcher Doug Madory gave a talk on the subject at a NANOG meeting on Thursday, the day before the attack.

I don't know who did it and I don't have the inside track on any suspects. The timing could be coincidence, and because the source code is out there, it could be anybody. But trace through the KrebsOnSecurity sequence, including the vDos operation Krebs exposed (the two men who ran it were arrested in Israel after Krebs' story broke), then proceed through the NANOG thread and Doug Madory's talk, and you would probably be a lot closer to what actually happened. This avenue of investigation seems more promising than Russians, rogue DNC staffers, Wikileaks, or hyper-dimensional grey aliens.

What Do I Do About It?

As KrebsOnSecurity (among others) has observed:

"DDoS mitigation firms simply did not count on the size of these attacks increasing so quickly overnight, and are now scrambling to secure far greater capacity to handle much larger attacks concurrently."

When employing DDoS mitigation gear head-on or external scrubbing centers, you are in an arms race and fighting the last war. That's not to say you shouldn't employ these defenses (far from it: we use Cloudflare, StackPath, and Koddos, and every time I think about how much fscking money we spend on DDoS mitigation it pisses me off, because we can't not do it and we can still get taken down, and so can anybody else).

But wait, there's more:

It's not just DDoS attacks (Sorry).

Not all DNS outages are caused by DDoS attacks; other things cause them too. The number one cause of data center outages is actually UPS and power failure (although the fastest-growing cause, according to that same study, is cybercrime). But hopefully you don't have all of your DNS servers in one data center.
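One quick way to sanity-check that last point: the sketch below looks up a zone's NS records, resolves each nameserver to its IPv4 addresses, and warns when every address falls inside the same /24. It assumes the third-party dnspython package, and "example.com" is a placeholder for your own domain; sharing a /24 is only a crude proxy for sharing a facility or network, but it catches the most obvious case.

```python
# Crude self-check: do all of a zone's nameservers sit in the same /24?
# Assumes the third-party "dnspython" package; "example.com" is a placeholder.
import ipaddress
import dns.resolver

DOMAIN = "example.com"

resolver = dns.resolver.Resolver()
ns_names = [str(rr.target) for rr in resolver.resolve(DOMAIN, "NS")]

addresses = []
for name in ns_names:
    try:
        addresses += [ipaddress.ip_address(rr.address)
                      for rr in resolver.resolve(name, "A")]
    except dns.resolver.NoAnswer:
        continue  # IPv6-only nameserver; ignored in this rough sketch

networks = {ipaddress.ip_network(f"{addr}/24", strict=False) for addr in addresses}

print(f"{DOMAIN} is served by: {', '.join(ns_names)}")
if len(networks) <= 1:
    print("WARNING: every nameserver address is in the same /24.")
else:
    print(f"Nameserver addresses span {len(networks)} distinct /24 networks.")
```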
If we had to pick the single biggest DNS outage in terms of the number of domains taken offline, I would guess it was the great GoDaddy outage of 2012, which took down millions of domains for approximately 8 hours and was attributed to corrupted routing tables.

Use Multiple DNS Solutions

So as mentioned in our "DNS and DoS Attacks, How to Stay Up When Your DNS Provider Goes Down" (which still gets boatloads of traffic every time there is a major DNS outage), the magic bullet for maintaining high-availability DNS even when your main DNS solution goes down is to use more than one DNS solution.

If the point hasn't been made yet, I'll restate it here: no matter how much redundancy, how much network capacity, how many POPs worldwide, all DNS providers are a logical single point of failure (SPOF) unto themselves.

So if you absolutely, positively have to have 100% DNS availability all the time, you must spread your nameservers across more than one DNS solution and keep them in sync; and the one piece which all of the above hinges on, usually the one part you cannot do once "The Black Swan Event" has started, is to set this up ahead of time.
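A sketch of what "keeping them in sync" can look like in practice: ask the same question of a nameserver at each provider and compare what comes back. It assumes the third-party dnspython package, and the zone and nameserver hostnames are placeholders for whichever providers actually serve your domain. Comparing SOA serials is only meaningful when both providers are fed from the same primary zone (a typical primary/secondary arrangement); if each provider is managed independently, compare the records you care about instead.

```python
# Sketch: compare the SOA serial for a zone as served by nameservers at two
# different providers. Assumes the third-party "dnspython" package.
# The zone and nameserver hostnames below are placeholders.
import dns.message
import dns.query
import dns.resolver

ZONE = "example.com"
PROVIDER_NAMESERVERS = {
    "provider-a": "ns1.provider-a.example",
    "provider-b": "ns1.provider-b.example",
}

def soa_serial(nameserver_host: str) -> int:
    """Return ZONE's SOA serial as reported by one specific nameserver."""
    ns_ip = dns.resolver.resolve(nameserver_host, "A")[0].address
    query = dns.message.make_query(ZONE, "SOA")
    response = dns.query.udp(query, ns_ip, timeout=5)
    return response.answer[0][0].serial

serials = {label: soa_serial(host) for label, host in PROVIDER_NAMESERVERS.items()}
print(serials)
if len(set(serials.values())) != 1:
    print("WARNING: providers report different serials; the zones may be out of sync.")
```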
High Availability DNS Methods at easyDNS

At easyDNS this has been our mantra for years, and we've created a number of tools and systems to provide exactly that ability. As an easyDNS member, you received our email directing you to this article because those tools are available to you.
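None of what follows is easyDNS's actual tooling; it's just a toy illustration of the monitoring half of the problem, assuming the third-party dnspython package, with placeholder hostnames and a placeholder test record. It probes each nameserver you delegate to with a simple query so you find out a provider has gone quiet before your visitors do.

```python
# Toy availability probe (not any provider's real tooling): ask each
# delegated nameserver for one known record and report which ones answer.
# Assumes the third-party "dnspython" package; names below are placeholders.
import dns.message
import dns.query
import dns.resolver

TEST_NAME = "www.example.com"
NAMESERVERS = ["ns1.provider-a.example", "ns2.provider-b.example"]

def answers(ns_host: str) -> bool:
    """True if this nameserver responds to a simple A query in time."""
    try:
        ns_ip = dns.resolver.resolve(ns_host, "A")[0].address
        query = dns.message.make_query(TEST_NAME, "A")
        dns.query.udp(query, ns_ip, timeout=3)
        return True
    except Exception:
        return False

status = {ns: answers(ns) for ns in NAMESERVERS}
print(status)
if not all(status.values()):
    print("ALERT: at least one nameserver is not answering; check your failover plan.")
```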