C&C Botnet Detection over SSL




Riccardo Bortolameotti
University of Twente - EIT ICT Labs masterschool
[email protected]

Dedicated to my parents Remo and Chiara, and to my sister Anna


Abstract

Nowadays botnets play an important role in the panorama of cybercrime. These cyber weapons are used to perform malicious activities, such as financial fraud and cyber-espionage, using infected computers. This threat can be mitigated by detecting C&C channels on the network. Many solutions have been proposed in the literature. However, botnets are becoming more and more complex, and they are currently moving towards encrypted solutions. In this work, we have designed, implemented and validated a method to detect botnet C&C communication channels over SSL, the de-facto standard security protocol. We provide a set of SSL features that can be used to detect malicious connections. Using our features, the results indicate that we are able to detect what we believe to be a botnet, as well as other malicious connections. Our system can also be considered privacy-preserving and lightweight, because the payload is not analyzed and the portion of analyzed traffic is very small. Our analysis also indicates that 0.6% of the SSL connections were broken. Limitations of the system, its applications and possible future work are also discussed.



Contents

1 Introduction
  1.1 Problem Statement
  1.2 Research questions
      1.2.1 Layout of the thesis

2 State of the Art
  2.1 Preliminary concepts
      2.1.1 FFSN
      2.1.2 n-gram analysis
      2.1.3 Mining techniques
  2.2 Detection Techniques Classification
      2.2.1 Signature-based
      2.2.2 Anomaly-based
  2.3 Research Advances
      2.3.1 P2P Hybrid Architecture
      2.3.2 Social Network
      2.3.3 Mobile
  2.4 Discussion
      2.4.1 Signature-based vs Anomaly-based
      2.4.2 Anomaly-based subgroups
      2.4.3 Research Advances
  2.5 Encryption

3 Protocol Description: SSL/TLS
  3.1 Overview
  3.2 SSL Record Protocol
  3.3 SSL Handshake Protocols
      3.3.1 Change Cipher Spec Protocol
      3.3.2 Alert Protocol
      3.3.3 Handshake Protocol
  3.4 Protocol extensions
      3.4.1 Server Name
      3.4.2 Maximum Fragment Length Negotiation
      3.4.3 Client Certificate URLs
      3.4.4 Trusted CA Indication
      3.4.5 Truncated HMAC
      3.4.6 Certificate Status Request
  3.5 x.509 Certificates
      3.5.1 x.509 Certificate Structure
      3.5.2 Extended Validation Certificate

4 Our Approach
  4.1 Assumptions and Features Selected
      4.1.1 SSL Features

5 Implementation and Dataset
  5.1 Overview of n-gram technique implementation
  5.2 Dataset
  5.3 Overview of our setup

6 Experiments
  6.1 First Analysis
      6.1.1 Results
  6.2 Second Analysis
      6.2.1 Considerations
  6.3 Third Analysis
      6.3.1 Botnet symptoms
      6.3.2 Decision Tree

7 Summary of the Results
  7.1 Limitations & Future Works
  7.2 Conclusion

Chapter 1

Introduction

Cyber-crime is a criminal phenomenon characterized by the abuse of IT technology, both hardware and software. With the increasing presence of technology in our daily life, it has become one of the major trends among criminals. Today, most of the applications that run on our devices store sensitive and personal data. This is a juicy target for criminals, and it is more easily accessible than ever before, due to the inter-connectivity provided by the Internet.

Cyber-attacks can be divided in two categories: targeted and massive attacks. The first category mainly concerns companies and governments. Customized attacks are delivered to a specific target, for example to steal industry secrets or government information, or to attack critical infrastructures. Examples of attacks in this category are Stuxnet [31] and Duqu [7]. The second category mostly concerns Internet users. The attacks are massive, and they attempt to hit as many targets as possible. This category includes well-known cyber-attacks such as spam campaigns and phishing. It also includes a cyber-threat called botnet.

Botnets can be defined as one of the most relevant threats for Internet users and company businesses. Botnet, as the word itself suggests, means network of bots. Users usually get infected while they are browsing the Internet (e.g. through an exploit): a malware is downloaded on the PC, and the attacker is then able to remotely control the machine. It becomes a bot in the sense that it executes commands sent by the attacker. Attackers can therefore control as many machines as they are able to infect, creating the so-called botnets. Criminals can use them to steal financial information from people, attack company networks (e.g. DDoS), perform cyber-espionage, send spam, and so on.

In the past years, the role of botnets in the criminal business has grown, and they have become easily accessible. On the "black market" it is possible to rent or buy these infrastructures in order to carry out a cyber-crime. This business model is as productive for criminals as it is dangerous for Internet users, because it allows common people (e.g. script kiddies) to commit crimes with a few simple clicks. This cyber-threat is a relevant problem because it is widely spread and can harm (e.g. economically) millions of unaware users around the world. Furthermore, this threat also seriously harms companies, whose businesses are in danger daily.

Building such an infrastructure is not too difficult today. For this reason, every couple of months we hear about new botnets or new variants of botnets appearing in the wild. Botnets have to be considered a hot topic: in recent weeks, one of the most famous botnets (i.e. Gameover Zeus [2]) has been taken down by a coordinated operation among international law enforcement agencies. This highlights the level of threat such systems represent for law enforcement and security professionals. Botnet detection techniques should be constantly studied and investigated, because with the "Internet of Things" this issue can only get worse. Security experts should develop mitigation methods for these dangerous threats mainly for two reasons: to make the Internet a more secure place for users, who otherwise risk losing their sensitive data, or even money, without knowing it (i.e. an ethical reason), and to improve the security of company infrastructures in order to protect their business (i.e. a business reason).

However, in the last decades botnets have changed their infrastructures in order to evade the new detection technologies that appeared on the market. They are evolving into more and more complex architectures, making their detection a really hard task for security experts. On the other side, researchers are constantly working to find effective detection methods. In the past years a lot of literature has been written regarding botnets: analyses of real botnets, proposals of possible future botnet architectures, detection methods, and more. The focus of these works is wide in terms of the technologies that have been analyzed.
One of the first botnets analyzed in the literature was based on IRC. As time went by, botnets evolved their techniques, using different protocols for their communication, such as HTTP and DNS. They also improved their resilience by changing their infrastructure from client-server to peer-to-peer. Recently, botnets have been moving towards encryption, in order to improve the confidentiality of their communications and to make detection harder. Our work addresses this last scenario: botnets using an encrypted Command&Control communication channel. It has been shown in the literature that botnets have started to use encryption techniques. However, these techniques were mainly home-made schemes that did not follow any standard. Therefore, we try to detect botnets that exploit the de-facto standard for encrypted communication: SSL. In this thesis we propose a novel detection system that is able to detect malicious connections over SSL without looking at the payload. Our technique is based on the characteristics of the SSL protocol, and it focuses on just a small part of it: the handshake. Therefore this solution can be considered privacy-preserving and lightweight. Our detection system is mainly based



on the authentication features offered by the protocol specifications. The key feature of the system is the server name extension, which indicates the hostname to which the client is trying to connect. This feature is combined with the SubjectAltName extension of x.509 certificates, which was introduced in order to fight phishing attacks. These checks are not enforced by the protocol; it is the application itself that has to take care of them. We take advantage of these features in order to detect malicious connections over SSL. Moreover, we add features that check other characteristics of the protocol. We perform two checks on the server name field: whether it has the correct format (i.e. a DNS hostname), and whether it is possibly random. Moreover, we check the validation status of the certificate and its generation date, and we check whether any self-signed certificate tries to authenticate itself as a famous website (i.e. the 100 most visited websites according to Alexa.com [1]). These features have been validated through three experiments, and a final set of detection rules has been created. These rules allow us to detect malicious connections over SSL. Moreover, our set of features allows us to detect TOR and separate it from other SSL traffic. During our work we have also confirmed the findings of Georgiev et al. [19]: many SSL connections are broken and vulnerable to man-in-the-middle attacks (0.6% of the connections in our dataset). Several content providers are vulnerable to this attack, and therefore so are all the websites they host.

This thesis makes the following contributions:

• We propose the first SSL-based malware identification system that analyzes SSL handshake messages to identify malicious connections.

• Our solution completely respects the privacy of the users and is lightweight.
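The core hostname-versus-certificate check described above can be sketched as follows. The function name and the single-label wildcard rule (in the spirit of RFC 6125) are our own illustrative choices, not the exact logic of the thesis prototype, which is implemented as Bro scripts:

```python
import fnmatch

def hostname_matches_certificate(server_name, cert_names):
    """Return True if the SNI server name is covered by one of the
    names from the certificate (SubjectAltName / Common Name entries).

    Wildcards such as *.example.com are accepted, but only for a
    single label, in the spirit of RFC 6125.
    """
    server_name = server_name.lower().rstrip(".")
    for pattern in cert_names:
        pattern = pattern.lower().rstrip(".")
        if pattern.startswith("*."):
            # A wildcard must not cross label boundaries:
            # *.example.com covers mail.example.com, not a.b.example.com.
            if (server_name.count(".") == pattern.count(".")
                    and fnmatch.fnmatch(server_name, pattern)):
                return True
        elif server_name == pattern:
            return True
    return False

# A mismatch between the SNI value and the certificate names is
# treated as an anomaly indicator by the approach described above.
print(hostname_matches_certificate("mail.example.com", ["*.example.com"]))  # True
print(hostname_matches_certificate("c2.evil.net", ["*.example.com"]))       # False
```

Because the protocol itself never enforces this agreement, a bot is free to send any server name it likes; it is exactly that freedom the check exploits.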


1.1 Problem Statement

This research aims at detecting botnet C&C communication channels based on the Secure Socket Layer protocol. We focus on detecting botnets at the network level, so we do not have access to any machine but our own server, which collects and analyzes the traffic. Current techniques for detecting C&C channels are designed to detect known malware or botnets. However, none of these techniques focus on the SSL protocol. Moreover, it is not known in the literature whether such networks of infected machines use SSL or not. This is an additional challenge that we face in this research: we have to define an intrusion detection mechanism that is not based on existing botnets, yet is able to detect them. Additionally, current detection techniques are based on inspection of the content of network traffic. We cannot afford payload inspection of SSL, because that would require a man-in-the-middle



attack, and we have neither the capability nor the permission to perform one. Therefore, we have to find a different solution to this challenge.


1.2 Research questions

The recent developments of malware, which is starting to exploit standard cryptographic protocols (i.e. SSL), have attracted our attention. The role played by the Secure Socket Layer protocol in Internet security is fundamental for the entire Internet infrastructure. Therefore, we have decided to investigate in this direction in order to identify possible C&C botnets over SSL. Today, we do not know whether SSL-based botnets are already deployed in the cyber-criminal world; however, it is our goal to eventually identify them. A detection technique for SSL-based botnets would be novel in the literature and would open a new research direction; our research could therefore be an important contribution to the research community. To the best of our knowledge, there is no detection technique that can detect malware by observing encrypted C&C traffic over SSL. Therefore the main research question is:

How can we detect botnet C&C communication channels based on SSL?

To address this question, we decided to design, implement and evaluate a technique based on anomaly detection, in combination with machine learning techniques. This combination has yielded successful results in the literature over the past years, therefore we want to act in a similar fashion. By further problem decomposition we extract the following research sub-questions:

1. What SSL features could be useful in order to detect possible misbehavior?
2. How can we validate those features?
3. How do we construct our detection rules with those features?
4. What characteristics should our dataset have, in order to increase our chances of finding infected machines?
5. How can we refine our detection rules, in order to obtain the least number of false positives?
6. What data mining technique fits best?
7. How do we set up our experiment?



To answer these questions we started by studying the SSL protocol in order to identify features potentially useful for detection. Then we implemented our solution using Bro [46]. Afterwards, we obtained access to the network traffic of the University and set up our experiment. We collected the network traffic and ran three different analyses in order to validate and refine our features and detection rules. The first analysis was done manually, examining connection by connection in order to label false positives and true positives. For the second analysis we used the same dataset, but different rules (based on our features), which were refined after the first analysis. The third analysis was done over a longer period and on a different dataset; the detection rules used were tailored based on the true positives previously found, and this last analysis aims to validate them. Moreover, we built a decision tree, the best data mining solution in our scenario, and tested it on our first dataset, classifying malicious and benign connections to assess the effectiveness of our features.


1.2.1 Layout of the thesis

The rest of this thesis focuses on addressing the main research question and sub-questions. Every step of our research has been reported in this document. In more detail, Chapter 2 provides a deep analysis of the state of the art of old and modern techniques for botnet detection, describing their advantages and disadvantages. Chapter 3 provides an introduction to the main protocol used in this project (i.e. SSL), with a deeper description of the characteristics most relevant to our final solution. Chapter 4 describes our approach to the problem, from the hypotheses to the selected features. Chapter 5 describes the implementation of our prototype system and the dataset that has been used. Chapter 6 describes the entire process used to validate our features and the results achieved. Chapter 7 explains our contributions to the research community, while Section 7.1 describes the limitations of our system and the future work that can follow this project. Lastly, in Section 7.2, we draw the conclusions of this Master Thesis project.



Chapter 2

State of the Art

In this chapter we discuss the main works from the last decade in the literature. Firstly, we give a preliminary introduction to the concepts that are used throughout this thesis. Secondly, we group the literature into two clusters. The first one includes the botnet detection techniques that have been evaluated by researchers. The second cluster includes advances in possible future botnet architectures and implementations. The detection techniques section is divided in two macro-groups: signature-based and anomaly-based. Furthermore, these two groups are composed of several sections, which represent the main protocols used by those specific techniques in order to detect botnets. The section regarding the advances is structured to describe the works based on the protocols they exploit and the topics they are related to (e.g. Mobile).

2.1 Preliminary concepts

2.1.1 FFSN

In recent years, botmasters have started to use a new offensive technique called Fast-Flux Service Network (FFSN), which exploits the DNS protocol. It has been introduced by cyber-criminals to sustain and protect their service infrastructures, making them more robust and more difficult to take down. This technique essentially aims to hide the C&C server locations. The main idea is to have a high number of IP addresses associated with a single domain (or multiple domains), swapped in and out with very high frequency by changing DNS records. It is widely deployed in modern botnets.
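As a rough illustration of how FFSN behavior can be spotted from passive DNS data, the following sketch flags domains that advertise many IPs across unrelated networks with short TTLs. The thresholds (3 IPs, 2 networks, TTL ≤ 300 s), the /16 grouping and the domain names are illustrative assumptions, not values taken from the literature:

```python
# Hypothetical passive-DNS observations: (domain, resolved IP, TTL in seconds).
observations = [
    ("shop.flux-example.biz", "203.0.113.10", 120),
    ("shop.flux-example.biz", "198.51.100.7", 120),
    ("shop.flux-example.biz", "192.0.2.33", 120),
    ("static.example.com", "203.0.113.80", 86400),
]

def looks_like_fast_flux(obs, domain):
    """Flag a domain that maps to many IPs in unrelated networks and is
    advertised with short TTLs. Thresholds and the crude /16 grouping
    are illustrative, not tuned values."""
    ips = {ip for d, ip, _ in obs if d == domain}
    networks = {".".join(ip.split(".")[:2]) for ip in ips}  # crude /16 key
    ttls = [ttl for d, _, ttl in obs if d == domain]
    short_ttl = bool(ttls) and max(ttls) <= 300
    return len(ips) >= 3 and len(networks) >= 2 and short_ttl

print(looks_like_fast_flux(observations, "shop.flux-example.biz"))  # True
print(looks_like_fast_flux(observations, "static.example.com"))     # False
```

Real FFSN detectors track these statistics over time and typically also look at the diversity of the autonomous systems behind the IPs.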


2.1.2 n-gram analysis

An n-gram is a contiguous sequence of n items from a given sequence of text or speech. This technique is characterized by the number of items used: the n-gram size can be one item (unigram), two items (bigram), three (trigram), and so on. The items can be letters, words, syllables, or phonemes. An n-gram model, in a few words, is a probabilistic language model that uses a Markov model to predict the next item. n-gram models are widely used in different scientific fields, such as probability, computational linguistics and computational biology. They are also widely used in computer security, and as will be seen in this work, several detection techniques use n-gram analysis as a core technique.
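A minimal sketch of character n-gram extraction, the building block behind the similarity and randomness checks mentioned in this thesis:

```python
from collections import Counter

def ngrams(seq, n):
    """All adjacent windows of length n; applied to a string this
    yields its character n-grams."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

print(ngrams("abcd", 2))   # ['ab', 'bc', 'cd']

# Frequency profiles of n-grams are what similarity and
# "does this string look random?" checks typically compare.
profile = Counter(ngrams("wikipedia", 2))
print(profile["ip"])       # 1
```

An n-gram model would then estimate, from many such profiles, how likely each next item is given the previous n-1 items.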


2.1.3 Mining techniques

The definition of mining techniques we refer to was given by Feily et al. in their work [17]. Mining-based techniques can be briefly described as those detection methods that use data mining techniques such as machine learning, clustering and classification.


2.2 Detection Techniques Classification

In the literature there are several studies on botnet detection techniques [73] [17] and, as stated in these works, there are two main approaches to detect botnets. The first is to set up honeypots within the network infrastructure. The second is to use Intrusion Detection Systems (IDS). The focus of this thesis is on the second approach.

All the detection methods proposed in the literature can be clustered into two macro-groups: signature-based and anomaly-based detection techniques. In addition, we divide each of these groups into several sections. These sections describe the main techniques (signature-based or anomaly-based) that work with a specific protocol or multiple protocols. Our categorization differs significantly from those proposed by Feily et al. [17] and Zeidanloo et al. [73]. The first work identifies four macro-groups of detection techniques: signature-based, anomaly-based, DNS-based and mining-based. Data mining is fundamental for detecting botnets. However, we do not consider mining a significant feature for distinguishing detection techniques. Mining techniques focus on the construction or study of systems that can learn from data; they can serve as a helpful method within a detection system, but they do not define an abstract concept for detecting anomalies, as anomaly-based and signature-based techniques do. Consequently, we treat them as important parameters in our classification of detection methods, rather than as a class of their own. Furthermore, mining techniques are widely deployed in detection systems due to their effectiveness, and this would lead that classification to place most techniques in a single cluster. Another important difference is that the authors make a clear distinction for DNS-based techniques and not for other



protocols, without providing specific reasons why DNS is more relevant than other protocols. Zeidanloo et al. [73] propose a different classification in their paper. They divide anomaly-based techniques into host-based and network-based techniques, and further divide network-based techniques into active and passive. In our opinion, most research papers would fall into the passive network-based group, because most of the techniques proposed in the literature are of this kind. These two classification approaches do not clearly show which protocols are exploited by the detection techniques. This aspect is very important for us, because it helps to show which botnet detection techniques exist for a specific protocol. A structure that clearly shows the protocols used by each technique gives a more organized view of the literature.



2.2.1 Signature-based

Signature-based detection is one of the most popular techniques used in Intrusion Detection Systems (IDS). A signature is the representation of a specific malicious behavioral pattern (e.g. at the network level, application level, etc.). The basic idea of signature-based detection systems is to find a match between the observed traffic and one of the signatures stored in a database. If a match is found, a malicious behavior is spotted, and the administrator is warned. Creating C&C traffic signatures is a time-consuming process. This type of technique is effective when the botnet behavior does not change over time, because its patterns will then match exactly the signature previously created. To generate these signatures, the malware has to be analyzed. Generally there are two main approaches for this: manual analysis and honeypots. Both of these solutions are expensive (e.g. human costs vs infrastructure costs). Honeypots are efficiently used to detect the presence of malware and to analyze its behavior. However, they are not reliable enough to create accurate signatures, therefore manual analysis is preferred. For this reason, automatic techniques for generating C&C channel detection signatures have been proposed as an alternative.

IRC

One of the most famous signature-based detection techniques proposed in the last decades is Rishi, which focuses on the IRC (Internet Relay Chat) protocol. Goebel and Holz propose [20] a regular-expression, signature-based botnet detection technique for IRC bots. It focuses on suspicious IRC servers and IRC nicknames, using passive traffic monitoring. Rishi collects network packets and analyses them, extracting the following



information: time of the suspicious connection, IP and port of the suspected source host, IP and port of the destination IRC server, channels joined and the nickname used. Connection objects are created to store this information, and they are inserted into a queue. Rishi then tests host nicknames against several regular expressions that match known bot nicknames. Each nickname receives a score from a scoring function, which checks different criteria within the nickname, for example: special characters, long numbers and substrings. If the score reaches a certain threshold, the object is flagged as a possible bot. After this step, Rishi checks nicknames against whitelists and blacklists. These lists are both static and dynamic. The dynamic lists (both white and black) are updated using n-gram analysis for similarity checks: if a nickname is similar to one present in the whitelist (or blacklist), it is automatically added to the dynamic whitelist (or dynamic blacklist). Rishi has shown to be an effective and simple way of detecting IRC-based bots based on characteristics of the communication channel. However, there are two main limitations that make this solution unfeasible for modern botnets. Rishi bases its checks on the regular expressions of known bots; therefore, it cannot detect bots whose nicknames follow different patterns. Nevertheless, the most important drawback is that this solution works with the IRC protocol, and modern botnets do not use it anymore. Thus, we consider this solution unsuitable for modern botnets.
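The scoring idea can be sketched as follows; the patterns, weights and threshold are invented for illustration and are not Rishi's actual rules:

```python
import re

# Invented patterns and weights, only to illustrate the mechanism;
# they are not Rishi's actual regular expressions.
SUSPICIOUS = [
    (re.compile(r"\d{4,}"), 2),               # long runs of digits
    (re.compile(r"[|\[\]{}^`]"), 1),          # unusual special characters
    (re.compile(r"bot|xdcc|spm", re.I), 3),   # known bot-like substrings
]

THRESHOLD = 3  # illustrative cut-off for flagging a possible bot

def nickname_score(nick):
    """Sum the weights of every suspicious criterion the nickname matches."""
    return sum(weight for rx, weight in SUSPICIOUS if rx.search(nick))

for nick in ("alice", "DCOM-1837162", "[XP]|bot|8471"):
    print(nick, nickname_score(nick), nickname_score(nick) >= THRESHOLD)
```

The whitelist/blacklist step would then run after this score, catching nicknames that are merely similar to already-known good or bad ones.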

HTTP

As regards HTTP, Perdisci et al. [49] propose a network-level behavioral malware clustering system. They analyze similarities among groups of malware that interact with the Web. Their system learns a network behavior model for each group. This is done by defining similarity metrics among malware samples and grouping them into clusters. These clusters can be used as high-quality malware signatures (e.g. network signatures), which are then used to detect machines in a monitored network that are compromised by malware. The system is able to unveil similarities among malware samples, and this way of clustering can serve as input for algorithms that automatically generate network signatures. However, the system has drawbacks that are very significant for modern botnets. Encryption is the main limitation of this technique: the analysis is done on the HTTP content of requests and responses, thus encrypted messages can completely mislead the entire system. Moreover, the signature process relies on testing signatures against a large dataset of legitimate traffic, and collecting a perfectly "clean" traffic dataset may be very hard in practice. These drawbacks make this solution unreliable for modern C&C botnets.



Encrypted protocols

This section is somewhat particular in comparison with the previous ones: it regards signature-based detection techniques that are able to detect botnets by analyzing the encrypted messages of protocols. Rossow and Dietrich propose ProVeX [54], a system that automatically derives probabilistic vectorized signatures. It is completely based on network traffic analysis. Given prior knowledge of the C&C encryption scheme (e.g. encryption/decryption keys and algorithm), obtained through reverse engineering, ProVeX starts its training phase by brute-force decrypting all the C&C messages of a malware family. In a second phase, it groups them by message type and calculates the distribution of characteristic bytes. Afterwards, ProVeX derives probabilistic vectorized signatures, which can be used to verify whether decrypted network packets stem from a specific C&C malware family or not. The tool works properly for all malware families whose encryption algorithm is known. ProVeX is able to identify the C&C traffic of malware families that are otherwise undetectable, and its computational costs are low. It is a stateless solution, therefore it does not need to keep semantic information about the messages. However, ProVeX applies brute-force decryption to all network packets. This is the biggest limitation of the tool, because it assumes we are able to decrypt them. If the messages are sent through SSL, it would be (theoretically, if well implemented) impossible to decrypt and analyze the bytes of the payload, so ProVeX would not be a viable solution at all.
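A much-simplified illustration of the byte-distribution idea follows. ProVeX's real signatures are probabilistic and derived per message type; here we merely compare normalized byte-frequency vectors of already-decrypted messages with an L1 distance, and the sample plaintexts are hypothetical:

```python
from collections import Counter

def byte_distribution(messages):
    """Normalized 256-bin byte-frequency vector over a set of
    (already decrypted) C&C messages."""
    counts = Counter(b for m in messages for b in m)
    total = sum(counts.values()) or 1
    return [counts.get(i, 0) / total for i in range(256)]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical plaintexts of one malware family vs. an unrelated message.
family = byte_distribution([b"PING 01", b"PING 02"])
candidate = byte_distribution([b"PING 07"])
unrelated = byte_distribution([b"\x00\xff\x13\x37"])

# The candidate sits closer to the family profile than the unrelated one.
print(l1_distance(family, candidate) < l1_distance(family, unrelated))  # True
```

The point of the sketch is the precondition it makes explicit: every step operates on decrypted bytes, which is exactly what SSL denies to a passive observer.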

Blacklisting

One of the simplest signature-based techniques that can be used to mitigate botnets, and which deserves to be mentioned, is IP blacklisting. The idea is to create a blacklist in order to block access to domains or IP addresses used by botmasters' servers. On the Internet it is possible to find several blacklists containing the domain names and IP addresses of specific botnets, like Zeus [74], or of more general websites that host malware, like [34]. One of the greatest advantages of this technique is its ease of implementation. Moreover, the number of false positives generated by such a technique is very low because, if the list is properly maintained, it contains only domains or IP addresses that are certainly malicious. Unfortunately the drawbacks of this technique are significant. The list must be constantly updated, either through the manual work of researchers or using automated systems like honeypots; usually both are used. In addition, botmasters can act freely as long as their domains are not on the blacklist.
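The check itself is trivial, which is exactly the appeal of the technique. In the sketch below the entries are placeholders; a real deployment would load and periodically refresh the sets from feeds such as the trackers cited above:

```python
# Placeholder entries, for illustration only.
IP_BLACKLIST = {"203.0.113.66", "198.51.100.23"}
DOMAIN_BLACKLIST = {"bad-c2.example"}

def is_blacklisted(ip=None, domain=None):
    """Exact-match lookup: cheap and precise, but blind to anything
    not (yet) on the lists."""
    return ip in IP_BLACKLIST or domain in DOMAIN_BLACKLIST

print(is_blacklisted(ip="203.0.113.66"))          # True
print(is_blacklisted(domain="fresh-c2.example"))  # False
```

The second call shows the fundamental weakness discussed above: a freshly registered C&C domain passes unchallenged until someone adds it to the list.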



Conclusions There is an important common drawback of signature-based systems that should be highlighted: unknown botnets, for which signatures have not been generated yet, cannot be detected by IDSs. These systems must therefore always chase botmasters, who have the opportunity to act freely until the malware is analyzed and a signature is created. There can be some flexibility in the matching phase, so malware of the same family can still be detected; the drawback concerns malware completely different from that previously analyzed. Another common disadvantage of signature-based techniques is encryption (or obfuscation). When an encrypted version of the same malware is encountered, it is not recognized by the system, because the encryption changes the behavior or the patterns on which the signature generation is based. [54] has the same problem: if the malware changes its encryption algorithm, it is likely that ProVeX will no longer be able to recognize it. As can be seen from the summary in Table 2.1, there are not many botnet detection techniques based on signatures. The right-most column of Table 2.1 indicates whether these methods implement mining techniques; for the definition of mining techniques we refer to [17]. These techniques are static, and they are certainly not suitable as main detection techniques for modern botnets, which are very dynamic. At most they can be used to complement other, more sophisticated techniques.

Signature-based review

Detection Approach   [20]  [49]  [74]  [34]  [54]
Mining technique       Y     Y     N     Y

Table 2.1: Signature-based detection techniques summary



Anomaly-based detection techniques have been extensively investigated in the past, and they are the most common detection techniques. They attempt to detect botnets by monitoring system activities and classifying them as normal or anomalous. The classification is done by comparing these activities with a model that represents the normal behavior of such activities. Therefore, an anomaly-based detection system is in principle able to detect any type



of misuse that falls outside the normal system behavior. An advantage of such a system is that it can spot new malicious behaviors even when they are not yet known, which cannot be done by a signature-based system. However, the "quality" of these systems depends on the model of normal behavior. Thus, the combination of the selected parameters, which represent the most significant features of the targeted system activity, and the heuristics (or rules) is fundamental for the quality of the results. Since most of the techniques are based on anomaly detection, we will describe them based on the protocols they use. This is done to give the reader a more structured reading and a clearer understanding of the main characteristics of such techniques. IRC In past years, many botnets based their C&C communication channels on the IRC protocol, so researchers started to investigate new techniques to detect them. These techniques examine particular features of the IRC protocol, for example distinguishing IRC traffic generated by bots from that generated by humans, or examining the similarity of nicknames in the same IRC channel. One of the first works on IRC-based C&C botnets was done by Binkley and Singh [9], who propose an algorithm to detect IRC-based botnet meshes. It combines IRC tokenization and IRC message statistics with TCP-based anomaly detection. The algorithm is based on two assumptions: IRC hosts are clustered into channels by channel name, and malicious channels can be recognized by looking at TCP SYN host-scanning activities. The solution has a front-end collector, which collects network information into three tuples, and a back-end. Two of these tuples are related to IRC characteristics; the third is related to TCP SYN packets. During the collection phase, the system calculates a specific metric on TCP packets, defined by the authors and called the TCP work weight.
A high score of this metric would identify a scanner, a P2P host, or a client that is lacking a server for some reason. A host with a high work weight can also be benign. However, if a channel has six hosts out of eight with a high weight, it is likely that something anomalous is going on. The IRC tuples are used mainly for statistical and reporting purposes; at a later stage they are passed to the back-end for report generation. The report is structured so that it is easy to identify evil channels and malicious hosts. The results showed effectiveness in detecting client bots and server bots in IRC channels. However, the system can be easily defeated by trivial encoding of IRC commands. Adding this significant limitation to the fact that the solution works only for the IRC protocol, and that IRC botnets are obsolete, we can state that it is definitely not effective against today's botnets.
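A loose sketch of how such a per-host score and the six-of-eight channel heuristic could be combined (the exact definition of the work weight in [9] may differ; here it is approximated as the fraction of control packets, and the thresholds are illustrative):

```python
def work_weight(syns: int, fins: int, rsts: int, total: int) -> float:
    """Approximate TCP work weight: share of SYN/FIN/RST packets in a host's
    TCP traffic. Scanners and server-less clients score close to 1."""
    return (syns + fins + rsts) / total if total else 0.0

def channel_is_anomalous(weights, high=0.5, min_fraction=0.75):
    """Flag an IRC channel when most member hosts have a high work weight
    (e.g. six hosts out of eight)."""
    n_high = sum(1 for w in weights if w >= high)
    return n_high / len(weights) >= min_fraction

# Hypothetical channel: six scanning hosts and two normal clients
weights = [0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.1, 0.05]
print(channel_is_anomalous(weights))  # True
```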



Strayer et al. [57] propose a network-based detection technique for IRC-based botnets. The first step of this solution is to filter the data traffic. This filtering phase is done in a simple way using black/white lists of known good websites; for example, inbound and outbound traffic from Amazon is considered "safe" and is discarded. The remaining data is then classified into flow-based groups using a Naïve Bayes machine-learning algorithm. These groups are correlated in order to find clusters of flows that share similar characteristics, like timing and packet size. As a final step, the solution applies a topological analysis to these clusters in order to detect and identify botnet controller hosts. To be more accurate, the technique requires payload analysis. The solution was able to identify nine zombies out of ten within the network, and this approach shows that machine-learning classifiers can perform well and be trained effectively on legitimate and malicious traffic. However, we cannot say whether it is a reliable solution or not, due to the limitations of the dataset. Moreover, it works specifically for IRC-based botnets, which we consider obsolete. In [66] Wang et al. propose a novel approach for IRC-based botnet detection. The algorithm is based on the channel distance, which represents the similarity among nicknames in the same channel. The authors' assumption is that bot nicknames within one channel have the same structure: even though they contain random parts (letters, numbers or symbols), the length of each of these parts is basically the same. Each nickname is represented as a four-tuple vector (length of nickname, number of letters, number of numbers, number of symbols), and a Euclidean distance is used to measure the distance between two nicknames and, from that, the channel distance.
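The nickname vector and distance described above might be sketched as follows (how [66] aggregates pairwise distances into a channel distance is simplified here to a mean):

```python
import math

def nick_vector(nick: str):
    """Four-tuple: (total length, letters, digits, symbols)."""
    letters = sum(c.isalpha() for c in nick)
    digits = sum(c.isdigit() for c in nick)
    return (len(nick), letters, digits, len(nick) - letters - digits)

def nick_distance(a: str, b: str) -> float:
    """Euclidean distance between two nickname vectors."""
    return math.dist(nick_vector(a), nick_vector(b))

def channel_distance(nicks) -> float:
    """Mean pairwise nickname distance over a channel (simplified aggregation)."""
    pairs = [(a, b) for i, a in enumerate(nicks) for b in nicks[i + 1:]]
    return sum(nick_distance(a, b) for a, b in pairs) / len(pairs)

bots = ["bot123ab", "bot456cd", "bot789ef"]   # same structure -> distance 0
humans = ["alice", "Bob_42", "x"]             # varied structure -> distance > 0
print(channel_distance(bots), channel_distance(humans))
```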
The algorithm proposed on the basis of these assumptions is able to detect IRC-based botnets, and the work achieves good results. Needless to say, this technique exploits particular features of the IRC protocol, so it is not suitable for the detection of modern botnets. In [32] Lu et al. propose a new approach using an n-gram technique. The authors' assumption is that the content of malicious traffic is less diverse than benign traffic. First they classify the network traffic into application groups. Secondly, they calculate the n-gram distribution of each session in a determined time slot. These distributions are then clustered, and the group with the smallest standard deviation is flagged as the botnet cluster. Through this technique it is possible to detect groups of hosts that are using the same C&C protocol. The system is shown to work efficiently for IRC botnets, but it has not been tested on other protocols, so it cannot be considered reliable for modern botnets. Lastly, this is the only IRC-based technique that unifies anomaly-based and signature-based approaches. However, signatures do not play the main role in this solution, so we have preferred to describe this work as an anomaly-based technique.
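The content-diversity intuition behind [32] could be loosely illustrated with byte bigrams (the clustering step and the exact statistic used by the authors are omitted; the payloads below are hypothetical):

```python
def bigram_diversity(payload: bytes) -> float:
    """Share of distinct byte bigrams in a payload; repetitive bot chatter
    tends to score lower than varied human-generated content."""
    if len(payload) < 2:
        return 0.0
    grams = {payload[i:i + 2] for i in range(len(payload) - 1)}
    return len(grams) / (len(payload) - 1)

def flag_least_diverse(clusters):
    """clusters: dict name -> list of session payloads.
    Returns the cluster whose sessions are, on average, least diverse."""
    mean = lambda xs: sum(xs) / len(xs)
    return min(clusters, key=lambda c: mean([bigram_diversity(p) for p in clusters[c]]))

clusters = {
    "irc-bots": [b"JOIN #x\nJOIN #x\nJOIN #x\n"] * 3,      # repetitive C&C chatter
    "browsing": [b"the quick brown fox jumps over"] * 3,   # varied benign content
}
print(flag_least_diverse(clusters))  # "irc-bots"
```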



Developing detection techniques for a specific protocol, like IRC, gives researchers the opportunity to exploit specific features of the protocol itself. Detection methods have improved over the years and are reliable for such protocol-specific botnets. However, these techniques become less reliable, even useless, in a scenario where different protocols are used, which is the case for modern botnets. Nowadays botmasters do not use IRC for their C&C communication channels, but prefer more common protocols like HTTP, HTTPS, DNS, etc. Therefore, we can consider these detection techniques outdated, even though they were successful in their own scenario. HTTP With the passing of time, botmasters realized that concealing their malicious traffic as normal traffic would make them less detectable. They started implementing their C&C channels over HTTP, in order to blend in with common traffic; moreover, the development and deployment of such botnets are quite easy. Considering the enormous amount of traffic generated every day, detecting malicious communication becomes a very hard task. However, researchers started to investigate this area, and several solutions have been proposed to address the threat. They were able to discover several botnets using HTTP in different ways: Zeus [10], Torpig [56], Twebot [40] and Conficker [50] are some examples. Besides these takedown operations, researchers have in the past years also proposed several techniques to detect such botnets. Xiong et al. [70] propose a host-based security tool that is able to analyze the user's surfing activities in order to identify suspicious outbound network connections.
The system monitors and analyzes outbound network requests in three steps: a sniffer intercepts and filters all outbound HTTP requests; a predictor component predicts legitimate outbound HTTP requests based on the user's activities, parsing the Web content retrieved out-of-band (the predictor fetches the requested Web page independently of the browser); and a user interface indicates whether or not observed network attempts were initiated by the user. The disadvantage of this system is that the user has to make the final decision. Users are very often unaware of what is really going on, and in most cases they cannot be expected to decide by themselves, because they lack the proper knowledge. However, an important feature of this method is that it works independently of the browser. Hsu et al. [25] propose a novel way to detect, in real time, web services hosted by botnets that use FFSN. The technique is based on three assumptions about intrinsic and invariant characteristics of botnets studied by the authors: i) the request delegation model: a FFSN bot does not process users' requests itself, therefore bots can



be used by the FFSN as proxies, transparently redirecting requests to the web server; ii) bots may not be dedicated to malicious services; iii) the network links of bots are usually not comparable to dedicated servers' links. Whenever a client tries to download a webpage from a suspected FFSN bot, the system starts monitoring the communication. To decide whether the server is suspicious, three delay metrics are determined: network delay, processing delay and document fetch delay. These metrics are used as input to a decision algorithm, which determines whether the server is a FFSN bot using a supervised classification framework. The results achieved are largely positive, with a high detection rate and a low error rate. However, there is a disadvantage that should not be underestimated: since the technique flags websites hosted on "slower" servers, it may also block legitimate servers that simply run on lower-end hardware. The disadvantage of HTTP-based detection techniques is that they are not reliable in the presence of encryption. In addition, their detection capabilities are limited, both technically and computationally. It is hard to find patterns that clearly distinguish malicious from benign traffic, and at the same time the quantity of HTTP traffic is huge, making deep analysis very expensive. These two gaps make current HTTP techniques unreliable for botnet detection. However, even though botmasters are starting to move in a new direction, HTTP-based botnets are certainly still "alive" within the cyber-crime world. In the near future, HTTP could be used as a support or main protocol for botnets based on social networks or mobile devices. P2P Botnets with a peer-to-peer architecture have been widely used in recent years and are still active (e.g. a Zeus variant uses P2P C&C).
Botmasters use P2P communications to send commands to and receive data from the compromised hosts that belong to the botnet infrastructure. Every bot is able to provide data (commands, configuration files, ...) to the other bots. The principal advantage of a P2P architecture is its robustness against mitigation measures. In a centralized C&C botnet, the servers represent a single point of failure: if security researchers manage to take them down, the botnet can be considered destroyed. P2P architectures are decentralized, so there is no single point of failure, and it becomes really hard to track and take them down. François et al. [18] propose a novel approach to track large-scale botnets, focusing on the automated detection of P2P-based botnets. Firstly, NetFlow-related data is used to build a host dependency model, which captures all the information about host conversations (who talks with whom). Secondly, link analysis is performed using the PageRank algorithm with an additional clustering process, in order to detect stealthy botnets efficiently. This link analysis builds clusters of bot-infected systems that have similar behaviours. Unfortunately, the system is not always able to distinguish between legitimate P2P networks and P2P-based botnets. Yen and Reiter [72] develop a technique to identify P2P bots and distinguish them from file-sharing hosts. The detection is based on flow records (i.e. traffic summaries) without inspecting the payloads. The characteristics of the network traffic that are analyzed, which are independent of specific malicious activities, are: volume, peer churn, and human-driven vs. machine-driven behavior (temporal similarities among activities). Due to the NetFlow characteristics, the technique is scalable and cost-effective for busy networks, and immune to bot payload encryption. It is able to distinguish file-sharing from C&C P2P traffic, but unfortunately it is not always able to distinguish legitimate non-file-sharing P2P networks. Noh et al. [43] focus on the traffic that peer bots generate to communicate with a large number of remote peers; this traffic shows similar patterns at irregular time intervals. First, the correlation among P2P botnets is analyzed from a significant volume of UDP and TCP traffic. The second step is the compression of duplicated flows through flow grouping, and the flow state is defined via a 7-bit state. A Markov model is created for each cluster based on these states. P2P C&C detection is done by comparing observed network traffic with the previously generated Markov models, both legitimate and C&C ones. If the compared traffic has values similar to the C&C model, it is flagged as botnet traffic. A clear disadvantage of this technique is its dependency on the training phase.
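The model-comparison step of [43] could be loosely sketched as follows (flow states are reduced to single symbols here; the real system derives 7-bit states from flow features, and the training sequences below are hypothetical):

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """First-order transition probabilities estimated from state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def avg_log_likelihood(model, seq, eps=1e-9):
    """Per-transition log-likelihood of a sequence under a model."""
    ll = sum(math.log(model.get(a, {}).get(b, eps)) for a, b in zip(seq, seq[1:]))
    return ll / max(1, len(seq) - 1)

def classify(cc_model, benign_model, seq):
    """Flag traffic whose transitions fit the C&C model better."""
    return ("c2" if avg_log_likelihood(cc_model, seq) >
            avg_log_likelihood(benign_model, seq) else "benign")

cc = train_markov(["ABABABAB", "ABABAB"])       # hypothetical C&C flow states
benign = train_markov(["ACDCACDC", "CDCACD"])   # hypothetical legitimate states
print(classify(cc, benign, "ABABA"))  # "c2": matches the C&C transition pattern
```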
The detection happens when the P2P traffic is similar to the "trained" traffic, so if new, different C&C traffic appears on the network, it will not be detected. Elhalabi et al. [16] give an overview of other P2P detection techniques proposed in the past years. This work was published in 2014, and it is a clear demonstration that P2P botnets are currently active and still play a relevant role. P2P detection techniques are generally based on comparing a legitimate behavioral model (e.g. file-sharing networks) with a malicious behavioral model (e.g. C&C P2P networks). This turns into a clear disadvantage when malware starts using legitimate P2P networks: the malicious software becomes less detectable, if not completely undetectable. This is one of the main limitations of current P2P-based detection techniques, which makes us consider them not reliable enough (as a final solution) despite the good results achieved in research. Therefore, more effort is needed from researchers to address this type of C&C botnet. The greatest advantage of P2P botnets, in comparison with centralized C&C botnets, is their structure. Having no clear single point of failure makes their



structures more robust than centralized models based on C&C servers. This is one of the main reasons why botmasters will keep trying to implement their botnets on this type of infrastructure. However, the complexity of these structures is proportional to their robustness: they are very hard to implement and require great skill. DNS The introduction of FFSN into botnet infrastructures attracted researchers' attention. Researchers started to investigate this protocol in order to find possible detection solutions, since FFSN is the main trend of current botnets. Today, being able to distinguish malicious from benign domains allows researchers to spot many of these botnets. Botnets known to use this technique are Bobax, Kraken, Sinowal (a.k.a. Torpig), Srizbi, Conficker A/B, Conficker C and Murofet (and probably more). Several detection methods have been proposed in recent years; in this section we highlight those we consider most relevant. Villamarin and Brustoloni [62] evaluate two approaches to identify botnet C&C servers based on anomalous Dynamic DNS traffic. The first approach looks for domain names with high query rates, or whose queries are highly concentrated in a short time slot. This approach is not able to precisely distinguish legitimate servers from infected ones, and thus generates a high number of false positives. The second approach, which detects DDNS responses with a high number of NXDOMAIN replies, proves more effective. The authors show that a host continuously trying to reach a non-existent domain name may be trying to connect to a C&C server that has been taken down. This work can be considered one of the ancestors of DNS-based (and premonitory of FFSN-based) techniques applied to botnet infrastructures. It opened the doors to a new battle, where criminals try to exploit the DNS protocol to protect their botnets and researchers try to find new ways to detect them. Perdisci et al.
[47] present a system based on Recursive DNS (RDNS) traffic traces. The system is able to accurately detect malicious flux service networks. The approach works in three steps: data collection, conservatively filtered for efficiency purposes; clustering, where domains that belong to the same network are grouped together by a clustering algorithm; and classification, where domains are classified as either malicious or benign using a statistical supervised learning approach. In contrast to previous works, it is not limited to the analysis of suspicious domain names extracted from existing sources such as spam emails or blacklists. The authors are able to detect malicious flux services advertised through different forms of spam, when they are accessed by users scammed by malicious advertisements. The technique seems suitable for spam-filtering applications, also because the false positive rate is below 0.002%.
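A toy illustration of the kind of DNS features such flux classifiers rely on (the feature set and thresholds here are illustrative, not those of [47]; note that CDNs can show similar patterns, which is precisely why real systems use learned classifiers rather than fixed cut-offs):

```python
def looks_like_flux(resolved_ips, asns, ttl_seconds):
    """Heuristic fast-flux indicator: many resolved IPs spread across many
    autonomous systems, with short TTLs that enable rapid re-mapping."""
    return len(resolved_ips) >= 10 and len(asns) >= 5 and ttl_seconds <= 300

# Hypothetical lookups (RFC 5737 example addresses)
flux = looks_like_flux({f"198.51.100.{i}" for i in range(12)},
                       {"AS1", "AS7", "AS9", "AS12", "AS33", "AS40"},
                       ttl_seconds=60)
static = looks_like_flux({"192.0.2.10"}, {"AS64500"}, ttl_seconds=86400)
print(flux, static)  # True False
```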



Antonakakis et al. [3] propose Notos, a system that dynamically assigns a reputation score to domain names not yet known to be malicious or benign. It is based on DNS characteristics that the authors claim can distinguish benign from malicious domains: network-based features, zone-based features and evidence-based features. These features are used to build models of known benign and malicious domains, and the models are used to compute reputation scores for new domains, indicating whether they are malicious or not. The authors demonstrate that detection accuracy is very high; however, the system has some limitations. One of the biggest lies in the training phase Notos needs before it can assign reputation scores: for domain names with very little historical information, Notos cannot assign a score. This requirement decreases its reliability, because, given the dynamism of modern botnets, we cannot expect a large training phase; we should be able to work even with little information. Bilge et al. [8] later present a new system called EXPOSURE, which performs large-scale passive DNS analysis to detect domain names involved in malicious activities. The system does not consider just botnet-related activities but has a more generic scope (e.g. spam detection). The technique does not rely on prior knowledge of a domain's malicious activities, and only a very short training period is needed for it to work properly. EXPOSURE does not rely on network-based features as strongly as Notos does; it is based on 15 different features grouped into 4 categories: time-based, DNS answer-based, TTL value-based and domain name-based. EXPOSURE is able to detect malicious FFSN with the same accuracy as Perdisci's work [47], which is one of its main advantages in comparison with Notos.
The shorter training phase and the smaller amount of data needed before the system starts working properly are other important advantages. The limitations of EXPOSURE come from the quality of the training data, and an attacker informed about how EXPOSURE is implemented could probably avoid detection, even though this would mean less reliability among his hosts. However, the problem of the quantity of information is solved, and EXPOSURE can definitely be considered an important improvement. Yadav et al. in [71] propose a different methodology to detect flux domains, looking at patterns inherent in domain names generated by humans versus those generated automatically (e.g. by an algorithm). The detection method is divided into two parts. First, the authors propose different ways to group DNS queries: by top-level domain, by IP-address mapping, and by the connected components they belong to. Second, for each group, distribution metrics of the alphanumeric characters or bigrams are computed. Three metrics are proposed: the information entropy of the distribution of alphanumeric characters within a group of domains (Kullback-Leibler divergence); the Jaccard index, to compare the set of bigrams of a malicious domain name with those of good domains; and the edit distance (Levenshtein), which measures the number of character changes needed to convert one domain name into another. The three methodologies are applied to each dataset. One of the key contributions of the paper is the relative performance characterization of each metric in different scenarios: ranking the measurement methods, Jaccard comes first, followed by the edit distance, and finally the KL divergence. The methodology is able to detect well-known botnets such as Conficker, but also unknown and unclassified botnets (Mjuyh). It can be used as a first alarm to indicate the presence of domain-fluxing services in a network. The main limitation arises when an attacker is able to automatically generate domain names that have a meaning and do not look randomized: in that case the method would be completely useless. However, it is also very hard to implement an algorithm that makes automatically generated domain names look convincingly human-generated. Thus, the technique can be very useful, but it cannot be used as the main botnet detection technique, because looking at domain names alone is not enough to claim that they belong to a botnet. Antonakakis et al. in [4] present a new technique to detect randomly generated domains. Domain generation algorithms (DGAs) dynamically produce a large number of random domain names and then select a small subset for actual command and control. The technique presented by the authors does not use reverse engineering, which would be a hard path, since bots are often updated and obfuscated. The proposed system, called Pleiades, analyzes DNS queries for domains that receive NXDOMAIN responses.
Pleiades looks for large clusters of NXDOMAINs that have similar syntactic features and that are queried by many "potentially" compromised machines in a given time window. It uses a lightweight DNS-based monitoring approach, which allows it to focus its analysis on a small part of the entire traffic; thus, Pleiades scales well to very large ISP networks. One limitation is that Pleiades cannot distinguish different botnets that use the same DGA; furthermore, it cannot reconstruct the exact domain generation algorithm. Nonetheless, the solution can be considered reliable in terms of results, and it analyzes streams of unsuccessful DNS queries instead of requiring manual reverse engineering of the malware's DGA. FFSN are a hot topic that has been investigated a lot within the research community. Very good proposals have been presented, but it is necessary to keep working in this direction in order to make their evasion as hard as possible.
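The string metrics used in [71] can be sketched directly (a simplified illustration; the paper computes them over grouped query sets rather than single names):

```python
import math
from collections import Counter

def entropy(name: str) -> float:
    """Shannon entropy of the character distribution; algorithmically
    generated names tend to score higher than dictionary words."""
    freq = Counter(name)
    n = len(name)
    return -sum(c / n * math.log2(c / n) for c in freq.values())

def jaccard_bigrams(a: str, b: str) -> float:
    """Jaccard index over character bigrams of two domain names."""
    ba = {a[i:i + 2] for i in range(len(a) - 1)}
    bb = {b[i:i + 2] for i in range(len(b) - 1)}
    return len(ba & bb) / len(ba | bb)

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

print(entropy("xqk7v2jw") > entropy("google"))  # randomized names score higher
print(edit_distance("kitten", "sitting"))       # 3
```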



Multi Protocol Analysis Researchers have also proposed solutions that exploit several protocols. In 2004 Wang and Stolfo [64] proposed PAYL, a payload-based anomaly detector that works for several application protocols. The system builds payload models based on n-gram analysis: a payload model is computed for every combination of payload length, port, service and direction of payload flow. PAYL is thus able to clearly identify and cluster payloads of different application protocols. The payload is inspected and the occurrences of each n-gram are counted; the standard deviation and variance are also calculated. With n equal to 1, this yields the average frequency of each byte value (0-255). A set of payload models is computed, storing the average byte frequency and the standard deviation of each byte frequency for payloads of a specific length and port. After this training phase, the system detects anomalous traffic by computing the distribution of each incoming payload. If there is a significant difference between the normal payload distribution and the incoming one, calculated using a standard distance metric for comparing two statistical distributions, the detector flags the packet as anomalous and generates an alert. The authors showed successful results: they were able to detect several attacks by analyzing network traffic, with a false positive rate of 0.1%. Nonetheless, in the presence of encryption this technique loses all its effectiveness, because it cannot properly analyze the payload. Furthermore, under certain conditions an attacker could evade the technique, although this is unlikely, because the attacker would need access to the same information as the victim in order to replicate the same network behavior.
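A minimal sketch of such a 1-gram payload model (the distance below is a simplified Mahalanobis-style metric; PAYL's exact per-length/per-port model bookkeeping is omitted, and the training payloads are hypothetical):

```python
import statistics
from collections import Counter

def byte_freqs(payload: bytes):
    """Relative frequency of each byte value 0-255 in a payload."""
    c = Counter(payload)
    return [c.get(b, 0) / len(payload) for b in range(256)]

def train_model(payloads):
    """Mean and standard deviation of each byte's relative frequency."""
    rows = [byte_freqs(p) for p in payloads]
    mean = [statistics.mean(col) for col in zip(*rows)]
    std = [statistics.pstdev(col) for col in zip(*rows)]
    return mean, std

def distance(model, payload, alpha=0.001):
    """Sum of |freq - mean| / (std + alpha); large values flag anomalies."""
    mean, std = model
    f = byte_freqs(payload)
    return sum(abs(f[b] - mean[b]) / (std[b] + alpha) for b in range(256))

model = train_model([b"GET /index HTTP/1.1", b"GET /home HTTP/1.1"])
print(distance(model, b"GET /about HTTP/1.1") <
      distance(model, bytes(range(32))))  # binary-looking bytes score far higher
```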
Nowadays, detection techniques should be able to deal with encryption in order to be reliable and effective, and should require as little training as possible (ideally none at all). After the success of PAYL and its new way of approaching anomaly-based techniques in IT security, other researchers proposed more complex models that try to model the n-gram distribution more efficiently, for example Ariu et al. [5], Perdisci et al. [48] and Wang et al. [63]. Burghouwt et al. [13] propose CITRIC, a tool for passive host-external analysis that exploits the HTTP and DNS protocols. The analysis focuses on causal relationships between traffic flows, prior traffic and user activities. The system tries to find malicious C&C communications by looking for traffic flows with anomalous causes, comparing them to previously identified direct causes of traffic flows. The positive results obtained by the authors show that it is possible to detect covert C&C channels by examining causal relationships in the traffic. Unfortunately, this method needs to monitor not just the network traffic but also the users' keystrokes. The system must therefore be installed on each machine, and additionally there could be several



privacy problems with employees. Thus this method, even though it seems good, has many problems that can severely obstruct its deployment. Another work exploiting more than one protocol is proposed by Gu et al. in [24]. This work describes a network-based anomaly detection method, BotSniffer, to identify botnet C&C channels within a local area network. The tool does not need any prior knowledge of signatures or C&C server addresses, and works with IRC and HTTP. It exploits the spatial-temporal correlation and similarity properties of botnets, based on the assumption that all bots in a botnet run the same bot program and conduct their attacks in a similar manner. BotSniffer groups hosts based on the IRC channels or web servers they have contacted. Once the groups are defined, the system checks whether enough hosts in a group perform similar malicious activities in a time slot, and eventually flags them as being part of a botnet. Thanks to its several correlation and similarity analysis algorithms, BotSniffer is able to identify hosts that show strong correlations in their activities as bots of the same botnet. Nevertheless, BotSniffer has some important limitations. The main one is protocol matching: if a bot is able to use protocols other than IRC and HTTP, it will not be detected, because the whole solution is based on features of these two protocols. Protocol Independent In the literature it is also possible to find detection techniques that are able to detect C&C botnets without depending on a specific protocol. One of these works is BotMiner [23], which works in a similar fashion to its predecessor ([24]), but differs in the group settings, using server IP addresses. The authors of BotMiner analyze network traces that include normal traffic and C&C botnet traffic (containing IRC, HTTP and P2P traces). One of the improvements of BotMiner over its predecessor is that it is protocol independent.
Moreover, BotMiner proves more efficient than its predecessor. However, if botmasters delay bots' tasks or slow down their malicious activities (e.g. spam slowly), they are able to avoid detection. Bots can thus remain undetected if they assume a stealthy behavior and make as little "noise" as possible (e.g. by decreasing the number of interactions as much as possible). Cavallaro et al. [14] propose a cluster-based network traffic monitoring system. The properties analyzed by the detection system are based on network features and timing relationships. Through this technique they are able to reach good detection results without inspecting payloads, which adds robustness to the proposed detection method; the system works properly on HTTP, IRC and P2P. It works on different protocols and is also signature independent, because it analyses the malware behavior and tries to find semantic matches. The disadvantages are the computational costs and the time needed to accomplish



its tasks. Moreover it should be installed on every user machine. Another limitation is the model creation, which is a typical limitation of anomalybased systems. If the model is not correctly constructed, then the risk of false positives definitely increases. Protocol independent detection techniques are probably the most powerful since they are able to detect botnets looking at general traffic. However, it is very hard to find some patterns in general traffic that let us to detect with high precision the presence of a botnet. Today, modern botnets are camouflaging very well in the network traffic, therefore it is almost impossible to find clear patterns using general network information without mining the messages of a specific protocol. A solution protocol independent would be the best case scenario for a security researcher.
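The group-then-correlate idea behind BotSniffer and BotMiner can be illustrated with a small sketch. The event format, threshold values and names below are invented for illustration only; neither paper specifies this exact interface, and the real systems use considerably more sophisticated correlation algorithms.

```python
from collections import defaultdict

# Hypothetical event records: (host, contacted_server, activity_type, time_slot).
# A toy stand-in for spatial-temporal correlation: group hosts by the server
# they contacted, then flag servers whose group shows the same activity in
# the same time slot for a large enough fraction of its members.
def correlate_groups(events, min_hosts=2, min_similarity=0.5):
    groups = defaultdict(set)      # server -> hosts that contacted it
    activity = defaultdict(set)    # (server, activity, slot) -> hosts
    for host, server, act, slot in events:
        groups[server].add(host)
        activity[(server, act, slot)].add(host)

    suspicious = set()
    for (server, act, slot), hosts in activity.items():
        if len(groups[server]) >= min_hosts:
            if len(hosts) / len(groups[server]) >= min_similarity:
                suspicious.add(server)
    return suspicious

events = [
    ("10.0.0.1", "c2.example", "scan", 0),
    ("10.0.0.2", "c2.example", "scan", 0),
    ("10.0.0.3", "news.example", "browse", 0),
]
print(correlate_groups(events))  # → {'c2.example'}
```

The single-host group around `news.example` is ignored, while the two hosts scanning in lockstep via `c2.example` trigger the flag, which is the essence of the group-level similarity check.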

Conclusions

Anomaly-based detection systems have the great advantage of being able to detect unknown C&C channels, but unfortunately it is very hard to develop a good model. Moreover, when botnet communications are able to camouflage themselves within "normal" traffic, it becomes even harder to detect them. Looking at the literature, however, anomaly-based techniques still seem to be the most effective. A summary of the techniques is given in Table 2.2; as can be noticed, mining techniques are often part of these detection methods.

Table 2.2: Anomaly-based detection techniques summary. For each detection approach ([9], [57], [66], [32], [70], [25], [18], [72], [43], [62], [47], [3], [8], [71], [4], [13], [24], [23], [14]) the table reports whether a mining technique is used (Y/N).




Research Advances

Besides detection techniques tested and evaluated against real botnets, researchers have also investigated this topic from another important perspective: they have designed, implemented and evaluated potential botnet architectures and solutions that may appear in the near future. These works, which are scientifically evaluated, play an important role, because they warn the community about possibilities that criminals may exploit in the future to improve their current infrastructures. In these cases researchers try to be the first movers, in order to anticipate possible criminal solutions.


P2P Hybrid Architecture

Besides detection techniques, the literature also proposes possible advanced botnet architectures that could appear in the near future. Wang et al. [65] present the design of an advanced hybrid peer-to-peer botnet. They analyze current botnets and their weaknesses, such as C&C servers whose takedown can reduce the botmaster's control over the botnet. They propose a more robust architecture, a possible extension of common C&C botnets, in which the C&C servers are replaced by servent bots that act as both clients and servers. The number of such bots therefore becomes larger than in traditional architectures, and in addition these bots are interconnected with each other, which is one of the aspects that makes the design more robust than usual C&C botnets. The proposed C&C communication channel combines public-key encryption, symmetric encryption and port diffusion techniques in order to make it harder to take down. The paper demonstrates how botnets can be improved by botmasters in the coming years (e.g. with more robust network connectivity), and warns about their high danger. Moreover, the authors stress the importance of foreseeing possible future architectures, in order to give researchers a warning and possibly tools to prevent damage. They conclude that honeypots may play an important role against such botnets.


Social Network

In the past years, everybody has had the opportunity to notice the exponential growth of social networks. This phenomenon rapidly became part of our daily life, and these interconnections among devices and social networks started to attract the attention of cyber-criminals and researchers. These platforms are a great opportunity for criminals, because they can be exploited to spread malicious software to millions of users and to easily add new bots to their networks. Researchers have made several proposals regarding social networks, most of them based on the HTTP protocol.



Athanasopoulos et al. [6] show how it is possible to exploit social networks as an attack platform. The experimental part of this work is a proof of concept called FaceBot, a malicious Facebook application that issues HTTP requests to a victim host whenever a user interacts with it (e.g. by clicking on a picture). The authors asked colleagues (unaware of the experiment) to invite people to subscribe to the application. A few days after its publication, the authors observed an exponential increase of HTTP requests. Beside the sudden boom of subscriptions, the authors also show that at least three different kinds of attacks are possible on this platform (apart from launching DDoS attacks on third parties): host scanning, malware propagation (exploiting a URL-embedded attack vector) and attacks on cookie-based mechanisms. This work is important because it is one of the first to acknowledge that social networks can be exploited for malicious purposes, and it should be considered one of the ancestors of research on social-network botnets: Athanasopoulos successfully predicted the risk of botnets over social platforms. The first practical "warning" about this new type of botnet was raised in a blog post by Nazario [40], who analyzes a Twitter botnet command channel, one of the first versions of a botnet working on an OSN (Online Social Network). Afterwards, researchers analyzed further real social-network botnets: Kartaltepe et al. [30] make a deep analysis of the case published by Nazario, and Thomas and Nicol [60] accurately dissect a different botnet, Koobface (which exploits the URL-embedded attack vector described in [6]). However, in this section we want to focus only on research advances (e.g. detection techniques, potential implementations or structures) and not on analyses of "real" botnets. Nagaraja et al. [38] propose StegoBot.
StegoBot is a new-generation botnet designed to exploit steganography techniques in order to spread rapidly via social-malware attacks and to steal information from its victims. The architecture of the botnet is much the same as in classical centralized C&C botnets, where botmasters send commands to their bots to execute specific activities. The main difference lies in the C&C communication channel: StegoBot uses the images shared by social network users as the medium for the C&C channel, exploiting steganography to set up a communication channel within the social network. The botnet was designed and implemented to understand how powerful a botnet with such an unobservable communication channel would be. It was tested in a real scenario (i.e. a social network), showing that its stealthy behavior would let botmasters retrieve tens of megabytes of sensitive data every month. After [38] was published, Natarajan et al. [39] presented a method able to detect StegoBot. The proposed method is based on passive monitoring of social network profiles, and can be categorized as a



statistical anomaly-based detection scheme. This detection technique analyzes images publicly shared on social networks. The authors define entropy measures as the main feature able to differentiate vulnerable images from normal images. A classifier built on these features states whether an image is vulnerable or not. The authors' experiments show that the final solution detects different kinds of embedded malware (e.g. exploits, worms, trojans) with a detection accuracy of 80%. The solution also has drawbacks in terms of computation and scalability on social networks, and judging from the results it is not good enough to be applied in a real scenario. It is nevertheless an important work, because it highlights the difficulties that steganography raises for detection techniques, and it is one of the first such detection techniques proposed in the literature. Blasco et al. [11] propose a steganography-avoidance framework based on HTTP, able to remove steganographic content from HTTP packets. The system has several fundamental components: an HTTP inspector, which analyzes incoming packets and creates steganographic units (SUs) from HTTP messages; a steganographic detector, which inspects each SU and, if it contains hidden information, triggers sanitization of the carrier; a stego-unit sanitizer, responsible for removing the hidden information from the carrier; and an HTTP assembler, which reassembles the sanitized SUs into the HTTP message. The main advantage of this solution is that it limits the transmission of information hidden in HTTP through steganography; the main disadvantage is the high computational cost of analyzing the payloads.
This work should be considered important because it introduces a significant technical improvement for steganography-based detection techniques. However, the computational costs are still too high, so the solution needs improvement. Thanks to this research, steganography has been recognized as a very powerful tool in the context of botnets. Its characteristics allow fast spreading of malware on social networks: users get infected just by loading an image in which an embedded script is executed. Researchers have also shown that steganography is quite effective when applied in a real scenario. It is therefore possible that botmasters will apply this technique in the future to increase the capabilities of their bots. Nowadays social networks are part of most people's daily life; there are no better opportunities for cyber-criminals to exploit. As we have seen in this section, they are already developing such solutions, and consequently researchers have started to move in this direction with some first proposals. However, this is only the dawn of social-network-based botnets, and more of them will probably appear in the near future.
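As a toy illustration of the entropy feature used by such statistical detectors, a Shannon-entropy measure over raw bytes separates flat, repetitive data from random-looking (e.g. stego-embedded or encrypted) content. This is only a sketch of the general idea; the actual features in [39] are computed on image data and are more elaborate.

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A flat region has entropy 0; random-looking data approaches 8 bits/byte.
low = byte_entropy(b"\x00" * 4096)
high = byte_entropy(os.urandom(4096))
print(low, high)
```

A classifier would threshold (or combine) such measures: regions of an image whose entropy is anomalously high compared to the expected distribution are candidates for hidden payloads.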





Mobile

Nowadays smartphones are the most common IT devices in daily use. They have great capabilities, but their security measures are definitely weaker than those of desktops. Moreover, the constant interaction between social media and smartphones makes mobile devices very appealing as a spreading means for malware. This makes the mobile market one of the best places for criminals to enlarge their botnets. As a consequence researchers, recognizing this very dangerous threat, have started to investigate the problem, proposing possible detection techniques as well as botnet implementations. In the next sections we describe these proposals, differentiating them by protocol or technology (e.g. in the case of SMS).

SMS

Weidman presents an example of an SMS-based C&C botnet at the BlackHat conference [68]. The choice of SMS as communication channel is motivated by three factors: IP (Internet Protocol) networking on mobile systems consumes more battery than SMS, SMS is fault tolerant, and it is hard to monitor. The proposed bot works in four steps. First, it intercepts all incoming communications; if a message is an SMS it continues the analysis, otherwise it passes the communication on to user space. Second, it decodes the user data, converting the 7-bit GSM data to plaintext. Third, it checks whether the message contains the bot's key; if not, the message is again forwarded to user space. Lastly, the bot reads the functionalities requested by the botmaster in the message and, if they are found, performs them; otherwise it fails silently. All these steps are invisible to the user. This SMS-based botnet has a decentralized structure, in which the botmaster communicates with bot slaves through bot sentinels (trustworthy infected bots).
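The four-step dispatch just described can be sketched as follows. The key value, the command set and the use of UTF-8 in place of a real 7-bit GSM decode are all invented for illustration; Weidman's bot operates at a much lower level of the phone's software stack.

```python
# Toy sketch of the four-step SMS dispatch described above.
BOT_KEY = "k3y"                        # hypothetical trigger token
COMMANDS = {"ping": lambda: "pong"}    # hypothetical bot functionality

def handle_message(msg_type, payload: bytes):
    if msg_type != "sms":                 # 1. non-SMS goes to user space
        return ("user_space", payload)
    text = payload.decode("utf-8")        # 2. stand-in for GSM 7-bit decode
    if BOT_KEY not in text:               # 3. no key -> forward to user
        return ("user_space", payload)
    for name, fn in COMMANDS.items():     # 4. run requested functionality
        if name in text:
            return ("executed", fn())
    return ("silent_fail", None)          # fail silently; user sees nothing

print(handle_message("sms", b"k3y ping"))  # → ('executed', 'pong')
```

The key property is that a message either reaches user space untouched or is consumed by the bot, so the user never sees evidence of the C&C traffic.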
Limitations of this botnet are possible detection from phone bills (not if messages are free) and the user data, which can be at most 16 characters. The advantage, however, is that attackers can remotely control mobile devices without alarming the users.

SMS and HTTP

Mulliner and Seifert [37] describe, implement and evaluate two different cellular botnet architectures. First, they implement a P2P-based C&C using an existing P2P protocol, Kademlia, and join an existing P2P network called Overnet. The basic idea of this design is to use the P2P network as communication channel through the publish and search functionalities of the DHT (Distributed Hash Table): the botmaster publishes a command in the P2P network, and each bot independently searches for that command by looking up a specific key. An important drawback of this solution is that the smartphone has to connect to the network periodically, causing significant battery consumption, which increases the chances of detection. The authors also discuss a possible C&C solution based purely on SMS, but only as a theoretical possibility, because of significant drawbacks that would make it inconvenient; one of these is that the botmaster would have to pay for each SMS sent to his bots, which is economically unattractive. The second implementation proposed is a hybrid SMS-HTTP C&C. The basic idea is to split the communication into an HTTP part and an SMS part, in order to make the botnet more resilient. Commands are distributed by uploading files of pre-crafted SMS messages to a website; the URLs are sent via SMS to random bots, which download and decrypt the files with the encryption key included in the message, and then send out the pre-crafted SMS messages. Since the communication has no fixed structure, it is very hard for a telco operator to tell whether a botnet is active within a mobile phone network just by looking at SMS traffic. As a final result they prove that it is possible to create a fully functional mobile botnet on popular smartphones (i.e. the iPhone). In conclusion, they point out that botnets combining HTTP and SMS could be highly dangerous and a promising solution for cyber-criminals.

HTTP

Xiang et al. [69] propose a mobile botnet called AndBot, based on a novel C&C communication technique: URL flux. The design targets aspects that botmasters value strongly: stealth, resilience and low energy consumption. The architecture of the botnet is centralized, and each bot interacts with a set of servers owned by the botmaster.
URL flux is similar to the FFSN concept: both rely on a hard-coded public key, but instead of a Domain Generation Algorithm, URL flux uses a Username Generation Algorithm (UGA) and exploits the HTTP protocol instead of the DNS protocol. URL flux works as follows: AndBot connects to one of the web server IP addresses it knows (hard-coded in AndBot) and then tries to visit, one by one, the usernames generated by the UGA. If a user exists, AndBot fetches that user's last message and verifies it with its hard-coded public key, in order to determine whether the message was issued by the botmaster. The public key, web server addresses and UGA are all hard-coded in AndBot. The authors prove the efficiency and feasibility of this new type of botnet, which is very robust, low-cost and stealthy. This work is important because it shows that powerful HTTP-based mobile botnets can be developed; considering the growth of mobile capabilities, these devices become more and more attractive for criminals.
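A Username Generation Algorithm can be sketched like a DGA that yields account names instead of domains. The seed, hash construction and username format below are invented for illustration; AndBot's actual UGA is not specified here.

```python
import hashlib

# Hypothetical UGA: a deterministic, date-seeded sequence of usernames
# that both the botmaster and every bot can regenerate independently.
def uga(seed: str, date: str, count: int = 5):
    names = []
    for i in range(count):
        h = hashlib.sha256(f"{seed}|{date}|{i}".encode()).hexdigest()
        names.append("u" + h[:10])   # e.g. a plausible-looking username
    return names

# The bot would try each generated username in turn on a known web service,
# fetch that user's latest message, and verify its signature with the
# hard-coded public key before treating it as a botmaster command.
todays = uga("s3cret", "2013-01-01")
print(todays)
```

Because the sequence is deterministic, the botmaster only needs to register one of today's usernames and post a signed command there; defenders who do not know the seed cannot predict which account to watch or pre-register.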



The literature on this topic is growing year by year, and it will likely remain a research trend for the next decade. Mobile C&C botnets are, however, still a somewhat arcane topic, since most of the work done so far (e.g. botnet designs and implementations) was done by researchers, and no significant mobile-based botnets have been discovered in the underground. Still, it is a promising direction, and research should keep working on it to mitigate the possible, almost certain, threats of the coming years.



In the previous chapter we presented a complete overview of the topic of botnets. We discussed several works, highlighting their pros and cons. We now make a final discussion of the two macro-groups, the sub-groups of anomaly-based techniques, and the research advances.


Signature-based vs Anomaly-based

As we have discussed before, signature-based techniques are static techniques. They can detect a botnet with zero false positives, but only if the botnet was previously known. They have important limitations, however: they do not work properly against encryption, and they leave free time windows in which criminals can act undisturbed (before the malware is analyzed). These systems work well against malware that does not change its behavior over time, i.e. static malware. Unfortunately, the most dangerous botnets today behave dynamically, so blacklisting and other signature-based detection techniques cannot serve as the main approach against modern botnets. Nonetheless, they are still useful when an additional confirmation of the maliciousness of a piece of software is needed (they can confirm true positives). Anomaly-based techniques are very common in IDSs. They detect a botnet by comparing its behavioral model against a model of "benign" behavior, built beforehand from an analysis of the normal behavior of the system. The comparison of these models is based on the examination of critical parameters, which trigger a warning when they exceed a certain threshold that determines the maliciousness of a model. The biggest disadvantage of this method is the construction of the model: everything is based on it, so if it is not built correctly, the number of false negatives and false positives can increase drastically. The advantage of such systems is that they are usually able to detect unknown botnets. Anomaly-based techniques that rely on payload inspection cannot properly detect botnets under encryption; therefore, a technique that wants to detect a payload-encrypted botnet



has to build its model on features that are not related to the payload. On the other hand, when no encryption is present, n-gram techniques applied to the payload provide a high level of reliability. Anomaly-based techniques have been studied in depth by researchers and are still very useful, also because they are less static than signature techniques. We think anomaly-based techniques can still be a good solution (e.g. as a helper technique) against modern botnets.


Anomaly-based subgroups

As we have also discussed before, here we finally weigh the pros and cons of each subgroup of anomaly-based techniques.

IRC

IRC-based techniques have shown to work properly against IRC-based botnets. Unfortunately, the most successful botnets today are not based on IRC. These techniques can thus be considered obsolete, because they all exploit features of the IRC protocol.

HTTP

HTTP-based techniques are still quite limited in their capabilities (e.g. technical and computational). This is understandable: HTTP is the most used protocol on the web, so it is very hard to properly distinguish malicious traffic from benign traffic. Most C&C botnets still use HTTP, and they will continue to do so because it seems reliable and because good detection solutions are hard to find. Researchers should therefore not abandon this type of technique. Nevertheless, we think that today there are no HTTP-based techniques good enough to detect botnets, so it is necessary to find more effective methods or to focus on different protocols.

P2P

P2P-based techniques have shown the ability to detect separate P2P networks, but they still have limits in distinguishing those networks. Further research on P2P-based techniques is necessary because, as we have seen in the previous section, P2P architectures are more reliable than normal C&C servers. These architectures are significantly complex to set up, but for skilled botmasters they would be the first choice, due to their high reliability and robustness. To be able to detect such complex botnets, P2P detection techniques must be improved, because the P2P structure is one of the main characteristics that make them so robust. It is important to



underline that the topic of P2P architecture-based botnets is still alive and very active in the underground, and should be considered a current topic.

DNS

DNS-based techniques are good at mitigating the problem of FFSNs. The situation is in constant development: botmasters keep looking for new solutions and protocol-design exploitations, and researchers, immediately after, try to find proper countermeasures. Nowadays we can say that research has reached important results in FFSN detection, and these techniques can certainly be used as significant weapons against current botnets. However, it is advisable to combine them with other techniques, in order to detect future botnets for which FFSN detection alone will probably not suffice to determine whether traffic is malicious.

Multiple protocols and protocol independent

Regarding multiple protocols, we can say that they can be a good basis for botnet detection. Using two different protocols gives the authors the opportunity to use the most characteristic features of both, which definitely helps to increase the accuracy of the solution. However, these papers make a fundamental assumption: that we are able to capture traffic of both protocols. If the traffic we can capture contains just one of them, the reliability and accuracy of the technique should be revisited, unless the features of each protocol are good enough to detect botnets by themselves; in that case the technique can be seen as a union of different techniques, which perhaps increases the overall accuracy compared with the individual ones. Protocol-independent techniques would be the best-case scenario for botnet detection. Unfortunately, when we generalize we lose information that could be crucial for detection purposes. Modern botnets are very dynamic, and details are fundamental to detect them.
Therefore, a protocol-independent technique currently seems unfeasible.


Research Advances

Hybrid Architectures

The hybrid architectures proposed in the literature seem to be very valuable proposals, because those works clearly show that the trade-offs between implementation costs and robustness are really in favor of criminals. These works have significantly alerted the community to such possible solutions.



Social Network and Mobile

Detection techniques based on social networks cannot be properly evaluated yet, for one main reason: very few discovered samples have been analyzed in the literature. It is therefore hard, if not impossible, to call them reliable techniques. The same holds for mobile-based and steganography-based botnet detection techniques. All these techniques are nevertheless considered important in the literature, because they investigate potential threats and spread awareness among the research community. As discussed before, researchers have written proposals for possible botnet architectures, implemented them, and shown their results in real scenarios; the reliability of these architectures has been scientifically proved, so the proposals should be taken into consideration by the entire community. Unfortunately, we cannot say the same for the proposed detection techniques, because the phenomenon is not yet common: they cannot be tested on "real" malicious botnets, and so far they are valuable only in theory, not in practice.



In the previous sections we have given a complete overview of the world of botnets: the proposals made in the literature, the techniques that have been presented, the trends botmasters may follow in the next years, and so on, discussing the advantages and disadvantages of each work and topic. In this section we discuss the solutions provided by research for detecting C&C botnets over encrypted channels, and the related problems. Encryption is a fundamental technique used by all applications that handle sensitive data. It is widely used in benign applications (e.g. online shopping), but it has also started to spread in cyber-criminal solutions (e.g. botnets). The reason is quite simple: bot hunters cannot extract any meaning from messages sent through encrypted C&C channels, which botmasters can use to decrease the chance of detection of their infrastructures. Botmasters have recently understood the importance of cryptography for their communications and are starting to implement it in their solutions, and researchers are trying to respond. Case studies have shown that modern botnets deploy different encryption techniques; for instance, Zeus implements diverse cryptographic techniques depending on its version: RC4 [33] or AES [35]. The power of cryptography, if well implemented, is almost limitless and generates significant problems for security researchers. As we have seen so far, researchers have proposed very few methods able to detect encrypted C&C botnets. The best one proposed is [54], but it takes advantage of prior knowledge of the cryptographic scheme (e.g. decryption keys, encryption algorithm). Encryption-based detection techniques are very hard to implement, because we have to assume that the encryption scheme lacks some cryptographic properties; only then could we find patterns (e.g. through statistical cryptanalysis) that would let us reverse-engineer the scheme, or at least extract information that would allow us to identify such traffic in the future. Therefore, in a scenario where an encryption standard is properly implemented, there would be no chance to detect a botnet by looking at the encrypted traffic (unless we know the key). In [15], Chapman et al. state the important role that encryption will play in future botnets. They analyze possible scenarios for the next years: botnets will soon start to implement proper encryption schemes and integrity checks in their protocols, and authentication will be added for commands and updates. The paper also states that botnets will probably start to tunnel their communication over legitimate protocols such as SSL. This work is an important warning about future botnet implementations. Cyber-criminals have several reasons to move to SSL encryption. First, SSL is a consolidated standard [28] [22] and is widely used, so it guarantees robustness and reliability for botnet infrastructures. Second, its ubiquity gives the opportunity to camouflage botnet traffic within benign traffic. Third, SSL has been deeply tested by the research community and is considered secure, which is not the case when cyber-criminals practice "security through obscurity" by developing their own encryption protocols, which are likely vulnerable to cryptanalysis.
Thus, malicious coders have enough reasons to look in this direction, and they have already started to deliver malware that uses SSL. Warmer [67] gives a descriptive analysis of how TLS-based malware behaves. He collected a dataset of malware documented as using TLS. A basic analysis of these executables shows that many of the samples using the TLS port 443 actually communicate through HTTP or a custom protocol instead of TLS. A comparison with legitimate TLS traffic shows that most TLS sessions generated by malware were very short, because the client does not send any request when the certificate cannot be validated; the client then starts a new session with the same host, ignores the validity of the certificate, and sends the request. Lastly, the author states that all the invalid certificates are self-signed or use a private CA. We thus have malware samples that clearly do not use the standard security protocol properly: their implementations have gaps (on purpose), since clients ignore the certificates, the certificates used are not valid, or TLS is not used at all (e.g. HTTP on port 443). During the same year, SecureWorks posted on their website an article [58]



that analyzes the famous malware Duqu. This work confirms that Duqu, like many other malware families, generates non-SSL traffic on port 443. Recently, Palo Alto Networks released a report reviewing modern malware [42]. One of its highlighted results is that non-SSL traffic on port 443 was the most common non-standard-port behaviour, a further confirmation of what Warmer describes in his work. Malware authors have thus sent some signals about possible exploitation of SSL in their solutions. This may be just one of the first steps undertaken by botmasters; further SSL exploitation could lead to significant threats for company businesses and Internet users. These works suggested the research question that we asked for this project.
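The "non-SSL traffic on port 443" behaviour observed in these reports can be spotted with a simple heuristic: a TLS record begins with a content type byte between 20 and 23 followed by a two-byte protocol version whose major number is 3 (per RFC 5246), while plain HTTP on 443 starts with ASCII method names. This sketch only checks the first bytes of a flow and is by no means a complete protocol validator.

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """Heuristic: do the first bytes of a flow resemble a TLS record header?"""
    if len(first_bytes) < 3:
        return False
    content_type, major, minor = first_bytes[0], first_bytes[1], first_bytes[2]
    # Content types: 20 ChangeCipherSpec, 21 Alert, 22 Handshake, 23 AppData.
    return content_type in (20, 21, 22, 23) and major == 3 and minor <= 4

print(looks_like_tls(b"\x16\x03\x01\x00\xa5"))  # TLS handshake record: True
print(looks_like_tls(b"GET / HTTP/1.1\r\n"))    # plain HTTP on 443: False
```

Flows to port 443 that fail this check are exactly the Duqu-style "non-SSL on 443" anomaly the reports describe, which is why such a cheap filter is often the first stage of SSL-focused traffic analysis.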

Chapter 3

Protocol Description: SSL/TLS In this chapter we describe how SSL/TLS works [22]. The explanation is intended as an overview of the protocol; some parts are described in more detail than others [29], in order to give the reader the tools to understand our methodology described in the next chapter. TLS is the improved version of SSL, but there are no structural differences between the two versions of the protocol; therefore, to avoid misunderstandings, we refer to both under the common name SSL.



Transport Layer Security (TLS) and its predecessor, the Secure Socket Layer (SSL), are security protocols which aim to provide privacy and data integrity between two communicating applications. The SSL protocol works on top of the transport layer (the 4th layer of the ISO/OSI stack) and is application independent, which lets higher-level protocols lie on top of SSL transparently. The protocol is divided into two layers, as shown in Figure 3.1: the lower layer contains the SSL Record Protocol; the higher layer contains three different SSL protocols: the SSL Handshake Protocol, the SSL Alert Protocol and the Change Cipher Spec Protocol. The SSL Record Protocol works on top of the transport protocol (e.g. TCP) and provides secure connections with two properties: privacy, through data encryption using symmetric cryptography; and reliability, through a message integrity check using a keyed MAC (Message Authentication Code). The Handshake Protocol, on the other hand, provides further security properties for the connection: authentication between peers, using asymmetric (public-key) cryptography; secure negotiation of the shared secret, resistant to eavesdropping and man-in-the-middle attacks; and reliability of the negotiation. Summarizing, the SSL protocol provides authentication, confidentiality and integrity



services. However, it does not provide any non-repudiation service.

Figure 3.1: SSL Record Protocol and Handshake Protocol representation


SSL Record Protocol

As mentioned before, the Record Protocol provides confidentiality and reliability for the connection. Figure 3.2 shows the procedure followed by the protocol to ensure these properties. Every time a message has to be transmitted, the data is fragmented into blocks. Each of these fragments is optionally compressed with a compression method agreed upon by the two peers. A MAC is applied to the block of data in order to provide integrity, the data is then encrypted, and the result is transmitted.
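These steps can be illustrated with a minimal Python sketch. The helper names are hypothetical, the cipher is a stand-in identity function (real SSL encrypts with the negotiated symmetric cipher), and the MAC construction is simplified with respect to the actual record-layer format:

```python
import hmac
import hashlib
import zlib

MAX_FRAGMENT = 2 ** 14  # maximum SSL record fragment length in bytes

def protect_record(data, mac_key, encrypt):
    """Apply the record-protocol steps to outgoing data:
    fragment, optionally compress, append a MAC, then encrypt."""
    records = []
    for i in range(0, len(data), MAX_FRAGMENT):
        fragment = data[i:i + MAX_FRAGMENT]
        compressed = zlib.compress(fragment)                    # optional step
        tag = hmac.new(mac_key, compressed, hashlib.sha256).digest()
        records.append(encrypt(compressed + tag))               # fragment || MAC
    return records

# Stand-in cipher for illustration; real SSL encrypts with the negotiated cipher.
identity_cipher = lambda block: block

recs = protect_record(b"hello" * 10000, b"k" * 32, identity_cipher)
```

With 50,000 bytes of input, the sketch produces four records, since each fragment holds at most 2^14 bytes.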


SSL Handshake Protocols

SSL has three subprotocols that are used to allow peers to agree upon security parameters for the record layer. These parameters allow the peers to authenticate each other, to instantiate the negotiated security parameters and to report error conditions to each other. The Handshake Protocol is responsible for negotiating a session, which is defined by the following items:

• Session identifier: an arbitrary sequence of bytes used to identify an active or resumable session state.

• Peer certificate: an x509v3 certificate.



Figure 3.2: SSL Record Protocol procedure

• Compression method: the algorithm used to compress data prior to encryption.

• Cipher spec: specifies the pseudorandom function used to generate keying material, the data encryption algorithm and other cryptographic attributes.

• Master secret: a secret shared between client and server.

• Is resumable: a flag that indicates whether the session can be used to initiate new connections.

These items are used to create the security parameters, which will be used by the record protocol in order to protect the application data.


Change Cipher Spec Protocol

One of the three SSL Handshake subprotocols is called Change Cipher Spec protocol. Its main task is to signal transitions in ciphering strategies. In this protocol, the client and the server send a ChangeCipherSpec message in order to notify the receiving party that the next records will be protected using the newly negotiated CipherSpec and keys. The ChangeCipherSpec message consists of a single message (i.e. a single byte of value 1) which is encrypted and compressed under the current connection state.


Alert Protocol

Alert messages communicate to the peer the severity of the message (i.e. warning or fatal) and a description of the alert. Alert messages with a



severity level of fatal immediately terminate the connection. There are two types of alert messages: closure alerts and error alerts. Closure alerts are messages sent by either party in order to end the connection. Any data received after a closure alert is ignored. Error alerts, on the other hand, are exchanged by the parties whenever an error is detected. Whenever a party receives a fatal alert message, both parties must immediately close the connection. When a warning is sent and received, the connection generally continues normally. Warning messages are not mandatory when the sending party wants to continue the connection, therefore they are sometimes omitted.


Handshake Protocol

When an SSL client and server start communicating, they agree on several parameters: the version of the protocol, the cryptographic algorithms, a public-key encryption technique to generate shared secrets, and optionally they can also authenticate each other. This phase of the connection is called the handshake. As shown in Figure 3.3, this phase consists of eight steps. The first message of the SSL handshake is always sent from the client to the server and is called Client Hello. This message contains: the protocol version, a session ID, the cipher suites (i.e. combinations of a key exchange, an encryption and a MAC algorithm), the compression methods and a random value. The server chooses an SSL version, a cipher suite and a compression method from the ClientHello message, generates a random value and sends a Server Hello message back to the client. Following the hello message, the server can optionally send its certificate in a Certificate (i.e. x.509v3 certificate) message if its authentication is required, a ServerKeyExchange message which contains key material from the server, and a CertificateRequest message which asks the client to authenticate itself (i.e. mutual authentication). Afterwards, the server sends a ServerHelloDone message, indicating that the hello-message phase of the handshake is complete. At this moment the client has all the information needed to complete the key exchange. It verifies the validity of the server's certificate, computes the secret key and sends a ClientKeyExchange message. The content of this message depends on the public-key algorithm selected during the hello-message phase. If the server required client authentication, the client must send a Certificate message before the ClientKeyExchange, followed by a CertificateVerify message for signing certificates. The server receives the content of the ClientKeyExchange and computes the secret key.
At this point both the client and the server send a ChangeCipherSpec message, indicating that all subsequent messages are encrypted. This message is followed by a Finished message, which allows the parties to verify that they have correctly computed the secret key and that the authentication was successful. This is the last step of the full Handshake protocol; after this, the connection can be considered established and everything coming afterwards



is encrypted.

Figure 3.3: SSL Handshake Protocol - Full Implementation

However, the handshake phase can be shorter in case of session resumption. When client and server decide to resume a previous session or duplicate an existing session, the message flow is shorter, as shown in Figure 3.4. The client sends a ClientHello message indicating the session ID of the session it wants to resume. The server, after receiving the message, checks whether the client's session ID matches its session cache. If there is such a match and the server is willing to re-establish the session, it sends a ServerHello message with the same session ID value. At this point, the parties exchange ChangeCipherSpec messages, followed by Finished messages. In case the



server does not find any match for the session ID, it generates a new session ID and the parties perform a full handshake. This short version of the handshake protocol is clearly more efficient: fewer messages are exchanged, so less traffic is generated. Moreover, it is also computationally less expensive, since expensive cryptographic operations are avoided (e.g. no authentication is required).
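The server-side decision between the two handshake variants can be sketched as follows. The function name and cache structure are hypothetical, meant only to illustrate the session-ID lookup described above:

```python
def handle_client_hello(session_id, session_cache, fresh_id):
    """Server-side choice between the abbreviated and the full handshake.
    `session_cache` maps resumable session IDs to their stored state."""
    if session_id and session_id in session_cache:
        # Match found: reply with the same session ID and skip straight
        # to the ChangeCipherSpec + Finished exchange.
        return ("abbreviated", session_id)
    # No match: generate a new session ID and run the full handshake.
    return ("full", fresh_id)

cache = {b"abc": {"master_secret": b"..."}}
assert handle_client_hello(b"abc", cache, b"xyz") == ("abbreviated", b"abc")
assert handle_client_hello(b"zzz", cache, b"xyz") == ("full", b"xyz")
```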

Figure 3.4: SSL Handshake Protocol - Short Implementation


Protocol extensions

The SSL protocol provides some extensions that aim to expand the functionality of the SSL message format [29]. All the extensions are relevant only when a session is initiated. These extensions are used by the client and appended to the ClientHello message. Once the extensions sent by the client are recognized, the server appends them to its ServerHello message. Most of these extensions aim to lighten the protocol (e.g. in memory usage, bandwidth, etc.) in order to facilitate its use by constrained clients.




Server Name

The SSL protocol does not have a mechanism for a client to tell a server the name of the server it is contacting. This extension is an important additional function, which facilitates secure connections to servers that host multiple "virtual" servers on the same network address. Current client implementations only send one name (i.e. multiple names are prohibited), and the only supported server name type is DNS hostnames (e.g. example.com). When the server receives a server_name extension, it has to include an extension of the same type in the ServerHello message (this does not apply when the client wants to resume a session).


Maximum Fragment Length Negotiation

SSL specifies a fixed maximum fragment length of 2^14 bytes. For some constrained clients (e.g. mobile phones) it might be relevant to negotiate a smaller maximum fragment length, which helps in case of memory and bandwidth limitations. However, the acceptable values are restricted to 2^9, 2^10, 2^11 and 2^12 bytes. If other values are requested by the client, the connection should be terminated by the server; otherwise, both parties immediately begin fragmenting messages to ensure that no fragment larger than the negotiated length is sent.
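A sketch of how a server might validate a requested maximum fragment length under this rule (hypothetical helper name; values in bytes):

```python
ALLOWED_LENGTHS = {2 ** 9, 2 ** 10, 2 ** 11, 2 ** 12}  # negotiable values
DEFAULT_MAX = 2 ** 14                                   # SSL default

def negotiate_max_fragment(requested=None):
    """Return the maximum fragment length to use; an out-of-range request
    means the server should terminate the connection."""
    if requested is None:
        return DEFAULT_MAX          # extension absent: use the default
    if requested not in ALLOWED_LENGTHS:
        raise ValueError("illegal max_fragment_length: terminate connection")
    return requested

assert negotiate_max_fragment() == 16384
assert negotiate_max_fragment(2 ** 9) == 512
```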


Client Certificate URLs

This extension is also aimed at constrained clients. During a usual SSL handshake, the client sends its certificate to the server in order to be authenticated. Since constrained clients have to optimize memory consumption, it might be desirable to send certificate URLs instead of certificates, so that they do not need to store their certificates. The negotiation of this extension happens during the handshake phase, where the client adds an extension "client_certificate_url" to the (extended) ClientHello. If the server accepts certificate URLs, it includes an empty extension of type "client_certificate_url". After the negotiation has been completed, the client can send a CertificateURL message instead of a Certificate message during the SSL handshake. This message contains a list of URLs and hashes. Each of these URLs must be an absolute URI (i.e. HTTP scheme) reference from which the certificate can be fetched immediately. Furthermore, in case of x.509 certificates, there are two different types of certificate that can be used within the CertificateURL: individual_certs, where each URL refers to a single DER-encoded x.509v3 certificate; and pkipath, where the list contains a single URL referring to a DER-encoded certificate chain. At this point, the server receiving the CertificateURL message should attempt to fetch the client's certificate chain from the URLs and validate it



as usual. If the server cannot obtain the certificate from the URL and the certificate is required, it must terminate the connection.


Trusted CA Indication

This extension is also related to constrained clients. The trusted_ca_keys extension provides a list of identifiers of the CA root keys possessed by the client. The client, in order to avoid repeated handshake failures, may wish to indicate to the server which root keys it possesses. In this extension, the client can include none, some or all of the CA root keys it has. These keys can be described in four different ways: pre_agreed, where no key identity is supplied; key_sha1_hash, which contains the SHA-1 hash of the key; x509_name, which contains the x.509 Distinguished Name of the Certificate Authority; and cert_sha1_hash, which contains the SHA-1 hash of the DER-encoded certificate containing the CA root key.


Truncated HMAC

This extension is desirable for constrained environments. In order to save bandwidth, the output of the hash function that SSL uses to authenticate the record-layer communications is truncated to 80 bits when forming MAC tags. This extension takes effect only for cipher suites that use HMAC. It is negotiated through the truncated_hmac field. Moreover, it has effect for the duration of the whole session, including session resumptions.
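The 80-bit truncation can be sketched as follows, assuming an HMAC-SHA1 cipher suite (the helper name is hypothetical):

```python
import hmac
import hashlib

def truncated_mac(key, message, bits=80):
    """HMAC-SHA1 tag truncated to `bits` bits (80 for truncated_hmac)."""
    full = hmac.new(key, message, hashlib.sha1).digest()  # 160-bit tag
    return full[:bits // 8]                               # keep the first 10 bytes

tag = truncated_mac(b"mac key", b"record payload")
assert len(tag) == 10
```

The truncated tag halves the per-record MAC overhead of SHA1-based suites, which is the bandwidth saving the extension targets.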


Certificate Status Request

Constrained clients that want to use a certificate-status protocol like OCSP to check the validity of server certificates can use this extension. It avoids the transmission of Certificate Revocation Lists (CRLs) and therefore saves resources (i.e. bandwidth). Servers that receive a ClientHello containing a "status_request" extension may return a certificate status response along with their certificate, by sending a CertificateStatus message immediately after the Certificate message (the type of the status_request must be included).


x.509 Certificates

In the previous section we have seen that x.509 certificates play an important role within the SSL Handshake protocol: they provide peer authentication. This works because the certificate binds a subject (e.g. a person or a company) to a public key value. The authentication is directly dependent on the integrity of the public key. If an attacker were able to compromise it, he would be able to impersonate the victim and gain access to the application



under a fake identity. For this reason, all certificates should be signed by a Certificate Authority (CA), which is defined as a trusted node of the PKI infrastructure that confirms the integrity of the public key value within the certificate. When the CA signs the certificate, it adds its digital signature to the certificate, which is a message encoded with the CA's private key. In this way, an application is able to verify the validity of the certificate by decoding the CA's digital signature using the CA's public key, which is publicly available. There are mainly two types of certificate: CA-signed certificates and self-signed certificates. The first, as mentioned before, is "authorized" and signed by a trusted authority. The second is a certificate generated by the owner himself, without any validation from an authority. These types of certificates are usually used in different public-key infrastructures: CA-signed x.509 certificates are used within the x.509 public key infrastructure [21], which is built upon trusted certificate authorities; self-signed certificates are usually used in testing environments or in the PGP (i.e. Pretty Good Privacy) infrastructure [41], which is based on the trust of users. In this work we consider just the x.509 PKI and its certificates, because it is the one used by the SSL protocol. Nevertheless, we will not describe how these infrastructures work; for further reading, refer to the references. In the rest of this section we describe the classical structure and the newer extended version of the x.509 certificate, because they are important for the understanding of our methodology.


x.509 Certificate Structure

An x.509 certificate has the role of binding an identity to a public key. Therefore, it has to contain information regarding the certificate subject and the issuer (i.e. the CA who issued the certificate). The certificate is encoded in ASN.1 (i.e. Abstract Syntax Notation One), which is a standard syntax to describe messages sent and received over a network. The certificate has the following main structure (see Figure 3.5):

• Version of the certificate

• Serial number: a unique ID value of the certificate.

• Algorithm identifier: identifier of the algorithm used by the CA to sign the certificate.

• Issuer: the distinguished name (DN) of the certificate issuer.

• Subject: the distinguished name (DN) of the certificate subject.

• Validity interval: the period of validity of the certificate.

• Public key of the subject



Figure 3.5: x.509 Certificate Structure

• CA's digital signature

This structure has lately been extended with the second and third versions of the certificate. As shown in Figure 3.5, three optional fields have been added:

• Issuer Unique Identifier

• Subject Unique Identifier

• Extensions: a set of descriptive fields that aim to provide a higher level of security for Internet transactions.


Extended Validation Certificate

The extensions defined for x.509 v3 certificates provide methods to associate additional attributes with users or public keys and to manage relationships between CAs [21]. The certificates that use such extensions are called Extended Validation (EV) certificates or High Assurance (HA) certificates. The primary purpose of these certificates is to identify the legal entity that controls a website and to enable encrypted communication with it. The secondary goals of such certificates are to fight phishing, to help



organizations against fraud and to help law enforcement. Each of these extensions is designated as critical or non-critical. If critical extensions cannot be processed or are not recognized, the certificate should be rejected. Non-critical extensions should be processed if recognized, and may otherwise be ignored. There are many extensions that can be applied to an EV certificate, among which: Authority Key Identifier, Subject Key Identifier, Key Usage, Certificate Policies, Subject Alternative Names, Issuer Alternative Names, Basic Constraints, Policy Constraints, CRL Distribution Points, and others. For the scope of this thesis we are interested in the extension called SubjectAlternativeNames. This extension allows additional identities to be bound to the subject of the certificate. It allows a company to have a single certificate for several activities, instead of one certificate per activity. Therefore, a certificate with such an extension is valid not only for the identity described in the subject field but also for all the identities listed in the extension.
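For illustration, here is a simplified sketch of how a client could match a requested hostname against the certificate subject and the subjectAltName identities. The helper names are hypothetical, and wildcard handling is reduced to covering exactly one leftmost DNS label:

```python
def hostname_matches(server_name, pattern):
    """Match a requested hostname against one certificate identity;
    a '*' wildcard covers exactly one leftmost DNS label."""
    host_labels = server_name.lower().split(".")
    pattern_labels = pattern.lower().split(".")
    if len(host_labels) != len(pattern_labels):
        return False
    return all(p == "*" or p == h
               for h, p in zip(host_labels, pattern_labels))

def identity_valid(server_name, subject, alt_names):
    """The certificate covers the hostname if it matches the subject
    or any entry of the subjectAltName list."""
    return any(hostname_matches(server_name, ident)
               for ident in [subject] + list(alt_names))

# A certificate with subject *.google.com covers www.google.nl only
# thanks to a *.google.nl subjectAltName entry.
assert identity_valid("www.google.nl", "*.google.com", ["*.google.nl"])
```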



Chapter 4

Our Approach In this chapter we explain the core of this project: our assumptions and the SSL features used for detection. As explained in Section 1.2, in the literature there is no reference to botnets that use SSL as a C&C communication channel. This had a large impact on our approach. The main question that arose was: how can we detect something that perhaps does not even exist? Analyzing the literature, it is possible to see how botmasters are evolving their infrastructures in order to make them more complex and reliable. As we have seen in Section 2.5, in the past years they have been moving towards encrypted solutions. Therefore, it is possible that in the near future they will also try to exploit SSL for data exfiltration. By data exfiltration we mean the process of data transmission from the bot to the server, through the communication channel. If we take a deeper look at the state of the art, we can see that most intrusion detection systems have been built by researchers upon existing malware or previously proposed solutions. Researchers could therefore analyze the behavior of that malicious software and create a detection system based on it. In our situation this is not possible: since it is not known whether there are botnets using this specific protocol, there is no malware to analyze. Therefore, we decided to take a preventive approach. By this we do not mean preventing the infection of the machine, since data exfiltration starts post-infection. By a preventive approach we mean an intrusion detection solution that is built without the analysis of malware, exploiting only a priori knowledge such as typical botnet behaviors and SSL protocol characteristics. We tried to define some potential protocol features that could be helpful to detect possible botnet behaviors within SSL traffic.
Needless to say, the goal of the project is to build an anomaly-based intrusion detection



system based on those features, and not a signature-based one, since that would not even be possible due to the lack of malware samples.


Assumptions and Features Selected

Before starting the selection of potential features, we made two fundamental assumptions. As shown in the previous chapter (Chapter 3), the SSL protocol is mainly divided into two parts: the handshake, which is in plaintext and aims to exchange the security parameters of the connection, and the application data, which is the encrypted part of the protocol where the peers communicate and exchange data. Our first assumption is that the encrypted part cannot be attacked, for capability reasons. Since it does not reveal any information, it is not considered in our approach. Our focus is on the plaintext part, which mainly involves the handshake protocol. This part reveals some information that could be useful to detect malicious connections. There are two main benefits of this assumption: the solution is privacy-preserving, because the payload, which is in the encrypted part, is not analyzed at all; and it is lightweight, because we concentrate on a small part of the SSL traffic, namely the initialization of the connection. The other key assumption we made is that the malware author has complete control over the client and server applications, because he builds and spreads his own malware. Therefore, he can easily avoid following the standards of the protocol and make his "own rules". If the criminal correctly followed the rules of the protocol, our solution would not be able to detect the malicious behavior, because the data exfiltration of the botnet would be camouflaged within the benign SSL traffic. However, due to the characteristics of botnets (e.g. the short lifetime of server domains), in our opinion it is unlikely that the criminal would pay a lot of money for SSL certificates for each of those websites. Every time a web server was taken down, they would lose the money spent on the certificate (i.e. it would be considered invalid), so it would not be convenient. Some controls on SSL connections are performed by browsers.
For example, browsers check whether the certificate of the server is valid for the domain requested by the client (Figure 4.1). These controls have been introduced to avoid phishing attacks and to warn the user about possibly untrusted connections. However, no such checks exist for background SSL applications (e.g. malware). Therefore, malware authors can communicate through broken SSL connections. In this project we bring these checks to the network level, because this allows us to check more SSL connections at the same time. In this way we can spot possible SSL misbehaviors and eventually detect malicious connections.



Figure 4.1: Green bar represents a trusted SSL connection


SSL Features

Once our assumptions were made, we focused on the analysis of the protocol in order to find possible features that could be useful for the detection of malicious software. To do this we used two different perspectives:

• Find the features of SSL that could signal possible misbehaviors

• As an attacker, find the SSL features that could be useful for building a botnet

Using an offensive and a defensive approach, we were able to design some features that could be useful for the detection of a botnet. These features are represented in Table 4.1. The first selected feature is the typical validation check on x509 certificates. Here we check whether the certificate is still valid or self-signed, has expired, has been revoked, etc. The validity of the certificate can help detect misbehavior: we do not expect to see expired certificates used during server authentication, nor do we expect Facebook.com to use a self-signed certificate. These are just two examples of possible misuse scenarios. The second feature is related to the release date of the certificate. If a self-signed certificate was released two minutes before the connection was established, it can be potentially suspicious in the context of malicious software. As is well known, the lifetime of malicious domains is quite short, so it is possible that criminals generate self-signed certificates on the fly, just for a short period of time or for a specific connection. We consider self-signed certificates in this example because it is easier and cheaper to generate them with open libraries than to go to a CA (i.e. Certificate Authority) and ask for a valid certificate. Moreover, considering the short lifetime of a domain, it would not be worthwhile to spend around 100 euro on a certificate that could be revoked after a short period of time.
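This freshness check can be sketched as follows. The ten-minute threshold is an illustrative choice, not a value from the thesis, and the helper name is hypothetical:

```python
from datetime import datetime, timedelta

def freshly_issued(not_before, connection_time,
                   threshold=timedelta(minutes=10)):
    """True when the certificate's notBefore timestamp lies suspiciously
    close before the connection time (possibly generated on the fly)."""
    age = connection_time - not_before
    return timedelta(0) <= age < threshold

conn = datetime(2014, 6, 1, 12, 0, 0)
assert freshly_issued(conn - timedelta(minutes=2), conn)    # two minutes old
assert not freshly_issued(conn - timedelta(days=90), conn)  # long-lived cert
```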









Table 4.1: SSL features selected for botnet detection over SSL

• Certificate x509 Validation — What it checks: whether the x509 certificate and its chain are valid. Why it can be useful: this is a standard check in SSL, and it is important because only valid certificates (e.g. not expired) should be used during the authentication phase.

• Certificate x509 Time Validity — What it checks: the time elapsed between the generation of the certificate and the establishment of the connection. Why it can be useful: botnet domains have a short lifetime, therefore it is possible that botmasters generate self-signed certificates on the fly in order to authenticate their connections.

• Certificate Request & Certificate Verify — What they represent: in SSL they are used when mutual authentication is required by the server. Why it can be useful: mutual authentication could be exploited in an SSL botnet by the botmaster in a P2P scenario, where peers should authenticate each other.

• Hostname Contained — What it checks: whether the server name requested by the client is contained in the subject list of the x509 certificate. Why it can be useful: the certificate provided by a server should be valid for the requested domain (e.g. when connecting to google.com, we expect a certificate valid for google.com and not for facebook.com).

• Levenshtein distance for self-signed certificates — What it checks: whether, in case of a self-signed certificate, the server name is similar to one of the top 100 most visited websites. Why it can be useful: when connecting to a famous website such as Facebook, we do not expect a self-signed certificate; this could be a symptom of a man-in-the-middle attack.

• DGA - Server name — What it checks: whether the server name requested by the client is a random-looking domain. Why it can be useful: botnets are using Fast-Flux techniques these days, and it is possible that they will try to use them for SSL as well.



The third feature is related to mutual authentication. In SSL a server can ask the client to provide its own certificate in order to authenticate itself. This mutual authentication could be used in a botnet scenario, where the criminal wants his peers to authenticate with the servers. It could be exploited by criminals as a form of protection against intrusion by unknown people into their system. The fourth feature is the most relevant. It is similar to the browser checks on certificates (Figure 4.1), but it is done at the network level. It exploits the TLS extension [29] field server name (Section 3.4.1) and the x509 certificate extensions (Section 3.5.2). This SSL characteristic is very important because it shows whether a connection can be trusted or not. Every time the server name requested by the client does not match any of the domains for which the certificate is valid, the connection should be considered untrusted. As mentioned before, since the application is built by the criminal, he can use stolen or valid certificates to establish connections with malicious domains without being detected, because everything runs in the background and nobody checks it. In Figure 4.2 it is possible to see an example of a trusted connection. The first image (Figure 4.2a) represents a Client Hello request for the domain www.google.nl. Number 1 shows the type of the handshake message, number 2 indicates that the client uses the TLS extension server name, and number 3 represents the hostname requested by the client. The second image (Figure 4.2b) shows the response of the server to that specific request in a Server Hello message. Number 4 represents the type of the handshake message and in this case indicates that a certificate has been sent by the server. Number 5 represents the subject of the certificate, which in this case is *.google.com.
As we can see, the subject does not contain the server name requested by the client. However, the certificate also has a list of other domains for which it is valid, shown by number 6. There are 46 other domains for which that certificate is valid (i.e. the subjectAltName certificate extension). If we scroll the whole list we can see (Figure 4.2c) that *.google.nl is also present (number 7). The server name requested by the client is clearly a subdomain of *.google.nl, therefore the certificate is valid for the requested domain and the authentication can be considered successful. In [19] Georgiev et al. used the same approach, although in a completely different scenario. They checked the SSL certificate validation of non-browser software in order to test common SSL applications and libraries (e.g. OpenSSL, Amazon's EC2 Java client library, etc.). Their work was security testing, probing the security of tools used daily by Internet users and companies. They were able to show that many applications lacking proper certificate validation checks, including the check on the server name field, are vulnerable to man-in-the-middle attacks. Therefore, this is solid work that confirms the reliability of this feature. However, we apply it in a different context: detecting malicious software.

(a) Client Hello Request

(b) Server Hello Response

(c) Server name match

Figure 4.2: Example of a benign connection

The fifth feature is the calculation of the Levenshtein distance between the server name and a list of the 100 most visited websites [1], whenever a self-signed certificate is encountered during the handshake. This is a relevant feature because it checks whether the user is trying to connect to a famous website which replies with a self-signed certificate. When a user connects to a server (e.g. www.google.com) he should expect to receive a valid certificate,



otherwise it could be a symptom of a man-in-the-middle attack after DNS poisoning, where the user is redirected to the attacker's machine. In [27] Huang et al. analyzed forged SSL certificates for facebook.com. They analyzed over 3 million SSL connections and found that 0.2% of those connections were tampered with a forged SSL certificate. The structure of the forged certificates described by the authors varies, but the clearest common characteristic is that those certificates are self-signed. Summarizing, the forged certificates are self-signed and used to connect to facebook.com. In addition, the authors showed in their paper that most of these certificates were generated by IT security solutions (e.g. antivirus products) that mount a man-in-the-middle attack in order to analyze the SSL payload looking for malicious content. This helps us, because it highlights the fact that self-signed certificates for famous websites can be a symptom of malicious attacks. The sixth feature is the analysis of the server name field requested by the client, classifying it as random-looking or not. Looking at current trends in botnets, such as Fast-Flux techniques (Paragraph 2.1.1), we believe it is possible that botmasters will try to connect to random-looking domains for SSL connections as well, so we want to build a possible countermeasure. Summarizing, in this chapter the selected features have been described. These features are based on the assumptions we made at the beginning of our work. The validity of some of these features has been confirmed by previous research. If the assumptions and the features are correct, we will obtain a privacy-preserving and lightweight solution which is also able to detect zero-day attacks.
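The distance computation behind the fifth feature can be sketched as follows. The three-entry site list is a toy stand-in for the top-100 list from [1], and the distance threshold is an illustrative choice:

```python
def levenshtein(a, b):
    """Edit distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TOP_SITES = ["google.com", "facebook.com", "youtube.com"]  # toy top-sites list

def suspicious_self_signed(server_name, max_distance=2):
    """Flag a self-signed certificate whose server name is close to a
    popular website: a possible man-in-the-middle symptom."""
    return any(levenshtein(server_name.lower(), site) <= max_distance
               for site in TOP_SITES)

assert suspicious_self_signed("faceb00k.com")  # two substitutions from facebook.com
```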



Chapter 5

Implementation and Dataset Once the features had been selected, we needed to implement our system in order to verify and test their effectiveness. Our system is based on Bro [46], an NIDS (i.e. Network Intrusion Detection System) developed at Berkeley. We decided to use this specific software because it has its own scripting language (ad hoc for network monitoring) and a very good SSL parser, which simplifies the information-gathering phase on the network. For the first feature we base our solution on an already existing script for the Bro system, called validate-certs.bro. All the other features are implemented by us through Bro's scripting language, except the last feature (number 6, Table 4.1), related to DGA domains. The features related to mutual authentication, CertificateVerify and CertificateRequest, have been added to Bro using BinPac [45], because they were not present in the SSL parser. For the last feature (i.e. DGA domains) we decided to use an n-gram technique.


Overview of n-gram technique implementation

The implementation of our n-gram technique is represented in Figure 5.1. The technique slides a window of length n, where n is the number of grams (Paragraph 2.1.2), over an input string, and a model is built from the resulting sequences. Our implementation has two modes: a training mode, in which the model is built, and a testing mode, in which it checks whether a sequence is contained in the model or not. In our implementation the sliding window has length 4. The first step is to create the trained model (i.e. the blue arrows). To do so, we select the top 500k websites from Alexa.com, extract the TLD (i.e. Top Level Domain) of each website, and use it as input string. The string is divided into sequences of length 4 (see the facebook example in Figure 5.1), and each sequence is fed to our n-gram method, which inserts it into the model. We repeat this



procedure for all 500k TLDs. Once the trained model is ready, we can test other strings against it. We believe that the amount of data used to build this model fairly represents a set of non-random sequences. The testing phase takes place while monitoring the network. As shown in the figure, we analyze the TLD of the server name string. The string is divided into sequences, and each sequence is fed to our n-gram implementation (i.e. the red arrows), which checks whether it is present in the model or not. At the end of the process the implementation outputs a score: the number of sequences not found (i.e. the green arrows), or, as we call them, missing frames.
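The training and testing procedure above can be sketched as follows. This is a minimal illustration, not the thesis's Bro/BinPac code; the tiny training list stands in for the Alexa top-500k domains, and the function names are ours.

```python
# Sketch of the 4-gram model: train on known-good domain strings,
# score a test string by the number of 4-grams absent from the model
# (the "missing frames" of the text).

def grams(s: str, n: int = 4):
    """Slide a window of length n over the string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def train(domains, n: int = 4):
    """Build the model: the set of all n-grams seen in training."""
    model = set()
    for d in domains:
        model.update(grams(d, n))
    return model

def missing_frames(domain: str, model, n: int = 4) -> int:
    """Score = number of n-gram sequences of the domain absent from the model."""
    return sum(1 for g in grams(domain, n) if g not in model)

model = train(["facebook", "google", "twitter", "wikipedia"])
print(missing_frames("facebook", model))        # every 4-gram was seen in training
print(missing_frames("7cwxslap5dachi", model))  # random-looking: many misses
```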

Figure 5.1: n-gram Technique Implementation Representation

We use this system to decide whether a domain is randomly generated or not. We tested the technique against DGA domains (see Figure 5.2) of some well-known botnets (e.g. Zeus) that are publicly available on the Internet. The chart in Figure 5.3 shows the detection rate on the X axis and the threshold of missing frames on the Y axis. The threshold is the maximum number of missing frames for a domain to be considered benign; if it is exceeded, the domain is flagged as malicious. We analyzed four different types of domains generated by malicious applications: Upatre, Zeus, Zeus Gameover, and Cryptolocker. In the figure, the number of entries for each type of malicious domain is shown in brackets. The first three are related to the famous Zeus botnet and, as we can see, their detection rates are very similar, probably because they share the same DGA algorithm. As the results show, this n-gram technique cannot be considered a stand-alone solution for detecting malicious random-looking domains, since the detection rate for Cryptolocker drops quickly as the threshold increases. Nonetheless, we can still consider this technique a potential



solution when it is supported by other features, which is our case!
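The threshold evaluation of Figure 5.3 can be sketched as below. The tiny domain lists are illustrative, not the thesis dataset: a domain is flagged when its missing-frame score exceeds the threshold, and the detection rate is the flagged fraction of a DGA sample.

```python
# Hedged sketch of the detection-rate-vs-threshold sweep of Figure 5.3.

def grams(s, n=4):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def missing_frames(domain, model, n=4):
    return sum(1 for g in grams(domain, n) if g not in model)

def detection_rate(dga_domains, model, threshold):
    """Fraction of DGA domains whose score exceeds the benign threshold."""
    hits = sum(1 for d in dga_domains if missing_frames(d, model) > threshold)
    return hits / len(dga_domains)

model = set(g for d in ["facebook", "google", "twitter"] for g in grams(d))
dga = ["pqzvxkjh", "wmtrqplo", "facebool"]  # the last one is near-benign
for t in range(4):
    print(t, detection_rate(dga, model, t))
```

As the threshold rises, near-benign DGA names like "facebool" slip under it, mirroring the Cryptolocker drop-off described in the text.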

Figure 5.2: DGA domains of well known botnets

Figure 5.3: n-gram technique vs well-known DGA domains



To test our system we need to collect data traffic. Since we do not know whether there are botnets operating over SSL, we need network traffic that is as varied as possible. The University of Twente provided a port mirror of the gateway of the University network. The bulk of traffic of the University



allows us to capture varied traffic. Moreover, we assume that the presence of students increases our chances of finding malicious behavior, since students are usually more prone to use illegal services to access online content. Therefore, for the data collection of our experiments, we have a server, connected with a fiber cable, that captures the traffic at the edge of the University's network.


Overview of our setup

In Figure 5.4 it is possible to see an overview of the experimental setup. We capture the traffic using tcpdump on port 443 (i.e. HTTPS). The captured traffic is the mirrored traffic of the University gateway, a 10 Gbit fiber connection carrying all inbound and outbound traffic of the network. The pcap files are stored on the server and analyzed after the capture; the analysis is not done in real time. Once the traffic has been captured, we analyze it with our Bro scripts, which generate the traffic logs. We generate two different logs: one dedicated to the connections we think could be malicious, the other containing all connections, for troubleshooting purposes.

Figure 5.4: Experiment Setup

Chapter 6

Experiments

The features we have selected have to be validated through proper experiments. Since we do not know the effectiveness of these features, we run a first analysis of the traffic to see which of them are relevant. After the first analysis, we tailor the detection rules, keeping only the relevant features, in order to effectively reduce the number of false positives. These first two experiments are done on the same dataset. We also present some statistics about the analyzed traffic and the results of the analysis. The third experiment is a final test of our detection rules: the server runs for a longer period, without storing the traffic, only producing the logs, in order to measure its effectiveness and the number of false positives generated.


First Analysis

For the first analysis of our work, we collected 300 GB of SSL traffic (i.e. HTTPS traffic) and analyzed it with our Bro scripts. We generate two different logs, as mentioned before: one for warnings and one with all the information on SSL connections. Since it is not known which features are relevant, we keep the detection rules as broad as possible. We log as warnings all the connections that: • Have a server name with an n-gram score (i.e. missing frames) greater than zero • Use mutual authentication (i.e. the CertificateRequest and CertificateVerify messages are sent during the handshake) • Have a server name field that is not a subdomain of the subjects listed in the certificate • Have exchanged a self-signed certificate


• Have a Levenshtein distance value lower than or equal to 1 • Have a certificate generated less than a day before the connection was established
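The broad first-pass rules above can be combined into a single warning predicate. This is an illustrative sketch, not the actual Bro script: the per-connection record fields ("ngram_score", "mutual_auth", "hostname_contained", "self_signed", "levenshtein", "cert_age_days") are our own assumed representation of what the logs contain.

```python
# Hedged sketch of the broad warning rules: a connection is logged as a
# warning if any of the listed conditions fires. Field names are assumed.

def is_warning(conn: dict) -> bool:
    return (conn["ngram_score"] > 0           # random-looking server name
            or conn["mutual_auth"]            # CertificateRequest/Verify seen
            or not conn["hostname_contained"] # server name not in cert subjects
            or conn["self_signed"]            # self-signed certificate
            or conn["levenshtein"] <= 1       # near-match to a known domain
            or conn["cert_age_days"] < 1)     # certificate generated < 1 day ago

benign = {"ngram_score": 0, "mutual_auth": False, "hostname_contained": True,
          "self_signed": False, "levenshtein": 10, "cert_age_days": 400}
print(is_warning(benign))                           # no rule fires
print(is_warning({**benign, "self_signed": True}))  # self-signed certificate
```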

As expected, this analysis generated many false positives. We therefore manually analyzed all the warnings that were generated, in order to distinguish false positives from true positives. Moreover, we gathered some general statistics about the traffic and the warnings. In the next section, we show the results obtained from this first analysis.



As described above, the first analysis has been done using rules as broad as possible, to check the effectiveness of the selected features. The manual analysis produced surprising results. First, we are able to detect and clearly distinguish TOR traffic [52] from HTTPS. Second, we identify SSL misconfigurations on many websites. Lastly, and most importantly, we are able to detect a botnet and malware that are using SSL.

Traffic Analysis Overview

The first step is to understand the characteristics of overall SSL traffic (i.e. including both benign and malicious connections). This statistical overview has been produced by a Python script that analyzes the relevant SSL characteristics, according to our features. Table 6.1 shows general statistics on the n-gram technique and the distribution of certificate validity within the SSL traffic. The average number of missing frames is very low (0.098), which gives an idea of the difference between general SSL traffic and TOR traffic, described later. The n-gram zero ratio (i.e. the fraction of connections with zero missing frames) is significant: 98.4%. Almost all SSL traffic has zero missing frames, which confirms the quality of our trained model: most of the traffic matches sequences found in our training set (i.e. the top 500k websites). The certificate validity values are less telling. The majority of the certificates are valid (70.9%), and very few certificates in the entire traffic load are expired (0.01%). The percentage of self-signed certificates is very low (0.5%). However, for one certificate out of five it is not possible to retrieve the issuer's certificate (21.5%), and 6.79% of the certificates were not validated by the Bro script. Table 6.2 focuses on the other SSL features we analyze.
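The statistical overview above was produced with a Python post-processing script. A minimal sketch of such a step is shown below; the tab-separated column layout ("validation_status", "ngram_score") is an assumption for illustration, not the actual Bro log schema.

```python
# Hedged sketch: tally certificate-validation ratios (as in Table 6.1)
# from a tab-separated log. The toy LOG string stands in for a real file.
import collections
import csv
import io

LOG = """validation_status\tngram_score
ok\t0
ok\t0
self signed certificate\t3
unable to get local issuer certificate\t0
"""

def validation_ratios(fh):
    counts = collections.Counter()
    total = 0
    for row in csv.DictReader(fh, delimiter="\t"):
        counts[row["validation_status"]] += 1
        total += 1
    return {status: n / total for status, n in counts.items()}

ratios = validation_ratios(io.StringIO(LOG))
print(ratios["ok"])  # fraction of "ok" certificates in the toy log
```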
We can see that 83.2% of the SSL connections use the server name SSL extension. This is an important statistic, because it shows how commonly this extension is used by SSL applications. Another relevant statistic is that 98%



SSL Traffic Overview (1)

n-gram technique stats:
  Average value                       0.098
  Zero ratio                          98.4%

Certificate validity ratio:
  Ok certificate                      70.9%
  Self-signed certificate             0.5%
  Self-signed certificate in chain    0.3%
  Unable to get certificate issuer    21.5%
  Expired certificate                 0.01%
  Certificates not validated          6.79%

Table 6.1: Certificate Validity and n-gram technique SSL Stats Overview

(727,224 instances) of SSL connections follow the RFC specifications for the server name extension: most connections are properly authenticated with valid certificates. However, the remaining 2% of non-conforming uses of the extension represent 14,193 connections, which is a high number of misconfigurations. In the 98% of traffic where the hostname is contained, we see a pattern similar to the general statistics of Table 6.1: the majority of certificates are labeled "Ok Certificate" (75.97%) or "Unable to Get Certificate Issuer" (23.58%). The traffic where the hostname is not contained shows a different distribution. For 86.84% of the certificates it is not possible to get the issuer's certificate, and 4.47% of the certificates are self-signed; compared with the previous value (0.17%) this is a large increase, which means that the authors of such certificates do not follow the protocol specifications. Self-signed certificates are usually used in test environments, internal communications, or for private purposes (e.g. a student connecting to his home server); probably the libraries these authors use do not force them to follow the protocol specifications. The "Ok Certificate" value drops to 8.59%, which is expected, because it should not be common to see valid certificates used to authenticate non-allowed domains. Even so, the value is still too high, because such misconfigurations should not exist at all. Mutual authentication, as the table shows, is not a common feature in SSL communications: in our dataset the percentage of mutually authenticated traffic is 0.38%, which is very low. Most of these connections are authenticated with a valid certificate (78.8%). However, there are also some misconfigurations in applications that use mutual authentication.
In our analysis we encountered 110 mutual authentication misbehaviors, where a misbehavior is a missing CertificateVerify response to a CertificateRequest message, or vice versa.


SSL Traffic Overview (2)

Hostname domain stats:
  TLS Server Name Extension           83.2%
  Hostname contained ratio            98% (727224)
  Hostname not contained ratio        2% (14193)

Certificate validity ratio (contained / not contained):
  Ok certificate                      75.97% (8.59%)
  Self-signed certificate             0.17% (4.47%)
  Self-signed certificate in chain    0.25% (0.04%)
  Unable to get certificate issuer    23.58% (86.84%)
  Expired certificate                 0.01% (0.04%)

Mutual authentication:
  Certificate Request/Verify          0.38%
  Ok validated certificate            78.8% (21.2%)
  Misbehavior                         0.00012% (110 instances)

Table 6.2: Hostname Domain and Mutual Authentication Stats Overview

TOR

Looking at the logs, the first thing we noticed is TOR traffic. TOR is a project for online anonymity [52]. It is an onion network that exploits the SSL protocol, and therefore also uses port 443, the port we are sniffing on. A typical TOR entry has the characteristics shown in Table 6.3 and Table 6.4, which present the main characteristics of TOR traffic in terms of our features. As Table 6.3 shows, the server name field is never contained in the subject field of the certificate. The domain names in both the server name and subject fields look randomly generated. Moreover, these two fields show a clear pattern, which becomes our rule for TOR traffic detection: the server name field always starts with "www." and ends with ".com", whereas the subject field starts with "www." but ends with ".net". This rule is trivial but effective, as shown later. Furthermore, the certificate is periodically regenerated (see the Time Validity field in Table 6.4) by the TOR application running on the server, with different (random) subject values. For this reason feature number two in Table 4.1 is not effective at all for TOR traffic. The hostname in the server name field, sent in the client hello message, is randomly generated: as the table below shows, the n-gram values range from 0 to values as high as 19. Therefore this second feature is also unreliable for TOR detection. Nonetheless, the trivial rule: server name = "www. random value .com"



TOR Hostname Characteristics

  Server name                          Certificate Subject
  www.zn225q4eb.com                    www.mkkpb2qykax7rjeyvvm.net
  www.kb76zi4u7f23k6qyn2qn2uhow.com    www.ikxxgaq27k53ti3a2ug.net
  www.cryz6spb.com                     www.ikxxgaq27k53ti3a2ug.net
  www.okiamt.com                       www.ikxxgaq27k53ti3a2ug.net
  www.sjk2qqos.com                     www.mv6np5ij4zthi7cyb.net

Table 6.3: TOR Hostname Characteristics

TOR Other Characteristics

  n-gram technique    Time Validity
  5                   >1 day
  19                  >1 day
  4                   >1 day
  0                   >1 day
  4                   <10 min

Table 6.4: TOR - Features Characteristics

&& subject = "www. random value .net" proved to be very effective for TOR detection. We therefore created a third log to separate TOR traffic entries from the other SSL warnings. Some statistics about the captured TOR traffic are shown in Table 6.5, and they reveal clear patterns. The average number of missing frames for the server name is 8.69, which is quite high, especially compared with the missing-frame average of the general traffic: on average, each server name misses around eight frames (i.e. sequences of length four). The percentage of TOR connections with an n-gram value equal to zero is around 6%, which shows that our technique cannot be used as a single, reliable feature for TOR detection. The number of valid certificates is 0.2%; manual analysis showed that these connections are false positives. Therefore our TOR detection rule has a 0.24% false positive rate (i.e. FPR), which could be lowered by an additional check on certificate validation, since 99.76% of the certificates do not provide the issuer certificate. The same percentage (i.e. 99.76%) represents the server name values that are not contained in the certificate subjects, which means that, for true TOR traffic, the server name is never "contained"; the remaining 0.24% of "contained" certificates corresponds to our false positives! Regarding the validity period of the certificates, the table shows no clear pattern in their generation.
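The trivial TOR rule above can be written as a pair of regular expressions. This is an illustrative sketch, not the thesis's Bro script; the assumption that the random parts are lowercase alphanumeric matches the examples in Table 6.3.

```python
# Hedged sketch of the TOR detection rule: server name "www.<random>.com"
# paired with a certificate subject "www.<random>.net".
import re

SN = re.compile(r"^www\.[a-z0-9]+\.com$")
SUBJ = re.compile(r"^www\.[a-z0-9]+\.net$")

def looks_like_tor(server_name: str, cert_subject: str) -> bool:
    """True when both fields match the TOR naming pattern."""
    return bool(SN.match(server_name)) and bool(SUBJ.match(cert_subject))

print(looks_like_tor("www.zn225q4eb.com", "www.mkkpb2qykax7rjeyvvm.net"))  # TOR-style pair
print(looks_like_tor("www.google.com", "www.google.com"))                  # ordinary HTTPS
```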



Lastly, it is possible to see that mutual authentication is never used in TOR connections.

TOR Traffic Overview

  n-gram technique avg. (missing frames)   8.69
  n-gram technique 0 ratio                 5.9%
  Valid certificate (FPR)                  0.2% (17/7127)
  Unable to get certificate issuer         99.76% (7110/7127)
  Hostname not contained ratio             99.76%
  Cert. generated <10 min ago              262
  Cert. generated <1 day ago               276
  Cert. generated >1 day ago               6589
  Number of CertificateRequest/Verify      0%

Table 6.5: TOR Traffic Statistics Overview

Analyzing the IP addresses of the TOR traffic in our dataset, we detected a TOR node inside the University network. The logs showed that its IP was generating most of the traffic, acting as both client and server: the machine was receiving TOR connections from outside (i.e. server side) and forwarding them to another TOR node (i.e. client side). The existence of this node was confirmed by a public list of TOR nodes [53], as shown in Figure 6.1.

Figure 6.1: TOR node in TOR public list [53]

Warnings Overview

As mentioned before, we decided to keep the detection rules broad in order to understand which features are most relevant. Once all the warnings are written to the logs, we analyze them as we did for TOR and for the whole traffic, looking for relevant differences or interesting patterns. As Table 6.6 shows, some clear patterns emerge. The average n-gram value is slightly higher than in the normal traffic, and the ratio of n-gram values equal to zero is around 66%; here the n-gram score does not seem to have any important impact. The certificate validation statuses are spread across all values, so no significant pattern can be extracted from this data. The percentage of misconfigurations (server name field) is around 58%. Mutual authentication is used in



16% of the connections. This percentage is not relevant, since all these communications turned out to be benign. The certificate generation date seems to be the only interesting value, because we have a low percentage of "new" certificates; however, as with mutual authentication, these connections turned out to be benign. These statistics over the "Warnings" traffic showed that we collected many false entries and that no clear pattern could be found among them. After this analysis, we started a manual analysis of all the entries of the warning logs, in order to find possible malicious connections and to tailor our rules for a second analysis.

Warnings Traffic Overview

  n-gram technique avg value          0.589
  n-gram technique 0 ratio            66.56%
  Valid certificate                   32.34% (6117)
  Unable to get certificate issuer    34.02% (6433)
  Self-signed certificate             21.58% (4081)
  Self-signed certificate in chain    12.02% (2273)
  Expired certificate                 0.02% (5)
  Hostname contained                  58.69% (41.31% not contained)
  Mutual authentication               16.24% (3072)
  Cert. generated <10 min ago         0.037% (7)
  Cert. generated <1 day ago          1.025% (191)
  Cert. generated >1 day ago          98.937% (18708)

Table 6.6: Statistic overview over the "Warning" connections

Misconfigurations

Another important result obtained from our first analysis is the detection of misconfigurations. As shown in Table 6.2, 2% of the connections have a certificate that is not valid for the server name requested by the client. After the manual analysis we noticed two different types of misconfiguration, which we define as light and heavy. Georgiev et al. [19] analyze the same feature in their work, claiming that these missing checks can lead to man-in-the-middle vulnerabilities. In theory this is true, and we want to confirm the statement. However, we distinguish between light and heavy misconfigurations because, in practice, the first is far less vulnerable than the second. We define a light misconfiguration as one where the TLD of the server name value matches the TLD of the subject, but there are mismatches among the subdomains. Table 6.7 shows some examples. Strictly following the specifications of the protocol (RFC 6066 [29]), these should be considered



misconfigurations. However, these connections do not suggest any attempt at a malicious attack. They look like configuration mistakes made by a system administrator, and should not be considered an issue. Moreover, it would be hard for an attacker to take advantage of them: to mount a man-in-the-middle attack, he would have to generate or steal a certificate with the same TLD and a different subdomain, which means attacking the CA that issued the certificate or stealing the certificate directly from the company. Therefore it is not at all easy to exploit this misconfiguration, and this is the reason we consider it "secure".

Light misconfigurations

  Server name                       Subject
  watson.telemetry.microsoft.com    telemetry.microsoft.com
  mozilla.debian.net                debian.org
  software.itc.nl                   helpdesk.itc.nl
  cepreport.pdfm9.parallels.com     report.parallels.com
  www.euroszeilen.utwente.nl        dzeuros.snt.utwente.nl

Table 6.7: Examples of Light misconfigurations

We define a heavy misconfiguration as one where the TLD of the server name differs from the TLD of the subject. Table 6.8 shows some instances of heavy misconfigurations: connections that were authenticated even though the certificates were not valid for the requested domain. These connections are benign, but unfortunately they use SSL in the wrong way and are vulnerable to man-in-the-middle attacks [19]. An intrusion detection system cannot consider such connections malicious, because they would generate many false positives. Therefore, whether the domains match or not cannot be used as a stand-alone feature to determine the maliciousness of a connection. However, these misconfigurations can lead to a bigger problem.
Besides being vulnerable to man-in-the-middle attacks, they simply destroy the utility of the authentication scheme, and as Table 6.8 shows, even famous content providers like Akamai do not follow the specifications. This means that potentially every website hosted on Akamai servers could be vulnerable to a man-in-the-middle attack. The entire system based on X.509 certificates and TLS extensions becomes useless: if we allow somebody to misbehave in this situation, we are unable to detect those who do it intentionally. For example, if a malicious application connects to a malicious server that authenticates itself with a stolen, valid certificate for facebook.com, we cannot flag it as malicious behavior; if we did, we would also raise many false positives caused by administrators' misconfigurations. In this way we



are simply destroying a security feature (i.e. the server name TLS extension) that was introduced to strengthen the SSL authentication scheme. This scheme should apply not only to browsers but also to other applications. The freedom allowed by the standard should probably be reviewed, enforcing the checks and killing the connection on any misbehavior. It would then be much harder for malicious software to communicate with malicious servers: they would need either a valid certificate for that specific malicious domain, which would probably be too expensive, or a self-signed certificate, which is usually easy to spot. With a proper use of this TLS extension it would be impossible to use stolen certificates, because they would be instantly and easily spotted!

Heavy misconfigurations

  Server name                  Subject
  www.geek.no                  *.proisp.no
  cookiesok.com                webshop.beautifulhealth.nl
  static.app.widdit.com        *.akamaihd.net
  b.scorecardresearch.com      *.akamaihd.net
  www.predictad.com            *.androidhomebase.com

Table 6.8: Examples of Heavy misconfigurations
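The light/heavy distinction can be sketched as follows. Following the text's terminology, we compare what it calls the "TLD" of the two fields, approximated here by the naive last-two-labels heuristic; this ignores multi-part suffixes such as .co.uk and is only illustrative, not the thesis's implementation.

```python
# Hedged sketch of the light/heavy misconfiguration classification.

def registrable(host: str) -> str:
    """Naive registrable-domain approximation: the last two DNS labels."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def classify(server_name: str, subject: str) -> str:
    # Exact match, or a wildcard subject covering the server name, is fine.
    if server_name == subject or (subject.startswith("*.")
                                  and server_name.endswith(subject[1:])):
        return "ok"
    if registrable(server_name) == registrable(subject):
        return "light"   # same registrable domain, subdomain mismatch
    return "heavy"       # entirely different domains

print(classify("watson.telemetry.microsoft.com", "telemetry.microsoft.com"))  # light
print(classify("b.scorecardresearch.com", "*.akamaihd.net"))                  # heavy
```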

Botnet & Malware Detection

Besides detecting TOR and SSL misconfigurations, we achieved our most important result in detecting malicious connections. Two of these connections are malware related to spam activities through the TOR network, and six of them, we believe, belong to a botnet. Table 6.9 and Table 6.10 show the malicious connections detected during our manual analysis: the first describes the SSL extension characteristics (i.e. server name and missing frames), the second the characteristics of the certificates of these connections. The first connection is towards a Pakistani IP address. It has a very suspicious, almost random-looking server name, and the n-gram technique confirms this with a value of 6. The certificate used to authenticate the connection is self-signed, which makes it even more suspicious. As a next step, we ran the unix commands host and nslookup to understand where this domain was pointing; the response was NXDOMAIN. Final confirmation of the maliciousness of this machine was given by several blacklists (e.g. Project HoneyPot, ThreatStop, etc.), as shown in Figure 6.2. However, looking at the server name, we can see that it has TOR characteristics: the domain looks random.



The subject of the certificate does not match the pattern of TOR certificates because it belongs to the real destination server (i.e. not a request bouncing among TOR nodes), which therefore provides its own certificate. The source IP is likely an exit node of the TOR network.

Figure 6.2: ThreatStop's report [61] of the infected machine

The second malicious connection we detected involves an Iranian server. Its source IP is the TOR exit node we had detected before (Figure 6.1), and the server name field matches a TOR server name request, even though it has zero missing frames. This request is the last one in the TOR request chain (i.e. at the exit node), so the destination IP, which does not belong to the TOR network, provides a certificate with a different pattern than usual TOR servers. We found this entry suspicious because of the source IP, the mismatch between server name and certificate, and the location of the destination IP. We tried the unix commands host and nslookup and again obtained NXDOMAIN, which is expected, since TOR server name requests are random. Lastly, we found confirmation of the maliciousness of the machine by querying several blacklists (e.g. Project HoneyPot, ThreatStop, etc.), and as Figure 6.3 shows, the machine is reported as infected.
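The host/nslookup verification step above amounts to checking whether the server name resolves at all. A small sketch is shown below; to keep it testable offline, the resolver is injectable, and the fake resolver and its known hostname are our own illustrative stand-ins for real DNS.

```python
# Hedged sketch of the NXDOMAIN check: a server name that does not
# resolve adds to the suspicion of a connection.
import socket

def resolves(hostname: str, resolver=socket.getaddrinfo) -> bool:
    """True if the hostname resolves; NXDOMAIN surfaces as socket.gaierror."""
    try:
        resolver(hostname, 443)
        return True
    except socket.gaierror:
        return False

# Fake resolver standing in for DNS: only one known name.
def fake_resolver(host, port):
    if host != "www.utwente.nl":
        raise socket.gaierror("NXDOMAIN")
    return []

print(resolves("www.7cwxslap5dachi.com", fake_resolver))  # NXDOMAIN
print(resolves("www.utwente.nl", fake_resolver))
```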

Figure 6.3: ThreatStop’s report [61] of the infected machine

Malicious connections (1)

  Connection ID   IP dest   Country       Server name              n-gram technique
  1               -         Pakistan      www.7cwxslap5dachi.com   6
  2               -         Iran          www.sleihn.com           0
  3               -         Germany       (IP address)             -1
  4               -         France        (IP address)             -1
  5               -         Netherlands   (IP address)             -1
  6               -         Ukraine       (IP address)             -1
  7               -         Italy         (IP address)             -1
  8               -         Ukraine      (IP address)              -1

Table 6.9: Server name SSL extension characteristics of malicious connections

Malicious connections (2)

  Connection ID   IP dest   Cert validity                                  Subject
  1               -         self signed certificate                        OpenWrt
  2               -         self signed certificate in certificate chain   TUSROCMetro VPNSSL
  3               -         certificate has expired                        www.amazon.com
  4               -         certificate has expired                        www.amazon.com
  5               -         certificate has expired                        www.amazon.com
  6               -         certificate has expired                        www.amazon.com
  7               -         certificate has expired                        www.amazon.com
  8               -         certificate has expired                        www.amazon.com

Table 6.10: Certificate characteristics of detected malicious connections




Both malicious connections analyzed above are related to the TOR network: in both cases, the exit node present in our network connects to a server blacklisted as infected by professional services. The other six connections in Table 6.9, in our opinion, belong to a botnet operating over SSL. As we can see, these connections use the same expired certificate, which has likely been stolen. The subject of this certificate is amazon.com, one of the most visited websites on the Internet. Having an expired certificate of a famous website on several unknown servers all over Europe is quite suspicious. We checked on ThreatStop whether those IPs are infected, and found that 2 IPs out of 5 (one repeats itself) are infected machines. Figures 6.4 and 6.5 show the reports of the professional service regarding the infections. In both images the first identification is very recent: the beginning of May for the Ukrainian server and the 6th of June for the German server, meaning these threats had only recently been discovered by this professional service. However, as mentioned before, we collected our data from the 26th to the 28th of May, which means we detected the malicious connections ten days before a professional service like ThreatStop. The other three IPs are not marked as infected. Nonetheless, the connection characteristics are the same: the same stolen and expired certificate, the same hostname format misconfiguration (i.e. an IP address instead of a DNS hostname) and the same missing-frame value (the n-gram technique outputs -1 when the server name field is not a DNS hostname). Therefore, we think these communications are generated by the same malicious application, and all these IP addresses should be flagged as infected machines.
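The way the six suspected botnet connections stand out, reuse of one expired certificate across unrelated servers, can be sketched by grouping warning entries by certificate subject and validation status. The record layout below is an illustrative assumption, not the actual log format.

```python
# Hedged sketch: group warning-log entries by (subject, validity) so that
# one certificate recurring across many destinations becomes visible.
from collections import defaultdict

def group_by_certificate(conns):
    groups = defaultdict(list)
    for c in conns:
        groups[(c["subject"], c["validity"])].append(c["dst_country"])
    return groups

conns = [
    {"subject": "www.amazon.com", "validity": "certificate has expired", "dst_country": "Germany"},
    {"subject": "www.amazon.com", "validity": "certificate has expired", "dst_country": "Ukraine"},
    {"subject": "www.amazon.com", "validity": "certificate has expired", "dst_country": "Italy"},
    {"subject": "OpenWrt", "validity": "self signed certificate", "dst_country": "Pakistan"},
]

groups = group_by_certificate(conns)
# The expired amazon.com certificate recurs across several countries.
print(groups[("www.amazon.com", "certificate has expired")])
```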

Figure 6.4: ThreatStop's report [61] of the German server

Recently FireEye, a well-known security company, stated in a report [55] that a botnet called Asprox has been discovered exfiltrating data over an SSL channel. The report does not give many details about the SSL connections: to communicate, the botnet uses an RSA-encrypted SSL session, with RC4 as the stream cipher encrypting the payload. It explains that the highest peak of traffic generated by this botnet started at the end of May, and that the countries most targeted by this campaign



Figure 6.5: ThreatStop’s report [61] of the Ukraine server

are the USA, the United Kingdom, Japan, France and the Netherlands. These details from FireEye partially confirm our botnet hypothesis: our dataset was collected at the end of May (the 26th, 27th and 28th) and contains communications from the Netherlands. In addition, we examined every pcap containing these malicious connections, and all of them negotiate RSA and RC4 during the handshake. This makes us believe we have discovered SSL botnet traffic. We cannot give 100% confirmation because, for privacy reasons, we cannot analyze the infected machines; however, we strongly believe those six connections represent a botnet (e.g. Asprox). Another piece of malware, a Zeus variant, was also detected in the wild at the end of May. The malware is called Zberp (a hybrid of Zeus and Carberp), and it uses SSL as the communication protocol for its C&C. Websense Security Labs states in its article [26] that: "'Zberp' also uses SSL for its command and control (C&C) communication, but this has been seen before in other variants. We have not seen any usage of valid certificates for this, though. Typically the certificates used are self-signed or non-valid certificates that were stolen or re-used from other domains. [...] For example one of the traits that allows us to identify Zeus is its way of transmitting encrypted data with the RC4 algorithm". Therefore, the malicious connections we detected, which use stolen, expired certificates and RC4 as encryption algorithm, might belong to the Zeus botnet using this new variant. Nonetheless, we cannot determine whether the detected connections are related to Zeus or to Asprox, because we had no possibility to analyze a sample of either malware. However, we strongly believe that our malicious connections reflect the behavior of a botnet, given the similarities to these recently discovered botnets.



Other Considerations We have manually analyzed the pcap files containing the malicious connections. All these connections include the Heartbeat extension in their client hello, which is interesting because it does not normally appear in ordinary communications. It would be interesting to fingerprint the libraries with which those applications have been built, because they may have been compiled with the Heartbeat option activated. We have analyzed the length of the messages exchanged between the clients and the servers belonging to the suspected botnet. Some patterns are visible among those messages, which would confirm that the same application is used, while the communication is spread over different dislocated servers. Table 6.11 shows the quantity of data exchanged between client and server during the connections. As we can see, not all the connections had the same number of exchanges. However, we can notice the very small quantity of data exchanged. This suggests that the client and the server are exchanging only small pieces of information, perhaps commands. Additionally, the message sizes are proportionate: for instance, the first client message is answered by a somewhat larger server response.

Table 6.11: Encrypted payload characteristics of the malicious traffic. Each column lists the bytes sent per message by the client and the server for one connection (the country labels of all columns except the Netherlands were lost in extraction, and a fourth column survives only as a single 735-byte value):

                Client (bytes)   Server (bytes)
Connection 1    152              192
                95               86
                103              91
Connection 2    144              180
                99               87
                new Handshake
                264              753
Netherlands     152              191
                96               84
                new Handshake
                260              737
Nonetheless, it is hard to draw any conclusion from the analysis of the payload lengths alone. The only thing we can say is that they seem to be very similar, which is an additional parameter suggesting that the same malware is communicating with different servers from diverse infected machines. During our analysis we encountered other interesting misbehaviors of SSL applications, especially regarding the server name field. In the previous paragraphs we have seen some misbehaviors related to random domains or IP addresses within a field that should contain only a DNS hostname. In a variegated network such as a university network, we have seen other strange behaviors of the SSL protocol (all benign). In one connection we have seen a server name with the following value: scrape.php?passkey=7b9edd5c822b92f36e8fc95c92bb 21ea&info_hash=C%25ecp%2258b%2510%25fb%25f5%2520%258e%2505C %25e94%25e7%25a9%2586%25cf%253a. Other interesting traffic is generated by SPDY [51], an open networking protocol developed by Google for transporting web content. In the server name field of such connections we see hash values, like 01cf645e.32fa6d90, which clearly do not follow the standard. This indiscipline of SSL applications that run in the background raises some important issues. Since, apparently, there are no boundary checks on the server name extension, would a buffer-overflow attack against it be possible? In addition, the lack of checks could be exploited by criminals in order to use that field as a message carrier for their C&C communication infrastructure.
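Such server name anomalies can be screened mechanically. Below is a minimal sketch (the function name and the exact validity rules are our own simplification, not the thesis' implementation) that classifies a server_name value as a DNS hostname, an IP address, or "other":

```python
import ipaddress
import re

# One RFC 1123-style hostname label: letters, digits and hyphens,
# not starting or ending with a hyphen, 1-63 characters.
_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def server_name_format(name: str) -> str:
    """Classify an SNI server_name value as 'dns', 'ip' or 'other'."""
    try:
        ipaddress.ip_address(name)
        return "ip"
    except ValueError:
        pass
    if not name or len(name) > 253:
        return "other"
    labels = name.rstrip(".").split(".")
    # Require a purely alphabetic top-level label, so hash-like names
    # such as 01cf645e.32fa6d90 are rejected along with query strings.
    if all(_LABEL.match(lab) for lab in labels) and labels[-1].isalpha():
        return "dns"
    return "other"
```

Values classified as "other" would then be exactly the ones the first major-warning rule of the next section stores for deeper analysis.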


Second Analysis

The second analysis is based on the same dataset, because we want to redefine our detection rules. As we described in the previous Section 6.1.1, we obtained several results, and we use them to tailor our rules for the second part of our experiment. In this second analysis we have decided to split the warnings into two different logs: minor and major warnings. In the major warnings log we store all the connections that match the following rules:
• Has a server name format different from a DNS hostname or an IP address (e.g. the hash values)
• Has a server name format equal to an IP address and a certificate validation status different from "ok"
• Has an n-gram technique value bigger than 2 and the server name is not contained in the subjects of the certificate
• Has an n-gram technique value bigger than 2 and a self signed certificate is used



• Has a "heavy" misconfiguration (Paragraph 6.1.1) and the certificate is not self signed
• Has a self signed certificate and the Levenshtein distance is less than or equal to 1
We think these rules can detect the most important and "dangerous" misbehaviors. They are based on the results and entries we have seen in the previous analysis. The first rule detects all the server name fields that have a strange format, which we think are worth a deeper analysis. It is true that those detected before were benign connections; however, a deep check in these situations can be very useful. The second rule focuses on IP addresses present in the server name field. As we have seen before, the connections that we believe belong to a botnet were using a similar pattern. Moreover, we exclude connections with a valid certificate, because in all the cases we analyzed they were benign. However, it would be important to check (manually or automatically) whether those IPs really belong to the authenticated website, otherwise we risk missing stolen certificates. The third and the fourth rules regard the n-gram technique. In the first analysis we saw empirically that entries with a value bigger than two, combined with a server name not contained in the certificate or a self signed certificate, mostly correspond to malicious connections with few false positives. Moreover, we think that all the connections matching these parameters are likely suspicious. The fifth rule is related to stolen certificates. Whenever we have a certificate which is not self signed and is not valid for that domain, there is the risk that we are facing a stolen certificate, which can still be valid. The last rule regards the Levenshtein distance and self signed certificates. As described before in our work, we treat as suspicious those connections whose self signed certificate has a famous website as its subject.
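The last rule can be illustrated with a short sketch. The edit-distance computation is the standard dynamic-programming algorithm; the whitelist of famous websites is a hypothetical placeholder (the thesis does not list the domains it compares against):

```python
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical whitelist; any list of popular domains would do here.
FAMOUS_SITES = {"paypal.com", "google.com", "facebook.com"}

def suspicious_self_signed(subject_cn: str) -> bool:
    """Flag a self signed certificate whose subject is within edit
    distance 1 of a famous website (typosquatting-style impersonation)."""
    return any(levenshtein(subject_cn.lower(), d) <= 1 for d in FAMOUS_SITES)
```

For example, a self signed certificate issued for paypa1.com (distance 1 from paypal.com) would match the rule, while an unrelated domain would not.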
In this second draft, the generation date and mutual authentication features have been removed, because we think they are not effective and relevant for detecting malicious behaviors, at least for what we could analyze in our dataset. These rules do not target just the malicious connections, but all the possible major misconfigurations and connections that could be worth analyzing and could be considered anomalous. Table 6.12 gives an overview of the characteristics of this warning traffic. The n-gram technique average value is similar to the previous analysis. The ratio of n-gram technique values equal to zero is high (90.83%). This means that most of the misconfigurations involve server names that still have a low number of missing frames. In this analysis we have added a new parameter: a missing-frame threshold. We define several threshold values, and every time we analyze a log entry we increase a counter by one if its value is greater than the threshold. With a threshold equal to 2 we obtained only one single entry, which has value 6 and matches one of the malicious connections analyzed before (i.e. the Pakistani server). Therefore, we have decided to keep n-gram technique > 2 as a possible identifier of malicious connections. Regarding the validation of certificates, 39% of the valid certificates are involved in misconfigurations. All these certificates also do not match the server name field, therefore they represent the so-called heavy misconfiguration. For 60% of the certificates it is not possible to get their issuer. The self signed certificates involved in these warnings are very few (0.45%), and the expired certificates contained in the log correspond to the exact number of malicious connections related to the botnet (i.e. 6, or 0.25%). In this analysis we have added one more statistic, related to the format of the server name field. The large majority respects the DNS hostname format (94.69%). However, there is still a considerable number of wrong formats, which with 127 entries represent 5% of the warnings.
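The missing-frame count behind this threshold can be sketched as follows. The thesis builds its n-gram model elsewhere; here we assume, as a simplification, that the score is the number of trigrams of the requested name that never occur in a corpus of known-benign domains, with -1 as the default for values that are not DNS hostnames:

```python
def trigrams(s: str):
    """All length-3 substrings of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

# Hypothetical benign corpus; in practice this would be built from a
# large whitelist such as the Alexa top sites [1].
BENIGN_DOMAINS = ["wikipedia.org", "utwente.nl", "example.com"]
KNOWN_TRIGRAMS = set().union(*(trigrams(d) for d in BENIGN_DOMAINS))

def missing_ngram_score(server_name: str) -> int:
    """Count of trigrams of the server name unseen in the benign corpus;
    -1 when the field does not look like a DNS hostname at all."""
    if "." not in server_name:
        return -1
    return sum(1 for g in trigrams(server_name.lower())
               if g not in KNOWN_TRIGRAMS)
```

A connection whose score exceeds the empirically chosen threshold of 2 would then be flagged, while ordinary domains score near zero.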
Major Warnings Overview
n-gram technique avg value: 0.519
n-gram technique 0 ratio: 90.83%
Over threshold (> 2): 1 (value = 6, www.7cwxslap5dachi.com)
Valid certificate: 39.0% (935)
Unable to get certificate issuer: 60% (1437)
Self signed certificate: 0.45% (11)
Self signed certificate in chain: 0.25% (6)
Expired certificate: 0.25% (6)
Hostname NOT contained: 99.95% (39% valid certificates)
Hostname format: DNS = 94.69% (2268), IP = 0.66% (16), Other = 4.63% (111)

Table 6.12: Statistic Overview of Major Warnings of the second analysis

On the other hand, the minor warnings are related to those connections that do not follow the specifications of the protocol, so they have to be considered misconfigurations, but they are not significant issues from a security perspective. In the minor warnings log we store all the connections that match the following rules:
• Has a server name format equal to an IP address and the certificate is valid
• Has a minor misconfiguration (Paragraph 6.1.1)



These rules detect minor problems within the SSL traffic. We have stored these entries to understand how many true misconfigurations there are. The first rule stores all those connections that have a valid certificate but an IP address in the server name field. During the first analysis we saw many such connections, also from important companies like Apple, Facebook, Twitter, etc. The connections we found in our dataset which follow this pattern are all benign. Nonetheless, whether the IP belongs to the domains of the certificate should always be checked, because it is the only way to understand if the connection is benign or not. We do not do that because an intrusion detection system, by definition, should be passive. However, it would be a good practice, because otherwise there is the risk of not detecting malware that uses stolen but valid certificates. The second rule follows the definition of minor misconfiguration that we explained before. Table 6.13 gives an overview of the entries related to minor warnings. The average missing-frame value is negative; this means that most of the n-gram technique values are equal to zero (i.e. 93.90%) and a good number of values (i.e. 140) are negative (i.e. -1), because that is the default value for non-DNS hostname formats. No entries have more than 2 missing frames. The number of valid certificates is equal to 10.73%, which means that many connections are using certificates in a wrong way. 84.84% of the certificates in minor misconfigurations do not provide the certificate of the issuer, and 4.34% of the certificates are self signed. All of these connections fail to follow the specifications of the SSL protocol. However, 95.25% of the server name strings have a correct format; just 4.75% of the connections have an IP address instead of a DNS hostname value.
Minor Warnings Overview
n-gram technique avg value: -0.06
n-gram technique 0 ratio: 93.90%
Over threshold (> 2): 0
Valid certificate: 10.73% (316)
Unable to get certificate issuer: 84.84% (2497)
Self signed certificate: 4.34% (128)
Self signed certificate in chain: 0.06% (2)
Expired certificate: 0%
Hostname NOT contained: 100%
Hostname format: DNS = 95.25% (2803), IP = 4.75% (140)

Table 6.13: Statistic Overview of Minor Warnings of the second analysis





In Section 6.1 we analyzed the dataset using broad rules. In our warning log we stored 18909 entries out of 891110 (i.e. the entire number of connections in our dataset). These rules allowed us to collect many connections, most of them benign. However, we then analyzed those 18909 entries manually and refined our SSL features: some were discarded, some were tailored to the suspicious cases we found. In the second draft of our analysis we changed our rules and obtained 5338 entries, more than 70% less than the first attempt. All these connections, which represent 0.6% of the entire dataset, are in our opinion either suspicious or breaking the rules of the SSL protocol, even though most of them are benign connections. Our procedure let us learn which features, from a starting set, are relevant. We started with a broad approach and narrowed it later on, once we understood what SSL connections look like in real network traffic. This approach allowed us to tailor our detection rules and detect 5338 broken SSL connections.


Third Analysis

The last analysis we have done in this work is strictly related to malware detection. We have defined some detection rules that aim to detect malicious connections. These rules are based on the malicious connections we detected during the first analysis. We modified our script with the following rules:
• Log every connection that has an n-gram technique value greater than two and a self signed certificate
• Log every connection that has an n-gram technique value greater than two and there is a misconfiguration (i.e. light or heavy)
• Log every connection that has an expired certificate and there is a misconfiguration (i.e. light or heavy)
The new script ran on our server from the beginning of July to the end of July. We monitored the logs in order to see how many false positives and how many true positives we obtain. The first two rules are based on the n-gram technique value. As we have seen before, two seemed to be quite effective as the n-gram technique threshold. Therefore we want to use it again, combined with one of two other checks: a self signed certificate or a possible misconfiguration. The third rule is based on expired certificates. An expired certificate implicitly means that it was a valid certificate before. Therefore, we do not expect to find



an expired certificate on a different domain than the one described in the subject field; otherwise this would mean it has been stolen. During this period we have captured and analyzed 1 TB of SSL traffic. All the connections matching our rules have been stored in a log file, like in the previous experiments. 67 entries were written in the logs. Analyzing them manually we have seen that: 24 connections represent a misconfiguration of an expired certificate of the University of Twente, 5 are TOR connections (i.e. exit nodes), 4 connections are related to websites with a misconfiguration of their expired certificate, 6 entries have strange values in the server name field (i.e. %9d6%eat%e9%00%28%08vSQ%c9%cc) and 28 connections are using the stolen Amazon certificate. As shown in Table 6.14, 55.3% of the connections are considered malicious. Three out of five TOR connections have a destination IP address present in the ThreatStop blacklist. The six connections related to the certificate belonging to "www.thegft.com" have IP addresses present in the Project Honey Pot, therefore these connections are flagged as malicious as well. The 28 connections with Amazon's expired certificate are considered malicious due to the assumptions made in the previous chapters of this work. The 24 connections related to the internal misconfiguration of the University are benign, and they are all concentrated in a short time slot. The detection of such connections can be considered important for troubleshooting purposes.

Results 3rd Analysis

Cluster name   Number of connections   Benign       Malicious
TOR            5                       2            3
Amazon         28                      0            28
Thegft.com     6                       0            6
Other          4                       4            0
University     24                      24           0
TOTAL          67                      30 (44.7%)   37 (55.3%)

Table 6.14: Results of the Third Analysis

This experiment can be considered successful, considering the amount of collected and analyzed data and the percentage of true positives detected. Therefore these rules can be valuable for detecting malicious connections over SSL.
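The three logging rules of this third analysis can be restated as a single predicate. This is a sketch: the parameter names are our own, and the actual implementation was a modification of the Bro script used in the earlier analyses:

```python
def should_log(ngram_value, self_signed, expired, misconfiguration):
    """Return True when a connection matches any of the three logging
    rules of the third analysis.

    ngram_value      -- n-gram technique score of the server name
    self_signed      -- whether the certificate is self signed
    expired          -- whether the certificate is expired
    misconfiguration -- 'light', 'heavy' or None
    """
    misconfigured = misconfiguration in ("light", "heavy")
    return ((ngram_value > 2 and self_signed)       # rule 1
            or (ngram_value > 2 and misconfigured)  # rule 2
            or (expired and misconfigured))         # rule 3
```

For instance, a connection with an expired certificate and a heavy misconfiguration (like the Amazon cluster above) matches rule 3, while an expired but otherwise well-configured certificate does not get logged.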


Botnet symptoms

As shown in Table 6.14, 28 connections out of 67 are related to Amazon's expired certificate. Therefore, considering the first analysis, we have 34 suspected connections. We have analyzed the location of the IP addresses of those connections and we have seen that the servers are located in many different countries. Botnet servers usually have a short lifetime and change location quite often; this is a typical symptom of botnets. Therefore, even though we cannot confirm that what we have found is a real botnet, we strongly believe it could be one. Figure 6.6 shows a visual representation of the locations of the servers which were using Amazon's certificate to establish a secure connection. The IP addresses detected on ThreatStop are colored blue, the ones not detected are red. As we can see, the certificate that should authenticate one of the most famous websites on the Internet is used around the world. This is certainly not normal behavior, whether this is a botnet or not.

Figure 6.6: Locations of the servers using Amazon's certificate


Decision Tree

Besides running a third experiment using new detection rules, we have created a decision tree based on the data of the first and second analysis. We have created an input dataset of five thousand entries, including our eight malicious connections. We decided to use the four features we think are most relevant, also used in the third experiment: the number of missing frames, the certificate validation status, the format of the server name string, and whether it is contained or not in the domains present in the subject list (i.e. light and heavy misconfigurations). We used the data mining tool WEKA [12]. We tested our decision tree using cross-validation (with k equal to 10) in order to evaluate the efficiency of a decision tree based on those four features. We have selected the following algorithms: Best-First decision tree, C4.5, Random Forest, CART and Alternating decision tree. These algorithms have previously been proven by research and industry to be effective tools in intrusion detection systems [36] [44] [59]. Table 6.15 shows the results of these algorithms on our input dataset. As we can see, all the algorithms have a very high percentage of correct classifications. J48 has a percentage of false negatives equal to 100%, therefore it is not able to detect the malicious connections. The other algorithms perform well and generate the same number of false negatives. Therefore, our decision trees are able to detect malicious connections,


Algorithm (cross-validation)   Correctly Classified Instances   False Negatives   False Positives
BFTree                         99.96%                           2 (25%)           0
CART                           99.96%                           2 (25%)           0
J48 (C4.5)                     99.85%                           8 (100%)          0
Random Forest                  99.96%                           2 (25%)           0
ADTree                         99.96%                           2 (25%)           0

Table 6.15: Results of the selected data mining algorithms

with a FPR equal to 0, exploiting just four features of an SSL connection. Its detection rate is equal to 99.9636%. In the future it would be interesting to evaluate such a decision tree with real-time analysis and with a bigger dataset.



Chapter 7

Summary of the Results

In our work we have made several contributions. The most important one is the detection of malicious SSL connections, some of which, we believe, belong to a botnet that uses SSL as C&C communication channel. This was our goal for this work, and we believe we have accomplished it. Moreover, in the literature there was no detection mechanism for malware over SSL, and the main reason was that at the beginning of this project it was not known whether there were botnets working on that protocol at all. We gave an answer to this question; moreover, during the project we got a confirmation of the existence of botnets over SSL in a report from FireEye [55]. Another very important aspect of our solution is that it completely preserves privacy, because the payload is not analyzed and we inspect just the header of the protocol, and it is lightweight, because we focus on a small part of the SSL traffic. This is very important because, for instance, current (very expensive) firewalls decrypt and re-encrypt HTTPS communication in order to inspect the content and look for malware, which does not respect the privacy of the employees at all. Our solution instead is cheap, due to its characteristics, and respects the privacy of users. Moreover, those firewalls usually focus on HTTPS; our solution works with every protocol that runs on top of SSL, because we exploit SSL itself. This solution can potentially be both host-based and network-based. In addition, we were able to detect the infected machines even before professional services did (i.e. 10 days before), and some of the IPs we have detected as infected are not even listed. Therefore, it is likely that we found something really new, since the patterns of the connections are the same as those of the listed servers. Since our anomaly-based detection technique, which is not built upon existing malware, detected malicious connections, we can say that it is able to detect zero-day attacks.
A side effect of our approach allowed us to detect TOR traffic in a deterministic way, with a simple but effective rule. Moreover, we have demonstrated that a "preventive" approach



(i.e. building intrusion detection solutions without analyzing malware) can bring surprising results. Studying the protocol and the structure of botnets is fundamental in order to succeed. We have also confirmed the issues raised by Georgiev et al. in their work [19]. Many applications still use a broken SSL handshake, leaving them vulnerable to man-in-the-middle attacks. However, we distinguish these misconfigurations into two different sets, which we called light and heavy. Many famous companies are still using "bad" SSL connections, and some of them are content providers. This behavior exposes all the websites hosted on those servers to man-in-the-middle attacks, which is very dangerous. The problem lies both in the SSL standard and in its software implementations. Our suggestion, in this scenario, is to make the SSL standard more strict, so that all the connections that do not properly check the extensions are dropped. Moreover, this should be implemented and enforced in SSL libraries as well.


Limitations & Future Works

Besides all the positive aspects of our solution, we also have to consider its limitations. The first limitation is the impossibility of detecting malicious connections whenever they use valid certificates, i.e. when the connection correctly follows the specifications of the protocol. Moreover, since we cannot raise alerts for every possible (i.e. heavy) misconfiguration, due to the high number of false positives that would be generated, we are not able to detect malicious connections that are using stolen, but valid, X.509 certificates. Another limitation lies in the n-gram technique score of the server name field: whenever this value is less than two (i.e. the server name has a correct format), the requested domain is not labeled as random, and therefore it is not stored in the warnings log, unless it matches the third rule. In addition, since we do not look at the payload or at any signature, the maliciousness of the connections has to be checked with external services (i.e. we mainly used ThreatStop). During this work we found some interesting improvements that could be made on different sides: some related to the security of SSL implementations, and some to our solution itself. One interesting work could be done on the libraries used by different applications, like browsers and others, in order to check whether they use the Heartbeat extension or not. In this way we might be able to identify the SSL libraries used by malicious authors within their applications, a sort of fingerprinting technique for SSL libraries. Moreover, it would be really interesting to build a prototype of a botnet based on SSL, trying to exploit some features, like server name (Section 3.4.1) or certificate_url (Section 3.4.3), as means of transportation for botnet commands. Modern firewalls inspect the payload, therefore it is likely that such a botnet would go undetected. Furthermore, it would be interesting to



create a system which calculates a sort of "entropy" over the countries of the IPs used under the same certificate. A high number of different countries for the same certificate, in a certain time period, could probably raise some alerts for botnet detection. Lastly, another interesting test could be done on different SSL libraries, with the goal of attacking the applications by filling the header fields (e.g. server name) with long random strings, to see whether they are vulnerable to buffer overflow attacks, or to DDoS in the case of the certificate_url field. Some improvements could also be made to our solution. Other SSL extensions could be tested, to see whether they are relevant for detecting botnets. The Levenshtein distance should be substituted with a substring check, which performs the same comparison, because the Levenshtein algorithm is too "strict": checking whether a famous website is a substring of one of the domains in the list of the certificate could be more reliable. Another improvement is related to the n-gram technique. In our scenario we apply the n-gram technique just on the TLD. However, in case the TLD is shorter than 4 characters, the value is not calculated, therefore we can miss some domains, which is one of our limitations. A malicious attacker could obtain a domain shorter than 4 characters and use random subdomains for the other connections. So it would be interesting to add this additional check on subdomains when the TLD is too short. Lastly, the generation date of the certificate was not considered.
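The proposed certificate/country "entropy" could be computed as the Shannon entropy of the country distribution observed under one certificate. A minimal sketch follows; the grouping of IPs by certificate and the IP-to-country geolocation lookup are assumed to happen elsewhere:

```python
import math
from collections import Counter

def country_entropy(countries):
    """Shannon entropy (in bits) of the countries hosting the servers
    seen under a single certificate. A single country gives 0, while
    widely dispersed servers, as with the Amazon certificate above,
    push the value up."""
    counts = Counter(countries)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

An alerting rule could then fire when a certificate's entropy exceeds a threshold within a time window, e.g. four equally frequent countries yield 2 bits.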



In this work we presented a novel detection system that is able to detect malicious connections over SSL without inspecting the payload of the messages. This solution respects the privacy of the users and at the same time protects them by detecting possible infections within the network. This detection system is able to detect zero-day attacks, since it detected an infected machine before professional services like ThreatStop did. This work is important for the literature because it focuses on a problem that had not been faced before, and we proposed a potential solution for it. We have shown that it is possible to create detection algorithms in a "black-box" manner, where the malware is not available (or even not known to exist) to be analyzed; therefore the detection system is not biased by the characteristics of a single malicious software, but is built upon important characteristics of the analyzed protocol. We have also confirmed some weaknesses of the SSL protocol (i.e. broken SSL handshakes vulnerable to man-in-the-middle attacks) that were previously highlighted in research. Moreover, we have detected malicious misbehaviors on SSL, involving the SSL certificate of one of the most famous websites, that we believe could represent a botnet. This



misbehavior has been reported directly to Amazon, which is going to investigate this problem further.

Bibliography [1] Alexa. The top 500 sites on the web - http://www.alexa.com/topsites. [2] D Andriesse, C. Rossow, B. Stone-Gross, D. Plohmann, and H. Bos. Highly resilient peer-to-peer botnets are here: An analysis of gameover zeus. In Malicious and Unwanted Software:" The Americas"(MALWARE), 2013 8th International Conference on, pages 116–123. IEEE, 2013. [3] M. Antonakakis, R. Perdisci, D. Dagon, W. Lee, and N. Feamster. Building a dynamic reputation system for dns. In Proceedings of the 19th USENIX Conference on Security, pages 18–18, 2010. [4] M. Antonakakis, R. Perdisci, Y. Nadji, Vasiloglou N., S. Abu-Nimeh, W. Lee, and D. Dagon. From throw-away traffic to bots: Detecting the rise of dga-based malware. In Presented as part of the 21st USENIX Security Symposium, pages 491–506, 2012. [5] D. Ariu, R. Tronci, and Giacinto. G. Hmmpayl: An intrusion detection system based on hidden markov models. In JMLR: Workshop and Conference Proceedings 11, pages 81–87, 2010. [6] E. Athanasopoulos, A. Makridakis, S. Antonatos, D. Antoniades, S. Ioannidis, K. G. Anagnostakis, and E. P Markatos. Antisocial networks: Turning a social network into a botnet. In Information security, pages 146–160. 2008. [7] B. Bencsáth, G. Pék, L. Buttyán, and M. Félegyházi. Duqu: A stuxnet-like malware found in the wild. CrySyS Lab Technical Report, 14, 2011. [8] L. Bilge, E. Kirda, C. Kruegel, and M. Balduzzi. Exposure: Finding malicious domains using passive dns analysis. In NDSS, 2011. [9] J. R. Binkley and S. Singh. An algorithm for anomaly-based botnet detection. In Proceedings of the 2Nd Conference on Steps to Reducing Unwanted Traffic on the Internet - Volume 2, pages 7–7, 2006. [10] H. Binsalleeh, T. Ormerod, A. Boukhtouta, P. Sinha, A. Youssef, M. Debbabi, and L. Wang. On the analysis of the zeus botnet crimeware toolkit. In Privacy Security and Trust (PST), 2010 Eighth Annual International Conference on, pages 31–38, 2010. [11] J. Blasco, J. C. Hernandez-Castro, J. M. de Fuentes, and B. 
Ramos. A framework for avoiding steganography usage over http. Journal of Network and Computer Applications, 35(1):491–501, 2012. [12] R. R. Bouckaert, E. Frank, M. Hall, R. Kirkby, P. Reutemann, A. Seewald, and D. Scuse. Weka manual for version 3-7-8, 2013. [13] P. Burghouwt, M. Spruit, and H. Sips. Detection of covert botnet command and control channels by causal analysis of traffic flows. In Cyberspace Safety and Security, pages 117–131. 2013. [14] L. Cavallaro, C. Kruegel, G. Vigna, F. Yu, M. Alkhalaf, T. Bultan, L. Cao, L. Yang, H. Zheng, and C. Cipriano. Mining the network behavior of bots. Technical report, 2009.




[15] J. P. Chapman, E. Gerhards-Padilla, and F. Govaers. Network traffic characteristics for detecting future botnets. In Communications and Information Systems Conference (MCC), 2012 Military, pages 1–10, 2012. [16] M. J. Elhalabi, S. Manickam, L.B. Melhim, M. Anbar, and H. Alhalabi. A review of peer-to-peer botnet detection techniques. In Journal Computer Science, pages 169–177, 2014. [17] M. Feily, A. Shahrestani, and S. Ramadass. A survey of botnet and botnet detection. In Emerging Security Information, Systems and Technologies, 2009. SECURWARE ’09. Third International Conference on, pages 268–273, 2009. [18] J. François, S. Wang, and T. Engel. Bottrack: tracking botnets using netflow and pagerank. In NETWORKING 2011, pages 1–14, 2011. [19] M. Georgiev, S. Iyengar, S. Jana, R. Anubhai, D. Boneh, and V. Shmatikov. The most dangerous code in the world: validating ssl certificates in non-browser software. In Proceedings of the 2012 ACM conference on Computer and communications security, pages 38–49. ACM, 2012. [20] J. Goebel and T. Holz. Rishi: identify bot contaminated hosts by irc nickname evaluation. In in HotBots’07: Proceedings of the first conference on First Workshop on Hot Topics in Understanding Botnets, pages 8–8, 2007. [21] Network Working Group. Internet x.509 public key infrastructure certificate and crl profile. www.ietf.org/rfc/rfc2459.txt. [22] Network Working Group. The transport layer security (tls) protocol version 1.2. http://tools.ietf.org/html/rfc5246. [23] G. Gu, R. Perdisci, J. Zhang, and W. Lee. Botminer: Clustering analysis of network traffic for protocol and structure-independent botnet detection. In Proceedings of the 17th Conference on Security Symposium, pages 139–154, 2008. [24] G. Gu, J. Zhang, and W. Lee. BotSniffer: Detecting botnet command and control channels in network traffic. In Proceedings of the 15th Annual Network and Distributed System Security Symposium (NDSS’08), 2008. [25] C. Hsu, C. Huang, and K. Chen. 
Fast-flux bot detection in real time. In Recent Advances in Intrusion Detection, pages 464–483, 2010. [26] Websense Security LabsâĎć http://community.websense.com/blogs/securitylabs/archive/2014/06/19/zberpis-there-anything-to fear.aspx. Zberp - is there anything to fear? [27] Lin-Shung Huang, Alex Rice, Erling Ellingsen, and Collin Jackson. Analyzing forged ssl certificates in the wild. [28] Internet Engineering Task Force (IETF). The secure sockets layer (ssl) protocol version 3.0. http://tools.ietf.org/html/rfc6101. [29] Internet Engineering Task Force (IETF). Transport layer security (tls) extensions: Extension definitions. https://tools.ietf.org/html/rfc6066. [30] E. J Kartaltepe, J. A. Morales, S. Xu, and R. Sandhu. Social network-based botnet command-and-control: emerging threats and countermeasures. In Applied Cryptography and Network Security, pages 511–528, 2010. [31] R. Langner. Stuxnet: Dissecting a cyberwarfare weapon. Security & Privacy, IEEE, 9(3):49–51, 2011. [32] W. Lu, G. Rammidi, and A. A. Ghorbani. Clustering botnet communication traffic based on n-gram feature selection. Computer Communications, 34(3):502 – 514, 2010. [33] D. Macdonald. Zeus: God of http://www.fortiguard.com/legacy/analysis/zeusanalysis.html. [34] malwaredomainlist.com.
[35] J. Manuel. Another modified zeus variant seen in the wild. http://blog.trendmicro.com/trendlabs-security-intelligence/another-modified-zeus-variant-seen-in-the-wild/.
[36] J. Markey. Using decision tree analysis for intrusion detection: A how-to guide. 2011.
[37] C. Mulliner and J. P. Seifert. Rise of the ibots: Owning a telco network. In Malicious and Unwanted Software (MALWARE), 2010 5th International Conference on, pages 71–80, 2010.
[38] S. Nagaraja, A. Houmansadr, P. Piyawongwisal, V. Singh, P. Agarwal, and N. Borisov. Stegobot: A covert social network botnet. In Proceedings of the 13th International Conference on Information Hiding, pages 299–313, 2011.
[39] V. Natarajan, S. Sheen, and R. Anitha. Detection of stegobot: A covert social network botnet. In Proceedings of the First International Conference on Security of Internet of Things, pages 36–41, 2012.
[40] J. Nazario. Twitter-based botnet command channel. http://www.arbornetworks.com/asert/2009/08/twitter-based-botnet-command-channel/.
[41] Network Associates, Inc. How pgp works. http://www.pgpi.org/doc/pgpintro/.
[42] Palo Alto Networks. The modern malware review.
[43] S. Noh, J. Oh, J. Lee, B. Noh, and H. Jeong. Detecting p2p botnets using a multiphased flow model. In Digital Society, 2009. ICDS '09. Third International Conference on, pages 247–253, 2009.
[44] F. Ozturk and A. Subasi. Comparison of decision tree methods for intrusion detection. 2010.
[45] R. Pang, V. Paxson, R. Sommer, and L. Peterson. binpac: A yacc for writing application protocol parsers. In Proceedings of the 6th ACM SIGCOMM Conference on Internet Measurement, pages 289–300. ACM, 2006.
[46] V. Paxson. Bro: a system for detecting network intruders in real-time. Computer Networks, pages 2435–2463, 1999.
[47] R. Perdisci, I. Corona, D. Dagon, and W. Lee. Detecting malicious flux service networks through passive analysis of recursive dns traces. In Computer Security Applications Conference, 2009. ACSAC '09. Annual, pages 311–320, 2009.
[48] R. Perdisci, G. Gu, and W. Lee. Using an ensemble of one-class svm classifiers to harden payload-based anomaly detection systems. In Data Mining, 2006. ICDM '06. Sixth International Conference on, pages 488–498, 2006.
[49] R. Perdisci, W. Lee, and N. Feamster. Behavioral clustering of http-based malware and signature generation using malicious network traces. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation, pages 26–26, 2010.
[50] P. Porras, H. Saïdi, and V. Yegneswaran. A foray into conficker's logic and rendezvous points. In Proceedings of the 2nd USENIX Conference on Large-scale Exploits and Emergent Threats: Botnets, Spyware, Worms, and More, pages 7–7, 2009.
[51] Google. SPDY project. http://en.wikipedia.org/wiki/SPDY.
[52] Tor Project. https://www.torproject.org/.
[53] Tor Project. Tor network status. http://torstatus.blutmagie.de/.
[54] C. Rossow and C. J. Dietrich. ProVeX: Detecting botnets with encrypted command and control channels. In Detection of Intrusions and Malware, and Vulnerability Assessment, pages 21–40. 2013.
[55] A. Stewart and G. Timcang. A not-so civic duty: Asprox botnet campaign spreads court dates and malware. http://www.fireeye.com/blog/technical/malware-research/2014/06/a-not-so-civic-duty-asprox-botnet-campaign-spreads-court-dates-and-malware.html.
[56] B. Stone-Gross, M. Cova, L. Cavallaro, B. Gilbert, M. Szydlowski, R. Kemmerer, C. Kruegel, and G. Vigna. Your botnet is my botnet: Analysis of a botnet takeover. In Proceedings of the 16th ACM Conference on Computer and Communications Security, pages 635–647, 2009.
[57] W. T. Strayer, D. Lapsely, R. Walsh, and C. Livadas. Botnet detection based on network behavior. In Botnet Detection: Countering the Largest Security Threat, pages 1–24. 2008.
[58] SecureWorks Counter Threat Unit Research Team. Duqu trojan questions and answers, October 2011. http://www.secureworks.com/research/threats/duqu/.
[59] S. Thaseen and C. Kumar. An analysis of supervised tree based classifiers for intrusion detection system. In Pattern Recognition, Informatics and Mobile Engineering (PRIME), 2013 International Conference on, pages 294–299. IEEE, 2013.
[60] K. Thomas and D. M. Nicol. The koobface botnet and the rise of social malware. In Malicious and Unwanted Software (MALWARE), 2010 5th International Conference on, pages 63–70, 2010.
[61] ThreatStop. Check an ip address. http://www.threatstop.com/checkip.
[62] R. Villamarin-Salomon and J. C. Brustoloni. Identifying botnets using anomaly detection techniques applied to dns traffic. In Consumer Communications and Networking Conference, 2008. CCNC 2008. 5th IEEE, pages 476–481, 2008.
[63] K. Wang, J. J. Parekh, and S. J. Stolfo. Anagram: A content anomaly detector resistant to mimicry attack. In Recent Advances in Intrusion Detection, pages 226–248, 2006.
[64] K. Wang and S. J. Stolfo. Anomalous payload-based network intrusion detection. In Recent Advances in Intrusion Detection, pages 203–222. Springer, 2004.
[65] P. Wang, S. Sparks, and C. C. Zou. An advanced hybrid peer-to-peer botnet. Dependable and Secure Computing, IEEE Transactions on, pages 113–127, 2010.
[66] W. Wang, B. Fang, Z. Zhang, and C. Li. A novel approach to detect irc-based botnets. In Networks Security, Wireless Communications and Trusted Computing, 2009. NSWCTC '09. International Conference on, pages 408–411, 2009.
[67] M. Warmer. Detection of web based command & control channels, 2011.
[68] G. Weidman. Transparent botnet control for smartphones over sms, 2011.
[69] C. Xiang, F. Binxing, Y. Lihua, L. Xiaoyi, and Z. Tianning. Andbot: towards advanced mobile botnets. In Proceedings of the 4th USENIX Conference on Large-scale Exploits and Emergent Threats, pages 11–11, 2011.
[70] H. Xiong, P. Malhotra, D. Stefan, C. Wu, and D. Yao. User-assisted host-based detection of outbound malware traffic. In Proceedings of the 11th International Conference on Information and Communications Security, pages 293–307, 2009.
[71] S. Yadav, A. K. K. Reddy, A. L. N. Reddy, and S. Ranjan. Detecting algorithmically generated malicious domain names. In Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, pages 48–61, 2010.
[72] T. Yen and M. K. Reiter. Are your hosts trading or plotting? Telling p2p file-sharing and bots apart. In Distributed Computing Systems (ICDCS), 2010 IEEE 30th International Conference on, pages 241–252, 2010.
[73] H. R. Zeidanloo, M. J. Z. Shooshtari, P. V. Amoli, M. Safari, and M. Zamani. A taxonomy of botnet detection techniques. In Computer Science and Information Technology (ICCSIT), 2010 3rd IEEE International Conference on, pages 158–162, 2010.
[74] zeustracker.abuse.ch. Zeus tracker. https://zeustracker.abuse.ch/.