2) PROTECTION MODELS
This paper describes some of the key efforts made by the research community to prevent attacks on network infrastructures, mainly through firewalls and Intrusion Detection Systems. We also cover some of the protection models in cloud computing. This paper is the second phase of our previous paper on Network Protection Security Threats and Attacks .
The research community has investigated cyber attack prevention models heavily. Most of the work focused on preventing such attacks by automating firewall rules and improving Access Control Lists (ACLs) on network infrastructure devices. Al-Shaer et al.  identified all anomalies that could exist in a single- or multi-firewall environment and presented a set of algorithms to detect rule anomalies within a single firewall (intra-firewall anomalies) and between inter-connected firewalls (inter-firewall anomalies) in the network. They formally defined a number of firewall policy anomalies in both centralized and distributed firewalls and proved that these are the only conflicts that could exist in firewall policies. The authors also presented the Firewall Policy Advisor , which provides a number of techniques for purifying and protecting the firewall policy from rule anomalies; an administrator may use it to manage firewall policies without prior analysis of filtering rules.
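To make the notion of an intra-firewall anomaly concrete, the following minimal sketch (not Al-Shaer et al.'s exact algorithm; rule fields, values, and the two-field model are illustrative) detects the classic "shadowing" anomaly: a rule that can never fire because every packet it matches is already matched by an earlier rule with a different action.

```python
# Simplified firewall rules as (src, dst, action); "*" is a wildcard.

def field_subset(a, b):
    """True if field value a is covered by field value b ('*' covers all)."""
    return b == "*" or a == b

def is_shadowed(rules, i):
    """Rule i is shadowed if an earlier rule covers it with a different action."""
    src, dst, action = rules[i]
    for psrc, pdst, paction in rules[:i]:
        if field_subset(src, psrc) and field_subset(dst, pdst) and paction != action:
            return True
    return False

rules = [
    ("*", "10.0.0.5", "deny"),               # rule 0: deny all traffic to 10.0.0.5
    ("192.168.1.1", "10.0.0.5", "accept"),   # rule 1: shadowed by rule 0
    ("192.168.1.1", "*", "accept"),          # rule 2: not shadowed
]
shadowed = [i for i in range(len(rules)) if is_shadowed(rules, i)]  # → [1]
```

A real detector must also handle overlapping (not just subset) matches and further anomaly classes such as redundancy, correlation, and generalization, which is what the cited algorithms enumerate.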
The authors in  analyzed the local consistency problem in firewall rule sets, with special focus on automatic, frequent rule set updates. They also proposed a real-time approach to detect inconsistencies in a firewall rule set when inserting, removing, or modifying its rules.
FAME, the Firewall Anomaly Management Environment , is an innovative policy anomaly management framework that facilitates systematic detection and resolution of firewall policy anomalies. It also has a visualization-based firewall policy analysis tool that can be used to design policies. In , the researchers designed and implemented a firewall analysis tool that allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Their tool uses a minimal description of the network topology, directly parses the various vendor-specific low-level configuration files, and interacts with the user through a query-and-answer session.
Alex Liu and his team  proposed a framework that can significantly reduce the number of rules in an access control list while maintaining the same semantics. They gave an optimal algorithm for the one-dimensional range ACL compression problem, presented a systematic solution for compressing multidimensional ACLs with mixed field constraints, and conducted extensive experiments on both real-life and synthetic ACLs. Liu and his team also proposed TCAM Razor , a systematic approach to minimizing TCAM rules for packet classifiers that is effective, efficient, and practical. While TCAM Razor does not always produce optimal packet classifiers, in their experiments with 40 structurally distinct groups of real-life packet classifiers it achieved average compression ratios of 31.3% and 29.0%. Unlike other solutions that require modifying TCAM circuits or packet processing hardware, TCAM Razor can be deployed today by network administrators and ISPs to cope with range expansion.
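The flavor of the one-dimensional problem can be seen in a toy example. Liu et al.'s published algorithm is an optimal dynamic program; the greedy merge below (with made-up ranges and actions) only illustrates the basic compression opportunity: adjacent ranges carrying the same decision can be fused into one rule.

```python
def compress_ranges(ranges):
    """ranges: list of ((lo, hi), action) partitioning the field, in order.
    Merge adjacent ranges that share the same action."""
    merged = []
    for (lo, hi), action in ranges:
        if merged and merged[-1][1] == action and merged[-1][0][1] + 1 == lo:
            (mlo, _), _ = merged[-1]
            merged[-1] = ((mlo, hi), action)   # extend the previous rule
        else:
            merged.append(((lo, hi), action))
    return merged

acl = [((0, 99), "deny"), ((100, 199), "deny"),
       ((200, 499), "accept"), ((500, 65535), "deny")]
compact = compress_ranges(acl)   # 4 rules reduced to 3
```

An optimal compressor additionally exploits a default (catch-all) rule and, in the multidimensional case, the interaction between fields, which is where the cited dynamic programming formulation comes in.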
M. Gouda et al.  proposed a model of stateful firewalls, in which the firewall stores some packets that it has accepted previously and needs to remember in the near future. Their model has several favorable properties. First, it inherits the rich results in stateless firewall design and analysis, and it provides backward compatibility such that a stateless firewall can also be specified using the model. Second, they presented methods for analyzing stateful firewalls that are specified using their model.
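The core idea can be sketched in a few lines (this is a minimal illustration, not Gouda et al.'s formal model; the rule format and default-deny policy are assumptions): remembered flows let reply traffic through even when no static rule would accept it.

```python
class StatefulFirewall:
    """Toy stateful firewall: stateless rules plus a remembered-flow table."""

    def __init__(self, rules):
        self.rules = rules   # list of (src, dst, action); "*" is a wildcard
        self.state = set()   # remembered (src, dst) flows

    def check(self, src, dst):
        # Return traffic of a remembered flow is accepted without a rule match.
        if (dst, src) in self.state:
            return "accept"
        for rsrc, rdst, action in self.rules:
            if rsrc in ("*", src) and rdst in ("*", dst):
                if action == "accept":
                    self.state.add((src, dst))   # remember the accepted flow
                return action
        return "deny"   # default deny

fw = StatefulFirewall([("10.0.0.1", "*", "accept")])
out = fw.check("10.0.0.1", "8.8.8.8")    # outbound: matches the static rule
back = fw.check("8.8.8.8", "10.0.0.1")   # reply: allowed only by state
```

Without the state table, the reply packet would fall through to the default deny; the stateless rule set alone cannot express "allow replies to connections we initiated".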
Lujo Bauer et al.  showed how to eliminate a large percentage of misconfigurations in advance of attempted accesses using a data-mining technique called association rule mining. Their methods can reduce the number of accesses that would have incurred a costly time-of-access delay by 43%, and can correctly predict 58% of the intended policy.
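A minimal sketch of the association-rule-mining idea (the access log, resource names, and confidence threshold below are invented for illustration; Bauer et al.'s system is considerably more sophisticated): from observed grants, derive rules of the form "users granted A are usually also granted B", which can flag a likely missing permission before it causes a time-of-access delay.

```python
from itertools import permutations

# Hypothetical access log: user -> set of resources they were granted.
logs = {
    "alice": {"door1", "door2", "lab"},
    "bob":   {"door1", "door2"},
    "carol": {"door1", "door2", "lab"},
    "dave":  {"door1"},
}

def mine_rules(logs, min_conf=0.6):
    """Return {(a, b): confidence} for rules 'a -> b' above the threshold."""
    resources = set().union(*logs.values())
    rules = {}
    for a, b in permutations(resources, 2):
        has_a = [u for u in logs if a in logs[u]]
        if not has_a:
            continue
        conf = sum(1 for u in has_a if b in logs[u]) / len(has_a)
        if conf >= min_conf:
            rules[(a, b)] = conf
    return rules

rules = mine_rules(logs)
# Every user with door2 also has door1, so (door2 -> door1) has confidence 1.0;
# only half of the door1 holders have lab, so (door1 -> lab) is filtered out.
```

A mined rule such as (door2 -> door1) would suggest proactively granting door1 to a new door2 holder, which is the mechanism behind the reported reduction in time-of-access delays.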
B. Hari et al.  proposed a new scheme for conflict resolution based on the idea of adding resolve filters. Their main results are algorithms for detecting and resolving conflicts in a filter database. They tried their algorithm on three existing firewall databases and found conflicts, which are potential security holes, in each of them. A general solution is presented for the k-tuple filter, and an optimized version is described for the more common 2-tuple filters consisting of source and destination addresses. They also showed how to use the 2-tuple algorithm for the 5-tuple case in which the other three tuples have a restricted set of values.
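The conflict condition for 2-tuple filters can be sketched as follows (a simplified illustration, with address prefixes modeled as string prefixes): two filters conflict when their match regions overlap, neither contains the other, and their actions differ; the fix is a "resolve filter" covering exactly the intersection, placed ahead of both.

```python
def overlap(p, q):
    """Intersection of two prefixes, or None if they are disjoint."""
    if q.startswith(p):
        return q
    if p.startswith(q):
        return p
    return None

def conflict_region(f1, f2):
    """Return the (src, dst) region a resolve filter should cover, or None."""
    (s1, d1, a1), (s2, d2, a2) = f1, f2
    s, d = overlap(s1, s2), overlap(d1, d2)
    if s is None or d is None or a1 == a2:
        return None
    # A true conflict only arises when neither filter contains the other.
    contains_12 = s2.startswith(s1) and d2.startswith(d1)
    contains_21 = s1.startswith(s2) and d1.startswith(d2)
    if contains_12 or contains_21:
        return None
    return (s, d)

f1 = ("10.",   "20.1.", "accept")   # src 10.*,   dst 20.1.*
f2 = ("10.2.", "20.",   "deny")     # src 10.2.*, dst 20.*
region = conflict_region(f1, f2)    # intersection: src 10.2.*, dst 20.1.*
```

Packets in the intersection (src 10.2.*, dst 20.1.*) match both filters with contradictory actions, so their fate depends on rule order; inserting an explicit resolve filter for that region makes the intended behavior unambiguous.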
M. Waldvogel et al.  described an algorithm with both intellectual and practical contributions. On the intellectual side, beyond the basic notion of binary search on hash tables, they found that they had to add markers and use precomputation to ensure logarithmic time in the worst case; algorithms that only use binary search of hash tables are unlikely to provide logarithmic worst-case time. They singled out mutating binary trees as an aesthetically pleasing idea that leverages the extra structure inherent in their particular form of binary search. On the practical side, they offered a fast, scalable solution for IP lookups that can be implemented in either software or hardware. Their software projections for IPv4 are 80 ns, and they expect 150-200 ns for IPv6. Their average-case speed projections are based on the structure of the existing routing databases that they examined, and the overall performance can easily be restricted to that of the basic algorithm.
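The interplay of markers and precomputation can be shown in a toy version of the scheme (8-bit addresses written as bit strings; a heavily simplified sketch, not the authors' implementation): one hash table per prefix length, markers inserted at shorter lengths to steer the binary search toward longer prefixes, and each marker precomputing its own best-matching prefix so a failed longer search still returns the right answer.

```python
def best_match(bits, prefixes):
    """Longest real prefix of `bits` in the table (naive scan, for setup only)."""
    for l in range(len(bits), 0, -1):
        if bits[:l] in prefixes:
            return prefixes[bits[:l]]
    return None

def build_tables(prefixes):
    """One hash table per prefix length, plus markers for longer prefixes."""
    lengths = sorted({len(p) for p in prefixes})
    tables = {l: {} for l in lengths}
    for p, hop in prefixes.items():
        tables[len(p)][p] = hop                     # a real prefix matches itself
    for p in prefixes:
        for l in lengths:
            if l < len(p) and p[:l] not in tables[l]:
                # Marker steering the search longer; it stores the precomputed
                # best match so far in case the longer search finds nothing.
                tables[l][p[:l]] = best_match(p[:l], prefixes)
    return lengths, tables

def lookup(addr, lengths, tables):
    """Binary search over prefix lengths; return the best-matching next hop."""
    lo, hi, best = 0, len(lengths) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        l = lengths[mid]
        if addr[:l] in tables[l]:
            hit = tables[l][addr[:l]]
            if hit is not None:
                best = hit
            lo = mid + 1    # hit or marker: try longer prefixes
        else:
            hi = mid - 1    # miss: try shorter prefixes
    return best

prefixes = {"1": "A", "101": "B", "10111": "C"}
lengths, tables = build_tables(prefixes)
hop = lookup("10111010", lengths, tables)    # longest match "10111" -> "C"
hop2 = lookup("10000000", lengths, tables)   # falls back to "1" -> "A"
```

With W possible prefix lengths the search probes only O(log W) tables, which is the source of the logarithmic bound; without the precomputed values in markers, a marker could steer the search past the true best match.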
The goal of the work in  was to design and implement a high-performance, modular, extended integrated services router software architecture in the NetBSD operating system kernel. This architecture allows code modules, called plugins, to be dynamically added and configured at run time. M. Al-Fares et al.  showed in their paper how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, they argued that appropriately architected and interconnected commodity switches may deliver more performance at less cost than today's higher-end solutions. Their approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
M. Abedin et al.  presented an automated process for detecting and resolving such anomalies. The anomaly resolution algorithm and the merging algorithm produce a compact yet anomaly-free rule set that is easier to understand and maintain. These algorithms can also be integrated into policy advisor and editing tools. They also established a complete definition and analysis of the relations between rules.
H. Hu et al.  presented an innovative mechanism that facilitates systematic detection and resolution of XACML policy anomalies. A policy-based segmentation technique was introduced to achieve effective anomaly analysis. They also described an implementation of a policy anomaly analysis tool called XAnalyzer. Their results showed that a policy designer could easily discover and resolve anomalies in an XACML policy with the help of XAnalyzer.
D. A. Applegate et al.  considered a geometric model for the problem of minimizing access control lists (ACLs) in network routers. Their goal was to create a colored rectilinear pattern within an initially white rectangular canvas, where the basic operation is to choose a subrectangle and paint it a single color, overwriting all previous colors in the rectangle. Rectangle Rule List (RRL) minimization is the problem of finding the shortest list of rules needed to create a given pattern. They provided several equivalent characterizations of the patterns achievable using strip-rules and presented polynomial-time algorithms for optimally constructing such patterns when, as in the ACL application, the only colors are black and white (permit or deny). They also showed that RRL minimization is NP-hard in general and provided O(min(n^(1/3), OPT^(1/2)))-approximation algorithms for general RRL and ACL minimization by exploiting their results about strip-rule patterns. This work was substantial; it did not, however, address the integrity of routers' Access Control Lists. Consequently, Ahmat and Elnour  investigated the integrity of routers' ACLs in large enterprise networks. More specifically, they studied the problem of discovering and eliminating redundant ACLs from multiple routers' configurations and described efficient methods for removing such redundancies. They also implemented the algorithms they proposed and validated their practicality, showing that their approach can discover potential security holes in complex network infrastructures.
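A simple instance of cross-router redundancy can be sketched as follows (an illustrative toy, not Ahmat and Elnour's actual method; the rule format and topology assumption of a single upstream router are invented): a rule on a downstream router is redundant if some rule on the upstream router already applies the same action to all traffic it matches.

```python
# Rules are (src, dst, action); "*" is a wildcard; addresses are plain strings.

def covers(rule, other):
    """True if `rule` matches every packet `other` matches, with the same action."""
    (s1, d1, a1), (s2, d2, a2) = rule, other
    return a1 == a2 and s1 in ("*", s2) and d1 in ("*", d2)

def redundant_downstream(upstream_acl, downstream_acl):
    """Indices of downstream rules fully covered by some upstream rule."""
    return [i for i, r in enumerate(downstream_acl)
            if any(covers(u, r) for u in upstream_acl)]

upstream = [("*", "10.0.0.0/8", "deny")]
downstream = [("172.16.0.1", "10.0.0.0/8", "deny"),      # already denied upstream
              ("172.16.0.1", "192.168.0.0/16", "deny")]  # not covered upstream
red = redundant_downstream(upstream, downstream)         # → [0]
```

A production tool must reason about actual prefix containment, rule order, and every forwarding path between the two devices before declaring a rule removable; the sketch only shows why multi-router analysis finds redundancies that single-firewall analysis cannot.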
Y. Bartal et al.  presented an initial design and implementation of a prototype for a new generation of firewall and security management tools and showed its usefulness on a real-world example. They demonstrated that firewall and security configuration/management can be done successfully at a level of abstraction analogous to modern programming languages rather than assembly code, an important first step towards the convergence of security and network management. Later, M. Gritter et al.  described a content routing design based on name-based routing as part of an explicit Internet content layer. Content routing is a natural extension of current Internet directory and routing systems, allows efficient content location, and can be implemented to scale with the Internet. Their results indicate that client name lookup becomes faster and far less variable.
Cloud computing has attracted the attention of the research community due to its potential to provide tremendous benefits to industry and the community [25,26]. Security is one of the biggest obstacles that hamper the widespread adoption of cloud computing . Several business and research organizations are reluctant to completely trust cloud computing and shift their digital assets to third-party service providers .