Monday, May 30, 2011

Security in the Next-Generation Data Center

Today’s data center architectures are in the crosshairs of several significant technology trends, with each presenting
fundamental security implications:
• Large-scale consolidation. To maximize economies of scale and cost efficiencies, enterprises continue to consolidate
data centers. Consequently, extremely large data centers are increasingly the norm. This concentration of computing,
storage, and networking is creating unprecedented scale requirements for network security.
• Virtualization. The nature of computing inside the data center has fundamentally changed, with workloads increasingly
moving from dedicated physical servers to multiple virtual machines. As a result, a typical application workload is now
completely mobile—it can be instantiated anywhere in the data center, and it can even be moved from one physical server
to another while running. Furthermore, the increasing trend of desktop virtualization means that any number of clients can
access a virtual desktop hosted on a server located in the data center. And, most importantly, virtual machines running
on a single server communicate via an internal virtual switch (vSwitch). This has fundamental implications for traditional
network security architectures, which were not designed with a focus on intra-server traffic.
• Service-oriented architectures and application mashups. Application architectures are evolving from being relatively
monolithic to being highly componentized. The componentized application architectures emerging today allow for more
reuse and, given that each component can be scaled separately, provide better scalability. One result of this emerging
trend is that there is starting to be more “east-west” traffic (between components in the data center) than “north-south”
traffic (between servers inside the data center and points outside the data center). This application evolution effectively
acts as a traffic multiplier and can expose additional areas of vulnerability—further pushing the scale requirements of
security mechanisms in the data center.
• Fabric architectures. Both in reaction to the above trends, and in an ongoing effort to realize improvements in
administrative efficiency, network scalability and availability, and infrastructure agility, enterprises are increasingly looking
to adopt fabric architectures, which enable many physical networking devices to be interconnected so that they are
managed and behave as one logical device. Network security infrastructures will correspondingly need to adapt to the
management and integration implications of these architectures.
These trends represent the characteristics of the next-generation data center, which will require a fundamental re-imagining
of how security gets implemented. Consequently, traditional security approaches—which were characterized by a focus on
relatively static patterns of communication, the network perimeter, and the north-south axis—will no longer suffice.


The Strategic Security Imperatives of the Next-Generation Data Center
Given the technology trends at play in the next-generation data center, effective network security approaches will need to
address the following requirements:
• Scale. Network security will need to scale to accommodate increasing traffic, more processing-intensive intelligence to
combat increasingly sophisticated threats, and more deployment options and scenarios.
• Visibility. To be effective, network security solutions will need to have more contextual visibility into relevant traffic.
• Intelligent enforcement. Security teams will need capabilities for efficiently enforcing policies on both physical and virtual infrastructure.
These core requirements are detailed in the sections below.


Scale
The next-generation data center presents fundamental implications for the scalability of network security. Following is an
overview of the areas in which this need for scale will be most evident.
More Intra-Server Traffic
In the wake of increased virtualization, industry experts estimate that network traffic will increasingly consist of traffic
between servers, as opposed to traffic between clients and servers. The decomposition of data center applications
into a mashup of reusable components will also hasten this increase in server-to-server communications inside the next-generation data center. In fact, analysts estimate that between 2010 and 2013, server-to-server traffic will grow from 5% to
75% of network traffic. Another implication of these trends is that enterprise security teams can expect that every gigabit
of capacity entering the next-generation data center via the north-south axis will typically require 10 gigabits of network
capacity on the east-west axis, and could scale up significantly from there. The increasing volumes and increasing dynamism
of this server-to-server traffic will create a corresponding increase in the criticality of network security mechanisms, and their
need to scale.
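To put this sizing implication in concrete terms, the short Python sketch below works through the arithmetic implied by the rough 1:10 north-south to east-west ratio cited above; the input figure is a hypothetical example rather than a measurement.

# Rough capacity estimate, assuming the 1:10 ratio described above: every
# gigabit of north-south capacity implies roughly 10 gigabits of east-west
# traffic that security enforcement points may need to see and inspect.
NORTH_SOUTH_GBPS = 40          # hypothetical Internet-facing capacity
EAST_WEST_MULTIPLIER = 10      # ratio cited in the text above

east_west_gbps = NORTH_SOUTH_GBPS * EAST_WEST_MULTIPLIER
print(f"North-south capacity entering the data center: {NORTH_SOUTH_GBPS} Gbps")
print(f"Estimated east-west capacity to secure: {east_west_gbps} Gbps")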
Support for Increased Bandwidth Usage
The traditional demands of perimeter protection will not go away in the next-generation data center. Plus, given the
concentration of services provided by a large next-generation data center, the aggregate bandwidth connecting the data
center to the Internet will commonly be measured in gigabits per second. These trends will only be exacerbated by the
increased prevalence of rich media applications and their associated network bandwidth requirements.
More Deployments
Emerging trends will increase the usage of network security devices. For example, given extensive server virtualization
within the next-generation data center, physical isolation can no longer be relied upon to ensure the separation of groups
of applications or users, which is a requirement of many compliance mandates and security policies. Consequently, more
network security mechanisms will be required to supply this isolation.
Given the addition of virtual desktops to the next-generation data center, campus and branch perimeter security
mechanisms such as Web security now need to be delivered inside the data center—and with high levels of scalability.
Increased Processing and Intelligence
The increased sophistication of attacks means that the computing power and memory needed to secure each session
entering the next-generation data center will expand substantially. Further, these sophisticated threats will also make it too
difficult for perimeter security alone to determine all of the potential downstream effects of every transaction encountered at
the perimeter. As a result, security teams will need to deploy additional, specialized network security services—for example,
security specifically for Web services, XML, and SQL—closer to the enterprise’s most valuable assets or “crown jewels.”
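As a simple illustration of the kind of specialized, asset-proximate inspection described above, the sketch below checks a request parameter against a few well-known SQL injection idioms. The patterns and the function are deliberately simplified illustrations, not a substitute for a real database firewall or WAF engine.

import re

# Illustrative-only signatures for a few well-known SQL injection idioms.
SQLI_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),
    re.compile(r"(?i)\bor\b\s+1\s*=\s*1"),
    re.compile(r"(?i);\s*drop\s+table\b"),
]

def looks_like_sql_injection(value: str) -> bool:
    """Return True if the request parameter matches a known-bad pattern."""
    return any(pattern.search(value) for pattern in SQLI_PATTERNS)

# Example: parameters destined for a database sitting near the "crown jewels"
print(looks_like_sql_injection("id=42"))          # False
print(looks_like_sql_injection("id=42 OR 1=1"))   # True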

Visibility and Context
Broadly speaking, there are three kinds of network security products deployed in the next-generation data center:
• Proactive—including security risk management, vulnerability scanning, compliance checking, and more
• Real time—featuring firewalls, intrusion prevention systems (IPS), Web application firewalls (WAFs), anti-malware,
and so on
• Reactive—including logging, forensics, and security information and event management (SIEM)
Across the board, rather than operating solely based on IP addresses and ports, these solutions need to be informed by a
richer set of contextual elements to enhance security visibility.

Application-Level Visibility
Visibility into applications is particularly vital—and challenging. Without visibility into what applications are actually present
on a network, it is difficult to envision an effective security policy, let alone implement one. While in theory, data centers may
be viewed as highly controlled environments to which no application can be added without explicit approval of the security
team, operational realities often mean that the security team has limited visibility into all of the applications and protocols
present on the data center network at any given time. Following are a few common scenarios that organizations contend
with on a day-to-day basis:
• Application teams build virtual machine images with a particular application in mind, but the virtual machine templates
they work with include extraneous daemons that send and receive packets on the network.
• The process for green lighting new applications is insufficiently policed and enforced, so the security team often finds out
about applications well after they have been deployed.
• In virtual desktop environments, employees use their virtual desktops to access applications outside the data center, with
the applications in use evolving on a continuous basis.
Given these realities, the next-generation data center security architecture will need to possess the capability to see and
factor in the actual application being used, rather than simply inferring it from a port number such as TCP port 80.
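A minimal sketch of this idea follows: it guesses the application protocol from the first bytes of a flow instead of trusting the destination port. The byte signatures shown (TLS handshake record, SSH banner, HTTP methods) are well known, but the function itself is an illustrative toy rather than a production classification engine.

def classify_payload(first_bytes: bytes) -> str:
    """Guess the application protocol from the first bytes of a flow,
    independent of which TCP port the traffic happens to use."""
    if first_bytes.startswith(b"\x16\x03"):   # TLS handshake record header
        return "tls"
    if first_bytes.startswith(b"SSH-"):       # SSH version banner
        return "ssh"
    if first_bytes.split(b" ", 1)[0] in (b"GET", b"POST", b"PUT", b"HEAD"):
        return "http"
    return "unknown"

# An SSH server tunnelled over port 80 is still identified as SSH:
print(classify_payload(b"SSH-2.0-OpenSSH_5.8"))            # "ssh"
print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))   # "http"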
Real-Time Context
Acquiring context in real time presents significant challenges. For example, for a firewall to be effective in the next-generation
data center, it needs to determine the following contextual aspects:
• The identity of the user whose machine initiated the connection
• Whether the user is connecting with a smartphone, laptop, tablet PC, or other device
• The software—including OS, patch level, and the presence or absence of security software—on the user’s device
• Whether the user is connecting from a wireless or wired network, from within corporate facilities, or from a coffee shop,
airport, or some other public location
• The geographic location from which the user is connecting
• The application with which the user is trying to connect
• The transaction the user is requesting from the application
• The target virtual machine image to which the user’s request is going
• The software—including the OS, patch level, and more—installed on the target virtual machine
Some of this context may be derived purely from the processing of packets that make up a session. For example, a WAF may
be able to map the URL being requested to a specific application, or a hypervisor may be able to provide specific context
about communications between virtual machines. However, other contextual information, such as information about the
type of OS on the source and destination hosts, will need to be acquired out of band.
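The sketch below models how a context-aware firewall decision might combine several of the elements listed above. The SessionContext fields and the example rule are assumptions made for illustration; real products would gather this context from identity stores, endpoint posture checks, and hypervisor integration.

from dataclasses import dataclass

@dataclass
class SessionContext:
    user: str
    device_type: str        # e.g. "laptop", "smartphone", "tablet"
    device_os_patched: bool
    network: str            # e.g. "corporate-wired", "corporate-wifi", "public"
    geo_country: str
    application: str
    transaction: str

def allow_session(ctx: SessionContext) -> bool:
    """Example rule: sensitive financial transactions are only permitted from
    patched devices connecting over corporate networks."""
    if ctx.application == "erp" and ctx.transaction == "post-journal-entry":
        return ctx.device_os_patched and ctx.network.startswith("corporate")
    # Default-deny anything the policy does not explicitly address.
    return False

ctx = SessionContext("alice", "laptop", True, "corporate-wired",
                     "US", "erp", "post-journal-entry")
print(allow_session(ctx))   # True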

Business Visibility
Many aspects of business context might also affect security policy decisions. For example, policies may be contingent upon
whether a service request is being made in relation to an end-of-quarter sale or a disaster recovery response. Clearly, stitching
together the sequence of events required to identify this business context may prove very complex, but certain shortcuts may
be possible to reduce some attack surfaces. For example, IT teams could use a global indicator in the data center that signals
when disaster recovery is in progress and only permit certain actions—such as wholesale dumping or restoring of database
tables—during those times.
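A minimal sketch of the global-indicator idea follows, assuming a single disaster recovery flag that gates high-risk operations such as bulk database dumps and restores; the flag source and operation names are hypothetical.

import os

def dr_mode_active() -> bool:
    """Hypothetical global indicator, e.g. set by disaster recovery automation."""
    return os.environ.get("DATACENTER_DR_MODE") == "1"

HIGH_RISK_OPERATIONS = {"dump-database", "restore-database"}

def operation_permitted(operation: str) -> bool:
    """Wholesale dumping or restoring of database tables is only allowed while
    the disaster recovery indicator is set; routine operations pass through."""
    if operation in HIGH_RISK_OPERATIONS:
        return dr_mode_active()
    return True

print(operation_permitted("restore-database"))  # False unless DR mode is set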

Visibility and Contextual Capabilities—Near-Term Requirements
The contextual visibility outlined above provides security teams with an overall view of what’s going on in their network and
allows them to set policies that mitigate risk and align the data center’s risk profile with business requirements.
While acquiring some of these forms of context may not be possible immediately, there are some near-term, must-have
requirements. For example, to simply return to traditional levels of control, security administrators must be able to map
virtual machine instances to IP addresses in virtualized environments. Any network security solution that cannot bridge this
gap risks irrelevance in the next-generation data center.
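One way to close this gap is to query the virtualization manager directly. The sketch below assumes the pyVmomi Python SDK for the vSphere API and builds a virtual-machine-name-to-IP-address map from a vCenter instance. The connection details are placeholders, newer pyVmomi releases may also require SSL options on SmartConnect, and the guest IP address is only reported when VMware Tools is running in the guest.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def vm_ip_map(host: str, user: str, pwd: str) -> dict:
    """Return a {vm_name: ip_address} mapping pulled from vCenter."""
    si = SmartConnect(host=host, user=user, pwd=pwd)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        # guest.ipAddress is populated only when VMware Tools is running.
        return {vm.name: vm.summary.guest.ipAddress for vm in view.view}
    finally:
        Disconnect(si)

# Placeholder connection details, for illustration only:
# print(vm_ip_map("vcenter.example.com", "readonly-user", "password"))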

Policy Enforcement in Virtualized Environments
Traditionally, policy selection has been associated with the source and destination of traffic. For example, in the case of non-virtualized workloads, the switch port can authoritatively identify the source of a given traffic flow.
For virtualized workloads, however, the identification of the workload source presents a more dynamic challenge.
Communication within a group of virtual machines on the same physical host can occur freely, even though some security
requirements mandate visibility into this traffic. Within these virtualized environments, identifying the
source and destination of traffic, mapping that traffic to specific policies, and ensuring that enforcement points execute the
policies required can pose significant challenges.
To be effective, network security mechanisms need to be able to associate policies with groups of virtual machines, and
consistently and accurately execute on those policies. To do so, security teams will need capabilities for supporting the
virtualization technologies employed within the next-generation data center. In VMware environments, this requires integration
with vCenter, which is used to create and manage groups of virtual machines. This integration is essential to enabling
security teams to manage and monitor policies through a central console. Further, given the scalability demands of the next-generation data center, this central management infrastructure needs to have capabilities for aggregating information from
multiple vCenter instances and from the physical security infrastructure, in order to maximize administrative efficiency.
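A simplified sketch of what group-based policy might look like in practice is shown below: a policy object is attached to a named group of virtual machines, and group membership is periodically re-resolved from an inventory pull (such as the vCenter mapping shown earlier) so the policy follows workloads as they are created, moved, or retired. The group names, policy structure, and refresh mechanism are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class VMGroupPolicy:
    """A firewall policy attached to a named group of virtual machines."""
    group_name: str
    allowed_ports: set
    members: dict = field(default_factory=dict)   # vm_name -> ip_address

    def refresh_members(self, vm_ip_mapping: dict, prefix: str) -> None:
        """Re-resolve group membership from an inventory pull so the policy
        tracks VMs as they are created, moved, or retired."""
        self.members = {name: ip for name, ip in vm_ip_mapping.items()
                        if name.startswith(prefix) and ip is not None}

    def permits(self, dest_ip: str, dest_port: int) -> bool:
        return dest_ip in self.members.values() and dest_port in self.allowed_ports

# Example: web-tier VMs (names prefixed "web-") may only receive HTTP/HTTPS.
policy = VMGroupPolicy("web-tier", {80, 443})
policy.refresh_members({"web-01": "10.0.1.11", "db-01": "10.0.2.21"}, "web-")
print(policy.permits("10.0.1.11", 443))   # True
print(policy.permits("10.0.2.21", 443))   # False: not in the group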
Granular Policy Enforcement
To be effective, a security enforcement point needs to have the required visibility into the traffic to which policies need to be
applied. Techniques such as VLAN partitioning can be used to ensure that a physical security appliance inspects all traffic
crossing a security trust boundary, even in the case of virtual machine to virtual machine traffic that is occurring on the same
physical host. However, this approach is suboptimal for two reasons:
1. In order to institute the requisite policy enforcement points, IT teams need to change the networking architecture, which
requires tight collaboration between networking and security teams.
2. It represents a coarse-grained approach in which all traffic between VLANs has to be routed to a separate physical
appliance before being routed to the destination VLAN. This approach does not enable finer-grained filtering, in which only a
subset of traffic would be routed to the physical security appliance. Further, even if such a capability were available, there would
be no way for the physical security appliance to avoid having to process all of the packets in a given flow when it wants to
implement a simple “permit” firewall rule.
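The contrast between the two models can be sketched briefly: in the VLAN-partitioning approach, every flow that crosses a trust boundary is hairpinned through the appliance, whereas a finer-grained model would steer only the subset of flows whose policy actually requires inspection. The flow fields and the inspection rule below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Flow:
    src_vlan: int
    dst_vlan: int
    dst_port: int

# Hypothetical example: only database-bound flows warrant full inspection.
INSPECTED_DB_PORTS = {3306, 1433}

def coarse_grained_steer(flow: Flow) -> bool:
    """VLAN partitioning: every flow crossing a VLAN (trust) boundary is
    hairpinned through the physical security appliance."""
    return flow.src_vlan != flow.dst_vlan

def fine_grained_steer(flow: Flow) -> bool:
    """Finer-grained model: only boundary-crossing flows matching an inspection
    rule are sent to the appliance; simple 'permit' traffic is switched locally
    without consuming appliance capacity."""
    return flow.src_vlan != flow.dst_vlan and flow.dst_port in INSPECTED_DB_PORTS

f = Flow(src_vlan=10, dst_vlan=20, dst_port=443)
print(coarse_grained_steer(f))   # True: hairpinned regardless of policy
print(fine_grained_steer(f))     # False: a plain permit needs no detour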
