Working alongside cloud security management professionals, you need a detailed understanding of how the cloud environment is managed and operated.
As complex networked systems, clouds face the traditional computer and network security issues of availability, integrity, and confidentiality (AIC).
By imposing uniform management practices, clouds may be able to improve on some security issues, such as patch deployment and incident response.
Clouds, however, also have the potential to aggregate an unprecedented quantity and variety of customer data in cloud data centers.
This concentration of data demands a high degree of confidence and transparency: customers must be able to trust that cloud security management professionals can keep their data isolated and protected.
Also, cloud users and administrators rely heavily on web browsers, so browser security failures can lead to cloud security breaches.
When considering management-related activities and the need to control and organize them to ensure accuracy and limit unintended impact, you need to think about the effect of change.
It is important to schedule system repair and maintenance, as well as customer notifications, to ensure that they do not disrupt the organization’s systems.
When scheduling maintenance, cloud security management professionals need to ensure adequate resources are available to meet expected demand and SLA requirements.
You should ensure that appropriate change management procedures are implemented and followed for all systems, and that scheduling and notifications are communicated effectively to all parties that may be affected by the work.
Consider using automated system tools that send out messages.
Traditionally, a host system is placed into maintenance mode before starting any work on it that requires system downtime, rebooting, or disruption of services.
For the host to be placed into maintenance mode, the VMs currently running on it have to be powered off or moved to another host.
All major virtualization vendors support automated solutions, such as workflows or scheduled tasks, for placing a host into maintenance mode, and you should be aware of them.
Regardless of whether the decision to enter maintenance mode is manual or automated, ensure that all appropriate security protections and safeguards continue to apply: both to hosts while they are in maintenance mode and to VMs while they are being moved to, and managed on, alternate hosts as a result of maintenance activities on their primary host.
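The evacuate-then-enter sequence described above can be sketched in a few lines. This is a minimal, platform-neutral illustration; `enter_maintenance_mode` and its round-robin placement are hypothetical simplifications, and a real orchestrator would also weigh host capacity, affinity rules, and SLA constraints.

```python
def enter_maintenance_mode(host, vms, alternate_hosts):
    """Evacuate running VMs from a host, then mark it as in maintenance.

    Hypothetical helper for illustration: real virtualization platforms
    expose equivalent operations through their own management APIs.
    """
    if not alternate_hosts:
        # A host cannot enter maintenance mode while guests are still
        # running on it and there is nowhere to move them.
        raise RuntimeError("no alternate hosts available for VM evacuation")
    migrated = {}
    for i, vm in enumerate(vms):
        # Round-robin the evacuated VMs across the remaining hosts.
        target = alternate_hosts[i % len(alternate_hosts)]
        migrated[vm] = target
    return {"host": host, "state": "maintenance", "migrations": migrated}
```

For example, `enter_maintenance_mode("esx01", ["vm1", "vm2"], ["esx02", "esx03"])` returns a plan that spreads the two guests across the two surviving hosts before `esx01` is taken down.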
When considering management-related activities and the need to control and organize them to ensure accuracy and limit unintended impact, you need to think about the effect of automation.
Most virtualization platforms automate the orchestration of system resources, so little human intervention is required.
The goal of cloud orchestration is to automate the configuration, coordination, and management of software and software interactions.
The process involves automating the workflows required for service delivery.
Tasks involved include managing server runtimes and directing the flow of processes among applications.
The orchestration capabilities of the virtualization platforms should meet the SLA requirements of the cloud security management professionals.
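At its core, orchestration is the automated execution of dependent workflow tasks in the correct order. The sketch below is a deliberately minimal illustration of that idea, not any vendor's engine; `run_workflow`, the task names, and the dependency map are all hypothetical.

```python
def run_workflow(tasks, deps):
    """Run callables in an order that satisfies the dependency map.

    tasks: mapping of task name -> zero-argument callable.
    deps:  mapping of task name -> list of task names it depends on.
    Assumes the dependency graph is acyclic and that every dependency
    is itself a task in `tasks`.
    """
    done, order = set(), []

    def visit(name):
        if name in done:
            return
        for dep in deps.get(name, []):
            visit(dep)          # run prerequisites first
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        visit(name)
    return order
```

For instance, with tasks `network`, `vm`, and `app`, where `vm` depends on `network` and `app` depends on `vm`, the runner provisions the network before the VM and the VM before the application, mirroring how orchestration directs the flow of processes among services.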
Building a Logical Infrastructure for Cloud Environments
The logical design of the cloud environment should include redundant resources, meet the requirements for anticipated customer loading, and embrace the secure configuration of hardware and guest virtualization tools.
Logical design is the part of the design phase of the software development lifecycle in which all functional features of the system chosen for development in the analysis are described independently of any computer platform.
The following is true about the logical design for a network:
- It lacks specific details such as technologies and standards while focusing on the needs at a general level.
- It communicates with abstract concepts, such as a network, router, or workstation, without specifying concrete details.
Abstractions for complex systems, such as network designs, are important because they simplify the problem space so humans can manage it.
An example of a network abstraction is a WAN, which carries data between remote locations.
To understand a WAN, you do not need to understand the physics behind fiber-optic data communication, although WAN traffic may be carried over optical fiber, satellite, or copper wire.
Someone specifying the need for a WAN connection on a logical network diagram can understand the concept of a WAN connection without understanding the detailed technical specifics behind it.
Logical designs are often described using terms from the customer’s business vocabulary. Locations, processes, and roles from the business domain can be included in a logical design.
An important aspect of a logical network design is that it is part of the requirements set for a solution to a customer problem.
The basic idea of physical design is that it communicates decisions about the hardware used to deliver a system.
The following is true about a physical network design:
- It is created from a logical network design.
- It often expands elements found in a logical design.
For instance, a WAN connection on a logical design diagram can be shown as a line between two buildings.
When transformed into a physical design, that single line can expand into the connection, routers, and other equipment at each end of the connection.
The actual connection media might be shown on a physical design, along with manufacturers and other qualities of the network implementation.
Secure Configuration of Hardware-Specific Requirements
The support that different hardware provides for a variety of virtualization technologies varies.
Use the hardware that best supports the chosen virtualization platform.
Incorrect BIOS settings may degrade performance, so follow the vendor-recommended guidance for the configuration of settings.
For instance, if you are using VMware’s distributed power management (DPM) technology, you would need to turn off any power management settings in the host BIOS because they could interfere with the proper operation of DPM.
Be aware of the requirements for secure host configuration based on the vendor platforms being used in the enterprise.
Storage Controllers Configuration
The following should be considered when configuring storage controllers:
- Turn off all unnecessary services, such as web interfaces and management services that will not be needed or used.
- Validate that the controllers can meet the estimated traffic load based on vendor specifications and testing (e.g., 1 Gbps, 10 Gbps, 16 Gbps, or 40 Gbps).
- Deploy a redundant failover configuration, such as a NIC team.
- Consider deploying a multipath solution.
- Change default administrative passwords for configuration and management access to the controller. Note that specific settings vary by vendor.
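A checklist like the one above lends itself to automated auditing. The sketch below assumes a hypothetical `config` dictionary exported from a controller; real controllers expose these settings through vendor-specific management interfaces, so the keys shown are illustrative.

```python
# Desired state for each audited setting (hypothetical key names).
REQUIRED = {
    "web_interface": False,     # unnecessary services disabled
    "default_password": False,  # default admin password no longer in use
    "multipath": True,          # redundant paths configured
}

def audit_controller(config):
    """Return the names of settings that deviate from the checklist."""
    return [key for key, want in REQUIRED.items() if config.get(key) != want]
```

A controller whose web interface is still enabled would be flagged, while a controller matching the checklist returns an empty deviation list.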
The two networking models that should be considered are traditional and converged.
Traditional Networking Model
A traditional model is a layered approach with physical switches at the top layer and logical separation at the hypervisor level.
This model allows for the use of traditional network security tools.
There may be some limitations on the visibility of network segments between VMs.
Converged Networking Model
The converged model is optimized for cloud deployments and utilizes standard perimeter protection measures.
The underlying storage and IP networks are converged to maximize the benefits of a cloud workload.
This method facilitates the use of virtualized security appliances for network protection.
You can think of a converged network model as being a super network, one that is capable of carrying a combination of data, voice, and video traffic across a single network that is optimized for performance.
Running a Logical Infrastructure for Cloud Environments
There are several considerations for the operation and management of cloud infrastructure. A secure network configuration assists in isolating customer data and helps prevent or mitigate denial-of-service (DoS) attacks.
Numerous key methods are widely used to implement network security controls in a cloud environment, including physical devices, converged appliances, and virtual appliances.
You need to be familiar with standard best practices for secure network design, such as defense-in-depth, as well as the design considerations specific to the network topologies you may be managing, such as single-tenant versus multitenant hosting systems.
Further, you need to be familiar with the vendor-specific recommendations and requirements of the hosting platforms you support.
Building a Secure Network Configuration
The information in this section is merely a high-level summary of the functionality of the technology being discussed.
Please refer to the “Running a Physical Infrastructure for Cloud Environments” section of this domain, earlier in this chapter, for specific details as needed when reviewing this material.
VLANs: Allow for the logical isolation of hosts on a network. In a cloud environment, VLANs can be utilized to isolate the management network, storage network, and customer networks. VLANs can also be used to separate customer data.
TLS: This allows for the encryption of data in transit between hosts. Implementation of TLS for internal networks prevents the sniffing of traffic by a malicious user. A TLS VPN is one method to allow for remote access to the cloud environment.
DNS: DNS servers should be locked down. They should offer only required services and should use Domain Name System Security Extensions (DNSSEC) when feasible. DNSSEC is a set of DNS extensions that provide origin authentication, data integrity, and authenticated denial of existence for DNS data. Zone transfers should be disabled. If an attacker compromises DNS, he may be able to hijack or reroute data.
IPSec: An IPSec VPN is one method of remotely accessing the cloud environment. If an IPSec VPN is used, IP whitelisting (allowing only approved IP addresses) is considered a best practice for access.
Two-factor authentication can also be used to enhance security.
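As an illustration of the TLS point above, Python's standard `ssl` module can build a hardened client-side context in a few lines. The minimum-version choice shown is an illustrative policy decision, not a requirement stated in the text.

```python
import ssl

# create_default_context() enables certificate verification and
# hostname checking by default; we additionally refuse protocol
# versions older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Use context.wrap_socket(sock, server_hostname=...) when connecting,
# so traffic between hosts cannot be sniffed in cleartext.
```

Such a context would typically protect management traffic on internal networks as well as remote-access connections.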
OS Hardening via Application Baseline
The concept of using a baseline, which is a preconfigured group of settings, to secure, or harden, a machine is a common practice.
The baseline should be configured to allow only the minimum services and software that are required to ensure that the system can perform as needed.
A baseline configuration should be established for each OS and the virtualization platform in use.
The baseline should be designed to meet the most stringent customer requirement.
There are numerous sources for recommended baselines.
By establishing a baseline and continuously monitoring for compliance, the provider can detect any deviations from the baseline.
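Continuous compliance monitoring reduces, at its simplest, to comparing the captured baseline against the current system state. The sketch below uses hypothetical dictionaries of service states; real scanners collect this data from agents or configuration-management tooling.

```python
def detect_drift(baseline, current):
    """Return settings whose current value deviates from the baseline.

    Each deviation maps the setting name to a
    (baseline_value, current_value) pair; a missing setting on either
    side shows up as None.
    """
    drift = {}
    for key in baseline.keys() | current.keys():
        if baseline.get(key) != current.get(key):
            drift[key] = (baseline.get(key), current.get(key))
    return drift
```

A host whose telnet service has been re-enabled since the baseline was captured would be reported as a single deviation, prompting investigation or remediation.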
Capturing a Baseline
The cloud security management professionals should consider the items outlined next as the bare minimum required to establish a functional baseline for use in the enterprise.
There may be other procedures that would be engaged in at various points, based on specific policy or regulatory requirements pertinent to a certain organization.
If needed, the cloud security management professionals can refer to many sources of guidance on the methodology for creating a baseline.
- A clean installation of the target OS must be performed (physical or virtual).
- All nonessential services should be stopped and set to disabled to ensure that they do not run.
- All nonessential software should be removed from the system.
- All required security patches should be downloaded and installed from the appropriate vendor repository.
- All required configuration of the host OS should be accomplished per the requirements of the baseline being created.
- The OS baseline should be audited to ensure that all required items have been configured properly.
- Full documentation should be created, captured, and stored for the baseline being created.
- An image of the OS baseline should be captured and stored for future deployment. This image should be placed under change management control and have appropriate access controls applied.
- The baseline OS image should also be placed under the configuration management (CM) system and cataloged as a CI.
- The baseline OS image should be updated on a documented schedule for security patches and any additional required configuration updates as needed.
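Placing the baseline image under change and configuration management implies being able to prove it has not been altered. A cryptographic fingerprint recorded alongside the CI supports that; the helper below is a minimal sketch using Python's standard `hashlib`, with a hypothetical function name.

```python
import hashlib

def image_fingerprint(image_bytes):
    """SHA-256 fingerprint of a baseline image, recorded with its CI.

    Recomputing the fingerprint before deployment and comparing it to
    the cataloged value detects unauthorized modification of the image.
    """
    return hashlib.sha256(image_bytes).hexdigest()
```

Any change to the image, even a single byte, produces a different fingerprint, so a mismatch against the CM record signals that the image must not be deployed.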
Baseline Configuration by Platform
There are several differences between Windows, Linux, and VMware configurations. The following sections examine them.
Microsoft provides several tools to measure the security baseline of a Windows system.
- The use of a toolset such as the Windows Server Update Service (WSUS) makes it possible to perform patch management on a Windows host and monitor for compliance with a pre-configured baseline.
- The Microsoft Deployment Toolkit (MDT), either as a standalone toolset or integrated into the System Center Configuration Manager (SCCM) product, allows you to create, manage, and deploy one or more Microsoft Windows Server OS baseline images.
- One or more of the Best Practice Analyzers (BPAs) that Microsoft makes available should also be considered.
The actual Linux distribution in use plays a large part in helping to determine what the baseline deployment will look like.
The security features of each Linux distribution should be considered, and the one that best meets the organization’s security requirements should be used.
However, you still should be familiar with the recommended best practices for Linux baseline security.
VMware vSphere has built-in tools that allow the user to build custom baselines for their specific deployments.
These tools range from host and storage profiles, which force the configuration of an ESXi host to mirror a set of pre-configured baseline options, to the VMware Update Manager (VUM) tool, which allows one or more ESXi hosts, and the VMs running on them, to be updated with the latest VMware security patches. VUM can also be used to monitor compliance with a pre-configured baseline.
Availability of a Guest OS
The mechanisms available to the cloud security management professionals to ensure the availability of the guest OSs running on a host are varied.
Redundant system hardware can be used to avert system outages due to hardware failure.
Backup power supplies and generators can be used to ensure that the hosts have power, even if the electricity is cut off for some time.
In addition, technologies such as HA and fault tolerance are important to consider.
HA should be used when the goal is to minimize the impact of system downtime.
Fault tolerance should be used when the goal is to eliminate system downtime as a threat to system availability altogether.
Different customers in the cloud environment have different availability requirements.
These can include things such as live recovery and automatic migration if the underlying host goes down.
Every cloud vendor has its specific toolsets available to provide for HA on its platform.
It is your responsibility to understand the vendor’s requirements and capabilities within the HA area and to ensure these are documented properly as part of the DRP/BCP processes within the organization.
Network components, storage arrays, and servers with built-in fault-tolerance capabilities should be utilized. In addition, if there is a fault-tolerance solution that a vendor makes available via software implementation that is appropriately scaled for the level of fault tolerance required by the guest OS, consider it as well.
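The HA behavior described above, restarting guests from a failed host on the surviving hosts, can be modeled simply. The sketch below is a toy placement model with hypothetical host and VM names; real HA implementations also factor in admission control, reserved failover capacity, and restart priorities.

```python
def ha_failover(placement, failed_host):
    """Reassign guests from a failed host onto the least-loaded survivors.

    placement: mapping of host name -> list of guest VM names.
    Returns a new placement that no longer includes the failed host.
    """
    survivors = {h: list(vms) for h, vms in placement.items()
                 if h != failed_host}
    if not survivors:
        # With no hosts left, the availability goal cannot be met.
        raise RuntimeError("no surviving hosts to restart guests on")
    for vm in placement.get(failed_host, []):
        # Greedy placement: pick the survivor currently running the
        # fewest guests.
        target = min(survivors, key=lambda h: len(survivors[h]))
        survivors[target].append(vm)
    return survivors
```

After a host failure, every guest is accounted for on a surviving host, which is the essence of the HA restart guarantee (as opposed to fault tolerance, which avoids the outage entirely).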