Best practices for cloud security are crucial to building a robust environment that resists vulnerabilities and attacks. The actual hardware settings depend on the chosen operating system (OS) and virtualization platform.
Best Practices for Servers
Implement the following best practice recommendations to secure host servers within cloud environments:
Secure build: Follow the specific recommendations of the OS vendor to securely deploy their operating system.
Secure initial configuration: This may mean many different things depending on several variables, such as OS vendor, operating environment, business requirements, regulatory requirements, risk assessment, risk appetite, and the workloads to be hosted on the system.
The following is a common list of best practices:
Host hardening: Achieve this by removing all nonessential services and software from the host.
Host patching: To achieve this, install all required patches provided by the vendors whose hardware and software are being used to create the host server.
These may include basic input/output system (BIOS)/firmware updates, driver updates for specific hardware components, and OS security patches.
Host lockdown: Implement host-specific security measures, which vary by vendor. These may include the following:
Blocking of non-root access to the host under most circumstances (that is, local console access only via a root account)
Use of only secure communication protocols and tools to access the host remotely, such as PuTTY with secure shell (SSH)
Configuration and use of a host-based firewall to examine and monitor all communications to and from the host and all guest OSs and workloads running on the host
Use of role-based access controls (RBAC) to limit which users can access a host and what permissions they have
Secure ongoing configuration maintenance: Achieved through a variety of mechanisms, some vendor-specific and some not. Engage in the following types of activities:
Patch management of hosts, guest OSs, and application workloads running on them
Periodic vulnerability assessment scanning of hosts, guest OSs, and application workloads running on hosts
Periodic penetration testing of hosts and guest OSs running on them
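The host-hardening recommendation above (removing all nonessential services) can be sketched as a simple baseline audit. This is a minimal, hypothetical example: the approved-service baseline and the service names are illustrative, not vendor guidance.

```python
# Minimal sketch of a host-hardening audit: compare the services
# enabled on a host against an approved baseline and flag the rest
# for removal. The baseline below is a hypothetical example.

APPROVED_SERVICES = {"sshd", "auditd", "chronyd"}

def find_nonessential(enabled_services):
    """Return enabled services that are not in the approved baseline."""
    return sorted(set(enabled_services) - APPROVED_SERVICES)

enabled = ["sshd", "auditd", "telnet", "ftp", "chronyd"]
print(find_nonessential(enabled))  # → ['ftp', 'telnet']
```

In practice the baseline would come from the OS vendor's hardening guide, and the enabled-service list from the host itself.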
Best Practices for Storage Controllers
Storage controllers may be in use for Internet small computer system interface (iSCSI), Fibre Channel (FC), or Fibre Channel over Ethernet (FCoE). Regardless of the storage protocol being used, controllers should be secured according to vendor guidance, plus any additional measures required.
For example, some storage controllers offer a built-in encryption capability that may be used to ensure confidentiality of the data transiting the controller.
In addition, close attention to configuration settings and options for the controller is important because unnecessary services should be disabled, and insecure settings should be addressed.
A detailed discussion of each storage protocol and its associated controller types is beyond the scope of this section.
This section focuses on iSCSI as an example of the types of issues and considerations you may encounter in the field while working with cloud-based storage solutions.
iSCSI is a protocol that uses transmission control protocol (TCP) to transport small computer system interface (SCSI) commands, enabling the use of the existing transmission control protocol/Internet protocol (TCP/IP) networking infrastructure as a storage area network (SAN).
iSCSI presents SCSI targets and devices to iSCSI initiators (requesters). Unlike network-attached storage (NAS), which presents devices at the file level, iSCSI makes block devices available via the network.
Initiators and Targets
A storage network consists of two types of equipment:
- Initiator: The consumer of storage, typically a server with an adapter card in it called a host bus adapter (HBA). The initiator commences a connection over the fabric to one or more ports on your storage system, which are called target ports.
- Target: The ports on your storage system that deliver storage volumes (called target devices or logical unit numbers [LUNs]) to the initiators.
iSCSI should be considered a local-area technology, not a wide-area technology, because of latency issues and security concerns. You should also segregate iSCSI traffic from the general traffic.
Layer 2 virtual local area networks (VLANs) are a particularly good way to implement this segregation.
Beware of oversubscription. It occurs when more users are connected to a system than it can fully support at the same time.
Networks and servers are almost always designed with some amount of oversubscription, with the assumption that users do not all need the service simultaneously.
If they do, delays are certain and outages are possible. Oversubscription is permissible on general-purpose LANs, but you should not use an oversubscribed configuration for iSCSI.
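The oversubscription concept can be made concrete with a quick calculation. The port counts and speeds below are hypothetical.

```python
# Sketch: compute the oversubscription ratio of a switch uplink.
# A ratio above 1.0 means the uplink cannot carry all edge ports at
# full rate simultaneously; tolerable on a general-purpose LAN, but
# a dedicated iSCSI network should stay at or below 1.0 (non-blocking).

def oversubscription_ratio(edge_ports, edge_speed_gbps, uplink_gbps):
    return (edge_ports * edge_speed_gbps) / uplink_gbps

print(oversubscription_ratio(48, 1, 10))  # 4.8 → oversubscribed
print(oversubscription_ratio(8, 1, 10))   # 0.8 → non-blocking, suits iSCSI
```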
Best practices are as follows:
- Have a dedicated local area network (LAN) for iSCSI traffic
- Do not share the storage network with other network traffic such as management, fault tolerance, or vMotion/Live Migration
iSCSI Implementation Considerations
The following are security considerations when implementing iSCSI:
Private network: iSCSI storage traffic is transmitted in an unencrypted format across the LAN. Therefore, it is considered a best practice to use iSCSI on trusted networks only and to isolate the traffic on separate physical switches or to leverage a private VLAN.
All iSCSI-array vendors agree that it is good practice to isolate iSCSI traffic for security reasons.
This means isolating the iSCSI traffic on its own physical switches or leveraging a dedicated VLAN (IEEE 802.1Q).4
Encryption: iSCSI supports several types of security. IP Security (IPSec) is used for security at the network or packet-processing layer of network communication.
Internet Key Exchange (IKE) is an IPSec standard protocol used to ensure security for virtual private networks (VPNs).
Authentication: Numerous authentication methods are supported with iSCSI:
Kerberos: A network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.
The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection.
After a client and server have used Kerberos to prove their identities, they can encrypt all their communications to ensure privacy and data integrity as they go about their business.5
Secure remote password (SRP): SRP is a secure password-based authentication and key-exchange protocol. SRP exchanges a cryptographically strong secret as a by-product of successful authentication, which enables the two parties to communicate securely.
Simple public-key mechanism (SPKM1/2): Provides authentication, key establishment, data integrity, and data confidentiality in an online distributed application environment using a public-key infrastructure.
SPKM can be used as a drop-in replacement by any application that uses security services through Generic Security Service Application Program Interface (GSSAPI) calls.
The use of a public-key infrastructure allows digital signatures supporting nonrepudiation to be employed for message exchanges.6
Challenge handshake authentication protocol (CHAP): Used to periodically verify the identity of the peer using a three-way handshake.
This is done upon initial link establishment and may be repeated anytime after the link has been established. The following steps are involved in using CHAP:
1. After the link establishment phase is complete, the authenticator sends a challenge message to the peer.
2. The peer responds with a value calculated using a one-way hash function.
3. The authenticator checks the response against its own calculation of the expected hash value.
4. If the values match, the authentication is acknowledged; otherwise, the connection should be terminated.
5. At random intervals, the authenticator sends a new challenge to the peer and repeats steps 1 to 3.
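The handshake steps above can be sketched in a few lines. This sketch assumes the MD5 one-way hash over (identifier + secret + challenge) defined for CHAP in RFC 1994; the shared secret itself never crosses the wire, only the challenge and the resulting hash.

```python
import hashlib
import os

def chap_response(ident: int, secret: bytes, challenge: bytes) -> bytes:
    """One-way hash over identifier, shared secret, and challenge (RFC 1994)."""
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

secret = b"shared-secret"   # known to both peers (hypothetical value)
ident = 1
challenge = os.urandom(16)  # step 1: authenticator sends a challenge

response = chap_response(ident, secret, challenge)  # step 2: peer responds
expected = chap_response(ident, secret, challenge)  # step 3: authenticator recomputes
assert response == expected  # step 4: values match, authentication acknowledged
```

For iSCSI, the same exchange can be run in both directions (bidirectional CHAP), so the target also proves its identity to the initiator.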
Best Practices for Network Controllers
As an increasing number of servers in the data center become virtualized, network administrators and engineers are pressed to find ways to better manage traffic running between these machines.
Virtual switches aim to manage and route traffic in a virtual environment, but often network engineers do not have direct access to these switches.
When they do, they often find that virtual switches living inside hypervisors do not offer the type of visibility and granular traffic management they need.
Traditional physical switches determine where to send message frames based on MAC addresses on physical devices.
Virtual switches act similarly in that each virtual host must connect to a virtual switch the same way a physical host must connect to a physical switch.
But a closer look reveals major differences between physical and virtual switches.
With a physical switch, when a dedicated network cable or switch port goes bad, only one server goes down.
Yet with virtualization, one cable can offer connectivity to 10 or more virtual machines (VMs), causing a loss in connectivity to multiple VMs.
In addition, connecting multiple VMs requires more bandwidth, which the virtual switch must handle.
These differences are especially apparent in larger networks with more intricate designs, such as those that support VM infrastructure across data centers or DR sites.
Best Practices for Virtual Switches
Virtual switches are the core networking component on a host, connecting the physical network interface cards (NICs) in the host server to the virtual NICs in VMs.
In planning virtual switch architecture, engineers must decide how they will use physical NICs to assign virtual switch port groups to ensure redundancy, segmentation, and security.
All these switches support 802.1Q tagging, which allows multiple VLANs to be used on a single physical switch port to reduce the number of physical NICs needed in a host.
This works by applying tags to all network frames to identify them as belonging to a certain VLAN.8 Security is also an important consideration when using virtual switches.
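The tagging mechanism described above can be sketched as follows: a 4-byte 802.1Q tag, the tag protocol identifier (TPID) 0x8100 plus tag control information carrying the VLAN ID, is inserted after the source MAC address. The frame bytes and VLAN number here are hypothetical.

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag after the 12-byte destination and source MACs."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # tag control information
    tag = struct.pack("!HH", 0x8100, tci)        # TPID + TCI
    return frame[:12] + tag + frame[12:]

untagged = bytes(12) + b"\x08\x00" + b"payload"  # MACs, EtherType, data
tagged = tag_frame(untagged, vlan_id=100)
assert len(tagged) == len(untagged) + 4          # the tag adds exactly 4 bytes
```

The switch at the other end reads the VLAN ID from the tag and keeps each VLAN's frames isolated, which is what allows many VLANs to share one physical port.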
Utilizing several types of ports and port groups separately, rather than placing them all on a single virtual switch, offers higher security and better management.
Virtual switch redundancy is another important consideration.
Redundancy is achieved by assigning at least two physical NICs to a virtual switch, with each NIC connecting to a different physical switch.
Redundancy can also be achieved through the use of port channeling, which does the following:
- Increases the available bandwidth between two devices
- Creates one logical path out of multiple physical paths
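One common way a port channel forms one logical path out of multiple physical paths is to hash flow identifiers so that each flow consistently uses a single member link, while the aggregate bandwidth grows with the number of links. A minimal sketch, with hypothetical link names and CRC32 standing in for a real hardware hash:

```python
import zlib

def select_link(src_mac: str, dst_mac: str, links):
    """Map a flow (identified by its MAC pair) to one member link."""
    h = zlib.crc32((src_mac + dst_mac).encode())
    return links[h % len(links)]

channel = ["eth0", "eth1"]  # e.g., two 1 Gb/s NICs form a 2 Gb/s logical path
link = select_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", channel)
assert link in channel
# The same flow always hashes to the same link, preserving frame order.
assert select_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", channel) == link
```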
The key to virtual network security is isolation. Every host has a management network through which it communicates with other hosts and management systems.
In a virtual infrastructure, the management network should be isolated physically and virtually. Connect all hosts, clients, and management systems to a separate physical network to secure the traffic.
You should also create isolated virtual switches for your host management network and never mix virtual-switch traffic with normal VM network traffic. Although this does not address all problems that virtual switches introduce, it's an important start.
Other Virtual Network Security Best Practices
In addition to isolation, there are other virtual network security best practices to keep in mind.
Note that the network that is used to move live VMs from one host to another does so in cleartext.
That means it may be possible to “sniff” the data or perform a man-in-the-middle attack when a live migration occurs.
When dealing with internal and external networks, always create a separate isolated virtual switch with its own physical network interface cards and never mix internal and external traffic on a virtual switch.
Lock down access to your virtual switches so that an attacker cannot move VMs from one network to another and so that VMs do not straddle an internal and an external network.
In virtual infrastructures where a physical network has been extended to the host as a virtual network, physical network security devices and applications are often ineffective.
Often, these devices cannot see network traffic that never leaves the host (because they are, by nature, physical devices). Plus, physical intrusion detection and prevention systems may not be able to protect VMs from threats.
For a better virtual network security strategy, use security applications that are designed specifically for virtual infrastructure and integrate them directly into the virtual networking layer.
This includes network intrusion detection and prevention systems, monitoring and reporting systems, and virtual firewalls that are designed to secure virtual switches and isolate VMs.
You can integrate physical and virtual network security to provide complete data center protection.
If you use network-based storage such as iSCSI or Network File System (NFS), use proper authentication.
For iSCSI, bidirectional CHAP authentication is best.
Be sure to physically isolate storage network traffic because the traffic is often sent as clear text.
Anyone with access to the same network can listen and reconstruct files, alter traffic, and possibly corrupt the network.
Installation and Configuration