Securely configuring the virtualization management toolset is one of the most important steps when building a cloud environment.
Compromise of the management VM tools may allow an attacker unlimited access to the VMs, the hosts, and the enterprise network.
Therefore, you must securely install and configure the management VM tools and then adequately monitor them.
All management should take place on an isolated management network.
The virtualization platform determines what management tools need to be installed on the host.
The latest tools should be installed on each host, and the configuration management plan should include rules on updating these tools.
Updating these tools may require server downtime, so sufficient server resources should be deployed to allow for the movement of VMs when updating the virtualization platform. You should also conduct external vulnerability testing of the tools.
Follow the vendor security guidance when configuring and deploying these VM tools. Access to the management VM tools should be role-based, and their use should be audited and logged.
You need to understand what management tools are available by vendor platform, as well as how to securely install and configure them appropriately based on the configuration of the systems involved.
Regardless of the toolset used to manage the host, follow these best practices to secure the tools and ensure that only authorized users are given access when necessary to perform their jobs.
- Defense in depth: Implement the tools used to manage the host as part of a larger architectural design that mutually reinforces security at every level of the enterprise. The VM tools should be seen as a tactical element of host management, one that is linked to operational elements such as procedures and strategic elements such as policies.
- Access control: Secure the VM tools and tightly control and monitor access to them.
- Auditing and monitoring: Monitor and track the use of the tools throughout the enterprise to ensure proper usage is taking place.
- Maintenance: Update and patch the VM tools as required to ensure compliance with all vendor recommendations and security bulletins.
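The access control and auditing practices above can be sketched together. The following is a minimal illustration, not any vendor's actual API: the role names, permission strings, and `authorize` helper are hypothetical, and real platforms ship their own role-based access control.

```python
import logging
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "vm-operator": {"vm.start", "vm.stop"},
    "vm-admin": {"vm.start", "vm.stop", "vm.configure", "vm.delete"},
    "auditor": {"vm.view"},
}

audit_log = logging.getLogger("mgmt-tool-audit")

def authorize(user: str, role: str, action: str) -> bool:
    """Allow the action only if the user's role grants it, and record
    every decision so tool usage can be monitored and reviewed."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed
```

The key point is that every decision, allowed or denied, lands in the audit log, which supports the auditing and monitoring practice above.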
Running a Physical Infrastructure for Cloud Environments
Although virtualization and cloud computing can help companies accomplish more by breaking the physical bonds between an IT infrastructure and its users, security threats must be overcome to benefit fully from this paradigm.
This is particularly true for the SaaS provider. In some respects, you lose control of certain assets in the cloud, and your security model must account for that.
Enterprise security is only as good as the least reliable partner, department, or vendor. Can you trust your data to your service provider?
In a public cloud, you are sharing computing resources with other companies. In a shared pool outside the enterprise, you will not have knowledge of or control over where the resources run.
Following are some important considerations when sharing resources:
Legal: Simply by sharing the environment in the cloud, you may put your data at risk of seizure. Exposing your data in an environment shared with other companies can give the government “reasonable cause” to seize your assets because another company has violated the law.
Compatibility: Storage services provided by one cloud vendor may be incompatible with another vendor’s services should you decide to move from one to the other.
Control: If information is encrypted while passing through the cloud, does the customer or the cloud vendor control the encryption and decryption keys? Most customers probably want their data encrypted in both directions across the Internet using the Secure Sockets Layer (SSL) protocol or its successor, Transport Layer Security (TLS). They also most likely want their data encrypted while it is at rest in the cloud vendor's storage pool.
Make sure you control the encryption and decryption keys, just as if the data were still resident in the enterprise’s servers.
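The idea of keeping keys in the enterprise can be illustrated with client-side encryption: data is encrypted before it ever reaches the provider, so the provider stores only ciphertext. The cipher below is a deliberately simple SHA-256 counter-mode keystream used as a stand-in for illustration only; production systems should use a vetted AEAD such as AES-GCM from a maintained cryptography library.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy pseudorandom keystream; NOT a production cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # unique per message
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# The enterprise generates and holds the key; the cloud provider
# only ever sees the encrypted blob.
customer_key = secrets.token_bytes(32)
```

Because `customer_key` never leaves the enterprise, the provider cannot decrypt what it stores, which is the control property the paragraph above describes.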
Log data: As more and more mission-critical processes are moved to the cloud, SaaS suppliers have to provide log data in a real-time, straightforward manner, probably for their administrators as well as their customers’ personnel.
Will customers trust the CSP enough to push their mission-critical applications out to the cloud? Because the SaaS provider's logs are internal and not necessarily accessible externally or by clients or investigators, monitoring is difficult.
PCI DSS access: Because access to logs is required for PCI DSS compliance and may be requested by auditors and regulators, security managers need to make sure to negotiate access to the provider’s logs as part of any service agreement.
Upgrades and changes: Cloud applications undergo constant feature additions. Users must keep up to date with application improvements to be sure they are protected.
The speed at which applications change in the cloud affects both the software development lifecycle and security.
A secure software development lifecycle may not be able to provide a security cycle that keeps up with changes that occur so quickly.
This means that users must constantly upgrade because an older version may not function or protect the data.
Failover technology: Having proper failover technology is an often-overlooked component of securing the cloud. The company can survive if a non-mission-critical application goes offline, but this may not be true for mission-critical applications.
One of the key challenges in cloud computing is data-level security. Security needs to move to the data level so that enterprises can be sure their data is protected wherever it goes. Sensitive data is the domain of the enterprise, not of the cloud computing provider.
Compliance: SaaS makes the process of compliance more complicated because it may be difficult for a customer to discern where its data resides on a network controlled by the SaaS provider, or a partner of that provider, which raises all sorts of compliance issues of data privacy, segregation, and security.
Many compliance regulations require that data not be intermixed with other data, such as on shared servers or databases.
Some countries have strict limits on what data about their citizens can be stored and for how long, and some banking regulators require that customers’ financial data remain in their home country.
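A residency requirement like the banking example above can be enforced as a placement check before data is stored. This is a minimal sketch; the data classifications, region names, and rule table are hypothetical examples, not any regulator's actual taxonomy.

```python
# Hypothetical residency policy: map each data classification to the
# regions where it may be stored. None means no restriction.
RESIDENCY_RULES = {
    "customer-financial": {"de-frankfurt", "de-berlin"},  # must stay in-country
    "public-marketing": None,
}

def placement_allowed(data_class: str, region: str) -> bool:
    """Return True if storing this class of data in the region
    satisfies its residency rule."""
    allowed_regions = RESIDENCY_RULES[data_class]
    return allowed_regions is None or region in allowed_regions
```

Running such a check in the provisioning path turns a paper policy into a gate that blocks non-compliant placements before they happen.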
Regulations: Compliance with government regulations, such as the Sarbanes-Oxley Act (SOX), the Gramm-Leach-Bliley Act (GLBA), and the Health Insurance Portability and Accountability Act (HIPAA), and industry standards such as the PCI DSS are much more challenging in the SaaS environment.
There is a perception that cloud computing removes data compliance responsibility; however, the data owner is still fully responsible for compliance.
Those who adopt cloud computing must remember that it is the responsibility of the data owner, not the service provider, to secure valuable data.
Outsourcing: Outsourcing means losing significant control over data. Although this is not a good idea from a security perspective, the business convenience and financial savings continue to increase the use of these services.
You need to work with your company’s legal staff to ensure that appropriate contract terms are in place to protect corporate data and provide for acceptable SLAs.
Placement of security: Cloud-based services result in many mobile IT users accessing business data and services without traversing the corporate network.
This increases the need for enterprises to place security controls between mobile users and cloud-based services.
Placing large amounts of sensitive data in a globally accessible cloud leaves organizations open to large, distributed threats. Attackers no longer have to come onto the premises to steal data; they can find it all in one virtual location.
Virtualization: Virtualization efficiencies in the cloud require VMs from multiple organizations to be collocated on the same physical resources.
Although traditional data center security still applies in the cloud environment, physical segregation and hardware-based security cannot protect against attacks between VMs on the same server.
Administrative access is through the Internet rather than the controlled and restricted direct or on-premises connection that is adhered to in the traditional data center model.
This increases risk and exposure and requires stringent monitoring for changes in system control and access control restriction.
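The monitoring for changes in system control and access control can be sketched as baseline fingerprinting: hash the security-relevant settings once, then flag any drift. The settings shown are hypothetical examples for illustration.

```python
import hashlib

def fingerprint(config: dict) -> str:
    """Hash a canonical rendering of security-relevant settings."""
    canonical = "\n".join(f"{k}={config[k]}" for k in sorted(config))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Baseline captured when the host is known-good (example settings).
baseline = fingerprint({"ssh_root_login": "no", "admin_group": "cloud-ops"})

def detect_drift(current: dict) -> bool:
    """Return True if any monitored setting has changed since baseline."""
    return fingerprint(current) != baseline
```

A scheduled job that alerts whenever `detect_drift` returns True gives the stringent, continuous monitoring the paragraph above calls for.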
VM: The dynamic and fluid nature of VMs makes it difficult to maintain the consistency of security and ensure that records can be audited.
The ease of cloning and distribution between physical servers can result in the propagation of configuration errors and other vulnerabilities.
Proving the security state of a system and identifying the location of an insecure VM is challenging. The colocation of multiple VMs increases the attack surface and risk of VM-to-VM compromise.
In a cloud server environment, localized VMs and physical servers use the same operating systems and the same enterprise and web applications, increasing the threat of an attacker or malware remotely exploiting vulnerabilities in these systems and applications.
VMs are vulnerable as they move between the private cloud and the public cloud.
A fully or partially shared cloud environment is expected to have a greater attack surface and therefore can be considered to be at greater risk than a dedicated resources environment.
Operating system and application files: Operating system and application files are on shared physical infrastructure in a virtualized cloud environment and require system, file, and activity monitoring to provide confidence and auditable proof to enterprise customers that their resources have not been compromised or tampered with.
In the cloud computing environment, the enterprise subscribes to cloud computing resources, and the responsibility for patching is the subscriber's rather than the cloud computing vendor's.
The need for patch maintenance vigilance is imperative. Lack of due diligence in this regard can rapidly make the task unmanageable or impossible.
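That patch vigilance can be made systematic by comparing an inventory of installed versions against the minimums set by current security advisories. The package names, version numbers, and advisory table below are hypothetical examples for illustration.

```python
# Hypothetical advisory minimums: lowest acceptable version per package.
ADVISORY_MINIMUMS = {"openssl": (3, 0, 12), "kernel": (5, 15, 140)}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def unpatched(inventory: dict) -> list:
    """Return the packages in this VM's inventory that fall below
    the advisory minimum and therefore need patching."""
    return [
        pkg for pkg, version in inventory.items()
        if pkg in ADVISORY_MINIMUMS
        and parse_version(version) < ADVISORY_MINIMUMS[pkg]
    ]
```

Running such a check across every VM, including dormant clones, keeps the patching task from becoming unmanageable as the estate grows.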
Data fluidity: Enterprises are often required to prove that their security compliance is in accord with regulations, standards, and auditing practices, regardless of the location of the systems at which the data resides.
Data is fluid in cloud computing and may reside in on-premises physical servers, on-premises VMs, or off-premises VMs running on cloud computing resources.
This requires some rethinking on the part of auditors and practitioners alike.
In the rush to take advantage of the benefits of cloud computing, many corporations are likely adopting it without serious consideration of the security implications.
To establish zones of trust in the cloud, the VMs must be self-defending, effectively moving the perimeter to the VM itself.
Enterprise perimeter security (that is, demilitarized zones [DMZs], network segmentation, intrusion detection systems [IDSs] and intrusion prevention systems [IPSs], monitoring tools, and the associated security policies) only controls the data that resides and transits behind the perimeter.
In the cloud computing world, the cloud computing provider is in charge of customer data security and privacy.
Configuring Access Control and a Secure KVM
You need to have a plan to address access control to the cloud-hosting environment. Physical access to servers should be limited to users who require access for a specific purpose.
Personnel who administer the physical hardware should not have other types of administrative access. Access to hosts should be through a secure keyboard, video, mouse (KVM) switch; for added security, access to KVM devices should require a checkout process.
A secure KVM prevents data loss from the server to the connected computer. It also prevents unsecured emanations.
The Common Criteria (CC) provides guidance on different security levels and a list of KVM products that meet those security levels.
Two-factor authentication should be considered for remote console access. All access should be logged, and routine audits should be conducted.
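One common second factor is a time-based one-time password (TOTP) per RFC 6238. The sketch below is a minimal stdlib implementation for illustration; production deployments should use a maintained authenticator integration rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1).

    secret: shared key bytes; timestamp: Unix time (defaults to now).
    """
    counter = int(timestamp if timestamp is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The console login flow would prompt for this code after the password, so a stolen password alone is not enough to reach the host console.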
A secure KVM meets the following design criteria:
- Isolated data channels: Located in each KVM port, these make it impossible for data to be transferred between connected computers through the KVM.
- Tamper-warning labels on each side of the KVM: These provide clear visual evidence if the enclosure has been compromised.
- Housing intrusion detection: This causes the KVM to become inoperable and the LEDs to flash repeatedly if the housing has been opened.
- Fixed firmware: It cannot be reprogrammed, preventing attempts to alter the logic of the KVM.
- Tamper-proof circuit board: It’s soldered to prevent component removal or alteration.
- Safe buffer design: It does not incorporate a memory buffer, and the keyboard buffer is automatically cleared after data transmission, preventing the transfer of keystrokes or other data when switching between computers.
- Selective universal serial bus (USB) access: It only recognizes human interface device USB devices (such as keyboards and mice) to prevent inadvertent and insecure data transfer.
- Push-button control: It requires physical access to the KVM when switching between connected computers.
Console-based access to VMs is also important. Regardless of vendor platform, all VM management software offers a “manage by console” option.
The use of these consoles to access, configure, and manage VMs offers an administrator the opportunity to easily control almost every aspect of the VMs’ configuration and usage.
As a result, a malicious hacker, or bad actor, can achieve the same level of access and control by using these consoles if they are not properly secured and managed.
The use of access controls for console access is available in every vendor platform and should be implemented and regularly audited for compliance as a best practice.