Cloud Security Threats

What are cloud security threats?

A cloud security threat is anything that can compromise the confidentiality, integrity, or availability of cloud-hosted servers and data. To secure a server, it is essential to first define the threats that must be mitigated.

Organizations should conduct risk assessments to identify the specific threats against their servers and determine the effectiveness of existing security controls in counteracting the threats.

They should then perform risk mitigation to decide what additional measures, if any, should be implemented, as discussed in National Institute of Standards and Technology (NIST) Special Publication 800-30 Revision 1, “Guide for Conducting Risk Assessments.”

Performing risk assessments and mitigation helps organizations better understand their security posture and decide how their servers should be secured.

There are several types of cloud security threats to be aware of.

Types of Cloud Security Threats
  1. Many threats against data and resources exist as a result of mistakes, either bugs in the OS and server software that create exploitable vulnerabilities or errors made by end users and administrators.
  2. Threats may involve intentional actors (such as attackers who want to access information on a server) or unintentional actors (such as administrators who forget to disable user accounts of former employees).
  3. Threats can be local, such as a disgruntled employee, or remote, such as an attacker in another geographical area.

The following general guidelines should be addressed when identifying and understanding threats:

  1. Use an asset management system that has configuration management capabilities to enable authoritative documentation of all system configuration items (CIs).
  2. Use system baselines to enforce configuration management throughout the enterprise (see the drift-checking sketch after this list). Note the following about configuration management:
  • A baseline is an agreed-upon description of the attributes of a product at a point in time that serves as a basis for defining change.
  • A change is a movement from this baseline state to a new state.
  3. Consider automation technologies that help with the creation, application, management, updating, tracking, and compliance checking of system baselines.
  4. Develop and use a robust change management system to authorize the changes that need to be made to systems over time.

In addition, enforce a requirement that no changes can be made to production systems unless the change has been properly vetted and approved through the change management system in place.

This forces all changes to be clearly articulated, examined, documented and weighed against the organization’s priorities and objectives.

Forcing the examination of all changes in the context of the business allows you to ensure that risk is minimized whenever possible and that all changes are seen as being acceptable to the business based on the potential risk that they pose.

  5. Use an exception reporting system to force the capture and documentation of any activities undertaken that are contrary to the expected norms for the lifecycle of a system under management.
  6. Use vendor-specified configuration guidance and best practices as appropriate, based on the specific platform(s) under management.
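
To make the baseline concept more concrete, the following is a minimal sketch of baseline drift detection; the configuration items and values are hypothetical and are not drawn from any particular platform or vendor guidance.

```python
# Hypothetical sketch: detecting configuration drift from an agreed-upon baseline.
# The configuration items (CIs) and their values below are illustrative only.

baseline = {
    "ssh_root_login": "disabled",
    "password_min_length": 14,
    "ntp_server": "ntp.example.com",
    "firewall_default_policy": "deny",
}

current = {
    "ssh_root_login": "enabled",        # drifted from the baseline
    "password_min_length": 14,
    "ntp_server": "ntp.example.com",
    "firewall_default_policy": "deny",
}

def detect_drift(baseline: dict, current: dict) -> list[str]:
    """Return a list of configuration items that no longer match the baseline."""
    findings = []
    for ci, expected in baseline.items():
        actual = current.get(ci, "<missing>")
        if actual != expected:
            findings.append(f"{ci}: expected {expected!r}, found {actual!r}")
    return findings

for finding in detect_drift(baseline, current):
    print("DRIFT:", finding)
```

Any drift detected this way would then be routed through the change management or exception reporting processes described above.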

Using Standalone Hosts

As a cloud security professional, you may be called upon to help the business decide on the best way to safely host a virtualized infrastructure.

The needs and requirements of the business need to be identified and documented before a decision can be made as to which hosting models are the best to deploy.

 In general, the business seeks to do the following:

  1. Create isolated, secured, dedicated hosting of individual cloud resources; the use of a standalone host would be an appropriate choice.
  2. Make cloud resources available to end users so that they appear independent of, and isolated from, any other resources; either a standalone host or a shared host configuration that offers multitenant secured hosting capabilities is appropriate.

The cloud security professional needs to understand the business requirements because they drive the choice of hosting model and the architecture of the cloud security framework.

For instance, consider the following scenario: ABC Corp. has decided that it wants to move its customer relationship management (CRM) system to a cloud-based platform.

The company currently has a “homegrown” CRM offering that it hosts in its data center and that is maintained by its internal development and IT infrastructure teams.

ABC Corp. has to make its decision along the following lines:

  1. It could continue as is and effectively become a private cloud service provider (CSP) for its internal CRM application.
  2. It could partner with a managed service provider and effectively hand over the CRM application to be managed and maintained according to the provider’s requirements and specifications.
  3. It could engage in a request for proposal (RFP) process and look for a third-party CRM vendor that provides cloud-based functionality through a software as a service (SaaS) model to replace its current application.

As the cloud security professional, you would have to help ABC Corp. figure out which of these three options would be the most appropriate to choose.

Although on the surface that may seem to be a simple and fairly straightforward decision to make, it requires consideration of many factors. Aside from the business requirements already touched on, you would need to understand, to the best of your ability, the following issues:

  1. What are the current market conditions in the industry vertical that ABC Corp. is part of?
  2. Have ABC Corp.’s major competitors made a similar transition to cloud-based services recently? If so, what paths have they chosen?
  3. Is there an industry vendor that specializes in migrating or implementing CRM systems in this vertical for the cloud?
  4. Are there regulatory issues or concerns that would have to be noted and addressed as part of this project?
  5. What are the risks associated with each of the three options outlined as possible solutions? What are the benefits?
  6. Does ABC Corp. have the required skills available in-house to manage the move to becoming a private CSP of CRM services to the business? To manage and maintain the private cloud platform once it’s up and running?

As you can see, the path to making a clear and concise recommendation is long, and it’s often obscured by many issues that may not be apparent at the outset of the conversation.

A cloud security professional’s responsibilities will vary based on need and situation, but at their core they must always be able to examine the parameters of the situation at hand and frame the conversation with the business regarding risk and benefit, so that the best possible decision can be made.

Be sure to address the following standalone host availability considerations:

  1. Regulatory issues
  2. Current security policies in force
  3. Any contractual requirements that may be in force for one or more systems or areas of the business
  4. The needs of a certain application or business process that may be using the system in question
  5. The classification of the data contained in the system

Using Clustered Hosts

You should understand the basic concept of host clustering as well as the specifics of the technology and the implementation requirements that are unique to the vendor platforms you support.

A clustered host is logically and physically connected to other hosts within a management framework.

This is done to allow central management of resources for the collection of hosts and to let applications and VMs running on a cluster member fail over, or move, between host members as needed for the continued operation of those resources, with a focus on minimizing the downtime that host failures can cause.

Resource Sharing

Within a host cluster, resources are allocated and managed as if they were pooled or jointly available to all members of the cluster.

Resource-sharing concepts such as reservations, limits, and shares may be used to further refine and orchestrate the allocation of resources according to requirements that the cluster administrator imposes.

  1. Reservations guarantee a minimum amount of the cluster’s pooled resources to be made available to a specified VM.
  2. Limits impose a maximum amount of the cluster’s pooled resources that can be made available to a specified VM.
  3. Shares provision the resources that remain in the cluster when there is resource contention.

Specifically, once the cluster’s reservations have been satisfied, shares distribute any remaining resources among members of the cluster through a prioritized, percentage-based allocation mechanism.
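
As a rough illustration of how the three controls interact, the following sketch allocates a single pooled resource by first satisfying reservations, then distributing what remains in proportion to shares while respecting limits. The VM names, numbers, and one-pass allocation logic are hypothetical and do not reflect any specific hypervisor’s algorithm.

```python
# Hypothetical sketch of reservation/limit/share allocation for one pooled cluster
# resource (for example, CPU in MHz or RAM in MB). Values are illustrative only.

pool_capacity = 100_000  # total pooled capacity available in the cluster

vms = {
    "vm-web":   {"reservation": 20_000, "limit": 60_000, "shares": 2000},
    "vm-db":    {"reservation": 30_000, "limit": 80_000, "shares": 4000},
    "vm-batch": {"reservation": 10_000, "limit": 30_000, "shares": 1000},
}

# Step 1: guarantee each VM its reservation (its minimum).
allocation = {name: spec["reservation"] for name, spec in vms.items()}
remaining = pool_capacity - sum(allocation.values())

# Step 2: under contention, hand out what is left in proportion to shares,
# never exceeding any VM's limit (its maximum).
total_shares = sum(spec["shares"] for spec in vms.values())
for name, spec in vms.items():
    proportional = remaining * spec["shares"] / total_shares
    allocation[name] = min(spec["limit"], allocation[name] + proportional)

for name, amount in allocation.items():
    print(f"{name}: {amount:,.0f} units allocated")
```

A real scheduler would also redistribute any capacity left over when a VM reaches its limit; this simplified pass is only meant to show how reservations, limits, and shares relate to one another.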

Clusters are available for the traditional “compute” resources of the hosts that make up the cluster: random access memory (RAM) and central processing unit (CPU).

In addition, storage clusters can be created and deployed to allow back-end storage to be managed in the same way that the traditional compute resources are.

The management of the cluster involves a cluster manager or some other management toolset. The chosen virtualization platform determines the clustering capability of the cloud hosts. Many virtualization platforms use clustering for high availability (HA) and disaster recovery (DR).

Compute Resource Scheduling

All virtualization vendors use distributed resource scheduling (DRS) in one form or another to allow a cluster of hosts to do the following:

  1. Provide highly available resources to your workloads
  2. Balance workloads for optimal performance
  3. Scale and manage computing resources without service disruption

The initial placement of a workload across the cluster, as a VM is powered on, is the starting point for all load-balancing operations.

This initial placement function can be fully automated or manually implemented based on a series of recommendations made by the DRS service, depending on the chosen configuration for DRS.

Some DRS implementations offer the ability to engage in ongoing load-balancing once a VM has been placed and is running in the cluster.

This load balancing is achieved through a movement of the VM between hosts in the cluster to achieve or maintain the desired compute resource allocation thresholds specified for the DRS service.

These movements of VMs between hosts in the DRS cluster are policy-driven and are controlled through the application of affinity and anti-affinity rules.

These rules allow for the separation (anti-affinity) of VMs across multiple hosts in the cluster or the grouping (affinity) of VMs on a single host.

The need to separate or group VMs can be driven by architectural, policy and compliance, or performance and security concerns.
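
The following minimal sketch illustrates how such rules might constrain where a VM can be placed. The host names, VM names, and rules are hypothetical, and real DRS implementations weigh many additional factors, such as current host load.

```python
# Hypothetical sketch of rule-driven placement in a DRS-style cluster.
# Hosts, VMs, and rules are illustrative only.

hosts = {"host-a": ["web-01"], "host-b": ["db-01"], "host-c": []}

# Anti-affinity: these VMs must not share a host (e.g., two nodes of the same tier).
anti_affinity = [{"web-01", "web-02"}]
# Affinity: these VMs should be kept together (e.g., an app and its cache).
affinity = [{"db-01", "db-cache-01"}]

def candidate_hosts(vm: str) -> list[str]:
    """Return hosts on which the VM may be placed without violating any rule."""
    candidates = []
    for host, placed in hosts.items():
        violates = any(vm in group and group & set(placed) for group in anti_affinity)
        if not violates:
            candidates.append(host)
    # If an affinity rule applies and a peer is already placed, prefer that peer's host.
    for group in affinity:
        if vm in group:
            for host, placed in hosts.items():
                if group & set(placed) and host in candidates:
                    return [host]
    return candidates

print(candidate_hosts("web-02"))       # host-a is excluded by the anti-affinity rule
print(candidate_hosts("db-cache-01"))  # pinned to host-b by its affinity with db-01
```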

Accounting for Dynamic Operation

A cloud environment is dynamic. The cloud controller dynamically allocates resources to maximize their use.

In cloud computing, elasticity is defined as the degree to which a system can adapt to workload changes by provisioning and de-provisioning resources automatically, such that at each point in time the available resources match the current demand as closely as possible.

In outsourced and public deployment models, cloud computing can also provide elasticity: the ability for customers to quickly request, receive, and later release as many resources as needed.

By using an elastic cloud, customers can avoid excessive costs from overprovisioning, that is, building enough capacity for peak demand and then leaving that capacity unused in nonpeak periods.

With rapid elasticity, capabilities can be rapidly and elastically provisioned, in some cases automatically, to scale rapidly outward and inward, commensurate with demand.

To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time. For a cloud to provide elasticity, it must be flexible and scalable.
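
As a simple illustration of elasticity, the sketch below scales a pool of instances out as demand rises and back in as it falls, so that capacity tracks the current load rather than a fixed peak. The thresholds, capacities, and simulated demand values are hypothetical and are not tied to any provider’s API.

```python
# Hypothetical autoscaling sketch: provision and de-provision instances so that
# capacity tracks demand. All numbers are illustrative only.
import math

CAPACITY_PER_INSTANCE = 100   # requests/second one instance can serve
TARGET_UTILIZATION = 0.7      # aim to run instances at roughly 70% of capacity
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instances(current_demand: float) -> int:
    """Number of instances needed so utilization stays near the target."""
    needed = math.ceil(current_demand / (CAPACITY_PER_INSTANCE * TARGET_UTILIZATION))
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

running = 2
for demand in [120, 480, 950, 400, 90]:   # simulated load over time (req/s)
    target = desired_instances(demand)
    if target > running:
        print(f"demand={demand}: scaling out {running} -> {target}")
    elif target < running:
        print(f"demand={demand}: scaling in {running} -> {target}")
    else:
        print(f"demand={demand}: holding at {running}")
    running = target
```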

An onsite private cloud, at any specific time, has a fixed computing and storage capacity that has been sized to correspond to anticipated workloads and cost restrictions.

If an organization is large enough and supports a sufficient diversity of workloads, an onsite private cloud may be able to provide elasticity to clients within the consumer organization.

Smaller onsite private clouds, however, exhibit maximum capacity limits similar to those of traditional data centers.

Using Storage Clusters

Clustered storage is the use of two or more storage servers working together to increase performance, capacity, or reliability.

Clustering distributes workloads to each server, manages the transfer of workloads between servers, and provides access to all files from any server regardless of the physical location of the file.

Clustered Storage Architectures

 Two basic clustered storage architectures exist, known as tightly coupled and loosely coupled:

  1. A tightly coupled cluster has a physical backplane into which controller nodes connect.
  • While this backplane fixes the maximum size of the cluster, it delivers a high-performance interconnect between servers for load-balanced performance and maximum scalability as the cluster grows.
  • Additional array controllers, input/output (I/O) ports, and capacity can connect to the cluster as demand dictates.
  2. A loosely coupled cluster offers cost-effective building blocks that can start small and grow as applications demand.
  • A loose cluster offers performance, I/O, and storage capacity within the same node. As a result, performance scales with capacity and vice versa.

Storage Cluster Goals

Storage clusters should be designed to do the following:

  1. Meet the required service levels as specified in the service-level agreement (SLA)
  2. Provide for the ability to separate customer data in multitenant hosting environments
  3. Securely store and protect data through availability, integrity, and confidentiality (AIC) mechanisms such as encryption, hashing, masking, and multipathing

Using Maintenance Mode

Maintenance mode is utilized when updating or configuring different components of the cloud environment. While in maintenance mode, customer access is blocked, and alerts are disabled (although logging is still enabled).

Any data or hosted VMs should be migrated before entering maintenance mode if they still need to be available for use while the system undergoes maintenance.

This may be automated in some virtualization platforms. Maintenance mode can apply to both datastores and hosts.
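
The general flow can be sketched as follows: evacuate running workloads, suppress alerting while keeping logging active, perform the maintenance work, and then restore normal operation. Every function in this sketch is a placeholder stub, not a real platform’s API.

```python
# Hypothetical sketch of a maintenance-mode workflow for a single host.
# All functions are placeholder stubs; no vendor API is implied.

def migrate_running_vms(host):    print(f"{host}: migrating running VMs to other cluster members")
def block_customer_access(host):  print(f"{host}: blocking customer access")
def disable_alerts(host):         print(f"{host}: disabling alerts (logging remains enabled)")
def enable_alerts(host):          print(f"{host}: re-enabling alerts")
def allow_customer_access(host):  print(f"{host}: restoring customer access")

def enter_maintenance(host, perform_maintenance):
    migrate_running_vms(host)      # evacuate workloads that must stay available
    block_customer_access(host)
    disable_alerts(host)
    try:
        perform_maintenance(host)  # patching, firmware updates, reconfiguration
    finally:
        enable_alerts(host)          # always restore monitoring and access,
        allow_customer_access(host)  # even if the maintenance step fails

enter_maintenance("host-a", lambda host: print(f"{host}: applying hypervisor patch"))
```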

Although the procedure to enter and use maintenance mode varies by vendor, the traditional service mechanism that maintenance mode is tied to is the SLA.

The SLA describes the IT service, documents the service-level targets, and specifies the responsibilities of the IT service provider and the customer.

You should enter maintenance mode, operate within it, and exit it successfully using vendor-specific guidance and best practices.

Providing HA in the Cloud

In the enterprise data center, systems are managed with an expectation of uptime, or availability.

This expectation is usually formally documented with an SLA and is communicated to all the users so that they understand the system’s availability.

Measuring System Availability

The traditional way that system availability is measured and documented in SLAs is as a percentage of uptime, often expressed in “nines” (for example, 99.9 percent or 99.999 percent availability).

Note that uptime and availability are not synonymous; a system can be up but not available, as in the case of a network outage. To ensure system availability, the focus needs to be on ensuring that all required systems are available as stipulated in their SLAs.
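
As a worked example, an availability target in an SLA can be translated into a permitted downtime budget. The targets below are common examples; an actual SLA may define the measurement window and exclusions differently.

```python
# Converting an SLA availability percentage into an annual downtime budget.
# The targets listed are common examples; actual SLAs vary.

MINUTES_PER_YEAR = 365 * 24 * 60

for target in [99.0, 99.9, 99.99, 99.999]:
    allowed_downtime = MINUTES_PER_YEAR * (1 - target / 100)
    print(f"{target}% availability -> {allowed_downtime:,.1f} minutes of downtime per year")
```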

Achieving HA 

You can take many approaches to achieve HA:

  1. One example is the use of redundant architectural elements to safeguard data in case of failure, such as a drive-mirroring solution. This system design, commonly implemented as a Redundant Array of Independent Disks (RAID), allows a hard drive containing data to fail without the data being lost.
  • Then, depending on the design of the system (a hardware versus a software implementation of the RAID functionality), there may be a small window of downtime while the secondary, or redundant, hard drive is brought online and made available.
  2. Another example, specific to cloud environments, is the use of multiple vendors within the cloud architecture to provide the same services (see the sketch below).
  • This allows you to build systems that need a specified level of availability so that they can switch, or fail over, to an alternate provider’s system within the period defined in the SLA that governs the availability window for the system.
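
A simplified sketch of the multiple-vendor approach follows: requests are routed to the preferred provider while it is healthy and fail over to the alternate provider when it is not, helping the system stay within the availability window defined in its SLA. The provider names and the health-check result are hypothetical placeholders.

```python
# Hypothetical sketch of failover between two providers offering the same service.
# Provider names and health results are illustrative only.

providers = ["provider-a", "provider-b"]   # ordered by preference

def is_healthy(provider: str) -> bool:
    """Placeholder health check; a real check would probe the provider's endpoint."""
    return provider != "provider-a"        # simulate the primary being down

def route_request(request: str) -> str:
    for provider in providers:             # try providers in order of preference
        if is_healthy(provider):
            return f"{request} served by {provider}"
    raise RuntimeError("no healthy provider available")

print(route_request("GET /orders"))        # fails over to provider-b
```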

Cloud vendors provide differing mechanisms and technologies to achieve HA within their systems. Always consult with the business stakeholders to understand the HA requirements that need to be identified, documented, and addressed.

The cloud security professional needs to ensure that these requirements are accurately captured and represented in the SLAs that are in place to manage these systems.

The cloud security professional must also periodically revisit the requirements by validating them with the stakeholders and ensuring that, if necessary, the SLAs are updated to reflect any changes.
