Cloud Computing Event Sources

Event sources provide tools that can help you filter the large number of events that take place continuously within the cloud infrastructure, allowing you to focus selectively on those that are most relevant and important.

Monitoring event sources provides the raw data on events that will be used to paint a picture of the system being monitored.

Event attributes are used to specify the kind of data or information associated with an event that you want to capture for analysis.

Depending on the number of events and attributes being tracked, a large volume of data is produced.

This data must be stored and then analyzed to uncover patterns of activity that may indicate threats or vulnerabilities in the system that have to be addressed.

A security information and event management (SIEM) system can be used to gather and analyze the data flows from multiple systems, allowing for the automation of this process.

Event Sources

The relevant event sources you will draw data from vary according to the cloud service models the organization is consuming.

SaaS Event Sources


In SaaS environments, you typically have minimal control of, and access to, event and diagnostic data.

Most infrastructure-level logs are not visible to the cloud customer, who will be limited to high-level, application-generated logs that are located on a client endpoint.

To maintain reasonable investigation capabilities, auditability, and traceability of data, it is recommended that you specify data access requirements in the cloud SLA or contract with the CSP.

The following data sources play an important role in event investigation and documentation: 

  1. Web server logs
  2. Application server logs
  3. Database logs
  4. Guest OS logs
  5. Host access logs
  6. Virtualization platform logs and SaaS portal logs
  7. Network captures
  8. Billing records

PaaS Event Sources


In PaaS environments, you typically have control of and access to the event and diagnostic data. Some infrastructure-level logs are visible to you, along with detailed application logs.

Because the applications to be monitored are being built and designed by the organization directly, the level of application data that can be extracted and monitored is up to the developers.

To maintain reasonable investigation capabilities, auditability, and traceability of data, it is recommended that you work with the development team to understand the capabilities of the applications under development and to help design and implement monitoring regimes that maximize the organization’s visibility into the applications and their data streams.

OWASP recommends that the following application events be logged:

  1. Input validation failures, such as protocol violations, unacceptable encodings, and invalid parameter names and values
  2. Output validation failures, such as database recordset mismatch and invalid data encoding
  3. Authentication successes and failures
  4. Authorization (access control) failures
  5. Session management failures, such as cookie session identification value modification
  6. Application errors and system events, such as syntax and runtime errors, connectivity problems, performance issues, third-party service error messages, file system errors, file upload virus detection, and configuration changes
  7. Application and related systems startups and shutdowns, and logging initialization (starting, stopping, or pausing)
  8. Use of higher-risk functionality, such as network connections, addition or deletion of users, changes to privileges, assigning users to tokens, adding or deleting tokens, use of systems administrative privileges, access by application administrators, all actions by users with administrative privileges, access to payment cardholder data, use of data encrypting keys, key changes, creation and deletion of system-level objects, data import and export including screen-based reports, and submission of user-generated content, especially file uploads
  9. Legal and other opt-ins, such as permissions for mobile phone capabilities, terms of use, personal data usage consent, and permission to receive marketing communications
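The OWASP recommendations above can be applied in practice with a structured application logger that emits each security-relevant event as a single machine-parseable record. The sketch below, in Python, is illustrative only; the event names, severity labels, and field layout are assumptions, not an official OWASP schema.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of structured security-event logging along OWASP lines.
# Field names and event types here are illustrative assumptions.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("app.security")

def log_security_event(event_type, severity, user, outcome, **details):
    """Emit one security-relevant event as a single JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "authn_login_fail"
        "severity": severity,       # e.g. "warning"
        "user": user,
        "outcome": outcome,         # "success" | "fail"
        "details": details,
    }
    logger.info(json.dumps(record))
    return record

# Example: an authentication failure (item 3 in the list above).
evt = log_security_event("authn_login_fail", "warning",
                         user="alice", outcome="fail",
                         reason="invalid credentials",
                         source_ip="203.0.113.7")
```

Emitting one JSON object per line keeps the logs easy to forward into a SIEM or log-aggregation pipeline later.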

IaaS Event Sources


In IaaS environments, you typically have control of and access to the event and diagnostic data.

Almost all infrastructure-level logs are visible to you, as are detailed application logs.

To maintain reasonable investigation capabilities, auditability, and traceability of data, it is recommended that you specify data access requirements in the cloud SLA or contract with the CSP.

The following logs might be important to examine at some point but might not be available by default:

  1. Cloud or network provider perimeter network logs
  2. Logs from DNS servers
  3. Virtual machine manager (VMM) logs
  4. Host OS and hypervisor logs
  5. API access logs
  6. Management portal logs
  7. Packet captures
  8. Billing records

Identifying Event Attribute Requirements

So that you can perform effective audits and investigations, the event log should contain as much of the relevant data for the processes being examined as possible.

OWASP recommends that the following event attributes be integrated into logged event data.

When:

  1. Log date and time (international format)
  2. Event date and time. The event timestamp may differ from the time of logging; for example, in server logging, the client application is hosted on a remote device that is only periodically or intermittently online.
  3. Interaction identifier

Where:

  1. Application identifiers, such as name and version
  2. Application address, such as cluster/hostname or server IPv4 or IPv6 address and port number, workstation identity, and local device identifier
  3. Service name and protocol
  4. Geolocation
  5. Window, form, or page, such as entry point uniform resource locator (URL) and HTTP method for a web application and dialog box name
  6. Code location, including the script and module name

Who (human or machine user):

  1. Source address, including the user’s device machine identifier, user’s IP address, cell tower ID, and mobile telephone number
  2. User identity (if authenticated or otherwise known), including the user database table primary key value, username, and license number

What:

  1. Type of event
  2. Severity of the event (0=emergency, 1=alert, …, 7=debug; or fatal, error, warning, info, debug, and trace)
  3. Security-relevant event flag (if the logs contain nonsecurity event data, too)
  4. Description

Additional considerations:

  1. Secondary time source (Global Positioning System [GPS]) event date and time
  2. Action, which is the original intended purpose of the request. Examples are log in, refresh session ID, log out, and update profile.
  3. Object, such as the affected component or another object (user account, data resource, or file), URL, session ID, user account, or file
  4. Result status. Whether the action aimed at the object was successful (can be Success, Fail, or Defer).
  5. Reason. Why the status occurred. Examples might be that the user was not authenticated in the database check or had incorrect credentials.
  6. HTTP status code (for web applications only). The status code returned to the user (often 200 or 301).
  7. User type classification, such as public, authenticated user, CMS user, search engine, authorized penetration tester, and uptime monitor
  8. Analytical confidence in the event detection, such as low, medium, high, or a numeric value
  9. Responses seen by the user or taken by the application, such as status code, custom text messages, session termination, and administrator alerts
  10. Extended details, such as stack trace, system error messages, debug information, HTTP request body, and HTTP response headers and body
  11. Internal classifications, such as responsibility and compliance references
  12. External classifications, such as NIST Security Content Automation Protocol (SCAP) and Mitre Common Attack Pattern Enumeration and Classification (CAPEC)
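Putting the when / where / who / what groups together, a single log entry might be assembled as a nested record like the sketch below. The field names and sample values are assumptions chosen for illustration, not a mandated schema.

```python
from datetime import datetime, timezone

# Illustrative sketch: one log entry covering the "when / where / who / what"
# attribute groups listed above. All field names and values are assumptions.
def build_event(action, obj, status, reason, http_status=None):
    now = datetime.now(timezone.utc).isoformat()
    return {
        # When
        "log_time": now,
        "event_time": now,              # may differ if the client logs offline
        "interaction_id": "req-0001",   # hypothetical correlation identifier
        # Where
        "app": {"name": "billing-portal", "version": "2.4.1"},
        "host": {"ip": "198.51.100.10", "port": 443},
        # Who
        "user": {"id": 42, "username": "alice", "source_ip": "203.0.113.7"},
        # What
        "action": action,               # e.g. "login"
        "object": obj,                  # e.g. "user account"
        "status": status,               # "Success" | "Fail" | "Defer"
        "reason": reason,
        "http_status": http_status,
    }

# A failed login attempt with its reason and HTTP status recorded.
evt = build_event("login", "user account", "Fail", "incorrect credentials", 401)
```

Capturing all four groups in every entry is what later makes correlation and forensic search possible.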

Storage and Analysis of Data Events

Event and log data can become costly to archive and maintain depending on the volume of data being gathered.

Carefully consider these issues as well as the business and regulatory requirements and responsibilities of the organizations when planning for event data preservation.

Preservation is defined by ISO 27037:2012 as the “process to maintain and safeguard the integrity and/or original condition of the potential digital evidence.” Evidence preservation helps ensure admissibility in a court of law.

However, digital evidence is notoriously fragile and is easily changed or destroyed.

Given that the backlog in many forensic laboratories ranges from six months to a year (and that the legal system might create further delays), potential digital evidence may spend a significant period in storage before it is analyzed or used in a legal proceeding.

Storage requires strict access controls to protect the items from accidental or deliberate modification, as well as appropriate environmental controls. Also, note that certain regulations and standards require that event logging mechanisms be tamper-proof to avoid the risks of faked event logs.
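One common technique for making event logs tamper-evident is hash chaining: each entry records the hash of the previous entry, so any later modification breaks the chain. The sketch below illustrates the idea only; real tamper-proof logging systems also sign entries or anchor the chain externally.

```python
import hashlib
import json

# Sketch of tamper-evident logging via a SHA-256 hash chain.
# Illustration only; production systems add signing or external anchoring.
def append_entry(chain, entry):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})
    return chain

def verify_chain(chain):
    prev_hash = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if link["prev"] != prev_hash or link["hash"] != expected:
            return False
        prev_hash = link["hash"]
    return True

log = []
append_entry(log, {"event": "login", "user": "alice"})
append_entry(log, {"event": "logout", "user": "alice"})
assert verify_chain(log)             # chain is intact
log[0]["entry"]["user"] = "mallory"  # simulate tampering
assert not verify_chain(log)         # verification now fails
```

Because each hash depends on everything before it, an attacker cannot alter or delete an earlier entry without invalidating every subsequent link.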

The gathering, analysis, storage, and archiving of event and log data is not limited to the forensic investigative process, however. In all organizations, you are called on to execute these activities on an ongoing basis for a variety of reasons during the normal flow of enterprise operations.

Whether it is to examine a firewall log, diagnose an application installation error, validate access controls, understand network traffic flows, or manage resource consumption, the use of event data and logs is standard practice.

SIEM

What you need to concern yourself with is how you can collect the volumes of logged event data available and manage it from a centralized location.

SIEM is a term for software products and services combining security information management (SIM) and security event management (SEM).

SIEM technology provides real-time analysis of security alerts generated by network hardware and applications.

SIEM is sold as software, appliances, or managed services and is used to log security data and generate reports for compliance purposes. The acronyms SEM, SIM, and SIEM are sometimes used interchangeably.

The segment of security management that deals with real-time monitoring, correlation of events, notifications, and console views is commonly known as SEM.

The second area provides long-term storage, analysis, and reporting of log data and is known as SIM. 

SIEM systems typically provide the following capabilities:

Data aggregation: Log management aggregates data from many sources, including network, security, servers, databases, and applications, providing the ability to consolidate monitored data to help avoid missing crucial events.

Correlation: This involves looking for common attributes and linking events into meaningful bundles.

This technology provides the ability to perform a variety of correlation techniques to integrate different sources to turn data into useful information. Correlation is typically a function of the SEM portion of a full SIEM solution.
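A toy version of SEM-style correlation can be written in a few lines: group events that share a common attribute and flag bundles that cross a threshold. The rule below (repeated login failures from one source IP) and its field names are illustrative assumptions, not tied to any specific SIEM product.

```python
from collections import defaultdict

# Toy sketch of event correlation: link events by a shared attribute
# (source IP) and flag any bundle that meets the threshold.
def correlate_failed_logins(events, threshold=3):
    by_source = defaultdict(list)
    for e in events:
        if e["type"] == "login_fail":
            by_source[e["source_ip"]].append(e)
    return {ip: evts for ip, evts in by_source.items() if len(evts) >= threshold}

events = [
    {"type": "login_fail", "source_ip": "203.0.113.7", "user": "alice"},
    {"type": "login_fail", "source_ip": "203.0.113.7", "user": "bob"},
    {"type": "login_fail", "source_ip": "203.0.113.7", "user": "carol"},
    {"type": "login_ok",   "source_ip": "198.51.100.2", "user": "dave"},
]
alerts = correlate_failed_logins(events)
# "203.0.113.7" is flagged: three failures from one source
```

Real SIEM correlation engines apply many such rules at once, across time windows and multiple data sources, but the principle of bundling events by shared attributes is the same.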

Alerting: This is the automated analysis of correlated events and production of alerts to notify recipients of immediate issues. Alerting can be to a dashboard or via third-party channels such as email.

Dashboards: Tools can take event data and turn it into informational charts to assist in seeing patterns or identifying activity that is not forming a standard pattern.

Compliance: Applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance, and auditing processes.

Retention: This involves employing long-term storage of historical data to facilitate the correlation of data over time and to provide the retention necessary for compliance requirements.

Long-term log data retention is critical in forensic investigations because it is unlikely that the discovery of a network breach will coincide with the breach occurring.

Forensic analysis: This is the ability to search across logs on different nodes and periods based on specific criteria. It avoids the need to correlate log information mentally or to search through thousands and thousands of logs by hand.
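That kind of criteria-based search over aggregated logs can be sketched as a simple filter over records collected from multiple nodes. The log layout below (node, timestamp, message) is an assumption for illustration.

```python
from datetime import datetime

# Sketch of forensic search across aggregated multi-node logs by criteria.
# The record layout is an illustrative assumption.
def search_logs(logs, node=None, start=None, end=None, contains=None):
    for rec in logs:
        if node and rec["node"] != node:
            continue
        t = datetime.fromisoformat(rec["time"])
        if start and t < start:
            continue
        if end and t > end:
            continue
        if contains and contains not in rec["message"]:
            continue
        yield rec

logs = [
    {"node": "web-1", "time": "2023-05-01T10:00:00", "message": "auth failure for alice"},
    {"node": "db-1",  "time": "2023-05-01T10:05:00", "message": "slow query"},
    {"node": "web-2", "time": "2023-05-02T09:00:00", "message": "auth failure for bob"},
]
hits = list(search_logs(logs, contains="auth failure",
                        end=datetime(2023, 5, 1, 23, 59)))
# matches only the web-1 entry from May 1
```

A production SIEM would run such queries against an indexed store rather than a linear scan, but the search criteria (node, time window, content) are the same.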

However, there are challenges with SIEM systems in the cloud that have to be considered when deciding whether this technology makes sense for the organization.

Turning over internal security data to a CSP requires trust, and many users of cloud services desire more clarity on the provider’s security precautions before being willing to trust a provider with this kind of information.

Another problem with pushing SIEM into the cloud is that targeted attack detection requires in-depth knowledge of internal systems, the kind of knowledge found in corporate security teams.

Cloud-based SIEM services may have trouble recognizing low-and-slow attacks. In targeted attacks, attackers who breach an organization often generate only a small amount of activity while carrying out their attacks.

To see that evidence, the customer must have access to the data gathered by the CSP’s monitoring infrastructure.

That access to monitoring data needs to be specified as part of the SLA and may be difficult to obtain, depending on the contract terms in force.
