Google Data Centers

How does a Google Cloud data center work?

Google Data center design, planning, and architecture have long formed an integral part of the information technology (IT) services for providers of computing services.

Over time, these have typically evolved and grown in line with computing developments and enhanced capabilities.

Google data centers continue to be refined, enhanced, and improved upon globally; however, they still rely heavily on the same essential components to support their activities (power, water, structures, connectivity, security, and more).

Implementing a secure design when creating a data center involves many considerations.

Before making any design decisions, work with senior management and other key stakeholders to identify all compliance requirements for the Google data center.

If you’re designing a Google data center for public cloud services, consider the different levels of security that will be offered to your customers.

Modern Google Data Center and Cloud Service Offering

Until recently, data centers were built with the mindset of supplying hosting, compute, storage, or other services with typical or standard organization types in mind.

The same cannot (and should not!) be said for modern-day Google data centers and cloud service offerings.

A fundamental shift in consumer use of cloud-based services has thrust the users into the same data centers as the enterprises, thereby forcing providers to take into account the challenges and complexities associated with differing outlooks, drivers, requirements, and services.

For example, if customers will host Payment Card Industry data or a payments platform, these need to be identified and addressed in the relevant design process to ensure a fit-for-purpose design that meets and satisfies all current Payment Card Industry Data Security Standard (PCI DSS) requirements.

Factors That Affect Google Data Center Design

The location of the data center and of the cloud's users affects compliance decisions and can further complicate the organization's ability to meet legal and regulatory requirements, because the geographic location of the data center determines its jurisdiction.

Before selecting a location for the data center, an organization should have a clear understanding of requirements at the national, state, and local levels. Contingency, failover, and redundancy involving other data centers in different locations are important to understand.

The types of service models (platform as a service [PaaS], infrastructure as a service [IaaS], and software as a service [SaaS]) the cloud provides also influence design decisions.

Once the compliance requirements have been identified, they should be included in the data center design. Additional data center considerations and operating standards should be included in the design.

Some examples include ISO 27001 and Information Technology Infrastructure Library (ITIL) IT service management (ITSM). There is a close relationship between the physical and the environmental design of a data center.

Poor design choices in either area can affect the other and cause a significant cost increase, delay completion, or impinge upon operations if not done properly.

The early adoption of a data center design standard that meets organizational requirements is a critical factor when creating a Google cloud-based data center.

 Additional areas to consider as they pertain to Google data center design include the following:

  1. Automating service enablement
  1. Consolidating monitoring capabilities
  1. Reducing mean time to repair (MTTR)
  1. Reducing mean time between failure (MTBF)

Google Data Center Logical Design

The characteristics of cloud computing can affect the logical design of a Google data center.

Multitenancy

As enterprises transition from traditional dedicated server deployments to virtualized environments that leverage cloud services, the cloud computing networks they are building must provide security and segregate sensitive data and applications.

In some cases, multitenant networks are a solution. Multitenant networks, in a nutshell, are data center networks that are logically divided into smaller, isolated networks.

They share the physical networking gear but operate on their network without visibility into the other logical networks.

The multitenant nature of a cloud deployment requires a logical design that partitions and segregates client and customer data. Failure to do so can result in unauthorized access, viewing, or modification of tenant data.
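
To make the idea of logical division concrete, here is a minimal, purely illustrative Python sketch (using only the standard ipaddress module) of carving one shared address block into isolated per-tenant slices; the 10.0.0.0/8 block and the tenant names are assumptions for the example, not Google's actual allocation scheme.

```python
import ipaddress

# Shared physical network modeled as one large block (illustrative value).
SHARED_BLOCK = ipaddress.ip_network("10.0.0.0/8")

# Carve the block into /16 slices, one per tenant; each slice is a
# logically isolated network riding on the same physical gear.
slices = SHARED_BLOCK.subnets(new_prefix=16)

tenants = ["tenant-a", "tenant-b", "tenant-c"]   # hypothetical tenants
allocations = {name: next(slices) for name in tenants}

for name, subnet in allocations.items():
    print(f"{name}: {subnet} ({subnet.num_addresses} addresses)")

# A packet is only "visible" to a tenant if its address falls inside that
# tenant's slice -- a toy stand-in for VLAN/VRF-style logical isolation.
def visible_to(tenant: str, address: str) -> bool:
    return ipaddress.ip_address(address) in allocations[tenant]

assert visible_to("tenant-a", "10.0.5.9")
assert not visible_to("tenant-b", "10.0.5.9")
```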

Google Cloud Data Center Management Plane

Additionally, the cloud management plane needs to be logically isolated, although physical isolation may offer a more secure solution. The cloud management plane provides monitoring and administration of the Google Cloud network platform to keep the whole cloud operating normally, including the following (two of these functions are sketched after the list):

  1. Configuration management and services lifecycle management
  1. Services registry and discovery
  1. Monitoring, logging, accounting, and auditing
  1. Service-level agreement (SLA) management
  1. Security services and infrastructure management
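
As a rough illustration of two of these functions (the services registry/discovery and SLA management items), here is a toy Python sketch; the service names, endpoints, and SLA targets are invented for the example, not real management-plane interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    endpoint: str
    sla_target: float          # required availability, e.g. 0.999
    observed_uptime: float = 1.0

@dataclass
class ManagementPlane:
    """Toy registry standing in for the registry/discovery and SLA duties listed above."""
    registry: dict = field(default_factory=dict)

    def register(self, svc: Service) -> None:
        self.registry[svc.name] = svc          # services registry

    def discover(self, name: str) -> str:
        return self.registry[name].endpoint    # service discovery

    def sla_breaches(self):
        # SLA management: flag services running below their availability target.
        return [s.name for s in self.registry.values()
                if s.observed_uptime < s.sla_target]

plane = ManagementPlane()
plane.register(Service("vm-provisioning", "https://10.1.0.10/api", 0.999, 0.9992))
plane.register(Service("object-storage", "https://10.1.0.11/api", 0.9999, 0.9980))
print(plane.discover("object-storage"))
print(plane.sla_breaches())   # -> ['object-storage']
```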

Virtualization Technology

Virtualization technology offers many of the capabilities needed to meet the requirements for partitioning and data segregation.

The logical design should incorporate a hypervisor that meets the system requirements.

Following are key areas that need to be incorporated in the logical design of the Google data center:

  1. Communications access (permitted and not permitted), user access profiles, and permissions, including application programming interface (API) access
  1. Secure communication within and across the management plane
  1. Secure storage (encryption, partitioning, and key management); a brief sketch follows this list
  1. Backup and disaster recovery (DR) along with failover and replication
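
The secure-storage item above can be illustrated with a short, hedged Python sketch built on the third-party cryptography package; the key handling shown is deliberately simplified and stands in for a real key management service (KMS).

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Key management (simplified): in a real deployment the data-encryption key
# would itself be wrapped and stored by a KMS, not kept alongside the data.
data_key = Fernet.generate_key()
cipher = Fernet(data_key)

# Encrypt tenant data before it reaches the shared storage nodes.
record = b"tenant-a: cardholder data (illustrative)"
ciphertext = cipher.encrypt(record)

# Only a holder of the tenant's key can read the partitioned data back.
assert cipher.decrypt(ciphertext) == record
print(ciphertext[:32], b"...")
```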

Other Logical Design Considerations

Other logical design considerations include these: 

  1. Design for segregation of duties so that data center staff can access only the data needed to do their jobs.
  1. Design for monitoring of network traffic. The management plane should also be monitored for compromise and abuse.
  1. Consider the hypervisor and virtualization technology when designing the monitoring capability. Some hypervisors may not allow enough visibility for adequate monitoring, and the level of monitoring depends on the type of cloud deployment.
  1. Plan for automation and the use of APIs, which are essential for a successful cloud deployment. The logical design should include the secure use of APIs and a method to log API use (see the sketch after this list).
  1. Make logical design decisions that are enforceable and monitored. For example, access control should be implemented with an identity and access management (IAM) system that can be audited.
  1. Use software-defined networking tools to support logical isolation.
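
As referenced in the list, the API-logging and auditable access-control points can be sketched with nothing more than the Python standard library; the role names and permission table below are hypothetical, and a production design would rely on Cloud IAM and audit logging services rather than an in-process decorator.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")

# Hypothetical role-to-permission table; a real deployment would use Cloud IAM.
PERMISSIONS = {"operator": {"list_vms"}, "admin": {"list_vms", "delete_vm"}}

def audited(permission):
    """Enforce a permission check and write an audit log entry for every API call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(caller_role, *args, **kwargs):
            allowed = permission in PERMISSIONS.get(caller_role, set())
            logging.info("call=%s role=%s allowed=%s args=%s",
                         fn.__name__, caller_role, allowed, args)
            if not allowed:
                raise PermissionError(f"{caller_role} may not {permission}")
            return fn(caller_role, *args, **kwargs)
        return wrapper
    return decorator

@audited("delete_vm")
def delete_vm(caller_role, vm_id):
    return f"{vm_id} deleted"

print(delete_vm("admin", "vm-42"))       # logged and permitted
# delete_vm("operator", "vm-42")         # logged, then raises PermissionError
```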

Google Cloud Data Center Logical Design Levels

The logical design for data separation needs to be incorporated at the following levels:

  1. Compute nodes
  1. Management plane
  1. Storage nodes
  1. Control plane
  1. Network

Service Model

The service model influences the logical design. Here are examples:

  1. For IaaS, many of the hypervisor features can be used to design and implement security.
  1. For PaaS, logical design features of the underlying platform and database can be leveraged to implement security.
  1. For SaaS, the same as above applies, and additional measures in the application can be used to enhance security.

All logical design decisions should be mapped to specific compliance requirements, such as logging, retention periods, and reporting capabilities for auditing.
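
One lightweight way to keep that mapping explicit is a simple traceability table; the following Python sketch is illustrative only, and the requirement identifiers and retention periods are placeholders rather than actual PCI DSS or ISO 27001 values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceMapping:
    design_decision: str
    requirement: str        # placeholder identifiers, not real clause numbers
    log_retention_days: int
    report: str

MAPPINGS = [
    ComplianceMapping("Centralized API audit logging", "REQ-LOG-01", 365, "quarterly"),
    ComplianceMapping("Per-tenant storage encryption", "REQ-ENC-02", 90, "annual"),
    ComplianceMapping("IAM-based access control", "REQ-IAM-03", 365, "quarterly"),
]

# Auditors can then be shown which design decisions satisfy a given requirement.
def decisions_for(requirement: str):
    return [m.design_decision for m in MAPPINGS if m.requirement == requirement]

print(decisions_for("REQ-LOG-01"))
```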

Ongoing monitoring systems should also be designed to ensure these measures remain effective.

Physical Design

No two data centers are alike, and they should not be, for it is the business that drives the requirements for IT and the data centers.

IT infrastructure in today’s data center is designed to provide specific business services and can affect the physical design of the data center.

For example, thin blade rack-mounted web servers are required for high-speed user interaction, whereas data-mining applications require larger mainframe-style servers.

The physical infrastructure to support these different servers can vary greatly.

Given their criticality, data center design becomes an issue of paramount importance in terms of technical architecture, business requirements, energy efficiency, and environmental requirements.

Over the past decade, data center design has been standardized as a collection of standard components that are plugged together.

Each component has been designed to optimize its efficiency, with the expectation that, taken as a whole, optimum efficiency would be achieved.

That view is shifting to one in which an entire data center is viewed as an integrated combination designed to run at the highest possible efficiency level, which requires custom-designed subcomponents to ensure they contribute to the overall efficiency goal.

One example of this trend can be seen in the design of the chicken coop data center, which hosts racks of physical infrastructure within long rectangles with a long side facing the prevailing wind, thereby allowing natural cooling.1 Facebook, in its Open Compute design, places air intakes and outputs on the second floor of its data centers so that cool air can enter the building and drop onto the machines, while hot air rises and is evacuated by large fans.

The physical design should also account for possible expansion and upgrading of both computing and environmental equipment.

For example, is there enough room to add cooling or access points that are large enough to support equipment changes? The physical design of a data center is closely related to environmental design.

Physical design decisions can shape the environmental design of the data center.

For example, the choice to use raised floors affects the heating, ventilation, and air conditioning (HVAC) design.

When designing a Google Cloud data center, consider the following areas:

  1. Does the physical design protect against environmental threats such as flooding, earthquakes, and storms?
  1. Does the physical design include provisions for access to resources during disasters to ensure the data center and its personnel can continue to operate safely? Examples include the following:
  • Clean water
  • Clean power
  • Food
  • Telecommunications
  • Accessibility during and after a disaster
  1. Are there physical security design features that limit access to authorized personnel? Some examples include these:
  • Perimeter protections such as walls, fences, gates, and electronic surveillance
  • Access points to control ingress and egress and verify identity and access authorization with an audit trail, including egress monitoring to prevent theft

Building or Buying

Organizations can build a data center, buy one, or lease space in a data center.

Regardless of the decision made by the organization, certain standards and issues need to be considered and addressed through planning, such as data center tier certification, physical security level, and usage profile (multitenant hosting versus dedicated hosting).

As a certified cloud security professional (CCSP), you play a role, along with the enterprise architect, in ensuring these issues are identified and addressed as part of the decision process.

If you build the data center, the organization has the most control over its design and security.

However, a significant investment is required to build a robust data center.

Buying a data center or leasing space in a data center may be a cheaper alternative, but either one of these options may include limitations on design inputs.

The leasing organization needs to include all security requirements in the request for proposal (RFP) and contract.

When using a shared data center, physical separation of servers and equipment needs to be included in the design.

Google Data Center Design Standards

Any organization building or using a data center should design the data center based on the standard or standards that meet its organizational requirements. Many standards are available to choose from:

  1. Building Industry Consulting Service International Inc. (BICSI): The ANSI/BICSI 002-2014 standard covers cabling design and installation.
  1. The International Data Center Authority (IDCA): The Infinity Paradigm covers data center location, facility structure, infrastructure, and applications.
  1. The National Fire Protection Association (NFPA): NFPA 75 and 76 standards specify how hot or cold aisle containment is to be carried out, and NFPA standard 70 requires the implementation of an emergency power-off button to protect first responders in the data center in case of emergency.

This section briefly examines the Uptime Institute’s Data Center Site Infrastructure Tier Standard Topology.

The Uptime Institute is a leader in data center design and management. Its “Data Center Site Infrastructure Tier Standard: Topology” document provides the baseline that many enterprises use to rate their data center designs.2

The document describes a four-tiered architecture for data center design, with each tier progressively more secure, reliable, and redundant in its design and operational elements.

The four tiers are named as follows: 

  1. Tier I: Basic Data Center Site Infrastructure
  1. Tier II: Redundant Site Infrastructure Capacity Components
  1. Tier III: Concurrently Maintainable Site Infrastructure
  1. Tier IV: Fault-Tolerant Site Infrastructure

The document also addresses the supporting infrastructure systems that these designs rely on, such as power generation systems, ambient temperature control, and makeup (backup) water systems.

Cloud security professionals may want to familiarize themselves with the detailed requirements laid out for each of the four tiers of the architecture so that they are better prepared for the demands and issues associated with designing a data center to comply with a particular tier, if the organization requires it.

Google Data Center Environmental Design Considerations

The environmental design must account for adequate heating, ventilation, air conditioning, power with adequate conditioning, and backup. Network connectivity should come from multiple vendors and include multiple paths into the facility.

Temperature and Humidity Guidelines

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 has created a set of guidelines for temperature and humidity ranges in the data center.

The guidelines are available as the 2011 Thermal Guidelines for Data Processing Environments Expanded Data Center Classes and Usage Guidance.3 These guidelines specify the recommended operating ranges for temperature and humidity.

These ranges refer to the IT equipment intake temperature.
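
As a simple illustration, the sketch below checks hypothetical server-inlet readings against a recommended dry-bulb envelope; the 18–27 °C range is the commonly cited ASHRAE recommendation, but the current guideline should be consulted, and the rack readings are invented.

```python
# Commonly cited ASHRAE TC 9.9 recommended dry-bulb envelope for IT equipment
# intake (assumed here as 18-27 C; confirm against the current guideline).
RECOMMENDED_C = (18.0, 27.0)

def inlet_status(temp_c: float) -> str:
    low, high = RECOMMENDED_C
    if temp_c < low:
        return "below recommended range (over-cooling, wasted energy)"
    if temp_c > high:
        return "above recommended range (thermal risk)"
    return "within recommended range"

# Hypothetical server-inlet readings from three racks.
for rack, reading in {"rack-a1": 17.2, "rack-b4": 24.5, "rack-c7": 29.1}.items():
    print(f"{rack}: {reading} C -> {inlet_status(reading)}")
```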

Temperature can be controlled at several locations in the Google data center, including the following:

  1. Server inlet
  1. Server exhaust
  1. Floor tile supply temperature
  1. HVAC unit return air temperature
  1. Computer room air conditioning unit supply temperature

HVAC Considerations

Normally, data center HVAC units are turned on and off based on return air temperature. When the ASHRAE temperature recommendations are used, the design needs to address how to produce the recommended inlet temperatures.

The CCSP should be aware that the lower the temperature in the data center is, the greater the cooling costs per month.

Essentially, the air conditioning system moves heat generated by equipment in the data center outside, allowing the data center to maintain a stable temperature range for the operating equipment.

The power required to cool a data center depends on the amount of heat being removed as well as the temperature difference between the inside of the data center and the outside air.
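
To make that relationship concrete, the following back-of-the-envelope Python sketch uses the common sensible-heat approximation for standard air, q [BTU/hr] ≈ 1.08 × CFM × ΔT [°F]; the IT load and temperature differences are illustrative inputs, not measured values.

```python
# Sensible-heat approximation for standard air: q[BTU/hr] ~= 1.08 * CFM * dT[F].
BTU_PER_KW_HR = 3412.0   # 1 kW of IT load ~= 3412 BTU/hr of heat to remove

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to carry away the IT heat load at a given supply/return split."""
    heat_btu_hr = it_load_kw * BTU_PER_KW_HR
    return heat_btu_hr / (1.08 * delta_t_f)

# Illustrative inputs: a 100 kW room with a 20 F rise across the equipment.
print(f"{required_cfm(100, 20):,.0f} CFM")   # -> 15,796 CFM
# Widening the temperature difference (e.g. better containment) cuts the airflow needed:
print(f"{required_cfm(100, 30):,.0f} CFM")   # -> 10,531 CFM
```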

Air Management for Google Data Center

Air management for data centers entails designing and configuring the facility so that mixing between the cooling air supplied to the equipment and the hot air rejected from the equipment is minimized or eliminated.

Effective air management implementation minimizes the bypass of cooling air around rack intakes and the recirculation of heat exhaust back into rack intakes.

When designed correctly, an air management system can reduce operating costs, reduce first cost equipment investment, increase the data center’s power density (watts/square foot), and reduce heat-related processing interruptions or failures.

A few key design issues include the configuration of equipment’s air intake and heat exhaust ports, the location of supply and returns, the large-scale airflow patterns in the room, and the temperature set points of the airflow.

Google Data Center Cable Management

A data center should have a cable management strategy to minimize airflow obstructions caused by cables and wiring.

This strategy should target the entire cooling airflow path, including the rack-level IT equipment air intake and discharge areas, as well as underfloor areas.

 Two conditions commonly promote the development of hot spots:

  1. Under-floor and overhead obstructions, which often interfere with the distribution of cooling air. Such interference can significantly reduce the air handlers' airflow and negatively affect the air distribution.
  1. Cable congestion in raised-floor plenums, which can sharply reduce the total airflow as well as degrade the airflow distribution through the perforated floor tiles.

A minimum effective (clear) height of 24 inches should be provided for raised-floor installations.

Greater under-floor clearance can help achieve a more uniform pressure distribution in some cases.

Persistent cable management is a key component of effective air management.

Instituting a cable mining program (that is, a program to remove abandoned or inoperable cables) as part of an ongoing cable management plan optimizes the air delivery performance of data center cooling systems.

Aisle Separation and Containment

A basic hot aisle or cold aisle configuration is created when the equipment racks and the cooling system’s air supply and return are designed to prevent mixing of the hot rack exhaust air and the cool supply air drawn into the racks.

As the name implies, the data center equipment is laid out in rows of racks with alternating cold (rack air intake side) and hot (rack air heat exhaust side) aisles between them.

Strict hot aisle and cold aisle configurations can significantly increase the air-side cooling capacity of a data center’s cooling system.

All equipment should be installed into the racks to achieve a front-to-back airflow pattern that draws conditioned air in from cold aisles, located in front of the equipment, and rejects heat through the hot aisles behind the racks.

Equipment with nonstandard exhaust directions must be addressed (shrouds, ducts, and so on) to achieve a front-to-back airflow.

The racks are placed back to back, and holes through the rack (vacant equipment slots) are blocked off on the intake side to create barriers that reduce recirculation.

Additionally, cable openings in raised floors and ceilings should be sealed as tightly as possible.

With proper isolation, the temperature of the hot aisle no longer influences the temperature of the racks or the reliable operation of the data center; the hot aisle becomes a heat exhaust.
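
A simple mixing calculation shows why even modest recirculation erodes that isolation; the supply and exhaust temperatures and recirculation fractions below are illustrative assumptions.

```python
def inlet_temp_c(supply_c: float, exhaust_c: float, recirculation: float) -> float:
    """Rack inlet temperature when a fraction of hot-aisle air mixes back into the intake."""
    return (1 - recirculation) * supply_c + recirculation * exhaust_c

SUPPLY, EXHAUST = 20.0, 38.0   # illustrative cold-aisle supply and hot-aisle exhaust, in C
for r in (0.0, 0.10, 0.25):
    print(f"recirculation {r:>4.0%}: inlet {inlet_temp_c(SUPPLY, EXHAUST, r):.1f} C")
# 0% -> 20.0 C, 10% -> 21.8 C, 25% -> 24.5 C: mixing steadily erodes the cooling margin.
```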

The air-side cooling system is configured to supply cold air exclusively to the cold aisles and pull return air only from the hot aisles.

One recommended design configuration supplies cool air via an under-floor plenum to the racks.

The air then passes through the equipment in the rack and enters a separated, semi-sealed area for return to an overhead plenum.

This approach uses a baffle panel or barrier above the top of the rack and at the ends of the hot aisles to mitigate short-circuiting (the mixing of hot and cold air).

Google Data Center HVAC Design Considerations

Industry guidance should be followed to provide adequate HVAC to protect the server equipment.

Include the following considerations in your design:

  1. The local climate will affect the HVAC design requirements.
  1. Redundant HVAC systems should be part of the overall design.
  1. The HVAC system should provide air management that separates the cool air from the heat exhaust of the servers. Various methods provide air management, including racks with built-in ventilation or alternating cold and hot aisles; the best design choices depend on space and building design constraints.
  1. Consideration should be given to energy-efficient systems.
  1. Backup power supplies should be provided to run the HVAC system for as long as the equipment it cools is required to stay up.
  1. The HVAC system should filter contaminants and dust.

Multivendor Pathway Connectivity

Uninterrupted service and continuous access are critical to the daily operation and productivity of your business.

With downtime translating directly to the loss of income, data centers must be designed for redundant, fail-safe reliability and availability. Data center reliability is also defined by the performance of the infrastructure.

Cabling and connectivity backed by a reputable vendor with guaranteed error-free performance help avoid poor transmission in the data center. There should be redundant connectivity from multiple providers into the data center.

This helps prevent a single point of failure for network connectivity.

Each redundant path should offer at least the minimum connection speed expected for data center operations.
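
The benefit of independent provider paths can be estimated with the standard parallel-availability formula A = 1 − Π(1 − Aᵢ), sketched below; the per-provider availability figures are illustrative, not quoted SLAs.

```python
from functools import reduce

def combined_availability(path_availabilities):
    """Availability of independent, redundant paths: 1 minus the product of path downtimes."""
    downtime = reduce(lambda acc, a: acc * (1 - a), path_availabilities, 1.0)
    return 1 - downtime

single = [0.995]          # one provider: ~43.8 hours of downtime per year
dual = [0.995, 0.99]      # two independent providers (illustrative figures)

for label, paths in (("single provider", single), ("dual provider", dual)):
    a = combined_availability(paths)
    print(f"{label}: {a:.6f} availability, ~{(1 - a) * 8760:.1f} h downtime/yr")
```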

Implementing Physical Infrastructure for Google Cloud Data Center Environments

Many components make up the design of the data center, including logical components such as general service types and physical components such as the hardware used to host the logical service types envisioned.

The hardware has to be connected to allow networking to take place and information to be exchanged.

To do so securely, follow the standards for data center design, where applicable, as well as best practices and common sense.

Cloud computing removes the traditional silos within the data center and introduces a new level of flexibility and scalability to the IT organization.

This flexibility addresses challenges facing enterprises and IT service providers, including rapidly changing IT landscapes, cost reduction pressures, and focus on time-to-market.
