Cloud computing is a way of using IT that has five key traits:
- Resources are on-demand and self-service: clients get what they need through a web interface, with no human intervention required.
- Clients can access resources from anywhere over an internet connection.
- Clients don't need to worry about the physical location of the resources, which the cloud provider allocates from a large pool.
- Resources are elastic. You can scale up quickly when you need more, and scale back down just as fast when you need less.
- Payment is based on the resources used: a pay-as-you-go model.
Google Cloud, in particular, provides compute, storage, big data, machine learning, and application services for web, mobile, analytics, and back-end solutions.
Cloud History
Colocation was the first wave of cloud computing: instead of getting involved in data-centre real estate, users rented physical space for their hardware, which was more financially efficient.
The second wave involved virtualised data centres. I enjoy writing on paper using different colours of pens and sometimes pencils. I carry my stationery in a pouch because it's easy to lose pens, and I quite value my pens and pencils. However, as much as I love them all, I cannot carry them everywhere, because my pouch cannot hold every pen and pencil I own. I have to choose what I want to use every time. But imagine if I had a fairy godmother who whipped out a magical purple pouch just for me that could fit all of my current and future pens and pencils. I would not have to worry about choosing, because I could make the choice anywhere and use any pen at any given time. This is the same concept as virtualised data centres: they are the magical purple pouch for computing and data, replacing the physical devices you would otherwise keep around. The magical pouch keeps everything you need organised, accessible, and in one place.
Virtualisation allows users to control and configure the environment that suits their needs.
The third wave is container-based architecture, which consists of automated, scalable services that provision and configure the infrastructure needed to run applications.
Virtualised data centres led to the introduction of new offerings:
IaaS (Infrastructure as a Service) offers compute, storage, and network capabilities as virtual resources that are organised much like a physical data centre. Clients pay for the resources they allocate ahead of time.
PaaS (Platform as a Service) focuses on resources for the application logic, binding code to libraries that give access to the infrastructure the application needs. Clients pay only for the resources they actually use.
Managed infrastructure and managed services were also introduced. They allow companies to focus on their business goals, like delivering products faster and more reliably, while spending fewer resources on maintaining technical infrastructure.
Serverless computing was also introduced, letting developers focus on their code instead of server configuration. Google's serverless technologies include Cloud Functions and Cloud Run. Cloud Functions manages event-driven code on a pay-as-you-go basis, while Cloud Run lets clients deploy containerised microservice applications in a managed environment (see the sketch after this list of offerings).
SaaS (Software as a Service) is where applications run entirely in the cloud and are consumed over the internet, without being installed on a local machine. Good examples are Gmail and Drive.
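To make the serverless idea above concrete, here is a minimal sketch of an HTTP-triggered Cloud Function using the open-source Functions Framework for Python. The function name, file layout, and greeting logic are illustrative choices, not something prescribed by the article.

```python
# main.py -- a minimal HTTP-triggered Cloud Function (illustrative sketch).
import functions_framework


@functions_framework.http
def hello_http(request):
    """Return a greeting; the platform provisions and scales the servers that run this."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Because the platform manages the servers, deploying a function like this (for example with the gcloud CLI) is enough to get a scalable HTTPS endpoint that is billed per invocation.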
Google Cloud runs on Google's own global network. Google's infrastructure is based in five major locations: North America, South America, Europe, Asia, and Australia. Having multiple service locations is vital because location affects the availability, durability, and latency of information.
Geographical locations are divided into regions and zones. Regions are geographic areas composed of zones. For example, London (europe-west2) is a region with three zones: europe-west2-a, europe-west2-b, and europe-west2-c. Google Cloud resources are deployed in zones. For redundancy and protection against natural disasters, you should run resources in different regions.
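As a small illustration of the region/zone naming above, the following sketch uses the google-cloud-compute client library with a hypothetical project ID to list the zones that make up the London region:

```python
from google.cloud import compute_v1

project_id = "my-project"  # hypothetical project ID

zones_client = compute_v1.ZonesClient()

# Each zone name embeds its region, e.g. "europe-west2-a" belongs to "europe-west2".
london_zones = [
    zone.name
    for zone in zones_client.list(project=project_id)
    if zone.name.startswith("europe-west2-")
]
print(london_zones)  # expected: ['europe-west2-a', 'europe-west2-b', 'europe-west2-c']
```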
Multi-region deployment is also possible in Google Cloud. Cloud Spanner, for example, lets you replicate a database not only across multiple zones but also across multiple regions, as defined in the instance configuration. The replicas let you read data with low latency from multiple locations close to or within the regions in that configuration.
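Here is a hedged sketch of what that looks like in code, assuming the google-cloud-spanner library; the project and instance names are hypothetical, and "nam3" is one of Spanner's multi-region instance configurations in North America.

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # hypothetical project ID

# "nam3" is a multi-region instance configuration: replicas are spread across
# several North American regions rather than a single one.
instance = client.instance(
    "orders-instance",  # hypothetical instance ID
    configuration_name="projects/my-project/instanceConfigs/nam3",
    display_name="Orders (multi-region)",
    node_count=1,
)
operation = instance.create()
operation.result(timeout=300)  # block until the instance is ready
```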
Google Infrastructure Security
In order to secure data and avoid breaches, security is implemented across Google Cloud, from the physical location of the data centers to the hardware, software, and underlying infrastructure.
The hardware layer has three security features:
Server boards and networking equipment are made and designed by Google. The security chips deployed on servers and peripherals are also custom-made.
A secure boot stack ensures that servers boot the correct software.
Premises security: Google designs and builds its own data centres to incorporate multiple layers of physical security protections as well as limit access to authorized personnel.
The service deployment layer's main feature is encryption of inter-service communication. Google's infrastructure automatically encrypts RPC (Remote Procedure Call) traffic between data centres, which is how Google services communicate with one another.
The user identity layer, implemented by Google's central identity service, is what users see as the login or sign-up page. The service may ask for additional information depending on risk factors, such as whether the user has logged in from the same device in the past, and it also supports second factors such as the U2F open standard for stronger authentication at sign-in.
In the storage services layer, encryption at rest is the key security feature. Google's applications access physical storage indirectly, through storage services, and encryption is applied at those services using centrally managed keys. Hardware encryption is also enabled in SSDs and hard drives.
The Internet communication layer consists of two key security features:
Google services are registered with an infrastructure service called the Google Front End (GFE) to ensure TLS connections are terminated using a public-private key pair and an X.509 certificate from a Certificate Authority (CA). The GFE also protects against denial-of-service (DoS) attacks.
Denial-of-service (DoS) protection: thanks to the sheer scale of its infrastructure, Google can absorb many DoS attacks, and multi-tier, multi-layer DoS protections further reduce the risk of any DoS impact on services running behind a GFE.
The operational security layer provides four key features:
Intrusion detection: rules and machine intelligence give warnings of possible incidents, and Red Team exercises are conducted to improve the effectiveness of detection and response mechanisms.
Reducing insider risk: the activities of personnel with administrative access are limited and their access to the infrastructure is monitored.
Phishing protection: employees use U2F-compatible security keys to guard against phishing attacks.
Strict software development practices: two-party code review is required to prevent the introduction of certain classes of security bugs.
Google Cloud aims to avoid vendor lock-in by giving customers the choice of whether to continue using a service. Google publishes key elements of its technology under open-source licenses to create ecosystems that provide more options. For instance, Google Kubernetes Engine lets clients mix microservices running across different clouds.
Functional Structure of Google Cloud
Google Cloud's resource hierarchy has four levels, from top to bottom:
- Organization node: contains all the folders, projects, and resources of the organization.
- Folders: contain projects (and can contain other folders).
- Projects: contain resources.
- Resources: represent virtual machines, Cloud Storage buckets, BigQuery tables, or anything else in Google Cloud.
The resource hierarchy matters because it determines how policies are applied in Google Cloud. Policies are inherited downward: if a policy is applied to a folder, all projects within that folder also get that policy.
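To see this in practice, here is a hedged sketch using the google-cloud-resource-manager library with hypothetical folder and project IDs. Note that get_iam_policy returns only the bindings set directly on each resource; the effective access on a project is the union of its own bindings and those inherited from the folder and organization above it.

```python
from google.cloud import resourcemanager_v3

folder_id = "123456789012"  # hypothetical folder ID
project_id = "my-project"   # hypothetical project ID

folders = resourcemanager_v3.FoldersClient()
projects = resourcemanager_v3.ProjectsClient()

# Bindings set on the folder are inherited by every project inside it.
folder_policy = folders.get_iam_policy(resource=f"folders/{folder_id}")
project_policy = projects.get_iam_policy(resource=f"projects/{project_id}")

for binding in folder_policy.bindings:
    print("inherited by projects in the folder:", binding.role, list(binding.members))
for binding in project_policy.bindings:
    print("set directly on the project:", binding.role, list(binding.members))
```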
Projects
Projects are the basis for enabling and using Google Cloud services: managing APIs, enabling billing, adding and removing collaborators, and enabling additional services all happen at the project level.
Projects:
- are separate entities under the organization node.
- hold resources; each resource belongs to exactly one project.
- can have different owners and users.
- are billed and managed separately.
Google Cloud projects have three key attributes:
Project ID: globally unique, assigned by Google Cloud, and cannot be changed after creation.
Project name: chosen by the user, does not have to be unique, and can be changed at any time.
Project number: globally unique, assigned by Google, and cannot be changed.
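These three attributes can be read back from the Resource Manager API. A small sketch, assuming the google-cloud-resource-manager library and a hypothetical project ID:

```python
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()

# Hypothetical project ID used for illustration.
project = client.get_project(name="projects/my-sample-project")

print(project.project_id)    # project ID: globally unique, fixed after creation
print(project.display_name)  # project name: chosen by you, can change at any time
print(project.name)          # "projects/<project number>": assigned by Google, immutable
```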
Folders
Folders allow you to assign policies and resources at the level of your choice. Resources inherit policies and permissions assigned to their folder.
If a team manages two projects, you can apply a policy to a common folder so that both projects get the same permissions, without having to duplicate the policy on each project.
An organization may contain many departments, each with its own Google Cloud resources. Folders let the organization group resources by department and delegate administrative rights so that teams can work independently.
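As a hedged sketch of that grouping, this lists the folders directly under a hypothetical organization node, again using the google-cloud-resource-manager library:

```python
from google.cloud import resourcemanager_v3

folders_client = resourcemanager_v3.FoldersClient()

# Hypothetical organization ID; each department could be one folder under it.
for folder in folders_client.list_folders(parent="organizations/123456789012"):
    print(folder.display_name, folder.name)  # e.g. "Engineering  folders/9876..."
```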
Organization Node
To use folders, you need an organization node, which sits at the top of the hierarchy.
The organization node comes with special roles, such as the organization policy administrator, who has broad control over the organization's policies, and the project creator, a role that controls who can create projects and therefore who can spend money.
Organization nodes can be created in two ways: if a company is already a Google Workspace customer, its Google Cloud projects automatically belong to an organization node; if not, it can use Cloud Identity to create one.
IAM (Identity and Access Management) helps administrators control who has access to folders, projects, and resources. Administrators apply policies that define who can perform which actions on which resources.
The "who" in an IAM policy can be a Google account or group, a service account, or a Cloud identity domain. The "who" is also referred to as a principal with its own email address as an identifier.
"Perform what" is defined by a role. The role is an accumulation of permissions. If you grant a role to a principal, you grant all permissions that the particular role entails.
You have the ability to define deny rules that prevent some principals from using certain permissions, regardless of the role they have. IAM always checks the relevant deny policies before the allow policies. Both deny and allow policies are inherited through the resource hierarchy.
There are three types of roles in IAM: basic, predefined, and custom.
Basic roles: when applied, they affect all resources in that project. They include owners, editors, viewers, and billing admins. Viewers can access resources but cannot make changes; editors and owners can access and change a resource, but owners can do more, like manage the associated roles and permissions as well as set up billing; billing admins set up billing but cannot change resources.
Predefined roles: individual Google Cloud services offer sets of predefined roles that bundle the permissions needed for typical tasks on that service, and they define where those roles can be applied.
Custom roles let you define a more precise set of permissions of your own. Custom roles can only be applied at the project or organization level, not at the folder level.
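Putting the "who / can do what / on which resource" model together, here is a hedged sketch that grants the basic roles/viewer role to a principal on a project, using the google-cloud-resource-manager library and hypothetical identifiers. Deny policies are managed through a separate IAM API and are not shown here.

```python
from google.cloud import resourcemanager_v3

projects = resourcemanager_v3.ProjectsClient()
resource = "projects/my-project"  # hypothetical project

# Read the current allow policy, add a binding, and write the policy back.
policy = projects.get_iam_policy(resource=resource)
policy.bindings.add(
    role="roles/viewer",                 # "can do what": a basic, read-only role
    members=["user:alice@example.com"],  # "who": the principal being granted access
)
projects.set_iam_policy(request={"resource": resource, "policy": policy})
```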
Service Accounts
Service accounts are used to give permissions to resources such as Compute Engine virtual machines rather than to people. A service account is identified by an email address and authenticates with cryptographic keys instead of passwords. Service accounts also need to be managed: besides being an identity, a service account is a resource that can have IAM policies attached to it.
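A hedged sketch of acting as a service account from code: the key file, project, and bucket names are hypothetical, and on Google Cloud itself (for example a Compute Engine VM with an attached service account) you would normally rely on the default credentials rather than a downloaded key.

```python
from google.cloud import storage
from google.oauth2 import service_account

# The service account is identified by its email and authenticates with a key,
# not a password; "sa-key.json" is a hypothetical downloaded JSON key file.
credentials = service_account.Credentials.from_service_account_file(
    "sa-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

client = storage.Client(project="my-project", credentials=credentials)
for blob in client.list_blobs("my-bucket"):  # access depends on the account's IAM roles
    print(blob.name)
```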
Cloud Identity
Cloud Identity is a tool that allows organizations to define policies and manage users and groups through the Google Admin console.
Google Cloud Access
There are four ways to access Google Cloud.
The Google Cloud console is Google Cloud's web-based GUI (graphical user interface) that helps you deploy, scale, and diagnose production issues.
The Cloud SDK and Cloud Shell: the Cloud SDK is a set of command-line tools for managing resources on Google Cloud, and Cloud Shell provides browser-based command-line access to those tools and your cloud resources.
The APIs: Google Cloud services expose APIs, and the Google APIs Explorer shows which APIs are available and lets you try them out; client libraries make it easy to call them from code (a small example follows this list).
The Google Cloud Console Mobile App: it provides services like starting and stopping Cloud SQL instances, viewing metrics and alerts, and reading logs from instances.
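For example, the same bucket listing you could click through in the console can be driven through the APIs with a client library. This hedged sketch assumes credentials are already available in the environment (for instance via the Cloud SDK's application-default login):

```python
from google.cloud import storage

# Project and credentials are picked up from the environment, so this works the
# same way from Cloud Shell, a local machine with the Cloud SDK, or a VM.
client = storage.Client()
for bucket in client.list_buckets():
    print(bucket.name)
```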
Through this article, you now have a comprehensive understanding of the evolution of cloud computing, the structure and organization of Google Cloud, its security features, and the ways to access Google Cloud.