Basic Cloud Computing Concepts To Master

In this post, we shall discuss basic cloud computing concepts. Before diving into them, here is a brief explanation of cloud computing.

Cloud computing is a technology that provides on-demand access to computing resources like storage, servers, databases, software, and networking over the internet. Instead of managing physical hardware or software on-premises, users can rent these resources from cloud service providers as needed. This approach offers significant benefits like scalability, flexibility, cost savings, and easier collaboration. Organizations can choose different service models, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS), to tailor solutions according to their needs.

So, what are the key concepts in cloud computing?

1. Virtualization

Alright, imagine you have a big box of LEGO bricks, and you want to share it with your friends so that everyone can build something. Instead of giving each person the entire box, you divide it into smaller boxes. Each friend gets their own smaller box, with enough LEGO bricks to build something cool, without bumping into each other or sharing bricks directly.

Figure 1: A big LEGO box (a powerful computer) divided into smaller LEGO boxes (virtual computers, or virtual machines)

In cloud computing, virtualization is like dividing up that big box. The "big box" is a powerful computer, and the "smaller boxes" are virtual computers called virtual machines. Each virtual machine acts like it's a real computer, but it's actually a slice of the big computer. People can use these virtual machines to store files, run programs, or even play games, just like with a regular computer. This way, many people can share the big computer without getting in each other’s way!

Here, computing resources refer to the various components that a virtual machine or application needs to function. These include:

  • CPU (Central Processing Unit): The processor that carries out instructions to perform tasks.

  • Memory/RAM (Random Access Memory): The temporary storage space for active processes and data.

  • Storage: Persistent storage for operating systems, applications, and data files, which could be hard drives or SSDs.

  • Networking: Connectivity and bandwidth required for communication between different machines, services, and users.

  • GPU (Graphics Processing Unit): Specialized processors for graphics and parallel processing tasks, especially important for machine learning, gaming, and scientific computations.

  • I/O Devices (Input/Output Devices): Interfaces for peripherals like keyboards, mice, printers, and more.

Virtualization abstracts and allocates these resources to virtual machines or containers based on requirements, enabling efficient sharing and management of the underlying physical hardware.
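
To make this concrete, here is a minimal sketch of requesting a virtual machine from a cloud provider using AWS's boto3 SDK in Python. This is an illustration, not a production script: the AMI ID is a placeholder, and the instance type is just an example of one "slice" size.

```python
# Minimal sketch (not production code): requesting a virtual machine
# from AWS with the boto3 SDK. The AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder operating-system image
    InstanceType="t3.micro",          # a small slice: 2 vCPUs, 1 GiB RAM
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])  # ID of your new "smaller LEGO box"
```

Behind this call, the provider's hypervisor carves the requested vCPUs, memory, and networking out of a physical host, which is exactly the "smaller LEGO box" idea above.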

2. Scalability

Scalability refers to the ability to increase or decrease computing resources to match demand dynamically. This concept ensures that applications can handle varying workloads efficiently. There are two main types (a short sketch follows the list):

  • Vertical Scaling (Scaling Up): Enhancing a single server's capacity by adding more CPU, memory, or storage. This is like upgrading your computer to make it faster.

  • Horizontal Scaling (Scaling Out): Adding more servers to share the workload. This is akin to hiring more people to complete a task quicker. Each server works together to manage increasing demands.
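
As a hedged illustration of horizontal scaling, the snippet below uses AWS's boto3 SDK to change the desired capacity of an Auto Scaling group. The group name "web-asg" is hypothetical and assumed to already exist in front of a fleet of web servers.

```python
# Minimal sketch: horizontal scaling (scaling out) via AWS Auto Scaling.
# "web-asg" is a hypothetical Auto Scaling group assumed to already exist.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Raise the number of servers sharing the workload to 5.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=5,
    HonorCooldown=True,  # respect the cooldown period between scaling actions
)
```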

3. Agility

Agility is the ability of an organization to rapidly develop, deploy, and scale applications and services in response to changing business needs or market conditions. It is enabled by on-demand provisioning, self-service capabilities, and a broad array of services that eliminate the delays often associated with traditional IT infrastructure. Agility is determined by the following factors (a provisioning sketch follows the list):

  • Speed: Quickly setting up, adjusting, or removing resources as needed, which reduces the time required to bring new products or features to market.

  • Adaptability: Responding efficiently to changes in user demand or technical requirements, such as scaling up resources during peak periods or adopting new technologies without significant delays.

  • Flexibility: Offering diverse tools, architectures, and platforms that can be mixed and matched to create customized solutions, reducing dependency on specific technologies.

  • Continuous Improvement: Leveraging automation, monitoring, and data analysis to iteratively improve applications and services.
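
To illustrate the speed and self-service aspects, here is a minimal boto3 sketch: a storage bucket that would once have meant ordering hardware is provisioned, used, and torn down in seconds. The bucket name is a hypothetical placeholder (bucket names must be globally unique).

```python
# Minimal sketch: self-service, on-demand provisioning with boto3.
# The bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="my-team-prototype-bucket")   # seconds, not weeks
# ...build, test, and iterate on the prototype...
s3.delete_bucket(Bucket="my-team-prototype-bucket")   # tear it down just as fast
```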

4. High Availability

High availability is the design and implementation of systems that ensure continuous and reliable operation, minimizing downtime. It's essential for mission-critical applications and services that need to be accessible at all times. Key principles of high availability include:

  • Redundancy: Multiple copies of essential components like servers, storage, and network devices are maintained. If one component fails, another takes over, preventing disruptions.

  • Failover: An automatic switch to a backup system in case of hardware, software, or network failure ensures that services remain operational.

  • Load Balancing: Distributing traffic across multiple servers helps prevent any single server from becoming overwhelmed, ensuring steady performance.

  • Geographic Distribution: Services are replicated in multiple data centers across different geographic locations. If one data center goes offline due to a local issue, another can continue providing services.

  • Monitoring and Alerting: Continuous monitoring detects problems early, allowing automated systems or engineers to address issues before they impact availability.

These principles create a resilient infrastructure that maximizes uptime and minimizes disruptions, ensuring services are consistently available.
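
The sketch below illustrates two of these principles, load balancing and health-check monitoring, in plain Python. The server addresses are hypothetical; in practice you would use a managed load balancer, but the logic is the same: rotate across redundant servers and skip any that fail a health check.

```python
# Minimal sketch: round-robin load balancing with a basic health check.
# The server addresses are hypothetical stand-ins for redundant copies
# of the same service.
import itertools
import socket

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
_rotation = itertools.cycle(SERVERS)  # round-robin order, remembered across calls

def is_healthy(host: str, port: int = 80, timeout: float = 0.5) -> bool:
    """Health check: can we open a TCP connection to the server?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server() -> str:
    """Return the next healthy server, skipping failed ones (failover)."""
    for _ in range(len(SERVERS)):
        host = next(_rotation)
        if is_healthy(host):
            return host
    raise RuntimeError("no healthy servers available")
```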

5. Fault Tolerance

Fault tolerance is the ability of a system, particularly in computing and cloud environments, to continue functioning correctly even when some of its components fail. It ensures that no single point of failure causes significant disruption or complete system breakdown. Key aspects include:

  • Redundancy: Critical components, such as servers, storage devices, and network connections, are duplicated. If one fails, a backup takes over.

  • Replication: Data is stored across multiple locations, often geographically distributed, so it's accessible even if a particular data center has issues.

  • Failover Mechanisms: Automated systems detect component failures and shift tasks or traffic to backup components with minimal disruption.

  • Diversity: Using different types or vendors for critical components reduces the risk of simultaneous failures.

  • Monitoring: Continuous health checks and alerts help identify and mitigate issues before they escalate.

  • Graceful Degradation: The system continues to operate at reduced capacity if a failure occurs, prioritizing critical services while less crucial ones are temporarily limited.

Fault tolerance is crucial for maintaining service continuity in mission-critical applications where downtime or data loss can have severe consequences.
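
Here is a minimal Python sketch of a failover mechanism: a read is attempted against the primary copy first, then against replicas, so one failed component does not take the whole operation down. The replica URLs are hypothetical placeholders.

```python
# Minimal sketch: failover across redundant replicas. The URLs are
# hypothetical; any single failure just moves us to the next copy.
import urllib.request

REPLICAS = [
    "https://primary.example.com/data",
    "https://replica-1.example.com/data",
    "https://replica-2.example.com/data",
]

def fetch_with_failover(urls, timeout=2.0):
    """Try each replica in turn; raise only if every copy fails."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # timeout, refused connection, DNS failure...
            last_error = err    # record it and fail over to the next replica
    raise RuntimeError("all replicas failed") from last_error
```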

6. Global Reach

Global reach refers to the ability of a system or service, particularly in cloud computing, to be accessed and utilized from anywhere in the world. This is enabled by the worldwide network of data centers and infrastructure that cloud providers maintain, allowing organizations and users to leverage the following benefits:

  • Low Latency: By deploying services in multiple regions, users can connect to the nearest data center, reducing delays and providing a faster experience.

  • Content Localization: Data and services can be tailored to regional preferences and regulations, improving compliance and relevance.

  • Disaster Recovery: With data replicated across different regions, services can continue to function even if a particular location encounters issues.

  • Market Expansion: Organizations can reach new markets more easily by providing services to users around the globe without requiring a physical presence in every country.

  • Workforce Flexibility: Teams can collaborate from different time zones and locations, accessing centralized applications and data.

  • Compliance: Adapting to regional data storage and handling regulations is easier with geographically diverse data centers.

Overall, global reach in cloud computing enables businesses and individuals to scale their operations and improve accessibility worldwide.
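
As a rough illustration of the low-latency benefit, the sketch below times a TCP handshake to a few hypothetical regional endpoints and picks the fastest. Real cloud providers offer latency-based DNS routing that does this automatically; this is only a sketch of the idea.

```python
# Minimal sketch: choosing the nearest region by measured latency.
# The regional endpoints are hypothetical placeholders.
import socket
import time

REGION_ENDPOINTS = {
    "us-east": "us-east.example.com",
    "eu-west": "eu-west.example.com",
    "ap-south": "ap-south.example.com",
}

def tcp_latency(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Time a TCP handshake to the endpoint; unreachable hosts count as infinite."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")

def nearest_region() -> str:
    """Pick the region whose endpoint answered the fastest."""
    return min(REGION_ENDPOINTS, key=lambda r: tcp_latency(REGION_ENDPOINTS[r]))
```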

7. Elasticity vs. Scalability

At the start of this post, we discussed scalability; now it is time to expand on elasticity.

Elasticity is the ability to automatically and quickly adjust resources to meet real-time, fluctuating demand. In operation, it automatically increases or decreases resource capacity based on current demand, typically using monitoring tools and automation to scale resources up or down almost instantly.
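
Conceptually, elasticity is a monitoring-driven control loop. The sketch below is a toy version in Python: get_cpu_percent() is a hypothetical stand-in for a real metrics service, and the thresholds are illustrative.

```python
# Minimal sketch of an elastic control loop. get_cpu_percent() is a
# hypothetical stand-in for a real metrics service (CloudWatch, Prometheus, ...),
# and the thresholds are illustrative.
import random

MIN_INSTANCES, MAX_INSTANCES = 1, 10
SCALE_UP_AT, SCALE_DOWN_AT = 80.0, 20.0  # CPU utilization thresholds (%)

def get_cpu_percent() -> float:
    """Stand-in metric: pretend to read average CPU load across the fleet."""
    return random.uniform(0.0, 100.0)

def autoscale(instances: int) -> int:
    """One tick of the loop: add capacity under load, release it when idle."""
    cpu = get_cpu_percent()
    if cpu > SCALE_UP_AT and instances < MAX_INSTANCES:
        return instances + 1   # demand spiked: scale out
    if cpu < SCALE_DOWN_AT and instances > MIN_INSTANCES:
        return instances - 1   # demand dropped: scale in and save cost
    return instances

instances = 2
for _ in range(5):  # a real autoscaler runs this loop continuously
    instances = autoscale(instances)
    print(f"fleet size: {instances}")
```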

Now, here are the key differences in a table:

| Aspect | Scalability | Elasticity |
| --- | --- | --- |
| Definition | Ability to expand or reduce resource capacity to meet long-term, sustained changes in demand | Ability to dynamically and automatically adjust resource capacity to meet real-time, fluctuating demand |
| Types | 1. Vertical (Scaling Up/Down). 2. Horizontal (Scaling Out/In). | N/A (involves immediate and dynamic adjustment) |
| Resource Adjustments | Often requires planned adjustments or manual intervention | Completely automated, often relying on monitoring tools |
| Response Time | Takes time for planning and execution | Instantaneous or near-instant adjustments |
| Ideal Use Cases | Long-term growth in demand, predictable traffic | Unpredictable, fluctuating, or seasonal demand (e.g., e-commerce) |
| Goal | Ensure long-term capacity availability | Optimize resource usage and cost in real time |
| Foundation | Provides the capacity foundation | Uses the scalable foundation to respond to changes |

To learn more, follow me on X.
