DEV Community

Labby for LabEx


Mastering Ninja Resource Management

Introduction


In the ancient land of the rising sun, nestled among the majestic peaks of Mount Fuji, a hidden village of ninjas thrived. Here, the art of stealth, precision, and resourcefulness was honed to perfection. Among the elite ranks of this village stood Yuki, a renowned master of ninja weaponry.

Yuki's forge was a sight to behold, a testament to her unwavering dedication and ingenuity. From the finest steel, she crafted blades that could slice through the air with effortless grace, shuriken that could find their mark with pinpoint accuracy, and kunai that could pierce even the toughest armor.

However, Yuki's true mastery lay not only in her craftsmanship but also in her ability to manage the resources of the village. As the ninja clan grew, so did the demand for weapons and gear, and Yuki found herself tasked with ensuring that every ninja had access to the tools they needed, when they needed them.

It was in this pursuit that Yuki discovered the power of the Hadoop Resource Manager, a powerful tool that would allow her to efficiently allocate and manage the village's resources, ensuring that every ninja's mission was successful.

Understanding the Hadoop Resource Manager

In this step, we will delve into the basics of the Hadoop Resource Manager and its role in the Hadoop ecosystem.

First, switch to the hadoop user:

su - hadoop

The Hadoop Resource Manager is a crucial component of the YARN (Yet Another Resource Negotiator) architecture in Hadoop. It is responsible for managing the cluster's computational resources and scheduling applications across the available nodes.

To begin, let's look at how the Resource Manager fits into the YARN architecture:

+---------------------------+
|      Resource Manager     |
|  +---------------------+  |
|  |      Scheduler      |  |
|  +---------------------+  |
|  | ApplicationsManager |  |
|  +---------------------+  |
+---------------------------+
             | assigns containers to
             v
+---------------------------+
|  NodeManager (each node)  |
+---------------------------+

The architecture involves three main components:

  1. Scheduler: Part of the Resource Manager; it allocates resources to running applications according to the configured scheduling policy (FIFO, Capacity, or Fair). It performs scheduling only and does not monitor or restart failed tasks.
  2. ApplicationsManager: Also part of the Resource Manager; it accepts job submissions, negotiates the first container for executing the application's ApplicationMaster, and restarts the ApplicationMaster container on failure.
  3. NodeManager: A separate daemon that runs on each node in the cluster; it launches and monitors the containers assigned by the Scheduler.
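On a live cluster you can see the NodeManager side of this picture with `yarn node -list`. The sketch below counts running NodeManagers from a captured sample of that command's output; the hostnames are illustrative, not from this lab:

```shell
# Illustrative sample resembling `yarn node -list` output on a
# two-node cluster (Node-Id, State, Node-Http-Address, Containers):
SAMPLE_NODES="node1:45454  RUNNING  node1:8042  3
node2:45454  RUNNING  node2:8042  1"

# Count NodeManagers currently reporting the RUNNING state
echo "$SAMPLE_NODES" | grep -c RUNNING

# On a live cluster, run:  yarn node -list -all
```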

To better understand the Resource Manager's functionality, let's explore a simple example.

Submit a sample MapReduce job to the cluster:

yarn jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar pi 16 1000000

Check the status of the job:

yarn application -list 

The output should look something like this:

2024-03-23 22:48:44,206 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
Total number of applications (application-types: [], states: [SUBMITTED, ACCEPTED, RUNNING] and tags: []):1
                Application-Id      Application-Name        Application-Type          User       Queue               State         Final-State         Progress                        Tracking-URL
application_1711205220447_0001       QuasiMonteCarlo               MAPREDUCE        hadoop     default             RUNNING           UNDEFINED  

In this example, we submitted a MapReduce job (a Monte Carlo estimate of Pi) to the cluster using the yarn command. The Resource Manager received the job request and assigned the necessary resources (containers) to run it, and yarn application -list then showed the job's current state and progress.
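The application ID in that listing is the handle later commands key on. A small sketch that extracts it from the sample row above (on a live cluster you would pipe real `yarn application -list` output instead):

```shell
# Sample row from the application listing above
SAMPLE="application_1711205220447_0001  QuasiMonteCarlo  MAPREDUCE  hadoop  default  RUNNING"

# The application ID is the first whitespace-separated field
APP_ID=$(echo "$SAMPLE" | awk '{print $1}')
echo "$APP_ID"

# With log aggregation enabled, fetch the application's logs on a
# live cluster:
#   yarn logs -applicationId "$APP_ID"
```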

Configuring the Resource Manager

In this step, we will explore how to configure the Resource Manager to meet the specific needs of our ninja village.

The Resource Manager's behavior can be customized through various configuration properties. These properties are typically set in the yarn-site.xml file located in the Hadoop configuration directory (/home/hadoop/hadoop/etc/hadoop).

Let's open the YARN configuration file and add some additional properties inside its <configuration> element:

vim /home/hadoop/hadoop/etc/hadoop/yarn-site.xml

Add the configurations:

  <!-- Specify the scheduling policy -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>

  <!-- Configure the maximum amount of memory (in MB) per container -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>

  <!-- Configure the minimum and maximum virtual cores per container -->
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
  </property>


In this configuration file, we have set the following properties:

  • yarn.resourcemanager.scheduler.class: Specifies the scheduling policy to use. In this case, we're using the Fair Scheduler, which ensures that resources are allocated fairly among applications.
  • yarn.scheduler.maximum-allocation-mb: Sets the maximum amount of memory (in megabytes) that can be allocated to a single container.
  • yarn.scheduler.minimum-allocation-vcores and yarn.scheduler.maximum-allocation-vcores: Define the minimum and maximum number of virtual cores that can be allocated to a single container, respectively.
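These allocation bounds directly cap how many containers one node can host. A back-of-the-envelope sketch; the node memory and minimum memory allocation below are assumptions for illustration, not values set in this lab:

```shell
NODE_MEMORY_MB=32768   # hypothetical NodeManager memory capacity
MAX_ALLOC_MB=8192      # yarn.scheduler.maximum-allocation-mb (set above)
MIN_ALLOC_MB=1024      # a common default; not set in this file

# Fewest containers per node (every container at the maximum size):
echo $(( NODE_MEMORY_MB / MAX_ALLOC_MB ))   # 4
# Most containers per node (every container at the minimum size):
echo $(( NODE_MEMORY_MB / MIN_ALLOC_MB ))   # 32
```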

To apply these configuration changes, we need to restart the Hadoop services.
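One way to do that is with the stop/start scripts shipped in Hadoop's sbin directory, then confirming the property actually landed in the file. The grep below runs against an inline copy of the edited snippet so the sketch works outside the cluster too; on the lab machine you would grep the real yarn-site.xml:

```shell
# Restart YARN so the Resource Manager reloads yarn-site.xml:
#   stop-yarn.sh && start-yarn.sh

# Sanity check that the Fair Scheduler property is present
# (inline copy here; on the real machine, point grep at
# /home/hadoop/hadoop/etc/hadoop/yarn-site.xml instead):
SNIPPET='<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>'
echo "$SNIPPET" | grep -c 'FairScheduler'
```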

Monitoring and Managing Applications

In this step, we will learn how to monitor and manage applications running on the Hadoop cluster using the Resource Manager.

The Resource Manager provides a web user interface (UI) that allows you to monitor and manage the cluster's resources and running applications. To access the Resource Manager UI, open a web browser and navigate to http://<resource-manager-hostname>:8088.

In the Resource Manager UI, you will see various sections that provide information about the cluster, nodes, and applications. Here are some key features:

  1. Cluster Metrics: This section displays the overall cluster metrics, such as the total available resources, the number of running applications, and the resource utilization.
  2. Node Managers: This section lists all the active NodeManagers in the cluster, along with their status, available resources, and running containers.
  3. Running Applications: This section shows the currently running applications, their progress, resource usage, and other details.
  4. Application History: This section provides a historical view of completed applications, including their logs and metrics.
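Everything the UI shows is also reachable from the command line, for example `yarn application -status <application-id>`. The sketch below pulls the state out of a captured sample line of that command's output:

```shell
# One illustrative line of `yarn application -status` output
STATUS_LINE="State : RUNNING"

# Extract the value after the ' : ' separator
echo "$STATUS_LINE" | awk -F' : ' '{print $2}'

# On a live cluster:
#   yarn application -status application_1711205220447_0001
```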

To demonstrate how to manage applications using the Resource Manager UI, let's submit a new application to the cluster.

# Submit a WordCount job to the cluster
yarn jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar wordcount /home/hadoop/input /home/hadoop/output

This command submits a WordCount MapReduce job to the cluster. Before running it, make sure the input directory exists in HDFS and contains some text files:

hdfs dfs -mkdir -p /home/hadoop/input
hdfs dfs -put /home/hadoop/hello.txt /home/hadoop/input

After submitting the job, you can monitor its progress and manage it from the Resource Manager UI. You can view the job's logs, kill the job if necessary, or check the output directory once the job completes.
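If you want to double-check the cluster's answer, WordCount's logic is simple enough to reproduce with standard shell tools; a minimal local equivalent over the same four input lines used in this lab:

```shell
# Local equivalent of WordCount over the lab's sample input:
# split words onto their own lines, sort, count duplicates
printf 'hello labex\nhello hadoop\nhello spark\nhello flink\n' \
  | tr ' ' '\n' | sort | uniq -c | awk '{print $2, $1}'
```

The result should match the MapReduce output shown below (hello appears 4 times; the other words once each).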

View the input file content:

hadoop:~/ $ hadoop fs -cat /home/hadoop/input/*                      [22:56:37]
hello labex
hello hadoop
hello spark
hello flink

View the output file content:

hadoop:~/ $ hadoop fs -cat /home/hadoop/output/*                     [22:57:37]
flink   1
hadoop  1
hello   4
labex   1
spark   1

Summary

In this lab, we explored the Hadoop Resource Manager, a powerful tool that enables efficient resource allocation and management in a Hadoop cluster. We delved into the architecture of the Resource Manager, learned how to configure it to meet specific needs, and discovered various techniques for monitoring and managing applications running on the cluster.

Through the journey of Yuki, the master ninja weaponsmith, we witnessed the transformative power of the Resource Manager in ensuring that every ninja had access to the tools they needed for successful missions. Just as Yuki mastered the art of resource management, we too can harness the capabilities of the Hadoop Resource Manager to optimize our big data processing workflows.

This lab not only provided hands-on experience with the Resource Manager but also instilled a deeper understanding of the Hadoop ecosystem and its versatile components. By embracing the principles of resource management and efficient scheduling, we can unlock new realms of data processing prowess and tackle even the most formidable big data challenges.


Want to learn more?

Join our Discord or tweet us @WeAreLabEx ! πŸ˜„
