Set Up a Highly Available Hadoop Cluster on CentOS, RHEL 7, 8

This tutorial will help you set up a highly available Hadoop cluster on CentOS/RHEL 7 and 8.

If you are deploying your Hadoop cluster in a virtual environment, you only need to prepare one virtual machine (master-node) following Step1 to Step7. When you are done with these steps, clone your VM as worker-node1 and worker-node2.

Follow the rest of the steps (Step8 onwards) to complete your Hadoop cluster setup.

Prerequisites

To follow along with this tutorial, you will need three (physical or virtual) machines installed with CentOS 7/8 or RHEL 7/8.

We will use the below information throughout this tutorial:

Hostname        IP Address      Purpose
master-node     192.168.10.1    Hadoop master node
worker-node1    192.168.10.2    Hadoop worker node 1
worker-node2    192.168.10.3    Hadoop worker node 2

You must set the appropriate hostname on each node. Run each of the below commands on its respective node:

sudo hostnamectl set-hostname master-node
sudo hostnamectl set-hostname worker-node1
sudo hostnamectl set-hostname worker-node2


Also, set the correct timezone on each node using the below command (replace Asia/Karachi with your own timezone):

sudo timedatectl set-timezone Asia/Karachi
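
If you are not sure of the exact timezone name, you can list the available zones and filter them (the grep pattern below is just an example):
timedatectl list-timezones | grep Asia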

Step1 - Update Hosts File

You need to update the /etc/hosts file on each node like below:
sudo nano /etc/hosts
Add your nodes' IP addresses and hostnames like below:
192.168.10.1    master-node
192.168.10.2    worker-node1
192.168.10.3    worker-node2
Save and close the editor when you are finished.
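
Optionally, verify that the hostnames now resolve from each node (a quick sanity check; run from any node):
ping -c 2 worker-node1
ping -c 2 worker-node2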

Step2 - Add Hadoop User

We will create a user (hadoop) with sudo privileges on each node:
sudo adduser -m -G wheel hadoop
sudo passwd hadoop

This will prompt you to enter and confirm a new password.
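
You can confirm that the account exists and belongs to the wheel group with:
id hadoop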

Step3 - Set Up SSH Key-Pair Authentication

The master node in a Hadoop cluster uses SSH connections with key-pair authentication to actively manage the other nodes. For this, we need to set up SSH key-pair authentication on each node.

Login to your master-node as the hadoop user, and generate an SSH key like below:
ssh-keygen
This will prompt you for a passphrase; make sure you leave the fields blank.

Repeat the same step on each worker node as the hadoop user. When you are finished generating the SSH key-pair on all nodes, move on to the next step.

Now, on master-node, append the id_rsa.pub contents to the authorized_keys file and then transfer authorized_keys to worker-node1 like below:
ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
scp ~/.ssh/authorized_keys worker-node1:~/.ssh/

Next, log in to worker-node1 as the hadoop user, append its id_rsa.pub contents to authorized_keys, and then transfer the file to worker-node2 like below:
ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
scp ~/.ssh/authorized_keys worker-node2:~/.ssh/

Next, log in to worker-node2, append its id_rsa.pub contents to authorized_keys, and then transfer the now-complete authorized_keys file back to master-node and worker-node1 like below:
ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
scp ~/.ssh/authorized_keys master-node:~/.ssh/
scp ~/.ssh/authorized_keys worker-node1:~/.ssh/

If everything is set up correctly as described, you will be able to connect from any node to any other node via SSH with key-pair authentication, without providing a password.
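
As a quick sanity check, each of the below commands should print the remote hostname without asking for a password (run them as the hadoop user from any node):
ssh master-node hostname
ssh worker-node1 hostname
ssh worker-node2 hostname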


Step4 - Install Java

Hadoop ships with code and scripts that need Java to run. You can install the latest version of OpenJDK on each node with the below command (note: Hadoop 3.3 is only validated against Java 8 and 11, so you may prefer to install java-11-openjdk instead):
sudo dnf install -y java-latest-openjdk java-latest-openjdk-devel

For CentOS/RHEL 7, where the java-latest-openjdk package may not be available, install java-11-openjdk instead:
sudo yum install -y java-11-openjdk java-11-openjdk-devel
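
After installation, confirm that the Java runtime is available on each node:
java -version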

Step5 - Set Java Home Environment

Hadoop comes with code and configuration that reference the JAVA_HOME environment variable. This variable should point to the Java installation directory, so that Hadoop can locate the Java runtime.

You can set the JAVA_HOME variable on each node like below (the nested commands resolve the java symlink and strip the trailing /bin/java):
echo "JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))" | sudo tee -a /etc/environment

Reload your system’s environment variables with below command:
source /etc/environment

Verify that variable was set correctly:
echo $JAVA_HOME

This should print the path to the Java installation directory. Make sure you repeat the same step on each worker node as well.

You also need to add the Hadoop binaries to the system PATH so that the shell knows where to find the hadoop commands.

Edit /home/hadoop/.bashrc like below:
vi /home/hadoop/.bashrc

Add the following lines at the end of the file:
export HADOOP_HOME=$HOME/hadoop
export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin

Save and close.

Next, edit /home/hadoop/.bash_profile:
vi ~/.bash_profile

Add the following line at the end of the file:
PATH=$HOME/hadoop/bin:$HOME/hadoop/sbin:$PATH

Save and close file.

Make sure you repeat the same step on each worker node as well.
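
You can reload the shell configuration and confirm the new PATH entries right away (the hadoop directory itself is downloaded in the next step, so this only checks the variable):
source ~/.bashrc
source ~/.bash_profile
echo $PATH | tr ':' '\n' | grep hadoop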


Step6 - Download Hadoop

This tutorial uses Hadoop 3.3.1; check the Apache Hadoop releases page if a newer version is available.

Login to your master-node as the hadoop user, download Hadoop, and extract it:
cd ~
wget http://apache.cs.utah.edu/hadoop/common/current/hadoop-3.3.1.tar.gz
tar xzf hadoop-3.3.1.tar.gz
mv hadoop-3.3.1 hadoop
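
You can confirm the archive extracted correctly by calling the binary directly (the PATH entries added earlier point to this same location):
~/hadoop/bin/hadoop version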

Step7 - Configure Hadoop

At this stage, we'll configure hadoop on master-node first, then replicate the configuration to worker nodes later.

On master-node, type the below command to find the Java installation path:
update-alternatives --display java

Take the value of the "link currently points to" line and remove the trailing /bin/java. For example, on CentOS or RHEL the link is /usr/lib/jvm/java-11-openjdk-11.0.5.10-2.el8_1.x86_64/bin/java, so JAVA_HOME should be /usr/lib/jvm/java-11-openjdk-11.0.5.10-2.el8_1.x86_64.

Edit hadoop-env.sh like below:

nano ~/hadoop/etc/hadoop/hadoop-env.sh

Uncomment the JAVA_HOME line by removing the leading # and update it with your own Java installation path, for example:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.5.10-2.el8_1.x86_64

Save and close when you are finished.

Next, edit core-site.xml file to set the NameNode location to master-node on port 9000:
nano ~/hadoop/etc/hadoop/core-site.xml

Add the following code; make sure you replace master-node with your own hostname:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master-node:9000</value>
    </property>
</configuration>

Save and close.

Next, edit hdfs-site.xml to resemble the following configuration:
nano ~/hadoop/etc/hadoop/hdfs-site.xml

Add the following code:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/hadoop/data/nameNode</value>
    </property>
    <property>
            <name>dfs.datanode.data.dir</name>
            <value>/home/hadoop/data/dataNode</value>
    </property>
    <property>
            <name>dfs.replication</name>
            <value>2</value>
    </property>
</configuration>

Note that the last property, dfs.replication, indicates how many times each block is replicated in the cluster. We set 2 so that all data is stored on both of our worker nodes. If you have only one worker node, enter 1; if you have three, enter 3; but don't enter a value higher than the actual number of worker nodes you have.

Save and close file when you are finished.

Next, edit the mapred-site.xml file, setting YARN as the default framework for MapReduce operations:
nano ~/hadoop/etc/hadoop/mapred-site.xml

Add the following code:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
    </property>
    <property>
            <name>yarn.app.mapreduce.am.env</name>
            <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
            <name>mapreduce.map.env</name>
            <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
            <name>mapreduce.reduce.env</name>
            <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
<property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>512</value>
</property>
<property>
        <name>mapreduce.map.memory.mb</name>
        <value>256</value>
</property>
<property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>256</value>
</property>
</configuration>

Save and close.

Next, edit yarn-site.xml, which contains the configuration options for YARN.
nano ~/hadoop/etc/hadoop/yarn-site.xml

Add the below code; make sure you replace 192.168.10.1 with your master-node's IP address:
<?xml version="1.0"?>
<configuration>
<property>
            <name>yarn.acl.enable</name>
            <value>0</value>
    </property>
    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>192.168.10.1</value>
    </property>
    <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
    </property>
<property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
</property>
<property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
</property>
<property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
</property>
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>
</configuration>
The last property disables virtual-memory checking, which, if left enabled, can prevent containers from being allocated properly with OpenJDK.

Note: Memory allocation can be tricky on low-RAM nodes because the default values are not suitable for nodes with less than 8 GB of RAM. We have manually set the memory allocation for MapReduce jobs; the values above are a sample configuration for nodes with 4 GB of RAM.
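
To see roughly how these values fit together (assuming the yarn-site.xml and mapred-site.xml values above): each NodeManager offers 2048 MB to YARN, and every container request is rounded up to the 1024 MB minimum allocation, so a 256 MB map or reduce task and the 512 MB application master each receive a 1024 MB container. Each worker can therefore run at most two containers at a time, or four across the whole cluster.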

Save and close.

Next, edit the workers file to include both of the worker nodes (worker-node1 and worker-node2 in our case):
nano ~/hadoop/etc/hadoop/workers

Remove localhost if it exists, and add your worker nodes like below:
worker-node1
worker-node2

Save and close.

The workers file is used by hadoop startup scripts to start required daemons on all nodes.

At this stage, we have completed hadoop configuration on master-node. In the next step we will duplicate hadoop configuration on worker nodes.


Step8 - Configure Worker Nodes

This section will show you how to duplicate the Hadoop configuration from master-node to all worker nodes.

First, copy the Hadoop tarball from master-node to the worker nodes like below:
cd ~
scp hadoop-*.tar.gz worker-node1:/home/hadoop/
scp hadoop-*.tar.gz worker-node2:/home/hadoop/

Next, log in to each worker node as the hadoop user via SSH, extract the Hadoop archive, rename the directory, then exit to get back to master-node:
ssh worker-node1
tar xzf hadoop-3.3.1.tar.gz
mv hadoop-3.3.1 hadoop
exit

Repeat the same step on worker-node2.

From the master-node, duplicate the Hadoop configuration files to all worker nodes using the command below:
for node in worker-node1 worker-node2; do
    scp ~/hadoop/etc/hadoop/* $node:/home/hadoop/hadoop/etc/hadoop/;
done

Make sure you replace worker-node1 and worker-node2 with your own worker node names.

Next, on master-node as the hadoop user, type the below command to format the Hadoop file system:
hdfs namenode -format

You will see output similar to the following, which says the NameNode storage has been successfully formatted:
WARNING: /home/hadoop/hadoop/logs does not exist. Creating.
2020-03-09 11:38:04,791 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master-node/192.168.10.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.3
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579; compiled by 'ztang' on 2019-09-12T02:47Z
STARTUP_MSG:   java = 11.0.5
************************************************************/
2020-03-09 11:38:04,818 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-03-09 11:38:05,062 INFO namenode.NameNode: createNameNode [-format]
2020-03-09 11:38:06,162 INFO common.Util: Assuming 'file' scheme for path /home/hadoop/data/nameNode in configuration.
2020-03-09 11:38:06,163 INFO common.Util: Assuming 'file' scheme for path /home/hadoop/data/nameNode in configuration.
Formatting using clusterid: CID-e791ed9f-f86f-4a19-bbf4-aaa06c9c3238
2020-03-09 11:38:06,233 INFO namenode.FSEditLog: Edit logging is async:true
2020-03-09 11:38:06,275 INFO namenode.FSNamesystem: KeyProvider: null
2020-03-09 11:38:06,276 INFO namenode.FSNamesystem: fsLock is fair: true
2020-03-09 11:38:06,277 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2020-03-09 11:38:06,387 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2020-03-09 11:38:06,387 INFO namenode.FSNamesystem: supergroup          = supergroup
2020-03-09 11:38:06,387 INFO namenode.FSNamesystem: isPermissionEnabled = true
2020-03-09 11:38:06,387 INFO namenode.FSNamesystem: HA Enabled: false
2020-03-09 11:38:06,476 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-03-09 11:38:06,510 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-03-09 11:38:06,510 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-03-09 11:38:06,516 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-03-09 11:38:06,516 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Mar 09 11:38:06
2020-03-09 11:38:06,518 INFO util.GSet: Computing capacity for map BlocksMap
2020-03-09 11:38:06,518 INFO util.GSet: VM type       = 64-bit
2020-03-09 11:38:06,530 INFO util.GSet: 2.0% max memory 908.7 MB = 18.2 MB
2020-03-09 11:38:06,530 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2020-03-09 11:38:06,537 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2020-03-09 11:38:06,557 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2020-03-09 11:38:06,557 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-03-09 11:38:06,557 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2020-03-09 11:38:06,557 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: defaultReplication         = 2
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: maxReplication             = 512
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: minReplication             = 1
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2020-03-09 11:38:06,558 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2020-03-09 11:38:06,559 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2020-03-09 11:38:06,602 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2020-03-09 11:38:06,669 INFO util.GSet: Computing capacity for map INodeMap
2020-03-09 11:38:06,669 INFO util.GSet: VM type       = 64-bit
2020-03-09 11:38:06,669 INFO util.GSet: 1.0% max memory 908.7 MB = 9.1 MB
2020-03-09 11:38:06,669 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2020-03-09 11:38:06,670 INFO namenode.FSDirectory: ACLs enabled? false
2020-03-09 11:38:06,670 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2020-03-09 11:38:06,670 INFO namenode.FSDirectory: XAttrs enabled? true
2020-03-09 11:38:06,670 INFO namenode.NameNode: Caching file names occurring more than 10 times
2020-03-09 11:38:06,679 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2020-03-09 11:38:06,681 INFO snapshot.SnapshotManager: SkipList is disabled
2020-03-09 11:38:06,685 INFO util.GSet: Computing capacity for map cachedBlocks
2020-03-09 11:38:06,685 INFO util.GSet: VM type       = 64-bit
2020-03-09 11:38:06,686 INFO util.GSet: 0.25% max memory 908.7 MB = 2.3 MB
2020-03-09 11:38:06,686 INFO util.GSet: capacity      = 2^18 = 262144 entries
2020-03-09 11:38:06,697 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-03-09 11:38:06,697 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-03-09 11:38:06,697 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-03-09 11:38:06,700 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2020-03-09 11:38:06,701 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-03-09 11:38:06,707 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2020-03-09 11:38:06,707 INFO util.GSet: VM type       = 64-bit
2020-03-09 11:38:06,708 INFO util.GSet: 0.029999999329447746% max memory 908.7 MB = 279.1 KB
2020-03-09 11:38:06,708 INFO util.GSet: capacity      = 2^15 = 32768 entries
2020-03-09 11:38:06,760 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1174644765-192.168.10.1-1583735886736
2020-03-09 11:38:06,787 INFO common.Storage: Storage directory /home/hadoop/data/nameNode has been successfully formatted.
2020-03-09 11:38:06,862 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/data/nameNode/current/fsimage.ckpt_0000000000000000000 using no compression
2020-03-09 11:38:07,029 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/data/nameNode/current/fsimage.ckpt_0000000000000000000 of size 393 bytes saved in 0 seconds .
2020-03-09 11:38:07,045 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2020-03-09 11:38:07,072 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
2020-03-09 11:38:07,074 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master-node/192.168.10.1
************************************************************/

With HDFS formatted, your Hadoop installation is now configured and ready to run.


Step9 - Start Hadoop Cluster

Login to master-node as the hadoop user and start the hadoop cluster by running the below command:
start-dfs.sh

You will see output similar to the following:
Starting namenodes on [master-node]
Starting datanodes
worker-node2: WARNING: /home/hadoop/hadoop/logs does not exist. Creating.
worker-node1: WARNING: /home/hadoop/hadoop/logs does not exist. Creating.
Starting secondary namenodes [master-node]

This will start the NameNode and SecondaryNameNode components on master-node, and a DataNode on worker-node1 and worker-node2, according to the configuration in the workers file.

Check that every process is running with the jps command on each node.

On master-node, type jps and you should see the following:
8066 NameNode
8292 SecondaryNameNode
8412 Jps

On worker-node1 and worker-node2, type jps and you should see the following:
17525 DataNode
17613 Jps

You can get useful information about your hadoop cluster with the below command.
hdfs dfsadmin -report

This will print information (e.g., capacity and usage) for all running nodes in the cluster.

Next, open up your preferred web browser and navigate to http://your_master_node_IP:9870, and you’ll get a user-friendly Hadoop monitoring web console.
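
If the page does not load, the firewall on master-node may be blocking the port. Assuming firewalld is running (the default on CentOS/RHEL), you can open the NameNode web UI port like below:
sudo firewall-cmd --permanent --add-port=9870/tcp
sudo firewall-cmd --reload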



Step10 - Test Hadoop Cluster

You can test your hadoop cluster functionality using hdfs dfs command.

First, manually create your HDFS home directory; all other commands will use paths relative to this default home directory.

On master-node, type the below command:
hdfs dfs -mkdir -p /user/hadoop

We'll use a few books from Project Gutenberg as examples for this guide.

Create a books directory in the Hadoop file system. The following command will create it under the home directory, as /user/hadoop/books:
hdfs dfs -mkdir books

Now download a few books from the Gutenberg project:
cd /home/hadoop

wget -O franklin.txt http://www.gutenberg.org/files/13482/13482.txt
wget -O herbert.txt http://www.gutenberg.org/files/20220/20220.txt
wget -O maria.txt http://www.gutenberg.org/files/29635/29635.txt

Next, put the three books into HDFS, in the books directory:
hdfs dfs -put franklin.txt herbert.txt maria.txt books

List the contents of the books directory:
hdfs dfs -ls books
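
If you want to confirm that the replication factor configured earlier (dfs.replication = 2) is being applied, hdfs dfs -stat can print it per file (%r is the replication count, %n the file name):
hdfs dfs -stat "%r %n" books/franklin.txt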

Next, copy one of the books back to the local filesystem:
hdfs dfs -get books/franklin.txt

You can also print a book directly to the terminal from HDFS:
hdfs dfs -cat books/maria.txt

These are just a few examples of Hadoop commands; there are many more for managing HDFS. For a complete list, see the Apache HDFS shell documentation, or print the help with:
hdfs dfs -help


Step11 - Start YARN

HDFS is a distributed storage system, and doesn’t provide any services for running and scheduling tasks in the cluster. This is the role of the YARN framework. The following section is about starting, monitoring, and submitting jobs to YARN.

On master-node, you can start YARN with the below script:
start-yarn.sh

You will see the output like below:
Starting resourcemanager
Starting nodemanagers

Check that everything is running with the jps command. In addition to the HDFS daemons started earlier, you should see a ResourceManager on master-node, and a NodeManager on worker-node1 and worker-node2.

To stop YARN, run the following command on master-node:
stop-yarn.sh

You can get a list of running applications with the below command:
yarn application -list
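
You can also confirm that both worker nodes have registered their NodeManagers with the ResourceManager:
yarn node -list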

To get all available parameters of the yarn command, see the Apache YARN documentation.

As with HDFS, YARN provides a friendlier web UI, started by default on port 8088 of the ResourceManager. You can navigate to http://master-node-IP:8088 to browse the YARN web console.


Step12 - Submit MapReduce Jobs to YARN

YARN jobs are packaged into jar files and submitted to YARN for execution with the command yarn jar. The Hadoop installation package provides sample applications that can be run to test your cluster. You’ll use them to run a word count on the three books previously uploaded to HDFS.

On master-node, submit a job with the sample jar to YARN:
yarn jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount "books/*" output

The last argument, output, is the HDFS directory where the output of the job will be saved.

After the job finishes, you can list the result files with the below command:
hdfs dfs -ls output

Print the result with:
hdfs dfs -cat output/part-r-00000 | less
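
If you prefer a single local copy of the result, hdfs dfs -getmerge concatenates all the output parts into one local file (the name wordcount.txt here is just an example):
hdfs dfs -getmerge output wordcount.txt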

Wrapping up

Now that your Hadoop cluster is up and running, you can learn how to write your own YARN jobs using the Apache documentation, or install Spark on top of YARN.
