Set Up a Multi-node Oracle RAC 18c on CentOS/RHEL 7

This tutorial will walk you through the steps to set up a multi-node Oracle Real Application Cluster (RAC) using Grid Infrastructure 18c and Database 18c on CentOS/RHEL 7. These instructions also apply if you are running Fedora or Oracle Linux 7.


Prerequisites

  • You will need two CentOS/RHEL 7 servers, physical or virtual, each with a minimum of 8GB of memory and two network interfaces.
  • Root user privileges on each server.
  • A working DNS server.

Environment

You should document everything about your environment before going into the actual deployment:


You will need to create host (A) records for the database SCAN name against its IP addresses in your DNS server, like below:


1. Configuring SELinux

Log in to your first node, in our case (pdbserver1), and edit the /etc/selinux/config file, changing the SELINUX=enforcing parameter to SELINUX=permissive:
vi /etc/selinux/config
SELINUX=permissive
Save and close the file when you are done.
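To apply the change immediately without a reboot, you can also switch the running mode to permissive:
setenforce 0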

2. Disabling Firewall

You also need to stop and disable the firewalld service if it is running, and flush any remaining iptables rules, with the commands below:
systemctl disable firewalld
systemctl stop firewalld
iptables -F

3. Set Timezone

Make sure the timezone is correctly set on your servers. If it is not, set the correct timezone with the below command, replacing Asia/Karachi with yours:
timedatectl set-timezone Asia/Karachi
It is recommended to sync your servers' clocks with a locally or publicly available NTP server, like below:
ntpdate -u your_ntp_server_name_or_ip
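You can confirm the current time and timezone settings at any point with timedatectl:
timedatectl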

4. Set Hostname

You can set the hostname of your servers like below, replacing your_server_name.domain with your actual server name:
hostnamectl set-hostname your_server_name.domain
Edit the /etc/hosts file on each server and add entries for all of your nodes, like below:

on pdbserver1:


on pdbserver2:
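The entries are the same on both nodes. Here is a minimal sketch, assuming a 192.168.70.0/24 public network and a 10.0.0.0/24 private interconnect; all IP addresses and the yourdomain suffix are hypothetical, so substitute your own:
# Public
192.168.70.11  pdbserver1.yourdomain       pdbserver1
192.168.70.12  pdbserver2.yourdomain       pdbserver2
# Private
10.0.0.11      pdbserver1-priv.yourdomain  pdbserver1-priv
10.0.0.12      pdbserver2-priv.yourdomain  pdbserver2-priv
# Virtual
192.168.70.21  pdbserver1-vip.yourdomain   pdbserver1-vip
192.168.70.22  pdbserver2-vip.yourdomain   pdbserver2-vip
Note that the SCAN name should resolve through DNS (as set up earlier), not through /etc/hosts.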


5. Adding EPEL Repository

It is recommended to add the Extra Packages for Enterprise Linux (EPEL) repository so that you can easily find and install required packages using the yum package manager:
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

6. Adding Oracle Repository

Since we are installing Oracle Grid Infrastructure and Database on CentOS/RHEL 7 servers, we need to install some important packages from the Oracle Linux repository:
vi /etc/yum.repos.d/ol7.repo
Add the below contents to it:
[ol7_latest]
name=Oracle Linux $releasever latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=0
enabled=1
Save and close the file when you are finished.
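You can verify that the new repository is enabled like below:
yum repolist enabled | grep ol7_latest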

7. Installing Prerequisites

These are the important and required packages you need to install on your Linux servers. Type the below command to install them all together:
yum -y install compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel tar unzip zip wget ntpdate
You will also need to install oracleasm in order to configure your shared storage for Oracle Grid Infrastructure, along with the 18c preinstall package, which configures kernel parameters and creates the oracle user and groups:
yum -y install kmod-oracleasm oracleasm-support
yum -y install https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
yum -y install oracle-database-preinstall-18c
Now you need to set a password for the oracle user like below:
passwd oracle
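You can confirm that the oracle user and its groups were created correctly; the output should list groups such as oinstall and dba:
id oracle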

8. Configuring OracleASM

At this point, you need to configure oracleasm on all of the servers with root user privileges like below:
oracleasm configure -i
Enter oracle as the user and dba as the group when it prompts you, as shown below.
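The interactive prompts look roughly like this (exact wording may vary between oracleasm-support versions):
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y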


Now type the below command to load the oracleasm driver:
oracleasm init
This will show you the output like below:


When you have completed steps 1 through 8 on each node in your environment, you can proceed to the next step.

9. Creating ASM Disks

For this guide, we are using Openfiler as our iSCSI shared storage; we have already configured it using the iscsiadm command and created raw partitions using fdisk on our servers.

Now, log in to your first node (pdbserver1) with the root user and create the ASM disks like below:
oracleasm createdisk ASM_DISK1 /dev/sdc1
oracleasm createdisk ASM_DISK2 /dev/sdd1
oracleasm createdisk ASM_DISK3 /dev/sde1
Type the below commands to scan for and list the ASM disks:
oracleasm scandisks
oracleasm listdisks
You will see output similar to the below:
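The listing from oracleasm listdisks should show the three disks created above:
ASM_DISK1
ASM_DISK2
ASM_DISK3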


Since this is shared storage, we only need to scan for the disks on the remaining nodes, in our case (pdbserver2), with the root user. If you have more than two nodes in your environment, run the below command on each of them:
oracleasm scandisks
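You can then list the disks on each remaining node to confirm all three are visible there as well:
oracleasm listdisks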

10. Configuring Shell Environment

It's time to create the environment variables on all of the nodes one by one. Log in with the oracle user, or switch directly from a root session to oracle with the below command:
su - oracle
Now, you need to edit the default .bash_profile of the oracle user like below:
vi /home/oracle/.bash_profile
Add the below contents at the end of the file, making sure you replace the values (such as ORACLE_SID, DB_NAME and the paths) with yours:
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=$HOSTNAME
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/18.0.0.0.0/dbhome_1
export GRID_HOME=/u01/app/product/18.0.0.0.0/grid
export ORA_INVENTORY=/u01/app/oraInventory
export ORACLE_SID=pdbrac1
export DB_NAME=pdbrac
export DB_UNIQUE_NAME=pdbrac
export DATA_DIR=/u01/oradata

export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
Save and close the file when you are done.

Again, create the grid_env file like below:
vi /home/oracle/grid_env
Add the below contents to it:
export ORACLE_SID=+ASM1
export GRID_HOME=/u01/app/product/18.0.0.0.0/grid
export ORACLE_HOME=$GRID_HOME

export PATH=$GRID_HOME/bin:/usr/sbin:/usr/local/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Save and close the file when you are finished.

Now create the db_env file like below:
vi /home/oracle/db_env
Add the below contents to it:
export ORACLE_SID=pdbrac1
export ORACLE_HOME=/u01/app/oracle/product/18.0.0.0.0/dbhome_1

export PATH=$ORACLE_HOME/bin:/usr/sbin:/usr/local/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
Save and close the file when you are done.

Now, make grid_env and db_env executable with the below commands:
chmod u+x /home/oracle/grid_env
chmod u+x /home/oracle/db_env
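As a quick check on the first node, the aliases defined in .bash_profile let you switch between the two environments on the fly:
su - oracle
db_env
echo $ORACLE_SID    # prints pdbrac1
grid_env
echo $ORACLE_SID    # prints +ASM1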
Before proceeding to the next step, make sure you have performed all of the above steps on each node. We are preparing a two-node cluster, so the environment variables on our servers are as shown in the images below.

bash_profile on pdbserver1


bash_profile on pdbserver2:


db_env on pdbserver1:


db_env on pdbserver2:



grid_env on pdbserver1:


grid_env on pdbserver2:


If the /dev/shm size is less than 4GB, increase it and remount on each node using the below command:
mount -o remount,size=4G /dev/shm
To make the change persistent across reboots, modify /etc/fstab accordingly:
vi /etc/fstab
tmpfs /dev/shm tmpfs defaults,size=4G 0 0
Save and close the file when you are finished.
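You can confirm the new size on each node with df:
df -h /dev/shm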

If you leave /dev/shm at less than 4GB, it will cause an error during the prerequisite checks of the Grid installation.


11. Creating Oracle Base

With root user privileges, create these directories under /u01 on each node and set the appropriate permissions like below:
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/product/18.0.0.0.0/dbhome_1
mkdir -p /u01/app/product/18.0.0.0.0/grid
mkdir -p /u01/oradata
chown -R oracle:oinstall /u01
chmod -R 775 /u01
Now log in with the oracle user on your first node and extract the grid setup under the /u01/app/product/18.0.0.0.0/grid directory like below:
unzip -q /home/oracle/grid_setup_file.zip -d /u01/app/product/18.0.0.0.0/grid
At this point, you need to install the cvuqdisk rpm package from the grid setup directory with the root user on each node like below:
rpm -ivh /u01/app/product/18.0.0.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
When you are done installing the cvuqdisk rpm package on each node, proceed to the next step, the grid installation.
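Optionally, you can pre-check both nodes with Oracle's Cluster Verification Utility from the extracted grid home before launching the installer; the invocation below is a sketch, so adjust the node names to yours:
/u01/app/product/18.0.0.0.0/grid/runcluvfy.sh stage -pre crsinst -n pdbserver1,pdbserver2 -verbose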

12. Installing Oracle Grid 18c

We will run a graphical installation; if you wish, you can go for a silent installation as well. Log in to your first node with the oracle user and run the grid setup like below, replacing 192.168.70.1 with the address of the machine running your X server:
DISPLAY=192.168.70.1:0.0; export DISPLAY
/u01/app/product/18.0.0.0.0/grid/gridSetup.sh
Since this is our first grid infrastructure installation, keep the default on the below screen and click Next.


Select {Configure an Oracle Standalone Cluster} and click Next.


Enter the Cluster Name and SCAN Name, and keep the default SCAN Port.

Click Next


Click Add to add the second node, then click SSH connectivity, enter the oracle user password in the OS Password box and click Setup.


This will bring up a screen like below, which says that passwordless SSH connectivity for the oracle user has been established successfully across the servers.

Click OK


Click Next


Ensure that the correct network interfaces are selected for their intended use (public and private), then click Next.


We are using iSCSI shared storage, which we already configured using oracleasm earlier, so we will go for the first option on the below screen.


Click No then Next


Select External under Redundancy, then click Change Discovery Path and enter /dev/oracleasm/disks in the box to discover the ASM disks.

Click Next.


Select {Use same password for these accounts}, specify the password, confirm it and click Next.


Click Next.



Click Next.


Keep the default and click Next.


Click Yes on below warning box.


Click Next


Click Next.


Click Next.


It will take a moment to check the prerequisites; if you see any warnings, go back and fix them first. If the prerequisite checks find everything in place, you will see the summary screen below.

Verify everything then click Install.


The installation progress screen will prompt you to execute the root scripts.


Stop here, go back to your servers, log in with the root user, and execute these root scripts one by one on each node, starting with the primary node.
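Based on the paths used in this guide, the scripts the installer asks for are typically the inventory script followed by the grid root script, run in this order on each node:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/product/18.0.0.0.0/grid/root.sh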


When you are done executing the root scripts on each node, come back to the installation screen and click OK on the root script prompt. It will then take a few minutes to complete the remaining installation stages.


Click Close.


At this stage, your Oracle Grid Infrastructure installation is complete.
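You can quickly confirm that the clusterware is up on both nodes with crsctl:
/u01/app/product/18.0.0.0.0/grid/bin/crsctl check cluster -all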

13. Installing Oracle Database 18c

Log in to your first node with the oracle user and extract the database under the /u01/app/oracle/product/18.0.0.0.0/dbhome_1 location like below.
unzip -q /home/oracle/database_file.zip -d /u01/app/oracle/product/18.0.0.0.0/dbhome_1
Again, we will run a graphical installation; if you wish, you can go for a silent installation.

Type the below commands to run the graphical installation of Database 18c:
DISPLAY=192.168.70.1:0.0; export DISPLAY
/u01/app/oracle/product/18.0.0.0.0/dbhome_1/runInstaller
Choose {Set Up Software Only} and click Next.



On the below screen, select {Oracle Real Application Clusters database installation} and click Next.


Click Next.


Select {Enterprise Edition} and click Next.


Click Next.


Keep the default and click Next.


After the prerequisite checks, if all goes well, setup will bring you to the below summary screen.

Click Install.


When it says to execute the root scripts, stop right there, go back to your first node and execute the root scripts, then do the same on the other nodes. Once you are finished executing the root scripts, come back to the installer screen and click OK.
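For the database home, there is typically a single root script, run on each node:
/u01/app/oracle/product/18.0.0.0.0/dbhome_1/root.sh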


Click Close to finish setup.


Well, at this stage your database installation has been successful.

14. Creating a Database

Type the below commands to set the appropriate permissions on each node with the root user:
chmod 6751 /u01/app/oracle/product/18.0.0.0.0/dbhome_1/bin/oracle
chmod 6751 /u01/app/product/18.0.0.0.0/grid/bin/oracle
When you are finished with the above, log in to your first node with the oracle user and prepare the environment for the database creation wizard like below:
source /home/oracle/db_env
DISPLAY=192.168.70.1:0.0; export DISPLAY
Then type dbca to run the database creation wizard:
dbca
Choose {Create a database} and click Next.


Select {Advanced configuration} and click Next.


Click Next


Click Next


Provide the {Fast Recovery Area Size} according to your environment and click Next.


Leave the options un-ticked and click Next.


Set {Processes} according to your requirement and click Next.


Specify the management options and click Next.


Select {Use the same administrative password for all users} and enter the password.

Click Next.


Select {Create database}, then tick {Generate database creation scripts} and keep the default path.

Click {All Initialization Parameters}.


Make sure {db_name}, {db_unique_name}, {db_domain} and {sessions} are ticked for Include in spfile.

When you are done, close this window.



Click Next.

It will take a moment to perform the prerequisite checks.



If all goes well with the prerequisite checks, you will see the below summary screen.

Click Finish to start database creation process.


Database creation process has begun.


When it's done, click Close to finish the database creation wizard.

15. Verifying RAC and Database Status

At this stage, your database creation has been successful. Now log in with the oracle user on your first node and type the below commands to verify your RAC cluster and database status.

To verify the RAC (Real Application Cluster) status, type the below commands:
cd $GRID_HOME/bin
./crsctl stat res -t

To verify database status:
./srvctl status database -d pdbrac
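If both instances are up, the output looks like the below (instance and node names will reflect your environment):
Instance pdbrac1 is running on node pdbserver1
Instance pdbrac2 is running on node pdbserver2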

Type the below command to verify the database configuration in RAC:
./srvctl config database -d pdbrac

You can also verify from the SQL prompt:
sqlplus / as sysdba
Type the below query at the SQL prompt:
SELECT inst_name FROM v$active_instances;
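The query returns one row per active instance; inst_name is reported as host:instance, so with hypothetical hostnames the result looks something like the below:
INST_NAME
--------------------------------
pdbserver1.yourdomain:pdbrac1
pdbserver2.yourdomain:pdbrac2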

16. Startup and Shutdown

In this step, we will show you how to start and shut down your cluster and database whenever required.

Log in to any of the nodes with the root user and type the below command to shut down the database first:
/u01/app/product/18.0.0.0.0/grid/bin/srvctl stop database -d pdbrac
You can verify whether the database has stopped:
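The same srvctl status command from the previous step works here; once the database is down, it should report that the instances are not running:
/u01/app/product/18.0.0.0.0/grid/bin/srvctl status database -d pdbrac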


When the database has stopped successfully, you can run the below command to stop the real application cluster:
/u01/app/product/18.0.0.0.0/grid/bin/crsctl stop cluster -all
Output similar to the below shows that the real application cluster has been stopped.


To bring the cluster back online, log in to any of the nodes with the root user and type the below command:
/u01/app/product/18.0.0.0.0/grid/bin/crsctl start cluster -all
Output similar to the below shows that the cluster is now online:


Now, you can type the below command to start the database as well:
/u01/app/product/18.0.0.0.0/grid/bin/srvctl start database -d pdbrac

Wrapping up

In this guide, we set up a two-node Oracle Real Application Cluster (RAC) 18c, along with Database 18c, on CentOS/RHEL 7 servers.
