- Posted by Gavin Soorma
- On November 30, 2011
This note illustrates how we can set up an Oracle 11g Release 2 Real Application Clusters (RAC) environment for test purposes, in under 30 minutes, using the currently available Oracle VM templates.
The templates are available from the Oracle E-Delivery web site in both 32-bit and 64-bit OEL Linux versions.
While we are demonstrating a test or development RAC setup in which a single Oracle VM Server hosts both guest nodes, a production-type environment is also supported with the Oracle VM templates: in that case there are a number of Oracle VM Servers, and the shared disks are configured as ‘phy’ devices which are passed through to the guest Oracle VMs.
In this test environment, however, the shared disk is configured as ‘file’ devices and both guests can run on the same Oracle VM Server.
In other words we have one single DOM-0 and we use files located in the DOM-0 to emulate the shared disks for the Oracle RAC ‘cluster’ nodes running in DOM-U.
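The difference between the two setups comes down to how the disks are declared in each guest's vm.cfg. The fragment below is an illustrative sketch (paths and device names are placeholders, not taken from the actual template); the `w!` mode is what marks a disk as writable-shared between guests:

```python
# Sketch of the disk entries in a guest's vm.cfg (Xen-style config used by OVM 2.2).
# Paths are illustrative placeholders.

# Test/dev setup: file-backed disks living in dom0's filesystem
disk = [
    'file:/OVS/running_pool/rac1/System.img,xvda,w',   # guest OS image
    'file:/OVS/sharedDisk/ASM1.img,xvdb,w!',           # shared ASM disk ('w!' = shared writable)
]

# Production alternative: physical LUNs passed through as 'phy' devices
# disk = ['phy:/dev/mapper/lun1,xvdb,w!']
```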
The template requires about 40 GB of disk space and assumes that at least 4 GB of RAM is available. In our test case, we have allocated 4 GB of RAM to each Virtual Machine.
The template assumes the following are already in place:
a) We have already installed Oracle VM Server (we have used OVM 2.2)
b) We have already installed and configured Oracle VM Manager (OVM 2.2)
c) We have allocated 7 unused IP addresses and registered them in our DNS
Assuming a two-node RAC cluster, that is:
2 IPs for the public host names
2 IPs for the VIPs
3 IPs for the SCAN
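As a concrete example, the address plan for a two-node cluster might look like the hosts-style sketch below. All names and addresses here are hypothetical placeholders for the values you would register in your DNS:

```shell
# Hypothetical 7-address plan for a two-node RAC cluster
# (names/addresses are placeholders -- substitute your DNS-registered values)
cat > /tmp/rac_ip_plan.txt <<'EOF'
192.168.1.101  racnode1        public
192.168.1.102  racnode2        public
192.168.1.111  racnode1-vip    VIP
192.168.1.112  racnode2-vip    VIP
192.168.1.121  rac-scan        SCAN
192.168.1.122  rac-scan        SCAN
192.168.1.123  rac-scan        SCAN
EOF
wc -l < /tmp/rac_ip_plan.txt   # expect 7 entries
```

Note that the three SCAN addresses all resolve from the single SCAN name.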
The high level steps involved would be:
- Download the zip files from Oracle E-Delivery site
- Import the Virtual Machine template into OVM
- Create the ASM shared storage
- Create the VMs from the 11gR2 RAC template
- Add the shared disks to the VMs
- Complete the first boot interview
- Run the buildcluster.sh script, which installs the 11gR2 Grid Infrastructure and database software and also creates the ORCL RAC database
- Customize as required
- Installing OVM Manager 2.2
- Installing OVM Server 2.2
- RAC OVM Templates Overview
- RAC 11g Release 2 (11.2.0.2.0) Oracle VM templates – Test and Development Configuration
We need to unzip the 3 files we have downloaded from the E-Delivery web site and then extract the 11gR2 RAC template.
This is all done in the /OVS/seed_pool/ directory on the OVM Server
```shell
[root@kens-ovm-002 OVS]# mv p10113572*.zip ./tmp
[root@kens-ovm-002 OVS]# cd tmp
[root@kens-ovm-002 tmp]# ls
p10113572_10_Linux-x86-64_1of3.zip  p10113572_10_Linux-x86-64_2of3.zip  p10113572_10_Linux-x86-64_3of3.zip
[root@kens-ovm-002 tmp]# unzip -q p10113572_10_Linux-x86-64_1of3.zip
[root@kens-ovm-002 tmp]# unzip -q p10113572_10_Linux-x86-64_2of3.zip
[root@kens-ovm-002 tmp]# unzip -q p10113572_10_Linux-x86-64_3of3.zip
[root@kens-ovm-002 tmp]# cd /OVS/seed_pool/
[root@kens-ovm-002 seed_pool]# tar xzf /OVS/tmp/OVM_EL5U4_X86_64_11202RAC_PVM-1of2.tgz
[root@kens-ovm-002 seed_pool]# cat /OVS/tmp/OVM_EL5U4_X86_64_11202RAC_PVM-2of2-parta.tgz /OVS/tmp/OVM_EL5U4_X86_64_11202RAC_PVM-2of2-partb.tgz | tar xzf -
```
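The final `cat … | tar xzf -` command reassembles the split second archive on the fly without needing extra scratch space for an intermediate file. The same trick can be demonstrated on small scratch files (all paths under /tmp here are arbitrary):

```shell
# Create a small directory, archive it, and split the archive into pieces
mkdir -p /tmp/demo /tmp/out
echo hello > /tmp/demo/file.txt
tar czf - -C /tmp demo | split -b 100 - /tmp/demo_part_
# Reassemble the pieces on the fly and extract, as with the RAC template
cat /tmp/demo_part_* | tar xzf - -C /tmp/out
cat /tmp/out/demo/file.txt   # prints: hello
```

This works because the split parts sort alphabetically, so the glob feeds them to `cat` in the original order.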
Importing the Virtual Machine Template
We will now import this template using OVM Manager.
Resources >> Virtual Machine Templates and click on Import button
Our template is stored in the /OVS/seed_pool on our OVM Server machine. So we select the option “Select from Server Pool (Discover and Register)”
From the Virtual Machine Template Name drop down list select the 11gR2 RAC PVM.
In the Virtual Machine System Password field, we need to enter the root password. The template assumes the root password is ‘ovsroot’; when we set it to anything else, as in our case, we got an error later on.
But there is a configuration file we can edit to change the default root password to whatever we desire and I will show you how that’s done in a bit.
Click on the Confirm button
We will see that the status of the Virtual Machine Template changes first to “Importing” and then to “Pending”
We need to click on the Approve button
We see that the template status has now changed to “Active” and we can now use this template to set up our 11g R2 RAC environment.
Create the shared ASM disks
Resources >> Shared Virtual Disks >> Create
Select the appropriate Server Pool Name and Group Name.
We will be creating 5 shared virtual disks – ASM1, ASM2, ASM3, ASM4 and ASM5 – each of size 2048 MB (2 GB)
Click on Confirm
After the first disk ASM1 has been created, click the Create button again and create the remaining ASM disks in the same manner
We should now see all the five Shared Virtual Disks with status Active
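Behind the scenes, OVM Manager creates these shared disks as image files in the server pool's storage. The equivalent can be sketched manually with sparse files, as below; the directory used here is a scratch placeholder, not the real pool path, so the commands are harmless to try:

```shell
# Sketch: create five 2048 MB sparse images, one per ASM disk
# (DISKDIR is a scratch placeholder, not the real OVM storage path)
DISKDIR=/tmp/sharedDisk
mkdir -p "$DISKDIR"
for i in 1 2 3 4 5; do
  # count=0 + seek=2048 allocates a sparse 2 GB file without writing data
  dd if=/dev/zero of="$DISKDIR/ASM$i.img" bs=1M count=0 seek=2048 2>/dev/null
done
ls -l "$DISKDIR"
```

Sparse files report the full 2 GB size but only consume space as data is written to them.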
Create the Virtual Machines from the imported 11gR2 RAC template
Click on the Create Virtual Machine button
Select the option Create virtual machine based on virtual machine template
Select the template OVM_EL5U4_X86_64_11202RAC_PVM
Do not forget to add a SECOND Virtual Network Interface – in this case the name is VIF1 and Bridge name is xenbr1
Keep a note of the console password as we will need to enter this at a later stage
Note the status of the Virtual Machine is now ‘Creating’
After the Virtual Machine is created, it is in a Powered Off state
Click on the Configure tab as we would like to change the Memory Size of this VM from 2 GB to 4 GB
Change the Maximum Memory Size and Memory Size to 4096 MB
Change the Memory Size of the second VM to 4096 MB as well; we will now see that both VMs have been created and are in the Powered Off state
Attach the Shared Disks to both VMs
Select one of the Virtual Machines and click on the Configure tab
Select the Storage tab and then the Attach/Detach Shared Virtual Disk
We see the 5 Shared Virtual Disks which we created earlier – click on Move All
We can now see that the 5 ASM Shared Disks have been attached to the VM and the disk status is Active
Repeat the same process for the second VM and attach the same 5 ASM disks to that Virtual Machine as well
Complete the First Boot Interview
Power on both VMs
Once the machines are powered on, click on the Console tab and launch both the consoles
This is called the First Boot Interview stage.
On the first screen we enter YES and on the second screen we enter NO.
Only after we have entered NO on the second console does the install continue.
We now need to provide the IPs for the Public, Private and Virtual hostnames
We need to enter the SCAN name for this cluster. Note – we should already have registered the SCAN in our DNS, with 3 unused IP addresses assigned to it
Install Grid Infrastructure and Oracle 11g R2 RAC
We now need to run the buildcluster.sh script. It needs to be run on only ONE of the VMs – the one we have designated to be the first node of the cluster.
Note – because we had not used the standard root password ovsroot, the script failed when it tried to establish a passwordless SSH session.
We needed to edit the /u01/racovm/params.ini file and change the ROOTUSERPASSWORD entry to the password we had actually set.
Have a read through the params.ini file, as it is through this file that we can customise many aspects of the RAC cluster build.
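For example, the ROOTUSERPASSWORD change described above can be scripted. The sketch below works on a scratch copy so it is safe to try anywhere; on the actual VM the real file is /u01/racovm/params.ini, and "MyNewPass" stands in for whatever root password you chose:

```shell
# Demo on a scratch copy; on the VM, point PARAMS at /u01/racovm/params.ini
PARAMS=/tmp/params.ini
printf 'ROOTUSERPASSWORD="ovsroot"\n' > "$PARAMS"    # the template's default value
# Replace the default with the root password actually set on the guests
sed -i 's/^ROOTUSERPASSWORD=.*/ROOTUSERPASSWORD="MyNewPass"/' "$PARAMS"
grep '^ROOTUSERPASSWORD' "$PARAMS"
```

With the entry corrected, buildcluster.sh can set up its passwordless SSH sessions and the build proceeds.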
While the RAC installation is in progress, we can monitor it by doing a tail -f of the progress-racovm.out file located in /tmp