Branded-ZoneCluster
This post shows how to install a zone cluster with branded Solaris 10 zones on a global cluster running Solaris 11.1 and Oracle Solaris Cluster 4.1. First, install the solaris10 brand package on each global-cluster node:
# pkg install pkg:/system/zones/brand/brand-solaris10
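The brand package has to be present on every global-cluster node that will host a zone-cluster node. A quick check on each node (just a sketch, reusing the package name from the install command above):

# pkg list system/zones/brand/brand-solaris10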
We will need a private interconnect in our virtual cluster. I will use virtual NICs on both nodes, created on top of my global-cluster interconnect links:
# dladm create-vnic -l net1 vnic1
# dladm create-vnic -l net2 vnic2
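The same VNICs have to exist on the second node, so the two dladm commands are repeated there. To verify on each node (a sketch):

# dladm show-vnic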
The cluster framework will install the zones on both nodes. What we need is a configuration for the zones:
# vi clzc-c0.conf
create -b
set zonepath=/zones/clzc-c0
set brand=solaris10
set autoboot=true
set limitpriv=default,proc_priocntl,proc_clock_highres
set enable_priv_net=true
set ip-type=exclusive
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=clnode01
set hostname=clzc-n0
add net
set physical=aggr0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end
add node
set physical-host=clnode02
set hostname=clzc-n1
add net
set physical=aggr0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end
add sysid
set root_password=$5$FLfq/va3$xiVtIvMWCSU5UmC6R.11LzmO6SDKCyqWsf4wLczFpf0
set name_service="DNS{domain_name=dbconcepts.local name_server=193.168.25.100,193.168.25.190,search=dbconepts.local}"
set nfs4_domain=dynamic
set security_policy=NONE
set system_locale=C
set terminal=vt100
set timezone=Europe/Vienna
end

root@clnode01:~/cluster# clzonecluster configure -f ./clzc-c0.conf clzc-c0
root@clnode01:~/cluster#
root@clnode01:~/cluster# clzonecluster verify clzc-c0
Waiting for zone verify commands to complete on all the nodes of the zone cluster "clzc-c0"...
root@clnode01:~/cluster# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Brand       Node Name   Zone Host Name   Status    Zone Status
----      -----       ---------   --------------   ------    -----------
clzc-c0   solaris10   clnode01    clzc-n0          Offline   Configured
                      clnode02    clzc-n1          Offline   Configured

root@clnode01:~/cluster#
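If you want to double-check what the framework actually stored, the configuration can be dumped again; a small sketch using the export subcommand:

root@clnode01:~/cluster# clzonecluster export clzc-c0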
Now we will install these zones from a flash archive (flar) created on a Solaris 10 system.
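For reference, such an archive can be created on a running Solaris 10 source system with flarcreate; a minimal sketch (the archive name and the NFS target path are my assumptions, not taken from the original setup):

# flarcreate -n sol10template /net/filer/downloads/sol10template.flar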
root@clnode01:~# clzonecluster install -a /downloads/sol10template.flar clzc-c0
Waiting for zone install commands to complete on all the nodes of the zone cluster "clzc-c0"...
root@clnode01:~# clzonecluster status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Brand       Node Name   Zone Host Name   Status    Zone Status
----      -----       ---------   --------------   ------    -----------
clzc-c0   solaris10   clnode01    clzc-n0          Offline   Installed
                      clnode02    clzc-n1          Offline   Installed

root@clnode01:~#
In the next step, I will boot the zones "outside" the cluster (offline mode) to finish the installation setup.
root@clnode01:/# clzc boot -o clzc-c0
Waiting for zone boot commands to complete on all the nodes of the zone cluster "clzc-c0"...
root@clnode01:/# zlogin -C clzc-c0
[Connected to zone 'clzc-c0' console]

You did not enter a selection.
What type of terminal are you using?
 1) ANSI Standard CRT
 2) DEC VT52
 3) DEC VT100
 4) Heathkit 19
 5) Lear Siegler ADM31
 6) PC Console
 7) Sun Command Tool
 8) Sun Workstation
 9) Televideo 910
 10) Televideo 925
 11) Wyse Model 50
 12) X Terminal Emulator (xterms)
 13) CDE Terminal Emulator (dtterm)
 14) Other
Type the number of your choice and press Return: 13
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair
Configuring network interface addresses: aggr0 clprivnet1 vnic1 vnic2
[...]
root@clnode01:/# zlogin clzc-c0
[Connected to zone 'clzc-c0' pts/3]
Last login: Wed Aug 6 21:18:37 on console
Oracle Corporation      SunOS 5.10      Generic Patch   January 2005
#
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
clprivnet1: flags=100001000842<BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
        inet 0.0.0.0 netmask 0 broadcast 255.255.255.255
        ether 0:0:0:0:1:0
aggr0: flags=100001000863<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 3
        inet 192.168.56.202 netmask ffffff00 broadcast 192.168.56.255
        ether 8:0:27:4a:fd:a0
vnic1: flags=100001000842<BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 4
        inet 0.0.0.0 netmask 0 broadcast 255.255.255.255
        ether 2:8:20:8a:25:db
vnic2: flags=100001000842<BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 5
        inet 0.0.0.0 netmask 0 broadcast 255.255.255.255
        ether 2:8:20:cf:1d:41
# ping 192.168.56.100
192.168.56.100 is alive
# exit
root@clnode01:/# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Brand       Node Name   Zone Host Name   Status    Zone Status
----      -----       ---------   --------------   ------    -----------
clzc-c0   solaris10   clnode01    clzc-n0          Offline   Running
                      clnode02    clzc-n1          Offline   Running

root@clnode01:/#
OK, let's install the cluster software for Solaris 10; in my case this was Oracle Solaris Cluster 3.3u2. I had to remove patches 146090-06, 150050-01 and 150101-01 from my patch_order file, because these patches could not be installed (the packages they patch were not present).
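I simply filtered those patch IDs out of the patch_order file before starting the installation; a sketch of how this can be done (the backup copy is my own convention):

root@clnode01:/# cd /net/filer/downloads/cluster-patches
root@clnode01:/net/filer/downloads/cluster-patches# cp patch_order patch_order.orig
root@clnode01:/net/filer/downloads/cluster-patches# egrep -v '146090-06|150050-01|150101-01' patch_order.orig > patch_order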
root@clnode01:/# clzonecluster install-cluster \
> -d /net/filer/downloads/cluster \
> -p patchdir=/net/filer/downloads/cluster-patches,\
> patchlistfile=/net/filer/downloads/cluster-patches/patch_order \
> -s all clzc-c0
Preparing installation. Do not interrupt ...
Installing the packages for zone cluster "clzc-c0" ...
root@clnode01:/#
root@clnode01:/# clzonecluster reboot clzc-c0
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "clzc-c0"...
root@clnode01:/# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Brand       Node Name   Zone Host Name   Status    Zone Status
----      -----       ---------   --------------   ------    -----------
clzc-c0   solaris10   clnode01    clzc-n0          Online    Running
                      clnode02    clzc-n1          Online    Running

root@clnode01:/#
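With the zone cluster online, the 3.3u2 framework inside the zones can be checked with the usual commands; for example (a sketch), from one of the zone-cluster nodes:

root@clnode01:/# zlogin clzc-c0
# /usr/cluster/bin/clnode status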
And that's it... To configure resource groups (RG) and resources (RS), log in to the zone cluster and use the familiar cluster commands.
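A very small example of that workflow inside the zone (the resource-group name app-rg is made up for illustration):

# /usr/cluster/bin/clresourcegroup create app-rg
# /usr/cluster/bin/clresourcegroup online -M app-rg
# /usr/cluster/bin/clresourcegroup status

Real deployments would then add logical hostnames, storage and application resources to the group with the matching clreslogicalhostname and clresource commands.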