So the idea is to use OCFS2 to provide a shared filesystem between two nodes. How do we set it up?
Let's begin...
- Configure the IP addresses and edit /etc/hosts on both nodes (either the public or a private interface can be used) for the cluster heartbeat.

node01:
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr xx.xx.xx.xx...
inet addr:192.168.1.21

node02:
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr xx.xx.xx.xx...
inet addr:192.168.1.22

/etc/hosts file on both nodes:
192.168.1.21 node01
192.168.1.22 node02
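Before going further, a quick check of my own (not part of the original steps): make sure each node can resolve and reach the other by the names in /etc/hosts.
node01:
# ping -c 2 node02
node02:
# ping -c 2 node01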
- Make the system reboot automatically on a kernel panic (both nodes).

Modify /etc/sysctl.conf:
kernel.panic = 60

# sysctl -p
kernel.panic = 60
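It is also commonly recommended for OCFS2 clusters to panic on a kernel oops, so a half-hung node fences itself instead of stalling the cluster. This extra setting is my addition, not part of the original steps (both nodes):
Add to /etc/sysctl.conf:
kernel.panic_on_oops = 1
# sysctl -p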
- Check the system, download the packages from http://oss.oracle.com, and install them (assume: Red Hat 4).

# uname -rm
2.6.9-78.ELsmp x86_64
# rpm -qf /boot/vmlinuz-`uname -r` --queryformat "%{ARCH}\n"
x86_64

-> Download:
ocfs2-tools-1.2.7-1.el4.x86_64.rpm
ocfs2console-1.2.7-1.el4.x86_64.rpm (optional, if you want to use the ocfs2console tool)
ocfs2-2.6.9-78.ELsmp-1.2.9-1.el4.x86_64.rpm

-> Install (both nodes):
# rpm -ivh ocfs2-tools-1.2.7-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2-tools ########################################### [100%]
# rpm -ivh ocfs2console-1.2.7-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2console ########################################### [100%]
# rpm -ivh ocfs2-2.6.9-78.ELsmp-1.2.9-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2-2.6.9-78.ELsmp ########################################### [100%]
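A quick way to confirm all three packages really are installed on both nodes (my own check, not from the original listing):
# rpm -qa | grep -i ocfs2
All three packages (ocfs2-tools, ocfs2console, and the kernel module package) should be listed.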
- Configure the cluster service on node01 (the result is written to /etc/ocfs2/cluster.conf).

# o2cb_ctl -C -i -n node01 -t node -a number=1 -a ip_address=192.168.1.21 -a ip_port=7777 -a cluster=ocfs2

-> Add node (node02):
# o2cb_ctl -C -n node02 -t node -a number=2 -a ip_address=192.168.1.22 -a ip_port=7777 -a cluster=ocfs2

Copy the /etc/ocfs2/ folder to node02 (see the scp sketch after the cluster.conf listing below).

/etc/ocfs2/cluster.conf file:
node:
        ip_port = 7777
        ip_address = 192.168.1.21
        number = 1
        name = node01
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.22
        number = 2
        name = node02
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
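The copy of /etc/ocfs2/ mentioned above can be done with scp; a minimal sketch, assuming root SSH access from node01 to node02:
# scp -r /etc/ocfs2 node02:/etc/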
- Configure the cluster service (both nodes).

Node01:
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
Node02:
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Creating directory '/config': OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
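Before moving on to formatting, it does no harm to confirm the cluster reports as online on both nodes (the answers given above are persisted by the init script, typically in /etc/sysconfig/o2cb, so they survive reboots):
# /etc/init.d/o2cb status
The output should include "Checking O2CB cluster ocfs2: Online"; the full output is shown in the testing section below.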
- Format the partition (assume: /dev/emcpowera1 is used).

# mkfs.ocfs2 -b 4k -C 4k -L "shares" -N 8 /dev/emcpowera1
mkfs.ocfs2 1.2.7
Filesystem label=shares
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=10737401856 (2621436 clusters) (2621436 blocks)
82 cluster groups (tail covers 8700 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 8
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
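A sanity check of my own before mounting: mounted.ocfs2 (shipped with ocfs2-tools) lists the OCFS2 volumes it detects, which makes it easy to confirm that both nodes see the newly formatted device with the same label and UUID.
# mounted.ocfs2 -d
Run it on both nodes and compare the output.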
- Testing (both nodes):

# mount -t ocfs2 /dev/emcpowera1 /shares/

Or modify /etc/fstab (all nodes):
/dev/emcpowera1 /shares ocfs2 _netdev 0 0

and then start ocfs2:
# /etc/init.d/ocfs2 start
Starting Oracle Cluster File System (OCFS2) [ OK ]

# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/emcpowera1 10485744 530616 9955128 6% /shares

Check the mount point on both nodes and try reading and writing files in the shared folder (see the test below).
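A minimal cross-node read/write test, assuming /shares exists and is mounted on both machines (the file name below is just an example):
node01:
# echo "hello from node01" > /shares/testfile
node02:
# cat /shares/testfile
hello from node01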
# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active
# /etc/init.d/ocfs2 status
Configured OCFS2 mountpoints: /shares
Active OCFS2 mountpoints: /shares
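Since the /etc/fstab entry uses _netdev, it is the ocfs2 init script (used above) that mounts the volume at boot. The packages normally register both services with chkconfig, but it does not hurt to make sure they are enabled (my addition, not part of the original post):
# chkconfig o2cb on
# chkconfig ocfs2 on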
Reference: http://oss.oracle.com/projects/ocfs2/dist/documentation/v1.2/ocfs2_faq.html