Showing posts with label ocfs2. Show all posts

Thursday, April 16, 2009

OCFS2 Shared Folders

I need to share a folder (files) between 2 nodes, to hold HTTP content and HTTP configuration files.

So the idea is to use OCFS2. How do we set up a shared OCFS2 filesystem?

Begin...

- Configure IP addresses and modify /etc/hosts on both nodes (either the public or a private interface can be used) for the heartbeat network.

node01:
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr xx.xx.xx.xx...
inet addr:192.168.1.21
node02:
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr xx.xx.xx.xx...
inet addr:192.168.1.22
/etc/hosts file on both nodes.
192.168.1.21 node01
192.168.1.22 node02
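The hosts entries above can be staged and checked with a quick sketch. The temp-file path here is only so the snippet is safe to run anywhere; on a real node you would append the same two lines to /etc/hosts itself:

```shell
# Write the two heartbeat entries to a scratch file (stand-in for /etc/hosts).
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.1.21 node01
192.168.1.22 node02
EOF
# Both node names should be present.
grep -c 'node0' "$hosts_file"   # → 2
```

After updating the real file, a `ping node02` from node01 (and vice versa) confirms the heartbeat network is reachable.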
- Make the system auto-reboot on a kernel panic (both nodes).

Add to /etc/sysctl.conf (the system will reboot 60 seconds after a panic):
kernel.panic = 60
# sysctl -p
kernel.panic = 60
- Check the system, then download packages from http://oss.oracle.com and install them (assume: RHEL 4).
# uname -rm
2.6.9-78.ELsmp x86_64

# rpm -qf /boot/vmlinuz-`uname -r` --queryformat "%{ARCH}\n"
x86_64
-> Download:
ocfs2-tools-1.2.7-1.el4.x86_64.rpm
ocfs2console-1.2.7-1.el4.x86_64.rpm (optional, if you want to use the ocfs2console tool)
ocfs2-2.6.9-78.ELsmp-1.2.9-1.el4.x86_64.rpm
-> Install (both nodes):
# rpm -ivh ocfs2-tools-1.2.7-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2-tools ########################################### [100%]
# rpm -ivh ocfs2console-1.2.7-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2console ########################################### [100%]
# rpm -ivh ocfs2-2.6.9-78.ELsmp-1.2.9-1.el4.x86_64.rpm
Preparing... ########################################### [100%]
1:ocfs2-2.6.9-78.ELsmp ########################################### [100%]
- Configure the cluster (this creates the /etc/ocfs2/cluster.conf file) on node01.
# o2cb_ctl -C -i -n node01 -t node -a number=1 -a ip_address=192.168.1.21 -a ip_port=7777 -a cluster=ocfs2
-> Add Node (node02)
# o2cb_ctl -C -n node02 -t node -a number=2 -a ip_address=192.168.1.22 -a ip_port=7777 -a cluster=ocfs2
Copy the /etc/ocfs2/ directory to node02.
The resulting /etc/ocfs2/cluster.conf:
node:
ip_port = 7777
ip_address = 192.168.1.21
number = 1
name = node01
cluster = ocfs2

node:
ip_port = 7777
ip_address = 192.168.1.22
number = 2
name = node02
cluster = ocfs2

cluster:
node_count = 2
name = ocfs2
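For reference, the same cluster.conf can also be written by hand instead of via o2cb_ctl. A minimal sketch, written to a temp path so it is safe to run as-is (the real file is /etc/ocfs2/cluster.conf, and in the stanza format each attribute line is indented with a tab):

```shell
# Generate the two-node cluster.conf shown above (tab-indented attributes).
conf=$(mktemp)
{
  printf 'node:\n\tip_port = 7777\n\tip_address = 192.168.1.21\n\tnumber = 1\n\tname = node01\n\tcluster = ocfs2\n\n'
  printf 'node:\n\tip_port = 7777\n\tip_address = 192.168.1.22\n\tnumber = 2\n\tname = node02\n\tcluster = ocfs2\n\n'
  printf 'cluster:\n\tnode_count = 2\n\tname = ocfs2\n'
} > "$conf"
# Two node stanzas should be present.
grep -c '^node:' "$conf"   # → 2
```

The file must be identical on both nodes, so copy it over (e.g. with scp) rather than retyping it.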
- Configure Cluster Service (both nodes).
Node01:
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
&lt;ENTER&gt; without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
Node02:
# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
&lt;ENTER&gt; without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Creating directory '/config': OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
- Format the partition (assume: /dev/emcpowera1):
# mkfs.ocfs2 -b 4k -C 4k -L "shares" -N 8 /dev/emcpowera1
mkfs.ocfs2 1.2.7
Filesystem label=shares
Block size=4096 (bits=12)
Cluster size=4096 (bits=12)
Volume size=10737401856 (2621436 clusters) (2621436 blocks)
82 cluster groups (tail covers 8700 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 8
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
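As a sanity check on the numbers in the mkfs output: with a 4 KiB cluster size, the 10737401856-byte volume divides into exactly the reported cluster count:

```shell
# Volume size / cluster size = cluster count (matches the mkfs.ocfs2 log above).
volume_bytes=10737401856
cluster_size=4096
echo $(( volume_bytes / cluster_size ))   # → 2621436
```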
- Testing (both nodes):
# mount -t ocfs2 /dev/emcpowera1 /shares/
Or modify /etc/fstab (on both nodes):

/dev/emcpowera1 /shares ocfs2 _netdev 0 0

and then start the ocfs2 service:
# /etc/init.d/ocfs2 start
Starting Oracle Cluster File System (OCFS2) [ OK ]
# df
Filesystem 1K-blocks Used Available Use% Mounted on

/dev/emcpowera1 10485744 530616 9955128 6% /shares
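The Use% column follows from the other two numbers: df reports used/(used+available) as a percentage, rounded up. A quick check against the line above (assuming standard df rounding):

```shell
# ceil(100 * used / (used + available)) reproduces the 6% shown by df.
used=530616
avail=9955128
echo $(( (used * 100 + used + avail - 1) / (used + avail) ))   # → 6
```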

# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active

# /etc/init.d/ocfs2 status
Configured OCFS2 mountpoints: /shares
Active OCFS2 mountpoints: /shares
- Check the mount point on both nodes, then test reading and writing files in the shared folder.

Reference: http://oss.oracle.com/projects/ocfs2/dist/documentation/v1.2/ocfs2_faq.html

Wednesday, March 14, 2007

[OCFS2] setup example

I have 2 servers

cdb01
cdb02


And I want to install OCFS2
1. Download the OCFS2 packages (http://oss.oracle.com), install them, and enable o2cb:
# rpm -ivh ocfs2* (both nodes)
# /etc/init.d/o2cb enable (both nodes)

2. Create the config file

# ocfs2console (run on cdb01)


-- copy /etc/ocfs2/cluster.conf to the second node (cdb02)

#cat /etc/ocfs2/cluster.conf
node:
ip_port = 17777
ip_address = 10.10.50.11
number = 0
name = cdb01
cluster = ocfs2
node:
ip_port = 17777
ip_address = 10.10.50.12
number = 1
name = cdb02
cluster = ocfs2
cluster:
node_count = 2
name = ocfs2

--- load and mount disks
# /etc/init.d/o2cb load
# /etc/init.d/o2cb start
edit /etc/fstab
/dev/sda10 /shares ocfs2 rw,_netdev,heartbeat=local 0 0


---- bring the cluster online and start ocfs2
# /etc/init.d/o2cb online ocfs2
# /etc/init.d/ocfs2 start



----- edit the kernel boot entry (note elevator=deadline)
# cat /boot/grub/grub.conf

title Red Hat Enterprise Linux AS (2.6.9-42.ELsmp)
root (hd0,0)
kernel /vmlinuz-2.6.9-42.ELsmp ro root=LABEL=/ rhgb quiet elevator=deadline
initrd /initrd-2.6.9-42.ELsmp.img


---- resolve kernel panic (auto-reboot on panic)
# cat /etc/sysctl.conf
kernel.panic = 60

---- check heartbeat

# cat /etc/sysconfig/o2cb
# O2CB_ENABLED: 'true' means to load the driver on boot.
O2CB_ENABLED=true
# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
O2CB_BOOTCLUSTER=ocfs2
# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
O2CB_HEARTBEAT_THRESHOLD=61
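The threshold value maps to wall-clock fencing time: per the OCFS2 1.2 FAQ, the disk heartbeat ticks every two seconds, so a node is declared dead after (threshold - 1) * 2 seconds. A quick check for the value above:

```shell
# O2CB_HEARTBEAT_THRESHOLD=61 → (61 - 1) * 2 = 120 seconds before fencing.
threshold=61
echo $(( (threshold - 1) * 2 ))   # → 120
```

By the same formula, the default threshold of 31 used in the first post corresponds to 60 seconds.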