Setting up a Highly Available Storage Cluster using GlusterFS and UCARP
Tuesday, November 23rd, 2010 by William Edwards
Storage is critically important in an enterprise environment: it is the back end for your applications, databases, and everything else you run. In many cases this data is vital to the operation of your organization and must be available at all times. To meet that need, most businesses purchase SAN appliances, which can cost tens of thousands of dollars. Alternatively, you can provide the same functionality on commodity hardware with GlusterFS and UCARP.
GlusterFS is a clustered file system capable of scaling to several petabytes, and UCARP is a portable implementation of the CARP protocol that provides IP failover. In this article, I am going to show you how to use GlusterFS and UCARP to build your own highly available storage cluster for all of your storage needs. You will be able to access the cluster via NFS, Samba, or the native GlusterFS client, so it can serve virtual machines and other services.
Instructions
There are several ways to create a highly available cluster. The most basic would be a simple mirror between two storage nodes. In this tutorial, however, we are going to use the “distribute over mirrors” layout (comparable to RAID 10) so that we get both high availability and high performance. The following diagram shows how files are distributed between the nodes of the cluster:
So for this tutorial, we will be using four separate nodes with distributed replicated volumes to give us the high availability and high performance that we want. For demonstration purposes, I have outlined the nodes here; be sure to change the IP addresses to suit your environment (a sample /etc/hosts entry for each node follows the list):
Hostname – IP Address
glusterfs-01 – 192.168.1.10
glusterfs-02 – 192.168.1.11
glusterfs-03 – 192.168.1.12
glusterfs-04 – 192.168.1.13
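If you would rather refer to the nodes by hostname, a minimal /etc/hosts sketch based on the addresses above could be added to every node; the commands in the rest of this tutorial stick to plain IP addresses:
192.168.1.10 glusterfs-01
192.168.1.11 glusterfs-02
192.168.1.12 glusterfs-03
192.168.1.13 glusterfs-04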
Step 1 – All Nodes
Install UCARP and the dependencies for GlusterFS:
CentOS / RHEL / Fedora
yum -y install fuse fuse-libs libibverbs wget openssh-server ucarp
Note: You may need to configure the EPEL repository for your server if these packages are not available. You can add the EPEL repository by executing the following command:
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
Ubuntu / Debian
sudo apt-get install libibverbs1 ucarp
Step 2 – All Nodes
Download and install the GlusterFS server:
CentOS / RHEL / Fedora
wget http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/RHEL/glusterfs-core-3.1.0-1.x86_64.rpm
wget http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/RHEL/glusterfs-fuse-3.1.0-1.x86_64.rpm
rpm -Uvh glusterfs-*.rpm
Ubuntu / Debian
wget http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/Ubuntu/glusterfs_3.1.0-1_amd64.deb
sudo dpkg -i glusterfs*.deb
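To confirm that the packages installed correctly, you can check the reported version on each node (it should match the 3.1.0 packages downloaded above):
glusterfs --version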
Step 3 – All Nodes
Start the GlusterFS service and ensure that GlusterFS is set to start automatically:
CentOS / RHEL / Fedora
chkconfig glusterd on
service glusterd start
Ubuntu / Debian
sudo /etc/init.d/glusterfs-server start
sudo update-rc.d glusterfs-server defaults
Step 4 – glusterfs-01
On the first node (glusterfs-01), probe for the other GlusterFS nodes. Be sure to replace the IP addresses here with the actual IPs of your glusterfs nodes. Note: Do NOT run these commands on the other servers.
gluster peer probe 192.168.1.11
gluster peer probe 192.168.1.12
gluster peer probe 192.168.1.13
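Once the probes succeed, you can confirm from glusterfs-01 that the other nodes have joined the cluster:
gluster peer status
Each of the three probed nodes should be listed with a connected state.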
Step 5 – All Nodes
On all nodes, create the directory that will hold the data:
mkdir -p /export/data
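In production you would typically back /export/data with a dedicated filesystem rather than the root partition. As a rough sketch only, assuming a hypothetical spare disk /dev/sdb1 that you are willing to format as ext3, you might prepare it like this on each node:
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /export/data
echo "/dev/sdb1 /export/data ext3 defaults 0 0" >> /etc/fstab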
Step 6 – glusterfs-01
On the first node (glusterfs-01), create the datastore volume using the following command. Again, be sure to change the IP addresses to those of your actual storage nodes:
gluster volume create datastore replica 2 transport tcp 192.168.1.10:/export/data 192.168.1.11:/export/data 192.168.1.12:/export/data 192.168.1.13:/export/data
Next, start the datastore:
gluster volume start datastore
You can check the volume’s status by running the following command:
gluster volume info
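If everything went as planned, the output should report the volume type as Distributed-Replicate with four bricks (two replica pairs), listing each node's /export/data brick.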
You can now use your storage cluster! To mount the storage cluster as an NFS share, you can run this command:
mount -t nfs 192.168.1.10:/datastore /mnt
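The NFS server built into GlusterFS 3.1 speaks NFSv3 over TCP only, so depending on your client's defaults you may need to request the version and transport explicitly. A variant of the same mount, assuming a Linux client with standard nfs-utils mount options:
mount -t nfs -o vers=3,mountproto=tcp 192.168.1.10:/datastore /mnt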
Ideally, you should use the GlusterFS client to connect to the storage cluster, since the native client talks to all of the storage nodes at once and needs no manual fail-over mechanism (an example mount follows below). However, some services are unable to use the native GlusterFS client (such as VMware ESX servers) and will need to use an NFS or Samba share instead. Unfortunately, once you introduce these protocols you lose high availability: if the node you are connected to fails, the mount fails with it. This is where UCARP comes in.
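For reference, the native mount looks almost identical to the NFS one. This is a minimal sketch assuming the FUSE client pieces installed in Step 2; the address only needs to point at one node, because the client fetches the cluster layout from it and then connects to every node directly:
mount -t glusterfs 192.168.1.10:/datastore /mnt
Clients that can use this mount get failover from the client itself; the UCARP configuration that follows is mainly for the NFS and Samba cases.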
Step 7 – All Nodes
With UCARP, we will configure a virtual IP address that is shared between all of the nodes. When the node currently holding the address goes down, another node takes it over, giving us the high availability that we need.
In this example, we’re going to use a shared IP address of 192.168.1.14. Copy and edit the UCARP configuration example:
cp /etc/ucarp/vip-001.conf.example /etc/ucarp/vip-001.conf
You will then need to adjust the configuration on each node to match that node's IP address:
nano /etc/ucarp/vip-001.conf
glusterfs-01
# ID
ID=001
# Network Interface
BIND_INTERFACE=eth0
# Real IP
SOURCE_ADDRESS=192.168.1.10
# Virtual IP
VIP_ADDRESS=192.168.1.14
# CARP Password
PASSWORD=SuperSecretPassword
glusterfs-02
# ID
ID=002
# Network Interface
BIND_INTERFACE=eth0
# Real IP
SOURCE_ADDRESS=192.168.1.11
# Virtual IP
VIP_ADDRESS=192.168.1.14
# CARP Password
PASSWORD=SuperSecretPassword
glusterfs-03
# ID
ID=003
# Network Interface
BIND_INTERFACE=eth0
# Real IP
SOURCE_ADDRESS=192.168.1.12
# Virtual IP
VIP_ADDRESS=192.168.1.14
# CARP Password
PASSWORD=SuperSecretPassword
glusterfs-04
# ID
ID=004
# Network Interface
BIND_INTERFACE=eth0
# Real IP
SOURCE_ADDRESS=192.168.1.13
# Virtual IP
VIP_ADDRESS=192.168.1.14
# CARP Password
PASSWORD=SuperSecretPassword
Step 8 – All Nodes
Enable UCARP so that it starts on boot, then start the service:
chkconfig ucarp on
service ucarp start
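To see which node currently owns the virtual IP, check the interface on each node (this assumes the eth0 interface used in the configuration above):
ip addr show eth0
The current master will list 192.168.1.14 as an extra address on eth0. Stopping UCARP on that node with "service ucarp stop" is a quick, if crude, way to confirm that one of the other nodes claims the address within a few seconds.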
You should now be able to mount the storage cluster using the shared IP address! Try it out with this command:
mount -t nfs 192.168.1.14:/datastore /mnt
Now, should any one of the nodes in the cluster fail, another storage node will take over the shared IP, giving you the high availability that you need. You can also spread the load across the cluster by using round-robin DNS, so that different clients end up mounting from different storage nodes (see the example zone records below).
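As a minimal sketch, assuming a hypothetical internal zone such as example.lan and a name like storage.example.lan, the round-robin records for the four nodes would look something like this in a BIND zone file; most DNS servers rotate the order of the answers between queries:
storage IN A 192.168.1.10
storage IN A 192.168.1.11
storage IN A 192.168.1.12
storage IN A 192.168.1.13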
Reference: http://www.misdivision.com/blog/setting-up-a-highly-available-storage-cluster-using-glusterfs-and-ucarp