Tuesday, December 8, 2009

Handy Ubuntu configuration and commands after installation + Apache/PHP

Ubuntu

configuration after setup.

bring up the interface / set a fixed IP at startup
> /etc/network/interfaces
auto eth1
iface eth1 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1

set name server
> /etc/resolv.conf
nameserver 204.11.126.131


install ssh for remote login
> openssh
sudo apt-get install openssh-client openssh-server
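Once the server package is installed you can log in from another box; a quick example, using the account and static IP from earlier in this post:

ssh username@192.168.0.2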


>adduser
sudo adduser username


>add a super user (admin group / sudoer)
sudo adduser xeon
sudo adduser xeon admin
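To double-check that the new account really ended up in the admin group (and therefore gets sudo through the default /etc/sudoers rule), for example:

groups xeon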


>php5 + mysql
sudo apt-get install php5 php5-cgi php5-cli php5-mysql php5-memcache php5-gd php5-imagick
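Quick sanity check that the CLI is working (assuming php5-cli registers the php alternative, as it normally does):

php -v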

>apache
sudo apt-get install apache2-mpm-worker
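One way to confirm which MPM the installed binary was built with (it should report Server MPM: Worker):

apache2ctl -V | grep -i mpm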


>remove service from startup
update-rc.d -f apache2 remove

>add service
update-rc.d apache2 defaults

>more reading on services
http://www.debuntu.org/how-to-manage-services-with-update-rc.d
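If you want explicit runlevels and priorities instead of the defaults, the older explicit form looks like this (first remove the existing links with update-rc.d -f apache2 remove; the numbers are only an example):

update-rc.d apache2 start 20 2 3 4 5 . stop 80 0 1 6 .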


>NFS server
sudo apt-get install nfs-kernel-server
sudo /etc/init.d/nfs-kernel-server start

>NFS client
sudo apt-get install nfs-common
For more info about nfs : https://help.ubuntu.com/community/SettingUpNFSHowTo


>/etc/exports
/ubuntu *(ro,sync,no_root_squash)
/home *(rw,sync,no_root_squash)
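After editing /etc/exports, re-read the export list on the server:

sudo exportfs -ra

and mount the share from a client (the server address is just the example IP used earlier in this post):

sudo mkdir -p /mnt/home
sudo mount -t nfs 192.168.0.2:/home /mnt/home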

>package management - install
# apt-get install packagename
-check the installed vs. candidate (update) version:

# apt-cache policy apache2
apache2:
  Installed: 2.2.8-1ubuntu0.3
  Candidate: 2.2.8-1ubuntu0.14
  Version table:
     2.2.8-1ubuntu0.14 0
        500 http://th.archive.ubuntu.com hardy-updates/main Packages
        500 http://security.ubuntu.com hardy-security/main Packages
 *** 2.2.8-1ubuntu0.3 0
        100 /var/lib/dpkg/status
     2.2.8-1 0
        500 http://th.archive.ubuntu.com hardy/main Packages

Remark
*apache worker MPM does not work with libapache2-mod-php5;
 use mod_fcgid and php5-cgi instead (see the sketch below)

*NFS: add client hosts to /etc/hosts.allow
*snmpd: change the init options in /etc/default/snmpd
*rp_filter needs to be disabled on all interfaces
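A minimal sketch of the mod_fcgid + php5-cgi setup from the first remark; directive names and paths below match the mod_fcgid/php5 packages of that era, so treat it as a starting point rather than a drop-in config:

sudo apt-get install libapache2-mod-fcgid php5-cgi
sudo a2enmod fcgid

then something like this in /etc/apache2/conf.d/php-fcgid.conf (illustrative):

<Directory /var/www>
    Options +ExecCGI
    AddHandler fcgid-script .php
    FCGIWrapper /usr/bin/php5-cgi .php
</Directory>

sudo /etc/init.d/apache2 restart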

How to monitor/manage software RAID

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb5[2] sda5[0]
9767424 blocks [2/1] [U_]
[=====>...............] recovery = 25.8% (2528512/9767424) finish=4.4min speed=27019K/sec

md2 : active raid1 sdb6[1]
19534912 blocks [2/1] [_U]

md3 : active raid1 sdb8[1] sda8[0]
44756992 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
64128 blocks [2/2] [UU]

# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Sat Jul 9 21:46:05 2005
Raid Level : raid1
Array Size : 9767424 (9.31 GiB 10.00 GB)
Used Dev Size : 9767424 (9.31 GiB 10.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Tue Dec 8 12:59:37 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 5757d121:b65d61e8:0f50c7af:cbb83343
Events : 0.367292852

    Number   Major   Minor   RaidDevice   State
       0       8       5         0        active sync   /dev/sda5
       1       0       0         1        removed
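For hands-off monitoring, mdadm can also run as a daemon and mail you when an array degrades (the Debian/Ubuntu mdadm package normally starts this from its init script, and MAILADDR can be set in /etc/mdadm/mdadm.conf instead of on the command line):

# mdadm --monitor --scan --daemonise --mail=root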


Rebuild raid :
# mdadm --add /dev/md1 /dev/sdb5
mdadm: re-added /dev/sdb5

Mark device as faulty
#mdadm --manage /dev/md0 --fail /dev/sda2

Remove device from array
#mdadm --remove /dev/md3 /dev/sdb3

Stop & remove raid
*Please unmount all filesystems before stopping/removing the RAID.
#mdadm --stop /dev/md2
#mdadm --zero-superblock /dev/sda2

----

How to : Software Raid migration

Now use sfdisk to duplicate the partition layout from the old drive to the new drive:

#sfdisk -d /dev/sda | sfdisk /dev/sdb

Now use mdadm to create the raid arrays. We mark the first drive (sda) as "missing" so it doesn't wipe out our existing data:

#mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
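Repeat the --create for the other partitions, then put filesystems on the new, still-degraded arrays before copying anything onto them (device names and ext3 below simply follow this example layout):

#mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
#mkfs.ext3 /dev/md0
#mkfs.ext3 /dev/md1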

Now copy the remaining partitions. Be careful to match your md devices with your filesystem layout. This example is for my particular setup:

mount /dev/md1 /mnt/var
cp -dpRx /var /mnt
... repeat for the remaining folders.

Set up /etc/fstab and GRUB so the system boots from the mdX devices (rough sketch below).
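This is only an illustration; the mount points, filesystem types, and kernel path are assumptions, so match them to your real layout:

/etc/fstab:
/dev/md0   /      ext3   defaults   0 1
/dev/md1   /var   ext3   defaults   0 2

/boot/grub/menu.lst (GRUB legacy) - point root= on the kernel line at the md device:
kernel /boot/vmlinuz root=/dev/md0 ro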

Reboot

At this point, you have all of your original data on the new drive, so we can safely add the original drive to the raid volume.

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
... repeat for remaining partitions.

Thanks to :
http://www.debian-administration.org/articles/238

How to : software raid GRUB + RAID


In the Software RAID howto it is mentioned that it is not known how
to set up GRUB to boot off RAID. Here is how I did it:
**Follow at your own risk. If you break something it's your fault.**
==================================================================
Configuration:
- /dev/hda (Pri. Master) 60 GB Seagate HDD (blank)
- /dev/hdc (Sec. Master) 60 GB Seagate HDD (blank)
- /dev/hdd (Sec. Slave) CDROM Drive

Setup Goals:
- /boot as /dev/md0: RAID1 of /dev/hda1 & /dev/hdc1 for redundancy
- / as /dev/md1: RAID1 of /dev/hda2 & /dev/hdc2 for redundancy
- swap*2 with equal priority: /dev/hda3 & /dev/hdc3 for more speed
- GRUB installed in the boot records of /dev/hda and /dev/hdc so either
drive can fail and the system will still boot.

Tools:
- mdadm (http://www.cse.unsw.edu.au/~neilb/source/mdadm/)
(I used 1.2.0, but notice that as of 20030729 1.3.0 is available)

1. Boot up off rescue/installation CD/disk/HDD/whatever with mdadm
tools installed.

2. Partitioning of hard drives:
(I won't show you how to do this. See: # man fdisk ; man sfdisk )
But here's how stuff was arranged:


# sfdisk -l /dev/hda

Disk /dev/hda: 7297 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot  Start     End   #cyls    #blocks   Id  System
/dev/hda1   *      0+      16     17-     136521   fd  Linux raid autodetect
/dev/hda2         17    7219    7203   57858097+   fd  Linux raid autodetect
/dev/hda3       7220    7296      77     618502+   82  Linux swap
/dev/hda4          0       -       0          0    0   Empty
------------------------------------------------------------------
To make /dev/hdc the same:
------------------------------------------------------------------
# sfdisk -d /dev/hda | sfdisk /dev/hdc
------------------------------------------------------------------
/dev/hd[ac]1 for /dev/md0 for /boot
/dev/hd[ac]2 for /dev/md1 for /
/dev/hd[ac]3 for 2*swap
It is important to make md-to-be partitions with ID 0xFD, not 0x83.

3. Set up md devices: (both are RAID1 [mirrors])
------------------------------------------------------------------
# mdadm --create /dev/md0 --level=1 \
--raid-devices=2 /dev/hda1 /dev/hdc1
# mdadm --create /dev/md1 --level=1 \
--raid-devices=2 /dev/hda2 /dev/hdc2
------------------------------------------------------------------

• Create the metadevice nodes with: cd /dev && MAKEDEV md
(or manually: mknod /dev/md1 b 9 1 )
• Create the RAID arrays with:
◦ mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
◦ mdadm --create --verbose /dev/md5 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
◦ mdadm --create --verbose /dev/md6 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6

Wait for the RAID build to finish first; check progress with cat /proc/mdstat

Note: if the arrays have already been created, use mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 instead

4. Make filesystems:
------------------------------------------------------------------
# mke2fs /dev/md0
# mkreiserfs /dev/md1
# mkswap /dev/hda3
# mkswap /dev/hdc3
------------------------------------------------------------------

5. Install Your distribution:
Simply treat /dev/md0 and /dev/md1 as the partitions to install on,
and install the way you normally do.

Here are the relevant /etc/fstab entries for the newly created
partitions:
------------------------------------------------------------------
/dev/md0 /boot ext2 noauto,noatime 1 1
/dev/md1 / reiserfs noatime 1 1
/dev/hda3 none swap sw,pri=1 0 0
/dev/hdc3 none swap sw,pri=1 0 0
------------------------------------------------------------------
The "pri=1" for each of the swap partitions makes them the same
priority so the kernel does striping and that speeds up vm. Of
course, this means that if a disk dies then the system may crash,
needing a reboot. Perhaps it would be wiser to make hd[ac]3 a RAID1
array too, and just use /dev/md2 as swap.
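An untested sketch of that alternative, keeping the device names used above:
------------------------------------------------------------------
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3
# mkswap /dev/md2
------------------------------------------------------------------
and then the two swap lines in /etc/fstab become a single entry:
/dev/md2 none swap sw 0 0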

6. Setting up GRUB: (assuming you've already installed it)
------------------------------------------------------------------
# grub
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are
embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p
(hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are
embedded.
succeeded
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p
(hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.

grub> quit
------------------------------------------------------------------
Here is what /boot/grub/grub.conf looks like (/dev/md0 mounted as
/boot, assuming the kernel is installed as /boot/bzImage and RAID1
support is compiled into the kernel):
------------------------------------------------------------------
# Boot automatically after 30 secs.
timeout 30

# By default, boot the first entry.
default 0

# Fallback to the second entry.
fallback 1

# For booting with disc 0 kernel
title GNU/Linux (hd0,0)
kernel (hd0,0)/bzImage root=/dev/md1

# For booting with disc 1 kernel, if (hd0,0)/bzImage is unreadable
title GNU/Linux (hd1,0)
kernel (hd1,0)/bzImage root=/dev/md1



http://www.linuxsa.org.au/mailing-list/2003-07/1270.html
http://en.gentoo-wiki.com/wiki/RAID/Software
http://www.geisterstunde.org/drupal/?q=raid1
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml