Friday, March 28, 2014

Monday, March 3, 2014

continuing svnsync from svn hotcopy

svnsync can be time-consuming if your repository has many revisions. In this case, I will show how to continue svnsync from a hotcopy.


1. Back up the current repository with hotcopy
- svnadmin hotcopy /svn/repos/repos1/ /tmp/repos1

2. Copy the data (/tmp/repos1) to the new SVN server's repository root directory /svn/repos/

3. (optional) Upgrade the repository format in case the new SVN server is newer than the old SVN server
- svnadmin upgrade /svn/repos/repos1/

4. Initialize the SVN bookkeeping properties to start svnsync
- svn propset --revprop -r0 svn:sync-last-merged-rev revno svn://hostname/reposname/
- svn propset --revprop -r0 svn:sync-from-uuid uuid svn://hostname/reposname/
- svn propset --revprop -r0 svn:sync-from-url svn://oldhost/reposname/ svn://hostname/reposname/
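If you are not sure which values to use, you can read them from the hotcopied repository itself, since hotcopy preserves both the UUID and the revision history. A minimal sketch, assuming the copy now lives at /svn/repos/reposname/ on the new server:
- svnlook uuid /svn/repos/reposname/
- svnlook youngest /svn/repos/reposname/
Use the first output as uuid and the second as revno in the commands above.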

5. Sync data from the old repository to the new one
- svnsync --non-interactive sync svn://hostname/reposname/
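To confirm that the sync has caught up, you can compare the head revision on both servers; a quick check, assuming both URLs are reachable:
- svn info svn://oldhost/reposname/ | grep Revision
- svn info svn://hostname/reposname/ | grep Revision
Both commands should print the same revision number once the sync is complete.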

svnadmin dump, svnadmin load

These subcommands can be used for backing up and restoring an SVN repository.


Dump repository from SVN server
1. Dump the SVN database from the SVN server
- svnadmin dump /svn/repos/repos1/ | gzip -9 > /tmp/repos1.dump.gz

2. Copy the file repos1.dump.gz to your backup location
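As a side note, for a large repository you can dump a revision range incrementally instead of the whole history at once; a sketch, assuming revisions up to 1000 were already dumped (the range here is only an example):
- svnadmin dump /svn/repos/repos1/ -r 1001:2000 --incremental | gzip -9 > /tmp/repos1-incr.dump.gz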

Restore from dumped repository
3. Install the SVN server on the target machine
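Note that svnadmin load needs an existing, empty target repository, so create one first:
- svnadmin create /svn/repos/newrepos/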

4. Import the SVN data from the dump file with this command
- zcat repos1.dump.gz | svnadmin load /svn/repos/newrepos/
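Optionally, verify the restored repository before putting it into service:
- svnadmin verify /svn/repos/newrepos/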

Mirroring svn


Mirror an SVN server (and optionally replace the master server)

1. Install the mirror SVN server

2. Allow revision property changes, which svnsync requires, by installing a pre-revprop-change hook that always succeeds
- echo '#!/bin/sh' > /svn/repos/repos1/hooks/pre-revprop-change
- chmod +x /svn/repos/repos1/hooks/pre-revprop-change

3. Initialize the svnsync configuration
- svnsync init svn://mirror/repos1/ svn://original/repos1/

4. Start syncing the SVN data
- svnsync --non-interactive sync svn://mirror/repos1/

5. Set up a schedule to resync SVN periodically
- crontab -e 
*/30 * * * * svnsync --non-interactive sync svn://mirror/repos1/ > /dev/null

- /etc/init.d/cron reload
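If a sync run can take longer than 30 minutes, overlapping cron runs may step on each other; one defensive variant of the crontab line, assuming the flock utility from util-linux is available:

*/30 * * * * flock -n /tmp/svnsync-repos1.lock svnsync --non-interactive sync svn://mirror/repos1/ > /dev/null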


In case you need the mirror to replace the master, set the mirror's UUID to match the original:
- svnadmin setuuid /svn/repos/repos1/ uuid

Replace uuid with the value from the original server's /svn/repos/repos1/db/uuid file.
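For example, print the UUID on the original server and then apply it on the mirror; a sketch using the repository paths above, where <uuid-from-original> is a placeholder for the printed value:
- svnlook uuid /svn/repos/repos1/                            # run on the original server
- svnadmin setuuid /svn/repos/repos1/ <uuid-from-original>   # run on the mirror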

svn/subversion server installation

Here is a short note on setting up a Subversion server.

1. Install SVN server (subversion)
- apt-get install subversion

2. Create svnuser account
- useradd -m -s /bin/bash svnuser 

3. Create subversion working directory
- mkdir -p /svn/repos/
- chown -R svnuser:svnuser /svn/
- cd /svn/repos/

4. Switch to the svnuser account
- su svnuser

5. Create an SVN repository
- svnadmin create repos1

6. Configure the repository password
- vi /svn/repos/repos1/conf/svnserve.conf

#####
[general]
anon-access = none
auth-access = write
password-db = passwd
#####

Repository password file
- vi /svn/repos/repos1/conf/passwd

#####
[users]
# username = password example
user1 = pass1
#####

- chmod 600 /svn/repos/repos1/conf/passwd


7. Start the SVN service
# log in to the svnuser account
- su svnuser
- svnserve -d -r /svn/repos/

8. Set svn start up on boot with user svnuser
- vi /etc/rc.local

#####
su svnuser -c "svnserve -d -r /svn/repos"
#####
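To verify the setup from a client machine, try a checkout; a quick test, assuming the server is reachable as hostname and using the user1 account created above:
- svn checkout svn://hostname/repos1/ --username user1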

APT cacher installation

If you run several servers with the same distribution, such as Ubuntu or Debian, it is smart to have a repository cache on your network. This can save you bandwidth and time.


Server side
1. Install the apt-cacher package
$  apt-get install apt-cacher

2. Edit the apt-cacher configuration (set daemon_addr to this server's IP address and allowed_hosts to the clients allowed to use the cache)
$  vi /etc/apt-cacher/apt-cacher.conf

#####
daemon_addr =
allowed_hosts = /
#####

3. Restart apt-cacher service
$  /etc/init.d/apt-cacher restart
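To confirm apt-cacher is listening on its default port 3142, one quick check (assuming the ss utility is available):
$ ss -tlnp | grep 3142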


Client side
1. Create a proxy file so APT connects through apt-cacher (insert your apt-cacher server's address before the port number)
$  vi /etc/apt/apt.conf.d/01proxy

#####
Acquire::http::Proxy "http://:3142";
#####


How does it work?
1. On the client, run apt-get update
2. On the apt-cacher server, check the log file at /var/log/apt-cacher/access.log
to see the client connections
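For example, you can watch the log live on the server while the client runs its update:
$ tail -f /var/log/apt-cacher/access.log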


References:
http://www.debuntu.org/how-to-set-up-a-repository-cache-with-apt-cacher/
https://help.ubuntu.com/community/Apt-Cacher-Server

Replace a failed hard disk in software RAID 1

In this example, the server is built with software RAID 1 across two hard disks, sda and sdb. In this case, sdb has failed. The steps below show how to replace the disk in an mdadm RAID configuration.

1. Check which sdb partitions are members of the RAID arrays
$ cat /proc/mdstat

####################
 root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb6[2] sda6[0]
      6917056 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[2] sda1[0]
      487104 blocks super 1.2 [2/2] [UU]

unused devices: <none>
####################
You will see that sdb1 is a member of md0 and sdb6 is a member of md1

2. Mark the sdb partitions as failed
$ mdadm --manage /dev/md0 --fail /dev/sdb1
$ mdadm --manage /dev/md1 --fail /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

root@raid:/# mdadm --manage /dev/md1 --fail /dev/sdb6
mdadm: set /dev/sdb6 faulty in /dev/md1

root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb6[2](F) sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sdb1[2](F) sda1[0]
      487104 blocks super 1.2 [2/1] [U_]

unused devices: <none>

root@raid:/#
####################
You will see the member status of md0 and md1 change to [2/1] [U_]

3. Remove sdb1 and sdb6 from the RAID arrays
$ mdadm --manage /dev/md0 --remove /dev/sdb1
$ mdadm --manage /dev/md1 --remove /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
root@raid:/# mdadm --manage /dev/md1 --remove /dev/sdb6
mdadm: hot removed /dev/sdb6 from /dev/md1

root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
      487104 blocks super 1.2 [2/1] [U_]

unused devices: <none>

####################
You will see that partitions sdb1 and sdb6 have been removed from the RAID arrays

4. Shut down and install the new hard disk


4.1 If the new hard disk contains old data, clear the beginning of the disk first
$ dd if=/dev/zero bs=1M count=1 of=/dev/sdb; sync

5. After powering on, check that the new hard disk is fully detected

$ fdisk -l /dev/sdb

####################
root@raid:/# fdisk -l /dev/sdb

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
root@raid:/#
####################

6. Before adding the new disk to the RAID arrays, clone the partition table from sda and confirm that the partitions match sda exactly
$ sfdisk -d /dev/sda | sfdisk /dev/sdb

####################
root@raid:/# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
Warning: extended partition does not start at a cylinder boundary.
DOS and Linux will interpret the contents differently.
OK....
...
root@raid:/# fdisk -l /dev/sdb

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048      976895      487424   fd  Linux raid autodetect
/dev/sdb2          978942    16775167     7898113    5  Extended
/dev/sdb5          978944     2930687      975872   82  Linux swap / Solaris
/dev/sdb6         2932736    16775167     6921216   fd  Linux raid autodetect
####################
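Note that sfdisk -d works for MBR partition tables like the one above. For GPT disks, a common alternative is sgdisk from the gdisk package (a sketch; verify the option syntax on your version):
$ sgdisk -R=/dev/sdb /dev/sda    # replicate sda's partition table onto sdb
$ sgdisk -G /dev/sdb             # randomize GUIDs so the two disks stay distinct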

7. Add sdb partition to raid group
$ mdadm --manage /dev/md0 --add /dev/sdb1
$ mdadm --manage /dev/md1 --add /dev/sdb6

####################
root@raid:/# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1
root@raid:/# mdadm --manage /dev/md1 --add /dev/sdb6
mdadm: added /dev/sdb6
root@raid:/# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[2] sda1[0]
      487104 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb6[2] sda6[0]
      6917056 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  1.8% (130816/6917056) finish=0.8min speed=130816K/sec

unused devices: <none>
####################
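You can follow the rebuild until both arrays show [2/2] [UU] again; for example:
$ watch -n 5 cat /proc/mdstat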

8. Install GRUB on the new disk
$ grub-install /dev/sdb

####################
root@raid:/# grub-install /dev/sdb
Installation finished. No error reported.
root@raid:/#
####################

Remark: You can remove GRUB from the MBR boot code (the first 446 bytes, which leaves the partition table intact) with:
$ dd if=/dev/zero of=/dev/sdb bs=446 count=1

9. To confirm GRUB is installed on the disk, there are two ways
- by command 
$ dd if=/dev/sdb  bs=512  count=1 2>/dev/null | hexdump -C
If you find the string GRUB in the output, GRUB is already installed on sdb correctly

- by the "Boot Info Script" developed by ghulselmans, which you can download at
http://sourceforge.net/projects/bootinfoscript/
After downloading and extracting it, just run ./bootinfoscript; it writes the GRUB information to the file RESULTS.txt

10. That is all for replacing the disk, but wait: a swap partition remains. In my case there is a 1 GB swap partition on both sda5 and sdb5, and after the disk change the swap on sdb5 is not active.
####################
root@raid:/# free -m
             total       used       free     shared    buffers     cached
Mem:           495         89        406          0         10         39
-/+ buffers/cache:         40        455
Swap:          952          0        952
root@raid:/#
####################

11. Because the swap partition on the new disk gets a new UUID, we need to recreate the swap area and update the UUID in the fstab file
$ mkswap /dev/sdb5

####################
root@raid:/# mkswap /dev/sdb5
Setting up swapspace version 1, size = 975868 KiB
no label, UUID=815d087f-0875-494a-a947-2220fd14a12d
root@raid:/#
####################

12. Copy the new UUID into /etc/fstab
In my case it changed from

# swap was on /dev/sdb5 during installation
UUID=0f049d6d-506c-4bdb-9fec-1a6b808fbb65 none            swap    sw              0       0

to

# swap was on /dev/sdb5 during installation
UUID=815d087f-0875-494a-a947-2220fd14a12d none            swap    sw              0       0
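You can double-check the new UUID against the device before saving the file:
$ blkid /dev/sdb5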

13. Activate the swap partition sdb5
$ swapon /dev/sdb5

####################
root@raid:/# swapon /dev/sdb5
root@raid:/# free -m
             total       used       free     shared    buffers     cached
Mem:           495         93        402          0         10         42
-/+ buffers/cache:         40        455
Swap:         1905          0       1905
root@raid:/#