October 26, 2004

Debian Kernel-2.6 IDE problems, continued

The earlier post "Solved: kernel 2.6's IDE disk problem" explained that ide-disk and ide-generic had to be compiled into the kernel before IDE disks could be detected.

Today's correction: there is no need to compile them into the kernel after all. As long as the IDE modules are loaded ahead of the SCSI modules at boot, the on-board IDE disks are driven correctly. This is also why a system on an IDE disk can still be read even when the drivers are built as modules.

The fix is to edit /etc/mkinitrd/modules and add:

amd74xx # southbridge chipset driver
ide-disk
ide-generic # ide-generic must be listed last, otherwise DMA cannot be enabled and the disk gets very slow

Then regenerate the initrd image (as root):
# mkinitrd -o /boot/initrd.img-new `uname -r`

After setting up the boot loader, reboot and the disks on the IDE channels will be detected. This finally clears the Debian kernel maintainers' name.
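For reference, a GRUB (menu.lst) stanza pointing at the new image might look like the sketch below; the kernel version and root device here are illustrative, not taken from my box:

title  Debian GNU/Linux, kernel 2.6.8 (new initrd)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.8 root=/dev/hda1 ro
initrd /boot/initrd.img-new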

BTW... Herbert Xu is no longer on the Debian Kernel Maintainer Team.

Posted by asho at 09:54 AM | Comments (235)

October 05, 2004

LTP

Linux Test Project

http://ltp.sourceforge.net/

Posted by asho at 02:18 PM | Comments (1791)

cpufreq

Scale your CPU clock in order to save "money"!!!

Windoz users often install cpuinfo to idle their CPU clock. In other words, it can save you money and decrease your box's temperature... :-)

So... how about Linux? It's easy if you use kernel 2.6 with a supported CPU. What!? You haven't used kernel 2.6 yet? Try it!!

Be sure to add acpi=on to your boot loader configuration, then just modprobe the suitable module to enable the feature. For example, my box is a P4 2.4GHz, which is on the supported list.
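For the acpi=on part, with GRUB that means appending it to the kernel line, roughly like this (kernel image and root device are illustrative):

kernel /boot/vmlinuz-2.6.8 root=/dev/hda1 ro acpi=on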

So I modprobe p4_clockmod, and /sys/devices/system/cpu/cpu0/cpufreq then contains the files that let you scale the CPU clock:

cpuinfo_min_freq : the minimum operating frequency the processor can run at (in kHz)
cpuinfo_max_freq : the maximum operating frequency the processor can run at (in kHz)
scaling_driver : which cpufreq driver is used to set the frequency on this CPU
scaling_available_governors : the CPUfreq governors available in this kernel
scaling_governor : the currently activated governor; by "echoing" the name of another governor you can change it. Please note that some governors won't load - they only work on specific architectures or processors.
scaling_min_freq and scaling_max_freq : the current "policy limits" (in kHz); by echoing new values into these files, you can change these limits.

1: set max to 300MHz (run inside the cpufreq directory above)
echo "300000" > scaling_max_freq

cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Pentium(R) 4 CPU 2.40GHz
stepping : 7
cpu MHz : 300.049
cache size : 512 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe cid
bogomips : 4767.74
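The governor files work the same way. A minimal sketch, assuming the powersave governor was built as a module (cpufreq_powersave) for your kernel:

cd /sys/devices/system/cpu/cpu0/cpufreq
cat scaling_available_governors    # list the governors this kernel offers
modprobe cpufreq_powersave         # load the powersave governor module
echo powersave > scaling_governor  # pin the CPU at its lowest speed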

In Debian, install powernowd for an easy cpufreq setup.
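Something like this should be all it takes (package name as in the Debian archive):

# apt-get install powernowd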
See /usr/src/kernel-source-2.6.8/Documentation/cpufreq for details

Posted by asho at 01:17 PM | Comments (256)

October 04, 2004

RAID5 setup and management with mdadm on Debian Stable and above

I am planning to set up a personal NAS for storage.

I chose RAID5 for a balance of fault tolerance and performance.

Goals:
1: set up RAID5
2: put LVM over RAID5 to carve out partitions

Hardware:
3 disks for RAID5
1 disk as a hot spare
1 disk for later expansion

OS:
Debian Stable + kernel-2.4.18
LVM
mdadm

1: Set up RAID5
vmware:~# mdadm -C /dev/md0 -l5 -n3 /dev/sdb1 /dev/sdc1 /dev/sdd1

vmware:~# cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host0/bus0/target3/lun0/part1[3] scsi/host0/bus0/target2/lun0/part1[1] scsi/host0/bus0/target1/lun0/part1[0]
8385664 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 2.2% (96844/4192832) finish=2.8min speed=24211K/sec
unused devices: <none>

You can see the RAID5 array syncing across all three disks.

2: LVM over RAID
vgscan # creates /etc/lvmtab and /etc/lvmtab.d

vmware:~# pvcreate /dev/md0
pvcreate -- physical volume "/dev/md0" successfully created

vmware:~# vgcreate -s 32M ideraid5 /dev/md0
vgcreate -- INFO: maximum logical volume size is 2 Terabyte
vgcreate -- doing automatic backup of volume group "ideraid5"
vgcreate -- volume group "ideraid5" successfully created and activated

vmware:~# lvcreate -L 4000 ideraid5 # for /home
lvcreate -- doing automatic backup of "ideraid5"
lvcreate -- logical volume "/dev/ideraid5/lvol1" successfully created

vmware:~# lvcreate -L 4000 ideraid5 # for /usr/src, my favorite
lvcreate -- doing automatic backup of "ideraid5"
lvcreate -- logical volume "/dev/ideraid5/lvol2" successfully created

3: mount the LVM partitions
mkfs.xfs /dev/ideraid5/lvol1
mkfs.xfs /dev/ideraid5/lvol2
mount -t auto /dev/ideraid5/lvol1 /home
mount -t auto /dev/ideraid5/lvol2 /usr/src
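To make those mounts survive a reboot, matching /etc/fstab entries might look like this (a sketch; tune the options to taste):

/dev/ideraid5/lvol1  /home     xfs  defaults  0  0
/dev/ideraid5/lvol2  /usr/src  xfs  defaults  0  0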

4: Add a spare disk
mdadm -a /dev/md0 /dev/sde1 # becomes the hot spare

5: Simulate a drive failure
vmware:~# mdadm /dev/md0 -f /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0

vmware:~# cat /proc/mdstat # because a spare disk is present, the RAID5 array rebuilds onto it
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host0/bus0/target4/lun0/part1[3] scsi/host0/bus0/target3/lun0/part1[2](F) scsi/host0/bus0/target2/lun0/part1[1] scsi/host0/bus0/target1/lun0/part1[0]
8385664 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 3.1% (132804/4192832) finish=2.0min speed=33201K/sec
unused devices: <none>

vmware:~# mdadm -D /dev/md0 # during the rebuild
/dev/md0:
Version : 00.90.00
Creation Time : Mon Oct 4 18:00:23 2004
Raid Level : raid5
Array Size : 8385664 (7.99 GiB 8.58 GB)
Device Size : 4192832 (3.99 GiB 4.29 GB)
Raid Disks : 3
Total Disks : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Oct 4 19:06:25 2004
State : dirty, no-errors
Active Drives : 2
Working Drives : 3
Failed Drives : 1
Spare Drives : 1

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDisk State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 faulty /dev/sdd1
3 8 65 3 /dev/sde1
UUID : 277448fd:35686cf4:068d4813:5447c9f4

vmware:~# mdadm -D /dev/md0 # after rebuild
/dev/md0:
Version : 00.90.00
Creation Time : Mon Oct 4 18:00:23 2004
Raid Level : raid5
Array Size : 8385664 (7.99 GiB 8.58 GB)
Device Size : 4192832 (3.99 GiB 4.29 GB)
Raid Disks : 3
Total Disks : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Oct 4 19:08:43 2004
State : dirty, no-errors
Active Drives : 3
Working Drives : 3
Failed Drives : 0
Spare Drives : 1

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDisk State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 65 2 active sync /dev/sde1
UUID : 277448fd:35686cf4:068d4813:5447c9f4

vmware:~# mdadm /dev/md0 -r /dev/sdd1 # remove the failed disk

6: edit /etc/mdadm/mdadm.conf # so the array is detected and assembled at reboot
DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
ARRAY /dev/md0 UUID=1185fff6:e19598ba:ee9c5aa0:7bd7af5e
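Instead of typing the UUID by hand, you can let mdadm append the ARRAY line for you (the DEVICE line still has to match your disks):

vmware:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf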

7: Grow your RAID5 with more disks
Kernel 2.6 in Debian doesn't support adding more disks to a RAID5 array. Try raidreconf from raidtools2 instead.

vmware:~# vim /etc/raidtab
# Sample raid-5 configuration
raiddev /dev/md0
raid-level 5
nr-raid-disks 3
chunk-size 64

# Parity placement algorithm

#parity-algorithm left-asymmetric

#
# the best one for maximum performance:
#
parity-algorithm left-symmetric

#parity-algorithm right-asymmetric
#parity-algorithm right-symmetric

# Spare disks for hot reconstruction
nr-spare-disks 1

device /dev/sdb1
raid-disk 0

device /dev/sdc1
raid-disk 1

device /dev/sdd1
raid-disk 2

device /dev/sde1
spare-disk 0

vmware:~# vim /etc/raidtab.new
# Sample raid-5 configuration
raiddev /dev/md0
raid-level 5
nr-raid-disks 4
chunk-size 64

# Parity placement algorithm

#parity-algorithm left-asymmetric

#
# the best one for maximum performance:
#
parity-algorithm left-symmetric

#parity-algorithm right-asymmetric
#parity-algorithm right-symmetric

# Spare disks for hot reconstruction
nr-spare-disks 1

device /dev/sdb1
raid-disk 0

device /dev/sdc1
raid-disk 1

device /dev/sdd1
raid-disk 2

device /dev/sde1
raid-disk 3

device /dev/sdf1
spare-disk 0


vmware:~# raidreconf -o /etc/raidtab -n /etc/raidtab.new -m /dev/md0
Working with device /dev/md0
Parsing /etc/raidtab
Parsing /etc/raidtab.new
Size of old array: 33543468 blocks, Size of new array: 41929335 blocks
Old raid-disk 0 has 65513 chunks, 4192832 blocks
Old raid-disk 1 has 65513 chunks, 4192832 blocks
Old raid-disk 2 has 65513 chunks, 4192832 blocks
Old raid-disk 3 has 65513 chunks, 4192832 blocks
New raid-disk 0 has 65513 chunks, 4192832 blocks
New raid-disk 1 has 65513 chunks, 4192832 blocks
New raid-disk 2 has 65513 chunks, 4192832 blocks
New raid-disk 3 has 65513 chunks, 4192832 blocks
New raid-disk 4 has 65513 chunks, 4192832 blocks
Using 64 Kbyte blocks to move from 64 Kbyte chunks to 64 Kbyte chunks.
Detected 256248 KB of physical memory in system
A maximum of 854 outstanding requests is allowed
---------------------------------------------------
I will grow your old device /dev/md0 of 196539 blocks
to a new device /dev/md0 of 262052 blocks
using a block-size of 64 KB
Is this what you want? (yes/no): yes
Converting 196539 block device to 262052 block device
Allocated free block map for 4 disks
5 unique disks detected.
Working (|) [00124932/00196539] [###########################
Source drained, flushing sink.
Reconfiguration succeeded, will update superblocks...
Updating superblocks...
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/sdc1, 4192933kB, raid superblock at 4192832kB
disk 2: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
disk 3: /dev/sde1, 4192933kB, raid superblock at 4192832kB
disk 4: /dev/sdf1, 4192933kB, raid superblock at 4192832kB
Array is updated with kernel.
Disks re-inserted in array... Hold on while starting the array...
Maximum friend-freeing depth: 6
Total wishes hooked: 196539
Maximum wishes hooked: 854
Total gifts hooked: 196539
Maximum gifts hooked: 807
Congratulations, your array has been reconfigured,
and no errors seem to have occured.

8: resize the filesystems on your md
Ext2/Ext3: resize2fs
ReiserFS: resize_reiserfs
XFS: mount all your partitions first, then run xfs_growfs (see the sketch below)
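Since the volumes here sit on LVM, a bigger filesystem usually means growing the logical volume first. A sketch, assuming the volume group has free extents to hand out:

vmware:~# lvextend -L +2000M /dev/ideraid5/lvol1   # give the LV ~2GB more from the VG
vmware:~# xfs_growfs /home                         # XFS grows online, via the mount point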

PS:
1: If you use LVM over RAID, rename /etc/rcS.d/S25lvm to SXXlvm, where XX is greater than 25, so LVM starts after the RAID devices.
2: Want to grow your RAID5 filesystem with mdadm?! Install Debian Sarge with kernel 2.6, or use raidtools2 with raidreconf.

Appendix: references
mdadm: A New Tool For Linux Software RAID Management
The Software-RAID HOWTO

Posted by asho at 12:24 PM | Comments (505)

October 02, 2004

DebBlue

http://debblue.debian.net/

When starting to run Debian as my prime desktop I noticed that Debian, compared to commercial distros, lacked a consistent look. There are many nice bootsplashes, login screen themes, desktop backgrounds etc., but it is very difficult to find a set which fits nicely together. At least I didn't find a nice set. That is the reason I started to work on DebBlue. DebBlue consists of a set of themes for...

Posted by asho at 12:38 PM | Comments (216)

Custom Debian Distributions

http://people.debian.org/~tille/debian-med/talks/paper-cdd/debian-cdd.html/

aptitude install cdd

Posted by asho at 12:35 PM | Comments (907)

DebianDesktop

http://wiki.debian.net/?DebianDesktop

Posted by asho at 12:33 PM | Comments (379)

October 01, 2004

Debian Sarge release feature

http://release.debian.org/sarge.html

* Debian Installer working on all architectures
- alpha, hppa, i386, ia64, m68k, mips, mipsel, powerpc, sparc: working
- amd64: working with unofficial image
- arm: netboots on bast and netwinder
- s390: sshd must work after first reboot; waiting on a NEW
package from network-console.

No Opteron support ...:-(

Posted by asho at 07:01 PM | Comments (630)