cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[2] sda2[0]
      1464894976 blocks [2/1] [U_]
      [>....................]  recovery =  3.5% (52734720/1464894976) finish=23476.9min speed=1002K/sec

md0 : active raid1 sdb1[1] sda1[0]
      240832 blocks [2/2] [UU]

unused devices: <none>
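One way to keep an eye on the rebuild while it runs (assuming watch is installed) is to refresh mdstat every few seconds; the per-array sysfs counter shows the same progress as sectors done/total:

server:~# watch -n 5 cat /proc/mdstat
server:~# cat /sys/block/md1/md/sync_completed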
server:~# mdadm -D /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Fri Jan 16 17:47:54 2009
     Raid Level : raid1
     Array Size : 1464894976 (1397.03 GiB 1500.05 GB)
  Used Dev Size : 1464894976 (1397.03 GiB 1500.05 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Dec 14 07:58:05 2012
          State : active, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 3% complete

           UUID : 99be939b:58238c48:fa8b64fc:811dc3c8
         Events : 0.1060209

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      spare rebuilding   /dev/sdb2
server:~# dmesg | grep DMA
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA zone: 32 pages used for memmap
[ 0.000000] DMA zone: 0 pages reserved
[ 0.000000] DMA zone: 3951 pages, LIFO batch:0
[ 1.185580] ata1: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 22
[ 1.185583] ata2: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 22
[ 1.185586] ata3: SATA max UDMA/133 abar m8192@0xdfef6000 port 0xdfef6200 irq 22
[ 1.185589] ata4: SATA max UDMA/133 irq_stat 0x00400040, connection status changed irq 22
[ 1.271939] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
[ 1.271942] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
[ 2.080462] ata2.00: ATA-8: ST1500DM003-9YN16G, CC4H, max UDMA/133
[ 2.080938] ata2.00: configured for UDMA/133
[ 2.081336] ata4.00: ATAPI: HL-DT-STDVD-RAM GH22NS30, 1.01, max UDMA/100
[ 2.082832] ata4.00: configured for UDMA/100
[ 2.108728] ata1.00: ATA-8: ST31500341AS, CC1H, max UDMA/133
[ 2.150639] ata1.00: configured for UDMA/133
server:~# hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 2322 MB in 2.00 seconds = 1161.41 MB/sec
Timing buffered disk reads: 242 MB in 3.01 seconds = 80.44 MB/sec
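For comparison it may be worth running the same test on both members at once; the resync can only go as fast as the slower disk, so a big difference here would point at the culprit:

server:~# hdparm -tT /dev/sda /dev/sdb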
echo 1 > /sys/block/sdb/device/queue_depth
now I have:
cat /sys/block/sdb/device/queue_depth
31
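The read-back of 31 suggests the write did not stick. Chaining the commands makes a failed write obvious, since the cat only runs if the echo succeeded:

server:~# echo 1 > /sys/block/sdb/device/queue_depth && cat /sys/block/sdb/device/queue_depth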
Couldn't the problem be that one disk (probably) has 512 B sectors and the other 4 KB?
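One way to check: sysfs exposes both sector sizes; a 4K ("Advanced Format") drive typically reports a logical size of 512 and a physical size of 4096:

server:~# cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size
server:~# cat /sys/block/sdb/queue/logical_block_size /sys/block/sdb/queue/physical_block_size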
/proc/sys/dev/raid/speed_limit_min. I don't know why it kept to that minimum... what is the maximum there for, then?
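The maximum applies when the array is otherwise idle: md throttles the resync down toward speed_limit_min as soon as it sees competing I/O on the array, and only ramps up toward speed_limit_max when nothing else is running. The limits are in KiB/s and can be raised for the duration of the rebuild, e.g.:

server:~# echo 50000 > /proc/sys/dev/raid/speed_limit_min
server:~# echo 200000 > /proc/sys/dev/raid/speed_limit_max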
Personalities : [raid1]
md1 : active raid1 sda2[2] sdb2[1]
      976272248 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (3072/976272248) finish=10538.3min speed=1536K/sec
The synchronization speed is low; I don't know why, but after a few minutes it rose to:
finish=292.3min speed=55589K/sec
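That jump is consistent with the throttling described above: while other I/O hits the array, md keeps the resync near speed_limit_min, and once the competing load stops, it ramps toward speed_limit_max. The same limits can also be set per array through sysfs (here assuming the array is md1; writing the word "system" reverts to the global defaults):

server:~# echo 50000 > /sys/block/md1/md/sync_speed_min
server:~# cat /sys/block/md1/md/sync_speed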
There it really all goes through a controller with a pile of memory, which helps quite a bit too. On a year-old array we have 6x 15kRPM 600GB drives in RAID 10; in total it does a bit over 500 MB/s, but there the point was mainly somewhat better IOPS.