How do I get my RAID 1 setup running again?

ThaDude (Topicstarter) • Registered: December 2000
I'm running SuSE 8.1 with RAID 1. At boot I now see the following:
<6>md: autorun ...
<6>md: considering hdg5 ...
<6>md: adding hdg5 ...
<6>md: adding hde5 ...
<6>md: created md1
<6>md: bind<hde5,1>
<6>md: bind<hdg5,2>
<6>md: running: <hdg5><hde5>
<6>md: hdg5's event counter: 00000015
<6>md: hde5's event counter: 0000001c
<3>md: superblock update time inconsistency -- using the most recent one
<6>md: freshest: hde5
<4>md: kicking non-fresh hdg5 from array!
<6>md: unbind<hdg5,1>
<6>md: export_rdev(hdg5)
<6>md: RAID level 1 does not need chunksize! Continuing anyway.
<3>kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
<3>md: personality 3 is not loaded!
<4>md :do_md_run() returned -22
<6>md: md1 stopped.
<6>md: unbind<hde5,0>
<6>md: export_rdev(hde5)
<6>md: considering hdg1 ...
<6>md: adding hdg1 ...
<6>md: adding hde1 ...
<6>md: created md0
<6>md: bind<hde1,1>
<6>md: bind<hdg1,2>
<6>md: running: <hdg1><hde1>
<6>md: hdg1's event counter: 00000015
<6>md: hde1's event counter: 0000001c
<3>md: superblock update time inconsistency -- using the most recent one
<6>md: freshest: hde1
<4>md: kicking non-fresh hdg1 from array!
<6>md: unbind<hdg1,1>
<6>md: export_rdev(hdg1)
<6>md: RAID level 1 does not need chunksize! Continuing anyway.
<3>kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
<3>md: personality 3 is not loaded!
<4>md :do_md_run() returned -22
<6>md: md0 stopped.
<6>md: unbind<hde1,0>
<6>md: export_rdev(hde1)
<6>md: ... autorun DONE.
<6>NET4: Linux TCP/IP 1.0 for NET4.0
<6>IP Protocols: ICMP, UDP, TCP, IGMP
<6>IP: routing cache hash table of 4096 buckets, 32Kbytes
<6>TCP: Hash tables configured (established 32768 bind 65536)
<6>Linux IP multicast router 0.06 plus PIM-SM
<6>NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
<5>RAMDISK: Compressed image found at block 0
<6>Freeing initrd memory: 231k freed
<4>VFS: Mounted root (ext2 filesystem).
<6>Journalled Block Device driver loaded
<6>md: raid1 personality registered as nr 3
<6>md: Autodetecting RAID arrays.
<6> [events: 00000015]
<6> [events: 0000001c]
<6> [events: 00000015]
<6> [events: 0000001c]
<6>md: autorun ...
<6>md: considering hde1 ...
<6>md: adding hde1 ...
<6>md: adding hdg1 ...
<6>md: created md0
<6>md: bind<hdg1,1>
<6>md: bind<hde1,2>
<6>md: running: <hde1><hdg1>
<6>md: hde1's event counter: 0000001c
<6>md: hdg1's event counter: 00000015
<3>md: superblock update time inconsistency -- using the most recent one
<6>md: freshest: hde1
<4>md: kicking non-fresh hdg1 from array!
<6>md: unbind<hdg1,1>
<6>md: export_rdev(hdg1)
<6>md: RAID level 1 does not need chunksize! Continuing anyway.
<6>md0: max total readahead window set to 508k
<6>md0: 1 data-disks, max readahead per data-disk: 508k
<6>raid1: device hde1 operational as mirror 0
<1>raid1: md0, not all disks are operational -- trying to recover array
<6>raid1: raid set md0 active with 1 out of 2 mirrors
<6>md: updating md0 RAID superblock on device
<6>md: hde1 [events: 0000001d]<6>(write) hde1's sb offset: 787072
<6>md: recovery thread got woken up ...
<3>md0: no spare disk to reconstruct array! -- continuing in degraded mode
<6>md: recovery thread finished ...
<6>md: considering hde5 ...
<6>md: adding hde5 ...
<6>md: adding hdg5 ...
<6>md: created md1
<6>md: bind<hdg5,1>
<6>md: bind<hde5,2>
<6>md: running: <hde5><hdg5>
<6>md: hde5's event counter: 0000001c
<6>md: hdg5's event counter: 00000015
<3>md: superblock update time inconsistency -- using the most recent one
<6>md: freshest: hde5
<4>md: kicking non-fresh hdg5 from array!
<6>md: unbind<hdg5,1>
<6>md: export_rdev(hdg5)
<6>md: RAID level 1 does not need chunksize! Continuing anyway.
<6>md1: max total readahead window set to 508k
<6>md1: 1 data-disks, max readahead per data-disk: 508k
<6>raid1: device hde5 operational as mirror 0
<1>raid1: md1, not all disks are operational -- trying to recover array
<6>raid1: raid set md1 active with 1 out of 2 mirrors
<6>md: updating md1 RAID superblock on device
<6>md: hde5 [events: 0000001d]<6>(write) hde5's sb offset: 7630720
<6>md: recovery thread got woken up ...
<3>md1: no spare disk to reconstruct array! -- continuing in degraded mode
<3>md0: no spare disk to reconstruct array! -- continuing in degraded mode
<6>md: recovery thread finished ...
<6>md: ... autorun DONE.
<6>kjournald starting. Commit interval 5 seconds
<6>EXT3-fs: mounted filesystem with ordered data mode.
<4>VFS: Mounted root (ext3 filesystem) readonly.
<5>Trying to move old root to /initrd ... failed
<5>Unmounting old root
<5>Trying to free ramdisk memory ... okay
<6>Freeing unused kernel memory: 164k freed
<4>md: array md0 already exists!
<4>md: array md1 already exists!
<6>md: Autodetecting RAID arrays.
<6> [events: 00000015]
<6> [events: 00000015]
<6>md: autorun ...
<6>md: considering hdg5 ...
<6>md: adding hdg5 ...
<4>md: md1 already running, cannot run hdg5
<6>md: export_rdev(hdg5)
<6>md: (hdg5 was pending)
<6>md: considering hdg1 ...
<6>md: adding hdg1 ...
<4>md: md0 already running, cannot run hdg1
<6>md: export_rdev(hdg1)
<6>md: (hdg1 was pending)
<6>md: ... autorun DONE.
When I run cat /proc/mdstat I see:
Surfdude:/var/log # cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hde5[0]
7470144 blocks [2/1] [U_]

md0 : active raid1 hde1[0]
786624 blocks [2/1] [U_]

unused devices: <none>
Surfdude:/var/log #
In other words, hdg is no longer participating :'(

How do I get hdg back into the RAID 1 array :? I can't do anything with YaST, because it keeps saying it isn't allowed to change anything.
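
For reference: judging from the boot log, hdg1 and hdg5 are only being kicked because their superblock event counters are stale (00000015 versus 0000001c on hde), not because of read errors. Before re-adding anything it seems sensible to rule out a real disk problem; a rough sketch, assuming the usual tools are present (mdadm may or may not be installed on a raidtools-based SuSE 8.1 system):

# check the kernel log for real I/O errors on hdg before re-adding anything
dmesg | grep -i hdg
# if mdadm is available, inspect the md superblocks on the kicked partitions
mdadm --examine /dev/hdg1
mdadm --examine /dev/hdg5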

ThaDude (Topicstarter)
After some digging, I think I've got it running again.
I ran raidhotadd /dev/md0 /dev/hdg1 and then:

Surfdude:/etc # cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hdg5[2] hde5[0]
7470144 blocks [2/1] [U_]

md0 : active raid1 hdg1[2] hde1[0]
786624 blocks [2/1] [U_]
[=====>...............] recovery = 26.5% (209472/786624) finish=0.5min speed=16113K/sec
unused devices: <none>
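
The output above already shows hdg5[2] back in md1 as well, so presumably the same command was needed once per array. Roughly the full sequence, with the device names taken from the mdstat output (raidtools syntax, as used above):

# re-add the kicked partitions to both degraded mirrors
raidhotadd /dev/md0 /dev/hdg1
raidhotadd /dev/md1 /dev/hdg5
# follow the resync progress
watch -n 5 cat /proc/mdstat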


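Once the resync finishes, /proc/mdstat should show both halves of each mirror again, roughly like this for md0 (a sketch; the exact device numbering may differ):

md0 : active raid1 hdg1[1] hde1[0]
      786624 blocks [2/2] [UU]
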
So as far as I'm concerned, this topic can be locked :)

[ Edited 55% by ThaDude on 01-12-2002 17:15 ]
