notes_brot

Scratch notes for brot

Current state: see below - tl;dr: 3x 3 TB (500 MB EFI, 500 MB RAID1 /boot, 50 GB RAID5 /, 2.7 TB /mnt/raid_daten)

Installing two new 5 TB WD Red drives

nas brot # gdisk /dev/sde
GPT fdisk (gdisk) version 0.8.10
 
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
 
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
 
Warning! One or more CRCs don't match. You should repair the disk!
 
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: damaged
 
****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
 
Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y
 
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-9767541134, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-9767541134, default = 9767541134) or {+-}size{KMGTP}: 1026047
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI System'
 
Command (? for help): n
Partition number (2-128, default 2):
First sector (34-9767541134, default = 1026048) or {+-}size{KMGTP}:
Last sector (1026048-9767541134, default = 9767541134) or {+-}size{KMGTP}: 2050047
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'
 
Command (? for help): n
Partition number (3-128, default 3):
First sector (34-9767541134, default = 2050048) or {+-}size{KMGTP}:
Last sector (2050048-9767541134, default = 9767541134) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
 
Command (? for help): p
Disk /dev/sde: 9767541168 sectors, 4.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): D69BC615-B1AC-4A38-BA71-791CD94C2154
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 9767541134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
 
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB   EF00  EFI System
   2         1026048         2050047   500.0 MiB   FD00  Linux RAID
   3         2050048      9767541134   4.5 TiB     8300  Linux filesystem
 
Command (? for help): w
 
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
 
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sde.
The operation has completed successfully.
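The interactive gdisk session above can also be reproduced non-interactively. A sketch with sgdisk (same 500 MiB / 500 MiB / rest layout; the target disk is a placeholder, and the command is only echoed as a dry-run):

```shell
# Sketch: the gdisk layout above, scripted with sgdisk.
# DISK is a placeholder; drop the leading 'echo' to actually write the table.
DISK=/dev/sdX
MIB500=$(( 500 * 1024 * 1024 / 512 ))   # 500 MiB in 512-byte sectors = 1024000
echo sgdisk -o \
  -n "1:2048:+${MIB500}" -t 1:ef00 -c 1:"EFI System" \
  -n "2:0:+${MIB500}"    -t 2:fd00 -c 2:"Linux RAID" \
  -n "3:0:0"             -t 3:8300 -c 3:"Linux filesystem" \
  "$DISK"
```

The sector math matches the session above: partition 1 runs from 2048 to 2048+1024000-1 = 1026047.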

nas brot # umount /boot/efi
nas brot # cp /dev/sda1 /dev/sdd1
nas brot # cp /dev/sda1 /dev/sde1
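cp on block devices copies the raw bytes here, but dd is the more usual tool for cloning an ESP. A self-contained demo of the same byte-for-byte clone, on scratch files standing in for /dev/sda1 and /dev/sdd1:

```shell
# Demo: byte-for-byte clone as done above for the 500 MiB ESPs,
# using temp files instead of the real partitions.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1K count=64 2>/dev/null   # fake source partition
dd if="$src" of="$dst" bs=1M 2>/dev/null                  # the clone step
cmp -s "$src" "$dst" && echo "identical"
```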

nas brot # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Sun Nov 16 19:15:17 2014
          State : clean 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 248
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       4       8       33        1      active sync   /dev/sdc1
 
       3       8       17        -      spare   /dev/sdb1
nas brot # mdadm --verbose --manage --add /dev/md0 /dev/sdd2
mdadm: added /dev/sdd2
nas brot # mdadm --verbose --manage --add /dev/md0 /dev/sde2
mdadm: added /dev/sde2
nas brot # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid5 sdb3[6] sdc3[5] sda3[0]
      52430848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
md4 : active raid5 sdb4[2] sdc4[1] sda4[3]
      5753362432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
 
md0 : active raid1 sde2[5](S) sdd2[2](S) sdb1[3](S) sdc1[4] sda2[0]
      511988 blocks super 1.0 [2/2] [UU]
 
unused devices: <none>
nas brot # mdadm /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
nas brot # mdadm /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
nas brot # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md2 : active raid5 sdb3[6] sdc3[5] sda3[0]
      52430848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
 
md4 : active raid5 sdb4[2] sdc4[1] sda4[3]
      5753362432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
 
md0 : active raid1 sde2[5] sdd2[2](S) sdb1[3](S) sdc1[4](F) sda2[0]
      511988 blocks super 1.0 [2/1] [U_]
      [==========>..........]  recovery = 52.9% (270912/511988) finish=0.0min speed=67728K/sec
nas brot # mdadm /dev/md0 --remove /dev/sdb1 /dev/sdc1
mdadm: hot removed /dev/sdb1 from /dev/md0
mdadm: hot removed /dev/sdc1 from /dev/md0
nas brot # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Sun Nov 16 19:41:39 2014
          State : clean 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 277
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       5       8       66        1      active sync   /dev/sde2
 
       2       8       50        -      spare   /dev/sdd2
nas brot # mdadm --grow /dev/md0 --raid-devices=3
raid_disks for /dev/md0 set to 3
unfreeze
nas brot # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Sun Nov 16 19:45:49 2014
          State : clean, degraded, recovering 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
 
 Rebuild Status : 81% complete
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 296
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       5       8       66        1      active sync   /dev/sde2
       2       8       50        2      spare rebuilding   /dev/sdd2
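The whole md0 migration above (add the new partitions as spares, fail and remove the old members, then grow to three copies) can be condensed into a script. A dry-run sketch that only prints the mdadm commands:

```shell
# Dry-run sketch of the /boot RAID1 member swap shown above.
run() { echo "$@"; }              # replace 'echo "$@"' with "$@" to run for real
MD=/dev/md0
for p in /dev/sdd2 /dev/sde2; do  # new members
  run mdadm --manage "$MD" --add "$p"
done
for p in /dev/sdc1 /dev/sdb1; do  # old members
  run mdadm "$MD" --fail "$p"
  run mdadm "$MD" --remove "$p"
done
run mdadm --grow "$MD" --raid-devices=3   # keep all three new copies active
```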
nas brot # mkfs.btrfs -d raid1 -m raid1 /dev/sdd3 /dev/sde3
 
nas brot # btrfs filesystem show /dev/sdd3
Label: none  uuid: c356873f-4034-43cc-8052-b2a4bf68cb26
        Total devices 2 FS bytes used 112.00KiB
        devid    1 size 4.55TiB used 2.03GiB path /dev/sdd3
        devid    2 size 4.55TiB used 2.01GiB path /dev/sde3
 
Btrfs v3.17
 
nas brot # mount /dev/sdd3 /mnt/btrfs_rootsubvol/
 
nas brot # btrfs subvolume create /mnt/btrfs_rootsubvol/root-subvol
Create subvolume '/mnt/btrfs_rootsubvol/root-subvol'
nas brot # btrfs subvolume create /mnt/btrfs_rootsubvol/data-subvol
Create subvolume '/mnt/btrfs_rootsubvol/data-subvol'
nas brot # btrfs subvolume create /mnt/btrfs_rootsubvol/backup-subvol
Create subvolume '/mnt/btrfs_rootsubvol/backup-subvol'
nas brot # btrfs subvolume create /mnt/btrfs_rootsubvol/brot-backup-subvol
Create subvolume '/mnt/btrfs_rootsubvol/brot-backup-subvol'
 
nas brot # btrfs fi df /mnt/btrfs_rootsubvol/
Data, RAID1: total=1.00GiB, used=512.00KiB
Data, single: total=8.00MiB, used=0.00B
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, RAID1: total=1.00GiB, used=176.00KiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=16.00MiB, used=0.00B
 
nas ~ # btrfs subvolume list /mnt/btrfs_rootsubvol/
ID 258 gen 155 top level 5 path root-subvol
ID 259 gen 8 top level 5 path data-subvol
ID 260 gen 9 top level 5 path backup-subvol
ID 261 gen 10 top level 5 path brot-backup-subvol
 
nas ~ # btrfs subvolume set-default 258 /mnt/btrfs_rootsubvol/
(sets the rootfs subvolume as the default)
  • Boot from a USB stick (grml)
  • mount the old filesystems (/dev/md2 as /, md1 as /boot and EFI)
  • mount the new btrfs volume (mind the correct subvolume mapping)
  • cp -a /mnt/old_rootfs /mnt/btrfs-rootsubvol
  • grml-chroot /mnt/btrfs-rootsubvol /bin/bash
    • grub-install /dev/sdd
    • grub-install /dev/sde
    • adjust fstab - keep only the essential entries active to avoid boot failures
    • rebuild the initrd (and kernel) with dracut
/dev/md0                /boot                           ext4            noatime         1 2
UUID=c356873f-4034-43cc-8052-b2a4bf68cb26       /       btrfs           noatime         0 1
UUID=4549-7F03          /boot/efi                       vfat            defaults        0 2
  • mount the subvolumes (data and backup first)
  • mount -o ro /dev/md4 /mnt/raid_datenpartition
  • cp -av /mnt/raid_datenpartition/Daten /mnt/btrfs-daten-subvol/ | tee cp-log.tee
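An alternative to btrfs subvolume set-default is pinning the subvolume explicitly in fstab with the subvol= mount option. An untested sketch, reusing the UUID from the fstab above and the root-subvol name created earlier:

```
UUID=c356873f-4034-43cc-8052-b2a4bf68cb26  /  btrfs  noatime,subvol=root-subvol  0 1
```

With set-default, the default subvolume travels with the filesystem; with subvol=, it is visible in fstab at a glance.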

Problem: the backups are btrfs image files on the old FS. Now: subvolumes on the btrfs FS.

Solution: transfer all snapshots via btrfs send/receive

nas ~ # for i in $( ls /mnt/current_backup_file/ | grep backup); do btrfs property set -t s /mnt/current_backup_file/$i ro true; done
nas ~ # btrfs send -v /mnt/current_backup_file/backup-2011.11.07-22\:43/ | btrfs receive -v /mnt/btrfs-brot-backup-subvol/
At subvol /mnt/current_backup_file/backup-2011.11.07-22:43/
At subvol backup-2011.11.07-22:43
receiving subvol backup-2011.11.07-22:43 uuid=9c52dfc1-499f-9640-a4f9-0f8b89974a61, stransid=0
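Sending every snapshot in full gets expensive; btrfs send can transmit only the delta against the previous snapshot with -p. A dry-run sketch over the backup directories (the pipelines are printed, not executed; the second snapshot name is hypothetical):

```shell
# Dry-run: incremental send/receive chain for the read-only backup snapshots.
SRC=/mnt/current_backup_file
DST=/mnt/btrfs-brot-backup-subvol
prev=""
for snap in backup-2011.11.07-22:43 backup-2011.12.01-03:10; do  # 2nd name hypothetical
  if [ -z "$prev" ]; then
    echo "btrfs send $SRC/$snap | btrfs receive $DST"            # full send, first only
  else
    echo "btrfs send -p $SRC/$prev $SRC/$snap | btrfs receive $DST"
  fi
  prev=$snap
done
```

The property set loop above is still needed first: send only accepts read-only snapshots.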
  • disk sdc is to be added to the btrfs RAID1 (sda and sdb are the new disks)
  • degrade the RAID5s
  • repartition the disk
nas brot # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Mon Nov 17 23:42:11 2014
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10324
 
    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       6       8       67        1      active sync   /dev/sde3
       5       8       35        2      active sync   /dev/sdc3
nas brot # mdadm -D /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 2876681216 (2743.42 GiB 2945.72 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
  Intent Bitmap : Internal
 
    Update Time : Sun Nov 30 21:05:22 2014
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : nas:4  (local to host nas)
           UUID : f6924d42:7358df85:416f97f9:83aeac62
         Events : 16217
 
    Number   Major   Minor   RaidDevice State
       3       8       52        0      active sync   /dev/sdd4
       1       8       36        1      active sync   /dev/sdc4
       2       8       68        2      active sync   /dev/sde4
nas brot # mdadm --manage --set-faulty /dev/md2 /dev/sdc3
mdadm: set /dev/sdc3 faulty in /dev/md2
nas brot # mdadm --manage --set-faulty /dev/md4 /dev/sdc4
mdadm: set /dev/sdc4 faulty in /dev/md4
nas brot # mdadm --manage /dev/md2 -r /dev/sdc3
mdadm: hot removed /dev/sdc3 from /dev/md2
nas brot # mdadm --manage /dev/md4 -r /dev/sdc4
mdadm: hot removed /dev/sdc4 from /dev/md4
nas brot # mdadm -D /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 2876681216 (2743.42 GiB 2945.72 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent
 
  Intent Bitmap : Internal
 
    Update Time : Sun Dec  7 12:07:18 2014
          State : clean, degraded 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : nas:4  (local to host nas)
           UUID : f6924d42:7358df85:416f97f9:83aeac62
         Events : 16220
 
    Number   Major   Minor   RaidDevice State
       3       8       52        0      active sync   /dev/sdd4
       1       0        0        1      removed
       2       8       68        2      active sync   /dev/sde4
nas brot # gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.10
 
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
 
Found valid GPT with protective MBR; using GPT.
 
Command (? for help): p
Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0C5623E2-2FF4-4787-9609-750745C161DA
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
 
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB   EF00  EFI System
   2         1026048         2050047   500.0 MiB   FD00  Linux RAID
   3         2050048       106907647   50.0 GiB    FD00  Linux RAID
   4       106907648      5860533134   2.7 TiB     FD00  
 
Command (? for help): d
Partition number (1-4): 4
 
Command (? for help): d
Partition number (1-3): 3
 
Command (? for help): p
Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0C5623E2-2FF4-4787-9609-750745C161DA
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5858485101 sectors (2.7 TiB)
 
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB   EF00  EFI System
   2         1026048         2050047   500.0 MiB   FD00  Linux RAID
 
Command (? for help): n
Partition number (3-128, default 3): 
First sector (34-5860533134, default = 2050048) or {+-}size{KMGTP}: 
Last sector (2050048-5860533134, default = 5860533134) or {+-}size{KMGTP}: 
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'
 
Command (? for help): p
Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0C5623E2-2FF4-4787-9609-750745C161DA
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
 
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB   EF00  EFI System
   2         1026048         2050047   500.0 MiB   FD00  Linux RAID
   3         2050048      5860533134   2.7 TiB     8300  Linux filesystem
 
Command (? for help): w
 
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
 
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdc.
The operation has completed successfully.
nas brot # btrfs device add /dev/sdc3 / -f
nas brot # btrfs balance start -v /
Dumping filters: flags 0x7, state 0x0, force is off
  DATA (flags 0x0): balancing
  METADATA (flags 0x0): balancing
  SYSTEM (flags 0x0): balancing
nas brot # watch -n 10 'btrfs balance status /'
Balance on '/' is running
1 out of about 3680 chunks balanced (2 considered), 100% left
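A full balance rewrites every chunk (about 3680 here). When the goal is only to spread data onto the newly added device, usage filters can limit the work. A dry-run sketch (the filter values are illustrative, not from the original session):

```shell
# Dry-run: balance only chunks below a usage threshold instead of everything.
run() { echo "$@"; }    # drop the echo to execute
run btrfs balance start -dusage=50 -musage=50 /
run btrfs balance status /
```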

Partitioning

nas ~ # fdisk -l

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID


Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C767012E-AA8B-4ACE-AD4A-8ADCAB4300D5

Device           Start          End   Size Type
/dev/sdb1         2048      1026047   500M Linux RAID
/dev/sdb2      1026048     53460000    25G Linux RAID
/dev/sdb3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdc: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B660130-B8DD-4BA3-9D8C-615F929B0433

Device           Start          End   Size Type
/dev/sdc1         2048      1026047   500M Linux RAID
/dev/sdc2      1026048     53460000    25G Linux RAID
/dev/sdc3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdd: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D01BE9B6-18F2-4C99-B30B-544B0C9BFA62

Device           Start          End   Size Type
/dev/sdd1         2048      1026047   500M Linux RAID
/dev/sdd2      1026048     53460000    25G Linux RAID
/dev/sdd3     53460992   2930277134   1.3T Linux RAID


Disk /dev/md2: 75 GiB, 80533782528 bytes, 157292544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

Disk /dev/md3: 4 TiB, 4418775810048 bytes, 8630421504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

Disk /dev/md0: 500 MiB, 524275712 bytes, 1023976 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

mdstat

nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Nov 25 21:35:39 2013
          State : clean 
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 178

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1

       3       8       33        -      spare   /dev/sdc1
       4       8       17        -      spare   /dev/sdb1
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec  2 21:01:28 2013
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1041

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       50        1      active sync   /dev/sdd2
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2
nas ~ # mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Fri Aug 24 04:23:04 2012
     Raid Level : raid5
     Array Size : 4315210752 (4115.31 GiB 4418.78 GB)
  Used Dev Size : 1438403584 (1371.77 GiB 1472.93 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Dec  1 19:08:30 2013
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:3
           UUID : cf205a21:16cfbf89:1b7442b4:130edcb3
         Events : 42942

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       4       8       35        1      active sync   /dev/sdc3
       3       8       51        2      active sync   /dev/sdd3
       5       8       19        3      active sync   /dev/sdb3
  1. Install the new disks [x]
  2. Copy the partition tables [x]
  3. Create a new 3-device RAID5 with 1 missing device [x]
  4. Copy the data [x]
  5. Move the large disk from the old RAID into the new one [x]
  6. Move the system partitions onto the new disks [ ]

Partitioning after the changes

Partition tables after dd if=/dev/sda of=/dev/sdz bs=8M count=10
nas ~ # fdisk -l
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA
Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID
Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C767012E-AA8B-4ACE-AD4A-8ADCAB4300D5
Device           Start          End   Size Type
/dev/sdb1         2048      1026047   500M Linux RAID
/dev/sdb2      1026048     53460000    25G Linux RAID
/dev/sdb3     53460992   2930277134   1.3T Linux RAID
Disk /dev/sdc: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B660130-B8DD-4BA3-9D8C-615F929B0433
Device           Start          End   Size Type
/dev/sdc1         2048      1026047   500M Linux RAID
/dev/sdc2      1026048     53460000    25G Linux RAID
/dev/sdc3     53460992   2930277134   1.3T Linux RAID
Disk /dev/sdd: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D01BE9B6-18F2-4C99-B30B-544B0C9BFA62
 
Device           Start          End   Size Type
/dev/sdd1         2048      1026047   500M Linux RAID
/dev/sdd2      1026048     53460000    25G Linux RAID
/dev/sdd3     53460992   2930277134   1.3T Linux RAID
 
 
Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA
 
Device           Start          End   Size Type
/dev/sde1         2048      1026047   500M EFI System
/dev/sde2      1026048      2050047   500M Linux RAID
/dev/sde3      2050048    106907647    50G Linux RAID
/dev/sde4    106907648   5860533134   2.7T Linux RAID
 
 
Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA
 
Device           Start          End   Size Type
/dev/sdf1         2048      1026047   500M EFI System
/dev/sdf2      1026048      2050047   500M Linux RAID
/dev/sdf3      2050048    106907647    50G Linux RAID
/dev/sdf4    106907648   5860533134   2.7T Linux RAID
 
 
Disk /dev/md2: 75 GiB, 80533782528 bytes, 157292544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
 
Disk /dev/md0: 500 MiB, 524275712 bytes, 1023976 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/md3: 4 TiB, 4418775810048 bytes, 8630421504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
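The dd copy above (bs=8M count=10) clones only the start of the disk, i.e. the protective MBR and primary GPT; the backup GPT at the end of each target stays stale, which is exactly the "invalid backup GPT header" warning gdisk printed at the top of these notes. sgdisk can replicate the table cleanly and give the clone fresh GUIDs. A dry-run sketch with a hypothetical target:

```shell
# Dry-run: replicate sda's partition table and give the clone fresh GUIDs.
run() { echo "$@"; }              # drop the echo to execute
SRC=/dev/sda TARGET=/dev/sdX     # TARGET is a placeholder
run sgdisk -R "$TARGET" "$SRC"   # note the order: -R <target> <source>
run sgdisk -G "$TARGET"          # randomize disk/partition GUIDs, avoids collisions
```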

Creating the new RAID device

nas ~ #  mdadm --create --verbose /dev/md4 --level=5 --raid-devices=3 missing /dev/sde4 /dev/sdf4
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2876681216K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md4 started.
nas ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sdf4[2] sde4[1]
      5753362432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
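Creating the array with a missing member is the standard trick for migrating in place: the two new disks carry a degraded RAID5, data is copied over, then the old disk joins as the third member and triggers the rebuild. The whole pattern as a dry-run sketch (device names as above; the mkfs step is an assumption, it does not appear in the original log):

```shell
# Dry-run of the migrate-via-degraded-array pattern used above.
run() { echo "$@"; }    # drop the echo to execute
run mdadm --create /dev/md4 --level=5 --raid-devices=3 missing /dev/sde4 /dev/sdf4
run mkfs.ext4 /dev/md4                        # assumption: filesystem type not shown above
run rsync -av /mnt/raid_datenpartition/ /mnt/newraid/
run mdadm --manage /dev/md4 --add /dev/sda4   # old disk becomes the third member
```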

Copying the data

nas newraid # rsync -av /mnt/raid_datenpartition/* /mnt/newraid/
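Note that rsync -av src/* dst/ skips dotfiles at the top level, because the shell glob never matches them; rsync -av src/ dst/ copies everything. A self-contained demo of the glob pitfall, using cp so it runs anywhere:

```shell
# Demo: 'src/*' misses top-level dotfiles, 'src/.' does not.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p src dst1 dst2
touch src/.hidden src/visible
cp -a src/* dst1/           # glob expands to 'src/visible' only; .hidden skipped
cp -a src/. dst2/           # copies hidden files too
[ ! -e dst1/.hidden ] && [ -e dst2/.hidden ] && echo "glob skipped .hidden"
```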

Moving the large disk from the old RAID

nas ~ # mdadm --verbose --manage --set-faulty /dev/md3 /dev/sda4
mdadm: set /dev/sda4 faulty in /dev/md3
nas ~ # mdadm --verbose --manage /dev/md3 --remove /dev/sda4
mdadm: hot removed /dev/sda4 from /dev/md3

nas ~ # fdisk /dev/sda
Welcome to fdisk (util-linux 2.24).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID

Command (m for help): d
Partition number (1-5, default 5):

Partition 5 has been deleted.

Command (m for help): d
Partition number (1-4, default 4):

Partition 4 has been deleted.

Command (m for help): n
Partition number (4-128, default 4):
First sector (34-5860533134, default 106907648):
Last sector, +sectors or +size{K,M,G,T,P} (106907648-5860533134, default 5860533134):

Created a new partition 4 of type 'Linux filesystem' and of size 2.7 TiB.

Command (m for help): t
Partition number (1-4, default 4):
Partition type (type L to list all types): 14

Changed type of partition 'Linux filesystem' to 'Linux RAID'.

Command (m for help): p
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   5860533134   2.7T Linux RAID

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

nas ~ # partprobe
nas ~ # mdadm --verbose --manage --add /dev/md4 /dev/sda4
mdadm: added /dev/sda4
nas ~ # mdadm -D /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 2876681216 (2743.42 GiB 2945.72 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Dec 10 19:38:06 2013
          State : active, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : nas:4  (local to host nas)
           UUID : f6924d42:7358df85:416f97f9:83aeac62
         Events : 5393

    Number   Major   Minor   RaidDevice State
       3       8        4        0      spare rebuilding   /dev/sda4
       1       8       68        1      active sync   /dev/sde4
       2       8       84        2      active sync   /dev/sdf4

Moving the system partitions onto the new disks

nas ~ # mdadm --verbose --manage --add /dev/md2 /dev/sde3
mdadm: added /dev/sde3
nas ~ # mdadm --verbose --manage --add /dev/md2 /dev/sdf3
mdadm: added /dev/sdf3
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Dec 10 19:44:51 2013
          State : clean
 Active Devices : 4
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1043

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       50        1      active sync   /dev/sdd2
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2

       5       8       67        -      spare   /dev/sde3
       6       8       83        -      spare   /dev/sdf3
nas ~ # mdadm --verbose --manage --set-faulty /dev/md2 /dev/sdd2
mdadm: set /dev/sdd2 faulty in /dev/md2
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Dec 10 19:46:24 2013
          State : clean, degraded, resyncing (DELAYED)
 Active Devices : 3
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1045

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      spare rebuilding   /dev/sdf3
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2

       1       8       50        -      faulty   /dev/sdd2
       5       8       67        -      spare   /dev/sde3

:!::!::!: Wait for the rebuild to finish :!::!::!:
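A small helper (not part of the original notes) for watching the progress: it pulls the percentage out of `mdadm -D` output, e.g. `mdadm -D /dev/md2 | rebuild_pct` in a loop. Demonstrated here on a line captured from the transcript above:

```shell
#!/bin/sh
# Extract the "Rebuild Status" value from mdadm -D output.
rebuild_pct() { awk -F': *' '/Rebuild Status/ {print $2}'; }

# Demo on a captured sample line (real use: mdadm -D /dev/md2 | rebuild_pct):
echo ' Rebuild Status : 4% complete' | rebuild_pct   # -> 4% complete
```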

Current status
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 18:58:28 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 1
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10124
 
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2
 
       1       8       50        -      faulty   /dev/sdd2
       5       8       67        -      spare   /dev/sde3
Mark the next disk as faulty and wait for the rebuild
nas ~ # mdadm --verbose --manage --set-faulty /dev/md2 /dev/sdc2
mdadm: set /dev/sdc2 faulty in /dev/md2
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:02:46 2013
          State : active, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 2
  Spare Devices : 1
 
         Layout : left-symmetric
     Chunk Size : 512K
 
 Rebuild Status : 4% complete
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10129
 
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      spare rebuilding   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
 
       1       8       50        -      faulty   /dev/sdd2
       2       8       34        -      faulty   /dev/sdc2
Remove the failed devices
nas ~ # mdadm --verbose --manage /dev/md2 --remove /dev/sdd2 /dev/sdc2
mdadm: hot removed /dev/sdd2 from /dev/md2
mdadm: hot removed /dev/sdc2 from /dev/md2
Move the boot partition
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Tue Dec 10 19:35:24 2013
          State : clean
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 178
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1
 
       3       8       33        -      spare   /dev/sdc1
       4       8       17        -      spare   /dev/sdb1
nas ~ # mdadm --verbose --manage /dev/md0 --remove /dev/sdb1 /dev/sdc1
mdadm: hot removed /dev/sdb1 from /dev/md0
mdadm: hot removed /dev/sdc1 from /dev/md0
nas ~ # mdadm --verbose --manage /dev/md0 --add /dev/sdf1 /dev/sde1
mdadm: added /dev/sdf1
mdadm: added /dev/sde1
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:53:51 2013
          State : clean
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 182
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1
 
       3       8       81        -      spare   /dev/sdf1
       4       8       65        -      spare   /dev/sde1
nas ~ # mdadm --verbose --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:55:33 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 201
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       4       8       65        1      active sync   /dev/sde1
 
       2       8       49        -      faulty   /dev/sdd1
       3       8       81        -      spare   /dev/sdf1
nas ~ # mdadm --verbose --manage /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
Device section for /etc/mdadm.conf
ARRAY /dev/md2 metadata=1.2 name=localhost.localdomain:2 UUID=a9d908f3:f8632fcf:59a69085:83d5f6aa
ARRAY /dev/md0 metadata=1.0 spares=1 name=localhost.localdomain:0 UUID=262c8e68:c7c4fa6d:19af27e8:0cfc68a1
ARRAY /dev/md4 metadata=1.2 name=nas:4 UUID=f6924d42:7358df85:416f97f9:83aeac62
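These ARRAY lines don't have to be written by hand — `mdadm --detail --scan` prints them for all running arrays; review the output and prune stale entries before putting it into /etc/mdadm.conf:

```shell
# Print ARRAY lines for every running array; redirect into /etc/mdadm.conf
# only after checking that no stale entries remain.
mdadm --detail --scan
```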
Array details
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:56:04 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 202
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       4       8       65        1      active sync   /dev/sde1
 
       3       8       81        -      spare   /dev/sdf1
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 20:04:01 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10193
 
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
nas ~ # mdadm -D /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 2876681216 (2743.42 GiB 2945.72 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
 
  Intent Bitmap : Internal
 
    Update Time : Wed Dec 11 09:35:53 2013
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : nas:4  (local to host nas)
           UUID : f6924d42:7358df85:416f97f9:83aeac62
         Events : 15769
 
    Number   Major   Minor   RaidDevice State
       3       8        4        0      active sync   /dev/sda4
       1       8       68        1      active sync   /dev/sde4
       2       8       84        2      active sync   /dev/sdf4

Actual migration of the / partition

  • boot grml from a USB stick
  • resize2fs
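The shrink then runs in this order; as a dry-run sketch (the `run` wrapper echoes each step instead of executing it — block counts and sizes are the ones used for this array):

```shell
#!/bin/sh
# Dry run of the whole shrink: echo each step instead of executing it.
# Block counts and sizes are specific to this array.
run() { echo "$*"; }
run e2fsck -fv /dev/md2                          # filesystem must be clean first
run resize2fs -p /dev/md2 6076945                # shrink fs well below the new array size
run mdadm --grow /dev/md2 --array-size 52430848  # shrink the array (size in KiB)
run mdadm --grow /dev/md2 -n 3 --backup-file=/root/md2.backup  # reshape 4 -> 3 devices
# ... wait for the reshape to finish, then:
run mdadm --manage /dev/md2 --remove /dev/sdb2   # drop the now-spare device
run resize2fs -p /dev/md2                        # grow fs to fill the shrunk array
```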
root@grml ~ # mdadm -D /dev/md/2
/dev/md/2:
        Version : 1.2
  Creation Time : Tue Aug 28 02:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Dec 11 19:34:29 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10193
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
root@grml ~ # resize2fs -P /dev/md2
resize2fs 1.42.8 (20-Jun-2013)
Estimated minimum size of the filesystem: 4376945
root@grml ~ # e2fsck -fv /dev/md2
e2fsck 1.42.8 (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
      607662 inodes used (12.34%, out of 4923392)
        1443 non-contiguous files (0.2%)
         229 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 600410/124
     4580213 blocks used (23.30%, out of 19661568)
           0 bad blocks
           1 large file
      545388 regular files
       54723 directories
         180 character device files
          97 block device files
           2 fifos
         597 links
        7262 symbolic links (6840 fast symbolic links)
           1 socket
------------
      608250 files
root@grml ~ # resize2fs -p /dev/md2 6076945
resize2fs 1.42.8 (20-Jun-2013)
Resizing the filesystem on /dev/md2 to 6076945 (4k) blocks.
Begin pass 2 (max = 2204238)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 601)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 4 (max = 56313)
Updating inode references     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/md2 is now 6076945 blocks long.
resize2fs -p /dev/md2 6076945  10.80s user 24.18s system 2% cpu 25:40.55 total
root@grml ~ # mdadm --verbose --grow /dev/md2 --array-size 52430848
root@grml ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 02:45:52 2012
     Raid Level : raid5
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Dec 11 20:15:58 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10195
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
root@grml ~ # mdadm --verbose --grow /dev/md2 -n 3 --backup-file=/root/md2.backup
mdadm: Need to backup 3072K of critical section..
root@grml ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 02:45:52 2012
     Raid Level : raid5
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Dec 11 20:29:46 2013
          State : clean, reshaping
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
 Reshape Status : 0% complete
  Delta Devices : -1, (4->3)
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10199
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
root@grml ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 02:45:52 2012
     Raid Level : raid5
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Dec 11 20:39:03 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10273
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3
       4       8       18        -      spare   /dev/sdb2
root@grml ~ # mdadm --verbose --manage /dev/md2 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2 from /dev/md2
root@grml ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 02:45:52 2012
     Raid Level : raid5
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Wed Dec 11 20:44:24 2013
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10274

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      active sync   /dev/sde3

root@grml ~ # e2fsck -nvf /dev/md2
e2fsck 1.42.8 (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

      607662 inodes used (39.88%, out of 1523712)
        2606 non-contiguous files (0.4%)
         226 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 599732/802
     4365533 blocks used (71.84%, out of 6076945)
           0 bad blocks
           1 large file

      545388 regular files
       54723 directories
         180 character device files
          97 block device files
           2 fifos
         597 links
        7262 symbolic links (6840 fast symbolic links)
           1 socket
------------
      608250 files

root@grml ~ # resize2fs -p /dev/md2
resize2fs 1.42.8 (20-Jun-2013)
Resizing the filesystem on /dev/md2 to 13107712 (4k) blocks.
The filesystem on /dev/md2 is now 13107200 blocks long.

→ 78646272 / 3 (data-bearing devices in the 4-disk RAID5) = 26215424; 26215424 * 2 (data-bearing devices in the future 3-disk array) = 52430848
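The same arithmetic as a quick shell check (sizes in KiB, as `mdadm -D` reports them; a 4-device RAID5 carries data on 3 devices, a 3-device one on 2):

```shell
#!/bin/sh
# Re-derive the --array-size argument used above.
old_array=78646272              # KiB, 4-device RAID5
dev_size=$((old_array / 3))     # payload per device
new_array=$((dev_size * 2))     # payload of the 3-device array
echo "$dev_size $new_array"     # -> 26215424 52430848
```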

Detailed mdadm information after the migration

Helpful if there are problems with mdadm:

nas ~ #  mdadm -Evvvvs
mdadm: No md superblock detected on /dev/md4.
mdadm: No md superblock detected on /dev/md0.
mdadm: No md superblock detected on /dev/md2.
/dev/sdc4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f6924d42:7358df85:416f97f9:83aeac62
           Name : nas:4  (local to host nas)
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 5753363343 (2743.42 GiB 2945.72 GB)
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 5753362432 (2743.42 GiB 2945.72 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=911 sectors
          State : clean
    Device UUID : 571c6397:7414485c:bc2f56b7:d96e1278

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Dec 15 17:01:47 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7e228231 - correct
         Events : 15769

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
           Name : localhost.localdomain:2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 104855552 (50.00 GiB 53.69 GB)
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 52430848 (25.00 GiB 26.84 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=52424704 sectors
          State : active
    Device UUID : cdfa2219:81384094:5f4cab42:ca201111

    Update Time : Mon Dec 16 22:04:43 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 5adf8d9c - correct
         Events : 10275

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdc2.
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
           Name : localhost.localdomain:0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1023976 (500.07 MiB 524.28 MB)
     Array Size : 511988 (500.07 MiB 524.28 MB)
   Super Offset : 1023984 sectors
          State : clean
    Device UUID : d5716487:f5940dab:47a8c91d:5add272a

    Update Time : Wed Dec 11 22:27:12 2013
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 383b258a - correct
         Events : 209


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdb4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f6924d42:7358df85:416f97f9:83aeac62
           Name : nas:4  (local to host nas)
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 5753363343 (2743.42 GiB 2945.72 GB)
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 5753362432 (2743.42 GiB 2945.72 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=911 sectors
          State : clean
    Device UUID : 4b7b8103:30e8cb92:a7820f4d:1ea29fad

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Dec 15 17:01:47 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : ec0b3b11 - correct
         Events : 15769

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
           Name : localhost.localdomain:2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 104855552 (50.00 GiB 53.69 GB)
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 52430848 (25.00 GiB 26.84 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=52424704 sectors
          State : active
    Device UUID : ce0a470b:360249bf:136e7cfa:b7ca891b

    Update Time : Mon Dec 16 22:04:43 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 3a5632f5 - correct
         Events : 10275

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdb2.
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
           Name : localhost.localdomain:0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1023976 (500.07 MiB 524.28 MB)
     Array Size : 511988 (500.07 MiB 524.28 MB)
   Super Offset : 1023984 sectors
          State : clean
    Device UUID : 5c8b5072:c1efebaa:21248884:16eb4c6e

    Update Time : Wed Dec 11 22:27:12 2013
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : cde92372 - correct
         Events : 209


   Device Role : spare
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sda4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f6924d42:7358df85:416f97f9:83aeac62
           Name : nas:4  (local to host nas)
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 5753363343 (2743.42 GiB 2945.72 GB)
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 5753362432 (2743.42 GiB 2945.72 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=911 sectors
          State : clean
    Device UUID : 23e771d6:94004c21:4b1d5ef6:de3a33dc

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Dec 15 17:01:47 2013
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 255df2b4 - correct
         Events : 15769

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
           Name : localhost.localdomain:2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 104855552 (50.00 GiB 53.69 GB)
     Array Size : 52430848 (50.00 GiB 53.69 GB)
  Used Dev Size : 52430848 (25.00 GiB 26.84 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=52424704 sectors
          State : clean
    Device UUID : 11727d2d:55248f2f:c6bfd477:6ee8c689

    Update Time : Mon Dec 16 22:04:43 2013
       Checksum : b8602b71 - correct
         Events : 10274

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
           Name : localhost.localdomain:0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1023976 (500.07 MiB 524.28 MB)
     Array Size : 511988 (500.07 MiB 524.28 MB)
   Super Offset : 1023984 sectors
   Unused Space : before=0 sectors, after=8 sectors
          State : clean
    Device UUID : d004dd2b:281bdf01:1ab30378:b08f7be0

    Update Time : Wed Dec 11 22:27:12 2013
       Checksum : 440afbe4 - correct
         Events : 209


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sda1:
   MBR Magic : aa55
/dev/sda:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

The current Windows virtio drivers are available at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/

Serial

web.py: get auto-increment value after db.insert

This is probably too simple, so no one ever bothered to write about it. (Or I couldn't come up with the proper keywords for Google.)

The situation is simple. I have a table where the primary key is some auto-incrementing ID number (in the case of postgresql, with the primary key called nid, the definition is nid serial primary key). After inserting a record into the table, I want to know what ID it has got.

Turns out it's very simple. I only need to pass a custom seqname to db.insert().

nid = db.insert("tablename", seqname="tablename_nid_seq", …)

For the serial column, postgresql creates a sequence automatically and assigns it a particularly constructed name. More details at FAQ: Using Sequences in PostgreSQL.

This is probably not the best style (I'm hard-coding the seqname in the source code), and I'm not even sure if it's the right way. But at least I can proceed with my prototyping and hacking now :)

 http://blog.zhangsen.org/2011/07/webpy-get-auto-increment-value-after.html
  • notes_brot.1417950880.txt.gz
  • Last modified: 2014/12/07 11:14
  • by brot