Notes from brot

Partitioning

nas ~ # fdisk -l

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID


Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C767012E-AA8B-4ACE-AD4A-8ADCAB4300D5

Device           Start          End   Size Type
/dev/sdb1         2048      1026047   500M Linux RAID
/dev/sdb2      1026048     53460000    25G Linux RAID
/dev/sdb3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdc: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B660130-B8DD-4BA3-9D8C-615F929B0433

Device           Start          End   Size Type
/dev/sdc1         2048      1026047   500M Linux RAID
/dev/sdc2      1026048     53460000    25G Linux RAID
/dev/sdc3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdd: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D01BE9B6-18F2-4C99-B30B-544B0C9BFA62

Device           Start          End   Size Type
/dev/sdd1         2048      1026047   500M Linux RAID
/dev/sdd2      1026048     53460000    25G Linux RAID
/dev/sdd3     53460992   2930277134   1.3T Linux RAID


Disk /dev/md2: 75 GiB, 80533782528 bytes, 157292544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

Disk /dev/md3: 4 TiB, 4418775810048 bytes, 8630421504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

Disk /dev/md0: 500 MiB, 524275712 bytes, 1023976 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

mdstat

nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Nov 25 21:35:39 2013
          State : clean 
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 178

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1

       3       8       33        -      spare   /dev/sdc1
       4       8       17        -      spare   /dev/sdb1
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec  2 21:01:28 2013
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1041

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       50        1      active sync   /dev/sdd2
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2
nas ~ # mdadm -D /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Fri Aug 24 04:23:04 2012
     Raid Level : raid5
     Array Size : 4315210752 (4115.31 GiB 4418.78 GB)
  Used Dev Size : 1438403584 (1371.77 GiB 1472.93 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Dec  1 19:08:30 2013
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:3
           UUID : cf205a21:16cfbf89:1b7442b4:130edcb3
         Events : 42942

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       4       8       35        1      active sync   /dev/sdc3
       3       8       51        2      active sync   /dev/sdd3
       5       8       19        3      active sync   /dev/sdb3
  1. Mount the new disks [x]
  2. Copy the partitions [x]
  3. Create a new 3-device RAID-5 with one missing device [x]
  4. Copy the data [x]
  5. Move the large disk from the old RAID into the new one [x]
  6. Move the system partitions to the new disks [ ]

Partitioning after adjustment

Partition tables after dd if=/dev/sda of=/dev/sdz bs=8M count=10
nas ~ # fdisk -l

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID


Disk /dev/sdb: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C767012E-AA8B-4ACE-AD4A-8ADCAB4300D5

Device           Start          End   Size Type
/dev/sdb1         2048      1026047   500M Linux RAID
/dev/sdb2      1026048     53460000    25G Linux RAID
/dev/sdb3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdc: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4B660130-B8DD-4BA3-9D8C-615F929B0433

Device           Start          End   Size Type
/dev/sdc1         2048      1026047   500M Linux RAID
/dev/sdc2      1026048     53460000    25G Linux RAID
/dev/sdc3     53460992   2930277134   1.3T Linux RAID


Disk /dev/sdd: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D01BE9B6-18F2-4C99-B30B-544B0C9BFA62
 
Device           Start          End   Size Type
/dev/sdd1         2048      1026047   500M Linux RAID
/dev/sdd2      1026048     53460000    25G Linux RAID
/dev/sdd3     53460992   2930277134   1.3T Linux RAID
 
 
Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA
 
Device           Start          End   Size Type
/dev/sde1         2048      1026047   500M EFI System
/dev/sde2      1026048      2050047   500M Linux RAID
/dev/sde3      2050048    106907647    50G Linux RAID
/dev/sde4    106907648   5860533134   2.7T Linux RAID
 
 
Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA
 
Device           Start          End   Size Type
/dev/sdf1         2048      1026047   500M EFI System
/dev/sdf2      1026048      2050047   500M Linux RAID
/dev/sdf3      2050048    106907647    50G Linux RAID
/dev/sdf4    106907648   5860533134   2.7T Linux RAID
 
 
Disk /dev/md2: 75 GiB, 80533782528 bytes, 157292544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
 
Disk /dev/md0: 500 MiB, 524275712 bytes, 1023976 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/md3: 4 TiB, 4418775810048 bytes, 8630421504 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
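A note on the dd-based copy: dd only writes the beginning of the disk, i.e. the protective MBR and the primary GPT. The backup GPT that belongs at the end of the disk is not created on the clone, and the clone keeps the source's disk GUID (visible above: sde and sdf report sda's Disk identifier). A cleaner alternative, assuming sgdisk is available (/dev/sdz stays the placeholder target):

```shell
# Replicate the GPT from sda onto the target, then give the clone
# fresh disk and partition GUIDs so identifiers stay unique.
sgdisk -R /dev/sdz /dev/sda
sgdisk -G /dev/sdz
```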

Creating the new RAID device

nas ~ #  mdadm --create --verbose /dev/md4 --level=5 --raid-devices=3 missing /dev/sde4 /dev/sdf4
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2876681216K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md4 started.
nas ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sdf4[2] sde4[1]
      5753362432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
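mdadm enabled a write-intent bitmap automatically because the array is large; the bitmap tracks dirty regions so a temporarily missing member only resyncs what changed instead of the whole device. It can also be toggled after creation (a sketch for this array):

```shell
# Remove or (re)add the internal write-intent bitmap on the new array.
# Only one of these would normally be run; both shown for illustration.
mdadm --grow --bitmap=none /dev/md4
mdadm --grow --bitmap=internal /dev/md4
```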

Copying the data

nas newraid # rsync -av /mnt/raid_datenpartition/* /mnt/newraid/

Moving the large disk out of the old RAID

nas ~ # mdadm --verbose --manage --set-faulty /dev/md3 /dev/sda4
mdadm: set /dev/sda4 faulty in /dev/md3
nas ~ # mdadm --verbose --manage /dev/md3 --remove /dev/sda4
mdadm: hot removed /dev/sda4 from /dev/md3

nas ~ # fdisk /dev/sda
Welcome to fdisk (util-linux 2.24).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   2983721397   1.3T Linux RAID
/dev/sda5   2983723008   5860533134   1.3T Linux RAID

Command (m for help): d
Partition number (1-5, default 5):

Partition 5 has been deleted.

Command (m for help): d
Partition number (1-4, default 4):

Partition 4 has been deleted.

Command (m for help): n
Partition number (4-128, default 4):
First sector (34-5860533134, default 106907648):
Last sector, +sectors or +size{K,M,G,T,P} (106907648-5860533134, default 5860533134):

Created a new partition 4 of type 'Linux filesystem' and of size 2.7 TiB.

Command (m for help): t
Partition number (1-4, default 4):
Partition type (type L to list all types): 14

Changed type of partition 'Linux filesystem' to 'Linux RAID'.

Command (m for help): p
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0C5623E2-2FF4-4787-9609-750745C161DA

Device           Start          End   Size Type
/dev/sda1         2048      1026047   500M EFI System
/dev/sda2      1026048      2050047   500M Linux RAID
/dev/sda3      2050048    106907647    50G Linux RAID
/dev/sda4    106907648   5860533134   2.7T Linux RAID

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).
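The interactive fdisk dialog above can also be expressed non-interactively; a sketch with sgdisk (fd00 is sgdisk's type code for Linux RAID; the sector values are taken from the session above):

```shell
# Drop partitions 5 and 4, recreate 4 over the freed space, and type it
# as Linux RAID (same result as the fdisk session).
sgdisk -d 5 -d 4 -n 4:106907648:5860533134 -t 4:fd00 /dev/sda
```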

nas ~ # partprobe
nas ~ # mdadm --verbose --manage --add /dev/md4 /dev/sda4
mdadm: added /dev/sda4
nas ~ # mdadm -D /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Mon Dec  9 22:06:12 2013
     Raid Level : raid5
     Array Size : 5753362432 (5486.83 GiB 5891.44 GB)
  Used Dev Size : 2876681216 (2743.42 GiB 2945.72 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Dec 10 19:38:06 2013
          State : active, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : nas:4  (local to host nas)
           UUID : f6924d42:7358df85:416f97f9:83aeac62
         Events : 5393

    Number   Major   Minor   RaidDevice State
       3       8        4        0      spare rebuilding   /dev/sda4
       1       8       68        1      active sync   /dev/sde4
       2       8       84        2      active sync   /dev/sdf4

Moving the system partitions to the new disks

nas ~ # mdadm --verbose --manage --add /dev/md2 /dev/sde3
mdadm: added /dev/sde3
nas ~ # mdadm --verbose --manage --add /dev/md2 /dev/sdf3
mdadm: added /dev/sdf3
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Dec 10 19:44:51 2013
          State : clean
 Active Devices : 4
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1043

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       50        1      active sync   /dev/sdd2
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2

       5       8       67        -      spare   /dev/sde3
       6       8       83        -      spare   /dev/sdf3
nas ~ # mdadm --verbose --manage --set-faulty /dev/md2 /dev/sdd2
mdadm: set /dev/sdd2 faulty in /dev/md2
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Dec 10 19:46:24 2013
          State : clean, degraded, resyncing (DELAYED)
 Active Devices : 3
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 1045

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      spare rebuilding   /dev/sdf3
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2

       1       8       50        -      faulty   /dev/sdd2
       5       8       67        -      spare   /dev/sde3

:!::!::!: Wait for the rebuild :!::!::!:
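Instead of checking by hand, the rebuild can be polled or waited on; a small sketch:

```shell
# Poll the resync progress (updates every 2 seconds by default):
watch cat /proc/mdstat

# Or block until /dev/md2 is clean again, which is convenient
# between successive set-faulty/rebuild rounds:
mdadm --wait /dev/md2
```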

Current status
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 18:58:28 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 1
 
         Layout : left-symmetric
     Chunk Size : 512K
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10124
 
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       2       8       34        2      active sync   /dev/sdc2
       4       8       18        3      active sync   /dev/sdb2
 
       1       8       50        -      faulty   /dev/sdd2
       5       8       67        -      spare   /dev/sde3
Mark the next disk as faulty and wait for the rebuild
nas ~ # mdadm --verbose --manage --set-faulty /dev/md2 /dev/sdc2
mdadm: set /dev/sdc2 faulty in /dev/md2
nas ~ # mdadm -D /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Tue Aug 28 04:45:52 2012
     Raid Level : raid5
     Array Size : 78646272 (75.00 GiB 80.53 GB)
  Used Dev Size : 26215424 (25.00 GiB 26.84 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:02:46 2013
          State : active, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 2
  Spare Devices : 1
 
         Layout : left-symmetric
     Chunk Size : 512K
 
 Rebuild Status : 4% complete
 
           Name : localhost.localdomain:2
           UUID : a9d908f3:f8632fcf:59a69085:83d5f6aa
         Events : 10129
 
    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       83        1      active sync   /dev/sdf3
       5       8       67        2      spare rebuilding   /dev/sde3
       4       8       18        3      active sync   /dev/sdb2
 
       1       8       50        -      faulty   /dev/sdd2
       2       8       34        -      faulty   /dev/sdc2
Removing the devices
nas ~ # mdadm --verbose --manage /dev/md2 --remove /dev/sdd2 /dev/sdc2
mdadm: hot removed /dev/sdd2 from /dev/md2
mdadm: hot removed /dev/sdc2 from /dev/md2
Moving the boot partition
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Tue Dec 10 19:35:24 2013
          State : clean
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 178
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1
 
       3       8       33        -      spare   /dev/sdc1
       4       8       17        -      spare   /dev/sdb1
nas ~ # mdadm --verbose --manage /dev/md0 --remove /dev/sdb1 /dev/sdc1
mdadm: hot removed /dev/sdb1 from /dev/md0
mdadm: hot removed /dev/sdc1 from /dev/md0
nas ~ # mdadm --verbose --manage /dev/md0 --add /dev/sdf1 /dev/sde1
mdadm: added /dev/sdf1
mdadm: added /dev/sde1
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:53:51 2013
          State : clean
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 182
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       49        1      active sync   /dev/sdd1
 
       3       8       81        -      spare   /dev/sdf1
       4       8       65        -      spare   /dev/sde1
nas ~ # mdadm --verbose --manage --set-faulty /dev/md0 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
nas ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Fri Aug 24 04:18:31 2012
     Raid Level : raid1
     Array Size : 511988 (500.07 MiB 524.28 MB)
  Used Dev Size : 511988 (500.07 MiB 524.28 MB)
   Raid Devices : 2
  Total Devices : 4
    Persistence : Superblock is persistent
 
    Update Time : Wed Dec 11 19:55:33 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
 
           Name : localhost.localdomain:0
           UUID : 262c8e68:c7c4fa6d:19af27e8:0cfc68a1
         Events : 201
 
    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       4       8       65        1      active sync   /dev/sde1
 
       2       8       49        -      faulty   /dev/sdd1
       3       8       81        -      spare   /dev/sdf1
nas ~ # mdadm --verbose --manage /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
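After all this reshuffling, any array list recorded in mdadm's configuration no longer matches reality. A hedged sketch for regenerating it (the path /etc/mdadm.conf is an assumption; Debian-based systems use /etc/mdadm/mdadm.conf):

```shell
# Capture the current array layout into a staging file for review:
mdadm --detail --scan > /tmp/mdadm.conf.new
# After reviewing, replace the old config and, if the system boots
# from these arrays, rebuild the initramfs so it picks up the change.
```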


The current Windows virtio drivers are available at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/

Serial

web.py: get auto-increment value after db.insert

This is probably too simple, so no one ever bothered to write about it. (Or I couldn't come up with the proper keywords for Google).

The situation is simple. I have a table whose primary key is an auto-incrementing ID number (in PostgreSQL, with the primary key called nid, the definition is nid serial primary key). After inserting a record into the table, I want to know what ID it got.

It turns out to be very simple: I only need to pass a custom seqname to db.insert().

nid = db.insert("tablename", seqname="tablename_nid_seq", …)

For a serial column, PostgreSQL automatically creates a sequence and gives it a specially constructed name. More details in the FAQ: Using Sequences in PostgreSQL.

This is probably not the best style (I'm hard-coding the seqname in the source code), and I'm not even sure if it's the right way. But at least I can proceed with my prototyping and hacking now :)
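Instead of hard-coding the string, the name can be derived, since PostgreSQL names the implicit sequence of a serial column <table>_<column>_seq. A small sketch (the default_seqname helper is my own, not part of web.py, and the trailing truncation is a simplification of PostgreSQL's real identifier shortening):

```python
def default_seqname(table: str, column: str) -> str:
    """Build the name PostgreSQL gives the implicit sequence of a serial
    column: <table>_<column>_seq.  PostgreSQL limits identifiers to 63
    bytes; this sketch simply cuts the result, whereas PostgreSQL
    shortens the individual parts."""
    return f"{table}_{column}_seq"[:63]

# Usage with web.py's db.insert (db setup omitted):
# nid = db.insert("tablename", seqname=default_seqname("tablename", "nid"), ...)
```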

 http://blog.zhangsen.org/2011/07/webpy-get-auto-increment-value-after.html
  • Last modified: 2013/12/11 18:59
  • by brot