Pagan festival parties

The winter holiday season is usually a quiet affair for me, as friends and family are dispersed all over the planet. I do not follow a religion, although I do appreciate the sun rising every day. I have cats to keep me company and I am also looking after my neighbour's cat for a few days.

Winter is when I do less cycling outside, especially when there is a risk of ice. Instead I work on my core muscles with yoga and occasionally brave the outdoors for a Park Run. I have had a foot injury which has taken a few months to heal (as I have been very active), so Park Run events for me will be mostly a brisk walk (maybe running the last 50 meters).

I started to consider what this year has meant for me by looking at other people's 2025 round-ups. The most enjoyable round-up so far is Kermode & Mayo's Best and Worst Films of the Year, which also included the best TV shows (that Mark Kermode has watched) of the year.

I have a Network Attached Storage (NAS) device at home as a convenient central backup for work and media files. The NAS was configured with a RAID 6 array, but unfortunately two disks in the array had failed. I discovered that replacing 2 disks in the RAID array was not necessarily a safe option, especially if yet another disk became faulty during the rebuild (which can take many hours).

I am attempting again to recover the RAID array. If that fails, I will rebuild the NAS with two separate RAID 6 arrays, further increasing the resilience of the storage in case of hardware issues.

Eventually I will replace the 'disk' drives with solid state storage devices (SSD), especially as a 4TB SSD can be found for around 160 GBP. Using SSDs rather than spinning 'disks' also makes the NAS device really quiet.

NOTE: I have other backup mechanisms, as RAID should not be used as the sole backup solution.

Entertainment this week included:

  • Blake's 7 season 3 (usually watched as I fall asleep)
  • Slow Horses season 5 - excellent show that keeps me gripped (watched the first 4 episodes in one night)
  • Pluribus - wonderfully funny and quirky show (I am not sure if I am rooting for the right protagonists)
  • Foundation seasons 1 & 2 - watched again in preparation for season 3 this week
  • Dr Who - watching Jodie Whittaker as the Thirteenth Doctor and really enjoying the depth of the stories portrayed

NAS recovery

The RAID array status was viewed in the Web Console via the Storage Management app. The RAID array was encrypted, so it was first unlocked by providing a passphrase.
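
For reference, unlocking an encrypted volume from the command line would look something like the sketch below, assuming the NAS layers LUKS encryption on top of the md device (the mapper name is an illustrative assumption, not taken from my NAS).

Unlock a LUKS-encrypted md device (illustrative)

# prompt for the passphrase and map the decrypted device
# (md0_crypt is an assumed mapper name for illustration)
cryptsetup luksOpen /dev/md0 md0_crypt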

The RAID array showed as unmounted, with 'Drive 2' listed as a single disk (not part of the RAID 6 array). Drive 2 was selected and made a global spare, without any noticeable effect on the RAID array.

Maintenance can be done via the command line, using SSH to connect to the NAS.

I have the ssh command aliased to kitty +kitten ssh so that the remote shell is set up with all the kitty configuration. However, as the NAS does not (and cannot) have kitty installed, I must bypass the alias and use the original ssh command.

A simple way to use the original ssh command is to use the full path.

Run SSH without Kitty Alias

/usr/bin/ssh admin@192.168.0.25
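
Alternatively, shells such as Bash and Zsh can bypass an alias without the full path: a leading backslash suppresses alias expansion, and the command builtin runs the real executable.

Bypass a shell alias without the full path

# backslash prefix skips alias expansion
\ssh admin@192.168.0.25

# the command builtin also ignores aliases
command ssh admin@192.168.0.25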

Once on the NAS, I checked the state of the RAID array, which showed it was clean but in a degraded state. This typically means the RAID array must be rebuilt.

mdadm --detail /dev/md0

Adding 'Drive 2' to the RAID array triggered a rebuild of the array. In several hours (with luck) the RAID array should rebuild itself 🤞

mdadm --add /dev/md0 /dev/sdb3
SSH session on NAS
[~] # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Sat Jan 23 19:04:18 2021
     Raid Level : raid6
     Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Dec 23 12:26:20 2025
          State : clean, degraded
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : 0
           UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
         Events : 32003

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       8       8       83        5      active sync   /dev/sdf3
       6       8       99        6      active sync   /dev/sdg3
       7       8      115        7      active sync   /dev/sdh3
[~] # e2fsck_64 -fp -C 0 /dev/md0
/dev/md0 is in use.
e2fsck: Cannot continue, aborting.


[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid6 sda3[0] sdh3[7] sdg3[6] sdf3[8] sde3[4] sdd3[3] sdc3[2]
        17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/7] [U_UUUUUU]

md8 : active raid1 sdh2[8](S) sdg2[7](S) sdf2[6](S) sde2[5](S) sdd2[4](S) sdc2[3](S) sdb2[2] sda2[0]
        530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sdb4[0] sda4[7] sdc4[6] sdd4[5] sde4[4] sdf4[3] sdg4[2] sdh4[1]
        458880 blocks [8/8] [UUUUUUUU]
        bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdh1[7] sdg1[6] sdc1[5] sdd1[4] sde1[3] sdf1[2] sdb1[1]
        530048 blocks [8/8] [UUUUUUUU]
        bitmap: 0/65 pages [0KB], 4KB chunk

unused devices: <none>
[~] # mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Sat Jan 23 19:04:18 2021
     Raid Level : raid6
     Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 8
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Dec 23 12:27:55 2025
          State : clean, degraded
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : 0
           UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
         Events : 32017

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       8       8       83        5      active sync   /dev/sdf3
       6       8       99        6      active sync   /dev/sdg3
       7       8      115        7      active sync   /dev/sdh3
[~] # mdadm --readwrite /dev/md0
mdadm: failed to set writable for /dev/md0: Device or resource busy
[~] # mdadm --add /dev/md0 /dev/sdb3
mdadm: added /dev/sdb3
[~] # mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Sat Jan 23 19:04:18 2021
     Raid Level : raid6
     Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Dec 23 12:31:03 2025
          State : clean, degraded, recovering
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

     Chunk Size : 64K

 Rebuild Status : 0% complete

           Name : 0
           UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
         Events : 32045

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       9       8       19        1      spare rebuilding   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       8       8       83        5      active sync   /dev/sdf3
       6       8       99        6      active sync   /dev/sdg3
       7       8      115        7      active sync   /dev/sdh3
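
While the rebuild runs, progress can be monitored from /proc/mdstat. A minimal sketch, assuming the watch command is available on the NAS firmware:

Monitor RAID rebuild progress

# refresh the rebuild progress display every 60 seconds
watch -n 60 cat /proc/mdstat

# optionally raise the minimum rebuild speed
# (value in KB/s per disk; 50000 is only an example)
echo 50000 > /proc/sys/dev/raid/speed_limit_min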

Leaving the NAS running overnight, the RAID array reports a clean status. Hopefully a reboot will return the NAS to full operation 🤞

NAS RAID status after adding drive
[~] # mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Sat Jan 23 19:04:18 2021
     Raid Level : raid6
     Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Dec 24 15:29:51 2025
          State : clean
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           Name : 0
           UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
         Events : 40298

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       9       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       8       8       83        5      active sync   /dev/sdf3
       6       8       99        6      active sync   /dev/sdg3
       7       8      115        7      active sync   /dev/sdh3

The RAID array is still showing as unmounted and the ext filesystem check command (e2fsck) will not run, so it seems the RAID array is toast.
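
For reference, the 'in use' error from e2fsck usually means the device is still mounted or held open by a process. On a generic Linux system the checks would look like this sketch (availability of fuser on the NAS firmware is an assumption):

Check what is holding the md device busy

# is the array mounted anywhere?
mount | grep md0

# list processes holding the device open
fuser -vm /dev/md0

# unmount, then retry the filesystem check
umount /dev/md0
e2fsck_64 -fp -C 0 /dev/md0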

I removed the RAID array and formatted the drives (quick format). Next I will rebuild the array and restore backups.
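
For reference, tearing down an md array manually involves stopping it and clearing the superblock on each member partition. A sketch assuming the device names from the earlier output (I used the Web Console for this step):

Stop an md array and clear member superblocks (illustrative)

# stop the array
mdadm --stop /dev/md0

# clear the md superblock (repeat for each member partition, sda3 to sdh3)
mdadm --zero-superblock /dev/sda3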

Create RAID Arrays

There are 8 disks in the NAS so I am creating two RAID 6 arrays. A RAID 6 array requires a minimum of 4 disks.

This approach means that each RAID 6 array can tolerate up to 2 disk failures.

I will lose more capacity to parity, but I should have more than enough room left for all the content I wish to store locally.
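
To put numbers on the trade-off: RAID 6 usable capacity is (number of disks - 2) x disk size, as two disks' worth of space holds parity. The original 8-disk array of 3TB drives gave (8 - 2) x 3TB = 18TB usable (matching the ~17,994 GB Array Size reported by mdadm above), while two 4-disk arrays give 2 x (4 - 2) x 3TB = 12TB usable (each array reports ~5,998 GB below), so the split costs 6TB of capacity.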

Once the two RAID 6 arrays were defined, the NAS formatted them and started synchronising each array. It seems it will take about two days to synchronise the drives.
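
I defined the arrays through the Web Console, but the equivalent manual command would look something like this sketch (device names and chunk size are taken from the mdadm output below; the metadata version matches the reported superblock format):

Create a 4-disk RAID 6 array (illustrative)

mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      --chunk=64 --metadata=1.0 \
      /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3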

QNAP NAS RAID synchronising status
[admin@NASCDDF53 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Thu Dec 25 00:21:41 2025
     Raid Level : raid6
     Array Size : 5857395072 (5586.05 GiB 5997.97 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Dec 26 18:19:14 2025
          State : active, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

 Rebuild Status : 75% complete

           Name : 0
           UUID : db736564:54a89199:2a50ccda:2c209b95
         Events : 12

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       83        1      active sync   /dev/sdf3
       2       8       99        2      active sync   /dev/sdg3
       3       8      115        3      active sync   /dev/sdh3
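
The second array (presumably /dev/md1 on this firmware, an assumption as it does not appear in the output above) can be checked the same way once its synchronisation starts:

mdadm --detail /dev/md1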

Thank you.
