Updating Neovim on Termux
After a lovely sunny weekend it's back to grey and chilly outside. Later in the week it was sunny but really cold (freezing overnight).
Lots of software and tool updates this week, including:
- Termux and Neovim to migrate to the changed packaging of Treesitter.
- Hyprland & Arch Linux
Backups continue for the Lenovo X1 Extreme laptop, which is running an old Ubuntu release and will move to Debian Linux soon.
In the Clojurians Slack community, a post in the news-and- channel highlighted a security risk in the tj-actions GitHub action. I don't use this GitHub action, but as a precaution I configured a more restrictive policy for running GitHub Actions in the Practicalli org.
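For reference, such a policy can also be set via the GitHub REST API; a hedged sketch using the gh CLI (the allow-list pattern below is a hypothetical example, not the exact policy I applied):

```shell
# Sketch only: restrict the org to GitHub-owned actions plus an allow-list.
gh api --method PUT /orgs/practicalli/actions/permissions \
  -f enabled_repositories=all -f allowed_actions=selected
gh api --method PUT /orgs/practicalli/actions/permissions/selected-actions \
  -F github_owned_allowed=true -F verified_allowed=false \
  -f "patterns_allowed[]=practicalli/*"   # hypothetical allow-list pattern
```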
Practicalli
Applied for Clojurists Together funding for 2025
- Clojure Stacks - an experimental project, to create the initial prototype
- Practical.li - update videos and practicalli books
Created a GitHub Project to capture tickets for the potential Clojurists Together sponsorship.
Termux & Neovim
A dist-upgrade was required to upgrade Neovim and Emacs on Termux (Android 11).
The libtreesitter package has been superseded and split into several packages: tree-sitter, tree-sitter-c, tree-sitter-nvim and tree-sitter-parsers.
A dist-upgrade allows the libtreesitter dependency to be removed from Neovim and Emacs, enabling all packages to be upgraded.
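On Termux this boils down to two apt commands; the dist-upgrade permits the package replacement that a plain upgrade would hold back:

```shell
apt update          # refresh the Termux package lists
apt dist-upgrade    # upgrade, allowing the libtreesitter -> tree-sitter* replacement
```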
Treesitter package versions
The tree-sitter packages are newer than the libtreesitter package.
```shell
→ apt show libtreesitter
Package: libtreesitter
Version: 0.22.6-1
Maintainer: Joshua Kahn @TomJo2000
Installed-Size: 13.0 MB
Homepage: https://github.com/tree-sitter/tree-sitter
Download-Size: 1662 kB
APT-Sources: https://mirror.accum.se/mirror/termux.dev/termux-main stable/main aarch64 Packages
Description: An incremental parsing system for programming tools
→ apt show tree-sitter
Package: tree-sitter
Version: 0.25.3
Maintainer: Joshua Kahn @TomJo2000
Installed-Size: 8049 kB
Breaks: libtreesitter
Replaces: libtreesitter
Homepage: https://github.com/tree-sitter/tree-sitter
Download-Size: 2008 kB
APT-Manual-Installed: no
APT-Sources: https://mirror.accum.se/mirror/termux.dev/termux-main stable/main aarch64 Packages
Description: An incremental parsing system for programming tools
```
better-escape.nvim rewrite suggests config change

A notification with this suggestion shows when starting astro on Termux.
Hyprland & Arch Linux
I continue to run Hyprland on a spare laptop and am getting comfortable maintaining Arch Linux.
Update the packages in Arch Linux. I still find the pacman options very cryptic; it's not as simple as remembering apt update && apt upgrade for Debian (and Termux).
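For my own notes, the rough pacman equivalent:

```shell
sudo pacman -Syu   # -S sync packages, -y refresh package databases, -u upgrade installed packages
```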
Update the Hyprland config using HyDE. Previously HyDE suggested a Hyde update command; now the README suggests using -r with the install script.
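A sketch of what that looks like, assuming a local clone of the HyDE repository in the home directory (paths and script location are assumptions, check the HyDE README):

```shell
cd ~/HyDE/Scripts   # assumed location of the HyDE clone and its install script
git pull            # fetch the latest HyDE configs
./install.sh -r     # -r restores/updates the configs rather than doing a fresh install
```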
HyDE recommends rebooting the system to ensure all the new config is loaded.
NAS recovery
Continuing my journey to recover the RAID array in my QNAP NAS device.
The failed drive in bay 2 was removed and bad block checks were successfully run last week.
The NAS Web UI is still showing the RAID array as unmounted.
Checking the logs

```shell
[~] # dmesg
[ 106.302827] e1000e 0000:02:00.0: irq 48 for MSI/MSI-X
[ 106.409838] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:08:9b:cd:df:54
[ 106.414332] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection
[ 106.418360] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
[ 106.422210] e1000e 0000:03:00.0: Disabling ASPM L0s L1
[ 106.426166] e1000e 0000:03:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 106.430155] e1000e 0000:03:00.0: irq 49 for MSI/MSI-X
[ 106.430167] e1000e 0000:03:00.0: irq 50 for MSI/MSI-X
[ 106.430177] e1000e 0000:03:00.0: irq 51 for MSI/MSI-X
[ 106.532695] e1000e 0000:03:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 00:08:9b:cd:df:53
[ 106.537343] e1000e 0000:03:00.0: eth1: Intel(R) PRO/1000 Network Connection
[ 106.541111] e1000e 0000:03:00.0: eth1: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
[ 106.548079] jnl: driver (lke_9.2.0 QNAP, LBD=OFF) loaded at ffffffffa0189000
[ 106.556462] ufsd: module license 'Commercial product' taints kernel.
[ 106.560021] Disabling lock debugging due to kernel taint
[ 106.566038] ufsd: driver (lke_9.2.0 QNAP, build_host("BuildServer36"), acl, ioctl, bdi, sd2(0), fua, bz, rsrc) loaded at ffffffffa0194000
[ 106.566044] NTFS support included
[ 106.566046] Hfs+/HfsJ support included
[ 106.566048] optimized: speed
[ 106.566049] Build_for__QNAP_Atom_x86_64_k3.4.6_2014-09-17_lke_9.2.0_r245986_b9
[ 106.566052]
[ 106.615160] fnotify: Load file notify kernel module.
[ 107.706397] usbcore: registered new interface driver snd-usb-audio
[ 107.714809] usbcore: registered new interface driver snd-usb-caiaq
[ 107.726534] Linux video capture interface: v2.00
[ 107.748392] usbcore: registered new interface driver uvcvideo
[ 107.753636] USB Video Class driver (1.1.1)
[ 107.903800] 8021q: 802.1Q VLAN Support v1.8
[ 109.316712] 8021q: adding VLAN 0 to HW filter on device eth0
[ 109.403472] 8021q: adding VLAN 0 to HW filter on device eth1
[ 111.825057] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 120.868582] kjournald starting. Commit interval 5 seconds
[ 120.883640] EXT3-fs (md9): using internal journal
[ 120.887628] EXT3-fs (md9): mounted filesystem with ordered data mode
[ 124.910971] md: bind<sda2>
[ 124.916288] md/raid1:md8: active with 1 out of 1 mirrors
[ 124.920234] md8: detected capacity change from 0 to 542851072
[ 125.931252] md8: unknown partition table
[ 128.107675] Adding 530124k swap on /dev/md8. Priority:-1 extents:1 across:530124k
[ 131.605796] md: bind<sdc2>
[ 131.626108] RAID1 conf printout:
[ 131.626119] --- wd:1 rd:2
[ 131.626127] disk 0, wo:0, o:1, dev:sda2
[ 131.626135] disk 1, wo:1, o:1, dev:sdc2
[ 131.626275] md: recovery of RAID array md8
[ 131.630328] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 131.634311] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 131.638422] md: using 128k window, over a total of 530128k.
[ 133.867974] md: bind<sdd2>
[ 136.067523] md: bind<sde2>
[ 137.252156] md: md8: recovery done.
[ 137.291211] RAID1 conf printout:
[ 137.291222] --- wd:2 rd:2
[ 137.291231] disk 0, wo:0, o:1, dev:sda2
[ 137.291238] disk 1, wo:0, o:1, dev:sdc2
[ 137.313658] RAID1 conf printout:
[ 137.313666] --- wd:2 rd:2
[ 137.313674] disk 0, wo:0, o:1, dev:sda2
[ 137.313681] disk 1, wo:0, o:1, dev:sdc2
[ 137.313687] RAID1 conf printout:
[ 137.313692] --- wd:2 rd:2
[ 137.313698] disk 0, wo:0, o:1, dev:sda2
[ 137.313705] disk 1, wo:0, o:1, dev:sdc2
[ 139.034596] md: bind<sdf2>
[ 139.076343] RAID1 conf printout:
[ 139.076354] --- wd:2 rd:2
[ 139.076363] disk 0, wo:0, o:1, dev:sda2
[ 139.076370] disk 1, wo:0, o:1, dev:sdc2
[ 139.076376] RAID1 conf printout:
[ 139.076381] --- wd:2 rd:2
[ 139.076388] disk 0, wo:0, o:1, dev:sda2
[ 139.076395] disk 1, wo:0, o:1, dev:sdc2
[ 139.076401] RAID1 conf printout:
[ 139.076406] --- wd:2 rd:2
[ 139.076412] disk 0, wo:0, o:1, dev:sda2
[ 139.076419] disk 1, wo:0, o:1, dev:sdc2
[ 141.156135] md: bind<sdg2>
[ 142.175044] RAID1 conf printout:
[ 142.175054] --- wd:2 rd:2
[ 142.175062] disk 0, wo:0, o:1, dev:sda2
[ 142.175069] disk 1, wo:0, o:1, dev:sdc2
[ 142.175075] RAID1 conf printout:
[ 142.175080] --- wd:2 rd:2
[ 142.175086] disk 0, wo:0, o:1, dev:sda2
[ 142.175093] disk 1, wo:0, o:1, dev:sdc2
[ 142.175098] RAID1 conf printout:
[ 142.175103] --- wd:2 rd:2
[ 142.175109] disk 0, wo:0, o:1, dev:sda2
[ 142.175116] disk 1, wo:0, o:1, dev:sdc2
[ 142.175122] RAID1 conf printout:
[ 142.175127] --- wd:2 rd:2
[ 142.175133] disk 0, wo:0, o:1, dev:sda2
[ 142.175140] disk 1, wo:0, o:1, dev:sdc2
[ 144.518633] md: bind<sdh2>
[ 145.315446] RAID1 conf printout:
[ 145.315470] --- wd:2 rd:2
[ 145.315479] disk 0, wo:0, o:1, dev:sda2
[ 145.315487] disk 1, wo:0, o:1, dev:sdc2
[ 145.315493] RAID1 conf printout:
[ 145.315498] --- wd:2 rd:2
[ 145.315505] disk 0, wo:0, o:1, dev:sda2
[ 145.315511] disk 1, wo:0, o:1, dev:sdc2
[ 145.315517] RAID1 conf printout:
[ 145.315522] --- wd:2 rd:2
[ 145.315528] disk 0, wo:0, o:1, dev:sda2
[ 145.315536] disk 1, wo:0, o:1, dev:sdc2
[ 145.315541] RAID1 conf printout:
[ 145.315546] --- wd:2 rd:2
[ 145.315552] disk 0, wo:0, o:1, dev:sda2
[ 145.315559] disk 1, wo:0, o:1, dev:sdc2
[ 145.315565] RAID1 conf printout:
[ 145.315570] --- wd:2 rd:2
[ 145.315576] disk 0, wo:0, o:1, dev:sda2
[ 145.315583] disk 1, wo:0, o:1, dev:sdc2
[ 171.880718] md: md0 stopped.
[ 171.894872] md: md0 stopped.
[ 172.055374] md: bind<sdc3>
[ 172.059189] md: bind<sdd3>
[ 172.062935] md: bind<sde3>
[ 172.066654] md: bind<sdf3>
[ 172.070290] md: bind<sdg3>
[ 172.073924] md: bind<sdh3>
[ 172.077422] md: bind<sda3>
[ 172.081879] md/raid:md0: device sda3 operational as raid disk 0
[ 172.085039] md/raid:md0: device sdh3 operational as raid disk 7
[ 172.088108] md/raid:md0: device sdg3 operational as raid disk 6
[ 172.091028] md/raid:md0: device sdf3 operational as raid disk 5
[ 172.093784] md/raid:md0: device sde3 operational as raid disk 4
[ 172.096504] md/raid:md0: device sdd3 operational as raid disk 3
[ 172.099190] md/raid:md0: device sdc3 operational as raid disk 2
[ 172.123101] md/raid:md0: allocated 136640kB
[ 172.125840] md/raid:md0: raid level 6 active with 7 out of 8 devices, algorithm 2
[ 172.128478] RAID conf printout:
[ 172.128484] --- level:6 rd:8 wd:7
[ 172.128491] disk 0, o:1, dev:sda3
[ 172.128496] disk 2, o:1, dev:sdc3
[ 172.128501] disk 3, o:1, dev:sdd3
[ 172.128506] disk 4, o:1, dev:sde3
[ 172.128511] disk 5, o:1, dev:sdf3
[ 172.128516] disk 6, o:1, dev:sdg3
[ 172.128521] disk 7, o:1, dev:sdh3
[ 172.128595] md0: detected capacity change from 0 to 17993917661184
[ 173.376192] md0: unknown partition table
[ 201.306088] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 242.989004] rule type=2, num=0
[ 244.104208] Loading iSCSI transport class v2.0-871.
[ 244.138799] iscsi: registered transport (tcp)
[ 244.161637] iscsid (8136): /proc/8136/oom_adj is deprecated, please use /proc/8136/oom_score_adj instead.
[ 247.430283] PPP generic driver version 2.4.2
[ 247.441197] tun: Universal TUN/TAP device driver, 1.6
[ 247.444623] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[ 247.487892] nf_conntrack version 0.5.0 (7911 buckets, 31644 max)
[ 247.530181] PPP MPPE Compression module registered
[ 247.548837] PPP BSD Compression module registered
[ 247.570074] PPP Deflate Compression module registered
[ 257.869972] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 257.872758] NFSD: starting 90-second grace period
```
Checking the RAID status

```shell
[~] # mdadm --misc --detail /dev/md0
/dev/md0:
Version : 01.00.03
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Mar 10 09:52:06 2025
State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Name : 0
UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Events : 21837
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
8 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
```
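Alongside mdadm --detail, the kernel's mdstat view gives a quick summary of every md device, including rebuild progress:

```shell
cat /proc/mdstat    # per-array state; a rebuilding array shows a [===>...] progress bar
```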
Replace the failing hard drive with a new drive, inserting the drive whilst the NAS is still on (this should automatically add the new disk to the RAID array and start a rebuild).
```shell
[~] # dmesg
[ 145.315487] disk 1, wo:0, o:1, dev:sdc2
[ 145.315493] RAID1 conf printout:
[ 145.315498] --- wd:2 rd:2
[ 145.315505] disk 0, wo:0, o:1, dev:sda2
[ 145.315511] disk 1, wo:0, o:1, dev:sdc2
[ 145.315517] RAID1 conf printout:
[ 145.315522] --- wd:2 rd:2
[ 145.315528] disk 0, wo:0, o:1, dev:sda2
[ 145.315536] disk 1, wo:0, o:1, dev:sdc2
[ 145.315541] RAID1 conf printout:
[ 145.315546] --- wd:2 rd:2
[ 145.315552] disk 0, wo:0, o:1, dev:sda2
[ 145.315559] disk 1, wo:0, o:1, dev:sdc2
[ 145.315565] RAID1 conf printout:
[ 145.315570] --- wd:2 rd:2
[ 145.315576] disk 0, wo:0, o:1, dev:sda2
[ 145.315583] disk 1, wo:0, o:1, dev:sdc2
[ 171.880718] md: md0 stopped.
[ 171.894872] md: md0 stopped.
[ 172.055374] md: bind<sdc3>
[ 172.059189] md: bind<sdd3>
[ 172.062935] md: bind<sde3>
[ 172.066654] md: bind<sdf3>
[ 172.070290] md: bind<sdg3>
[ 172.073924] md: bind<sdh3>
[ 172.077422] md: bind<sda3>
[ 172.081879] md/raid:md0: device sda3 operational as raid disk 0
[ 172.085039] md/raid:md0: device sdh3 operational as raid disk 7
[ 172.088108] md/raid:md0: device sdg3 operational as raid disk 6
[ 172.091028] md/raid:md0: device sdf3 operational as raid disk 5
[ 172.093784] md/raid:md0: device sde3 operational as raid disk 4
[ 172.096504] md/raid:md0: device sdd3 operational as raid disk 3
[ 172.099190] md/raid:md0: device sdc3 operational as raid disk 2
[ 172.123101] md/raid:md0: allocated 136640kB
[ 172.125840] md/raid:md0: raid level 6 active with 7 out of 8 devices, algorithm 2
[ 172.128478] RAID conf printout:
[ 172.128484] --- level:6 rd:8 wd:7
[ 172.128491] disk 0, o:1, dev:sda3
[ 172.128496] disk 2, o:1, dev:sdc3
[ 172.128501] disk 3, o:1, dev:sdd3
[ 172.128506] disk 4, o:1, dev:sde3
[ 172.128511] disk 5, o:1, dev:sdf3
[ 172.128516] disk 6, o:1, dev:sdg3
[ 172.128521] disk 7, o:1, dev:sdh3
[ 172.128595] md0: detected capacity change from 0 to 17993917661184
[ 173.376192] md0: unknown partition table
[ 201.306088] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 242.989004] rule type=2, num=0
[ 244.104208] Loading iSCSI transport class v2.0-871.
[ 244.138799] iscsi: registered transport (tcp)
[ 244.161637] iscsid (8136): /proc/8136/oom_adj is deprecated, please use /proc/8136/oom_score_adj instead.
[ 247.430283] PPP generic driver version 2.4.2
[ 247.441197] tun: Universal TUN/TAP device driver, 1.6
[ 247.444623] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[ 247.487892] nf_conntrack version 0.5.0 (7911 buckets, 31644 max)
[ 247.530181] PPP MPPE Compression module registered
[ 247.548837] PPP BSD Compression module registered
[ 247.570074] PPP Deflate Compression module registered
[ 257.869972] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ 257.872758] NFSD: starting 90-second grace period
[ 1605.468049] ata5: exception Emask 0x10 SAct 0x0 SErr 0x4050000 action 0xe frozen
[ 1605.471523] ata5: irq_stat 0x00400000, PHY RDY changed
[ 1605.475378] ata5: SError: { PHYRdyChg CommWake DevExch }
[ 1605.478958] ata5: hard resetting link
[ 1616.254038] ata5: softreset failed (1st FIS failed)
[ 1616.257391] ata5: hard resetting link
[ 1617.139055] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 330)
[ 1617.161045] ata5.00: ATA-9: ST4000DM000-1F2168, CC54, max UDMA/133
[ 1617.164551] ata5.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[ 1617.169114] ata5.00: configured for UDMA/133
[ 1617.173055] ata5: EH complete
[ 1617.176879] scsi 4:0:0:0: Direct-Access Seagate ST4000DM000-1F21 CC54 PQ: 0 ANSI: 5
[ 1617.181285] Check proc_name[ahci].
[ 1617.186591] sd 4:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[ 1617.186873] Check proc_name[ahci].
[ 1617.192382] sd 4:0:0:0: Attached scsi generic sg1 type 0
[ 1617.200534] sd 4:0:0:0: [sdb] 4096-byte physical blocks
[ 1617.204372] sd 4:0:0:0: [sdb] Write Protect is off
[ 1617.207999] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 1617.208083] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1617.270230] sdb: sdb1 sdb2 sdb3 sdb4
[ 1617.281326] sd 4:0:0:0: [sdb] Attached SCSI disk
[ 1620.957927] md: bind<sdb2>
[ 1621.133612] RAID1 conf printout:
[ 1621.133620] --- wd:2 rd:2
[ 1621.133627] disk 0, wo:0, o:1, dev:sda2
[ 1621.133633] disk 1, wo:0, o:1, dev:sdc2
[ 1621.133637] RAID1 conf printout:
[ 1621.133640] --- wd:2 rd:2
[ 1621.133645] disk 0, wo:0, o:1, dev:sda2
[ 1621.133650] disk 1, wo:0, o:1, dev:sdc2
[ 1621.133654] RAID1 conf printout:
[ 1621.133658] --- wd:2 rd:2
[ 1621.133662] disk 0, wo:0, o:1, dev:sda2
[ 1621.133667] disk 1, wo:0, o:1, dev:sdc2
[ 1621.133672] RAID1 conf printout:
[ 1621.133675] --- wd:2 rd:2
[ 1621.133680] disk 0, wo:0, o:1, dev:sda2
[ 1621.133685] disk 1, wo:0, o:1, dev:sdc2
[ 1621.133689] RAID1 conf printout:
[ 1621.133692] --- wd:2 rd:2
[ 1621.133697] disk 0, wo:0, o:1, dev:sda2
[ 1621.133702] disk 1, wo:0, o:1, dev:sdc2
[ 1621.133705] RAID1 conf printout:
[ 1621.133709] --- wd:2 rd:2
[ 1621.133714] disk 0, wo:0, o:1, dev:sda2
[ 1621.133718] disk 1, wo:0, o:1, dev:sdc2
[ 1623.240188] md: export_rdev(sdb1)
[ 1623.323379] md: bind<sdb1>
[ 1623.382850] RAID1 conf printout:
[ 1623.382858] --- wd:7 rd:8
[ 1623.382865] disk 0, wo:0, o:1, dev:sda1
[ 1623.382870] disk 1, wo:1, o:1, dev:sdb1
[ 1623.382875] disk 2, wo:0, o:1, dev:sdf1
[ 1623.382881] disk 3, wo:0, o:1, dev:sde1
[ 1623.382886] disk 4, wo:0, o:1, dev:sdd1
[ 1623.382891] disk 5, wo:0, o:1, dev:sdc1
[ 1623.382896] disk 6, wo:0, o:1, dev:sdg1
[ 1623.382901] disk 7, wo:0, o:1, dev:sdh1
[ 1623.383001] md: recovery of RAID array md9
[ 1623.386889] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 1623.390694] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 1623.394660] md: using 128k window, over a total of 530048k.
[ 1628.673286] md: export_rdev(sdb4)
[ 1628.928266] md: bind<sdb4>
[ 1628.998944] RAID1 conf printout:
[ 1628.998954] --- wd:7 rd:8
[ 1628.998962] disk 0, wo:1, o:1, dev:sdb4
[ 1628.998970] disk 1, wo:0, o:1, dev:sdh4
[ 1628.998976] disk 2, wo:0, o:1, dev:sdg4
[ 1628.998983] disk 3, wo:0, o:1, dev:sdf4
[ 1628.998990] disk 4, wo:0, o:1, dev:sde4
[ 1628.998996] disk 5, wo:0, o:1, dev:sdd4
[ 1628.999027] disk 6, wo:0, o:1, dev:sdc4
[ 1628.999034] disk 7, wo:0, o:1, dev:sda4
[ 1628.999161] md: delaying recovery of md13 until md9 has finished (they share one or more physical units)
[ 1642.307722] md: md9: recovery done.
[ 1642.320703] md: recovery of RAID array md13
[ 1642.324621] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 1642.328410] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 1642.332387] md: using 128k window, over a total of 458880k.
[ 1646.541391] RAID1 conf printout:
[ 1646.541401] --- wd:8 rd:8
[ 1646.541407] disk 0, wo:0, o:1, dev:sda1
[ 1646.541413] disk 1, wo:0, o:1, dev:sdb1
[ 1646.541418] disk 2, wo:0, o:1, dev:sdf1
[ 1646.541424] disk 3, wo:0, o:1, dev:sde1
[ 1646.541429] disk 4, wo:0, o:1, dev:sdd1
[ 1646.541434] disk 5, wo:0, o:1, dev:sdc1
[ 1646.541439] disk 6, wo:0, o:1, dev:sdg1
[ 1646.541444] disk 7, wo:0, o:1, dev:sdh1
[ 1669.506244] md: md13: recovery done.
[ 1669.666057] RAID1 conf printout:
[ 1669.666067] --- wd:8 rd:8
[ 1669.666075] disk 0, wo:0, o:1, dev:sdb4
[ 1669.666082] disk 1, wo:0, o:1, dev:sdh4
[ 1669.666089] disk 2, wo:0, o:1, dev:sdg4
[ 1669.666096] disk 3, wo:0, o:1, dev:sdf4
[ 1669.666103] disk 4, wo:0, o:1, dev:sde4
[ 1669.666111] disk 5, wo:0, o:1, dev:sdd4
[ 1669.666117] disk 6, wo:0, o:1, dev:sdc4
[ 1669.666122] disk 7, wo:0, o:1, dev:sda4
```
Checking the RAID status

```shell
[~] # mdadm --misc --detail /dev/md0
/dev/md0:
Version : 01.00.03
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Array Size : 17572185216 (16758.14 GiB 17993.92 GB)
Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
Raid Devices : 8
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Mar 10 10:03:42 2025
State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Name : 0
UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Events : 22047
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
8 8 83 5 active sync /dev/sdf3
6 8 99 6 active sync /dev/sdg3
7 8 115 7 active sync /dev/sdh3
```
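Slot 1 still shows as removed, so the new disk has not joined md0 despite the md9 and md13 rebuilds completing. If md0 does not start recovering on its own, the new disk's data partition could be added manually; a sketch, assuming the partition is /dev/sdb3 as suggested by the dmesg output above:

```shell
mdadm --manage /dev/md0 --add /dev/sdb3    # add the partition; md should then begin recovery
```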
Filesystem check on RAID array
```shell
[~] # e2fsck_64 -fp -C 0 /dev/md0
e2fsck_64: Bad magic number in super-block while trying to open /dev/md0
/dev/md0:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
```
Examining disks in RAID array
```shell
[~] # mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 7810899112 (3724.53 GiB 3999.18 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 7810899368 sectors
State : clean
Device UUID : 94192aef:e4471df8:eb179f88:2c4a184c
Update Time : Mon Mar 10 10:10:41 2025
Checksum : f1b355e3 - correct
Events : 22187
Chunk Size : 64K
```
```shell
[~] # mdadm --examine /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 05d1a575:72f061ca:33b3fafc:51bb6d89
Update Time : Mon Mar 10 10:11:04 2025
Checksum : 13448c70 - correct
Events : 22195
Chunk Size : 64K
```
New disk added to RAID array
```shell
[~] # mdadm --examine /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 2a31ad40:cfdf4f97:2958f1a6:d725d374
Update Time : Mon Mar 10 10:11:27 2025
Checksum : 4095eb8d - correct
Events : 22203
Chunk Size : 64K
```
```shell
[~] # mdadm --examine /dev/sde3
/dev/sde3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 76126b57:ea47d310:dfd929d1:f9b0e74b
Update Time : Mon Mar 10 10:16:03 2025
Checksum : d2244534 - correct
Events : 22287
Chunk Size : 64K
```
```shell
[~] # mdadm --examine /dev/sdf3
/dev/sdf3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 7810899112 (3724.53 GiB 3999.18 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 7810899368 sectors
State : clean
Device UUID : 4ce70020:4d3f388b:b3a57798:6a8c4f5b
Update Time : Mon Mar 10 10:16:32 2025
Checksum : d4b4ecdc - correct
Events : 22295
Chunk Size : 64K
```
```shell
[~] # mdadm --examine /dev/sdg3
/dev/sdg3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 5857395368 sectors
State : clean
Device UUID : 9e5dbe88:ce53ea91:46f63893:373fb0f1
Update Time : Mon Mar 10 10:17:13 2025
Checksum : ec664742 - correct
Events : 22307
Chunk Size : 64K
```
```shell
[~] # mdadm --examine /dev/sdh3
/dev/sdh3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : cf1b4f4e:6d83f848:0afc40fb:20a69932
Name : 0
Creation Time : Sat Jan 23 19:04:18 2021
Raid Level : raid6
Raid Devices : 8
Used Dev Size : 7810899112 (3724.53 GiB 3999.18 GB)
Array Size : 35144370432 (16758.14 GiB 17993.92 GB)
Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
Super Offset : 7810899368 sectors
State : clean
Device UUID : 93b744a1:df8017c6:c40c823a:6a4bbe56
Update Time : Mon Mar 10 10:18:22 2025
Checksum : 2e512554 - correct
Events : 22327
Chunk Size : 64K
```
Each examination also has an Array Slot message with lots of failed entries, although all the slots are accounted for, i.e. slots 0 to 7 (for the 8-bay NAS).
Finding superblocks
In the RAID array
```shell
[~] # e2fsck -b 16384 /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
e2fsck: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
[~] # e2fsck -b 32768 /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
e2fsck: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
```
```shell
dumpe2fs 1.41.4 (27-Jan-2009)
Primary superblock at 0, Group descriptors at 1-1
Backup superblock at 32768, Group descriptors at 32769-32769
Backup superblock at 98304, Group descriptors at 98305-98305
```
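Another way to list the expected superblock locations without writing to the device is a dry run of mke2fs; the block size here is an assumption and must match the original filesystem:

```shell
mke2fs -n -b 4096 /dev/md0    # -n is a dry run: prints backup superblock locations, writes nothing
```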
Global Spare in QNAP Web Admin UI
The web UI can be a little unintuitive at times.
Whilst the NAS is on, remove the failing disk and replace it with a new disk.

- Visit http://192.168.0.x to open the QNAP admin console (adjusting x to match the IP address of the QNAP device)
- Open the Storage Manager (menu button in the top left corner of the page)
- Select the RAID Management tab
- Select the new disk and, in the Action menu, select Set global spare
- Select the Volume Management tab and wait until the new disk shows a Status of Ready (Global Spare)
- Select the RAID Management tab, select the existing RAID array and, in the Action menu, select Recover
The recovery process will take a long time, so go do something else for the rest of the day.
Note: Although these actions started the QNAP NAS doing something (red and amber lights flashing), the mdadm --detail /dev/md0 command did not update the status of the RAID array to rebuilding, and the web admin console (RAID Management tab) also did not update to show the RAID array being rebuilt. I will look again at these two statuses once the QNAP NAS lights stop flashing.
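Rebuild activity, when it does start, should also be visible from the shell:

```shell
watch cat /proc/mdstat    # refreshes every 2 seconds; recovery shows as a progress bar per array
```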
Hyprland
Added hardware input configs to ~/.config/hypr/userprefs.conf to preserve my personal settings after a HyDE update.
Touchpad, mouse and keyboard settings
```config
# // █ █▄░█ █▀█ █░█ ▀█▀
# // █ █░▀█ █▀▀ █▄█ ░█░
# See https://wiki.hyprland.org/Configuring/Variables/
input {
kb_layout = us
follow_mouse = 1
# https://wiki.hyprland.org/Configuring/Variables/#touchpad
touchpad {
disable_while_typing = true # default true
natural_scroll = true # default false
scroll_factor = 1.0 # default 1.0
}
sensitivity = 0
# force_no_accel = 1 # Reduce scroll_factor to 0.4 if set to true
numlock_by_default = true
}
gestures {
workspace_swipe = true
workspace_swipe_fingers = 3
}
```
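Config changes can be applied to the running session without logging out:

```shell
hyprctl reload    # re-read the Hyprland config files in the current session
```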
Tweaked some of the key bindings in ~/.config/hypr/keybindings.conf
Also set editor to be astro (Neovim), although I think this needs something else to get the key binding to work.
Configured file as nautilus, the GNOME file manager, which supports browsing over an SFTP connection.
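Nautilus mounts remote locations via GVfs, so an SFTP path can be opened directly; the user and host here are placeholders:

```shell
nautilus sftp://user@192.168.0.x/home/user/    # hypothetical host; opens the remote directory
```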
Keybinding Preferences
```config
# █▄▀ █▀▀ █▄█ █▄▄ █ █▄░█ █▀▄ █ █▄░█ █▀▀ █▀
# █░█ ██▄ ░█░ █▄█ █ █░▀█ █▄▀ █ █░▀█ █▄█ ▄█
# See https://wiki.hyprland.org/Configuring/Keywords/
# & https://wiki.hyprland.org/Configuring/Binds/
# Main modifier
$mainMod = Super # super / meta / windows key
# Assign apps
$term = kitty
# $editor = code
$editor = astro
# $file = dolphin
$file = nautilus
$browser = firefox
# Window/Session actions
bindd = $mainMod+Shift, P,Color Picker , exec, hyprpicker -a # Pick color (Hex) >> clipboard#
bind = $mainMod, Q, exec, $scrPath/dontkillsteam.sh # close focused window
bind = Alt, F4, exec, $scrPath/dontkillsteam.sh # close focused window
bind = $mainMod, Delete, exit, # kill hyprland session
bind = $mainMod, W, togglefloating, # toggle the window between focus and float
bind = $mainMod, G, togglegroup, # toggle the window between focus and group
bind = Alt, Return, fullscreen, # toggle the window between focus and fullscreen
bind = $mainMod, L, exec, swaylock # launch lock screen
bind = $mainMod+Shift, F, exec, $scrPath/windowpin.sh # toggle pin on focused window
bind = $mainMod, Backspace, exec, $scrPath/logoutlaunch.sh # launch logout menu
bind = Ctrl+Alt, W, exec, killall waybar || (env reload_flag=1 $scrPath/wbarconfgen.sh) # toggle waybar and reload config
#bind = Ctrl+Alt, W, exec, killall waybar || waybar # toggle waybar without reloading, this is faster
# Application shortcuts
bind = $mainMod, return, exec, $term # launch terminal emulator
bind = $mainMod, E, exec, $file # launch file manager
bind = $mainMod, C, exec, $editor # launch text editor
bind = $mainMod, B, exec, $browser # launch web browser
# bind = Ctrl+Shift, Escape, exec, $scrPath/sysmonlaunch.sh # launch system monitor (htop/btop or fallback to top)
bind = $mainMod, M, exec, $scrPath/sysmonlaunch.sh # launch system monitor (htop/btop or fallback to top)
```
Thank you.