
sr. member
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 05:54:16 PM
#18
Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super



Sorry for the late reply.

One can also poke around for potential superblock copies using fsck -b and the usual backup superblock locations (8193, 16384, 32768, ...)
in case you don't know the block size of the filesystem.
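
For example (just a rough sketch - read-only thanks to -n so nothing gets written, and which location is valid depends on the block size the fs was created with):

Code:
for sb in 8193 16384 32768 98304 163840; do
    echo "trying backup superblock at $sb"
    fsck.ext4 -n -b "$sb" /dev/md0p1 && break
done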


I also see that GPT message...
I would try reading that out with gdisk or sgdisk:

sgdisk -p /dev/md0

since md0 is reported as active.
If that works, I would use that information to copy the data blocks from /dev/md0p? to a new filesystem (with a valid superblock).
Mounting the device/partition is failing, but the data should still be on there, so I'd work around the failing mount with that strategy.
There is a bit more detail to this, but first I'd like to see the output of sgdisk -p.
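
Roughly what I have in mind (the numbers below are made up - take the real start sector and length of md0p1 from sgdisk's output, and make sure the backup target is big enough and not part of the array):

Code:
sgdisk -p /dev/md0
# hypothetical values - replace with the real ones from the sgdisk output above
START=2048          # first sector of md0p1
COUNT=976771072     # length of md0p1 in sectors
dd if=/dev/md0 of=/mnt/backup/md0p1.img bs=512 skip="$START" count="$COUNT" conv=noerror,sync status=progress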

I also didn't find much about this /dev/md127 on the net. Any idea, psycodad?

EDIT: Seems like a complicated situation there. Don't give up.
The main thing that kept my motivation up with errors like these was looking forward to that certain feeling once one of them finally got resolved.

Off to sleep now.

legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 05:20:49 PM
#17
Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super

legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 04:59:48 PM
#16
Code:
root@bitcorn:/dev# swapon -a
root@bitcorn:/dev# mount /dev/md0p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.
root@bitcorn:/dev# mount /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.

Try fsck'ing it first:

Code:
fsck.ext4 /dev/md0p1
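
Might also be worth a quick sanity check of what is actually sitting on that partition before fsck'ing (just a thought):

Code:
blkid /dev/md0p1
file -s /dev/md0p1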

legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 04:51:40 PM
#15
Ok, checked again through what you posted above and would suggest the following:

Code:
swapon -a
mount /dev/md0p1 /mnt

I didn't really question you trying to mount md0, even though your output above says you partitioned md0...
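
And while we are still poking around, mounting read-only wouldn't hurt (just a thought):

Code:
mount -o ro /dev/md0p1 /mnt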

legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 04:21:54 PM
#14
Apologies, I didn't check back and just typed it off the top of my head, my fault.
Edited in above post.

Quote
Quote
Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

Took about 12 hours to rebuild/resync when I tried.


Hmm, okay, then my assumption was wrong and the array should be clean.
legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 04:06:01 PM
#13
 Grin
Seriously:
Again, don't panic: a power failure can't fsck up your array that badly, only the operator is capable of that (I know *exactly* what I am talking about here - I lost TBs of data just by getting panicky and impatient). If you didn't forget to tell us about re-formatting the fs or similar nasties, you are still in a good position. Just annoying problems so far.

Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

If not, you have probably forcefully assembled a damaged RAID with --assume-clean from above. The next thing I would recommend trying is actually resyncing that RAID, either by simply rebooting or by stopping md with
Code:
mdadm --stop /dev/md0
mdadm --assemble --scan

Code:
cat /proc/mdstat
should then show it is syncing.
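
To keep an eye on the resync, something like this should do (purely for convenience):

Code:
watch -n 10 cat /proc/mdstat
mdadm --detail /dev/md0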

Edit:
Try
Code:
dumpe2fs /dev/md0|grep super
as makrospex suggested, to verify whether the superblock locations suggested by fsck.ext4 are correct (I assumed so, but I could be wrong).
legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 03:36:08 PM
#12
@Bob: Does it say anything after that (probably not)?

Please try with the superblock backups listed, i.e.

Code:
fsck.ext4 -b 8193 /dev/md0

Agree with makrospex^, a fs is a fs no matter which block device it sits on (at least from the perspective of the user and the tools trying to fix a filesystem).
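
And if the fs happens to use 4k blocks (likely for an array this size, but that's an assumption), the first backup would sit at 32768 instead, e.g.:

Code:
fsck.ext4 -b 32768 -B 4096 /dev/md0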
sr. member
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 03:34:00 PM
#11
So you might find out the location of a superblock copy with

dumpe2fs

and use that for fsck?
I had a similar situation on Solaris 8, where I had to replace the superblock to make the volume readable again. It was a single volume, but it's a filesystem anyway.
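
If dumpe2fs can't read anything at all, a dry run of mke2fs at least prints where the backups *would* have been placed (-n means nothing is written; it only matches if the original mkfs used the same defaults, so treat it as a hint):

Code:
mke2fs -n /dev/md0p1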

EDIT: Ran a quick Google search and the first result pretty much hit what I was looking for.
https://www.linuxquestions.org/questions/linux-software-2/mount-unknown-filesystem-type-%27linux_raid_member%27-4175491589/
It's a RAID level 5, too. See the last post in that thread.
sr. member
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 03:23:00 PM
#10
So you know the filesystem from md0 is ext4, right?
Try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via
Quote
-t ext4
since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup and see if you can directly read from the partitions on the physical raid disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.


There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you can mount one part of the mirror only).


Came to my mind soon after writing. Edited out, thanks.
legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 03:21:57 PM
#9
So you know the filesystem from md0 is ext4, right?
Try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via
Quote
-t ext4
since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup and see if you can directly read from the partitions on the physical raid disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.


There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you can mount one part of the mirror only).
sr. member
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 03:15:19 PM
#8

EDIT: forget that, didn't make sense.
legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 03:14:32 PM
#7
Thanks, so we know there was no LVM on top.
Please try fsck again, but on the device, not on the mountpoint as you did above:

Code:
fsck.ext4 /dev/md0

Edit: If the above starts, it will (depending on your hardware) take ages (or 2 minutes longer) to complete - do not interrupt it!
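
If you are nervous about it writing anything, a read-only pass first can't hurt (it only reports, fixes nothing):

Code:
fsck.ext4 -n /dev/md0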
legendary
Activity: 4354
Merit: 3614
what is this "brake pedal" you speak of?
January 15, 2020, 02:51:25 PM
#6
Bob, sorry I'm of no help in this, just wanted to comment that I've dropped my Z1 (3-drive RAID5) arrays and switched to either Z2 (5+ drives, any 2 can fail) for important stuff or straight-up mirrors for less important stuff. I love mirrors because you can read the remaining drive on most anything.

Best of luck, I'm lurking as I hope to learn something here too.

BTW, I'm talking about the ZFS file system on my NAS. For desktops I always mirror.
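
For reference, a raidz2 pool along those lines would be created roughly like this (pool name and device names are just placeholders):

Code:
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde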
legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 02:32:53 PM
#5
Thanks.

Again, your array is fine according to the /proc/mdstat you posted, no need to recreate it and risk anything.

Please also post output from
Code:
cat /etc/fstab

legendary
Activity: 1612
Merit: 1608
精神分析的爸
January 15, 2020, 02:12:42 PM
#4
May I ask what has led to this situation? (power loss, disk crash etc.)

Your array is obviously perfectly fine, don't panic!

Did you by chance have LVM on top of that RAID?

If you can, post your fstab and the output of 'lsblk -f' or 'blkid'.
sr. member
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 01:30:44 PM
#3
uh, oh...

have you tried
Quote
mdadm --create --assume-clean ...
, too?
Don't be a fool: dd those HDD blocks off to a file on a backup medium first (if you haven't already done so).
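
Something along these lines, once per member disk (the paths are only an example; the target must be big enough and must not be one of the RAID members):

Code:
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync status=progress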

EDIT: IMO, as long as the disks are physically OK, you should have a chance of lossless recovery. I have had more problems with journaling filesystems than with ext2 or vfat; the only exception was ReiserFS, but since Hans went to jail, I stopped using it.
legendary
Activity: 3374
Merit: 4738
diamond-handed zealot
January 15, 2020, 01:17:50 PM
#2
oh man, I'm sorry Bob, I loathe crap like this

I've got no help for you - I spent a whole day recently editing fstab to get my legacy volumes mounted - but I'm going to follow along and try to learn something.

I will say this: RAID, particularly software RAID, has been more trouble than good in my personal experience; it's supposed to protect you from physical disk failure but introduces whole other layers of failure modes.
legendary
Activity: 1869
Merit: 5781
Neighborhood Shenanigans Dispenser
January 15, 2020, 01:06:07 PM
#1