[colug-432] md (linux drive mirroring)

Judd Montgomery judd at engineer.com
Wed Aug 21 14:32:06 EDT 2019


On 8/21/19 1:23 PM, Jeff Frontz wrote:

> Does anyone have any success stories from using md?   How were you
> able to determine that it actually allowed the system to successfully
> ride out a hard failure -- are there log messages or anything that
> gave some indication?

Yes.

I played around with md many years ago and it seemed to work fine.  I
tried it on a couple of HDDs for learning purposes.  I also had a 4-port
USB hub with four 1 GB flash drives on it.  I did that because 4 GB
flash drives were really expensive and slow at the time and 1 GB ones
were getting cheap.  But mostly it was because I am a geek and I liked
watching the lights flash with different array types ;-)

Probably a year and a half ago my Cineraid 4-bay RAID enclosure started
flaking out and having write errors.  smartctl said one of the drives
had a corrected error.  I bought a couple of new drives for it
(needlessly, as it turned out) and eventually determined that it was the
Cineraid that was bad and not the drives.
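
For reference, the check I'm talking about is roughly this (the device
name is just an example):

    # quick SMART health summary
    smartctl -H /dev/sdb

    # full attributes and error log, where corrected/reallocated errors
    # show up
    smartctl -a /dev/sdb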

I didn't want to spend $200-$600 on a RAID enclosure, so I bought a
Wavlink 2-drive USB dock to go along with my old 2-drive Cineraid dock.
I set up md with RAID 0, 5, and 10 (tried them all, just playing) across
4 drives.  It would occasionally get corrupted.  I tracked it down:
when the two docks were powered on in a certain order, the kernel would
bounce one of them and that caused the RAID to fail.
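
For anyone who hasn't tried it, the create step is pretty much a
one-liner.  Something like this builds a 4-drive RAID 10 (device names
are made up; yours will differ):

    # assemble 4 dock drives into a RAID 10 array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # put a filesystem on it and mount it
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/raid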

I bought another Wavlink dock and have been using 4 drives in a RAID 10
for probably 1.5 years.  I also tried RAID 0 and 5 and experimented with
6 drives.  You can get SSD-like throughput, but the seek time still
lags.  I've used the commands to remove drives, replace them, and let
the array recover.  I have also purposely pulled drives out of the dock,
turned the power off on a cradle, and replaced them just to make sure it
works.  You can set it up to send you an email when a drive fails, and
it does send the email immediately.  The failure also shows up in the
logs.  You can cat /proc/mdstat to watch the drives recover from a
failure, or during the initial create.  There is also the mdadm command.
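
The commands I mean look roughly like this (device names again are just
examples):

    # watch array state and rebuild progress
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # mark a drive failed, pull it from the array, add a replacement
    mdadm --fail   /dev/md0 /dev/sdc
    mdadm --remove /dev/md0 /dev/sdc
    mdadm --add    /dev/md0 /dev/sdf

    # for the email alerts, put your address in mdadm.conf
    # (path varies by distro: /etc/mdadm.conf or /etc/mdadm/mdadm.conf)
    #   MAILADDR you@example.com
    # and make sure mdadm --monitor is running (most distros ship it as
    # a service)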

The one thing I had to do was make sure the array is laid out so that
mirror set A is on one dock and mirror set B is on the other.  Once in a
long while USB will flake out (maybe I kick the cable) and it will fail
2 drives at once.  It always recovers painlessly.  This isn't
server-room reliability, but it suits my uses fine and was $40.
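
In practice that just comes down to the order the drives are listed at
create time.  As I understand md's default RAID 10 "near" layout,
adjacent devices in the list become mirror pairs, so alternating docks
keeps each mirror split across the two docks (worth double-checking
with mdadm --detail):

    # dock A = sdb, sdd; dock B = sdc, sde (example names)
    # with the n2 (near) layout, adjacent devices mirror each other,
    # so each mirror pair spans both docks
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # confirm which devices ended up paired
    mdadm --detail /dev/md0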

I also have 2 SATA SSDs in an md RAID 0 array in my workstation.  They
don't keep up with the 4x NVMe drive, but they are quick.  I have never
seen a problem with those.

Judd



