Unofficial empeg BBS

#352797 - 25/06/2012 15:47 mhddfs -- Pooling filesystems together for bulk storage
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
---- begin preamble ---
RAID is a way of combining lots of hard drives into a common storage pool, with the ability to tolerate loss/failure of one or more drives. If a RAID has no redundant drives configured, then failures on a single drive result in loss of ALL data from ALL drives, even the good ones.

So normally RAID systems incorporate at least one extra drive for data redundancy. If a single drive goes bad in this scenario, all data is still available until the next drive failure.

Meanwhile, one should replace the failed drive ASAP, and then sit through a day-long "RAID rebuild" session, biting the remains of one's fingernails while hoping a second failure (or discovery of an existing bad sector) doesn't kill the rebuild and result in total data loss.

From this it becomes apparent that RAID is not a substitute for a full back-up, or even a more elaborate system of backups.

And even just a simple "software crash", a.k.a. "improper shutdown", will result in the RAID wanting to spend a day or more doing yet another "rebuild" or resynchronization of the array (assuming multi-terabyte drives).

You may have guessed that I don't like RAID. smile
---- end preamble ---

In addition to the fragile running environment, RAID does make it possible to have a single, LARGE filesystem spanning several physical drives. This is especially convenient for video (media) collections, with thousands of files, each measuring gigabytes in size. Shuffling those around manually among a collection of small filesystems is not a joyful exercise, so using a RAID to create larger filesystems is a pretty commonplace workaround.

I recently discovered a better (for me) way to do this: a Linux FUSE filesystem called mhddfs, built for exactly these situations. The rather awkward name parses as Multiple Hard Disk Drive FileSystem.

To use it, one formats the drives (or partitions) individually, one filesystem on each. Then mount (or attach) the filesystems, and use mhddfs to combine (or pool) their storage into one massive higher-layer filesystem.

That's it. No fuss, no RAID. If any drive fails, then only the files from that specific drive need be restored from backup (let rsync figure it out) -- no days-long resync of a RAID array.

And the individual drives (and files!) are still fully accessible whether or not they are mounted as part of the mhddfs array.

One thing I wish mhddfs would do by default is keep leaf nodes grouped together on the same underlying drive/filesystem. E.g. if I have a directory called "Top Gear", then ideally all Top Gear episodes should be kept together within that directory on a single underlying drive, rather than potentially being scattered across multiple underlying drives.

Dunno why, but that's just how I'd like it to work, and so I've patched my copy here to do exactly that.

My main MythTV box now has two 3TB drives, plus one 2TB drive, with the space all pooled together using mhddfs. The backup array for that system uses mhddfs to combine space from four 2TB drives, with both setups conveniently totaling 8TB despite the different numbers and capacities of drives used.

Anyway, perhaps something like this might be of use to others out there. It sure has made managing our media collection much easier than before.

Cheers

#352798 - 25/06/2012 15:58 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Speaking of 3TB drives: I purchased the Western Digital "Green" drives for this setup. My initial observations of them are that (1) they are as mechanically quiet as the 2TB drives, BUT (2) they do vibrate more than the 2TB ones, and this may make them audible in some cases. I also wonder about endurance with all of that vibration.

The vibration from the extra platter is not bad -- less than a typical 7200rpm drive -- but it is noticeable when compared with the 2TB versions.

#352799 - 25/06/2012 16:15 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
andy
carpal tunnel

Registered: 10/06/1999
Posts: 5916
Loc: Wivenhoe, Essex, UK
Luckily my data needs for my server still fit on a single disk (500GB data, 500GB of backups from other machines).

So for me, I just throw disks at the problem. I stick with a 3 disk RAID1 array, which means I can take any single drive out at any point and still have redundancy. And any drive I take out can be used to rebuild the full setup.

I'm planning on moving to XFS soon; I'll still be sticking with mirroring though (probably a 4-disk mirror now). I'll probably add an SSD as a cache device too, given that I have a 160GB one doing nothing at the moment.

XFS seems to be so much more robust than most other file systems.

mhddfs sounds interesting, but I'm glad I don't need it wink
_________________________
Remind me to change my signature to something more interesting someday

#352800 - 25/06/2012 16:49 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: andy]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Yeah, I like XFS here, too. That's what the MythTV box is using for the underlying filesystems that mhddfs then pools together.

Something I don't do (yet), is use an SSD to hold the logs for the XFS filesystems. In theory that should give a tremendous performance boost, but I just don't know if I trust that much chewing gum and string. smile

#352801 - 25/06/2012 17:11 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
I was really hoping ZFS would have taken off. It offered the same type of storage pooling that mhddfs offers, and also offered redundancy if you wanted. It's very similar to how larger enterprise storage devices work, like the EVA series I supported at HP.

Some RAID controllers do offer a similar option, usually termed JBOD (Just a Bunch Of Disks). They would operate at a block level though instead of file level, and just sit there filling up the first disk, then rolling over to the second, third, etc.

#352802 - 25/06/2012 17:15 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: drakino]
wfaulk
carpal tunnel

Registered: 25/12/2000
Posts: 16706
Loc: Raleigh, NC US
Yeah, but if you lose one disk of a JBOD, you still pretty much lose everything. The filesystem is still corrupted.
_________________________
Bitt Faulk

#352804 - 25/06/2012 18:29 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
There is a bit of a performance hit with mhddfs --> everything gets relayed from the kernel to the mhddfs task (userspace) and then back to the kernel and on to the original task.

The CPU usage is noticeable. Not huge, not a problem, but noticeable. I wouldn't put a news/mail server on mhddfs, but it's quite good for a home media server.

One project I might do if I get time/bored, is to write an in-kernel implementation of a simplified version of it, which would get rid of the double copying on reads/writes and make it all pretty much invisible performance-wise.

I don't need to do that, but it just looks like a fun way to play with the Linux VFS layer.

Cheers

#352805 - 25/06/2012 18:41 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: mlord
One project I might do if I get time/bored, is to write an in-kernel implementation of a simplified version of it, which would get rid of the double copying on reads/writes and make it all pretty much invisible performance-wise.


Mmmm.. thinking about this more now, and perhaps a better thing might be to write an extension to FUSE to allow file redirection to a different path, like symlinks do.

With that feature, mhddfs could continue to manage the directory structure, but then redirect actual file accesses to the REAL underlying file on its filesystem. Reads/writes would then happen completely in-kernel, at full performance, and things like mmap() etc.. would all work natively.

Nice and simple, and it's the kind of thing that might actually get accepted upstream into default kernels some day.

Cheers

#352810 - 25/06/2012 20:00 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Latest MythTV already supports storage groups. Not quite as complete as a "fused" filesystem like this though.

unRAID uses a similar principle to join file systems together (and adds a RAID type functionality on top of that).
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

#352822 - 26/06/2012 00:05 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Storage Groups are an example of the "all software eventually evolves to implement email" syndrome. Not a core strength of the mythtv devs; best avoided.

Besides, they only really help with recordings, not videos. 95% of my MythTV stuff consists of "videos".

Cheers

#352827 - 26/06/2012 01:17 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
msaeger
carpal tunnel

Registered: 23/09/2000
Posts: 3608
Loc: Minnetonka, MN
I did not know RAID was that fragile. I have no experience with it, but the way it was talked about when I went to school was: you get a disk failure, you just slap in a new one, and it fixes itself with no down time.

I can totally believe that this is not true, but I am wondering: what do people use to back up multiple terabytes of stuff? Just more hard drives? I only back up pictures and documents, and I just put them online and hope I don't lose the local copy and online copy at the same time :) but if I had to I could fit all of it on DVDs.
_________________________

Matt

#352831 - 26/06/2012 01:44 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: msaeger]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
"RAID is not a backup" and that's the problem. Many people do not get their head around it.

Mark: The videos module does support storage groups too.

I do like the sound of this so I'm going to look into it further next time I rebuild my MythTV. Currently my recordings and videos are split up appropriately across a few drives with some strategic links to the separate file systems i.e. it doesn't share the space as efficiently but it's close enough for now.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

#352834 - 26/06/2012 02:11 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: msaeger]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
RAID isn't as fragile as Mark makes it out to be, but it does have its downsides at times. And Christian is exactly right: RAID is not a backup solution. (Well, unless you are doing the 3-disk RAID 1 trick Andy does, assuming one drive is kept out most of the time.)

Much like many things, it all comes down to the quality of the implementation. There are some pretty bad RAID hardware controllers out there, and some pretty bad RAID software setups. But with a good setup, RAID can do its job of ensuring one (or two, or more, depending on the level) failures don't take out a system. RAID remains one of the easiest ways to raise the performance of a storage subsystem when dealing with spinning drives. And modern servers have extended the definition of RAID beyond hard drives; I've worked with RAIDed RAM before, in servers that cannot afford any downtime.

#352835 - 26/06/2012 04:22 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: msaeger]
wfaulk
carpal tunnel

Registered: 25/12/2000
Posts: 16706
Loc: Raleigh, NC US
Originally Posted By: msaeger
I did not know RAID was that fragile. I have no experience with it, but the way it was talked about when I went to school was: you get a disk failure, you just slap in a new one, and it fixes itself with no down time.

RAID can survive the failure of some number of drives, depending on the type of RAID, without downtime, assuming that there are no further issues. (Sometimes a single failed drive can take out an entire IO channel.) When you replace the drive, it will reconstruct the data on the failed drive.

The problem these days is that drives are so large that it takes a long time to recover that data, and you run the risk of another drive failing before the replacement disk's data gets rebuilt. If more drives fail than the RAID set can handle, your data is completely gone and you have to restore from backup. Add to that the fact that drives often fail in clusters -- drives from a single production run tend to have very similar lifespans, and usage patterns are very similar for all members of a RAID set -- and you can get into problems more quickly than you'd like.

There are a number of ways to help alleviate those issues. One is using "hot spares": one or more drives sit idle in the system waiting for another drive to go bad, which minimizes the amount of time that a RAID set runs in a degraded mode, since it doesn't have to wait for a person to physically replace the bad drive. Another is using a RAID type that can tolerate more drive failures. Typically, if someone says "RAID" without any qualifier, he probably means RAID5, which can tolerate the failure of a single drive. Other versions can tolerate other finite numbers of failed drives per RAID set (that is: two, three, etc.), and you can even combine the types together. Still other versions keep complete duplicates of the data, so that every disk might have one or more mirrors. The tradeoff is between the number of drives needed to store a given amount of data and the number of drives that can be lost without resorting to a backup.

In Mark's system, any drive failure results in a restore from backup, but it affects only the data that happened to reside on that one drive -- not all of the data on all of the drives, which is what would happen on a RAID set that lost more drives than it could tolerate.
_________________________
Bitt Faulk

#352836 - 26/06/2012 05:23 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: drakino]
andy
carpal tunnel

Registered: 10/06/1999
Posts: 5916
Loc: Wivenhoe, Essex, UK
Originally Posted By: drakino
And Christian is exactly right, RAID is not a backup solution. (well, unless you are doing the 3 disk RAID 1 trick Andy does, assuming one drive is kept out most of the time).


That would be pretty unworkable, given it would have to copy the entire RAID contents every time you reconnected the third disk. I don't do that.

My backups are entirely separate from my RAID, CrashPlan FTW.
_________________________
Remind me to change my signature to something more interesting someday

#352837 - 26/06/2012 06:38 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: andy]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Yep, so I have unRAID for local redundancy (it has the merged-fs functionality with separate file systems, as well as a parity drive). Also Crashplan (offsite) FTW. I then keep a local Crashplan backup copy on a second machine with the most important stuff like photos, so it's basically instant backup for HD failure and slower offsite for house-burns-down situations.

I don't back up everything though. Media like movies and TV shows I can deal with losing if the house burns down, but it's nice to have some level of fail-safe to cover a dead drive.


Edited by Shonky (26/06/2012 06:39)
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

#352838 - 26/06/2012 07:35 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
peter
carpal tunnel

Registered: 13/07/2000
Posts: 4180
Loc: Cambridge, England
Originally Posted By: mlord
One project I might do if I get time/bored, is to write an in-kernel implementation of a simplified version of it, which would get rid of the double copying on reads/writes and make it all pretty much invisible performance-wise.

I don't need to do that, but it just looks like a fun way to play with the Linux VFS layer.

So, a bit like a union mount, but with both underlying FSes writable rather than just the top one?

Peter

#352852 - 26/06/2012 12:16 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
hybrid8
carpal tunnel

Registered: 12/11/2001
Posts: 7738
Loc: Toronto, CANADA
Originally Posted By: Shonky
"RAID is not a backup" and that's the problem.


"RAID is not an alternative to a backup" is more apt. Which means in practical terms, you should still have a backup of the important data on your RAID.
_________________________
Bruno
Twisted Melon : Fine Mac OS Software

#352854 - 26/06/2012 13:48 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: peter]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Quote:
So, a bit like a union mount, but with both underlying FSes writable rather than just the top one?


That's more or less it (mhddfs), except all drives share the exact same directory structure, or at least they appear to. mhddfs automatically clones directories to other drives as needed when storing new files in the "same directory" as existing files.

It's really quite useful and exactly what a lot of us might want. Way better than JBOD. The code in mhddfs defaults to picking the first filesystem (drive) with sufficient free space (mount option configurable) when storing a new file. And if all drives are below the user's threshold, it instead chooses the filesystem with the most available space.
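That selection rule can be sketched as a tiny shell function. This is my own simplification for illustration, not the actual mhddfs C code; the "drives" are passed as hypothetical name:free-bytes pairs so the logic is visible without real disks.

```shell
# Sketch of the mhddfs drive-selection rule (illustrative, not the real code):
# the first drive with at least "limit" bytes free wins; if none qualifies,
# the drive with the most free space wins.
pick_drive() {
    limit=$1; shift
    best=""; best_free=-1
    for pair in "$@"; do
        name=${pair%%:*}; free=${pair##*:}
        # first drive over the threshold: take it immediately
        if [ "$free" -ge "$limit" ]; then echo "$name"; return; fi
        # otherwise remember the roomiest drive seen so far
        if [ "$free" -gt "$best_free" ]; then best_free=$free; best=$name; fi
    done
    echo "$best"
}

pick_drive 100 drive1:150 drive2:300   # -> drive1 (first with enough room)
pick_drive 100 drive1:50 drive2:90     # -> drive2 (all below limit: most free wins)
```

The second call shows why the pool degrades gracefully as it fills: once every drive is below the threshold, new files simply chase the most free space.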

It's also got a fancy feature to automatically relocate a file if, while writing to it, it runs out of space on the currently chosen drive. I don't really need that feature, and will likely get rid of it if/when I re-implement things.

So from that, one can see that if each of four drives has only 100MB of free space, the pool still cannot store a 200MB file --> the minor constraint is that each file must fit onto a single filesystem (drive). Not really an issue, but there you have it.

Cheers

#352855 - 26/06/2012 14:32 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: Shonky
I do like the sound of this so I'm going to look into it further next time I rebuild my MythTV. Currently my recordings and videos are split up appropriately across a few drives with some strategic links to the separate file systems i.e. it doesn't share the space as efficiently but it's close enough for now.


When I set it up here, I actually just used the existing separate filesystems, very similar to what you have there right now. That sounds pretty much like how things were set up here before mhddfs.

Part of the real beauty of it all is that you don't have to do much to layer mhddfs onto the existing hodgepodge. smile

Just tidy up the current directory structures if needed, so that each existing filesystem uses a common/compatible directory layout, and then fire up mhddfs to pool them all at a new mount point. The existing filesystems are still there, mounted, and 100% functional and 100% safe, but you then have the option to slowly migrate things to begin using the new mount point.

mhddfs does not save any metadata of its own anywhere. It just redirects accesses one at a time as they happen, with no saved state. So it's easy to adopt over top of existing stuff, and just as easy to get rid of if one changes their mind.

Really, this is something that should already be in the stock kernels, maybe called "poolfs" or something. Heck, even the empeg could have used this had it existed in the day.

Cheers


Edited by mlord (26/06/2012 14:51)

#352856 - 26/06/2012 14:43 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
On my MythTV box, the three big drives are mounted at mount points named /drive1, /drive2, and /drive3.

The mhddfs is mounted at /drives, pooling the other three filesystems into a common mass, with these two commands:

mkdir -p /drives
mhddfs /drive1,/drive2,/drive3 /drives -o \
allow_other,auto_cache,max_write=4194304,uid=1000,gid=1000,mlimit=500G


Most of those mount options aren't really needed, but I want everything on the filesystem to be "owned" by user "mythtv" (uid/gid 1000), and I want to allow larger write() sizes than the piddly default (originally designed for .mp3 files).

A much simplified way, just to try things, would be this:

mkdir -p /drives
mhddfs /drive1,/drive2,/drive3 /drives
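To make the pool come back automatically after a reboot, mhddfs can also be mounted from /etc/fstab using the usual FUSE syntax. This is a sketch from the mhddfs documentation as I recall it -- verify the exact form against the man page for your version before relying on it:

```
mhddfs#/drive1,/drive2,/drive3 /drives fuse defaults,allow_other 0 0
```

Any of the -o options from the command above (uid, gid, mlimit, etc.) can go in the fourth field alongside defaults.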



#352857 - 26/06/2012 14:49 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: andy]
tahir
pooh-bah

Registered: 27/02/2004
Posts: 1914
Loc: London
How does read/write speed compare to RAID?

It's definitely painful waiting for a RAID array to rebuild.

#352860 - 26/06/2012 18:02 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: tahir]
siberia37
old hand

Registered: 09/01/2002
Posts: 702
Loc: Tacoma,WA
How does mhddfs handle splitting up data across the drives? Does it try to distribute data across the "array" so that if one drive dies, the minimum amount of data is lost?

#352862 - 26/06/2012 18:54 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: siberia37]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: siberia37
How does mhddfs handle splitting up data across the drives?

Originally Posted By: mlord
The code in mhddfs defaults to picking the first filesystem (drive) with sufficient free space (mount option configurable) when storing a new file. And if all drives are below the user's threshold, it instead chooses the filesystem with the most available space.

#352863 - 26/06/2012 18:57 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: tahir]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: tahir
How does read/write speed compare to RAID?

It's definitely painful waiting for a RAID array to rebuild.

I'm not sure what you are asking about there. With mhddfs there is nothing to "rebuild". In the unlikely event that a drive has to be replaced, one has to copy back only the files that were on that drive. The time needed for that depends on how full the drive was, but it will nearly always take significantly less time than a RAID resync, unless the drive was very full of very small files or something.

Cheers

#352869 - 27/06/2012 07:38 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
tahir
pooh-bah

Registered: 27/02/2004
Posts: 1914
Loc: London
I was just saying that RAID is a pain when it goes wrong. If there's no performance loss, then it might be a useful alternative to RAID even for some of my work network.


Edited by tahir (27/06/2012 07:39)

#352870 - 27/06/2012 10:26 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: tahir]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Yeah, I don't like RAID resyncs either -- they are the elephant-sized flaw in most current RAID implementations. RAID really needs a journal of some sort to know what actually needs resyncing, so that it doesn't have to mindlessly read 6TB of data (and write 2TB) to resync an 8TB array.

That's just mindless, and a big reason I don't use RAID for anything around the home office here.
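The 8TB-array numbers above are easy to put a clock on. Assuming ~150 MB/s of sustained sequential throughput (my illustrative figure, not a measurement; real drives and busy arrays will do worse), just reading the 6 TB works out to:

```shell
# back-of-envelope resync time: 6 TB read nonstop at ~150 MB/s
awk 'BEGIN { bytes = 6e12; rate = 150e6; printf "%.1f hours\n", bytes / rate / 3600 }'
# -> 11.1 hours
```

And that is the idle-array best case; with the rebuild throttled to leave room for normal traffic, "a day or more" is easy to reach.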

But.. mhddfs is not suitable for office use. The code is clean, but it is in userspace not the kernel, so it will not cope well with heavy multi-user loads. Performance would suck, and it might even run into trouble when multiple users are creating/updating a common subset of files/directories.

Good for my one/two user media server, not good for a mail or database server. An in-kernel version would be much, much better for that, which is why I'm rather shocked we don't already have an in-kernel solution.

Cheers

#352871 - 27/06/2012 10:29 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
tahir
pooh-bah

Registered: 27/02/2004
Posts: 1914
Loc: London
Gotcha, thanks

#352873 - 27/06/2012 11:42 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
LittleBlueThing
addict

Registered: 11/01/2002
Posts: 612
Loc: Reading, UK
First, I know, I'm biased and 'like' RAID smile

This isn't an attempt to persuade anyone that RAID is the right answer for them - there's a lot it can't do that this type of approach can - effective use of mismatched drive sizes for instance.

Originally Posted By: mlord
And even just a simple "software crash", aka. "improper shutdown", will result in the RAID wanting to spend a day or more doing yet another "rebuild" or resynchronization of the array (assuming multi-terrabyte size drives).


For this issue, have you come across raid bitmaps?

They typically mean that a dirty shutdown, even of an actively-writing multi-TB RAID, will often be cleaned before it's even mounted. Yes, you can add one to an existing RAID. Obviously they're not useful when a drive fails completely.
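For anyone wanting to try this on an existing Linux md array, adding the write-intent bitmap is a one-liner with mdadm. These commands are from memory -- check your mdadm man page before running them, and note that they need root and an assembled array:

```
# add an internal write-intent bitmap to an existing array
mdadm --grow --bitmap=internal /dev/md0

# remove it again if the small random-write overhead bothers you
mdadm --grow --bitmap=none /dev/md0
```

The bitmap costs a little write performance in exchange for turning a post-crash full resync into a resync of only the recently-dirtied regions.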

I also note that all data 'lost' when a drive dies under mhddfs is not available until the restore is done. Typically RAID provides zero downtime.

The real (and painful) risk of a second failure when a drive does fail means RAID6 or more highly redundant setups are often a better option if you really want to avoid downtime. I'm now using RAID6.

Interestingly, you don't address the issue of the backup solution (a non-redundant cold spare?) failing as you read possibly many TB of data from it? Isn't that the same problem as biting your nails whilst re-syncing an array with a new drive?


Edited by LittleBlueThing (27/06/2012 11:44)
Edit Reason: link to bitmap page
_________________________
LittleBlueThing Running twin 30's

#352877 - 27/06/2012 13:04 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: LittleBlueThing]
tanstaafl.
carpal tunnel

Registered: 08/07/1999
Posts: 5549
Loc: Ajijic, Mexico
Originally Posted By: LittleBlueThing
(a non-redundant cold spare?)
Key word here being "non-redundant". My backups are redundant, although I have to admit that the off-site copies are not as current as the on-site ones, usually a couple of months out of date.

But, none of my data is mission-critical. I'd miss it if it were lost, but I wouldn't lose any sleep over it, and most of it could be [time-consumingly] recreated if I had to.

tanstaafl.
_________________________
"There Ain't No Such Thing As A Free Lunch"

#352878 - 27/06/2012 13:20 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: LittleBlueThing]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: LittleBlueThing
Interestingly you don't address the issue of the backup solution (a non-redundant cold spare?) failing as you read possibly many Tb of data from it?


The backup copy (or copies) has the same issues/concerns as the original array has. Even with RAID(6) one needs a backup. The backup can be a RAID, mhddfs or IBM reel-to-reel. It's a separate filesystem, and can be designed however one likes.

For me, I'm using non-redundant mhddfs again to bond four 2TB drives. These are just media files -- a pain to replace, but not a catastrophe if lost. So I'm happy with a single backup copy of everything, effectively giving me RAID1 on a per-file basis (one copy on the live system, one on the backup), but without the headaches of RAID. smile

Mechanical drives seldom die outright without warning. The failure mode is more typically bad sectors accumulating (dust/dirt scratching the platters), so it might lose a few files, but probably not the entire filesystem. And with mhddfs the damage is limited to only that one drive, not all drives.

RAID *needs* RAID, because it throws everything into a single large basket, where a single drive failure loses everything unless redundant drives are configured. And RAID multiplies the probability of total failure by the number of drives.. making a catastrophe more likely.
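The "multiplies by the number of drives" claim is the small-probability approximation. Exactly, if each drive independently fails in a year with probability p and all n drives must survive, the chance of losing the non-redundant array is 1-(1-p)^n (the 3%/4-drive numbers here are mine, purely for illustration):

```shell
# chance that at least one of n drives fails, given per-drive probability p
awk 'BEGIN { p = 0.03; n = 4; printf "%.3f\n", 1 - (1 - p)^n }'
# -> 0.115
```

Which is close to the simple 4 x 0.03 = 0.12 -- the approximation is fine while p is small.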

Cheers

#352879 - 27/06/2012 13:22 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
Originally Posted By: mlord
Yeah, I don't like RAID resyncs either -- they are the elephant sized flaw in most current RAID implementations -- RAID really needs a journal of some sort to know what actually needs resyncing, so that it doesn't have to mindlessly read 6TB of data (and write 2TB) to resync an 8TB array.

Which RAID level are you talking about? I'm going to guess RAID 5 based on your comment. The way RAID5 provides the most pooled space while still surviving a single failure is exactly what makes its rebuild process so extensive. Other RAID levels can minimize rebuilds, with the tradeoff of less available space. RAID 0+1 or 1+0 setups wouldn't need to read all 6TB of data to rebuild; instead they would only read the good partner drive. The downside is that all rebuild activity is focused on one drive, so performance can go down dramatically if rebuild priority is high.

RAID is a block-level setup, very much disconnected from the filesystem and from what the OS does. Adding a file-aware journal would require a decent bit of rework of how RAID functions. The bitmaps solution LittleBlueThing linked to does look like a good attempt at this, though, for addressing improper shutdowns on RAID setups affected by such a condition.

Most of the pure RAID systems I work with are hardware based, with controllers that offer support for battery backed cache. This is the other method to mitigate the issue with a bad shutdown.

#352880 - 27/06/2012 13:25 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: LittleBlueThing]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: LittleBlueThing
For this issue, have you come across raid bitmaps?


No, I hadn't. smile
Those sound exactly like the journal mechanism I so wishfully imagined. Definitely on my MUST-HAVE list should I ever succumb to setting up a RAID again.

Oh, and I notice that someone at OLS this year will be discussing making RAID rebuilds more intelligent by using the filesystem metadata to only sync actual allocated data areas, rather than blindly syncing all sectors.

With a setup that does both of those, RAIDs would be much less painful.


Edited by mlord (27/06/2012 13:26)

#352883 - 27/06/2012 13:43 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
wfaulk
carpal tunnel

Registered: 25/12/2000
Posts: 16706
Loc: Raleigh, NC US
Originally Posted By: mlord
Mechanical drives seldom die outright without warning.

Sorry, Mark, but I'm calling bullshit on this one. Drives die suddenly all the time. I have seldom had any useful warning that a drive was about to go bad, from SMART to small errors. The vast majority of them just up and die.

Google's experience is not as extreme as mine, but it still belies your claim:

Originally Posted By: Google
Out of all failed drives, over 56% of them have no count in any of the four strong SMART signals, namely scan errors, reallocation count, offline reallocation, and probational count. In other words, models based only on those signals can never predict more than half of the failed drives. Figure 14 shows that even when we add all remaining SMART parameters (except temperature) we still find that over 36% of all failed drives had zero counts on all variables.
_________________________
Bitt Faulk

Top
#352884 - 27/06/2012 13:50 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
Originally Posted By: mlord
Oh, and I notice that someone at OLS this year will be discussing making RAID rebuilds more intelligent by using the filesystem metadata to only sync actual allocated data areas, rather than blindly syncing all sectors.

This I'm curious to read more about to see how it is implemented. Almost sounds like ZFS levels of redundancy, without possibly having to commit to that filesystem. The tighter coupling would help eliminate your main gripe point with RAID smile

Another interesting way to handle this would be to signal the RAID system with TRIM (or UNMAP on SCSI systems). The RAID system could keep running at the block level with no filesystem knowledge, but still gain the benefits of knowing exactly what would need to be rebuilt in a failure.
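The idea above can be sketched in miniature. This is a hypothetical toy model (not any real md/RAID interface): a block-level array keeps its own allocation bitmap, updated by write and TRIM/UNMAP notifications, so that a rebuild only needs to reconstruct blocks the filesystem still uses.

```python
# Toy model: a block-level array that tracks allocation via TRIM/UNMAP
# notifications, so a rebuild copies only in-use blocks. Class and method
# names are invented for illustration.

class TrimAwareArray:
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.allocated = set()          # blocks the filesystem has written

    def write(self, block):
        self.allocated.add(block)       # any write marks the block in-use

    def trim(self, block):
        self.allocated.discard(block)   # TRIM/UNMAP: block no longer needed

    def rebuild_plan(self):
        # Only allocated blocks need reconstruction onto a replacement drive.
        return sorted(self.allocated)

array = TrimAwareArray(nblocks=1000)
for b in (3, 7, 42):
    array.write(b)
array.trim(7)                           # filesystem deleted that data
print(array.rebuild_plan())             # -> [3, 42], not all 1000 blocks
```

With filesystem knowledge delivered this way, the rebuild cost scales with the data actually stored rather than with the raw capacity of the drives.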

Top
#352894 - 27/06/2012 14:45 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: wfaulk]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: wfaulk
Originally Posted By: mlord
Mechanical drives seldom die outright without warning.

Sorry, Mark, but I'm calling bullshit on this one.


Okay, I'll reword that for you: I have NEVER, in my 20 years as an industry storage expert, had one of my mechanical drives just die outright without software-detectable advance warning. Except for empeg drives. smile

Of course drives do die that way, I suppose. It's just a heck of a lot less common than the gradual failure situations.


Edited by mlord (27/06/2012 14:48)

Top
#352895 - 27/06/2012 14:49 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: drakino]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: drakino
Another interesting way to handle this would be to signal the RAID system with TRIM (or UNMAP on SCSI systems). The RAID system could keep running at the block level with no filesystem knowledge, but still gain the benefits of knowing exactly what would need to be rebuilt in a failure.


Big commercial arrays are already doing that with Linux, so, yeah!

Top
#352897 - 27/06/2012 14:58 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Mmm.. now that you mention it, there also was that whole IBM/Hitachi "DeathStar" crap, where large batches of drives would die electronically without warning.

I guess maybe that's what prompted the whole "home RAID" revolution. smile

Top
#352904 - 27/06/2012 18:49 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
wfaulk
carpal tunnel

Registered: 25/12/2000
Posts: 16706
Loc: Raleigh, NC US
Originally Posted By: mlord
Of course drives do die that way, I suppose. It's just a heck of a lot less common than the gradual failure situations.

Not according to Google's real-world survey of 100,000 drives.
_________________________
Bitt Faulk

Top
#352915 - 27/06/2012 23:31 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: wfaulk]
msaeger
carpal tunnel

Registered: 23/09/2000
Posts: 3608
Loc: Minnetonka, MN
Whether you think they die suddenly or not maybe depends on how close you are watching.
_________________________

Matt

Top
#352916 - 28/06/2012 01:05 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: msaeger]
RobotCaleb
pooh-bah

Registered: 15/01/2002
Posts: 1866
Loc: Austin
Mark, is this your office?

http://imgur.com/a/6ZG5e

Top
#352917 - 28/06/2012 01:09 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: RobotCaleb]
Phoenix42
veteran

Registered: 21/03/2002
Posts: 1424
Loc: MA but Irish born
Doubtful. The craftsmanship of the woodwork is poor...

Top
#352918 - 28/06/2012 01:13 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: RobotCaleb]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
Ouch, no vibrational dampening, those drives probably aren't too happy.

Top
#352921 - 28/06/2012 06:59 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
LittleBlueThing
addict

Registered: 11/01/2002
Posts: 612
Loc: Reading, UK
Originally Posted By: mlord
Originally Posted By: LittleBlueThing
For this issue, have you come across raid bitmaps?


No, I hadn't. smile

Glad to be of service.

Originally Posted By: mlord

With a setup that does both of those, RAIDs would be much less painful.


Ah, there's more...

I proposed something years back that seems to be making its way through Neil's list:

When a drive is about to fail, insert a new hot spare, fast mirror from the failing drive to the new drive and only look at the rest of the RAID for parity when a bad block is encountered. Meanwhile all new writes go to the transient mirror.

This speeds up resync/restore for a failed drive massively, and you essentially only rely on RAID resilience for the few failing blocks, not all the TB of data.

Finally, put a RAID 1 SSD bcache in front of a RAID6 backend (with working barriers... one day...) and I think that becomes a very, very fast and reliable system.
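The write-intent bitmap mechanism mentioned earlier in the post can be sketched as: set a region's dirty bit before writing, clear it once all members are in sync, and after a crash resync only the regions still marked dirty. A toy model (not mdadm's actual on-disk format; region size and names are invented):

```python
# Toy write-intent bitmap: dirty bits are set before writes reach the
# mirrors and cleared after all members are in sync. After an unclean
# shutdown, only dirty regions need resync, not the whole array.

REGION_SIZE = 64  # blocks per bitmap bit (assumed chunking)

class WriteIntentBitmap:
    def __init__(self):
        self.dirty = set()

    def before_write(self, block):
        self.dirty.add(block // REGION_SIZE)    # set bit first, then write

    def after_sync(self, block):
        self.dirty.discard(block // REGION_SIZE)  # all members now agree

    def regions_to_resync(self):
        return sorted(self.dirty)

bm = WriteIntentBitmap()
bm.before_write(10)             # region 0 goes dirty
bm.after_sync(10)               # completed cleanly, bit cleared
bm.before_write(130)            # region 2 dirty; crash before after_sync()
print(bm.regions_to_resync())   # -> [2]: one region to resync, not every sector
```

Real implementations clear bits lazily to avoid hammering the bitmap, but the recovery property is the same: the resync cost is bounded by in-flight writes at crash time.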
_________________________
LittleBlueThing Running twin 30's

Top
#352922 - 28/06/2012 07:04 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: drakino]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
If you ask me, that's a bit ghetto for that amount of data.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#352967 - 28/06/2012 23:56 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: wfaulk]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: wfaulk
Originally Posted By: mlord
Of course drives do die that way, I suppose. It's just a heck of a lot less common than the gradual failure situations.

Not according to Google's real-world survey of 100,000 drives.

Well, they ought to know!
I wonder what happens to the data when DeathStar drives are removed from the mix? Those had a huge electronic failure rate, never seen before or since, I believe.

Thanks Bitt!

Top
#352968 - 29/06/2012 00:12 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
wfaulk
carpal tunnel

Registered: 25/12/2000
Posts: 16706
Loc: Raleigh, NC US
It says:
Originally Posted By: Google
The data used for this study were collected between December 2005 and August 2006.

The DeathStars were back in 2001, so I imagine that none of them were involved at all.

Also,
Originally Posted By: Google
None of our SMART data results change significantly when normalized by drive model. The only exception is seek error rate, which is dependent on one specific drive manufacturer
_________________________
Bitt Faulk

Top
#355990 - 31/10/2012 02:37 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
aTTila
new poster

Registered: 31/10/2012
Posts: 3
Originally Posted By: mlord
One thing I wish mhddfs would do by default, is to keep leaf nodes grouped together on the same underlying drive/filesystem. Eg. If I have a directory called "Top Gear", then ideally all Top Gear episodes should be kept together within that directory on a single underlying drive, rather than potentially being scattered across multiple underlying drives.

Dunno, why, but that's just how I'd like it to work, and so I've patched my copy here to do exactly that.


Hi mlord, are you able to provide your patch changes? I would also like mhddfs to function like this and only write a new directory path to another drive when the original source drive is full

Top
#356000 - 31/10/2012 11:15 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: aTTila]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: aTTila
Originally Posted By: mlord
One thing I wish mhddfs would do by default, is to keep leaf nodes grouped together on the same underlying drive/filesystem. Eg. If I have a directory called "Top Gear", then ideally all Top Gear episodes should be kept together within that directory on a single underlying drive, rather than potentially being scattered across multiple underlying drives.

Dunno, why, but that's just how I'd like it to work, and so I've patched my copy here to do exactly that.


Hi mlord, are you able to provide your patch changes? I would also like mhddfs to function like this and only write a new directory path to another drive when the original source drive is full


Patch attached.


Attachments
01_group_files_on_same_fs.patch (253 downloads)
Description: mhddfs patch to group leaf files from same subdir onto same drive.



Top
#356044 - 31/10/2012 21:26 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
aTTila
new poster

Registered: 31/10/2012
Posts: 3
Thanks for that, mate. It's now writing to the same subdirectory, but it doesn't seem to duplicate the directory structure onto another drive with free space when the original drive is full?


Edited by aTTila (31/10/2012 22:45)

Top
#356050 - 01/11/2012 00:47 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: aTTila]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Mmm.. dunno about that. My drives haven't gotten full yet. smile
Could be a bug in the patch, or maybe something else (?).

Top
#356051 - 01/11/2012 00:50 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Oh, it could have something to do with this code from the patch:

Code:
+       off_t threshold, move_limit = mhdd.move_limit;
+       if (move_limit <= 100)
+               threshold = 10;                         /* minimum percent-free */
+       else
+               threshold = 200ll * 1024 * 1024 * 1024; /* minimum space free */


I undoubtedly put that 200GB value in there as a suitable threshold for my own 2TB drives, which may not be suitable for your drives (?). But if you specify a "move limit" mount option less than 100 (eg. -o mlimit=20), it's supposed to use the 10% rule instead of the 200GB threshold.
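Translated into a standalone sketch, the quoted patch logic picks between the two rules like this (the 10% and 200GB figures come straight from the patch; the function name and return shape are mine, for illustration):

```python
# Mirrors the threshold selection in the quoted mhddfs patch snippet:
# an mlimit value of 100 or less is interpreted as a percent-free rule,
# anything larger as an absolute minimum-bytes-free threshold.

GiB = 1024 ** 3

def pick_threshold(move_limit):
    if move_limit <= 100:
        return ("percent", 10)           # keep at least 10% free
    return ("bytes", 200 * GiB)          # keep at least 200GB free

print(pick_threshold(20))                # e.g. -o mlimit=20 -> ('percent', 10)
print(pick_threshold(4 * GiB))           # default-style byte limit -> 200GB rule
```

So a drive smaller than roughly 2TB would want the `-o mlimit=20` style of option, since the hard-coded 200GB floor was sized for 2TB members.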

-ml


Edited by mlord (01/11/2012 00:52)

Top
#356053 - 01/11/2012 01:13 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
aTTila
new poster

Registered: 31/10/2012
Posts: 3
Thanks again for your replies mlord, much appreciated. The drive that contains the subdir I'm trying to write to is a 2TB. It has ~5GB left, and my test file is, say, 9GB. When I cp this file to my mhddfs mount, instead of writing the directory structure to another free drive and placing the file there, it just fails with an 'out of space' error.

If it helps, my move size limit is at the default of 4GB.


Edited by aTTila (01/11/2012 01:20)

Top
#356190 - 09/11/2012 12:04 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: aTTila]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Yeah, so you are probably better off without that patch, then. It must have a bug in there of some kind.

Cheers

Top
#356806 - 14/12/2012 21:25 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
xmodder
new poster

Registered: 14/12/2012
Posts: 2
Hi everybody
I'm new here, directed to this post after a little googling for information about mhddfs.

I'm building a NAS home server for storing mainly media files and, like mlord, I don't think a RAID setup is a good approach for this kind of server. Well, at least not a standard (hardware or software) RAID setup.

The reasons I think that have already been laid out in this thread, so I'm not going to repeat them; suffice it to say that, for a home server, the risk of losing ALL data in the array (in any of the multiple ways this can happen with RAID) and the inability to mix mismatched drives are not acceptable, IMHO.

But as I can't afford a duplicated storage setup like the one mlord has for backup, I wanted some kind of redundancy on mine, so I can recover from a single drive failure (or maybe more) without a full backup. That's why I'm considering SnapRAID. I guess you know it, but just in case, take a look at: SnapRAID

On top of that, I also want to be able to see all my drives as a single storage pool, which is why I'm also thinking of using mhddfs. But it seems that mhddfs carries a performance hit and also a noticeable increase in CPU use. Since I also want to use the same machine as a MythTV backend that should be able to record several channels at a time while streaming to several frontends, I'm a little concerned about anything eating into drive performance and/or available CPU cycles.

So I was thinking of using the mhddfs mount just for exporting the drive pool through samba/nfs for reading, and then using the separate drive mounts for writing (directly on the server for the MythTV recordings, or via exported samba/nfs shares).

From what mlord has said, it seems that this is possible:
Originally Posted By: mlord
And the individual drives (and files!) are still fully accessible when (or not) mounted as part of the mhddfs array.


Originally Posted By: mlord
Part of the real beauty of it all, is that you don't have to do much to layer mhddfs onto the existing hodgepodge. smile

Just tidy up the current directory structures if needed, so that each existing filesystem uses a common/compatible directory layout, and then fire up mhddfs to pool them all at a new mount point. The existing filesystems are still there, mounted, and 100% functional and 100% safe, but you then have the option to slowly migrate things to begin using the new mount point.

mhddfs does not save any metadata of its own anywhere. It just redirects accesses one at a time as they happen, with no saved state. So it's easy to adopt over top of existing stuff, and just as easy to get rid of if one changes their mind.


I know that this way I will lose mhddfs's functionality of automatically filling drives and then writing to the next one, but having to manage disk usage myself matters less to me than losing drive or CPU performance.

So, mlord, have you tested this scenario? Will it work? And if it works, can I assume that it will not impact drive or CPU performance, at least for write operations, since I will be using the individual drive mounts and not the pooled mhddfs mount for writing? And, as my last question: will files written directly to one of the individual drive mounts show up immediately in the pooled mhddfs mount?

Thanks in advance

Top
#356809 - 14/12/2012 21:37 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: xmodder]
Dignan
carpal tunnel

Registered: 08/03/2000
Posts: 12338
Loc: Sterling, VA
I cannot comment on the topic, but I wanted to welcome you to the forum.

Welcome! smile
_________________________
Matt

Top
#356810 - 15/12/2012 01:18 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: xmodder]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
I know nothing about SnapRAID. But my mythbackend box is the one using mhddfs, and it works swimmingly well without me having to fuss over it all. That box has a Core2Duo CPU (2× @2.6GHz), and the minimal CPU hit of mhddfs has never been the slightest issue or concern. The NFS server in the box delivers over 100MBytes/sec to other machines on the GigE LAN from the mhddfs array.

No form of RAID is a backup, and neither is mhddfs. Sure, it is far more dead-drive tolerant than any RAID, but I still recommend that everyone have at least one extra copy of their data (aka. a "backup"). smile

Cheers

Top
#356811 - 15/12/2012 01:29 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
I should add that, on my mythbackend box, the "Recordings" directory is set to a specific filesystem/drive of the array, not going through mhddfs. I believe you were asking about this specific point. It turns out that it *could* use mhddfs after all (performance is a non-issue), but I _like_ having the recordings all on one drive rather than spread over the array. My box holds mostly (static) video content rather than (dynamic) recordings.

Top
#357085 - 11/01/2013 10:05 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
xmodder
new poster

Registered: 14/12/2012
Posts: 2
Originally Posted By: mlord
I should add that, on my mythbackend box, the "Recordings" directory is set to a specific filesystem/drive of the array, not going through mhddfs. I believe you were asking about this specific point. It turns out that it *could* use mhddfs after all (performance is a non-issue), but I _like_ having the recordings all on one drive rather than spread over the array. My box holds mostly (static) video content rather than (dynamic) recordings.


Ok, thanks a lot for your answer mlord. That's all I wanted to know. Actually, I was thinking that a decent CPU should cope well with the overhead imposed by mhddfs, but just in case, as I have not tested it, I wanted to know whether I had an alternative if writing to the mhddfs pool turned out to be slow.
My NAS will also be holding mainly static video/audio content, and writes to it would only be downloads from a bittorrent client and recordings from the MythTV server on the same machine. I also want to write to this NAS from other machines on the network, to copy over new video and audio content from time to time, but it will mainly be used to stream live TV from the MythTV server and to stream video and audio to XBMC clients on the network.
So I think this setup is very similar to yours, and if it is working fine for you, it should also work fine for me.

Best regards

Top
#357591 - 18/02/2013 17:06 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: mlord
Yeah, so you are probably better off without that patch, then. It must have a bug in there of some kind.

Well, the underlying filesystems are now full enough here that I can see that mhddfs actually is working to distribute files across both arrays. So perhaps there isn't "a bug in there" after all!

Here's how both of the filesystems get mounted here.
First, the main MythTV storage hierarchy. For this filesystem, capacity balancing isn't forced on until a drive dips below 400GB free.

/usr/bin/mhddfs /drive[123] /drives -o allow_other,auto_cache,max_write=4194304,uid=1000,gid=1000,mlimit=400G

And here is the backup array, mounted only when needed. Capacity balancing is set to kick in even later here, at the 200GB per-member threshold:

mhddfs /.bkp/drive[1234] /.bkp/drives -o rw,allow_other,auto_cache,max_write=4194304,uid=1000,gid=1000,mlimit=200G



Top
#357835 - 12/03/2013 21:15 Seagate ST4000DM000 4TB "Green" drive [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: mlord
Speaking of 3TB drives: I purchased the Western Digital "Green" drives for this setup. My initial observations of them are that (1) they are as mechanically quiet as the 2TB drives, BUT (2) they do vibrate more than the 2TB ones, and this may make them audible in some cases. I also wonder about endurance with all of that vibration.

The vibration from the extra platter is not bad -- less than a typical 7200rpm drive -- but it is noticeable when compared with the 2TB versions.


Well, almost a year later now, and I need more space. WD still haven't released a 4TB "Green" drive yet (roadmap says 2013Q3, sometime in the summer), and the Black ones are too hot, noisy, and expensive.

So.. I discovered that Seagate, having abandoned "Green" drives, have actually come out with a new 4TB model. They call it a "Desktop" drive rather than a "Green" drive, but it spins cool/quiet at 5900rpm just like the earlier Seagate "Green" series did.

I'm wary of Seagate after an incredible string of past (3-4 years ago) firmware and mechanical failures, but this drive is pretty much all new technology compared with those. Only 4 platters (current WD 4TB drives require 5 platters), and it really is nearly silent and vibration free -- much better even than the WD 3TB (5 platter?) drives I have.

Time will tell, but meanwhile I'll just continue to keep full backups. smile

Top
#357836 - 13/03/2013 05:51 Re: Seagate ST4000DM000 4TB "Green" drive [Re: mlord]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
4TB is still significantly more expensive, i.e. much more than double the price of 3TB, though. Is it really worth it, or do you need the single drive?

3TB is very much the sweet spot.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#357837 - 13/03/2013 12:02 Re: Seagate ST4000DM000 4TB "Green" drive [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Well, that's the thing. Here, a 3TB WD Green drive is currently CAD$130. The 4TB Seagate is CAD$190. The 4TB WD Black is around CAD$300 by comparison. The problem with my MythTV box is that it has only 3 internal drive bays, which are already filled with 3TB drives.

Actually, I've installed the 4TB into that box as a fourth drive, by suspending it over top of the motherboard/RAM area (horizontal case). Some trickery was required. smile

Edit: I expect prices to remain fairly constant until WD releases their own "Green" (and "Red") 4TB drives this summer. If the Seagate is still behaving itself then, I might buy a few more of them once prices come down more. Note that these drives are equivalent to the WD "Red" series (5900rpm, TLER, etc..).

Cheers

Top
#357838 - 13/03/2013 20:01 Re: Seagate ST4000DM000 4TB "Green" drive [Re: mlord]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
I should have qualified with my location to be clear. Those new Seagates haven't made it to our shores either - not available from my local cheap parts store.

I bought a 3TB WD green for AUD$135 just this week so similar money to you. A 4TB WD black is $350 though and other 4TB drives go up significantly from there.

Edit: Maybe they have but from the online retailers starting from about $220 + $10-20 shipping. I usually buy from my local guy since it's easier and usually cheaper.


Edited by Shonky (13/03/2013 20:04)
Edit Reason: They are available.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#357839 - 13/03/2013 20:11 Re: Seagate ST4000DM000 4TB "Green" drive [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Here're the "Shopbot.ca" search results for the new Seagate 4TB, with prices in CAD$ (for me, at least). Some of these places will ship internationally, though I think that's a bad way to purchase a sensitive mechanical device like a multi-TB hard drive. smile

http://www.shopbot.ca/m/?m=ST4000DM000

Top
#357840 - 13/03/2013 20:24 Re: Seagate ST4000DM000 4TB "Green" drive [Re: mlord]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Yeah, we have shopbot too, as well as other similar ones - that's how I found they were available. Just replace .ca with .com.au

Shipping is also usually a deal breaker when you're looking at $/GB.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#357843 - 14/03/2013 15:32 Re: Seagate ST4000DM000 4TB "Green" drive [Re: mlord]
BartDG
carpal tunnel

Registered: 20/05/2001
Posts: 2616
Loc: Bruges, Belgium
I'm really looking forward to the WD 4TB drives, which, as shown on a leaked roadmap, are indeed meant to be released in Q3. On the other hand, that same roadmap also mentions a Green 5TB drive to be released in Q4. IF that is true, it might drive the price of the 4TB drives down faster.
_________________________
Riocar 80gig S/N : 010101580 red
Riocar 80gig (010102106) - backup

Top
#358015 - 29/03/2013 18:35 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: mlord
There is a bit of a performance hit with mhddfs --> everything gets relayed from the kernel to the mhddfs task (userspace) and then back to the kernel and on to the original task.
..
One project I might do if I get time/bored, is to write an in-kernel implementation of a simplified version of it, which would get rid of the double copying on reads/writes and make it all pretty much invisible performance-wise.

Back to this again. Last night, SWMBO & I were watching a BRRip of Fame (1980). Really good film, especially the cafeteria party scene. But playback stuttered annoyingly during that scene, which has a lot of fast movement, camera panning, and high bit-rate audio.

Today I poked at things, and noticed that the file plays more smoothly (though not perfectly) when not passing through mhddfs. I think mythtv doesn't do enough read-ahead, an issue that is compounded by the frontend streaming content via the backend, rather than just reading the file itself.

So I patched the kernel on that box to intercept open() calls to the videos hierarchy on the mhddfs, redirecting access straight to the underlying files when the open() is for an ordinary file read. Doing this similarly reduces the stuttering we saw to almost non-existent levels.

Someday I've gotta figure out where Mythtv itself does the file reads, and insert an fadvise() call there for proper read-ahead, or something.
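For MythTV itself that hint would be a C++ `posix_fadvise()` call around its file reads, but the same system call is exposed in Python as `os.posix_fadvise`, which makes for a compact illustration (the file path here is just a throwaway example, and the call is Linux/POSIX-only):

```python
import os

# Sketch of the read-ahead hint being discussed: tell the kernel we intend
# to read this file sequentially so it can ramp up read-ahead aggressively.
path = "/tmp/example_video.bin"
with open(path, "wb") as f:
    f.write(b"\0" * 4096)          # stand-in for a large video file

fd = os.open(path, os.O_RDONLY)
try:
    # POSIX_FADV_SEQUENTIAL: expect sequential access from offset 0 to EOF
    # (len=0 means "to end of file"), so the kernel doubles its read-ahead.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096)
finally:
    os.close(fd)

print(len(data))
```

`POSIX_FADV_WILLNEED` is the other relevant advice value: it asks the kernel to start fetching a given range into the page cache immediately, which is closer to the explicit pre-buffering a player front-end would want.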

Cheers

Top
#358022 - 30/03/2013 00:38 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Excluding mhddfs completely since I'm not using it, I see similar results in MythTV in what I assume are high bit rate portions that could easily be resolved with a bit of buffering.

I have all my TV on a NAS connected via HomePlug powerline so it's not the speediest in the world - maybe 2-3MByte/sec average.

The untuned signal at the beginning of HBO shows reliably stutters (probably doesn't help it's right at the start though). Certainly sections where bitrate should go up cause problems.
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#358023 - 30/03/2013 02:54 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
Dignan
carpal tunnel

Registered: 08/03/2000
Posts: 12338
Loc: Sterling, VA
Originally Posted By: Shonky
I have all my TV on a NAS connected via HomePlug powerline so it's not the speediest in the world - maybe 2-3MByte/sec average.

On a completely different note, do yourself a big favor and pick up this. In my experience, it's perfect for the same use cases where people usually turn to powerline. If you have a cable jack anywhere near both ends of that run, you'll be FAR, FAR better-off with MoCA adapters. The speed will be almost 100x better. I used MoCA adapters throughout my condo (before I moved into a house with ethernet), and I never saw speed issues.

MoCA is amazing...
_________________________
Matt

Top
#358024 - 30/03/2013 04:23 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Dignan]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Well based on the numbers mentioned it would be more like 10x. I get somewhere in the 20-40Mbit/sec range and have seen 50Mbit at some point (I said Mbyte/sec before).

I don't have cable at both ends either so no use to me. I'm considering looking at Wifi N. From the living room I easily get 12-14 MByte/sec from my router on 802.11n 5GHz to my laptop. However I feel the problem is more MythTV's lack of buffering than anything...


Edited by Shonky (30/03/2013 08:09)
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#358025 - 30/03/2013 11:27 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Well normally, my MythTV box can play back anything I throw at it, without even a hint of stutter. To get it to that point, I've patched bugs in MythTV, installed an NVIDIA GT240 video card for pure VDPAU playback at the highest quality settings, made the appropriate configuration tweaks to have the HDMI output match the refresh rate of the source material (the video file), etc..

I'm not used to seeing stutter or even the slightest jitter from anything on it. smile So kinda surprising to get it from a 3GB video file. Definitely a buffering issue, because one can hit the "15sec back" button and have it replay the stuttered scene (from the page cache) without any hint of stutter second time around.

I might poke at it more if I see the problem again with some other file.

Cheers

Top
#358026 - 30/03/2013 12:32 Bandwidth
tanstaafl.
carpal tunnel

Registered: 08/07/1999
Posts: 5549
Loc: Ajijic, Mexico
Originally Posted By: Shonky
From the living room I easily get 12-14 MByte/sec from my router on 802.11n 5GHz to my laptop.
Only peripherally related to this discussion, but...

My internet comes to me on a 15Mbit/sec cable modem and a WRT54G2 router. My computer and SWMBO's computer are wired directly to the router with Ethernet. My downstairs neighbor (and the one below her) are using the wireless signal from the router. (It's a bit more complex than that: they receive their signal from a second WRT54G that is being used as a repeater from my WRT54G2 because there is so much steel in my house that my signal can't propagate directly to them.)

I have run speed tests and I do receive consistent 14-15 Mbit/sec downloads at my directly-wired computer. Can my downstairs neighbors expect to receive comparable speeds after going through the two wireless routers?

tanstaafl.
_________________________
"There Ain't No Such Thing As A Free Lunch"

Top
#358027 - 30/03/2013 13:03 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: tanstaafl.]
K447
old hand

Registered: 29/05/2002
Posts: 798
Loc: near Toronto, Ontario, Canada
What speeds are they actually getting?

"Expect" is a loaded word, as there are multiple variables involved. The WiFi repeater effectively halves the bandwidth available to everything connected through it. Whether that has much impact depends on how much bandwidth is available to begin with, on each hop.

Top
#358028 - 30/03/2013 13:41 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Dignan]
Phoenix42
veteran

Registered: 21/03/2002
Posts: 1424
Loc: MA but Irish born
Originally Posted By: Dignan
On a completely different note, do yourself a big favor and pick up this.

Tivo sells what I think is the same one for a few dollars cheaper. I believe this is their preferred method for streaming to the Mini (that or ethernet), but wireless, even N, is right out.

Originally Posted By: Dignan
MoCA is amazing...

Depending on how things work out over the next few months, I may be coming to you for MoCA advice.

Top
#358030 - 30/03/2013 15:35 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: tanstaafl.]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: tanstaafl.
I have run speed tests and I do receive consistent 14-15 Mbit/sec downloads at my directly-wired computer. Can my downstairs neighbors expect to receive comparable speeds after going through the two wireless routers?

Probably not. As a rule of thumb, take whatever "speed" a wireless gizmo advertises and divide it by two to get a real-life baseline for any use that doesn't happen within the same room. In your case that halved rate (roughly 26Mbits/sec) is still faster than your cable modem, so there's no loss on the first hop: the first wireless link can carry the full 15Mbits/sec.

Then divide by perhaps two again, to account for the (half-duplex) relay.

So in her setup, I'd expect to be able to stream at a steady 7-8 Mbit/sec in real life. Maybe better, if the sun, moon, and planets are in The Correct Alignment. smile
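The rule of thumb above can be sketched as a quick back-of-the-envelope calculation. The numbers here are illustrative assumptions (a hypothetical 52 Mbit/s rated link feeding the measured 15 Mbit/s cable connection), not measurements from this setup:

```shell
# Rough real-world throughput estimate per the rule of thumb above.
advertised=52   # hypothetical rated speed of the wireless link, Mbit/s
cable=15        # measured cable-modem throughput, Mbit/s

hop1=$((advertised / 2))                 # halve the advertised figure
[ "$hop1" -gt "$cable" ] && hop1=$cable  # can't exceed the upstream feed
hop2=$((hop1 / 2))                       # half-duplex repeater halves again

echo "first hop: ${hop1} Mbit/s, after repeater: ${hop2} Mbit/s"
```

With these assumed numbers it prints `first hop: 15 Mbit/s, after repeater: 7 Mbit/s`, which lines up with the 7-8 Mbit/sec estimate.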

Cheers


Edited by mlord (30/03/2013 15:38)

Top
#358032 - 31/03/2013 02:15 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Phoenix42]
Dignan
carpal tunnel

Registered: 08/03/2000
Posts: 12338
Loc: Sterling, VA
Originally Posted By: Shonky
Well based on the numbers mentioned it would be more like 10x. I get somewhere in the 20-40Mbit/sec range and have seen 50Mbit at some point (I said Mbyte/sec before).

Ah, my fault, I read MB as Mb in your previous post.

Originally Posted By: Phoenix42
Originally Posted By: Dignan
On a completely different note, do yourself a big favor and pick up this.

Tivo sells what I think is the same one for a few dollars cheaper. I believe this is their preferred method for streaming to the Mini, that or ethernet, but wireless, even N, is right out.

Yup, Tivo relies on MoCA for the Mini (or ethernet) and I believe can also talk between regular Tivo units the same way. It's also the way many (most? all?) cable providers are handling in-house place-shifting of content.

You're also right that it seems Tivo is charging a little less for those things, though I get two-day shipping through Amazon smile

Quote:
Originally Posted By: Dignan
MoCA is amazing...

Depending on how things work out over the next few months, I may be coming to you for MoCA advice.

Happy to give it! It's pretty straightforward, with a couple little potential snags.
_________________________
Matt

Top
#358054 - 01/04/2013 14:37 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
drakino
carpal tunnel

Registered: 08/06/1999
Posts: 7868
Originally Posted By: Shonky
I'm considering looking at Wifi N. From the living room I easily get 12-14 MByte/sec from my router on 802.11n 5GHz to my laptop.

N throws more variables into the puzzle than A, B, or G did. (Posting for general info)

1. Is it 2.4 or 5GHz? 5GHz has more channels, thus less overlap and interference. Downside: the signal doesn't travel as far.
2. Are "wide" channels in use? 802.11n allows both the old 20MHz channel size and wider 40MHz channels, with the potential to double speed.
3. How many radios and antennas? Most equipment has 2, leading to a possible 300Mbit rated link when using wide channels. Some devices have only one, capping them at 150Mbit. Others have 3, and the spec allows up to 4 (600Mbit max).

When shopping for 802.11n, keep the above in mind to try to meet whatever speeds are needed. Distance and overhead will of course slow these speeds down in real-world situations. The ability to get close to 450Mbit has negated the need for Gigabit ethernet to my laptop in many situations, and the upcoming 802.11ac standard should remove the need for a wire to my laptop completely.
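As a sketch of how those rated-link numbers combine: the rated speed is roughly the per-stream rate times the number of spatial streams, where the per-stream rate depends on channel width. The per-stream figures below are approximations, and the variable values are assumptions for illustration:

```shell
# Approximate 802.11n rated link speed from stream count and channel width.
streams=2      # spatial streams (radio/antenna chains); most gear has 2
wide=1         # 1 = 40MHz "wide" channels, 0 = legacy 20MHz channels

if [ "$wide" -eq 1 ]; then
    per_stream=150   # ~150 Mbit/s per stream on a 40MHz channel
else
    per_stream=72    # ~72 Mbit/s per stream on a 20MHz channel
fi

rated=$((streams * per_stream))
echo "rated link: ${rated} Mbit/s"
```

With `streams=3` the same arithmetic gives the 450Mbit figure mentioned above, and `streams=4` gives the spec's 600Mbit maximum.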

Top
#358059 - 02/04/2013 06:21 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: drakino]
Shonky
pooh-bah

Registered: 12/01/2002
Posts: 2009
Loc: Brisbane, Australia
Yep aware of all of that. I have a 2x2 connection at the moment on 5GHz. It's only from one floor to another directly on top (albeit through a concrete slab).

However, as mentioned, I'm not sure this will actually make much difference due to MythTV's buffering. I've been meaning to run a wired 100Mbit (or maybe even Gbit) connection just to try it out. If that's no good, WiFi has no chance...
_________________________
Christian
#40104192 120Gb (no longer in my E36 M3, won't fit the E46 M3)

Top
#358062 - 02/04/2013 11:42 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Shonky]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
For nearly everything, MythTV here plays perfectly well over GigE. A couple of recent files are the first I've ever seen any hint of stutter on.

I had a friend trying to get MythTV to play stuff smoothly from his NAS, over Wifi, then over Powerline networking, and it was a real nightmare. I don't remember what the final solution was there.

Top
#360065 - 25/10/2013 17:05 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: xmodder]
Mister_Smith
new poster

Registered: 25/10/2013
Posts: 2
Hi everybody,

I have to bring up this thread again because I'm not sure if my thinking is correct:

Originally Posted By: xmodder
So, I was thinking on using the mhddfs mount just for exporting the drives pool through samba/nfs just for reading, and then using the separate drive mounts for writing (directly on the server, for the MythTV recordings or from exported samba/nfs shares).

.....

So, mlord, have you tested this scenario? Will it work? And, if it works, can I assume that this will not impact drive or CPU performance, at least for write operations, since I will be using the individual drive mounts and not the pooled mhddfs mount for writing? And, as my last question, will files written directly to one of the individual drive mounts show up immediately in the mhddfs pooled mount?


I also want to use mhddfs to create a storage pool, but I want direct access to the drives when writing (so that I can control which drive each file ends up on). I don't want to let mhddfs decide where my files get written; I want to keep that in my own hands. But when accessing the pool for reading, I want to do it via the mhddfs mount point.

I couldn't find a definitive answer even though xmodder asked before, so here are my questions:

1) Can I have direct access to the drives for writing?
2) Are the files I've written directly to one of the individual drives immediately shown in the mhddfs pool?
3) Or do I have to rebuild the mhddfs pool each time I write data by my own?
4) How can I use aufs and does it make sense to integrate aufs?

Many thanks in advance for any help.

Greets

Top
#360066 - 25/10/2013 18:07 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: Mister_Smith]
mlord
carpal tunnel

Registered: 29/08/2000
Posts: 14493
Loc: Canada
Originally Posted By: Mister_Smith
1) Can I have direct access to the drives for writing?

Yes, no problem at all, no confusion.
Quote:
2) Are the files I've written directly to one of the individual drives immediately shown in the mhddfs pool?

Yes, immediately. Mhddfs never caches any metadata. Bad for performance, but good for directly accessing the member drives.
Quote:
3) Or do I have to rebuild the mhddfs pool each time I write data by my own?

No.
Quote:
4) How can I use aufs and does it make sense to integrate aufs?

For what you are doing, aufs (aufs2?) may be a better choice. It provides the functionality you need: read access to everything under a single mount point, and write access to the individual members. The reason to perhaps prefer mhddfs over aufs is if you want the filesystem to automatically figure out where to store things. Otherwise, aufs will have higher performance (being an in-kernel filesystem).
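For reference, a minimal sketch of the two mounts. The paths here (/mnt/disk1, /mnt/disk2, /mnt/pool) are placeholders, the mhddfs mlimit value is just an example, and the aufs line assumes the aufs module is installed (it is not in mainline kernels):

```shell
# Member filesystems are assumed to be already mounted at /mnt/disk1
# and /mnt/disk2 (placeholder paths).

# mhddfs: pooled read AND write; mhddfs picks the drive for new files.
# mlimit sets the free-space threshold before it spills to the next drive.
mhddfs /mnt/disk1,/mnt/disk2 /mnt/pool -o mlimit=4G,allow_other

# aufs alternative: read everything under one mount point, while writing
# directly to the member mounts yourself (branches deliberately read-only).
mount -t aufs -o ro,br=/mnt/disk1=ro:/mnt/disk2=ro none /mnt/pool

# Either way, a file written directly to a member, e.g.:
#   cp movie.mkv /mnt/disk2/video/
# shows up immediately under /mnt/pool/video/ with no rebuild step.
```

These commands need root and real mount points, so treat them as an ops sketch rather than something to paste verbatim.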

Cheers

Top
#360067 - 25/10/2013 18:33 Re: mhddfs -- Pooling filesystems together for bulk storage [Re: mlord]
Mister_Smith
new poster

Registered: 25/10/2013
Posts: 2
Perfect, thanks mlord.

That's all I wanted to know, and the best piece of information you gave is that I can do it with aufs, without mhddfs.

I thought aufs would only create an overlay file structure giving a kind of virtual access, but if aufs can create a pool with a single mount point and no balancing, I'm happy for the rest of this week (and beyond)...


Edited by Mister_Smith (25/10/2013 18:33)

Top