From SRS0=LM0p=MF=lists.debian.org=bounce-debian-user=vogelke+debian=pobox.com@bounce2.pobox.com  Mon Apr 26 06:03:10 2010
Message-ID: <4BD562A7.3050907@hardwarefreak.com>
Date: Mon, 26 Apr 2010 04:53:43 -0500
From: Stan Hoeppner <stan@hardwarefreak.com>
MIME-Version: 1.0
To: debian-user@lists.debian.org
Subject: Re: Filesystem recommendations
References: <o2p258ced3f1004241053j28352c68v7fa61b56b021443b@mail.gmail.com>
 <4BD3425F.6080301@cox.net> <4BD3DEEF.7050305@allums.com>
 <4BD53D53.9050205@hardwarefreak.com> <4BD54D42.3030701@allums.com>
In-Reply-To: <4BD54D42.3030701@allums.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Mark Allums put forth on 4/26/2010 3:22 AM:
> On 4/26/2010 2:14 AM, Stan Hoeppner wrote:
>> Mark Allums put forth on 4/25/2010 1:19 AM:
>
> Sorry Stan,  Your defense of XFS has put me into troll mode.  It's a
> reflex.  I don't buy it, but I shouldn't troll.
>
> I think you are confusing what is with what should be.

A'ight, you forced me to pull out the big gun.  Choke on it.  The master
penguin himself, kernel.org, has run on XFS since 2008.  Sorry for the body
slam.  Is your back ok Mark?  ;) Pretty sure I just "won" this discussion.
Err, actually, XFS wins.  ;) BTW, the main Debian mirror in the US is actually
housed at kernel.org last I checked.  Thus, the files on your system were
very likely originally served from XFS.

The Linux Kernel Archives

"A bit more than a year ago (as of October 2008) kernel.org, in an ever
increasing need to squeeze more performance out of its machines, made the
leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS.
We cite a number of reasons including fscking 5.5T of disk is long and painful,
we were hitting various cache issues, and we were seeking better performance
out of our file system."

"After initial tests looked positive we made the jump, and have been
quite happy with the results.  With an instant increase in performance
and throughput, as well as the worst xfs_check we've ever seen taking 10
minutes, we were quite happy.  Subsequently we've moved all primary mirroring
file-systems to XFS, including www.kernel.org and mirrors.kernel.org.

With an average constant movement of about 400mbps around the world, and
with peaks into the 3.1gbps range serving thousands of users simultaneously
it's been a file system that has taken the brunt we can throw at it and held
up spectacularly."

http://www.xfs.org/index.php/XFS_Companies#The_Linux_Kernel_Archives

--
Stan


From SRS0=LM0p=MF=lists.debian.org=bounce-debian-user=vogelke+debian=pobox.com@bounce2.pobox.com  Mon Apr 26 04:03:17 2010
Message-ID: <4BD54389.1030604@hardwarefreak.com>
Date: Mon, 26 Apr 2010 02:40:57 -0500
From: Stan Hoeppner <stan@hardwarefreak.com>
MIME-Version: 1.0
To: debian-user@lists.debian.org
Subject: Re: Filesystem recommendations
References: <o2p258ced3f1004241053j28352c68v7fa61b56b021443b@mail.gmail.com>
 <j2v537f90651004250829j1b0cb458y681b1732c2c2da4a@mail.gmail.com>
In-Reply-To: <j2v537f90651004250829j1b0cb458y681b1732c2c2da4a@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

Mike Castle put forth on 4/25/2010 10:29 AM:
> On Sat, Apr 24, 2010 at 10:53 AM, B. Alexander <storm16@gmail.com> wrote:
>> Does anyone have suggestions and practical experience with the pros and cons
>> of the various filesystems?
>
> Google is switching (has switched by now?) all of its servers over to
> ext4.  A web search will turn up more details on the subject.  But
> they are mostly lots of big files.

I read this as saying that, were it not for the live-migration requirement,
Google would be using XFS due to its superior performance:

"In a mailing list post, Google engineer Michael Rubin provided more insight
into the decision-making process that led the company to adopt Ext4.  The
filesystem offered significant performance advantages over Ext2 _and nearly
rivaled the high-performance XFS filesystem_ during the company's tests.
Ext4 was ultimately chosen over XFS because it would allow Google to do a
live in-place upgrade of its existing Ext2 filesystems."

--
Stan

---------------------------------------------------------------------------
Taken from http://www.linux-mag.com/id/7876/3/
Creating 1 billion files

Ric talked about some specific lessons they learned from the testing:

* When he fsck-ed the 1 billion file ext4 file system (a total of 70TB of
capacity), it took about 10GB of memory during the operation.  That may
sound like a great deal of memory, and on today's laptops and desktops it
is, but on servers this amount of memory is fairly common.

* Running xfs_repair (the XFS file system repair tool) on a large file system
took almost 30GB of memory, which is quite a bit of memory even for servers.
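For anyone weighing that memory cost before committing to a repair, xfs_repair
can be exercised read-only first.  A minimal sketch of how one might size a
repair box, assuming an unmounted XFS volume (/dev/sdX1 is a placeholder
device name, and the -n, -vv, and -m flags are as described in the xfs_repair
man page, so check your xfsprogs version):

```shell
# Read-only dry run: scan the filesystem and report problems,
# but make no modifications at all.
xfs_repair -n /dev/sdX1

# With -m (memory cap, in MB) set deliberately too low plus -vv,
# xfs_repair bails out but first reports how much memory it thinks
# it needs -- useful for sizing RAM before a real repair.
xfs_repair -n -vv -m 1 /dev/sdX1

# Actual repair, capped at ~28GB on a box that has that much to spare:
xfs_repair -m 28672 /dev/sdX1
```

The cap matters on busy servers: without -m, a repair of a very large
filesystem can consume tens of GB, as Ric's numbers above suggest.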

