Coda File System

RE: Client Cache Files Limit

From: Beckmann, John <J.Beckmann_at_wmrc.com>
Date: Fri, 18 May 2001 14:07:29 +0100
Hi Jan,
Thanks for your response. This fixed the problem I was having, but now I
have come across the same problem that Greg Troxel was having with too many
files in a single directory.

I do not understand the reasoning that was given to Greg as to why you can
handle 4000 files with long file names and ~8000 files with short filenames.
I could understand this if the system came back and complained that it had
reached its limit, but to just crash and disable the whole system because
of this would indicate that the system does very little checking of what
resources it can and can't use.

After reading a lot of documentation on the coda website, I was under the
impression that this software could be used for FTP mirror sites! Nice idea,
but in reality...

Regards,
John Beckmann

-----Original Message-----
From: Jan Harkes [mailto:jaharkes_at_cs.cmu.edu]
Sent: Wednesday, May 16, 2001 21:39
To: Beckmann, John
Cc: codalist_at_coda.cs.cmu.edu
Subject: Re: Client Cache Files Limit


On Tue, May 15, 2001 at 05:41:11PM +0100, Beckmann, John wrote:
> Hi,
> I have setup Coda 5.3.14 running on two servers in replicated mode, which is
> working fine. The problem I have is when I try to write more files to the
> volume than the client cache has cache files in table.
> 
> If I setup the client with a 20MB cache, venus reports that it initially
> has handles for 833 files. Does this mean that if I write more than 833
> files, venus will lock up?

No, the client should discard not-recently-used files from its cache to
make room for the new ones -- specifically, files of which it knows there
is a valid copy on the servers.
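
That eviction policy can be sketched roughly like this (illustrative Python, not Coda's actual code): clean entries, which the servers already hold, are safe to discard in LRU order, but dirty entries cannot be evicted -- which is why a cache full of unsynchronized modifications leaves no room at all.

```python
# Hedged sketch of an LRU cache that may only evict *clean* entries,
# i.e. files known to have a valid copy on the servers. Dirty (locally
# modified, not yet reintegrated) entries are pinned in the cache.
from collections import OrderedDict


class ClientCache:
    def __init__(self, capacity):
        self.capacity = capacity
        # name -> dirty flag; insertion order doubles as LRU order
        self.entries = OrderedDict()

    def insert(self, name, dirty=False):
        if name in self.entries:
            self.entries.move_to_end(name)
        while len(self.entries) >= self.capacity:
            if not self._evict_one_clean():
                # Every slot holds a dirty file; nothing can be discarded.
                raise OSError("ENOSPC: cache full of unreintegrated files")
        self.entries[name] = dirty

    def _evict_one_clean(self):
        # Discard the least recently used entry that is clean.
        for name, dirty in self.entries.items():
            if not dirty:
                del self.entries[name]
                return True
        return False
```

With only clean files in play, the cache recycles slots indefinitely; once every slot is dirty, the next insert has nowhere to go.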

> 13:30:51 root acquiring Coda tokens!
> Assertion failed: VDB->AllocatedMLEs < VDB->MaxMLEs, file
> "/usr/src/redhat/BUILD/coda-5.3.14/coda-src/venus/vol_cml.cc", line 519
> Sleeping forever.  You may use gdb to attach to process 5872.
> 
> Is this by design?

MLEs are Modification Log Entries, i.e. your client was operating in
disconnected or write-disconnected mode. The whole cache got filled with
modified files that were not yet on the server, so it couldn't complete
the local operation.

Typically it should fail with ENOSPC when allocating the file object. The
MLE is created later on, and at that point there is no adequate error
handling anymore, because most of the administration is already done and
committed and hard to revoke; the MLE just logs the change.
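
The general fix being described -- check resource limits before committing any state, so the operation can fail cleanly -- can be illustrated with a minimal sketch (again illustrative Python, not venus's actual code; the class and method names here are hypothetical):

```python
# Hedged illustration: check the log-entry limit up front, while nothing
# has been committed yet, so the caller sees a clean ENOSPC instead of a
# late assertion failure.
import errno


class ModifyLog:
    def __init__(self, max_mles):
        self.max_mles = max_mles
        self.mles = []

    def record(self, op):
        # Refuse the operation before any state changes.
        if len(self.mles) >= self.max_mles:
            raise OSError(errno.ENOSPC, "modification log full")
        self.mles.append(op)
```

With a check like this, hitting the MLE limit would surface to applications as "No space left on device" rather than the assertion failure quoted above.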

> This would mean that if I want to update 100,000 files, I need to start
> venus with a 2.4GB cache.

Or keep the client strongly connected. To prevent server load or network
congestion from tricking the client into going write-disconnected, use
'cfs strong' before running the rsync. 'cfs adaptive' switches the
client back to a mode where it adapts its behaviour to varying network
connectivity.
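
The figures in this thread suggest roughly one cache-file slot per 24 KB of cache -- an inference from the numbers quoted above (20 MB giving 833 handles), not a documented constant -- and by that ratio 100,000 files would indeed call for about a 2.4 GB cache:

```python
# Back-of-the-envelope check of the figures in this thread, assuming
# (hedged -- inferred from the 20MB/833 figure, not documented) one
# cache-file slot per 24 KB of cache.
KB_PER_SLOT = 24


def slots_for_cache(cache_kb):
    """File handles venus would allocate for a given cache size in KB."""
    return cache_kb // KB_PER_SLOT


def cache_kb_for_files(nfiles):
    """Cache size in KB needed to hold this many file slots."""
    return nfiles * KB_PER_SLOT


print(slots_for_cache(20000))      # 20 MB cache -> 833 slots
print(cache_kb_for_files(100000))  # 100,000 files -> 2,400,000 KB (~2.4 GB)
```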

Jan
Received on 2001-05-18 09:10:03