Coda File System

Re: Cache Overflow

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Thu, 12 Aug 1999 10:34:02 -0400
On Thu, Aug 12, 1999 at 12:40:21PM +0200, Mitja Sarp wrote:
> On Wed, Aug 11, 1999 at 06:51:08PM -0400, Jan Harkes wrote:
> 
> > Coda uses `whole file caching', so your cache needs to be (at least)
> > as large as the 2GB file you are trying to work with. And, as you
> > might have noticed, the cache limit is a little `soft': Coda only
> > complains about an overflow once every 30 seconds. Since you also
> > want the directories leading up to the file cached, it is probably
> > better to have a larger local cache size.
> > 
> 
> How about, if a file is found to be larger than the local cache,
> skipping the caching and logging part so that the file comes
> 'streaming' from the server instead?

That is impossible. Read the papers about the design decisions around
which Coda is built. Most applications can successfully handle errors
from an open call; far fewer can handle failed reads/writes; and
practically no application can roll back (or fix up) a file to a
consistent state when a write has failed halfway through (due to
disconnection).
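
Here is that pattern as a minimal C sketch (the path is made up, purely
for illustration): the open error is the one applications actually
check; a write that comes up short halfway through is rarely handled,
let alone rolled back:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical Coda path, just for illustration */
        int fd = open("/coda/usr/me/bigfile", O_WRONLY);
        if (fd < 0) {            /* open failures ARE usually handled */
            perror("open");
            return 1;
        }

        char buf[4096] = { 0 };
        /* ...but if this write fails halfway (e.g. the connection
         * drops), few applications can fix the file back up into a
         * consistent state -- many don't even check the return value */
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            perror("write");

        close(fd);
        return 0;
    }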

How do you expect to handle disconnected/weakly-connected operation?
Or concurrent access, detection of write-write sharing, consistency, etc.?

In a way, Coda works like a database. The open begins a transaction,
and the close ends it. Every transaction that modifies the filesystem
is shipped to the server or logged. Failed transactions are, depending
on the type of error, either handled transparently or blocked and shown
as a local-global conflict. This was done to ensure that a user will
not lose his updates, as we would otherwise have to roll back to the
state known at the fileservers. But the system always goes from a
consistent state to a consistent state (at the granularity of a file).
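
Roughly like this, as a C sketch -- all the names below are made up to
label the steps, this is not actual venus code:

    #include <stdbool.h>
    #include <stdio.h>

    static bool connected = true;   /* assumed connection state */

    /* stubs that just name the steps */
    static void ship_to_server(const char *f) { printf("reintegrate %s\n", f); }
    static void log_for_replay(const char *f) { printf("log %s\n", f); }

    static void coda_open(const char *f)
    {
        /* open() begins the transaction: the whole file is fetched
         * into the local cache, and all reads/writes hit that copy */
        printf("begin transaction on %s\n", f);
    }

    static void coda_close(const char *f, bool modified)
    {
        /* close() ends the transaction */
        if (!modified)
            return;
        if (connected)
            ship_to_server(f);  /* failure: handled transparently, or
                                 * surfaced as a local-global conflict */
        else
            log_for_replay(f);  /* replayed at reintegration time */
    }

    int main(void)
    {
        coda_open("/coda/usr/me/mail/inbox");   /* hypothetical path */
        coda_close("/coda/usr/me/mail/inbox", true);
        return 0;
    }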

Coda is NOT designed as an NFS client with an aggressive, persistent
buffer cache on top of it. If you want that, go hack an NFS client and
server. If you use locking to block concurrent write access, and modify
the server to invalidate all blocks a client has cached when a file is
updated (i.e. when the lock is released), you would actually do pretty
well, as long as there is a network.
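
In outline, that scheme would look like this (every name below is
invented just to label the steps; none of it is a real NFS or Coda
interface):

    #include <stdio.h>

    static void acquire_write_lock(const char *f) { printf("lock %s\n", f); }

    static void release_write_lock(const char *f)
    {
        /* on release, the modified server would tell every other
         * client to invalidate the blocks of f it has cached */
        printf("unlock %s, invalidate remote caches\n", f);
    }

    int main(void)
    {
        const char *f = "/nfs/home/me/file";   /* hypothetical path */
        acquire_write_lock(f);   /* blocks concurrent writers */
        /* ... writes go through the aggressive buffer cache ... */
        release_write_lock(f);   /* consistency restored -- but only
                                  * while the network is up */
        return 0;
    }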

But with such a setup I wouldn't recommend going off to Europe for 2
weeks with your laptop, working on all your files, re-organising your
email folders, and updating the webpages, and then coming back and
reintegrating with the server without problems. Btw, I did this with
Coda, and my desktop really was still delivering email into the email
folders I was playing around with on my laptop.

Jan
Received on 1999-08-12 10:36:41