Coda File System

Re: performance

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Tue, 23 May 2000 14:59:45 -0400
On Tue, May 23, 2000 at 09:43:31AM -0700, Justin Chapweske wrote:
> 
> > NFS/Samba have a completely different model, they are block based. If
> > you read a very large uncached file in Coda, you have to wait until it
> > has been fetched completely. 
> 
> I take it that this is an implementation detail.  I'm surprised that since
> you guys have the benefit of having a multi-threaded implementation you
> don't just feed the file's bytes off to the reading application while
> saving it to the cache at the same time, that way the initial latency of
> opening large files is eliminated.  I'm sure that this would be much more
> complex to implement than I have stated here, but are there any semantic
> reasons why it couldn't be done this way?

Well, there are several reasons. First of all, our kernel code does not
intercept read/write calls; they go directly to the inode of the cache
file. If we want to trickle, every read/write/lseek/mmap (or readpage/
writepage) would have to check whether that part of the file has been
fetched yet. The kernel would have to use some additional upcall to
`extend' its access to the file.
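To make that concrete, here is a rough userspace sketch (hypothetical names, not actual Coda/Venus code) of the per-read bookkeeping a trickling fetch would require: every read has to check whether the covering byte range has arrived yet, and otherwise block or trigger an upcall to extend its access:

```python
class TrickleCacheFile:
    """Toy model of a cache container file being filled while readers
    are active. All names here are illustrative, not Coda's."""

    def __init__(self, total_size):
        self.total_size = total_size
        self.fetched_up_to = 0          # bytes fetched contiguously from offset 0
        self.data = bytearray(total_size)

    def feed(self, chunk):
        """The cache manager appends freshly fetched bytes."""
        end = self.fetched_up_to + len(chunk)
        self.data[self.fetched_up_to:end] = chunk
        self.fetched_up_to = end

    def read(self, offset, length):
        """A read succeeds only once the covering range has arrived.
        A real kernel module would sleep or send an additional upcall
        here instead of raising."""
        if offset + length > self.fetched_up_to:
            raise BlockingIOError("range not yet fetched; upcall needed")
        return bytes(self.data[offset:offset + length])
```

Note that even this toy version forces a range check on every read, which is exactly the overhead the current design avoids by letting reads go straight to the inode of a fully fetched container file.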

Another issue is what to do when someone is already reading the file and
the client disconnects, or the server goes down and the replica we
switch to has a different version of the file. At the moment these
things are no problem: we can return ETIMEDOUT on the open, or restart
the fetch for the new version.
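In other words, whole-file semantics let all the failure handling collapse into open time. A rough sketch of that control flow (Python, hypothetical names; not how Venus is actually written):

```python
import errno

class VersionChanged(Exception):
    """The replica we failed over to has a different version of the file."""

def open_cached(fetch, max_restarts=3):
    """fetch() pulls the whole file into the cache and returns its data,
    or raises TimeoutError / VersionChanged. All failures surface here,
    at open(); once this returns, reads hit the local copy directly."""
    for _ in range(max_restarts):
        try:
            return fetch()
        except TimeoutError:
            # client disconnected or server unreachable: fail the open
            raise OSError(errno.ETIMEDOUT, "fetch timed out")
        except VersionChanged:
            continue  # restart the fetch for the new version
    raise OSError(errno.ETIMEDOUT, "no consistent copy fetched")
```

With trickling, by contrast, these same failures could strike mid-read, after open() has already succeeded, which is much harder to report sensibly to the application.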

> Are there any other places in the coda client that could be significantly
> improved if the implementation weren't prohibitively complex?  The reason

I'm restructuring the volumes in the client; once that has stabilized it
will become trivial (as far as the client is concerned) to resize the
VSG of a volume, or move volumes around between different servers.

Jan
Received on 2000-05-23 15:00:44