Coda File System

Re: Venus cache size

From: Gulcu Ceki <cgu_at_zurich.ibm.com>
Date: Thu, 07 Oct 1999 11:21:03 +0200
Bill,

You are absolutely right. There were at least two mistakes in my
experiment.

1) I added an extra zero to the cache size. In other words, the venus
cache size was configured to be 100'000 and not 10'000. Bruce Janson
politely suggested that this might be the source of the error. He was
right, of course.
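
   (If I understand the convention right, venus counts its cache in
   1 KB blocks, so the extra zero was a tenfold difference; the
   venus.conf entry name below is my assumption:)

   #   10'000 blocks  * 1 KB ~= 10 MB   (what I intended)
   #   100'000 blocks * 1 KB ~= 100 MB  (what I actually configured)
   # e.g. in venus.conf (entry name assumed):
   cacheblocks=10000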

2) Having corrected this "minor" detail, I tried once again

   dd if=/dev/zero of=/coda/x ...

   As you remarked, the client can open zero-length files for
   writing and can continue to write to them (beyond the cache limit)
   without data being lost.

   However, "cat /coda/x" fails, exactly as you wrote; the full test
   is sketched below.

Having made a big fool out of myself, I went and read the paper
entitled "Coda: A Highly Available File System for a Distributed
Workstation Environment."  I now see that the cache size limitation
is not just a Coda thing: it is inherent to whole-file caching in
general. Since the client must fetch the entire file into its local
cache before the file can be opened, a file larger than the cache can
never be opened at all. I would think that AFS has the same
restriction -- except that fewer people complain about it.

My humble apologies to all. Ceki

> > AFAICT, strongly connected clients (and no hoarding) can manipulate
> > files of any size independently of cache size. Ceki
> 
> Sorry, this just isn't true.  Clients can certainly open zero-length
> files for writing (what your test does) and can continue to write
> to them without data being lost.  The problem is when you 
> later try to manipulate those files, namely by open()ing them
> for reading, writing, or both.  open() fails with ENOSPC if the file
> size is larger than the cache (or on coda-5.2.7 and earlier, if the
> file was larger than (cache_size - leaked_rvm)).
> 
> Bill Gribble
Received on 1999-10-07 05:23:44