Coda File System

Re: cfs flushcache .. what to use to flush venus contents to server?

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Thu, 25 Jan 2001 15:18:49 -0500
On Thu, Jan 25, 2001 at 02:58:10PM -0500, Brad Clements wrote:
> I created a new replicated volume, mounted it, and have written about 15 
> large files to a subdir on that volume from client B
> 
> on client A I can see the files, but they all have 0 length (client A is the 
> scm) on client C, same problem

Ok, the CREATE operations went through, but the STORE operations did not.

> cfs lv <dir> on client B shows current blocks used 15, but it should be 
> much more.

Does it show anything about CML entries pending for reintegration in the
cfs lv output?

> still it seems that all my data is in B's cache and hasn't been written 
> back to the server.

If the client is write-disconnected, make sure you have tokens and try

    cfs fr <dir>	(aka. forcereintegrate)
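
A minimal sequence, with the username and directory as placeholders,
would be something along these lines:

    clog <username>	(get fresh Coda tokens; prompts for a password)
    ctokens		(check that the tokens are actually there)
    cfs fr <dir>	(push the pending CML entries back to the servers)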

> Also, cfs lv says "write-back disabled"

That is writeback-caching, an extension to write-disconnected operation.

> cfs dasr, cfs easr, cfs wd, cfs wr, cfs wbstart, wb stop and wbauto??

dasr/easr - Disable/Enable application-specific resolver execution.
    Not really useful without a fully functioning helper application
    (AdviceMonitor is broken; its replacement, the sidekick, is under
    development).

wd/wr - Force write-disconnected operation, or attempt to go back to
    fully connected operation (a short example follows this list).

fr - Force all logged operations back to the server.

wbstart/wbstop/wbauto - All related to the writeback-caching extension.
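
As a rough sketch of how these fit together (the path is a placeholder,
and I'm assuming you hold valid tokens for the whole sequence):

    cfs wd <dir>	(log all mutations locally instead of sending them)
    ... edit files under <dir> ...
    cfs fr <dir>	(reintegrate the logged operations with the servers)
    cfs wr <dir>	(try to return the volume to fully connected mode)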

> cfs wb or cfs wbstart gives "resource not available"

Because all newly created volumes have a flag that disables writeback
caching. This code is still very fresh, and doesn't really work nicely
when several clients are hoarding the same files and have wb-caching
enabled. Writeback caching is similar to but not the same as
write-disconnected operation.

> Also I'm seeing a lot of Cache Overflow on the B client . this is before 
> writing data to the volume, I was just reading from another volume..

That is unusual; a client should refuse to fetch anything that would
exceed the available cache space. Cache overflows are normally
associated with writing: the client only "realizes" how big a file is
after it has been closed, and at that point it might be bigger than the
cache size or quota would allow.
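
If the overflows really are caused by a cache that is too small for the
files you are writing, the usual fix is to give venus a bigger cache.
On most installs that means raising cacheblocks in venus.conf and
restarting venus; a sketch, assuming the common file location and that
one cache block is 1KB:

    # /etc/coda/venus.conf  (location may differ on your install)
    cacheblocks=200000	(venus cache size in 1KB blocks)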

Jan
Received on 2001-01-25 15:18:56