Coda File System

Re: Real life lessons of disconnected mode

From: <>
Date: Tue, 27 Jun 2006 16:06:52 +0200
Hi Satya,

On Tue, Jun 27, 2006 at 08:08:53AM -0400, M. Satyanarayanan wrote:

> The current code is biased towards the fully automated end of 
> the spectrum and no atomicity.  So the cache management policy
> is roughly "as fresh a state as possible, without user interaction
> or atomicity guarantees." 

As you can see, I am advocating a less ambitious behaviour, namely
omitting the refreshing walks altogether. Some of the problems you
mention (like background growth of hoarded trees) then do not arise at all.

In my perception, when I decide to hoard a file or a tree,
it means I want guaranteed access to "this" version of the object.
Freshness is welcome but not crucial. I am then also free to refresh
the object at any time simply by opening it for reading.
That can be done by a separate per-user daemon for the cases where
some files must be kept as fresh as possible, like

while sleep 60; do while read file; do true <"$file"; done <filelist; done &

with a GUI producing the "filelist"...
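
Spelled out with comments, the same daemon might look like this (the
`refresh_once` function name and the `daemon` argument are just for
illustration, not anything Coda-specific):

```shell
#!/bin/sh
# Per-user refresh daemon: periodically open each listed file for
# reading, which makes venus refetch any stale copy. Same behaviour
# as the one-liner above, just spelled out.
FILELIST=${FILELIST:-filelist}   # produced by the GUI

refresh_once() {
    while IFS= read -r file; do
        # The open alone is enough to trigger the refresh;
        # no data needs to be read.
        true < "$file"
    done < "$FILELIST"
}

# Loop in the background only when started as a daemon.
case "$1" in
    daemon) while sleep 60; do refresh_once; done & ;;
esac
```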

> One reason for rejecting the "sticky" approach was that we 
> didn't have a good answer to the question of  what to do if the 
> resync step would cause a pinned subtree to expand greatly
> (beyond cache size limits).  E.g. you disconnect after hoarding

Then there should be some mechanism preventing a user from mounting a
denial of service against the whole host. Each user's processes should get
a "disk full" (ENOSPC) error as soon as they expand their pinned file set
beyond a predefined size.

In general, there should be a list of users allowed to pin objects in
the cache, together with a limit on how much each of them may pin.

As with disk quotas, you want to "overbook" reasonably, so that at
least no single user can ever take all the space.
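
For example, with a 2048 MB cache, a hypothetical per-user pin-quota
table (the file and its syntax are an invented sketch, not an existing
Venus feature) could be overbooked like this:

```
# /etc/coda/pin-quotas  -- hypothetical, not an existing Venus file
# cache size: 2048 MB; quotas sum to 3072 MB (overbooked),
# yet no single user can pin more than 75% of the cache.
alice   1536M
bob     1024M
carol    512M
```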

> a 1-byte subtree; a later "hoard walk" discovers that the
> 1-byte subtree has grown to 10 GB, which is bigger than the cache.
> What does Venus do now?   Currently, Coda tries to use the hoard priority

Return ENOSPC. Of course that implies that there is a _user_process_
doing the walk on behalf of the user, rather than Venus doing it
internally. The user process can be implemented in a variety of ways,
allowing for different automatic or interactive fallback behaviours.

Say, a cron job doing the walk would abort and possibly send mail,
while a hoarder built into the desktop would give you a popup matching
the desktop's look and feel.
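
A minimal sketch of such a cron-driven walk (the `walk` function, the
`FILELIST` variable, and the mail step are assumptions for illustration,
not existing Coda tooling):

```shell
#!/bin/sh
# Hypothetical cron hoard walk: refresh every listed file once, abort
# on the first failure (e.g. ENOSPC reported by venus) and tell the user.
FILELIST=${FILELIST:-$HOME/.hoardlist}

walk() {
    while IFS= read -r file; do
        # Reading the file forces venus to fetch a fresh copy.
        if ! cat "$file" >/dev/null 2>&1; then
            echo "hoard walk failed at: $file" >&2
            return 1
        fi
    done < "$FILELIST"
}

# A crontab entry would run something like:
#   walk || mail -s 'hoard walk failed' "$USER"
```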

For my purposes I would probably never run a hoarder implicitly, and
certainly never when I am not logged in.

> The deeper issue is static partitioning of the cache versus dynamic
> partitioning.   Even without growth of hoarded subtrees, there
> could be cache pressure to throw things out.  E.g. you hoard 
> critical objects, then start crawling some big tree while still
> connected.   The cache misses during the crawl will eventually
> force a hard decision:  to throw out a hoarded object or not.
> The "sticky" approach would never throw out a hoarded object
> to relieve cache pressure.    But it would make the apparent
> cache size smaller for non-hoarded objects.  

If there are quotas on hoarding, they would in many cases prevent the
situation where no space is left for non-hoarded objects.

> Usage-based insights and ideas from the Coda user community on 
> these issues would be very helpful --- please contribute.

Thanks for paying attention, and of course thanks for Coda!
Received on 2006-06-27 10:08:16