Coda File System

sizing and venus memory usage questions

From: Dr. Douglas C. MacKenzie <doug_at_mobile-intelligence.com>
Date: Sat, 17 Jun 2000 09:39:48 -0500
Jan Harkes wrote:

> On Thu, Jun 15, 2000 at 09:43:31PM -0400, Dr. Douglas C. MacKenzie wrote:
> > files.  I assume this is due to the different directory structure
> > I've seen mentioned impacting "find" commands.  I suspect
> > the java commands do something similar to a find to get the
>
> Unlikely, the find problem only occurs in directories that contain
> mountpoints for Coda volumes. Since a Coda mountpoint is not really a
> directory until the mountpoint is accessed and the volume gets
> attached, the directory linkcount is off.
>
> Find has some optimizations to avoid calling stat on the directory
> entries once linkcount-2 directories have been seen. This way the
> As a result, the mounted Coda volumes confuse these optimizations and
> never traversed.

You were right.  The venus cache size was set too small in
/usr/coda/etc/vstab (it was set to 10000) and java couldn't open its
runtime library.  I raised it to 50000 and it works fine.
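For reference, the relevant vstab line might look something like this
(the field layout here is an assumption from memory -- mountpoint, root
servers, cache directory, then cache size in 1 KB blocks; check what
venus-setup actually wrote on your machine):

```
/coda:server1.example.com:/usr/coda/venus.cache:50000
```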

After experimenting with different venus cache sizes, I'm surprised at the
amount of memory venus uses.  My numbers (the server is set for 1.1Gig):

    Venus cache size (in vstab)        Venus memory usage (from top)
          50Meg                              9.9Meg
         100Meg                               16Meg
         500Meg                               66Meg
        1000Meg                              125Meg

Memory usage seems to run 12 to 20 percent of the cache size,
which seems pretty high to me.  Am I doing something wrong?
I believe that the cache size is the total disk space available to
coda for local storage of the hoard files and the current working set,
so it seems like it should be at least 500Meg to keep network traffic down.
Does that seem reasonable?
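As a rough check on those figures (a sketch only -- the linear model is
my assumption, not anything venus documents), fitting a line to the
table suggests a small fixed base cost plus roughly 12% per cached
megabyte, which is why the percentage falls as the cache grows:

```python
# Figures from the table above: (cache size in MB, venus RSS in MB from top).
samples = [(50, 9.9), (100, 16), (500, 66), (1000, 125)]

for cache_mb, rss_mb in samples:
    print(f"{cache_mb:5d} MB cache -> {rss_mb:6.1f} MB resident ({rss_mb / cache_mb:.1%})")

# Least-squares fit of rss ~= base + slope * cache (assumed linear model).
n = len(samples)
sx = sum(c for c, _ in samples)
sy = sum(r for _, r in samples)
sxx = sum(c * c for c, _ in samples)
sxy = sum(c * r for c, r in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
base = (sy - slope * sx) / n
print(f"approx. {base:.1f} MB fixed + {slope:.3f} MB per cached MB")
```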


>
>
> > How big of coda partitions are people using?
> > I set up a 1.1Gig data partition,  44Meg rvm partition,
> > and a 2Meg log partition and it seems to be working OK
> > on a 266 PII machine with 64Meg of memory and a
> > 64Meg swap partition.
>
> We have about 120MB RVM for a 2GB server, but are not yet sure how much
> of that RVM is really in use (at least we haven't run out of RVM on that
> machine yet).
>
> > Unfortunately, 1 Gig doesn't go very far anymore so I'm
> > looking at the 3.3Gig setup.  Is it OK to allocate more
> rvm data than the size of physical memory, as long as
> rvm < real+swap, or do I need to add more memory?
>
> Yes, one machine has 128MB memory but has no problems mapping 330MB of
> RVM data.
>
> > If I add a second server, are the data partion sizes additive,
> > or am I still limited to a 3.3Gig total coda filespace across
> > the cluster?  I'd like to get to around 10Gig total.
>
> 10GB should be doable even with one server, the sizes are only additive
> if you do not use any doubly replicated volumes. If everything is singly
> replicated you can add the disksizes.

What would I have to do to create a configuration table for the init
scripts for a 9.9Gig setup (since you already have a 3.3, 9.9 seems
like a good choice)?
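For what it's worth, naively scaling the 44Meg-RVM-per-1.1Gig ratio
from my current setup gives a ballpark for 9.9Gig (linear scaling is
just an assumption on my part -- RVM consumption really tracks file and
directory counts, not raw bytes):

```python
# Scale the RVM-data size linearly from the known 1.1 GB / 44 MB setup.
# NOTE: assumption only; real RVM use depends on the number of files,
# not the total data size.
known_data_gb = 1.1   # data partition in the working setup
known_rvm_mb = 44.0   # RVM data partition in the working setup
target_data_gb = 9.9  # proposed server size

rvm_mb = known_rvm_mb * target_data_gb / known_data_gb
print(f"~{rvm_mb:.0f} MB of RVM data for a {target_data_gb} GB data partition")
```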

>
> If everything is doubly replicated, there is the advantage that when one
> server is lost, it is simple to reinitialize it, recreate the lost
> volume replicas, and then resolve the whole tree back by doing ls -lR.
> But in that case you can't add the sizes anymore.
>
> Jan
Received on 2000-06-17 10:45:16