Coda File System

Re: Implementation Between 2 Servers

From: Jan Harkes <>
Date: Mon, 2 Jan 2006 12:49:05 -0500
On Fri, Dec 30, 2005 at 11:21:11AM -0600, Ryan Toomer wrote:
> My questions are as follows
> 1.	Is it true coda has problems on 64bit platforms?

Yes, that is true. I don't have a 64-bit machine myself, but I have
merged several patches from someone who does. There is one important
change for the servers that I haven't merged yet, but even with that
patch I'm not sure if everything actually works right yet.

In theory the client should be ok now, but I haven't actually tested it.

> 2.	Is it true that I will have to cut this up into 20 GB partitions to
> reliably use coda? (would like to use a 200 GB single partition)

Actually, Coda doesn't care much about the amount of data, but the
number of objects. There is a small 'test' program named rvmsizer, which
is included in the server package that can give an estimated metadata
usage when it is given a representative subtree of the data you want to
store. From that estimate you can extrapolate to what would be required
(or what the limits would be) for an actual deployment.

On a 32-bit system you can assume that RVM should scale up to about 2GB
without requiring special hacks like statically linking the binaries, so
if some tree would result in a 20MB estimate, the system would be able
to handle 100 times as much data.

I just checked on one of my servers; it has a little over 44GB of data,
and is using 180MB of RVM to store the required metadata. At the moment
it is configured for a 320MB RVM data segment, so we should be able to
store at least 78GB without having to resize the RVM data segment and up
to 525GB if we scale RVM data to 2GB.

It looks like rvmsizer.c doesn't actually depend on any headers in the
Coda source tree, so you can grab a copy from CVS and compile it
standalone:

    cc -o rvmsizer rvmsizer.c

Received on 2006-01-02 12:51:23