Coda File System

Coda 6.07 | Replication Problems

From: redirecting decoy <>
Date: Thu, 28 Oct 2004 13:40:13 -0700 (PDT)
Hello all,

Been messing with coda for a few weeks. Had it working
fine until I attempted replication. I just upgraded to
coda 6.0.7 from 6.0.6.  Been trying to get replication
to work correctly for a few days already with little
success, and it is difficult to find current
documentation. Here is my setup and what I want
to do:

I have 8 machines. I want two of them (m1 and m2)
to act as servers, and all of them (m1-m8) to act
as clients.

m1 is now set up as the SCM, and I want m2 to be
the replication server. I ran vice-setup on m2 and
set it up as a non-SCM server; that seems to work
OK. I followed all the directions I could find in
the manuals.

Once I had m1 and m2 setup, I created a Root volume
using the following command:

createvol_rep CodaRoot m1/vicepa m2/vicepa

The above command created CodaRoot.0 on m1 and
CodaRoot.1 on m2, so in theory I should have a
replicated root volume, correct?
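As a sanity check (and assuming "volutil getvolumelist" is
the right way to verify this), I looked for the replicas on
the SCM:

```shell
# On the SCM (m1): the replicated volume and both of its
# replicas (CodaRoot.0, CodaRoot.1) should show up in the
# server's volume list.
volutil getvolumelist | grep -i codaroot
```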

The output of "cfs whereis /coda/m1" and "cfs
whereis /coda/m2" tells me that the files reside
on m1 and m2, so it looks like it's working so far.

Then I ran "venus-setup m1,m2 500000" on m1
and "venus-setup m2,m1 500000" on m2, and started
venus on both machines.

Well, it seems to work, although it's kind of
flaky. For example: I am on m1 (the SCM), start
venus, and go to the /coda/m1 directory. I create
a file there called "something" and put some text
in it. Now, if, still on m1, I go to the directory
/coda/m2, which should be the same volume, the
file doesn't show up.

Then I go and look on m2 in the directories
/coda/m1 and /coda/m2, and the file is not there
either. However, if I restart venus, the changes
appear to propagate. It is the same in both
directions: if I make changes on one side, I have
to restart venus on the other in order to see the
changes. What am I doing wrong?

Also, venus is doing some weird things. If I stop
venus and then try to restart it with "venus &"
(on either machine), I get an error message about
venus turning into a zombie. To get past the
error I have to run "venus -init &", but when I do
that, this is my output:
16:20:00 Starting RealmDB scan
16:20:00        Found 1 realms
16:20:00 starting VDB scan
16:20:00        0 volume replicas

Then I stop venus again and restart it with just
"venus &", and "0 volume replicas" turns into "2
volume replicas". I do not understand why it
happens this way, but once I go through that
process, I can see the updated files.

So basically here are my questions:

1) How do I force changes made to a file/dir to
propagate to every client without restarting venus on
every client?
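I assumed something like the following would force venus to
revalidate without a restart, but maybe I am using the wrong
commands ("cfs checkservers" / "cfs checkvolumes" are just my
guess from the man page):

```shell
# Probe the servers and re-establish any broken connections.
cfs checkservers

# Re-check cached volume version stamps so stale state gets
# revalidated against the servers.
cfs checkvolumes
```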

2) Is it possible to mount a volume directly under
the /coda directory instead of at
/coda/m1/<volume>? I would like to be able to do
this, since it seems that a change on m1:/coda/m1
does not propagate to m1:/coda/m2 unless I restart
venus. Also, being able to go to /coda/<dir>
directly instead of typing in the realm would make
it easier for me when I write scripts.
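For example, I was hoping something like "cfs mkmount" would
let me attach the volume at a path of my choosing (this is my
guess at the syntax from the manual, so it may well be wrong):

```shell
# Inside the realm, create a mount point for the replicated
# volume at a directory name of my choosing.
cfs mkmount /coda/m1/root CodaRoot
```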

3) How do I delete a replicated volume, or any
volume in general? I've tried the purgevol and
purgevol_rep commands without success: the
programs appear to run, but "volutil
getvolumelist" still shows the volume I wanted to
delete. Say I wanted to delete my root volume,
CodaRoot.0 on m1 and CodaRoot.1 on m2. How would
I do that quickly and easily?
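This is what I have been trying, in case I am missing a step
(I suspect the volume location database has to be rebuilt
afterwards, but I am not sure that is right):

```shell
# On the SCM: purge the replicated volume and its replicas
# from the participating servers.
purgevol_rep CodaRoot

# Rebuild the volume location database so the purged volume
# stops showing up in "volutil getvolumelist".
bldvldb.sh
```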

4) The output of the command "cfs lv /coda/m(1,2)"
sometimes tells me:
  Status of volume 0x7f000000 (2130706432) named "/"
  Volume type is Replicated
  Connection State is Disconnected
  There are 2 CML entries pending for reintegration
and other times tells me that the connection state
is "Connected". How do I keep it connected, or
tell it to reconnect? None of the commands that I
have tried appear to make any difference.
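These are the kinds of commands I have been trying (again,
guessing at which one is supposed to force the pending CML
entries to reintegrate):

```shell
# Probe the servers so venus notices they are reachable.
cfs checkservers

# Force strong connectivity, so venus reintegrates pending
# CML entries instead of staying write-disconnected.
cfs strong
```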

Uhh, that's all I can think of right now. Does
anyone have any ideas about any of these problems?
Everything is sooo close to working, but keeps
missing by a few annoyances. BTW, my log files
don't show anything useful, so I'm assuming I'm
doing something wrong. Any help would be
appreciated.

Thanks in advance,


Received on 2004-10-28 16:42:46