Coda File System

Re: Replication server problems

From: Achim Stumpf <newgrp_at_gmx.de>
Date: Tue, 16 Jan 2007 12:40:40 +0100
Sean Caron wrote:
> Hi Achim,
>
> Personally, I'd just shitcan the whole thing and start over from
> scratch. I completely redid my Coda setup a few times (rm -r /vice;
> /usr/local/sbin/vice-setup) before I got it working sort of like how I
> wanted it. And even if you were to get it to work now, would you
> really trust it after hacking around so much?
>
I have started from scratch again and reinstalled the OS. Now I have been 
able to set up three servers in one realm. Thanks a lot for your advice.
But now I have some further question on the replication.

I have done nearly everything as you described. My /vice/db/servers 
looks like this:
# cat servers
clusty1.mytest.de               1
clusty2.mytest.de               2
clusty3.mytest.de               3

And the /vice/db/vicetab:
# cat vicetab
clusty1.mytest.de   /vicepa   ftree   width=256,depth=3
clusty2.mytest.de   /vicepa   ftree   width=256,depth=3
clusty3.mytest.de   /vicepa   ftree   width=256,depth=3

The client is connected:
# ctokens
Tokens held by the Cache Manager for root:
    @mytest.de
        Coda user id:    500
        Expiration time: Wed Jan 17 12:00:05 2007

[root_at_clusty4 ~]# l /coda/
total 12
dr-xr-xr-x  1 root nfsnobody 2048 Jan 16 11:00 .
drwxr-xr-x 24 root root      4096 Dec  8 11:19 ..
drwxr-xr-x  1 root nfsnobody 2048 Jan 16 09:58 mytest.de
[root_at_clusty4 ~]# l /coda/mytest.de/
total 4
drwxr-xr-x 1 root nfsnobody 2048 Jan 16 09:58 .
dr-xr-xr-x 1 root nfsnobody 2048 Jan 16 11:00 ..

[root_at_clusty4 ~]# cfs cs
Contacting servers .....
All servers up

[root_at_clusty4 ~]# cfs listvol /coda
  Status of volume ff000001 (4278190081) named "CodaRoot"
  Volume type is Backup
  Connection State is Connected
  Reintegration age: 0 sec, hogtime 0.000 sec
  Minimum quota is 0, maximum quota is unlimited
  Current blocks used are 0
  The partition has 0 blocks available out of 0

[root_at_clusty4 ~]# cfs listvol /coda/mytest.de/
  Status of volume 7f000000 (2130706432) named "/"
  Volume type is ReadWrite
  Connection State is Connected
  Reintegration age: 4294967295 sec, hogtime 4294967.295 sec
  Minimum quota is 0, maximum quota is unlimited
  Current blocks used are 2
  The partition has 8191888 blocks available out of 8211208

I followed your advice up to the client setup and stopped there; I haven't 
created any volumes yet. So now I wonder whether /coda/mytest.de/ is 
replicated across those three servers or not.
I created /vicepa on every server with a size of 9000 MB and an ext2 
filesystem on it.

If it is not replicated: is it the right approach to use the /vicepa 
partition, which was created on every server during setup, for the 
replicated volume?

for example:

createvol_rep testrepvol clusty1.mytest.de/vicepa clusty2.mytest.de/vicepa clusty3.mytest.de/vicepa
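If that createvol_rep call succeeds, a follow-up sketch for mounting and 
checking the new volume might look like the following (this assumes the 
standard Coda admin tools and running servers; the mount-point name 
"testrepvol" is just an example):

```shell
# On the SCM: create the triply replicated volume, one replica per server.
createvol_rep testrepvol clusty1.mytest.de/vicepa \
    clusty2.mytest.de/vicepa clusty3.mytest.de/vicepa

# On a client with tokens: mount the volume under the realm root.
cfs mkmount /coda/mytest.de/testrepvol testrepvol

# Check which servers hold replicas of the mounted volume.
cfs whereis /coda/mytest.de/testrepvol
```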

I already did this during my last setup, but I was a bit confused then, 
because
# cfs listvol /coda/mytest.de/
and
# cfs listvol /coda/mytest.de/mountpoint

seem to report the same partition (same blocks available). The superuser 
is also able to store files directly in /coda/mytest.de/ (the root 
directory). I am a bit confused about that. Is /coda/mytest.de/ itself 
also replicated?

Is it best practice to create volumes only under 
/coda/mytest.de/<myvolume-mountpoint> and not to put files directly in 
the realm root?


thanks,

Achim
Received on 2007-01-16 06:49:37