Coda File System

Re: adding a server

From: <>
Date: Thu, 22 Apr 1999 13:58:28 -0400 said:
|   Suppose you have a group of 3 servers that have a volume.  What if
| you want to add a 4th server to maintain a copy of that volume?
| What's the procedure?  What's the procedure for removing a server from
| a volume?

Hi Bob,

I had to think about this one, read some source, and think some more,
but here is the answer.

Currently, the volume storage group information is fixed, and there is 
no reliable mechanism to invalidate any cached volume information in the
clients. So it is not really possible right now.

However, there is a cfs command (cfs checkvolumes) which forces the
client to revalidate volume mountpoints, and therefore (all?) cached
volume information. So the clients should technically be able to adapt
to a changing volume storage group. The automatic adaptation doesn't
work, though: an error returned when a volume replica is missing doesn't
trigger revalidation, and added volumes are not automatically detected.

For the servers, a lot of information is cached as well, but the VLDB
(volume location database) is not stored in rvm, and changes are picked
up at run-time (otherwise we wouldn't be able to create new volumes on
running servers).

As servers (except for the server-server resolution subsystem) don't 
really know whether a volume is replicated or not, they _should_ also
be able to handle most of the hairy cases.

Now, how to do such a thing: I've looked at how the createvol_rep and
related scripts create volumes in the first place. This is completely
untested, but it is the set of steps most likely to work.

Extending a volume storage group

- Set up the new server as a non-SCM machine.

- Add the new server to the /vice/db/servers file (on the SCM).

- Start all the services on the new server (codasrv/update/auth2).

- On the new server, create new replicas for _all_ volumes that were in
  the original VSG:
  f.i. volutil create_rep /vicepa coda:root.<n> <Replicated group-id>
  (you can find the replicated group ids for the volumes in /vice/vol/VRList)
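The per-volume step can be sketched as a small loop. This is untested, and everything in it is a placeholder: the volume names, the replica index, and the group id (which would really come from /vice/vol/VRList). The echo keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch: create a new replica on this server for every volume
# that was in the original VSG.  All values below are placeholders taken
# from the examples in this mail.
PARTITION=/vicepa
N=3                 # index of the new (fourth) replica, counting from 0
GROUPID=7F0004D1    # replicated group id, from /vice/vol/VRList
CMDS=""
for vol in coda:root vmm:s.tmp; do
    CMD="volutil create_rep $PARTITION $vol.$N $GROUPID"
    echo "$CMD"     # drop the echo to actually create the replicas
    CMDS="$CMDS$CMD
"
done
```

Dropping the echo would actually run volutil once per volume, which is essentially what createvol_rep does when it sets a volume up for the first time.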

From here on, everything is run on the SCM:
- Add the new server to the volume storage group to /vice/vol/VSGDB.
  f.i. "E0000203 server1 server2" becomes "E0000203 server1 server2 newserver"

- Run ' <new server>'; this fetches the list of volumes on
  the new server and rebuilds the volume location database.

- Using the information in /vice/vol/remote/newserver.list, add the
  numbers for the new replicas to /vice/vol/VRList:

        Wvmm:s.tmp.2 Ic70000bd Hc9 P/vicepa m0 M0 U4aaf Wc70000bd C37149e75
         ^^^^^^^^^^^  ^^^^^^^^
         volume name  volume id

        vmm:s.tmp 7F0004D1 2 c9000108 dd0000c9 0 0 0 0 0 0 E0000203
                           ^                   ^ 
        is changed to:
        vmm:s.tmp 7F0004D1 3 c9000108 dd0000c9 c70000bd 0 0 0 0 0 E0000203
                           ^                   ^^^^^^^^

- Rebuild the volume replication database: volutil makevrdb
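Taken together, the SCM-side edits might be scripted roughly like this. Untested; it operates on copies of the example lines above rather than the real files under /vice/vol, and the hostnames and ids are the placeholders from those examples:

```shell
#!/bin/sh
# Sketch of the SCM-side edits, run on example data rather than the real
# VSGDB/VRList files.  All names and ids are placeholders from the text.

# 1. Extend the volume storage group entry with the new server.
VSG=$(printf 'E0000203 server1 server2' | sed -e 's/$/ newserver/')

# 2. Add the new replica id to a VRList entry: bump the replica count
#    (field 3) and fill the first free replica slot (fields 4..11).
VR=$(printf 'vmm:s.tmp 7F0004D1 2 c9000108 dd0000c9 0 0 0 0 0 0 E0000203' |
     awk -v id=c70000bd '{ n = $3; $3 = n + 1; $(4 + n) = id; print }')

echo "$VSG"
echo "$VR"

# 3. Rebuild the volume replication database (dry run here; drop the
#    echo to really run it on the SCM):
echo volutil makevrdb /vice/vol/VRList
```

The awk one-liner reproduces the hand-edit shown above: the count goes from 2 to 3 and c70000bd lands in the first free slot, leaving every other field untouched.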

That should be it for the servers....

The clients might work correctly after just doing a `cfs checkvolumes';
otherwise you'd have to start the clients with the -init flag to
reinitialize their caches.
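On each client the recovery sequence would look like this. A dry-run sketch, untested; it assumes venus is the client cache manager binary and only prints the commands:

```shell
#!/bin/sh
# Client-side recovery sketch (dry run): first try to revalidate the
# cached volume information, and only reinitialize if that fails.
STEP1="cfs checkvolumes"
STEP2="venus -init"     # restarts the cache manager with a fresh cache
echo "$STEP1"
echo "only if the client still misbehaves:"
echo "$STEP2"
```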

If you actually pull something like this off, you've probably gone
somewhere that no (sane) Coda user has gone before.

Received on 1999-04-22 13:59:52