Coda File System

Re: crash in rvmlib_free during repair

From: Jan Harkes <jaharkes_at_cs.cmu.edu>
Date: Thu, 4 Jul 2013 02:35:41 -0400
On Thu, Jul 04, 2013 at 07:25:44AM +0200, Piotr Isajew wrote:
> gigabytes) amount of files to coda and it happened, that during
> copying SCM went off the network. Copying resumed to secondary
> server and failed some time after that (I don't remember why it
> was though, probably some venus crash).
...
> This triggered several server/server conflicts for the
> directories involved.
> 
> I was surprised that this kind of conflict is not resolved
> automatically, but tried to use repair on directories causing
> problems.

Long story, but the short explanation is that when reintegration fails
because of a network connection issue, the 'in-flight' operations were
already applied to the failed server. They are then retried against the
other server, but the client picks a new store identifier for each
operation.

As a result, during resolution the servers treat the conflicts as
updates that may have come from different clients and the log-based
resolution never gets a chance to kick in.

> I'm able to beginrepair, comparedirs generates reasonable fix:
> 
> replica 192.168.9.6 02000001 
> 	removed java
> 
> replica 192.168.10.1 01000002 
> 
> 
> but if I invoke dorepair, or removeinc non-SCM crashes. repair
> just reports error due to lost connectivity with non-SCM.
...
> repair_getdfile: completed!: Success
> RVMLIB_ASSERT: Error in rvmlib_free

Odd, I don't think I have seen such a crash before. The usual cases I
see involve the server crashing because it ran out of available
resolution log entries, and then the next mutating operation sent to the
server triggers an assertion.

My guess is that it may be a non-empty directory: the removedir fails
early on, and when repair tries to clean up it ends up removing an
object that normally only gets created after the removedir succeeds.

> After restarting everything I still have the conflict in the same
> node or its parent node, depending on the situation.

Was this directory by any chance moved from one directory to another?

With files I've seen rename-related conflicts where the default repair
suggestion, when the source directory is resolved, is to recreate the
renamed object; repair then fails because the server already has that
same object in the not-yet-resolved destination directory. But this
case is different, since it is a directory and it is a remove.

Either way, when repair fails like that, cached objects aren't correctly
invalidated on the client, so any further repair attempt will probably
fail because the servers' version vectors are different.
After a failed repair I typically do,

    cfs expand java
    cfs fl java/*
    cfs collapse java

this will drop the objects cached during the failed repair so that the
next repair attempt will at least pick up the accurate server state.

> Is there any hack, that would allow me to recover from that
> situation?

Conflicts are hard, and ones that crash a server are worse. Some
possible approaches:

Instead of removing the directory, recreate it on the other replica.
Start off by launching repair, then run 'beginrepair java' and
'comparedirs /tmp/fix'. Then suspend repair and edit the fix file:
remove the 'removed java' line from the one replica and add to the
other replica a line like "created java volumeid vnodenr unique", where
volumeid.vnodenr.unique are the file identifier bits, which can
probably be obtained with 'cfs getfid java/*' while the conflict is
expanded.
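
For example, assuming the default fix was going to remove 'java' from
the 192.168.9.6 replica, the edited fix file might end up looking
roughly like this (the fid fields below are placeholders that would be
filled in from the 'cfs getfid' output):

    replica 192.168.9.6 02000001

    replica 192.168.10.1 01000002
        created java <volumeid> <vnodenr> <unique>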

If that doesn't work, and it is reliably only one server that crashes,
you can try to repair the conflict with only the other server running.
If that works you can bring the crashed server back up, extract all the
volume replica information with 'volutil info volumename.0' (or .1),
remove and recreate the corrupted replica, and then repopulate the
volume through runt resolution by doing a 'find /coda/path/to/volume
-noleaf'. This is pretty risky: you will lose any files that had not
been replicated to the remaining replica.

The information you need to recreate the replica is the exact volume
_replica_ name (the volume name with an extra .0 or .1, depending on
whether it was the first or second replica in the VRList), the
replicated volume id (i.e. the one that starts with 7f), the volume
replica id (i.e. the one that doesn't start with 7f, but with the
server id from /vice/db/servers as a hexadecimal prefix), and the
partition where the volume's container files should be placed.

    # volume replica id and replica name
    volutil -h failingserver info volumename | grep header

    # replicated volume id
    volutil -h failingserver info volumename | grep groupId

    volutil purge <replica_id> <replica_name>

    volutil create_rep /vicepa <replica_name> <replicated_volume_id> <replica_id>
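
Once the empty replica has been recreated and the server is running
again, the tree walk mentioned above (run on a client, against the
mounted volume) should trigger runt resolution and repopulate it:

    # walk the tree from a client to trigger runt resolution
    find /coda/path/to/volume -noleaf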

Good luck,

Jan
Received on 2013-07-04 02:35:55