Coda File System

Re: volume connection timeouts

From: Ivan Popov <pin_at_medic.chalmers.se>
Date: Tue, 3 May 2005 10:16:43 +0200
On Mon, May 02, 2005 at 03:22:13PM -0600, Patrick Walsh wrote:
> 	We have cron jobs that run as local user root and coda uid 502.  Apache
> runs as user www-data with coda uid 501.  But there's a catch.  We
> forgot that apache needs to start as user root in order to listen on
> ports 80 and 443.  It also does its logging as user root.  Only its

Patrick, I think you have hit one of the cases where incompatible
security domains cross.

You need a process which has special privileges on local objects and
also special privileges on global objects. That's fine, of course.

Then you need another process with the same privileges on the local
objects but different ones on global objects.

Given that privileges in Unix are firmly tied to the uid, you cannot have both.
There are various "workarounds", used e.g. by AFS with its PAGs, but
they are inherently incomplete, as two processes with the same uid
are not protected from each other under Unix.

It is worst when you are using uid 0, as you cannot change the
uid to get rid of the conflict - root's properties cannot be assigned
to any other uid.

The right fix would be to let Apache write its logs as a uid other than root.
A workaround would be to let Apache write its logs on a local file system.
You can then copy them over to Coda with a non-root process, periodically,
say at log rotation (you said such a delay is ok).

> I wonder if venus could detect when the server is on the
> localhost and then adjust itself to be more patient?  I'm sure it's

I think it would be wrong to add hooks trying to improve a rare - and anyway
troublesome - configuration. Such hooks cannot help when you happen
to run clients and servers on, say, virtual hosts, vmware, user-space Linux
and so on - where you can still encounter similar bottlenecks.

Consider also that a client can find at most one server on the same host,
while there is at the same time an indefinite number of [realms and] servers
on other hosts. Those are the ones we should care about :-)

> common to want the server to be able to mount the coda files.  Intuition
> says that that should be the most reliable setup instead of the least-

The problem with intuition is that it is wrong sometimes...

It is not really a good idea to have a server and a client on the same
machine. I would not expect my Mozilla to work any better if I ran it on
the same machine as the Chalmers web server... and I certainly would not
expect the server to work better if we encouraged the users to run
their Mozillas on that machine :)

Coda is built to work as a distributed file system. It goes to extremes
to abstract files away from hosts. It is not supposed to be at its best
when used locally. And it is not, which does not contradict its goals.

> reliable setup.  It might be slower, but it shouldn't be less reliable.

The conclusion above is reasonable for "host-bound" systems:
their benefit from being near each other outweighs the disadvantages
of competing for resources. It does not hold in the general case.

> We're putting `cfs strong` 's all over the place now to try to keep
> things from getting write-disconnected.  Is there anything else we can
> do to enforce this?  Slow writes are not a problem so much as conflicts
> are.

I would suggest making the servers' life easier by running them on
dedicated hosts. It will certainly bring down the probability of conflicts.
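
If you keep the current setup in the meantime, re-asserting strong mode
does not have to be done by hand everywhere. An untested sketch, which
just shells out to the same cfs command you are already running, could
be put in cron:

    #!/usr/bin/env python
    # Untested sketch: periodically re-assert strong connectivity so a
    # transient timeout does not leave venus write-disconnected for long.
    # Assumes the "cfs" utility is in PATH.
    import subprocess

    subprocess.call(["cfs", "checkservers"])  # probe the servers again
    subprocess.call(["cfs", "strong"])        # keep venus strongly connected

It is a band-aid, of course; a dedicated server host attacks the cause
rather than the symptom.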

> 	Thanks for examining this issue with me.  As usual, I'll be sure to
> write up my notes in the wiki and share them with others to stave off
> similar such problems in the future.  

That's a great contribution.

Best regards, Patrick,
--
Ivan
Received on 2005-05-03 04:17:57