Coda File System

Re: Weak connectivity between servers

From: David C. Steere <dcs_at_cse.ogi.edu>
Date: Fri, 03 Apr 1998 10:34:08 -0800
Satya'll probably crucify me for this...

Another way to solve the same problem is to use the Ficus approach: the
things you call servers in your diagram are really clients (they use the
Coda client protocols to stay consistent) and export their files to other
clients.  Ficus used NFS to export stuff from their client/server to the
real clients; we could do that, or hack the Coda servers to export files
from their filesystems rather than from the special Coda partition as is
currently the case.  (A drawback of their approach is that it does not
distinguish between clients and servers, which makes administration hard.)
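
A very rough sketch of what the Ficus-style arrangement might look like.
The host names are made up, and whether a kernel NFS server can be
pointed at the Venus-managed /coda tree at all is an assumption here, not
something we support today:

  # /etc/exports on the per-site "server" (really a Coda client running Venus)
  # hypothetical: assumes the NFS server can serve the Venus-managed tree
  /coda   *.pgh.example.org(rw)

  # ordinary clients at that site then mount it as plain NFS:
  mount -t nfs site-server:/coda /coda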

I posted this question a while back; has anyone had time to consider it?
(Having Coda servers export local files, NFS files, CD-ROM files, etc.)

david.



At 11:52 AM 4/3/98 -0500, you wrote:
>On Fri, 3 Apr 1998 braam_at_cs.cmu.edu wrote:
>
>> I think that weakly connected servers are presently not a good
>> idea. No one has ever investigated how much bandwidth is eaten by
>> resolution, but it is substantial.
>
>As a long-term goal, however, weak server connectivity would be very
>useful.  Imagine, if you will, a company with a number of facilities
>distributed across the world.  Rather than having the servers all in one
>location and relying on each individual client to cache the right data
>(with no sharing of cached data between the clients), it would be easier
>to have a pool of servers at each site that serves the clients at that
>site.  While connections between the sites would usually be up, the
>latency might be high and the bandwidth low.  At times, the links between
>the sites would go down due to normal Internet suckiness.  Individual
>clients handle this OK, but it would be better if the servers did too.
>
>Example figure:
>
>C - client
>S - server cluster
>
>(Pittsburgh, PA)
>C                         C
> \                       /
>  \                     /
>C--S -- -- -- -- -- -- S--C
>  /                     \
> /                       \
>C                         C
>               (London, UK)
>
>Presumably more diverse structures would be used also.  This kind of
>arrangement using a VPN is far from uncommon.
>
>Another potential use is for a cluster of mobile machines -- in a vehicle
>of some sort that has weak connectivity.  An example might be a boat,
>where satellite links exist but are slow, high-latency, and expensive.
>Your 7 client machines would not have to manage their hoards
>individually; rather, the server would provide a centralized hoarding
>service.  The same would apply to cars, airplanes, etc.  In each case
>there are benefits to a central cache, especially if the workstations are
>all accessing the same data -- and also if they want to see each other's
>modifications, which would be useful in such an environment.
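>
>A rough sketch of the idea: the hoard profile that today would live on
>each client could be maintained once, on the central machine.  The paths,
>priorities, and file name below are made up; only the format of the
>hoard command file is the existing one:
>
>  # vessel.hoard -- kept on the on-board cache machine, not on the clients
>  add /coda/project/src  600:d+   # whole source tree, pick up new files
>  add /coda/project/doc  100:c+   # top-level docs, immediate children only
>
>and loaded there with "hoard -f vessel.hoard" instead of on each of the
>seven clients.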
>
>Perhaps I am thinking of a different class of server -- a centralized
>Venus cache rather than a separate Vice in this last case?  If the
>cluster is clearly in a "client" relationship, that is different from two
>peer server clusters where it is not clear which one holds the central
>copy.  So perhaps we have two pictures -- the one above with peer server
>clusters, and then,
>
>C - client
>S - server cluster
>s - server
>
>
>C
> \
>  \
>C--s -- -- -- -- -- -- S  (plus much more)
>  /
> /
>C
>
>
>  Robert N Watson 
>
>
>----
>Carnegie Mellon University  http://www.cmu.edu/
>Trusted Information Systems http://www.tis.com/
>SafePort Network Services   http://www.safeport.com/
>robert@fledge.watson.org    http://www.watson.org/~robert/
Received on 1998-04-03 13:37:44