Coda File System

Re: Coda development

From: Jan Harkes <>
Date: Wed, 4 May 2016 20:40:32 -0400
On Wed, May 04, 2016 at 11:44:35AM +0200, wrote:
> Probably the most apparent one is the limit on the key length in the
> security layer. It is a hard one too, because the limitation is hardwired
> in the current protocol.

Can you point out where the key length is hardwired to an undesirable
length? I am not aware of having done so; the only limitation at the
rpc2/secure layer is that AES does not go beyond 256-bit keys, which is
a very good key size for a symmetric encryption cipher. As far as I
know, AES is considered secure even at its minimal 128-bit key size.

In fact it will even use separate encryption and authentication keys if
enough key material is provided.
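To make the idea concrete, here is a minimal sketch of splitting supplied key material into independent encryption and authentication keys, falling back to reuse when there isn't enough. The function name, sizes, and fallback behaviour are illustrative assumptions, not rpc2's actual API:

```python
# Hypothetical sketch: carve separate encryption and authentication keys
# out of the key material when enough bytes are available.
AES_KEYSIZES = (16, 24, 32)   # AES-128/192/256, in bytes
AUTH_KEYSIZE = 32             # e.g. key for an HMAC-SHA-256 authenticator

def split_key_material(km: bytes, enc_keysize: int = 16):
    """Use disjoint regions of km for the two keys; reuse bytes only
    when km is too short for independent keys (which is weaker)."""
    assert enc_keysize in AES_KEYSIZES
    if len(km) >= enc_keysize + AUTH_KEYSIZE:
        # Enough material: fully independent keys.
        return km[:enc_keysize], km[enc_keysize:enc_keysize + AUTH_KEYSIZE]
    # Not enough material: overlapping keys as a fallback.
    return km[:enc_keysize], km[:AUTH_KEYSIZE]

enc_key, auth_key = split_key_material(bytes(range(48)))
```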

So at the security layer there is no forced limit; it just depends on
the amount of key material provided during connection setup. For that
we have to go one layer up and look at the new connection handshake,
which results in a session key used for all the following packets in
that communication. The actual key size here is chosen by the server,
based on the list of supported algorithms it just got from the client,
and sent back in the second packet of the handshake.

So the server picks the largest encryption key size supported by both
the client and the server, and then adds the size needed for the
authentication key. Because the information from the client arrived in
the first packet of the handshake, it is not encrypted at that point,
so an active attacker could potentially try to force a downgrade to
min_keysize. That is still sufficient for now, but just in case it
isn't, there is the 'RPC2_Preferred_Keysize' configuration override
which can be used to prevent such downgrading; it can be set with the
RPC2SEC_KEYSIZE environment variable. We will still pick a larger key
size when one is supported by both sides.
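The selection logic described above can be sketched roughly as follows. This is a simplification; only the RPC2SEC_KEYSIZE name comes from the text, the function and its exact semantics are assumptions:

```python
# Illustrative server-side keysize choice: take the largest size both
# peers support, and optionally enforce a configured floor so an active
# attacker cannot force a downgrade below it.
import os

MIN_KEYSIZE = 16  # bytes; assumed protocol minimum (AES-128)

def choose_keysize(client_sizes, server_sizes):
    common = set(client_sizes) & set(server_sizes)
    if not common:
        raise ValueError("no common encryption keysize")
    chosen = max(common)                      # prefer the largest
    floor = int(os.environ.get("RPC2SEC_KEYSIZE", MIN_KEYSIZE))
    if chosen < floor:
        # Peer only offered sizes below our configured floor:
        # treat it as a downgrade attempt and refuse.
        raise ValueError("peer offered only downgraded keysizes")
    return chosen
```

Note that the floor acts only as a minimum: a larger common key size is still chosen when available.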

When we send the init2 response we clearly do not yet have a session
key, because we just started negotiating its parameters, so the init2
is sent encrypted with a shared secret derived from the client identity
sent in the init1 packet. That identity is either a username
(clog/auth2), in which case the shared secret is looked up in the
password database (you aren't even using this bit because you are using
Kerberos), or an encrypted Coda token, which the client has a plaintext
copy of and which the server is able to decrypt because it (or one of
its peers) generated the original. For either of these two we have to
go yet another layer higher and end up at auth2/codatoken.c.

Now, at this point the key exchange that Coda uses stores the random
bytes of the secret in an old RPC2_EncryptionKey, and that one is only
8 lousy bytes: 64 bits, which is clearly sub-par and so bad it doesn't
even qualify for the min_keylen we need to get an encrypted connection
going in the first place. So the actual encryption key is derived with
a PBKDF, which runs a non-parallelizable operation 10000 times to slow
down the speed at which someone can iterate through possible keys.
(Password Based Key Derivation Functions are normally used for
passwords, which quite often have less than 64 bits of entropy.)
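A minimal illustration of that strengthening step, assuming PBKDF2-HMAC-SHA256 as the derivation function (the real rpc2 code may use a different construction, and the secret and salt below are made up):

```python
# Stretch an 8-byte (64-bit) shared secret into an AES key with a
# password-based key derivation function. Each of the 10000 iterations
# depends on the previous one, so a brute-force attacker must pay the
# full cost for every candidate key.
import hashlib

token_secret = b"\x01\x02\x03\x04\x05\x06\x07\x08"  # 8 lousy bytes
salt = b"example-salt"  # illustrative; a real salt comes from the handshake
derived = hashlib.pbkdf2_hmac("sha256", token_secret, salt, 10000, dklen=16)
```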

Now, you are concerned about the one 64-bit key that is used, after
strengthening with a PBKDF, on a single packet that is normally sent
once at the beginning of the handshake. I am actually more in line with
Greg Troxel's thinking, because all of this is 'homegrown crypto'. I
have tried my very best to avoid being smart and creative by very
closely following the IPsec RFCs and aggressively limiting the usable
encryption and authentication algorithms, so we can basically never hit
an RC4 issue or get downgraded to export ciphers and such.

I've also closely looked at CVEs for existing IPsec implementations and
checked whether my implementation could be affected. The latest result
is that I have started to introduce constant-time comparisons in
places; there was actually only one needed in rpc2/secure, but there
are probably some more where we check passwords and Coda tokens.
Clearly, though, nobody else is looking for vulnerabilities in such a
little-used implementation.
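For readers unfamiliar with why constant-time comparison matters here, a short sketch (the data is made up; the stdlib's hmac.compare_digest stands in for whatever the C code uses):

```python
# A naive byte-by-byte compare returns early at the first mismatch, so
# response timing leaks how many leading bytes of a guessed MAC or
# token were correct. A constant-time compare always does the same
# amount of work regardless of where the bytes differ.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early exit: timing depends on secret data
            return False
    return True

stored = b"expected-coda-token-mac"
probe = b"expected-coda-token-xxx"
# Same results, but only compare_digest's timing is data-independent.
same = hmac.compare_digest(stored, stored)
diff = hmac.compare_digest(stored, probe)
```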

In the long run using TLS over TCP is a better solution, but a whole lot
more needs to change than the size of a variable before that is even
close to a workable solution.

Received on 2016-05-04 20:40:43