Re: [PATCH net-next 4/9] rxrpc: Randomise epoch and starting client conn ID values

From: Jeffrey Altman
Date: Mon Sep 05 2016 - 22:18:15 EST


Reply inline ....

On 9/5/2016 12:24 PM, David Howells wrote:
> [cc'ing Jeff Altman for comment]
>
> David Laight <David.Laight@xxxxxxxxxx> wrote:
>
>>> Create a random epoch value rather than a time-based one on startup and set
>>> the top bit to indicate that this is the case.
>>
>> Why set the top bit?
>> There is nothing to stop the time (in seconds) from having the top bit set.
>> Nothing else can care - otherwise this wouldn't work.
>
> This is what I'm told I should do by purveyors of other RxRPC solutions.

The protocol specification requires that the top bit be 1 for a random
epoch and 0 for a time-derived epoch.
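For illustration only, generating such an epoch might look roughly like
the following (the function name is mine, not taken from the patch):

#include <linux/random.h>
#include <linux/types.h>

/* Sketch only: produce an epoch value with the top bit set to mark it
 * as randomly generated rather than time-derived.
 */
static u32 make_random_epoch(void)
{
	u32 epoch;

	get_random_bytes(&epoch, sizeof(epoch));
	epoch |= 0x80000000;	/* top bit = 1 => randomly generated epoch */
	return epoch;
}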
>
>>> Also create a random starting client connection ID value. This will be
>>> incremented from here as new client connections are created.
>>
>> I'm guessing this is to make duplicates less likely after a restart?

It's to reduce the possibility of duplicates on multiple machines that
might at some point exchange an endpoint address, whether due to
mobility or to NAT/PAT.
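Purely as a sketch (not necessarily how the patch seeds it), the
starting counter value could be derived like so:

#include <linux/random.h>
#include <linux/types.h>

/* Illustrative only: choose a random starting point for the cyclic
 * client connection ID counter, confined to the 30-bit connection ID
 * space (the low 2 bits of the on-the-wire CID carry the channel).
 */
static u32 pick_conn_id_start(void)
{
	u32 start;

	get_random_bytes(&start, sizeof(start));
	return start & 0x3fffffff;
}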
>
> Again, it's been suggested that I do this, but I would guess so.
>
>> You may want to worry about duplicate allocations (after 2^32 connects).
>
> It's actually a quarter of that, but connection != call, so a connection may
> be used for up to ~16 billion RPC operations before it *has* to be flushed.
>
>> There are id allocation algorithms that guarantee not to generate duplicates
>> and not to reuse values quickly while still being fixed cost.
>> Look at the code NetBSD uses to allocate process ids for an example.
>
> I'm using idr_alloc_cyclic()[*] with a fixed size "window" on the active conn
> ID values. Client connections with IDs outside of that window are discarded
> as soon as possible to keep the memory consumption of the tree down (and to
> force security renegotiation occasionally). However, given that there are a
> billion IDs to cycle through, it will take quite a while for reuse to become
> an issue.
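For anyone following along, a minimal sketch of that allocation pattern
(window trimming elided; symbol names illustrative, not the real ones)
might look like:

#include <linux/idr.h>
#include <linux/spinlock.h>
#include <linux/gfp.h>

static DEFINE_IDR(client_conn_ids);
static DEFINE_SPINLOCK(client_conn_id_lock);

/* Sketch of cyclic ID allocation: cycle through IDs 1..2^30-1 so the
 * low 2 bits of the wire connection ID remain free for the channel
 * number.  Returns the new ID or a negative errno.
 */
static int alloc_client_conn_id(void *conn)
{
	int id;

	idr_preload(GFP_KERNEL);
	spin_lock(&client_conn_id_lock);
	id = idr_alloc_cyclic(&client_conn_ids, conn, 1, 0x40000000, GFP_NOWAIT);
	spin_unlock(&client_conn_id_lock);
	idr_preload_end();

	return id;
}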
>
> I like the idea of incrementing the epoch every time we cycle through the ID
> space, but I'm told that a change in the epoch value is an indication that the
> client rebooted - with what consequences I cannot say.

State information might be recorded about an Rx peer on the assumption
that the state will be reset when the epoch changes. The most frequent
use of this technique is Rx RPC statistics monitoring.


>
> [*] which is what Linux uses to allocate process IDs.
>
> David
>
