Refactor KIRA daemon into sans i/o
Goal
We want to refactor the KIRA routing daemon to achieve the following goals:
- clearer control flow
- better reusability of the core protocol logic
- better testability
- clearer architecture
This is done by splitting the KIRA daemon up into four major parts:
- R²/KAD: Internal protocol logic of the routing protocol R²/KAD
- I/O implementations: Handling sending and receiving actual bytes and providing us with crucial link-layer information.
- Fast Forwarding: the fast-forwarding implementations (e.g. nft, eBPF)
- KIRA node: the glue that connects everything
Read more about sans i/o here.
UML Overview of the changes
It is important to note that link-layer information is not passed into the R²/KAD domain; it is only tracked by the KIRA node, which provides the forwarding layer with this crucial information. To still enable (possibly future) link-layer decisions, a generic underlay-neighbour ID is provided. This allows us to test the R²/KAD protocol logic completely independently of any link-layer information or any of the other proposed domains. It also enables efficient network simulation by interacting only with the R²/KAD domain, without the need to parse any data.
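To make this boundary concrete, here is a minimal sketch (in Rust) of how the protocol core could be driven purely by in-memory events that refer to neighbours only through an opaque ID. The names `UnderlayNeighbourId`, `ProtocolInEvent` and `ProtocolOutEvent` are taken from the TODO list below; their fields, variants and the `handle` signature are illustrative assumptions, not the actual kira-lib API.

```rust
/// Opaque handle for an underlay neighbour. The KIRA node keeps the mapping to
/// real link-layer data (e.g. a SocketAddrV6); the protocol core never sees it.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct UnderlayNeighbourId(pub u64);

/// Events fed *into* the protocol core by the I/O layer (variants are assumptions).
pub enum ProtocolInEvent {
    Message { from: UnderlayNeighbourId, payload: Vec<u8> },
    TimerFired { timer_id: u64 },
    UnderlayNeighbourUpdate { neighbour: UnderlayNeighbourId, reachable: bool },
}

/// Events emitted *by* the protocol core for the I/O layer to act on.
pub enum ProtocolOutEvent {
    Send { to: UnderlayNeighbourId, payload: Vec<u8> },
    ArmTimer { timer_id: u64, after_ms: u64 },
    ForwardingTablesUpdate,
}

/// The R²/KAD core as a pure state machine: events in, events out, no sockets.
pub struct R2KadCore { /* routing table, use cases, ... */ }

impl R2KadCore {
    pub fn handle(&mut self, event: ProtocolInEvent) -> Vec<ProtocolOutEvent> {
        // Purely in-memory processing; trivially drivable from unit tests or a
        // network simulator without parsing any bytes.
        match event {
            ProtocolInEvent::Message { from, payload } => {
                // Placeholder behaviour: echo the payload back to the sender.
                vec![ProtocolOutEvent::Send { to: from, payload }]
            }
            _ => Vec::new(),
        }
    }
}
```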
Refactoring TODOs
Sans I/O
This work is mostly done but currently largely untested.
- new type definitions and conversions: `UnderlayNeighbourId`, `ProtocolInEvent`, `ProtocolOutEvent`
- `UseCaseEvent`: merge `API` and `InjectMesssage` events into new `Debug` event
- Move `NetworkInterface` dependencies in the protocol logic to a new `UnderlayNeighbourId` type
- `UnderlayNeighbourMapper` converting between ll-data (`SocketAddrV6`) and `UnderlayNeighbourId` (hashing); see the sketch after this list
  - use `UnderlayNeighbourId` instead of `NetworkInterface` in `UnderlayNeighbourTable`
  - rename `PhysicalNeighbourTable` to `UnderlayNeighbourTable`
- Adjust `ForwardingTables` to use `UnderlayNeighbourId`
  - adjust interface definitions
  - provide `UnderlayNeighbourMapper` access to the `ForwardingTables` for ll-meta-data.
  - (even further restrict `ForwardingTables` by not providing a destination; instead use the source in the `SocketAddrV6`)
- use `UnderlayNeighbourId` in `UnderlayNeighbourUpdate` instead of `Hardware` events in `UseCaseEvent`
- Move the `InsertionStrategy` directly to the `RoutingTable`
- Refactor `Node` and `NodeHandle`:
  - move event pipeline into `R²/KAD` protocol handle
  - move event serialization into `Node`
  - implement `ProtocolInEvent` and `ProtocolOutEvent`
  - factor out all link-layer specific calls into the new I/O part
- `Runtime`:
  - factor out timer management into a new `TimerRuntime` directly owned by every UseCase or the `UseCaseContext`
  - provide access to `tx_buffer` of the `R²/KAD` handle by cloning the producer and moving it into the `Runtime`
  - move pending `UseCaseEvent` queue into `Runtime`
  - Make runtime real-time agnostic.
- Move `ForwardingTables` out of the context
  - instead, the `ForwardingTablesUpdate` is used to communicate updates to the forwarding tables.
- Refactor unit tests
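As referenced in the `UnderlayNeighbourMapper` item above, here is a minimal sketch of that conversion: the mapper derives an `UnderlayNeighbourId` by hashing the link-layer data (`SocketAddrV6`) and keeps the reverse mapping so that only the I/O side ever resolves IDs back to addresses. The struct layout and method names are assumptions for illustration, not the existing API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::net::SocketAddrV6;

/// Opaque neighbour ID (re-declared here so the example is self-contained).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct UnderlayNeighbourId(pub u64);

pub struct UnderlayNeighbourMapper {
    by_id: HashMap<UnderlayNeighbourId, SocketAddrV6>,
}

impl UnderlayNeighbourMapper {
    pub fn new() -> Self {
        Self { by_id: HashMap::new() }
    }

    /// ll-data -> id: hash the socket address into an opaque ID and remember it.
    pub fn map(&mut self, addr: SocketAddrV6) -> UnderlayNeighbourId {
        let mut hasher = DefaultHasher::new();
        addr.hash(&mut hasher);
        let id = UnderlayNeighbourId(hasher.finish());
        self.by_id.insert(id, addr);
        id
    }

    /// id -> ll-data: only the I/O / forwarding side needs this direction.
    pub fn resolve(&self, id: UnderlayNeighbourId) -> Option<SocketAddrV6> {
        self.by_id.get(&id).copied()
    }
}
```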
I/O
After the sans-I/O refactoring has taken place, it's time to implement the I/O counterpart of the sans-I/O part in kira-lib:
- initial `UnderlayNeighborId` generator
- move (de)serialization of `ProtocolMessages` into the I/O part
- drive progress of the protocol instance (see the sketch below)
  - listen for protocol messages
  - invoke if a timer is due
  - provide underlay updates
- external interaction of the protocol instance
  - send protocol messages
  - relay forwarding updates to the fast forwarding layer
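One possible shape of this driver is sketched below: it polls the socket with a timeout derived from the next protocol timer, translates received datagrams and timer expiry into in-events, and executes the resulting out-events. The `ProtocolCore` trait and the event shapes are assumptions made to keep the example self-contained; only the standard-library calls are real.

```rust
use std::net::{SocketAddr, SocketAddrV6, UdpSocket};
use std::time::{Duration, Instant};

// Simplified stand-ins for the sans-I/O types (assumptions for this sketch).
pub enum ProtocolInEvent {
    Message { from: SocketAddrV6, payload: Vec<u8> },
    TimerDue,
}
pub enum ProtocolOutEvent {
    Send { to: SocketAddrV6, payload: Vec<u8> },
}

pub trait ProtocolCore {
    fn handle(&mut self, event: ProtocolInEvent) -> Vec<ProtocolOutEvent>;
    fn next_timer(&self) -> Option<Instant>;
}

/// Drive one iteration of the protocol instance: wait for a message or the next
/// timer, feed the resulting event into the core, and act on its out-events.
pub fn drive_once<C: ProtocolCore>(core: &mut C, socket: &UdpSocket) -> std::io::Result<()> {
    // Block at most until the next protocol timer is due (1 ms minimum, since
    // set_read_timeout rejects a zero duration).
    let timeout = core
        .next_timer()
        .map(|t| t.saturating_duration_since(Instant::now()))
        .unwrap_or(Duration::from_millis(100))
        .max(Duration::from_millis(1));
    socket.set_read_timeout(Some(timeout))?;

    let mut buf = [0u8; 2048];
    let in_event = match socket.recv_from(&mut buf) {
        Ok((n, SocketAddr::V6(from))) => ProtocolInEvent::Message {
            from,
            payload: buf[..n].to_vec(),
        },
        Ok(_) => return Ok(()), // ignore non-IPv6 peers in this sketch
        Err(e)
            if e.kind() == std::io::ErrorKind::WouldBlock
                || e.kind() == std::io::ErrorKind::TimedOut =>
        {
            ProtocolInEvent::TimerDue
        }
        Err(e) => return Err(e),
    };

    for out in core.handle(in_event) {
        match out {
            ProtocolOutEvent::Send { to, payload } => {
                socket.send_to(&payload, to)?;
            }
        }
    }
    Ok(())
}
```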
Fast Forwarding layer
This aims to integrate the existing real (nft, eBPF) fast forwarding layer implementations into the i/o part.
- provide link-layer information on a `UnderlayNeighborId` basis (sketched below)
  - improved `UnderlayNeighborId` generator
  - design the provided `UnderlayNeighborContext` interface
- adapt native tables to use link-layer information instead of the physical neighbor table
- include link-layer information in the eBPF forwarding tables
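As a rough sketch of the first item, the snippet below keys link-layer metadata by `UnderlayNeighborId` and resolves it when handing a forwarding update to a concrete backend. The `LinkLayerInfo` fields (interface index, MAC address) and the `FastForwardingBackend` trait are assumptions about what an nft/eBPF backend might need, not the existing implementation.

```rust
use std::collections::HashMap;

/// Opaque neighbour ID (re-declared here so the example is self-contained).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct UnderlayNeighborId(pub u64);

/// Link-layer metadata the I/O side tracks per underlay neighbour (assumed fields).
#[derive(Clone, Copy, Debug)]
pub struct LinkLayerInfo {
    pub ifindex: u32,
    pub mac: [u8; 6],
}

/// What a concrete fast-forwarding backend (nft, eBPF, ...) is expected to do
/// with a resolved next hop. Hypothetical interface.
pub trait FastForwardingBackend {
    fn install_next_hop(&mut self, destination: u64, next_hop: LinkLayerInfo);
}

/// Glue owned by the KIRA node: resolves protocol-level next hops (IDs) into
/// link-layer data before handing them to the backend.
pub struct ForwardingUpdater<B: FastForwardingBackend> {
    link_layer: HashMap<UnderlayNeighborId, LinkLayerInfo>,
    backend: B,
}

impl<B: FastForwardingBackend> ForwardingUpdater<B> {
    pub fn new(backend: B) -> Self {
        Self { link_layer: HashMap::new(), backend }
    }

    pub fn record_link_layer(&mut self, id: UnderlayNeighborId, info: LinkLayerInfo) {
        self.link_layer.insert(id, info);
    }

    pub fn apply(&mut self, destination: u64, next_hop: UnderlayNeighborId) {
        if let Some(info) = self.link_layer.get(&next_hop) {
            self.backend.install_next_hop(destination, *info);
        }
    }
}
```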
Autonomous timers
Update: As it stands, this is not planned anymore.
To enable us to remove the legacy `TimerRuntime`, some extra things need to be done:
- Introduction of an extra Time sub-event of `UseCaseEvent`, emitted if `R²/KAD` receives a new protocol event.
- Make each UseCase manage its own timers and react to the Time events to determine which timers are overdue.

This is because the pipeline currently short-circuits, which does not guarantee that every UseCase receives every timed event. It also keeps the notion intact that a UseCase reacts to exactly one event per invocation of `handle_event`.
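Even though this is no longer planned, a small sketch may help illustrate the idea: a UseCase owns its own deadlines and checks them whenever it receives a Time event, so it still reacts to exactly one event per `handle_event` call. The trait and event definitions below are hypothetical.

```rust
use std::time::Instant;

// Hypothetical event shape: a Time sub-event carrying the current instant.
pub enum UseCaseEvent {
    Time(Instant),
    Protocol(Vec<u8>), // placeholder for the other sub-events
}

pub trait UseCase {
    /// A UseCase still reacts to exactly one event per invocation.
    fn handle_event(&mut self, event: &UseCaseEvent);
}

pub struct ExampleUseCase {
    deadlines: Vec<Instant>, // timers owned by the UseCase itself
}

impl UseCase for ExampleUseCase {
    fn handle_event(&mut self, event: &UseCaseEvent) {
        if let UseCaseEvent::Time(now) = event {
            // Determine overdue timers locally instead of relying on the event
            // pipeline to deliver a dedicated timer event to every UseCase.
            let overdue: Vec<Instant> =
                self.deadlines.iter().copied().filter(|d| d <= now).collect();
            self.deadlines.retain(|d| d > now);
            for _deadline in overdue {
                // ... perform the timed work for this deadline ...
            }
        }
    }
}
```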