
Conversation

@vincenthz (Member) commented Jun 7, 2021

The main purpose is to give a range of liveness for a given transaction,
with the possibility of garbage-collecting old expired transactions from the mempool.

Each transaction comes with a start and end block date, which
are inclusive bounds of validity: start <= current_date <= end,
and where start < end.

If a transaction contains the start=0.0, end=0.0 block dates, we
expect the same behavior as before: no expiration of validity.

Each transaction now contains a block date until which the transaction is valid;
after this date, the transaction cannot be applied to the ledger anymore.

Originally the end was a number of slots relative to the start,
but this made it very difficult to calculate the end
precisely, due to possible era changes, so instead it is now
a block date that doesn't have to match the number of slots per epoch.
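The validity rule described above can be sketched in a few lines. This is a hypothetical illustration, not the actual chain-libs API: the `BlockDate` struct, the `UNBOUNDED` sentinel, and `is_valid_at` are names invented for this sketch; only the comparison logic (inclusive bounds, 0.0/0.0 meaning no expiry) comes from the description.

```rust
// Hypothetical sketch of the validity-window check; names do not
// match the real chain-libs types.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct BlockDate {
    epoch: u32,
    slot: u32,
}

impl BlockDate {
    // start = 0.0, end = 0.0 is the sentinel for "no expiration".
    const UNBOUNDED: BlockDate = BlockDate { epoch: 0, slot: 0 };
}

/// Returns true if a transaction with the given validity window
/// may be applied at `current`.
fn is_valid_at(start: BlockDate, end: BlockDate, current: BlockDate) -> bool {
    if start == BlockDate::UNBOUNDED && end == BlockDate::UNBOUNDED {
        // Old behavior: the transaction never expires.
        return true;
    }
    // Inclusive bounds: start <= current <= end.
    start <= current && current <= end
}

fn main() {
    let start = BlockDate { epoch: 1, slot: 0 };
    let end = BlockDate { epoch: 2, slot: 100 };
    assert!(is_valid_at(start, end, BlockDate { epoch: 1, slot: 50 }));
    assert!(!is_valid_at(start, end, BlockDate { epoch: 3, slot: 0 }));
    assert!(is_valid_at(BlockDate::UNBOUNDED, BlockDate::UNBOUNDED,
                        BlockDate { epoch: 9, slot: 9 }));
}
```

Note that deriving `Ord` on `BlockDate` with `epoch` declared before `slot` gives exactly the lexicographic ordering a block date needs, which is why the end no longer has to care about the number of slots per epoch.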

@eugene-babichenko (Contributor) left a comment

While the change itself looks good, I am wondering how it would affect the system in certain edge cases, e.g. a transaction sitting in the mempool for too long because the network cannot process the influx of transactions any faster. We did not encounter such a situation before with jormungandr, but IMO this is something that must be taken into consideration.

@mzabaluev (Contributor) left a comment

Good work!


IOW = SIZE-ELEMENT-8BIT ; number of inputs
      SIZE-ELEMENT-8BIT ; number of outputs
      BLOCK-DATE        ; start validity of this IOW

The start date is not present in the data structure yet.

.make_input_with_value(self.builder.fee(&self.cert));
self.builder
.fragment(&self.cert, keys, &[input], &[], false, &self.funder)
self.builder.fragment(

I must say this builder API is not very builder-y, and has become even less so.

@mzabaluev mzabaluev force-pushed the transaction-validity branch from a6a94a5 to 6129a0f Compare August 2, 2021 14:56
Each transaction comes with a start and end block date, which
are inclusive bounds of validity: start <= current_date <= end,
and where start < end.

If a transaction contains the start=0.0, end=0.0 block dates, we
expect the same behavior as before: no expiration of validity.

Originally the end was a number of slots relative to the start,
but this made it very difficult to calculate the end
precisely, due to possible era changes, so instead it is now
a block date that doesn't have to match the number of slots per epoch.
Expect the start validity to be defined implicitly by the node's mempool's block date.
Introduce a new SetValidity state in the transaction builder.

Lots of minor adjustments in the test suite to cope with explicit date
verification now in the ledger.
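The SetValidity builder state mentioned in the commit message can be pictured with a typestate-style sketch. These are hypothetical names, not the actual chain-libs builder types; the point is only that the type system forces an expiry date to be supplied before the builder can advance.

```rust
// Hypothetical typestate sketch of the extra builder step
// ("SetValidity", later renamed around "expiry"); not the real API.
#[derive(Clone, Copy, Debug)]
struct BlockDate {
    epoch: u32,
    slot: u32,
}

// State before the expiry date has been set.
struct SetExpiry;

// State after the expiry date is recorded; inputs/outputs come next.
struct SetInputs {
    expiry: BlockDate,
}

impl SetExpiry {
    // Consuming `self` means the only way forward is through set_expiry.
    fn set_expiry(self, expiry: BlockDate) -> SetInputs {
        SetInputs { expiry }
    }
}

impl SetInputs {
    fn expiry(&self) -> BlockDate {
        self.expiry
    }
}

fn main() {
    let builder = SetExpiry.set_expiry(BlockDate { epoch: 10, slot: 0 });
    // A fragment can no longer be assembled without an expiry date.
    assert_eq!(builder.expiry().epoch, 10);
}
```

The design choice here is that forgetting the expiry becomes a compile-time error rather than a runtime one, which matches the ledger now verifying dates explicitly.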
@mzabaluev mzabaluev force-pushed the transaction-validity branch from 6129a0f to 059fb9a Compare August 3, 2021 08:39
@NicolasDP (Contributor)

@eugene-babichenko

While the change itself looks good, I am wondering how it would affect the system in certain edge cases, e.g. a transaction sitting in the mempool for too long because the network cannot process the influx of transactions any faster. We did not encounter such a situation before with jormungandr, but IMO this is something that must be taken into consideration.

Remember that a transaction broadcast to the p2p network is not guaranteed to be added by a leader node. It depends on multiple variables: how well the transaction is broadcast to the network, and how full the mempool currently is (in Jörmungandr, we use an LRU cache, so a transaction sitting for too long is going to be pruned anyway).

Adding this time validity is mostly to give the user a deterministic mechanism to detect that a transaction is considered invalid. This will allow many users to have a less stressful experience using Jörmungandr. Imagine you send your transaction and you have to wait days while it is still not in the blockchain. Prices fluctuate a lot at the scale of days in the cryptocurrency industry, and this transaction may be important: paying the rent, for example. There is no way to know whether the transaction is going to be included in the blockchain, or when. Between the instant you broadcast your transaction and the moment it appears in the blockchain, there is a span of time where things are up in the air.

@mzabaluev (Contributor) commented Aug 3, 2021

in Jörmungandr, we use a LRU cache

Ah yes, but we dismantled that because it could be a way to censor or otherwise disadvantage other participants by flushing out their transactions.

@mzabaluev (Contributor)

This will allow many users to have a less stressful experience using Jörmungandr.

Exactly. Also, it provides a more deterministic way to prune the mempool.
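That deterministic pruning can be sketched in a few lines. This is a minimal illustration, not the jormungandr mempool: the `Mempool`, `Fragment`, and `prune_expired` names are invented for this sketch; the idea from the discussion is simply that once the current block date passes a fragment's expiry, the fragment can be garbage-collected with no guesswork.

```rust
// Minimal sketch (not the real jormungandr mempool) of expiry-based
// garbage collection: drop every fragment whose expiry date has passed.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct BlockDate {
    epoch: u32,
    slot: u32,
}

struct Fragment {
    expiry: BlockDate,
}

struct Mempool {
    fragments: HashMap<u64, Fragment>,
}

impl Mempool {
    /// Remove fragments that can no longer be applied to the ledger.
    fn prune_expired(&mut self, current: BlockDate) {
        self.fragments.retain(|_, f| f.expiry >= current);
    }
}

fn main() {
    let mut pool = Mempool { fragments: HashMap::new() };
    pool.fragments.insert(1, Fragment { expiry: BlockDate { epoch: 1, slot: 0 } });
    pool.fragments.insert(2, Fragment { expiry: BlockDate { epoch: 5, slot: 0 } });

    pool.prune_expired(BlockDate { epoch: 2, slot: 0 });
    // Fragment 1 expired in epoch 1 and is gone; fragment 2 survives.
    assert!(!pool.fragments.contains_key(&1));
    assert!(pool.fragments.contains_key(&2));
}
```

Unlike LRU eviction, this rule is the same on every node and independent of mempool size, which is what makes the pruning deterministic.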

@NicolasDP (Contributor)

it could be a way to censor or otherwise disadvantage other participants

I appreciate that you want to be fair with the users. Good for you. Though I don't see how this is fair/unfair: every node can set the mempool to allow as much backlog as possible with an appropriate cache size.

@vincenthz (Member, Author)

Adding this time validity is mostly ...

So while the user experience and mempool management are both very important and wanted side effects of this, the main reason for the end validity is a further improvement related to the account nonce, so that we can introduce a non-monotonically-strictly-increasing nonce.

All names now use "expiry"; set_validity had too generic a meaning.
@mzabaluev (Contributor)

Every node can set the mempool to allow as much backlog as possible with an appropriate cache size.

A dishonest participant could still guess the sizes of configured mempools, or just try to flush out unwanted fragments with statistically significant results.

@NicolasDP (Contributor)

Thanks @vincenthz, it is helpful to have that comment. I now remember you talking about this back in the day.

@mzabaluev this kind of strategy only works for a while and is very expensive to put in place. You should trust the ledger's protocol more to deal with transaction flooding. The node is not affected at all by that kind of behaviour, and having an LRU prevented the mempool from growing to unmanageable proportions.

@zeegomo (Contributor) commented Aug 3, 2021

Wouldn't that (LRU eviction) violate the liveness property as described in the Ouroboros-BFT paper?

Liveness. If a transaction tx is provided by the environment to all honest servers at a point of the execution when the latest slot among the honest servers is sl, then any server whose clock advances u slots after sl to a slot sl′ will have a ledger at a state q for which it holds q0 --(*)--> q1 --(tx)--> q2 --(*)--> q for some states q1, q2; note that --(*)--> includes only transactions produced by the environment Z up to slot sl′. (Ouroboros-BFT, p. 5)

@mzabaluev mzabaluev merged commit 715644b into master Aug 4, 2021
@mzabaluev mzabaluev deleted the transaction-validity branch August 4, 2021 06:21