Conversation

@jayantk jayantk commented Apr 28, 2022

At the moment, pythd sends all of the price update transactions at roughly the same time. I think this is causing the later transactions to fail (though I'm not 100% sure).

Before we added batching, pythd staggered price updates by staggering its requests for prices: as soon as the client responded to a request, pythd sent a price update transaction. Because the staggered request schedule was directly reflected in the transactions, the updates themselves were staggered. We lost this staggering when we added batching, because client responses were queued and only flushed at the start of the next slot.

This change adds back staggering by eagerly sending transactions for any complete batches during the slot. It also explicitly flushes any pending updates at the end of the price request schedule; this flushing handles the case where there aren't enough prices to complete a single batch.
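For context, here is a minimal sketch of the per-user batching state this change operates on. Only pending_vec_, price::send, num_pending_upds, and send_pending_upds come from the diff hunks quoted below; everything else (including queue_upd and the stand-in price class) is simplified and assumed, not the actual pc::user implementation.

#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative stand-in for the real pc::price; the real send() builds and
// submits a solana transaction containing all of the given updates.
class price {
public:
  static bool send( price *upds[], unsigned n ) { (void)upds; return n > 0; }
};

class user {
public:
  // Called when the client responds to a price request: with batching, the
  // update is queued instead of being sent immediately.
  void queue_upd( price *ptr ) { pending_vec_.push_back( ptr ); }

  uint32_t num_pending_upds() const { return (uint32_t)pending_vec_.size(); }

  // Send (at most) the first n queued updates as a single transaction and
  // drop them from the queue.
  void send_pending_upds( uint32_t n ) {
    n = std::min<uint32_t>( n, (uint32_t)pending_vec_.size() );
    if ( n == 0 ) return;
    if ( !price::send( pending_vec_.data(), n ) ) {
      // error handling elided
    }
    pending_vec_.erase( pending_vec_.begin(), pending_vec_.begin() + n );
  }

private:
  std::vector<price*> pending_vec_;
};

With this shape, the manager can call send_pending_upds(get_max_batch_size()) as soon as a user has a complete batch queued, and flush the remainder with send_pending_upds(num_pending_upds()) at the end of the price request schedule, as in the hunks below.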

Comment on lines 509 to 514
// Flush any pending complete batches of price updates by submitting solana TXs.
for( user *uptr = olist_.first(); uptr; uptr = uptr->get_next() ) {
  if (uptr->num_pending_upds() >= get_max_batch_size()) {
    uptr->send_pending_upds(get_max_batch_size());
  }
}

It might be simpler to instead add a partial boolean parameter to user::send_pending_upds and change its semantics so that partial batches are only sent when partial is true.

You could then call send_pending_upds(false) on every iteration of the event loop, and send_pending_upds(true) when a slot is received.
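A sketch of that alternative, assuming send_pending_upds can reach the configured batch size (e.g. via its manager); the signature and batching arithmetic are illustrative, not the merged implementation:

// Suggested alternative (sketch). When partial is false, only whole batches
// are sent and any remainder stays queued; when partial is true, everything
// pending is flushed even if it does not fill a batch.
void user::send_pending_upds( bool partial )
{
  uint32_t batch = get_max_batch_size();  // assumed accessible from user
  uint32_t to_send = (uint32_t)pending_vec_.size();
  if ( !partial ) {
    to_send -= to_send % batch;  // round down to whole batches
  }
  if ( to_send == 0 ) {
    return;
  }
  if ( !price::send( pending_vec_.data(), to_send ) ) {
    // error handling elided
  }
  pending_vec_.erase( pending_vec_.begin(), pending_vec_.begin() + to_send );
}

The manager would then call send_pending_upds( false ) on every event-loop iteration and send_pending_upds( true ) when a slot is received.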

pc/user.cpp Outdated
    n_sent = pending_vec_.size();
  }

  if ( !price::send( pending_vec_.data(), n_sent) ) {

As price::send is effectively the API boundary for publishers which link to the C++ bindings, this change won't benefit those publishers.

pc/user.cpp Outdated
  return pending_vec_.size();
}

void user::send_pending_upds(uint32_t n)

Regarding testing: we don't have an easy way to reliably test when transactions are sent. To test if partial batches and full batches are sent, you could use something like test_update_price.py.

pc/manager.cpp Outdated
Comment on lines 755 to 757
for( user *uptr = olist_.first(); uptr; uptr = uptr->get_next() ) {
  uptr->send_pending_upds(uptr->num_pending_upds());
}

It might be nice to factor this loop out into a manager::send_pending_upds function.
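Something like the following, reusing the names from the hunk above (the exact placement in pc/manager.cpp is left open):

// Flush all pending updates for every connected user, regardless of batch size.
void manager::send_pending_upds()
{
  for( user *uptr = olist_.first(); uptr; uptr = uptr->get_next() ) {
    uptr->send_pending_upds( uptr->num_pending_upds() );
  }
}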

tompntn previously approved these changes May 3, 2022
pc/user.cpp Outdated
void user::send_pending_upds()
{
if ( pending_vec_.empty() ) {
uint32_t n_sent = 0;

I think to_send might be a clearer variable name: to me, n_sent implies that it's calculating the number of updates that have already been sent.

void on_response( price_sched *, uint64_t ) override;

// send all pending updates
// send a batch of pending price updates. This function eagerly sends any complete batches.

Might be worth clarifying the semantics:

// If there are multiple complete batches, only one will be sent.


I assume those are the desired semantics, as the aim of this is to stagger transactions.

@jayantk jayantk merged commit 7f6d7f3 into main May 4, 2022