Big arrays sliced from netty buffers (long) #91641
Conversation
Based on elastic#90745 but for longs. This should allow aggregations down the road to read long values directly from the netty buffer, rather than copying them out of it. Relates to elastic#89437
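For context, the idea is roughly the following (a minimal sketch in plain Java, with ByteBuffer standing in for a netty-backed slice; all names here are illustrative, not the PR's actual code):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// A long-array view over a buffer: get(i) decodes the value in place at its
// byte offset, instead of first copying every byte into a long[] page.
class SlicedLongs {
    private final ByteBuffer slice; // stand-in for a netty-backed buffer slice

    SlicedLongs(ByteBuffer slice) {
        this.slice = slice.order(ByteOrder.LITTLE_ENDIAN);
    }

    long get(long index) {
        return slice.getLong(Math.toIntExact(index * Long.BYTES));
    }
}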
Pinging @elastic/es-analytics-geo (Team:Analytics)
not-napoleon left a comment:
This looks good to me. I think it's still good practice to get someone from Distributed to review these changes before merging though.
int i = getOffsetIndex(index);
int idx = index - offsets[i];
int end = idx + 8;
BytesReference wholeDoublesLivesHere = references[i];
Copy paste? Should probably be wholeLongLivesHere, I would think?
Yes, a classic copy-paste error. I will change the variable name.
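For readers following along: the diff above locates a value by keeping the starting byte offset of each backing reference in an offsets array. A hedged sketch of how getOffsetIndex could work (the method body is assumed here; only the names come from the diff):

import java.util.Arrays;

class SliceLookup {
    private final int[] offsets; // starting byte offset of each backing slice

    SliceLookup(int[] offsets) {
        this.offsets = offsets;
    }

    // An exact binarySearch hit lands on the first byte of a slice; otherwise
    // the insertion point minus one is the slice containing `index`.
    int getOffsetIndex(int index) {
        int i = Arrays.binarySearch(offsets, index);
        return i >= 0 ? i : -i - 2;
    }
}

With i in hand, index - offsets[i] is the position inside that slice, and idx + 8 marks the end of the long being read, which is exactly what the snippet above computes.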
original-brownbear left a comment:
LGTM, just a few trivial points that might be nice to clean up :)
@Override
public void writeTo(StreamOutput out) throws IOException {
    int size = (int) size();
    out.writeVInt(size * 8);
Maybe just cache size * Long.BYTES in a variable? :) Calculating it twice in two different ways back to back makes me unhappy :D
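Something along these lines, presumably (a sketch of the suggested cleanup, not the merged code):

@Override
public void writeTo(StreamOutput out) throws IOException {
    int size = (int) size();
    int byteSize = size * Long.BYTES; // computed once, reused everywhere below
    out.writeVInt(byteSize);
    // ... the page-writing loop reuses byteSize instead of recomputing size * 8 ...
}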
@Override
public void writeTo(StreamOutput out) throws IOException {
    int size = (int) size();
Maybe use Math.toIntExact(size()) here to make this obviously correct?
Thanks for the review! I pushed the changes via: ebe635c
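For reference, the difference the Math.toIntExact suggestion is after: a plain (int) cast truncates silently, while Math.toIntExact fails loudly on overflow. A tiny standalone demonstration:

long size = (1L << 32) + 1;           // too large for an int
int truncated = (int) size;           // silently becomes 1
int checked = Math.toIntExact(size);  // throws ArithmeticException instead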
    out.write(pages[i]);
}
out.write(pages[pages.length - 1], 0, lastPageEnd * Double.BYTES);
writePages(out, (int) size, pages, Double.BYTES, DOUBLE_PAGE_SIZE);
Maybe use Math.toIntExact(size()) here to make this obviously correct?
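For context, a plausible shape for the shared writePages helper that replaces the inlined loop above, inferred from its call sites in this diff (the real implementation lives in the PR; treat this as a sketch):

static void writePages(StreamOutput out, int size, byte[][] pages, int bytesPerValue, int pageSize) throws IOException {
    // values on the last, possibly partial, page (assumes at least one page)
    int lastPageEnd = size % pageSize == 0 ? pageSize : size % pageSize;
    for (int i = 0; i < pages.length - 1; i++) {
        out.write(pages[i]); // full pages
    }
    out.write(pages[pages.length - 1], 0, lastPageEnd * bytesPerValue); // partial tail
}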
@Override
public void writeTo(StreamOutput out) throws IOException {
    writePages(out, (int) size, pages, Long.BYTES, LONG_PAGE_SIZE);
Maybe use Math.toIntExact(size()) here to make this obviously correct?
@elasticmachine run elasticsearch-ci/part-2