Conversation

@shakeelmohamed
Contributor

No description provided.

splunklogger.js Outdated

Why do you need to do this?

Contributor Author

If _enableTimer(interval) is called explicitly, we just store that interval value in the config. It could be removed.
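The behavior described here could be sketched roughly as follows (names like `batchInterval` and `_timerID` are illustrative assumptions, not the library's actual implementation):

```javascript
// Hypothetical sketch: calling _enableTimer(interval) explicitly records
// the interval back into the logger's config and (re)starts the flush timer.
function SplunkLogger(config) {
  this.config = config || {};
  this._timerID = null;
}

SplunkLogger.prototype._enableTimer = function (interval) {
  // Store the explicit interval in the config so later reads
  // (and re-enables) see the same value.
  this.config.batchInterval = interval;

  // Restart the timer with the new interval.
  if (this._timerID) {
    clearInterval(this._timerID);
  }
  this._timerID = setInterval(() => this.flush(), interval);
};

SplunkLogger.prototype.flush = function () { /* send queued events */ };
```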

splunklogger.js Outdated

I also don't get this case here. Why do you distinguish between batched and unbatched?

Contributor Author

When isBatched is set we empty the entire queue in a single request; otherwise we send 1 event per request.
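The distinction discussed in this thread could be sketched like this (illustrative helper names, not the library's real code; `post` stands in for the HTTP call):

```javascript
// Batched: drain the whole queue into one request body.
// Unbatched: one request per event.
function flushQueue(queue, isBatched, post) {
  if (isBatched) {
    // Remove every queued event and send them together.
    const body = queue.splice(0, queue.length).join("\n");
    post(body);
  } else {
    // Send each event in its own request.
    while (queue.length > 0) {
      post(queue.shift());
    }
  }
}
```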


I understand what you're doing; I just don't understand why you're doing it. Why are you making a distinction here?

Contributor Author

fixed

splunklogger.js Outdated

What measurement units do you use here? Milliseconds?

Contributor Author

Yes, milliseconds; done.

Shakeel Mohamed added 9 commits November 13, 2015 14:08
TODO:
- Make some decisions about middleware
- Update remaining examples
- Add a few timer tests for 100% coverage

- Added ability to have custom event formatter
- Context no longer has config or requestOptions
- If !autoflush and any batching settings, throw an error
- Simplified logic in many places
- Added several util functions
- Updated tests
- Clarified the basic.js example
All batch settings will be ignored when they are <= 0.
We have collectively decided that this feature
does not provide enough value at this time.

Why do you need to set this?

Contributor Author

If none of the batch settings are configured and autoFlush === true (the default), it's effectively not manual batching.
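A minimal sketch of this default-case check, assuming hypothetical setting names like `maxBatchCount` and `batchInterval` (the actual config keys are not shown in this thread):

```javascript
// With autoFlush on (the default) and no batch settings configured, the
// logger is effectively not doing manual batching: each event is flushed
// as it arrives. Manual batching applies when autoFlush is off or when
// any batch setting is explicitly configured.
function isManualBatching(config) {
  const batchSettingsConfigured =
    (config.maxBatchCount || 0) > 0 ||
    (config.batchInterval || 0) > 0;
  return !config.autoFlush || batchSettingsConfigured;
}
```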

splunklogger.js Outdated

Why do you need contextQueue? This could easily be just an incremented number. You're using a lot more memory this way.
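The reviewer's suggestion amounts to the following trade-off (an illustrative sketch, not the library's code):

```javascript
// Before: O(n) memory — one stored context object per pending event.
const contextQueue = [];
function trackWithQueue(context) {
  contextQueue.push(context);
}

// After: O(1) memory — a single incremented counter, sufficient when only
// the number of pending events matters, not their contents.
let pendingCount = 0;
function trackWithCounter() {
  pendingCount += 1;
}
```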

Contributor Author

Simplified; done.

Shakeel Mohamed added 3 commits November 16, 2015 16:06
This change optimizes the flush() function,
since events will only be serialized once,
when they are passed to send().

Without middleware, we no longer need
to store un-serialized objects.
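The serialize-once optimization described in these commit messages could be sketched as (illustrative, not the library's actual implementation):

```javascript
// Serialize each event exactly once, at the moment it is handed to send().
// flush() then just concatenates pre-serialized strings instead of
// re-serializing the whole queue on every flush.
const serializedQueue = [];

function send(event) {
  // One-time serialization at enqueue time.
  serializedQueue.push(JSON.stringify(event));
}

function flush() {
  // No serialization work here — drain and join the stored strings.
  return serializedQueue.splice(0, serializedQueue.length).join("\n");
}
```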
To disable autoFlush behavior, set all
batching settings to 0. It is still
enabled by default - flushing events
1 by 1. Modify this behavior through
the config object.
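A hedged config sketch of the behavior this commit describes (the setting names below are assumptions for illustration, not the library's documented keys):

```javascript
// Per the commit message: autoFlush-style behavior is on by default,
// flushing events 1 by 1; setting all batching settings to 0 disables it,
// leaving flushing entirely to the caller.
const config = {
  maxBatchCount: 0, // 0 disables count-based batching
  batchInterval: 0  // 0 disables timer-based batching
};

// With every batching setting zeroed, automatic flushing is off:
const autoFlushDisabled =
  config.maxBatchCount === 0 && config.batchInterval === 0;
```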
@shakeelmohamed
Contributor Author

Closing this PR, opening another one with updated readme, examples and version numbers.

@shakeelmohamed shakeelmohamed deleted the feature/batch-event-count branch December 10, 2015 19:09