Conversation

@gfoidl gfoidl commented Oct 24, 2020

There are some race conditions in IsEnabledAsync. This PR fixes them.
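
For context, before this change the snapshot cached flags in a plain Dictionary<string, bool> and wrote to it from potentially concurrent callers. A minimal sketch of that racy pattern, reconstructed from the diff further down rather than copied verbatim (the context parameter is omitted for brevity):

public async Task<bool> IsEnabledAsync(string feature)
{
    // Check-then-act race: two callers can both miss the cache and both write.
    // Worse, Dictionary<TKey, TValue> is not thread-safe, so concurrent writes
    // can corrupt its internal state.
    if (!_flagCache.TryGetValue(feature, out bool enabled))
    {
        enabled = await _featureManager.IsEnabledAsync(feature).ConfigureAwait(false);
        _flagCache[feature] = enabled;
    }

    return enabled;
}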

ghost commented Oct 24, 2020

CLA assistant check
All CLA requirements met.

Author

@gfoidl gfoidl left a comment

Some notes for review.

bool enabled = await _featureManager.IsEnabledAsync(feature, context).ConfigureAwait(false);

_flagCache[feature] = enabled;
#if NETSTANDARD2_0

Member

Adding ifdefs increases the code complexity. I'm not sure it's worth it here. I'd opt for just sticking with the NETSTANDARD2_0 implementation.

Author

@gfoidl gfoidl Jan 7, 2021

With NETSTANDARD2_0 we get a repeated delegate allocation on each call, which the overload taking a state argument (available from netstandard2.1 on) avoids, because C# can emit and reuse a cached delegate instance. See this sharplab.

I believe it's worth it.
Say the snapshot is used in a controller and a flag is checked for each request: that saves one delegate allocation per request.
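
For illustration, a minimal sketch of the two shapes under discussion, assuming the ConcurrentDictionary<string, Task<bool>> cache from the diff and ignoring the context parameter for brevity (not the PR's verbatim code):

public Task<bool> IsEnabledAsync(string feature)
{
#if NETSTANDARD2_0
    // The lambda captures `this` (via _featureManager), so a fresh delegate
    // instance is allocated on every call.
    return _flagCache.GetOrAdd(feature, key => _featureManager.IsEnabledAsync(key));
#else
    // The state-taking overload lets the lambda capture nothing, so the
    // compiler emits and reuses a single cached delegate instance.
    return _flagCache.GetOrAdd(feature, (key, fm) => fm.IsEnabledAsync(key), _featureManager);
#endif
}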

Member

Can't we avoid having the ifdefs intermingled with our business logic here by adding an extension method with the 3.1 signature and keeping the ifdef there? I don't expect any unnecessary closures as a result. If we can do that I'm fine with it.
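
A possible shape for such an extension method, as a sketch only (the class name and placement are hypothetical, not part of this PR): on netstandard2.0 it emulates the state-taking GetOrAdd overload, while on other targets the extension is simply not compiled and the built-in instance method is picked up.

#if NETSTANDARD2_0
using System;
using System.Collections.Concurrent;

internal static class ConcurrentDictionaryExtensions
{
    public static TValue GetOrAdd<TKey, TValue, TArg>(
        this ConcurrentDictionary<TKey, TValue> dictionary,
        TKey key,
        Func<TKey, TArg, TValue> valueFactory,
        TArg factoryArgument)
    {
        // Emulation for netstandard2.0: the wrapper lambda captures
        // valueFactory and factoryArgument, so a closure is still allocated
        // on this target -- the same cost as today -- while newer targets
        // get the allocation-free built-in overload.
        return dictionary.GetOrAdd(key, k => valueFactory(k, factoryArgument));
    }
}
#endif

Call sites could then use the three-argument form unconditionally, without any #if in the business logic.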

Member

@jimmyca15 jimmyca15 Jan 7, 2021

Then again, optimizing for perf without numbers doesn't sound like the best strategy, and it's hard to say how much we'd be saving here in any case. So I'd step back a bit: unless we have numbers, I don't think we need to take this optimization and introduce ifdef branches.

Author

"optimizing for perf without numbers"

The whole rationale of this API on ConcurrentDictionary is to save the allocation of the delegate and, potentially, the closure. I don't need numbers for that, and I won't do any benchmarks. So much discussion for this change... and the code reads quite nicely with these two #ifdefs (IMHO). An extension method would just complicate things: a) where to place it, and b) it creates more complicated closures.

I'll leave this PR as is. If it doesn't meet the bar, then please close it. Sorry that I have to say it that way.

Member

I stay away from ifdefs without a strong reason to use them, since I favor code maintenance and readability, even though one may provide an optimization here for netcoreapp3.1. I would opt to remove it, but it may just be me. @MSGaryWang @zhenlan for their insight.

Author

@gfoidl gfoidl Jan 7, 2021

What do you think about https://github.com/gfoidl/FeatureManagement-Dotnet/commit/aaa47c3ab3f374ffedfd46706d5d02e939fad770 ?
It's just one #ifdef, achieved by passing null in as the context (the FeatureManager does the same, so this has no side effects).

Edit: that's not true, as pointed out in the comment below (I didn't check the code thoroughly).

Member

I see; here are my thoughts.

  • If we are going to use an ifdef, I certainly do like the idea of limiting it to one place instead of multiple.
  • This does have side effects; the code no longer behaves the same way. FeatureManager takes different code paths depending on whether IsEnabledAsync is called with or without a context, even if that context is null.
  • I would still opt not to add an ifdef in general, but would like to defer to one of the mentioned collaborators.

private readonly IFeatureManager _featureManager;
private readonly IDictionary<string, bool> _flagCache = new Dictionary<string, bool>();
private IEnumerable<string> _featureNames;
private readonly ConcurrentDictionary<string, Task<bool>> _flagCache = new ConcurrentDictionary<string, Task<bool>>();

@rjgotten rjgotten Mar 23, 2021

FWIW this isn't actually race proof. ConcurrentDictionary.GetOrAdd has atomic publish semantics, not atomic execution semantics: the value factory may run more than once for the same key, and only one result gets published. If you want to prevent duplicate executions of costly feature flag evaluations, you have to create a cache of Lazy<Task<bool>> with its mode set to ExecutionAndPublication instead.

Basically:

public Task<bool> IsEnabledAsync(string feature)
{
  return _flagCache
    .GetOrAdd(feature, (key, arg) => new Lazy<Task<bool>>(
      async () => await arg.IsEnabledAsync(key).ConfigureAwait(false),
      LazyThreadSafetyMode.ExecutionAndPublication
    ), _featureManager)
    .Value;
}
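
The cache field would then hold the Lazy wrapper as well, e.g. (a sketch to accompany the snippet above, not code from the PR):

private readonly ConcurrentDictionary<string, Lazy<Task<bool>>> _flagCache =
    new ConcurrentDictionary<string, Lazy<Task<bool>>>();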

Member

Thanks @rjgotten for the suggested improvement.

@jimmyca15
Member

This PR has not had any recent activity. In an effort to address #136, a new PR, #141, has been created. Any future discussion can be moved there.

@jimmyca15 jimmyca15 closed this Sep 1, 2021
@gfoidl gfoidl deleted the fix-race-snapshot branch September 2, 2021 21:36