Merged
39 commits
96a1cc3
add option to reduce front-end metadata for untracked flags
eli-darkly Oct 5, 2018
e59a1c7
Merge pull request #76 from launchdarkly/eb/ch24449/less-metadata
eli-darkly Oct 5, 2018
89056fc
fix logic for whether a flag is tracked in all_flags_state
eli-darkly Oct 8, 2018
1fc23e4
use expiringdict from PyPi
eli-darkly Oct 14, 2018
0a30e0d
Merge pull request #77 from launchdarkly/eb/ch24449/less-metadata-2
eli-darkly Oct 15, 2018
103b339
Merge pull request #78 from launchdarkly/eb/ch25286/expiring-dict
eli-darkly Oct 15, 2018
40f2ca4
merge from public after release
LaunchDarklyCI Oct 17, 2018
ae8b25e
implement file data source, not including auto-update
eli-darkly Nov 1, 2018
850837d
rm debugging
eli-darkly Nov 1, 2018
aa7684a
rm debugging
eli-darkly Nov 1, 2018
39c9042
Python 3 compatibility fix
eli-darkly Nov 1, 2018
a43bf0c
add file watching, update documentation and tests
eli-darkly Nov 2, 2018
2cea730
readme
eli-darkly Nov 2, 2018
dcf1afe
debugging
eli-darkly Nov 2, 2018
4e98fdd
debugging
eli-darkly Nov 2, 2018
8f3c221
debugging
eli-darkly Nov 2, 2018
84276dd
fix cleanup logic
eli-darkly Nov 2, 2018
2a822e6
rm debugging
eli-darkly Nov 2, 2018
eaabe4d
Merge pull request #79 from launchdarkly/eb/ch26233/file-data-source
eli-darkly Nov 14, 2018
ac5e8de
typo in comment
eli-darkly Nov 14, 2018
39f5f62
merge from public after release
LaunchDarklyCI Nov 14, 2018
040ced9
add feature store wrapper class and make Redis feature store use it
eli-darkly Dec 29, 2018
59a67a8
test the new Redis factory method
eli-darkly Dec 29, 2018
1e38ac1
add DynamoDB support
eli-darkly Dec 29, 2018
431dddf
add test credentials
eli-darkly Dec 29, 2018
3aa5644
link in comment
eli-darkly Dec 31, 2018
bd00276
comment
eli-darkly Dec 31, 2018
11eabd3
Merge branch 'eb/ch28329/feature-store-support' into eb/ch28329/dynamodb
eli-darkly Dec 31, 2018
534ec5d
don't catch exceptions in Redis feature store, let the client catch them
eli-darkly Dec 31, 2018
5f16c8d
gitignore
eli-darkly Dec 31, 2018
ac0f2ea
misc test fixes
eli-darkly Dec 31, 2018
fa56526
Merge branch 'eb/ch28329/feature-store-support' into eb/ch28329/dynamodb
eli-darkly Dec 31, 2018
3a1c2dc
Merge pull request #81 from launchdarkly/eb/ch28329/dynamodb
eli-darkly Jan 9, 2019
b06eef9
Merge pull request #80 from launchdarkly/eb/ch28329/feature-store-sup…
eli-darkly Jan 9, 2019
256b6fb
implement dependency ordering for feature store data
eli-darkly Jan 9, 2019
289077c
fix incomplete implementation & test
eli-darkly Jan 9, 2019
2c59294
Python 3.x fix
eli-darkly Jan 9, 2019
1dd6961
Merge pull request #82 from launchdarkly/eb/ch29197/dependency-order
eli-darkly Jan 15, 2019
78b6118
minor doc fixes
eli-darkly Jan 16, 2019
6 changes: 6 additions & 0 deletions .circleci/config.yml
@@ -40,28 +40,34 @@ jobs:
docker:
- image: circleci/python:2.7-jessie
- image: redis
- image: amazon/dynamodb-local
test-3.3:
<<: *test-template
docker:
- image: circleci/python:3.3-jessie
- image: redis
- image: amazon/dynamodb-local
test-3.4:
<<: *test-template
docker:
- image: circleci/python:3.4-jessie
- image: redis
- image: amazon/dynamodb-local
test-3.5:
<<: *test-template
docker:
- image: circleci/python:3.5-jessie
- image: redis
- image: amazon/dynamodb-local
test-3.6:
<<: *test-template
docker:
- image: circleci/python:3.6-jessie
- image: redis
- image: amazon/dynamodb-local
test-3.7:
<<: *test-template
docker:
- image: circleci/python:3.7-stretch
- image: redis
- image: amazon/dynamodb-local
1 change: 1 addition & 0 deletions .gitignore
@@ -44,6 +44,7 @@ nosetests.xml
coverage.xml
*,cover
.hypothesis/
.pytest_cache

# Translations
*.mo
14 changes: 10 additions & 4 deletions README.md
@@ -52,7 +52,6 @@ Or it can be set from within python:
os.environ["https_proxy"] = "https://web-proxy.domain.com:8080"
```


If your proxy requires authentication then you can prefix the URN with your login information:
```
export HTTPS_PROXY=http://user:[email protected]:8080
@@ -75,12 +74,19 @@ Your first feature flag
# the code to run if the feature is off

Supported Python versions
----------
-------------------------

The SDK is tested with the most recent patch releases of Python 2.7, 3.3, 3.4, 3.5, and 3.6. Python 2.6 is no longer supported.

Database integrations
---------------------

Feature flag data can be kept in a persistent store using Redis or DynamoDB. These adapters are implemented in the `DynamoDB` and `Redis` classes in `ldclient.integrations`; to use them, call the `new_feature_store` method in the appropriate class, and put the returned object in the `feature_store` property of your client configuration. See [`ldclient.integrations`](https://github.com/launchdarkly/python-client-private/blob/master/ldclient/integrations.py) and the [SDK reference guide](https://docs.launchdarkly.com/v2.0/docs/using-a-persistent-feature-store) for more information.

Using flag data from a file
---------------------------
For testing purposes, the SDK can be made to read feature flag state from a file or files instead of connecting to LaunchDarkly. See [`file_data_source.py`](https://github.com/launchdarkly/python-client/blob/master/ldclient/file_data_source.py) for more details.

For testing purposes, the SDK can be made to read feature flag state from a file or files instead of connecting to LaunchDarkly. See [`file_data_source.py`](https://github.com/launchdarkly/python-client/blob/master/ldclient/file_data_source.py) and the [SDK reference guide](https://docs.launchdarkly.com/v2.0/docs/reading-flags-from-a-file) for more details.

Learn more
-----------
@@ -100,7 +106,7 @@ Contributing
See [CONTRIBUTING](CONTRIBUTING.md) for more information.

About LaunchDarkly
-----------
------------------

* LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
* Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
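The README's note about reading flag data from a file can be illustrated with a minimal sketch. The simplified `flagValues` layout used here is an assumption about the file format (the exact schema is in the SDK reference guide linked above); the file handling itself is plain Python.

```python
import json
import tempfile

# A hypothetical flag file using the simplified "flagValues" layout,
# which is assumed to map flag keys directly to the values they return.
flag_file_contents = {
    "flagValues": {
        "my-boolean-flag": True,
        "my-string-flag": "on"
    }
}

# Write the file, then read it back the way a file data source might.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
    json.dump(flag_file_contents, f)
    path = f.name

with open(path) as f:
    data = json.load(f)

print(data["flagValues"]["my-boolean-flag"])  # True
```

This is only a sketch of the file contents; in real use the path would be handed to the SDK's file data source rather than parsed by hand.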
1 change: 1 addition & 0 deletions dynamodb-requirements.txt
@@ -0,0 +1 @@
boto3>=1.9.71
48 changes: 43 additions & 5 deletions ldclient/client.py
@@ -10,8 +10,10 @@
from ldclient.config import Config as Config
from ldclient.event_processor import NullEventProcessor
from ldclient.feature_requester import FeatureRequesterImpl
from ldclient.feature_store import _FeatureStoreDataSetSorter
from ldclient.flag import EvaluationDetail, evaluate, error_reason
from ldclient.flags_state import FeatureFlagsState
from ldclient.interfaces import FeatureStore
from ldclient.polling import PollingUpdateProcessor
from ldclient.streaming import StreamingUpdateProcessor
from ldclient.util import check_uwsgi, log
@@ -27,6 +29,35 @@
from threading import Lock


class _FeatureStoreClientWrapper(FeatureStore):
"""Provides additional behavior that the client requires before or after feature store operations.
Currently this just means sorting the data set for init(). In the future we may also use this
to provide an update listener capability.
"""

def __init__(self, store):
self.store = store

def init(self, all_data):
return self.store.init(_FeatureStoreDataSetSorter.sort_all_collections(all_data))

def get(self, kind, key, callback):
return self.store.get(kind, key, callback)

def all(self, kind, callback):
return self.store.all(kind, callback)

def delete(self, kind, key, version):
return self.store.delete(kind, key, version)

def upsert(self, kind, item):
return self.store.upsert(kind, item)

@property
def initialized(self):
return self.store.initialized


class LDClient(object):
def __init__(self, sdk_key=None, config=None, start_wait=5):
"""Constructs a new LDClient instance.
@@ -55,7 +86,7 @@ def __init__(self, sdk_key=None, config=None, start_wait=5):
self._event_processor = None
self._lock = Lock()

self._store = self._config.feature_store
self._store = _FeatureStoreClientWrapper(self._config.feature_store)
""" :type: FeatureStore """

if self._config.offline or not self._config.send_events:
@@ -243,7 +274,14 @@ def send_event(value, variation=None, flag=None, reason=None):
if user is not None and user.get('key', "") == "":
log.warn("User key is blank. Flag evaluation will proceed, but the user will not be stored in LaunchDarkly.")

flag = self._store.get(FEATURES, key, lambda x: x)
try:
flag = self._store.get(FEATURES, key, lambda x: x)
except Exception as e:
log.error("Unexpected error while retrieving feature flag \"%s\": %s" % (key, repr(e)))
log.debug(traceback.format_exc())
reason = error_reason('EXCEPTION')
send_event(default, None, None, reason)
return EvaluationDetail(default, None, reason)
if not flag:
reason = error_reason('FLAG_NOT_FOUND')
send_event(default, None, None, reason)
@@ -264,7 +302,7 @@ def send_event(value, variation=None, flag=None, reason=None):
send_event(detail.value, detail.variation_index, flag, detail.reason)
return detail
except Exception as e:
log.error("Unexpected error while evaluating feature flag \"%s\": %s" % (key, e))
log.error("Unexpected error while evaluating feature flag \"%s\": %s" % (key, repr(e)))
log.debug(traceback.format_exc())
reason = error_reason('EXCEPTION')
send_event(default, None, flag, reason)
@@ -328,7 +366,7 @@ def all_flags_state(self, user, **kwargs):
if flags_map is None:
raise ValueError("feature store error")
except Exception as e:
log.error("Unable to read flags for all_flag_state: %s" % e)
log.error("Unable to read flags for all_flag_state: %s" % repr(e))
return FeatureFlagsState(False)

for key, flag in flags_map.items():
@@ -339,7 +377,7 @@
state.add_flag(flag, detail.value, detail.variation_index,
detail.reason if with_reasons else None, details_only_if_tracked)
except Exception as e:
log.error("Error evaluating flag \"%s\" in all_flags_state: %s" % (key, e))
log.error("Error evaluating flag \"%s\" in all_flags_state: %s" % (key, repr(e)))
log.debug(traceback.format_exc())
reason = {'kind': 'ERROR', 'errorKind': 'EXCEPTION'}
state.add_flag(flag, None, None, reason if with_reasons else None, details_only_if_tracked)
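The `_FeatureStoreClientWrapper` added in this diff is a delegation pattern: intercept one operation (sorting the data set before `init()`) and pass every other call through unchanged. A standalone sketch of the same pattern, using a hypothetical in-memory store and a stand-in sort step rather than the SDK's real `_FeatureStoreDataSetSorter`:

```python
class InMemoryStore:
    """A hypothetical store that just records what it was given."""
    def __init__(self):
        self.data = None

    def init(self, all_data):
        self.data = all_data

    def get(self, key):
        return self.data.get(key)


class SortingStoreWrapper:
    """Wraps a store, reordering the data set before init() and
    delegating all other operations untouched."""
    def __init__(self, store):
        self.store = store

    def init(self, all_data):
        # Stand-in for sort_all_collections: here we simply order by key.
        ordered = dict(sorted(all_data.items()))
        return self.store.init(ordered)

    def get(self, key):
        return self.store.get(key)


store = InMemoryStore()
wrapper = SortingStoreWrapper(store)
wrapper.init({"b": 2, "a": 1})
print(list(store.data.keys()))  # ['a', 'b']
```

Because the wrapper exposes the same interface as the wrapped store, the client code never needs to know whether it is talking to the raw store or the wrapper.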
191 changes: 191 additions & 0 deletions ldclient/dynamodb_feature_store.py
@@ -0,0 +1,191 @@
import json

have_dynamodb = False
try:
import boto3
have_dynamodb = True
except ImportError:
pass

from ldclient import log
from ldclient.feature_store import CacheConfig
from ldclient.feature_store_helpers import CachingStoreWrapper
from ldclient.interfaces import FeatureStore, FeatureStoreCore

#
# Internal implementation of the DynamoDB feature store.
#
# Implementation notes:
#
# * Feature flags, segments, and any other kind of entity the LaunchDarkly client may wish
# to store, are all put in the same table. The only two required attributes are "key" (which
# is present in all storeable entities) and "namespace" (a parameter from the client that is
# used to disambiguate between flags and segments).
#
# * Because of DynamoDB's restrictions on attribute values (e.g. empty strings are not
# allowed), the standard DynamoDB marshaling mechanism with one attribute per object property
# is not used. Instead, the entire object is serialized to JSON and stored in a single
# attribute, "item". The "version" property is also stored as a separate attribute since it
# is used for updates.
#
# * Since DynamoDB doesn't have transactions, the init() method - which replaces the entire data
# store - is not atomic, so there can be a race condition if another process is adding new data
# via upsert(). To minimize this, we don't delete all the data at the start; instead, we update
# the items we've received, and then delete all other items. That could potentially result in
# deleting new data from another process, but that would be the case anyway if the init()
# happened to execute later than the upsert(); we are relying on the fact that normally the
# process that did the init() will also receive the new data shortly and do its own upsert().
#
# * DynamoDB has a maximum item size of 400KB. Since each feature flag or user segment is
# stored as a single item, this mechanism will not work for extremely large flags or segments.
#

class _DynamoDBFeatureStoreCore(FeatureStoreCore):
PARTITION_KEY = 'namespace'
SORT_KEY = 'key'
VERSION_ATTRIBUTE = 'version'
ITEM_JSON_ATTRIBUTE = 'item'

def __init__(self, table_name, prefix, dynamodb_opts):
if not have_dynamodb:
raise NotImplementedError("Cannot use DynamoDB feature store because AWS SDK (boto3 package) is not installed")
self._table_name = table_name
self._prefix = None if prefix == "" else prefix
self._client = boto3.client('dynamodb', **dynamodb_opts)

def init_internal(self, all_data):
# Start by reading the existing keys; we will later delete any of these that weren't in all_data.
unused_old_keys = self._read_existing_keys(all_data.keys())
requests = []
num_items = 0
inited_key = self._inited_key()

# Insert or update every provided item
for kind, items in all_data.items():
for key, item in items.items():
encoded_item = self._marshal_item(kind, item)
requests.append({ 'PutRequest': { 'Item': encoded_item } })
combined_key = (self._namespace_for_kind(kind), key)
unused_old_keys.discard(combined_key)
num_items = num_items + 1

# Now delete any previously existing items whose keys were not in the current data
for combined_key in unused_old_keys:
if combined_key[0] != inited_key:
requests.append({ 'DeleteRequest': { 'Key': self._make_keys(combined_key[0], combined_key[1]) } })

# Now set the special key that we check in initialized_internal()
requests.append({ 'PutRequest': { 'Item': self._make_keys(inited_key, inited_key) } })

_DynamoDBHelpers.batch_write_requests(self._client, self._table_name, requests)
log.info('Initialized table %s with %d items', self._table_name, num_items)

def get_internal(self, kind, key):
resp = self._get_item_by_keys(self._namespace_for_kind(kind), key)
return self._unmarshal_item(resp.get('Item'))

def get_all_internal(self, kind):
items_out = {}
paginator = self._client.get_paginator('query')
for resp in paginator.paginate(**self._make_query_for_kind(kind)):
for item in resp['Items']:
item_out = self._unmarshal_item(item)
items_out[item_out['key']] = item_out
return items_out

def upsert_internal(self, kind, item):
encoded_item = self._marshal_item(kind, item)
try:
req = {
'TableName': self._table_name,
'Item': encoded_item,
'ConditionExpression': 'attribute_not_exists(#namespace) or attribute_not_exists(#key) or :version > #version',
'ExpressionAttributeNames': {
'#namespace': self.PARTITION_KEY,
'#key': self.SORT_KEY,
'#version': self.VERSION_ATTRIBUTE
},
'ExpressionAttributeValues': {
':version': { 'N': str(item['version']) }
}
}
self._client.put_item(**req)
except self._client.exceptions.ConditionalCheckFailedException:
# The item was not updated because there's a newer item in the database. We must now
# read the item that's in the database and return it, so CachingStoreWrapper can cache it.
return self.get_internal(kind, item['key'])
return item

def initialized_internal(self):
resp = self._get_item_by_keys(self._inited_key(), self._inited_key())
return resp.get('Item') is not None and len(resp['Item']) > 0

def _prefixed_namespace(self, base):
return base if self._prefix is None else (self._prefix + ':' + base)

def _namespace_for_kind(self, kind):
return self._prefixed_namespace(kind.namespace)

def _inited_key(self):
return self._prefixed_namespace('$inited')

def _make_keys(self, namespace, key):
return {
self.PARTITION_KEY: { 'S': namespace },
self.SORT_KEY: { 'S': key }
}

def _make_query_for_kind(self, kind):
return {
'TableName': self._table_name,
'ConsistentRead': True,
'KeyConditions': {
self.PARTITION_KEY: {
'AttributeValueList': [
{ 'S': self._namespace_for_kind(kind) }
],
'ComparisonOperator': 'EQ'
}
}
}

def _get_item_by_keys(self, namespace, key):
return self._client.get_item(TableName=self._table_name, Key=self._make_keys(namespace, key))

def _read_existing_keys(self, kinds):
keys = set()
for kind in kinds:
req = self._make_query_for_kind(kind)
req['ProjectionExpression'] = '#namespace, #key'
req['ExpressionAttributeNames'] = {
'#namespace': self.PARTITION_KEY,
'#key': self.SORT_KEY
}
paginator = self._client.get_paginator('query')
for resp in paginator.paginate(**req):
for item in resp['Items']:
namespace = item[self.PARTITION_KEY]['S']
key = item[self.SORT_KEY]['S']
keys.add((namespace, key))
return keys

def _marshal_item(self, kind, item):
json_str = json.dumps(item)
ret = self._make_keys(self._namespace_for_kind(kind), item['key'])
ret[self.VERSION_ATTRIBUTE] = { 'N': str(item['version']) }
ret[self.ITEM_JSON_ATTRIBUTE] = { 'S': json_str }
return ret

def _unmarshal_item(self, item):
if item is None:
return None
json_attr = item.get(self.ITEM_JSON_ATTRIBUTE)
return None if json_attr is None else json.loads(json_attr['S'])


class _DynamoDBHelpers(object):
@staticmethod
def batch_write_requests(client, table_name, requests):
batch_size = 25
for batch in (requests[i:i+batch_size] for i in range(0, len(requests), batch_size)):
client.batch_write_item(RequestItems={ table_name: batch })
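The `batch_write_requests` helper above splits requests into groups of 25 because DynamoDB's `BatchWriteItem` accepts at most 25 write requests per call. The chunking idiom itself is pure Python and can be sketched in isolation:

```python
def chunk(requests, batch_size=25):
    # Yield consecutive slices of at most batch_size items each --
    # the same generator-expression idiom used in batch_write_requests.
    return (requests[i:i + batch_size] for i in range(0, len(requests), batch_size))

# 60 dummy requests split into batches of at most 25.
batches = list(chunk(list(range(60))))
print([len(b) for b in batches])  # [25, 25, 10]
```

Each batch would then be passed to one `batch_write_item` call, as in the diff above.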
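The implementation notes in the diff describe how `init_internal` avoids a destructive rebuild: upsert every incoming item first, then delete only the previously existing keys that were absent from the new data set. A minimal dictionary-based sketch of that ordering (a plain dict stands in for the DynamoDB table; the real calls are shown in the diff):

```python
def replace_all(store, new_data):
    """Replace the contents of `store` without clearing it first,
    mirroring the init_internal strategy in the diff."""
    # Remember what was there before, so unchanged-but-stale keys
    # can be removed afterwards.
    unused_old_keys = set(store.keys())

    # Step 1: insert or update every provided item.
    for key, item in new_data.items():
        store[key] = item
        unused_old_keys.discard(key)

    # Step 2: delete only the items that were not in the new data.
    for key in unused_old_keys:
        del store[key]

table = {"stale": 0, "kept": 1}
replace_all(table, {"kept": 2, "fresh": 3})
print(sorted(table.items()))  # [('fresh', 3), ('kept', 2)]
```

Readers of the store never observe an empty interval, which is the point of the strategy; the trade-off, as the notes say, is a benign race with concurrent upserts.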