forked from tensorflow/addons
Add docstring correlation cost #1
Open
PyExtreme wants to merge 18 commits into master from add-docstring-correlation_cost
Conversation
* Replace some compat.v1 APIs by their v2 equivalent (a before/after sketch follows the commit list)
  * Fix lint error
* …d-docstring-correlation_cost
* FIX: MovingAverage from_config, get_config (a serialization sketch follows the commit list)
  * `get_config` correctly serializes the wrapped optimizer using `tf.keras.optimizers.serialize()`
  * `from_config` deserializes the config correctly, including the wrapped optimizer
  * The wrapped optimizer can be specified using Keras optimizer strings such as adam, rmsprop, adagrad, etc.
  * Add tests for the new functionality above
  * Minor refactoring in `moving_average.py`
  * Add docstring to MovingAverage optimizer
  * Add name and kwargs to docstring
* Initial setup for tensorflow subsite
  * Move examples and rename as tutorials
  * Spelling and style
  * Update README to use the word `tutorials`
* Implement alternatives to some TensorFlow private APIs
  * Fix tests
* Add standard flags to build_docs.py
  * Ran autoformat
  * Readability improvements
* Namespaced all of the custom ops
  * Updated C++ namespaces to not conflict with the TF contrib ones
  * Ran code reformatting tool
  * Port bug fix in TF contrib to addons (tensorflow#497); original change at tensorflow/tensorflow@a913689
    * Fix lint warning
  * Check pass-through and do the expand_dims() only if needed (tensorflow#464)
    * Add indent to the fixed line
    * Merge return condition into the if statement
  * Add hardshrink kernel (tensorflow#500) (a sketch of the activation follows the commit list)
    * Make linter happy
  * Fix SequenceLoss Keras incompatibility (tensorflow#503)
    * Fix SequenceLoss incompatibility with Keras built-in loops
    * Remove debugging prints
    * Change the attribute-existence check to a more Pythonic one
  * Replace some compat.v1 APIs by their v2 equivalent (tensorflow#507)
    * Fix lint error
  * Add documentation for LazyAdam (tensorflow#515)
  * Updated hardshrink custom ops and made #ifdef names more consistent
  * Fix to undef
  # Conflicts:
  #   tensorflow_addons/layers/optical_flow_test.py
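The "Replace some compat.v1 APIs by their v2 equivalent" commits refer to migrating TF1-style compatibility calls to their native TF2 forms. A minimal before/after sketch of the general pattern (illustrative only; the specific call sites touched by these commits are not listed on this page):

```python
import tensorflow as tf

v = tf.Variable(1.0)

# TF1 compatibility path: module-level op that takes the variable as an argument.
tf.compat.v1.assign_add(v, 1.0)

# TF2 equivalent: call the method on the variable itself.
v.assign_add(1.0)
```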
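The MovingAverage commit describes its wrapper (de)serialization in terms of `tf.keras.optimizers.serialize()`, `deserialize()`, and `get()`. The sketch below is a hypothetical, stripped-down wrapper, not the actual tensorflow_addons MovingAverage code, showing the `get_config`/`from_config` round trip those bullets describe:

```python
import tensorflow as tf

class OptimizerWrapper:
    """Hypothetical stand-in for a wrapper such as MovingAverage."""

    def __init__(self, optimizer="adam"):
        # Accepts an optimizer instance, a Keras string ("adam", "rmsprop",
        # "adagrad", ...), or a config dict; tf.keras.optimizers.get()
        # normalizes all three into an optimizer instance.
        self.optimizer = tf.keras.optimizers.get(optimizer)

    def get_config(self):
        # Store the wrapped optimizer in serialized (JSON-friendly) form so the
        # wrapper itself can be saved and restored.
        return {"optimizer": tf.keras.optimizers.serialize(self.optimizer)}

    @classmethod
    def from_config(cls, config):
        config = dict(config)
        # Rebuild the wrapped optimizer before constructing the wrapper.
        config["optimizer"] = tf.keras.optimizers.deserialize(config["optimizer"])
        return cls(**config)

# Round trip: config -> new wrapper with an equivalent inner optimizer.
wrapper = OptimizerWrapper("adam")
restored = OptimizerWrapper.from_config(wrapper.get_config())
print(type(restored.optimizer).__name__)  # Adam
```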
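For the "add hardshrink kernel" entry: hardshrink passes a value through unchanged when it falls outside a [lower, upper] band and zeroes it otherwise (a common default band is [-0.5, 0.5]). A minimal TensorFlow sketch of that element-wise rule, not the custom C++/CUDA kernel added in the commit:

```python
import tensorflow as tf

def hardshrink(x, lower=-0.5, upper=0.5):
    # Keep values strictly outside [lower, upper]; zero out everything inside.
    x = tf.convert_to_tensor(x)
    mask = tf.logical_or(x < lower, x > upper)
    return tf.where(mask, x, tf.zeros_like(x))

print(hardshrink(tf.constant([-1.0, -0.3, 0.0, 0.4, 2.0])).numpy())
# [-1.  0.  0.  0.  2.]
```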
No description provided.
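Since the PR body is empty, for orientation only: the layer this branch documents lives in tensorflow_addons/layers/optical_flow.py (the file named in the merge conflict above) and computes a FlowNet-style correlation (cost) volume between two feature maps. The NumPy sketch below illustrates the idea for the simplest case, a 1x1 kernel with unit strides; it is an assumption-laden illustration, not the Addons implementation or its exact parameterization:

```python
import numpy as np

def correlation_cost_1x1(a, b, max_displacement=3):
    """Correlate feature map `a` with shifted copies of `b`.

    a, b: arrays of shape (H, W, C). Returns (H, W, D * D) with
    D = 2 * max_displacement + 1; each output channel is the channel-wise
    mean product of `a` and `b` shifted by one (dy, dx) offset.
    """
    H, W, _ = a.shape
    d = max_displacement
    b_pad = np.pad(b, ((d, d), (d, d), (0, 0)))
    out = np.empty((H, W, (2 * d + 1) ** 2), dtype=a.dtype)
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = b_pad[d + dy : d + dy + H, d + dx : d + dx + W, :]
            out[..., k] = (a * shifted).mean(axis=-1)
            k += 1
    return out

cost = correlation_cost_1x1(np.random.rand(8, 8, 4), np.random.rand(8, 8, 4))
print(cost.shape)  # (8, 8, 49)
```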