[SPARK-17356][SQL][1.6] Fix out of memory issue when generating JSON for TreeNode #14973
This is a backport of PR #14915 to branch 1.6.
What changes were proposed in this pull request?
The class `org.apache.spark.sql.types.Metadata` is widely used in MLlib to store ML attributes, and it is commonly stored in `Alias` expressions. This `Metadata` can have a large memory footprint, since the number of attributes can be on the order of millions. When `toJSON` is called on an `Alias` expression, its `Metadata` is also converted to a large JSON string. If a plan contains many such `Alias` expressions, calling `toJSON` can trigger an out-of-memory error, because converting every `Metadata` reference to JSON takes a huge amount of memory.

With this PR, we skip scanning `Metadata` when doing the JSON conversion. For a reproducer of the OOM and an analysis, see the JIRA: https://issues.apache.org/jira/browse/SPARK-17356.
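To illustrate the idea only (this is not the actual Spark patch), below is a minimal, self-contained Scala sketch that serializes a toy plan tree to JSON while replacing the heavy metadata payload with an empty placeholder, so the metadata's size never influences the JSON output. The names `PlanNode` and `toJson` are hypothetical and not part of Spark's API.

```scala
// Hypothetical sketch of the technique: when converting a plan tree to JSON,
// emit an empty placeholder for heavy metadata instead of expanding it.
object SkipMetadataInJson {

  // A toy tree node carrying a potentially huge metadata map.
  case class PlanNode(
      name: String,
      metadata: Map[String, String],
      children: Seq[PlanNode] = Nil) {

    // Convert the node (and its children) to JSON, skipping the metadata
    // payload entirely so its size never affects the output.
    def toJson: String = {
      val childJson = children.map(_.toJson).mkString("[", ",", "]")
      // Emit "{}" for metadata rather than serializing every entry.
      s"""{"name":"$name","metadata":{},"children":$childJson}"""
    }
  }

  def main(args: Array[String]): Unit = {
    // A node whose metadata would be enormous if fully serialized.
    val bigMetadata = (1 to 1000000).map(i => s"attr_$i" -> "v").toMap
    val plan = PlanNode("Alias", bigMetadata, Seq(PlanNode("Child", Map.empty)))

    // The JSON stays small because metadata is replaced by an empty object.
    println(plan.toJson)
  }
}
```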
How was this patch tested?
Existing tests.