[SPARK-40918][SQL][3.3] Mismatch between FileSourceScanExec and Orc and ParquetFileFormat on producing columnar output #38431
Closed
juliuszsompolski wants to merge 3 commits into apache:branch-3.3 from juliuszsompolski:SPARK-40918-3.3
Conversation
[SPARK-40918][SQL][3.3] Mismatch between FileSourceScanExec and Orc and ParquetFileFormat on producing columnar output
### What changes were proposed in this pull request?
We move the decision about supporting columnar output based on WSCG one level up, from ParquetFileFormat / OrcFileFormat to FileSourceScanExec, and pass it as a new required option to ParquetFileFormat / OrcFileFormat. The semantics are now as follows:
* `ParquetFileFormat.supportBatch` and `OrcFileFormat.supportBatch` return whether the format **can** produce columnar output, not necessarily that it **will**.
* To return columnar output, the option `FileFormat.OPTION_RETURNING_BATCH` needs to be passed to `buildReaderWithPartitionValues` in these two file formats. It should only be set to `true` if `supportBatch` is also `true`, but it can be set to `false` if we don't want columnar output anyway - this way, `FileSourceScanExec` can set it to `false` when there are more than 100 columns for WSCG, and `ParquetFileFormat` / `OrcFileFormat` don't have to concern themselves with WSCG limits.
* To avoid forgetting to pass it, the option is made required. Making it required means updating the few places that use it, but the error resulting from not setting it would be very obscure, so it's better to fail early and explicitly here. A rough sketch of the resulting flow is shown after this list.
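As a rough sketch of that flow (names such as `relation`, `schema` and `options` are assumed from the surrounding `FileSourceScanExec` / `ParquetFileFormat` context, and the exact shape may differ from the actual patch):
```
// In FileSourceScanExec: supportsColumnar is decided from the scan's full
// output schema (including metadata columns) and passed down as an option.
val readerOptions = relation.options +
  (FileFormat.OPTION_RETURNING_BATCH -> supportsColumnar.toString)

// In ParquetFileFormat.buildReaderWithPartitionValues: honour the caller's
// decision instead of re-deriving it from requiredSchema ++ partitionSchema,
// failing loudly if the option is missing.
val returningBatch = options.getOrElse(
  FileFormat.OPTION_RETURNING_BATCH,
  throw new IllegalArgumentException(
    s"${FileFormat.OPTION_RETURNING_BATCH} is required for ParquetFileFormat")
).toBoolean
```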
### Why are the changes needed?
The following explains the issue for `ParquetFileFormat`; `OrcFileFormat` had exactly the same problem.
`java.lang.ClassCastException: org.apache.spark.sql.vectorized.ColumnarBatch cannot be cast to org.apache.spark.sql.catalyst.InternalRow` was being thrown because the Parquet reader was producing columnar batches while `FileSourceScanExec` expected row output.
The mismatch comes from the fact that `ParquetFileFormat.supportBatch` depends on `WholeStageCodegenExec.isTooManyFields(conf, schema)`, where the threshold is 100 fields.
When this is used in `FileSourceScanExec`:
```
override lazy val supportsColumnar: Boolean = {
  relation.fileFormat.supportBatch(relation.sparkSession, schema)
}
```
the `schema` comes from the output attributes, which include extra metadata attributes.
However, inside `ParquetFileFormat.buildReaderWithPartitionValues` it was calculated again, from the schemas passed in at the call site:
```
// Call site in FileSourceScanExec:
relation.fileFormat.buildReaderWithPartitionValues(
  sparkSession = relation.sparkSession,
  dataSchema = relation.dataSchema,
  partitionSchema = relation.partitionSchema,
  requiredSchema = requiredSchema,
  filters = pushedDownFilters,
  options = options,
  hadoopConf = hadoopConf)
...
// Inside ParquetFileFormat.buildReaderWithPartitionValues:
val resultSchema = StructType(requiredSchema.fields ++ partitionSchema.fields)
...
val returningBatch = supportBatch(sparkSession, resultSchema)
```
Here, `requiredSchema` and `partitionSchema` do not include the metadata columns:
```
FileSourceScanExec: output: List(c1#4608L, c2#4609L, ..., c100#4707L, file_path#6388)
FileSourceScanExec: dataSchema: StructType(StructField(c1,LongType,true),StructField(c2,LongType,true),...,StructField(c100,LongType,true))
FileSourceScanExec: partitionSchema: StructType()
FileSourceScanExec: requiredSchema: StructType(StructField(c1,LongType,true),StructField(c2,LongType,true),...,StructField(c100,LongType,true))
```
Columns like `file_path#6388` are added by the scan and contain metadata produced by the scan itself, not by the file reader, which only concerns itself with what is inside the file.
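For illustration, a minimal repro sketch of the mismatch (paths and column counts are illustrative; it assumes the default `spark.sql.codegen.maxFields` of 100 and can be run in e.g. `spark-shell`):
```
import org.apache.spark.sql.functions.col

// 100 data columns: at the codegen field limit when counted on their own.
val path = "/tmp/spark-40918-repro"
spark.range(10)
  .select((1 to 100).map(i => (col("id") + i).as(s"c$i")): _*)
  .write.mode("overwrite").parquet(path)

// Selecting the hidden _metadata.file_path column pushes the scan's output to
// 101 fields, so FileSourceScanExec expected rows while the Parquet reader
// still produced ColumnarBatches -> the ClassCastException above, before this fix.
spark.read.parquet(path)
  .select(col("*"), col("_metadata.file_path"))
  .collect()
```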
### Does this PR introduce _any_ user-facing change?
Not a public API change, but it is now required to pass `FileFormat.OPTION_RETURNING_BATCH` in `options` to `ParquetFileFormat.buildReaderWithPartitionValues`. The only user of this API in Apache Spark is `FileSourceScanExec`.
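As a hedged sketch of what a direct caller of this internal API now has to do (the surrounding values such as `fileFormat`, `relation`, `requiredSchema` and `hadoopConf` are assumed to exist in the caller's context):
```
// The option must always be present; "false" forces InternalRow output even
// when supportBatch would allow columnar batches.
val readFile = fileFormat.buildReaderWithPartitionValues(
  sparkSession = spark,
  dataSchema = relation.dataSchema,
  partitionSchema = relation.partitionSchema,
  requiredSchema = requiredSchema,
  filters = Seq.empty,
  options = relation.options +
    (FileFormat.OPTION_RETURNING_BATCH -> "false"),
  hadoopConf = hadoopConf)
```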
### How was this patch tested?
Tests added
Closes apache#38397 from juliuszsompolski/SPARK-40918.
Authored-by: Juliusz Sompolski <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
juliuszsompolski (Contributor, Author): all tests passed actually: https://github.com/juliuszsompolski/apache-spark/runs/9189880361

cloud-fan (Contributor): thanks, merging to 3.3!
cloud-fan pushed a commit that referenced this pull request on Oct 31, 2022:
[SPARK-40918][SQL][3.3] Mismatch between FileSourceScanExec and Orc and ParquetFileFormat on producing columnar output

Backports #38397 from juliuszsompolski/SPARK-40918.

Closes #38431 from juliuszsompolski/SPARK-40918-3.3.

Authored-by: Juliusz Sompolski <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
juliuszsompolski (Contributor, Author): Yeah, they did in all three runs, but three times in a row it didn't update GitHub status...