
Commit 4e13820

jamesthomp authored and gatorsmile committed
[SPARK-23388][SQL] Support for Parquet Binary DecimalType in VectorizedColumnReader
## What changes were proposed in this pull request?

Re-add support for parquet binary DecimalType in VectorizedColumnReader.

## How was this patch tested?

Existing test suite.

Author: James Thompson <[email protected]>

Closes #20580 from jamesthomp/jt/add-back-binary-decimal.

(cherry picked from commit 5bb1141)
Signed-off-by: gatorsmile <[email protected]>
1 parent 70be603 · commit 4e13820

File tree

1 file changed: +2 −1 lines changed


sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java

Lines changed: 2 additions & 1 deletion
@@ -444,7 +444,8 @@ private void readBinaryBatch(int rowId, int num, WritableColumnVector column) {
     // This is where we implement support for the valid type conversions.
     // TODO: implement remaining type conversions
     VectorizedValuesReader data = (VectorizedValuesReader) dataColumn;
-    if (column.dataType() == DataTypes.StringType || column.dataType() == DataTypes.BinaryType) {
+    if (column.dataType() == DataTypes.StringType || column.dataType() == DataTypes.BinaryType
+        || DecimalType.isByteArrayDecimalType(column.dataType())) {
       defColumn.readBinarys(num, column, rowId, maxDefLevel, data);
     } else if (column.dataType() == DataTypes.TimestampType) {
       if (!shouldConvertTimestamps()) {
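
The one-line change routes byte-array-backed decimals through the same binary read path already used for StringType and BinaryType. For context, Spark keeps a decimal's unscaled value in a byte array only when its precision is too large to fit in a 64-bit long (more than 18 digits), which is the condition `DecimalType.isByteArrayDecimalType` captures. The standalone Java sketch below illustrates that threshold check under that assumption; `DecimalBackingSketch`, `MAX_LONG_DIGITS`, and `isByteArrayBackedDecimal` are illustrative names, not Spark internals.

```java
// Minimal, self-contained sketch of the precision check that decides whether a
// Parquet decimal column must be read via the binary (byte-array) path.
// Assumption: the threshold is 18 digits, the largest decimal precision whose
// unscaled value still fits in a signed 64-bit long.
public class DecimalBackingSketch {
  // Illustrative constant, not Spark's actual field.
  static final int MAX_LONG_DIGITS = 18;

  // Hypothetical helper mirroring the role of DecimalType.isByteArrayDecimalType.
  static boolean isByteArrayBackedDecimal(int precision) {
    return precision > MAX_LONG_DIGITS;
  }

  public static void main(String[] args) {
    // DECIMAL(10, 2) fits in a long, so it stays on the primitive read path.
    System.out.println(isByteArrayBackedDecimal(10)); // false
    // DECIMAL(38, 18) needs an unscaled byte array, so readBinaryBatch must handle it.
    System.out.println(isByteArrayBackedDecimal(38)); // true
  }
}
```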
