[SPARK-27176][SQL] Upgrade hadoop-3's built-in Hive maven dependencies to 2.3.4 #23788
Conversation
Test build #102343 has finished for PR 23788 at commit
Test build #102356 has finished for PR 23788 at commit
Test build #102374 has finished for PR 23788 at commit
pom.xml
</dependency>
<dependency>
  <groupId>${hive.group}</groupId>
  <artifactId>hive-llap-client</artifactId>
Need this dependency, otherwise:
build/sbt "hive/testOnly *.StatisticsSuite" -Phadoop-3.1
sbt.ForkMain$ForkError: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/llap/security/LlapSigner$Signable
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:79)
at org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDTF(Registry.java:208)
at org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDTF(Registry.java:201)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:500)
at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:521)
at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:181)
at org.apache.spark.sql.hive.test.TestHiveContext.<init>(TestHive.scala:129)
at org.apache.spark.sql.hive.test.TestHive$.<init>(TestHive.scala:53)
at org.apache.spark.sql.hive.test.TestHive$.<clinit>(TestHive.scala)
at org.apache.spark.sql.hive.test.TestHiveSingleton.$init$(TestHiveSingleton.scala:30)
at org.apache.spark.sql.hive.StatisticsSuite.<init>(StatisticsSuite.scala:45)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:435)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: org.apache.hadoop.hive.llap.security.LlapSigner$Signable
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:79)
at org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDTF(Registry.java:208)
at org.apache.hadoop.hive.ql.exec.Registry.registerGenericUDTF(Registry.java:201)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:500)
at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:521)
at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:181)
at org.apache.spark.sql.hive.test.TestHiveContext.<init>(TestHive.scala:129)
at org.apache.spark.sql.hive.test.TestHive$.<init>(TestHive.scala:53)
at org.apache.spark.sql.hive.test.TestHive$.<clinit>(TestHive.scala)
at org.apache.spark.sql.hive.test.TestHiveSingleton.$init$(TestHiveSingleton.scala:30)
at org.apache.spark.sql.hive.StatisticsSuite.<init>(StatisticsSuite.scala:45)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:435)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Test build #102540 has finished for PR 23788 at commit
Test build #102861 has finished for PR 23788 at commit
srowen left a comment:
I don't know the details well here, but it looks reasonable if it's mostly moves, dependency changes, and adding Hive 2 code paths.
...3.4/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnarBatchReader.java
Test build #103543 has finished for PR 23788 at commit
retest this please
Test build #103587 has finished for PR 23788 at commit
…le to sql/core/v1.2.1

## What changes were proposed in this pull request?

To make #23788 easy to review, this PR moves `OrcColumnVector.java`, `OrcShimUtils.scala`, `OrcFilters.scala` and `OrcFilterSuite.scala` to `sql/core/v1.2.1` and copies them to `sql/core/v2.3.4`.

## How was this patch tested?

Manual tests:
```shell
diff -urNa sql/core/v1.2.1 sql/core/v2.3.4
```

Closes #24119 from wangyum/SPARK-27182.

Authored-by: Yuming Wang <[email protected]>
Signed-off-by: gatorsmile <[email protected]>
# Conflicts:
#	dev/deps/spark-deps-hadoop-3.2
#	pom.xml
#	sql/core/pom.xml
#	sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilterSuite.scala
#	sql/core/v1.2.1/src/test/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilterSuite.scala
#	sql/core/v2.3.4/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
#	sql/core/v2.3.4/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilters.scala
#	sql/core/v2.3.4/src/test/scala/org/apache/spark/sql/execution/datasources/orc/OrcFilterSuite.scala
  builder <- buildSearchArgument(dataTypeMap, conjunction, newBuilder)
} yield builder.build()
if (HiveUtils.isHive2) {
  BuiltinOrcFilters.createFilter(schema, filters).asInstanceOf[Option[SearchArgument]]
If the built-in Hive is 2.3.4, we use `org.apache.spark.sql.execution.datasources.orc.OrcFilters` to create the filter.
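A minimal sketch of that dispatch, assuming the `HiveUtils.isHive2` flag introduced in this PR; `createHive12Filter` is a hypothetical stand-in for the pre-existing Hive 1.2.1 path (the for-comprehension shown in the diff context above), not the actual PR code:

```scala
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument
import org.apache.spark.sql.execution.datasources.orc.{OrcFilters => BuiltinOrcFilters}
import org.apache.spark.sql.hive.HiveUtils
import org.apache.spark.sql.sources.Filter
import org.apache.spark.sql.types.StructType

// Hypothetical stand-in for the legacy Hive 1.2.1 SearchArgument construction.
def createHive12Filter(schema: StructType, filters: Seq[Filter]): Option[SearchArgument] = None

def createFilter(schema: StructType, filters: Seq[Filter]): Option[SearchArgument] = {
  if (HiveUtils.isHive2) {
    // Built-in Hive 2.3.4: delegate to the ORC filter implementation in sql/core.
    BuiltinOrcFilters.createFilter(schema, filters).asInstanceOf[Option[SearchArgument]]
  } else {
    // Built-in Hive 1.2.1: keep the legacy path in sql/hive.
    createHive12Filter(schema, filters)
  }
}
```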
val parameterInfo = new SimpleGenericUDAFParameterInfo(inputInspectors, false, false)
resolver.getEvaluator(parameterInfo)
val clazz = Utils.classForName(classOf[SimpleGenericUDAFParameterInfo].getName)
if (HiveUtils.isHive2) {
Hive 2.3.x (HIVE-13453):
val parameterInfo = new SimpleGenericUDAFParameterInfo(inputInspectors, false, false, false)
Hive 1.x:
val parameterInfo = new SimpleGenericUDAFParameterInfo(inputInspectors, false, false)
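Since the `SimpleGenericUDAFParameterInfo` class itself exists in both Hive versions and only the constructor arity differs, one way to keep a single source tree compiling is to select the constructor reflectively. A sketch under that assumption (an illustration, not the exact PR code):

```scala
import org.apache.hadoop.hive.ql.udf.generic.SimpleGenericUDAFParameterInfo
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector

def newParameterInfo(
    inputInspectors: Array[ObjectInspector],
    isHive2: Boolean): SimpleGenericUDAFParameterInfo = {
  val clazz = classOf[SimpleGenericUDAFParameterInfo]
  val f = java.lang.Boolean.FALSE
  if (isHive2) {
    // Hive 2.3.x (HIVE-13453): four-argument constructor.
    clazz.getDeclaredConstructor(classOf[Array[ObjectInspector]],
      java.lang.Boolean.TYPE, java.lang.Boolean.TYPE, java.lang.Boolean.TYPE)
      .newInstance(inputInspectors, f, f, f)
  } else {
    // Hive 1.x: three-argument constructor.
    clazz.getDeclaredConstructor(classOf[Array[ObjectInspector]],
      java.lang.Boolean.TYPE, java.lang.Boolean.TYPE)
      .newInstance(inputInspectors, f, f)
  }
}
```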
def deserializePlan[UDFType](is: java.io.InputStream, clazz: Class[_]): UDFType = {
  deserializeObjectByKryo(Utilities.runtimeSerializationKryo.get(), is, clazz)
    .asInstanceOf[UDFType]
if (HiveUtils.isHive2) {
Hive 2.x (HIVE-12302):

import org.apache.hadoop.hive.ql.exec.SerializationUtilities
val kryo = SerializationUtilities.borrowKryo()
try {
  SerializationUtilities.deserializeObjectByKryo(kryo, is, clazz).asInstanceOf[UDFType]
} finally {
  SerializationUtilities.releaseKryo(kryo)
}

Hive 1.x:

import org.apache.hadoop.hive.ql.exec.Utilities
Utilities.deserializeObjectByKryo(Utilities.runtimeSerializationKryo.get(), is, clazz)
  .asInstanceOf[UDFType]
Test build #104009 has finished for PR 23788 at commit
private val hive1Version = "1.2.1"
private val hive2Version = "2.3.4"
val isHive2: Boolean = HiveVersionInfo.getVersion.equals(hive2Version)
Should this check be a little more general, to match all 2.x versions? Should it fail on Hive 3.x?
I think it only works on Hive 2.3.x.
- The current code is not compatible with Hive 2.0 - Hive 2.2: https://github.com/apache/spark/pull/23788/files#diff-53f31aa4bbd9274f40547cd00cf0826dR341
- The current code is not compatible with Hive 3.1(HIVE-12192).
[ERROR] /Users/yumwang/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:451: type mismatch;
found : Timestamp (in org.apache.hadoop.hive.common.type)
required: Timestamp (in java.sql)
[ERROR] row.setLong(ordinal, DateTimeUtils.fromJavaTimestamp(oi.getPrimitiveJavaObject(value)))There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
OK, should this at least check for "isHive23" then and match on starting with "2.3."? Otherwise this may well work with 2.3.5 but will fail. What about a future 2.4?
Can we ... drop Hive 1.2.x support entirely here or in a next PR?
Yes, `isHive23` is more reasonable; I will update it later.

> Can we ... drop Hive 1.2.x support entirely here or in a next PR?

Removing Hive 1.2.x support may be a bit risky. cc @gatorsmile
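For reference, a minimal sketch of the `isHive23` check being discussed, assuming the built-in Hive version string still comes from `HiveVersionInfo` as in the diff context above:

```scala
import org.apache.hive.common.util.HiveVersionInfo

object HiveVersionCheck {
  // "2.3.4" today; a later 2.3.5 upgrade would still satisfy this check,
  // while Hive 3.x (which the current code does not support) would not.
  val builtinHiveVersion: String = HiveVersionInfo.getVersion
  val isHive23: Boolean = builtinHiveVersion.startsWith("2.3")
}
```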
  <artifactId>commons-logging</artifactId>
</exclusion>
<!-- Hive 2.3.4 -->
<exclusion>
Exclude jetty-all; it conflicts with jetty 9.4.12.v20180830:
build/sbt clean package -Phadoop-3.2 -Phive
...
[error] /home/yumwang/opensource/spark/core/src/main/scala/org/apache/spark/SSLOptions.scala:78: value setTrustStorePath is not a member of org.eclipse.jetty.util.ssl.SslContextFactory
[error] trustStore.foreach(file => sslContextFactory.setTrustStorePath(file.getAbsolutePath))
[error]

  <artifactId>groovy-all</artifactId>
</exclusion>
<!-- Hive 2.3.4 -->
<exclusion>
Exclude log4j-slf4j-impl, otherwise:
$ build/sbt clean package -Phadoop-3.2 -Phive
$ export SPARK_PREPEND_CLASSES=true
$ bin/spark-shell
NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/AbstractLoggerAdapter
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:36)
at org.apache.spark.internal.Logging$.org$apache$spark$internal$Logging$$isLog4j12(Logging.scala:217)
at org.apache.spark.internal.Logging.initializeLogging(Logging.scala:122)
at org.apache.spark.internal.Logging.initializeLogIfNecessary(Logging.scala:111)
at org.apache.spark.internal.Logging.initializeLogIfNecessary$(Logging.scala:105)
at org.apache.spark.deploy.SparkSubmit.initializeLogIfNecessary(SparkSubmit.scala:73)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:81)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:939)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:948)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.logging.log4j.spi.AbstractLoggerAdapter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more

  <groupId>${hive.group}</groupId>
  <artifactId>hive-llap-tez</artifactId>
</exclusion>
<exclusion>
Exclude calcite-druid and avatica. More details: https://issues.apache.org/jira/browse/SPARK-27054
Test build #104277 has finished for PR 23788 at commit
retest this please
Test build #104281 has finished for PR 23788 at commit
Some(builder.startAnd().equals(attribute, value).end())
val bd = builder.startAnd()
val method = findMethod(bd.getClass, "equals", classOf[String], classOf[Object])
Some(method.invoke(bd, attribute, value.asInstanceOf[AnyRef]).asInstanceOf[Builder].end())
Cast value to AnyRef based on the following:
https://github.com/apache/spark/pull/8799/files#diff-6cac9bc2656e3782b0312dceb8c55d47R132
https://github.com/apache/hive/blob/release-1.2.1/serde/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java#L255
Otherwise:
[error] /Users/yumwang/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFilters.scala:180: type mismatch;
[error] found : Any
[error] required: Object
[error] Some(method.invoke(bd, attribute, value).asInstanceOf[Builder].end())
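For illustration, a sketch of such a `findMethod` helper (hypothetical; the real helper is presumably defined elsewhere in this file), plus the invocation pattern from the diff context. The key point is that `Method.invoke` only accepts `Object` arguments, hence the explicit `asInstanceOf[AnyRef]` on the Scala-typed `value`:

```scala
import java.lang.reflect.Method

// Hypothetical helper mirroring the findMethod(...) call shown above.
def findMethod(klass: Class[_], name: String, args: Class[_]*): Method = {
  val method = klass.getMethod(name, args: _*)
  method.setAccessible(true) // the concrete Builder class may not be public
  method
}

// Usage mirroring the diff context, where `value` has static type Any:
//   val bd = builder.startAnd()
//   val method = findMethod(bd.getClass, "equals", classOf[String], classOf[Object])
//   Some(method.invoke(bd, attribute, value.asInstanceOf[AnyRef]).asInstanceOf[Builder].end())
```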
Test build #104294 has finished for PR 23788 at commit
Test build #104295 has finished for PR 23788 at commit
…-client and co.cask.tephra:*
Test build #104350 has finished for PR 23788 at commit
retest this please
Test build #104370 has finished for PR 23788 at commit
Test build #104375 has finished for PR 23788 at commit
LGTM. So far, this change only impacts the Hadoop 3.x profile, so it is safe in general. It might still have a few issues when we run the tests using Hadoop 3.x; let us resolve them, if any exist, when we trigger the tests with the Hadoop 3.x profile. Thanks! Merged to master.
Hi, @yhuai and @liancheng. To @wangyum and @gatorsmile.
import java.math.BigDecimal;

import org.apache.orc.storage.ql.exec.vector.*;
import org.apache.hadoop.hive.ql.exec.vector.*;
Here.
Yes .. we shouldn't do this..
import org.apache.hadoop.hive.ql.io.sarg.{PredicateLeaf, SearchArgument}
import org.apache.hadoop.hive.ql.io.sarg.SearchArgument.Builder
import org.apache.hadoop.hive.ql.io.sarg.SearchArgumentFactory.newBuilder
import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable
Here.
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch
import org.apache.hadoop.hive.ql.io.sarg.{SearchArgument => OrcSearchArgument}
import org.apache.hadoop.hive.ql.io.sarg.PredicateLeaf.{Operator => OrcOperator}
import org.apache.hadoop.hive.serde2.io.{DateWritable, HiveDecimalWritable}
Here.
Thank you @dongjoon-hyun. Do you and @wangyum have any concern about using nohive?
I created #27536.
…lap.scope` in root `pom.xml`

### What changes were proposed in this pull request?

This PR aims to fix the `hive-llap-common` dependency to use `hive.llap.scope` in the root pom for Apache Spark 3.5 and 4.0.

### Why are the changes needed?

Apache Spark has been supposed to use `hive.llap.scope` for the `hive-llap-common` dependency, and the `hive` module does it correctly.
https://github.com/apache/spark/blob/a1b0f256c04e5b632075358d1e2f946e64588da6/sql/hive/pom.xml#L119-L123
Since Apache Spark 3.0.0 (SPARK-27176), the root `pom.xml` file has been using a wrong scope mistakenly, probably due to `-Phive-provided` support. This causes confusion for other external systems and the users. We had better fix the root `pom.xml` to use `hive.llap.scope` correctly.
- #23788

### Does this PR introduce _any_ user-facing change?

No, there is no change technically because the `hive` module has been using a correct scope.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #49733 from dongjoon-hyun/SPARK-51039.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
What changes were proposed in this pull request?

This PR mainly contains:
- upgrading the hadoop-3 profile's built-in Hive maven dependencies to 2.3.4, and
- the corresponding compatibility changes in the `sql/hive` module.

How was this patch tested?

- Jenkins test with hadoop-2.7
- manual test with hadoop-3: