forked from apache/spark
Fixing copy bug #1
Merged: techaddict merged 2 commits into techaddict:SPARK-18274 from jkbradley:techaddict-SPARK-18274 on Nov 11, 2016
Conversation
techaddict pushed a commit that referenced this pull request on Mar 21, 2018:
## What changes were proposed in this pull request?

There were two related fixes regarding `from_json`, `get_json_object` and `json_tuple` ([Fix #1](apache@c8803c0), [Fix #2](apache@86174ea)), but they weren't comprehensive, it seems. I wanted to extend those fixes to all the parsers, and add tests for each case.

## How was this patch tested?

Regression tests

Author: Burak Yavuz <[email protected]>

Closes apache#20302 from brkyvz/json-invfix.
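For context, a minimal sketch of the behavior these fixes target: malformed JSON input should come back as `null` from every JSON expression rather than raising an exception. The session setup and sample rows below are illustrative, not taken from the patch.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json, get_json_object, json_tuple}
import org.apache.spark.sql.types.{IntegerType, StructType}

val spark = SparkSession.builder().master("local[*]").appName("json-invalid-input").getOrCreate()
import spark.implicits._

// The second row is malformed JSON; the fix makes every parser yield null for it.
val df = Seq("""{"a": 1}""", """{"a":""").toDF("json")
val schema = new StructType().add("a", IntegerType)

df.select(from_json(col("json"), schema)).show()      // malformed row -> null
df.select(get_json_object(col("json"), "$.a")).show() // malformed row -> null
df.select(json_tuple(col("json"), "a")).show()        // malformed row -> null
```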
techaddict pushed a commit that referenced this pull request on Mar 21, 2018:
## What changes were proposed in this pull request?
Solved two bugs to enable stream-stream self joins.
### Incorrect analysis due to missing MultiInstanceRelation trait
Streaming leaf nodes did not extend MultiInstanceRelation, which is necessary for the catalyst analyzer to convert the self-join logical plan DAG into a tree (by creating new instances of the leaf relations). This was causing the error `Failure when resolving conflicting references in Join:` (see JIRA for details).
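As a rough illustration (the class below is hypothetical, not Spark's actual streaming relation), the fix amounts to mixing `MultiInstanceRelation` into the streaming leaf nodes so the analyzer can re-instantiate one side of a self-join with fresh expression ids:

```scala
import org.apache.spark.sql.catalyst.analysis.MultiInstanceRelation
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.catalyst.plans.logical.LeafNode

// Hypothetical streaming leaf node; mixing in MultiInstanceRelation lets the
// analyzer clone it per join side instead of failing on conflicting references.
case class MyStreamingRelation(output: Seq[Attribute]) extends LeafNode
    with MultiInstanceRelation {
  // Fresh ExprIds for the output attributes turn the self-join DAG into a tree.
  override def newInstance(): MyStreamingRelation =
    copy(output = output.map(_.newInstance()))
}
```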
### Incorrect attribute rewrite when splicing batch plans in MicroBatchExecution
When splicing the source's batch plan into the streaming plan (by replacing the StreamingExecutionRelation), we were rewriting the attribute references in the streaming plan with the new attribute references from the batch plan. This mishandled the scenario where multiple StreamingExecutionRelations point to the same source, and therefore eventually point to the same batch plan returned by the source. Here is an example query and its corresponding plan transformations.
```
val df = input.toDF
val join =
  df.select('value % 5 as "key", 'value).join(
    df.select('value % 5 as "key", 'value), "key")
```
Streaming logical plan before splicing the batch plan
```
Project [key#6, value#1, value#12]
+- Join Inner, (key#6 = key#9)
:- Project [(value#1 % 5) AS key#6, value#1]
: +- StreamingExecutionRelation Memory[#1], value#1
+- Project [(value#12 % 5) AS key#9, value#12]
+- StreamingExecutionRelation Memory[#1], value#12 // two different leaves pointing to same source
```
Batch logical plan after splicing the batch plan and before rewriting
```
Project [key#6, value#1, value#12]
+- Join Inner, (key#6 = key#9)
:- Project [(value#1 % 5) AS key#6, value#1]
: +- LocalRelation [value#66] // replaces StreamingExecutionRelation Memory[#1], value#1
+- Project [(value#12 % 5) AS key#9, value#12]
+- LocalRelation [value#66] // replaces StreamingExecutionRelation Memory[#1], value#12
```
Batch logical plan after rewriting the attributes. Specifically, for each spliced plan, the new output attribute (value#66) replaces the earlier output attributes (value#1 and value#12, one for each StreamingExecutionRelation).
```
Project [key#6, value#66, value#66] // both value#1 and value#12 replaced by value#66
+- Join Inner, (key#6 = key#9)
:- Project [(value#66 % 5) AS key#6, value#66]
: +- LocalRelation [value#66]
+- Project [(value#66 % 5) AS key#9, value#66]
+- LocalRelation [value#66]
```
This causes the optimizer to eliminate value#66 from one side of the join.
```
Project [key#6, value#66, value#66]
+- Join Inner, (key#6 = key#9)
:- Project [(value#66 % 5) AS key#6, value#66]
: +- LocalRelation [value#66]
+- Project [(value#66 % 5) AS key#9] // this does not generate value, incorrect join results
+- LocalRelation [value#66]
```
**Solution**: Instead of rewriting attributes, use a Project to introduce aliases between the output attribute references and the new reference generated by the spliced plans. The analyzer and optimizer will take care of the rest.
```
Project [key#6, value#1, value#12]
+- Join Inner, (key#6 = key#9)
:- Project [(value#1 % 5) AS key#6, value#1]
: +- Project [value#66 AS value#1] // solution: project with aliases
: +- LocalRelation [value#66]
+- Project [(value#12 % 5) AS key#9, value#12]
+- Project [value#66 AS value#12] // solution: project with aliases
+- LocalRelation [value#66]
```
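A hedged sketch of that solution, with illustrative names (this is not the exact MicroBatchExecution code): build a `Project` of `Alias`es that maps each new batch output attribute back to the expression id the streaming plan already uses, and let analysis do the rest.

```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute}
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}

// Wrap the spliced batch plan so that e.g. value#66 is exposed AS value#1 on
// one join side and AS value#12 on the other, keeping the sides distinct.
def spliceWithAliases(batchPlan: LogicalPlan, streamingOutput: Seq[Attribute]): LogicalPlan = {
  val aliases = batchPlan.output.zip(streamingOutput).map {
    case (newAttr, oldAttr) => Alias(newAttr, oldAttr.name)(exprId = oldAttr.exprId)
  }
  Project(aliases, batchPlan)
}
```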
## How was this patch tested?
New unit test
Author: Tathagata Das <[email protected]>
Closes apache#20598 from tdas/SPARK-23406.
techaddict pushed a commit that referenced this pull request on May 3, 2019:
…te temporary path in local staging directory

## What changes were proposed in this pull request?

The environment of my cluster is as follows:

```
OS: Linux version 2.6.32-220.7.1.el6.x86_64 (mockbuild@c6b18n3.bsys.dev.centos.org) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Wed Mar 7 00:52:02 GMT 2012
Hadoop: 2.7.2
Spark: 2.3.0 or 3.0.0 (master branch)
Hive: 1.2.1
```

My Spark jobs run in yarn-client deploy mode. If I execute the SQL `insert overwrite local directory '/home/test/call_center/' select * from call_center`, a HiveException appears as follows:

`Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Mkdirs failed to create file:/home/xitong/hive/stagingdir_hive_2019-02-19_17-31-00_678_1816816774691551856-1/-ext-10000/_temporary/0/_temporary/attempt_20190219173233_0002_m_000000_3 (exists=false, cwd=file:/data10/yarn/nm-local-dir/usercache/xitong/appcache/application_1543893582405_6126857/container_e124_1543893582405_6126857_01_000011) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)`

Currently, Spark SQL generates the local temporary path inside the local staging directory. The scheme of that temporary path starts with `file`, so the HiveException appears. This PR changes the local temporary path to an HDFS temporary path, and uses a DistributedFileSystem instance to copy the data from the HDFS temporary path to the local directory. If Spark runs in local deploy mode, `insert overwrite local directory` works fine.

## How was this patch tested?

UT cannot cover yarn-client mode. The test was done in my production environment.

Closes apache#23841 from beliefer/fix-bug-of-insert-overwrite-local-dir.

Authored-by: gengjiaan <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
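A rough sketch of the copy step described above, using standard Hadoop `FileSystem` APIs (the method and variable names here are illustrative, not the patch's actual code):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Write results to an HDFS staging path first, then pull them down to the
// local target directory, instead of having tasks write file:// paths.
def copyStagingToLocal(hdfsStagingDir: String, localDir: String, conf: Configuration): Unit = {
  val stagingPath = new Path(hdfsStagingDir)
  val fs: FileSystem = stagingPath.getFileSystem(conf) // DistributedFileSystem on HDFS
  // copyToLocalFile(delSrc, src, dst): copy from HDFS to the local file system
  fs.copyToLocalFile(false, stagingPath, new Path(localDir))
}
```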
techaddict pushed a commit that referenced this pull request on May 3, 2019:
## What changes were proposed in this pull request?
This PR supports `OpenJ9` in addition to `IBM JDK` and `OpenJDK` in Spark by handling `System.getProperty("java.vendor") = "Eclipse OpenJ9"`.
In `inferDefaultMemory()` and `getKrb5LoginModuleName()`, this PR uses the non-`IBM` code path.
```
$ ~/jdk-11.0.2+9_openj9-0.12.1/bin/jshell
| Welcome to JShell -- Version 11.0.2
| For an introduction type: /help intro
jshell> System.out.println(System.getProperty("java.vendor"))
Eclipse OpenJ9
jshell> System.out.println(System.getProperty("java.vm.info"))
JRE 11 Linux amd64-64-Bit Compressed References 20190204_127 (JIT enabled, AOT enabled)
OpenJ9 - 90dd8cb40
OMR - d2f4534b
JCL - 289c70b6844 based on jdk-11.0.2+9
jshell> System.out.println(Class.forName("com.ibm.lang.management.OperatingSystemMXBean").getDeclaredMethod("getTotalPhysicalMemory"))
public abstract long com.ibm.lang.management.OperatingSystemMXBean.getTotalPhysicalMemory()
jshell> System.out.println(Class.forName("com.sun.management.OperatingSystemMXBean").getDeclaredMethod("getTotalPhysicalMemorySize"))
public abstract long com.sun.management.OperatingSystemMXBean.getTotalPhysicalMemorySize()
jshell> System.out.println(Class.forName("com.ibm.security.auth.module.Krb5LoginModule"))
| Exception java.lang.ClassNotFoundException: com.ibm.security.auth.module.Krb5LoginModule
| at Class.forNameImpl (Native Method)
| at Class.forName (Class.java:339)
| at (#1:1)
jshell> System.out.println(Class.forName("com.sun.security.auth.module.Krb5LoginModule"))
class com.sun.security.auth.module.Krb5LoginModule
```
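A hedged sketch of the dispatch the jshell probes above justify (not Spark's exact code): `"Eclipse OpenJ9"` does not contain `"IBM"`, so a vendor-based check routes OpenJ9 down the same paths as OpenJDK.

```scala
// Vendor-based dispatch; OpenJ9 reports "Eclipse OpenJ9", so it takes the
// non-IBM branch, matching the class lookups demonstrated in jshell above.
val isIbmJdk = System.getProperty("java.vendor").contains("IBM")

val krb5LoginModuleName =
  if (isIbmJdk) "com.ibm.security.auth.module.Krb5LoginModule" // IBM JDK only
  else "com.sun.security.auth.module.Krb5LoginModule"          // OpenJDK and OpenJ9
```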
## How was this patch tested?
Existing test suites
Manual testing with OpenJ9.
Closes apache#24308 from kiszk/SPARK-27397.
Authored-by: Kazuaki Ishizaki <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
techaddict pushed a commit that referenced this pull request on Aug 13, 2019:
…comparison assertions

## What changes were proposed in this pull request?

This PR removes a few hardware-dependent assertions which can cause a failure on `aarch64`.

**x86_64**
```
root@donotdel-openlab-allinone-l00242678:/home/ubuntu# uname -a
Linux donotdel-openlab-allinone-l00242678 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

scala> import java.lang.Float.floatToRawIntBits
import java.lang.Float.floatToRawIntBits
scala> floatToRawIntBits(0.0f/0.0f)
res0: Int = -4194304
scala> floatToRawIntBits(Float.NaN)
res1: Int = 2143289344
```

**aarch64**
```
[root@arm-huangtianhua spark]# uname -a
Linux arm-huangtianhua 4.14.0-49.el7a.aarch64 #1 SMP Tue Apr 10 17:22:26 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

scala> import java.lang.Float.floatToRawIntBits
import java.lang.Float.floatToRawIntBits
scala> floatToRawIntBits(0.0f/0.0f)
res1: Int = 2143289344
scala> floatToRawIntBits(Float.NaN)
res2: Int = 2143289344
```

## How was this patch tested?

Pass the Jenkins (this removes the test coverage).

Closes apache#25186 from huangtianhua/special-test-case-for-aarch64.

Authored-by: huangtianhua <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
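To make the hardware dependence concrete, here is a hedged illustration (not the removed test code itself): the raw bit patterns of a computed NaN differ across architectures, so a portable assertion should check NaN-ness rather than exact bits.

```scala
import java.lang.Float.floatToRawIntBits

val computed = 0.0f / 0.0f
// Fragile: equal to floatToRawIntBits(Float.NaN) on aarch64 (2143289344),
// but -4194304 on x86_64, so a bit-level assertion is architecture-dependent.
val bits = floatToRawIntBits(computed)

// Portable: both values are NaN on every architecture.
assert(computed.isNaN && Float.NaN.isNaN)
```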
techaddict pushed a commit that referenced this pull request on Apr 4, 2023:
…nto w/ and w/o `ansi` suffix to pass sql analyzer test in ansi mode

### What changes were proposed in this pull request?

After apache#40496, run

```
SPARK_ANSI_SQL_MODE=true build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"
```

There is one test that fails with `spark.sql.ansi.enabled = true`:

```
[info] - timestampNTZ/datetime-special.sql_analyzer_test *** FAILED *** (11 milliseconds)
[info]   timestampNTZ/datetime-special.sql_analyzer_test
[info]   Expected "...date(999999, 3, 18, [false) AS make_date(999999, 3, 18)#x, make_date(-1, 1, 28, fals]e) AS make_date(-1, ...", but got "...date(999999, 3, 18, [true) AS make_date(999999, 3, 18)#x, make_date(-1, 1, 28, tru]e) AS make_date(-1, ..." Result did not match for query #1
[info]   select make_date(999999, 3, 18), make_date(-1, 1, 28) (SQLQueryTestSuite.scala:777)
[info]   org.scalatest.exceptions.TestFailedException:
```

The failure reason is that the last parameter of the function `MakeDate` is `failOnError: Boolean = SQLConf.get.ansiEnabled`. So this PR splits `timestampNTZ/datetime-special.sql` into w/ and w/o `ansi` versions to mask this test difference.

### Why are the changes needed?

Make `SQLQueryTestSuite` pass with `spark.sql.ansi.enabled = true`.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

- Pass GitHub Actions
- Manually checked `SPARK_ANSI_SQL_MODE=true build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"`

Closes apache#40552 from LuciferYang/SPARK-42921.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
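A minimal sketch of the root cause (the class below is a stand-in, not Spark's `MakeDate` expression): any expression whose default argument reads `SQLConf` at analysis time analyzes differently under ANSI mode, so a single golden file cannot match both settings.

```scala
import org.apache.spark.sql.internal.SQLConf

// Stand-in for an expression like MakeDate: with spark.sql.ansi.enabled=true
// the default becomes true, changing the analyzed plan and the golden output.
case class MakeDateLike(
    year: Int,
    month: Int,
    day: Int,
    failOnError: Boolean = SQLConf.get.ansiEnabled)
```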
Moved copy from JavaModel to JavaParams; moved del from JavaModel to JavaWrapper.
This includes a unit test which fails before this fix.