Changes from all commits · 1351 commits
fd828e1
[SPARK-16409][SQL] regexp_extract with optional groups causes NPE
srowen Aug 7, 2016
f37ed6e
[SPARK-16939][SQL] Fix build error by using `Tuple1` explicitly in St…
dongjoon-hyun Aug 7, 2016
ca0c6e6
[SPARK-16457][SQL] Fix Wrong Messages when CTAS with a Partition By C…
gatorsmile Aug 8, 2016
b8a7958
[SPARK-16936][SQL] Case Sensitivity Support for Refresh Temp Table
gatorsmile Aug 8, 2016
69e278e
[SPARK-16586][CORE] Handle JVM errors printed to stdout.
Aug 8, 2016
9748a29
[SPARK-16953] Make requestTotalExecutors public Developer API to be c…
tdas Aug 8, 2016
6fc54b7
Update docs to include SASL support for RPC
Aug 8, 2016
601c649
[SPARK-16563][SQL] fix spark sql thrift server FetchResults bug
Aug 9, 2016
bbbd3cb
[SPARK-16610][SQL] Add `orc.compress` as an alias for `compression` o…
HyukjinKwon Aug 9, 2016
41d9dca
[SPARK-16950] [PYSPARK] fromOffsets parameter support in KafkaUtils.c…
Aug 9, 2016
44115e9
[SPARK-16956] Make ApplicationState.MAX_NUM_RETRY configurable
JoshRosen Aug 9, 2016
2d136db
[SPARK-16905] SQL DDL: MSCK REPAIR TABLE
Aug 9, 2016
475ee38
Fixed typo
jupblb Aug 10, 2016
2285de7
[SPARK-16522][MESOS] Spark application throws exception on exit.
sun-rui Aug 10, 2016
20efb79
[SPARK-16324][SQL] regexp_extract should doc that it returns empty st…
srowen Aug 10, 2016
719ac5f
[SPARK-15899][SQL] Fix the construction of the file path with hadoop …
avulanov Aug 10, 2016
15637f7
Revert "[SPARK-15899][SQL] Fix the construction of the file path with…
srowen Aug 10, 2016
977fbbf
[SPARK-15639] [SPARK-16321] [SQL] Push down filter at RowGroups level…
viirya Aug 10, 2016
d3a30d2
[SPARK-16579][SPARKR] add install.spark function
junyangq Aug 10, 2016
1e40135
[SPARK-17010][MINOR][DOC] Wrong description in memory management docu…
WangTaoTheTonic Aug 11, 2016
8611bc2
[SPARK-16866][SQL] Infrastructure for file-based SQL end-to-end tests
petermaxlee Aug 10, 2016
51b1016
[SPARK-17008][SPARK-17009][SQL] Normalization and isolation in SQLQue…
petermaxlee Aug 11, 2016
ea8a198
[SPARK-17007][SQL] Move test data files into a test-data folder
petermaxlee Aug 11, 2016
4b434e7
[SPARK-17011][SQL] Support testing exceptions in SQLQueryTestSuite
petermaxlee Aug 11, 2016
0ed6236
Correct example value for spark.ssl.YYY.XXX settings
ash211 Aug 11, 2016
33a213f
[SPARK-15899][SQL] Fix the construction of the file path with hadoop …
avulanov Aug 11, 2016
6bf20cd
[SPARK-17015][SQL] group-by/order-by ordinal and arithmetic tests
petermaxlee Aug 11, 2016
bc683f0
[SPARK-17018][SQL] literals.sql for testing literal parsing
petermaxlee Aug 11, 2016
0fb0149
[SPARK-17022][YARN] Handle potential deadlock in driver handling mess…
WangTaoTheTonic Aug 11, 2016
b4047fc
[SPARK-16975][SQL] Column-partition path starting '_' should be handl…
dongjoon-hyun Aug 12, 2016
bde94cd
[SPARK-17013][SQL] Parse negative numeric literals
petermaxlee Aug 12, 2016
38378f5
[SPARK-12370][DOCUMENTATION] Documentation should link to examples …
jagadeesanas2 Aug 13, 2016
a21ecc9
[SPARK-17023][BUILD] Upgrade to Kafka 0.10.0.1 release
lresende Aug 13, 2016
750f880
[SPARK-16966][SQL][CORE] App Name is a randomUUID even when "spark.ap…
srowen Aug 13, 2016
e02d0d0
[SPARK-17027][ML] Avoid integer overflow in PolynomialExpansion.getPo…
zero323 Aug 14, 2016
8f4cacd
[SPARK-16508][SPARKR] Split docs for arrange and orderBy methods
junyangq Aug 15, 2016
4503632
[SPARK-17065][SQL] Improve the error message when encountering an inc…
zsxwing Aug 15, 2016
2e2c787
[SPARK-16964][SQL] Remove private[hive] from sql.hive.execution package
hvanhovell Aug 16, 2016
237ae54
Revert "[SPARK-16964][SQL] Remove private[hive] from sql.hive.executi…
rxin Aug 16, 2016
1c56971
[SPARK-16964][SQL] Remove private[sql] and private[spark] from sql.ex…
hvanhovell Aug 16, 2016
022230c
[SPARK-16519][SPARKR] Handle SparkR RDD generics that create warnings…
felixcheung Aug 16, 2016
6cb3eab
[SPARK-17089][DOCS] Remove api doc link for mapReduceTriplets operator
phalodi Aug 16, 2016
3e0163b
[SPARK-17084][SQL] Rename ParserUtils.assert to validate
hvanhovell Aug 17, 2016
68a24d3
[MINOR][DOC] Fix the descriptions for `properties` argument in the do…
Aug 17, 2016
22c7660
[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grow…
kiszk Aug 17, 2016
394d598
[SPARK-17102][SQL] bypass UserDefinedGenerator for json format check
cloud-fan Aug 17, 2016
9406f82
[SPARK-17096][SQL][STREAMING] Improve exception string reported throu…
tdas Aug 17, 2016
585d1d9
[SPARK-17038][STREAMING] fix metrics retrieval source of 'lastReceive…
keypointt Aug 17, 2016
91aa532
[SPARK-16995][SQL] TreeNodeException when flat mapping RelationalGrou…
viirya Aug 18, 2016
5735b8b
[SPARK-16391][SQL] Support partial aggregation for reduceGroups
rxin Aug 18, 2016
ec5f157
[SPARK-17117][SQL] 1 / NULL should not fail analysis
petermaxlee Aug 18, 2016
176af17
[MINOR][SPARKR] R API documentation for "coltypes" is confusing
keypointt Aug 10, 2016
ea684b6
[SPARK-17069] Expose spark.range() as table-valued function in SQL
ericl Aug 18, 2016
c180d63
[SPARK-16947][SQL] Support type coercion and foldable expression for …
petermaxlee Aug 19, 2016
05b180f
HOTFIX: compilation broken due to protected ctor.
rxin Aug 19, 2016
d55d1f4
[SPARK-16961][CORE] Fixed off-by-one error that biased randomizeInPlace
nicklavers Aug 19, 2016
e0c60f1
[SPARK-16994][SQL] Whitelist operators for predicate pushdown
rxin Aug 19, 2016
d0707c6
[SPARK-11227][CORE] UnknownHostException can be thrown when NameNode …
sarutak Aug 19, 2016
3276ccf
[SPARK-16686][SQL] Remove PushProjectThroughSample since it is handle…
viirya Jul 26, 2016
ae89c8e
[SPARK-17113] [SHUFFLE] Job failure due to Executor OOM in offheap mode
Aug 19, 2016
efe8322
[SPARK-17149][SQL] array.sql for testing array related functions
petermaxlee Aug 20, 2016
379b127
[SPARK-17158][SQL] Change error message for out of range numeric lite…
srinathshankar Aug 20, 2016
f7458c7
[SPARK-17150][SQL] Support SQL generation for inline tables
petermaxlee Aug 20, 2016
4c4c275
[SPARK-17104][SQL] LogicalRelation.newInstance should follow the sema…
viirya Aug 20, 2016
24dd9a7
[SPARK-17124][SQL] RelationalGroupedDataset.agg should preserve order…
petermaxlee Aug 20, 2016
faff929
[SPARK-12666][CORE] SparkSubmit packages fix for when 'default' conf …
BryanCutler Aug 20, 2016
26d5a8b
[MINOR][R] add SparkR.Rcheck/ and SparkR_*.tar.gz to R/.gitignore
mengxr Aug 21, 2016
0297896
[SPARK-16508][SPARKR] Fix CRAN undocumented/duplicated arguments warn…
junyangq Aug 20, 2016
e62b29f
[SPARK-17098][SQL] Fix `NullPropagation` optimizer to handle `COUNT(N…
dongjoon-hyun Aug 21, 2016
49cc44d
[SPARK-17115][SQL] decrease the threshold when split expressions
Aug 22, 2016
2add45f
[SPARK-17085][STREAMING][DOCUMENTATION AND ACTUAL CODE DIFFERS - UNSU…
jagadeesanas2 Aug 22, 2016
7919598
[SPARKR][MINOR] Fix Cache Folder Path in Windows
junyangq Aug 22, 2016
94eff08
[SPARK-16320][DOC] Document G1 heap region's effect on spark 2.0 vs 1.6
srowen Aug 22, 2016
6dcc1a3
[SPARKR][MINOR] Add Xiangrui and Felix to maintainers
shivaram Aug 22, 2016
01a4d69
[SPARK-17162] Range does not support SQL generation
ericl Aug 22, 2016
b65b041
[SPARK-16508][SPARKR] doc updates and more CRAN check fixes
felixcheung Aug 22, 2016
ff2f873
[SPARK-16550][SPARK-17042][CORE] Certain classes fail to deserialize …
ericl Aug 22, 2016
2258989
[SPARK-16577][SPARKR] Add CRAN documentation checks to run-tests.sh
shivaram Aug 23, 2016
eaea1c8
[SPARK-17182][SQL] Mark Collect as non-deterministic
liancheng Aug 23, 2016
d16f9a0
[SPARKR][MINOR] Update R DESCRIPTION file
felixcheung Aug 23, 2016
811a2ce
[SPARK-13286] [SQL] add the next expression of SQLException as cause
Aug 23, 2016
cc40189
[SPARKR][MINOR] Remove reference link for common Windows environment …
junyangq Aug 23, 2016
a2a7506
[MINOR][DOC] Use standard quotes instead of "curly quote" marks from …
HyukjinKwon Aug 23, 2016
a772b4b
[SPARK-17194] Use single quotes when generating SQL for string literals
JoshRosen Aug 23, 2016
a6e6a04
[MINOR][SQL] Remove implemented functions from comments of 'HiveSessi…
weiqingy Aug 24, 2016
df87f16
[SPARK-17186][SQL] remove catalog table type INDEX
cloud-fan Aug 24, 2016
ce7dce1
[MINOR][BUILD] Fix Java CheckStyle Error
weiqingy Aug 24, 2016
33d79b5
[SPARK-17086][ML] Fix InvalidArgumentException issue in QuantileDiscr…
Aug 24, 2016
29091d7
[SPARKR][MINOR] Fix doc for show method
junyangq Aug 24, 2016
9f924a0
[SPARK-16781][PYSPARK] java launched by PySpark as gateway may not be…
srowen Aug 24, 2016
4327337
[SPARKR][MINOR] Add more examples to window function docs
junyangq Aug 24, 2016
9f363a6
[SPARKR][MINOR] Add installation message for remote master mode and i…
junyangq Aug 24, 2016
3258f27
[SPARK-16216][SQL][BRANCH-2.0] Backport Read/write dateFormat/timesta…
HyukjinKwon Aug 25, 2016
aa57083
[SPARK-17228][SQL] Not infer/propagate non-deterministic constraints
sameeragarwal Aug 25, 2016
c1c4980
[SPARK-17193][CORE] HadoopRDD NPE at DEBUG log level when getLocation…
srowen Aug 25, 2016
fb1c697
[SPARK-17061][SPARK-17093][SQL] MapObjects` should make copies of uns…
lw-lin Aug 25, 2016
88481ea
Revert "[SPARK-17061][SPARK-17093][SQL] MapObjects` should make copie…
hvanhovell Aug 25, 2016
184e78b
[SPARK-17061][SPARK-17093][SQL][BACKPORT] MapObjects should make copi…
lw-lin Aug 25, 2016
48ecf3d
[SPARK-16991][SPARK-17099][SPARK-17120][SQL] Fix Outer Join Eliminati…
gatorsmile Aug 25, 2016
2b32a44
[SPARK-17167][2.0][SQL] Issue Exceptions when Analyze Table on In-Mem…
gatorsmile Aug 25, 2016
356a359
[SPARK-16700][PYSPARK][SQL] create DataFrame from dict/Row with schema
Aug 15, 2016
55db262
[SPARK-15083][WEB UI] History Server can OOM due to unlimited TaskUIData
ajbozarth Aug 25, 2016
b3a4430
[SPARKR][BUILD] ignore cran-check.out under R folder
wangmiao1981 Aug 25, 2016
ff2e270
[SPARK-17205] Literal.sql should handle Infinity and NaN
JoshRosen Aug 25, 2016
73014a2
[SPARK-17231][CORE] Avoid building debug or trace log messages unless…
Aug 25, 2016
27ed6d5
[SPARK-17242][DOCUMENT] Update links of external dstream projects
zsxwing Aug 26, 2016
6f82d2d
[SPARKR][MINOR] Fix example of spark.naiveBayes
junyangq Aug 26, 2016
deb6a54
[SPARK-17165][SQL] FileStreamSource should not track the list of seen…
petermaxlee Aug 26, 2016
52feb3f
[SPARK-17246][SQL] Add BigDecimal literal
hvanhovell Aug 26, 2016
dfdfc30
[SPARK-17235][SQL] Support purging of old logs in MetadataLog
petermaxlee Aug 26, 2016
9c0ac6b
[SPARK-17244] Catalyst should not pushdown non-deterministic join con…
sameeragarwal Aug 26, 2016
94d52d7
[SPARK-17269][SQL] Move finish analysis optimization stage into its o…
rxin Aug 27, 2016
f91614f
[SPARK-17270][SQL] Move object optimization rules into its own file (…
rxin Aug 27, 2016
901ab06
[SPARK-17274][SQL] Move join optimizer rules into a separate file
rxin Aug 27, 2016
56a8426
[SPARK-15382][SQL] Fix a bug in sampling with replacement
maropu Aug 27, 2016
7306c5f
[ML][MLLIB] The require condition and message doesn't match in Sparse…
Aug 27, 2016
5487fa0
[SPARK-17216][UI] fix event timeline bars length
Aug 27, 2016
eec0371
[SPARK-16216][SQL][FOLLOWUP][BRANCH-2.0] Bacoport enabling timestamp …
HyukjinKwon Aug 28, 2016
3d283f6
[SPARK-17063] [SQL] Improve performance of MSCK REPAIR TABLE with Hiv…
Aug 29, 2016
976a43d
[SPARK-16581][SPARKR] Make JVM backend calling functions public
shivaram Aug 29, 2016
5903257
[SPARK-17301][SQL] Remove unused classTag field from AtomicType base …
JoshRosen Aug 30, 2016
f35b10a
[SPARK-17264][SQL] DataStreamWriter should document that it only supp…
srowen Aug 30, 2016
bc6c0d9
[SPARK-17318][TESTS] Fix ReplSuite replicating blocks of object with …
zsxwing Aug 31, 2016
021aa28
[SPARK-17243][WEB UI] Spark 2.0 History Server won't load with very l…
ajbozarth Aug 31, 2016
c17334e
[SPARK-17316][CORE] Make CoarseGrainedSchedulerBackend.removeExecutor…
zsxwing Aug 31, 2016
ad36892
[SPARK-17326][SPARKR] Fix tests with HiveContext in SparkR not to be …
HyukjinKwon Aug 31, 2016
d01251c
[SPARK-17316][TESTS] Fix MesosCoarseGrainedSchedulerBackendSuite
zsxwing Aug 31, 2016
8d15c1a
[SPARK-16581][SPARKR] Fix JVM API tests in SparkR
shivaram Aug 31, 2016
191d996
[SPARK-17180][SPARK-17309][SPARK-17323][SQL][2.0] create AlterViewAsC…
cloud-fan Sep 1, 2016
8711b45
[SPARKR][MINOR] Fix windowPartitionBy example
junyangq Sep 1, 2016
6281b74
[SPARK-17318][TESTS] Fix ReplSuite replicating blocks of object with …
zsxwing Sep 1, 2016
13bacd7
[SPARK-17271][SQL] Planner adds un-necessary Sort even if child orde…
tejasapatil Sep 1, 2016
ac22ab0
[SPARK-16926] [SQL] Remove partition columns from partition metadata.
bchocho Sep 1, 2016
dd377a5
[SPARK-17355] Workaround for HIVE-14684 / HiveResultSetMetaData.isSig…
JoshRosen Sep 1, 2016
f946323
[SPARK-17342][WEBUI] Style of event timeline is broken
sarutak Sep 2, 2016
171bdfd
[SPARK-16883][SPARKR] SQL decimal type is not properly cast to number…
wangmiao1981 Sep 2, 2016
d9d10ff
[SPARK-17352][WEBUI] Executor computing time can be negative-number b…
sarutak Sep 2, 2016
91a3cf1
[SPARK-16935][SQL] Verification of Function-related ExternalCatalog APIs
gatorsmile Sep 2, 2016
30e5c84
[SPARK-17261] [PYSPARK] Using HiveContext after re-creating SparkCont…
zjffdu Sep 2, 2016
29ac2f6
[SPARK-17376][SPARKR] Spark version should be available in R
felixcheung Sep 2, 2016
d4ae35d
[SPARKR][DOC] regexp_extract should doc that it returns empty string …
felixcheung Sep 2, 2016
03d9af6
[SPARK-17376][SPARKR] followup - change since version
felixcheung Sep 2, 2016
c9c36fa
[SPARK-17230] [SQL] Should not pass optimized query into QueryExecuti…
Sep 2, 2016
a3930c3
[SPARK-16334] Reusing same dictionary column for decoding consecutive…
sameeragarwal Sep 2, 2016
b8f65da
Fix build
davies Sep 2, 2016
c0ea770
Revert "[SPARK-16334] Reusing same dictionary column for decoding con…
davies Sep 2, 2016
12a2e2a
[SPARKR][MINOR] Fix docs for sparkR.session and count
junyangq Sep 3, 2016
949544d
[SPARK-17347][SQL][EXAMPLES] Encoder in Dataset example has incorrect…
CodingCat Sep 3, 2016
196d62e
[MINOR][SQL] Not dropping all necessary tables
techaddict Sep 3, 2016
a7f5e70
[SPARK-16959][SQL] Rebuild Table Comment when Retrieving Metadata fro…
gatorsmile Aug 10, 2016
3500dbc
[SPARK-16663][SQL] desc table should be consistent between data sourc…
cloud-fan Jul 26, 2016
704215d
[SPARK-17335][SQL] Fix ArrayType and MapType CatalogString.
hvanhovell Sep 3, 2016
e387c8b
[SPARK-17391][TEST][2.0] Fix Two Test Failures After Backport
gatorsmile Sep 5, 2016
f92d874
[SPARK-17353][SPARK-16943][SPARK-16942][BACKPORT-2.0][SQL] Fix multip…
gatorsmile Sep 6, 2016
7b1aa21
[SPARK-17369][SQL] MetastoreRelation toJSON throws AssertException du…
clockfly Sep 6, 2016
dd27530
[SPARK-17358][SQL] Cached table(parquet/orc) should be shard between …
watermen Sep 6, 2016
f56b70f
Revert "[SPARK-17369][SQL] MetastoreRelation toJSON throws AssertExce…
yhuai Sep 6, 2016
286ccd6
[SPARK-17369][SQL][2.0] MetastoreRelation toJSON throws AssertExcepti…
clockfly Sep 6, 2016
c0f1f53
[SPARK-17356][SQL] Fix out of memory issue when generating JSON for T…
clockfly Sep 6, 2016
95e44dc
[SPARK-16922] [SPARK-17211] [SQL] make the address of values portable…
Sep 6, 2016
5343804
[SPARK-16334] [BACKPORT] Reusing same dictionary column for decoding …
sameeragarwal Sep 6, 2016
130a80f
[SPARK-17378][BUILD] Upgrade snappy-java to 1.1.2.6
a-roberts Sep 6, 2016
0ae9786
[SPARK-17299] TRIM/LTRIM/RTRIM should not strips characters other tha…
techaddict Sep 6, 2016
0157514
[SPARK-17110] Fix StreamCorruptionException in BlockManager.getRemote…
JoshRosen Sep 6, 2016
f3cfce0
[SPARK-17316][CORE] Fix the 'ask' type parameter in 'removeExecutor'
zsxwing Sep 6, 2016
a23d406
[SPARK-17279][SQL] better error message for exceptions during ScalaUD…
cloud-fan Sep 6, 2016
796577b
[SPARK-17372][SQL][STREAMING] Avoid serialization issues by using Arr…
tdas Sep 7, 2016
ee6301a
[SPARK-16785] R dapply doesn't return array or raw columns
clarkfitzg Sep 7, 2016
c8811ad
[SPARK-17296][SQL] Simplify parser join processing [BACKPORT 2.0]
hvanhovell Sep 7, 2016
e6caceb
[MINOR][SQL] Fixing the typo in unit test
Sep 7, 2016
078ac0e
[SPARK-17370] Shuffle service files not invalidated when a slave is lost
ericl Sep 7, 2016
067752c
[SPARK-16533][CORE] - backport driver deadlock fix to 2.0
Sep 7, 2016
28377da
[SPARK-17339][CORE][BRANCH-2.0] Do not use path to get a filesystem i…
HyukjinKwon Sep 8, 2016
e169085
[SPARK-16711] YarnShuffleService doesn't re-init properly on YARN rol…
Sep 8, 2016
c6e0dd1
[SPARK-17442][SPARKR] Additional arguments in write.df are not passed…
felixcheung Sep 8, 2016
a7f1c18
[SPARK-17456][CORE] Utility for parsing Spark versions
jkbradley Sep 9, 2016
6f02f40
[SPARK-17354] [SQL] Partitioning by dates/timestamps should work with…
HyukjinKwon Sep 9, 2016
c2378a6
[SPARK-17396][CORE] Share the task support between UnionRDD instances.
rdblue Sep 10, 2016
bde5452
[SPARK-17439][SQL] Fixing compression issues with approximate quantil…
thunterdb Sep 11, 2016
d293062
[SPARK-17336][PYSPARK] Fix appending multiple times to PYTHONPATH fro…
BryanCutler Sep 11, 2016
3052152
[SPARK-17486] Remove unused TaskMetricsUIData.updatedBlockStatuses field
JoshRosen Sep 12, 2016
0a36e36
[SPARK-17503][CORE] Fix memory leak in Memory store when unable to ca…
clockfly Sep 12, 2016
37f45bf
[SPARK-14818] Post-2.0 MiMa exclusion and build changes
JoshRosen Sep 12, 2016
a3fc576
[SPARK-17485] Prevent failed remote reads of cached blocks from faili…
JoshRosen Sep 12, 2016
1f72e77
[SPARK-17474] [SQL] fix python udf in TakeOrderedAndProjectExec
Sep 12, 2016
b17f10c
[SPARK-17515] CollectLimit.execute() should perform per-partition limits
JoshRosen Sep 13, 2016
c142645
[SPARK-17531] Don't initialize Hive Listeners for the Execution Client
brkyvz Sep 13, 2016
12ebfbe
[SPARK-17525][PYTHON] Remove SparkContext.clearFiles() from the PySpa…
sjakthol Sep 14, 2016
c6ea748
[SPARK-17480][SQL] Improve performance by removing or caching List.le…
seyfe Sep 14, 2016
5493107
[SPARK-17445][DOCS] Reference an ASF page as the main place to find t…
srowen Sep 14, 2016
6fe5972
[SPARK-17514] df.take(1) and df.limit(1).collect() should perform the…
JoshRosen Sep 14, 2016
fab77da
[SPARK-17511] Yarn Dynamic Allocation: Avoid marking released contain…
kishorvpatil Sep 14, 2016
fffcec9
[SPARK-17463][CORE] Make CollectionAccumulator and SetAccumulator's v…
zsxwing Sep 14, 2016
bb2bdb4
[SPARK-17465][SPARK CORE] Inappropriate memory management in `org.apa…
Sep 14, 2016
5c2bc83
[SPARK-17521] Error when I use sparkContext.makeRDD(Seq())
codlife Sep 15, 2016
a09c258
[SPARK-17317][SPARKR] Add SparkR vignette to branch 2.0
junyangq Sep 15, 2016
e77a437
[SPARK-17547] Ensure temp shuffle data file is cleaned up after error
JoshRosen Sep 15, 2016
62ab536
[SPARK-17114][SQL] Fix aggregates grouped by literals with empty input
hvanhovell Sep 15, 2016
abb89c4
[SPARK-17483] Refactoring in BlockManager status reporting and block …
JoshRosen Sep 12, 2016
0169c2e
[SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifi…
clockfly Sep 15, 2016
9c23f44
[SPARK-17484] Prevent invalid block locations from being reported aft…
JoshRosen Sep 15, 2016
5ad4395
[SPARK-17558] Bump Hadoop 2.7 version from 2.7.2 to 2.7.3
rxin Sep 16, 2016
3fce125
[SPARK-17549][SQL] Only collect table size stat in driver for cached …
Sep 16, 2016
9ff158b
Correct fetchsize property name in docs
darabos Sep 17, 2016
3ca0dc0
[SPARK-17567][DOCS] Use valid url to Spark RDD paper
keypointt Sep 17, 2016
c9bd67e
[SPARK-17561][DOCS] DataFrameWriter documentation formatting problems
srowen Sep 16, 2016
eb2675d
[SPARK-17548][MLLIB] Word2VecModel.findSynonyms no longer spuriously …
willb Sep 17, 2016
ec2b736
[SPARK-17575][DOCS] Remove extra table tags in configuration document
phalodi Sep 17, 2016
a3bba37
[SPARK-17480][SQL][FOLLOWUP] Fix more instances which calls List.leng…
HyukjinKwon Sep 17, 2016
bec0770
[SPARK-17491] Close serialization stream to fix wrong answer bug in p…
JoshRosen Sep 17, 2016
0cfc046
Revert "[SPARK-17480][SQL][FOLLOWUP] Fix more instances which calls L…
tdas Sep 17, 2016
5fd354b
[SPARK-17480][SQL][FOLLOWUP] Fix more instances which calls List.leng…
HyukjinKwon Sep 17, 2016
cf728b0
[SPARK-17541][SQL] fix some DDL bugs about table management when same…
cloud-fan Sep 18, 2016
5619f09
[SPARK-17546][DEPLOY] start-* scripts should use hostname -f
srowen Sep 18, 2016
6c67d86
[SPARK-17586][BUILD] Do not call static member via instance reference
HyukjinKwon Sep 18, 2016
151f808
[SPARK-16462][SPARK-16460][SPARK-15144][SQL] Make CSV cast null value…
lw-lin Sep 18, 2016
27ce39c
[SPARK-17571][SQL] AssertOnQuery.condition should always return Boole…
petermaxlee Sep 18, 2016
ac06039
[SPARK-17297][DOCS] Clarify window/slide duration as absolute time, n…
srowen Sep 19, 2016
c4660d6
[SPARK-17589][TEST][2.0] Fix test case `create external table` in Met…
gatorsmile Sep 19, 2016
f56035b
[SPARK-17473][SQL] fixing docker integration tests error due to diffe…
sureshthalamati Sep 19, 2016
d6191a0
[SPARK-17438][WEBUI] Show Application.executorLimit in the applicatio…
zsxwing Sep 19, 2016
fef3ec1
[SPARK-16439] [SQL] bring back the separator in SQL UI
Sep 19, 2016
c02bc92
[SPARK-17100] [SQL] fix Python udf in filter on top of outer join
Sep 19, 2016
7026eb8
[SPARK-17160] Properly escape field names in code-generated error mes…
JoshRosen Sep 20, 2016
5456a1b
[SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata
petermaxlee Sep 20, 2016
643f161
Revert "[SPARK-17513][SQL] Make StreamExecution garbage-collect its m…
cloud-fan Sep 20, 2016
e76f4f4
[SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable
cloud-fan Sep 20, 2016
2bd37ce
[SPARK-17549][SQL] Revert "[] Only collect table size stat in driver …
yhuai Sep 20, 2016
8d8e233
[SPARK-15698][SQL][STREAMING] Add the ability to remove the old Metad…
jerryshao Sep 20, 2016
726f057
[SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata
petermaxlee Sep 21, 2016
65295ba
[SPARK-17617][SQL] Remainder(%) expression.eval returns incorrect res…
clockfly Sep 21, 2016
45bccdd
[BACKPORT 2.0][MINOR][BUILD] Fix CheckStyle Error
weiqingy Sep 21, 2016
cd0bd89
[SPARK-17418] Prevent kinesis-asl-assembly artifacts from being publi…
JoshRosen Sep 21, 2016
59e6ab1
[SPARK-17512][CORE] Avoid formatting to python path for yarn and meso…
jerryshao Sep 21, 2016
966abd6
[SPARK-17627] Mark Streaming Providers Experimental
marmbrus Sep 22, 2016
ec377e7
[SPARK-17494][SQL] changePrecision() on compact decimal should respec…
Sep 22, 2016
053b20a
Bump doc version for release 2.0.1.
rxin Sep 22, 2016
00f2e28
Preparing Spark release v2.0.1-rc1
pwendell Sep 22, 2016
e8b26be
Preparing development version 2.0.2-SNAPSHOT
pwendell Sep 22, 2016
b25a8e6
[SPARK-17421][DOCS] Documenting the current treatment of MAVEN_OPTS.
frreiss Sep 22, 2016
f14f47f
Skip building R vignettes if Spark is not built
shivaram Sep 22, 2016
243bdb1
[SPARK-17613] S3A base paths with no '/' at the end return empty Data…
brkyvz Sep 22, 2016
47fc0b9
[SPARK-17638][STREAMING] Stop JVM StreamingContext when the Python pr…
zsxwing Sep 22, 2016
0a593db
[SPARK-17616][SQL] Support a single distinct aggregate combined with …
hvanhovell Sep 22, 2016
c2cb841
[SPARK-17599][SPARK-17569] Backport and to Spark 2.0 branch
brkyvz Sep 23, 2016
04141ad
Preparing Spark release v2.0.1-rc2
pwendell Sep 23, 2016
c393d86
Preparing development version 2.0.2-SNAPSHOT
pwendell Sep 23, 2016
22216d6
[SPARK-17502][17609][SQL][BACKPORT][2.0] Fix Multiple Bugs in DDL Sta…
gatorsmile Sep 23, 2016
54d4eee
[SPARK-16240][ML] ML persistence backward compatibility for LDA - 2.0…
GayathriMurali Sep 23, 2016
d3f90e7
[SPARK-17640][SQL] Avoid using -1 as the default batchId for FileStre…
zsxwing Sep 23, 2016
7 changes: 7 additions & 0 deletions .gitignore
@@ -22,6 +22,7 @@
/lib/
R-unit-tests.log
R/unit-tests.out
R/cran-check.out
build/*.jar
build/apache-maven*
build/scala*
@@ -72,7 +73,13 @@ metastore/
metastore_db/
sql/hive-thriftserver/test_warehouses
warehouse/
spark-warehouse/

# For R session data
.RData
.RHistory
.Rhistory
*.Rproj
*.Rproj.*

.Rproj.user
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -6,7 +6,7 @@ It lists steps that are required before creating a PR. In particular, consider:

- Is the change important and ready enough to ask the community to spend time reviewing?
- Have you searched for existing, related JIRAs and pull requests?
- Is this a new feature that can stand alone as a package on http://spark-packages.org ?
- Is this a new feature that can stand alone as a [third party project](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects) ?
- Is the change being proposed clearly explained and motivated?

When you contribute code, you affirm that the contribution is your original work and that you
3 changes: 2 additions & 1 deletion LICENSE
@@ -263,7 +263,7 @@ The text of each license is also included at licenses/LICENSE-[project].txt.
(New BSD license) Protocol Buffer Java API (org.spark-project.protobuf:protobuf-java:2.4.1-shaded - http://code.google.com/p/protobuf)
(The BSD License) Fortran to Java ARPACK (net.sourceforge.f2j:arpack_combined_all:0.1 - http://f2j.sourceforge.net)
(The BSD License) xmlenc Library (xmlenc:xmlenc:0.52 - http://xmlenc.sourceforge.net)
(The New BSD License) Py4J (net.sf.py4j:py4j:0.9.2 - http://py4j.sourceforge.net/)
(The New BSD License) Py4J (net.sf.py4j:py4j:0.10.3 - http://py4j.sourceforge.net/)
(Two-clause BSD-style license) JUnit-Interface (com.novocode:junit-interface:0.10 - http://github.com/szeiger/junit-interface/)
(BSD licence) sbt and sbt-launch-lib.bash
(BSD 3 Clause) d3.min.js (https://github.com/mbostock/d3/blob/master/LICENSE)
@@ -296,3 +296,4 @@ The text of each license is also included at licenses/LICENSE-[project].txt.
(MIT License) blockUI (http://jquery.malsup.com/block/)
(MIT License) RowsGroup (http://datatables.net/license/mit)
(MIT License) jsonFormatter (http://www.jqueryscript.net/other/jQuery-Plugin-For-Pretty-JSON-Formatting-jsonFormatter.html)
(MIT License) modernizr (https://github.com/Modernizr/Modernizr/blob/master/LICENSE)
13 changes: 5 additions & 8 deletions NOTICE
@@ -1,5 +1,5 @@
Apache Spark
Copyright 2014 The Apache Software Foundation.
Copyright 2014 and onwards The Apache Software Foundation.

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
@@ -12,7 +12,9 @@ Common Development and Distribution License 1.0
The following components are provided under the Common Development and Distribution License 1.0. See project link for details.

(CDDL 1.0) Glassfish Jasper (org.mortbay.jetty:jsp-2.1:6.1.14 - http://jetty.mortbay.org/project/modules/jsp-2.1)
(CDDL 1.0) JAX-RS (https://jax-rs-spec.java.net/)
(CDDL 1.0) Servlet Specification 2.5 API (org.mortbay.jetty:servlet-api-2.5:6.1.14 - http://jetty.mortbay.org/project/modules/servlet-api-2.5)
(CDDL 1.0) (GPL2 w/ CPE) javax.annotation API (https://glassfish.java.net/nonav/public/CDDL+GPL.html)
(COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.0) (GNU General Public Library) Streaming API for XML (javax.xml.stream:stax-api:1.0-2 - no url defined)
(Common Development and Distribution License (CDDL) v1.0) JavaBeans Activation Framework (JAF) (javax.activation:activation:1.1 - http://java.sun.com/products/javabeans/jaf/index.jsp)

@@ -22,15 +24,10 @@ Common Development and Distribution License 1.1

The following components are provided under the Common Development and Distribution License 1.1. See project link for details.

(CDDL 1.1) (GPL2 w/ CPE) org.glassfish.hk2 (https://hk2.java.net)
(CDDL 1.1) (GPL2 w/ CPE) JAXB API bundle for GlassFish V3 (javax.xml.bind:jaxb-api:2.2.2 - https://jaxb.dev.java.net/)
(CDDL 1.1) (GPL2 w/ CPE) JAXB RI (com.sun.xml.bind:jaxb-impl:2.2.3-1 - http://jaxb.java.net/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-core (com.sun.jersey:jersey-core:1.8 - https://jersey.dev.java.net/jersey-core/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-core (com.sun.jersey:jersey-core:1.9 - https://jersey.java.net/jersey-core/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-guice (com.sun.jersey.contribs:jersey-guice:1.9 - https://jersey.java.net/jersey-contribs/jersey-guice/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-json (com.sun.jersey:jersey-json:1.8 - https://jersey.dev.java.net/jersey-json/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-json (com.sun.jersey:jersey-json:1.9 - https://jersey.java.net/jersey-json/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-server (com.sun.jersey:jersey-server:1.8 - https://jersey.dev.java.net/jersey-server/)
(CDDL 1.1) (GPL2 w/ CPE) jersey-server (com.sun.jersey:jersey-server:1.9 - https://jersey.java.net/jersey-server/)
(CDDL 1.1) (GPL2 w/ CPE) Jersey 2 (https://jersey.java.net)

========================================================================
Common Public License 1.0
2 changes: 2 additions & 0 deletions R/.gitignore
@@ -4,3 +4,5 @@
lib
pkg/man
pkg/html
SparkR.Rcheck/
SparkR_*.tar.gz
12 changes: 6 additions & 6 deletions R/DOCUMENTATION.md
@@ -1,12 +1,12 @@
# SparkR Documentation

SparkR documentation is generated using in-source comments annotated using using
`roxygen2`. After making changes to the documentation, to generate man pages,
SparkR documentation is generated by using in-source comments and annotated by using
[`roxygen2`](https://cran.r-project.org/web/packages/roxygen2/index.html). After making changes to the documentation and generating man pages,
you can run the following from an R console in the SparkR home directory

library(devtools)
devtools::document(pkg="./pkg", roclets=c("rd"))

```R
library(devtools)
devtools::document(pkg="./pkg", roclets=c("rd"))
```
You can verify if your changes are good by running

R CMD check pkg/
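
Taken together, a minimal edit-and-verify cycle for the SparkR docs might look like the sketch below (it assumes `devtools` is already installed and that you start from the SparkR home directory, `R/`):

```bash
# Regenerate the Rd man pages from the roxygen2 comments...
R -e 'library(devtools); devtools::document(pkg="./pkg", roclets=c("rd"))'
# ...then verify the package still passes the check.
R CMD check pkg/
```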
32 changes: 18 additions & 14 deletions R/README.md
@@ -1,12 +1,13 @@
# R on Spark

SparkR is an R package that provides a light-weight frontend to use Spark from R.

### Installing sparkR

Libraries of sparkR need to be created in `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`.
By default the above script uses the system wide installation of R. However, this can be changed to any user installed location of R by setting the environment variable `R_HOME` the full path of the base directory where R is installed, before running install-dev.sh script.
Example:
```
```bash
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and RScript
export R_HOME=/home/username/R
./install-dev.sh
@@ -17,8 +18,9 @@ export R_HOME=/home/username/R
#### Build Spark

Build Spark with [Maven](http://spark.apache.org/docs/latest/building-spark.html#building-with-buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
```
build/mvn -DskipTests -Psparkr package

```bash
build/mvn -DskipTests -Psparkr package
```

#### Running sparkR
@@ -37,8 +39,8 @@ To set other options like driver memory, executor memory etc. you can pass in the

#### Using SparkR from RStudio

If you wish to use SparkR from RStudio or other R frontends you will need to set some environment variables which point SparkR to your Spark installation. For example
```
If you wish to use SparkR from RStudio or other R frontends you will need to set some environment variables which point SparkR to your Spark installation. For example
```R
# Set this to where Spark is installed
Sys.setenv(SPARK_HOME="/Users/username/spark")
# This line loads SparkR from the installed directory
@@ -55,23 +57,25 @@ Once you have made your changes, please include unit tests for them and run existing

#### Generating documentation

The SparkR documentation (Rd files and HTML files) are not a part of the source repository. To generate them you can run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs and these packages need to be installed on the machine before using the script.
The SparkR documentation (Rd files and HTML files) are not a part of the source repository. To generate them you can run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs and these packages need to be installed on the machine before using the script. Also, you may need to install these [prerequisites](https://github.com/apache/spark/tree/master/docs#prerequisites). See also, `R/DOCUMENTATION.md`

### Examples, Unit tests

SparkR comes with several sample programs in the `examples/src/main/r` directory.
To run one of them, use `./bin/spark-submit <filename> <args>`. For example:

./bin/spark-submit examples/src/main/r/dataframe.R

You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):

R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
./R/run-tests.sh
```bash
./bin/spark-submit examples/src/main/r/dataframe.R
```
You can also run the unit tests for SparkR by running. You need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first:
```bash
R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
./R/run-tests.sh
```

### Running on YARN

The `./bin/spark-submit` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
```
```bash
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --master yarn examples/src/main/r/dataframe.R
```
20 changes: 20 additions & 0 deletions R/WINDOWS.md
@@ -11,3 +11,23 @@ include Rtools and R in `PATH`.
directory in Maven in `PATH`.
4. Set `MAVEN_OPTS` as described in [Building Spark](http://spark.apache.org/docs/latest/building-spark.html).
5. Open a command shell (`cmd`) in the Spark directory and run `mvn -DskipTests -Psparkr package`

## Unit tests

To run the SparkR unit tests on Windows, the following steps are required —assuming you are in the Spark root directory and do not have Apache Hadoop installed already:

1. Create a folder to download Hadoop related files for Windows. For example, `cd ..` and `mkdir hadoop`.

2. Download the relevant Hadoop bin package from [steveloughran/winutils](https://github.com/steveloughran/winutils). While these are not official ASF artifacts, they are built from the ASF release git hashes by a Hadoop PMC member on a dedicated Windows VM. For further reading, consult [Windows Problems on the Hadoop wiki](https://wiki.apache.org/hadoop/WindowsProblems).

3. Install the files into `hadoop\bin`; make sure that `winutils.exe` and `hadoop.dll` are present.

4. Set the environment variable `HADOOP_HOME` to the full path to the newly created `hadoop` directory.

5. Run unit tests for SparkR by running the command below. You need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first:

```
R -e "install.packages('testthat', repos='http://cran.us.r-project.org')"
.\bin\spark-submit2.cmd --conf spark.hadoop.fs.default.name="file:///" R\pkg\tests\run-all.R
```

64 changes: 64 additions & 0 deletions R/check-cran.sh
@@ -0,0 +1,64 @@
#!/bin/bash

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

set -o pipefail
set -e

FWDIR="$(cd `dirname $0`; pwd)"
pushd $FWDIR > /dev/null

if [ ! -z "$R_HOME" ]
then
R_SCRIPT_PATH="$R_HOME/bin"
else
# if system wide R_HOME is not found, then exit
if [ ! `command -v R` ]; then
echo "Cannot find 'R_HOME'. Please specify 'R_HOME' or make sure R is properly installed."
exit 1
fi
R_SCRIPT_PATH="$(dirname $(which R))"
fi
echo "USING R_HOME = $R_HOME"

# Build the latest docs
$FWDIR/create-docs.sh

# Build a zip file containing the source package
"$R_SCRIPT_PATH/"R CMD build $FWDIR/pkg

# Run check as-cran.
VERSION=`grep Version $FWDIR/pkg/DESCRIPTION | awk '{print $NF}'`

CRAN_CHECK_OPTIONS="--as-cran"

if [ -n "$NO_TESTS" ]
then
CRAN_CHECK_OPTIONS=$CRAN_CHECK_OPTIONS" --no-tests"
fi

if [ -n "$NO_MANUAL" ]
then
CRAN_CHECK_OPTIONS=$CRAN_CHECK_OPTIONS" --no-manual"
fi

echo "Running CRAN check with $CRAN_CHECK_OPTIONS options"

"$R_SCRIPT_PATH/"R CMD check $CRAN_CHECK_OPTIONS SparkR_"$VERSION".tar.gz

popd > /dev/null
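
A note on usage: the new script reads two optional environment variables, `NO_TESTS` and `NO_MANUAL`, to trim the check. A minimal sketch of a quick local run (assuming R is on `PATH` or `R_HOME` points at an R installation):

```bash
# Skip the test suite and the PDF manual for a faster CRAN-style check;
# omit the variables to run the full --as-cran check.
NO_TESTS=1 NO_MANUAL=1 ./R/check-cran.sh
```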
30 changes: 28 additions & 2 deletions R/create-docs.sh
@@ -17,17 +17,26 @@
# limitations under the License.
#

# Script to create API docs for SparkR
# This requires `devtools` and `knitr` to be installed on the machine.
# Script to create API docs and vignettes for SparkR
# This requires `devtools`, `knitr` and `rmarkdown` to be installed on the machine.

# After running this script the html docs can be found in
# $SPARK_HOME/R/pkg/html
# The vignettes can be found in
# $SPARK_HOME/R/pkg/vignettes/sparkr_vignettes.html

set -o pipefail
set -e

# Figure out where the script is
export FWDIR="$(cd "`dirname "$0"`"; pwd)"
export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"

# Required for setting SPARK_SCALA_VERSION
. "${SPARK_HOME}"/bin/load-spark-env.sh

echo "Using Scala $SPARK_SCALA_VERSION"

pushd $FWDIR

# Install the package (this will also generate the Rd files)
@@ -43,4 +52,21 @@ Rscript -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); library(knit

popd

# Find Spark jars.
if [ -f "${SPARK_HOME}/RELEASE" ]; then
SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

# Only create vignettes if Spark JARs exist
if [ -d "$SPARK_JARS_DIR" ]; then
# render creates SparkR vignettes
Rscript -e 'library(rmarkdown); paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::file_path_as_absolute("..")); render("pkg/vignettes/sparkr-vignettes.Rmd"); .libPaths(paths)'

find pkg/vignettes/. -not -name '.' -not -name '*.Rmd' -not -name '*.md' -not -name '*.pdf' -not -name '*.html' -delete
else
echo "Skipping R vignettes as Spark JARs not found in $SPARK_HOME"
fi

popd
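
As the updated header comments note, the script now needs `rmarkdown` in addition to `devtools` and `knitr` to render the vignettes. A minimal sketch of preparing the toolchain and running it (the CRAN mirror is an assumption, matching the one used elsewhere in these docs):

```bash
# Install the documentation toolchain, then build the API docs and vignettes.
R -e 'install.packages(c("devtools", "knitr", "rmarkdown"), repos="http://cran.us.r-project.org")'
./R/create-docs.sh
```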
7 changes: 6 additions & 1 deletion R/install-dev.sh
@@ -38,7 +38,12 @@ pushd $FWDIR > /dev/null
if [ ! -z "$R_HOME" ]
then
R_SCRIPT_PATH="$R_HOME/bin"
else
else
# if system wide R_HOME is not found, then exit
if [ ! `command -v R` ]; then
echo "Cannot find 'R_HOME'. Please specify 'R_HOME' or make sure R is properly installed."
exit 1
fi
R_SCRIPT_PATH="$(dirname $(which R))"
fi
echo "USING R_HOME = $R_HOME"
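
The added guard makes the failure mode explicit when R cannot be located. A sketch of the two setups the script now distinguishes (the `R_HOME` path is the example one from `R/README.md`):

```bash
# Option 1: rely on R already being on PATH.
./R/install-dev.sh

# Option 2: point R_HOME at a user-installed R; install-dev.sh uses $R_HOME/bin.
export R_HOME=/home/username/R
./R/install-dev.sh
```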
5 changes: 5 additions & 0 deletions R/pkg/.Rbuildignore
@@ -0,0 +1,5 @@
^.*\.Rproj$
^\.Rproj\.user$
^\.lintr$
^src-native$
^html$
25 changes: 17 additions & 8 deletions R/pkg/DESCRIPTION
@@ -1,20 +1,25 @@
Package: SparkR
Type: Package
Title: R frontend for Spark
Title: R Frontend for Apache Spark
Version: 2.0.0
Date: 2013-09-09
Author: The Apache Software Foundation
Maintainer: Shivaram Venkataraman <[email protected]>
Imports:
methods
Date: 2016-08-27
Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
email = "[email protected]"),
person("Xiangrui", "Meng", role = "aut",
email = "[email protected]"),
person("Felix", "Cheung", role = "aut",
email = "[email protected]"),
person(family = "The Apache Software Foundation", role = c("aut", "cph")))
URL: http://www.apache.org/ http://spark.apache.org/
BugReports: https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-ContributingBugReports
Depends:
R (>= 3.0),
methods,
methods
Suggests:
testthat,
e1071,
survival
Description: R frontend for Spark
Description: The SparkR package provides an R frontend for Apache Spark.
License: Apache License (== 2.0)
Collate:
'schema.R'
@@ -26,16 +31,20 @@ Collate:
'pairRDD.R'
'DataFrame.R'
'SQLContext.R'
'WindowSpec.R'
'backend.R'
'broadcast.R'
'client.R'
'context.R'
'deserialize.R'
'functions.R'
'install.R'
'jvm.R'
'mllib.R'
'serialize.R'
'sparkR.R'
'stats.R'
'types.R'
'utils.R'
'window.R'
RoxygenNote: 5.0.1