
Commit 57255dc

phatak-dev authored and rxin committed
[SPARK-7084] improve saveAsTable documentation
Author: madhukar <[email protected]>

Closes apache#5654 from phatak-dev/master and squashes the following commits:

386f407 [madhukar] apache#5654 updated for all the methods
2c997c5 [madhukar] Merge branch 'master' of https://github.com/apache/spark
00bc819 [madhukar] Merge branch 'master' of https://github.com/apache/spark
2a802c6 [madhukar] apache#5654 updated the doc according to comments
866e8df [madhukar] [SPARK-7084] improve saveAsTable documentation
1 parent 4f4dbb0 commit 57255dc

File tree

1 file changed: 18 additions, 0 deletions


sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala

Lines changed: 18 additions & 0 deletions
@@ -1192,6 +1192,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
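For context, the doc comment being amended describes a workaround for a plain SQLContext, where there is no persisted catalog: write the DataFrame out as a Parquet file, register that file as a table, and use the registered table as the target of an insertInto. A minimal Scala sketch of that flow, assuming Spark 1.3/1.4-era DataFrame APIs and hypothetical paths and table names:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical app name, path, and table names; 1.3/1.4-era API.
val sc = new SparkContext(new SparkConf().setAppName("saveAsTable-doc-example"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val people = sc.parallelize(Seq(("alice", 30), ("bob", 25))).toDF("name", "age")

// A plain SQLContext has no persisted catalog, so persist the data as a Parquet file...
people.saveAsParquetFile("/tmp/people.parquet")

// ...then register that file as a table.
sqlContext.parquetFile("/tmp/people.parquet").registerTempTable("people")

// The registered "table" can now be the target of an insertInto.
val morePeople = sc.parallelize(Seq(("carol", 41))).toDF("name", "age")
morePeople.insertInto("people")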
@@ -1208,6 +1211,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1232,6 +1238,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1248,6 +1257,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1264,6 +1276,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1285,6 +1300,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
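The note added by this commit concerns the Hive-enabled path: with a HiveContext, saveAsTable persists the table metadata into Hive's metastore, but the table it creates is not accessible from Hive itself. A minimal sketch of that behaviour, assuming a Hive-enabled Spark build, an existing SparkContext sc, and a hypothetical table name:

import org.apache.spark.sql.hive.HiveContext

// Requires a Spark build with Hive support; the table name is hypothetical.
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._

val df = sc.parallelize(Seq(("alice", 30), ("bob", 25))).toDF("name", "age")

// Metadata for "people_managed" lands in Hive's metastore, but the data is stored
// in a Spark SQL-specific format, so the table will NOT be queryable from Hive.
df.saveAsTable("people_managed")

// Spark SQL itself can still read the table back through the metastore.
hiveContext.table("people_managed").show()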
