
Commit 8f99a49

phatak-dev authored and rxin committed
[SPARK-7084] improve saveAsTable documentation
Author: madhukar <[email protected]>

Closes #5654 from phatak-dev/master and squashes the following commits:

386f407 [madhukar] #5654 updated for all the methods
2c997c5 [madhukar] Merge branch 'master' of https://github.com/apache/spark
00bc819 [madhukar] Merge branch 'master' of https://github.com/apache/spark
2a802c6 [madhukar] #5654 updated the doc according to comments
866e8df [madhukar] [SPARK-7084] improve saveAsTable documentation

(cherry picked from commit 57255dc)
Signed-off-by: Reynold Xin <[email protected]>
1 parent d4eb590 commit 8f99a49

File tree

1 file changed: +18 -0 lines changed


sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala

Lines changed: 18 additions & 0 deletions
@@ -1003,6 +1003,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1019,6 +1022,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1043,6 +1049,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1059,6 +1068,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1075,6 +1087,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
@@ -1096,6 +1111,9 @@ class DataFrame private[sql](
    * there is no notion of a persisted catalog in a standard SQL context. Instead you can write
    * an RDD out to a parquet file, and then register that file as a table. This "table" can then
    * be the target of an `insertInto`.
+   *
+   * Also note that while this function can persist the table metadata into Hive's metastore,
+   * the table will NOT be accessible from Hive.
    * @group output
    */
   @Experimental
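For context on the note being added to each overload: below is a minimal sketch of how the documented behavior plays out, written against the Spark 1.3-era DataFrame API this commit touches. The SaveAsTableSketch object, the table name, and the file paths are illustrative assumptions for the example, not part of the patch.

// Sketch of the saveAsTable behavior described in the patched scaladoc,
// using the Spark 1.3-era DataFrame API. Names and paths are illustrative.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object SaveAsTableSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("saveAsTable-sketch"))
    val sqlContext = new HiveContext(sc)

    val df = sqlContext.jsonFile("examples/src/main/resources/people.json")

    // With a HiveContext, saveAsTable persists the table metadata into
    // Hive's metastore, so the table survives application restarts. But,
    // as the added note warns, the saved table will NOT be accessible
    // from Hive itself.
    df.saveAsTable("people_saved")

    // With a plain SQLContext there is no persisted catalog at all; the
    // scaladoc's suggested alternative is to write the data out as a
    // parquet file and register that file as a table, which can then be
    // the target of an `insertInto`.
    df.saveAsParquetFile("/tmp/people.parquet")
    sqlContext.parquetFile("/tmp/people.parquet").registerTempTable("people")

    sc.stop()
  }
}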
