@@ -1271,6 +1271,15 @@ sqlContext.refreshTable("my_table")
 
 </div>
 
+<div data-lang="python" markdown="1">
+
+{% highlight python %}
+# sqlContext is an existing HiveContext
+sqlContext.refreshTable("my_table")
+{% endhighlight %}
+
+</div>
+
 <div data-lang="sql" markdown="1">
 
 {% highlight sql %}
@@ -1518,8 +1527,8 @@ This command builds a new assembly jar that includes Hive. Note that this Hive a
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
 (SerDes) in order to access data stored in Hive.
 
-Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`. Please note when running
-the query on a YARN cluster (`yarn-cluster` mode), the `datanucleus` jars under the `lib_managed/jars` directory
+Configuration of Hive is done by placing your `hive-site.xml` file in `conf/`. Please note when running
+the query on a YARN cluster (`yarn-cluster` mode), the `datanucleus` jars under the `lib_managed/jars` directory
 and `hive-site.xml` under `conf/` directory need to be available on the driver and all executors launched by the
 YARN cluster. The convenient way to do this is adding them through the `--jars` option and `--files` option of the
 `spark-submit` command.
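The paragraph above can be sketched as a `spark-submit` invocation. The jar file names, application class, and application jar below are illustrative assumptions, not taken from this guide; use the exact jars found in your `lib_managed/jars` directory:

```shell
# Sketch: ship the datanucleus jars and hive-site.xml to the driver and executors
# on a YARN cluster. Jar names and the application class/jar are hypothetical.
spark-submit \
  --master yarn-cluster \
  --jars lib_managed/jars/datanucleus-api-jdo.jar,lib_managed/jars/datanucleus-core.jar,lib_managed/jars/datanucleus-rdbms.jar \
  --files conf/hive-site.xml \
  --class com.example.MyApp \
  my-app.jar
```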
@@ -1962,7 +1971,7 @@ options.
 #### DataFrame data reader/writer interface
 
 Based on user feedback, we created a new, more fluid API for reading data in (`SQLContext.read`)
-and writing data out (`DataFrame.write`),
+and writing data out (`DataFrame.write`),
 and deprecated the old APIs (e.g. `SQLContext.parquetFile`, `SQLContext.jsonFile`).
 
 See the API docs for `SQLContext.read` (
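A minimal sketch of the new fluid API next to the deprecated calls it replaces, assuming an existing `SQLContext` named `sqlContext`; the file paths are hypothetical placeholders:

```python
# New reader/writer interface (Spark 1.4+); paths are placeholders.
df = sqlContext.read.json("path/to/people.json")    # replaces sqlContext.jsonFile(...)
df.write.parquet("path/to/people.parquet")          # replaces df.saveAsParquetFile(...)
```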