@@ -38,8 +38,8 @@ file, or by running HiveQL against data stored in [Apache Hive](http://hive.apac

 Spark SQL allows relational queries expressed in SQL or HiveQL to be executed using
 Spark. At the core of this component is a new type of RDD,
-[SchemaRDD](api/pyspark/pyspark.rdd.SchemaRDD-class.html). SchemaRDDs are composed of
-[Row](api/pyspark/pyspark.rdd.Row-class.html) objects along with
+[SchemaRDD](api/pyspark/pyspark.sql.SchemaRDD-class.html). SchemaRDDs are composed of
+[Row](api/pyspark/pyspark.sql.Row-class.html) objects along with
 a schema that describes the data types of each column in the row. A SchemaRDD is similar to a table
 in a traditional relational database. A SchemaRDD can be created from an existing RDD, parquet
 file, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).
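For readers checking the corrected `pyspark.sql` paths, here is a minimal sketch of building a SchemaRDD from an existing RDD with the Spark 1.0-era Python API. It is not part of this patch; the sample file path, the column names, and the use of `inferSchema`/`registerAsTable` are assumptions for illustration.

{% highlight python %}
# Sketch only (not part of this patch): build a SchemaRDD from an existing RDD.
# The file path and column names are assumptions for illustration.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "schemardd-sketch")
sqlCtx = SQLContext(sc)

# Parse a plain-text file into an RDD of dict records.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: {"name": p[0], "age": int(p[1])})

# inferSchema turns the RDD into a SchemaRDD; registerAsTable exposes it to SQL.
peopleTable = sqlCtx.inferSchema(people)
peopleTable.registerAsTable("people")

teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
print(teenagers.collect())
{% endhighlight %}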
@@ -85,11 +85,11 @@ JavaSQLContext sqlCtx = new org.apache.spark.sql.api.java.JavaSQLContext(ctx);
 <div data-lang="python" markdown="1">

 The entry point into all relational functionality in Spark is the
-[SQLContext](api/pyspark/pyspark.context.SQLContext-class.html) class, or one
+[SQLContext](api/pyspark/pyspark.sql.SQLContext-class.html) class, or one
 of its descendants. To create a basic SQLContext, all you need is a SparkContext.

 {% highlight python %}
-from pyspark.context import SQLContext
+from pyspark.sql import SQLContext
 sqlCtx = SQLContext(sc)
 {% endhighlight %}

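The corrected import also covers the Parquet path mentioned earlier in the guide. The sketch below is again not part of the diff; it assumes an existing SparkContext `sc`, a pre-existing `people.parquet` file, and that `SQLContext.parquetFile` is available in this Spark version.

{% highlight python %}
# Sketch only: assumes an existing SparkContext `sc` and a Parquet file on disk.
from pyspark.sql import SQLContext

sqlCtx = SQLContext(sc)

# A SchemaRDD can also be created directly from a Parquet file.
people = sqlCtx.parquetFile("people.parquet")
people.registerAsTable("parquet_people")

names = sqlCtx.sql("SELECT name FROM parquet_people")
print(names.collect())
{% endhighlight %}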
@@ -402,7 +402,7 @@ expressed in HiveQL.

 {% highlight python %}

-from pyspark.context import HiveContext
+from pyspark.sql import HiveContext
 hiveCtx = HiveContext(sc)

 hiveCtx.hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
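To round out the HiveContext snippet, a hedged continuation (not part of this patch) that loads sample data and runs a query; the `kv1.txt` path follows the standard Hive examples layout and is an assumption here.

{% highlight python %}
# Continuation sketch (not part of this patch): populate the table and query it.
# The kv1.txt path is an assumption borrowed from the standard Hive examples.
hiveCtx.hql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

# hql() returns a SchemaRDD; collect() brings the rows back to the driver.
results = hiveCtx.hql("FROM src SELECT key, value").collect()
for key, value in results:
    print(key, value)
{% endhighlight %}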