
Commit bb5938f

Replaced rest of "SparkSQL" with "Spark SQL"
1 parent 6df66de

5 files changed: +7 −7 lines changed

dev/run-tests

Lines changed: 1 addition & 1 deletion
@@ -89,7 +89,7 @@ echo "========================================================================="
 echo "Running Spark unit tests"
 echo "========================================================================="

-# Build Spark; we always build with Hive because the PySpark SparkSQL tests need it.
+# Build Spark; we always build with Hive because the PySpark Spark SQL tests need it.
 # echo "q" is needed because sbt on encountering a build file with failure
 # (either resolution or compilation) prompts the user for input either q, r,
 # etc to quit or retry. This echo is there to make it not block.

python/pyspark/sql.py

Lines changed: 3 additions & 3 deletions
@@ -901,7 +901,7 @@ def __reduce__(self):

 class SQLContext:

-    """Main entry point for SparkSQL functionality.
+    """Main entry point for Spark SQL functionality.

     A SQLContext can be used create L{SchemaRDD}s, register L{SchemaRDD}s as
     tables, execute SQL over tables, cache tables, and read parquet files.
@@ -949,7 +949,7 @@ def __init__(self, sparkContext, sqlContext=None):

     @property
     def _ssql_ctx(self):
-        """Accessor for the JVM SparkSQL context.
+        """Accessor for the JVM Spark SQL context.

         Subclasses can override this property to provide their own
         JVM Contexts.
@@ -1492,7 +1492,7 @@ class SchemaRDD(RDD):
     """An RDD of L{Row} objects that has an associated schema.

     The underlying JVM object is a SchemaRDD, not a PythonRDD, so we can
-    utilize the relational query api exposed by SparkSQL.
+    utilize the relational query api exposed by Spark SQL.

     For normal L{pyspark.rdd.RDD} operations (map, count, etc.) the
     L{SchemaRDD} is not operated on directly, as it's underlying
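
Since the docstring above names SQLContext as the main entry point for Spark SQL, a minimal usage sketch may help; it assumes the Spark 1.x-era PySpark API this file defines (SQLContext, SchemaRDD, inferSchema, registerTempTable), and the people data is hypothetical:

from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext("local", "spark-sql-example")
sqlCtx = SQLContext(sc)  # main entry point for Spark SQL functionality

# Build a SchemaRDD from an RDD of Row objects, register it as a table,
# and execute SQL over it -- the uses the docstring lists above.
people = sc.parallelize([Row(name="Alice", age=1), Row(name="Bob", age=5)])
schema_rdd = sqlCtx.inferSchema(people)
schema_rdd.registerTempTable("people")
print(sqlCtx.sql("SELECT name FROM people WHERE age > 2").collect())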

python/run-tests

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ FAILED=0

 rm -f unit-tests.log

-# Remove the metastore and warehouse directory created by the HiveContext tests in SparkSQL
+# Remove the metastore and warehouse directory created by the HiveContext tests in Spark SQL
 rm -rf metastore warehouse

 function run_test() {

sql/core/src/main/scala/org/apache/spark/sql/api/java/Row.scala

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ import scala.math.BigDecimal
 import org.apache.spark.sql.catalyst.expressions.{Row => ScalaRow}

 /**
- * A result row from a SparkSQL query.
+ * A result row from a Spark SQL query.
  */
 class Row(private[spark] val row: ScalaRow) extends Serializable {

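This Row class wraps a Catalyst row so Java callers can read query results without depending on Catalyst internals. On the Python side, the analogous consumption pattern is sketched below, reusing the hypothetical sqlCtx and people table from the earlier example:

# A result row from a Spark SQL query: fields are readable by name
# or by position ("people" is the hypothetical table registered above).
first = sqlCtx.sql("SELECT name, age FROM people").first()
print(first.name)  # access by column name
print(first[1])    # access by ordinal position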
sql/hive/src/main/scala/org/apache/spark/sql/hive/parquet/FakeParquetSerDe.scala

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector
 import org.apache.hadoop.io.Writable

 /**
- * A placeholder that allows SparkSQL users to create metastore tables that are stored as
+ * A placeholder that allows Spark SQL users to create metastore tables that are stored as
  * parquet files. It is only intended to pass the checks that the serde is valid and exists
  * when a CREATE TABLE is run. The actual work of decoding will be done by ParquetTableScan
  * when "spark.sql.hive.convertMetastoreParquet" is set to true.
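
The comment touched here hinges on the spark.sql.hive.convertMetastoreParquet flag. A hedged sketch of that flow from PySpark follows; it assumes a Hive-enabled build and an existing SparkContext sc, and the table name is hypothetical:

from pyspark.sql import HiveContext

hiveCtx = HiveContext(sc)
# With the flag set, Parquet-backed metastore tables are decoded by
# ParquetTableScan rather than through the placeholder serde.
hiveCtx.sql("SET spark.sql.hive.convertMetastoreParquet=true")
rows = hiveCtx.sql("SELECT * FROM my_parquet_table").collect()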
