From 17505deca758ca44408170764cca3b18f5dfb427 Mon Sep 17 00:00:00 2001
From: Nicholas Chammas
Date: Wed, 17 Sep 2014 21:10:54 -0400
Subject: [PATCH 1/3] dummy sql commit

---
 sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala b/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
index 3bc5dce095511..c6b0901f721a0 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
@@ -47,7 +47,7 @@ import org.apache.spark.api.java.JavaRDD
  * A `SchemaRDD` can also be created by loading data in from external sources.
  * Examples are loading data from Parquet files by using by using the
  * `parquetFile` method on [[SQLContext]], and loading JSON datasets
- * by using `jsonFile` and `jsonRDD` methods on [[SQLContext]].
+ * by using `jsonFile` and `jsonRDD` methods on [[SQLContext]]. Yo.
  *
  * == SQL Queries ==
  * A SchemaRDD can be registered as a table in the [[SQLContext]] that was used to create it. Once

From 74266abbd7f92a35238ebcec4d6afaba9348c453 Mon Sep 17 00:00:00 2001
From: Nicholas Chammas
Date: Wed, 17 Sep 2014 21:30:43 -0400
Subject: [PATCH 2/3] dummy non-sql commit

---
 core/src/main/scala/org/apache/spark/Partition.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/core/src/main/scala/org/apache/spark/Partition.scala b/core/src/main/scala/org/apache/spark/Partition.scala
index 27892dbd2a0bc..1d02f186ea93f 100644
--- a/core/src/main/scala/org/apache/spark/Partition.scala
+++ b/core/src/main/scala/org/apache/spark/Partition.scala
@@ -18,7 +18,7 @@
 package org.apache.spark

 /**
- * A partition of an RDD.
+ * A partition of an RDD. Yup.
  */
 trait Partition extends Serializable {
   /**

From 97c43a9daaec50afe6714e762640e9fdcf70736f Mon Sep 17 00:00:00 2001
From: Nicholas Chammas
Date: Wed, 17 Sep 2014 21:39:38 -0400
Subject: [PATCH 3/3] Revert "dummy sql commit"

This reverts commit 17505deca758ca44408170764cca3b18f5dfb427.

---
 sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala b/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
index c6b0901f721a0..3bc5dce095511 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala
@@ -47,7 +47,7 @@ import org.apache.spark.api.java.JavaRDD
  * A `SchemaRDD` can also be created by loading data in from external sources.
  * Examples are loading data from Parquet files by using by using the
  * `parquetFile` method on [[SQLContext]], and loading JSON datasets
- * by using `jsonFile` and `jsonRDD` methods on [[SQLContext]]. Yo.
+ * by using `jsonFile` and `jsonRDD` methods on [[SQLContext]].
  *
  * == SQL Queries ==
  * A SchemaRDD can be registered as a table in the [[SQLContext]] that was used to create it. Once