Changes from all commits
49 commits
ab6025d
compiling
ahirreddy Apr 6, 2014
bcc0f23
Java to python
ahirreddy Apr 6, 2014
67ba875
java to python, and python to java
ahirreddy Apr 6, 2014
b8b904b
Added schema rdd class
ahirreddy Apr 7, 2014
5496f9f
doesn't crash
ahirreddy Apr 7, 2014
043ca85
working
ahirreddy Apr 7, 2014
c0fb1c6
more working
ahirreddy Apr 7, 2014
4886052
even better
ahirreddy Apr 8, 2014
e948bd9
yippie
ahirreddy Apr 8, 2014
cd5f79f
Switched to using Scala SQLContext
ahirreddy Apr 8, 2014
be079de
returning dictionaries works
ahirreddy Apr 8, 2014
4fe1319
output dictionaries correctly
ahirreddy Apr 8, 2014
55d1c76
return row objects
ahirreddy Apr 8, 2014
725c91e
awesome row objects
ahirreddy Apr 8, 2014
c608947
SchemaRDD now has all RDD operations
ahirreddy Apr 8, 2014
09b9980
made jrdd explicitly lazy
ahirreddy Apr 8, 2014
251f99d
for now only allow dictionaries as input
ahirreddy Apr 8, 2014
906d180
added todo explaining cost of creating Row object in python
ahirreddy Apr 8, 2014
e9f5b8d
adding tests
ahirreddy Apr 8, 2014
d26ec5e
added test
ahirreddy Apr 8, 2014
7515ba0
added more tests :)
ahirreddy Apr 8, 2014
79f739d
added more tests
ahirreddy Apr 8, 2014
e4d21b4
Added pyrolite dependency
ahirreddy Apr 8, 2014
20936a5
Added tests and documentation
ahirreddy Apr 8, 2014
b406ba0
doctest formatting
ahirreddy Apr 8, 2014
79621cf
cleaning up cruft
ahirreddy Apr 8, 2014
f98a422
HiveContexts
ahirreddy Apr 9, 2014
b0192d3
Added Long, Double and Boolean as usable types + unit test
ahirreddy Apr 9, 2014
e00980f
First draft of python sql programming guide.
marmbrus Apr 9, 2014
1836944
Fix comments.
marmbrus Apr 9, 2014
40491c9
PR Changes + Method Visibility
ahirreddy Apr 9, 2014
337b201
Changed com.clearspring.analytics stream version from 2.4.0 to 2.5.1 …
ahirreddy Apr 10, 2014
38a92b0
Add note to future non-python developers about python docs.
marmbrus Apr 10, 2014
4285340
Fix building of Hive API Docs.
marmbrus Apr 10, 2014
58e2aa9
Build Docs for pyspark SQL Api. Minor fixes.
marmbrus Apr 10, 2014
227a0be
Update API links. Fix Hive example.
marmbrus Apr 10, 2014
e4da06c
Display message if hive is not built into spark
ahirreddy Apr 11, 2014
22de1d4
Fixed maven pyrolite dependency
ahirreddy Apr 11, 2014
ded03e7
Added doc test for HiveContext
ahirreddy Apr 11, 2014
ab95eba
Set SPARK_HIVE=true on jenkins
ahirreddy Apr 11, 2014
521ff6d
Trying to get spark to build with hive
ahirreddy Apr 11, 2014
6d658ba
Remove the metastore directory created by the HiveContext tests in Sp…
ahirreddy Apr 11, 2014
a19afe4
Doc fixes
ahirreddy Apr 12, 2014
f2312c7
Moved everything into sql.py
ahirreddy Apr 12, 2014
29245bf
Cache underlying SchemaRDD instead of generating and caching PythonRDD
ahirreddy Apr 12, 2014
3ef074a
Updated documentation because classes moved to sql.py
ahirreddy Apr 12, 2014
6f7b8f6
Temporary fix MIMA checker. Since we now assemble Spark jar with Hive…
ahirreddy Apr 14, 2014
307d6e0
Style fix
ahirreddy Apr 15, 2014
0294497
Updated log4j properties to supress Hive Warns
ahirreddy Apr 15, 2014
5 changes: 5 additions & 0 deletions core/pom.xml
@@ -266,6 +266,11 @@
<artifactId>junit-interface</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.spark-project</groupId>
<artifactId>pyrolite</artifactId>
<version>2.0</version>
</dependency>
</dependencies>
<build>
<outputDirectory>target/scala-${scala.binary.version}/classes</outputDirectory>
32 changes: 32 additions & 0 deletions core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
@@ -25,6 +25,8 @@ import java.util.{List => JList, ArrayList => JArrayList, Map => JMap, Collectio
import scala.collection.JavaConversions._
import scala.reflect.ClassTag

import net.razorvine.pickle.{Pickler, Unpickler}

import org.apache.spark._
import org.apache.spark.api.java.{JavaSparkContext, JavaPairRDD, JavaRDD}
import org.apache.spark.broadcast.Broadcast
@@ -284,6 +286,36 @@ private[spark] object PythonRDD {
file.close()
}

/**
* Convert an RDD of serialized Python dictionaries to Scala Maps
* TODO: Support more Python types.
*/
def pythonToJavaMap(pyRDD: JavaRDD[Array[Byte]]): JavaRDD[Map[String, _]] = {
pyRDD.rdd.mapPartitions { iter =>
val unpickle = new Unpickler
// TODO: Figure out why flatMap is necessary for pyspark
iter.flatMap { row =>
unpickle.loads(row) match {
case objs: java.util.ArrayList[JMap[String, _] @unchecked] => objs.map(_.toMap)
// In case the partition doesn't have a collection
case obj: JMap[String @unchecked, _] => Seq(obj.toMap)
}
}
}
}

/**
* Convert an RDD of Java objects to an RDD of serialized Python objects that is usable by
* PySpark.
*/
def javaToPython(jRDD: JavaRDD[Any]): JavaRDD[Array[Byte]] = {
jRDD.rdd.mapPartitions { iter =>
val pickle = new Pickler
iter.map { row =>
pickle.dumps(row)
}
}
}
}

private
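For context, a minimal sketch (plain CPython, not part of this diff) of the two payload shapes that the pattern match in pythonToJavaMap distinguishes; that PySpark normally pickles a whole batch of rows as one list is an assumption about its serializer:

import pickle

# A batch of rows pickled as a single Python list: Pyrolite's Unpickler turns this
# into a java.util.ArrayList of java.util.Map objects (the first case above).
batch_payload = pickle.dumps([{"name": "Michael", "age": 29},
                              {"name": "Andy", "age": 30}])

# A single row pickled on its own: Pyrolite yields one java.util.Map
# (the "partition doesn't have a collection" case above).
single_payload = pickle.dumps({"name": "Justin", "age": 19})

print(len(batch_payload), len(single_payload))  # both are byte payloads the JVM side can unpickle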
1 change: 1 addition & 0 deletions dev/run-tests
@@ -34,6 +34,7 @@ else
fi
JAVA_VERSION=$($java_cmd -version 2>&1 | sed 's/java version "\(.*\)\.\(.*\)\..*"/\1\2/; 1q')
[ "$JAVA_VERSION" -ge 18 ] && echo "" || echo "[Warn] Java 8 tests will not run because JDK version is < 1.8."
export SPARK_HIVE=true

echo "========================================================================="
echo "Running Apache RAT checks"
2 changes: 1 addition & 1 deletion docs/README.md
@@ -42,7 +42,7 @@ To mark a block of code in your markdown to be syntax highlighted by jekyll duri

You can build just the Spark scaladoc by running `sbt/sbt doc` from the SPARK_PROJECT_ROOT directory.

Similarly, you can build just the PySpark epydoc by running `epydoc --config epydoc.conf` from the SPARK_PROJECT_ROOT/pyspark directory.
Similarly, you can build just the PySpark epydoc by running `epydoc --config epydoc.conf` from the SPARK_PROJECT_ROOT/pyspark directory. Documentation is only generated for classes that are listed as public in `__init__.py`.

When you run `jekyll` in the docs directory, it will also copy over the scaladoc for the various Spark subprojects into the docs directory (and then also into the _site directory). We use a jekyll plugin to run `sbt/sbt doc` before building the site so if you haven't run it (recently) it may take some time as it generates all of the scaladoc. The jekyll plugin also generates the PySpark docs using [epydoc](http://epydoc.sourceforge.net/).

4 changes: 2 additions & 2 deletions docs/_plugins/copy_api_dirs.rb
@@ -32,8 +32,8 @@
curr_dir = pwd
cd("..")

puts "Running sbt/sbt doc from " + pwd + "; this may take a few minutes..."
puts `sbt/sbt doc`
puts "Running 'sbt/sbt doc hive/doc' from " + pwd + "; this may take a few minutes..."
puts `sbt/sbt doc hive/doc`

puts "Moving back into docs dir."
cd("docs")
103 changes: 99 additions & 4 deletions docs/sql-programming-guide.md
@@ -20,7 +20,7 @@ a schema that describes the data types of each column in the row. A SchemaRDD i
in a traditional relational database. A SchemaRDD can be created from an existing RDD, parquet
file, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).

**All of the examples on this page use sample data included in the Spark distribution and can be run in the spark-shell.**
**All of the examples on this page use sample data included in the Spark distribution and can be run in the `spark-shell`.**

</div>

@@ -33,6 +33,19 @@ a schema that describes the data types of each column in the row. A JavaSchemaR
in a traditional relational database. A JavaSchemaRDD can be created from an existing RDD, parquet
file, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).
</div>

<div data-lang="python" markdown="1">

Spark SQL allows relational queries expressed in SQL or HiveQL to be executed using
Spark. At the core of this component is a new type of RDD,
[SchemaRDD](api/pyspark/pyspark.sql.SchemaRDD-class.html). SchemaRDDs are composed of
[Row](api/pyspark/pyspark.sql.Row-class.html) objects along with
a schema that describes the data types of each column in the row. A SchemaRDD is similar to a table
in a traditional relational database. A SchemaRDD can be created from an existing RDD, parquet
file, or by running HiveQL against data stored in [Apache Hive](http://hive.apache.org/).

**All of the examples on this page use sample data included in the Spark distribution and can be run in the `pyspark` shell.**
</div>
</div>

***************************************************************************************************
@@ -44,7 +57,7 @@ file, or by running HiveQL against data stored in [Apache Hive](http://hive.apac

The entry point into all relational functionality in Spark is the
[SQLContext](api/sql/core/index.html#org.apache.spark.sql.SQLContext) class, or one of its
decendents. To create a basic SQLContext, all you need is a SparkContext.
descendants. To create a basic SQLContext, all you need is a SparkContext.

{% highlight scala %}
val sc: SparkContext // An existing SparkContext.
@@ -60,7 +73,7 @@ import sqlContext._

The entry point into all relational functionality in Spark is the
[JavaSQLContext](api/sql/core/index.html#org.apache.spark.sql.api.java.JavaSQLContext) class, or one
of its decendents. To create a basic JavaSQLContext, all you need is a JavaSparkContext.
of its descendants. To create a basic JavaSQLContext, all you need is a JavaSparkContext.

{% highlight java %}
JavaSparkContext ctx = ...; // An existing JavaSparkContext.
@@ -69,6 +82,19 @@ JavaSQLContext sqlCtx = new org.apache.spark.sql.api.java.JavaSQLContext(ctx);

</div>

<div data-lang="python" markdown="1">

The entry point into all relational functionality in Spark is the
[SQLContext](api/pyspark/pyspark.sql.SQLContext-class.html) class, or one
of its descendants. To create a basic SQLContext, all you need is a SparkContext.

{% highlight python %}
from pyspark.sql import SQLContext
sqlCtx = SQLContext(sc)
{% endhighlight %}

</div>

</div>

## Running SQL on RDDs
@@ -81,7 +107,7 @@ One type of table that is supported by Spark SQL is an RDD of Scala case classes
defines the schema of the table. The names of the arguments to the case class are read using
reflection and become the names of the columns. Case classes can also be nested or contain complex
types such as Sequences or Arrays. This RDD can be implicitly converted to a SchemaRDD and then be
registered as a table. Tables can used in subsequent SQL statements.
registered as a table. Tables can be used in subsequent SQL statements.

{% highlight scala %}
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
@@ -176,6 +202,34 @@ List<String> teenagerNames = teenagers.map(new Function<Row, String>() {

</div>

<div data-lang="python" markdown="1">

One type of table that is supported by Spark SQL is an RDD of dictionaries. The keys of the
dictionary define the column names of the table, and the types are inferred by looking at the first
row. Any RDD of dictionaries can be converted to a SchemaRDD and then registered as a table. Tables
can be used in subsequent SQL statements.

{% highlight python %}
# Load a text file and convert each line to a dictionary.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: {"name": p[0], "age": int(p[1])})

# Infer the schema, and register the SchemaRDD as a table.
# In future versions of PySpark we would like to add support for registering RDDs with other
# datatypes as tables
peopleTable = sqlCtx.inferSchema(people)
peopleTable.registerAsTable("people")

# SQL can be run over SchemaRDDs that have been registered as a table.
teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

# The results of SQL queries are RDDs and support all the normal RDD operations.
teenNames = teenagers.map(lambda p: "Name: " + p.name)
{% endhighlight %}

Contributor comment: Maybe add something saying that in future versions of PySpark, we'd like to support RDDs with other data types in registerAsTable too.
</div>

</div>
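For illustration (not part of this diff): assuming the stock three-person people.txt sample (Michael 29, Andy 30, Justin 19), only one row satisfies the filter in the Python example above:

# Continuing the Python example:
teenNames.collect()
# e.g. ['Name: Justin']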

**Note that Spark SQL currently uses a very basic SQL parser.**
@@ -231,6 +285,27 @@ parquetFile.registerAsTable("parquetFile");
JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");


{% endhighlight %}

</div>

<div data-lang="python" markdown="1">

{% highlight python %}

peopleTable # The SchemaRDD from the previous example.

# SchemaRDDs can be saved as parquet files, maintaining the schema information.
peopleTable.saveAsParquetFile("people.parquet")

# Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a SchemaRDD.
parquetFile = sqlCtx.parquetFile("people.parquet")

# Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerAsTable("parquetFile")
teenagers = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")

{% endhighlight %}

</div>
@@ -318,4 +393,24 @@ Row[] results = hiveCtx.hql("FROM src SELECT key, value").collect();

</div>

<div data-lang="python" markdown="1">

When working with Hive one must construct a `HiveContext`, which inherits from `SQLContext`, and
adds support for finding tables in the MetaStore and writing queries using HiveQL. In addition to
the `sql` method, a `HiveContext` also provides an `hql` method, which allows queries to be
expressed in HiveQL.
Contributor comment: I'm not sure if this is said earlier in the doc, but you should say how to build Spark for Hive support.

Author reply: It has its own section earlier in the doc, starting on line 338.

{% highlight python %}

from pyspark.sql import HiveContext
hiveCtx = HiveContext(sc)

hiveCtx.hql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
hiveCtx.hql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

# Queries can be expressed in HiveQL.
results = hiveCtx.hql("FROM src SELECT key, value").collect()

{% endhighlight %}

</div>
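For illustration (not part of this diff): the objects returned by hql(...).collect() are the Row objects described earlier, so the selected columns should be readable by name, mirroring the p.name usage above; this is a sketch, not taken from the PR:

# Each Row exposes its columns as attributes:
pairs = [(row.key, row.value) for row in results]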
2 changes: 1 addition & 1 deletion pom.xml
@@ -262,7 +262,7 @@
<dependency>
<groupId>com.clearspring.analytics</groupId>
<artifactId>stream</artifactId>
<version>2.4.0</version>
<version>2.5.1</version>
</dependency>
<!-- In theory we need not directly depend on protobuf since Spark does not directly
use it. However, when building with Hadoop/YARN 2.2 Maven doesn't correctly bump
3 changes: 2 additions & 1 deletion project/SparkBuild.scala
@@ -345,7 +345,8 @@ object SparkBuild extends Build {
"com.twitter" %% "chill" % chillVersion excludeAll(excludeAsm),
"com.twitter" % "chill-java" % chillVersion excludeAll(excludeAsm),
"org.tachyonproject" % "tachyon" % "0.4.1-thrift" excludeAll(excludeHadoop, excludeCurator, excludeEclipseJetty, excludePowermock),
"com.clearspring.analytics" % "stream" % "2.5.1"
"com.clearspring.analytics" % "stream" % "2.5.1",
"org.spark-project" % "pyrolite" % "2.0"
),
libraryDependencies ++= maybeAvro
)
18 changes: 17 additions & 1 deletion python/pyspark/__init__.py
@@ -34,6 +34,19 @@
Access files shipped with jobs.
- L{StorageLevel<pyspark.storagelevel.StorageLevel>}
Finer-grained cache persistence levels.

Spark SQL:
- L{SQLContext<pyspark.sql.SQLContext>}
Main entry point for SQL functionality.
- L{SchemaRDD<pyspark.sql.SchemaRDD>}
A Resilient Distributed Dataset (RDD) with Schema information for the data contained. In
addition to normal RDD operations, SchemaRDDs also support SQL.
- L{Row<pyspark.sql.Row>}
A Row of data returned by a Spark SQL query.

Hive:
- L{HiveContext<pyspark.sql.HiveContext>}
Main entry point for accessing data stored in Apache Hive.
"""


@@ -45,9 +58,12 @@

from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.sql import SQLContext
from pyspark.rdd import RDD
from pyspark.sql import SchemaRDD
from pyspark.sql import Row
from pyspark.files import SparkFiles
from pyspark.storagelevel import StorageLevel


__all__ = ["SparkConf", "SparkContext", "RDD", "SparkFiles", "StorageLevel"]
__all__ = ["SparkConf", "SparkContext", "SQLContext", "RDD", "SchemaRDD", "SparkFiles", "StorageLevel", "Row"]
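Since SQLContext, SchemaRDD, and Row are now imported here and listed in __all__, they can be pulled straight from the top-level package, for example:

# Top-level imports enabled by the additions above:
from pyspark import SparkContext, SQLContext, SchemaRDD, Row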
4 changes: 4 additions & 0 deletions python/pyspark/java_gateway.py
@@ -64,5 +64,9 @@ def run(self):
java_import(gateway.jvm, "org.apache.spark.api.java.*")
java_import(gateway.jvm, "org.apache.spark.api.python.*")
java_import(gateway.jvm, "org.apache.spark.mllib.api.python.*")
java_import(gateway.jvm, "org.apache.spark.sql.SQLContext")
java_import(gateway.jvm, "org.apache.spark.sql.hive.HiveContext")
java_import(gateway.jvm, "org.apache.spark.sql.hive.LocalHiveContext")
java_import(gateway.jvm, "org.apache.spark.sql.hive.TestHiveContext")
Contributor comment: Will this work if users haven't built with Hive? Maybe we want to make the Hive support optional. Not sure what the best way to do so is.

Author reply: This will still work, but it will throw a non-fatal exception when the user tries to use a HiveContext without Hive built. I'll catch that and present a better error message that indicates that the user would need to build Spark with Hive with SPARK_HIVE=true.

Author reply: I added a better message that tells the user they need to compile Spark with Hive to use the HiveContext.
java_import(gateway.jvm, "scala.Tuple2")
return gateway
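A minimal sketch of the guard the author describes above (not the actual implementation; the helper name is hypothetical), assuming Py4J raises Py4JError when the Hive classes are missing from the assembly:

from py4j.protocol import Py4JError

def _create_hive_ctx(sc):
    # Hypothetical helper: HiveContext is only on the classpath when Spark
    # was assembled with SPARK_HIVE=true.
    try:
        return sc._jvm.HiveContext(sc._jsc.sc())
    except Py4JError:
        raise Exception("You must build Spark with Hive (SPARK_HIVE=true) to use a HiveContext")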