`docs/building-spark.md` (1 addition, 1 deletion)

```diff
@@ -111,7 +111,7 @@ should run continuous compilation (i.e. wait for changes). However, this has not
 extensively. A couple of gotchas to note:
 
 * it only scans the paths `src/main` and `src/test` (see
-[docs](http://scala-tools.org/mvnsites/maven-scala-plugin/usage_cc.html)), so it will only work
+[docs](http://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
 from within certain submodules that have that structure.
 
 * you'll typically need to run `mvn install` from the project root for compilation within
```
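For context, the continuous-compilation workflow this passage documents looks roughly like the following (a sketch, assuming a standard Spark checkout; `core` stands in for any submodule with the `src/main`/`src/test` layout the plugin scans):

```sh
# From the project root: install all modules into the local Maven repo once,
# so that cross-module dependencies resolve during continuous compilation.
./build/mvn -DskipTests install

# Then run the scala-maven-plugin's continuous-compile goal from inside a
# submodule that has the src/main and src/test layout it watches.
cd core
../build/mvn scala:cc
```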
`docs/rdd-programming-guide.md` (1 addition, 1 deletion)

```diff
@@ -604,7 +604,7 @@ before the `reduce`, which would cause `lineLengths` to be saved in memory after
 Spark's API relies heavily on passing functions in the driver program to run on the cluster.
 There are two recommended ways to do this:
 
-* [Anonymous function syntax](http://docs.scala-lang.org/tutorials/tour/anonymous-function-syntax.html),
+* [Anonymous function syntax](http://docs.scala-lang.org/tour/basics.html#functions),
 which can be used for short pieces of code.
 * Static methods in a global singleton object. For example, you can define `object MyFunctions` and then
 pass `MyFunctions.func1`, as follows:
```
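The example the second bullet refers to sits just below this hunk in the file and is not part of the diff. As context, a minimal self-contained sketch of both approaches (the body of `func1`, the sample data, and the `local[*]` context are illustrative assumptions, not the guide's exact code):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Static methods in a global singleton object: the object is globally
// reachable, so executors can call func1 without Spark having to
// serialize an enclosing class instance.
object MyFunctions {
  def func1(s: String): String = s.toUpperCase
}

object PassingFunctionsExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local[*]", "passing-functions")
    val lines: RDD[String] = sc.parallelize(Seq("spark", "passes", "functions"))

    // 1. Anonymous function syntax, for short pieces of code:
    val lineLengths = lines.map(s => s.length)

    // 2. A static method in a global singleton object:
    val shouted = lines.map(MyFunctions.func1)

    println(lineLengths.reduce(_ + _))       // 20
    println(shouted.collect().mkString(" ")) // SPARK PASSES FUNCTIONS
    sc.stop()
  }
}
```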