Commit fa8bc92

typo: sparm -> spark
1 parent 865a90c

File tree: 1 file changed (+1, -1)


docs/sparkr.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -63,7 +63,7 @@ head(df)
 SparkR supports operating on a variety of data sources through the `DataFrame` interface. This section describes the general methods for loading and saving data using Data Sources. You can check the Spark SQL programming guide for more [specific options](sql-programming-guide.html#manually-specifying-options) that are available for the built-in data sources.
 
 The general method for creating DataFrames from data sources is `read.df`. This method takes in the `SQLContext`, the path for the file to load and the type of data source. SparkR supports reading JSON and Parquet files natively and through [Spark Packages](http://spark-packages.org/) you can find data source connectors for popular file formats like [CSV](http://spark-packages.org/package/databricks/spark-csv) and [Avro](http://spark-packages.org/package/databricks/spark-avro). These packages can either be added by
-specifying `--packages` with `sparm-submit` or `sparkR` commands, or if creating context through `init`
+specifying `--packages` with `spark-submit` or `sparkR` commands, or if creating context through `init`
 you can specify the packages with the `packages` argument.
 
 We can see how to use data sources using an example JSON input file. Note that the file that is used here is _not_ a typical JSON file. Each line in the file must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
```
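For context, a minimal sketch of the workflow the corrected passage describes, using the SparkR 1.x API that this doc references (`sparkR.init`, `sparkRSQL.init`, `read.df`); the app name, package coordinates, and file path below are illustrative assumptions, not part of the commit:

```r
library(SparkR)

# External data source packages can also be supplied at launch time, e.g.:
#   sparkR --packages com.databricks:spark-avro_2.10:1.0.0

sc <- sparkR.init(appName = "DataSourceExample")
sqlContext <- sparkRSQL.init(sc)

# read.df takes the SQLContext, the path of the file to load, and the
# data source type. The JSON input must be line-delimited: one
# self-contained JSON object per line, not a multi-line JSON document.
people <- read.df(sqlContext, "examples/src/main/resources/people.json", "json")
head(people)
```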
