Commit 895eb4f

Address
1 parent 6761947 commit 895eb4f

File tree

2 files changed: +2 -2 lines changed

docs/streaming-flume-integration.md

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@ configuring Flume agents.
 Note that each input DStream can be configured to receive data from multiple sinks.
 
-3. **Deploying:** This is same as the first approach, for Scala, Java and Python.
+3. **Deploying:** This is the same as the first approach.
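For illustration, a minimal Scala sketch of the multi-sink note in the context line above, assuming the spark-streaming-flume artifact is on the classpath; the sink host names and ports are hypothetical:

```scala
import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeMultiSinkSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumeMultiSinkSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // One input DStream polling events from two Flume sinks
    // (hypothetical hosts and ports).
    val addresses = Seq(
      new InetSocketAddress("sink-host-1", 9988),
      new InetSocketAddress("sink-host-2", 9988))
    val flumeStream = FlumeUtils.createPollingStream(
      ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)

    flumeStream.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```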

docs/streaming-kafka-integration.md

Lines changed: 1 addition & 1 deletion
@@ -207,4 +207,4 @@ Next, we discuss how to use this approach in your streaming application.
 Another thing to note is that since this approach does not use Receivers, the standard receiver-related [configurations](configuration.html) (that is, those of the form `spark.streaming.receiver.*`) will not apply to the input DStreams created by this approach (they will still apply to other input DStreams). Instead, use the [configurations](configuration.html) of the form `spark.streaming.kafka.*`. An important one is `spark.streaming.kafka.maxRatePerPartition`, which is the maximum rate (in messages per second) at which each Kafka partition will be read by this direct API.
 
-3. **Deploying:** This is same as the first approach, for Scala, Java and Python.
+3. **Deploying:** This is the same as the first approach.
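Similarly, a minimal Scala sketch of capping the direct API's per-partition read rate via `spark.streaming.kafka.maxRatePerPartition`; the broker addresses, topic name, and the 1000 msg/sec limit are hypothetical:

```scala
import kafka.serializer.StringDecoder

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectKafkaRateSketch {
  def main(args: Array[String]): Unit = {
    // Limit each Kafka partition to 1000 messages/sec for the direct API
    // (hypothetical value).
    val conf = new SparkConf()
      .setAppName("DirectKafkaRateSketch")
      .set("spark.streaming.kafka.maxRatePerPartition", "1000")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Hypothetical broker list and topic name.
    val kafkaParams = Map("metadata.broker.list" -> "broker-1:9092,broker-2:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))

    stream.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```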
