
Commit 812a9ad

Kaz (KazMiddelhoek) authored and committed
[MINOR] Fix some typos
### What changes were proposed in this pull request?

This PR fixes typos in docstrings and comments (and a few functional bits of code). The typos were found using the typos software: https://github.com/crate-ci/typos

I've made a typo-fix PR before, but haven't seen the result on the website yet. Is there anything else I need to do for that?

### Why are the changes needed?

Nice to fix :)

### Does this PR introduce _any_ user-facing change?

Yes, documentation was updated.

### How was this patch tested?

No tests added.

### Was this patch authored or co-authored using generative AI tooling?

No

Closes apache#48557 from KazMiddelhoek/master.

Lead-authored-by: Kaz <[email protected]>
Co-authored-by: Kaz <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
1 parent 10e0b61 · commit 812a9ad

File tree

35 files changed: +50 -50 lines changed


.github/workflows/update_build_status.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -72,7 +72,7 @@ jobs:
 } catch (error) {
 console.error(error)
 // Run not found. This can happen when the PR author removes GitHub Actions runs or
-// disalbes GitHub Actions.
+// disables GitHub Actions.
 continue
 }
 
```

R/pkg/R/functions.R

Lines changed: 2 additions & 2 deletions
```diff
@@ -2922,7 +2922,7 @@ setClassUnion("characterOrstructTypeOrColumn", c("character", "structType", "Col
 #' @details
 #' \code{from_json}: Parses a column containing a JSON string into a Column of \code{structType}
 #' with the specified \code{schema} or array of \code{structType} if \code{as.json.array} is set
-#' to \code{TRUE}. If the string is unparseable, the Column will contain the value NA.
+#' to \code{TRUE}. If the string is unparsable, the Column will contain the value NA.
 #'
 #' @rdname column_collection_functions
 #' @param as.json.array indicating if input string is JSON array of objects or a single object.
@@ -3004,7 +3004,7 @@ setMethod("schema_of_json", signature(x = "characterOrColumn"),
 #' @details
 #' \code{from_csv}: Parses a column containing a CSV string into a Column of \code{structType}
 #' with the specified \code{schema}.
-#' If the string is unparseable, the Column will contain the value NA.
+#' If the string is unparsable, the Column will contain the value NA.
 #'
 #' @rdname column_collection_functions
 #' @aliases from_csv from_csv,Column,characterOrstructTypeOrColumn-method
```
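
The behavior these docstrings describe is easiest to see from a REPL. A minimal PySpark sketch of the same semantics (the sample data and session setup are illustrative, not part of the patch):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json

spark = SparkSession.builder.getOrCreate()

# One well-formed JSON string and one unparsable one.
df = spark.createDataFrame([('{"a": 1}',), ("not json",)], ["value"])

# Unparsable input yields null (surfaced as NA in the R API) rather than raising.
df.select(from_json("value", "a INT").alias("parsed")).show()
```

`from_csv` behaves the same way for malformed CSV input.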

R/pkg/R/serialize.R

Lines changed: 1 addition & 1 deletion
```diff
@@ -60,7 +60,7 @@ writeObject <- function(con, object, writeType = TRUE) {
 if (type %in% c("integer", "character", "logical", "double", "numeric")) {
 if (is.na(object[[1]])) {
 # Uses the first element for now to keep the behavior same as R before
-# 4.2.0. This is wrong because we should differenciate c(NA) from a
+# 4.2.0. This is wrong because we should differentiate c(NA) from a
 # single NA as the former means array(null) and the latter means null
 # in Spark SQL. However, it requires non-trivial comparison to distinguish
 # both in R. We should ideally fix this.
```
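
The array(null)-versus-null distinction this comment refers to is observable directly in Spark SQL; a quick illustration (the query is mine, reusing the `spark` session from the sketch above):

```python
# array(CAST(NULL AS INT)) is a one-element array holding a null;
# CAST(NULL AS INT) is a bare null. Spark SQL treats these as distinct values.
spark.sql(
    "SELECT array(CAST(NULL AS INT)) AS array_of_null, CAST(NULL AS INT) AS bare_null"
).show()
```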

common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/RemoteBlockPushResolver.java

Lines changed: 3 additions & 3 deletions
```diff
@@ -251,17 +251,17 @@ AppShufflePartitionInfo getOrCreateAppShufflePartitionInfo(
 // Higher shuffleMergeId seen for the shuffle ID meaning new stage attempt is being
 // run for the shuffle ID. Close and clean up old shuffleMergeId files,
 // happens in the indeterminate stage retries
-AppAttemptShuffleMergeId currrentAppAttemptShuffleMergeId =
+AppAttemptShuffleMergeId currentAppAttemptShuffleMergeId =
 new AppAttemptShuffleMergeId(appShuffleInfo.appId, appShuffleInfo.attemptId,
 shuffleId, latestShuffleMergeId);
 logger.info("{}: creating a new shuffle merge metadata since received " +
 "shuffleMergeId {} is higher than latest shuffleMergeId {}",
 MDC.of(LogKeys.APP_ATTEMPT_SHUFFLE_MERGE_ID$.MODULE$,
-currrentAppAttemptShuffleMergeId),
+currentAppAttemptShuffleMergeId),
 MDC.of(LogKeys.SHUFFLE_MERGE_ID$.MODULE$, shuffleMergeId),
 MDC.of(LogKeys.LATEST_SHUFFLE_MERGE_ID$.MODULE$, latestShuffleMergeId));
 submitCleanupTask(() ->
-closeAndDeleteOutdatedPartitions(currrentAppAttemptShuffleMergeId,
+closeAndDeleteOutdatedPartitions(currentAppAttemptShuffleMergeId,
 mergePartitionsInfo.shuffleMergePartitions));
 return new AppShuffleMergePartitionsInfo(shuffleMergeId, false);
 } else {
```

connector/connect/docs/client-connection-string.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -2,7 +2,7 @@
 
 From the client perspective, Spark Connect mostly behaves as any other GRPC
 client and can be configured as such. However, to make it easy to use from
-different programming languages and to have a homogenous connection surface
+different programming languages and to have a homogeneous connection surface
 this document proposes what the user surface is for connecting to a
 Spark Connect endpoint.
 
@@ -136,7 +136,7 @@ server_url = "sc://myhost.com:443/;use_ssl=true;token=ABCDEFG"
 
 As mentioned above, Spark Connect uses a regular GRPC client and the server path
 cannot be configured to remain compatible with the GRPC standard and HTTP. For
-example the following examles are invalid.
+example the following examples are invalid.
 
 ```python
 server_url = "sc://myhost.com:443/mypathprefix/;token=AAAAAAA"
````

docs/_plugins/include_example.rb

Lines changed: 2 additions & 2 deletions
```diff
@@ -114,8 +114,8 @@ def select_lines(code)
 range = Range.new(start + 1, endline - 1)
 trimmed = trim_codeblock(lines[range])
 # Filter out possible example tags of overlapped labels.
-taggs_filtered = trimmed.select { |l| !l.include? '$example ' }
-result += taggs_filtered.join
+tags_filtered = trimmed.select { |l| !l.include? '$example ' }
+result += tags_filtered.join
 result += "\n"
 end
 result
```
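
The renamed `tags_filtered` simply drops any leftover `$example ` tag lines from overlapping labeled regions before the block is joined into the output. In Python terms, the same filtering step looks roughly like this (sample lines invented for illustration):

```python
trimmed = [
    "val count = df.count()\n",
    "// $example on:another_label$\n",  # an overlapping tag line to drop
    "println(count)\n",
]
# Keep only lines that carry no example tag, then join them back together.
tags_filtered = [line for line in trimmed if "$example " not in line]
result = "".join(tags_filtered) + "\n"
```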

docs/core-migration-guide.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -62,7 +62,7 @@ license: |
 
 ## Upgrading from Core 3.3 to 3.4
 
-- Since Spark 3.4, Spark driver will own `PersistentVolumnClaim`s and try to reuse if they are not assigned to live executors. To restore the behavior before Spark 3.4, you can set `spark.kubernetes.driver.ownPersistentVolumeClaim` to `false` and `spark.kubernetes.driver.reusePersistentVolumeClaim` to `false`.
+- Since Spark 3.4, Spark driver will own `PersistentVolumeClaim`s and try to reuse if they are not assigned to live executors. To restore the behavior before Spark 3.4, you can set `spark.kubernetes.driver.ownPersistentVolumeClaim` to `false` and `spark.kubernetes.driver.reusePersistentVolumeClaim` to `false`.
 
 - Since Spark 3.4, Spark driver will track shuffle data when dynamic allocation is enabled without shuffle service. To restore the behavior before Spark 3.4, you can set `spark.dynamicAllocation.shuffleTracking.enabled` to `false`.
 
```
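
Both migration notes name their configuration keys explicitly, so restoring the pre-3.4 behavior is just a matter of setting them; a minimal sketch at session build time (spark-submit `--conf` flags work equally well):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Opt out of driver-owned, reusable PersistentVolumeClaims (pre-3.4 behavior).
    .config("spark.kubernetes.driver.ownPersistentVolumeClaim", "false")
    .config("spark.kubernetes.driver.reusePersistentVolumeClaim", "false")
    # Opt out of shuffle tracking under dynamic allocation (pre-3.4 behavior).
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "false")
    .getOrCreate()
)
```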

docs/running-on-yarn.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -673,7 +673,7 @@ To use a custom metrics.properties for the application master and executors, upd
 <td>false</td>
 <td>
 Set to true for applications that have higher security requirements and prefer that their
-secret is not saved in the db. The shuffle data of such applications wll not be recovered after
+secret is not saved in the db. The shuffle data of such applications will not be recovered after
 the External Shuffle Service restarts.
 </td>
 <td>3.5.0</td>
```

docs/security.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -72,7 +72,7 @@ secrets to be secure.
 <td>false</td>
 <td>
 Set to true for applications that have higher security requirements and prefer that their
-secret is not saved in the db. The shuffle data of such applications wll not be recovered after
+secret is not saved in the db. The shuffle data of such applications will not be recovered after
 the External Shuffle Service restarts.
 </td>
 <td>3.5.0</td>
```

docs/spark-standalone.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -372,7 +372,7 @@ SPARK_MASTER_OPTS supports the following system properties:
 <td>
 The pattern for app ID generation based on Java `String.format` method.
 The default value is `app-%s-%04d` which represents the existing app id string, e.g.,
-`app-20231031224509-0008`. Plesae be careful to generate unique IDs.
+`app-20231031224509-0008`. Please be careful to generate unique IDs.
 </td>
 <td>4.0.0</td>
 </tr>
```
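
Because the pattern goes through Java's `String.format`, a candidate value is easy to sanity-check; Python's `%` operator handles the same `%s` and `%04d` conversions, so the default can be reproduced like this:

```python
# The master fills in a timestamp string and a zero-padded submission counter.
pattern = "app-%s-%04d"
print(pattern % ("20231031224509", 8))  # app-20231031224509-0008
```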
