
Commit 8fe3400

vsemozhetbyt authored and fhemberger committed
doc: fix nits in guides/backpressuring-in-streams (#1376)
1 parent ed48247 commit 8fe3400

File tree: 1 file changed (+23, -22 lines)


locale/en/docs/guides/backpressuring-in-streams.md

Lines changed: 23 additions & 22 deletions
@@ -7,7 +7,7 @@ layout: docs.hbs
 
 There is a general problem that occurs during data handling called
 [`backpressure`][] and describes a buildup of data behind a buffer during data
-transfer. When the recieving end of the transfer has complex operations, or is
+transfer. When the receiving end of the transfer has complex operations, or is
 slower for whatever reason, there is a tendency for data from the incoming
 source to accumulate, like a clog.
 
@@ -28,7 +28,7 @@ some experience with [`Stream`][]. If you haven't read through those docs,
 it's not a bad idea to take a look at the API documentation first, as it will
 help expand your understanding while reading this guide.
 
-## The Problem With Data Handling
+## The Problem with Data Handling
 
 In a computer system, data is transferred from one process to another through
 pipes, sockets, and signals. In Node.js, we find a similar mechanism called
@@ -52,7 +52,7 @@ rl.question('Why should you use streams? ', (answer) => {
 });
 ```
 
-A good example of why the backpressure mechanism implemented through streams are
+A good example of why the backpressure mechanism implemented through streams is
 a great optimization can be demonstrated by comparing the internal system tools
 from Node.js' [`Stream`][] implementation.
 
@@ -82,15 +82,15 @@ the [`zip(1)`][] tool will notify you the file is corrupt, whereas the
 compression finished by [`Stream`][] will decompress without error.
 
 Note: In this example, we use `.pipe()` to get the data source from one end
-to the other. However, notice there is no proper error handlers attached. If
-a chunk of data were to fail be properly recieved, the `Readable` source or
+to the other. However, notice there are no proper error handlers attached. If
+a chunk of data were to fail to be properly received, the `Readable` source or
 `gzip` stream will not be destroyed. [`pump`][] is a utility tool that would
 properly destroy all the streams in a pipeline if one of them fails or closes,
 and is a must have in this case!
 
 ## Too Much Data, Too Quickly
 
-There are instance where a [`Readable`][] stream might give data to the
+There are instances where a [`Readable`][] stream might give data to the
 [`Writable`][] much too quickly — much more than the consumer can handle!
 
 When that occurs, the consumer will begin to queue all the chunks of data for
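Since this hunk leans on [`pump`][] as the fix, here is a minimal sketch of what that wiring could look like for the gzip pipeline discussed above; the file names and error message are illustrative, not taken from the guide:

```javascript
const fs = require('fs');
const zlib = require('zlib');
const pump = require('pump'); // the npm package referenced above

// pump() pipes the streams together and destroys all of them
// if any one of them fails or closes prematurely.
pump(
  fs.createReadStream('example.txt'),      // hypothetical input file
  zlib.createGzip(),
  fs.createWriteStream('example.txt.gz'),  // hypothetical output file
  (err) => {
    if (err) console.error('Pipeline failed:', err);
  }
);
```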
@@ -145,7 +145,7 @@ average time: | 55299 | 55975
 
 Both take around a minute to run, so there's not much of a difference at all,
 but let's take a closer look to confirm whether our suspicions are correct. We
-use the linux tool [`dtrace`][] to evaluate what's happening with the V8 garbage
+use the Linux tool [`dtrace`][] to evaluate what's happening with the V8 garbage
 collector.
 
 The GC (garbage collector) measured time indicates the intervals of a full cycle
@@ -230,7 +230,8 @@ And now changing the [return value][] of the [`.write()`][] function, we get:
 Without respecting the return value of .write():
 ==================================================
 real 54.48
-user 53.15sys 7.43
+user 53.15
+sys 7.43
 1524965376 maximum resident set size
 0 average shared memory size
 0 average unshared data size
@@ -254,7 +255,7 @@ Without streams in place to delegate the backpressure, there is an order of
 magnitude greater of memory space being allocated - a huge margin of
 difference between the same process!
 
-This experiment shows how optimized and cost-effective Node's backpressure
+This experiment shows how optimized and cost-effective Node.js' backpressure
 mechanism is for your computing system. Now, let's do a break down on how it
 works!
 
@@ -286,7 +287,7 @@ pause the incoming [`Readable`][] stream from sending any data and wait until
 the consumer is ready again. Once the data buffer is emptied, a [`.drain()`][]
 event will be emitted and resume the incoming data flow.
 
-Once the the queue is finished, backpressure will allow data to be sent again.
+Once the queue is finished, backpressure will allow data to be sent again.
 The space in memory that was being used will free itself up and prepare for the
 next batch of data.
 
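The pause-and-drain cycle this hunk describes can also be written out by hand. A minimal sketch, assuming `readable` and `writable` are streams that already exist:

```javascript
readable.on('data', (chunk) => {
  // write() returns false once the internal buffer passes highWaterMark
  if (!writable.write(chunk)) {
    readable.pause();                                // stop the incoming flow
    writable.once('drain', () => readable.resume()); // buffer emptied: resume
  }
});
```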
@@ -301,7 +302,7 @@ Well the answer is simple: Node.js does all of this automatically for you.
 That's so great! But also not so great when we are trying to understand how to
 implement our own custom streams.
 
-Note: In most machines, there is a byte size that is determines when a buffer
+Note: In most machines, there is a byte size that determines when a buffer
 is full (which will vary across different machines). Node.js allows you to set
 your own custom [`highWaterMark`][], but commonly, the default is set to 16kb
 (16384, or 16 for objectMode streams). In instances where you might
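To make the [`highWaterMark`][] option concrete, here is a sketch of overriding the 16kb default on a custom [`Writable`][]; the 64kb figure is purely illustrative:

```javascript
const { Writable } = require('stream');

const writable = new Writable({
  highWaterMark: 64 * 1024, // bytes; for objectMode streams it counts objects
  write(chunk, encoding, callback) {
    // process the chunk, then signal readiness for the next one
    callback();
  }
});
```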
@@ -322,9 +323,9 @@ stream:
 +===============+ x |-------------------|
 | Your Data | x They exist outside | .on('close', cb) |
 +=======+=======+ x the data flow, but | .on('data', cb) |
-| x importantly attach | .on('drain', cb) |
-| x events, and their | .on('unpipe', cb) |
-+--------v----------+ x respective callbacks. | .on('error', cb) |
+| x importantly attach | .on('drain', cb) |
+| x events, and their | .on('unpipe', cb) |
++---------v---------+ x respective callbacks. | .on('error', cb) |
 | Readable Stream +----+ | .on('finish', cb) |
 +-^-------^-------^-+ | | .on('end', cb) |
 ^ | ^ | +-------------------+
@@ -395,7 +396,7 @@ In general,
 
 1. Never `.push()` if you are not asked.
 2. Never call `.write()` after it returns false but wait for 'drain' instead.
-3. Streams changes between different node versions, and the library you use.
+3. Streams changes between different Node.js versions, and the library you use.
    Be careful and test things.
 
 Note: In regards to point 3, an incredibly useful package for building
@@ -407,7 +408,7 @@ and supports older versions of browsers and Node.js.
 ## Rules specific to Readable Streams
 
 So far, we have taken a look at how [`.write()`][] affects backpressure and have
-focused much on the [`Writable`][] stream. Because of Node's functionality,
+focused much on the [`Writable`][] stream. Because of Node.js' functionality,
 data is technically flowing downstream from [`Readable`][] to [`Writable`][].
 However, as we can observe in any transmission of data, matter, or energy, the
 source is just as important as the destination and the [`Readable`][] stream
@@ -442,11 +443,11 @@ backpressure. In this counter-example of good practice, the application's code
 forces data through whenever it is available (signaled by the
 [`.data` event][]):
 ```javascript
-// This ignores the backpressure mechanisms node has set in place,
+// This ignores the backpressure mechanisms Node.js has set in place,
 // and unconditionally pushes through data, regardless if the
 // destination stream is ready for it or not.
 readable.on('data', (data) =>
-  writable.write(data);
+  writable.write(data)
 );
 ```
 
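For contrast with the counter-example in this hunk, the backpressure-respecting alternative is to hand the bookkeeping to `.pipe()`:

```javascript
// .pipe() watches the destination's buffer state for us: it pauses the
// source when write() returns false and resumes once 'drain' fires.
readable.pipe(writable);
```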
@@ -458,15 +459,15 @@ the [`stream state machine`][] will handle our callbacks and determine when to
 handle backpressure and optimize the flow of data for us.
 
 However, when we want to use a [`Writable`][] directly, we must respect the
-[`.write()`][] return value and pay close attention these conditions:
+[`.write()`][] return value and pay close attention to these conditions:
 
 * If the write queue is busy, [`.write()`][] will return false.
 * If the data chunk is too large, [`.write()`][] will return false (the limit
   is indicated by the variable, [`highWaterMark`][]).
 
 <!-- eslint-disable indent -->
 ```javascript
-// This writable is invalid because of the async nature of javascript callbacks.
+// This writable is invalid because of the async nature of JavaScript callbacks.
 // Without a return statement for each callback prior to the last,
 // there is a great chance multiple callbacks will be called.
 class MyWritable extends Writable {
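The corrected comment above points at the fix: every `callback()` invocation before the last needs a `return`. A sketch of the valid shape, where the branch condition is illustrative rather than the guide's exact code:

```javascript
const { Writable } = require('stream');

class MyWritable extends Writable {
  _write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0) {
      return callback(); // return here, so the final callback() cannot also run
    }
    callback();
  }
}
```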
@@ -526,7 +527,7 @@ call [`.uncork()`][] the same amount of times to make it flow again.
 
 ## Conclusion
 
-Streams are a often used module in Node.js. They are important to the internal
+Streams are an often used module in Node.js. They are important to the internal
 structure, and for developers, to expand and connect across the Node.js modules
 ecosystem.
 
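The hunk header above quotes the corking rule: call [`.uncork()`][] as many times as [`.cork()`][] before data flows again. A minimal sketch, assuming `writable` is an ordinary Writable:

```javascript
writable.cork();
writable.cork();                 // corked twice
writable.write('some data');     // buffered in memory while corked
writable.uncork();               // one uncork() is not enough...
process.nextTick(() => writable.uncork()); // ...the second one lets it flow
```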
@@ -557,7 +558,7 @@ Node.js.
 [`.cork()`]: https://nodejs.org/api/stream.html#stream_writable_cork
 [`.uncork()`]: https://nodejs.org/api/stream.html#stream_writable_uncork
 
-[push method]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_push_chunk_encoding
+[`.push()`]: https://nodejs.org/docs/latest/api/stream.html#stream_readable_push_chunk_encoding
 
 [implementing Writable streams]: https://nodejs.org/docs/latest/api/stream.html#stream_implementing_a_writable_stream
 [implementing Readable streams]: https://nodejs.org/docs/latest/api/stream.html#stream_implementing_a_readable_stream
