
Commit 13d9c91

[SPARK-51351][SS] Do not materialize the output in Python worker for TWS
### What changes were proposed in this pull request?

This PR fixes the serializer logic in the PySpark version of TWS (transformWithState) so that it does NOT materialize the output entirely. Instead of building a list, it now creates a generator, so the output can be consumed lazily.

### Why are the changes needed?

Without this PR, all outputs are materialized only when the JVM signals the Python worker that there is no further input (at task completion), which causes two critical issues:

* the downstream operator can only see outputs after the TWS operator has processed all inputs
* all outputs are materialized in memory in the Python worker, which could lead to memory issues

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing UTs. I've also confirmed manually:

* Before this PR, all outputs become available only after all inputs have been processed
* After this PR, outputs become available while inputs are still being processed

The change I made to verify the fix manually: HeartSaVioR@cd30db0

If we call run_test() in testcode.py in PySpark, the log messages `Spark pulls the iterators` and `The data is being retrieved from Python worker` are interleaved with this fix. Without the fix, the log messages appear strictly in sequence: all `Spark pulls the iterators` messages come first, and all `The data is being retrieved from Python worker` messages come later.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #50110 from HeartSaVioR/SPARK-51351.

Authored-by: Jungtaek Lim <[email protected]>
Signed-off-by: Jungtaek Lim <[email protected]>
(cherry picked from commit 496fe7a)
Signed-off-by: Jungtaek Lim <[email protected]>
1 parent 1df6fc6 commit 13d9c91
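
To illustrate why the list-to-generator change matters, here is a minimal standalone sketch of eager materialization versus lazy flattening over nested output iterators. The names (`produce_batches`, `dump_eager`, `dump_lazy`) and the print messages are hypothetical stand-ins for the TWS output iterators and the JVM-side consumer, not actual Spark code:

```python
# Minimal standalone sketch (hypothetical names, not the Spark implementation):
# eager materialization vs. lazy generator over nested output iterators.

def produce_batches(n):
    # Stand-in for a per-group output iterator produced by TWS.
    for i in range(n):
        print(f"producing batch {i}")
        yield i

def dump_eager(iterator):
    # Old behavior: the list comprehension drains every inner iterator
    # before anything is handed downstream.
    result = [b for inner in iterator for b in inner]
    for b in result:
        print(f"consuming batch {b}")

def dump_lazy(iterator):
    # New behavior: a generator yields each batch as it is produced,
    # so producing and consuming interleave.
    def flatten():
        for inner in iterator:
            yield from inner
    for b in flatten():
        print(f"consuming batch {b}")

dump_eager(iter([produce_batches(2), produce_batches(2)]))
# -> all "producing" lines first, then all "consuming" lines
dump_lazy(iter([produce_batches(2), produce_batches(2)]))
# -> "producing" and "consuming" lines interleave
```

This mirrors the interleaving of `Spark pulls the iterators` and `The data is being retrieved from Python worker` log messages described in the manual verification above.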

File tree: 1 file changed (+11, -2 lines)


python/pyspark/sql/pandas/serializers.py

Lines changed: 11 additions & 2 deletions
@@ -1223,8 +1223,17 @@ def dump_stream(self, iterator, stream):
         Read through an iterator of (iterator of pandas DataFrame), serialize them to Arrow
         RecordBatches, and write batches to stream.
         """
-        result = [(b, t) for x in iterator for y, t in x for b in y]
-        super().dump_stream(result, stream)
+
+        def flatten_iterator():
+            # iterator: iter[list[(iter[pandas.DataFrame], pdf_type)]]
+            for packed in iterator:
+                iter_pdf_with_type = packed[0]
+                iter_pdf = iter_pdf_with_type[0]
+                pdf_type = iter_pdf_with_type[1]
+                for pdf in iter_pdf:
+                    yield (pdf, pdf_type)
+
+        super().dump_stream(flatten_iterator(), stream)
 
 
 class TransformWithStateInPandasInitStateSerializer(TransformWithStateInPandasSerializer):
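
To make the nested input shape concrete, here is a hedged, self-contained example of how an input of type `iter[list[(iter[pandas.DataFrame], pdf_type)]]` gets flattened. The synthetic data (`make_group`, the `"type_a"`/`"type_b"` tags) is invented for illustration; it assumes, as the patch's `packed[0]` indexing implies, that each list pulled from the iterator holds a single (iterator, type) tuple:

```python
import pandas as pd

def flatten_iterator(iterator):
    # Same shape as in the patch: iter[list[(iter[pandas.DataFrame], pdf_type)]].
    # Each DataFrame is yielded as soon as it is pulled from the inner iterator.
    for packed in iterator:
        iter_pdf, pdf_type = packed[0]  # assumes a single-element list per the patch
        for pdf in iter_pdf:
            yield (pdf, pdf_type)

# Synthetic input mimicking two groups, each producing two DataFrames.
def make_group(tag):
    return [(iter([pd.DataFrame({"v": [1]}), pd.DataFrame({"v": [2]})]), tag)]

nested = iter([make_group("type_a"), make_group("type_b")])
for pdf, pdf_type in flatten_iterator(nested):
    print(pdf_type, len(pdf))  # prints each (type, row count) lazily
```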
