Add S3 integration for input/output data to job executor #26
This pull request introduces first‑class support for pulling input data from S3/MinIO and pushing output data back, while preserving backward‑compatible behaviour for existing HyperFlow jobs.
Key points:
Data stager. New data-stager.js and storage/s3Adapter.js modules implement the staging logic: input files are downloaded from S3/MinIO before a job runs, and output files are uploaded back once it completes.
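A minimal sketch of what the adapter's two core operations might look like, assuming the standard @aws-sdk/client-s3 and @aws-sdk/lib-storage APIs (the function names and module shape are illustrative, not the actual PR code):

```js
// storage/s3Adapter.js -- illustrative sketch, not the actual module contents.
const fs = require('fs');
const { pipeline } = require('stream/promises');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');

const client = new S3Client({
  endpoint: process.env.HF_S3_ENDPOINT,                          // S3 or MinIO endpoint
  forcePathStyle: process.env.HF_S3_FORCE_PATH_STYLE === 'true', // typically needed for MinIO
  region: process.env.AWS_REGION || process.env.AWS_DEFAULT_REGION,
});

// Stage one input object from S3 to a local file.
async function downloadFile(bucket, key, localPath) {
  const { Body } = await client.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  await pipeline(Body, fs.createWriteStream(localPath));
}

// Stage one output file back to S3; Upload transparently switches to
// multipart transfers for large files.
async function uploadFile(bucket, key, localPath) {
  await new Upload({
    client,
    params: { Bucket: bucket, Key: key, Body: fs.createReadStream(localPath) },
  }).done();
}

module.exports = { downloadFile, uploadFile };
```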
Connector tweaks. RemoteJobConnector now refers to wf::tasksPendingCompletionHandling and wf::completedTasks through a single keys object, making it easier to mark tasks as completed or as ready for completion handling.
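For illustration, the centralized lookup could take roughly this shape (the key strings are the ones named in this PR; the class layout, method names, and store client are assumptions):

```js
// Sketch: centralizing key names in RemoteJobConnector.
class RemoteJobConnector {
  constructor(store) {          // `store` stands in for the actual backend client
    this.store = store;
    this.keys = {
      tasksPendingCompletionHandling: 'wf::tasksPendingCompletionHandling',
      completedTasks: 'wf::completedTasks',
    };
  }

  // Both operations now reference this.keys instead of repeating raw strings.
  async markTaskCompleted(taskId) {
    await this.store.sadd(this.keys.completedTasks, taskId);
  }

  async markTaskPendingCompletionHandling(taskId) {
    await this.store.sadd(this.keys.tasksPendingCompletionHandling, taskId);
  }
}

module.exports = RemoteJobConnector;
```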
Environment variables & defaults. New variables (a parsing sketch follows the list):
HF_VAR_USE_S3_IO – enable S3 downloads/uploads; off by default to maintain old behaviour.
HF_S3_ENDPOINT, HF_S3_FORCE_PATH_STYLE, AWS_REGION/AWS_DEFAULT_REGION – S3/MinIO config.
HF_S3_CONCURRENCY, HF_S3_RETRIES – concurrency and retry controls.
HF_TASK_CLEANUP_LOCAL – remove local data after successful upload.
Existing workflows without these variables continue to run as before.
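As a rough sketch, the executor might read these variables along the following lines; the concurrency and retry defaults of 4 and 3 are illustrative assumptions, not values taken from the PR:

```js
// Sketch of how the new variables might be parsed into a config object.
const s3Config = {
  enabled: ['1', 'true'].includes(process.env.HF_VAR_USE_S3_IO || ''),
  endpoint: process.env.HF_S3_ENDPOINT,
  forcePathStyle: process.env.HF_S3_FORCE_PATH_STYLE === 'true',
  region: process.env.AWS_REGION || process.env.AWS_DEFAULT_REGION,
  concurrency: parseInt(process.env.HF_S3_CONCURRENCY || '4', 10), // assumed default
  retries: parseInt(process.env.HF_S3_RETRIES || '3', 10),         // assumed default
  cleanupLocal: process.env.HF_TASK_CLEANUP_LOCAL === 'true',
};

// With HF_VAR_USE_S3_IO unset, `enabled` is false and the executor takes
// the pre-existing local I/O path, so old workflows are unaffected.
module.exports = s3Config;
```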
Dependencies. Adds @aws-sdk/client-s3, @aws-sdk/lib-storage, and minimatch, and updates amqplib to the latest version while retaining the callback-based AMQP API for backward compatibility.
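The callback style referred to here is amqplib's documented callback_api entry point; a minimal sketch of the kind of usage that keeps working after the upgrade (the queue name and URL are illustrative):

```js
// Sketch: amqplib's callback-based API, retained by this PR.
const amqp = require('amqplib/callback_api');

amqp.connect(process.env.AMQP_URL || 'amqp://localhost', (err, connection) => {
  if (err) throw err;
  connection.createChannel((err, channel) => {
    if (err) throw err;
    channel.assertQueue('hyperflow.jobs', { durable: true }); // queue name illustrative
    channel.sendToQueue('hyperflow.jobs', Buffer.from(JSON.stringify({ task: 'demo' })));
  });
});
```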