
Conversation

@lukaszstolarczuk (Contributor) commented Dec 5, 2024

```cpp
Configs[DisjointPoolMemType::Device].Name = "Device";
Configs[DisjointPoolMemType::Shared].Name = "Shared";
Configs[DisjointPoolMemType::SharedReadOnly].Name = "SharedReadOnly";
ret = umfDisjointPoolParamsSetName(Configs[DisjointPoolMemType::Host], "Host");
```
Contributor:

nit: you can just do `CheckConfigRet(umfDisjointPoolParamsSetName(Configs[DisjointPoolMemType::Host], "Host"))`.

Also, I would rename `CheckConfigRet` to `UMF_CALL` or something like that, so it's shorter and more consistent with `UR_CALL`/`ZE_CALL`.
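
For illustration, a minimal sketch of what the suggested macro could look like (the `UMF_CALL` name is this thread's suggestion, not final PR code; the logger call mirrors the existing `CheckConfigRet` macro):

```cpp
// Hypothetical UMF_CALL macro, analogous to UR_CALL/ZE_CALL: evaluates the
// UMF call once and bails out of the enclosing (void) function on failure.
#define UMF_CALL(Call)                                                        \
    do {                                                                      \
        if ((Call) != UMF_RESULT_SUCCESS) {                                   \
            logger::error("DisjointPool params failed");                      \
            return;                                                           \
        }                                                                     \
    } while (0)

// Usage, folding the call and the check into one statement:
UMF_CALL(umfDisjointPoolParamsSetName(Configs[DisjointPoolMemType::Host], "Host"));
```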

Contributor Author:

Done

```cpp
for (auto &Config : AllConfigs.Configs) {
    Config.MaxPoolableSize = AllConfigs.Configs[LM].MaxPoolableSize;
    ret = umfDisjointPoolParamsSetMaxPoolableSize(Config, TmpValue);
    CheckConfigRet(ret);
```
Contributor:

This return will only exit this lambda, not the entire `parseDisjointPoolConfig` function.
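
To make the pitfall concrete, a sketch (illustrative shape, not the PR's actual code): the inner return unwinds only the lambda, so the status has to be propagated to the caller explicitly.

```cpp
// A return inside the lambda body exits only the lambda itself; the
// enclosing parseDisjointPoolConfig keeps running unless the result is
// checked at the call site.
auto SetSizes = [&]() -> umf_result_t {
    for (auto &Config : AllConfigs.Configs) {
        umf_result_t Ret = umfDisjointPoolParamsSetMaxPoolableSize(Config, TmpValue);
        if (Ret != UMF_RESULT_SUCCESS) {
            return Ret; // exits the lambda only
        }
    }
    return UMF_RESULT_SUCCESS;
};
if (SetSizes() != UMF_RESULT_SUCCESS) {
    return; // this return actually leaves parseDisjointPoolConfig
}
```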

Contributor Author:

Done

```cpp
Config.SharedLimits = AllConfigs.limits.get();
Config.PoolTrace = trace;
umfDisjointPoolParamsSetSharedLimits(Config, AllConfigs.limits.get());
umfDisjointPoolParamsSetTrace(Config, trace);
```
Contributor:

no return check?
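
For example, both setters return a `umf_result_t` that could go through the same check as the neighbouring calls (sketch based on the `CheckConfigRet` macro from this PR):

```cpp
// Route both setters' results through the existing check instead of
// discarding them.
ret = umfDisjointPoolParamsSetSharedLimits(Config, AllConfigs.limits.get());
CheckConfigRet(ret);
ret = umfDisjointPoolParamsSetTrace(Config, trace);
CheckConfigRet(ret);
```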

Contributor Author:

Done

```cpp
#define CheckConfigRet(umf_ret)                                               \
    if (umf_ret != UMF_RESULT_SUCCESS) {                                      \
        logger::error("DisjointPool params failed");                          \
        return;                                                               \
    }
```
Contributor:

Should we just ignore the error? If any of the setParams functions fails and we just exit the function, we might use an incomplete config later on.

For calls in `DisjointPoolAllConfigs::DisjointPoolAllConfigs` I think we should have an assert, and for calls in `parseDisjointPoolConfig`, if there is any error, we should catch it, log it, and probably revert all configs to the default settings.
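
A sketch of that split (the revert-to-defaults line is an assumption about how a reset could look, not code from this PR):

```cpp
// In DisjointPoolAllConfigs::DisjointPoolAllConfigs the inputs are fixed, so
// a failure is a programming error and an assert (from <cassert>) is enough:
umf_result_t Ret =
    umfDisjointPoolParamsSetName(Configs[DisjointPoolMemType::Host], "Host");
assert(Ret == UMF_RESULT_SUCCESS);
(void)Ret; // keep release builds warning-free

// In parseDisjointPoolConfig the values come from user input, so log the
// error and fall back to defaults instead of keeping a partial config:
if (umfDisjointPoolParamsSetMaxPoolableSize(Config, TmpValue) !=
    UMF_RESULT_SUCCESS) {
    logger::error("DisjointPool params failed, reverting to defaults");
    AllConfigs = DisjointPoolAllConfigs(); // hypothetical revert to defaults
    return;
}
```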

Contributor Author:

Done

github-actions bot added the level-zero (L0 adapter specific issues) label on Dec 6, 2024.
```cpp
       << std::setw(12)
       << AllConfigs.Configs[DisjointPoolMemType::SharedReadOnly].Capacity
       << std::endl;
// TODO: fixme, accessing config values directly is no longer allowed - the APIs have changed
```
Contributor Author:

I tried to do this nicely, but I'm not sure if I can... @igchor, I tried to make the pre-generated content in the ctor, but it would have to be updated somehow in the parsing function... If this is not crucial, I could update that in a separate PR...?
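
For reference, one possible workaround (purely illustrative; the shadow `Capacity` field and `setCapacity` helper are assumptions, not what the PR does): keep a UR-side copy of each value next to the UMF params handle, so the trace table can still be printed without reading values back through the API.

```cpp
// Illustrative only: the new UMF API has no getters for these values, so a
// shadow copy recorded at set time is one way to keep the printout working.
struct PoolConfig {
    umf_disjoint_pool_params_handle_t Params = nullptr;
    size_t Capacity = 0; // shadow of the value last passed to the setter
};

umf_result_t setCapacity(PoolConfig &Config, size_t Capacity) {
    umf_result_t Ret = umfDisjointPoolParamsSetCapacity(Config.Params, Capacity);
    if (Ret == UMF_RESULT_SUCCESS) {
        Config.Capacity = Capacity; // keep the printable copy in sync
    }
    return Ret;
}
```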

github-actions bot added the ci/cd (Continuous integration/delivery), cuda (CUDA adapter specific issues), and hip (HIP adapter specific issues) labels on Dec 6, 2024.
@lukaszstolarczuk (Contributor Author):

replaced with #2436
