
Conversation

jeroenvervaeke (Member)

Proposed changes

Allow configuration of the temporary database user lifetime when connecting to an Atlas cluster.

  • config name temporaryDatabaseUserLifetimeSeconds
  • flag name --temporaryDatabaseUserLifetimeSeconds
  • env variable MDB_MCP_TEMPORARY_DATABASE_USER_LIFETIME_SECONDS

Ticket: [MCP-199] [Security] Allow users to configure the expiration timeout and reduce the default from 12 hours to 4 hours.
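
For illustration, a minimal sketch of how the three surfaces above might be used; only the names come from this description, the value and exact wiring are assumptions:

// Sketch only; names from the PR description, the default value is assumed from the ticket.
// CLI flag:    --temporaryDatabaseUserLifetimeSeconds 14400
// Environment: MDB_MCP_TEMPORARY_DATABASE_USER_LIFETIME_SECONDS=14400
const temporaryDatabaseUserLifetimeSeconds = 4 * 60 * 60; // 14400 seconds, i.e. 4 hours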

Checklist

@jeroenvervaeke jeroenvervaeke requested a review from a team as a code owner September 11, 2025 08:10
"tlsCertificateSelector",
"tlsDisabledProtocols",
"username",
"temporaryDatabaseUserLifetimeSeconds",
@gagik (Collaborator), Sep 11, 2025:

this array is duplicated from mongosh and shouldn't be modified.
In the future, we should definitely export it from mongosh and import it here, because this is doomed to be confusing.
Maybe let's move this to a separate file now with a comment?

jeroenvervaeke (Member, Author):

Could you elaborate on this comment?

Collaborator:

We don't need to (and shouldn't) modify the OPTIONS type, as that configuration belongs to code copied from mongosh, which we depend on for connection-related work. For general config changes, modifying UserConfig should be enough.
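
A rough sketch of the distinction being drawn here, using field names that appear in the defaults diff later in this thread; the actual UserConfig definition may differ:

// Sketch only: general MCP server settings belong on the server's own UserConfig,
// not on the mongosh-derived OPTIONS object used for argument parsing.
interface UserConfig {
  idleTimeoutMs: number;
  notificationTimeoutMs: number;
  httpHeaders: Record<string, string>;
  atlasTemporaryDatabaseUserLifetimeMs: number; // the setting added by this PR
}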

jeroenvervaeke (Member, Author):

I only added temporaryDatabaseUserLifetimeSeconds to the list because it didn't work without adding it.

Collaborator:

That's odd, it shouldn't depend on this.
This object should only contain mongosh-related args.

@gagik (Collaborator), Sep 11, 2025:

We should definitely deal with this now. Just create a second object which extends this. It should take a minute.

jeroenvervaeke (Member, Author):

Can this PR have a 👍, or do you require any more changes on my side?

@gagik (Collaborator), Sep 11, 2025:

We should definitely deal with this now. Just create a second object which extends this. It should take a minute.

We need to split the object between what is copied over from mongosh and what is ours.
Something like:

const MONGOSH_OPTIONS = [
// copy-paste from that mongosh file
]
const MCP_SERVER_OPTIONS = [
// ...
]
const OPTIONS = [...MONGOSH_OPTIONS, ...MCP_SERVER_OPTIONS]

(this would need to be done per field as otherwise they'd override each other)
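
A minimal sketch of the per-field merge being described, assuming the two sides are objects with the string/boolean/array/alias fields that appear elsewhere in this thread:

// Sketch only: here MONGOSH_OPTIONS and MCP_SERVER_OPTIONS are assumed to be objects of the shape
// { string: string[]; boolean: string[]; array: string[]; alias: Record<string, string> },
// and each field is spread individually so neither side clobbers the other.
const OPTIONS = {
  ...MONGOSH_OPTIONS, // keeps any mongosh-only fields such as `configuration`
  string: [...MONGOSH_OPTIONS.string, ...MCP_SERVER_OPTIONS.string],
  boolean: [...MONGOSH_OPTIONS.boolean, ...MCP_SERVER_OPTIONS.boolean],
  array: [...MONGOSH_OPTIONS.array, ...MCP_SERVER_OPTIONS.array],
  alias: { ...MONGOSH_OPTIONS.alias, ...MCP_SERVER_OPTIONS.alias },
} as const;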

jeroenvervaeke (Member, Author):

I've split the options like you suggested.

jeroenvervaeke (Member, Author):

Fixed the shallow merge issue.

@nirinchev (Collaborator) left a comment:

The new option should probably be documented in the readme. Also, we are now entering a territory where it's much more likely that an automatically created user expires while we're still connected to the cluster. We should consider how to handle such a case - we can either recreate the user and update the connection or return an error message instructing the model to re-run the atlas-connect-cluster tool. I would suggest that you test it manually to verify what the real-world behavior is and then decide whether gracefully handling it is critical for GA.
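
As a purely hypothetical sketch of the second option (returning an instructive error), with none of these names taken from the PR:

// Placeholder sketch: turn an opaque auth failure into an instruction the model can act on.
function rethrowIfTemporaryUserExpired(error: Error): never {
  if (/authentication failed/i.test(error.message)) {
    throw new Error(
      "The temporary Atlas database user may have expired. " +
        "Re-run the atlas-connect-cluster tool to create a new user and reconnect."
    );
  }
  throw error;
}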

idleTimeoutMs: 600000, // 10 minutes
notificationTimeoutMs: 540000, // 9 minutes
httpHeaders: {},
atlasTemporaryDatabaseUserLifetimeMs: 14400000, // 4 hours
Collaborator:

When representing time, I find it easier to reason about the numbers if they're expressed as a product of time units, i.e. 1000 * 60 * 60 * 4 rather than the result; it's much easier to spot mistakes that way than to do the calculation in your head.
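
Applied to the defaults in the diff above (the object name here is illustrative), the suggested style would read:

const defaults = {
  idleTimeoutMs: 1000 * 60 * 10, // 10 minutes
  notificationTimeoutMs: 1000 * 60 * 9, // 9 minutes
  atlasTemporaryDatabaseUserLifetimeMs: 1000 * 60 * 60 * 4, // 4 hours
};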

jeroenvervaeke (Member, Author):

Thanks for the feedback! I chose to use the plain number format here to maintain consistency with the other time values already defined in this struct, which all follow the same pattern.

Collaborator:

I wouldn't mind changing the others to match as a drive-by as well

jeroenvervaeke (Member, Author):

I've updated the numbers.

@jeroenvervaeke (Member, Author):

The new option should probably be documented in the readme. Also, we are now entering a territory where it's much more likely that an automatically created user expires while we're still connected to the cluster. We should consider how to handle such a case - we can either recreate the user and update the connection or return an error message instructing the model to re-run the atlas-connect-cluster tool. I would suggest that you test it manually to verify what the real-world behavior is and then decide whether gracefully handling it is critical for GA.

@nirinchev
Updated the readme

string: ["atlasTemporaryDatabaseUserLifetimeMs"],
} as const;

const OPTIONS = {
Collaborator:

This is fine. An alternative idea for consideration, but non-blocking:

function mergeOptions(...optionSources: Array<Record<string, any>>) {
    return {
        string: optionSources.flatMap(opts => opts.string ?? []),
        boolean: optionSources.flatMap(opts => opts.boolean ?? []),
        array: optionSources.flatMap(opts => opts.array ?? []),
        alias: Object.assign({}, ...optionSources.map(opts => opts.alias ?? {})),
        configuration: Object.assign({}, ...optionSources.map(opts => opts.configuration ?? {})),
    } as const;
}

const OPTIONS = mergeOptions(MONGOSH_OPTIONS, MCP_SERVER_OPTIONS);

jeroenvervaeke (Member, Author):

I have accepted your suggestion.

I had to introduce a new Options interface to make the linter happy.
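
A sketch of what such an Options interface might look like, inferred from the fields used in mergeOptions above; the actual definition in the PR may differ:

interface Options {
  string: string[];
  boolean: string[];
  array: string[];
  alias: Record<string, string>;
  configuration: Record<string, unknown>;
}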


// From: https://github.com/mongodb-js/mongosh/blob/main/packages/cli-repl/src/arg-parser.ts
const OPTIONS = {
const MONGOSH_OPTIONS = {
Collaborator:

So this is a problem. The options that you see here are not all mongosh options; they contain MCP-related options as well.

I don't have a strong opinion on fixing this right now, but consider that this is only a partial solution. We still need to address it entirely, perhaps with https://jira.mongodb.org/browse/MCP-202

@nirinchev (Collaborator):

Folks, let's hold off on the refactorings of the cli options that close to the release. I agree it's not ideal that we're intertwining mongosh and mcp arguments together, but this has been the case for a while now and besides being a code smell, hasn't really caused issues. Sorry for the back-and-forth here @jeroenvervaeke, but let's just revert all the options refactorings and go back to the state in the first commit where we just add the new option to the original OPTIONS object.

@coveralls (Collaborator) commented Sep 12, 2025:

Pull Request Test Coverage Report for Build 17668190160

Details

  • 8 of 8 (100.0%) changed or added relevant lines in 2 files are covered.
  • 6 unchanged lines in 1 file lost coverage.
  • Overall coverage decreased (-0.1%) to 81.428%

Files with Coverage Reduction:
  • src/common/connectionManager.ts: 6 new missed lines (87.78% coverage)

Totals:
  • Change from base Build 17650490388: -0.1%
  • Covered Lines: 4855
  • Relevant Lines: 5865

💛 - Coveralls

@jeroenvervaeke (Member, Author):

Folks, let's hold off on the refactorings of the cli options that close to the release. I agree it's not ideal that we're intertwining mongosh and mcp arguments together, but this has been the case for a while now and besides being a code smell, hasn't really caused issues. Sorry for the back-and-forth here @jeroenvervaeke, but let's just revert all the options refactorings and go back to the state in the first commit where we just add the new option to the original OPTIONS object.

Reverted those requested changes

@gagik (Collaborator) left a comment:

OPTIONS changes seemed low risk but SGTM

@jeroenvervaeke jeroenvervaeke merged commit 02fe6a2 into main Sep 12, 2025
20 of 23 checks passed
@jeroenvervaeke jeroenvervaeke deleted the MCP-199 branch September 12, 2025 09:55
