
Commit 66d590b

sprasad-microsoft authored and Steve French committed
cifs: deal with the channel loading lag while picking channels
Our current approach to select a channel for sending requests is this:
1. iterate all channels to find the min and max queue depth
2. if min and max are not the same, pick the channel with min depth
3. if min and max are the same, round robin, as all channels are equally loaded

The problem with this approach is that there's a lag between selecting a channel and sending the request (which increases the queue depth on that channel). While these numbers eventually catch up, there can be a skew in channel usage, depending on the application's I/O parallelism and the server's speed of handling requests. With sufficient parallelism, this lag can artificially inflate the queue depth and hurt performance.

This change modifies step 1 above to start the iteration from the last selected channel, which reduces the skew in channel usage even in the presence of this lag.

Fixes: ea90708 ("cifs: use the least loaded channel for sending requests")
Cc: <[email protected]>
Signed-off-by: Shyam Prasad N <[email protected]>
Signed-off-by: Steve French <[email protected]>
1 parent cc55f65 commit 66d590b

File tree

1 file changed: +7, -7 lines changed


fs/smb/client/transport.c

Lines changed: 7 additions & 7 deletions
@@ -1018,14 +1018,16 @@ struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
 	uint index = 0;
 	unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
 	struct TCP_Server_Info *server = NULL;
-	int i;
+	int i, start, cur;
 
 	if (!ses)
 		return NULL;
 
 	spin_lock(&ses->chan_lock);
+	start = atomic_inc_return(&ses->chan_seq);
 	for (i = 0; i < ses->chan_count; i++) {
-		server = ses->chans[i].server;
+		cur = (start + i) % ses->chan_count;
+		server = ses->chans[cur].server;
 		if (!server || server->terminate)
 			continue;
 
@@ -1042,17 +1044,15 @@ struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
 		 */
 		if (server->in_flight < min_in_flight) {
 			min_in_flight = server->in_flight;
-			index = i;
+			index = cur;
 		}
 		if (server->in_flight > max_in_flight)
 			max_in_flight = server->in_flight;
 	}
 
 	/* if all channels are equally loaded, fall back to round-robin */
-	if (min_in_flight == max_in_flight) {
-		index = (uint)atomic_inc_return(&ses->chan_seq);
-		index %= ses->chan_count;
-	}
+	if (min_in_flight == max_in_flight)
+		index = (uint)start % ses->chan_count;
 
 	server = ses->chans[index].server;
 	spin_unlock(&ses->chan_lock);
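For illustration, here is a minimal, self-contained sketch of the post-patch selection policy using plain C stand-ins rather than the kernel structures. The struct chan type, the pick_channel_index() helper, and the chan_seq parameter are hypothetical stand-ins for ses->chans[], cifs_pick_channel(), and ses->chan_seq; the real code also holds ses->chan_lock while scanning and checks both a NULL server and server->terminate, which the sketch only approximates.

/*
 * Sketch of the selection policy after this patch: start the scan at the
 * last round-robin position, remember the least-loaded channel, and fall
 * back to plain round-robin when all channels carry the same load.
 */
#include <limits.h>
#include <stdio.h>

struct chan {
	unsigned int in_flight;   /* requests currently queued on this channel */
	int          terminated;  /* skip channels that are being torn down */
};

/* chan_seq plays the role of ses->chan_seq; it is bumped on every pick */
static unsigned int pick_channel_index(struct chan *chans, int chan_count,
				       unsigned int *chan_seq)
{
	unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
	unsigned int index = 0;
	unsigned int start = ++(*chan_seq);
	int i, cur;

	for (i = 0; i < chan_count; i++) {
		/* begin the scan at the last selected position */
		cur = (start + i) % chan_count;
		if (chans[cur].terminated)
			continue;

		if (chans[cur].in_flight < min_in_flight) {
			min_in_flight = chans[cur].in_flight;
			index = cur;
		}
		if (chans[cur].in_flight > max_in_flight)
			max_in_flight = chans[cur].in_flight;
	}

	/* all channels equally loaded: plain round-robin */
	if (min_in_flight == max_in_flight)
		index = start % chan_count;

	return index;
}

int main(void)
{
	struct chan chans[3] = { { 4, 0 }, { 2, 0 }, { 4, 0 } };
	unsigned int seq = 0;

	/* picks channel 1, the least loaded of the three */
	printf("picked channel %u\n", pick_channel_index(chans, 3, &seq));
	return 0;
}

Starting the scan at the previous round-robin position means that when several channels report the same in-flight count, because recently issued requests have not yet been reflected in the counters, successive picks still rotate across the channels instead of repeatedly landing on the same index.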
