
[ci] Upload autoscaling in K8s test results #2651


Merged: 1 commit merged into trunk on Feb 10, 2025
Conversation

@selenium-ci (Member) commented on Feb 10, 2025

User description

This PR contains the results of the autoscaling tests in Kubernetes


PR Type

Tests


Description

  • Updated Kubernetes autoscaling test results across multiple scenarios.

  • Adjusted metrics for deployment count and job count strategies.

  • Refined results for chaos scenarios and node max sessions.

  • Improved consistency and accuracy in test data representation.


Changes walkthrough 📝

Relevant files (Tests):

.keda/results_test_k8s_autoscaling_deployment_count.md (+20/-20)
Updated deployment count autoscaling test results.
  • Updated test results for deployment count strategy.
  • Adjusted metrics for session creation and pod scaling.
  • Improved data consistency across iterations.

.keda/results_test_k8s_autoscaling_deployment_count_in_chaos.md (+20/-20)
Updated deployment count in chaos test results.
  • Updated test results for deployment count in chaos scenarios.
  • Refined metrics for pod scaling and session handling.
  • Enhanced accuracy in chaos scenario data.

.keda/results_test_k8s_autoscaling_deployment_count_with_node_max_sessions.md (+20/-20)
Updated node max sessions autoscaling test results.
  • Updated test results for node max sessions strategy.
  • Adjusted metrics for pod scaling and session limits.
  • Improved data representation for node constraints.

.keda/results_test_k8s_autoscaling_job_count_strategy_default.md (+20/-20)
Updated default job count strategy test results.
  • Updated test results for default job count strategy.
  • Refined metrics for session creation and pod scaling.
  • Enhanced consistency in job count data.

.keda/results_test_k8s_autoscaling_job_count_strategy_default_in_chaos.md (+20/-20)
Updated job count in chaos test results.
  • Updated test results for job count in chaos scenarios.
  • Adjusted metrics for pod scaling and session handling.
  • Improved accuracy in chaos scenario data representation.

.keda/results_test_k8s_autoscaling_job_count_strategy_default_with_node_max_sessions.md (+20/-20)
Updated job count with node max sessions test results.
  • Updated test results for job count with node max sessions.
  • Refined metrics for session creation and pod scaling.
  • Enhanced data consistency for node constraints.


    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Performance Concern

High average session creation times (roughly 32-44 s) in the early iterations may indicate scaling inefficiencies that need investigation

    | 1         | 2                    | 39.74 s               | 0                         | 2                  | 2                      | 2                  | 1                    | 0    | 2               |
    | 2         | 2                    | 39.03 s               | 0                         | 1                  | 2                      | 3                  | 1                    | 1    | 0               |
    | 3         | 3                    | 37.15 s               | 0                         | 2                  | 5                      | 5                  | 1                    | 0    | 0               |
    | 4         | 2                    | 41.82 s               | 0                         | 2                  | 7                      | 7                  | 1                    | 0    | 0               |
    | 5         | 3                    | 31.94 s               | 0                         | 3                  | 10                     | 10                 | 1                    | 0    | 0               |
    | 6         | 1                    | 43.87 s               | 0                         | 1                  | 11                     | 11                 | 1                    | 0    | 11              |
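
To make this check repeatable across all six results files, the slow iterations can be extracted mechanically. The following is a minimal sketch, assuming the column layout of the rows quoted above (iteration in the first column, average session creation time such as "39.74 s" in the third); the 30 s threshold is illustrative, not a project-defined limit.

```python
from pathlib import Path

def slow_iterations(results_md: str, threshold_s: float = 30.0):
    """Return (iteration, seconds) pairs whose average session
    creation time exceeds the threshold."""
    slow = []
    for line in Path(results_md).read_text().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) < 3 or not cells[0].isdigit():
            continue  # skip header, separator, and prose lines
        seconds = float(cells[2].rstrip("s").strip())
        if seconds > threshold_s:
            slow.append((int(cells[0]), seconds))
    return slow

print(slow_iterations(".keda/results_test_k8s_autoscaling_deployment_count.md"))
```
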
    Resource Utilization

A large, persistent gap between pod capacity (total pods × max sessions per pod) and running sessions suggests resource waste and inefficient pod allocation

    | 6         | 3                    | 18.11 s               | 0                         | 3                  | 9                      | 10                 | 3                    | 21   | 9               |
    | 7         | 3                    | 15.75 s               | 0                         | 0                  | 3                      | 9                  | 3                    | 24   | 0               |
    | 8         | 2                    | 15.64 s               | 0                         | 1                  | 5                      | 11                 | 3                    | 28   | 0               |
    | 9         | 1                    | 16.46 s               | 0                         | 0                  | 6                      | 11                 | 3                    | 27   | 0               |
    | 10        | 2                    | 13.45 s               | 0                         | 1                  | 8                      | 12                 | 3                    | 28   | 0               |
    | 11        | 3                    | 14.04 s               | 0                         | 0                  | 11                     | 12                 | 3                    | 25   | 11              |
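
The waste flagged here can be expressed as a utilization ratio. A minimal sketch, assuming the column positions of the quoted rows (running sessions 6th, total pods 7th, max sessions per pod 8th); in every row quoted above, the gaps column equals total_pods * max_sessions_per_pod - running_sessions, which is what this computes against.

```python
def utilization(row: str) -> float:
    """Fraction of pod capacity actually running sessions for one
    results-table row (columns assumed as in the rows quoted above)."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    running, total_pods, per_pod = int(cells[5]), int(cells[6]), int(cells[7])
    capacity = total_pods * per_pod
    return running / capacity if capacity else 0.0

# Iteration 7 above: 3 running sessions against 9 pods x 3 slots -> 11%.
print(f"{utilization('| 7 | 3 | 15.75 s | 0 | 0 | 3 | 9 | 3 | 24 | 0 |'):.0%}")
```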


    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category: General

Suggestion: Reduce excessive idle pod count

The high gap counts (9 and 8) in iterations 7-8 indicate inefficient pod
utilization. Consider adjusting the scaling threshold to reduce the number of
idle pods and optimize resource usage.

    .keda/results_test_k8s_autoscaling_deployment_count.md [9-10]

    -| 7         | 2                    | 14.33 s               | 0                         | 0                  | 2                      | 11                 | 1                    | 9    | 0               |
    -| 8         | 1                    | 5.54 s                | 0                         | 0                  | 3                      | 11                 | 1                    | 8    | 0               |
    +| 7         | 2                    | 14.33 s               | 0                         | -3                 | 2                      | 8                  | 1                    | 6    | 0               |
    +| 8         | 1                    | 5.54 s                | 0                         | 0                  | 3                      | 8                  | 1                    | 5    | 0               |

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 8


    Why: The suggestion addresses a significant resource efficiency issue by identifying high pod underutilization (9 gaps) and proposing concrete scaling threshold adjustments to optimize cluster resources.

Impact: Medium
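
One concrete way to act on this suggestion, as a hedged sketch rather than the project's actual mechanism: a KEDA ScaledObject exposes the HPA scale-down stabilization window, and shortening it lets idle replicas be reclaimed sooner. The ScaledObject name and namespace below are assumptions for illustration; the keda.sh/v1alpha1 group and the spec.advanced fields are real KEDA API.

```python
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().patch_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="selenium",          # assumed namespace
    plural="scaledobjects",
    name="selenium-node-chrome",   # hypothetical ScaledObject name
    body={"spec": {"advanced": {"horizontalPodAutoscalerConfig": {
        # Shrink the HPA scale-down stabilization window so unused
        # replicas are removed sooner than the default allows.
        "behavior": {"scaleDown": {"stabilizationWindowSeconds": 60}}
    }}}},
)
```
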
Suggestion: Optimize pod resource utilization

Large gap counts (22-24) in the later iterations suggest severe pod
underutilization. Implement idle-pod termination to maintain optimal
cluster efficiency.

    .keda/results_test_k8s_autoscaling_job_count_strategy_default_with_node_max_sessions.md [19-21]

    -| 17        | 3                    | 13.87 s               | 0                         | 0                  | 3                      | 9                  | 3                    | 24   | 0               |
    -| 18        | 1                    | 5.04 s                | 0                         | 0                  | 4                      | 9                  | 3                    | 23   | 0               |
    -| 19        | 1                    | 12.96 s               | 0                         | 0                  | 5                      | 9                  | 3                    | 22   | 0               |
    +| 17        | 3                    | 13.87 s               | 0                         | -3                 | 3                      | 6                  | 3                    | 15   | 0               |
    +| 18        | 1                    | 5.04 s                | 0                         | 0                  | 4                      | 6                  | 3                    | 14   | 0               |
    +| 19        | 1                    | 12.96 s               | 0                         | 0                  | 5                      | 6                  | 3                    | 13   | 0               |
    Suggestion importance[1-10]: 7


    Why: The suggestion correctly identifies severe pod underutilization with 22-24 gaps and proposes a valid solution to improve cluster efficiency through pod termination strategies.

Impact: Medium
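
As a sketch of what idle-pod termination could look like here (assumptions: the Grid service URL, the selenium namespace, and the app=selenium-node label; Grid 4 does expose a /status endpoint listing nodes and their slots, though field names should be verified against your Grid version): query /status for nodes whose slots all report no session, then delete the matching pods by IP.

```python
import requests
from kubernetes import client, config

GRID = "http://selenium-hub.selenium:4444"  # assumed Grid service URL

config.load_kube_config()
v1 = client.CoreV1Api()

# A slot's "session" is null when nothing is running on it.
nodes = requests.get(f"{GRID}/status").json()["value"]["nodes"]
idle_ips = {
    node["uri"].split("//")[1].split(":")[0]
    for node in nodes
    if all(slot.get("session") is None for slot in node["slots"])
}

for pod in v1.list_namespaced_pod(
        "selenium", label_selector="app=selenium-node").items:
    if pod.status.pod_ip in idle_ips:
        v1.delete_namespaced_pod(pod.metadata.name, "selenium")
```

Note that in practice KEDA and Grid's own draining already handle scale-down; a script like this is only a stopgap for reclaiming capacity faster.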

@VietND96 merged commit 2156202 into trunk on Feb 10, 2025 (1 check passed).
@VietND96 deleted the autoscaling-results branch on February 10, 2025 at 09:59.