
Injection

Injection profiles, differences between open and closed workload models

You define the injection profile of users with the injectOpen and injectClosed methods (just inject in Scala). These methods take as an argument a sequence of injection steps that is processed sequentially.

Open vs Closed Workload Models

When it comes to load models, systems behave in two different ways:

  • Closed systems, where you control the concurrent number of users
  • Open systems, where you control the arrival rate of users

    Make sure to use the load model that matches the load your live system experiences.

    Closed systems are systems where the number of concurrent users is capped. At full capacity, a new user can effectively enter the system only once another exits.

    Typical systems that behave this way are:

  • call centers where all operators are busy
  • ticketing websites where users get placed into a queue when the system is at full capacity

    On the contrary, open systems have no control over the number of concurrent users: users keep on arriving even though applications have trouble serving them. Most websites behave this way.

    Don’t reason in terms of concurrent users if your system can’t push excess traffic into a queue.

    If you're using a closed workload model in your load tests while your system actually is an open one, your test is broken and you're testing some different, imaginary behavior. In such a case, when the system under test starts to struggle, response times will increase, journeys will take longer, the number of concurrent users will grow, and the injector will slow down to match the imaginary cap you've set.

    You can read more about open and closed models here and on our blog.

    Open and closed workload models are antinomical and you can’t mix them in the same injection profile.

    Open Model

    // Java
    setUp(
      scn.injectOpen(
        nothingFor(4), // 1
        atOnceUsers(10), // 2
        rampUsers(10).during(5), // 3
        constantUsersPerSec(20).during(15), // 4
        constantUsersPerSec(20).during(15).randomized(), // 5
        rampUsersPerSec(10).to(20).during(10), // 6
        rampUsersPerSec(10).to(20).during(10).randomized(), // 7
        stressPeakUsers(1000).during(20) // 8
      ).protocols(httpProtocol)
    );

    // Kotlin
    setUp(
      scn.injectOpen(
        nothingFor(4), // 1
        atOnceUsers(10), // 2
        rampUsers(10).during(5), // 3
        constantUsersPerSec(20.0).during(15), // 4
        constantUsersPerSec(20.0).during(15).randomized(), // 5
        rampUsersPerSec(10.0).to(20.0).during(10), // 6
        rampUsersPerSec(10.0).to(20.0).during(10).randomized(), // 7
        stressPeakUsers(1000).during(20) // 8
      ).protocols(httpProtocol)
    )

    // Scala
    setUp(
      scn.inject(
        nothingFor(4), // 1
        atOnceUsers(10), // 2
        rampUsers(10).during(5), // 3
        constantUsersPerSec(20).during(15), // 4
        constantUsersPerSec(20).during(15).randomized, // 5
        rampUsersPerSec(10).to(20).during(10.minutes), // 6
        rampUsersPerSec(10).to(20).during(10.minutes).randomized, // 7
        stressPeakUsers(1000).during(20) // 8
      ).protocols(httpProtocol)
    )

    The building blocks for open model profile injection are:

  • nothingFor(duration): Pause for a given duration.
  • atOnceUsers(nbUsers): Injects a given number of users at once.
  • rampUsers(nbUsers).during(duration): Injects a given number of users distributed evenly over a time window of a given duration.
  • constantUsersPerSec(rate).during(duration): Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
  • constantUsersPerSec(rate).during(duration).randomized: Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
  • rampUsersPerSec(rate1).to(rate2).during(duration): Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
  • rampUsersPerSec(rate1).to(rate2).during(duration).randomized: Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
  • stressPeakUsers(nbUsers).during(duration): Injects a given number of users following a smooth approximation of the Heaviside step function stretched to a given duration.

    Rates can be expressed as fractional values.
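    As a sanity check on an open profile, you can estimate the total number of injected users by hand: fixed-count steps contribute their count, constant-rate steps contribute rate × duration, and linear ramps contribute their average rate × duration. Here is a back-of-the-envelope sketch in Python (plain arithmetic, not the Gatling DSL) for the profile shown above, taking all durations as seconds:

```python
# Estimate total injected users and total wall-clock duration for the
# open profile above. Each tuple is (users_contributed, duration_seconds);
# ramps contribute average_rate * duration.
steps = [
    (0, 4),                     # nothingFor(4)
    (10, 0),                    # atOnceUsers(10)
    (10, 5),                    # rampUsers(10).during(5)
    (20 * 15, 15),              # constantUsersPerSec(20).during(15)
    (20 * 15, 15),              # ...randomized: same total, random intervals
    ((10 + 20) / 2 * 10, 10),   # rampUsersPerSec(10).to(20).during(10)
    ((10 + 20) / 2 * 10, 10),   # ...randomized variant
    (1000, 20),                 # stressPeakUsers(1000).during(20)
]

total_users = sum(users for users, _ in steps)
total_duration = sum(d for _, d in steps)
print(total_users, total_duration)  # → 1920.0 79
```

    This kind of estimate is useful for checking that your injector and feeders can supply enough virtual users before you run the test.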

    Closed Model

       
    // Java
    setUp(
      scn.injectClosed(
        constantConcurrentUsers(10).during(10), // 1
        rampConcurrentUsers(10).to(20).during(10) // 2
      ).protocols(httpProtocol)
    );

    // Kotlin
    setUp(
      scn.injectClosed(
        constantConcurrentUsers(10).during(10), // 1
        rampConcurrentUsers(10).to(20).during(10) // 2
      ).protocols(httpProtocol)
    )

    // Scala
    setUp(
      scn.inject(
        constantConcurrentUsers(10).during(10), // 1
        rampConcurrentUsers(10).to(20).during(10) // 2
      ).protocols(httpProtocol)
    )

    The building blocks for closed model profile injection are:

  • constantConcurrentUsers(nbUsers).during(duration): Injects users so that the number of concurrent users in the system stays constant
  • rampConcurrentUsers(fromNbUsers).to(toNbUsers).during(duration): Injects users so that the number of concurrent users in the system ramps linearly from one number to another

    Ramping down the number of concurrent users won't force existing users to stop. The only way for virtual users to terminate is to complete their scenario.
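    To visualize what a closed profile targets, here is a small Python sketch (plain arithmetic, not Gatling code) of the target concurrency over time for the profile above: 10 concurrent users held for 10 seconds, then a linear ramp from 10 to 20 over the next 10 seconds:

```python
# Target concurrency at time t (seconds) for:
#   constantConcurrentUsers(10).during(10)
#   rampConcurrentUsers(10).to(20).during(10)
def target_concurrency(t: float) -> float:
    if t < 10:                              # constant level
        return 10
    if t <= 20:                             # linear ramp from 10 to 20
        return 10 + (t - 10) * (20 - 10) / 10
    return 20                               # end of profile; no further change modeled

print([target_concurrency(t) for t in (0, 5, 10, 15, 20)])
# → [10, 10, 10.0, 15.0, 20.0]
```

    The injector continuously starts new users (or waits) so that the live concurrency tracks this target curve.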

    Meta DSL

    The Meta DSL lets you write some tests in a more concise way. If you want to chain levels and ramps to reach the limit of your application (a test sometimes called capacity load testing), you could build the profile manually with the regular DSL, looping with map and flatMap, but the Meta DSL now offers an alternative.

    incrementUsersPerSec

       
    // Java
    setUp(
      // generate an open workload injection profile
      // with levels of 10, 15, 20, 25 and 30 arriving users per second
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.injectOpen(
        incrementUsersPerSec(5.0)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10) // Double
      ).protocols(httpProtocol)
    );

    // Kotlin
    setUp(
      // generate an open workload injection profile
      // with levels of 10, 15, 20, 25 and 30 arriving users per second
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.injectOpen(
        incrementUsersPerSec(5.0)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10.0) // Double
      ).protocols(httpProtocol)
    )

    // Scala
    setUp(
      // generate an open workload injection profile
      // with levels of 10, 15, 20, 25 and 30 arriving users per second
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.inject(
        incrementUsersPerSec(5.0)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10) // Double
      ).protocols(httpProtocol)
    )

    incrementConcurrentUsers

       
    // Java
    setUp(
      // generate a closed workload injection profile
      // with levels of 10, 15, 20, 25 and 30 concurrent users
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.injectClosed(
        incrementConcurrentUsers(5)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10) // Int
      ).protocols(httpProtocol)
    );

    // Kotlin
    setUp(
      // generate a closed workload injection profile
      // with levels of 10, 15, 20, 25 and 30 concurrent users
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.injectClosed(
        incrementConcurrentUsers(5)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10) // Int
      ).protocols(httpProtocol)
    )

    // Scala
    setUp(
      // generate a closed workload injection profile
      // with levels of 10, 15, 20, 25 and 30 concurrent users
      // each level lasting 10 seconds
      // separated by linear ramps lasting 10 seconds
      scn.inject(
        incrementConcurrentUsers(5)
          .times(5)
          .eachLevelLasting(10)
          .separatedByRampsLasting(10)
          .startingFrom(10) // Int
      ).protocols(httpProtocol)
    )

    incrementUsersPerSec is for open workloads and incrementConcurrentUsers is for closed workloads (users per second vs concurrent users).

    separatedByRampsLasting and startingFrom are both optional. If you don't specify a ramp, the test will jump from one level to the next as soon as the previous one finishes. If you don't specify the number of starting users, the test will start at 0 concurrent users or 0 users per second and move to the first level right away.
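    To make the meta step concrete, here is an illustrative expansion in Python (not the Gatling implementation; the expand function and the tuple format are invented for this sketch) of incrementUsersPerSec(5).times(5).eachLevelLasting(10).separatedByRampsLasting(10).startingFrom(10) into the equivalent sequence of basic open-model steps:

```python
# Expand an incrementUsersPerSec-style meta step into basic open-model steps.
# Tuples mimic constantUsersPerSec(rate).during(d) and
# rampUsersPerSec(from).to(to).during(d).
def expand(increment, times, level_duration, ramp_duration=0, starting_from=0):
    steps = []
    rate = starting_from
    for i in range(times):
        if i > 0 and ramp_duration:
            # linear ramp between two consecutive levels
            steps.append(("rampUsersPerSec", rate - increment, rate, ramp_duration))
        steps.append(("constantUsersPerSec", rate, level_duration))
        rate += increment
    return steps

profile = expand(5, 5, 10, ramp_duration=10, starting_from=10)
for step in profile:
    print(step)
```

    The result alternates five constant levels at 10, 15, 20, 25 and 30 users per second with four connecting ramps, which is exactly what the comments in the blocks above describe.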

    Concurrent Scenarios

    You can configure multiple scenarios in the same setUp block to start at the same time and execute concurrently.

       
    // Java
    setUp(
      scenario1.injectOpen(injectionProfile1),
      scenario2.injectOpen(injectionProfile2)
    );

    // Kotlin
    setUp(
      scenario1.injectOpen(injectionProfile1),
      scenario2.injectOpen(injectionProfile2)
    )

    // Scala
    setUp(
      scenario1.inject(injectionProfile1),
      scenario2.inject(injectionProfile2)
    )

    Sequential Scenarios

    It's also possible to chain scenarios with andThen, so that children scenarios only start once all the users in the parent scenario have terminated.

       
    // Java
    setUp(
      parent.injectClosed(injectionProfile)
        // child1 and child2 will start at the same time, once the last parent user terminates
        .andThen(
          child1.injectClosed(injectionProfile)
            // grandChild will start once the last child1 user terminates
            .andThen(grandChild.injectClosed(injectionProfile)),
          child2.injectClosed(injectionProfile)
        ).andThen(
          // child3 will start once the last grandChild and child2 users terminate
          child3.injectClosed(injectionProfile)
        )
    );

    // Kotlin
    setUp(
      parent.injectClosed(injectionProfile)
        // child1 and child2 will start at the same time, once the last parent user terminates
        .andThen(
          child1.injectClosed(injectionProfile)
            // grandChild will start once the last child1 user terminates
            .andThen(grandChild.injectClosed(injectionProfile)),
          child2.injectClosed(injectionProfile)
        ).andThen(
          // child3 will start once the last grandChild and child2 users terminate
          child3.injectClosed(injectionProfile)
        )
    )

    // Scala
    setUp(
      parent.inject(injectionProfile)
        // child1 and child2 will start at the same time, once the last parent user terminates
        .andThen(
          child1.inject(injectionProfile)
            // grandChild will start once the last child1 user terminates
            .andThen(grandChild.inject(injectionProfile)),
          child2.inject(injectionProfile)
        ).andThen(
          // child3 will start once the last grandChild and child2 users terminate
          child3.inject(injectionProfile)
        )
    )

    When chaining andThen calls, Gatling will define the new children to only start once all the users of the previous children have terminated, descendants included.
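    The sequencing rule can be sketched as a toy timeline computation in Python (not Gatling internals; all durations here are invented for illustration): each child starts at the maximum finish time of everything it waits on, descendants included.

```python
# Toy model of andThen sequencing: a child starts only when every
# population it depends on (descendants included) has finished.
def start_time(finish_times):
    return max(finish_times)

parent_finish = 30                                   # last parent user ends at t=30
child1_finish = start_time([parent_finish]) + 20     # child1 runs 20s from t=30
child2_finish = start_time([parent_finish]) + 10     # child2 runs 10s from t=30
grandchild_finish = start_time([child1_finish]) + 5  # grandChild runs 5s from t=50

# child3 waits on the whole previous generation, grandChild included
child3_start = start_time([grandchild_finish, child2_finish])
print(child3_start)  # → 55: grandChild ends at t=55, child2 at t=40
```

    Note that child3 does not start at child2's finish time (t=40) even though child2 ended first; it waits for the slowest branch of the previous generation.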

    Disabling Gatling Enterprise Load Sharding

    By default, Gatling Enterprise will distribute your injection profile amongst all injectors when running a distributed test from multiple nodes.

    This might not be the desired behavior, typically when running an initial scenario with a single user in order to fetch some auth token to be used by the actual scenario. Indeed, only one node would run this user, leaving the other nodes without an initialized token.

    You can use noShard to disable load sharding. In this case, all the nodes will use the injection and throttling profiles as defined in the Simulation.
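    To illustrate why sharding breaks the single-user pattern, here is a toy Python model of load splitting across injector nodes (the even-split-with-remainder distribution is an assumption for this sketch; Gatling Enterprise's actual algorithm may differ):

```python
# Toy model of load sharding across injector nodes (not Gatling internals).
def users_per_node(total_users, n_nodes, no_shard=False):
    if no_shard:
        return total_users              # every node injects the full count
    base = total_users // n_nodes       # even split...
    remainder = total_users % n_nodes   # ...with the remainder spread out
    return [base + (1 if i < remainder else 0) for i in range(n_nodes)]

print(users_per_node(10, 4))                # → [3, 3, 2, 2]
print(users_per_node(1, 4))                 # → [1, 0, 0, 0] only one node gets the user
print(users_per_node(1, 4, no_shard=True))  # → 1 on every node
```

    The middle case is the auth-token problem described above: with sharding, atOnceUsers(1) lands on a single node, so the other nodes never run the setup user.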

       
    // Java
    setUp(
      // parent load won't be sharded
      parent.injectOpen(atOnceUsers(1)).noShard()
        .andThen(
          // child load will be sharded
          child1.injectClosed(injectionProfile)
        )
    );

    // Kotlin
    setUp(
      // parent load won't be sharded
      parent.injectOpen(atOnceUsers(1)).noShard()
        .andThen(
          // child load will be sharded
          child1.injectClosed(injectionProfile)
        )
    )

    // Scala
    setUp(
      // parent load won't be sharded
      parent.inject(atOnceUsers(1)).noShard
        .andThen(
          // child load will be sharded
          child1.inject(injectionProfile)
        )
    )