UXPROD-1816 (project: UX Product)

Real-world Load and Performance Testing Methodology



    • New Feature
    • Status: Open
    • P2
    • Resolution: Unresolved
    • XXL < 30 days (XXL is the largest available; this item is likely larger.)


      We need an environment, a set of tests, and an approach to load and performance testing that exercises the system based on realistic usage and behavior. Wait times, concurrency, sequencing, parameters, and so on can all contribute to realistic and valuable test results; if they are not grounded in real-world, expected usage, they yield garbage results that force us to chase our tail. There's no sense in optimizing an API so that it can stand up to a pounding if it will never be pounded in real life.

      We need to test front-end modules as well as the backend. We've all seen cases where the frontend uses the backend incorrectly and causes performance problems as a result. The backend components' contribution to performance is obvious.

      I think we need a few things:
      1) An environment we can use for end-to-end and API testing. We need to understand the limitations and uniqueness of these environments: how applicable are they to a live production environment, such that when an issue comes up we can determine its importance? Ideally there would be a direct correlation between the test environment and a production environment.
      2) A transparent and well-understood testing methodology. We throw around terms like "50 user test". What does that mean? What does each user do? Are they identical? If not, what is the distribution of what they do (simple vs. complex, etc.)? Are there sleep times between actions? Do users ramp up and ramp down? How long does each live?
      3) The ability to replicate results, so that we can detect regressions and reproduce or diagnose problems when needed.
      4) An understanding of what realistic system behavior is. For a given tenant, what percentage of activity is read vs. write, circulation vs. acquisitions, etc.? Some parts of FOLIO may be isolated enough that what happens in Acquisitions doesn't matter as far as Inventory or Circulation are concerned, but some pieces of the system MAY care. This may also depend on how the system is laid out in terms of deployment and load balancing (not just software architecture).
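      To make points 2 through 4 concrete, here is a minimal sketch, in plain Python, of how a "50 user test" could be specified explicitly: user count, ramp-up, think times, a weighted read/write action mix, and a fixed seed so runs are replicable. The action names and all the numbers below are illustrative assumptions, not an agreed FOLIO workload profile.

```python
import random
from dataclasses import dataclass, field

@dataclass
class LoadProfile:
    """An explicit, shareable definition of what an 'N user test' means."""
    users: int = 50               # concurrent virtual users
    ramp_up_s: int = 60           # seconds over which users are started
    think_time_s: tuple = (2, 8)  # min/max pause between actions, seconds
    duration_s: int = 600         # lifetime of each virtual user, seconds
    seed: int = 42                # fixed seed makes every run replicable
    # Weighted action mix: mostly reads, some writes (illustrative numbers).
    action_weights: dict = field(default_factory=lambda: {
        "inventory_search": 60,   # read-heavy circulation-desk work
        "checkout": 20,           # circulation writes
        "checkin": 15,
        "create_order": 5,        # acquisitions activity
    })

    def next_action(self, rng):
        # Pick the next action according to the configured distribution.
        actions = list(self.action_weights)
        weights = list(self.action_weights.values())
        return rng.choices(actions, weights=weights, k=1)[0]

def simulate_user(profile, user_id):
    """Yield (start_time, action) pairs for one virtual user.

    A per-user RNG derived from the shared seed makes each run
    identical, which is what allows regression comparisons."""
    rng = random.Random(profile.seed * 100003 + user_id)
    # Staggered start implements the ramp-up.
    t = (profile.ramp_up_s / profile.users) * user_id
    while t < profile.duration_s:
        yield t, profile.next_action(rng)
        t += rng.uniform(*profile.think_time_s)  # think time between actions

# Example: inspect the deterministic plan for virtual user #7.
profile = LoadProfile()
plan = list(simulate_user(profile, user_id=7))
mix = {a: sum(1 for _, act in plan if act == a) for a in profile.action_weights}
```

      Writing the profile down as data like this answers the questions above (what each user does, how users differ, sleep times, ramp-up, lifetime) and, because the sequence is seeded, two runs against different builds exercise the system identically.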

      This epic/issue will be used to track the various activities that relate to the overall load and performance testing approach for FOLIO.






                Jakub Skoczen, Mike Gorrell


