Wednesday, August 14 • 3:15pm - 4:15pm
5 Levels of API Test Automation

In my context we run a microservice architecture with a large number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross-dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting the response code or body for valid data / type definitions. This proved to be more and more challenging: the push for CI and having common data sources meant dependencies would go up and down per deployment, which meant flaky tests.
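For illustration, that traditional style amounts to something like this minimal pytest/requests sketch; the endpoint, params, and fields are hypothetical:

```python
import requests

BASE_URL = "https://shared-env.example.test"  # hypothetical shared environment

def test_get_account_returns_valid_data():
    # Invoke the endpoint with the relevant query params...
    response = requests.get(f"{BASE_URL}/accounts/123", params={"include": "balance"})

    # ...then assert the response code and body for valid data / type definitions.
    assert response.status_code == 200
    body = response.json()
    assert isinstance(body["id"], str)
    assert isinstance(body["balance"], (int, float))
```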
I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit-integration tests):
  1. Mocked black-box testing - where you start up an API (Docker image) identical in version to the one that would go to PROD, but mock out all its surrounding dependencies. This gives you freedom to set up any known data permutation, and you can simulate network or failure states of those dependencies (a sketch follows this list).
  2. Temp namespaced API in your CI environment - here you start up your API as it would in a normal integrated env, but in a temp space that can be completely destroyed if tests fail… it never gets to the deploy stage, and there is no need to roll back if errors occur; use Kubernetes and CI config to orchestrate these tests. The tests' focus is to check the 80-20 functionality and confirm that the API will meet all the acceptance criteria.
  3. Post-deployment tests - usually called smoke testing, to verify that an API is up and that critical functionality is working in a fully integrated environment.
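For level 1, a minimal pytest sketch of the idea, assuming a hypothetical quotes API (the PROD-identical Docker image) that calls a downstream rates service stubbed out with WireMock; the endpoint names, ports, and response shapes are illustrative, not the actual suite:

```python
import requests

API_URL = "http://localhost:8080"   # the PROD-identical image under test (hypothetical port)
MOCK_URL = "http://localhost:9090"  # WireMock standing in for a downstream dependency

def stub_downstream(status, body=None):
    # Register a stub via WireMock's admin API so the downstream call the API
    # makes returns a controlled response (or a simulated failure state).
    requests.post(f"{MOCK_URL}/__admin/mappings", json={
        "request": {"method": "GET", "urlPath": "/rates/latest"},
        "response": {"status": status, "jsonBody": body or {}},
    }).raise_for_status()

def test_known_data_permutation():
    # Freedom over data: the dependency returns exactly the permutation we want.
    stub_downstream(200, {"rate": 1.25})
    response = requests.get(f"{API_URL}/quotes/USD-ZAR")
    assert response.status_code == 200
    assert response.json()["rate"] == 1.25

def test_api_fails_cleanly_when_dependency_is_down():
    # Simulate a failure state of the dependency...
    stub_downstream(503)
    # ...and check the black-box behaviour: a clean error, not a crash or hang.
    response = requests.get(f"{API_URL}/quotes/USD-ZAR", timeout=5)
    assert response.status_code == 503
    assert "error" in response.json()
```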
We should be happy by now, right? Fairly happy that the API does what it says on the box… but…
  4. Environment stability tests - tests that run every few minutes in an integrated env and make sure all services are highly available given all the deployments that have occurred. Use GitLab to control the scheduling.
  5. Data explorer tests - these are tests that run periodically but use some randomisation to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding the edge cases that are usually missed: low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data (see the sketch after this list).
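A rough sketch of the data-explorer idea, assuming a hypothetical customers table on a SQLite read replica; the real extractor and its notion of "strange" data are specific to our stack:

```python
import random
import sqlite3
import requests

API_URL = "http://localhost:8080"  # hypothetical API under test

def sample_unusual_customer_ids(db_path, n=10):
    # Pull "strange" rows - here, crudely, null emails or unusually long names -
    # then sample a few at random to use as test inputs.
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id FROM customers WHERE email IS NULL OR length(name) > 100"
    ).fetchall()
    conn.close()
    return [r[0] for r in random.sample(rows, min(n, len(rows)))]

def test_api_handles_edge_case_customers():
    for customer_id in sample_unusual_customer_ids("replica.db"):
        response = requests.get(f"{API_URL}/customers/{customer_id}")
        # The invariant: no edge-case row should ever cause a 5xx.
        assert response.status_code < 500
```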
I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I would also touch on the tooling we use to achieve this and the pros/cons of this approach.

Speakers
Shekhar Ramphal

Quality assurance technical lead, Allan Gray
Shekhar is passionate about software testing and is a computer engineer by qualification. He has experience in full-stack testing across all areas, from manual QA, system design, and architecture to performance and security, as well as automation in different languages.


Wednesday August 14, 2019 3:15pm - 4:15pm EDT
Sea Oats