Quality Assurance Practices

Our Quality Assurance Practices lay out the testing practices we follow at CodeLink. They are a guide to help us apply a consistently high standard of testing, follow best practices in everything we test, and continually improve the software we develop.

Testing in Development

When development starts, all new code and any modifications to legacy code are backed by unit tests. Unit tests verify that the system works at a technical level and that there are no errors in the logic.

Any commit to the code repository should trigger a run of the unit tests on the CI server. This gives the development team fast feedback.
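
As a minimal sketch of what this looks like, assuming a JavaScript project that uses Jest as its test runner (the cart module and its functions are hypothetical):

  // cart.test.js — unit test for a hypothetical cart module.
  // It exercises the logic at a technical level, independent of any UI.
  const { addItem, total } = require('./cart');

  test('adding an item updates the total', () => {
    const cart = addItem({ items: [] }, { name: 'Book', price: 12 });
    expect(cart.items).toHaveLength(1);
    expect(total(cart)).toBe(12);
  });

On the CI server, the same suite runs on every commit (for example via npm test), so a breaking change is flagged within minutes.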

Automated Acceptance Tests and Non-functional Testing

The automated acceptance tests include integration tests and UI tests, which verify that the software works at a functional level and meets users’ requirements and specifications.

Most of the automated acceptance tests we implement are written in the Gherkin language and executed with a BDD tool such as Cucumber. Some tests are written as step-by-step scripts in JavaScript or Ruby.
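
For illustration, here is a sketch of how a Gherkin scenario maps onto JavaScript step definitions in cucumber-js. The scenario, its steps, and the sign-in helper are all invented for this example:

  // steps/login.steps.js — step definitions for a Gherkin scenario such as:
  //   Scenario: Registered user signs in
  //     Given a registered user "ana@example.com"
  //     When she signs in with a valid password
  //     Then she sees her dashboard
  const { Given, When, Then } = require('@cucumber/cucumber');
  const assert = require('assert');

  // Stand-in for the application's real sign-in call (hypothetical).
  const signIn = async (email, password) =>
    password === 'secret' ? { page: 'dashboard' } : { page: 'login' };

  Given('a registered user {string}', function (email) {
    this.user = { email, password: 'secret' }; // seeded test account
  });

  When('she signs in with a valid password', async function () {
    this.response = await signIn(this.user.email, this.user.password);
  });

  Then('she sees her dashboard', function () {
    assert.strictEqual(this.response.page, 'dashboard');
  });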

Non-functional tests (performance and security) are just as important as functional tests and therefore need to be executed on each major deployment. We use JMeter and Flood.io for performance testing.

For security testing, we work with third-party vendors to perform penetration tests as well as full-scale security checks when requested by our clients.

To get the most benefit out of automated deployments, it is vital that this be a completely automated process requiring very little maintenance. This means there should be no intermittent test failures, no test script issues, and no broken environments.

Failures should be caused only by genuine code defects, not by script issues. To keep results consistent, any test that fails for another reason should be fixed immediately or removed from the automation pack.
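
One way to enforce this is to quarantine unreliable tests by tag so they never pollute the main run. A sketch using cucumber-js profiles (the @quarantined tag is our own illustrative convention, and the exact config syntax varies by Cucumber version):

  // cucumber.js — profile definitions for cucumber-js.
  // The default run excludes anything tagged @quarantined, so intermittent
  // failures never affect the main results. Quarantined scenarios are
  // either fixed and un-tagged, or deleted from the pack.
  module.exports = {
    default: '--tags "not @quarantined"',
    quarantine: '--tags "@quarantined"', // run separately while investigating
  };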

Smoke Testing

We run smoke tests covering all high-level functionality to make sure the application is stable enough for further development or testing.

Regression Testing

We run regression tests to ensure that newly implemented features have not broken any existing functionality. Once proper automated tests are in place, very little manual regression testing should be needed.

The full regression suite covers all features, broken down by action, in more detail than the smoke tests.
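
Both suites can be driven from the same feature files. As a sketch, extending the profile idea above (@smoke is an illustrative tag name, not a Cucumber built-in):

  // cucumber.js — suite selection by tag.
  // @smoke marks the high-level scenarios; the full regression run
  // simply executes every scenario, smoke ones included.
  module.exports = {
    smoke: '--tags "@smoke"', // fast stability pass after a deploy
    default: '',              // no tag filter: the complete regression suite
  };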

UAT and Exploratory Testing

UAT and exploratory testing are run in parallel with automated acceptance tests. The aim of UAT is to ensure that the developed features make business sense and fulfill business requirements.

The Product Owner (PO) runs user acceptance tests to confirm that newly deployed features and new product releases align with business requirements and meet users’ expectations.

Exploratory testing focuses on user scenarios and should find bugs that the automated tests may miss. It should not turn up trivial bugs; rather, it should uncover subtle issues with the product feature set.

Test Environments

All new features are deployed, in order, to the following environments:

  1. Staging Environment

    Staging Environment is used to deploy all features that are ready for acceptance testing.

  2. Demo Environment

    Demo Environment is a replica of the production environment. The set of features planned for a release is deployed here to test how they interact with all existing production functionality.

  3. Production Environment

    New features are deployed to the Production Environment once they meet the Definition of Done on both the Staging and Demo environments.

Every new deployment to an environment is given an incremented release number.

If any issues are found in a new deployment, the environment can be rolled back to a previous release build.

Definition of Done

Once all of the above activities are complete and no issues are found, the story is Done. A user story is Done when:

  1. Unit Tests pass
  2. API testing has passed for any stories that integrate with an API
  3. Acceptance Testing has passed
  4. Regression Testing has passed