Today’s tutorial is a hands-on review of Parrot QA, a cross-browser functional testing tool. It makes it easy to test your website without writing a line of code.
This tutorial will walk you through the whole platform. We’ll start with the simplest way to set up a website test, then cover testing more complex functionality.
You’ll find screenshots and an overview of both the QAmcorder (our Chrome extension for recording user flows) and our mind-map test management cloud app.
From recording a test, to setting up test dependencies and expectations, to addressing regressions and bugs when you find them, this guide will teach you everything you need to know about testing your website’s functionality with Parrot QA.
Parrot QA Guide: The Easiest Way to Test Functionality of Your Website
There are two ways to set up a test: Quick Start and the QAmcorder.
What You Will Learn:
#1) Quick Start
Quick Start is perfect if all you want to do is test that a web page renders correctly. Simply enter the web page’s URL and you’re off!
[NOTE – Click on any image for an enlarged view]
Watch this short 2-minute getting-started video on Parrot QA:
#2) QAmcorder

The QAmcorder is for testing more complex functionality. It allows you to record yourself using your website and test full user flows like logging in or buying a widget. You can download it from the Chrome Web Store.
Once you’ve downloaded the QAmcorder, navigate to your website, and then click the colorful little button in the top right of your screen.
If you’ve already signed up at parrotqa.com, just log in to the QAmcorder using that same email and password. If not, you can sign up for Parrot from within the QAmcorder.
After you have logged into the QAmcorder, you can record user flows by clicking “Start Recording”.
You will have 1 minute to click through a user flow. We recommend keeping each recording short and sweet.
For example, if a user has to log in before buying a widget, we generally recommend that you make “Log in” one recording and make “Buy a widget” a second.
While you are recording, you will see a green bar on the left side of your screen, with a countdown of how many seconds you have left.
If you hover over the bar, you can see how many clicks and keystrokes you’ve recorded.
Once you are finished recording a user flow, simply click the colorful box in the top right corner again and save the recording.
Nice work! You’ve saved your first recording.
Tests run regularly (every hour, day, week or month). And every time we run a series of tests, we start as a completely new website visitor (with a fresh session). So we recommend breaking each user flow into lots of short recordings, with some dedicated purely to setting up the session.
In the example above, you would select “Log in” as the “recording that needs to run first” when setting up the “Buy a widget” test.
Pro tip: If the “Buy a widget” test implicitly ensures that the user was able to log in, you can pause the “Log in” test (by clicking “Pause” in the top right). In other words, not all recordings in your dashboard have to be actively testing expectations. A paused recording will run if it is a prerequisite to another test, but it won’t test its own expectations at its scheduled time.
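Parrot resolves these prerequisites for you behind the scenes, but the idea is easy to picture in code. Here is an illustrative sketch of a prerequisite-aware runner (the function and field names are hypothetical, not Parrot’s actual implementation):

```python
# Illustrative sketch only -- Parrot QA handles prerequisite recordings internally.
# A paused test still runs when it is a prerequisite for another test.

def run_with_prerequisites(test, tests_by_name, session, ran=None):
    """Replay a recording's prerequisite chain first, then the recording itself."""
    if ran is None:
        ran = set()
    prereq = test.get("needs_to_run_first")  # the "recording that needs to run first"
    if prereq and prereq not in ran:
        run_with_prerequisites(tests_by_name[prereq], tests_by_name, session, ran)
    session.append(test["name"])  # replay the recording in the shared browser session
    ran.add(test["name"])

tests = {
    "Log in": {"name": "Log in", "paused": True},  # paused: runs only as a prerequisite
    "Buy a widget": {"name": "Buy a widget", "needs_to_run_first": "Log in"},
}

session = []  # every scheduled run starts with a fresh session
run_with_prerequisites(tests["Buy a widget"], tests, session)
print(session)  # -> ['Log in', 'Buy a widget']
```

The key point the sketch captures: each run starts from a fresh session, so “Log in” is replayed before “Buy a widget” even though “Log in” itself is paused.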
You can always return to the test setup view (depicted above) by clicking on the title of your test. This view allows you to configure when, how quickly, and across which browsers your test should run.
Sometimes you’ll want to slow a test down if you’re testing a feature that’s slow to load. But usually, we find that if a test can’t run at the fastest speed (which is the default), then your users probably aren’t having a smooth experience either.
Every test has a set of expectations about the HTML contents of your web page.
For example, you could:
- Expect that a header always has the same text
- Expect that a link always has the same href attribute
- Expect that an image always has the same src attribute
Setting test expectations in Parrot doesn’t require any code or knowledge of HTML.
We save a full version of your web page (its DOM) every time we run a test. You can see a full “reincarnation” of how the web page looked during the test by clicking on any of the test’s screenshots.
Click on any part of the “reincarnation” and check off all the expectations you want to test, then click “Save” and “Run the Test”.
In the example below, we’re testing that the header always says “Example Domain”.
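For intuition, an expectation is conceptually just a comparison against the saved DOM snapshot. The following is an illustration only (not Parrot’s code, and the snapshot string is hypothetical) of what a “header always has the same text” check amounts to, using Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

# Illustrative only: check a "header text" expectation against a saved DOM snapshot.
class HeaderTextFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.h1_text += data

saved_dom = "<html><body><h1>Example Domain</h1></body></html>"  # hypothetical snapshot
finder = HeaderTextFinder()
finder.feed(saved_dom)

expectation = "Example Domain"
assert finder.h1_text == expectation, f"Header changed: {finder.h1_text!r}"
print("Expectation met:", finder.h1_text)
```

Parrot does this kind of check for you on every scheduled run, across every browser you selected, with no code required on your side.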
Now we’ll run the test as often as you want, and let you know if the resulting web page doesn’t meet the expectations you set.
Every test in your dashboard will be color-coded by status. Green means passing, blue means running, yellow means paused, and red means failing.
When we detect a mismatch between your expectations and how your web page looks now, we’ll send you an email or Slack notification with a comparison. You can also see the most recent comparisons by clicking on a recording in your dashboard and clicking on one of the browser-specific screenshots. This is the same interface you used to set the expectations, explained above.
By hovering over a red rectangle, you can see exactly what changed.
In the example below, the expectation that our subheader would refer to “functional testing” failed, because the text changed from “functional testing” to just “testing”.
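The kind of word-level comparison shown in those red rectangles can be sketched with Python’s `difflib` (the subheader strings below are hypothetical, and this is not how Parrot computes its diffs):

```python
import difflib

# Illustrative only: reporting an expectation mismatch as a word-level diff.
expected = "The easiest functional testing tool"  # hypothetical expected subheader
actual = "The easiest testing tool"               # hypothetical current subheader

diff = list(difflib.ndiff(expected.split(), actual.split()))
changed = [d for d in diff if d.startswith(("-", "+"))]
print(changed)  # -> ['- functional']
```

Here the diff shows that the word “functional” was removed, which is exactly the kind of change you would see highlighted when hovering over the comparison.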
There are at least three good options for handling a failing test:
- If you no longer care about this specific expectation, you can just remove it. Click on the highlighted change and uncheck any checkboxes that you previously checked.
- If you are happy with the way the page looks now, and you want to test against this current version in the future, simply click “Approve Changes”. In this case, if we wanted to refer to “testing” in the future, not “functional testing”, then we would just approve these changes.
- If you want to revert your page to how it looked before, then you should fix the bug, deploy the changes, and click “Run the Test” again to ensure this test goes back to green.
This simple-to-use cross-browser functional testing tool automates your web application test cases. You can use the Chrome extension to record tests, run them on a schedule, and report on bugs and issues.
Thanks for taking the time to read through this guide! Even though you’ve just become an expert in all things Parrot, you may still have some questions.
Please reach out by clicking the yellow chat icon in the bottom right corner of every Parrot QA page, or post your questions in the comments section below. We’d love to help fulfill all your testing dreams.