This tutorial is a hands-on review of Parrot QA, a cross-browser functional testing tool that makes it easy to test your website without writing a line of code.
This tutorial will walk you through the whole platform. We’ll start with the simplest way to set up a website test, then cover testing for more complex functionality.
You’ll find screenshots and an overview of both the QAmcorder (our Chrome extension for recording user flows) and our Mind Map Test Management Cloud app.
Parrot QA Guide
From recording a test, to setting up test dependencies and expectations, to addressing regressions and bugs when we find them, this guide will teach you everything you need to know about testing your website’s functionality with Parrot QA.
Parrot QA Guide: The Easiest Way to Test Functionality of Your Website
There are two ways to set up a test: Quick Start and the QAmcorder.
#1) Quick Start
Quick Start is perfect if all you want to do is test whether a web page renders correctly. Simply enter the web page’s URL and you’re off!
#2) QAmcorder
The QAmcorder is for testing more complex functionality. It allows you to record yourself using your website and test full user flows like logging in or buying a widget. You can download it from the Chrome Web Store.
Once you’ve downloaded the QAmcorder, navigate to your website, and then click the colorful little button in the top right of your screen.
If you’ve already signed up at parrotqa.com, just log in to the QAmcorder using that same email and password. If not, you can sign up for Parrot from within the QAmcorder.
After you have logged into the QAmcorder, you can record user flows by clicking “Start Recording”.
You will have 1 minute to click through the user flow. We recommend keeping each recording short and sweet.
For example, if a user has to log in before buying a widget, we generally recommend that you make “Log in” one recording and make “Buy a widget” a second.
While you are recording, you will see a green bar on the left side of your screen, with a countdown of how many seconds you have left.
If you hover over the bar, you can see how many clicks and keystrokes you’ve recorded.
Once you are finished recording a user flow, simply click the colorful box in the top right corner again and save the recording.
Nice work! You’ve saved your first recording.
Test Setup
Tests run regularly (every hour, day, week or month). And every time we run a series of tests, we start as a completely new website visitor (with a fresh session). We recommend breaking each user flow into lots of short recordings, with some dedicated purely to setting up the session.
In the example above, you would select “Log in” as the “recording that needs to run first” when setting up the “Buy a widget” test.
Pro tip: If the “Buy a widget” test implicitly ensures that the user was able to log in, you can pause the “Log in” test (by clicking “Pause” in the top right). In other words, not all recordings on your dashboard should be actively testing expectations. A paused recording will run if it is a prerequisite for another test, but it won’t test its own expectations at its scheduled time.
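The prerequisite-and-pause behavior described above can be sketched as a tiny model. To be clear, this is hypothetical code for illustration, not Parrot’s actual API: each run replays the prerequisite chain first (with a fresh session), and a paused recording still runs whenever another test depends on it.

```python
# Hypothetical sketch of Parrot's recording-dependency model (not the real API).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Recording:
    name: str
    paused: bool = False                         # paused: own expectations aren't tested
    prerequisite: Optional["Recording"] = None   # recording that must run first


def execution_plan(test: Recording) -> list:
    """Order of recordings replayed for one run (fresh session each time)."""
    chain = []
    rec = test
    while rec is not None:
        chain.append(rec.name)
        rec = rec.prerequisite
    return list(reversed(chain))  # prerequisites replay first


log_in = Recording("Log in", paused=True)   # paused, but still runs as a prerequisite
buy_widget = Recording("Buy a widget", prerequisite=log_in)

print(execution_plan(buy_widget))  # ['Log in', 'Buy a widget']
```

So even though “Log in” is paused and never tests its own expectations, it still replays at the start of every “Buy a widget” run to set up the session.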
You can always return to the test setup view (depicted above) by clicking on the title of your test. This view allows you to configure when, how quickly, and across which browsers your test should run.
Sometimes you’ll want to slow a test down if you’re testing a feature that’s slow to load. But usually, we find that if a test can’t run at the fastest speed (which is the default), then your users probably aren’t having a smooth experience either.
Setting Expectations
Each test has a set of expectations about the HTML content of your web page.
For example, you could:
- Expect a header that always has the same text
- Expect a link that always has the same href attribute
- Expect an image that always has the same src attribute
Setting test expectations in Parrot doesn’t require any code or knowledge of HTML.
We save a full version of your web page (its DOM) every time we run a test. You can see a full “reincarnation” of how the web page looked during the test by clicking on any of the test’s screenshots.
Click on any part of the “reincarnation” and check off all the expectations you want to test, then click “Save” and “Run the Test”.
In the example below, we’re testing that the header always says “Example Domain”.
We’ll run the test as often as you’d like and let you know if the resulting web page doesn’t meet the expectations you set.
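Under the hood, the three expectation types above boil down to simple checks against the saved DOM. Parrot needs no code for this, but for readers curious what is actually being verified, here is a rough, self-contained illustration (not Parrot’s implementation) using Python’s standard-library HTML parser:

```python
# Illustrative only: the kinds of checks behind "header text", "link href",
# and "image src" expectations, run against a saved copy of the page's HTML.
from html.parser import HTMLParser


class ExpectationChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.header_text = None
        self.hrefs = []
        self.srcs = []
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "h1":
            self._in_h1 = True
        elif tag == "a" and "href" in attrs:
            self.hrefs.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.srcs.append(attrs["src"])

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.header_text = data.strip()


# A saved snapshot of the page's DOM (example content).
saved_dom = """
<h1>Example Domain</h1>
<a href="https://www.iana.org/domains/example">More information...</a>
<img src="/logo.png">
"""

checker = ExpectationChecker()
checker.feed(saved_dom)

assert checker.header_text == "Example Domain"                  # header text expectation
assert "https://www.iana.org/domains/example" in checker.hrefs  # link href expectation
assert "/logo.png" in checker.srcs                              # image src expectation
print("all expectations met")
```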
Test Colors
All tests on your dashboard will be color-coded with status. Green means passing, blue means running, yellow means paused, and red means failing.
Failing Tests
If we detect a mismatch between your expectations and how your web page looks now, we’ll send you an email or Slack notification with a comparison. You can also see the most recent comparisons by clicking on a recording in your dashboard and clicking on one of the browser-specific screenshots. This is the same interface you used to set the expectations, explained above.
By hovering over a red rectangle, you can see exactly what has changed.
In the example below, the expectation that our subheader would refer to “functional testing” failed, because the text changed from “functional testing” to just “testing”.
There are at least three good options for handling a failing test:
- If you no longer care about this specific expectation, you can just remove it. Click on the highlighted changes and uncheck any checkboxes that you previously checked.
- If you are happy with the way the page looks now, and you want to test against this current version in the future, simply click “Approve Changes”. In this case, if we wanted to refer to “testing” in the future, not “functional testing”, then we would just approve these changes.
- If you want to revert your page to how it looked before, then you should fix the bug, deploy the changes, and click “Run the Test” again to ensure this test goes back to green.
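The comparison that drives these failure reports is essentially a text diff between what you expected and what the page renders now. A minimal sketch of that idea (illustrative only, using Python’s standard-library `difflib`), applied to the “functional testing” example above:

```python
# Illustrative sketch: diffing an expectation against the page's current text.
import difflib

expected = "functional testing"
actual = "testing"

if expected != actual:
    print('Expectation failed: "%s" -> "%s"' % (expected, actual))
    # ndiff marks removed words with "- " and unchanged words with "  ".
    for line in difflib.ndiff(expected.split(), actual.split()):
        print(line)
```

Here the diff shows “functional” was removed while “testing” survived, which is exactly the kind of change Parrot highlights with a red rectangle.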
Conclusion
This simple-to-use cross-browser functional testing tool automates your web application test cases. You can use the Chrome extension to record tests, run tests at scheduled times, and report on bugs and issues.
Thanks for taking the time to read through this guide! Even though you’ve just become an expert in all things Parrot, you may still have some questions.
Please reach out by clicking on the yellow chat icon in the bottom right corner of each Parrot QA page or put your queries in the comments section below. We’d love to help fulfil all your testing dreams.
Q&A from the comments

1. Is the plug-in available for browsers other than Chrome?
No, the plug-in is currently only available for Chrome. But we run your tests on Safari and Firefox, too!

2. Why would a company consider this tool when it isn’t free or open source, given that Selenium’s open-source model is a big reason it has overtaken tools like QTP?
I do intend to open source Parrot, as soon as the code is up to OSS standards! Stay tuned 😉

3. How will Parrot handle login credentials or previously filled-in form data?
Parrot makes it easy to set up a series of user flows, where one test inherits the session from the previous one. So you can record yourself logging in (which sets up the session), and then record yourself taking an authenticated action. As long as you mark the second recording as being dependent on the first, you’ll be all set!