Some Tricky Manual Testing Questions & Answers

By Vijay

Updated October 25, 2024

Interviews play a major role in everyone's career, and most of us aim to clear them on the first attempt.

In this tutorial, you will learn some tricky manual testing questions and answers, along with examples to help you crack the interview easily.

These interview questions will be useful for beginner, intermediate, and experienced testers alike. They are quite tricky, so answer them carefully.

Let’s move on!!

Manual Testing Tricky Interview Questions

Q #1) What is test coverage?

Answer: Test coverage is a metric that indicates how much of the application is exercised by a given set of test cases, usually expressed as a percentage of the requirements, functionality, or code covered. It helps you understand which areas of the code, and therefore of the application, are covered by the given test suite and which are not.
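
As a simple illustration (not a formal standard), requirement-level coverage can be computed as the percentage of requirements exercised by at least one test case; the requirement IDs below are hypothetical:

```python
def requirement_coverage(requirements, covered_requirements):
    """Percentage of requirements exercised by at least one test case."""
    covered = [r for r in requirements if r in covered_requirements]
    return 100.0 * len(covered) / len(requirements)

# Hypothetical example: 4 requirements, of which tests cover 3.
reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
covered_by_tests = {"REQ-1", "REQ-2", "REQ-4"}
print(requirement_coverage(reqs, covered_by_tests))  # 75.0
```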

Q #2) As soon as a bug is found, what will you do?

Answer: The exact same steps should be executed on another test machine to rule out any issue with the test environment itself. If it is a web application, confirm the behavior on the supported browser versions. Once the bug is confirmed, create a bug report with clear steps to reproduce it, and attach the relevant screenshots and server logs if available.

Q #3) Which are the key fields while creating a bug report?

Answer: The most important elements of a bug report are as follows (a sample layout is sketched after this list):

  • Title: It should be unique and easily understandable. Ideally, the developer should be able to understand the bug just by reading the title.
  • Description: This should consist of the steps necessary to replicate the bug.
  • Attachments: Attach any supporting screenshots and/or videos.
  • Logs: Attach any server logs, if available.
  • Reference: You can also provide a reference to the corresponding test case.
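
A minimal sketch of how these fields might be captured; the field names are hypothetical, and real bug trackers (Jira, Bugzilla, etc.) define their own schemas:

```python
# Hypothetical bug report structure, for illustration only.
bug_report = {
    "title": "Login page returns a 500 error for valid credentials",
    "steps_to_replicate": [
        "1. Navigate to the login page",
        "2. Enter a valid username and password",
        "3. Click 'Sign in'",
    ],
    "expected_result": "User lands on the dashboard",
    "actual_result": "HTTP 500 error page is shown",
    "attachments": ["login_error.png", "server.log"],
    "linked_test_case": "TC-LOGIN-003",
}
```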

Q #4) Can you differentiate between Test Scenario and Test Case?

Answer: Test Scenario: A test scenario consists of a very high-level description of what needs to be verified. It can be called a set of test cases that usually cover the end-to-end functionality of a particular module/feature.

Test Case: A test case is a set of actions (also called steps) to be performed while verifying the functionality of the Application Under Test (AUT).

Q #5) Is there any difference between Regression testing and Re-testing (Confirmation testing)?

Answer: Regression Testing is running the tests (test suite) after one or more bugs are fixed and/or any new changes are made in the application, to ascertain that the software has not regressed, i.e., that the changes have not broken any other part(s) of the application.

Re-Testing is testing the fixed bug. In short, the tests verify that functionality which was known to be failing during the previous test run now works after the fix. Re-testing is also often referred to as ‘Confirmation testing’.

Q #6) Explain API testing.

Answer: API testing checks if the developed API(s) meet the pre-defined criteria with respect to functionality, reliability, performance, security, and maintainability.

It basically checks if the output/data (response) fetched from the ‘other’ application/database is correct, properly structured, and usable by another/calling application. The response data should be based on the request/input parameter(s).

The following should also be considered (a minimal request-level sketch follows this list):

  1. Response time.
  2. Type of authentication required.
  3. Whether the data is transmitted securely over the network.
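
A minimal sketch of such an API check in Python using the requests library; the endpoint URL and the expected response fields are assumptions for illustration:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_get_user():
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)

    # Functional checks: correct status code and expected fields in the payload.
    assert response.status_code == 200
    body = response.json()
    assert {"id", "name", "email"} <= body.keys()

    # Non-functional check: response time within an agreed threshold.
    assert response.elapsed.total_seconds() < 2.0
```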

Q #7) What is your approach in writing a test case?

Answer: Test cases should be designed (written) in such a way that they give the best return on investment.

Before starting to design test cases, the QA person should gain enough knowledge of the feature/module by:

  1. Going through the user story (if available).
  2. Learning the application through the available documentation (if any), or from other team members.

This approach should include the following best practices:

  • The test case needs to be as simple as possible and should be easy to execute, even by a person who is not familiar with the application.
  • Test cases should be easily identifiable and understandable from their name/title.
  • Each test case needs to be independent.
  • Define pre-requisite (precondition) very clearly wherever necessary.
  • Create a test case from the end user’s perspective.
  • Test case step expected results and input data should be clearly defined.
  • Each test case step should have ideally only one expected result.
  • Avoid test case duplication. If required, modularise test cases and call the shared test case wherever needed (e.g. a Login test case).
  • Avoid any assumptions.
  • Try to implement test design techniques like Boundary Value Analysis (BVA), Equivalence Class Partitioning (EP), etc.

Note: It is also important to get the designed test case reviewed by colleagues (including business analysts, if available).

Q #8) Given a set of test cases, how do you prioritize those for test execution?

Answer: The following criteria should be considered:

  • Test cases that involve critical functionality.
  • Test cases that have frequently found bugs in the past (based on historical data).
  • Test cases for features where frequent code changes are made.
  • Test cases covering end-to-end functionality flows.
  • Test cases for newly added feature(s)/functionality.
  • Test cases that include field validations (importance should be given to negative test cases).

Q #9) Explain Compatibility testing for desktop and web applications.

Answer: For a desktop application, compatibility testing is done to verify whether the application works across different configurations, i.e., different combinations of hardware and operating systems (software).

Both backward and forward compatibility should be reasonably considered.

In the case of web applications, it is referred to as cross-browser compatibility testing. This involves testing the application on different browser-OS combinations. It is also good to test on different kinds of devices (e.g. mobile, tablet, etc.).

Q #10) How will you check the expiry of a given link (the link is supposed to expire at 12 midnight)?

Answer: Change the system date/time to just before and just after 12 midnight (the next day) and check whether the link works before the deadline and stops working after it. If the expiry is enforced on the server side, the server clock (or the stored expiry timestamp) needs to be adjusted instead.
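
A minimal sketch of how this check could be automated in Python with the freezegun library to fake the clock; the is_link_active() helper and the expiry timestamp are hypothetical:

```python
from datetime import datetime
from freezegun import freeze_time

EXPIRY = datetime(2024, 10, 26, 0, 0, 0)  # hypothetical: link expires at midnight

def is_link_active(expiry=EXPIRY):
    # Hypothetical stand-in for the application's expiry check.
    return datetime.now() < expiry

@freeze_time("2024-10-25 23:59:59")
def test_link_active_just_before_midnight():
    assert is_link_active()

@freeze_time("2024-10-26 00:00:01")
def test_link_expired_just_after_midnight():
    assert not is_link_active()
```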

Q #11) You click a link on a web page, but a 500 error page appears. What would be your approach to reach a conclusion (or how would you troubleshoot the problem)?

Answer: Follow these steps:

  • Check if the logged-in user has the right access permissions.
  • Check the server logs if you have access to them, since a 500 response indicates a server-side error.
  • Rule out any browser compatibility issue by discussing it with the concerned developer.
  • Delete the cookies and check once again.
  • If the problem still persists, create a bug.

Q #12) You observed that a few bugs that were tested and closed earlier reappear in the new build during test execution. What could be the reason, and what would be your approach?

Answer: If the said bugs were verified as fixed in the previous build, then the problem most likely lies in the new build itself, for example a faulty merge or a recent change by the developers that reintroduced the defect.

It is also possible that the functionality itself was changed without QA's knowledge, although this is rare. In such a case, QA should discuss it with the development team and then conclude.

Q #13) How is testing done in Scrum?

Answer: In Scrum, there may not be dedicated QA persons; all members of the Scrum team are responsible for testing.

  • QA should write test cases once the user story is available.
  • Developers should execute unit tests.
  • QA should execute functional and non-functional tests against the user story.
  • During the Sprint demo, customers/stakeholders may carry out acceptance testing and provide important feedback, which normally becomes input to the next Sprint.
  • The test results should be maintained.

Q #14) What are you going to do if the developer does not accept the bug reported by you?

Answer: If this kind of situation arises, then re-verify the bug to make sure that it is reproducible. Provide the necessary screenshots and a screen recording of the steps to the concerned developer.

Even after providing more information, if the developer is not accepting the bug then call for a meeting along with the team lead/manager and BA to discuss the same so that a consensus is reached.

Q #15) Describe BVT.

Answer: Build Verification Tests, also referred to many times as Sanity tests, are the highest-priority test cases that are executed on every new build. These tests should be executed quickly so that the team can decide if the build is usable for further testing. Typically these tests are automated.
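
A minimal sketch of how a BVT subset might be tagged with a pytest marker so that it can be run on every new build; the marker and test names are hypothetical:

```python
import pytest

@pytest.mark.bvt
def test_application_is_reachable():
    # Highest-priority check: the deployed build starts and responds.
    ...

@pytest.mark.bvt
def test_user_can_log_in():
    # Core flow that must pass before deeper testing begins.
    ...

def test_report_export_formats():
    # Lower-priority test; not part of the build verification run.
    ...
```

With the `bvt` marker registered in the pytest configuration, the build verification run is simply `pytest -m bvt`.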

Q #16) Explain the difference between Smoke testing and Sanity testing.

Answer: Smoke Testing decides if the deployed software build is stable or not. Smoke tests help in finding issues in the early stage/phase of testing. The smoke test suite will have a very limited number of test cases.

Executing smoke tests manually requires a lot of time and can waste effort if the build is not stable enough. For this reason, smoke tests should ideally be automated.

Sanity Testing is done immediately after a new software build is received. These tests determine whether the proposed functionality works as expected. Sanity testing normally saves a lot of time, as it focuses on limited areas/functionalities, and it helps decide whether QA can go ahead with further testing (say, regression testing).

Q #17) What do you mean by non-functional testing?

Answer: Non-functional testing verifies non-functional aspects of the software application. Performance and reliability under load are very critical as far as user experience is concerned.

Typically, the following non-functional tests are considered:

  • Usability testing
  • Performance testing
  • Load testing
  • Security testing

Q #18) What are the essential components of a Test Report? Explain the benefits of Test reporting.

Answer: Test report content may vary depending upon the stakeholders to whom the report is sent. It should essentially consist of the following components:

  • Project Overview
  • Test Objectives
  • Test Summary
  • Defect Report

Test reporting, done properly, provides valuable insights to the stakeholders, which in turn helps them decide whether the application is ready for a successful release to the production environment.

Q #19) How will you carry out regression testing if the given time duration for the testing phase is shorter than estimated?

Answer: If the given time duration is less than expected, then test cases should be prioritized to carry out test execution in the newly defined order. In some cases, there might be a need to include extra resources. An automated test suite plays a crucial role in this kind of situation.

The same prioritization criteria described in Q #8 apply here: test cases covering critical functionality, test cases that have frequently found bugs in the past, areas with frequent code changes, end-to-end functionality flows, newly added features, and field validations (with emphasis on negative test cases).

Q #20) What is a Test Plan and Test Strategy? What comes first?

Answer: Test Plan is a document that defines the scope, objective, approach, and intensity of testing the software. The test plan can be dynamic.

Test Strategy is a set of guidelines/protocols that explain the test design and determines how the test execution should be carried out. Test strategy cannot be changed. It is often included as a component of the test plan.

Test Strategy is set at the organization level, which means it is defined well before a project is started; hence, the Test Strategy comes first and the Test Plan is derived from it for each project.

Q #21) How do you decide when to stop testing?

Answer: The following criteria can be considered:

  • When execution of the pre-defined test cases (suite) is completed with the agreed-upon pass percentage.
  • When testing deadlines and/or release deadlines have been reached.
  • When the bug rate falls below a certain pre-defined level and no high-priority bugs remain open.
  • When functional and code coverage reach the pre-identified level.
  • When management decides to stop (for instance, when the testing budget is depleted).

Q #22) Is it possible to achieve 100% test coverage? How would you ensure this?

Answer: Whether 100% test coverage can be achieved depends heavily on the size of the application under test; the complexity of the code also plays a vital role. For smaller applications, it can be achieved relatively easily.

Test cases should be constructed thoroughly for each and every functionality/requirement, and a test coverage matrix should be maintained. It is also a good idea to improve test coverage using test automation.

Note: Chasing 100% test coverage can be a waste of time if the mapping between test cases and requirements/functionality is not created and maintained properly.

Q #23) Describe your process of getting up to speed with new products and team members.

Answer: Talk to the team members and ask plenty of questions about the product, the processes, and any difficulties they have faced. Run the pre-existing manual test cases, which gives substantial knowledge of the product, and explore the application/product on your own.

Q #24) Do you measure how well your current testing approach works? How do you adjust if it is not working?

Answer: Yes, the testing approach is measured by collecting and analyzing some key metrics. The first is the number of bugs found by executing the existing tests/test cases before the release of the application. This gives an indication of how effective the test coverage is.

The next metric is to identify the number of bugs found outside of the test suite (maybe in exploratory/monkey testing) or even in production after the release. If there are quite a lot of bugs found outside of the existing test suite then definitely some adjustment is required to the whole testing approach.

Test cases need to be revisited and if required should be re-designed to meet the test coverage. Missing test cases (if any) should be added. Teams and leadership should consider if test automation should be considered or not, at least for some key features.
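
One common way to put a number on this (an illustration, not something prescribed above) is the defect detection percentage, i.e., the share of all known defects that were caught before release:

```python
def defect_detection_percentage(found_in_testing, found_after_release):
    """Share of all known defects that were caught before release."""
    total = found_in_testing + found_after_release
    return 100.0 * found_in_testing / total if total else 0.0

# Hypothetical numbers: 45 bugs found during testing, 5 escaped to production.
print(defect_detection_percentage(45, 5))  # 90.0
```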

Q #25) Testing can be challenging. What keeps you motivated? 

Answer: The following can help:

  1. Knowledge sharing sessions
  2. On-the-job training (e.g. basic Selenium training)
  3. Rewarding excellence
  4. Failures are considered opportunities for learning.
  5. Leadership asks team members about their motivation and takes necessary action.

Some Testing Terms Asked in the Interviews

Below mentioned are some of the most important terms that you must know before attending any manual testing interview.

Let’s learn more about each term, along with simple examples for your easy understanding.

  • Boundary Value Analysis
  • Equivalence Testing
  • Error Guessing
  • Desk Checking
  • Control Flow Analysis

Boundary Value Analysis

It is the process of selecting test cases/data by identifying the boundaries that separate valid and invalid conditions. Tests are constructed to exercise values just inside and just outside these boundaries.

As a selection technique, test data are chosen to lie along the “boundaries” of the input domain (or output range) classes, data structures, procedure parameters, etc.

Choices often include the maximum, minimum, and trivial values of the parameters.

For example, for an input range of 1 to 10, the boundary values are 1 and 10.
Test input data: 0, 1, 2, 9, 10, 11
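
A minimal sketch of these boundary checks as a parametrized pytest test; the is_valid() function is a hypothetical stand-in for a field that accepts 1 to 10:

```python
import pytest

def is_valid(value):
    # Hypothetical validation for an input field that accepts 1 to 10.
    return 1 <= value <= 10

@pytest.mark.parametrize("value, expected", [
    (0, False),   # just below the lower boundary
    (1, True),    # lower boundary
    (2, True),    # just inside the lower boundary
    (9, True),    # just inside the upper boundary
    (10, True),   # upper boundary
    (11, False),  # just above the upper boundary
])
def test_boundary_values(value, expected):
    assert is_valid(value) == expected
```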

Equivalence Testing

The input domain of the system is partitioned into classes of representative values, so that the number of test cases can be limited to one per class, which represents the minimum number of test cases that must be executed.

For example, for a valid data range of 1-10, the equivalence classes are: below 1 (invalid), 1-10 (valid), and above 10 (invalid).
Test set (one value per class): 0; 5; 14
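
A minimal sketch of one representative test per partition, reusing the hypothetical is_valid() function from the boundary value example above:

```python
import pytest

def is_valid(value):
    # Hypothetical validation for an input field that accepts 1 to 10.
    return 1 <= value <= 10

@pytest.mark.parametrize("value, expected", [
    (0, False),   # representative of the invalid class "below 1"
    (5, True),    # representative of the valid class "1 to 10"
    (14, False),  # representative of the invalid class "above 10"
])
def test_equivalence_classes(value, expected):
    assert is_valid(value) == expected
```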


Error Guessing

It is a test data selection technique. The selection criterion is to pick values that seem likely to cause errors. Error guessing is mostly based upon experience, with some assistance from other techniques, such as boundary value analysis.

Based on the experience, the test designer guesses the type of errors that could occur in a particular type of software and designs test cases to uncover them.

For example, if any type of resource is allocated dynamically, then a good place to look for errors is in the de-allocation of those resources.

Are all resources correctly de-allocated, or are some lost as the software executes?
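
A minimal sketch of error-guessing-style checks in Python; the parse_age() helper is hypothetical, and the inputs are values that experience suggests are likely to break such code (empty input, None, negative numbers, non-numeric text, out-of-range values):

```python
import pytest

def parse_age(raw):
    # Hypothetical helper from the application under test.
    age = int(raw)
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age

@pytest.mark.parametrize("suspicious_input", ["", None, "-5", "abc", "200"])
def test_suspicious_inputs_are_rejected(suspicious_input):
    # Each value is guessed, from experience, as a likely source of defects.
    with pytest.raises((ValueError, TypeError)):
        parse_age(suspicious_input)
```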

Desk Checking

Desk checking is conducted by the developer of the system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met.

This is the most traditional means of analyzing a system or program.

Control Flow Analysis

It is based upon a graphical representation of the program process. In control flow analysis, the program graphs have nodes that represent a statement or segment possibly ending in an unresolved branch.

The graph illustrates the flow of program control from one segment to another, as illustrated through branches.

The objective of control flow analysis is to determine the potential problems in logic branches that might result in a loop condition or improper processing.
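
As a small hypothetical illustration (with a deliberate defect), consider the function below: each statement and branch becomes a node or edge in the program graph, and control flow analysis would highlight the path on which the loop counter is never advanced, resulting in an infinite loop:

```python
def sum_items(items):
    i = 0
    total = 0
    while i < len(items):        # branch node: loop condition
        if items[i] is None:     # branch node: skip path
            continue             # defect: 'i' is not incremented on this
                                 # path, so the loop never terminates
        total += items[i]
        i += 1
    return total
```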

Conclusion

These are some of the practical manual testing interview questions and answers based on actual interviewing experience. We hope that this tutorial is insightful and you can approach any manual testing interview confidently.

Good Luck!!

Recommended Reading

  • Top Interview Questions & Answers
  • ETL Testing Interview Questions
  • RESTful Web Services Interview Questions
  • Agile Testing Q&A
  • Datastage Interview Questions and Answers
  • Top JSON Interview Questions and Answers
  • Spock Interview Questions
  • Top Teradata Interview Questions and Answers


20 thoughts on “Some Tricky Manual Testing Questions & Answers”

  1. Hi Vijay,

     In one of my interviews I was asked the following question:

     If we are testing a web application on mobile, is it necessary, from a functionality point of view, to test all the functionality on each platform (iPhone, Android, BlackBerry, etc.) separately, or is testing on one platform enough?

    • I think it depends on the scope of the requirements and the available resources. For manual testing in particular, covering Android and iOS is usually sufficient.
  2. Hello everyone,
     I'm preparing for manual testing interviews and I'm a little nervous, so please share any advice from your past interview experience; it will really help me.
  3. – A relevant working experience with enterprise software testing.
    – A relevant working experience in analyzing requirements usually provided in functional
    requirements documents for enterprise software applications.
    – A relevant working experience in developing test suites and test cases using test
    management tools.
    – A relevant working experience in performing different testing types and major releases of enterprise
    software.
    – Previous experience in testing bilingual Web-based enterprise software applications in English and
    Arabic is a must.
    – Strong knowledge of software QA methodologies, tools and processes.
    – Strong experience with automated testing of web applications, using tools like Selenium
    – Strong experience and understanding of CI/CD.
    – Design, develop and execute test automation processes for existing and new product
    functionalities.
    – Design, develop and execute manual and automated test plans.
    – Collaborate with product and development teams to define test strategies.
    – Track bugs and perform regression testing when bugs are resolved.
    – Research, diagnose, troubleshoot and provide solutions to reported issues.
    – Deliver test reports and provide feedback to developers.
    – Write specifications in collaboration with our development department.
    – Ability to thrive in a fast-paced environment.
    – Ability to work under pressure, prioritize and manage multiple tasks.
    – Participate in the evolution of our process, tooling and methods by providing expertise in Software
    Quality Assurance and Automation.
    Given the above requirements, which questions do you think might be asked in the interview?
  4. Hi, I just finished a software testing course and I'm still confused about boundary values. I know how to apply the technique, but I want to know one simple thing: what exactly is a boundary value? Please give the definition.

    • Hi Akash,

      In simple words, boundary values are the values at both edges of a valid range. We don't need to check every input value; we only check the boundary values.
      Let's say the valid input for a text field is 1-10.
      To test this, we can divide the data into equivalence partitions such as: <1 (i.e. 0), 1-10, and 11-99.
      The boundary values here are 0, 1, 10, 11, 99, etc. The assumption is that if these boundary values pass, the whole range will pass; and if a particular boundary value fails, that particular range fails.
  5. Excellent topics covered, I appreciate your knowledge. I suggest you give more typical examples of the above topics; they would be useful for the readers of this blog. Once again, thanks for the explanation.

    • For equivalence class partitioning, the input has to be partitioned into groups (valid and invalid).
      E.g. if the customer needs a userID field with only alphabets (A-Z), then the valid data is A-Z and the invalid data is a-z, special characters, blank spaces, etc.
  6. When a defect is found in production, we call it a “failure”. Even after the planned testing is done, we should carry out testing such as exploratory and regression testing; these cover additional parts of the application and help increase product quality.

    • First and foremost, depending on the priority/severity, it has to be fixed and delivered as per the defined SLAs. For that, we need to replicate it in the test environment in order to identify the root cause.

      Then it is fixed, the fix impact is analyzed, the defect is retested thoroughly, and regression is performed on the impacted functionalities. You don't want it to reoccur or impact any other working functionality. Automation is run to ensure all the happy flows still pass.
      Once delivered, analyse why it was not caught during testing. Once the reason is known, the test cases are updated to include the missing permutations and combinations, and the knowledge gap is filled.
