40+ Popular Test QA Analyst Interview Questions and Answers [2021 LIST]

Most Frequently asked Test/Quality Assurance Analyst Interview Questions and Answers:

When choosing a career, the deciding factor should not only be whether you think you will enjoy the work.

Succeeding in any field also requires skills, an understanding of the responsibilities, and knowledge of the job duties that come with the career you choose. The same goes for a career as a QA Analyst: it requires you to be not only a good tester, a quick learner, and an extraordinary thinker, but also a solver of complex problems.

Test QA Analyst Interview Questions

The above-mentioned qualities are not acquired instantly; they require experience and sustained hard work.

This article covers every aspect a QA Analyst must know. The most frequently asked QA Test Analyst interview questions and answers included here will give you a clear idea of how to prepare for your interview.

Popular QA Test Analyst Interview Questions

Q #1) What are the responsibilities of a QA Analyst?

Answer: A QA Analyst ensures that every possible measure has been taken to test each feature of a software solution, both functionally and technically.

The major responsibilities of a QA Analyst are as follows:

  • Execute and manage all activities needed to meet the objectives of the test plan.
  • Choose high-quality processes to develop the product.
  • Analyze requirements and document procedures.
  • Document and re-verify all defects; set the priority and severity of defects.
  • Create, document, and maintain test cases.
  • Analyze test results.

Q #2) What is your understanding regarding a Test plan?

Answer: When you have a clear idea of what, when, how, and who, then things become easier. The same is the case with software testing as well, where the test plan is a document that consists of scope, approach, resources, and outline of the testing project as well as the activities for tracking the progress of the project.

The test plan is a record of processes which include:

  • Testing tasks
  • Testing environment
  • Design techniques
  • Entry and exit criteria
  • Any risks, etc.

Q #3) Enlist the priority of the testing tasks defined by the QA team in product development.

Answer: The priority of the testing tasks is defined as follows:

  • A test plan is prepared consisting of the outline and scope of the testing project.
  • Test cases are prepared to cover all the major and minor functionalities with the data required for testing.
  • Test cases are executed as per the functionalities implemented in each incoming build during the testing cycle.
  • Defect reporting with re-verification as well as tracking its progress.
  • Preparing the test execution report summary.

Q #4) Enlist some of the key challenges that are faced while performing Software Testing.

Answer: Since complete testing can never be achieved, there are several challenges involved. Whether the project is small or complex, certain challenges are faced while testing it.

Enlisted below are a few key challenges:

  • Lack of skilled testers, who often face the problem of poor subject awareness and little knowledge of the customer’s business.
  • Time is also a factor: when there is a huge list of tasks to complete, testers tend to focus on task coverage rather than quality test coverage.
  • Deciding which test case to execute first and with what priority; this usually comes with work experience.
  • A proper understanding of the requirements is crucial, as a misunderstood requirement can reduce all your testing efforts to zero.
  • Unavailability of the best tools required to complete testing in less time and with more effectiveness.
  • Handling the relationship between testers and developers with good communication and analytical skills.

Q #5) Define Use Case Testing.

Answer: Use case testing can be defined as a functional black-box testing technique that captures the series of interactions between ‘actors’ and the ‘system’, where ‘actors’ represent the users and their interactions.

Characteristics of the use case testing are enlisted below:

  • The functional requirements of the project are organized.
  • Records the path or scenarios from start to finish.
  • Can cover integration defects i.e. the defects that occurred as a result of interaction between different components.
  • It describes the main flows as well as the exceptional flow of the events.
  • Any pre-conditions that are required for the use case to work should be specified earlier.
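The main and exceptional flows above can be exercised directly as tests; the ATM-style withdrawal function below is purely illustrative (its name and rules are assumptions, not from the article):

```python
def withdraw(balance, amount):
    """Hypothetical system under test: withdraw `amount` from `balance`."""
    if amount <= 0:
        raise ValueError("amount must be positive")      # exceptional flow
    if amount > balance:
        return balance, "insufficient funds"             # exceptional flow
    return balance - amount, "dispensed"                 # main (happy) flow

# Pre-condition for the use case: the account holds 100 units.
# Main flow: the actor requests a valid withdrawal, start to finish.
balance, outcome = withdraw(100, 40)
assert (balance, outcome) == (60, "dispensed")

# Exceptional flow: the actor requests more than the balance.
balance, outcome = withdraw(60, 500)
assert (balance, outcome) == (60, "insufficient funds")
```

Each assertion traces one actor/system interaction path, covering both the main flow and an exceptional flow of events.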

Q #6) Define Test Strategy.

Answer: A Test Strategy is a set of guidelines or testing approaches, usually defined by the project manager, that determines the test design and the general testing approach. It forms a small section of the test plan and can be reused across multiple projects.

Different test approaches are followed based on the factors like nature and domain of product, the risk of product failure, expertise in working with proposed tools, etc.

These approaches are further categorized as follows:

  • Proactive approach, where test design starts before the build is created, which helps find and fix bugs before the build exists.
  • Reactive approach, where test design starts only after coding is complete.

Q #7) Explain the difference between Quality control and Quality assurance.

Answer: ‘Quality Control’ and ‘Quality Assurance’ are two major terms used in any testing project or product. Testers who are new to the field often do not understand the actual difference between the two.

Let’s understand the differences point by point:

  • QA is a technique for managing quality, where all team members are responsible for process planning; QC is a technique for verifying quality, where the testing team is responsible for executing the planned process.
  • QA does not involve program execution; QC involves program execution.
  • QA is a verification process that ensures the right things are done; QC is a validation process that ensures the expected results occur.
  • QA is a process-oriented exercise in which application defects are not detected; QC is a product-oriented exercise in which application defects are identified and reported.
  • Deliverables are created during QA; deliverables are verified during QC.
  • QA is not a time-consuming activity; QC is considered time-consuming.
  • QA falls under Statistical Process Control; QC falls under Statistical Quality Control.

Q #8) According to you, when is the good time to start QA in a project?

Answer: According to the Software Development Life Cycle (SDLC), the testing phase comes after the completion of the ‘Implementation and Coding’ phase. But in today’s scenario, to achieve the best results, QA should start at the very beginning of the project or product.

Following this approach will lead to the major advantages given below:

  • Early process planning to meet customer’s quality expectations.
  • Good and healthy communication between the teams.
  • Gives an ample amount of time that is required for setting up the testing environment.
  • Allows early review and approval of test plans.

Q #9) Differentiate Verification and Validation processes.

Answer: Verification and Validation processes are usually determined by two famous questions i.e. “Are we building the system right?” and “Are we building the right system?”.

Let us see the other differences between these two processes:

  • Verification is the process of evaluating a product to determine whether it meets the requirement and design specifications; validation is the process of determining whether the software satisfies the business need, i.e., is fit for use.
  • Verification is a static testing technique that does not involve executing the software; validation is a dynamic testing technique in which the software is executed.
  • Verification is a human-based practice of checking documents, files, designs, and code; validation is a computer-based practice of testing the actual product.
  • Verification does not involve code execution; validation involves code execution.
  • Verification is usually done by the QA team to ensure the software matches the requirement specifications; validation is usually carried out by the testing team.
  • Verification is performed before validation; validation is performed after verification.
  • Examples of verification: inspections, walk-throughs, reviews, etc.; examples of validation: smoke testing, regression testing, functional testing, etc.

Q #10) Explain the benefits of Destructive Testing.

Answer: Destructive testing is a form of testing carried out by the testing team to determine the point at which a product fails under different loads, i.e., to evaluate the application’s structural performance in terms of strength, toughness, hardness, and robustness.

Enlisted below are the benefits of Destructive testing:

  • Determines weaknesses in the application design.
  • Determines the service life of the application.
  • Helps to reduce costs and failures.

Q #11) How is Retesting different from Regression Testing?

Answer: There are several differences between Retesting and Regression Testing.

This can be easily understood from the points below:

  • Regression testing finds issues that may have been introduced into existing functionality by a code change; retesting re-verifies a failed test case after the defect has been fixed.
  • Regression testing can be automated; the retesting of failed test cases cannot be automated.
  • Regression testing is performed when existing code changes or new functionality is added; retesting targets the same defect in the same environment, with the fix in the new build.
  • Regression testing is generic testing usually carried out on passed test cases; retesting is planned testing carried out on failed test cases.
  • Regression testing can run in parallel with retesting; retesting is done before regression testing.
  • Regression testing executes even the passed test cases; retesting executes only the failed ones.
  • Regression testing does not include verification of the bug fix; verification of the bug is part of retesting.

Q #12) What do you know about Data-Driven Testing?

Answer: Every automation tester knows that automation test scripts cover only the area of the application exercised by a recorded sequence of user actions. Normally, these actions do not produce any error, because only the input data entered while recording is used.

Data-driven testing comes into the picture here, when we want the application to work as expected for any type of input value. For this purpose, the test data is not hardcoded; instead, the test scripts take their data from data sources like CSV files, ODBC sources, etc.

To summarize, data-driven testing performs the following actions in the loop:

  • Take input test data from storage.
  • Enter the data into the application to perform actions.
  • Verify the actual results against the expected ones.
  • Repeat the same steps with the next set of input test data.
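The loop above can be sketched in a few lines of Python; the login function and the credential rows are hypothetical stand-ins, and an in-memory string stands in for the external CSV file:

```python
import csv
import io

def login(username, password):
    # Hypothetical system under test: accepts one known credential pair.
    return username == "admin" and password == "s3cret"

# In practice this would be a CSV file or ODBC source; an in-memory
# CSV keeps the sketch self-contained.
test_data = io.StringIO(
    "username,password,expected\n"
    "admin,s3cret,True\n"
    "admin,wrong,False\n"
    "guest,s3cret,False\n"
)

results = []
for row in csv.DictReader(test_data):                     # 1. take data from storage
    actual = login(row["username"], row["password"])      # 2. enter data, perform action
    passed = actual == (row["expected"] == "True")        # 3. verify actual vs expected
    results.append((row["username"], passed))             # 4. loop to the next data row

print(results)
```

Adding a new scenario means adding a CSV row, not a new script, which is the whole point of the technique.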

Q #13) What is the Traceability Matrix? Is it required for every project?

Answer: A traceability matrix is a means of tracking the progress of a project with respect to the implementation of new functionalities, the enhancement of existing functionalities, etc. Through a traceability matrix, you can always keep an eye on the project’s progress, with every aspect maintained up to date.

The Requirement Traceability Matrix consists of the below-mentioned parameters, which map directly to the requirement specification document.

Parameters of Requirement Traceability matrix include:

  • Every section of the requirement document is a point to be covered in the RTM (Requirement Traceability Matrix).
  • The headline of each point is the headline of the corresponding section in the requirement specification.
  • Against each point, the test case IDs written for that particular section are mentioned.
  • The BUG/New Feature ID is also mentioned for each section.
  • Most importantly, the build of the project in which each feature has been implemented is also tracked.
  • Another parameter records whether the section is fully tested or still under testing.
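An RTM can be modeled as a simple mapping from requirement sections to the parameters listed above; every ID, title, and status below is an invented example:

```python
# Each RTM row links a requirement section to its test cases,
# related defect/feature IDs, the build it landed in, and test status.
rtm = {
    "REQ-1.1": {"title": "User login",
                "test_cases": ["TC-101", "TC-102"],
                "defects": ["BUG-7"],
                "build": "1.4.0",
                "status": "Fully tested"},
    "REQ-1.2": {"title": "Password reset",
                "test_cases": ["TC-103"],
                "defects": [],
                "build": "1.5.0",
                "status": "Under testing"},
}

# Coverage check: every requirement must trace to at least one test case.
untraced = [req for req, row in rtm.items() if not row["test_cases"]]
print(untraced)  # an empty list means full traceability
```

In practice the matrix usually lives in a spreadsheet or test-management tool, but the data model is the same.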

Q #14) Describe the benefits of Agile Testing.

Answer: As a tester, the focus is on delivering a quality product in less time by understanding the end-user requirements and, most importantly, shipping with no defects visible to the end-user. Agile testing comes into the picture here: it follows the principles of agile software development and quickly validates the client’s requirements.

Mentioned below are the benefits of Agile testing:

  • A cross-functional agile team is included in testing, which in turn delivers the results at frequent intervals.
  • Saves a lot of time and money.
  • Includes less documentation and time to time feedback from the end-user.
  • Not only the tester, but the whole team including the manager, customer, and developer are involved in face to face communication.
  • As a result of daily meetings, issues can be well determined in advance.
  • Increase in team productivity and a better understanding of the technical aspects of the project.

Q #15) What is Negative Testing?

Answer: Negative testing is a method of ensuring that a product or application remains stable, i.e., does not fail, when unexpected input is given. The main purpose of this form of testing is to validate the application against every possible kind of invalid input data.

This form of testing is also known as ‘failure testing’ or ‘error path testing’, and its main purpose is to check the application’s reliability under negative scenarios. It also exposes software weaknesses, spots faults, and gives a clear idea of data corruption.
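A minimal negative-testing sketch using Python’s `unittest`: the age-validation function is a hypothetical system under test, and each test feeds it invalid input and expects an explicit rejection rather than a crash or silent corruption:

```python
import unittest

def set_age(age):
    # Hypothetical function under test: rejects invalid input explicitly.
    if not isinstance(age, int) or not 0 <= age <= 130:
        raise ValueError(f"invalid age: {age!r}")
    return age

class NegativeAgeTests(unittest.TestCase):
    # Each test drives an 'error path' with invalid input.
    def test_rejects_negative_number(self):
        with self.assertRaises(ValueError):
            set_age(-1)

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            set_age("twenty")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            set_age(200)
```

Run with `python -m unittest` against the file containing the tests. Note that the tests pass only when the invalid input *fails* in the expected, controlled way.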

Q #16) Differentiate Ad-hoc Testing and Exploratory Testing?

Answer: There are several differences between Ad-hoc testing and Exploratory testing.

Let us see the differences below:

  • Ad-hoc testing involves learning the application first and then proceeding with testing; exploratory testing, as the name suggests, involves learning the application while testing it.
  • No specific set of documents is available to perform ad-hoc testing; exploratory testing is done with a detailed set of documents.
  • Ad-hoc testing requires good hands-on experience and knowledge of the software beforehand; in exploratory testing, knowledge of the application is gained while testing.
  • Ad-hoc testing is informal and basically follows negative testing; exploratory testing is considered formal and follows positive testing.
  • Ad-hoc testing does not follow a workflow; exploratory testing works with a workflow.

Q #17) Why is Automation Testing preferred over Manual Testing?

Answer: Well, both Automation testing and Manual testing have their importance and existence in the world of testing.

Given below are some important aspects due to which Automation Testing is preferred over Manual Testing:

  • The same test script can be reused for every run, which makes automation testing reliable and efficient.
  • It is mostly preferred for regression testing and repeated execution.
  • Automation testing is cost-effective for long-term execution and thus ensures better software quality.
  • Test scripts are reusable and fast, and everyone can see the results.
  • Tools used for automation testing are faster and more reliable than the manual approach.

There are other factors that favor automation testing over manual testing; the above-mentioned are the major ones.

Q #18) What do you understand by ‘Test effectiveness’ and ‘Test efficiency’?

Answer: Test efficiency measures the number of resources and the amount of test code consumed to execute a particular function. It also reflects the number of resources utilized in creating the software product.

This can be determined by the formula:

Test Efficiency = (Number of defects resolved/total number of defects submitted)* 100

Test effectiveness can be defined as a measure of how well the testing finds defects and of its effect on the software application; the customer’s response, once the application requirements are fulfilled, is evaluated as part of it.

This can be determined by the formula:

Test Effectiveness= (Number of defects found/ Number of test cases executed)

Q #19) Explain the process of Project Tailoring.

Answer: Project tailoring is a consistent, ongoing process that ensures the project performs correctly and in accordance with the business requirements. The process includes reviewing and modifying project data as per the current operational needs of the organization.

The review process is done at the organizational level but the implementation of the tailoring plans is done at the project level. The main goal and requirements of the organization, as well as customer and user relationships, are the two major factors that should be considered in the process.

Few aspects as per the organizational goals under the tailoring process are:

  • Project approach
  • Strategies
  • Controls and processes involved
  • Roles and responsibilities

Q #20) How do you differentiate between Priority and Severity of the defect within the project?

Answer: Both ‘Priority’ and ‘Severity’ are assigned to a bug to categorize issues into the order in which they should be fixed. They are based on different factors.

Let us understand their differences below:

  • Priority determines the order in which developers take up defects for fixing; severity determines the impact of a particular defect on the functionality of the application.
  • Priority is associated with scheduling and is driven by business standards; severity is both associated with, and driven by, functionality.
  • Priority is decided based on customer requirements; severity is decided based on the technical aspects of the product.
  • Priority is categorized as ‘High’, ‘Medium’, or ‘Low’; severity is categorized as ‘Critical’, ‘Major’, ‘Moderate’, or ‘Minor’.

For example, a bug with high priority and low severity does not impact the application much but needs to be fixed immediately, whereas a bug with high severity and low priority has to be fixed but does not require immediate action.

Q #21) Why is Performance Testing necessary to be done for any application?

Answer: In simple language, Performance testing is done to determine the behavior and response of an application under various situations. This helps to gather information regarding application stability, scalability, speed, etc.

The reasons for doing performance testing can be understood from the below points:

  • It determines the response time and performance of an application component under a workload.
  • The response time of a user’s activity is calculated.
  • It requires experienced programmers with extensive technical knowledge.
  • It determines the behavior of the application under load, i.e., when the number of users increases suddenly.

Q #22) What is Specification-Driven testing?

Answer: As the name itself suggests, specification-driven testing is done based on the requirement specification of the application, where the functional specifications serve as the basis of the tests performed.

This form of testing is similar to ‘black-box testing’, where the user inputs multiple sets of data and the output is observed. It is appropriate at all levels of testing that have a specification and a test plan.

Q #23) Explain CMMI.

Answer: CMMI stands for Capability Maturity Model Integration. This model was developed by the Software Engineering Institute (SEI). It is based on the principle that the processes involved in managing and developing a product or system determine the quality.

It also provides guidelines for process improvement for the product or even the entire organization.

CMMI is divided into 5 levels as enlisted below:

  • Level 1: Initial
  • Level 2: Managed
  • Level 3: Defined
  • Level 4: Quantitatively Managed
  • Level 5: Optimizing

Q #24) Explain the advantages of implementing CMMI.

Answer: There are several advantages to implementing CMMI.

They are listed as follows:

  • It provides detailed coverage and reporting of the product lifecycle and thus helps in process improvements.
  • The existing standards of the organization, their processes and procedures get improved as a part of CMMI implementation.
  • As a result of CMMI implementation, there is an increase in on-time delivery as well as customer satisfaction.
  • It also leads to effective management and increased cost savings as there is early detection of errors.

Q #25) Enlist some Automation Testing Tools.

Answer: Some of the automation testing tools are enlisted below:

  • Selenium
  • Watir
  • Windmill
  • SoapUI
  • Tellurium

Q #26) Can we do regression testing in Unit Testing?

Answer: Definitely. Regression testing checks for undesired defects that might have been introduced into the code as a side effect of fixing other defects. Unit testing is the execution of a small, independent, individual part of the code.

Regression testing can be done at any level, from unit testing through integration testing to acceptance testing. Regression testing is defined by its perspective, while unit testing is defined by its level in the testing hierarchy (bottom-up, top-down).
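A unit-level regression test pins a previously fixed defect down so it cannot silently return; the discount function, the bug ID, and the truncation bug described below are all invented for illustration:

```python
import unittest

def apply_discount(price_cents, percent):
    # Fixed (hypothetical) defect: integer truncation of the discount
    # once produced off-by-one-cent totals; round() is the fix.
    return price_cents - round(price_cents * percent / 100)

class DiscountRegressionTests(unittest.TestCase):
    def test_bug_142_half_cent_rounding(self):
        # Regression test for hypothetical BUG-142: 15% off 999 cents
        # once returned 850 instead of 849 due to truncation.
        self.assertEqual(apply_discount(999, 15), 849)

    def test_existing_behaviour_unchanged(self):
        # Regression also re-checks passing cases around the fix.
        self.assertEqual(apply_discount(1000, 10), 900)
```

Kept in the unit suite, this test runs on every build, which is exactly regression testing performed at the unit level.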

Q #27) What is the difference between Smoke testing and Sanity Testing?

Answer:

  • Smoke testing covers the old, prominent, or existing features of the build, while sanity testing validates the newly added modules and the defects fixed in the build.
  • Smoke testing happens first and is then followed by sanity testing.
  • Smoke testing covers the critical functionalities catered by the software, so it extends throughout the software. Sanity testing, on the other hand, is narrowed down to just the recently added modules, which are tested in depth.

Q #28) What are your daily activities as a manual tester in your office?

Manual: The first thing I do is refresh the dashboard to check the status of requirements/enhancements or bugs in the current iteration. This is followed by the daily scrum call and reporting, and by discussion and brainstorming sessions to define test scenarios and test cases.

These cases are then executed after being redrafted as per the review. Liaising with clients on non-functional requirements is also one of the major activities on my plate.

Q #29) What are your daily activities as a member of the automation tester in your office?

Automation: My day begins with a daily status meet that discusses yesterday’s automation results, in case I have fired a batch of test cases on the new build.

The execution cycle can be called a Health Check, to see how healthy the build is.

This is followed by reporting defects based on the script failures or design changes in the functionality; maintaining the scripts, libraries, or functions; and automating and checking in new scripts for new requirements and, if required, new functions in the function library.

Sometimes the test scripts need to be re-executed individually to find regression defects via automation and to add them to the test suite as well.

Q #30) How do you differentiate between a requirement and a defect and an enhancement?

Answer: A requirement is a user story that is essential to be implemented, tested and delivered.
An enhancement is an added or improvised feature to the existing one.
A defect is rather a complete deviation from the expected user stories.

Also, if a defect uncovers an area of a requirement that is not otherwise stated in the specification, it can also be called a requirement, or a part of one.

Q #31) What do you do when your developer denies fixing a bug that you filed?

Answer: An important factor that decides whether a defect gets fixed is the “Priority” assigned to it. If the defect is high priority, a show-stopper that blocks a major functionality and is consistently reproducible, then it must be fixed in the build.

The same has to be conveyed to developers effectively since together testers and developers contribute to the quality of the product to be shipped.

Other aspects that can help convince the developer to fix a bug within a short period are quality reporting of the bug and making the developers understand the fact that the fixing of bug is of prime importance in the release.

Q #32) What do you do when your developer denies that what you filed IS A BUG?

Answer: A most important phase of the defect life cycle is ‘Rejected’, which means that the logged incident report is not a valid one. The business requirement document that states requirements can help to understand the software and hence the nature of the incident reported.

Analyze the bug and present your findings to the developer and the team. If it is a defect, never fail to log it. Sometimes testers have to provide a gap analysis and present it to the developers. If that does not resolve the conflict, then senior folks in the team should pitch in.

Q #33) What comes first Re-testing or Regression testing?

Answer: Re-testing comes first, since it is re-running the code; in simpler terms, it is a repeated execution of pre-defined steps, and it does not necessarily happen only after fixing the code. A regression test, on the other hand, assesses the side effects of a resolved defect.

Certainly solving one defect and adding another one into the code is not the purpose of the testing process. The best finds and best catches of testers are usually regression defects. A build should never be released without being regression tested.

Q #34) What is an alternative to Beta Testing?

Answer: Beta testing is held at the client’s site with the least involvement of developers, recording failures in the real production environment. If a firm does not carry out such a practice, a safer idea is to ship the product first to clients who are not in the queue for the latest build.

For a couple of days, service consultants at the client’s premises can use the software and record and monitor the activities that assure the stability of the release in their environment, so that even a major bug left unfixed can be caught before delivery to the targeted client. Another approach is swapped testing of requirements within a team, for unbiased testing.

Q #35) What are the drawbacks of the Agile implementation/ methodology that you faced?

Answer: Drawbacks are as follows:

  1. Sprints are usually very deadline-constrained.
  2. Documentation is not a priority.
  3. Switching between PBIs (Product Backlog Items) can be frequent.

Q #36) Why is impact analysis important?

Answer: To practice risk-based testing, impact analysis has to be done. By doing so, test cases can be designed so that all the severe bugs, critical from the customer’s point of view, are resolved ahead of time. A good study of the business, the client’s needs, and their usage of the software must be carried out.

For example, the most important risk associated with software in the banking domain is security. Any new form added to existing software can be vulnerable. A good amount of security testing is advisable: verifying links, redirection, and navigation to the proper page, and installing a proxy if needed.

Q #37) Explain, with the help of an example each, Performance Testing, Stress Testing, and Load Testing.

Answer: The best case here that can be taken is a live website.

Performance testing is done to uncover glitches in the system when it is put through conditions similar to a real-time scenario. It does not have to be performed under stressed conditions. The outputs of performance testing help establish whether the system is ready to go into production.

For a simple ticket booking flow, a performance issue may show up as slowness. For instance, a query using joins may be a bit slow because it includes an unnecessary clause or because data is stored inappropriately in the database.

Stress testing is a type of Performance testing that is performed by putting the software under extreme conditions (heavy and undistributed loads, limited computational resources, high concurrency).

If a system exhibits behavior like lost or corrupted data, resources still being consumed after the stress is removed, unresponsiveness, or unhandled exceptions, it has failed stress testing. Sometimes disk failure or an unnecessary increase in GDI object counts can also be the outcome.

For example, a website hosted on a machine that is already consuming huge amounts of memory, or that is being bombarded with repeated requests, should not hang or log you out.

Load testing is observing the system behavior while constantly increasing the load on the system until a threshold is reached. Workload models, metrics and load levels are usually the input to the load testing.

For example, the time to fetch seat availability for a train gradually increases as the Tatkal booking window (10 am or 11 am) approaches, since the number of users logged into the system keeps growing.
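The load-testing idea of ramping up load levels and observing a metric can be sketched without any real server; the seat-availability endpoint below is simulated by a function that queues behind a shared lock, so average latency grows as concurrent users increase:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

_db_lock = threading.Lock()  # simulated single-connection database

def check_seat_availability(_train_no):
    # Hypothetical endpoint: each request briefly holds the shared
    # resource, so response time grows as users queue behind the lock.
    start = time.perf_counter()
    with _db_lock:
        time.sleep(0.005)  # simulated query time
    return time.perf_counter() - start

def run_load(users):
    # Workload model: `users` concurrent requests; metric: avg latency.
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(check_seat_availability, range(users)))
    return sum(latencies) / len(latencies)

for load in (1, 5, 20):  # constantly increasing load levels
    print(f"{load:>2} users -> avg response {run_load(load) * 1000:.1f} ms")
```

Real load tests replace the simulated function with HTTP calls driven by a tool such as JMeter or Locust, but the workload model, load levels, and metrics are the same inputs the answer describes.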

Q #38) What has been one of your greatest challenges while doing regression testing?

Answer: There can be various challenges while performing regression testing.

  • Re-executing tests repeatedly can become monotonous for testers.
  • It is time-consuming, since such testing sometimes needs out-of-the-box thinking.
  • Compromised business value.
  • Improper selection of regression test cases might skip a major regression defect to be found.
  • Reproducing the defect on production hence becomes inconsistent.
  • Large suite to execute.

Q #39) If you are asked to document Test scenarios, Test cases, Test plans, Test Strategy what will you begin with and in what sequence will the rest follow?

Answer: The sequence will be Test Strategy, Test Plan, Test Scenarios and finally Test cases.

Q #40) What if I miss documenting any one of the above? Say I miss documenting the test plan what will be the consequences?

Answer: If we miss documenting the test plan, there will be a void in the testing scope, its objectives, approach, and emphasis. It will then be hard to determine the features to be tested, the techniques to use, and the pass/fail criteria, and ultimately a major risk gets associated with the testing.

Q #41) How would you begin testing the build that you recently got: Is there any approach you follow e.g. first begin Smoke testing, then Sanity testing?

Answer: Smoke testing > Sanity testing > Exploratory Testing> Functionality Testing> Regression testing and Final Product Validation.

Q #42) Explain the format of the Bug Report that you followed?

Answer:

A bug report should contain the following information:

  • Bug Id
  • Mapping to Requirement/ Enhancement/Existing bug
  • Bug Summary/title
  • Version of the product
  • Priority
  • Configuration (System specifications)
  • Pre-requisites
  • Steps
  • Expected outcome
  • Actual Outcome
  • Logs, snapshots, video clips
  • Status
  • Other remarks
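As an illustration only (the class and field names below are hypothetical, not taken from any particular bug tracker), the fields above can be sketched as a simple data structure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    """Minimal sketch of the bug report fields listed above."""
    bug_id: str
    requirement_ref: str          # mapping to requirement/enhancement/existing bug
    title: str                    # bug summary/title
    product_version: str
    priority: str                 # e.g. "High", "Medium", "Low"
    severity: str
    configuration: str            # system specifications
    prerequisites: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    expected_outcome: str = ""
    actual_outcome: str = ""
    attachments: List[str] = field(default_factory=list)  # logs, snapshots, clips
    status: str = "New"
    remarks: str = ""

# Example usage with invented values:
report = BugReport(
    bug_id="BUG-101",
    requirement_ref="REQ-42",
    title="Login fails with valid credentials",
    product_version="2.3.1",
    priority="High",
    severity="Critical",
    configuration="Windows 11, Chrome 120",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_outcome="User is logged in",
    actual_outcome="Error 500 is shown",
)
print(report.status)  # a newly filed report starts in the "New" state
```

Structuring the report this way makes every mandatory field explicit, so a tester cannot file a defect without the steps and outcomes a developer needs to reproduce it.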

Q #43) How do you select regression test cases or form the regression test suite?

Answer: Regression test selection is an outcome of impact analysis. It is a mapping of the features used or accessed in the areas you are testing, their integration with other features, and the system's end-to-end flows.

You can also pick up defects previously filed against the same functionality in earlier builds. Ideally, each defect should be regression tested using at least five different test cases that exercise the functionality.
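A minimal sketch of that impact-based selection, assuming a feature-to-test-case mapping (all test case IDs and feature names below are invented for illustration):

```python
# Map each test case to the features it exercises (illustrative data only).
test_case_features = {
    "TC-01": {"login"},
    "TC-02": {"login", "profile"},
    "TC-03": {"checkout"},
    "TC-04": {"checkout", "payment"},
    "TC-05": {"search"},
}

def select_regression_suite(impacted_features):
    """Pick every test case that touches at least one impacted feature."""
    return sorted(
        tc for tc, features in test_case_features.items()
        if features & set(impacted_features)
    )

# A change to the payment flow also impacts its checkout integration.
suite = select_regression_suite({"checkout", "payment"})
print(suite)  # ['TC-03', 'TC-04']
```

In practice the mapping would come from a requirements traceability matrix or test management tool, but the selection logic is the same: intersect impacted features with the features each case covers.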

Q #44) Can you come up with an example of each of the following defects?

  • A low priority, high severity defect
  • A high priority, low severity defect

Answer: A defect that crashes the application but is reproducible only at a particular time stamp on a particular operating system may be a high severity, low priority defect.

A defect filed against a view that does not open on double-click but opens with a right-click can be a high priority, low severity defect.

Q #45) Write one effective test case to test whether a given paper is a white paper?

Answer: If the color of the ink with which you write on the paper remains unchanged, then the paper is white. For instance, if you write on white paper with red ink, the ink that is red in the pen appears red on the paper as well.

Note: There are many other answers to this question. You can think of any such valid answer with an underlying logic.

Q #46) What is Charter testing?

Answer: Session-based testing performed according to the goals and agenda listed in a charter before testing begins is known as charter testing.

The testing here is done in a fixed time slot, with less focus on documentation and more focus on the testing itself. It is a variant of exploratory testing wherein the test engineers verify the software within a time frame (for example, just 2 hours) based on heuristics they develop.

Q #47) What is your approach when you have a high priority release to be delivered in a very short time?

Answer: During such instances, a well-thought-out plan can be beneficial.

The following can be done to assist testing in a time-shortage scenario:

  • Using existing, up-to-date automation scripts to execute regression testing.
  • Testing flow-based scenarios end to end.
  • Executing high priority test cases first and, if time permits, switching to lower priority ones.
  • Re-testing the high priority bugs filed on previous versions.
  • Applying rapid software testing techniques.
  • Asking developers to run unit tests to gain more coverage.
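The idea of running high priority cases first within a fixed time budget can be sketched roughly as follows (the case IDs, priorities, and durations are made up for illustration):

```python
# Each tuple: (test case id, priority, estimated minutes). 1 = highest priority.
cases = [
    ("TC-07", 3, 10),
    ("TC-01", 1, 15),
    ("TC-04", 2, 20),
    ("TC-02", 1, 10),
]

def plan_run(cases, budget_minutes):
    """Schedule cases in priority order, skipping any that overflow the budget."""
    scheduled, used = [], 0
    for case_id, _priority, minutes in sorted(cases, key=lambda c: c[1]):
        if used + minutes <= budget_minutes:
            scheduled.append(case_id)
            used += minutes
    return scheduled

# With only 40 minutes, both priority-1 cases run, TC-04 no longer fits,
# and the short priority-3 case fills the remaining slack.
print(plan_run(cases, budget_minutes=40))  # ['TC-01', 'TC-02', 'TC-07']
```

This is a simple greedy schedule; a real test management tool would also weigh risk, recent code changes, and flakiness, but the core trade-off (priority versus remaining time) is the same.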

Q #48) Write test cases for any device/object present around you (Example: a chair).

Answer: A piece of advice here: always begin by gathering requirements. It shows your maturity with respect to the Software Development Life Cycle. Feel free to ask questions after selecting the object.

In this case:

  • What type of chair is it? An office chair, study table chair, sofa chair, dining table chair, or comfort chair?
  • What material is the chair made of: wood, steel, plastic, upholstery?
  • Ask for the dimensions (height, weight, based on the type of chair).
  • Ask for availability, and based on that begin drafting your cases.

Test cases would differ for each type of chair, which is better left to your own thinking (for example, the purpose of the chair, dimensions according to its type, portable vs. non-portable, lightweight, purchase options).

For each chair, a performance test case could be to derive the tensile strength or the maximum weight-bearing capacity.

Q #49) Can everything be automated?

Answer: To a certain extent, yes. But automation tools, like other software, have their limitations. Also, the software under test (the application under test) keeps getting upgraded.

So there is no guarantee that software testing can run without manual intervention. After all, a tool is only as smart as the tester; it is just software testing yet another piece of software. It is the code, scripts, and libraries that need to be intelligent enough to test and find defects.

Conclusion

Hope this exercise helps you warm up with some questions, gives you a great kick-start for your interviews, and refines your confidence while answering. There can also be other scenario-based questions arising out of your resume/profile.

Hence, it is always advisable to practice a mock interview beforehand, so that the interview turns out to be a win-win for both the interviewer and the candidate. Remember, a quality analyst is more than a test engineer; their feedback matters not just for the product's quality but also for the process followed to test the software.

Thank you and good luck with the interviews!