Explore the Differences between Smoke Testing and Sanity Testing in detail with examples:
In this tutorial, you will learn what is Sanity Testing and Smoke Testing in Software Testing. We will also learn the key differences between Sanity and Smoke testing with simple examples.
Most of the time, we get confused between the meanings of Sanity Testing and Smoke Testing. First of all, these two types of testing are quite different and are performed during different stages of a testing cycle.
What You Will Learn:
- Sanity Testing
- My Experience
- Sanity Testing Vs Regression Testing
- Strategy for Mobile App Testing
- Precautionary Measures
- Smoke Testing
- Smoke Testing Examples
- Importance of SCRUM Methodology
- Smoke Test Vs Build Acceptance Testing
- Smoke Test Cycle
- Who Should Perform the Smoke Test?
- Why Should We Automate Smoke Tests?
- Advantages And Disadvantages
- Difference Between Smoke and Sanity Testing
- Recommended Reading
Sanity Testing is done when, as QA, we do not have sufficient time to run all the test cases, be it Functional Testing, UI, OS or Browser Testing.
Hence, we can define:
“Sanity Testing is a test execution which touches each implementation and its impact, but not thoroughly or in-depth. It may include functional, UI, version, etc. testing, depending on the implementation and its impact.”
Don’t we all fall into a situation where we have to sign off in a day or two but the build for testing is still not released?
Ah yes, I bet you must have also faced this situation at least once in your Software Testing experience. Well, I faced it a lot because my project(s) were mostly agile and at times we were asked to deliver it the same day. Oops, how can I test and release the build within a stretch of hours?
I used to go nuts at times because even if it was a small functionality, the implications could be tremendous. As icing on the cake, clients would sometimes simply refuse to give extra time. How could I complete the whole testing in a few hours, verify all the functionality and bugs, and release the build?
The answer to all such problems was very simple, i.e nothing but using Sanity Testing strategy.
When we do this testing for a module or functionality or a complete system, the Test cases for execution are selected such that they will touch all the important bits and pieces of the same i.e. wide but shallow testing.
At times the testing is even done randomly with no test cases. But remember, the sanity test should only be done when you are running short of time, so never use this for your regular releases. Theoretically, this testing is a subset of Regression Testing.
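The "wide but shallow" selection described above can be sketched in a few lines. The inventory and test names below are purely hypothetical, made up for illustration; the point is only that every area gets one representative test while deep cases are deferred:

```python
# Hypothetical test inventory: (area, depth, test name).
TESTS = [
    ("login",     "shallow", "test_login_happy_path"),
    ("login",     "deep",    "test_login_lockout_after_failures"),
    ("billing",   "shallow", "test_invoice_total_is_displayed"),
    ("billing",   "deep",    "test_invoice_rounding_edge_cases"),
    ("reporting", "shallow", "test_report_page_loads"),
]

def sanity_suite(tests):
    """Pick one shallow test per area: wide (all areas covered)
    but shallow (deep cases skipped for the time crunch)."""
    suite, seen = [], set()
    for area, depth, name in tests:
        if depth == "shallow" and area not in seen:
            seen.add(area)
            suite.append(name)
    return suite
```

Running `sanity_suite(TESTS)` keeps exactly one shallow test per area, which mirrors the idea of touching "all the important bits and pieces" without going in-depth.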
Out of my 8+ years of career in Software Testing, I worked in the Agile methodology for 3 years, and that was when I used sanity testing the most.
All the big releases were planned and executed in a systematic manner but at times, small releases were asked to be delivered as soon as possible. We didn’t get much time to document the test cases, execute, do the bug documentation, do the regression and follow the whole process.
Hence, given below are some of the key pointers that I used to follow under such situations:
#1) Sit with the manager and the dev team when they are discussing the implementation because they have to work fast and hence we can’t expect them to explain to us separately.
This will also help you get an idea about what they are going to implement, which areas it will affect, etc. This is a very important thing to do, because at times we simply don’t realize the implications and, at worst, an existing functionality may get hampered.
#2) As you are short of time, by the time the development team is working on the implementation, you can note down the test cases roughly in tools like Evernote, etc. But make sure to write them somewhere so that you can add them later to the test case tool.
#3) Keep your testbed ready as per the implementation. If you see any red flags, for example, creating some specific test data or setting up the testbed will take time (and it’s an important test for the release), then raise those flags immediately and inform your manager or PO about the roadblock.
Just because the client wants it asap, it doesn’t mean that QA will release even if it is half tested.
#4) Make an agreement with your team and manager that due to time crunch you will only communicate the bugs to the development team and the formal process of adding, marking the bugs for different stages in the bug tracking tool will be done later in order to save time.
#5) When the development team is testing on their end, try to pair with them (called dev-QA pairing) and do a basic round on their setup itself. This will help avoid the to and fro of the build if the basic implementation is failing.
#6) Now that you have the build, test the business rules and all the use cases first. You can keep tests like field validations, navigation, etc. for later.
#7) Whatever bugs you find, make a note of all of them and try to report them together to the developers rather than reporting individually because it will be easy for them to work on a bunch.
#8) If you have a requirement for the overall Performance Testing, or Stress or Load Testing, then make sure that you have a proper automation framework for the same. Because it is nearly impossible to manually test these with a sanity test.
#9) This is the most important part, and indeed the last step of your sanity test strategy: when you draft the release email or document, mention all the test cases that you executed, the bugs found with a status marker, and if anything was left untested, mention it along with the reasons. Try to write a crisp story about your testing which conveys to everyone what has been tested and verified, and what has not.
I followed this religiously when I was using this testing.
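For point #8 above, a "proper automation framework" is usually a heavyweight tool, but the core idea of a load test is small enough to sketch. The snippet below is illustrative only, not a real framework: it fires an action from several concurrent workers and collects per-call latencies.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(action, users=20):
    """Tiny load-test sketch: invoke `action` from `users` concurrent
    workers and return the list of per-call latencies in seconds.
    Real tools (JMeter, Locust, etc.) add ramp-up, reporting and more."""
    def timed_call(_):
        start = time.perf_counter()
        action()                      # the request/operation under load
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(timed_call, range(users)))
```

In practice `action` would be an HTTP call against the system under test; here it can be any callable, which also makes the sketch easy to try locally.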
Let me share my own experience:
#1) We were working on a website that used to pop up ads based on keywords. The advertisers used to place bids for particular keywords, which had a screen designed for the same. The default bid value used to be shown as $0.25, which the bidder could change.
There was one more place where this default bid used to show up and it could be changed to another value as well. The client came with a request to change the default value from $0.25 to $0.5 but he mentioned only the obvious screen.
During our brainstorming discussion, we forgot (?) about this other screen because it wasn’t used much for that purpose. But while testing, when I ran the basic case of the bid being $0.5 and checked end to end, I found that the cron job for it was failing because at one place it was still finding $0.25.
I reported this to my team and we made the change and successfully delivered it the same day itself.
#2) Under the same project (mentioned above), we were asked to add a small text field for notes/comments for bidding. It was a very simple implementation and we were committed to deliver it the same day.
Hence, as mentioned above, I tested all the business rules and use cases around it, and when I did some validation testing, I found that when I entered a combination of special characters like </>, the page crashed.
We thought it over and figured out that the actual bidders won’t in any case use such combinations. Hence, we released it with a well-drafted note about the issue. The client accepted it as a bug, but agreed with us to fix it later because it was a severe bug but not a priority one.
#3) Recently, I was working on a mobile app project, and we had a requirement to update the time of delivery shown in the app as per the time zone. It was not only to be tested in the app but also for the web service.
While the development team was working on the implementation, I created the automation scripts for the web service testing and the DB scripts for changing the time zone of the delivery item. This saved my efforts and we could achieve better results within a short duration.
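A web-service check for that time-zone requirement can be sketched as below. The function names, timestamp format and offsets are all hypothetical stand-ins for whatever the real service returns; only the conversion-and-assert pattern is the point:

```python
from datetime import datetime, timezone, timedelta

def delivery_time_in_zone(utc_iso, offset_hours):
    """Convert a UTC delivery timestamp (as a web service might return it)
    into the user's time zone, given a whole-hour UTC offset."""
    utc = datetime.fromisoformat(utc_iso).replace(tzinfo=timezone.utc)
    return utc.astimezone(timezone(timedelta(hours=offset_hours)))

def check_delivery_hour(utc_iso, offset_hours, expected_local_hour):
    """The kind of assertion an automated web-service test could make:
    does the displayed local hour match the expectation for the zone?"""
    return delivery_time_in_zone(utc_iso, offset_hours).hour == expected_local_hour
```

A real script would fetch `utc_iso` from the service response and loop `offset_hours` over the zones configured in the DB, which is roughly what the DB scripts mentioned above would drive.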
Sanity Testing Vs Regression Testing
Given below are a few differences between the two:
| S. No. | Regression Testing | Sanity Testing |
|---|---|---|
| 1 | Regression testing is done to verify that the complete system and bug fixes are working fine. | Sanity testing is done at random to verify that each functionality is working as expected. |
| 2 | Every tiniest part is regressed in this testing. | Each implementation is touched, but not thoroughly or in-depth. |
| 3 | It is a well elaborated and planned testing. | This is not a planned testing and is done only when there’s a time crunch. |
| 4 | An appropriately designed suite of test cases is created for this testing. | It may not every time be possible to create the test cases; a rough set of test cases is usually created. |
| 5 | This includes in-depth verification of functionality, UI, performance, browser/OS testing, etc., i.e. every aspect of the system is regressed. | This mainly includes verification of business rules and functionality. |
| 6 | This is a wide and deep testing. | This is a wide and shallow testing. |
| 7 | This testing is at times scheduled for weeks or even month(s). | This mostly spans over 2-3 days max. |
Strategy for Mobile App Testing
You must be wondering why I am specifically mentioning mobile apps here.
The reason is that the OS and browser versions for web or desktop apps do not vary much, and the screen sizes in particular are standard. But with mobile apps, the screen size, mobile network, OS versions, etc. affect the stability, the look and, in short, the success of your mobile app.
Hence a strategy formulation becomes critical when you are performing this testing on a mobile app because one failure can land you in big trouble. Testing must be done smartly and with caution too.
Given below are some pointers to help you perform this testing successfully on a mobile app:
#1) First of all, analyze the impact of the OS version on the implementation with your team.
Try to find answers to questions like: Will the behavior be different across versions? Will the implementation work on the lowest supported version or not? Will there be performance issues with the implementation on some versions? Are there any specific features of the OS that might impact the behavior of the implementation? etc.
#2) On the above note, analyze the phone models as well, i.e., are there any features on the phone that will impact the implementation? Does the implementation’s behavior change with GPS? Does it change with the phone’s camera? etc. If you find that there’s no impact, avoid testing on different phone models.
#3) Unless there are any UI changes in the implementation, I would recommend keeping UI testing at the lowest priority. You can inform the team (if you want) that the UI will not be tested.
#4) In order to save your time, avoid testing on good networks because it is obvious that the implementation is going to work as expected on a strong network. I would recommend starting your testing on a weaker network, such as 2G or 3G.
#5) This testing is to be done in less time but make sure that you do at least one field test unless it’s a mere UI change.
#6) If you must test for a matrix of different OS and their version, I would suggest that you do it in a smart way. For instance, choose the lowest, medium and the latest OS-version pairs for testing. You can mention in the release document that not every combination is tested.
#7) On a similar line, for UI implementation sanity test, use small, medium and large screen sizes to save time. You can also use a simulator and emulator.
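The matrix-trimming idea in points #6 and #7 above can be sketched as a small picker: out of a sorted support matrix, test only the lowest, a middle, and the latest entry. The version numbers below are made up for illustration:

```python
def pick_matrix_points(versions):
    """Pick the lowest, a middle, and the latest entry from a support
    matrix (OS versions, screen sizes, ...) to trim the test matrix.
    A sketch of the 'smart selection' idea, not an exhaustive strategy."""
    ordered = sorted(versions)
    if len(ordered) <= 3:
        return ordered                      # matrix already small enough
    return [ordered[0], ordered[len(ordered) // 2], ordered[-1]]
```

Whatever combinations this drops should, as noted above, be called out in the release document as untested.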
Sanity Testing is performed when you are running short of time, hence it is not possible for you to run each and every test case, and most importantly, you are not given enough time to plan out your testing. To avoid blame games, it is better to take precautionary measures.
In such cases, lack of written communication, test documentation and miss outs are quite common.
To ensure that you don’t fall prey to this, make sure that:
- Never accept a build for testing until you are given a written requirement shared by the client. It happens that clients communicate changes or new implementations verbally, in chat, or as a simple one-liner in an email, and expect us to treat that as a requirement. Compel your client to provide some basic functionality points and acceptance criteria.
- Always make rough notes of your test cases and bugs if you do not have sufficient time to write them neatly. Don’t leave these undocumented. If you have some time, share it with your lead or team so that if anything is missing they can point it out easily.
- If you and your team are short of time, make sure that the bugs are marked in the appropriate state in an email. You can email the complete list of bugs to the team and make the devs mark them appropriately. Always keep the ball in the other’s court.
- If you have the Automation Framework ready, use it and avoid doing Manual Testing, that way in less time you can cover more.
- Avoid the scenario of “release in 1 hour” unless you are 100% sure that you will be able to deliver.
- Last but not the least, as mentioned above, draft a detailed release email communicating what is tested, what is left out, the reasons, risks, which bugs are resolved, which are deferred (‘Latered’), etc.
As a QA, you should judge what is the most important part of the implementation that needs to be tested and what are the parts that can be left out or basic-tested.
Even in a short time, plan a strategy about how you want to do and you will be able to achieve the best in the given time frame.
Smoke Testing is not exhaustive testing but it is a group of tests that are executed to verify if the basic functionalities of that particular build are working fine as expected or not. This is and should always be the first test to be done on any ‘new’ build.
When the development team releases a build to the QA for testing, it is obviously not possible to test the entire build and verify immediately if any of the implementations are having bugs or if any of the working functionality is broken.
In light of this, how will QA make sure that the basic functionalities are working fine?
The answer to this will be to perform Smoke Testing.
Once the tests are marked as Smoke tests (in the test suite) pass, only then will the build be accepted by the QA for in-depth testing and/or regression. If any of the smoke tests fail, then the build is rejected and the development team needs to fix the issue and release a new build for testing.
Theoretically, the Smoke test is defined as surface-level testing to certify that the build provided by the development team to the QA team is ready for further testing. This testing is also performed by the development team before releasing the build to the QA team.
This testing is normally used in Integration Testing, System Testing, and Acceptance Level Testing. Never treat it as a substitute for actual end-to-end complete testing. It comprises both positive and negative tests, depending on the build implementation.
Smoke Testing Examples
This testing is normally used for Integration, Acceptance and System Testing.
In my career as a QA, I always accepted a build only after I had performed a smoke test. So, let’s understand what a smoke test is from the perspective of all these three testings, with some examples.
#1) Acceptance Testing
Whenever a build is released to QA, smoke test in the form of an Acceptance Testing should be done.
In this test, the first and most important smoke test is to verify the basic expected functionality of the implementation. This way, you will need to verify all the implementations for that particular build.
Let us take the following Examples as implementations done in the build to understand the smoke tests for those:
- Implemented the login functionality to allow the registered drivers to log in successfully.
- Implemented the dashboard functionality to show the routes that a driver is to execute today.
- Implemented the functionality to show an appropriate message if no routes exist for a given day.
In the above build, at the acceptance level, the smoke test will mean to verify that the three basic implementations are working fine. If any of these three are broken, then the QA should reject the build.
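At the acceptance level, such a smoke check could look like the sketch below. The data and functions are hypothetical stand-ins for the real app; in an actual suite these calls would drive the app or its API:

```python
# Hypothetical stand-ins for the build under test.
REGISTERED_DRIVERS = {"driver1": "secret"}
ROUTES = {"driver1": {"2024-05-01": ["Route A", "Route B"]}}
NO_ROUTES_MESSAGE = "No routes scheduled for today."

def login(user, password):
    """Implementation 1: registered drivers can log in."""
    return REGISTERED_DRIVERS.get(user) == password

def dashboard(user, day):
    """Implementations 2 and 3: show today's routes, or a message."""
    routes = ROUTES.get(user, {}).get(day, [])
    return routes if routes else NO_ROUTES_MESSAGE

def smoke_test(user, password, day):
    """Accept the build only if all three basic implementations work."""
    checks = [
        login(user, password),                          # 1: login works
        dashboard(user, day) == ["Route A", "Route B"], # 2: routes shown
        dashboard(user, "2024-05-02") == NO_ROUTES_MESSAGE,  # 3: message
    ]
    return "accept build" if all(checks) else "reject build"
```

Any one failing check rejects the build, which is exactly the rule stated above.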
#2) Integration Testing
This testing is usually done when the individual modules are implemented and tested. At the Integration Testing level, this testing is performed to make sure that all the basic integration and end to end functionalities are working fine as expected.
It may be the integration of two modules or all modules together, hence the complexity of the smoke test will vary depending on the level of integration.
Let us consider the following Examples of integration implementation for this testing:
- Implemented the integration of route and stop modules.
- Implemented the integration of arrival status update and it reflects the same on the stop screen.
- Implemented the integration of complete pick up till the delivery functionality modules.
In this build, the smoke test will not only verify these three basic implementations, but for the third implementation, a few cases will verify the complete integration too. It helps a lot in finding the issues that got introduced during integration and the ones that went unnoticed by the development team.
#3) System Testing
As the name itself suggests, at the system level, the smoke testing includes tests for the most important and commonly used workflows of the system. This is done only after the complete system is ready and tested; smoke testing at the system level can also be referred to as smoke testing before regression testing.
Before starting the regression of the complete system, the basic end-to-end features are tested as a part of the smoke test. The smoke test suite for the complete system comprises the end-to-end test cases that the end-users are going to use very frequently.
This is usually done with the help of automation tools.
Importance of SCRUM Methodology
Nowadays, projects hardly follow the Waterfall methodology; rather, most projects follow Agile and SCRUM. Compared to the traditional Waterfall method, Smoke Testing is held in high regard in SCRUM and Agile.
I worked for 4 years in SCRUM. We know that in SCRUM, the sprints are of shorter duration and hence it is of extreme importance to do this testing so that the failed builds can immediately be reported to the development team and fixed too.
The following are some takeaways on the importance of this testing in SCRUM:
- Out of a fortnight-long sprint, half the time is allocated to QA, but at times the builds to QA get delayed.
- In sprints, it is best for the team that the issues are reported at an early stage.
- Each story has a set of acceptance criteria, hence testing the first 2-3 acceptance criteria is equal to smoke testing of that functionality. Customers reject the delivery if a single criterion is failing.
- Just imagine what would happen if the development team delivered the build 2 days late, only 3 days remain for the demo, and you come across a basic functionality failure.
- On average, a sprint has stories ranging from 5-10, hence when the build is given, it is important to make sure that each story is implemented as expected before accepting the build into testing.
- If the complete system is to be tested and regressed, then a sprint is dedicated to the activity. A fortnight may be a little less to test the whole system, hence it is very important to verify the most basic functionalities before starting the regression.
Smoke Test Vs Build Acceptance Testing
Smoke Testing is directly related to Build Acceptance Testing (BAT).
In BAT, we do the same testing – to verify if the build has not failed and if the system is working fine or not. Sometimes, it happens that when a build is created, some issues get introduced and when it is delivered, the build doesn’t work for the QA.
I would say that BAT is a part of a smoke check, because if the system itself is failing, how can you as QA accept the build for testing? Not just the functionalities, the system itself has to work before the QAs proceed with in-depth testing.
Smoke Test Cycle
The Smoke Testing cycle works as follows.
Once a build is deployed to QA, the basic cycle followed is that if the smoke test passes, the build is accepted by the QA team for further testing but if it fails, then the build is rejected until the reported issues are fixed.
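That accept/reject gate can be expressed in a few lines. This is only a sketch; a real pipeline would wire the decision into the CI system and into the test-management tool:

```python
def smoke_gate(results):
    """Apply the smoke-test cycle described above.

    `results` maps smoke-test name -> whether it passed. Any failure
    rejects the build (returning the failures to report back to the
    development team); all passes accept it for further testing."""
    failed = [name for name, passed in results.items() if not passed]
    if failed:
        return ("rejected", failed)
    return ("accepted", [])
```

For example, a build whose app-launch check fails is rejected with that check listed, while an all-green suite accepts the build for in-depth testing and regression.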
Who Should Perform the Smoke Test?
Not the whole team is involved in this type of testing, in order to avoid wasting every QA’s time.
Smoke Testing is ideally performed by the QA lead who decides based on the result as to whether to pass the build to the team for further testing or reject it. Or in the absence of the lead, the QA’s themselves can also perform this testing.
At times, when the project is a large scale one, then a group of QA can also perform this testing to check for any showstoppers. But this is not so in the case of SCRUM because SCRUM is a flat structure with no Leads or Managers and each tester has their own responsibilities towards their stories.
Hence individual QA’s perform this testing for the stories that they own.
Why Should We Automate Smoke Tests?
This is the first test to be done on a build released by the development team(s). Based on the results of this testing, further testing is done (or the build is rejected).
The best way to do this testing is to use an automation tool and schedule the smoke suite to run whenever a new build is created. You may be wondering: why should I automate the smoke testing suite?
Let us look at the following case:
Let’s say that you are a week away from your release and, out of a total of 500 test cases, your smoke test suite comprises 80-90 of them. If you start executing all these 80-90 test cases manually, imagine how much time you will take. I think 4-5 days (minimum).
However, if you use automation and create scripts to run all 80-90 test cases, then ideally these will run in 2-3 hours and you will have the results instantly. Didn’t it save your precious time and give you the results about the build in much less time?
5 years back, I was testing a financial projection app, which took inputs about your salary, savings, etc., and projected your taxes, savings and profits depending on the financial rules. Along with this, we had country-specific customizations, where the tax rules used in the code would change depending on the country.
For this project, I had 800 test cases, of which 250 were smoke test cases. With the use of Selenium, we could easily automate and get the results of those 250 test cases in 3-4 hours. It not only saved time but also showed us the showstoppers ASAP.
Hence, unless it is impossible to automate, do take the help of automation for this testing.
Advantages And Disadvantages
Let us first take a look at the advantages as it has a lot to offer when compared to its few disadvantages.
Advantages:
- Easy to perform.
- Reduces the risk.
- Defects are identified at a very early stage.
- Saves effort, time and money.
- Runs quickly if automated.
- Least integration risks and issues.
- Improves the overall quality of the system.
Disadvantages:
- This testing is not equal to, or a substitute for, complete functional testing.
- Even after the smoke test passes, you may find showstopper bugs.
- This type of testing is best suited if you can automate; otherwise a lot of time is spent manually executing the test cases, especially in large-scale projects having around 700-800 test cases.
Smoke Testing should definitely be done on every build as it points out the major failures and showstoppers at a very early stage. This applies not only to new functionalities but also to the integration of modules, bug fixes and improvements. It is a very simple process to perform and gives correct results.
This testing can be treated as the entry point for complete Functional Testing of functionality or system (as a whole). But before that, the QA team should be very clear about what tests are to be done as smoke tests. This testing can minimize the efforts, save time and improve the quality of the system. It holds a very important place in sprints as the time in sprints is less.
This testing can be done both manually and also with the help of automation tools. But the best and preferred way is to use automation tools to save time.
Difference Between Smoke and Sanity Testing
| S. No. | Smoke Testing | Sanity Testing |
|---|---|---|
| 1 | Smoke testing verifies, at a basic level, that the implementations done in a build are working fine. | Sanity testing verifies that the newly added functionalities, bug fixes, etc. are working fine. |
| 2 | This is the first testing on the initial build. | Done when the build is relatively stable. |
| 3 | Done on every build. | Done on stable builds post regression. |
Given below are some more points of difference between them:
- This testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire or smoke. In the software industry, this testing is a shallow and wide approach whereby all areas of the application are tested without going too deep.
- The smoke test is scripted, either using a written set of tests or an automated test.
- Smoke tests are designed to touch every part of the application in a cursory way. It’s shallow and wide.
- This testing is conducted to ensure whether the most crucial functions of a program are working, but not bothering with the finer details. (Such as build verification).
- This testing is a normal health check-up to the build of an application before taking it to test in-depth.
- A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity Testing is usually narrow and deep.
- This test is usually unscripted.
- This test is used to determine that a small section of the application is still working after a minor change.
- This is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
- This is to verify whether the requirements are met or not, by checking all the features breadth-first.
Hope you are clear about the differences between these two vast and important Software Testing types. Feel free to share your thoughts in the comments section below!!
298 thoughts on “Smoke Testing Vs Sanity Testing: Difference with Examples”
Very good explanation… Thanks
What is the difference between smoke and unit testing
Unit testing is generally performed by the developers. In this, they isolate each section of the code and check its correctness, i.e., whether it is working fine or not.
Smoke testing is performed by the testers. In this, we generally check the critical functions of the software, whether they are working fine or not. For example: check the login functionality, verify the app launches successfully, check whether the GUI is responsive or not.
I disagree. It makes no sense for testers to execute smoke tests (testing of only the critical scenarios) when they will be testing the new functionality at a more detailed level anyway. Smoke tests should be done by the developer, to check quickly whether a new piece of the solution is working on the DEV environment and is ready for deployment and further, deeper testing on the TEST environment. For example, when a developer adds some new function to an existing invoice form, he will at least try to create and save a new invoice using the new function.
Please explain things in brief detail, along with a real-time example.
it is easily understandable
Nice copy paste !!!!
Q: If a target of 450 test cases is assigned to a test engineer with a deadline of 8 hours, how can he execute and finish it?
The answer to your question: generally we follow a release cycle in our current organization, which deals with all SDLC phases. To complete a release cycle, we would have at most 20 days.
Thanks so much
Well, every project/customer uses his own terms based on his own habits.
If you refer to the ISTQB glossary, both terms have the same meaning (although, like most of you, I don’t necessarily agree on this point, but that’s only due to my own experience).
IMHO the main difference resides in the strategy you apply to each subset of test (smoke, sanity, regression…).
If failed sanity checks don’t lead to any exit criteria, is it even worth giving them one name or another?
On the other hand, if any failed test during a smoke check leads to build/release rejection, then it is worth defining the term and specifically following and reporting the associated results.
Which need to start first? Smoke or Sanity?
Smoke testing needs to start first. It is the first testing on the initial build; you should accept a build only after you have performed a smoke test. Smoke testing is normally used for Integration, Acceptance and System Testing.
Sanity testing is a subset of regression testing. A sanity test should be done only when you are running short of time.
Hi, Smoke has to be done first and then Sanity.
Hi, I watched one of the videos on smoke and sanity check by softwaretestinghelp.com. This article and that video have contradictions.
why automation tools are not used for sanity check ?
Because you cannot automate 100%.
What is Silk Test? Could you explain with an example?
Help me, please.
Do smoke and sanity testing lie under black box testing?
Other than the subject expertise you provide in your posts, what I especially appreciate is the level of editing leading to idiomatic and error-free content that is a delight to read.
We in India often neglect this, as can be seen all over the web wherever users create content.
Keep up the good work.
Hello author, I am an esteemed reader of your tutorials. Please provide a sample application and then provide test cases for smoke & sanity checks, regression testing and retesting, so that we get real-time experience.
Can you tell me how to document sanity for future reference?
Good to read so many comments. Sorry, but I am still not able to get an idea about where and when I should use sanity and smoke testing.
It would be good if anyone could provide a simple real-life example, please.
thank you sir!!!
Those who all are having difficulty in the understanding difference between both the testing can read this. Let’s take an example of a mobile phone.
Smoke : Here, we don’t go much deep and cover major functionalities like on/off, msg, calling, internet, homepage etc..
Sanity: Here, we choose only a few functionalities like setting, contacts etc and do a deep testing.
If you visit a doctor for a general health checkup (height, weight, blood pressure, blood test, etc.), then it is like smoke testing.
But if you visit a doctor for a particular problem like a fever, then it is like sanity testing.
How do I write test scenarios and test cases for regression testing? Please help me.
Can anyone clarify my doubt, i.e., is the smoke test conducted by the development team or the testing team?
Ideally it is conducted by the QA lead; if not, then it is conducted by a tester.
Very informational doc. Thank you.
Hello, the writer has made an extremely confusing contradiction.
Statement 1: In the Regression vs Sanity table: “This [sanity testing] is a wide and shallow testing.”
Statement 2: “A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.”
Statement 2 is correct, sanity testing is Narrow and Deep, while statement 1 must be accidentally referring to Smoke Tests which are Shallow and Wide, not Sanity Tests.
Can you please explain about banking domain modules and bug examples.
Wrong explanation for Sanity Testing. It is deep and narrow, instead of shallow and wide…
You said sanity is “Done on stable builds post regression.” I am sorry, but this actually contradicts the understanding created earlier in the article. “Post regression” is actually confusing. Regression is done after sanity is done.
“These two testings are way ‘different’ and are performed during different stages of a testing cycle.” How are they different? Smoke belongs to the implementation cycle, sanity to the conclusion? Correct me if I’m wrong.
if any changes is done is any module then what should be the sequence of testing(smoke,sanity,regression, retest)?
Great insights there and thanks for sharing them!
It’s clear to me that smoke testing is entirely subjective and there’s a broad-range of how to interpret it.
If we go back to what smoke testing means for electronic products, it means turn the power on for a few seconds. Watch to see if the magic smoke comes out of the device. If it does, the device is effectively trash.
I have so many concerns around interpretations of the kinds of tests we have. What is a smoke test, what is a functional test, what is a system test, etc. The lines are blurry. What makes it even blurrier is when a test can be any combination of those kinds of tests.
To keep things simple, for me a smoke test is as simple as “if I can start the service, and it can reach the database and endpoints it needs to, then we’re good”.
A sanity test is more along the lines of “If I go through a specific user-journey, do the important data items have the right values?”
A functional test is “when I do x, do I get y back”?
There’s also Contract Testing, which checks that the interface between two systems is consistent from a consumer’s perspective.
While I think it’s great that we can classify all these tests, I think it’s more important to ensure that our automated tests provide value. When they’re flaky (sometimes pass, sometimes fail), then there’s something the code is doing which we don’t understand. Pay attention to these ones as they are identifying very subtle idiosyncrasies in the implementation, or the test itself!
Smoke tests should be light on execution time. Sanity tests should be relatively light on resources.
There’s no right or wrong way, but I think being smart what needs testing where takes great care, skill and planning. Otherwise it will become a mess.
Great insights and thank you very much for sharing it!
It was Superb. Nice one!
When you compare Regression and Sanity testing you say that Sanity testing is “shallow and wide”, but when you compare Smoke and Sanity testing you say Sanity testing is “narrow and deep”. Are both correct?
Thank u… Good example
One point about the Sanity Testing says that testing is narrow and deep. Another point says that Sanity tests check whether the requirements are met or not, by checking all the features breadth-first. These two points seem to be contradictory, because checking all the features breadth-first would seem to be a wider view, not a narrow one.
Did I misread these points?
well explained…..Thanks a lot
I am still confused about smoke, sanity and regression testing. Smoke means checking the important and basic functionality of the application, and after that we do sanity testing. I got confused here: are we checking the same functionality or different ones? On the other hand, they said sanity testing checks, when new functionality is added, whether the new build is stable or not. So what should I do on the first build?