An overview of Functional Testing:
Testing and Quality Assurance are a huge part of the SDLC process. As testers, we need to be aware of all types of testing, even if we’re not directly involved with them on a daily basis.
Since testing is such a vast ocean with so broad a scope, dedicated testers perform different kinds of testing. Many of you are probably already familiar with most of these concepts, but it wouldn’t hurt to organize them all here.
So let’s get back to some basics.
On the broadest level, there are two kinds or classes of testing.
#1) Black box testing:
This kind of testing is exactly what it sounds like! Picture an opaque black box.
Let’s ask ourselves some basic questions so that we can derive a technical definition.
- Can you see the box’s external characteristics?
- Can you see what lies within the box?
If you try to answer these fundamental questions, you’ll get a clear idea of what this entails. The answers to the above questions are yes and no, respectively. Therefore, black box testing can be defined as the type of testing where you view the system under test as a black box and observe how it reacts to various situations.
You don’t need to get into the system’s internals, such as its implementation, code, or internal characteristics. The tester derives the different inputs to the system from the requirements, records the output for each of them, and makes sure the outputs are consistent with expectations.
Examples of black box testing include Functional testing, System testing, Acceptance testing, Regression testing, and Beta testing; sometimes Integration testing falls under this category as well.
#2) White box testing:
White box testing, on the other hand, is almost the opposite of black box testing. Let’s assume the same black box is now a transparent box.
Again let’s ask some questions to derive its meaning:
- Can you see the box’s internal characteristics?
- Should you be testing the code?
The answer to both of these questions is an affirmative “yes”. Common synonyms for white box testing are glass box testing, transparent box testing, and structural testing.
Therefore, white box testing is a mechanism in which the tester has access to the internal code of the system and executes tests to check the robustness of that code; the tester doesn’t have to get into the actual functionality of the application.
Examples of white box testing are Unit testing and API testing; Integration and Regression testing also extend into this category depending on scope, and sometimes even functional tests require white box techniques.
Now that we have established a basis and background, the scope of this article is to discuss functional testing in detail.
Introduction to Functional testing
Definition: As the name suggests, a functional test is a kind of black box testing performed to confirm that the functionality of an application or system behaves as expected.
Therefore, there must be something that defines what acceptable behavior is and what is not. This is specified in a functional or requirement specification: a document that describes what a user is permitted to do, so that the conformance of the application or system to it can be determined. Additionally, this sometimes entails validating actual business-side scenarios.
Therefore, functionality testing can be carried out via two popular techniques:
- Testing based on Requirements: Contains all the functional specifications which form a basis for all the tests to be conducted.
- Testing based on Business scenarios: Contains the information about how the system will be perceived from a business process perspective.
Functional testing Types: Functional testing has many categories, which can be used depending on the scenario. The most prominent types are discussed briefly below:
- Unit Testing: Unit testing is usually performed by developers, who write different units of code, related or unrelated, to achieve a particular functionality.
This usually entails writing unit tests that call the methods in each unit and validate that when the needed parameters are passed, the return value is as expected. Code coverage is an important part of unit testing: test cases need to exist to cover the three areas below:
i) Line coverage
ii) Code path coverage
iii) Method coverage
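To make this concrete, here is a minimal sketch of a unit test using Python’s built-in `unittest` framework. The `net_salary` function is a hypothetical unit invented for illustration, not code from any real HRMS product; the three test methods together exercise both the normal return path and the error path.

```python
import unittest

# Hypothetical unit under test: computes take-home pay from gross pay.
def net_salary(gross, tax_rate):
    if gross < 0 or not 0 <= tax_rate <= 1:
        raise ValueError("invalid input")  # error path
    return gross * (1 - tax_rate)          # normal path

class NetSalaryTest(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(net_salary(1000, 0.2), 800)

    def test_zero_tax(self):
        self.assertEqual(net_salary(1000, 0), 1000)

    def test_invalid_input_raises(self):
        with self.assertRaises(ValueError):
            net_salary(-1, 0.2)
```

Run with `python -m unittest <file>`. Covering the normal branch and the `ValueError` branch is what gives the line and code-path coverage mentioned above.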
- Sanity Testing: Testing done to ensure that all the major and vital functionalities of the application/system are working correctly. It is generally done after a smoke test.
- Smoke Testing: Testing done on each build released to test, to ensure build stability. It is also called build verification testing.
- Regression Testing: Testing performed to ensure that adding new code, enhancements, or bug fixes does not break existing functionality or cause instability, and that the system still works according to the specifications. Regression tests need not be as extensive as the full functional suite, but should provide just enough coverage to certify that the functionality is stable.
- Integration Testing: When the system relies on multiple functional modules that might individually work perfectly but have to work coherently when put together to achieve an end-to-end scenario, validating such scenarios is called integration testing.
- Beta/Usability Testing: The product is exposed to actual customers in a production-like environment, and they test it. The users’ comfort level is gauged from this and feedback is taken. This is similar to User Acceptance testing.
Difference Between Unit, Functional and Integration Testing
| Unit Test | Integration Test | Functional Test |
|---|---|---|
| Tests a program by separating it into modules or units of code and testing those individually. | Tests a user scenario that requires two or more individual modules to work together correctly. | Determines whether the application behaves according to the specifications. |
| Not very complex in nature. | More complex than unit tests. | More complex than unit tests, and sometimes even more complex than integration tests. |
| A white box testing technique. | Both a white box and a black box technique. | A black box testing technique. |
| Helps screen for and uncover bugs that occur frequently within individual units. | Helps uncover errors that occur when individual modules are assembled to work together. | Helps uncover bugs that prevent the application from serving its desired purpose. |
| Unit tests should be the most numerous. | Integration tests should be added where a unit test is not feasible; they are more complex and take longer to execute. | Functional tests should be fewer in number than unit tests, as they can take more time to execute. |
Functional Testing Process
Approach, Techniques, and Examples
Functional or behavioral testing generates an output based on the given inputs and determines whether the system functions correctly as per the specifications.
Different kind of scenarios can be thought of and authored in the form of “test cases”. As QA folks, we all know how the skeleton of a test case looks.
It mostly has four parts:
- Test summary
- Pre-requisites
- Test steps and
- Expected results.
Obviously, attempting to author each and every possible test is not only impractical but also time-consuming and expensive.
Typically, we would want to uncover maximum bugs without any escapes with existing tests. Therefore, QA needs to use optimization techniques and strategize how they would approach the testing. Let’s explain this with an example.
Functional Testing Example Use Case: Take an online HRMS portal where the employee logs in with a user account and password. The login page has two text fields, for user name and password, and two buttons: Login and Cancel. A successful login takes the user to the HRMS home page, and Cancel aborts the login. The specifications are as below:
1) The user id field takes a minimum of 6 and a maximum of 10 characters: numbers (0-9), letters (a-z, A-Z), and special characters (only underscore, period, and hyphen allowed); it cannot be left blank. The user id must begin with a letter or a number, not a special character.
2) The password field takes a minimum of 6 and a maximum of 8 characters: numbers (0-9), letters (a-z, A-Z), and special characters (all allowed); it cannot be blank.
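The two rules above can be captured as a small validation sketch. This is only an assumption of how the checks might look in code, not the actual HRMS implementation; the function names and the regex are illustrative.

```python
import re

# Illustrative user id rule: 6-10 characters, letters, digits,
# underscore, period or hyphen, starting with a letter or digit.
USER_ID_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9_.\-]{5,9}")

def is_valid_user_id(user_id):
    return USER_ID_RE.fullmatch(user_id) is not None

# Illustrative password rule: 6-8 characters, any characters allowed.
def is_valid_password(password):
    return 6 <= len(password) <= 8

print(is_valid_user_id("john_doe"))  # True: 8 chars, starts with a letter
print(is_valid_user_id("_johndoe"))  # False: starts with a special character
print(is_valid_password("abc"))      # False: fewer than 6 characters
```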
The basic approach to testing this scenario can be classified into two broad categories:
- Positive testing and
- Negative testing
Of course, each of these categories has its own subsection of tests that will be carried out.
Positive tests are happy-path tests, done to ensure that the product meets at least the basic requirements vital to customer usage.
Negative scenarios ensure that the product behaves properly even when subjected to unexpected data.
For more information on these forms of testing, you can check this article -> What is Negative Testing and How to Write Negative Test Cases
Now let’s get into the details of each of these testing techniques.
#1) End user based/System tests: The system under test may have many components that when coupled together achieve the user scenario. In the example, a customer scenario would include HRMS application loading, entering the correct credentials, going to the home page, performing some actions and logging out of the system. This particular flow has to work without any errors for a basic business scenario.
Some samples are given below:
| Sl No | Summary | Pre-requisites | Test steps | Expected results |
|---|---|---|---|---|
| 1 | A fully privileged user can make account changes | 1) User account must exist <br> 2) User must have the required privileges | 1) User enters the user id and password <br> 2) User sees edit permissions to modify the account itself <br> 3) User modifies account information and saves <br> 4) User logs out | 1) User is logged into the home page <br> 2) Edit screen is presented to the user <br> 3) Account information is saved <br> 4) User is taken back to the login page |
| 2 | A valid user without full privileges | 1) User account must exist <br> 2) User has only the minimum privileges | 1) User enters the user id and password <br> 2) User sees edit permissions for only certain fields <br> 3) User modifies only those fields and saves <br> 4) User logs out | 1) User is logged into the home page <br> 2) Edit screen is presented with only certain fields editable; the account fields are grayed out <br> 3) Modified fields are saved <br> 4) User is taken back to the login page |
This is a basic example of how test cases are authored. The same format applies to all the tests below as well. For the sake of strong conceptual grounding, I have included only a few simple tests.
#2) Equivalence tests: In equivalence partitioning, the test data is segregated into partitions called equivalence data classes. Data in each partition must behave the same way, so only one condition per partition needs to be tested; similarly, if one condition in a partition doesn’t work, none of the others will.
For example, in the above scenario the user id field can have a maximum of 10 characters, so entering any data longer than 10 characters should behave the same way.
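As a sketch, the user id length rule could be partitioned like this; `is_valid_length` is a hypothetical stand-in for the real validation, and one representative value per class stands for the whole partition.

```python
# Hypothetical stand-in for the real user id length validation.
def is_valid_length(user_id):
    return 6 <= len(user_id) <= 10

# One representative value per equivalence class is enough to test:
partitions = [
    ("too short (< 6 chars)", "abc", False),
    ("valid (6-10 chars)", "john_doe", True),
    ("too long (> 10 chars)", "a" * 15, False),
]

for name, value, expected in partitions:
    assert is_valid_length(value) == expected, name
print("each partition behaves as its class predicts")
```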
#3) Boundary Value tests: Boundary tests validate how the application behaves at its data limits. Inputs supplied beyond the boundary values count as negative testing. The minimum of 6 characters for the user id sets a boundary limit; tests that use a user id of fewer than 6 characters are boundary analysis tests.
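Continuing the sketch, boundary value tests probe exactly at and just beyond each limit (5, 6, 10, and 11 characters for the user id); `is_valid_length` is again a hypothetical helper, not real HRMS code.

```python
# Hypothetical stand-in for the real user id length validation.
def is_valid_length(user_id):
    return 6 <= len(user_id) <= 10

# Probe each boundary from both sides: just below and at the minimum,
# at the maximum and just above it.
boundary_cases = [(5, False), (6, True), (10, True), (11, False)]

for length, expected in boundary_cases:
    assert is_valid_length("a" * length) == expected, length
print("all boundary cases pass")
```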
#4) Decision-based tests: Decision-based tests are centered around the ideology of the possible outcomes of the system when a particular condition is met. In the scenario given above, the following decision-based tests can be immediately derived:
- If wrong credentials are entered, it should indicate that to the user and reload the login page.
- If the user enters the correct credentials, it should take the user to the next UI.
- If the user enters the correct credentials but wishes to cancel login, it should not take the user to the next UI and reload the login page.
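The three decision branches above can be sketched as a toy model of the login page’s logic; the function and page names are invented for illustration only.

```python
# Toy model of the login page's decision logic (names are illustrative).
def login_outcome(credentials_ok, cancel_pressed):
    if cancel_pressed:
        return "login_page"            # Cancel always reloads the login page
    if credentials_ok:
        return "home_page"             # correct credentials -> next UI
    return "login_page_with_error"     # wrong credentials -> error and reload

# One assertion per decision branch:
assert login_outcome(False, False) == "login_page_with_error"
assert login_outcome(True, False) == "home_page"
assert login_outcome(True, True) == "login_page"
```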
#5) Alternate flow tests: Alternate path tests are basically run to validate all the possible ways that exist, other than the main flow to accomplish a function.
#6) Ad-hoc tests: When most of the bugs have been uncovered through the above techniques, ad-hoc tests are a great way to find any discrepancies not observed earlier. These are performed with the mindset of breaking the system and seeing whether it responds gracefully.
For more information read this article -> Ad-hoc Testing: How to Find Defects Without a Formal Testing Process
For the example, a sample test case would be:
- A user is logged in, and the admin deletes that user’s account while he is performing some operations. It would be interesting to see whether the application handles this gracefully.
Functional v/s Non-Functional Testing
Now that we have discussed functional testing in detail, let’s also briefly discuss non-functional testing.
Non-functional tests focus on the quality of the application/system as a whole: they try to determine how well the system performs per the customer’s requirements, in contrast to what functions it performs.
Non-functional test types: Some more popular kinds of non-functional tests are:
- Performance (Stress/Load) Testing: Done to see whether the application can withstand heavy stress and load, such as many simultaneous users accessing the system at the same time.
- Security testing: Testing which is done to determine how robust the application is to malicious attacks or unwarranted users.
- Compatibility Testing: Testing is carried out to determine how well the application works in any particular environment.
- Install Testing: Testing carried out to determine that the application can be downloaded, installed from a package, and set up without any issues.
- Recovery Testing: Testing done to figure out whether the application can gracefully recover from disasters and catastrophic events such as hardware failures, system crashes, etc.
- Volume/Tenacity Testing: Testing done to determine the application’s ability to handle large amounts of data.
Functional Test Automation
Many times, while performing defect escape analysis, the prominent and perennial cause of escapes turns out to be a lack of test coverage in a particular function. Again, there are several causes for this: lack of environments, lack of testers, too many functions being delivered, too little time to cover all testing aspects, and sometimes simple oversight.
While dedicated test teams might do detailed testing each sprint or test cycle, there will always be defects, and some will get missed. This is one of the fundamental reasons to have test automation in place, bringing a marked improvement in the efficiency of the overall test process and in test case coverage.
Although automated functional testing can never replace manual tests, having an ideal mix of the two will prove to be vital to have the desired quality in Software projects.
#1) Select the correct Automation Tool: The number of tools available in the market makes choosing an automation tool a daunting task! However, you can make a list of requirements and use it to select the tool. Some primary aspects to consider are:
- Select a tool that will be easy to use by all QA members of the team, if they don’t already have the needed skills.
- The tool can be used across different environments. For example: can scripts be created on one OS platform and run on another? Do you require CLI automation, UI automation, mobile application automation, or all of these?
- The tool must have all the features you require. For example: If some testers are not well-versed with a scripting language, the tool should have record and playback feature and then support conversion of the recorded script to the desired scripting language. Likewise, if you need the tool to also support automated build tests, specific reporting, and logging, it must be able to do that.
- The tool must be able to support reusability of test cases in case of UI changes.
Automation Tools: There are quite a few tools for functional automation available. Selenium is probably a hot favorite, but there are some other open source tools as well like Sahi, Watir, Robotium, AutoIt etc.
#2) Pick the right test cases to automate: If you want to get the best out of the automation, it is vital to be smart about the kind of tests you pick to automate. If there are tests that require some setup and configurations on and off during test execution, those are best left non-automated.
Therefore, you can automate tests which:
- Need to be run repeatedly
- Run with different kinds of data
- Some P1, P2 test cases which take a lot of effort and time
- Tests which are error prone
- Set of tests that need to be run in different environments, browsers, etc
#3) Dedicated automation team: This is probably overlooked in most organizations, where automation is imposed upon all members of the QA team. Each team member has a different experience level, skill set, interest level, and bandwidth for supporting automation. Some individuals are better skilled at executing manual tests, while others know scripting and automation tools.
In situations like this, it’s good practice to analyze the team and dedicate some members to automation alone. Automation requires time, effort, and knowledge, and a dedicated team will achieve the required results instead of overloading everyone with both manual and automated testing.
#4) Data Driven tests: Automated test cases that require multiple sets of data should be written so that they are reusable. The data can be kept in sources such as text or properties files or XML files, or read from a database.
Whatever the data source, creating well-structured automation data makes the framework easier to maintain and lets the existing test scripts be used to their full potential.
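For illustration, here is a minimal data-driven sketch in Python: the CSV rows, the `attempt_login` stub, and the expected outcomes are all invented, and a real framework would read the data from an external file or database rather than an inline string.

```python
import csv
import io

# Invented test data; in practice this would live in a separate file.
TEST_DATA = """user_id,password,expected
john_doe,secret12,pass
x,secret12,fail
john_doe,,fail
"""

def attempt_login(user_id, password):
    # Stub standing in for the real login call; applies the length rules.
    return 6 <= len(user_id) <= 10 and 6 <= len(password) <= 8

# The same test logic runs once per data row:
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    result = "pass" if attempt_login(row["user_id"], row["password"]) else "fail"
    assert result == row["expected"], row
print("all data rows passed")
```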
#5) UI changes must not break tests: The test cases you create with the selected tool must be able to deal with potential UI changes. For instance, earlier Selenium scripts often identified page elements by their position on the page, so when the UI changed, those elements were no longer found at those locations, leading to mass test failures.
Therefore, it’s important to understand the shortcomings of the tool beforehand and author test cases such that there are only minimal changes required in case of UI changes.
#6) Frequent testing: Once you have a basic automation test bucket ready, plan to execute it more frequently. This has a twofold advantage: first, you can enhance the automation framework and make it more robust; second, you will obviously catch more bugs in the process.
This article comprehensively discusses everything you need to know about functional testing, right from the basics.
About Author: Sanjay Zalavadia – as the VP of Client Service for Zephyr, Sanjay brings over 15 years of leadership experience in IT and Technical Support Services. Throughout his career, he has successfully established and grown premier IT and Support Services teams across multiple geographies for both large and small companies.
I hope that some of the techniques that we’ve suggested come in handy for all readers. Let us know your thoughts in comments below.