Most Frequently Asked Informatica Interview Questions and Answers:
This article covers the top Informatica MDM, PowerCenter, Data Quality, Cloud, ETL, Admin, Testing, and Developer questions.
In today's market, INFORMATICA has earned the tag of one of the most in-demand products across the globe. Its products are relatively new, yet they became popular within a short period of time.
Over the years, INFORMATICA has been a leader in the field of Data Integration. Based on an ETL (Extract, Transform, Load) architecture, this data integration tool has several products that focus on providing services for government organizations, financial and insurance companies, healthcare, and several other businesses.
Well, this was just the background of INFORMATICA. Today, the Data Warehousing field is seeing tremendous growth, and thus many job opportunities are available in the industry.
Best Informatica Interview Questions & Answers
Given below is a list of the most commonly asked interview questions and answers. It includes around 64 questions, which in turn will enable you to brush up your knowledge of Informatica concepts in an easier way.
Q #1) What is INFORMATICA? Why do we need it?
Answer: INFORMATICA is a software development company that offers data integration solutions for ETL, data virtualization, master data management, data quality, data replication, ultra messaging, etc.
Some of the popular INFORMATICA products are:
- INFORMATICA PowerCenter
- INFORMATICA PowerConnect
- INFORMATICA PowerMart
- INFORMATICA PowerExchange
- INFORMATICA PowerAnalyzer
- INFORMATICA Data Quality
We need INFORMATICA whenever we work with data systems where certain operations have to be performed on the data along with a set of rules. It facilitates operations like cleansing and modifying data from structured and unstructured data systems.
Q #2) What is the format of INFORMATICA objects in a repository? What are the databases that it can connect to on Windows?
Answer: INFORMATICA objects can be written in XML format.
Following is the list of databases that it can connect to:
- SQL Server
- Oracle
- MS Access
- MS Excel
- DB2
- Sybase
- Teradata
Q #3) What is INFORMATICA PowerCenter?
Answer: It is an ETL/Data Integration tool that is used to connect to different sources, retrieve the data, and process it. PowerCenter processes high volumes of data and supports data retrieval from ERP sources such as SAP, PeopleSoft, etc.
You can connect PowerCenter to database management systems like SQL Server and Oracle to integrate data into a third system.
Q #4) Which are the different editions of INFORMATICA PowerCenter that are available?
Answer: Different editions of INFORMATICA PowerCenter are:
- Standard Edition
- Advanced Edition
- Premium Edition
The current version of PowerCenter available is v10, which comes with a considerable performance increase.
Q #5) How can you differentiate between PowerCenter and PowerMart?
Answer: Given below are the differences between PowerCenter and PowerMart.
| | INFORMATICA PowerCenter | INFORMATICA PowerMart |
|---|---|---|
| 1. | Processes a high volume of data | Processes a low volume of data |
| 2. | Supports global and local repositories | Supports only local repositories |
| 3. | Supports data retrieval from ERP sources like SAP, PeopleSoft, etc. | Does not support data retrieval from ERP sources |
| 4. | Can convert local repositories to global | Cannot convert local repositories to global |
Q #6) What are the different components of PowerCenter?
Answer: Given below are the 8 important components of PowerCenter:
- PowerCenter Service
- PowerCenter Clients
- PowerCenter Repository
- PowerCenter Domain
- Repository Service
- Integration Service
- PowerCenter Administration Console
- Web Service Hub
Q #7) What are the different clients of PowerCenter?
Answer: Here is the list of PowerCenter clients:
- PowerCenter Designer
- PowerCenter Workflow Monitor
- PowerCenter Workflow Manager
- PowerCenter Repository Manager
Q #8) What is INFORMATICA PowerCenter Repository?
Answer: PowerCenter Repository is a relational database or system database that contains metadata such as:
- Source definition
- Target definition
- Session and Session logs
- Workflow
- ODBC connection
- Mapping
There are two types of Repositories:
- Global Repositories
- Local Repositories
PowerCenter Repository is required to perform Extraction, Transformation, and Loading(ETL) based on metadata.
Q #9) Explain the Tracing Level.
Answer: Tracing level can be defined as the amount of information that the server writes in the log file. A tracing level can be configured at the transformation level, at the session level, or at both levels.
Given below are the 4 types of tracing level:
- None
- Terse
- Verbose Initialization
- Verbose Data
Q #10) Explain the PowerCenter Integration Service.
Answer: The Integration Service controls the workflow and the execution of PowerCenter processes.
There are three components of the INFORMATICA Integration Service:
Integration Service Process: Known as pmserver, the Integration Service can start more than one process to run and monitor workflows.
Load Balancing: Load Balancing refers to distributing the entire workload across several nodes in the grid. A load balancer conducts different tasks that include commands, sessions, etc.
Data Transformation Manager (DTM): The Data Transformation Manager performs the following types of data transformations:
- Active: Can change the number of rows in the output.
- Passive: Cannot change the number of rows in the output.
- Connected: Linked to other transformations in the data flow.
- Unconnected: Not linked to other transformations.
Q #11) What is PowerCenter on Grid?
Answer: INFORMATICA has a Grid computing feature that can be utilized for processing large volumes of data, in order to improve scalability and performance. The grid feature is used for load balancing and parallel processing.
PowerCenter domains contain a set of multiple nodes to configure the workload and then run it on the Grid.
A domain is a foundation for efficient service administration served by the PowerCenter.
Node is an independent physical machine that is logically represented for running the PowerCenter environment.
Q #12) What is Enterprise Data Warehousing?
Answer: When a large amount of data is assembled at a single access point, it is called Enterprise Data Warehousing. This data can be reused and analyzed at regular intervals or as per the requirement.
Considered the central database, or a single point of access, enterprise data warehousing provides a complete global view and thus helps in decision support.
Its features can be better understood from the following points:
- All important business information stored in this unified database can be accessed from anywhere across the organization.
- Although the time required is more, periodic analysis on this single source always produces better results.
- Security and integrity of data are never compromised while making it accessible across the organization.
Q #13) What is the benefit of Session Partitioning?
Answer: While the Integration Service is running, a session's data flow can be partitioned for better performance. These partitions are then used to perform the Extraction, Transformation, and Loading in parallel.
Q #14) How can we create an Index after completion of the Load Process?
Answer: Command tasks are used to create an index. Command task scripts can be invoked in a session of the workflow to create the index once the load completes, as illustrated below.
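For illustration, the kind of statement such a command task or post-session script might run against the target database (the index and table names below are hypothetical):
CREATE INDEX IDX_TGT_CUSTOMER_NO ON TGT_CUSTOMER (CUSTOMER_NO);
Dropping the index before the load and recreating it this way afterwards is a common pattern for speeding up bulk loads.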
Q #15) What are Sessions?
Answer: Session is a set of instructions that are used while moving data from the source to the destination. We can partition the session to implement several sequences of sessions to improve server performance.
After creating a session we can use the server manager or command-line program pmcmd to stop or start the session.
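For reference, a minimal sketch of starting a workflow and stopping one of its sessions with pmcmd (the service, domain, folder, workflow, and session names are placeholders):
pmcmd startworkflow -sv INT_SVC -d DOMAIN_DEV -u admin -p admin_pwd -f SALES_FOLDER wf_load_sales
pmcmd stoptask -sv INT_SVC -d DOMAIN_DEV -u admin -p admin_pwd -f SALES_FOLDER -w wf_load_sales s_m_load_sales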
Q #16) How can we use Batches?
Answer: Batches are collections of sessions that are used to migrate data from the source to the target on a server. A batch can hold a large number of sessions, but larger batches cause more network traffic, whereas batches with fewer sessions can be moved more rapidly.
Q #17) What is Mapping?
Answer: Mapping is a collection of sources and targets that are linked with each other through certain sets of transformations, such as Expression transformation, Sorter transformation, Aggregator transformation, Router transformation, etc.
Q #18) What is Transformation?
Answer: Transformation can be defined as a set of rules and instructions that are to be applied to define data flow and data load at the destination.
Q #19) What is Expression Transformation?
Answer: It is a mapping transformation that is used to transform data one record at a time. Expression transformation is passive and connected. Expressions are used for data manipulation and output generation using conditional statements.
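For instance, typical output-port expressions inside an Expression transformation might look like the following (the port names are illustrative):
-- Output port FULL_NAME: string manipulation
FIRST_NAME || ' ' || LAST_NAME
-- Output port SALARY_BAND: conditional output generation
IIF(SALARY > 50000, 'HIGH', 'LOW')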
Q #20) What is Update Strategy Transformation?
Answer: The Update Strategy in Informatica is used to control the data passing through it and flag each row for INSERT, UPDATE, DELETE, or REJECT. We can set conditional logic within the Update Strategy transformation to flag the rows, as sketched below.
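A minimal sketch of such conditional logic, assuming a flag port EXISTS_FLAG derived from a lookup (the port names are hypothetical); DD_INSERT, DD_UPDATE, and DD_REJECT are the built-in row-type constants:
-- Reject rows without a key, update known customers, insert new ones
IIF(ISNULL(CUSTOMER_ID), DD_REJECT,
    IIF(EXISTS_FLAG = 'Y', DD_UPDATE, DD_INSERT))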
Q #21) What is Sorter Transformation?
Answer: Sorter transformation is used to sort large volumes of data through multiple ports. It works much like the ORDER BY clause in SQL. Sorter transformation is an active and connected transformation.
An active transformation can change the number of rows that pass through the mapping, whereas a passive transformation passes rows through the mapping without changing their count.
Most INFORMATICA transformations are connected to the data path.
Q #22) What is Router Transformation?
Answer: Router transformation is used to filter the source data. You can use a router transformation to split a single data source into multiple groups.
It is much like the Filter transformation, the difference being that a filter transformation uses only one condition and drops the rows that do not fulfill it, whereas a router transformation uses multiple conditions and routes each row to every group whose condition it matches; the sketch below shows typical group conditions.
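As an illustration, a Router splitting customers by region might define one filter condition per output group (the REGION port and group names are hypothetical):
-- Group NORTH_GROUP
REGION = 'NORTH'
-- Group SOUTH_GROUP
REGION = 'SOUTH'
-- Rows matching neither condition are routed to the default group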
Q #23) What is Rank Transformation?
Answer: Rank transformation is active as well as connected. It is used to sort and rank the top or bottom set of records. It is also used to select data with the largest or smallest numeric value based on a specific port.
Q #24) What is Rank Index in Rank transformation?
Answer: The Rank Index is assigned by the designer to each record. The rank index port is used to store the ranking position of each row. The Rank transformation identifies each row from top to bottom and then assigns a Rank Index.
Q #25) What is Status Code in INFORMATICA?
Answer: A Status Code provides an error-handling mechanism during each session. The status code is issued by a stored procedure to indicate whether it committed successfully or not, and gives the INFORMATICA server the information it needs to decide whether the session has to be stopped or continued.
Q #26) What are Junk Dimensions?
Answer: A junk dimension is a structure that consists of a group of junk attributes, such as random codes or flags. It forms a framework to store related codes for a specific dimension in a single place, instead of creating multiple tables for the same; a sketch is given below.
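A hedged sketch of what such a table might look like (all names and columns are hypothetical):
CREATE TABLE DIM_JUNK (
    JUNK_SK        INTEGER NOT NULL,   -- surrogate key of the junk dimension
    IS_GIFT_WRAP   CHAR(1),            -- 'Y'/'N' flag
    PAYMENT_TYPE   VARCHAR(10),        -- e.g. 'CASH', 'CARD'
    ORDER_PRIORITY CHAR(1),            -- e.g. 'H', 'M', 'L'
    PRIMARY KEY (JUNK_SK)
);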
Q #27) What is Mapplet in Informatica?
Answer: A Mapplet is a reusable object that contains a certain set of transformation rules and transformation logic that can be used in multiple mappings. A Mapplet is created in the Mapplet Designer within the Designer tool.
Q #28) What is Decode in Informatica?
Answer: To understand Decode, consider it as similar to the CASE statement in SQL. It is a function used within an Expression transformation in order to search for specific values in a record.
There can be unlimited searches within the Decode function, where a port is specified for returning result values. This function is usually used where it is required to replace nested IF statements, or to replace lookup values by searching in small tables with constant values.
Below is a simple example of a CASE in SQL:
Syntax:
SELECT EMPLOYEE_ID,
       CASE
            WHEN EMPLOYEE_AGE <= 20 THEN 'Young'
            WHEN EMPLOYEE_AGE > 20 AND EMPLOYEE_AGE <= 40 THEN 'Knowledgeable'
            WHEN EMPLOYEE_AGE > 40 AND EMPLOYEE_AGE <= 60 THEN 'Wise'
            ELSE 'Very Wise'
       END AS EMPLOYEE_WISDOM
FROM EMPLOYEE
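The same logic can be written with Decode inside an Expression transformation. Below is a minimal sketch, assuming an input port EMPLOYEE_AGE (the port name is illustrative); DECODE evaluates its search conditions in order and returns the first matching result:
DECODE(TRUE,
       EMPLOYEE_AGE <= 20, 'Young',
       EMPLOYEE_AGE <= 40, 'Knowledgeable',
       EMPLOYEE_AGE <= 60, 'Wise',
       'Very Wise')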
Q #29) What is Joiner Transformation in INFORMATICA?
Answer: With the help of Joiner transformation, you can make use of Joins in INFORMATICA.
It is based on two sources namely:
- Master source
- Detail source
The following joins can be created using the Joiner transformation, as in SQL:
- Normal Join
- Full Outer Join
- Master Outer Join(Right Outer Join)
- Detail Outer Join(Left Outer Join)
Q #30) What is Aggregator Transformation in INFORMATICA?
Answer: Aggregator transformation is active and connected. It works like the GROUP BY clause in SQL and is used to perform aggregate calculations on groups in INFORMATICA PowerCenter. It performs aggregate calculations on data using aggregate functions viz. SUM, AVG, MAX, and MIN.
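As an illustration, an Aggregator with DEPT_ID as its group-by port and output ports using SUM(SALARY) and AVG(SALARY) is equivalent to the following SQL (table and column names are hypothetical):
SELECT DEPT_ID,
       SUM(SALARY) AS TOTAL_SALARY,
       AVG(SALARY) AS AVG_SALARY
FROM EMPLOYEE
GROUP BY DEPT_ID;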
Q #31) What is Sequence Generator Transformation in INFORMATICA?
Answer: Sequence Generator transformation is passive and connected. Its basic use is to generate integer values through its NEXTVAL and CURRVAL ports.
Q #32) What is Union Transformation in INFORMATICA?
Answer: Union transformation is used to combine data from different sources that share the same ports and data types into a single pipeline. It works much like the UNION ALL clause in SQL, i.e. it does not remove duplicate rows.
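For comparison, combining two homogeneous sources in the Union transformation corresponds to this SQL (table names are hypothetical):
SELECT CUSTOMER_ID, CUSTOMER_NAME FROM CUSTOMERS_US
UNION ALL
SELECT CUSTOMER_ID, CUSTOMER_NAME FROM CUSTOMERS_EU;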
Q #33) What is Source Qualifier Transformation in INFORMATICA?
Answer: Source Qualifier transformation is used in mappings; whenever we add a relational or flat file source, it is created automatically. It is an active and connected transformation that represents the rows read by the Integration Service.
Q #34) What is INFORMATICA Worklet?
Answer: A Worklet works like a Mapplet with the same feature of reusability; the difference is that a Worklet groups tasks at the workflow level, whereas a Mapplet groups transformations at the mapping level. A Worklet saves the logic and tasks in a single place for reuse.
A Worklet is defined as a group of tasks, either reusable or non-reusable, at the workflow level. It can be added to as many workflows as required. With its reusability feature, much time is saved, as reusable logic can be developed once and placed wherever it is needed.
By comparison, in the INFORMATICA PowerCenter environment, Mapplets are considered one of the most advantageous features. They are created in the Mapplet Designer and are a part of the Designer tool.
A Mapplet basically contains a set of transformations that are designed to be reused in multiple mappings.
Mapplets are said to be reusable objects which simplify mapping by:
- Including multiple transformations and source definitions.
- Not required to connect to all input and output ports.
- Accept data from sources and pass to multiple transformations
Overall, whenever mapping logic needs to be reused, that logic should be placed in a Mapplet.
Q #35) What is SUBSTR in INFORMATICA?
Answer: SUBSTR is a function that extracts a set of characters from a larger string.
Syntax: SUBSTR( string, start [,length] )
Where,
string is the character string from which we want to extract characters.
start is an integer that sets the position where the counting should start.
length is an optional parameter that sets the number of characters to return from the starting position.
For Example, SUBSTR(Contact, 5, 8) starts at the 5th character of Contact and returns the next 8 characters.
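A couple of illustrative calls with literal values (positions are counted from 1):
SUBSTR('INFORMATICA', 3, 4) -- returns 'FORM'
SUBSTR('INFORMATICA', 8) -- length omitted: returns the rest of the string, 'TICA'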
Q #36) What is Code Page Compatibility?
Answer: When data is transferred from the source code page to the target code page, all the characters of the source page must be present in the target page to prevent data loss. This feature is called Code Page Compatibility.
Code page compatibility comes into the picture when the INFORMATICA server is running in Unicode data movement mode. In this case, the two code pages are said to be identical when their encoded characters are virtually identical, which results in no loss of data.
For complete accuracy, the source code page should be a subset of the target code page.
Q #37) How you can differentiate between Connected LookUp and Unconnected LookUp?
Answer: A Connected Lookup is part of the data flow and is connected to other transformations; it takes its input directly from another transformation and performs a lookup within the pipeline. It can use both static and dynamic caches.
An Unconnected Lookup does not take its input from another transformation, but it can be called as a function from any transformation using a :LKP (lookup) expression. It uses only a static cache.
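As an illustration, an unconnected lookup can be called from an Expression transformation like this (the lookup name LKP_GET_CITY and the ports are hypothetical):
-- Fall back to the lookup only when CITY is missing
IIF(ISNULL(CITY), :LKP.LKP_GET_CITY(CUSTOMER_ID), CITY)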
Q #38) What is Incremental Aggregation?
Answer: Incremental aggregation is enabled at the session level. When it is enabled, the Integration Service applies only the incremental changes captured from the source data to the aggregate calculations, instead of reprocessing the entire source. Consider the source data below:
| CUSTOMER_NO | BILL_NO | AMOUNT | DATE |
|---|---|---|---|
| 1001 | 4001 | 1000 | 11/01/2016 |
| 2001 | 4002 | 2550 | 11/01/2016 |
| 3001 | 5012 | 4520 | 11/01/2016 |
| 1001 | 6024 | 2000 | 23/01/2016 |
| 1001 | 6538 | 5240 | 23/01/2016 |
| 2001 | 7485 | 5847 | 23/01/2016 |
| 5858 | 4566 | 3550 | 23/01/2016 |
| 1515 | 4572 | 6000 | 23/01/2016 |
On the first load, the output is:
| CUSTOMER_NO | BILL_NO | LOAD_KEY | AMOUNT |
|---|---|---|---|
| 1001 | 4001 | 20011 | 1000 |
| 2001 | 4002 | 20011 | 2550 |
| 3001 | 5012 | 20011 | 4520 |
Now, on the second load, it aggregates the new rows with the data cached from the previous session:
| CUSTOMER_NO | BILL_NO | LOAD_KEY | AMOUNT | Remarks/Operation |
|---|---|---|---|---|
| 1001 | 6538 | 20011 | 8240 | The cache file is updated after aggregation |
| 2001 | 7485 | 20011 | 8397 | The cache file is updated after aggregation |
| 3001 | 5012 | 20011 | 4520 | No change |
| 5858 | 4566 | 20011 | 3550 | No change |
| 1515 | 4572 | 20011 | 6000 | No change |
Q #39) What is a Surrogate Key?
Answer: A surrogate key is a sequentially generated integer value that is used as a substitute for the primary key whenever a unique identifier is required for each row in a table.
A natural primary key can change frequently as per business needs, which makes the update process difficult for future requirements; a surrogate key solves this problem.
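For illustration, a dimension table carrying a surrogate key next to the natural key might be defined as follows (the names are hypothetical; in PowerCenter the surrogate key column is typically fed from a Sequence Generator's NEXTVAL port):
CREATE TABLE DIM_CUSTOMER (
    CUSTOMER_SK   INTEGER      NOT NULL,  -- surrogate key
    CUSTOMER_NO   VARCHAR(20)  NOT NULL,  -- natural/business key
    CUSTOMER_NAME VARCHAR(100),
    PRIMARY KEY (CUSTOMER_SK)
);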
Q #40) What is the Session task and Command task?
Answer: A Session task is a set of instructions that is applied while transferring data from source to target using the session command. A session command can be either a pre-session command or a post-session command.
A Command task is a specific task that allows one or more shell commands (on UNIX) or batch commands (on Windows) to run during the workflow.
Q #41) What is the Standalone command task?
Answer: The standalone Command task can be used to run shell commands anywhere and anytime in the workflow.
Q #42) What is Workflow? What are the components of the Workflow Manager?
Answer: A workflow is the manner in which a task should be implemented. It is a collection of instructions that informs the server about how to implement the task.
Given below are the three major components of the Workflow Manager:
- Task Developer
- Worklet Designer
- Workflow Designer
Q #43) What is the Event and what are the tasks related to it?
Answer: The event can be any action or function that occurs in the workflow.
There are two tasks related to it, which includes:
- Event Wait Task: This task waits until an event occurs, once the event is triggered this task gets accomplished and assigns the next task.
- Events Raise Task: Event Raise task triggers the specific event in the workflow.
Q #44) What is a pre-defined event and User-defined event?
Answer: Predefined events are system-defined events that wait until the arrival of a specific file at a specific location. They are also called File-Watcher events.
User-defined events are created by the user and can be raised anytime in the workflow once created.
Q #45) What is the Target Designer and Target Load Order?
Answer: Target Designer is used for defining the Target of data.
When there are multiple sources or a single source with multiple partitions linked to different targets through the INFORMATICA server then the server uses Target Load Order to define the order in which the data is to be loaded at a target.
Q #46) What is the Staging Area?
Answer: The staging area is a database where temporary work tables and fact tables are stored to provide inputs for data processing before the data reaches its final target.
Q #47) What is the difference between STOP and ABORT?
Answer: Differences are as follows:
- The STOP command runs on a Session task. Once it is issued, the Integration Service stops reading data from the source but continues processing the already-read data and writing it to the target.
- The ABORT command is used to completely stop the Integration Service from reading, processing, and writing data to the target. It has a timeout period of 60 seconds for the Integration Service to finish processing and writing data; if processing does not finish within that time, it simply kills the session.
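For reference, a workflow can also be stopped or aborted from the command line with pmcmd (the service, domain, folder, and workflow names are placeholders):
pmcmd stopworkflow -sv INT_SVC -d DOMAIN_DEV -u admin -p admin_pwd -f SALES_FOLDER wf_load_sales
pmcmd abortworkflow -sv INT_SVC -d DOMAIN_DEV -u admin -p admin_pwd -f SALES_FOLDER wf_load_sales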
Q #48) What are the different LookUp caches?
Answer: A lookup can be either cached or uncached. Cached lookups are further divided into five types.
They are:
- Static Cache
- Dynamic Cache
- Recache
- Persistent Cache
- Shared Cache
Static cache remains as it is without change while a session is running.
Dynamic Cache keeps updating frequently while a session is running.
Q #49) How to update Source Definition?
Answer: There are two ways to update the source definition.
They are:
- You can edit the existing source definition.
- You can import a new source from the database.
Q #50) How to implement Security measures using a Repository manager?
Answer: There are 3 ways to implement security measures.
They are:
- Folder permission within owners, groups, and users.
- Locking (Read, Write, Retrieve, Save and Execute).
- Repository Privileges viz.
- Browse Repository.
- Use the Workflow Manager(to create sessions and batches and set its properties).
- Workflow Operator(to execute Session and batches).
- Use Designer, Admin Repository(allows any user to create and manage repository).
- Admin User(allows the user to create a repository server and set its properties).
- SuperUser(all the privileges are granted to the user).
Q #51) Enlist the advantages of INFORMATICA.
Answer: Considered among the most favored data integration tools, INFORMATICA has multiple advantages worth listing.
They are:
- It can effectively and very efficiently communicate and transform the data between different data sources like Mainframe, RDBMS, etc.
- It is usually faster, more robust, and easier to learn than other available platforms.
- With the help of INFORMATICA Workflow Monitor, jobs can be easily monitored, failed jobs can be recovered as well as slow running jobs can be pointed out.
- It has features like easy processing of database information, data validation, migration of projects from one database to another, project development, iteration, etc.
Q #52) Enlist a few areas or real-time situations where INFORMATICA is required.
Answer: Data Warehousing, Data Integration, Data migration & Application Migration from one platform to other platforms are a few examples of real-time usage areas.
Q #53) Explain the ETL program with few examples.
Answer: ETL stands for Extract, Transform, and Load. An ETL tool basically serves the purpose of extracting data, altering it as defined, and sending it to a specified target.
To be very precise:
- The extraction task is to collect the data from sources like the database, files, etc.
- Transformation is considered as altering the data that has been received from the source.
- Loading defines the process of feeding the altered data to the defined target.
In technical terms, the ETL tool collects data from heterogeneous sources and alters it to make it homogeneous, so that it can be used further for analysis of the defined task.
Some basic program examples include:
- Mappings define the ETL process of reading data from the original sources; mappings are created in the Designer.
- Workflows consist of multiple tasks that are decided and designed in the Workflow Manager window.
- A task consists of a set of steps that determines the sequence of actions to be performed at run-time.
Q #54) Enlist the differences between Database and Data Warehouse.
Answer: Refer to the below table to understand the differences between the two:
| Database | Data Warehouse |
|---|---|
| It stores current and up-to-date data, which is used in daily operations. | It stores and analyzes historical data, which is used for information support on a long-term basis. |
| It is oriented toward Online Transaction Processing (OLTP), which involves simple and short transactions. | It is oriented toward Online Analytical Processing (OLAP), which involves complex queries. |
| It consists of detailed and primitive data, and its view is flat relational. | It consists of summarized and consolidated data, and its view is multidimensional. |
| Low performance is observed for analytical queries. | Analytical queries show high performance here. |
| Efficiency is determined by measuring transaction throughput. | Efficiency is determined by measuring query throughput and response time. |
Q #55) Explain the features of the Connected and Unconnected lookup.
Answer: The features of Connected Lookup can be explained as follows:
- There is a direct source of input from the pipeline for connected lookup.
- It participates actively in the data flow, and either a dynamic or a static cache is used, as the case requires.
- It caches all lookup columns and returns the default values as the output when the lookup condition does not match.
- More than one column value can be returned to the output ports.
- Multiple output values are passed as well as output ports are linked to another transformation.
- Connected lookup supports user-defined default values.
The features of unconnected lookup can be explained as follows:
- An unconnected lookup uses a static cache, and its input is the value passed from the :LKP expression that calls it.
- It caches only the lookup output ports and returns the value as NULL when the lookup condition does not match.
- Only one column is returned from each port.
- Only one output value is passed to another transformation.
- User-defined default values are not supported by unconnected lookup.
Q #56) During the running session, output files are created by the Informatica server. Enlist a few of them.
Answer: Mentioned below are the few output files:
- Cache files: These files are created at the time of memory cache creation. For transformations like Lookup and Aggregator, index and data cache files are created by the Informatica server.
- Session detail file: As the name defines, this file contains load statistics like table name, rows rejected or written for each target in mapping and can be viewed in the monitor window.
- Performance detail file: This file is a part of the session property sheet and contains session performance information in order to determine improvement areas.
- INFORMATICA server log: The server creates a log for all status and error messages and can be seen in the home directory.
- Session log file: For each session, the server creates a session log file depending on the set tracing level. The information that can be seen in log files about sessions can be:
- Session initialization process,
- SQL commands creation for reader and writer threads,
- List of errors encountered and
- Load summary
- Post-session email: This helps in communicating the information about the session (session completed/session failed) to the desired recipients automatically.
- Reject file: This file contains information about the data that has not been used/written to targets.
- Control file: In case, when the session uses the external loader, the control file consists of loading instructions and data format about the target file.
- Indicator file: This file basically contains a number that highlights the rows marked for INSERT/UPDATE/DELETE or REJECT.
- Output file: The output file is created based on the file properties.
Q #57) How to differentiate between the Active and Passive transformations?
Answer: To understand the difference between Active and Passive transformations, let us see its features which will explain the differences in a better way.
The action performed by Active transformations includes:
- Changing the number of rows that pass through the transformation, as per the requirement. For Example, a Filter transformation that drops the rows that do not meet the condition.
- Changing the transaction boundary by setting the rollback and commit points. For Example, Transaction control transformation.
- Changing the row type for INSERT/ UPDATE/DELETE or REJECT.
The action performed by Passive transformations includes:
- The number of rows passing through the transformation is never changed.
- The transaction boundary is not changed.
- The row type is not changed.
Q #58) Enlist the various types of Transformations.
Answer: The various types of transformations are as follows:
- Aggregator transformation
- Expression transformation
- Normalizer transformation
- Rank transformation
- Filter transformation
- Joiner transformation
- Lookup transformation
- Stored procedure transformation
- Sorter transformation
- Update strategy transformation
- XML source qualifier transformation
- Router transformation
- Sequence Generator transformation
Q #59) What is Dynamic Cache?
Answer: INFORMATICA lookups can be categorized as either cached or uncached. In the case of a dynamic cache, rows can be inserted or updated in the cache as rows pass through the transformation, keeping the cache synchronized with the target. The cache memory is refreshed after every insert/update operation within the session.
Q #60) What is Static Cache?
Answer: A static cache is one that is neither updated nor refreshed during the session run. It is the default cache; the lookup returns a value only when the lookup condition is true and returns NULL otherwise. Inserts and updates cannot be performed on this cache.
Q #61) Mention a few advantages of Router transformation over Filter transformation.
Answer: Router transformation and Filter transformation are similar in that both of them use a condition to test and filter the data.
However, the advantages of Router over filter transformation can be understood by the below-mentioned points.
Router Transformation:
- It allows more than one test condition.
- Provide the ability to test the same input data on multiple numbers of conditions.
- In the case of mapping, input data is processed only once by the server and hence performance is improved.
- Less complex and more efficient.
- The records that fail the test conditions are never blocked; instead, they are passed on to the default group.
Q #62) Enlist some properties of sessions.
Answer: A session is available in the Workflow Manager and is configured by creating a session task. A workflow can contain multiple sessions, and a session can be either reusable or non-reusable.
Some of the properties of the session are as follows:
- As per the requirement, session tasks can be run either concurrently or sequentially.
- A session can be configured to analyze the performance.
- To create or run a session task, it is required to have general information about the session name, schedule, and Integration Service.
- Other important session properties include the session log file, test load, error handling, commit interval, target properties, etc.
Q #63) Enlist the tasks for which Source qualifier transformation is used.
Answer: Source Qualifier is considered an active transformation that represents the rows read by the Integration Service within the running session. It determines the way in which data is fetched from the source and is added automatically when a source is added to a mapping.
The list of different tasks where source qualifier is used is as follows:
- Rows filtering
- Data sorting
- Custom query creation
- Joining tables from the same source
- Selecting distinct values
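Several of these tasks can be combined in a single custom (override) query inside the Source Qualifier. A hedged sketch with hypothetical table and column names:
SELECT DISTINCT c.CUSTOMER_ID, c.CUSTOMER_NAME, o.ORDER_DATE  -- selecting distinct values
FROM CUSTOMERS c
JOIN ORDERS o ON o.CUSTOMER_ID = c.CUSTOMER_ID                -- joining tables from the same source
WHERE o.ORDER_DATE >= '2016-01-01'                            -- row filtering
ORDER BY o.ORDER_DATE;                                        -- data sorting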
Q #64) Mention a few PowerCenter client applications with their basic purpose.
Answer: Tasks like session and workflow creation, monitoring workflow progress, designing Mapplets, etc are performed by PowerCenter client applications.
Enlisted below is the list of PowerCenter client applications with their purpose:
- Repository Manager: It is an administrative tool and its basic purpose is to manage repository folders, objects, groups, etc.
- Administration Console: Here the service tasks like start/stop, backup/restore, upgrade/delete, etc are performed.
- PowerCenter Designer: The Designer consists of various designing tools that serve various purposes. These designing tools are:
- Source Analyzer
- Target Designer
- Transformation Developer
- Mapplet Designer
- Mapping Designer
- Workflow Manager: Its basic purpose is to define a set of instructions (a workflow) required to execute the mappings designed in the Designer. To help develop a workflow, there are 3 tools available, namely the Task Developer, Workflow Designer, and Worklet Designer.
- Workflow Monitor: As the name suggests, the Workflow monitor, monitors the workflow or tasks. The list of windows available are:
- Navigator window
- Output window
- Time window
- Properties window
- Task view
- Gantt chart view
Conclusion
I hope, by now you must have got a clear idea about the tool and the type of questions that will be asked in interviews.
INFORMATICA is one of the best solutions for performing Data Integration. It handles multi-data management in multi-platform environments such as Windows, Linux, Unix, etc., and has proven itself across a wide range of platform combinations for better outcomes and performance.
In a nutshell, INFORMATICA is an ETL tool that gathers data from various sources and loads it to the defined targets without permanently storing it itself. Its task is to deliver various services and resources across different machines, and the delivered data has to be correct, with remarkable results.
Brush up knowledge on Informatica concepts through this article and get prepared for your interview right away.
All The Best!!!