Most Frequently Asked Informatica Interview Questions and Answers:
This article covers the top Informatica MDM, PowerCenter, Data Quality, Cloud, ETL, Admin, Testing, and Developer questions.
INFORMATICA has become one of the most in-demand products across the globe. Its products were introduced relatively recently, yet they became popular within a short period of time.
Over the years, INFORMATICA has been a leader in the field of Data Integration. Based on the ETL (Extract, Transform, Load) architecture, this data integration tool has several products that focus on providing services for government organizations, financial and insurance companies, healthcare, and several other businesses.
That was just the background of INFORMATICA. Today, the Data Warehousing field is growing tremendously, and thus many job opportunities are available in the industry.
Best Informatica Interview Questions & Answers
Given below is a list of the most commonly asked Informatica interview questions and answers. It includes around 64 questions, which will help you brush up on your knowledge of Informatica concepts.
Q #1) What is INFORMATICA? Why do we need it?
Ans: INFORMATICA is a software development company that offers data integration products for ETL, data virtualization, master data management, data quality, data replication, ultra messaging, etc.
Some of the popular INFORMATICA products are:
- INFORMATICA PowerCenter
- INFORMATICA PowerConnect
- INFORMATICA Power Mart
- INFORMATICA Power Exchange
- INFORMATICA Power Analysis
- INFORMATICA Power Quality
We need INFORMATICA when working with data systems on which certain operations are to be performed, along with a set of rules. INFORMATICA facilitates operations like cleaning and modifying data from structured and unstructured data systems.
Q #2) What is the format of INFORMATICA objects in a repository? What are the databases that INFORMATICA can connect to Windows?
Ans: INFORMATICA objects can be written in XML format.
Following is the list of databases that INFORMATICA can connect to:
- SQL Server
- MS Access
- MS Excel
Q #3) What is INFORMATICA PowerCenter?
Ans: INFORMATICA PowerCenter is an ETL/Data Integration tool used to connect to and retrieve data from different sources and to process that data. PowerCenter processes high volumes of data and supports data retrieval from ERP sources such as SAP, PeopleSoft, etc.
You can connect PowerCenter to database management systems like SQL and Oracle to integrate data into the third system.
Q #4) Which are the different editions of INFORMATICA PowerCenter that are available?
Ans: Different editions of INFORMATICA PowerCenter are:
- Standard Edition
- Advanced Edition
- Premium Edition
The current version of PowerCenter available is v10, which comes with a significant performance increase.
Q #5) How can you differentiate between PowerCenter and PowerMart?
Ans: Given below are the differences between PowerCenter and PowerMart.

| # | INFORMATICA PowerCenter | INFORMATICA PowerMart |
|---|---|---|
| 1 | Processes high volumes of data | Processes low volumes of data |
| 2 | Supports global and local repositories | Supports only local repositories |
| 3 | Supports data retrieval from ERP sources like SAP, PeopleSoft, etc. | Does not support data retrieval from ERP sources |
| 4 | Converts local repositories to global | Does not convert local repositories to global |
Q #6) What are the Different Components of PowerCenter?
Ans: Given below are the 8 important components of PowerCenter:
- PowerCenter Service
- PowerCenter Clients
- PowerCenter Repository
- PowerCenter Domain
- Repository Service
- Integration Service
- PowerCenter Administration Console
- Web Service Hub
Q #7) What are the different Clients of PowerCenter?
Ans: Here is the list of PowerCenter clients:
- PowerCenter Designer
- PowerCenter Workflow Monitor
- PowerCenter Workflow Manager
- PowerCenter Repository Manager
Q #8) What is INFORMATICA PowerCenter Repository?
Ans: PowerCenter Repository is a Relational Database or a system database that contains metadata such as,
- Source Definition
- Target Definition
- Session and Session Logs
- ODBC Connection
There are two types of Repositories:
- Global Repositories
- Local Repositories
PowerCenter Repository is required to perform Extraction, Transformation, and Loading(ETL) based on metadata.
Q #9) How to elaborate Tracing Level?
Ans: Tracing Level can be defined as the amount of information that the server writes to the log file. The tracing level is configured either at the transformation level, at the session level, or at both levels.
Given below are the 4 types of tracing level:
- Normal
- Terse
- Verbose Initialization
- Verbose Data
Q #10) How to elaborate PowerCenter Integration Service?
Ans: Integration Services control the workflow and execution of PowerCenter processes.
There are three components of INFORMATICA Integration Services as shown in the below figure.
Integration Service Process: Known as pmserver, the Integration Service can start more than one process to monitor the workflow.
Load Balancing: Load Balancing refers to distributing the entire workload across several nodes in the grid. Load Balancer conducts different tasks that include commands, sessions, etc.
Data Transformation Manager (DTM): The Data Transformation Manager performs the transformations, which fall into the following categories:
- Active: To change the number of rows in the output.
- Passive: Cannot change the number of rows in the output.
- Connected: Link to the other transformation.
- Unconnected: No link to other transformations.
Q #11) What is PowerCenter on Grid?
Ans: INFORMATICA has a Grid Computing feature that can be utilized for greater data scalability and performance. The grid feature is used for load balancing and parallel processing.
PowerCenter domains contain a set of multiple nodes to configure the workload and then run it on the Grid.
A domain is a foundation for efficient service administration served by the PowerCenter.
Node is an independent physical machine that is logically represented for running the PowerCenter environment.
Q #12) What is Enterprise Data Warehousing?
Ans: When a large amount of data is assembled at a single access point, it is called Enterprise Data Warehousing. This data can be reused and analyzed at regular intervals or as required.
Considered as the central database or say a single point of access, Enterprise data warehousing provides a complete global view and thus helps in decision support.
Its features can be understood from the following points:
- All important business information stored in this unified database can be accessed from anywhere across the organization.
- Although the time required is more, periodic analysis on this single source always produces better results.
- Security and integrity of data are never compromised while making it accessible across the organization.
Q #13) What is the benefit of Session Partitioning?
Ans: While the Integration Service is running, the workflow can be partitioned for better performance. These partitions are then used to perform Extraction, Transformation, and Loading in parallel.
Q #14) How can we create an Index after completion of the Load Process?
Ans: Command Tasks are used to create an Index. Command Task scripts can be used in a session of the workflow to create an index.
Q #15) What are Sessions?
Ans: A session is a set of instructions used while moving data from the source to the destination. We can partition a session into several parallel sequences to improve server performance.
After creating a session, we can use the Server Manager or the command-line program pmcmd to stop or start the session.
Q #16) How can we use Batches?
Ans: Batches are collections of sessions used to migrate data from source to target on a server. A batch can contain a large number of sessions, but more sessions generate more network traffic, whereas a batch with fewer sessions moves data more rapidly.
Q #17) What is Mapping?
Ans: Mapping is a collection of source and targets which are linked with each other through certain sets of transformations such as Expression Transformation, Sorter Transformation, Aggregator Transformation, Router Transformation, etc.
Q #18) What is Transformation?
Ans: Transformation can be defined as a set of rules and instructions that are to be applied to define data flow and data load at the destination.
Q #19) What is Expression Transformation?
Ans: It is a mapping transformation used to transform data one record at a time. Expression Transformation is a passive, connected transformation. Expressions are used for data manipulation and output generation using conditional statements.
Q #20) What is Update Strategy Transformation?
Ans: The Update Strategy in Informatica is used to control the data passing through it and to flag each row as INSERT, UPDATE, DELETE, or REJECT. We can set conditional logic within the Update Strategy Transformation to tag rows accordingly.
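To make the row-flagging idea concrete, here is a minimal Python sketch (not Informatica code) of the kind of conditional logic an Update Strategy applies. The numeric values match PowerCenter's DD_INSERT/DD_UPDATE/DD_DELETE/DD_REJECT constants; the rule itself is a hypothetical example.

```python
# Sketch: tagging rows the way an Update Strategy transformation flags them.
# The constants below match PowerCenter's DD_* values.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def tag_row(row, existing_keys):
    """Tag a row with a DD_* flag based on a simple conditional rule."""
    if row.get("id") is None:
        return DD_REJECT            # no key -> reject the row
    if row["id"] in existing_keys:
        return DD_UPDATE            # key already in target -> update
    return DD_INSERT                # new key -> insert

rows = [{"id": 1}, {"id": 2}, {"id": None}]
tags = [tag_row(r, existing_keys={1}) for r in rows]
print(tags)  # [1, 0, 3]
```

In a real mapping, the equivalent logic would be written as an IIF/DECODE expression in the Update Strategy's properties rather than in Python.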
Q #21) What is Sorter Transformation?
Ans: Sorter Transformation is used to sort large volumes of data through multiple ports. It works much like the ORDER BY clause in SQL. Sorter Transformation is an active, connected transformation.
An active transformation can change the number of rows that pass through the mapping, whereas a passive transformation passes rows through the mapping without changing the number of rows.
Most of the INFORMATICA Transformations are Connected to the Data Path.
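The ORDER BY-style behavior of a Sorter can be sketched in plain Python (an analogy only, not Informatica code); the field names here are hypothetical. Sorting on multiple keys corresponds to selecting multiple sort-key ports:

```python
# Sketch: multi-key sorting, like a Sorter transformation with two key ports.
rows = [
    {"dept": "HR", "salary": 300},
    {"dept": "IT", "salary": 500},
    {"dept": "HR", "salary": 400},
]
# Sort by dept ascending, then salary descending.
rows_sorted = sorted(rows, key=lambda r: (r["dept"], -r["salary"]))
print([(r["dept"], r["salary"]) for r in rows_sorted])
# [('HR', 400), ('HR', 300), ('IT', 500)]
```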
Q #22) What is Router Transformation?
Ans: Router Transformation is used to filter the source data. You can use Router Transformation to split out a single data source.
It is much like the Filter Transformation, with the difference that a Filter Transformation uses only one condition and drops the rows that do not meet it, whereas a Router Transformation uses multiple conditions and passes each row to every group whose condition it matches (rows that match no condition go to the default group).
Q #23) What is Rank Transformation?
Ans: Rank Transformation is Active as well as Connected. It is used to sort and rank a set of records either top or bottom.
Rank Transformation is also used to select data with the largest or smallest numeric value based on a specific port.
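The top-N selection performed by a Rank transformation can be sketched in Python with the standard-library heapq module (an analogy, not Informatica code; the sample data is hypothetical):

```python
import heapq

# Sketch: picking the top-N rows by a numeric port, the way a Rank
# transformation selects the largest (or smallest) values.
rows = [("A", 250), ("B", 900), ("C", 640), ("D", 480)]
top2 = heapq.nlargest(2, rows, key=lambda r: r[1])
print(top2)  # [('B', 900), ('C', 640)]
```

For the smallest values, heapq.nsmallest plays the role of a bottom-ranked selection.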
Q #24) What is Rank Index in Rank Transformation?
Ans: A rank index is assigned to each record through a port created by the designer. The rank index port stores the ranking position for each row. The Rank Transformation identifies each row from top to bottom and then assigns a rank index.
Q #25) What is Status Code in INFORMATICA?
Ans: A Status Code provides an error-handling mechanism during each session. The Status Code is issued by a Stored Procedure to indicate whether it committed successfully or not, and it provides information to the INFORMATICA server to decide whether the session has to be stopped or continued.
Q #26) What are Junk Dimensions?
Ans: Junk Dimension is a structure that consists of a group of some junk attributes such as random codes or flags. It forms a framework to store related codes with respect to a specific dimension at a single place instead of creating multiple tables for the same.
Q #27) What is Mapplet in INFORMATICA?
Ans: Mapplet is a reusable object in INFORMATICA that contains a certain set of rules for transformation and transformation logic that can be used in multiple mappings. Mapplet is created in the Mapplet Designer in the Designer Tool.
Q #28) What is Decode in INFORMATICA?
Ans: To understand Decode in an easy way, consider it similar to the CASE statement in SQL. It is a function used within an Expression Transformation to search for a specific value in a record.
There can be unlimited searches within the Decode function, where a port is specified for returning result values. This function is usually used where it is required to replace nested IF statements or to replace lookup values by searching in small tables with constant values.
Below is a simple example of a CASE in SQL:
```sql
SELECT EMPLOYEE_ID,
       CASE
         WHEN EMPLOYEE_AGE <= 20 THEN 'Young'
         WHEN EMPLOYEE_AGE > 30 AND EMPLOYEE_AGE <= 40 THEN 'Knowledgeable'
         WHEN EMPLOYEE_AGE > 40 AND EMPLOYEE_AGE <= 60 THEN 'Wise'
         ELSE 'Very Wise'
       END AS EMPLOYEE_WISDOM
FROM EMPLOYEE
```
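The same CASE-style age-banding logic can be sketched in Python (an illustration only; the column names and bands mirror the hypothetical SQL example above, including its gap between ages 21 and 30, which falls through to the ELSE branch):

```python
# Sketch: CASE/DECODE-style conditional banding, evaluated top to bottom.
def employee_wisdom(age):
    if age <= 20:
        return "Young"
    elif 30 < age <= 40:
        return "Knowledgeable"
    elif 40 < age <= 60:
        return "Wise"
    else:
        return "Very Wise"

print([employee_wisdom(a) for a in (18, 35, 50, 65)])
# ['Young', 'Knowledgeable', 'Wise', 'Very Wise']
```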
Q #29) What is Joiner Transformation in INFORMATICA?
Ans: With the help of Joiner Transformation, you can make use of Joins in INFORMATICA.
It is based on two sources namely:
- Master Source
- Detail Source
Following joins can be created using Joiner transformation as in SQL.
- Normal Join
- Full Outer Join
- Master Outer Join (Right Outer Join)
- Detail Outer Join (Left Outer Join)
Q #30) What is Aggregator Transformation in INFORMATICA?
Ans: Aggregator Transformation is an active, connected transformation. It works like the GROUP BY clause in SQL and is used to perform aggregate calculations on groups in INFORMATICA PowerCenter, using aggregate functions such as SUM, AVG, MAX, and MIN.
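GROUP BY-style aggregation can be sketched in Python as follows (an analogy, not Informatica code; the sample data is hypothetical):

```python
from collections import defaultdict

# Sketch: SUM per group, the way an Aggregator transformation computes
# an aggregate over a group-by port.
rows = [("HR", 300), ("IT", 500), ("HR", 400)]
totals = defaultdict(int)
for dept, salary in rows:
    totals[dept] += salary
print(dict(totals))  # {'HR': 700, 'IT': 500}
```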
Q #31) What is Sequence Generator Transformation in INFORMATICA?
Ans: Sequence Generator Transformation is a passive, connected transformation. Its basic use is to generate integer values through the NEXTVAL and CURRVAL ports.
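The NEXTVAL behavior can be sketched with a plain Python counter (an analogy only; Informatica additionally handles start value, increment, and cycling through transformation properties):

```python
import itertools

# Sketch: each call to next() plays the role of reading NEXTVAL
# from a Sequence Generator.
seq = itertools.count(start=1)
nextvals = [next(seq) for _ in range(3)]
print(nextvals)  # [1, 2, 3]
```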
Q #32) What is Union Transformation in INFORMATICA?
Ans: Union Transformation is used to combine data from different sources and frame it with the same ports and data types. It works much like the UNION ALL clause in SQL, as it does not remove duplicate rows.
Q #33) What is Source Qualifier Transformation in INFORMATICA?
Ans: Source Qualifier Transformation is used in mappings; whenever we add a relational or flat-file source, it is created automatically. It is an active, connected transformation that represents the rows read by the Integration Service.
Q #34) What is INFORMATICA Worklet?
Ans: A Worklet works like a Mapplet but with reusability at the workflow level: a worklet can be applied to any number of workflows in INFORMATICA, whereas a mapplet is reused within mappings. A worklet saves logic and tasks in a single place for reuse.
A Worklet is defined as a group of tasks that can be either reusable or non-reusable at the workflow level. It can be added to as many workflows as required. With its reusability feature, much time is saved, as reusable logic can be developed once and placed wherever it is needed.
In the INFORMATICA PowerCenter environment, Mapplets are considered one of the most advantageous features. Mapplets are created in the Mapplet Designer, which is part of the Designer tool.
A Mapplet basically contains a set of transformations designed to be reused in multiple mappings.
Mapplets are reusable objects that simplify mapping by:
- Including multiple transformations and source definitions.
- Not requiring a connection to all input and output ports.
- Accepting data from sources and passing it to multiple transformations.
Overall, when it is required to reuse mapping logic, the logic should be placed in a Mapplet.
Q #35) What is SUBSTR in INFORMATICA?
Ans: SUBSTR is a function in INFORMATICA that extracts a set of characters from a larger character string.
Syntax: SUBSTR( string, start [,length] )
string is the character string from which we want to extract.
start is an integer that sets the position where counting should start.
length is an optional parameter that specifies the number of characters to return from the starting position.
For example, SUBSTR(Contact, 5, 8) starts at the 5th character of Contact and returns the next 8 characters.
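A simplified Python analogue of this 1-based substring behavior (an illustration only; Informatica's SUBSTR also has special handling for zero and negative start positions that this sketch omits):

```python
# Sketch: 1-based SUBSTR(string, start [, length]) semantics in Python.
def substr(s, start, length=None):
    i = start - 1                      # convert 1-based start to 0-based index
    return s[i:] if length is None else s[i:i + length]

print(substr("0123456789ABCDEF", 5, 8))  # '456789AB'
print(substr("hello", 2))                # 'ello' (no length: rest of string)
```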
Q #36) What is Code page Compatibility?
Ans: When data is transferred from the source code page to the target code page then all the characteristics of the source page must be present in the target page to prevent data loss, this feature is called Code Page Compatibility.
Code page compatibility comes into picture when the INFORMATICA server is running in Unicode data movement mode. In this case, the two code pages are said to be identical when their encoded characters are virtually identical and thus results in no loss of data.
For complete accuracy, the source code page should be a subset of the target code page.
Q #37) How you can differentiate between Connected LookUp and Unconnected LookUp?
Ans: A Connected Lookup is part of the data flow and is connected to another transformation; it takes its input directly from another transformation that performs a lookup. It can use both static and dynamic caches.
An Unconnected Lookup does not take its input from another transformation, but it can be called as a function from any transformation using the LKP (lookup) expression. It uses only a static cache.
Q #38) What is Incremental Aggregation?
Ans: Incremental Aggregation is enabled through the session properties. When it is enabled, the Integration Service applies only the incremental changes in the source data to the existing aggregate calculations, instead of recomputing the entire target.
On the first load, the Integration Service processes the entire source and saves the aggregate results to cache files. On each subsequent load, it aggregates the new source data with the cached values, and the cache file is updated after aggregation.
Q #39) What is a Surrogate Key?
Ans: A surrogate key is a sequentially generated integer value used as a substitute for the primary key, which is required as a unique identifier of each row in a table.
The primary (natural) key can change frequently as per business needs, which makes future updates more difficult; a surrogate key solves this problem.
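The idea of assigning stable integer identifiers to natural keys can be sketched in Python (an illustration only; in practice a Sequence Generator or database sequence would produce the values):

```python
# Sketch: mapping natural keys to sequentially generated surrogate keys,
# so a repeated natural key always maps to the same surrogate.
surrogates = {}

def surrogate_key(natural_key):
    if natural_key not in surrogates:
        surrogates[natural_key] = len(surrogates) + 1
    return surrogates[natural_key]

keys = [surrogate_key(k) for k in ("C-100", "C-200", "C-100")]
print(keys)  # [1, 2, 1]
```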
Q #40) What is the Session task and Command task?
Ans: A Session Task is a set of instructions to be applied while transferring data from source to target using a session command. A session command can be either a pre-session command or a post-session command.
A Command Task is a specific task that allows one or more shell commands (on UNIX) or DOS commands (on Windows) to run during the workflow.
Q #41) What is the Standalone command task?
Ans: A standalone Command task can be used to run shell commands anywhere and at any time in the workflow.
Q #42) What is Workflow? What are the components of the Workflow Manager?
Ans: A workflow is the manner in which tasks should be executed. It is a collection of instructions that tells the server how to execute the tasks.
Given below are the three major components (tools) of the Workflow Manager:
- Task Developer
- Worklet Designer
- Workflow Designer
Q #43) What is the Event and what are the tasks related to it?
Ans: The event can be any action or function that occurs in the workflow.
There are two tasks related to it, which includes:
- Event Wait Task: This task waits until an event occurs; once the event is triggered, the task completes and the next task is assigned.
- Event Raise Task: This task triggers a specific user-defined event in the workflow.
Q #44) What is a pre-defined event and User-defined event?
Ans: Predefined events are system-defined events that wait until the arrival of a specific file at a specific location. A predefined event is also called a File-Watcher event.
User-Defined Events are created by the user to raise anytime in the workflow once created.
Q #45) What is the Target Designer and Target Load Order?
Ans: Target Designer is used for defining the Target of data.
When there are multiple sources or a single source with multiple partitions linked to different targets through the INFORMATICA server then the server uses Target Load Order to define the order in which the data is to be loaded at a target.
Q #46) What is the Staging Area?
Ans: A Staging Area is a database where temporary tables are stored to hold intermediate data for the work area, providing inputs for data cleansing and processing before the data is loaded into the target.
Q #47) What is the difference between STOP and ABORT?
Ans: Differences are as follows:
- The STOP command runs on the Session task; once it is issued, the Integration Service stops reading data from the source but continues processing the data already read and writing it to the target.
- The ABORT command is used to completely stop the Integration Service from reading, processing, and writing data to the target. It has a timeout period of 60 seconds for the service to finish processing and writing data; if it cannot finish within that time, it simply kills the session.
Q #48) What are the different LookUp Caches?
Ans: An INFORMATICA lookup can be either cached or uncached. The cache is basically divided into four types:
- Static Cache
- Dynamic Cache
- Persistent Cache
- Shared Cache
Static Cache remains as it is without change while a session is running.
Dynamic Cache keeps updating frequently while a session is running.
Q #49) How to update Source Definition?
Ans: There are two ways to update source definition in INFORMATICA.
- You can edit the existing source definition.
- You can import a new source from the database.
Q #50) How to implement Security Measures using a Repository manager?
Ans: There are 3 ways to implement security measures.
- Folder Permission within owners, groups, and users.
- Locking (Read, Write, Retrieve, Save and Execute).
- Repository Privileges, viz.
  - Browse Repository.
  - Use Workflow Manager (to create sessions and batches and set their properties).
  - Workflow Operator (to execute sessions and batches).
  - Use Designer.
  - Admin Repository (allows a user to create and manage repositories).
  - Admin User (allows the user to create a repository server and set its properties).
  - Super User (all privileges are granted to the user).
Q #51) Enlist the advantages of INFORMATICA.
Ans: Considered one of the most favored data integration tools, INFORMATICA has multiple advantages worth enlisting.
- It can effectively and very efficiently communicate and transform the data between different data sources like Mainframe, RDBMS, etc.
- It is typically faster, more robust, and easier to learn than many other available platforms.
- With the help of INFORMATICA Workflow Monitor, jobs can be easily monitored, failed jobs can be recovered as well as slow running jobs can be pointed out.
- It has features like easy processing of database information, data validation, migration of projects from one database to another, project development, iteration, etc.
Q #52) Enlist a few areas or real-time situations where INFORMATICA is required.
Ans: Data warehousing, data integration, data migration, and application migration from one platform to another are a few examples of real-time usage areas.
Q #53) Explain the ETL program with few examples.
Ans: An ETL tool stands for Extract, Transform, and Load; it basically serves the purpose of extracting data from sources, altering it, and sending it to a defined destination.
To be very precise:
- The extraction task is to collect the data from sources like the database, files, etc.
- Transformation is considered as altering the data that has been received from the source.
- Loading defines the process of feeding the altered data to the defined target.
To put it technically, an ETL tool collects data from heterogeneous sources and alters it to make it homogeneous, so that it can be used further for analysis of the defined task.
Some basic program examples include:
- Mappings derive the ETL process of reading data from their original sources where the mapping process is done in the Designer.
- Workflows consist of multiple tasks that are decided and designed in the Workflow Manager Window.
- The task consists of a set of multiple steps that determine the sequence of actions to be performed during run-time.
Q #54) Enlist the differences between Database and Data Warehouse.
Ans: Refer to the below table to understand the differences between the two:
| Database | Data Warehouse |
|---|---|
| Stores/records current, up-to-date data used in daily operations. | Stores/analyzes historical data used for informational support on a long-term basis. |
| Oriented toward Online Transaction Processing (OLTP), which involves simple and short transactions. | Oriented toward Online Analytical Processing (OLAP), which involves complex queries. |
| Consists of detailed, primitive data with a flat relational view. | Consists of summarized, consolidated data with a multidimensional view. |
| Low performance is observed for analytical queries. | High performance is observed for analytical queries. |
| Efficiency is determined by measuring transaction throughput. | Efficiency is determined by measuring query throughput and response time. |
Q #55) Explain the features of the Connected and Unconnected lookup.
Ans: The features of Connected Lookup can be explained as follows:
- There is a direct source of input from the pipeline for connected lookup.
- It actively participates in the data flow, and either a dynamic or a static cache is used as the case requires.
- It caches all lookup columns and returns the default values as the output when the lookup condition does not match.
- More than one column values can be returned to the output port.
- Multiple output values are passed as well as output ports are linked to another transformation.
- Connected lookup supports user-defined default values.
The features of unconnected lookup can be explained as follows:
- Unconnected lookup uses static cache and its source of input is the result received from the output of LKP expression.
- It caches only the lookup output ports and returns the value as NULL when the lookup condition does not match.
- Only the lookup output ports are cached, and only one column value can be returned from the lookup.
- Only one output value is passed to another transformation.
- User-defined default values are not supported by unconnected lookup.
Q #56) During the running session, output files are created by the INFORMATICA server. Enlist a few of them.
Ans: Mentioned below are the few output files:
- Cache files: These files are created at the time of memory cache creation. For circumstances like Lookup transformation, Aggregator transformation, etc index and data cache files are created by the INFORMATICA server.
- Session detail file: As the name defines, this file contains load statistics like table name, rows rejected or written for each target in mapping and can be viewed in the monitor window.
- Performance detail file: This file is a part of the session property sheet and contains session performance information in order to determine improvement areas.
- INFORMATICA server log: The server creates a log for all status and error messages and can be seen in the INFORMATICA home directory.
- Session log file: For each session, the server creates a session log file depending on the set tracing level. The information that can be seen in log files about sessions can be:
- Session initialization process,
- SQL commands creation for reader and writer threads,
- List of errors encountered and
- Load summary
- Post-session email: This helps in communicating the information about the session (session completed/session failed) to the desired recipients automatically.
- Reject file: This file contains information about the data that has not been used/written to targets.
- Control file: In case, when the session uses the external loader, control file consists of loading instructions and data format about the target file.
- Indicator file: This file basically contains a number that highlights the rows marked for INSERT/UPDATE/DELETE or REJECT.
- Output file: The output file is created based on the file properties.
Q #57) How to differentiate between the Active and Passive transformations?
Ans: To understand the difference between Active and Passive transformations, let us see its features which will explain the differences in a better way.
The actions performed by Active transformations include:
- Changing the number of rows that pass through the transformation, as per the requirement. For example, a Filter transformation drops rows that do not meet the condition.
- Changing the transaction boundary by setting the rollback and commit points. For Example, Transaction control transformation.
- Changing the row type for INSERT/ UPDATE/DELETE or REJECT.
The characteristics of Passive transformations include:
- The number of rows passing through the transformation is never changed.
- The transaction boundary is never changed.
- The row type is never changed.
Q #58) Enlist the various Types of Transformations.
Ans: The various types of transformations are as follows:
- Aggregator transformation
- Expression transformation
- Normalizer transformation
- Rank transformation
- Filter transformation
- Joiner transformation
- Lookup transformation
- Stored procedure transformation
- Sorter transformation
- Update strategy transformation
- XML source qualifier transformation
- Router transformation
- Sequence Generator transformation
Q #59) What is Dynamic Cache?
Ans: INFORMATICA lookups can be categorized as either cached or uncached. With a dynamic cache, rows can be inserted into or updated in the cache as rows pass through, keeping the cache synchronized with the target. The cache is refreshed after every insert/update operation within the session.
Q #60) What is a Static Cache?
Ans: A static cache is one that is neither updated nor refreshed during the session run. It is the default cache and returns a value only when the lookup condition is true; otherwise, it returns a NULL value. Insert or update operations cannot be performed on this cache.
Q #61) Mention a few advantages of Router transformation over Filter transformation.
Ans: Router transformation and Filter transformation are similar because both of them use a condition to test and filter the data.
However, the advantages of Router over filter transformation can be understood by the below-mentioned points.
- It allows more than one test condition.
- Provide the ability to test the same input data on multiple numbers of conditions.
- In the case of mapping, input data is processed only once by the server and hence performance is improved.
- Less complex and more efficient.
- The records that fail the test condition are never blocked instead are passed on to the default group.
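The routing-versus-filtering difference can be sketched in Python (an analogy, not Informatica code; the conditions and sample values are hypothetical):

```python
# Sketch: a Router evaluates several conditions and keeps non-matching
# rows in a default group, while a Filter simply drops rows that fail
# its single condition.
rows = [5, 15, 25, 35]

groups = {"low": [], "high": [], "default": []}
for r in rows:
    if r < 10:
        groups["low"].append(r)
    elif r > 30:
        groups["high"].append(r)
    else:
        groups["default"].append(r)   # failed rows are kept, not dropped

filtered = [r for r in rows if r < 10]  # Filter: non-matching rows are lost
print(groups, filtered)
```

Note how the input is scanned only once to fill all groups, which mirrors the performance point above.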
Q #62) Enlist some properties of sessions.
Ans: A session is available in the Workflow Manager and is configured by creating a session task. Within a workflow, there can be multiple sessions, and a session can be either reusable or non-reusable.
Some of the properties of the session are as follows:
- As per the requirement, session tasks can be run either concurrently or sequentially.
- A session can be configured to analyze the performance.
- To create or run a session task, it is required to have general information about Session name, schedule and integration service.
- Other important properties of a session include the session log file, test load, error handling, commit interval, target properties, etc.
Q #63) Enlist the tasks for which Source qualifier transformation is used.
Ans: Source Qualifier is considered an active transformation that represents the rows read by the Integration Service within the running session. It determines the way in which the data is fetched from the source and is automatically added when a source is added to a mapping.
The list of different tasks where source qualifier is used is as follows:
- Rows filtering
- Data sorting
- Custom query creation
- Joining tables from the same source
- Selecting distinct values
Q #64) Mention a few PowerCenter client applications with their basic purpose.
Ans: Tasks like session and workflow creation, monitoring workflow progress, designing mapplets, etc. are performed by PowerCenter client applications.
Enlisted below is the list of Power center client applications with their purpose:
- Repository Manager: It is an administrative tool and its basic purpose is to manage repository folders, objects, groups, etc.
- Administration Console: Here the service tasks like start/stop, backup/restore, upgrade/delete, etc are performed.
- PowerCenter Designer: The Designer consists of various designing tools that serve various purposes. These designing tools are:
  - Source Analyzer
  - Target Designer
  - Transformation Developer
  - Mapplet Designer
  - Mapping Designer
- Workflow Manager: Its basic purpose is to define the set of instructions/workflow required to execute the mappings designed in the Designer. To help develop a workflow, there are 3 tools available, namely the Task Developer, Workflow Designer, and Worklet Designer.
- Workflow Monitor: As the name suggests, the Workflow monitor, monitors the workflow or tasks. The list of windows available are:
  - Navigator window
  - Output window
  - Time window
  - Properties window
  - Task view
  - Gantt chart view
I hope that by now you have a clear idea about the tool and the type of questions that will be asked in interviews.
INFORMATICA is a leading solution for data integration. It supports multi-domain data management in multi-platform environments such as Windows, Linux, Unix, etc.
In a nutshell, INFORMATICA is an ETL tool that gathers data from various sources and loads it into the defined targets without persisting it in between. Its task is to deliver various services and resources to different machines, so the delivered data has to be correct and reliable.
Brush up on your knowledge of Informatica concepts through this article and get prepared for your interview right away.
All The Best!!!