Ab Initio is a suite of products that together provide a platform for data processing applications.
The main Ab Initio products are:
Co>Operating System
Component Library
GDE
EME
Data Profiler
Conduct>It
The difference between a phase and a checkpoint relates to how the temporary files containing data landed to disk are handled.
Phases are used to break up a graph so that it does not use up all the available memory; they reduce the number of components running in parallel and so improve performance (a performance-tuning device for managing resources).
Checkpoints are used for recovery.
A phase is a stage in a graph that runs to completion before the next stage starts.
A checkpoint is an intermediate stopping point in the graph used to safeguard against failure.
We can have phases without checkpoints.
We cannot assign a checkpoint without a phase.
In other words:
The major difference between the two is that phasing deletes the intermediate files written at the end of each phase as soon as the graph enters the next phase. Checkpointing, on the other hand, keeps these intermediate files until the end of the graph, so they can be used to restart the process from the point of failure. That is not possible with phasing alone.
Phases are used to direct resources such as memory, disk space, and CPU cycles to the most demanding part of the job. Say we have memory-consuming components in a straight flow and the incoming data runs into millions of records; we can separate that processing into its own phase so that more CPU is available to it and the whole job finishes sooner.
Checkpoints, in contrast, are like save points in a PC game: they are needed if we want to restart the graph from the last saved recovery file (a phase break with checkpoint) after an unexpected failure.
Using phase breaks that include checkpoints degrades performance somewhat but guarantees a restartable run. Toggling checkpoints can be used to remove the checkpoints from phase breaks.
Parallelism
Dynamic Script Generation
Plans & Psets
Plan
A plan is an Ab-Initio Conduct>It feature
It is a representation of all the interrelated elements of a system
Using a plan, you can control the sequence, relationships, and communication between tasks by how you
connect the tasks and by how you specify methods and parameters. You also control how tasks use system
resources and how to group tasks for safe recovery
A subplan is a complete Conduct>It plan embedded in a larger plan
Pset
A pset is a file containing a set of input parameter values that reference a graph or plan
Every .pset file contains information linking it back to the original graph or plan it was created from
DML Overview:
record
string(10) name;
decimal(10) roll_no;
string("\n") newline;
end;
Useful DML Utilities: m_eval, m_dump
m_eval
Evaluates DML expressions and displays their derived types
Used to test and evaluate simple, multiple, cast, and other expressions that you want to use in a graph
E.g.: $ m_eval '(date("YYYYMMDD")) (today() - 10)'
"20041130"
m_dump
Prints information about data records, their record formats, and the evaluations of expressions
E.g.: $ m_dump -string "record int a; string(12) b; double c; end" -describe
• Record formats are set in the following 2 ways:
Use a file
Embed
• Embed – The record format is written for each port in the format below:
record
string("\x01",maximum_length=7) clm_nbr;
decimal("\x01") agr_id;
date("YYYY-MM-DD")("\x01") eff_strt_dt;
end;
• Use file – A DML file is created that contains only the record format; it is stored in the dml folder of the sandbox.
• In the component we specify the path to this DML file to import the record format.
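For instance (an illustrative sketch; the parameter name and file name are assumptions), the component's record format would be set to "Use File" with a path such as $AI_DML/claim_detail.dml, where AI_DML is the sandbox parameter that points to the sandbox's dml directory.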
Feedback: provides performance metrics for each component (debug mode, but with slower execution)
Manage and run Ab Initio graphs and control the ETL processes
Provide Ab Initio extensions to the operating system
ETL process monitoring and debugging
Metadata management and interaction with the EME
7) List out the file extensions used in Abinitio?
Commonly used extensions include .mp (graph), .dml (record format), .xfr (transform), .dat (data file), .ksh (deployed script), and .pset (parameter set).
How do you run a graph infinitely?
To execute a graph infinitely, the graph's end script should call the .ksh file of the graph itself. So if the graph name is abc.mp, the end script should call abc.ksh. This will run the graph indefinitely.
Mention the difference between a "Lookup File" and "Lookup" in Abinitio?
A lookup file defines one or more serial files (flat files); it is the physical file where the data for the lookup is stored. Lookup, on the other hand, is the component of an Abinitio graph where we can save data and retrieve it using a key parameter.
Component parallelism: A graph with multiple processes executing simultaneously on separate data uses component parallelism.
Data parallelism: A graph that works with data divided into segments and operates on each segment simultaneously uses data parallelism.
Pipeline parallelism: A graph with multiple components executing simultaneously on the same data uses pipeline parallelism. Each component in the pipeline reads continuously from the upstream component, processes the data, and writes to the downstream component, so both components can operate in parallel.
2) Explain what is Sort Component in Abinitio?
The Sort component in Abinitio re-orders the data. It has two main parameters, "key" and "max-core".
Key: It is one of the parameters for sort component which determines the collation order
Max-core: This parameter controls how often the sort component dumps data from memory to disk
13) Mention what dedup-component and replicate component does?
Dedup component: It is used to remove duplicate records
Replicate component: It combines the data records from the inputs into one flow and writes a copy of that flow to each
of its output ports
Mention what is a partition and what are the different types of partition components in Abinitio?
In Abinitio, partition is the process of dividing data sets into multiple sets for further processing. Different types of partition
component includes
Partition by Round-Robin: Distributes data evenly, in block-size chunks, across the output partitions
Partition by Range: Divides data evenly among nodes, based on a set of partitioning ranges and a key
Partition by Percentage: Distributes data so that the output is proportional to fractions of 100
Partition by Load Balance: Distributes data dynamically based on load
Partition by Expression: Divides data according to a DML expression
Partition by Key: Groups data by a key
air object ls <EME path for the object, e.g. /Projects/edf/..>: Lists the objects in a directory inside the project
air object rm <EME path for the object, e.g. /Projects/edf/..>: Removes an object from the repository
air object versions -verbose <EME path for the object, e.g. /Projects/edf/..>: Gives the version history of the object
air versions -verbose <path to the object in EME>
air sandbox diff -version 437959 -version 397048
Other air command for Abinitio include air object cat, air object modify, air lock show user, etc.
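A quick illustration (the EME project path and graph name below are made up for the example):
air object ls /Projects/edf/main/mp
air object versions -verbose /Projects/edf/main/mp/load_customers.mp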
The m_dump command in Abinitio is used to view the data in a multifile (or serial file) from the Unix prompt. For example:
m_dump a.dml a.dat: prints the data as it appears from the GDE when we view data as formatted text
m_dump a.dml a.dat > b.dat: redirects the output into b.dat, which then acts as a serial file that can be referred to when required
What is PDL in Abinitio?
PDL (Parameter Definition Language) is a feature introduced in later versions of Abinitio. Using it, you can run a graph without deploying it through the GDE: the .mp file can be executed directly with the air sandbox run command, which sets up the host environment. In short, it is a kind of parameterized environment.
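For instance (the sandbox path is illustrative), a graph or pset can be run directly from the shell:
air sandbox run /data/sandboxes/jdoe/my_proj/pset/load_customers.pset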
In the above case (force_error), the graph fails with exit code 3 along with an error message in the error log file saying "Age not suitable for Voting".
force_abort, by contrast, just fails the graph without any message, also with return code exit 3 for failure.
Perform Check-out
Copying one or more files or projects from the EME datastore to the sandbox is called Check-out.
The command-line syntax:
air project export <project-path> -basedir <basedir> { [-files <relative path>] [-force] }
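For example (the paths are illustrative):
air project export /Projects/edf/main -basedir /data/sandboxes/jdoe/main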
What is Locking?
Locking sets whether other users can break a lock on an object or not. By default, locks are breakable. To make a lock unbreakable:
air lock set -unbreakable /Projects/bi/sbis/cmi/cmi_loss/mp/LHST1031CLAIMCatEvntLs_Blddata_JulVer.mp
Dynamic Output DML generation based on values in input field and then dynamic xfr generation
Policy_number | Coverage_id
I want in the output:
1) only one record, which will have:
i) a Policy_number_distinct_count column --> contains the count of distinct Policy_number values
ii) coverage_id_1, coverage_id_2, ... coverage_id_N column(s) --> each contains the count of the corresponding Coverage_id
Note: the number of coverage_id columns in the output DML should be created based on the distinct Coverage_id values present in the input Coverage_id column.
Look into the below scenarios to get clear picture of what I exactly want:
Scenario 1:
Policy_number Coverage_id
1 1
2 1
3 1
Expected o/p:
Policy_number_distinct_count Coverage_id_1
3 3
As Coverage_id in the input has only one distinct value (i.e., '1'), there should be one coverage_id column in the output, named coverage_id_1.
Scenario 2:
i/p:
Policy_number Coverage_id
1 1
2 1
3 2
Expected o/p:
Policy_number_distinct_count Coverage_id_1 Coverage_id_2
3 2 1
As Coverage_id in the input has two distinct values ('1' and '2'), there should be two coverage_id columns in the output, named coverage_id_1 and coverage_id_2 respectively.
Scenario 3:
i/p:
Policy_number Coverage_id
1 1
2 1
3 2
4 3
5 3
Expected o/p:
Policy_number_distinct_count Coverage_id_1 Coverage_id_2 Coverage_id_3
5 2 1 2
As Coverage_id in the input has three distinct values (1, 2 and 3), there should be three coverage_id columns in the output, named coverage_id_1, coverage_id_2 and coverage_id_3 respectively.
Scenario 4:
i/p:
Policy_number Coverage_id
1 1
1 1
1 2
2 3
Expected o/p:
Policy_number_distinct_count Coverage_id_1 Coverage_id_2 Coverage_id_3
2 2 1 1
As there are two distinct Policy_number values in the input, Policy_number_distinct_count in the output should have the value '2'.
I have implemented a solution for the above requirement for a fixed set of Coverage_id values in the input using rollup, but here I want a solution where Coverage_id in the input can have any (not fixed) set of values, and the output columns/DML should be created based on that.
Thanks in advance.
Answer:---
1) First sort by Policy_number, then dedup it, then use a rollup and take the count.
2) Sort by Coverage_id and use a rollup with the transform below [NOTE: USE SORT IF REQUIRED]:
type temporary_type = record
  decimal("|") a;
end; /* temporary variable */

temp :: initialize(in) =
begin
  temp.a :: 0;
end;

temp :: rollup(temp, in) =
begin
  temp.a :: temp.a + 1;
end;
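An expanded rollup also needs a finalize function to emit the accumulated count (a sketch; the output field names are assumptions):
out :: finalize(temp, in) =
begin
  out.Coverage_id :: in.Coverage_id;  /* the group key */
  out.cnt         :: temp.a;          /* count of records for this Coverage_id */
end;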
3) Use concatenate component to combine the file (in.0 -> count and in.1 -> 2nd rollup)
#!/bin/ksh
# Keep only the first '+'-delimited field of each record.
cut -d "+" -f1 /data/sandboxes/jprathap/jaga_ts/dm1.dat > /data/sandboxes/jprathap/jaga_ts/dm4.dat
export b=1
export b1=`wc -l /data/sandboxes/jprathap/jaga_ts/dm1.dat | cut -d " " -f1`

# Transpose the rows of dm4.dat into columns.
awk '
{
    for (i = 1; i <= NF; i++) {
        a[NR, i] = $i
    }
}
NF > p { p = NF }
END {
    for (j = 1; j <= p; j++) {
        str = a[1, j]
        for (i = 2; i <= NR; i++) {
            str = str " " a[i, j]
        }
        print str
    }
}' /data/sandboxes/jprathap/jaga_ts/dm4.dat > /data/sandboxes/jprathap/jaga_ts/dm5.dat

# Build the output DML: one Policy_number field plus one Coverage_id_<value> field per
# distinct Coverage_id found in dm1.dat; the last field closes the record with "end;".
while read a
do
    if [ "$b" -eq 1 ]
    then
        echo "record
decimal ('|') Policy_number;" > /data/sandboxes/jprathap/jaga_ts/dml/dm3.dml
    elif [ "$b" -ne "$b1" ]
    then
        b2=`echo $a | cut -d "+" -f2`
        echo "decimal ('|') Coverage_id_$b2;" >> /data/sandboxes/jprathap/jaga_ts/dml/dm3.dml
    elif [ "$b" -eq "$b1" ]
    then
        b2=`echo $a | cut -d "+" -f2`
        echo "decimal ('\n') Coverage_id_$b2;
end;" >> /data/sandboxes/jprathap/jaga_ts/dml/dm3.dml
    else
        echo "0"
    fi
    ((b++))
done < /data/sandboxes/jprathap/jaga_ts/dm1.dat

# Replace the spaces introduced by the transpose with the '|' delimiter.
sed -i 's/ /|/g' /data/sandboxes/jprathap/jaga_ts/dm5.dat
Now /data/sandboxes/jprathap/jaga_ts/dm5.dat is the output flat file and /data/sandboxes/jprathap/jaga_ts/dml/dm3.dml is the generated output DML.
OUTPUT DATA:
"Lookup returns a single record (I mean a single field from that record) even when multiple records match with key field. Suppose a
lookup file contains multiple records for the matched key, and I want all those records (I mean again fields from all matching
records) to be retrieved. How can I emulate this behavior? I am permitted to use any components except Join. Please help. Thank
you.
Answer:-
You will first need to use lookup_count to get the number of matching records in the lookup file, and then loop that many times with the lookup_next function.
The lookup_next function moves the pointer forward as long as there is a matching record.
The Normalize component will then know how many times it has to run for each input record: the normalize function is called as many times as the value returned by the length function. Use lookup_next to get the subsequent matching records. Each call of the normalize function produces an output record, so the number of output records equals the number of matching records in the lookup file.
But could you let me know what your business case is if there are no matches in the lookup file for an input record? I was just trying to understand your business requirement, i.e. if there are no matches with the lookup.
If you need to reject the input record, then prioritize the length function's output of the Normalize component as below:
ANSWER 3:
out :1: lookup_count("lookup_file", key);
out :: 1;
The reason is that if there is no match, the count will be 0 and the normalize function would not be called at all, so prioritize the rules as above.
Now in the normalize function you can implement the reject mechanism, for example force_error, to reject the record.
out :: length(in) =
begin
  out :1: lookup_count("lookup_file", in.key);
  out :: 1;
end;
type lkp_type =
record
  string(",") key;
  string(",") prod_type;
  string("\n") prod_man_name;
end;
out :: normalize(in, index) =
begin
  let lkp_type lkp_rec =
    if (index == 0)
      lookup("lookup_file", in.key)      /* first matching record */
    else
      lookup_next("lookup_file");        /* subsequent matching records */
  out.prod_type :1: lkp_rec.prod_type;
  out.prod_man_name :1: lkp_rec.prod_man_name;
  out.prod_type :: "";
  out.prod_man_name :: "";
  out.key :: in.key;
end;
No need for a vector structure in the output dml unless you wanted to create any based on your requirement.
Ab Initio Differences for In-Memory Sort With Multifiles
Questions:-
I am running a simple graph with 2 multifiles (one with a huge volume of 8 billion records and the other with a small volume of 1 million records).
I have followed 2 approaches.
1. I have used the PBKS (Partition by Key and Sort) component before the join for the 2 files and done an inner join.
2. I have used the in-memory sort, keeping the huge-volume file on the driving port, and applied an inner join.
So, I see a difference in the output. I want to know: is there any difference between the 2 approaches when working with multifiles?
Answer:-
The logic of an MFS is:
Partition_1 of the in0 port will always look into Partition_1 of the in1 port,
Partition_2 of the in0 port will always look into Partition_2 of the in1 port, and so on...
Partition_1 never looks into Partition_2 data.
-> Since you used PBKS in the first approach, all records with the same key land in the same partition, so it gives the expected result. In the second approach the same key may be split across different partitions, so the result may differ.
-> Never use an in-memory join for a huge file; it will slow down the performance.
How to Handle Tab Separated Records Which Have Some Tabs and Newline Characters in the Data Ab Initio?
Can someone please help on how to handle tab-separated records that have extra tabs and newline characters in the data in Ab Initio? I'm new to Ab Initio. One of my friends suggested using the RSV component, but I'm not able to figure it out.
Answer:-
(1) OP had a file with field separators, but some process had folded the rows over multiple lines: that is, the data had
extra newlines in most of the records.
Logical solution: we knew there should be 15 fields in each row. So we appended lines from the input file until there
were 15 (or more) fields, and output that line. So the field separators told us where the real line breaks should be.
(2) OP had a CSV file where just one field could have extra commas in the data (but it had not been quoted as
demanded by the CSV specification).
Logical solution: There were always 13 data fields, and field 4 was the only one that had extra commas (it was an
address). So we identified fields 1-3 by counting from the front, and fields 5-13 by counting from the end of each line,
and field 4 was anything in the middle, and we quoted it to make valid CSV. In that case, the line breaks told us
where the trailing fields should be.
If your file format has that much internal consistency, we can fix it. If it really has random characters, nobody can fix it, because the information was destroyed when the file was created.
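As an illustration of approach (1) (a sketch, not from the original thread; it assumes tab-separated data that should have exactly 15 fields per logical record):
awk -F'\t' '
{
    # Accumulate physical lines until the buffer holds at least 15 tab-separated fields,
    # then emit the buffer as one repaired logical record.
    buf = (buf == "") ? $0 : buf " " $0
    if (split(buf, f, "\t") >= 15) { print buf; buf = "" }
}
END { if (buf != "") print buf }   # flush any trailing partial record
' broken_input.dat > repaired_output.dat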
Hi, what is the use of Redefine Format over Reformat? What can Redefine Format do that Reformat cannot? In the help file I saw the FAQ "REFORMAT versus REDEFINE FORMAT", but I can't get the exact difference between them.
According to the help file: 1. REFORMAT can actually change the bytes in the data. I am able to convert string(3) to string(",") with Reformat but not with Redefine Format, so this point is understandable. 2. REDEFINE FORMAT simply changes the record format on the data as it flows through, leaving the data unchanged. I am able to convert string to decimal with Reformat as well as with Redefine Format, for example string(3) to decimal(3). Reformat also changes the record format leaving the data unchanged, so what is the exact use of Redefine Format over Reformat? Please give your answers. Regards, Syam
Answer 1:
To remove processing overhead.
Well, I am sure you know what Reformat does. It is the primary transform component of Ab Initio. Any kind of standard record-by-record transformation is performed by it. This involves applying field-to-field transformation logic and increasing or reducing the number of fields.
Redefine Format is just used to RE-INTERPRET the same data with a different DML. For example, initially you might read the file below with
10|abc|200|2015-01-01
string("\n") input_record;
You are reading the whole input record, but after a few components you need to interpret the same data with the actual DML, say
decimal("|") employee_id;
string("|") employee_name;
decimal("|") salary;
date("YYYY-MM-DD")("\n") joining_date;
Now you just need to provide this DML on the output port of Redefine Format and the component will automatically re-interpret the same data with the new DML.
One major use of Redefine Format is when a file has header, trailer, and data records. Since the file has three types of records with three different DMLs, we can read the file first using
string("\n") input_line;
After we remove the header and trailer from the file and are left with the actual data records, we can re-interpret the flow using a Redefine Format and start reading the data records with the actual DML.
LOOKUP FILE
How to use a LOOKUP FILE COMPONENT:
• To perform a memory-resident lookup using a Lookup File component:
• Place a LOOKUP_FILE component in the graph and open its Properties dialog.
• On the Description tab, set the Label to a name we will use in the lookup functions that reference this file.
• Set the RecordFormat parameter to the record format of the lookup file.
• Set the key parameter to specify the fields to search.
LOOKUP FILE
• Set the Special attribute of the key to the type of lookup we want to do.
• Add a lookup function to the transform of the component that will use the lookup file.
• The first argument to a lookup function is the name of the lookup file. The remaining arguments are values to be matched against
the fields named by the key parameter of the lookup file.
lookup("MyLookupFile", in.key)
• If the lookup file key's Special attribute (in the Key Specifier Editor) is exact, the lookup functions return a record that matches the
key values and has the format specified by the RecordFormat parameter.
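For example, a Reformat transform might use the lookup like this (a sketch; the file label, key, field names, and default value are illustrative):
out :: reformat(in) =
begin
  /* return the customer name from the lookup; fall back to a default when no record matches */
  out.cust_name :1: lookup("MyLookupFile", in.cust_id).cust_name;
  out.cust_name ::  "UNKNOWN";
  out.* :: in.*;
end;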
Partitioned lookup files:
Lookup files can be either serial or partitioned (multifiles). The lookup
functions we use to access lookup data come in both local and non-local
varieties, depending on whether the lookup data files are partitioned.
When a component accesses a serial lookup file, the Co>Operating System
loads the entire file into the component’s memory. If the component is
running in parallel (and you use a _local lookup function), the
Co>Operating System splits the lookup file into partitions.
The benefits of partitioning lookup files are:
1. The per-process footprint is lower. This means the lookup file as a whole can exceed the 2 GB limit.
2. If the component is partitioned across machines, the total memory needed on any one machine is reduced.
DYNAMIC LOOKUP
A disadvantage of statically loading a lookup file is that the dataset occupies a fixed amount of
memory even when the graph isn’t using the data.
By dynamically loading lookup data, we control how many lookup datasets are loaded, which
lookup datasets are loaded, and when lookup datasets are loaded. This control is useful in
conserving memory; applications can unload datasets that are not immediately needed and load
only the ones needed to process the current input record.
The idea behind dynamically loading lookup data is to:
1. Load the dataset into memory when it is needed.
2. Retrieve data with your graph.
3. Free up memory by unloading the dataset after use.
DYNAMIC LOOKUP
How to look up data dynamically:
To look up data dynamically:
1. Prepare a LOOKUP TEMPLATE component:
a. Add a Lookup Template component to the graph and open its Properties dialog.
b. On the Description tab of the Properties dialog, enter a label in the Label text box.
c. On the Parameters tab, set the RecordFormat parameter.
Here, we specify the DML record format of the lookup data file.
• Set the key parameter to the key we will use for the lookup.
• Load the lookup file using the lookup_load function inside a transform function.
DYNAMIC LOOKUP
For example, enter:
let lookup_identifier_type LID =
lookup_load(MyData, MyIndex, "MyTemplate", -1)
where:
LID is a variable to hold the lookup ID returned by the lookup_load function. This ID references the lookup file in memory.
The lookup ID is valid only within the scope of the transform.
MyData is the pathname of the lookup data file.
MyIndex is the pathname of the lookup index file.
If no index file exists, we must enter the DML keyword NULL. The graph creates an
index on the fly.
• The only lookup operations we can perform on block-compressed lookup data are exact and range.
• In addition, we must use only fixed-length keys for block-compressed lookup operations.
Compressed LOOKUP
Handling compressed versus uncompressed data:
The Co>Operating System manages memory differently when handling block-compressed
And uncompressed lookup data.
Uncompressed lookup data
Any file can serve as an uncompressed lookup file as long as the data is not compressed
and has a field you can define as a key.
We can also create an uncompressed lookup file using the WRITE LOOKUP (or
WRITE MULTIPLE LOOKUPS) component. The component writes two files:
a file containing the lookup data
and an index file that references the data file.
With an uncompressed lookup file, both the data and its index reside in memory. The
lookup function uses the index to find the probable location of the lookup key value in the
data file. Then it goes to that location and retrieves the matching record.
ICFF
An indexed compressed flat file (ICFF) is a specific kind of lookup file that can store large
volumes of data while also providing quick access to individual records.
Why use indexed compressed flat files?
A disadvantage of using an ordinary lookup file is that there is a limit to how much data we can
keep in it. What happens when the dataset grows large? Is there a way to maintain the
benefits of a lookup file without swamping physical memory? Yes, there is a way: it
involves using indexed compressed flat files.
ICFFs present advantages in a number of categories:
• Disk requirements — Because ICFFs store compressed data in flat files without the overhead associated with a DBMS, they
require much less disk storage capacity than databases — on the order of 10 times less.
• Memory requirements — Because ICFFs organize data in discrete blocks, only a small portion of the data needs to be loaded in
memory at any one time.
ICFF
• Speed — ICFFs allow us to create successive generations of updated information without any pause in processing. This means
the time between a transaction taking place and the results of that transaction being accessible can be a matter of seconds.
• Performance — Making large numbers of queries against database tables that are continually being updated can slow down a
DBMS. In such applications, ICFFs outperform databases.
• Volume of data — ICFFs can easily accommodate very large amounts of data — so large, in fact, that it can be feasible to take
hundreds of terabytes of data from archive tapes, convert it into ICFFs, and make it available for online access and processing.
ICFF
• ICFFs are usually dynamically loaded. To define an ICFF dataset, place a BLOCK-COMPRESSED LOOKUP TEMPLATE
component in your graph.
About the BLOCK-COMPRESSED LOOKUP TEMPLATE component :
• A BLOCK-COMPRESSED LOOKUP TEMPLATE component is identical to a LOOKUP TEMPLATE, except that in the former the
block_compressed and keep_on_disk parameters are set to True by default, while in the latter they are False.
Defining a BLOCK-COMPRESSED LOOKUP TEMPLATE component:
• When we place a BLOCK-COMPRESSED LOOKUP TEMPLATE component in the graph, we define it by specifying two
parameters:
RecordFormat — A DML description of the data
key — The field or fields by which the data is to be searched
Note: In a BLOCK-COMPRESSED LOOKUP TEMPLATE component, we do not provide a static URL for the dataset’s
location as we do with a lookup file. Instead, we specify the dataset’s location in a call to the lookup_load function when the data is
actually loaded.
Lookup Functions :
lookup -- Returns the first record from a lookup file that matches a specified expression.
lookup_local -- Behaves like lookup, except that this function searches only one partition of a lookup file
lookup_match_local -- Behaves like lookup_match, except that this function searches only one partition of a lookup file.
lookup_first -- Returns the first record from a lookup file that matches a specified expression. In Co>Operating System Version
2.15.2 and later, this is another name for the lookup function.
lookup_first_local -- Returns the first record from a partition of a lookup file that matches a specified expression. In Co>Operating
System Version 2.15.2 and later, this is another name for the lookup_local function.
lookup_last -- Returns the last record from a lookup file that matches a specified expression.
lookup_last_local -- Behaves the same as lookup_last, except that this function searches only one partition of a lookup file.
lookup_count -- Returns the number of records in a lookup file that match a specified expression.
lookup_next -- Returns the next successive matching record or the next successive record in a range,
if any, that appears in the lookup file.
lookup_previous -- Returns the record from the lookup file that precedes the record returned by the last successful call to a lookup
function.
lookup_range -- Returns the first record whose key matches a value in a specified range. For use only with block-compressed
lookup files.
lookup_range_count -- Returns the number of records whose keys match a value in a specified range. For use only with block-
compressed lookup files.
lookup_range_last -- Returns the last record whose key matches a value in a specified range.
----------------------------------------------------------------------------------------------------------------------
Answer: Some complex SQL statements contain grammar that is not recognized by the Ab Initio parser when unloading in
parallel. In this case you can use the ABLOCAL construct to prevent the input component from parsing the SQL (it will get passed
through to the database). It also specifies which table to use for the parallel clause.
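For example (an illustrative sketch, not from the original notes), the component's SQL might be written as select * from ABLOCAL(my_table), and the component substitutes its table-specific parallel clause in place of ABLOCAL(my_table) when the query is sent to each partition.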
We know the Rollup component in Abinitio is used to summarize groups of data records, so why do we use
Aggregate?
Aggregation and Rollup, both are used to summarize the data.
- Aggregate does not display the intermediate results in main memory, whereas Rollup can.
- Double-click the transform parameter on the Parameters tab of the component's Properties dialog.
- The key is used for mapping values based on the data available in a particular file.
- Hash joins can be replaced by a Reformat with a lookup, provided the lookup input to the join contains a small number of records with a slim record length.
- Abinitio has functions for retrieving values from the lookup using the key.
What is a ramp limit?
- The ramp parameter contains a real number representing the rate of reject events for processed records.
- The formula is: number of bad records allowed = limit + (number of records x ramp).
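- As an illustration (the numbers are made up): with limit = 10, ramp = 0.01 and 1,000 records processed, up to 10 + 1,000 x 0.01 = 20 rejected records are tolerated before the component aborts the job.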
- The Rollup component allows users to group records on certain field values.
- It is a multi-stage transform function and contains the initialize, rollup (iteration), and finalize functions.
- The finalize function is called only once, at the end of the last rollup call.
What is the difference between partitioning with key / hash and round robin?
- Partition by key (hash): the partitioning technique used when the keys are diverse; large data skew can occur when one key value is present in large volume.
- Partition by round-robin: this technique distributes the data uniformly across all destination partitions; when the number of records is divisible by the number of partitions, the skew is zero.
- Make sure that a limited number of components are used in a particular phase.
- Use an optimum value of max-core for the sort and join components.
- Use the minimum number of sorted joins and replace them with in-memory / hash joins where needed and possible.
- Use a sorted join when the two inputs are huge; otherwise use a hash join.
What is the function that transfers a string into a decimal?
- Use a decimal cast with the size in the transform function when the size of the string and the decimal are the same.
- out.field :: (decimal(5))string_lrtrim(string_substring(in.field,1,5))
- The string_lrtrim function is used to remove leading and trailing spaces from the string.
Describe the Evaluation of Parameters order.
- Suppose there is a need for a dynamic field that is to be added to a predefined DML while executing the graph.
- For example: define a parameter named myfield with the value "string("|") name;".
Ex: the decimal_strip function extracts the numeric part of a string:
decimal_strip("-0184o") := "-184"
decimal_strip("oxyas97abc") := "97"
decimal_strip("+$78ab=-*&^*&%cdw") := "78"
decimal_strip("Honda") := "0"
State the first_defined function with an example.
- first_defined returns its first non-NULL argument, e.g. first_defined(NULL, 5) returns 5; it is commonly used to supply default values for NULL fields.
What is MAX CORE?
- MAX CORE is the amount of memory a component may consume for its calculations.
- The process may slow down or speed up depending on whether the MAX CORE value is set correctly.
What are the operations that support avoiding duplicate records?
- Performing aggregation (e.g. rollup) or using the Dedup Sort component.
- Pipeline parallelism: data is passed from one component to another while both components work on it simultaneously.
State the relation between EME, GDE and the Co>Operating System.
EME:
- It is the repository for Ab Initio. It holds transformations, database configuration files, metadata and target information.
GDE:
- It is the Graphical Development Environment, used to build, run and manage Ab Initio graphs and to control and monitor the ETL processes.
Co>Operating System:
- It provides the Ab Initio extensions to the operating system and executes the graphs.
Deadlock:
- If a graph's flows diverge and converge within a single phase, there is a potential for deadlock.
- A component might wait for records to arrive on one flow at the point where the flows converge, even though unread data accumulates on the others.
Check point:
- When a graph fails in the middle of the process, a recovery point is created, known as a checkpoint.
- The rest of the process will be continued after the checkpoint.
- Data from the checkpoint is fetched and execution continues after correction.
Phase:
- If a graph is created with phases, each phase is assigned some part of memory, one after another.
What is the lookup function used to retrieve duplicate data records from a lookup file?
Use lookup_count to find the number of duplicates and lookup_next to retrieve them.
Input file
col1
1
2
3
4
5
6
7
8
output file
col1 col2 col3 col4
1 2 3 4
5 6 7 8
How to achieve this?
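The notes do not include an answer here; one minimal approach (a sketch, assuming the input is a serial, newline-delimited single-column file whose record count is a multiple of 4) is simply to re-read, or REDEFINE FORMAT, the flow with a DML that packs every four input lines into one record:
record
  decimal("\n") col1;
  decimal("\n") col2;
  decimal("\n") col3;
  decimal("\n") col4;
end;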
• Layout
1)Layout determines the location of the resources.
2)Layout is either Serial or Parallel.
3)Serial layout specifies the one node or one directory.
4)Parallel layout specifies the multiple nodes or multiple directories.
Phase:
Phases basically break the graph up into blocks for performance tuning. Phasing limits the number of simultaneous processes by splitting the graph into different phases, and its main use is to avoid deadlock. The temporary files generated at a phase break are deleted at the end of the phase regardless of whether the job succeeded or not.
Checkpoint:
The temporary files generated at a checkpoint are not deleted, so the job can be restarted from the last good phase. Checkpoints are used for recovery.
Multifiles
Multifiles are parallel files composed of individual files, which may be located on separate disks or systems. These
individual files are the partitions of the multifile.
An AbInitio multifile organizes all partitions of a multifile into a single virtual file that you can reference as one entity.
You organize multifiles by using a multifile system, which has a directory tree structure that allows you to work with
multifiles.
A multifile has a control file that contains URLs pointing to one or more data files.
3) Data parallelism: This is the most common form of parallelism, where you partition your data so it can be
processed faster. It is achieved through partitioning: for example, you have 1000 records and you divide them
across 8 computers to process them faster.
Packages: Deployment:
• Two types of packages are present:
1. Full
2. Incremental
• An incremental package contains only the objects that have been modified.
• 2. Config file: contains information about the TAG name, EME project path and sandbox path.
• 3. Save file: contains details about the objects and their associated fields.
Reformat:
• Example:
Variable: TEMP_ACC_YR
(decimal(4))date_year((date("YYYY-MM-DD"))in.LOSS_DT)
Business rule for ACC_YR_CAT_CD:
else
string_concat(TEMP_ACC_YR, CAT_CD)
2) Join:--
• Dedup Sort:
Dedup Sorted separates one particular record from each group of records.
The input for the Dedup Sorted component must always be grouped, as it operates on groups.
The key parameter of the Dedup Sort should be the same key on which the input is grouped.
Example:
Sort Within Groups sorts the records within a group that has been created by already sorting the records.
For this component the major-key parameter contains the field on which the data is already sorted.
The minor-key parameter contains the field on which the component will sort the data.
Example
I have a file containing 5 unique rows and I am passing them through a SORT component using a null key, then passing the output of SORT to Dedup Sort. What will happen? What will be the output?
Answer: If there is no key used in the Sort component, then while using the Dedup Sort the output depends on the keep parameter.
If it is set to first, the output will have only the first record;
if it is set to last, the output will have only the last record;
if it is set to unique_only, there will be no records in the output file.
dedup:
{} key - one record in sequence goes to the out port (in case of keep = first)
Null key in data - the first null goes to the out port (in case of keep = first)
Case 1: If we take a null key in dedup sort, the output depends on the keep parameter:
keep = first: 1st record
last: last record
unique_only: 0 records
Case 2: If we take any real key in dedup, the output will be all 5 records (since the input file contains unique rows only).
Question:
I have some queries regarding the Sort and Sort Within Groups components...
i) Which one is more useful?
ii) Do they both work on the same logic?
iii) My file is already sorted on account number but now I want to sort on 2 more keys.
iv) In such a case my major key will be acct_num and the minor keys will be the other 2 keys on which I want to sort my file.
v) I have referred to the component help but it still has not completely clarified all my points.
Answer:
If your file is sorted on acct_num and you want to sort on 2 other keys, you can use Sort Within Groups, provided acct_num is your first preferred key.
For example:
If you require the file to be sorted on acct_num, key2, key3... you can use Sort Within Groups.
But if you require the file to be sorted on keys as key1, acct_num, key2 then you will have to use the Sort component.
It is preferred to use Sort Within Groups wherever applicable, as it reduces the number of keys on which the sort needs to be done, which helps performance.
• Rollup :
The Rollup component groups input records on a key parameter and performs aggregation functions such as count(), sum(), avg(), max(), etc. within the group.
Scan :
Scan creates a series of cumulative aggregate or summarized records for grouped data.
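For instance, an expanded Scan computing a running total might look like this (a sketch; the key and field names are illustrative):
type temporary_type = record
  decimal(12.2) running_total;
end;

temp :: initialize(in) =
begin
  temp.running_total :: 0;
end;

temp :: scan(temp, in) =
begin
  temp.running_total :: temp.running_total + in.amount;
end;

out :: finalize(temp, in) =
begin
  out.cust_id       :: in.cust_id;         /* the scan key */
  out.running_total :: temp.running_total; /* cumulative total so far within the group */
end;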
Answer:
I was able to solve the above issue after I used the code below in my finalize function in the scan.
It sets "Z" for the first record, but my requirement is that if I have only 1 record, then I need to set the value as "N" instead.