
WHITE PAPER ON TERADATA PERFORMANCE TUNING

PERFORMANCE TUNING IN TERADATA


PREFACE
1. Performance Tuning Overview
2. Teradata Performance Tuning - Basic Tips
3. Collection of Statistics Using SAMPLE
4. Advantages of Collecting Stats Using SAMPLE
5. Performance Tuning Tips Examples
6. Performance Improvement Tips
7. Teradata SQL Query Optimization
8. Conclusion

Performance Tuning Overview:


Teradata Corporation is an American computer company that sells database software for data warehouses and analytic applications, including Big Data. Its products consolidate data from different sources and make that data available for analysis. This paper walks you through the process of achieving meaningful performance improvements and shows how a system can be fine-tuned using the features of Teradata.

Teradata Performance Tuning - Basic Tips


Performance tuning thumb rules: here are the very basic steps used to tune any given query in a given environment.

As a pre-requisite, make sure that:
- The user has the proper SELECT rights and actual profile settings
- Enough space is available to run and test the queries

1. Run the explain plan (press F6, or prefix the query with EXPLAIN). Look for potential problem indicators such as:
- "no confidence" or "low confidence" estimates
- Product join conditions
- "by way of an all-rows scan" (full table scan, FTS)
- Translate (implicit data type conversion)
Also check for:
- DISTINCT or GROUP BY keywords in the SQL query
- IN / NOT IN keywords, and the size of the list of values generated for them

APPROACHES

A. In case of product join scenarios, check for:
- Proper usage of table aliases
- Joins on matching columns
- Usage of join keywords, i.e. specifying the type of join (e.g. INNER or OUTER)
- Use of UNION in case of "OR" scenarios

Ensure statistics are collected on the join columns; this is especially important if the columns you are joining on are not unique.

B. Collect stats (the commands are sketched in an example after this list):
- Run the command DIAGNOSTIC HELPSTATS ON FOR SESSION
- Gather information on the columns for which stats should be collected
- Collect stats on the suggested columns
- Also check for missing stats on the PI, SI, or columns used in joins: HELP STATISTICS <databasename>.<tablename>
- Make sure stats are re-collected when at least 10% of the data changes
- Remove unwanted stats, or stats that hardly improve the performance of queries
- Collect stats on columns instead of indexes, since dropping an index drops its stats as well
- Collect stats on indexes having multiple columns; this can be helpful when those columns are used together in join conditions
- Check that stats are re-created for tables whose structure has changed

C. Full table scan scenarios:
- Try to avoid FTS scenarios, as they can take a very long time to access all of the data on every AMP in the system
- Make sure an SI is defined on the columns that are used as part of joins or as an alternate access path
- Collect stats on SI columns; otherwise the optimizer may still choose an FTS even when an SI is defined on that column

2. If intermediate tables are used to store results, make sure they have the same PI as the source and destination tables.

3. Tune to get the optimizer to join on the Primary Index of the largest table when possible, so that the large table is not redistributed across the AMPs.

4. For a large list of values, avoid using IN / NOT IN in SQL. Write the large list of values to a temporary table and use that table in the query.

5. Be careful about when to use EXISTS / NOT EXISTS conditions, since comparisons involving unknowns are ignored (e.g. a NULL value in the column results in unknown), which can lead to inconsistent results.

6. Inner vs. outer joins: check which join works more efficiently in the given scenario. Some examples:
- Outer joins can be used when a large table joins a small table (such as a fact table joining a dimension table on a reference column)
- Inner joins can be used when we get only the actual matching data and no extra rows are loaded into spool for processing
Please note, for outer join conditions (a sketch follows below):
1. The filter condition for the inner table should be placed in the ON clause.
2. The filter condition for the outer table should be placed in the WHERE clause.
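A minimal sketch of the ON vs. WHERE placement described above; the table and column names (sales_fact, store_dim, store_id, region_cd, sale_dt) are hypothetical:

-- Filter on the inner (dimension) table goes in the ON clause,
-- so non-matching fact rows are still returned with NULLs.
-- Filter on the outer (fact) table goes in the WHERE clause.
SELECT f.store_id,
       f.sale_dt,
       d.region_cd
FROM   sales_fact f                        -- outer table
LEFT OUTER JOIN store_dim d                -- inner table
       ON  f.store_id = d.store_id
       AND d.region_cd = 'EAST'            -- inner-table filter: keep it in ON
WHERE  f.sale_dt >= DATE '2015-01-01';     -- outer-table filter: keep it in WHERE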
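As a companion to point B above, a minimal sketch of the stats-related commands; the database, table, and column names (mydb.sales_fact, store_id) are hypothetical:

DIAGNOSTIC HELPSTATS ON FOR SESSION;      -- EXPLAIN output will now list suggested statistics

EXPLAIN
SELECT * FROM mydb.sales_fact WHERE store_id = 100;

HELP STATISTICS mydb.sales_fact;          -- see which stats already exist and when they were collected

COLLECT STATISTICS ON mydb.sales_fact COLUMN (store_id);   -- collect on a suggested column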

COLLECTION OF STATISTICS USING SAMPLE:


Here is the step-by-step process for collecting statistics using SAMPLE.

SAMPLE statistics are collected using random-AMP sampling, and this approach is recommended when we do not yet have stats collected on an index or set of columns. As a preparation step, we check whether the table is suitable for SAMPLE statistics collection using the following query (suggested when data skew is low and when the table has more rows than the number of AMPs):

SEL TAB1.A AS TABLECOUNT,
    TAB2.B AS AMPCOUNT,
    CASE WHEN TABLECOUNT > AMPCOUNT
         THEN 'RANDOM AMP SAMPLING CAN BE SUGGESTED'
         ELSE 'RANDOM AMP SAMPLING NOT NEEDED'
    END
FROM (SEL COUNT(*) AS A FROM TABLENAME) TAB1,
     (SEL HASHAMP() + 1 AS B) TAB2;

Below is the step-by-step process for collecting statistics using SAMPLING:

1. Check whether stats are already collected on the column of the table for which random-AMP sampling is being considered, using:
HELP STATISTICS <YOUR_DB>.<YOUR_TB>;
If YES, the situation is tricky; whether you still want to try SAMPLING or look at other recommendations is up to you.

2. If NO, check whether the column is highly skewed using the following query.

SELECT HASHAMP (HASHBUCKET (HASHROW (<YOUR_COLUMN>))), COUNT(*)
FROM <YOUR_DB>.<YOUR_TB>
GROUP BY 1;

Check whether the data is equally distributed among all the AMPs (a variance of +/- 5% is acceptable). If there is a large amount of data skew on one AMP, then SAMPLING is not a good option.

3. If you do not find data skew on any particular AMP, run sample statistics on the column of that table as follows:
COLLECT STATISTICS USING SAMPLE ON <YOUR_DB>.<YOUR_TB> COLUMN (<YOUR_COLUMN>);

4. Check the performance of the query after collecting sample stats, and also note the time taken to collect them.

5. If you are not satisfied with the performance, try collecting full statistics on the columns and measure both the query performance and the time taken to collect the full stats.

6. Decide which is the better option, FULL STATS or SAMPLE, considering factors such as:
- Performance
- Time taken for statistics collection in each scenario
- Table size
- Data skew
- Frequency with which the table is loaded
- How many times the table is used in your environment

Advantages of Collecting Stats using SAMPLE:


1. Only a sample of the table rows is scanned, the default being 2%. The sample size is based on a random-AMP sampling estimate of the total number of rows. If you want to override the default value for a particular session, use:
DIAGNOSTIC "COLLECTSTATS, SAMPLESIZE=n" ON FOR SESSION;
2. It uses less CPU and I/O than full statistics, saving a considerable amount of time and resources.

SAMPLE statistics are not recommended for:
1. Columns which are not indexed
2. Indexes which have a lot of duplicates or non-unique combinations
3. Small tables such as dimension/key tables
4. Tables that have significant data skew

Please note that sample statistics cannot be collected on:
1. Global temporary tables

2. Join indexes

Performance Tuning Tips: IN-list vs. BETWEEN


There are a lot of questions floating around among people who do tuning. They sometimes suggest using temporary tables instead of a large IN-list, because the optimizer expands the list into conditions like "value in XXX or value in XXY or value in XXZ" when generating the explain plan. But what if the column compared against the large IN-list is part of an index, say a PI, SI, or JI? And sometimes, even after using a temp table with the list of values, you still get the same performance issue. Why?

Did you ever consider using a BETWEEN clause to check whether the query performs better? I would say give it a try and see whether it is a better option than the standard "temp table" approach against the IN-list. For example:

SELECT customer_number, customer_name
FROM customer
WHERE customer_number IN (1000, 1001, 1002, 1003, 1004);

is much less efficient than

SELECT customer_number, customer_name
FROM customer
WHERE customer_number BETWEEN 1000 AND 1004;

Assuming there is an index on customer_number, the optimizer can locate a range of numbers (using BETWEEN) much faster than it can find a series of individual numbers using the IN clause. Compare the EXPLAIN output of the two versions to see the difference. If they are still the same, refresh/collect stats on the index column; that should help. If you still see an issue, check the skew of the column using the following query and try to rectify it:

SEL HASHAMP(HASHBUCKET(HASHROW(customer_number))), COUNT(*)
FROM customer
GROUP BY 1;

Overall, trying the query with BETWEEN can be a great help. But query tuning is such a complex thing that you will never get what you want unless you understand the data.
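As a companion to the temp-table approach mentioned above, a minimal sketch of replacing a large IN-list with a volatile table; the filter table name (cust_filter) is hypothetical, while customer and customer_number come from the example above:

-- Materialize the long value list once, then join to it instead of using IN.
CREATE VOLATILE TABLE cust_filter (
    customer_number INTEGER NOT NULL
) PRIMARY INDEX (customer_number)
ON COMMIT PRESERVE ROWS;

INSERT INTO cust_filter VALUES (1000);
INSERT INTO cust_filter VALUES (1001);
-- ... one insert (or a batch load) per value in the original IN-list

SELECT c.customer_number, c.customer_name
FROM   customer c
INNER JOIN cust_filter f
       ON c.customer_number = f.customer_number;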

Performance Tuning Tips: Join Considerations



If you write queries, work on performance, or help improve performance, take some time to go through this topic. It is all about joins, which are one of the most important concerns in Teradata. If some attention is given to the following suggestions, most join-related issues can be taken care of.

Tip 1: Joining on PI / NUPI / non-PI columns

We should make sure the join happens on columns composed of a UPI/NUPI. But why? Whenever we join two tables on common columns, the optimizer brings the rows from both tables into a common spool space and joins them there to produce the result. Getting data from both tables into common spool has overhead. What if I join a very large table with a small table? Should the small table be redistributed, or the large table? Should the small table be duplicated across all the AMPs? Should both tables be redistributed across all the AMPs? Here are some basic thumb rules on joining indexed columns so that the join happens faster.

Case 1 - PI = PI joins: There is no redistribution of data across AMPs. The join is AMP-local, since the matching rows are already on the same AMP and need not be redistributed. These joins on a unique primary index are very fast.

Case 2 - PI = non-PI column joins: Data from the second table will be redistributed across all AMPs, since the join is between a PI and a non-PI column. The ideal scenario is when the small table is redistributed to be joined with the large table's rows on the same AMP. Alternatively, the small table may be duplicated to every AMP, where it is joined locally with the large table.

Case 3 - non-PI = non-PI column joins: Data from both tables is redistributed across all AMPs. This is one of the longest-running types of query, so care should be taken to collect stats on these columns.

Tip 2: The columns that are part of the join must be of the same data type (CHAR, INTEGER, ...). But why?

When joining columns from two tables, the optimizer checks that the data types match; if they do not, it translates the column in the driving table to match the other table. For example:

TABLE employee: deptno (CHAR)
TABLE dept: deptno (INTEGER)

If I join the employee table with dept on employee.deptno (CHAR) = dept.deptno (INTEGER), the optimizer converts the character column to INTEGER, resulting in a translation step. Imagine what happens if the employee table has 100 million records and deptno has to undergo translation every time. We have to avoid such scenarios, since translation is a cost factor and consumes time and system resources. Make sure you join columns that have the same data types, to avoid translation.

Tip 3: Do not use functions such as SUBSTR, COALESCE, or CASE on the indexed columns used in a join. Why? Such functions add to the cost and hurt performance. The optimizer cannot use the stats on a column that is wrapped in a function, because it has to evaluate the function first. This can result in product joins and spool-out issues, because the optimizer cannot make good decisions when no stats/demographics are usable for the column. It might assume the column has 100 values instead of 1 million and redistribute rows based on that wrong assumption, directly impacting performance.

Tip 4: Use NOT NULL wherever possible. What? Did someone say NOT NULL? Yes, we have to make sure to exclude NULLs for columns which are declared as nullable in the table definition. The reason is that all the NULL values can end up on one poor AMP, resulting in the infamous "NO SPOOL SPACE" error when that AMP cannot accommodate any more rows. So remember to exclude NULLs from join columns so that skew can be avoided.

Since V2R5, Teradata automatically adds the IS NOT NULL condition to the query. Still, it is better to explicitly ensure that nullable columns are handled when they are part of the join (a sketch of Tips 2 and 4 follows).
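A minimal sketch combining Tips 2 and 4; the employee, dept, and deptno names follow the example above, while emp_id, dept_name, and the CHAR(5) length are hypothetical assumptions:

-- Tip 2: align the data types explicitly on the small dimension side,
-- instead of letting the optimizer translate 100 million employee rows.
-- Tip 4: filter out NULL deptno values so they do not all land on one AMP.
SELECT e.emp_id,
       d.dept_name
FROM   employee e
INNER JOIN dept d
       ON e.deptno = CAST(d.deptno AS CHAR(5))   -- convert the small side once
WHERE  e.deptno IS NOT NULL;                     -- exclude NULLs from the join spool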

Performance Tuning Tips: Locking Table for Access


We come across this statement in many queries run in sensitive environments such as PROD and UAT. It can be used in views, or sometimes just for ad-hoc querying. I want to discuss how important this statement is in real-time / active data warehouses where many users run queries against the same database at the same time.

CREATE VIEW Employee.view_employ_withLock
AS
LOCKING TABLE Employee.Dept_emp FOR ACCESS
SELECT * FROM Employee.Dept_emp;

By using LOCKING TABLE ... FOR ACCESS, we make sure that only an "access" lock is applied on the table being read. By doing so, there is no waiting for other locks to be released, since an access lock can be placed on a table that already has a read or write lock. The query executes even when another lock is held, but the data retrieved under this lock may not be consistent; it can result in a dirty read if there is a concurrent write on the same table.

It is always suggested to use "locking table for access" in such views and queries, since it will not block other users from applying read/write locks on the table. A sketch of the same modifier used directly on an ad-hoc query follows.
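A minimal sketch of using the locking modifier directly on a query, without a view; the table name follows the example above, while the dept_no column is hypothetical:

LOCKING TABLE Employee.Dept_emp FOR ACCESS
SELECT dept_no, COUNT(*)
FROM   Employee.Dept_emp
GROUP BY 1;    -- reads through read/write locks, accepting a possible dirty read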

Performance Tuning Tips: LIKE Clause


While tuning queries in Teradata, we take care of the major performance issues but ignore small cases which can still cause a query to perform badly. I want to mention one such case, the LIKE clause, which many people who are good at performance tuning miss, assuming LIKE patterns do not harm performance. In reality this is not so.


If LIKE is used in a WHERE clause, it is better to use one or more leading characters in the pattern, if at all possible. For example, LIKE '%STRING%' is processed differently from LIKE 'STRING%'. If the pattern starts with a literal, as in 'STRING%', the optimizer can use an index to satisfy the query, improving performance. But if the leading character is a wildcard, as in '%STRING%', the optimizer cannot use an index, and a full table scan (FTS) must be run, which reduces performance and takes more time. Hence it is suggested to use '%STRING%' only when the value being searched for can appear inside a larger value, say 'SUBSTRING'.
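A minimal sketch contrasting the two patterns; the customer table and customer_name column are reused from the earlier example, and the assumption of an index on customer_name is hypothetical:

-- Leading literal: an index on customer_name can be used.
SELECT customer_number, customer_name
FROM   customer
WHERE  customer_name LIKE 'SMITH%';

-- Leading wildcard: the optimizer must fall back to a full table scan.
SELECT customer_number, customer_name
FROM   customer
WHERE  customer_name LIKE '%SMITH%';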

Performance Tuning Tips: Some More Tips


When it comes to performance tuning, we cannot stick to a fixed set of rules; it varies based on the data you are dealing with. We can, however, create a baseline and address issues based on the scenarios we face on a day-to-day basis.

1. Utilize Teradata's parallel architecture: If you understand what happens in the background, you will be able to make your query work at its best. So try running an explain plan on your query before executing it and see how the PE (Parsing Engine) has planned to execute it. Understand the keywords in the explain plan. I will have to write a more detailed post on this topic, but for now let us go on with the highlights.

2. Understand resource consumption: The resources you consume can be directly related to dollars, so be aware and frugal about the resources you use. The following are the factors you need to know and check from time to time:
a. CPU consumption
b. Parallel efficiency / hot-AMP percentage
c. Spool usage

3. Help the parser: Since the architecture has been made to be intelligent, we have to give it some respect. You can help the parser understand the data you are dealing with by collecting statistics.

But you need to be careful when you do so, for two reasons:
- Incorrect stats are worse than not collecting stats, so make sure your stats are not stale (old).
- If the dataset in your table changes rapidly, and you are dealing with a lot of data, then collecting stats itself can be resource-consuming. So, based on how frequently the table is accessed, you will have to make the call.

4. Since the same SQL can be written in different ways, you have to know which method is better, for example creating a volatile table vs. a global temporary table vs. a permanent working table. You cannot directly point out which is best, but I can touch on the pros, cons, and a comparison of them (a minimal DDL sketch of the first two options follows this list).

5. Take a step back and look at the whole process. Consider how much data you need to keep, how critical it is for your business to get the data quickly, and how frequently you need to run your SQL. Most of the time, the big picture will give you a lot of answers.
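A minimal sketch of the first two options mentioned in point 4; the table names (stg_sales, gt_sales) and columns are hypothetical:

-- Volatile table: definition and data live only for the current session,
-- no data-dictionary entry, dropped automatically at logoff.
CREATE VOLATILE TABLE stg_sales (
    sale_id   INTEGER,
    sale_amt  DECIMAL(12,2)
) PRIMARY INDEX (sale_id)
ON COMMIT PRESERVE ROWS;

-- Global temporary table: the definition is stored in the data dictionary
-- and shared by all users, but each session gets its own private contents.
CREATE GLOBAL TEMPORARY TABLE gt_sales (
    sale_id   INTEGER,
    sale_amt  DECIMAL(12,2)
) PRIMARY INDEX (sale_id)
ON COMMIT PRESERVE ROWS;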

Performance Improvement Tips:


This section covers the key points to consider while analyzing and improving Teradata ETL performance. These points generally refer to the periodic or housekeeping activities that should be done regularly to avoid database performance problems.

Key points:
1. Stat collection
2. PackDisk
3. Data skew analysis
4. Lock monitoring (Locking Logger)
5. Session tuning (DBS Control parameters)

1) Stat collection: Statistics on the loaded tables are very important from a performance perspective. Statistics help the optimizer generate accurate and faster query plans. Stale statistics in a warehouse can lead to wrong query plans, which increase query processing time. Hence it is important to refresh the statistics periodically, at least in a warehousing environment. We can identify the tables which are frequently accessed and modified with inserts, updates, deletes, etc.


It is recommended to refresh the stats after every 10% of data change. We can collect statistics at the column level or at the index level.

Syntax:
COLLECT STATISTICS ON <table_name> COLUMN (column_name_1, ..., column_name_n);
or
COLLECT STATISTICS ON <table_name> INDEX (column_name_1, ..., column_name_n);

2) PackDisk: PackDisk is a utility that frees up cylinder space in the database. It must be run periodically, because in a warehouse environment large volumes of inserts and updates cause the physical storage to fragment through frequent data manipulation. The PackDisk utility allows us to restructure and physically reorder the data and free up space, much like defragmentation. Teradata also runs mini CYLPACKs automatically if cylinder space goes below the prescribed limit. Cylinder space is required for the merge operations performed during inserts, deletes, updates, etc. To run PackDisk we use the Ferret utility provided by Teradata, which can be run through the Teradata Manager tool or through a telnet session on a node. The commands that start the PackDisk utility are given below; one can create a cron job to schedule them and run them periodically.

Commands to run the defrag and packdisk utilities (answering Y to the confirmation prompts):
~ferret
defrag
Y
packdisk fsp=8
Y

3) Skew analysis: The primary index of a table in Teradata is responsible for the distribution of its data across all the AMPs. Proper data distribution is required for parallel processing in the system. As Teradata follows a shared-nothing architecture, all the AMPs work in parallel. If data is evenly distributed among the AMPs, the amount of work done by each AMP is equal and the time required for a particular job is lower. In contrast, if only one or two AMPs are flooded with data (data skew), then while running that job those two AMPs will be working and the others will be idle. In this case we would not be utilizing the parallel processing power of the system.


To avoid such data skew, we need to analyze the primary indexes of the tables in the Teradata database. Over a period of time it can happen that data accumulates on a few AMPs, which can have an adverse effect on ETL as well as on overall system performance. To analyze the data distribution for a table we can use the built-in HASH functions provided by Teradata. To check the data distribution for a table, one can use the query:

SELECT HASHAMP (HASHBUCKET (HASHROW (Column_1, ..., Column_n))) AS AMP_NUM, COUNT(*)
FROM Table_Name
GROUP BY 1;

This query shows the distribution of records on each AMP. We can also analyze candidate PIs with this query, since it predicts how the data would be distributed across the AMPs.

4) Lock monitoring: Locking Logger is a utility that enables us to monitor locking on tables. Using this utility we can create a table that records the locks applied to tables during processing. This allows us to analyze the regular ETL process and find jobs being blocked at particular times when no one is monitoring locking. By analyzing such locking situations we can modify the jobs and avoid the waiting periods they cause. To use Locking Logger, we first need to enable it via the DBS console window or the cnsterm subsystem; the setting does not take effect until the database is restarted.

LockLogger: this field defines the system default for the Locking Logger. It allows the DBA to log the delays caused by database locks, to help in identifying lock conflicts. To enable the feature, set the field to TRUE; to disable it, set the field to FALSE. After a database restart with the LockLogger flag set to TRUE, the Locking Logger begins to accumulate lock information in a circular memory buffer of 64 KB. Depending on how frequently the system encounters lock contention, this buffer will wrap, but it usually spans several days. Following a period of lock contention, to analyze the lock activity you need to run the dumplocklog utility, which moves the data from the memory buffer to a database table where it can be accessed.

5) Session tuning: Session tuning is done so that the load utilities can run in parallel;


this requires analyzing some DBS Control parameters and tuning them to provide the best parallel processing for the load utilities. Two parameters, MaxLoadAWT and MaxLoadTasks, control parallel utility management; a short note on each follows.

The MaxLoadAWT internal field serves two purposes:
1) Enabling a higher limit for the MaxLoadTasks field, beyond the default limit of 15.
2) Specifying the AMP Worker Task (AWT) limit for concurrent FastLoad and MultiLoad jobs when the higher limit is enabled.

In effect, this field allows more FastLoad, MultiLoad, and FastExport utilities to run concurrently while controlling AWT usage and preventing excessive consumption and possible AWT exhaustion. The default value is zero.

When MaxLoadAWT is zero:
- The concurrency limit operates in the same manner as prior to V2R6.1.
- MaxLoadTasks specifies the concurrency limit for all three utilities: FastLoad, MultiLoad, and FastExport.
- The valid range for MaxLoadTasks is 0 to 15.

When MaxLoadAWT is non-zero (higher limit enabled):
- It specifies the maximum number of AWTs that can be used by FastLoads and MultiLoads. The maximum allowable value is 60% of the total AWTs.
- The valid range for MaxLoadTasks is 0 to 30.
- A new FastLoad/MultiLoad job is allowed to start only if BOTH the MaxLoadTasks AND MaxLoadAWT limits have not been reached; therefore, jobs may be rejected before the MaxLoadTasks limit is exceeded.
- MaxLoadTasks specifies the concurrency limit for the combination of only two utilities: FastLoad and MultiLoad. FastExport is managed differently and is no longer controlled by the MaxLoadTasks field. A FastExport job is only rejected if the total number of active utility jobs is 60; at least 30 FastExport jobs can run at any time, and a FastExport job may be able to run even when FastLoad and MultiLoad jobs are being rejected.
- When a Teradata Dynamic Workload Manager (TDWM) utility throttle rule is enabled, the MaxLoadAWT field is overridden; TDWM uses the highest allowable value, which is 60% of the total AWTs.
- An update to MaxLoadAWT becomes effective once the DBS Control record has been written; no DBS restart is required. Note that when the total number of AWTs (specified by the internal field MaxAMPWorkerTasks) has been modified but a DBS restart has not occurred, there may be a discrepancy between the actual number of AWTs and the DBS Control record. The system may internally reduce the effective value of MaxLoadAWT to prevent AWT exhaustion.

AWT usage of load utilities: all load/unload utilities require and consume AWTs at different rates depending on the execution phase:
- FastLoad: Phase 1 (Loading): 3 AWTs; Phase 2 (End Loading): 1 AWT
- MultiLoad: Acquisition phase (and before): 2 AWTs; Application phase (and after): 1 AWT
- FastExport: all phases

This description is for the single-target-table case, which is the most common. The parameters explained above can be analyzed and tuned accordingly to achieve the expected performance on the Teradata system. We also need some maintenance/housekeeping activities in place to avoid the performance implications of physical data issues such as data skew and low cylinder space.

Teradata SQL Query Optimization:


SQL and Indexes:

1) Primary indexes: Use primary indexes for joins whenever possible, and specify in the WHERE clause all the columns of the primary index.

2) Secondary indexes (the 10% rule rumor): The optimizer does not actually use a 10% rule to decide whether a secondary index will be used, but it is a good estimation: if less than 10% of a table would be accessed using the secondary index, assume the SQL will use it; otherwise the execution will do a full table scan. The optimizer actually uses a least-cost method: it determines whether the cost of using the secondary index is cheaper than the cost of doing a full table scan. The cost involves CPU usage and disk I/O counts.

3) Constants: Use constants to specify index column contents whenever possible, instead of specifying the constant once and joining the tables. This may provide a small saving in performance.

4) Mathematical operations: Mathematical operations are faster than string operations (i.e. concatenation), if both can achieve the same result.

5) Variable-length columns: The use of variable-length columns should be minimized and should be the exception; fixed-length columns should normally be used to define tables.

6) Union: The UNION command can be used to break up a large SQL process or statement into several smaller SQL processes or statements, which run in parallel, but these can then cause spool-space limit problems. UNION ALL executes the SQL single-threaded.

7) WHERE IN / WHERE NOT IN (subquery): The SQL WHERE IN is more efficient than WHERE NOT IN. It is more efficient to specify constants in these, but if a subquery is specified, the subquery has a direct impact on the SQL time. If the subquery causes a timing problem, it can be separated from the original query. This requires two SQL statements and an intermediate table: 1) a new SQL statement which performs the previous subquery's function and inserts into the temporary table, and 2) the modified original SQL statement, which no longer has the subquery and instead reads the temporary table (see the sketch after this list).

8) Strategic semicolon: Every SQL statement ends with a semicolon. In some cases, strategic placement of this semicolon can improve the elapsed time of a group of SQL statements, although it will not improve an individual statement's time. Two such cases: 1) the group's time can improve if the statements share the same tables (or spool files), and 2) the group's time can improve if several SQL statements use the same Unix input file.
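A minimal sketch of point 7; the table and column names (orders, bad_customers, excl_cust, cust_id) are hypothetical, and the join form assumes cust_id is unique in the intermediate table:

-- Step 1: materialize the subquery result in an intermediate (volatile) table.
CREATE VOLATILE TABLE excl_cust AS (
    SELECT DISTINCT cust_id
    FROM   bad_customers
) WITH DATA
PRIMARY INDEX (cust_id)
ON COMMIT PRESERVE ROWS;

-- Step 2: the original query reads the intermediate table instead of the subquery.
SELECT o.order_id, o.cust_id, o.order_amt
FROM   orders o
INNER JOIN excl_cust x
       ON o.cust_id = x.cust_id;   -- replaces WHERE o.cust_id IN (SELECT ...)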

Reducing Large SQLs:


The following methods can be used to scope down the size of SQLs.

1) Table denormalization: Duplicating data in another table. This provides faster access to the duplicated data, but requires more update time.

2) Table summarization: Data from one or many tables is summarized into commonly used summary tables. This provides faster access to the summarized data, but requires more update time.

3) SQL union: The DBC/SQL UNION can be used to break up a large SQL process or statement into several smaller SQL processes or statements, which run in parallel (see the sketch after this list).

4) Unix split: A large Unix input file can be split into several smaller files, which can then be input in series or in parallel to create smaller SQL processing steps.

5) Unix concatenation: A large query can be broken up into smaller independent queries whose output is written to several smaller Unix files. These smaller files are then concatenated together to provide a single Unix file.


6) Trigger tables: A group of tables, each of which contains a subset of the keys of an original table's index. The tables can be created based on some value in the index of the original table. This makes it possible to break up a large SQL statement into multiple smaller statements, but creating the trigger tables requires more update time.

7) Sorts (ORDER BY): Although sorts take time, they are always done at the end of the query, and the sort time is directly dependent on the size of the result set. Unnecessary sorts should be eliminated.

8) Export/load: Table data can be exported (BTEQ, FastExport) to a Unix file, updated there, and then reloaded into the table (BTEQ, FastLoad, MultiLoad).

9) C programs / Unix scripts: Some data manipulation is very difficult and time-consuming in SQL. Such manipulation can be replaced with C programs or Unix scripts. See the C / embedded SQL tip.
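A minimal sketch of point 3, splitting one large aggregation into two smaller pieces by a range predicate; the table and column names (sales, sale_dt, sale_amt) and the date boundaries are hypothetical:

-- Each branch scans a smaller slice of the data; UNION combines the
-- partial results (at the cost of extra spool for the combined set).
SELECT sale_dt, SUM(sale_amt) AS tot_amt
FROM   sales
WHERE  sale_dt BETWEEN DATE '2015-01-01' AND DATE '2015-06-30'
GROUP BY 1
UNION
SELECT sale_dt, SUM(sale_amt) AS tot_amt
FROM   sales
WHERE  sale_dt BETWEEN DATE '2015-07-01' AND DATE '2015-12-31'
GROUP BY 1;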

Reducing Table Update Time:


1) Table update time can be improved by dropping the table's indexes first and then doing the updates. After the updates complete, rebuild the indexes and recollect the table's statistics on them. The best improvement is obtained when the volume of updates is large relative to the size of the table, roughly when more than 5% of a large table changes (a sketch follows below).

2) Try to avoid dropping a table; instead, delete from the table. Table-related statements (i.e. CREATE TABLE, DROP TABLE) are single-threaded through a system permissions table and can become a bottleneck; they can also cause deadlocks on the dictionary tables. Also, any user permissions specific to the table are dropped when the table is dropped, and these permissions must be recreated.
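A minimal sketch of point 1; the table, index, and column names (sales, sale_dt_idx, sale_dt, stg_sales) are hypothetical:

-- 1. Drop the secondary index before the large update.
DROP INDEX sale_dt_idx ON sales;

-- 2. Perform the bulk update (here, a large insert-select from a staging table).
INSERT INTO sales
SELECT * FROM stg_sales;

-- 3. Rebuild the index and recollect statistics on it.
CREATE INDEX sale_dt_idx (sale_dt) ON sales;
COLLECT STATISTICS ON sales INDEX (sale_dt);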

Conclusion:
Teradata is a system which can process complex queries very fast. The Teradata database is linearly scalable: we can expand the database capacity just by adding more nodes to the existing system, and if the data volume grows we can add more hardware to expand capacity further. Teradata has extensive parallel processing capability; it can handle multiple ad-hoc requests and many concurrent users. The Teradata database has a shared-nothing architecture, with high fault tolerance and data protection. Another advantage is the uniform distribution of data through unique primary indexes, without any overhead. Performance is impressive for huge data volumes, and Teradata is excellent at handling very large data.

