Saturday, September 25, 2010
I was wondering whether we could pass parameters to a BTEQ script. After doing some research, I found the closest way to pass a parameter to a BTEQ script: create a batch file (".bat") and pass the parameter to it.
Let us try this for following select query
select * from user.tab_test1 where test1='2010';
The batch file is created as follows (script2.bat)
echo .logon localhost/test,test >>script1.txt
echo select * from user.tab_%1 where %1 ='2010'; >>script1.txt
echo .logoff >> script1.txt
echo .quit >> script1.txt
bteq < script1.txt
Run the batch file with the parameter value (for example, script2 test1), and we can see the following execution steps:
C:\>echo .logon localhost/test,test 1>>script1.txt
C:\>echo select * from user.tab_test1 where test1='2010' ; 1>>script1.txt
C:\>echo .logoff 1>>script1.txt
C:\>echo .quit 1>>script1.txt
BTEQ 08.02.00.00 Thu Jul 18 09:33:12 2010
*** Logon successfully completed.
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.
*** Total elapsed time was 5 seconds.
select * from user.tab_test1 where test1='2010';
*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 0
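The same parameter-passing idea can be sketched outside of Windows batch. Below is a small Python sketch that generates the BTEQ script; the host, credentials and table names are the placeholder values from the example above, not real ones.

```python
import sys

def build_bteq_script(table_suffix, out_path="script1.txt"):
    """Write the same BTEQ script the batch file builds with echo.
    Logon string and table name are placeholders from the example."""
    lines = [
        ".logon localhost/test,test",
        f"select * from user.tab_{table_suffix} where {table_suffix} = '2010';",
        ".logoff",
        ".quit",
    ]
    with open(out_path, "w") as f:   # overwrite, so reruns stay clean
        f.write("\n".join(lines) + "\n")
    return lines

if __name__ == "__main__":
    # usage: python gen_bteq.py test1   (then: bteq < script1.txt)
    build_bteq_script(sys.argv[1] if len(sys.argv) > 1 else "test1")
```

Unlike echo with `>>`, writing the whole file at once means re-running the generator never appends duplicate lines.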
Note: Usually comparison in Teradata is not case-specific. To enforce case sensitivity, we apply the CASESPECIFIC attribute to the column, for example:
WHERE product_code (CASESPECIFIC) LIKE '%Ra%';
Sunday, September 19, 2010
An ACCESS lock is one wherein the table is not blocked: you can do insert/update/delete on the table while another request holds an access lock on it. With an access lock, dirty reads may happen, which means your answer set may not reflect the latest changes made to the table.
A READ lock blocks writers: you cannot do inserts/updates or structural changes on the table while it is held.
A READ lock is placed by a simple SELECT statement, or by explicitly specifying LOCKING FOR READ.
read lock :
LOCKING TABLE FOR READ;
access lock :
LOCKING TABLE FOR ACCESS;
The main difference between a read lock and an access lock is data integrity. On placing a read lock the user can expect data integrity, whereas with an access lock the user cannot expect data integrity.
Consider following scenarios
1. User A places a READ lock and User B places a WRITE lock.
User B has to wait for User A to complete its read before it can start inserts/updates/deletes on the data.
2. User A places an ACCESS lock and User B places a WRITE lock.
Users A and B access the data simultaneously, hence User A cannot expect to get consistent results.
Wednesday, September 15, 2010
How to create a Macro
Create a macro to generate a DOB list for department 321:
CREATE MACRO DOB_Details AS
(SELECT first_name, last_name, DOB
FROM employee
WHERE dept_numbr = 321
ORDER BY DOB ASC;);
EXECUTE a Macro
To execute a macro, call it with the EXEC command:
EXEC DOB_Details;
last_name first_name DOB
Ram Kumar 75/02/22
Laxman Sinha 79/04/06
DROP a Macro
To drop a macro, use the following command:
DROP MACRO DOB_Details;
REPLACE a Macro
If we need to modify an existing macro, instead of dropping and re-creating it we can use the REPLACE MACRO command as follows:
REPLACE MACRO DOB_Details AS
(SELECT first_name, last_name, DOB
FROM employee
WHERE dept_numbr = 321
ORDER BY DOB, first_name;);
Parametrized macros allow the usage of variables, and the advantage is that values can be passed to these variables at run-time.
CREATE MACRO dept_list (dept INTEGER) AS
(SELECT * FROM employee
WHERE dept_numbr = :dept; );
To Execute the macro
EXEC dept_list (321);
Macros may have more than one parameter. Each name and its associated type are separated by a comma from the next name and its associated type. The order is important. The first value in the EXEC of the macro will be associated with the first value in the parameter list. The second value in the EXEC is associated with the second value in the parameter list, and so on.
CREATE MACRO emp_verify (dept INTEGER, sal DEC(18,0)) AS
(SELECT * FROM employee
WHERE dept_numbr = :dept
AND salary < :sal;);
To Execute this macro
EXEC emp_verify (301, 50000);
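The positional binding described above (first EXEC value to first parameter, second to second) can be sketched in plain Python; the function, table and column names here are illustrative, not a Teradata API.

```python
def exec_macro(template, param_names, *args):
    """Bind EXEC-style positional values to named macro parameters:
    the first value goes to the first parameter, the second to the
    second, and so on. Raises if the counts do not match."""
    if len(args) != len(param_names):
        raise ValueError("EXEC supplied %d values for %d parameters"
                         % (len(args), len(param_names)))
    sql = template
    for name, value in zip(param_names, args):
        sql = sql.replace(":" + name, str(value))  # :dept -> 301, :sal -> 50000
    return sql

# Mirrors: EXEC emp_verify (301, 50000);
sql = exec_macro(
    "SELECT * FROM employee WHERE dept_numbr = :dept AND salary < :sal;",
    ["dept", "sal"], 301, 50000)
```

Swapping the two values in the call would silently swap their meanings, which is exactly why the order of EXEC arguments matters.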
Key points to note about Macros:
- Macros are a Teradata extension to SQL.
- Macros can only be executed with the EXEC privilege.
- Macros can provide column level security.
- A user needs only the EXEC privilege on a macro to run it; it is the macro's owner, not the executing user, who must hold privileges for the underlying tables or views that the macro uses.
Teradata uses HASH values to store data in AMPs. To view data distribution we use hash functions.
Hash functions are usually applied to the primary index columns to find the data distribution. We can identify skewness by using this concept.
Following query can be used to find hash values of PI columns
SELECT HASHAMP(HASHBUCKET(HASHROW(<PRIMARY_INDEX_COLUMNS>))) AS AMP_NO, COUNT(*)
FROM <DATABASE_NAME>.<TABLE_NAME>
GROUP BY 1
ORDER BY 2 DESC;
By looking at the result of this query, you can easily find the data distribution across all AMPs in your system and further easily identify uneven data distribution.
HASHROW - returns the row hash value for a given value
HASHBUCKET - the grouping of a specific hash value
HASHAMP - the AMP that is associated with the hash bucket
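To see why a query like this exposes skew, here is a hedged Python simulation. MD5 stands in for Teradata's proprietary hashing algorithm, and the 4-AMP system size is an assumption for illustration only.

```python
from collections import Counter
import hashlib

NUM_AMPS = 4  # illustrative system size

def amp_for(pi_value, num_amps=NUM_AMPS):
    """Stand-in for HASHAMP(HASHBUCKET(HASHROW(pi))). MD5 is only a
    deterministic toy hash here, not Teradata's hashing algorithm."""
    digest = hashlib.md5(str(pi_value).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_amps

def rows_per_amp(pi_values):
    """Rows per AMP, analogous to GROUP BY 1 ORDER BY 2 DESC above."""
    return Counter(amp_for(v) for v in pi_values)

even = rows_per_amp(range(10_000))    # distinct PI values: near-even spread
skewed = rows_per_amp([42] * 10_000)  # one repeated PI value: all on one AMP
```

With distinct primary index values the counts come out roughly equal; with one repeated value every row lands on a single AMP, which is exactly the skew the SQL above would show.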
The following query will give the total space consumed by the table:
SELECT DATABASENAME, TABLENAME, SUM(CURRENTPERM)
FROM DBC.TABLESIZE
WHERE DATABASENAME = '<DATABASE_NAME>'
AND TABLENAME = '<TABLE_NAME>'
GROUP BY DATABASENAME, TABLENAME;
Following query will give the space consumed on each AMP by the Table
SELECT DATABASENAME, TABLENAME, CURRENTPERM
FROM DBC.TABLESIZE
WHERE DATABASENAME = '<DATABASE_NAME>'
AND TABLENAME = '<TABLE_NAME>';
Monday, September 13, 2010
2616 : Numeric overflow occurred during computation
Saturday, September 11, 2010
Friday, September 10, 2010
By using the ROW_NUMBER() function, we can number all the selected rows.
Then use the QUALIFY clause to pick the exact row number:
qualify row_number() over (order by columnA) = N;
Here 'N' is the particular row number.
P.S: The same approach can be used to select the top N records:
qualify row_number() over (order by columnA) <= N;
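The QUALIFY / ROW_NUMBER logic can be mimicked in plain Python as a sketch; the column name and sample values are illustrative.

```python
def qualify_row_number(rows, key, n):
    """Mimic: QUALIFY ROW_NUMBER() OVER (ORDER BY key) = n  (1-based)."""
    ordered = sorted(rows, key=key)
    return ordered[n - 1] if 0 < n <= len(ordered) else None

def top_n(rows, key, n):
    """Mimic: QUALIFY ROW_NUMBER() OVER (ORDER BY key) <= n."""
    return sorted(rows, key=key)[:n]

data = [{"columnA": v} for v in (30, 10, 20, 40)]
third = qualify_row_number(data, lambda r: r["columnA"], 3)
first_two = top_n(data, lambda r: r["columnA"], 2)
```

Sorting first and then slicing is the same two-step idea: number the rows in order, then keep only the row numbers you asked for.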
Here the SUM function is used over preceding rows in the SOURCE_TABLE:
sum(1) over( rows unbounded preceding ),
Here the ROW_NUMBER function is used to generate a row number on columnA:
ROW_NUMBER() over( ORDER BY columnA ),
If you have to use the row number concept in the target table as well, the following approach using an "identity column" (available from V2R6 onwards) can be used:
CREATE MULTISET TABLE TARGET_TABLE
( columnA INTEGER GENERATED BY DEFAULT AS IDENTITY
      (START WITH 1
       INCREMENT BY 20),
  columnB VARCHAR(20) NOT NULL
)
UNIQUE PRIMARY INDEX pidx (columnA);
P.S: Identity columns differ from the sequence concept in Oracle. The numbers assigned in these columns are not guaranteed to be sequential; the identity column in Teradata is used to guarantee row uniqueness.
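A rough Python sketch of why batch-reserved numbering gives uniqueness without sequential order; the batch size and session behavior here are simplified assumptions for illustration, not Teradata internals.

```python
class IdentityAllocator:
    """Each session (or vproc) reserves a whole batch of numbers up
    front, so values are unique across sessions but arrive out of
    order. The batch size of 5 is illustrative only."""

    def __init__(self, start=1, batch=5):
        self.next_start = start
        self.batch = batch

    def reserve(self):
        """Hand an entire numbering range to one session."""
        s = self.next_start
        self.next_start += self.batch
        return list(range(s, s + self.batch))

pool = IdentityAllocator()
session_a = pool.reserve()   # [1, 2, 3, 4, 5]
session_b = pool.reserve()   # [6, 7, 8, 9, 10]
# Interleaved inserts arrive out of numeric order, and numbers are lost
# for good if a session aborts before using its whole batch.
insert_order = [session_a[0], session_b[0], session_a[1]]
```

Every generated number is unique, yet the order in which rows receive them (1, 6, 2, ...) is neither sequential nor gap-free, matching the P.S. above.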
This works without use of the identity approach:
CREATE TABLE TARGET_TABLE AS
( SELECT ROW_NUMBER() OVER (ORDER BY columnA) AS NUMBER_SEQ
  FROM a JOIN b ON a.id = b.id
) WITH DATA;
Sunday, September 5, 2010
COLLECT STATISTICS gathers information like:
- total row counts of the table,
- how many distinct values are there in the column,
- how many rows per value, is the column indexed,
- if so unique or non unique etc.
What if collect stats is not done on the table?
Teradata uses a cost based optimizer and cost estimates are done based on statistics.
So if you do not have statistics collected, the optimizer will use a dynamic AMP sampling method to get the stats. If your table is big and the data is unevenly distributed, dynamic sampling may not get the right information and your performance will suffer.
How can I know the tables for which collect stats has been done?
Run the Help Stats command on that table.
e.g. HELP STATISTICS TABLE_NAME;
This will give you the date and time when stats were last collected, along with the stats for the columns (for which stats were defined) on the table.
Whenever collect stats is done on a particular table (say on an index or column), where can I find information regarding these entries?
Collected statistics are stored in the DBC.TVFields and DBC.Indexes tables. However, these two tables cannot be queried.
When to collect stats on tables which have stats ?
1. A typical guideline is to recollect when roughly 10% of the data has changed (by measuring the delta in perm space since stats were last collected).
2. Recollect based on stats that have aged 60-90 days (say the last collection was two months ago).
Please note :
Collecting stats can be pretty resource-consuming for large tables, so it is always advisable to schedule the job in an off-peak period.
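The two rules of thumb above can be sketched as a small helper; the 10% and 60-day thresholds are the guideline values from the text, not fixed Teradata settings, and the row counts are stand-ins for whatever delta measure a site actually tracks.

```python
def should_recollect(rows_at_last_collect, rows_now, days_since_collect,
                     change_threshold=0.10, max_age_days=60):
    """Recollect when ~10% of the data has changed OR the stats have
    aged past the threshold. Tune both numbers for your own site."""
    if rows_at_last_collect == 0:
        return rows_now > 0          # never-collected or empty baseline
    changed = abs(rows_now - rows_at_last_collect) / rows_at_last_collect
    return changed >= change_threshold or days_since_collect >= max_age_days
```

A scheduler job could run this check per table and only submit COLLECT STATISTICS for the tables that trip either rule, keeping the off-peak window short.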
Saturday, September 4, 2010
The purpose of a permanent journal is to maintain a sequential history of all changes made to the rows of one or more tables.
Permanent journals help protect user data when users commit, roll back or abort transactions.
A permanent journal can capture a snapshot of rows before a change, after a change, or both.
Permanent journaling is usually used to protect data.
Unlike the automatic journal, the contents of a permanent journal remain until you drop them.
When you create a new journal table, you can use several options to control the type of information to be captured.
We create a permanent journal when the user or database is created.
Consider the following example of database creation:
CREATE DATABASE testdat AS
PERM = 20000000,
SPOOL = 2000000,
ACCOUNT = '$xxxxx',
NO BEFORE JOURNAL,
AFTER JOURNAL,
DEFAULT JOURNAL TABLE = testdat.journal;
Here the admin has opted for only AFTER JOURNAL and has named the journal table "testdat.journal".
When a user creates a table in the database "testdat", by default AFTER JOURNAL protection is available to protect the data when a hardware failure occurs.
He can opt for NO AFTER JOURNAL by overriding the default. Following is the example.
Scenario1 : Here by default the table has AFTER JOURNAL option.
CREATE TABLE testdat.table1
( field1 INTEGER )
PRIMARY INDEX (field1);
Scenario 2: in this case, the user has specifically stated that he wants no AFTER JOURNAL for his data. This is how a user can override the default.
CREATE TABLE testdat.table2,
NO AFTER JOURNAL
( field1 INTEGER )
PRIMARY INDEX (field1);
In Scenario 1, whenever the user inserts/updates and the transaction is committed, the affected rows are backed up in the journal table "testdat.journal".
Please note :
You must allocate sufficient permanent space to a database or user that will contain permanent journals. If a database or user that contains a permanent journal runs out of space, all table updates that write to that journal abort.
- You could run a SHOW TABLE command to get the exact DDL, then change the datatype (say, CHAR(8)) to VARCHAR(10).
- Run the DDL script to create the new table.
- Then run an INSERT ... SELECT to copy the data into the new table.
- Since the PI of both tables is the same, the operation will be pretty fast.
- Then DROP the original table and RENAME the new one to the old name.
Inserting data into an empty table is very quick because there is no reference to the transient journal.
Please note, if you use:
- "CREATE new_table AS existing_table", all column attributes and indexes are preserved (only triggers and foreign keys are removed);
- but "CREATE new_table AS (SELECT * FROM existing_table)" will remove the NOT NULL and TITLE attributes.
Thursday, September 2, 2010
BTEQ, short for Basic TEradata Query,
is a general-purpose command-driven utility used to access and manipulate data
on the Teradata Database, and format reports for both print and screen output. 
As part of the Teradata Tools and Utilities (TTU), BTEQ is a
Teradata native query tool for DBA and programmers — a real Teradata workhorse,
just like SQLPlus for the Oracle Database. It enables users on a workstation to
easily access one or more Teradata Database systems for ad hoc queries, report
generation, data movement (suitable for small volumes) and database administration.
All database requests in BTEQ are expressed in Teradata
Structured Query Language (Teradata SQL). You can use Teradata SQL statements
in BTEQ to:
- Define data — create and modify data structures;
- Select data — query a database;
- Manipulate data — insert, delete, and update data;
- Control data — define databases and users, establish access rights, and secure data;
- Create Teradata SQL macros — store and execute sequences of Teradata SQL statements as a single operation.
BTEQ supports Teradata-specific SQL functions for doing
complex analytical querying and data mining, such as:
* RANK - (Rankings);
* CSUM - (Cumulative Sums);
* MAVG - (Moving Averages);
* MSUM - (Moving Sums);
* MDIFF - (Moving Differences);
* MLINREG - (Moving Linear Regression);
* ROLLUP - (One Dimension of Group);
* CUBE - (All Dimensions of Group);
* GROUPING SETS - (Restrict Group);
* GROUPING - (Distinguish NULL rows).
Noticeably, BTEQ supports conditional logic (i.e., "IF..THEN..."). It is useful for batch mode export / import tasks.
This section is based on Teradata documentation.
In a BTEQ session, you can access a Teradata Database easily and do the following:
- submit Teradata SQL statements to view, add, modify, and delete data;
- enter operating system commands;
- create and use Teradata stored procedures.
BTEQ operates in two modes: interactive mode and batch mode.
In interactive mode, you start a BTEQ session by entering BTEQ at
the system prompt on your terminal or workstation, and submit commands to the
database as needed. In batch mode, you prepare BTEQ scripts or macros, and then
submit them to BTEQ from a scheduler for processing. A BTEQ script is a set of
SQL statements and BTEQ commands saved in a file with the extension
".bteq"; however, it does not matter what file extension is used. The
BTEQ script can be run using the following command (in UNIX or Windows):
bteq < infile > outfile
Here infile is the BTEQ script, and outfile
is the output or log file.
This section is based on Teradata documentation,
and for the detailed usage, please refer to Reference 1.
BTEQ Command Summary
BTEQ commands can be categorized into four functional
groups, as described below:
- Session control — session control commands begin and end BTEQ sessions, and control session characteristics;
- File control — specify input and output formats and identify information sources and destinations;
- Sequence control — control the sequence in which other BTEQ commands and Teradata SQL statements will be executed within scripts and macros;
- Format control — control the format of screen and printer output.
1. Commands for Session Control
ABORT - abort any active requests and transactions without exiting BTEQ.
COMPILE - create or replace a Teradata stored procedure.
DECIMALDIGITS - override the precision specified by a CLI System Parameter Block, to indicate what the precision should be for decimal values.
DEFAULTS - reset BTEQ command options to the values that were set when BTEQ was first invoked.
EXIT - end the current sessions and exit BTEQ.
HALT EXECUTION - abort any active requests and transactions and exit BTEQ.
LOGOFF - end the current sessions without exiting BTEQ.
LOGON - start a BTEQ session.
LOGONPROMPT - bypass the warnings related to conventional LOGON command use.
QUIT - end the current sessions and exit BTEQ.
SESSION CHARSET - specify the name of a character set for the current session.
SESSION RESPBUFLEN - override the buffer length specified in resp_buf_len.
SESSION SQLFLAG - specify the disposition of warnings issued in response to SQL requests that do not conform to the ANSI standard.
SESSION TRANSACTION - specify whether transaction boundaries are determined by Teradata (BTET) or ANSI semantics.
SESSION TWORESPBUFS - specify whether CLI double-buffering is used.
SESSIONS - specify the number of sessions to use with the next LOGON command.
SHOW CONTROLS - display the current configuration of the BTEQ control command options.
SHOW VERSIONS - display the BTEQ version number, module revision numbers, and linking date.
TDP - specify the Teradata server for subsequent logons during the current session.
2. Commands for File Control
= - repeats the previous Teradata SQL request a specified number of times.
AUTOKEYRETRIEVE - enables users to specify whether the values of any fields associated with identity data are returned after rows are inserted.
CMS - executes a VM CMS command from within the BTEQ environment.
ERROROUT - routes the standard error stream and the standard output stream to files or devices.
EXPORT - specifies the name and format of an export file that BTEQ will use to store database information returned by a subsequent SQL SELECT statement.
FORMCHAR - enables suppression of the additional Page Advance ASA carriage control character.
HALT EXECUTION - aborts any active requests and transactions and exits BTEQ.
FORMAT - enables all of the page-oriented formatting commands, or disables them and produces export files suitable for data processing.
IMPORT - opens a channel- or network-attached system file, of the specified format, to provide data for USING modifiers of subsequent SQL statements.
INDICDATA - specifies the mode of information (INDICDATA and/or LARGEDATAMODE) returned from the Teradata Database in response to SQL SELECT statements.
MODE - specifies the response mode, either Field mode, Indicator mode, Record mode, or Multipart Indicator mode, for data selected from the Teradata Database.
LARGEDATAMODE - enables use of Teradata Database's Multipart Indicator response mode; SET LARGEDATAMODE is needed when more than 64K is required.
OS - executes an MS-DOS, PC-DOS, or UNIX command from within BTEQ.
QUIET - limits BTEQ output to errors and request processing statistics.
RECORDMODE - returns data from SQL SELECT statements in client-oriented data representations rather than character format.
REPEAT - submits the next request a specified number of times.
RUN - executes Teradata SQL requests and BTEQ commands from a specified run file.
TSO - executes an MVS TSO command from within the BTEQ environment.
3. Commands for Sequence Control
Use the following commands to control the sequence in which
BTEQ executes commands:
For the commands not listed below, refer to the tables in the other sections.
ERRORLEVEL - assigns severity levels to errors.
GOTO - skips over all intervening BTEQ commands and SQL statements until a specified LABEL command is encountered.
HANG - pauses BTEQ processing for a specified period of time.
IF...THEN - tests the validity of the condition stated in the IF clause.
LABEL - identifies the point at which BTEQ resumes processing, as specified in a previous GOTO command.
MAXERROR - designates a maximum error severity level beyond which BTEQ terminates job processing.
4. Format Control Commands
Use the following BTEQ commands to specify the way BTEQ presents information for screen-oriented and printer/printer-file oriented output:
For the commands not listed below, refer to the tables in the other sections.
ECHOREQ - enables the echo required function that returns a copy of each Teradata SQL request and BTEQ command to the standard output stream.
FOLDLINE - splits (folds) each line of a report into two or more lines.
FOOTING - specifies a footer to appear at the bottom of every page of a report.
HEADING - specifies a header to appear at the top of every page of a report.
NULL - specifies a character or character string to represent null field values returned from the Teradata Database.
OMIT - excludes specified columns returned from SQL SELECT statements.
PAGEBREAK - ejects a page whenever the value for one or more specified columns changes.
PAGELENGTH - specifies the page length of printed reports, in lines per page.
RETCANCEL - cancels a request when the value specified by the RETLIMIT command is exceeded.
RETLIMIT - specifies the maximum number of rows and/or columns displayed or written in response to a Teradata SQL request.
RETRY - resubmits requests that fail under certain error conditions.
RTITLE - specifies a header to appear at the top of every page of a report.
SEPARATOR - specifies a character string or width (in blank characters) to separate columns of a report.
SIDETITLES - positions summary titles to the left of the summary lines in a report.
SKIPDOUBLE - inserts two blank lines in a report whenever the value of a specified column changes.
SKIPLINE - inserts a blank line in a report whenever the value of a specified column changes.
SUPPRESS - replaces all consecutively repeated values with all-blank character strings.
TITLEDASHES - displays a row of dash characters before each report line summarized by a WITH clause.
UNDERLINE - displays a row of dash characters whenever the value of a specified column changes.
WIDTH - specifies the width of screen displays and printed reports, in characters per line.
Can you recover the password of a user in Teradata?
No, you can't recover the password of a user in Teradata. Passwords are stored in the data dictionary table DBC.DBASE using a one-way encryption method. You can view the encrypted passwords using the following query:
SELECT * FROM DBC.DBASE;
Explain Ferret Utility in Teradata?
Ferret (File Reconfiguration Tool) is a utility used to display and set disk space utilization parameters within the Teradata RDBMS. When you select the Ferret utility parameters, it dynamically reconfigures the data on the disks. We can run this utility through Teradata Manager; to start the Ferret utility, type START FERRET in the database window.
Following commands can be used within Ferret Utility:
1. SHOWSPACE – this command reports the amount of disk cylinder space in use and the amount of disk cylinder space available in the system. It gives information about permanent space cylinders, spool space cylinders, temporary space cylinders, journaling cylinders, bad cylinders and free cylinders. For each of these it presents three parameters: average utilization per cylinder, % of total available cylinders and number of cylinders.
2. SHOWBLOCKS – this command helps in identifying the data block size and the number of rows per data block. It displays the disk space information for a defined range of data blocks and cylinders.
Explain TPUMP (Teradata Parallel Data Pump) Utility in Teradata?
* TPUMP allows near real time updates from Transactional Systems into the Data Warehouse.
* It can perform Insert, Update and Delete operations or a combination from the same source.
* It can be used as an alternative to MLOAD for low volume batch maintenance of large databases.
* TPUMP allows target tables to have Secondary Indexes, Join Indexes, Hash Indexes, Referential Integrity, Populated or Empty Table, Multiset or Set Table or Triggers defined on the Tables.
* TPUMP can have many sessions, as it doesn't have a session limit.
* TPUMP uses row hash locks thus allowing concurrent updates on the same table.
How can you determine I/O and CPU usage at a user level in Teradata?
You can find out I/O and CPU Usage from this Data Dictionary Table DBC.AMPUSAGE;
SELECT ACCOUNTNAME, USERNAME, SUM(CPUTIME) AS CPU, SUM(DISKIO) AS DISKIO FROM DBC.AMPUSAGE GROUP BY 1,2 ORDER BY 3 DESC;
How can you find the Table Space Size of your table across all AMPs?
You can find the Table Space Size of your table from this Data Dictionary Table DBC.TABLESIZE
SELECT DATABASENAME, TABLENAME, SUM(CURRENTPERM) FROM DBC.TABLESIZE WHERE DATABASENAME = '<DATABASE_NAME>' AND TABLENAME = '<TABLE_NAME>' GROUP BY 1,2;
How can you find the Teradata Release and Version information from Data Dictionary Table?
To find Release and Version information you can query this Data Dictionary table DBC.DBCINFO
SELECT * FROM DBC.DBCINFO;
How can you track Login Parameters of users in Teradata?
You can view all these parameters in this Data Dictionary Table DBC.LOGONOFF
SELECT LOGDATE, LOGTIME, USERNAME, EVENT FROM DBC.LOGONOFF;
How can you use HASH FUNCTIONS to view Data Distribution across all AMPs in Teradata?
Hash Functions can be used to view the data distribution of rows for a chosen primary index.
HASHROW – returns the row hash value for a given value
HASHBUCKET – the grouping of a specific hash value
HASHAMP – the AMP that is associated with the hash bucket
By looking into the result set of the following query, you can easily find the data distribution across all AMPs in your system and further identify uneven data distribution:
SELECT HASHAMP(HASHBUCKET(HASHROW(<PRIMARY_INDEX_COLUMNS>))) AS AMP_NO, COUNT(*)
FROM <DATABASE_NAME>.<TABLE_NAME>
GROUP BY 1
ORDER BY 2 DESC;
How do you transfer large amount of data in Teradata?
Transferring large amounts of data can be done using various Teradata application utilities which reside on the host computer (mainframe or workstation), i.e. BTEQ, FastLoad, MultiLoad, TPump and FastExport.
* BTEQ (Basic Teradata Query) supports all 4 DMLs: SELECT, INSERT, UPDATE and DELETE. BTEQ also support IMPORT/EXPORT protocols.
* Fastload, MultiLoad and Tpump transfer the data from Host to Teradata.
* FastExport is used to export data from Teradata to the Host.
How does hashing happen in Teradata?
* Hashing is the mechanism through which data is distributed and retrieved to/from AMPs.
* Primary Index (PI) value of a row is the input to the Hashing Algorithm.
* Row Hash (32-bit number) value is the output from this Algorithm.
* Table Id + Row Hash is used to locate Cylinder and Data block.
* Same Primary Index value and data type will always produce same hash value.
* Rows with the same hash value will go to the same AMP.
So data distribution depends directly on the Row Hash uniqueness; be careful while Choosing Indexes in Teradata.
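A toy illustration of the determinism described above: FNV-1a stands in for Teradata's actual hashing algorithm, and the 8-AMP count is an assumption for the sketch.

```python
def row_hash(pi_value):
    """Toy 32-bit FNV-1a hash standing in for Teradata's hashing
    algorithm: the same PI value always produces the same hash."""
    h = 2166136261
    for byte in str(pi_value).encode():
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return h

NUM_AMPS = 8  # illustrative system size

def target_amp(pi_value):
    """Same hash -> same bucket -> same AMP (bucket step simplified)."""
    return row_hash(pi_value) % NUM_AMPS
```

Because the mapping is deterministic, rows sharing a PI value always co-locate on one AMP, which is why a poorly chosen index concentrates data instead of spreading it.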
How to eliminate Product Joins in a Teradata SQL query?
1. Ensure statistics are collected on the join columns; this is especially important if the columns you are joining on are not unique.
2. Make sure you are referencing the correct alias.
3. Also, if you have an alias, you must always reference it instead of the fully qualified table name.
4. Sometimes product joins happen for a good reason: when joining a small table (100 rows) to a large table (1 million rows), a product join can make sense.
How to select first N Records in Teradata?
To select the first N records in Teradata you can use the RANK function. The query syntax would be as follows:
SELECT BOOK_NAME, BOOK_COUNT, RANK(BOOK_COUNT) A FROM LIBRARY QUALIFY A <= 10;
How to view every column and the columns contained in indexes in Teradata?
Following query describes each column in the Teradata RDBMS
SELECT * FROM DBC.TVFields;
Following query describes columns contained in indexes in the Teradata RDBMS
SELECT * FROM DBC.Indexes;
What are the 5 phases in a MultiLoad Utility?
* Preliminary Phase – Basic Setup
* DML Phase – Get DML steps down on AMPs
* Acquisition Phase – Send the input data to the AMPs and sort it
* Application Phase – Apply the input data to the appropriate Target Tables
* End Phase – Basic Cleanup
What are the functions of a Teradata DBA?
Following are the different functions which a DBA can perform:
1. User Management – Creation and managing Users, Databases, Roles, Profiles and Accounts.
2. Space Allocation – Assigning Permanent Space, Spool Space and Temporary Space.
3. Access of Database Objects – Granting and Revoking Access Rights on different database objects.
4. Security Control – Handling logon and logoff rules for Users.
5. System Maintenance – Specification of system defaults, restart etc.
6. System Performance – Use of Performance Monitor(PMON), Priority Scheduler and Job Scheduling.
7. Resource Monitoring – Database Query Log(DBQL) and Access Logging.
8. Data Archives, Restores and Recovery – ARC Utility and Permanent Journals.
What are the MultiLoad Utility limitations?
MultiLoad is a very powerful utility; it has following limitations:
* MultiLoad Utility doesn’t support SELECT statement.
* Concatenation of multiple input data files is not allowed.
* MultiLoad doesn't support arithmetic functions, i.e. ABS, LOG etc., in the MLoad script.
* MultiLoad doesn't support exponentiation and aggregate operators, i.e. AVG, SUM etc., in the MLoad script.
* MultiLoad doesn't support USIs (Unique Secondary Indexes), Referential Integrity, Join Indexes, Hash Indexes and Triggers.
* Import tasks require use of a PI (Primary Index).
What are TPUMP Utility Limitations?
Following are the limitations of Teradata TPUMP Utility:
* Use of SELECT statement is not allowed.
* Concatenation of Data Files is not supported.
* Exponential & aggregate operators are not allowed.
* Arithmetic functions are not supported.
What is FILLER command in Teradata?
While running FastLoad or MultiLoad, if you don't want to load a particular field from the data file to the target table, use the FILLER command to achieve this. Syntax for the FILLER command would be as follows:
.LAYOUT FILE_PRODUCT; /* It is input file layout name */
.FIELD Prod_No * char(11); /* To load data into Prod_No */
.FIELD Prod_Name * char(11); /* To load data into Prod_Name */
.FIELD Location * char(11); /* To load data into Location */
.FILLER Prod_Chars * char(20); /* To skip the next 20 characters */
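The effect of FILLER on a fixed-width record can be sketched in Python; the field widths mirror the layout above, and the sample record is made up for illustration.

```python
# Field layout mirroring the .LAYOUT above: three 11-char fields are
# loaded, then 20 filler characters are skipped. A name of None marks
# a FILLER field.
LAYOUT = [("Prod_No", 11), ("Prod_Name", 11), ("Location", 11), (None, 20)]

def parse_record(record):
    """Slice a fixed-width record; FILLER fields advance the offset
    without producing any output value."""
    out, pos = {}, 0
    for name, width in LAYOUT:
        if name is not None:
            out[name] = record[pos:pos + width].strip()
        pos += width
    return out

rec = "P0000000001Widget     Warehouse-7" + "X" * 20
parsed = parse_record(rec)
```

The 20 trailing "X" characters are consumed by the filler entry and never reach the output, which is exactly what .FILLER does for the target table.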
What is the difference between Access Logging and Query Logging in Teradata?
1. Access Logging is concerned with security (i.e. who is doing what). In access logging you ask the database to log who is doing what on a given object. The information stored is based on the object, not on the SQL fired or the user who fired it.
2. Query Logging (DBQL) is used for debugging (i.e. what is happening). In case of DBQL, the database keeps tracking various parameters, i.e. the SQL, resource and spool usage, steps and other things which help you understand what is going on; this information is fruitful for debugging a problem. Further, DBQL is enabled on a user id rather than on an object such as a table.
What is the difference between Sub-Query & Co-Related Sub-Query?
When a query is written in a nested manner, it is termed a sub-query. A sub-query gets executed once for the parent statement, whereas a co-related sub-query gets executed once for each row of the parent query. For example:
SELECT Empname, Deptno, Salary
FROM Employee Emp
WHERE Salary = (SELECT MAX(Salary) FROM Employee WHERE Deptno = Emp.Deptno)
ORDER BY Deptno;
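The co-related sub-query's row-by-row evaluation can be traced in Python; the employee data below is invented for illustration.

```python
employees = [
    {"Empname": "Ram",    "Deptno": 10, "Salary": 5000},
    {"Empname": "Laxman", "Deptno": 10, "Salary": 7000},
    {"Empname": "Sita",   "Deptno": 20, "Salary": 6000},
]

def top_earners(rows):
    """Co-related sub-query semantics: for EACH outer row, re-run the
    inner query (max salary of that row's department), then keep only
    rows whose salary matches it."""
    result = []
    for emp in rows:                                     # outer query
        dept_max = max(r["Salary"] for r in rows         # inner query,
                       if r["Deptno"] == emp["Deptno"])  # correlated on Deptno
        if emp["Salary"] == dept_max:
            result.append(emp)
    return sorted(result, key=lambda r: r["Deptno"])
```

The inner `max(...)` runs once per outer row, which is the defining behavior of a co-related sub-query, versus a plain sub-query that would run once in total.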
What is Reconfiguration Utility in Teradata and What it is used for?
* When we feed Primary Index value to Hashing Algorithm then it gives us Row Hash(32 bit number) value which is used to make entries into Hash Maps.
* Hash maps are the mechanism for determining which AMP will get that row.
* Each Hash Map is an array of 65,536 entries and its size is close to 128KB.
When Teradata is installed on a system, there are some scripts which we need to execute, i.e. the DIP scripts. These create hash maps of 65,536 entries for the current configuration. But what if you want to add some more AMPs to your system?
Reconfiguration (Reconfig) is a technique for changing the configuration (i.e. changing the number of AMPs in a system) and is controlled by the Reconfiguration Hash Maps. System builds Reconfiguration Hash Maps by reassigning hash map entries to reflect new configuration of system.
Let's understand this concept with the help of an example. Suppose you have a 4-AMP system, which holds 65,536 entries; each AMP is responsible for holding (65,536/4 = 16,384) 16,384 entries.
Now you have added 2 more AMPs to your current configuration, so you need to reconfigure your system. Each AMP would now be responsible for holding (65,536/6 ≈ 10,922) about 10,922 entries.
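The arithmetic above can be checked with a short sketch; spreading the remainder one entry per AMP is a simplification of what Reconfig actually does, used here only to make the counts add up.

```python
TOTAL_ENTRIES = 65_536   # hash map entries, as described above

def entries_per_amp(num_amps):
    """Split the 65,536 hash map entries across AMPs. 65,536/6 is not
    a whole number, so the leftover entries go one apiece to the first
    few AMPs (an illustrative simplification)."""
    base, extra = divmod(TOTAL_ENTRIES, num_amps)
    return [base + (1 if i < extra else 0) for i in range(num_amps)]

four_amps = entries_per_amp(4)   # 16,384 entries each
six_amps = entries_per_amp(6)    # about 10,922 entries each
```

Every entry stays accounted for after the change, and the per-AMP counts differ by at most one, which is the even redistribution Reconfig aims at.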