Teradata Database
SQL Reference
Fundamentals
Release V2R6.2
B035-1141-096A
September 2006
The product described in this book is a licensed product of Teradata, a division of NCR Corporation.
NCR, Teradata and BYNET are registered trademarks of NCR Corporation.
Adaptec and SCSISelect are registered trademarks of Adaptec, Inc.
EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
Engenio is a trademark of Engenio Information Technologies, Inc.
Ethernet is a trademark of Xerox Corporation.
GoldenGate is a trademark of GoldenGate Software, Inc.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
IBM, CICS, DB2, MVS, RACF, OS/390, Tivoli, and VM are registered trademarks of International Business Machines Corporation.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
KBMS is a registered trademark of Trinzic Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI, SYM, and SYMplicity are registered trademarks of LSI Logic Corporation.
Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either registered trademarks or trademarks of Microsoft
Corporation in the United States and/or other countries.
Novell is a registered trademark of Novell, Inc., in the United States and other countries. SUSE is a trademark of SUSE LINUX Products GmbH,
a Novell business.
QLogic and SANbox are registered trademarks of QLogic Corporation.
SAS and SAS/C are registered trademarks of SAS Institute Inc.
Sun Microsystems, Sun Java, Solaris, SPARC, and Sun are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. or other
countries.
Unicode is a registered trademark of Unicode, Inc.
UNIX is a registered trademark of The Open Group in the US and other countries.
NetVault is a trademark and BakBone is a registered trademark of BakBone Software, Inc.
NetBackup and VERITAS are trademarks of VERITAS Software Corporation.
Other product and company names mentioned herein may be the trademarks of their respective owners.
THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN “AS-IS” BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY
NOT APPLY TO YOU. IN NO EVENT WILL NCR CORPORATION (NCR) BE LIABLE FOR ANY INDIRECT, DIRECT, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS OR LOST SAVINGS, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
The information contained in this document may contain references or cross references to features, functions, products, or services that are
not announced or available in your country. Such references do not imply that NCR intends to announce such features, functions, products,
or services in your country. Please consult your local NCR representative for those features, functions, products, or services available in your
country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. NCR may also make improvements or changes in the products or services described in this information at any time without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this
document. Please e-mail: teradata-books@lists.ncr.com
Any comments or materials (collectively referred to as “Feedback”) sent to NCR will be deemed non-confidential. NCR will have no obligation
of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform, create derivative works of and
distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, NCR will be free to use any ideas,
concepts, know-how or techniques contained in such Feedback for any purpose whatsoever, including developing, manufacturing, or marketing
products or services incorporating Feedback.
Copyright © 2000 - 2006 by NCR Corporation. All Rights Reserved.
Preface
Purpose
SQL Reference: Fundamentals describes basic SQL data handling, SQL data definition, control,
and manipulation, and the SQL lexicon.
Use this book with the other books in the SQL Reference book set.
Audience
System administrators, database administrators, security administrators, application
programmers, NCR field engineers, end users, and other technical personnel responsible for
designing, maintaining, and using the Teradata Database will find this book useful.
Experienced SQL users can also see simplified statement, data type, function, and expression
descriptions in SQL/Data Dictionary Quick Reference.
Supported Software Release
This book supports Teradata® Database V2R6.2.
Prerequisites
If you are not familiar with Teradata Database, you will find it useful to read Introduction to
Teradata Warehouse before reading this book.
You should be familiar with basic relational database management technology. This book is
not an SQL primer.
Changes to This Book
This book includes the following changes to support the current release.
Date Description
September 2006 • Added material to support BIGINT data type
• Removed the restriction that the PARTITION BY option is not allowed in the
CREATE JOIN INDEX statement for non-compressed join indexes
• Removed the restriction that triggers cannot be defined on a table on which a
join index is already defined
• Updated the section on altering table structure and definition to indicate that
ALTER TABLE can now be used to define, modify, or delete a COMPRESS
attribute on an existing column
• Updated Appendix E with new syntax for ALTER TABLE and CREATE
TABLE
• Moved the topics that identified valid and non-valid character ranges for
KanjiEBCDIC, KanjiEUC, and KanjiShift-JIS object names from Chapter 2 to
the International Character Set Support book
May 2006 Removed RESTRICT from list of Teradata Database reserved words
November 2005 • Added material to support new UDT and UDM feature
• Added Appendix E, which details the differences in SQL between this release
and previous releases
• Removed the restriction that the PARTITION BY option is not allowed in the
CREATE TABLE statement for global temporary tables and volatile tables
November 2004 • Removed colons from stored procedure examples because colons are no
longer required when local stored procedure variables or parameters are
referenced in SQL statements
• Added material to support new table function feature and new external stored
procedure feature
• Added overview of event processing using queue tables and the SELECT AND
CONSUME statement
• Removed the restriction that triggers cannot call stored procedures
• Added material on new recursive query feature
• Added material on new iterated requests feature
• Added the restricted word list back into Appendix B
Additional Information
Additional information that supports this product and the Teradata Database is available at
the following Web sites.
Type of Information: Overview of the release; information too late for the manuals
Description: The Release Definition provides the following information:
• Overview of all the products in the release
• Information received too late to be included in the manuals
• Operating systems and Teradata Database versions that are certified to work with each product
• Version numbers of each product and the documentation for each product
• Information about available training and support
Source: http://www.info.ncr.com/
Click General Search. In the Publication Product ID field, enter 1725 and click Search to bring up the following Release Definition:
• Base System Release Definition, B035-1725-096K

Type of Information: Additional information related to this product
Description: Use the NCR Information Products Publishing Library site to view or download the most recent versions of all manuals. Specific manuals that supply related or additional information to this manual are listed.
Source: http://www.info.ncr.com/
Click General Search, and do one of the following:
• In the Product Line field, select Software - Teradata Database for a list of all of the publications for this release.
• In the Publication Product ID field, enter a book number.

Type of Information: CD-ROM images
Description: This site contains a link to a downloadable CD-ROM image of all customer documentation for this release. Customers are authorized to create CD-ROMs for their use from this image.
Source: http://www.info.ncr.com/
Click General Search. In the Title or Keyword field, enter CD-ROM, and click Search.

Type of Information: Ordering information for manuals
Description: Use the NCR Information Products Publishing Library site to order printed versions of manuals.
Source: http://www.info.ncr.com/
Click How to Order under Print & CD Publications.
References to Microsoft Windows
This book refers to “Microsoft Windows.” For Teradata Database V2R6.2, such references
mean Microsoft Windows Server 2003 32-bit and Microsoft Windows Server 2003 64-bit.
Type of Information: General information about Teradata
Description: The Teradata home page provides links to numerous sources of information about Teradata. Links include:
• Executive reports, case studies of customer experiences with Teradata, and thought leadership
• Technical information, solutions, and expert advice
• Press releases, mentions, and media resources
Source: Teradata.com
Table of Contents
Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Supported Software Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Changes to This Book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
References to Microsoft Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Chapter 1: Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
Databases and Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Global Temporary Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Volatile Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Primary Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Secondary Indexes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Join Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Hash Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Referential Integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
External Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
User-Defined Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .57
User-Defined Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Chapter 2: Basic SQL Syntax and Lexicon . . . . . . . . . . . . . . . . . . . . . . . .63
Structure of an SQL Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
SQL Lexicon Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66
Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
Standard Form for Data in Teradata Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .71
Unqualified Object Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .73
Default Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75
Name Validation on Systems Enabled with Japanese Language Support . . . . . . . . . . . . . . . . .77
Object Name Translation and Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .81
Object Name Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .82
Finding the Internal Hexadecimal Representation for Object Names. . . . . . . . . . . . . . . . . . . .84
Specifying Names in a Logon String . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87
NULL Keyword as a Literal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .91
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Delimiters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93
Separators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Terminators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Null Statements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Chapter 3: SQL Data Definition, Control, and Manipulation . .99
SQL Functional Families and Binding Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Embedded SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Data Definition Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Altering Table Structure and Definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
Dropping and Renaming Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
Data Control Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105
Data Manipulation Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Subqueries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Recursive Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Query and Workload Analysis Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Help and Database Object Definition Tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Chapter 4: SQL Data Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Invoking SQL Statements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Transaction Processing in ANSI Session Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Transaction Processing in Teradata Session Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Multistatement Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Iterated Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Dynamic and Static SQL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Dynamic SQL in Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Using SELECT With Dynamic SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Event Processing Using Queue Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Manipulating Nulls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Session Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Session Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Return Codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Statement Responses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Success Response. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Warning Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Error Response (ANSI Session Mode Only). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Failure Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Chapter 5: Query Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Query Processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Table Access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Full-Table Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Collecting Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Appendix A: Notation Conventions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Syntax Diagram Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Character Shorthand Notation Used In This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .171
Predicate Calculus Notation Used in This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .172
Appendix B: Restricted Words for V2R6.2. . . . . . . . . . . . . . . . . . . . . . .173
Reserved Words and Keywords for V2R6.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
Appendix C: Teradata Database Limits. . . . . . . . . . . . . . . . . . . . . . . . . . .203
System Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
Database Limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Session Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Appendix D: ANSI SQL Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .213
ANSI SQL Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .213
Terminology Differences Between ANSI SQL and Teradata . . . . . . . . . . . . . . . . . . . . . . . . . .216
SQL Flagger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .217
Differences Between Teradata and ANSI SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Appendix E: SQL Feature Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Notation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Statements and Modifiers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Data Types and Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .277
Functions, Operators, and Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .280
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
CHAPTER 1 Objects
This chapter describes the objects you use to store, manage, and access data in the Teradata
Database.
Topics include:
• Databases and Users
• Tables
• Columns
• Data Types
• Keys
• Indexes
• Views
• Triggers
• Macros
• Stored Procedures and External Stored Procedures
• User-Defined Functions
• User-Defined Types (UDTs) and User-Defined Methods (UDMs)
• Profiles
• Roles
Databases and Users
Definitions
A database is a collection of related tables, views, triggers, indexes, stored procedures,
user-defined functions, and macros. A database also contains an allotment of space from
which users can create and maintain their own objects, or create other users or databases.
A user is almost the same as a database, except that a user has a password and can log on to the
system, whereas the database cannot.
Defining Databases and Users
Before you can create a database or user, you must have sufficient privileges granted to you.
To create a database, use the CREATE DATABASE statement. You can specify the name of the
database, the amount of storage to allocate, and other attributes.
To create a user, use the CREATE USER statement. The statement authorizes a new user
identification (user name) for the database and specifies a password for user authentication.
Because the system creates a database for each user, the CREATE USER statement is very
similar to the CREATE DATABASE statement.
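As a sketch of the parallel between the two statements, consider the following. The database and user names, space allotments, and password are invented for illustration; only the clause names come from the Teradata SQL syntax.

```sql
-- Hypothetical example: create a database with a permanent space allotment.
CREATE DATABASE sales_db AS
  PERMANENT = 10000000 BYTES,   -- PERM space for objects created in sales_db
  SPOOL = 5000000 BYTES;        -- SPOOL space for intermediate results

-- Hypothetical example: create a user. The added PASSWORD clause reflects
-- the formal difference between a user and a database.
CREATE USER sales_admin AS
  PERMANENT = 5000000 BYTES,
  PASSWORD = secret_pw,
  DEFAULT DATABASE = sales_db;
```

Because the clauses are otherwise so similar, many sites administer users and databases with a common set of conventions for space allocation.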
Difference Between Users and Databases
The difference between users and databases in the Teradata Database has important
implications for matters related to access privileges, but neither the differences nor their
implications are easy to understand. This is particularly true with respect to understanding
fully the consequences of implicitly granted access privileges.
Formally speaking, the difference between a user and a database is that a user has a password
and a database does not. Users can also have default attributes such as time zone, date form,
character set, role, and profile, while databases cannot. You might infer from this that
databases are passive objects, while users are active objects. That is only true in the sense that
databases cannot execute SQL statements. However, a query, macro, or stored procedure can
execute using the privileges of the database.
Tables
Definitions
A table is what is referred to in set theory terminology as a relation, from which the expression
relational database is derived.
Every relational table consists of one row of column headings (more commonly referred to as
column names) and zero or more unique rows of data values.
Formally speaking, each row represents what set theory calls a tuple. Each column represents
what set theory calls an attribute.
The number of rows (or tuples) in a table is referred to as its cardinality and the number of
columns (or attributes) is referred to as its degree or arity.
Defining Tables
Use the CREATE TABLE statement to define base tables.
The CREATE TABLE statement specifies a table name, one or more column names, and the
attributes of each column. CREATE TABLE can also specify datablock size, percent freespace,
and other physical attributes of the table.
The CREATE/MODIFY USER and CREATE/MODIFY DATABASE statements provide
options for creating permanent journal tables.
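A minimal CREATE TABLE sketch follows; the database, table, and column names are invented for illustration, and the physical attributes shown are a small subset of those the statement accepts.

```sql
-- Hypothetical example: a base table with column names, data types,
-- column attributes, and an explicit primary index.
CREATE TABLE sales_db.orders (
  order_id   INTEGER NOT NULL,
  cust_id    INTEGER NOT NULL,
  order_date DATE FORMAT 'YYYY-MM-DD',
  amount     DECIMAL(10,2)
)
UNIQUE PRIMARY INDEX (order_id);
```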
Defining Indexes For a Table
An index is a physical mechanism used to store and access the rows of a table. When you define
a table, you can define a primary index and one or more secondary indexes.
All tables require a primary index. If you do not specify a column or set of columns as the
primary index when you create a table, then CREATE TABLE specifies a primary index by
default.
For more information on indexes, see “Indexes” on page 17.
Duplicate Rows in Tables
Though both set theory and common sense prohibit duplicate rows in relational tables, the
ANSI standard defines SQL based not on sets, but on bags, or multisets.
A table defined not to permit duplicate rows is called a SET table because its properties are
based on set theory, where set is defined as an unordered group of unique elements with no
duplicates.
A table defined to permit duplicate rows is called a MULTISET table because its properties are
based on a multiset, or bag, model, where bag and multiset are defined as an unordered group
of elements that may be duplicates.
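The two behaviors are selected by a keyword in the table definition. In this sketch the names are invented; only the SET and MULTISET keywords come from the syntax described here.

```sql
-- Hypothetical example: a SET table rejects duplicate rows;
-- a MULTISET table permits them.
CREATE SET TABLE sales_db.set_demo (
  a INTEGER,
  b INTEGER
)
PRIMARY INDEX (a);

CREATE MULTISET TABLE sales_db.multiset_demo (
  a INTEGER,
  b INTEGER
)
PRIMARY INDEX (a);
```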
For more information on rules for duplicate rows in a table, see CREATE TABLE in SQL Reference: Data Definition Statements. For the result of an INSERT operation, or an INSERT using a SELECT subquery, that would create a duplicate row, see INSERT in SQL Reference: Data Manipulation Statements.
Temporary Tables
Temporary tables are useful for temporary storage of data. Teradata Database supports three
types of temporary tables.
Type: Global temporary
Usage: A global temporary table has a persistent table definition that is stored in the data
dictionary. Any number of sessions can materialize and populate their own local copies that
are retained until session logoff.
Global temporary tables are useful for storing temporary, intermediate results from multiple
queries into working tables that are frequently used by applications.
Global temporary tables are identical to ANSI global temporary tables.
Type: Volatile
Usage: Like global temporary tables, the contents of volatile tables are only retained for the
duration of a session. However, volatile tables do not have persistent definitions. To populate
a volatile table, a session must first create the definition.
Materialized instances of a global temporary table share the following characteristics with
volatile tables:
• Private to the session that created them.
• Contents cannot be shared by other sessions.
• Optionally emptied at the end of each transaction using the ON COMMIT
PRESERVE/DELETE ROWS option in the CREATE TABLE statement.
• Activity optionally logged in the transient journal using the LOG/NO LOG option in the
CREATE TABLE statement.
• Dropped automatically when a session ends.
For details about the individual characteristics of global temporary and volatile tables, see
“Global Temporary Tables” on page 5 and “Volatile Tables” on page 9.
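The two definitions can be sketched as follows. The names are invented for illustration; the GLOBAL TEMPORARY and VOLATILE keywords and the ON COMMIT option come from the statements described in this chapter.

```sql
-- Hypothetical example: a global temporary table has a persistent
-- definition in the data dictionary; its contents are per-session.
CREATE GLOBAL TEMPORARY TABLE sales_db.gt_results (
  cust_id INTEGER,
  total   DECIMAL(12,2)
)
ON COMMIT PRESERVE ROWS;

-- Hypothetical example: a volatile table has no persistent definition;
-- each session that needs one must create it first.
CREATE VOLATILE TABLE vt_scratch (
  cust_id INTEGER,
  total   DECIMAL(12,2)
)
ON COMMIT PRESERVE ROWS;
```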
Queue Tables
Teradata Database supports queue tables, which are similar to ordinary base tables, with the
additional unique property of behaving like an asynchronous first-in-first-out (FIFO) queue.
Queue tables are useful for applications that want to submit queries that wait for data to be
inserted into queue tables without polling.
When you create a queue table, you must define a TIMESTAMP column with a default value
of CURRENT_TIMESTAMP. The values in the column indicate the time the rows were
inserted into the queue table, unless different, user-supplied values are inserted.
You can then use a SELECT AND CONSUME statement, which operates like a FIFO pop:
• Data is returned from the row with the oldest timestamp value in the specified queue table.
• The row is deleted from the queue table, guaranteeing that the row is processed only once.
If no rows are available, the transaction enters a delay state until one of the following occurs:
• A row is inserted into the queue table.
• The transaction aborts, either as a result of direct user intervention, such as the ABORT
statement, or indirect user intervention, such as a DROP TABLE statement on the queue
table.
To perform a FIFO peek on a queue table, use a SELECT statement.
Global
temporary
trace
Global temporary trace tables are useful for debugging external routines (UDFs,
UDMs, and external stored procedures). During execution, external routines can
write trace output to columns in a global temporary trace table.
Like global temporary tables, global temporary trace tables have persistent
definitions, but do not retain rows across sessions.
Type UsageChapter 1: Objects
Global Temporary Tables
SQL Reference: Fundamentals 5
Global Temporary Tables
Introduction
Global temporary tables allow you to define a table template in the database schema,
providing large savings for applications that require well known temporary table definitions.
The definition for a global temporary table is persistent and stored in the data dictionary.
Space usage is charged to login user temporary space.
Each user session can materialize as many as 2000 global temporary tables at a time.
How Global Temporary Tables Work
To create the base definition for a global temporary table, use the CREATE TABLE statement
and specify the keywords GLOBAL TEMPORARY to describe the table type.
Once created, the table exists only as a definition. It has no rows and no physical instantiation.
When any application in a session accesses a table with the same name as the defined base
table, and the table has not already been materialized in that session, then that table is
materialized as a real relation using the stored definition. Because that initial invocation is
generally due to an INSERT statement, a temporary table—in the strictest sense—is usually
populated immediately upon its materialization.
There are only two occasions when an empty global temporary table is materialized:
• A CREATE INDEX statement is issued on the table.
• A COLLECT STATISTICS statement is issued on the table.
The following table summarizes this information.
Note: Issuing a SELECT, UPDATE, or DELETE on a global temporary table that is not
materialized produces the same result as issuing a SELECT, UPDATE, or DELETE on an
empty global temporary table that is materialized.
WHEN this statement is issued on a global
temporary table that has not yet been materialized …
THEN a local instance of the global temporary
table is materialized and it is …
INSERT populated with data upon its materialization.
CREATE INDEX … ON TEMPORARY …
COLLECT STATISTICS … ON TEMPORARY …
not populated with data upon its
materialization.Chapter 1: Objects
Global Temporary Tables
6 SQL Reference: Fundamentals
Example
For example, suppose there are four sessions, Session 1, Session 2, Session 3, and Session 4 and
two users, User_1 and User_2. Consider the scenario in the following two tables.
Step Session … Does this … The result is this …
1 1 The DBA creates a global temporary
table definition in the database
scheme named globdb.gt1 according
to the following CREATE TABLE
statement:
CREATE GLOBAL TEMPORARY
TABLE globdb.gt1,
LOG
(f1 INT NOT NULL PRIMARY
KEY,
f2 DATE,
f3 FLOAT)
ON COMMIT PRESERVE ROWS;
The global temporary table definition
is created and stored in the database
schema.
2 1 User_1 logs on an SQL session and
references globdb.gt1 using the
following INSERT statement:
INSERT globdb.gt1 (1,
980101, 11.1);
Session 1 creates a local instance of the
global temporary table definition
globdb.gt1. This is also referred to as a
materialized temporary table.
Immediately upon materialization, the
table is populated with a single row
having the following values.
f1=1
f2=980101
f3=11.1
This means that the contents of this
local instance of the global temporary
table definition is not empty when it is
created.
From this point on, any
INSERT/DELETE/UPDATE statement
that references globdb.gt1 in Session 1
is mapped to this local instance of the
table.
3 2 User_2 logs on an SQL session and
issues the following SELECT
statement.
SELECT * FROM globdb.gt1;
No rows are returned because Session
2 has not yet materialized a local
instance of globdb.gt1.Chapter 1: Objects
Global Temporary Tables
SQL Reference: Fundamentals 7
User_1 and User_2 continue their work, logging onto two additional sessions as described in
the following table.
4 2 User_2 issues the following INSERT
statement:
INSERT globdb.gt1 (2,
980202, 22.2);
Session 2 creates a local instance of the
global temporary table definition
globdb.gt1.
The table is populated, immediately
upon materialization, with a single
row having the following values.
f1=2
f2=980202
f3=22.2
From this point on, any
INSERT/DELETE/UPDATE statement
that references globdb.gt1 in Session 2
is mapped to this local instance of the
table.
5 2 User_2 logs again issues the
following SELECT statement:
SELECT * FROM globdb.gt1;
A single row containing the data (2,
980202, 22.2) is returned to the
application.
6 1 User_1 logs off from Session 1. The local instance of globdb.gt1 for
Session 1 is dropped.
7 2 User_2 logs off from Session 2. The local instance of globdb.gt1 for
Session 2 is dropped.
Step Session … Does this … The result is this …
1 3 User_1 logs on another SQL session
3 and issues the following SELECT
statement:
SELECT * FROM globdb.gt1;
No rows are returned because Session
3 has not yet materialized a local
instance of globdb.gt1.
2 3 User_1 issues the following INSERT
statement:
INSERT globdb.gt1 (3,
980303, 33.3);
Session 3 created a local instance of the
global temporary table definition
globdb.gt1.
The table is populated, immediately
upon materialization, with a single row
having the following values.
f1=3
f2=980303
f3=33.3
From this point on, any
INSERT/DELETE/UPDATE statement
that references globdb.gt1 in Session 3
maps to this local instance of the table.
Step Session … Does this … The result is this …Chapter 1: Objects
Global Temporary Tables
8 SQL Reference: Fundamentals
With the exception of a few options (see “CREATE TABLE” in SQL Reference: Data Definition
Statements for an explanation of the features not available for global temporary base tables),
materialized temporary tables have the same properties as permanent tables.
After a global temporary table definition is materialized in a session, all further references to
the table are made to the materialized table. No additional copies of the base definition are
materialized for the session. This global temporary table is defined for exclusive use by the
session whose application materialized it.
3 3 User_1 again issues the following
SELECT statement:
SELECT * FROM globdb.gt1;
A single row containing the data (3,
980303, 33.3) is returned to the
application.
4 4 User_2 logs on Session 4 and issues
the following CREATE INDEX
statement:
CREATE INDEX (f2) ON
TEMPORARY globdb.gt1;
An empty local global temporary table
named globdb.gt1 is created for
Session 4.
This is one of only two cases in which a
local instance of a global temporary
table is materialized without data.
The other would be a COLLECT
STATISTICS statement—in this case,
the following statement:
COLLECT STATISTICS ON
TEMPORARY globdb.gt1;
5 4 User_2 issues the following SELECT
statement:
SELECT * FROM globdb.gt1;
No rows are returned because the local
instance of globdb.gt1 for Session 4 is
empty.
6 4 User_2 issues the following SHOW
TABLE statement:
SHOW TABLE globdb.gt1;
CREATE SET GLOBAL TEMPORARY
TABLE globdb.gt1, FALLBACK,
LOG
(
f1 INTEGER NOT NULL,
f2 DATE FORMAT 'YYYY-MM-DD',
f3 FLOAT)
UNIQUE PRIMARY INDEX (f1)
ON COMMIT PRESERVE ROWS;
7 4 User_2 issues the following SHOW
TEMPORARY TABLE statement:
SHOW TEMPORARY TABLE
globdb.gt1;
CREATE SET GLOBAL TEMPORARY
TABLE globdb.gt1, FALLBACK,
LOG
(
f1 INTEGER NOT NULL,
f2 DATE FORMAT 'YYYY-MM-DD',
f3 FLOAT)
UNIQUE PRIMARY INDEX (f1)
INDEX (f2)
ON COMMIT PRESERVE ROWS;
Note that this report indicates the new
index f2 that has been created for the
local instance of the temporary table.
Step Session … Does this … The result is this …Chapter 1: Objects
Volatile Tables
SQL Reference: Fundamentals 9
Materialized global temporary tables differ from permanent tables in the following ways:
• They are always empty when first materialized.
• Their contents cannot be shared by another session.
• The contents can optionally be emptied at the end of each transaction.
• The materialized table is dropped automatically at the end of each session.
Limitations
You cannot use the following CREATE TABLE options for global temporary tables:
• WITH DATA
• Permanent journaling
• Referential integrity constraints
This means that a temporary table cannot be the referencing or referenced table in a
referential integrity constraint.
References to global temporary tables are not permitted in FastLoad, MultiLoad, or
FastExport.
Archive, Restore, and TableRebuild operate on base global temporary tables only.
Non-ANSI Extensions
Transient journaling options on the global temporary table definition are permitted using the
CREATE TABLE statement.
You can modify the transient journaling and ON COMMIT options for base global temporary
tables using the ALTER TABLE statement.
Privileges Required
To materialize a global temporary table, you must have the appropriate privilege on the base
global temporary table or on the containing database or user as required by the statement that
materializes the table.
No access logging is performed on materialized global temporary tables, so no access log
entries are generated.
Volatile Tables
Creating Volatile Tables
Neither the definition nor the contents of a volatile table persist across a system restart. You
must use CREATE TABLE with the VOLATILE keyword to create a new volatile table each
time you start a session in which it is needed. Chapter 1: Objects
Volatile Tables
10 SQL Reference: Fundamentals
What this means is that you can create volatile tables as you need them. Being able to create a
table quickly provides you with the ability to build scratch tables whenever you need them.
Any volatile tables you create are dropped automatically as soon as your session logs off.
Volatile tables are always created in the login user space, regardless of the current default
database setting. That is, the database name for the table is the login user name. Space usage is
charged to login user spool space. Each user session can materialize as many as 1000 volatile
tables at a time.
Limitations
The following CREATE TABLE options are not permitted for volatile tables:
• Permanent journaling
• Referential integrity constraints
This means that a volatile table cannot be the referencing or referenced table in a
referential integrity constraint.
• Check constraints
• Compressed columns
• DEFAULT clause
• TITLE clause
• Named indexes
References to volatile tables are not permitted in FastLoad or MultiLoad.
For more information, see “CREATE TABLE” in SQL Reference: Data Definition Statements.
Non-ANSI Extensions
Volatile tables are not defined in ANSI.
Privileges Required
To create a volatile table, you do not need any privileges.
No access logging is performed on volatile tables, so no access log entries are generated.
Volatile Table Maintenence Among Multiple Sessions
Volatile tables are private to a session. This means that you can log on multiple sessions and
create volatile tables with the same name in each session.
However, at the time you create a volatile table, the name must be unique among all global
and permanent temporary table names in the database that has the name of the login user.Chapter 1: Objects
Volatile Tables
SQL Reference: Fundamentals 11
For example, suppose you log on two sessions, Session 1 and Session 2. Assume the default
database name is your login user name. Consider the following scenario.
Stage In Session 1, you … In Session 2, you … The result is this …
1 Create a volatile table
named VT1.
Create a volatile
table named VT1.
Each session creates its own copy of
volatile table VT1 using your login user
name as the database.
2 Create a permanent
table with an unqualified
table name of VT2.
Session 1 creates a permanent table
named VT2 using your login user name
as the database.
3 Create a volatile
table named VT2.
Session 2 receives a CREATE TABLE
error, because there is already a
permanent table with that name.
4 Create a volatile table
named VT3.
Session 1 creates a volatile table named
VT3 using your login user name as the
database.
5 Create a permanent
table with an
unqualified table
name of VT3.
Session 2 creates a permanent table
named VT3 using your login user name
as the database.
Because a volatile table is known only
to the session that creates it, a
permanent table with the same name as
the volatile table VT3 in Session 1 can
be created as a permanent table in
Session 2.
6 Insert into VT3. Session 1 references volatile table VT3.
Note: Volatile tables take precedence
over permanent tables in the same
database in a session.
Because Session 1 has a volatile table
VT3, any reference to VT3 in Session 1
is mapped to the volatile table VT3
until it is dropped (see Step 10).
On the other hand, in Session 2,
references to VT3 remain mapped to
the permanent table named VT3.
7 Create volatile table
VT3.
Session 2 receives a CREATE TABLE
error for attempting to create the
volatile table VT3 because of the
existence of that permanent table.
8 Insert into VT3. Session 2 references permanent table
VT3.
9 Drop VT3. Session 2 drops volatile table VT3.
10 Select from VT3. Session 1 references the permanent
table VT3.Chapter 1: Objects
Columns
12 SQL Reference: Fundamentals
Columns
Definition
A column is a structural component of a table and has a name and a declared type. Each row in
a table has exactly one value for each column. Each value in a row is a value in the declared
type of the column. The declared type includes nulls and values of the declared type.
A column value is the smallest unit of data that can be selected from or updated for a table.
Defining Columns
The column definition clause of the CREATE TABLE statement defines the table column
elements.
A name and a data type must be specified for each column defined for a table. Each column
can be further defined with one or more attribute definitions.
Here is an example that creates a table called employee with three columns:
CREATE TABLE employee
(deptno INTEGER
,name CHARACTER(23)
,hiredate DATE);
The following optional subclauses are also elements of the SQL column definition clause:
• Data type attribute declaration, such as NOT NULL, FORMAT, and TITLE
• COMPRESS column storage attributes clause
• Column constraint attributes clause, such as PRIMARY KEY
• UNIQUE table-level definition clause
• REFERENCES table-level definition clause
• CHECK constraint table-level definition clause
Related Topics
FOR more information on … SEE …
data types “Data Types” on page 13.
CREATE TABLE and the column definition clause SQL Reference: Data Definition Statements.Chapter 1: Objects
Data Types
SQL Reference: Fundamentals 13
Data Types
Introduction
Every data value belongs to an SQL data type. For example, when you define a column in a
CREATE TABLE statement, you must specify the data type of the column.
The set of data values that a column defines can belong to one of the following data types:
Numeric Data Types
A numeric value is either an exact numeric number (integer or decimal) or an approximate
numeric number (floating point). Use the following SQL data types to specify numeric values.
Character Data Types
Character data types represent characters that belong to a given character set. Use the
following SQL data types to specify character data.
• Numeric
• Character
• Datetime
• Interval
• Byte
• UDT
Type Description
BIGINT Represents a signed, binary integer value from
-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
INTEGER Represents a signed, binary integer value from -2,147,483,648 to
2,147,483,647.
SMALLINT Represents a signed binary integer value in the range -32768 to 32767.
BYTEINT Represents a signed binary integer value in the range -128 to 127.
REAL Represent a value in sign/magnitude form.
DOUBLE PRECISION
FLOAT
DECIMAL [(n[,m])] Represent a decimal number of n digits, with m of those n digits to the
right of the decimal point.
NUMERIC [(n[,m])]
Type Description
CHAR Represents a fixed length character string for Teradata Database internal
character storage.
VARCHAR(n) Represents a variable length character string of length n for Teradata
Database internal character storage.Chapter 1: Objects
Data Types
14 SQL Reference: Fundamentals
DateTime Data Types
DateTime values represent dates, times, and timestamps. Use the following SQL data types to
specify DateTime values.
Interval Data Types
An interval value is a span of time. There are two mutually exclusive interval type categories.
LONG VARCHAR LONG VARCHAR specifies the longest permissible variable length
character string for Teradata Database internal character storage.
CLOB Represents a large character string. A character large object (CLOB)
column can store character data, such as simple text, HTML, or XML
documents.
Type Description
Type Description
DATE Represents a date value that includes year, month, and day components.
TIME Represents a time value that includes hour, minute, second, and
fractional second components.
TIMESTAMP Represents a timestamp value that includes year, month, day, hour,
minute, second, and fractional second components.
TIME WITH TIME
ZONE
Represents a time value that includes hour, minute, second, fractional
second, and time zone components.
TIMESTAMP WITH
TIIME ZONE
Represents a timestamp value that includes year, month, day, hour,
minute, second, fractional second, and time zone components.
Category Type Description
Year-Month • INTERVAL YEAR
• INTERVAL YEAR TO MONTH
• INTERVAL MONTH
Represent a time span that can include a
number of years and months.
Day-Time • INTERVAL DAY
• INTERVAL DAY TO HOUR
• INTERVAL DAY TO MINUTE
• INTERVAL DAY TO SECOND
• INTERVAL HOUR
• INTERVAL HOUR TO MINUTE
• INTERVAL HOUR TO SECOND
• INTERVAL MINUTE
• INTERVAL MINUTE TO SECOND
• INTERVAL SECOND
Represent a time span that can include a
number of days, hours, minutes, or
seconds.Chapter 1: Objects
Data Types
SQL Reference: Fundamentals 15
Byte Data Types
Byte data types store raw data as logical bit streams. For any machine, BYTE, VARBYTE, and
BLOB data is transmitted directly from the memory of the client system.
BLOB is ANSI SQL-2003-compliant. BYTE and VARBYTE are Teradata extensions to the
ANSI SQL-2003 standard.
UDT Data Types
UDT data types are custom data types that you define with the CREATE TYPE statement.
Teradata Database supports distinct and structured UDTs.
For more details on UDTs, including a synopsis of the steps you take to develop and use UDTs,
see “User-Defined Types” on page 58.
Related Topics
For detailed information on data types, see SQL Reference: Data Types and Literals.
Type Description
BYTE Represents a fixed-length binary string.
VARBYTE Represents a variable-length binary string.
BLOB Represents a large binary string of raw bytes. A binary large object (BLOB) column
can store binary objects, such as graphics, video clips, files, and documents.
Type Description
Distinct A UDT that is based on a single predefined data type, such as INTEGER or
VARCHAR.
Structured A UDT that is a collection of one or more fields called attributes, each of which is
defined as a predefined data type or other UDT (which allows nesting).Chapter 1: Objects
Keys
16 SQL Reference: Fundamentals
Keys
Definitions
Keys and Referential Integrity
Teradata Database uses primary and foreign keys to maintain referential integrity. For
additional information, see “Referential Integrity” on page 36.
Effect on Row Distribution
Because Teradata Database uses a unique primary or secondary index to enforce a primary
key, the primary key can affect how Teradata Database distributes and retrieves rows. For
more information, see “Primary Indexes” on page 22 and “Secondary Indexes” on page 25.
Differences Between Primary Keys and Primary Indexes
The following table summarizes the differences between keys and indexes using the primary
key and primary index for purposes of comparison.
Term Definition
Primary
Key
A primary key is a column, or combination of columns, in a table that uniquely identifies
each row in the table. The values defining a primary key for a table:
• Must be unique
• Cannot change
• Cannot be null
Foreign
Key
A foreign key is a column, or combination of columns, in a table that is also the primary
key in one or more additional tables in the same database. Foreign keys provide a
mechanism to link related tables based on key values.
Primary Key Primary Index
Important element of logical data model. Not used in logical data model.
Used to maintain referential integrity. Used to distribute and retrieve data.
Must be unique to identify each row. Can be unique or nonunique.
Values cannot change. Values can change.
Cannot be null. Can be null.
Does not imply access path. Defines the most common access path.
Not required for physical table definition. Required for physical table definition.Chapter 1: Objects
Indexes
SQL Reference: Fundamentals 17
Indexes
Definition
An index is a mechanism that the SQL query optimizer can use to make table access more
performant. Indexes enhance data access by providing a more-or-less direct path to stored
data to avoid performing full table scans to locate the small number of rows you typically want
to retrieve or update.
The Teradata Database parallel architecture makes indexing an aid to better performance, not
a crutch necessary to ensure adequate performance. Full table scans are not something to be
feared in the Teradata Database environment. This means that the sorts of unplanned, ad hoc
queries that characterize the data warehouse process, and that often are not supported by
indexes, perform very effectively for Teradata Database using full table scans.
The classic index for a relational database is itself a file made up of rows having two parts:
• A (possibly unique) data field in the referenced table.
• A pointer to the location of that row in the base table (if the index is unique) or a pointer
to all possible locations of rows with that data field value (if the index is nonunique).
Because the Teradata Database is a massively parallel architecture, it requires a more efficient
means of distributing and retrieving its data. One such method is hashing. All Teradata
Database indexes are based on row hash values rather than raw table column values, even
though secondary, hash, and join indexes can be stored in order of their values to make them
more useful for satisfying range conditions.
Selectivity of Indexes
An index that retrieves many rows is said to have weak selectivity.
An index that retrieves few rows is said to be strongly selective.
The more strongly selective an index is, the more useful it is. In some cases, it is possible to
link together several weakly selective nonunique secondary indexes by bit mapping them. The
result is effectively a strongly selective index and a dramatic reduction in the number of table
rows that must be accessed.
For more information on linking weakly selective secondary indexes into a strongly selective
unit using bit mapping, see “NUSI Bit Mapping” on page 28.
Row Hash and RowID
Teradata Database table rows are self-indexing with respect to their primary index and so
require no additional storage space. When a row is inserted into a table, the relational database
manager stores the 32-bit row hash value of the primary index with it.
Because row hash values are not necessarily unique, the relational database manager also
generates a unique 32-bit numeric value (called the Uniqueness Value) that it appends to the
row hash value, forming a unique RowID. This RowID makes each row in a table uniquely
identifiable and ensures that hash collisions do not occur.Chapter 1: Objects
Indexes
18 SQL Reference: Fundamentals
If a table is defined with a partitioned primary index (PPI), the RowID also includes the
partition number to which the row was assigned. For more information on PPIs, see
“Partitioned and Non-Partitioned Primary Indexes” on page 20.
The first row having a specific row hash value is always assigned a uniqueness value of 1, which
becomes the highest current uniqueness value. Thereafter, each time another row having the
same row hash value is inserted, the row is assigned the current high value incremented by 1,
and that value becomes the current high value. Table rows having the same row hash value are
stored on disk sorted in the ascending order of RowID.
Uniqueness values are not reused except for the special case in which the highest valued row
within a row hash is deleted from a table.
A RowID for a row might change, for instance, when a primary index or partitioning column
is changed, or when there is complex update of the table.
Index Hash Mapping
Rows are distributed across the AMPS using a hashing algorithm that computes a row hash
value based on the primary index. The row hash is a 32-bit value. The higher-order 16 bits of a
hash value determine an associated hash bucket.
Teradata Database databases have 65536 hash buckets. The hash buckets are distributed as
evenly as possible among the AMPs on a system.
Teradata Database maintains a hash map—an index of which hash buckets live on which
AMPs—that it uses to determine whether rows belong to an AMP based on their row hash
values. Row assignment is performed in a manner that ensures as equal a distribution as
possible among all the AMPs on a system.
Advantages of Indexes
The intent of indexes is to lessen the time it takes to retrieve rows from a database. The faster
the retrieval, the better.
Disadvantages of Indexes
Perhaps not so obvious is the disadvantage of using indexes.
• They must be updated every time a row is updated, deleted, or added to a table.
This is only a consideration for indexes other than the primary index in the Teradata
Database environment. The more indexes you have defined for a table, the bigger the
potential update downside becomes.
Because of this, secondary, join, and hash indexes are rarely appropriate for OLTP
situations.
• All Teradata Database secondary indexes are stored in subtables, and join and hash indexes
are stored in separate tables, exerting a burden on system storage space.Chapter 1: Objects
Indexes
SQL Reference: Fundamentals 19
• When FALLBACK is defined for a table, a further storage space burden is created because
secondary index subtables are always duplicated whenever FALLBACK is defined for a
table. An additional burden on system storage space is exerted when FALLBACK is defined
for join indexes or hash indexes or both.
For this reason, it is extremely important to use the EXPLAIN modifier to determine
optimum data manipulation statement syntax and index usage before putting statements and
indexes to work in a production environment. For more information on EXPLAIN, see SQL
Reference: Data Manipulation Statements.
Teradata Database Index Types
Teradata Database provides four different index types:
• Primary index
All Teradata Database tables require a primary index because the system distributes tables
on their primary indexes. Primary indexes can be:
• Unique or nonunique
• Partitioned or non-partitioned
• Secondary index
Secondary indexes can be unique or nonunique.
• Join index (JI)
• Hash index
Unique Indexes
A unique index, like a primary key, has a unique value for each row in a table.
Teradata Database defines two different types of unique index.
• Unique primary index (UPI)
UPIs provide optimal data distribution and are typically assigned to the primary key for a
table. When a NUPI makes better sense for a table, then the primary key is frequently
assigned to be a USI.
• Unique secondary index (USI)
USIs guarantee that each complete index value is unique, while ensuring that data access
based on it is always a two-AMP operation.
Nonunique Indexes
A nonunique index does not require its values to be unique. There are occasions when a
nonunique index is the best choice as the primary index for a table.
NUSIs are also very useful for many decision support situations.Chapter 1: Objects
Indexes
20 SQL Reference: Fundamentals
Partitioned and Non-Partitioned Primary Indexes
Primary indexes can be partitioned or non-partitioned.
A non-partitioned primary index (NPPI) is the traditional primary index by which rows are
assigned to AMPs.
A partitioned primary index (PPI) allows rows to be partitioned, based on some set of
columns, on the AMP to which they are distributed, and ordered by the hash of the primary
index columns within the partition.
A PPI can be used to improve query performance through partition elimination. A PPI
provides a useful alternative to an NPPI for executing range queries against a table, while still
providing efficient join and aggregation strategies on the primary index.
Join Indexes
A join index is an indexing structure containing columns from one or more base tables and is
generally used to resolve queries and eliminate the need to access and join the base tables it
represents.
Teradata Database join indexes can be defined in the following general ways.
• Simple or aggregate
• Single- or multitable
• Hash-ordered or value-ordered
• Complete or sparse
For details, see “Join Indexes” on page 30.
Hash Indexes
Hash indexes are used for the same purposes as are single-table join indexes, and are less
complicated to define. However, a join index offers more choices.
For additional information, see “Hash Indexes” on page 34.
Creating Indexes For a Table
Use the CREATE TABLE statement to define a primary index and one or more secondary
indexes. You can define the primary index (and any secondary index) as unique, depending on
whether duplicate values are to be allowed in the indexed column set. A partitioned primary
index cannot be defined as unique if one or more partitioning columns are not included in the
primary index.
To create hash or join indexes, use the CREATE HASH INDEX and CREATE JOIN INDEX
statements, respectively.Chapter 1: Objects
Indexes
SQL Reference: Fundamentals 21
Using EXPLAIN and Teradata Index Wizard to Determine the Usefulness of
Indexes
One important thing to remember is that the use of indexes by the optimizer is not under user
control in a relational database management system. That is, the only references made to
indexes in the SQL language concern their definition and not their use. The SQL data
manipulation language statements do not provide for any specification of indexes.
There are several implications of this behavior.
• First, it is very important to collect statistics regularly to ensure that the optimizer has
access to current information about how to best optimize any query or update made to the
database.
For additional information concerning collecting and maintaining accurate database
statistics, see “COLLECT STATISTICS” in SQL Reference: Data Definition Statements.
• Second, it is even more important to build your queries and updates in such a way that
their performance is optimal.
Apart from good logical database design, one way to ensure that you are accessing your
data in the most efficient manner possible is to use the EXPLAIN modifier to try out
various candidate queries or updates and to note which indexes are used by the optimizer
in their execution (if any) as well as examining the relative length of time required to
complete the operation.
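For example, a candidate query can be prefixed with the EXPLAIN modifier to see which indexes the optimizer would use (the table and column names here are illustrative):

```sql
-- Returns the optimizer's step-by-step plan instead of executing the query.
EXPLAIN
SELECT last_name, first_name
FROM employee
WHERE employee_number = 1023;
```

The report indicates, among other things, whether the retrieval uses the primary index, a secondary index, or a full table scan, along with a relative time estimate.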
There are several methods you can use to determine optimal sets of secondary indexes tailored
to particular application workloads:
• Teradata Index Wizard
• EXPLAIN reports
The Teradata Index Wizard client utility provides a method of determining optimum
secondary indexes for a given SQL statement workload automatically and then verifying that
the proposed indexes actually produce the expected performance enhancements.
See the following references for more information about the Teradata Index Wizard:
• Teradata Index Wizard User Guide
• SQL Reference: Statement and Transaction Processing
You can produce and analyze EXPLAIN reports using either the Teradata Visual Explain client
utility or the SQL EXPLAIN request modifier.
For each statement in the request, EXPLAIN output provides you with the following basic
information:
• The step-by-step access method the optimizer would use to execute the specified data
manipulation statement given the current set of table statistics it has to work with.
• The relative time it would take to perform the data manipulation statement.
While you cannot rely on the reported statement execution time as an absolute, you can
rely on it as a relative means for comparison with other candidate data manipulation
statements against the same tables with the same statistics defined.
Primary Indexes
Introduction
The primary index for a table controls the distribution and retrieval of the data for that table
across the AMPs. Both distribution and retrieval of the data are controlled using the Teradata
Database hashing algorithm (see “Row Hash and RowID” on page 17 and “Index Hash
Mapping” on page 18).
If the primary index is defined as a partitioned primary index (PPI), the data is partitioned,
based on some set of columns, on each AMP, and ordered by the hash of the primary index
columns within the partition.
Data accessed based on a primary index is always a one-AMP operation because a row and its
index are stored on the same AMP. This is true whether the primary index is unique or
nonunique, and whether it is partitioned or non-partitioned.
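For example, assuming a table defined with UNIQUE PRIMARY INDEX (employee_number), the following retrieval supplies the full primary index value and is therefore a one-AMP operation (the names are illustrative):

```sql
-- The value 1023 hashes directly to the single AMP that holds the row.
SELECT *
FROM employee
WHERE employee_number = 1023;
```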
Tables Require a Primary Index
All Teradata Database tables require a primary index. To create a primary index, use the
CREATE TABLE statement.
If you do not assign a primary index explicitly when you create a table, Teradata Database
assigns a primary index, based on the following rules.
FOR more information on …                              SEE …
using the EXPLAIN request modifier                     SQL Reference: Data Manipulation Statements
using the Teradata Visual Explain client utility       Teradata Visual Explain User Guide
additional performance-related information about       • Database Design
how to use the access and join plan reports            • Performance Management
produced by EXPLAIN to optimize the
performance of your databases
WHEN a CREATE TABLE statement defines a …             THEN Teradata Database selects the …
Primary      Primary      Unique Column
Index        Key          Constraint
No           Yes          No                          primary key column set to be a UPI.
No           No           Yes                         first column or columns having a UNIQUE
                                                      constraint to be a UPI.
No           Yes          Yes                         primary key column set to be a UPI.
In general, the best practice is to specify a primary index instead of having Teradata Database
select a default primary index.
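For example, under the rules above, the following illustrative table, which defines a primary key but no primary index, receives a UPI on the primary key column set:

```sql
CREATE TABLE orders (
   order_id   INTEGER NOT NULL PRIMARY KEY,
   order_date DATE
);
-- Equivalent in effect to specifying UNIQUE PRIMARY INDEX (order_id) explicitly,
-- which is the recommended practice.
```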
Uniform Distribution of Data and Optimal Access Considerations
When choosing the primary index for a table, there are two essential factors to keep in mind:
uniform distribution of the data and optimal access.
With respect to uniform data distribution, consider the following factors:
• The more distinct the primary index values, the better.
• Rows having the same primary index value are distributed to the same AMP.
• Parallel processing is more efficient when table rows are distributed evenly across the
AMPs.
With respect to optimal data access, consider the following factors:
• Choose the primary index on the most frequently used access path.
For example, if rows are generally accessed by a range query, consider defining a
partitioned primary index on the table that creates a useful set of partitions.
If the table is frequently joined with a specific set of tables, consider defining the primary
index on the column set that is typically used as the join condition.
• Primary index operations must provide the full primary index value.
• Primary index retrievals on a single value are always one-AMP operations.
While it is true that the columns you choose to be the primary index for a table are often the
same columns that define the primary key, it is also true that primary indexes often comprise
fields that are neither unique nor components of the primary key for the table.
Unique and Nonunique Primary Index Considerations
In addition to uniform distribution of data and optimal access considerations, other
guidelines and performance considerations apply to selecting a unique or a nonunique
column set as the primary index for a table.
When a CREATE TABLE statement defines no primary index, primary key, or unique column
constraint, Teradata Database selects the first column defined for the table to be a NUPI. If
the data type of the first column in the table is UDT or LOB, then the CREATE TABLE
operation aborts and the system returns an error message.
Generally, other considerations can include the following:
• Primary and other alternate key column sets
• The value range seen when using predicates in a WHERE clause
• Whether access can involve multiple rows or a spool file or both
For more information on criteria for selecting a primary index, see Database Design.
Partitioning Considerations
The decision to define a Partitioned Primary Index (PPI) for a table depends on how its rows
are most frequently accessed. PPIs are designed to optimize range queries while also providing
efficient primary index join strategies. For range queries, only rows of the qualified partitions
need to be accessed.
PPI increases query efficiency by avoiding full table scans without the overhead and
maintenance costs of secondary indexes.
Various partitioning strategies are possible:
• For some applications, defining the partitions such that each has approximately the same
number of rows might be an effective strategy.
• For other applications, it might be desirable to have a varying number of rows per
partition. For example, more frequently accessed data (such as for the current year) might
be divided into finer partitions (such as weeks) but other data (such as previous years)
may have coarser partitions (such as months or multiples of months).
• Alternatively, it might be important to define each range with equal width, even if the
number of rows per range varies.
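A partitioning expression of the kind described above might be sketched as follows (the table, columns, and date ranges are illustrative):

```sql
CREATE TABLE sales (
   store_id  INTEGER NOT NULL,
   sale_date DATE NOT NULL,
   amount    DECIMAL(10,2)
)
PRIMARY INDEX (store_id)
PARTITION BY RANGE_N (
   sale_date BETWEEN DATE '2005-01-01' AND DATE '2006-12-31'
             EACH INTERVAL '1' MONTH   -- one partition per month
);
```

A range query on sale_date then needs to read only the qualifying monthly partitions rather than the whole table.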
The most important factors for PPIs are accessibility and maximization of partition
elimination. In all cases, it is critical for parallel efficiency to define a primary index that
distributes the rows of the table fairly evenly across the AMPs.
For more information on partitioning considerations, see Database Design.
Primary Index Summary
Teradata Database primary indexes have the following properties.
• Defined with the CREATE TABLE data definition statement.
CREATE INDEX is used only to create secondary indexes.
• Modified with the ALTER TABLE data definition statement.
Some modifications, such as partitioning and primary index columns, require an empty
table.
• Automatically assigned by CREATE TABLE if you do not explicitly define a primary index.
However, the best practice is to always specify the primary index, because the default may
not be appropriate for the table.
• Can be composed of as many as 64 columns.
• A maximum of one can be defined per table.
• Can be partitioned or non-partitioned.
Partitioned primary indexes are not automatically assigned. You must explicitly define a
partitioned primary index.
• Can be unique or non-unique.
Note that a partitioned primary index can only be unique if all the partitioning columns
are also included as primary index columns. If the primary index does not include all the
partitioning columns, uniqueness on the primary index columns may be enforced with a
unique secondary index on the same columns as the primary index.
• Defined as nonunique if the primary index is not explicitly defined as unique or if the
primary index is specified for a single-column SET table.
• Controls data distribution and retrieval using the Teradata hashing algorithm.
• Improves performance when used correctly in the WHERE clause of an SQL data
manipulation statement to perform the following actions.
• Single-AMP retrievals
• Joins between tables with identical primary indexes, the optimal scenario
• Partition elimination when the primary index is partitioned
Related Topics
Consult the following books for more detailed information on using primary indexes to
enhance the performance of your databases:
• Database Design
• Performance Management
Secondary Indexes
Introduction
Secondary indexes are never required for Teradata Database tables, but they can often improve
system performance.
You create secondary indexes explicitly using the CREATE TABLE and CREATE INDEX
statements. Teradata Database can implicitly create unique secondary indexes; for example,
when you use a CREATE TABLE statement that specifies a primary index, Teradata Database
implicitly creates unique secondary indexes on column sets that you specify using PRIMARY
KEY or UNIQUE constraints.
Creating a secondary index causes the Teradata Database to build a separate internal subtable
to contain the index rows, thus adding another set of rows that requires updating each time a
table row is inserted, deleted, or updated.
Nonunique secondary indexes (NUSIs) can be specified as either hash-ordered or
value-ordered. Value-ordered NUSIs are limited to a single numeric-valued (including DATE)
sort key whose size is four or fewer bytes.
Secondary index subtables are also duplicated whenever a table is defined with FALLBACK.
After the table is created and usage patterns have developed, additional secondary indexes can
be defined with the CREATE INDEX statement.
Differences Between Unique and Nonunique Secondary Indexes
Teradata Database processes USIs and NUSIs very differently.
Consider the following statements that define a USI and a NUSI.

   USI:  CREATE UNIQUE INDEX (customer_number) ON customer_table;
   NUSI: CREATE INDEX (customer_name) ON customer_table;

The build processes for the preceding statements differ as follows.
USI Build Process
1. Each AMP accesses its subset of the base table rows.
2. Each AMP copies the secondary index value and appends the RowID for the base table row.
3. Each AMP creates a row hash on the secondary index value and puts all three values onto
   the BYNET.
4. The appropriate AMP receives the data and creates a row in the index subtable. If the
   AMP receives a row with a duplicate index value, an error is reported.

NUSI Build Process
1. Each AMP accesses its subset of the base table rows.
2. Each AMP builds a spool file containing each secondary index value found, followed by
   the RowID for the row it came from.
3. For hash-ordered NUSIs, each AMP sorts the RowIDs for each secondary index value into
   ascending order. For value-ordered NUSIs, the rows are sorted by NUSI value order.
4. For hash-ordered NUSIs, each AMP creates a row hash value for each secondary index
   value on a local basis and creates a row in its portion of the index subtable. For
   value-ordered NUSIs, storage is based on NUSI value rather than the row hash value for
   the secondary index. Each row contains one or more RowIDs for the index value.
Consider the following statements that access a USI and a NUSI.

   USI:  SELECT * FROM customer_table WHERE customer_number = 12;
   NUSI: SELECT * FROM customer_table WHERE customer_name = 'SMITH';

The access processes for the preceding statements differ as follows.

USI Access Process
1. The supplied index value hashes to the corresponding secondary index row.
2. The retrieved base table RowID is used to access the specific data row.
3. The process is complete. This is typically a two-AMP operation.

NUSI Access Process
1. A message containing the secondary index value is broadcast to every AMP.
2. For a hash-ordered NUSI, each AMP creates a local row hash and uses it to access its
   portion of the index subtable to see if a corresponding row exists. Value-ordered NUSI
   index subtable values are scanned only for the range of values specified by the query.
3. If an index row is found, the AMP uses the RowID or value order list to access the
   corresponding base table rows.
4. The process is complete. This is always an all-AMP operation, with the exception of a
   NUSI that is defined on the same columns as the primary index.

Note: The NUSI is not used if the estimated number of rows to be read in the base table is
equal to or greater than the estimated number of data blocks in the base table; in this case, a
full table scan is done, or, if appropriate, partition scans are done.

NUSIs and Covering
The Optimizer aggressively pursues NUSIs when they cover a query. Covered columns can be
specified anywhere in the query, including the select list, the WHERE clause, aggregate
functions, GROUP BY clauses, expressions, and so on. Presence of a WHERE condition on
each indexed column is not a prerequisite for using a NUSI to cover a query.

Value-Ordered NUSIs
Value-ordered NUSIs are very efficient for range conditions, and more so when strongly
selective or when combined with covering. Because the NUSI rows are sorted by data value, it
is possible to search only a portion of the index subtable for a given range of key values.
Value-ordered NUSIs have the following limitations.
• The sort key is limited to a single numeric or DATE column.
• The sort key column must be four or fewer bytes.
The following query is an example of the sort of SELECT statement for which value-ordered
NUSIs were designed.
SELECT *
FROM Orders
WHERE o_date BETWEEN DATE '1998-10-01' AND DATE '1998-10-07';
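A value-ordered NUSI supporting that query might be defined as follows, assuming o_date is a DATE column on the Orders table:

```sql
-- ORDER BY VALUES stores the index rows in o_date order rather than hash order,
-- so only the index rows for the requested date range need to be scanned.
CREATE INDEX (o_date) ORDER BY VALUES (o_date) ON Orders;
```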
Multiple Secondary Indexes and Composites
Database designers frequently define multiple secondary indexes on a table.
For example, the following statements define two secondary indexes on the EMPLOYEE table:
CREATE INDEX (department_number) ON EMPLOYEE;
CREATE INDEX (job_code) ON EMPLOYEE;
The WHERE clause in the following query specifies the columns that have the secondary
indexes defined on them:
SELECT last_name, first_name, salary_amount
FROM employee
WHERE department_number = 500
AND job_code = 2147;
Whether the Optimizer chooses to include one, all, or none of the secondary indexes in its
query plan depends entirely on their individual and composite selectivity.
NUSI Bit Mapping
Bit mapping is a technique used by the Optimizer to effectively link several weakly selective
indexes in a way that creates a result that drastically reduces the number of base rows that
must be accessed to retrieve the desired data. The process determines common rowIDs among
multiple NUSI values by means of the logical intersection operation.
Bit mapping is significantly faster than the three-part process of copying, sorting, and
comparing rowID lists. Additionally, the technique dramatically reduces the number of base
table I/Os required to retrieve the requested rows.
FOR more information on …              SEE …
multiple secondary index access        Database Design
composite secondary index access       Database Design
other aspects of index selection       Database Design
Secondary Index Summary
Teradata SQL secondary indexes have the following properties.
• Can enhance the speed of data retrieval.
Because of this, secondary indexes are most useful in decision support applications.
• Do not affect data distribution.
• Can be a maximum of 32 defined per table.
• Can be composed of as many as 64 columns.
• For a value-ordered NUSI, only a single numeric or DATE column of four or fewer bytes
may be specified for the sort key.
• For a hash-ordered covering index, only a single column may be specified for the hash
ordering.
• Can be created or dropped dynamically as data usage changes or if they are found not to be
useful for optimizing data retrieval performance.
• Require additional disk space to store subtables.
• Require additional I/Os on inserts and deletes.
Because of this, secondary indexes might not be as useful in OLTP applications.
• Should not be defined on columns whose values change frequently.
• Should not include columns that do not enhance selectivity.
• Should not use composite secondary indexes when multiple single column indexes and bit
mapping might be used instead.
• A composite secondary index is useful if it reduces the number of rows that must be
accessed.
• The Optimizer does not use composite secondary indexes unless there are explicit values
for each column in the index.
• Most efficient for selecting a small number of rows.
• Can be unique or non-unique.
• NUSIs can be hash-ordered or value-ordered, and can optionally include covering columns.
• Cannot be partitioned, but can be defined on a table with a partitioned primary index.
FOR more information on …                              SEE …
when Teradata Database performs NUSI bit mapping       Database Design
how NUSI bit maps are computed                         Database Design
using the EXPLAIN modifier to determine if bit         • Database Design
mapping is being used for your indexes                 • SQL Reference: Data Manipulation Statements
Summary of USI and NUSI Properties
Unique and nonunique secondary indexes have the following properties.

USI
• Guarantees that each complete index value is unique.
• Any access using the index is a two-AMP operation.

NUSI
• Useful for locating rows having a specific value in the index.
• Can be hash-ordered or value-ordered. Value-ordered NUSIs are particularly useful for
  enhancing the performance of range queries.
• Can include covering columns.
• Any access using the index is an all-AMP operation.

For More Information About Secondary Indexes
See “CREATE TABLE” and “CREATE INDEX” in “SQL Data Definition Language Statement
Syntax” of SQL Reference: Data Definition Statements for more information.
Also consult the following manuals for more detailed information on using secondary indexes
to enhance the performance of your databases:
• Database Design
• Performance Management

Join Indexes

Introduction
Join indexes are not indexes in the usual sense of the word. They are file structures designed to
permit queries (join queries in the case of multitable join indexes) to be resolved by accessing
the index instead of having to access and join their underlying base tables.
You can use join indexes to:
• Define a prejoin table on frequently joined columns (with optional aggregation) without
denormalizing the database.
• Create a full or partial replication of a base table, with a primary index on a foreign key
column, to facilitate joins of very large tables by hashing their rows to the same AMP
as the large table.
• Define a summary table without denormalizing the database.
You can define a join index on one or several tables.
Depending on how the index is defined, join indexes can also be useful for queries where the
index structure contains only some of the columns referenced in the statement. This situation
is referred to as a partial cover of the query.
Unlike traditional indexes, join indexes do not implicitly store pointers to their associated base
table rows. Instead, they are generally used as a fast path final access point that eliminates the
need to access and join the base tables they represent. They substitute for rather than point to
base table rows. The only exception to this is the case where an index partially covers a query.
If the index is defined using either the ROWID keyword or the UPI or USI of its base table as
one of its columns, then it can be used to join with the base table to cover the query.
Defining Join Indexes
To create a join index, use the CREATE JOIN INDEX statement.
For example, suppose that a common task is to look up customer orders by customer number
and date. You might create a join index like the following, linking the customer table, the
order table, and the order detail table:
CREATE JOIN INDEX cust_ord2
AS SELECT cust.customerid,cust.loc,ord.ordid,item,qty,odate
FROM cust, ord, orditm
WHERE cust.customerid = ord.customerid
AND ord.ordid = orditm.ordid;
Multitable Join Indexes
A multitable join index stores and maintains the joined rows of two or more tables and,
optionally, aggregates selected columns.
Multitable join indexes are for join queries that are performed frequently enough to justify
defining a prejoin on the joined columns.
A multitable join index is useful for queries where the index structure contains all the columns
referenced by one or more joins, thereby allowing the index to cover that part of the query,
making it possible to retrieve the requested data from the index rather than accessing its
underlying base tables. For obvious reasons, an index with this property is often referred to as
a covering index.
Single-Table Join Indexes
Single-table join indexes are very useful for resolving joins on large tables without having to
redistribute the joined rows across the AMPs.
Single-table join indexes facilitate joins by hashing a frequently joined subset of base table
columns to the same AMP as the table rows to which they are frequently joined. This
enhanced geography eliminates BYNET traffic as well as often providing a smaller sized row to
be read and joined.
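For example, building on the cust/ord tables used above, a single-table join index can rehash a subset of order columns on the joining column (this is a sketch; the index name and column list are illustrative):

```sql
-- Index rows are hashed on customerid, so they land on the same AMP as the
-- cust rows they join to, avoiding row redistribution at join time.
CREATE JOIN INDEX ord_by_cust AS
SELECT customerid, ordid, odate
FROM ord
PRIMARY INDEX (customerid);
```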
Aggregate Join Indexes
When query performance is of utmost importance, aggregate join indexes offer an extremely
efficient, cost-effective method of resolving queries that frequently specify the same aggregate
operations on the same column or columns. When aggregate join indexes are available, the
system does not have to repeat aggregate calculations for every query.
You can define an aggregate join index on two or more tables, or on a single table. A
single-table aggregate join index includes a summary table with:
• A subset of columns from a base table
• Additional columns for the aggregate summaries of the base table columns
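A single-table aggregate join index of this kind might be sketched as follows (the table and column names are illustrative):

```sql
-- Precomputes monthly totals so queries that aggregate amount by store and
-- month can be answered from the index instead of re-aggregating the base table.
CREATE JOIN INDEX monthly_sales AS
SELECT store_id,
       EXTRACT(MONTH FROM sale_date) AS sale_month,
       SUM(amount) AS total_amount
FROM sales
GROUP BY 1, 2;
```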
Sparse Join Indexes
You can create join indexes that limit the number of rows in the index to only those that are
accessed when, for example, a frequently run query references only a small, well known subset
of the rows of a large base table. By using a constant expression to filter the rows included in
the join index, you can create what is known as a sparse index.
Any join index, whether simple or aggregate, multitable or single-table, can be sparse.
To create a sparse index, use the WHERE clause in the CREATE JOIN INDEX statement.
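For example, restricting the index to recent rows with a constant expression in the WHERE clause produces a sparse join index (the names and date are illustrative):

```sql
-- Only rows satisfying the WHERE condition are stored in the index,
-- keeping it small when queries touch a well-known subset of the table.
CREATE JOIN INDEX curr_orders AS
SELECT ordid, customerid, odate
FROM ord
WHERE odate >= DATE '2006-01-01';
```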
Effects of Join Indexes
Join indexes affect the following Teradata Database functions and features.
• Load Utilities
MultiLoad and FastLoad utilities cannot be used to load or unload data into base tables
that have a join index defined on them because join indexes are not maintained during the
execution of these utilities. If an error occurs because of the join index, drop the join index
and recreate it after loading data into that table.
The TPump utility, which performs standard SQL row inserts and updates, can be used to
load or unload data into base tables with join indexes because it properly maintains join
indexes during execution. However, in some cases, performance may improve by dropping
join indexes on the table prior to the load and recreating them after the load.
• ARC (Archive and Recovery)
Archive and Recovery cannot be used on a join index itself. Archiving is permitted on a
base table or database that has an associated join index defined. Before a restore of such a
base table or database, you must drop the existing join index definition. Before using any
such index again in the execution of queries, you must recreate the join index definition.
• Permanent Journal Recovery
Using a permanent journal to recover a base table (that is, ROLLBACK or
ROLLFORWARD) with an associated join index defined is permitted. The join index is
not automatically rebuilt during the recovery process. Instead, it is marked as non-valid
and it must be dropped and recreated before it can be used again in the execution of
queries.
Comparison of Join Indexes and Base Tables
In most respects, a join index is similar to a base table. For example, you can do the following
things to a join index:
• Create nonunique secondary indexes on its columns.
• Execute COLLECT STATISTICS, DROP STATISTICS, HELP, and SHOW statements.
• Partition its primary index, if it is a non-compressed join index.
Note: Unlike a base table that has a PPI, however, you cannot use COLLECT STATISTICS
to collect PARTITION statistics on a non-compressed join index that has a PPI.
Unlike base tables, you cannot do the following things with join indexes:
• Query or update join index rows explicitly.
• Store and maintain arbitrary query results such as expressions.
Note: You can maintain aggregates or sparse indexes if you define the join index to do so.
• Create explicit unique indexes on their columns.
Related Topics
FOR more information on …                            SEE …
creating join indexes                                “CREATE JOIN INDEX” in SQL Reference:
                                                     Data Definition Statements
dropping join indexes                                “DROP JOIN INDEX” in SQL Reference:
                                                     Data Definition Statements
displaying the attributes of the columns defined     “HELP JOIN INDEX” in SQL Reference:
by a join index                                      Data Definition Statements
using join indexes to enhance the performance        • Database Design
of your databases                                    • Performance Management
                                                     • SQL Reference: Data Definition Statements
database design considerations for join indexes,     Database Design
and improving join index performance
Hash Indexes

Introduction
Hash indexes are used for the same purposes as single-table join indexes. The following list
contrasts the principal differences between hash indexes and single-table join indexes:
• Column list: a hash index cannot contain aggregate or ordered analytical functions; a
  single-table join index can contain aggregate functions.
• Secondary indexes: a hash index cannot have a secondary index; a single-table join index
  can.
• Base table row pointers: a hash index supports transparently added, system-defined
  columns that point to the underlying base table rows; a single-table join index does not
  implicitly add underlying base table row pointers, although pointers can be created
  explicitly by defining one element of the column list using the ROWID keyword or the
  UPI or USI of the base table.
Hash indexes are useful for creating a full or partial replication of a base table with a primary
index on a foreign key column to facilitate joins of very large tables by hashing them to the
same AMP.
You can define a hash index on one table only. The functionality of hash indexes is a subset of
that of single-table join indexes.

Comparison of Hash and Single-Table Join Indexes
The reasons for using hash indexes are similar to those for using single-table join indexes. Not
only can hash indexes optionally be specified to be distributed in such a way that their rows
are AMP-local with their associated base table rows, they also implicitly provide an alternate
direct access path to those base table rows. This facility makes hash indexes somewhat similar
to secondary indexes in function. Hash indexes are also useful for covering queries so that the
base table need not be accessed at all.

FOR information on …                                 SEE …
using CREATE HASH INDEX to create a hash index       SQL Reference: Data Definition Statements
using DROP HASH INDEX to drop a hash index           SQL Reference: Data Definition Statements
using HELP HASH INDEX to display the data types      SQL Reference: Data Definition Statements
of the columns defined by a hash index
database design considerations for hash indexes      Database Design
The following list summarizes the similarities of hash and single-table join indexes:
• Primary function of both is to improve query performance.
• Both are maintained automatically by the system when the relevant columns of their base
table are updated by a DELETE, INSERT, UPDATE, or MERGE statement.
• Both can be the object of any of the following SQL statements:
• COLLECT STATISTICS
• DROP STATISTICS
• HELP INDEX
• SHOW
• Both receive their space allocation from permanent space and are stored in distinct tables.
• The storage organization for both supports a compressed format to reduce storage space,
but for a hash index, Teradata Database makes this decision.
• Both can be FALLBACK protected.
• Neither can be queried or directly updated.
• Neither can store an arbitrary query result.
• Both share the same restrictions for use with the MultiLoad, FastLoad, and
Archive/Recovery utilities.
• A hash index implicitly defines a direct access path to base table rows. A join index may be
explicitly specified to define a direct access path to base table rows.
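A hash index with AMP-local distribution might be sketched as follows (the index name, tables, and columns are illustrative):

```sql
-- Replicates two ord columns, distributed BY customerid so the index rows
-- are AMP-local with the cust rows they are typically joined to.
CREATE HASH INDEX ord_hash (customerid, ordid)
ON ord
BY (customerid)
ORDER BY HASH (customerid);
```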
Effects of Hash Indexes
Hash indexes affect the following Teradata Database functions and features.
• ARC (Archive and Recovery)
Archive and Recovery cannot be used on a hash index itself. Archiving is permitted on a
base table or database that has an associated hash index defined. During a restore of such a
base table or database, the system does not rebuild the hash index. You must drop the
existing hash index definition and create a new one before any such index can be used
again in the execution of queries.
• Load Utilities
MultiLoad and FastLoad utilities cannot be used to load or unload data into base tables
that have an associated hash index defined on them because hash indexes are not
maintained during the execution of these utilities. The hash index must be dropped and
recreated after that table has been loaded.
The TPump utility, which performs standard SQL row inserts and updates, can be used
because hash indexes are properly maintained during its execution. However, in some
cases, performance may improve by dropping hash indexes on the table prior to the load
and recreating them after the load.
• Permanent Journal Recovery
Using a permanent journal to recover a base table using ROLLBACK or ROLLFORWARD
with an associated hash index defined is permitted. The hash index is not automatically
rebuilt during the recovery process. Instead, the hash index is marked as non-valid and it
must be dropped and recreated before it can be used again in the execution of queries.
Queries Using a Hash Index
In most respects, a hash index is similar to a base table. For example, you can perform
COLLECT STATISTICS, DROP STATISTICS, HELP, and SHOW statements on a hash index.
Unlike base tables, you cannot do the following things with hash indexes:
• Query or update hash index rows explicitly.
• Store and maintain arbitrary query results such as expressions.
• Create explicit unique indexes on its columns.
• Partition the primary index of the hash index.
For More Information About Hash Indexes
Consult the following manuals for more detailed information on using hash indexes to
enhance the performance of your databases:
• Database Design
• Performance Management
• SQL Reference: Data Definition Statements
Referential Integrity
Introduction
Referential integrity (RI) is defined by all of the following notions.
• The concept of relationships between tables, based on the definition of a primary key (or
UNIQUE alternate key) and a foreign key.
• A mechanism that provides for specification of columns within a referencing table that are
foreign keys for columns in some other referenced table.
Referenced columns must be defined as one of the following.
• Primary key columns
• Unique columns
• A reliable mechanism for preventing accidental database corruption when performing
inserts, updates, and deletes.
Referential integrity requires that a row having a non-null value for a referencing column
cannot exist in a table if an equal value does not exist in a referenced column.
Varieties of Referential Integrity Enforcement Supported by Teradata Database
Teradata Database supports two forms of declarative SQL for enforcing referential integrity:
• A standard method that enforces RI on a row-by-row basis
• A batch method that enforces RI on a statement basis
Both methods offer the same measure of integrity enforcement, but perform it in different
ways.
A third form is related to these because it provides a declarative definition for a referential
relationship, but it does not enforce that relationship. Enforcement of the declared referential
relationship is left to the user by any appropriate method.
Referencing (Child) Table
The referencing table is referred to as the child table, and the specified child table columns are
the referencing columns.
Note: Referencing columns must have the same number of columns, data types, and case
sensitivity as the referenced table keys. COMPRESS is not allowed on either referenced or
referencing columns, and column-level constraints are not compared.
Referenced (Parent) Table
A child table must have a parent, and the referenced table is referred to as the parent table.
The parent key columns in the parent table are the referenced columns.
Because the referenced columns are defined as unique constraints, they must be one of the
following unique indexes.
• A unique primary index (UPI), defined as NOT NULL
• A unique secondary index (USI), defined as NOT NULL
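As a sketch (the table and column names are hypothetical), a parent key declared as a UPI with NOT NULL, and a child table that references it:

```sql
-- Parent: the referenced column is a unique primary index defined NOT NULL.
CREATE TABLE Department
  (DeptNo INTEGER NOT NULL,
   DeptName CHAR(30))
UNIQUE PRIMARY INDEX (DeptNo);

-- Child: DeptNo is a foreign key referencing the parent key.
CREATE TABLE Employee
  (EmpNo INTEGER NOT NULL,
   DeptNo INTEGER REFERENCES Department (DeptNo))
UNIQUE PRIMARY INDEX (EmpNo);
```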
Terms Related to Referential Integrity
The following terms are used to explain the concept of referential integrity.
• Child Table: A table where the referential constraints are defined. Child table and
referencing table are synonyms.
• Parent Table: The table referenced by a child table. Parent table and referenced table are
synonyms.
• Primary Key: A unique identifier for a row of a table.
• UNIQUE Alternate Key: A unique identifier for a row of a table other than the primary
key.
• Foreign Key: A column set in the child table that is also the primary key (or a UNIQUE
alternate key) in the parent table. Foreign keys can consist of as many as 64 different
columns.
• Referential Constraint: A constraint defined on a column set or a table to ensure
referential integrity. For example, consider the following table definition:
CREATE TABLE A
  (A1 CHAR(10) REFERENCES B (B1),
   A2 INTEGER,
   FOREIGN KEY (A1,A2) REFERENCES C)
PRIMARY INDEX (A1);
This CREATE TABLE statement specifies the following referential integrity constraints:
• Constraint 1 is defined at the column level: the implicit foreign key A1 references the
parent key B1 in table B.
• Constraint 2 is defined at the table level: the explicit composite foreign key (A1, A2)
implicitly references the UPI (or a USI) of parent table C, which must be two columns,
the first typed CHAR(10) and the second typed INTEGER. Both parent table columns
must also be defined as NOT NULL.
Why Referential Integrity Is Important
Consider the employee and payroll tables for any business.
With referential integrity constraints, the two tables work together as one. When one table gets
updated, the other table also gets updated.
The following case depicts a useful referential integrity scenario.
Looking for a better career, Mr. Clark Johnson leaves his company. Clark Johnson is deleted
from the employee table.
The payroll table, however, does not get updated because the payroll clerk simply forgets to do
so. Consequently, Mr. Clark Johnson keeps getting paid.
With good database design, a referential integrity relationship would have been defined on
these tables. They would have been linked and, depending on the defined constraints, the
deletion of Clark Johnson from the employee table could not have been performed unless it
was accompanied by the deletion of Clark Johnson from the payroll table.
Besides data integrity and data consistency, referential integrity also has the benefits listed in
the following table.
Rules for Assigning Columns as FOREIGN KEYS
The FOREIGN KEY columns in the referencing table must be identical in definition with the
keys in the referenced table. Corresponding columns must have the same data type and case
sensitivity.
• The COMPRESS option is not permitted on either the referenced or referencing
column(s).
• Column level constraints are not compared.
• A one-column FOREIGN KEY cannot reference a single column in a multi-column
primary or unique key—the foreign and primary/unique key must contain the same
number of columns.
Circular References Are Allowed
References can be defined as circular in that TableA can reference TableB, which can reference
TableA. In this case, at least one set of FOREIGN KEYS must be defined on nullable columns.
If the FOREIGN KEYS in TableA are on columns defined as nullable, then rows could be
inserted into TableA with nulls for the FOREIGN KEY columns. Once the appropriate rows
exist in TableB, the nulls of the FOREIGN KEY columns in TableA could then be updated to
contain non-null values which match the TableB values.
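The sequence described above can be sketched as follows. TableA, TableB, and their columns are hypothetical, and the circular references are added with ALTER TABLE once both tables exist:

```sql
-- TableA's foreign key column B1 is nullable; TableB's is not.
CREATE TABLE TableA
  (A1 INTEGER NOT NULL,
   B1 INTEGER)
UNIQUE PRIMARY INDEX (A1);

CREATE TABLE TableB
  (B1 INTEGER NOT NULL,
   A1 INTEGER NOT NULL)
UNIQUE PRIMARY INDEX (B1);

-- Add the circular references after both tables exist.
ALTER TABLE TableA ADD FOREIGN KEY (B1) REFERENCES TableB (B1);
ALTER TABLE TableB ADD FOREIGN KEY (A1) REFERENCES TableA (A1);

-- Insert into TableA with a null foreign key, satisfy it from TableB,
-- then update the null to a matching non-null value.
INSERT INTO TableA (A1, B1) VALUES (1, NULL);
INSERT INTO TableB (B1, A1) VALUES (10, 1);
UPDATE TableA SET B1 = 10 WHERE A1 = 1;
```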
References Can Be to the Table Itself
FOREIGN KEY references can also be to the same table that contains the FOREIGN KEY.
The referenced columns must be different columns than the FOREIGN KEY, and both the
referenced and referencing columns must subscribe to the referential integrity rules.
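For example, in a hypothetical Emp table each row may reference its manager's row:

```sql
-- EmpNo, the referenced column, is a UPI defined NOT NULL;
-- MgrEmpNo, the foreign key, is a different, nullable column.
CREATE TABLE Emp
  (EmpNo INTEGER NOT NULL,
   MgrEmpNo INTEGER)
UNIQUE PRIMARY INDEX (EmpNo);

ALTER TABLE Emp ADD FOREIGN KEY (MgrEmpNo) REFERENCES Emp (EmpNo);
```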
• Increases development productivity: It is not necessary to code SQL statements to enforce
referential constraints. The Teradata Database automatically enforces referential integrity.
• Requires fewer programs to be written: All update activities are programmed to ensure
that referential constraints are not violated. The Teradata Database enforces referential
integrity in all environments; no additional programs are required.
• Improves performance: The Teradata Database chooses the most efficient method to
enforce the referential constraints, and can optimize queries based on the fact that
referential integrity exists.
CREATE and ALTER TABLE Syntax
Referential integrity affects the syntax and semantics of CREATE TABLE and ALTER TABLE.
For more details, see “ALTER TABLE” and “CREATE TABLE” in SQL Reference: Data
Definition Statements.
Maintaining Foreign Keys
Definition of a FOREIGN KEY requires that the Teradata Database maintain the integrity
defined between the referenced and referencing table.
The Teradata Database maintains the integrity of foreign keys as explained in the following
table.
• When a row is inserted into a referencing table and the foreign key columns are defined to
be NOT NULL, the system verifies that a row exists in the referenced table with the same
values as those in the foreign key columns. If such a row does not exist, then an error is
returned. If the foreign key contains multiple columns, and if any one column value of the
foreign key is null, then none of the foreign key values are validated.
• When the values in foreign key columns are altered to be non-null, the system verifies that
a row exists in the referenced table that contains values equal to the altered values of all of
the foreign key columns. If such a row does not exist, then an error is returned.
• When a row is deleted from a referenced table, the system verifies that no rows exist in
referencing tables with foreign key values equal to those of the row to be deleted. If such
rows exist, then an error is returned.
• Before a referenced column in a referenced table is updated, the system verifies that no
rows exist in a referencing table with foreign key values equal to those of the referenced
columns. If such rows exist, then an error is returned.
• Before the structure of columns defined as foreign keys or referenced by foreign keys is
altered, the system verifies that the change would not violate the rules for definition of a
foreign key constraint. An ALTER TABLE or DROP INDEX statement attempting to
change such a column's structure returns an error.
• When a table referenced by another is dropped, the system verifies that the referencing
table has dropped its foreign key reference to the referenced table.
• When an ALTER TABLE statement adds a foreign key reference to a table, the system
validates all of the values in the foreign key columns against columns in the referenced
table. The same processes occur whether the reference is defined for standard or for soft
referential integrity. When the system parses ALTER TABLE, it defines an error table that:
• Has the same columns and primary index as the target table of the ALTER TABLE
statement.
• Has a name that is the same as the target table name suffixed with the reference index
number. A reference index number is assigned to each foreign key constraint for a
table. To determine the number, use one of the following system views:
RI_Child_Tables, RI_Distinct_Children, RI_Distinct_Parents, or RI_Parent_Tables.
• Is created under the same user or database as the table being altered. If a table already
exists with the same name as that generated for the error table, then an error is
returned to the ALTER TABLE statement.
Rows in the referencing table that contain values in the foreign key columns that cannot
be found in any row of the referenced table are copied into the error table (the base data
of the target table is not modified). It is your responsibility to:
• Correct data values in the referenced or referencing tables so that full referential
integrity exists between the two tables. Use the rows in the error table to define which
corrections to make.
• Maintain the error table.
Referential Integrity and the ARC Utility
The Archive (ARC) utility archives and restores individual tables. It also copies tables from
one database to another.
When a table is restored or copied into a database, the dictionary definition of that table is also
restored. The dictionary definitions of both the referenced (parent) and referencing (child)
table contain the complete definition of a reference.
By restoring a single table, it is possible to create an inconsistent reference definition in the
Teradata Database. When either a parent or child table is restored, the reference is marked as
inconsistent in the dictionary definitions. The ARC utility can validate these references once
the restore is done.
While a table is marked as inconsistent, no updates, inserts, or deletes are permitted. The table
is fully usable only when the inconsistencies are resolved (see below). This restriction is true
for both hard and soft (Referential Constraint) referential integrity constraints.
It is possible that the user either intends to or must revert to a definition of a table which
results in an inconsistent reference on that table. The Archive and Restore operations are the
most common cause of such inconsistencies.
To remove inconsistent references from a child table that is archived and restored, follow these
steps:
1 After archiving the child table, drop the parent table.
2 Restore the child table.
When the child table is restored, the parent table no longer exists. The normal ALTER
TABLE DROP FOREIGN KEY statement does not work, because the parent table
references cannot be resolved.
3 Use the DROP INCONSISTENT REFERENCES option to remove these inconsistent
references from a table.
The syntax is:
ALTER TABLE database_name.table_name DROP INCONSISTENT REFERENCES
You must have DROP privileges on the target table of the statement to perform this
option, which removes all inconsistent internal indexes used to establish references.
For further information, see Teradata Archive/Recovery Utility Reference or Teradata ASF2
Tape Reader User Guide.
Referential Integrity and the FastLoad and MultiLoad Utilities
Foreign key references are not supported for any table that is the target table for a FastLoad or
MultiLoad.
For further details, see:
• Database Design
• Teradata FastLoad Reference
• Teradata MultiLoad Reference
Views
Views and Tables
A view can be compared to a window through which you can see selected portions of a
database. Views are used to retrieve portions of one or more tables or other views.
Views look like tables to a user, but they are virtual, not physical, tables. They display data in
columns and rows and, in general, can be used as if they were physical tables. However, only
the column definitions for a view are stored: views are not physical tables.
A view does not contain data: it is a virtual table whose definition is stored in the data
dictionary. The view is not materialized until it is referenced by a statement. Some operations
that are permitted for the manipulation of tables are not valid for views, and other operations
are restricted, depending on the view definition.
Defining a View
The CREATE VIEW statement defines a view. The statement names the view and its columns,
defines a SELECT on one or more columns from one or more underlying tables and/or views,
and can include conditional expressions and aggregate operators to limit the row retrieval.
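A minimal sketch (the view, table, and column names are hypothetical):

```sql
-- The column list names every view column, including the derived
-- YearlySalary column, and the WHERE clause limits row retrieval.
CREATE VIEW EmpView (EmpNo, DeptNo, YearlySalary) AS
SELECT EmpNo, DeptNo, Salary_Amount * 12
FROM Employee
WHERE DeptNo = 34;
```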
Why Use Views?
The primary reason to use views is to simplify end user access to the Teradata Database. Views
provide a constant vantage point from which to examine and manipulate the database. Their
perspective is altered neither by adding nor by dropping columns from their component base
tables unless those columns are part of the view definition.
From an administrative perspective, views are useful for providing an easily maintained level
of security and authorization. For example, users in a Human Resources department can
access tables containing sensitive payroll information without being able to see salary and
bonus columns. Views also provide administrators with an ability to control read and update
privileges on the database with little effort.
Restrictions on Views
Some operations that are permitted on base tables are not permitted on views—sometimes for
obvious reasons and sometimes not.
The following set of rules outlines the restrictions on how views can be created and used.
• You cannot create an index on a view.
• A view definition cannot contain an ORDER BY clause.
• Any derived columns in a view must explicitly specify view column names, for example by
using an AS clause or by providing a column list immediately after the view name.
• You cannot update tables from a view under the following circumstances:
• The view is defined as a join view (defined on more than one table).
• The view contains derived columns.
• The view definition contains a DISTINCT clause.
• The view definition contains a GROUP BY clause.
• The view defines the same column more than once.
Triggers
Definition
Triggers are active database objects associated with a subject table. A trigger essentially consists
of a stored SQL statement or a block of SQL statements.
Triggers execute when an INSERT, UPDATE, DELETE, or MERGE modifies a specified
column or columns in the subject table.
Typically, a stored trigger performs an UPDATE, INSERT, DELETE, MERGE, or other SQL
operation on one or more tables, which may possibly include the subject table.
Triggers in Teradata Database conform to the ANSI SQL-2003 standard, and also provide
some additional features.
Triggers have two types of granularity:
• Row triggers fire once for each row of the subject table that is changed by the triggering
event and that satisfies any qualifying condition included in the row trigger definition.
• Statement triggers fire once upon the execution of the triggering statement.
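As a sketch (the audit table and column names are hypothetical), a statement trigger that records deleted rows once per triggering DELETE:

```sql
-- Fires once per DELETE statement on Employee; the OLD TABLE alias
-- exposes the entire set of deleted rows to the triggered action.
CREATE TRIGGER EmpDelAudit
AFTER DELETE ON Employee
REFERENCING OLD TABLE AS DeletedRows
FOR EACH STATEMENT
  (INSERT INTO Employee_Audit (EmpNo, DeptNo)
   SELECT EmpNo, DeptNo FROM DeletedRows;);
```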
You can create, alter, and drop triggers:
• To define a trigger, use CREATE TRIGGER.
• To enable a trigger, disable a trigger, or change the creation timestamp for a trigger, use
ALTER TRIGGER. Disabling a trigger stops the trigger from functioning, but leaves the
trigger definition in place as an object. This allows utility operations on a table that are
not permitted on tables with enabled triggers. Enabling a trigger restores its active state.
• To remove a trigger from the system permanently, use DROP TRIGGER.
For details on creating, dropping, and altering triggers, see SQL Reference: Data Definition
Statements.
Process Flow for a Trigger
The general process flow for a trigger is as follows. Note that this is a logical flow, not a
physical re-enactment of how the Teradata Database processes a trigger.
1 The triggering event occurs on the subject table.
2 A determination is made as to whether triggers defined on the subject table are to become
active upon the triggering event.
3 Qualified triggers are examined to determine the trigger action time, that is, whether they
are defined to fire before or after the triggering event.
4 When multiple triggers qualify, then they fire normally in the ANSI-specified order of
creation timestamp.
To override the creation timestamp and specify a different execution order of triggers, you
can use the ORDER clause, a Teradata extension.
Even if triggers are created without the ORDER clause, you can redefine the order of
execution by changing the trigger creation timestamp using the ALTER TRIGGER
statement.
5 The triggered SQL statements (triggered action) execute.
If the trigger definition uses a REFERENCING clause to specify that old, new, or both old
and new data for the triggered action is to be collected under a correlation name (an alias),
then that information is stored in transition tables or transition rows as follows:
• OLD [ROW] or NEW [ROW] values, or both, under the old (or new) values correlation
name.
• The entire set of rows as OLD TABLE or NEW TABLE under the old (or new) values
table alias.
6 The trigger passes control to the next trigger, if defined, in a cascaded sequence. The
sequence can include recursive triggers.
Otherwise, control passes to the next statement in the application.
7 If any of the actions involved in the triggering event or the triggered actions abort, then all
of the actions are aborted.
Restrictions on Using Triggers
Most Teradata load utilities cannot access a table that has an active trigger.
An application that uses triggers can use ALTER TRIGGER to disable the trigger and enable
the load. The application must be sure that loading a table with disabled triggers does not
result in a mismatch in a user defined relationship with a table referenced in the triggered
action.
The other restrictions on triggers include:
• BEFORE statement triggers are not allowed.
• BEFORE triggers cannot have data-changing statements as triggered action (triggered SQL
statements).
• BEFORE triggers cannot access OLD TABLE and NEW TABLE.
• Triggers and hash indexes are mutually exclusive. You cannot define triggers on a table on
which a hash index is already defined.
• A positioned (updatable cursor) UPDATE or DELETE is not allowed to fire a trigger. An
attempt to do so generates an error.
Related Topics
• For guidelines for creating triggers, the conditions that cause triggers to fire, the trigger
action that occurs when a trigger fires, the trigger action time, and when to use row
triggers versus statement triggers, see CREATE TRIGGER in SQL Reference: Data
Definition Statements.
• For temporarily disabling triggers, enabling triggers, and changing the creation
timestamp of a trigger, see ALTER TRIGGER in SQL Reference: Data Definition
Statements.
• For permanently removing triggers from the system, see DROP TRIGGER in SQL
Reference: Data Definition Statements.
Macros
Introduction
A frequently used SQL statement or series of statements can be incorporated into a macro and
defined using the SQL CREATE MACRO statement. See “CREATE MACRO” in SQL
Reference: Data Definition Statements.
The statements in the macro are performed using the EXECUTE statement. See “EXECUTE
(Macro Form)” in SQL Reference: Data Manipulation Statements.
A macro can include an EXECUTE statement that executes another macro.
Definition
A macro consists of one or more statements that can be executed by performing a single
statement. Each time the macro is performed, one or more rows of data can be returned.
Performing a macro is similar to performing a multistatement request (see “Multistatement
Requests” on page 121).
Single-User and Multiuser Macros
You can create a macro for your own use, or grant execution authorization to others.
For example, your macro might enable a user in another department to perform operations
on the data in the Teradata Database. When executing the macro, a user need not be aware of
the database being accessed, the tables affected, or even the results.
Multistatement Transactions Versus Macros
Although you can enter a multistatement operation interactively using an explicit transaction
(either BT/ET or COMMIT), a better practice is to define such an operation as a macro
because an explicit transaction holds locks placed on objects by statements in the transaction
until the statement sequence is completed with an END TRANSACTION or COMMIT
statement.
If you were to enter such a sequence interactively from BTEQ, items in the database would be
locked to others while you typed and entered each statement.
Contents of a Macro
With the exception of CREATE AUTHORIZATION and REPLACE AUTHORIZATION, a
data definition statement is allowed in a macro only if it is the only SQL statement in that
macro.
A data definition statement is not resolved until the macro is executed, at which time
unqualified database object references are fully resolved using the default database of the user
submitting the EXECUTE statement. If this is not the desired result, you must fully qualify all
object references in a data definition statement in the macro body.
A macro can contain parameters that are substituted with data values each time the macro is
executed. It can also include a USING modifier, which allows the parameters to be filled with
data from an external source such as a disk file. A colon character prefixes references to a
parameter name in the macro. Parameters cannot be used for data object names.
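A sketch of a parameterized macro (the object names are hypothetical):

```sql
-- The colon prefix marks references to the dnum parameter.
CREATE MACRO DeptEmps (dnum INTEGER) AS
  (SELECT EmpNo, EmpName
   FROM Employee
   WHERE DeptNo = :dnum;);

-- Execute the macro, supplying a value for its parameter.
EXECUTE DeptEmps (34);
```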
Executing a Macro
Regardless of the number of statements in a macro, the Teradata Database treats it as a single
request.
When you execute a macro, either all its statements are processed successfully or none are
processed. If a macro fails, it is aborted, any updates are backed out, and the database is
returned to its original state.
Ways to Perform SQL Macros in Embedded SQL
Macros in an embedded SQL program are performed in one of the following ways:
• If the macro is a single statement, and that statement returns no data, use the EXEC
statement to specify static execution of the macro, or the PREPARE and EXECUTE
statements to specify dynamic execution. Use DESCRIBE to verify that the single
statement of the macro is not a data returning statement.
• If the macro consists of multiple statements or returns data, use a cursor, either static or
dynamic. The type of cursor used depends on the specific macro and on the needs of the
application.
Static SQL Macro Execution in Embedded SQL
Static SQL macro execution is associated with a macro cursor using the macro form of the
DECLARE CURSOR statement.
When you perform a static macro, you must use the EXEC form to distinguish it from the
dynamic SQL statement EXECUTE.
Dynamic SQL Macro Execution in Embedded SQL
Define dynamic macro execution using the PREPARE statement with the statement string
containing an EXEC macro_name statement rather than a single-statement request.
The dynamic request is then associated with a dynamic cursor. See “DECLARE CURSOR
(Macro Form)” in SQL Reference: Data Manipulation Statements for further information on
the use of macros.
Dropping, Replacing, Renaming, and Retrieving Information About a Macro
• To drop a macro, use DROP MACRO.
• To redefine an existing macro, use REPLACE MACRO.
• To rename a macro, use RENAME MACRO.
• To get the attributes for a macro, use HELP MACRO.
• To get the data definition statement most recently used to create, replace, or modify a
macro, use SHOW MACRO.
For more information, see SQL Reference: Data Definition Statements.
Stored Procedures
Introduction
Stored procedures are called Persistent Stored Modules in the ANSI SQL-2003 standard. They
are written in SQL and consist of a set of control and condition handling statements that make
SQL a computationally complete programming language.
These features provide a server-based procedural interface to the Teradata Database for
application programmers.
Teradata stored procedure facilities are a subset of, and conform to, the ANSI SQL-2003
standards for semantics.
Elements of Stored Procedures
The set of statements constituting the main tasks of the stored procedure is called the stored
procedure body, which can consist of a single statement or a compound statement, or block.
A single statement stored procedure body can contain one control statement, such as LOOP or
WHILE, or one SQL DDL, DML, or DCL statement, including dynamic SQL. Some
statements are not allowed, including:
• Any declaration (local variable, cursor, or condition handler) statement
• A cursor statement (OPEN, FETCH, or CLOSE)
A compound statement stored procedure body consists of a BEGIN-END statement enclosing
a set of declarations and statements, including:
• Local variable declarations
• Cursor declarations
• Condition handler declaration statements
• Control statements
• SQL DML, DDL, and DCL statements supported by stored procedures, including dynamic
SQL
Compound statements can also be nested.
For information about control statements, parameters, local variables, and labels, see SQL
Reference: Stored Procedures and Embedded SQL.
Privileges for Stored Procedures
The security for stored procedures is similar to that for other Teradata database objects like
tables, macros, views, and triggers.
The rights to ALTER PROCEDURE, CREATE PROCEDURE, DROP PROCEDURE, and
EXECUTE PROCEDURE can be granted using the GRANT statement and revoked using the
REVOKE statement. Of these:
• CREATE PROCEDURE is only a database-level privilege.
• ALTER PROCEDURE, DROP PROCEDURE and EXECUTE PROCEDURE privileges can
be granted at the object level and database or user level.
• Only DROP PROCEDURE is an automatic privilege for all users. This is granted when a
new user or database is created.
• EXECUTE PROCEDURE is an automatic privilege only for the creator of a stored
procedure, granted at the time of creation.
Creating Stored Procedures
A stored procedure can be created from:
• BTEQ utility using the COMPILE command
• CLIv2 applications, ODBC, JDBC, and Teradata SQL Assistant (formerly called
Queryman) using the SQL CREATE PROCEDURE or REPLACE PROCEDURE
statement.
The procedures are stored in the user database space as objects and are executed on the server.
For the syntax of data definition statements related to stored procedures, including CREATE
PROCEDURE and REPLACE PROCEDURE, see SQL Reference: Data Definition Statements.
Note: The stored procedure definitions in the next examples are designed only to demonstrate
the usage of the feature. They are not recommended for use.
Example
Assume you want to define a stored procedure NewProc to add new employees to the
Employee table and retrieve the name of the department to which the employee belongs.
You can also report an error, in case the row that you are trying to insert already exists, and
handle that error condition.
The CREATE PROCEDURE statement looks like this:
CREATE PROCEDURE NewProc (IN name CHAR(12),
IN number INTEGER,
IN dept INTEGER,
OUT dname CHAR(10),
INOUT errstr VARCHAR(30))
BEGIN
DECLARE CONTINUE HANDLER
FOR SQLSTATE VALUE '23505'
SET errstr = 'Duplicate Row.';
INSERT INTO Employee (EmpName, EmpNo, DeptNo )
VALUES (name, number, dept);
SELECT DeptName
INTO dname FROM Department
WHERE DeptNo = dept;
END;
This stored procedure defines parameters that must be filled in each time it is called.
Modifying Stored Procedures
You can modify a stored procedure definition using the REPLACE PROCEDURE statement.
Example
Assume you want to change the previous example to insert salary information to the
Employee table for new employees.
The REPLACE PROCEDURE statement looks like this:
REPLACE PROCEDURE NewProc (IN name CHAR(12),
IN number INTEGER,
IN dept INTEGER,
IN salary DECIMAL(10,2),
OUT dname CHAR(10),
INOUT errstr VARCHAR(30))
BEGIN
DECLARE CONTINUE HANDLER
FOR SQLSTATE VALUE '23505'
SET errstr = 'Duplicate Row.';
INSERT INTO Employee (EmpName, EmpNo, DeptNo, Salary_Amount)
VALUES (name, number, dept, salary);
SELECT DeptName
INTO dname FROM Department
WHERE DeptNo = dept;
END;
Executing Stored Procedures
You can execute a stored procedure from any supporting client utility or interface using the
SQL CALL statement. You have to specify arguments for all the parameters contained in the
stored procedure.
The CALL statement for executing the procedure created in the CREATE PROCEDURE
example looks like this:
CALL NewProc ('Jonathan', 1066, 34, dname, errstr);
For details on executing stored procedures and on call arguments, see “CALL” in SQL
Reference: Data Manipulation Statements.
Recompiling Stored Procedures
The ALTER PROCEDURE feature enables recompilation of stored procedures without having
to execute SHOW PROCEDURE and REPLACE PROCEDURE statements.
This feature provides the following benefits:
• Stored procedures created in earlier releases of Teradata Database can be recompiled in
Teradata Database release V2R5.0 and later to derive the benefits of new features and
performance improvements.
• Recompilation is also useful for cross-platform archive and restoration of stored
procedures.
• ALTER PROCEDURE allows changes in the following compile-time attributes of a stored
procedure:
• SPL option
• Warnings option
Note: For stored procedures created in Teradata Database release V2R5.0 and later to work in
earlier releases, they must be recompiled.
Deleting Stored Procedures
You can delete a stored procedure from a database using the DROP PROCEDURE statement.
Assume you want to drop the NewProc procedure from the database.
The DROP PROCEDURE statement looks like this:
DROP PROCEDURE NewProc;
Renaming Stored Procedures
You can rename a stored procedure using the RENAME PROCEDURE statement. Assume you
want to rename the NewProc stored procedure as NewEmp. The statement looks like this:
RENAME PROCEDURE NewProc TO NewEmp;
Getting Stored Procedure Information
You can get information about the parameters specified in a stored procedure and their
attributes using the HELP PROCEDURE statement. The output contains a list of all the
parameters specified in the procedure and the attributes of each parameter. The statement to
specify is:
HELP PROCEDURE NewProc;
To view the creation-time attributes of the stored procedure, specify the following statement:
HELP PROCEDURE NewProc ATTRIBUTES;
Archiving Procedures
Stored procedures are archived and restored as part of a database archive and restoration.
Individual stored procedures cannot be archived or restored using the ARCHIVE (DUMP) or
RESTORE statements.
Related Topics
• For details on stored procedure control and condition handling statements, see SQL
Reference: Stored Procedures and Embedded SQL.
• For details on invoking stored procedures, see “CALL” in SQL Reference: Data
Manipulation Statements.
• For details on creating or replacing, dropping, and renaming stored procedures, see SQL
Reference: Data Definition Statements.
External Stored Procedures
Introduction
External stored procedures are written in the C or C++ programming language, installed on
the database, and then executed like stored procedures.
Usage
Here is a synopsis of the steps you take to develop, compile, install, and use external stored
procedures:
1 If you are creating a new external stored procedure, then write, test, and debug the C or
C++ code for the procedure.
-or-
If you are using a third-party object or package, then skip to the next step.
2 Use CREATE PROCEDURE or REPLACE PROCEDURE for external stored procedures to
identify the location of the source code, object, or package, and install it on the server.
The external stored procedure is compiled, if the source code is submitted, linked to the
dynamic linked library (DLL or SO) associated with the database in which the procedure
resides, and distributed to all Teradata Database nodes in the system.
3 Use GRANT to grant privileges to users who are authorized to use the external stored
procedure.
4 Invoke the procedure using the CALL statement.
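The steps above can be sketched as follows. The procedure name, parameter, source file path, and user name are hypothetical, and the available CREATE PROCEDURE options vary; see SQL Reference: Data Definition Statements for the exact syntax:

```sql
-- Step 2: install a C source file as an external stored procedure.
-- 'CS!...' identifies client-resident source code; the path is hypothetical.
CREATE PROCEDURE GetRegion (INOUT region VARCHAR(64))
LANGUAGE C
NO SQL
PARAMETER STYLE TD_GENERAL
EXTERNAL NAME 'CS!getregion!xspsrc/getregion.c';

-- Step 3: authorize a user to execute the procedure.
GRANT EXECUTE PROCEDURE ON GetRegion TO sales_user;

-- Step 4: invoke the procedure.
CALL GetRegion('Southwest');
```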
Differences Between Stored Procedures and External Stored Procedures
Using external stored procedures is very similar to using stored procedures, except for the
following:
• Unlike stored procedures, external stored procedures cannot contain any embedded SQL
statements. To call a stored procedure, an external stored procedure can call the
FNC_CallSP library function.
• Invoking an external stored procedure from a client application does not affect the nesting
limit for stored procedures.
• The CREATE PROCEDURE statement for external stored procedures is different from the
CREATE PROCEDURE statement for stored procedures. In addition to syntax differences,
you do not have to use the COMPILE command in BTEQ or BTEQWIN.
• To install an external stored procedure on a database, you must have the CREATE
EXTERNAL PROCEDURE privilege on the database.
Related Topics
• For details on external stored procedure programming, see SQL Reference: UDF, UDM,
and External Stored Procedure Programming.
• For details on invoking external stored procedures, see “CALL” in SQL Reference: Data
Manipulation Statements.
• For details on installing external stored procedures on the server, see “CREATE/REPLACE
PROCEDURE” in SQL Reference: Data Definition Statements.
User-Defined Functions
Introduction
SQL provides a set of useful functions, but they might not satisfy all of the particular
requirements you have to process your data.
User-defined functions (UDFs) allow you to extend SQL by writing your own functions in the
C or C++ programming language, installing them on the database, and then using them like
standard SQL functions.
You can also install UDF objects or packages from third-party vendors, without providing the
source code.
UDF Types
Teradata Database supports three types of UDFs.
• Scalar: Scalar functions take input parameters and return a single value result. Examples
of standard SQL scalar functions are CHARACTER_LENGTH, POSITION, and TRIM.
• Aggregate: Aggregate functions produce summary results. They differ from scalar
functions in that they take grouped sets of relational data, make a pass over each group,
and return one result for the group. Some examples of standard SQL aggregate functions
are AVG, SUM, MAX, and MIN.
• Table: A table function is invoked in the FROM clause of a SELECT statement and returns
a table to the statement.
Usage
Here is a synopsis of the steps you take to develop, compile, install, and use a UDF:
1 If you are creating a new UDF, then write, test, and debug the C or C++ code for the UDF.
-or-
If you are using a third-party UDF object or package, then skip to the next step.
2 Use CREATE FUNCTION or REPLACE FUNCTION to identify the location of the source
code, object, or package, and install it on the server.
The function is compiled, if the source code is submitted, linked to the dynamic linked
library (DLL or SO) associated with the database in which the function resides, and
distributed to all Teradata Database nodes in the system.
3 Use GRANT to grant privileges to users who are authorized to use the UDF.
4 Call the function.
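The steps above can be sketched as follows. The function name, parameter, source file path, and user name are hypothetical, and the available CREATE FUNCTION options vary; see SQL Reference: Data Definition Statements for the exact syntax:

```sql
-- Step 2: install a C source file as a scalar UDF.
-- 'CS!...' identifies client-resident source code; the path is hypothetical.
CREATE FUNCTION CountVowels (InStr VARCHAR(512))
RETURNS INTEGER
LANGUAGE C
NO SQL
PARAMETER STYLE SQL
EXTERNAL NAME 'CS!countvowels!udfsrc/countvowels.c';

-- Step 3: authorize a user to execute the function.
GRANT EXECUTE FUNCTION ON CountVowels TO sales_user;

-- Step 4: call the function like a standard SQL function.
SELECT EmpName, CountVowels(EmpName)
FROM Employee;
```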
Related Topics
• For more information on writing, testing, and debugging source code for a UDF, see SQL
Reference: UDF, UDM, and External Stored Procedure Programming.
• For more information on data definition statements related to UDFs, including CREATE
FUNCTION and REPLACE FUNCTION, see SQL Reference: Data Definition Statements.
Profiles
Definition
Profiles define values for the following system parameters:
• Default database
• Spool space
• Temporary space
• Default account and alternate accounts
• Password security attributes
An administrator can define a profile and assign it to a group of users who share the same
settings.
Advantages of Using Profiles
Use profiles to:
• Simplify system administration.
Administrators can create a profile that contains system parameters and assign the profile
to a group of users. To change a parameter, the administrator updates the profile instead of
each individual user.
• Control password security.
A profile can define password attributes such as the number of:
• Days before a password expires
• Days before a password can be used again
• Minutes to lock out a user after a certain number of failed logon attempts
Administrators can assign the profile to an individual user or to a group of users.
Usage
The following steps describe how to use profiles to manage a common set of parameters for a
group of users.
1 Define a user profile.
A CREATE PROFILE statement defines a profile, and lets you set:
• Account identifiers to charge for the space used and a default account identifier
• Default database
• Space to allocate for spool files
• Space to allocate for temporary tables
• Number of days before the password expires
• Minimum and maximum number of characters in a password string
• Whether or not to allow digits and special characters in a password string
• Number of incorrect logon attempts to allow before locking a user
• Number of minutes before unlocking a locked user
• Number of days before a password can be used again
2 Assign the profile to users.
Use the CREATE USER or MODIFY USER statement to assign a profile to a user. Profile
settings override the values set for the user.
3 If necessary, change any of the system parameters for a profile.
Use the MODIFY PROFILE statement to change a profile.
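The steps above can be sketched as follows. The profile name, user name, and parameter values are illustrative, and the full set of profile options is documented in SQL Reference: Data Definition Statements:

```sql
-- Step 1: define a profile with default database, space, and
-- password settings (values shown are illustrative).
CREATE PROFILE analyst_p AS
  DEFAULT DATABASE = sales,
  SPOOL = 2000000000,
  TEMPORARY = 1000000000,
  PASSWORD = (EXPIRE = 90, MINCHAR = 8, MAXLOGONATTEMPTS = 3,
              LOCKEDUSEREXPIRE = 60, REUSE = 180);

-- Step 2: assign the profile to a user. Profile settings override
-- the values set for the user.
MODIFY USER jsmith AS PROFILE = analyst_p;

-- Step 3: change a parameter for every assigned user in one place.
MODIFY PROFILE analyst_p AS SPOOL = 4000000000;
```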
Related Topics
For information on the syntax and usage of profiles, see SQL Reference: Data Definition
Statements.
Roles
Definition
Roles define access privileges on database objects. A user who is assigned a role can access all
the objects that the role has privileges to.
Roles simplify management of user access rights. A database administrator can create different
roles for different job functions and responsibilities, grant specific privileges on database
objects to the roles, and then grant the roles to users.
Advantages of Using Roles
Use roles to:
• Simplify access rights administration.
A database administrator can grant rights on database objects to a role and have the rights
automatically applied to all users assigned to that role.
When a user’s function within an organization changes, changing the user’s role is far
easier than deleting old rights and granting new rights to go along with the new function.
• Reduce dictionary disk space.
Maintaining rights on a role level rather than on an individual level makes the size of the
DBC.AccessRights table much smaller. Instead of inserting one row per user per right on a
database object, the Teradata Database inserts one row per role per right in
DBC.AccessRights, and one row per role member in DBC.RoleGrants.
Usage
The following steps describe how to manage user access privileges using roles.
1 Define a role.
A CREATE ROLE statement defines a role. A newly created role does not have any
associated privileges.
2 Add access privileges to the role.
Use the GRANT statement to grant privileges to roles on databases, tables, views, macros,
columns, triggers, stored procedures, join indexes, hash indexes, and user-defined
functions.
3 Grant the role to users or other roles.
Use the GRANT statement to grant a role to users or other roles.
4 Assign default roles to users.
Use the DEFAULT ROLE option of the CREATE USER or MODIFY USER statement to
specify the default role for a user, where DEFAULT ROLE = …
• role_name specifies the name of one role to assign as the default role for a user.
• NONE or NULL specifies that the user does not have a default role.
• ALL specifies the default role to be all roles that are directly or indirectly granted to
the user.
At logon time, the default role of the user becomes the current role for the session.
Rights validation uses the active roles for a user, which include the current role and all
nested roles.
5 If necessary, change the current role for a session.
Use the SET ROLE statement to change the current role for a session.
Managing role-based access rights requires sufficient privileges. For example, the CREATE
ROLE statement is only authorized to users who have the CREATE ROLE system privilege.
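The steps above can be sketched as follows; the role name, database, and user name are hypothetical:

```sql
-- Step 1: define a role. A newly created role has no privileges.
CREATE ROLE hr_role;

-- Step 2: add access privileges to the role.
GRANT SELECT, UPDATE ON personnel TO hr_role;

-- Step 3: grant the role to a user.
GRANT hr_role TO jsmith;

-- Step 4: make it the user's default role, effective at logon.
MODIFY USER jsmith AS DEFAULT ROLE = hr_role;

-- Step 5: change the current role within a session if necessary.
SET ROLE hr_role;
```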
Related Topics
For information on the syntax and usage of roles, see SQL Reference: Data Definition
Statements.
User-Defined Types
Introduction
SQL provides a set of predefined data types, such as INTEGER and VARCHAR, that you can
use to store the data that your application uses, but they might not satisfy all of the
requirements you have to model your data.
User-defined types (UDTs) allow you to extend SQL by creating your own data types and then
using them like predefined data types.
UDT Types
Teradata Database supports distinct and structured UDTs.
• Distinct: a UDT that is based on a single predefined data type, such as INTEGER or
VARCHAR. For example, a distinct UDT named euro that is based on a DECIMAL(8,2)
data type can store monetary data.
• Structured: a UDT that is a collection of one or more fields called attributes, each of which
is defined as a predefined data type or other UDT (which allows nesting). For example, a
structured UDT named circle can consist of x-coordinate, y-coordinate, and radius
attributes.
Distinct and structured UDTs can define methods that operate on the UDT. For example, a
distinct UDT named euro can define a method that converts the value to a US dollar amount.
Similarly, a structured UDT named circle can define a method that computes the area of the
circle using the radius attribute.
Using a Distinct UDT
Here is a synopsis of the steps you take to develop and use a distinct UDT:
1 Use the CREATE TYPE statement to create a distinct UDT that is based on a predefined
data type, such as INTEGER or VARCHAR.
The Teradata Database automatically generates functionality for the UDT that allows you
to import and export the UDT between the client and server, use the UDT in a table,
perform comparison operations between two UDTs, and perform data type conversions
between the UDT and the predefined data type on which the definition is based.
2 If the UDT defines methods, write, test, and debug the C or C++ code for the methods,
and then use CREATE METHOD or REPLACE METHOD to identify the location of the
source code and install it on the server.
The methods are compiled, linked to the dynamic linked library (DLL or SO) associated
with the SYSUDTLIB database, and distributed to all Teradata Database nodes in the
system.
3 Use GRANT to grant privileges to users who are authorized to use the UDT.
4 Use the UDT as the data type of a column in a table definition.
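As a sketch, creating and using the euro distinct UDT described above might look like this; the table and column names are hypothetical:

```sql
-- Step 1: create a distinct UDT based on DECIMAL(8,2).
CREATE TYPE euro AS DECIMAL(8,2) FINAL;

-- Step 4: use the UDT as the data type of a column.
CREATE TABLE price_list (
  item_id INTEGER,
  price   euro);
```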
Using a Structured UDT
Here is a synopsis of the steps you take to develop and use a structured UDT:
1 Use the CREATE TYPE statement to create a structured UDT and specify attributes,
constructor methods, and instance methods.
Teradata Database automatically generates the following functionality:
• A default constructor function that you can use to construct a new instance of the
structured UDT and initialize the attributes to NULL
• Observer methods for each attribute that you can use to get the attribute values
• Mutator methods for each attribute that you can use to set the attribute values
2 Follow these steps to implement, install, and register cast functionality for the UDT
(Teradata Database does not automatically generate cast functionality for structured
UDTs):
a Write, test, and debug C or C++ code that implements cast functionality that allows
you to perform data type conversions between the UDT and other data types,
including other UDTs.
b Identify the location of the source code and install it on the server. If you write the
source code as a method, use CREATE METHOD or REPLACE METHOD; if you
write it as a function, use CREATE FUNCTION or REPLACE FUNCTION.
The source code is compiled, linked to the dynamic linked library (DLL or SO)
associated with the SYSUDTLIB database, and distributed to all Teradata Database
nodes in the system.
c Use the CREATE CAST or REPLACE CAST statement to register the method or
function as a cast routine for the UDT.
d Repeat Steps a through c for all methods or functions that provide cast functionality.
3 Follow these steps to implement, install, and register ordering functionality for the UDT
(Teradata Database does not automatically generate ordering functionality for structured
UDTs):
a Write, test, and debug C or C++ code that implements ordering functionality that
allows you to perform comparison operations between two UDTs.
b Identify the location of the source code and install it on the server. If you write the
source code as a method, use CREATE METHOD or REPLACE METHOD; if you
write it as a function, use CREATE FUNCTION or REPLACE FUNCTION.
The source code is compiled, linked to the dynamic linked library (DLL or SO)
associated with the SYSUDTLIB database, and distributed to all Teradata Database
nodes in the system.
c Use the CREATE ORDERING or REPLACE ORDERING statement to register the
method or function as an ordering routine for the UDT.
4 Follow these steps to implement, install, and register transform functionality for the UDT
(Teradata Database does not automatically generate transform functionality for structured
UDTs):
a Write, test, and debug C or C++ code that implements transform functionality that
allows you to import and export the UDT between the client and server.
b Identify the location of the source code and install it on the server. For transform
functionality that imports the UDT to the server, you must write the source code as a
UDF and use CREATE FUNCTION or REPLACE FUNCTION. For transform
functionality that exports the UDT from the server, you can write the source code as a
method (use CREATE METHOD or REPLACE METHOD) or as a function (use
CREATE FUNCTION or REPLACE FUNCTION).
The source code is compiled, linked to the dynamic linked library (DLL or SO)
associated with the SYSUDTLIB database, and distributed to all Teradata Database
nodes in the system.
c Repeat Steps a and b for the other transform direction: if you implemented import
functionality, repeat them for export; if you implemented export functionality, repeat
them for import.
d Use the CREATE TRANSFORM or REPLACE TRANSFORM statement to register the
transform routines for the UDT.
5 If the UDT defines constructor methods or instance methods, write, test, and debug the C
or C++ code for the methods, and then use CREATE METHOD or REPLACE METHOD
to identify the location of the source code and install it on the server.
The methods are compiled, linked to the dynamic linked library (DLL or SO) associated
with the SYSUDTLIB database, and distributed to all Teradata Database nodes in the
system.
6 Use GRANT to grant privileges to users who are authorized to use the UDT.
7 Use the UDT as the data type of a column in a table definition.
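As a sketch, Step 1 and Step 7 for the circle structured UDT described above might look like this; method specifications are omitted for brevity, and the table and column names are hypothetical:

```sql
-- Step 1: create a structured UDT with three attributes.
-- (Constructor and instance method specifications are omitted here.)
CREATE TYPE circle AS (
  x      FLOAT,
  y      FLOAT,
  radius FLOAT)
NOT FINAL;

-- Step 7: use the UDT as the data type of a column.
CREATE TABLE drawing (
  shape_id INTEGER,
  shape    circle);
```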
Related Topics
• For more information on CREATE TYPE, CREATE METHOD and REPLACE METHOD,
CREATE FUNCTION and REPLACE FUNCTION, CREATE CAST and REPLACE CAST,
CREATE ORDERING and REPLACE ORDERING, and CREATE TRANSFORM and
REPLACE TRANSFORM, see SQL Reference: Data Definition Statements.
• For more information on writing, testing, and debugging source code for a constructor
method or instance method, see SQL Reference: UDF, UDM, and External Stored
Procedure Programming.
CHAPTER 2 Basic SQL Syntax and Lexicon
This chapter explains the syntax and lexicon for Teradata SQL, a single, unified,
nonprocedural language that provides capabilities for queries, data definition, data
modification, and data control of the Teradata Database.
Topics include:
• Structure of an SQL statement
• Keywords
• Expressions
• Names
• Literals
• Operators
• Functions
• Delimiters
• Separators
• Comments
• Terminators
• Null statements
Structure of an SQL Statement
Syntax
The following diagram indicates the basic structure of an SQL statement.
[Syntax diagram: a statement_keyword, optionally followed by any combination of
expressions, functions, keywords, clauses, and phrases, and ending with an optional
semicolon (;)]
where:
• statement_keyword is the name of the statement.
• expressions are literals, name references, or operations using names and literals.
• functions are the name of a function and its arguments, if any.
• keywords are special values introducing clauses or phrases or representing special objects,
such as NULL. Most keywords are reserved words and cannot be used in names.
• clauses are subordinate statement qualifiers.
• phrases are data attribute phrases.
• The semicolon (;) is the Teradata SQL statement separator and request terminator. The
semicolon separates statements in a multistatement request and terminates a request when
it is the last non-blank character on an input line in BTEQ. Note that the request
terminator is required for a request defined in the body of a macro. For a discussion of
macros and their use, see “Macros” on page 46.
Typical SQL Statement
A typical SQL statement consists of a statement keyword, one or more column names, a
database name, a table name, and one or more optional clauses introduced by keywords.
For example, in the following single-statement request, the statement keyword is SELECT:
SELECT deptno, name, salary
FROM personnel.employee
WHERE deptno IN(100, 500)
ORDER BY deptno, name ;
The select list for this statement is made up of the names:
• deptno, name, and salary (the column names)
• personnel (the database name)
• employee (the table name)
The search condition, or WHERE clause, is introduced by the keyword WHERE:
WHERE deptno IN(100, 500)
The sort order, or ORDER BY, clause is introduced by the keywords ORDER BY:
ORDER BY deptno, name
Related Topics
The pages that follow provide details on the elements that appear in an SQL statement:
• For more information on statement keywords and other keywords, see “Keywords” on
page 66.
• For more information on expressions, see “Expressions” on page 67.
• For more information on functions, see “Functions” on page 92.
• For more information on separators, see “Separators” on page 94.
• For more information on terminators, see “Terminators” on page 96.
SQL Lexicon Characters
Client Character Data
The characters that make up the SQL lexicon can be represented on the client system in ASCII,
EBCDIC, UTF8, UTF16, or in an installed user-defined character set.
If the client system character data is not ASCII, then it is converted by the Teradata Database
to an internal form for processing and storage. Data returned to the client system is converted
to the client character set.
Server Character Data
The internal forms used for character support are described in International Character Set
Support.
The notation used for Japanese characters is described in:
• “Character Shorthand Notation Used In This Book”
• Appendix A: “Notation Conventions.”
Case Sensitivity
See the following topics in SQL Reference: Data Types and Literals:
• “Defining Case Sensitivity for Table Columns”
• “CASESPECIFIC Phrase”
• “UPPERCASE Phrase”
• "Character Data Literals"
See the following topics in SQL Reference: Functions and Operators:
• “LOWER Function”
• “UPPER Function”
Keywords
Introduction
Keywords are words that have special meanings in SQL statements. There are two types of
keywords: reserved and non-reserved. You cannot use reserved keywords to name database
objects. Although you can use non-reserved keywords as object names, you usually should not
because of possible confusion resulting from their use.
Statement Keyword
The statement keyword, the first keyword in an SQL statement, is usually a verb.
For example, in the INSERT statement, the first keyword is INSERT.
Keywords
Other keywords appear throughout a statement as modifiers (for example, DISTINCT,
PERMANENT), or as words that introduce clauses (for example, IN, AS, AND, TO, WHERE).
In this book, keywords appear entirely in uppercase letters, though SQL does not discriminate
between uppercase and lowercase letters in a keyword.
For example, SQL interprets the following SELECT statements to be identical:
Select Salary from Employee where EmpNo = 10005;
SELECT Salary FROM Employee WHERE EmpNo = 10005;
select Salary FRom Employee WherE EmpNo = 10005;
All keywords must be from the ASCII repertoire. Fullwidth letters are not valid regardless of
the character set being used.
For a list of Teradata SQL keywords, see Appendix B: “Restricted Words for V2R6.2.”
Keywords and Object Names
Note that you cannot use reserved keywords to name database objects. Because new keywords
are frequently added to new releases of the Teradata Database, you may experience a problem
with database object names that were valid in prior releases but which become nonvalid in a
new release.
The workaround for this is to do one of the following things:
• Put the newly nonvalid name in double quotes.
• Rename the object.
In either case you must change your applications.
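As a sketch, suppose a column named rank was valid when its table was created, but a later release reserves the word RANK (the column and table names here are hypothetical). Enclosing the name in double quotes keeps the query valid:

```sql
-- Quote the newly reserved word so it is treated as an object name,
-- not as a keyword.
SELECT "rank", EmpName
FROM Employee;
```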
Expressions
Introduction
An expression specifies a value. An expression can consist of literals (or constants), name
references, or operations using names and literals.
Scalar Expressions
A scalar expression, or value expression, produces a single number, character string, byte
string, date, time, timestamp, or interval.
A value expression has exactly one declared type, common to every possible result of
evaluation. Implicit type conversion rules apply to expressions.
Query Expressions
Query expressions operate on table values and produce rows and tables of data.
Every query expression includes at least one FROM clause, which operates on a table reference
and returns a single table value.
Related Topics
• For more information on CASE expressions, arithmetic expressions, logical expressions,
datetime expressions, interval expressions, character expressions, and byte expressions, see
SQL Reference: Functions and Operators.
• For more information on data type conversions, see SQL Reference: Functions and
Operators.
• For more information on query expressions, see SQL Reference: Data Manipulation
Statements.
Names
Introduction
In Teradata SQL, various database objects such as tables, views, stored procedures, macros,
columns, and collations are identified by a name.
The set of valid names depends on whether the system is enabled for Japanese language
support.
Rules
The rules for naming Teradata Database database objects on systems enabled for standard
language support are as follows.
• You must define and reference each object, such as user, database, or table, by a name.
• In general, names consist of 1 to 30 characters.
• Names can appear as a sequence of characters within double quotes and as a quoted
hexadecimal string followed by the key letters XN. Such names have fewer restrictions on
the characters that can be included. The restrictions are described in “QUOTATION
MARKS Characters and Names” on page 69 and “Internal Hexadecimal Representation of
a Name” on page 70.
• Unquoted names have the following syntactic restrictions:
• They may only include the following characters:
• Uppercase or lowercase letters (A to Z and a to z)
• Digits (0 through 9)
• The special characters DOLLAR SIGN ($), NUMBER SIGN (#), and LOW LINE
( _ )
• They must not begin with a digit.
• They must not be a keyword.
• Systems that are enabled for Japanese language support allow various Japanese characters
to be used for names, but determining the maximum number of characters allowed in a
name becomes much more complex (see “Name Validation on Systems Enabled with
Japanese Language Support” on page 77).
• Names having any of the following characteristics are not ANSI SQL-2003 compliant:
• Contains lower case letters.
• Contains either a $ or a #.
• Begins with an underscore.
• Has more than 18 characters.
• Names that define databases and objects must observe the following rules.
• Databases, users, and profiles must have unique names.
• Tables, views, stored procedures, join or hash indexes, triggers, user-defined functions,
or macros can take the same name as the database or user in which they are created,
but cannot take the same name as another of these objects in the same database or user.
• Roles can have the same name as a profile, table, column, view, macro, trigger, table
function, user-defined function, external stored procedure, or stored procedure;
however, role names must be unique among users and databases.
• Table and view columns must have unique names.
• Parameters defined for a macro or stored procedure must have unique names.
• Secondary indexes on a table must have unique names.Chapter 2: Basic SQL Syntax and Lexicon
Names
SQL Reference: Fundamentals 69
• Named constraints on a table must have unique names.
• Secondary indexes and constraints can have the same name as the table they are
associated with.
• CHECK constraints, REFERENCE constraints, and INDEX objects can also have assigned
names. Names are optional for these objects.
• Names are not case-specific (see “Case Sensitivity and Names” on page 71).
QUOTATION MARKS Characters and Names
Enclosing names in QUOTATION MARKS characters (U+0022) greatly increases the valid set
of characters for defining names.
Pad characters and special characters can also be included. For example, the following strings
are both valid names.
• “Current Salary”
• “D’Augusta”
The QUOTATION MARKS characters are not part of the name, but they are required, if the
name is not valid otherwise.
For example, these two names are identical, even though one is enclosed within QUOTATION
MARKS characters.
• This_Name
• “This_Name”
On systems enabled for standard language support, any character translatable to the LATIN
server character set can appear in an object name, with the following exceptions:
• The NULL character (U+0000) is not allowed in any names, including quoted names.
• The object name must not consist entirely of blank characters. In this context, a blank
character is any of the following:
• NULL (U+0000)
• CHARACTER TABULATION (U+0009)
• LINE FEED (U+000A)
• LINE TABULATION (U+000B)
• FORM FEED (U+000C)
• CARRIAGE RETURN (U+000D)
• SPACE (U+0020)
• The code point 0x1A, which represents the error character for KANJI1 and LATIN server
character sets, cannot be translated between character sets and must not appear in object
names.
All of the following examples are valid names.
• Employee
• job_title
• CURRENT_SALARY
• DeptNo
• Population_of_Los_Angeles
• Totaldollars
• “Table A”
• “Today’s Date”
Note: If you use quoted names, the QUOTATION MARKS characters that delineate the
names are not counted in the length of the name and are not stored in Dictionary tables used
to track name usage.
If a Dictionary view is used to display such names, they are displayed without the double
quote characters, and if the resulting names are used without adding double quotes, the likely
outcome is an error report.
For example, “D’Augusta” might be the name of a column in the Dictionary view
DBC.Columns, and the HELP statements that return column names return the name as
D’Augusta (without being enclosed in QUOTATION MARKS characters).
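As a sketch, the quoted names discussed above can be defined and referenced like this; the table and column names are illustrative:

```sql
-- QUOTATION MARKS characters allow spaces and special characters
-- in names. The quotes are required but are not part of the name.
CREATE TABLE "Table A" (
  "Current Salary" DECIMAL(10,2),
  DeptNo           INTEGER);

SELECT "Current Salary"
FROM "Table A";
```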
Internal Hexadecimal Representation of a Name
You can also create and reference object names by their internal hexadecimal representation in
the Data Dictionary using the following syntax:
'hexadecimal_digits' XN
where 'hexadecimal_digits' is a quoted hexadecimal string representation of the Teradata
Database internal encoding, and the key letters XN specify that the string is a hexadecimal
name.
On systems enabled for standard language support, any character translatable to the LATIN
server character set can appear in an object name, with the same exceptions listed in the
preceding section, “QUOTATION MARKS Characters and Names” on page 69.
For more information on using internal hexadecimal representations of names, see “Using the
Internal Hexadecimal Representation of a Name” on page 82.
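As a sketch, assuming a table named abc whose internal LATIN encoding is the hexadecimal byte string 616263 (the ASCII values of the letters), the following two references would identify the same table:

```sql
-- Reference by name and by internal hexadecimal representation.
SELECT * FROM abc;
SELECT * FROM '616263'XN;
```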
Case Sensitivity and Names
Names are not case-dependent—a name cannot be used twice by changing its case. Any mix
of uppercase and lowercase can be used when referencing symbolic names in a request.
For example, the following statements are identical.
SELECT Salary FROM Employee WHERE EmpNo = 10005;
SELECT SALARY FROM EMPLOYEE WHERE EMPNO = 10005;
SELECT salary FROM employee WHERE eMpNo = 10005;
The case in which a column name is defined can be important. The column name is the
default title of an output column, and symbolic names are returned in the same case in which
they were defined.
For example, assume that the columns in the SalesReps table are defined as follows:
CREATE TABLE SalesReps
( last_name VARCHAR(20) NOT NULL,
first_name VARCHAR(12) NOT NULL, ...
In response to a query that does not define a TITLE phrase, such as the following example, the
column names are returned exactly as they were defined, for example, last_name, then
first_name.
SELECT Last_Name, First_Name
FROM SalesReps
ORDER BY Last_Name;
You can use the TITLE phrase to specify the case, wording, and placement of an output
column heading either in the column definition or in an SQL statement.
For more information, see SQL Reference: Data Manipulation Statements.
Standard Form for Data in Teradata Database
Introduction
Data in Teradata Database is presented to a user according to the relational model, which
models data as two-dimensional tables with rows and columns. Each row of a table is
composed of one or more columns identified by column name. Each column contains a data
item (or a null) having a single data type.
Syntax for Referencing a Column
[database_name.] [table_name.] column_name
where:

Syntax element … Specifies …
database_name a qualifying name for the database in which the table and column being
referenced are stored.
Depending on the ambiguity of the reference, database_name might or
might not be required.
See “Unqualified Object Names” on page 73.
table_name a qualifying name for the table in which the column being referenced is
stored.
Depending on the ambiguity of the reference, table_name might or
might not be required.
See “Unqualified Object Names” on page 73.
column_name one of the following:
• The name of the column being referenced
• The alias of the column being referenced
• The keyword PARTITION
See “Column Alias” on page 72.

Definition: Fully Qualified Column Name
A fully qualified name consists of a database name, table name, and column name.
For example, a fully qualified reference for the Name column in the Employee table of the
Personnel database is:
Personnel.Employee.Name
Column Alias
In addition to referring to a column by name, an SQL query can reference a column by an
alias. Column aliases are used for join indexes when two columns have the same name.
However, an alias can be used for any column when a pseudonym is more descriptive or easier
to use. Using an alias to name an expression allows a query to reference the expression.
You can specify a column alias with or without the keyword AS on the first reference to the
column in the query. The following example creates and uses aliases for the first two columns.
SELECT departnumber AS d, employeename e, salary
FROM personnel.employee
WHERE d IN (100, 500)
ORDER BY d, e;
Alias names must meet the same requirements as names of other database objects. For details,
see “Names” on page 67.
The scope of alias names is confined to the query.
Referencing All Columns in a Table
An asterisk references all columns in a row simultaneously. For example, the following
SELECT statement references all columns in the Employee table. A list of those fully qualified
column names follows the query.
SELECT * FROM Employee;
Personnel.Employee.EmpNo
Personnel.Employee.Name
Personnel.Employee.DeptNo
Personnel.Employee.JobTitle
Personnel.Employee.Salary
Personnel.Employee.YrsExp
Personnel.Employee.DOB
Personnel.Employee.Sex
Personnel.Employee.Race
Personnel.Employee.MStat
Personnel.Employee.EdLev
Personnel.Employee.HCap
Unqualified Object Names
Definition
An unqualified object name is a table, column, trigger, macro, or stored procedure reference
that is not fully qualified. For example, the WHERE clause in the following statement uses
“DeptNo” as an unqualified column name:
SELECT *
FROM Personnel.Employee
WHERE DeptNo = 100 ;
Unqualified Column Names
You can omit database and table name qualifiers when you reference columns as long as the
reference is not ambiguous.
For example, the WHERE clause in the following statement:
SELECT Name, DeptNo, JobTitle
FROM Personnel.Employee
WHERE Personnel.Employee.DeptNo = 100 ;
can be written as:
WHERE DeptNo = 100 ;
because the database name and table name can be derived from the Personnel.Employee
reference in the FROM clause.
Omitting Database Names
When you omit the database name qualifier, Teradata Database looks in the following
databases to find the unqualified table, view, trigger, or macro name:
• The default database, which is established by a DATABASE, CREATE USER, MODIFY
USER, CREATE PROFILE, or MODIFY PROFILE statement
• Other databases, if any, referenced by the SQL statement
• The login user database for a volatile table, if the unqualified object name is a table name
The search must find the table name in only one of those databases. An ambiguous name
error message results if the name exists in more than one of those databases.
For example, if your login user database has no volatile tables named Employee and you have
established Personnel as your default database, you can omit the Personnel database name
qualifier from the preceding sample query.
Rules for Name Resolution
The following rules govern name resolution:
• Name resolution is performed statement by statement.
• When an INSERT statement contains a subquery, names are resolved in the subquery first.
• Names in a view are resolved when the view is created.
• Names in a macro data manipulation statement are resolved when the macro is created.
• Names in a macro data definition statement are resolved when the macro is performed
using the default database of the user submitting the EXECUTE statement.
Therefore, you should fully qualify all names in a macro data definition statement, unless
you specifically intend for the user’s default to be used.
• Names in stored procedure statements are resolved when the procedure is created. All
unqualified object names acquire the current default database name.
• An ambiguous unqualified name returns an error to the requestor.
Related Topics
FOR more information on … SEE …
default databases “Default Database” on page 75.
the DATABASE statement “SQL Data Definition Language Statement Syntax” in
SQL Reference: Data Definition Statements.
the CREATE USER statement
the MODIFY USER statement
Default Database
Definition
The default database is a Teradata extension to SQL that defines a database that Teradata
Database uses to look for unqualified table, view, trigger, or macro names in SQL statements.
The default database is not the only database that Teradata Database uses to find an
unqualified table, view, trigger, or macro name in an SQL statement, however; Teradata
Database also looks for the name in:
• Other databases, if any, referenced by the SQL statement
• The login user database for a volatile table, if the unqualified object name is a table name
If the unqualified object name exists in more than one of the databases in which Teradata
Database looks, the SQL statement produces an ambiguous name error.
Establishing a Permanent Default Database
You can establish a permanent default database that is invoked each time you log on.
For example, the following statement automatically establishes Personnel as the default
database for Marks at the next logon:
MODIFY USER marks AS
DEFAULT DATABASE = personnel ;
After you assign a default database, Teradata Database uses that database as one of the
databases to look for all unqualified object references.
To obtain information from a table, view, trigger, or macro in another database, fully qualify
the table reference by specifying the database name, a FULLSTOP character, and the table
name.
TO … USE one of the following SQL Data Definition statements …
define a permanent default
database
• CREATE USER, with a DEFAULT DATABASE clause.
• CREATE USER, with a PROFILE clause that specifies a
profile that defines the default database.
change your permanent default
database definition
• MODIFY USER, with a DEFAULT DATABASE clause.
• MODIFY USER, with a PROFILE clause.
• MODIFY PROFILE, with a DEFAULT DATABASE clause.
add a default database when one
had not been established previously
Establishing a Default Database for a Session
You can establish a default database for the current session that Teradata Database uses to look
for unqualified table, view, trigger, or macro names in SQL statements.
For example, after entering the following SQL statement:
DATABASE personnel ;
you can enter a SELECT statement as follows:
SELECT deptno (TITLE 'Org'), name
FROM employee ;
which has the same results as:
SELECT deptno (TITLE 'Org'), name
FROM personnel.employee;
To establish a default database, you must have some privilege on a database, macro, stored
procedure, table, user, or view in that database. Once defined, the default database remains in
effect until the end of a session or until it is replaced by a subsequent DATABASE statement.
Related Topics
TO … USE …
establish a default database for a session the DATABASE statement.
FOR more information on … SEE …
the DATABASE statement SQL Reference: Data Definition Statements.
the CREATE USER statement
the MODIFY USER statement
fully-qualified names “Standard Form for Data in Teradata Database” on page 71.
“Unqualified Object Names” on page 73.
using profiles to define a default
database
“Profiles” on page 55.
Name Validation on Systems Enabled with
Japanese Language Support
Introduction
A system that is enabled with Japanese language support allows thousands of additional
characters to be used for names, but also introduces additional restrictions.
Rules for Unquoted Names
Unquoted names can use the following characters when Japanese language support is enabled:
• Any character valid in an unquoted name under standard language support:
• Uppercase or lowercase letters (A to Z and a to z)
• Digits (0 through 9)
• The special characters DOLLAR SIGN ($), NUMBER SIGN (#), and LOW LINE ( _ )
• The fullwidth (zenkaku) versions of the characters valid for names under standard
language support:
• Fullwidth uppercase or lowercase letters (A to Z and a to z)
• Fullwidth digits (0 through 9)
• The special characters fullwidth DOLLAR SIGN ($), fullwidth NUMBER SIGN (#),
and fullwidth LOW LINE ( _ )
• Fullwidth (zenkaku) and halfwidth (hankaku) Katakana characters and sound marks.
• Hiragana characters.
• Kanji characters from JIS-x0208.
The length of a name is restricted in a complex fashion. Charts of the supported Japanese
character sets, the Teradata Database internal encodings, the valid character ranges for
Japanese object names and data, and the non-valid character ranges for Japanese data and
object names are documented in International Character Set Support.
Rules for Quoted Names and Internal Hexadecimal Representation of
Names
As described in “QUOTATION MARKS Characters and Names” on page 69 and “Internal
Hexadecimal Representation of a Name” on page 70, names can also appear as a sequence of
characters within double quotes or as a quoted hexadecimal string followed by the key letters
XN. Such names have fewer restrictions on the characters that can be included.
The following restrictions that apply to systems enabled for standard language support also
apply to systems enabled for Japanese language support:
• The NULL character (U+0000) is not allowed.
• The code point 0x1A, which represents the error character for KANJI1 and LATIN server
character sets, cannot be translated between character sets and must not appear in object
names.
• The object name must not consist entirely of blank characters. In this context, a blank
character is any of the following:
• NULL (U+0000)
• LINE FEED (U+000A)
• LINE TABULATION (U+000B)
• FORM FEED (U+000C)
• CARRIAGE RETURN (U+000D)
• SPACE (U+0020)
• CHARACTER TABULATION (U+0009)
Additional rules apply to sessions using non-Japanese client character sets on systems enabled
with Japanese language support. Here are some examples of predefined non-Japanese client
character sets (you can also define your own site-defined client character sets):
• EBCDIC
• EBCDIC037_0E
• ASCII
• LATIN1_0A
• LATIN9_0A
• LATIN1252_0A
• UTF8
• UTF16
• SCHEBCDIC935_2IJ
• TCHEBCDIC937_3IB
• HANGULEBCDIC933_1II
• SCHGB2312_1T0
• TCHBIG5_1R0
• HANGULKSC5601_2R4
For sessions using non-Japanese client character sets on systems where Japanese language
support is enabled, object names can only have characters in the following inclusive ranges:
• U+0001 through U+000D
• U+0015 through U+005B
• U+005D through U+007D
• U+007F
REVERSE SOLIDUS (U+005C) and TILDE (U+007E) are not allowed.
Cross-Platform Integrity
If you need to access objects from heterogeneous clients, the best practice is to restrict the
object names to those allowed under standard language support.
Calculating the Length of a Name
The length of a name is measured by the physical bytes of its internal representation, not by
the number of viewable characters. Under the KanjiEBCDIC character sets, the Shift-Out and
Shift-In characters that delimit a multibyte character string are included in the byte count.
For example, the following table name contains six logical characters of mixed single byte and
multibyte characters, defined during a KanjiEBCDIC session:
QR
All single byte characters, including the Shift-Out/Shift-In characters, are translated into the
Teradata Database internal encoding, based on JIS-x0201. Under the KanjiEBCDIC character
sets, all multibyte characters remain in the client encoding.
Thus, the processed name is stored as a string of twelve bytes, padded on the right with the
single byte space character to a total of 30 bytes.
The internal representation is as follows:
0E 42E3 42C1 42C2 42F1 0F 51 52 20 20 20 20 20 20 20 20 20 20 20 20 ...
< T A B 1 > Q R
To ensure upgrade compatibility, an object name created under one character set cannot
exceed 30 bytes in any supported character set.
For example, a single Katakana character occupies 1 byte in KanjiShift-JIS. However, when
KanjiShift-JIS is converted to KanjiEUC, each Katakana character occupies two bytes. Thus, a
30-byte Katakana name in KanjiShift-JIS would expand in KanjiEUC to 60 bytes, which is
illegal.
The formula for calculating the correct length of an object name is as follows:
Length = ASCII + (2*KANJI) + MAX (2*KATAKANA, (KATAKANA + 2*S2M + 2*M2S))
where:

This variable … Represents the number of …
ASCII single-byte ASCII characters in the name.
KATAKANA single-byte Hankaku Katakana characters in the name.
KANJI double-byte characters in the name from the JIS-x0208 standard.
S2M transitions from ASCII or KATAKANA to JIS-x0208.
M2S transitions from JIS-x0208 to ASCII or KATAKANA.
How Validation Occurs
Name validation occurs when the object is created or renamed, as follows:
• User names, database names, and account names are verified during the CREATE/
MODIFY USER and CREATE/MODIFY DATABASE statements.
• Names of work tables and error tables are validated by the MultiLoad and FastLoad client
utilities.
• Table names and column names are verified during the CREATE/ALTER TABLE and
RENAME TABLE statements. View and macro names are verified during the CREATE/
RENAME VIEW and CREATE/RENAME MACRO statements.
• Stored procedure names are verified during the execution of CREATE/RENAME/
REPLACE PROCEDURE statements.
• Alias object names used in the SELECT, UPDATE, and DELETE statements are verified.
The validation occurs only when the SELECT statement is used in a CREATE/REPLACE
VIEW statement, and when the SELECT, UPDATE, or DELETE TABLE statement is used
in a CREATE/REPLACE MACRO statement.
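As a quick illustration of the length formula above, the following Python sketch (illustrative only; not Teradata code) applies it to rows taken from the example tables in the next section:

```python
def name_length(ascii_n, katakana, kanji, s2m, m2s):
    """Byte length of a name per the formula:
    ASCII + 2*KANJI + MAX(2*KATAKANA, KATAKANA + 2*S2M + 2*M2S)."""
    return ascii_n + 2 * kanji + max(2 * katakana, katakana + 2 * s2m + 2 * m2s)

# Variable counts taken from the KanjiEUC and KanjiShift-JIS example tables:
assert name_length(6, 0, 7, 3, 3) == 32   # not valid: LEN > 30
assert name_length(6, 0, 7, 2, 2) == 28   # valid
assert name_length(6, 7, 5, 1, 1) == 30   # valid
assert name_length(6, 7, 5, 2, 2) == 31   # not valid: LEN > 30
```

The MAX term reflects that a run of Katakana either stays in two-byte JIS-x0208 form or is bracketed by shift transitions, whichever costs more bytes.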
Examples of Validating Japanese Object Names
The following tables illustrate valid and non-valid object names under the Japanese character
sets: KanjiEBCDIC, KanjiEUC, and KanjiShift-JIS. The meanings of ASCII, KATAKANA,
KANJI, S2M, M2S, and LEN are defined in “Calculating the Length of a Name” on page 78.
KanjiEBCDIC Object Name Examples
KanjiEUC Object Name Examples
Name ASCII Katakana Kanji S2M M2S LEN Result
0 0 14 1 1 32 Not valid because LEN > 30.
kl 2 0 12 2 2 34 Not valid because LEN > 30.
kl<> 2 0 10 2 2 30 Not valid because consecutive SO
and SI characters are not allowed.
0 0 11 2 2 30 Not valid because consecutive SI
and SO characters are not allowed.
ABCDEFGHIJKLMNO 0 15 0 0 0 30 Valid.
KLMNO 0 5 10 1 1 30 Valid.
> 0 0 1 1 1 6 Not valid because the double byte
space is not allowed.
Name ASCII Katakana Kanji S2M M2S LEN Result
ABCDEFGHIJKLM 6 0 7 3 3 32 Not valid because LEN > 30 bytes.
ABCDEFGHIJKLM 6 0 7 2 2 28 Valid.
ss2ABCDEFGHIJKL 0 1 11 1 1 27 Valid.
Ass2BCDEFGHIJKL 0 1 11 2 2 31 Not valid because LEN > 30 bytes.
ss3C 0 0 0 1 1 4 Not valid because characters from
code set 3 are not allowed.
KanjiShift-JIS Object Name Examples
Name ASCII Katakana Kanji S2M M2S LEN Result
ABCDEFGHIJKLMNOPQR 6 7 5 1 1 30 Valid.
ABCDEFGHIJKLMNOPQR 6 7 5 2 2 31 Not valid because LEN > 30 bytes.
Related Topics
For charts of the supported Japanese character sets, the Teradata Database internal encodings,
the valid character ranges for Japanese object names and data, and the non-valid character
ranges for Japanese data and object names, see International Character Set Support.
Object Name Translation and Storage
Object names are stored in the dictionary tables using the following translation conventions.
Both the ASCII character set and the EBCDIC character set are stored on the server as ASCII.

Character Type Description
Single byte All single byte characters in a name, including the KanjiEBCDIC Shift-Out/Shift-In
characters, are translated into the Teradata Database internal representation
(based on JIS-x0201 encoding).
Multibyte Multibyte characters in object names are handled according to the character set in
effect for the current session, as follows.

Multibyte Character Set Description
KanjiEBCDIC Each multibyte character within the Shift-Out/Shift-In
delimiters is stored without translation; that is, it
remains in the client encoding. The name string
must have matched (but not consecutive) Shift-Out
and Shift-In delimiters.
KanjiEUC Under code set 1, each multibyte character is
translated from KanjiEUC to KanjiShift-JIS. Under
code set 2, byte ss2 (0x8E) is translated to 0x80; the
second byte is left unmodified.
This translation preserves the relative ordering of
code set 0, code set 1, and code set 2.
KanjiShift-JIS Each multibyte character is stored without
translation; it remains in the client encoding.
Object Name Comparisons
Comparison Rules
In comparing two names, the following rules apply:
• A simple Latin lowercase letter is equivalent to its corresponding simple Latin uppercase
letter. For example, 'a' is equivalent to 'A'.
• Multibyte characters that have the same logical presentation but have different physical
encodings under different character sets do not compare as equivalent.
• Two names compare as identical when their internal hexadecimal representations are the
same, even if their logical meanings are different under the originating character sets.
Note that identical characters on keyboards connected to different clients are not necessarily
identical in their internal encoding in the Teradata Database. The Teradata Database could
interpret two logically identical names as different names if the character sets under which
they were created are not the same.
For example, the following strings illustrate the internal representation of two names, both of
which were defined with the same logical multibyte characters. However, the first name was
created under KanjiEBCDIC, and the second name was created under KanjiShift-JIS.
KanjiEBCDIC: 0E 42E3 42C1 42C2 42F1 0F 51 52
KanjiShift-JIS: 8273 8260 8261 8250 D8 D9
To ensure upgrade compatibility, you must avoid semantically duplicate object names in
situations where duplicate object names would not normally be allowed.
Also, two different character sets might have the same internal encoding for two logically
different multibyte characters. Thus, two names might compare as identical even if their
logical meanings are different.
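These rules can be sketched as follows. The byte-level case fold below is only an illustration of the stated behavior; it treats every byte in the Latin lowercase range as a letter, which is a simplification, and it is not the actual Teradata comparison routine:

```python
def names_equal(a: bytes, b: bytes) -> bool:
    """Compare two internal name encodings: simple Latin lowercase letters
    (0x61-0x7A) fold to uppercase; every other byte must match exactly."""
    def fold(bs):
        return bytes(c - 0x20 if 0x61 <= c <= 0x7A else c for c in bs)
    return fold(a) == fold(b)

# 'a' is equivalent to 'A':
assert names_equal(b"EmpNo", b"EMPNO")

# The two internal strings shown above encode the same logical multibyte
# name under KanjiEBCDIC and KanjiShift-JIS, yet do NOT compare as equal:
kanji_ebcdic = bytes.fromhex("0E42E342C142C242F10F5152")
kanji_sjis = bytes.fromhex("8273826082618250D8D9")
assert not names_equal(kanji_ebcdic, kanji_sjis)
```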
Using the Internal Hexadecimal Representation of a Name
The Teradata Database knows an object name by its internal hexadecimal representation, and
this is how it is stored in the various system tables of the Data Dictionary.
The encoding of the internal representation of an object name depends on the components of
the name string (are there single byte characters, multibyte characters, or both; are there Shift
Out/Shift In (SO/SI) characters, and so on) and the character set in effect when the name was
created.
Suppose that a user under one character set needs to reference an object created by a user
under a different character set. If the current user attempts to reference the name with the
actual characters (that is, by typing the characters or by selecting non-specific entries from a
dictionary table), the access could fail or the returned name could be meaningless.
For example, assume that User_1 invokes a session under KanjiEBCDIC and creates a table
name with multibyte characters.
User_2 invokes a session under KanjiEUC and issues the following statement.
SELECT TableName
FROM DBC.Tables;
The result returns the KanjiEBCDIC characters in KanjiEUC presentation, which probably
does not make sense.
You can avoid this problem by creating objects and specifying object names in the following
ways:
• Create objects using names that contain only simple single byte Latin letters (A...Z, a...z)
digits, and the DOLLAR SIGN ($), NUMBER SIGN (#), and LOW LINE ( _ ) symbols.
Because these characters always translate to the same internal representation, they display
exactly the same presentation to any session, regardless of the client or the character set.
• Use the following syntax to reference a name by its internal representation:

'hexadecimal_digit(s)' XN

where:

Syntax element … Specifies …
'hexadecimal_digits' a quoted hexadecimal string representation of the Teradata
Database internal encoding.

The key letters XN specify that the string is a hexadecimal name.
Example
The following table name, which contains mixed single byte characters and multibyte
characters, was created under a KanjiEBCDIC character set:
KAN
The client encoding in which this name was received is as follows:
0E 42E3 42C1 42C2 42F1 0F D2 C1 D5
< T A B 1 > K A N
The single byte characters (the letters K, A, and N, and the SO/SI characters) were translated
into internal JIS-x0201 encoding. The multibyte characters were not translated and remained
in the host encoding.
The resulting internal string by which the name was stored is as follows:
0E 42E3 42C1 42C2 42F1 0F 4B 41 4E
< T A B 1 > K A N
To access this table from a KanjiShift-JIS or KanjiEUC character set, you could use the
following Teradata SQL statement:
SELECT *
FROM '0E42E342C142C242F10F4B414E'XN;
The response would be all rows from table KAN.
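As a cross-check, the internal string used in this example can be pulled apart in any language; this Python fragment is purely illustrative and simply decomposes the hexadecimal value:

```python
name_hex = "0E42E342C142C242F10F4B414E"
raw = bytes.fromhex(name_hex)

# Byte 0 is Shift-Out (0x0E) and byte 9 is Shift-In (0x0F); between them
# sit four 2-byte multibyte characters (42E3 42C1 42C2 42F1).
assert raw[0] == 0x0E and raw[9] == 0x0F

# The single byte tail is plain JIS-x0201/ASCII and spells the name KAN.
assert raw[10:].decode("ascii") == "KAN"
```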
Finding the Internal Hexadecimal
Representation for Object Names
Introduction
The CHAR2HEXINT function converts a character string to its internal hexadecimal
representation. You can use this function to find the internal representation of any Teradata
Database name.
For more information on CHAR2HEXINT, see SQL Reference: Functions and Operators.
Example 1
For example, to find the internal representation of all Teradata Database table names, issue the
following Teradata SQL statement.
SELECT CHAR2HEXINT(T.TableName) (TITLE 'Internal Hex Representation
of TableName'),T.TableName (TITLE 'TableName')
FROM DBC.Tables T
WHERE T.TableKind = 'T'
ORDER BY T.TableName;
This statement selects all rows from the DBC.Tables view where the value of the TableKind
column is T.
For each row selected, both the internal hexadecimal representation and the character format
of the value in the TableName column are returned, sorted alphabetically.
An example of a portion of the output from this statement is shown below. In this example,
the first name (double byte-A) was created using the KanjiEBCDIC character set.
Internal Hex Representation of TableName TableName
------------------------------------------------------------ -----------
0E42C10F2020202020202020202020202020202020202020202020202020
416363657373526967687473202020202020202020202020202020202020 AccessRights
4163634C6F6752756C6554626C2020202020202020202020202020202020 AccLogRuleTb
4163634C6F6754626C202020202020202020202020202020202020202020 AccLogTbl
4163636F756E747320202020202020202020202020202020202020202020 Accounts
416363746720202020202020202020202020202020202020202020202020 Acctg
416C6C202020202020202020202020202020202020202020202020202020 All
4368616E676564526F774A6F75726E616C20202020202020202020202020 ChangedRowJo
636865636B5F7065726D2020202020202020202020202020202020202020 check_perm
436F70496E666F54626C2020202020202020202020202020202020202020 CopInfoTbl
Note that the first name cannot be interpreted. To obtain a printable
version of a name, you must log onto a session under the same character set under which the
name was created.
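For names made only of single byte Latin characters, you can reproduce this padded representation outside the database. The following Python sketch mimics CHAR2HEXINT on a 30-byte, space-padded name; it is an illustration under the assumption of a single-byte, ASCII-compatible server encoding, not Teradata code:

```python
def char2hexint(name: str, width: int = 30) -> str:
    """Hex of `name` space-padded to `width` bytes, assuming a
    single-byte, ASCII-compatible server encoding."""
    return name.ljust(width).encode("ascii").hex().upper()

# Matches the padded dictionary rows shown in the examples: 'All' is
# 416C6C followed by 27 repetitions of 20 (the space character).
assert char2hexint("All") == "416C6C" + "20" * 27
```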
Example 2
You can use the same syntax to obtain the internal hexadecimal representations of all views or
all macros.
To do this, modify the WHERE condition to TableKind=’V’ for views and to TableKind=’M’
for macros.
To obtain the internal hexadecimal representation of all database names, you can issue the
following statement:
SELECT CHAR2HEXINT(D.DatabaseName)(TITLE 'Internal Hex Representation
of DatabaseName'),D.DatabaseName (TITLE 'DatabaseName')
FROM DBC.Databases D
ORDER BY D.DatabaseName;
This statement selects every DatabaseName from DBC.Databases. For each DatabaseName, it
returns the internal hexadecimal representation and the name in character format, sorted by
DatabaseName.
An example of the output from this statement is as follows:
Internal Hex Representation of DatabaseName DatabaseName
------------------------------------------------------------ ------------
416C6C202020202020202020202020202020202020202020202020202020 All
434F4E534F4C452020202020202020202020202020202020202020202020 CONSOLE
437261736864756D70732020202020202020202020202020202020202020 Crashdumps
444243202020202020202020202020202020202020202020202020202020 DBC
44656661756C742020202020202020202020202020202020202020202020 Default
5055424C4943202020202020202020202020202020202020202020202020 PUBLIC
53797341646D696E20202020202020202020202020202020202020202020 SysAdmin
53797374656D466520202020202020202020202020202020202020202020 SystemFe
Example 3
Note that these statements return the padded hexadecimal name. The value 0x20 represents a
space character in the internal representation.
You can use the TRIM function to obtain the hexadecimal values without the trailing spaces,
as follows.
SELECT CHAR2HEXINT(TRIM(T.TableName)) (TITLE 'Internal Hex
Representation of TableName'),T.TableName (TITLE 'TableName')
FROM DBC.Tables T
WHERE T.TableKind = 'T'
ORDER BY T.TableName;
Specifying Names in a Logon String
Purpose
Identifies a user to the Teradata Database and, optionally, permits the user to specify a
particular account to log onto.
Syntax
tdpid/username ,password ,accountname
where:

Syntax element … Specifies …
tdp_id/username the client TDP the user wishes to use to communicate with the Teradata
Database and the name by which the Teradata Database knows the user.
The username parameter can contain mixed single byte and multibyte
characters if the current character set permits them.
password an optional (depending on how the user is defined) password required to gain
access to the Teradata Database.
The password parameter can contain mixed single byte and multibyte
characters if the current character set permits them.
accountname an optional account name or account string that specifies a user account or
account and performance-related variable parameters the user can use to tailor
the session being logged onto.
The accountname parameter can contain mixed single byte and multibyte
characters if the current character set permits them.

The Teradata Database does not support the hexadecimal representation of a username, a
password, or an accountname in a logon string.
For example, if you attempt to log on as user DBC by entering '444243'XN, the logon is not
successful and an error message is generated.
Passwords
The password format options allow the site administrator to change the minimum and
maximum number of characters allowed in the password string, and to control the use of
digits and special characters.
Password string rules are identical to those for naming objects. See “Name Validation on
Systems Enabled with Japanese Language Support” on page 77.
The password formatting feature does not apply to multibyte client character sets on systems
enabled with Japanese language support.
Literals
Literals, or constants, are values coded directly in the text of an SQL statement, view or macro
definition text, or CHECK constraint definition text. In general, the system is able to
determine the data type of a literal by its form.
Numeric Literals
A numeric literal (also referred to as a constant) is a character string of 1 to 40 characters
selected from the following:
• digits 0 through 9
• plus sign
• minus sign
• decimal point
There are three types of numeric literals: integer, decimal, and floating point.
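A rough way to see the three forms is a small classifier like the one below. This is a sketch, not the actual Teradata parser, and the INTEGER range bound (2147483647) is an assumption based on the 4-byte INTEGER type; the rules here say only that a value outside the integer range is treated as decimal:

```python
def numeric_literal_type(text: str) -> str:
    """Classify a numeric literal string per the rules described here."""
    if "E" in text.upper():
        return "FLOATING POINT"   # mantissa, the character E, exponent
    if "." in text:
        return "DECIMAL"          # contains a decimal point
    # Otherwise: optional sign followed by digits only.
    value = int(text)
    # Assumed 4-byte INTEGER range; out-of-range integers become DECIMAL.
    return "INTEGER" if -2147483648 <= value <= 2147483647 else "DECIMAL"

assert numeric_literal_type("10005") == "INTEGER"
assert numeric_literal_type("-3.14") == "DECIMAL"
assert numeric_literal_type("2.5E10") == "FLOATING POINT"
assert numeric_literal_type("99999999999") == "DECIMAL"
```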
Hexadecimal Literals
A hexadecimal literal specifies a string of 0 to 62000 hexadecimal digits that can represent a
byte, character, or integer value. A hexadecimal digit is a character from 0 to 9, a to f, or A to F.
Integer Literal
An integer literal declares literal strings of integer numbers. Integer literals consist of an optional sign followed by a sequence of up to 10 digits.
A numeric literal that is outside the range of values of an integer literal is considered a decimal literal.

Decimal Literal
A decimal literal declares literal strings of decimal numbers.
Decimal literals consist of the following components, reading from left to right: an optional sign, an optional sequence of up to 38 digits (mandatory only when no digits appear after the decimal point), an optional decimal point, and an optional sequence of digits (mandatory only when no digits appear before the decimal point). The precision and scale of a decimal literal are determined by the total number of digits in the literal and the number of digits to the right of the decimal point, respectively.

Floating Point Literal
A floating point literal declares literal strings of floating point numbers.
Floating point literals consist of the following components, reading from left to right: an optional sign, an optional sequence of digits (mandatory only when no digits appear after the decimal point) representing the whole number portion of the mantissa, an optional decimal point, an optional sequence of digits (mandatory only when no digits appear before the decimal point) representing the fractional portion of the mantissa, the literal character E, an optional sign, and a sequence of digits representing the exponent.
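For example, the following literals are, respectively, an integer literal, a decimal literal, and a floating point literal:

1024
-24.95
1.25E-3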
DateTime Literals
Date and time literals declare date, time, or timestamp values in an SQL expression, view or macro definition text, or CHECK constraint definition text.
Date and time literals are introduced by keywords. For example:
DATE '1969-12-23'
There are three types of DateTime literals: DATE, TIME, and TIMESTAMP.

DATE Literal
A date literal declares a date value in ANSI DATE format. The ANSI DATE literal is the preferred format for DATE constants. All DATE operations accept this format.

TIME Literal
A time literal declares a time value and an optional time zone offset.

TIMESTAMP Literal
A timestamp literal declares a timestamp value and an optional time zone offset.

Interval Literals
Interval literals provide a means for declaring spans of time.
Interval literals are introduced and followed by keywords. For example:
INTERVAL '200' HOUR
There are two mutually exclusive categories of interval literals: Year-Month and Day-Time.

Year-Month (YEAR, YEAR TO MONTH, MONTH)
Represent a time span that can include a number of years and months.

Day-Time (DAY, DAY TO HOUR, DAY TO MINUTE, DAY TO SECOND, HOUR, HOUR TO MINUTE, HOUR TO SECOND, MINUTE, MINUTE TO SECOND, SECOND)
Represent a time span that can include a number of days, hours, minutes, or seconds.
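For example, the following literals declare a time value, a timestamp value, and a Year-Month interval spanning three years and six months, respectively:

TIME '11:37:58'
TIMESTAMP '1999-01-01 23:59:59'
INTERVAL '3-06' YEAR TO MONTH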
Character Literals
A character literal declares a character value in an expression, view or macro definition text, or
CHECK constraint definition text.
Character literals consist of 0 to 31000 bytes delimited by a matching pair of single quotes. A
zero-length character literal is represented by two consecutive single quotes ('').
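For example, the following character literals declare the string abc, a zero-length string, and a string that contains an APOSTROPHE (written by doubling the APOSTROPHE character), respectively:

'abc'
''
'Let''s go'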
Graphic Literals
A graphic literal specifies multibyte characters within the graphic repertoire.
Built-In Functions
The built-in functions, also known as special register functions, are niladic (they take no arguments). They return various information about the system and can be used like other literals within SQL expressions. In an SQL query, the appropriate system value is substituted by the Parser after optimization but prior to executing the query using a cacheable plan.
Available built-in functions include all of the following:
• ACCOUNT
• CURRENT_DATE
• CURRENT_TIME
• CURRENT_TIMESTAMP
• DATABASE
• DATE
• PROFILE
• ROLE
• SESSION
• TIME
• USER
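For example, the following zero-table SELECT returns the user name and the current timestamp for the session:

SELECT USER, CURRENT_TIMESTAMP;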
Related Topics
FOR more information on …                SEE …
• numeric literals                       SQL Reference: Data Types and Literals.
• DateTime literals
• interval literals
• character literals
• graphic literals
• hexadecimal literals
built-in functions                       SQL Reference: Functions and Operators.
NULL Keyword as a Literal
Null
A null represents any of three things:
• An empty column
• An unknown value
• An unknowable value
Nulls are neither values nor do they signify values; they represent the absence of value. A null is a placeholder indicating that no value is present.
NULL Keyword
The keyword NULL represents null, and is sometimes available as a special construct similar
to, but not identical with, a literal.
ANSI Compliance
NULL is ANSI SQL-2003-compliant with extensions.
Using NULL as a Literal
Use NULL as a literal in the following ways:
• A CAST source operand, for example:
SELECT CAST (NULL AS DATE);
• A CASE result, for example:
SELECT CASE WHEN orders = 10 THEN NULL END FROM sales_tbl;
• An insert item specifying a null is to be placed in a column position on INSERT.
• An update item specifying a null is to be placed in a column position on UPDATE.
• A default column definition specification, for example:
CREATE TABLE European_Sales
(Region INTEGER DEFAULT 99
,Sales Euro_Type DEFAULT NULL);
• An explicit SELECT item, for example:
SELECT NULL
This is a Teradata extension to ANSI.
• An operand of a function, for example:
SELECT TYPE(NULL)
This is a Teradata extension to ANSI.
Data Type of NULL
When you use NULL as an explicit SELECT item or as the operand of a function, its data type
is INTEGER. In all other cases NULL has no data type because it has no value.
For example, if you perform SELECT TYPE(NULL), then INTEGER is returned as the data
type of NULL.
To avoid type issues, cast NULL to the desired type.
Related Topics
For information on the behavior of nulls and how to use them in data manipulation
statements, see “Manipulating Nulls” on page 134.
Operators
Introduction
SQL operators are used to express logical and arithmetic operations. Operators of the same
precedence are evaluated from left to right. See “SQL Operations and Precedence” on page 91
for more detailed information.
Parentheses can be used to control the order of precedence. When parentheses are present,
operations are performed from the innermost set of parentheses outward.
Definitions
The following definitions apply to SQL operators.
SQL Operations and Precedence
SQL operations, and the order in which they are performed when no parentheses are present,
appear in the following table. Operators of the same precedence are evaluated from left to
right.
numeric: Any literal, data reference, or expression having a numeric value.
string: Any character string or string expression.
logical: A Boolean expression (resolves to TRUE, FALSE, or unknown).
value: Any numeric, character, or byte data item.
set: A collection of values returned by a subquery, or a list of values separated by commas and enclosed by parentheses.
Precedence    Result Type   Operation
highest       numeric       + numeric (unary plus)
                            - numeric (unary minus)
intermediate  numeric       numeric ** numeric (exponentiation)
              numeric       numeric * numeric (multiplication)
                            numeric / numeric (division)
                            numeric MOD numeric (modulo operator)
              numeric       numeric + numeric (addition)
                            numeric - numeric (subtraction)
              string        concatenation operator
              logical       value EQ value
                            value NE value
                            value GT value
                            value LE value
                            value LT value
                            value GE value
                            value IN set
                            value NOT IN set
                            value BETWEEN value AND value
                            character value LIKE character value
              logical       NOT logical
              logical       logical AND logical
lowest        logical       logical OR logical

Functions

Scalar Functions
Scalar functions take input parameters and return a single value result. Some examples of standard SQL scalar functions are CHARACTER_LENGTH, POSITION, and SUBSTRING.

Aggregate Functions
Aggregate functions produce summary results. They differ from scalar functions in that they take grouped sets of relational data, make a pass over each group, and return one result for the group. Some examples of standard SQL aggregate functions are AVG, SUM, MAX, and MIN.

Related Topics
For the names, parameters, return values, and other details of scalar and aggregate functions, see SQL Reference: Functions and Operators.
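For example, because multiplication has higher precedence than addition, the following zero-table SELECT evaluates 3 * 4 before the addition and returns 14:

SELECT 2 + 3 * 4;

Adding parentheses changes the order of evaluation, so SELECT (2 + 3) * 4; returns 20. Similarly, an aggregate function returns one result per group; the following illustrative query (using the Employee table that appears in examples elsewhere in this book) returns one AVG value for each department:

SELECT deptno, AVG(salary)
FROM employee
GROUP BY deptno;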
Delimiters
Introduction
Delimiters are special characters having meanings that depend on context.
The function of each delimiter appears in the following list.

( ) LEFT PARENTHESIS, RIGHT PARENTHESIS
Group expressions and define the limits of various phrases.

, COMMA
Separates and distinguishes column names in the select list, or column names or parameters in an optional clause, or DateTime fields in a DateTime type.

: COLON
Prefixes reference parameters or client system variables. Also separates DateTime fields in a DateTime type.

. FULLSTOP
• Separates database names from table, trigger, UDF, UDT, and stored procedure names, such as personnel.employee.
• Separates table names from a particular column name, such as employee.deptno.
• In numeric constants, the period is the decimal point.
• Separates DateTime fields in a DateTime type.
• Separates a method name from a UDT expression in a method invocation.

; SEMICOLON
• Separates statements in multi-statement requests.
• Separates statements in a stored procedure body.
• Separates SQL procedure statements in a triggered SQL statement in a trigger definition.
• Terminates requests submitted via utilities such as BTEQ.
• Terminates embedded SQL statements in C or PL/I applications.

' APOSTROPHE
• Defines the boundaries of character string constants.
• To include an APOSTROPHE character or show possession in a title, double the APOSTROPHE characters.
• Also separates DateTime fields in a DateTime type.

" QUOTATION MARK
Defines the boundaries of nonstandard names.

/ SOLIDUS
Separates DateTime fields in a DateTime type.

B b (Uppercase B, Lowercase b)
Separates DateTime fields in a DateTime type.

- HYPHEN-MINUS
Separates DateTime fields in a DateTime type.

Example

In the following statement submitted through BTEQ, the FULLSTOP separates the database names (Examp and Personnel) from the table names (Profile and Employee), and, where a reference is qualified to avoid ambiguity, it separates the table names (Profile, Employee) from the column name (DeptNo).

UPDATE Examp.Profile SET FinGrad = 'A'
WHERE Name = 'Phan A' ; SELECT EdLev, FinGrad, JobTitle,
YrsExp FROM Examp.Profile, Personnel.Employee
WHERE Profile.DeptNo = Employee.DeptNo ;

The first SEMICOLON separates the UPDATE statement from the SELECT statement. The second SEMICOLON terminates the entire multistatement request.

The semicolon is required in Teradata SQL to separate multiple statements in a request and to terminate a request submitted through BTEQ.

Separators

Lexical Separators

A lexical separator is a character string that can be placed between words, literals, and delimiters without changing the meaning of a statement.

Valid lexical separators are any of the following.
• Comments
For an explanation of comment lexical separators, see “Comments” on page 95.
• Pad characters (several pad characters are treated as a single pad character except in a string literal)
• RETURN characters (X'0D')

Statement Separators

The SEMICOLON is a Teradata SQL statement separator.
Each statement of a multistatement request must be separated from any subsequent statement
with a semicolon.
The following multistatement request illustrates the use of the semicolon as a statement
separator.
SHOW TABLE Payroll_Test ; INSERT INTO Payroll_Test
(EmpNo, Name, DeptNo) VALUES ('10044', 'Jones M',
'300') ; INSERT INTO ...
For statements entered using BTEQ, a request terminates with a semicolon at the end of an input line unless that line contains a comment beginning with two hyphens (--). Everything to the right of the -- is a comment. In this case, the semicolon must be on the following line.
The SEMICOLON as a statement separator in a multistatement request is a Teradata extension
to the ANSI SQL-2003 standard.
Comments
Introduction
You can embed comments within an SQL request anywhere a blank can occur.
The SQL parser and the preprocessor recognize the following types of embedded comments:
• Simple
• Bracketed
Simple Comments
The simple form of a comment is delimited by two consecutive HYPHEN-MINUS (U+002D)
characters (--) at the beginning of the comment and the newline character at the end of the
comment.
The newline character is implementation-specific, but is typed by pressing the Enter (non-3270 terminals) or Return (3270 terminals) key.
Simple SQL comments cannot span multiple lines.
Example
The following SELECT statement illustrates the use of a simple comment:
SELECT EmpNo, Name FROM Payroll_Test
ORDER BY Name -- Alphabetic order
;
The general form is:

-- comment_text newline_character
Bracketed Comments
A bracketed comment is a text string of unrestricted length that is delimited by the beginning
comment characters SOLIDUS (U+002F) and ASTERISK (U+002A) /* and the end comment
characters ASTERISK and SOLIDUS */.
Bracketed comments can begin anywhere on an input line and can span multiple lines.
Example
The following CREATE TABLE statement illustrates the use of a bracketed comment.
CREATE TABLE Payroll_Test /* This is a test table
set up to process actual payroll data on a test
basis. The data generated from this table will
be compared with the existing payroll system
data for 2 months as a parallel test. */
(EmpNo INTEGER NOT NULL FORMAT 'ZZZZ9',
Name VARCHAR(12) NOT NULL,
DeptNo INTEGER FORMAT 'ZZZZ9',
.
.
.
Comments With Multibyte Character Set Strings
You can include multibyte character set strings in both simple and bracketed comments.
When using mixed mode in comments, you must have a properly formed mixed mode string,
which means that a Shift-In (SI) must follow its associated Shift-Out (SO).
If an SI does not follow the multibyte string, the results are unpredictable.
When using bracketed comments that span multiple lines, the SI must be on the same line as
its associated SO. If the SI and SO are not on the same line, the results are unpredictable.
You must specify the bracketed comment delimiters (/* and */) as single byte characters.
Terminators
Definition
The SEMICOLON is a Teradata SQL request terminator when it is the last non-blank
character on an input line in BTEQ unless that line has a comment beginning with two dashes.
In this case, the SEMICOLON request terminator should be on the following line, after the
comment line.
A request is considered complete when either the “End of Text” character or the request
terminator character is detected.
ANSI Compliance
The SEMICOLON as a request terminator is a Teradata extension to the ANSI SQL-2003 standard.
Example
For example, on the following input line:
SELECT *
FROM Employee ;
the SEMICOLON terminates the single-statement request “SELECT * FROM Employee”.
BTEQ uses SEMICOLONs to terminate multistatement requests.
A request terminator is mandatory for request types that are:
• In the body of a macro
• Triggered action statements in a trigger definition
• Entered using the BTEQ interface
• Entered using other interfaces that require BTEQ
Example 1: Macro Request
The following statement illustrates the use of a request terminator in the body of a macro.
CREATE MACRO Test_Pay (number (INTEGER),
name (VARCHAR(12)),
dept (INTEGER)) AS
( INSERT INTO Payroll_Test (EmpNo, Name, DeptNo)
VALUES (:number, :name, :dept) ;
UPDATE DeptCount
SET EmpCount = EmpCount + 1 ;
SELECT *
FROM DeptCount ; )
Example 2: BTEQ Request
When entered through BTEQ, the entire CREATE MACRO statement must be terminated.
CREATE MACRO Test_Pay
(number (INTEGER),
name (VARCHAR(12)),
dept (INTEGER)) AS
(INSERT INTO Payroll_Test (EmpNo, Name, DeptNo)
VALUES (:number, :name, :dept) ;
UPDATE DeptCount
SET EmpCount = EmpCount + 1 ;
SELECT *
FROM DeptCount ; ) ;
Null Statements
Introduction
A null statement is a statement that has no content except for optional pad characters or SQL
comments.
Example 1
The semicolon in the following request is a null statement.
/* This example shows a comment followed by
a semicolon used as a null statement */
; UPDATE Pay_Test SET ...
Example 2
The first SEMICOLON in the following request is a null statement. The second SEMICOLON
is taken as statement separator:
/* This example shows a semicolon used as a null
statement and as a statement separator */
; UPDATE Payroll_Test SET Name = 'Wedgewood A'
WHERE Name = 'Wedgewood A'
; SELECT ...
-- This example shows the use of an ANSI component
-- used as a null statement and statement separator ;
Example 3
A SEMICOLON that precedes the first (or only) statement of a request is taken as a null
statement.
;DROP TABLE temp_payroll;

CHAPTER 3  SQL Data Definition, Control, and Manipulation
This chapter describes the functional families of the SQL language.
Topics include:
• SQL Functional Families and Binding Styles
• Data Definition Language
• Data Control Language
• Data Manipulation Language
• Query and Workload Analysis Statements
• Help and Database Object Definition Tools
SQL Functional Families and Binding Styles
Introduction
The SQL language can be characterized in several different ways. This chapter is organized
around functional groupings of the components of the language with minor emphasis on
binding styles.
Definition: Functional Family
SQL provides facilities for defining database objects, for defining user access to those objects,
and for manipulating the data stored within them.
The following list describes the principal functional families of the SQL language.
• SQL Data Definition Language (DDL)
• SQL Data Control Language (DCL)
• SQL Data Manipulation Language (DML)
• Query and Workload Analysis Statements
• Help and Database Object Definition Tools
Some classifications of SQL group the data control language statements with the data
definition language statements.
Definition: Binding Style
The ANSI SQL standards do not define the term binding style. The expression refers to a
possible method by which an SQL statement can be invoked.
Teradata Database supports the following SQL binding styles:
• Direct, or interactive
• Embedded SQL
• Stored procedure
• SQL Call Level Interface (as ODBC)
• JDBC
The direct binding style is usually not qualified in this manual set because it is the default
style.
Embedded SQL and stored procedure binding styles are always clearly specified, either
explicitly or by context.
Related Topics
You can find more information on binding styles in the SQL Reference set and in other books.
FOR more information on …   SEE …
embedded SQL                • “Embedded SQL” on page 100
                            • Teradata Preprocessor2 for Embedded SQL Programmer Guide
                            • SQL Reference: Stored Procedures and Embedded SQL
stored procedures           • “Stored Procedures” on page 48
                            • SQL Reference: Stored Procedures and Embedded SQL
ODBC                        ODBC Driver for Teradata User Guide
JDBC                        Teradata Driver for the JDBC Interface User Guide

Embedded SQL

You can execute SQL statements from within client application programs. The expression embedded SQL refers to SQL statements executed or declared from within a client application.

An embedded Teradata SQL client program consists of the following:
• Client programming language statements
• One or more embedded SQL statements
• Depending on the host language, one or more embedded SQL declare sections
SQL declare sections are optional in COBOL and PL/I, but must be used in C.
A special prefix, EXEC SQL, distinguishes the SQL language statements embedded into the
application program from the host programming language.
Embedded SQL statements must follow the rules of the host programming language
concerning statement continuation and termination, construction of variable names, and so
forth. Aside from these rules, embedded SQL is host language-independent.
Details of Teradata Database support for embedded SQL are described in SQL Reference:
Stored Procedures and Embedded SQL.
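The following sketch shows the general shape of embedded SQL in a C application; the table, column, and host variable names are illustrative only:

EXEC SQL BEGIN DECLARE SECTION;
char emp_name[13];
EXEC SQL END DECLARE SECTION;

EXEC SQL
   SELECT Name
   INTO :emp_name
   FROM Employee
   WHERE EmpNo = 10044;

The EXEC SQL prefix marks each statement for the preprocessor, and the COLON prefixes the host variable emp_name.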
Data Definition Language
Definition
The SQL Data Definition Language (DDL) is a subset of the SQL language and consists of all
SQL statements that support the definition of database objects.
Purpose of Data Definition Language Statements
Data definition language statements perform the following functions:
• Create, drop, rename, and alter tables
• Create, drop, rename, and replace stored procedures, user-defined functions, views, and
macros
• Create, drop, and alter user-defined types
• Create, drop, and replace user-defined methods
• Create and drop indexes
• Create, drop, and modify users and databases
• Create, drop, alter, rename, and replace triggers
• Create, drop, and set roles
• Create, drop, and modify profiles
• Collect statistics on a column set or index
• Establish a default database
• Comment on database objects
• Set a different collation sequence, account priority, DateForm, time zone, and database for
the session
• Begin and end logging
Rules on Entering DDL Statements
A DDL statement can be entered as:
• A single statement request.
• The solitary statement, or the last statement, in an explicit transaction (in Teradata mode,
one or more requests enclosed by user-supplied BEGIN TRANSACTION and END TRANSACTION statements, or, in ANSI mode, one or more requests ending with the COMMIT keyword).
• The solitary statement in a macro.
DDL statements cannot be entered as part of a multistatement request.
Successful execution of a DDL statement automatically creates and updates entries in the Data
Dictionary.
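For example, the following DDL statement, entered as a single statement request, creates a table and, on successful execution, records its definition in the Data Dictionary (the table and column names are illustrative):

CREATE TABLE department
   (deptno INTEGER NOT NULL,
    deptname VARCHAR(30))
UNIQUE PRIMARY INDEX (deptno);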
SQL Data Definition Statements
DDL statements include the following:
• ALTER FUNCTION
• ALTER METHOD
• ALTER PROCEDURE
• ALTER REPLICATION GROUP
• ALTER TABLE
• ALTER TRIGGER
• ALTER TYPE
• BEGIN LOGGING
• COMMENT
• CREATE AUTHORIZATION
• CREATE CAST
• CREATE DATABASE
• CREATE FUNCTION
• CREATE HASH INDEX
• CREATE INDEX
• CREATE JOIN INDEX
• CREATE MACRO
• CREATE METHOD
• CREATE ORDERING
• CREATE PROCEDURE
• CREATE PROFILE
• CREATE REPLICATION GROUP
• CREATE ROLE
• CREATE TABLE
• CREATE TRANSFORM
• CREATE TRIGGER
• CREATE TYPE
• CREATE USER
• CREATE VIEW
• DATABASE
• DELETE DATABASE
• DELETE USER
• DROP AUTHORIZATION
• DROP CAST
• DROP DATABASE
• DROP FUNCTION
• DROP HASH INDEX
• DROP INDEX
• DROP JOIN INDEX
• DROP MACRO
• DROP ORDERING
• DROP PROCEDURE
• DROP PROFILE
• DROP REPLICATION GROUP
• DROP ROLE
• DROP TABLE
• DROP TRANSFORM
• DROP TRIGGER
• DROP TYPE
• DROP USER
• DROP VIEW
• END LOGGING
• MODIFY DATABASE
• MODIFY PROFILE
• MODIFY USER
• RENAME FUNCTION
• RENAME MACRO
• RENAME PROCEDURE
• RENAME TABLE
• RENAME TRIGGER
• RENAME VIEW
• REPLACE CAST
• REPLACE FUNCTION
• REPLACE MACRO
• REPLACE METHOD
• REPLACE ORDERING
• REPLACE PROCEDURE
• REPLACE TRANSFORM
• REPLACE TRIGGER
• REPLACE VIEW
• SET ROLE
• SET SESSION
• SET TIME ZONE

Related Topics
For detailed information about the function, syntax, and usage of Teradata SQL Data Definition statements, see SQL Reference: Data Definition Statements.
Altering Table Structure and Definition
Introduction
You may need to change the structure or definition of an existing table or temporary table. In
many cases, you can use ALTER TABLE and RENAME to make the changes. Some changes,
however, may require you to use CREATE TABLE to recreate the table.
How to Make Changes
Use the RENAME TABLE statement to change the name of a table or temporary table.
Use the ALTER TABLE statement to perform any of the following functions:
• Add or drop columns on an existing table or temporary table
• Add column default control, FORMAT, and TITLE attributes on an existing table or
temporary table
• Add or remove journaling options on an existing table or temporary table
• Add or remove the FALLBACK option on an existing table or temporary table
• Change the DATABLOCKSIZE or percent FREESPACE on an existing table or temporary
table
• Add or drop column and table level constraints on an existing table or temporary table
• Change the LOG and ON COMMIT options for a global temporary table
• Modify referential constraints
• Change the properties of the primary index for a table (some cases require an empty table)
• Change the partitioning properties of the primary index for a table, including
modifications to the partitioning expression defined for use by a partitioned primary
index (some cases require an empty table)
• Regenerate table headers and optionally validate and correct the partitioning of PPI table
rows
• Define, modify, or delete the COMPRESS attribute for an existing column
• Change column attributes (that do not affect stored data) on an existing table or
temporary table
Restrictions apply to many of the preceding modifications. For a complete list of rules and
restrictions on using ALTER TABLE to change the structure or definition of an existing table,
see SQL Reference: Data Definition Statements.
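For example, the following ALTER TABLE statement adds a column and the FALLBACK option to an existing table (the names are illustrative):

ALTER TABLE department
   ADD mgr_empno INTEGER,
   FALLBACK;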
To perform any of the following functions, use CREATE TABLE to recreate the table:
• Redefine the primary index or its partitioning for a non-empty table when not allowed for
ALTER TABLE
• Change a data type attribute that affects existing data
• Add a column that would exceed the maximum column count
Interactively, the SHOW TABLE statement can call up the current table definition, which can
then be modified and resubmitted to create a new table.
If the stored data is not affected by incompatible data type changes, an INSERT... SELECT
statement can be used to transfer data from the existing table to the new table.
Dropping and Renaming Objects
Dropping Objects
To drop an object, use the appropriate DDL statement.
Renaming Objects
Teradata SQL provides RENAME statements that you can use to rename some objects. To
rename objects that do not have associated RENAME statements, you must first drop them
and then recreate them with a new name, or, in the case of primary indexes, use ALTER
TABLE.
To drop this type of database object …    Use this SQL statement …
Hash index                                DROP HASH INDEX
Join index                                DROP JOIN INDEX
Macro                                     DROP MACRO
Profile                                   DROP PROFILE
Role                                      DROP ROLE
Secondary index                           DROP INDEX
Stored procedure                          DROP PROCEDURE
Table                                     DROP TABLE
Global temporary table or volatile table  DROP TABLE
Primary index                             ALTER TABLE
Trigger                                   DROP TRIGGER
User-Defined Function                     DROP FUNCTION
User-Defined Method                       ALTER TYPE
User-Defined Type                         DROP TYPE
View                                      DROP VIEW

To rename this type of database object …  Use …
Hash index                                DROP HASH INDEX and then CREATE HASH INDEX
Join index                                DROP JOIN INDEX and then CREATE JOIN INDEX
Macro                                     RENAME MACRO
Primary index                             ALTER TABLE
Profile                                   DROP PROFILE and then CREATE PROFILE
Role                                      DROP ROLE and then CREATE ROLE
Secondary index                           DROP INDEX and then CREATE INDEX
Stored procedure                          RENAME PROCEDURE
Table                                     RENAME TABLE
Global temporary table or volatile table  RENAME TABLE
Trigger                                   RENAME TRIGGER
User-Defined Function                     RENAME FUNCTION
User-Defined Method                       ALTER TYPE and then CREATE METHOD
User-Defined Type                         DROP TYPE and then CREATE TYPE
View                                      RENAME VIEW

Related Topics
For further information on these statements, including rules that apply to usage, see SQL Reference: Data Definition Statements.

Data Control Language

Definition
The SQL Data Control Language (DCL) is a subset of the SQL language and consists of all SQL statements that support the definition of security authorization for accessing database objects.

Purpose of Data Control Statements
Data control statements perform the following functions:
• Grant and revoke privileges
• Give ownership of a database to another user

Rules on Entering Data Control Statements
A data control statement can be entered as:
• A single statement request
• The solitary statement, or the last statement, in an explicit transaction (one or more requests enclosed by user-supplied BEGIN TRANSACTION and END TRANSACTION statements in Teradata mode, or, in ANSI mode, one or more requests ending with the COMMIT keyword)
• The solitary statement in a macro
A data control statement cannot be entered as part of a multistatement request.
Successful execution of a data control statement automatically creates and updates entries in
the Data Dictionary.
Teradata SQL Data Control Statements
Data control statements include the following:
• GIVE
• GRANT
• GRANT LOGON
• REVOKE
• REVOKE LOGON
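For example, the following statements grant the SELECT privilege on a table to a user and give ownership of a database to another user (the object and user names are illustrative):

GRANT SELECT ON Personnel.Employee TO hr_user;
GIVE accounting TO chief_accountant;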
Related Topics
For detailed information about the function, syntax, and usage of Teradata SQL Data Control
statements, see “SQL Data Control Language Statement Syntax” in SQL Reference: Data
Definition Statements.
Data Manipulation Language
Definition
The SQL Data Manipulation Language (DML) is a subset of the SQL language and consists of
all SQL statements that support the manipulation or processing of database objects.
Selecting Columns
The SELECT statement returns information from the tables in a relational database. SELECT
specifies the table columns from which to obtain the data, the corresponding database (if not
defined by default), and the table (or tables) to be accessed within that database.
For example, to request the data from the name, salary, and jobtitle columns of the Employee
table, type:
SELECT name, salary, jobtitle FROM employee ;
The response might be something like the following results table.

Name        Salary    JobTitle
Newman P    28600.00  Test Tech
Chin M      38000.00  Controller
Aquilar J   45000.00  Manager
Russell S   65000.00  President
Clements D  38000.00  Salesperson

Note: The left-to-right order of the columns in a result table is determined by the order in which the column names are entered in the SELECT statement. Columns in a relational table are not ordered logically.

As long as a statement is otherwise constructed properly, the spacing between statement elements is not important, provided that at least one pad character separates each element that is not otherwise separated from the next.

For example, the SELECT statement in the above example could just as well be formulated like this:

SELECT name, salary,jobtitle
   FROM employee;

Notice that there are multiple pad characters between most of the elements and that a comma alone (with no pad characters) separates the column name salary from the column name jobtitle.

To select all the data in the employee table, you could enter the following SELECT statement:

SELECT * FROM employee ;

The asterisk specifies that the data in all columns (except system-derived columns) of the table is to be returned.

Selecting Rows

The SELECT statement retrieves stored data from a table. All rows, specified rows, or specific columns of all or specified rows can be retrieved. The FROM, WHERE, ORDER BY, DISTINCT, WITH, GROUP BY, HAVING, and TOP clauses provide for a fine detail of selection criteria.

To obtain data from specific rows of a table, use the WHERE clause of the SELECT statement. That portion of the clause following the keyword WHERE causes a search for rows that satisfy the condition specified.

For example, to get the name, salary, and title of each employee in Department 100, use the WHERE clause:

SELECT name, salary, jobtitle FROM employee
   WHERE deptno = 100 ;

The response appears in the following table.

Name        Salary    JobTitle
Chin M      38000.00  Controller
Greene W    32500.00  Payroll Clerk
Moffit H    35000.00  Recruiter
Peterson J  25000.00  Payroll Clerk

To obtain data from a multirow result table in embedded SQL, declare a cursor for the SELECT statement and use it to fetch individual result rows for processing.

To obtain data from the row with the oldest timestamp value in a queue table, use the SELECT AND CONSUME statement, which also deletes the row from the queue table.

Zero-Table SELECT

Zero-table SELECT statements return data but do not access tables.

For example, the following SELECT statement specifies an expression after the SELECT keyword that does not require a column reference or FROM clause:

SELECT 40000.00 / 52.;

The response is one row:

 (40000.00/52.)
-----------------
         769.23

Here is another example that specifies an attribute function after the SELECT keyword:

SELECT TYPE(sales_table.region);

Because the argument to the TYPE function is a column reference that specifies the table name, a FROM clause is not required and the query does not access the table.

The response is one row that might be something like the following:

Type(region)
---------------------------------------
INTEGER

Adding Rows

Use the INSERT statement to add rows to a table.

One statement is required for each new row, except in the case of an INSERT... SELECT statement. For more details on this, see SQL Reference: Data Manipulation Statements.

Defaults and constraints defined by the CREATE TABLE statement affect an insert operation in the following ways.
Updating Rows
To modify data in one or more rows of a table, use the UPDATE statement. In the UPDATE
statement, you specify the column name of the data to be modified along with the new value.
You can also use a WHERE clause to qualify the rows to change.
Attributes specified in the CREATE TABLE statement affect an update operation in the
following ways:
• When an update supplies a value that violates some defined constraint on a column or
columns, the update operation is rejected and an error message is returned.
• When an update supplies the value NULL and a NULL is allowed, any existing data is
removed from the column.
• If the result of an UPDATE will violate uniqueness constraints or create a duplicate row in
a table which does not allow duplicate rows, an error message is returned.
To update rows in a multirow result table in embedded SQL, declare a cursor for the SELECT
statement and use it to fetch individual result rows for processing, then use a WHERE
CURRENT OF clause in a positioned UPDATE statement to update the selected rows.
The Teradata Database supports a special form of UPDATE, called the upsert form, which is a
single SQL statement that includes both UPDATE and INSERT functionality. The specified
update operation performs first, and if it fails to find a row to update, then the specified insert
operation performs automatically. Alternatively, use the MERGE statement.
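For example, the upsert form of UPDATE might look like the following sketch, which increments a running total if a row for the item exists and otherwise creates one. The sales table and its columns are illustrative, not from this manual:

```sql
UPDATE sales
SET    total_sold = total_sold + 1
WHERE  item_nbr = 20
ELSE INSERT INTO sales (item_nbr, total_sold)
  VALUES (20, 1);
```

If the UPDATE finds no row with item_nbr 20, the ELSE INSERT clause supplies the new row in the same atomic statement.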
Deleting Rows
The DELETE statement allows you to remove an entire row or rows from a table. A WHERE
clause qualifies the rows that are to be deleted.
Defaults and constraints affect an insert operation as follows:

WHEN an INSERT statement …                         THEN the system …
attempts to add a duplicate row for any unique     returns an error, with one exception: the
index, or to a table defined as SET (not to        system silently ignores duplicate rows that an
allow duplicate rows)                              INSERT … SELECT would create when the
                                                   table is defined as SET and the mode is
                                                   Teradata.
omits a value for a column for which a default     stores the default value for that column.
value is defined
omits a value for a column for which NOT NULL      rejects the operation and returns an error
is specified and no default is specified           message.
supplies a value that does not satisfy the         rejects the operation and returns an error
constraints specified for a column or violates     message.
some defined constraint on a column or columns
To delete rows in a multirow result table in embedded SQL, use the following process:
1 Declare a cursor for the SELECT statement.
2 Fetch individual result rows for processing using the cursor you declared.
3 Use a WHERE CURRENT OF clause in a positioned DELETE statement to delete the
selected rows.
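In outline, the embedded SQL version of this process might look like the following sketch. The cursor, table, and host variable names are illustrative:

```sql
EXEC SQL DECLARE del_cur CURSOR FOR
  SELECT name FROM employee WHERE deptno = 500;
EXEC SQL OPEN del_cur;
/* repeat in a loop until no more rows: */
EXEC SQL FETCH del_cur INTO :emp_name;
EXEC SQL DELETE FROM employee WHERE CURRENT OF del_cur;
EXEC SQL CLOSE del_cur;
```

Each positioned DELETE removes the row most recently fetched through the cursor.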
Merging Rows
The MERGE statement merges a source row into a target table based on whether any target
rows satisfy a specified matching condition with the source row. The MERGE statement is a
single SQL statement that includes both UPDATE and INSERT functionality.
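A hedged sketch of a MERGE statement, assuming a sales target table whose primary index is item_nbr (all names are illustrative):

```sql
MERGE INTO sales AS t
USING (SELECT 20 AS item_nbr, 1 AS qty) AS s
ON t.item_nbr = s.item_nbr
WHEN MATCHED THEN
  UPDATE SET total_sold = t.total_sold + s.qty
WHEN NOT MATCHED THEN
  INSERT (item_nbr, total_sold) VALUES (s.item_nbr, s.qty);
```

If a target row matches the source row on item_nbr, its total is updated; otherwise a new row is inserted.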
Subqueries
Introduction
Subqueries are nested SELECT statements. They can be used to ask a series of questions to
arrive at a single answer.
Three Level Subqueries: Example
The following subqueries, nested to three levels, are used to answer the question “Who
manages the manager of Marston?”
SELECT Name
FROM Employee
WHERE EmpNo IN
(SELECT MgrNo
FROM Department
WHERE DeptNo IN
(SELECT DeptNo
FROM Employee
WHERE Name = 'Marston A') ) ;
The subqueries that pose the questions leading to the final answer are inverted:
• The third subquery asks the Employee table for the number of Marston’s department.
• The second subquery asks the Department table for the employee number (MgrNo) of the
manager associated with this department number.
• The first subquery asks the Employee table for the name of the employee associated with
this employee number (MgrNo).
For the MERGE statement (see “Merging Rows”):

IF the source and target rows …          THEN the merge operation is an …
satisfy the matching condition           update based on the specified WHEN MATCHED
                                         THEN UPDATE clause.
do not satisfy the matching condition    insert based on the specified WHEN NOT MATCHED
                                         THEN INSERT clause.
The result table looks like the following:
Name
--------
Watson L
This result can be obtained using only two levels of subquery, as the following example shows.
SELECT Name
FROM Employee
WHERE EmpNo IN
(SELECT MgrNo
FROM Department, Employee
WHERE Employee.Name = 'Marston A'
AND Department.DeptNo = Employee.DeptNo) ;
In this example, the second subquery defines a join of Employee and Department tables.
This result could also be obtained using a one-level query that uses correlation names, as the
following example shows.
SELECT M.Name
FROM Employee M, Department D, Employee E
WHERE M.EmpNo = D.MgrNo AND
E.Name = 'Marston A' AND
D.DeptNo = E.DeptNo;
In some cases, as in the preceding example, the choice is a style preference. In other cases,
correct execution of the query may require a subquery.
For More Information
For more information, see SQL Reference: Data Manipulation Statements.
Recursive Queries
Introduction
A recursive query is a way to query hierarchies of data, such as an organizational structure, a
bill of materials, or a document hierarchy.
Recursion is typically characterized by three steps:
1 Initialization
2 Recursion, or repeated iteration of the logic through the hierarchy
3 Termination
Similarly, a recursive query has three execution phases:
1 Create an initial result set.
2 Recursion based on the existing result set.
3 Final query to return the final result set.
Two Ways to Specify a Recursive Query
You can specify a recursive query by:
• Preceding a query with the WITH RECURSIVE clause
• Creating a permanent view using the RECURSIVE clause in a CREATE VIEW statement
Using the WITH RECURSIVE Clause
Consider the following employee table:
CREATE TABLE employee
(employee_number INTEGER
,manager_employee_number INTEGER
,last_name CHAR(20)
,first_name VARCHAR(30));
The table represents an organizational structure containing a hierarchy of employee-manager
data.
The following figure depicts what the employee table looks like hierarchically.

[Figure 1101A285. Employee 801 (manager: none) is at the root. Employees 1003, 1016, and
1019 report to 801; employees 1001, 1004, and 1010 report to 1003; employees 1002, 1012,
and 1015 report to 1004; employees 1006, 1008, 1011, and 1014 report to 1019.]

The following recursive query retrieves the employee numbers of all employees who directly
or indirectly report to the manager with employee_number 801:

WITH RECURSIVE temp_table (employee_number) AS
( SELECT root.employee_number
  FROM employee root
  WHERE root.manager_employee_number = 801
UNION ALL
  SELECT indirect.employee_number
  FROM temp_table direct, employee indirect
  WHERE direct.employee_number = indirect.manager_employee_number
)
SELECT * FROM temp_table ORDER BY employee_number;
In the example, temp_table is a temporary named result set that can be referred to in the
FROM clause of the recursive statement.
The initial result set is established in temp_table by the non-recursive, or seed, statement and
contains the employees that report directly to the manager with an employee_number of 801:
SELECT root.employee_number
FROM employee root
WHERE root.manager_employee_number = 801
The recursion takes place by joining each employee in temp_table with employees who report
to the employees in temp_table. The UNION ALL adds the results to temp_table.
SELECT indirect.employee_number
FROM temp_table direct, employee indirect
WHERE direct.employee_number = indirect.manager_employee_number
Recursion stops when no new rows are added to temp_table.
The final query is not part of the recursive WITH clause and extracts the employee
information out of temp_table:
SELECT * FROM temp_table ORDER BY employee_number;
Here are the results of the recursive query:
employee_number
---------------
1001
1002
1003
1004
1006
1008
1010
1011
1012
1014
1015
1016
1019
Using the RECURSIVE Clause in a CREATE VIEW Statement
Creating a permanent view using the RECURSIVE clause is similar to preceding a query with
the WITH RECURSIVE clause.
Consider the employee table that was presented in “Using the WITH RECURSIVE Clause” on
page 112. The following statement creates a view named hierarchy_801 using a recursive
query that retrieves the employee numbers of all employees who directly or indirectly report
to the manager with employee_number 801:
CREATE RECURSIVE VIEW hierarchy_801 (employee_number) AS
( SELECT root.employee_number
FROM employee root
WHERE root.manager_employee_number = 801
UNION ALL
SELECT indirect.employee_number
FROM hierarchy_801 direct, employee indirect
WHERE direct.employee_number = indirect.manager_employee_number
);
The seed statement and recursive statement in the view definition are the same as the seed
statement and recursive statement in the previous recursive query that uses the WITH
RECURSIVE clause, except that the hierarchy_801 view name is different from the temp_table
temporary result name.
To extract the employee information, use the following SELECT statement on the
hierarchy_801 view:
SELECT * FROM hierarchy_801 ORDER BY employee_number;
Here are the results:
employee_number
---------------
1001
1002
1003
1004
1006
1008
1010
1011
1012
1014
1015
1016
1019
Depth Control to Avoid Infinite Recursion
If the hierarchy is cyclic, or if the recursive statement specifies a bad join condition, a recursive
query can produce a runaway query that never completes with a finite result. The best practice
is to control the depth of the recursion as follows:
• Specify a depth control column in the column list of the WITH RECURSIVE clause or
recursive view
• Initialize the column value to 0 in the seed statements
• Increment the column value by 1 in the recursive statements
• Specify a limit for the value of the depth control column in the join condition of the
recursive statements
Here is an example that modifies the previous recursive query that uses the WITH
RECURSIVE clause of the employee table to limit the depth of the recursion to five cycles:
WITH RECURSIVE temp_table (employee_number, depth) AS
( SELECT root.employee_number, 0 AS depth
FROM employee root
WHERE root.manager_employee_number = 801
UNION ALL
SELECT indirect.employee_number, direct.depth+1 AS newdepth
FROM temp_table direct, employee indirect
WHERE direct.employee_number = indirect.manager_employee_number
AND newdepth <= 5
)
SELECT * FROM temp_table ORDER BY employee_number;
Related Topics

FOR details on …     SEE …
recursive queries    “WITH RECURSIVE” in SQL Reference: Data Manipulation Statements.
recursive views      “CREATE VIEW” in SQL Reference: Data Definition Statements.

Query and Workload Analysis Statements

Data Collection and Analysis
Teradata provides the following SQL statements for collecting and analyzing query and data
demographics and statistics:
• BEGIN QUERY LOGGING
• COLLECT DEMOGRAPHICS
• COLLECT STATISTICS
• DROP STATISTICS
• DUMP EXPLAIN
• END QUERY LOGGING
• INITIATE INDEX ANALYSIS
• INSERT EXPLAIN
• RESTART INDEX ANALYSIS
Collected data can be used in several ways, for example:
• By the Optimizer, to produce the best query plans possible.
• To populate user-defined Query Capture Database (QCD) tables with data used by various
utilities to analyze query workloads as part of the ongoing process to reengineer the
database design.
For example, the Teradata Index Wizard determines optimal secondary index sets to
support the query workloads you ask it to analyze.

Index Analysis and Target Level Emulation
Teradata also provides diagnostic statements that support the Teradata Index Wizard and the
sample-based components of the target level emulation facility used to emulate a production
environment on a test system:
• DIAGNOSTIC DUMP SAMPLES
• DIAGNOSTIC HELP SAMPLES
• DIAGNOSTIC SET SAMPLES
• DIAGNOSTIC “Validate Index”
After configuring the test environment and enabling it with the appropriate production
system statistical and demographic data, you can perform various workload analyses to
determine optimal sets of secondary indexes to support those workloads in the production
environment.

Related Topics
For more information on query and workload analysis statements, see SQL Reference: Data
Definition Statements.
Help and Database Object Definition Tools
Introduction
Teradata SQL provides several powerful tools to get help about database object definitions and
summaries of database object definition statement text.
HELP Statements
The various HELP statements return reports about the current column definitions for named
database objects. The reports returned by these statements can be useful to database designers
who need to fine-tune index definitions, column definitions (for example, changing data
types to eliminate the need for ad hoc conversions), and so on.
IF you want to get …                                               THEN use …
the attributes of a column, including whether it is a
single-column primary or secondary index and, if so,
whether it is unique                                               HELP COLUMN
the attributes for a specific named constraint on a table          HELP CONSTRAINT
the attributes, sorted by object name, for all tables, views,
join and hash indexes, stored procedures, user-defined
functions, and macros in a specified database                      HELP DATABASE and
                                                                   HELP USER
the specific function name, list of parameters, data types of
the parameters, and any comments associated with the
parameters of a user-defined function                              HELP FUNCTION
the data types of the columns defined by a particular hash
index                                                              HELP HASH INDEX
the attributes for the indexes defined for a table or join index   HELP INDEX
the attributes of the columns defined by a particular join
index                                                              HELP JOIN INDEX
the attributes for the specified macro                             HELP MACRO
the specific name, list of parameters, data types of the
parameters, and any comments associated with the
parameters of a user-defined method                                HELP METHOD
the attribute and format parameters for each parameter of
the procedure, or just the creation time attributes for the
specified procedure                                                HELP PROCEDURE
the attributes of the specified replication group and its
member tables                                                      HELP REPLICATION GROUP
the attributes for the specified join index or table               HELP TABLE
the attributes for the specified trigger                           HELP TRIGGER
information on the type, attributes, methods, cast, ordering,
and transform of the specified user-defined type                   HELP TYPE
the attributes for a specified view                                HELP VIEW
the attributes for the requested volatile table                    HELP VOLATILE TABLE

SHOW Statements
A SHOW statement returns a CREATE statement indicating the last data definition statement
performed against the named database object. These statements are particularly useful for
application developers who need to develop exact replicas of existing objects for purposes of
testing new software.

IF you want to get the data definition statement most recently
used to create, replace, or modify a specified …                   THEN use …
hash index                                                         SHOW HASH INDEX
join index                                                         SHOW JOIN INDEX
macro                                                              SHOW MACRO
stored procedure or external stored procedure                      SHOW PROCEDURE
table                                                              SHOW TABLE
trigger                                                            SHOW TRIGGER
user-defined function                                              SHOW FUNCTION
user-defined method                                                SHOW METHOD
user-defined type                                                  SHOW TYPE
view                                                               SHOW VIEW
Example
Consider the following definition for a table named department:
CREATE TABLE department, FALLBACK
(department_number SMALLINT
,department_name CHAR(30) NOT NULL
,budget_amount DECIMAL(10,2)
,manager_employee_number INTEGER
)
UNIQUE PRIMARY INDEX (department_number)
,UNIQUE INDEX (department_name);
To get the attributes for the table, use the HELP TABLE statement:
HELP TABLE department;
The HELP TABLE statement returns:
Column Name Type Comment
------------------------------ ---- -------------------------
department_number I2 ?
department_name CF ?
budget_amount D ?
manager_employee_number I ?
To get the CREATE TABLE statement that defines the department table, use the SHOW
TABLE statement:
SHOW TABLE department;
The SHOW TABLE statement returns:
CREATE SET TABLE TERADATA_EDUCATION.department, FALLBACK,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(department_number SMALLINT,
department_name CHAR(30) CHARACTER SET LATIN
NOT CASESPECIFIC NOT NULL,
budget_amount DECIMAL(10,2),
manager_employee_number INTEGER)
UNIQUE PRIMARY INDEX ( department_number )
UNIQUE INDEX ( department_name );
Related Topics
For more information, see SQL Reference: Data Definition Statements.
CHAPTER 4 SQL Data Handling
This chapter describes the fundamentals of Teradata Database data handling.
Topics include:
• Requests
• Transactions
• Event processing
• Session Parameters
• Session Management
• Return Codes
Invoking SQL Statements
Introduction
One of the primary issues that motivated the development of relational database management
systems was the perceived need to create database management systems that could be queried
not just by predetermined, hard-coded requests but also interactively by well-formulated
ad hoc queries.
SQL addresses this issue by offering four ways to invoke an executable statement:
• Interactively from a terminal
• Embedded within an application program
• Dynamically performed from within an embedded application
• Embedded within a stored procedure
Executable SQL Statements
An executable SQL statement is one that performs an action. The action can be on data or on
a transaction or some other entity at a higher level than raw data.
Some examples of executable SQL statements are the following:
• SELECT
• CREATE TABLE
• COMMIT
• CONNECT
• PREPARE
Most, but not all, executable SQL statements can be performed interactively from a terminal
using an SQL query manager like BTEQ or Teradata SQL Assistant (formerly called
Queryman).
Types of executable SQL commands that cannot be performed interactively are the following:
• Cursor control and declaration statements
• Dynamic SQL control statements
• Stored procedure control statements and condition handlers
• Connection control statements
• Special forms of SQL statements such as SELECT INTO
These statements can only be used within an embedded SQL or stored procedure application.
Nonexecutable SQL Statements
A nonexecutable SQL statement is one that declares an SQL statement, object, or host or local
variable to the preprocessor or stored procedure compiler. Nonexecutable SQL statements are
not processed during program execution.
Some examples of nonexecutable SQL statements for embedded SQL applications include:
• DECLARE CURSOR
• BEGIN DECLARE SECTION
• END DECLARE SECTION
• EXEC SQL
Examples of nonexecutable SQL statements for stored procedures include:
• DECLARE CURSOR
• DECLARE
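For instance, in a stored procedure a DECLARE statement describes a local variable to the compiler but performs no action at run time. The following is a sketch; the procedure, table, and variable names are illustrative:

```sql
CREATE PROCEDURE count_items (IN p_item INTEGER)
BEGIN
  DECLARE v_count INTEGER;         -- nonexecutable: declares a local variable
  SELECT COUNT(*) INTO :v_count    -- executable statement using the variable
  FROM sales
  WHERE item_nbr = p_item;
END;
```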
Requests
Introduction
A request to the Teradata Database consists of one or more SQL statements and can span any
number of input lines. Teradata Database can receive and perform SQL statements that are:
• Embedded in a client application program that is written in a procedural language.
• Embedded in a stored procedure.
• Entered interactively through BTEQ or Teradata SQL Assistant interfaces.
• Submitted in a BTEQ script as a batch job.
• Submitted through other supported methods (such as CLIv2, ODBC, and JDBC).
Single Statement Requests
A single statement request consists of a statement keyword followed by one or more
expressions, other keywords, clauses, and phrases. A single statement request is treated as a
solitary unit of work.

Single Statement Syntax
statement [ ; ]

Multistatement Requests
A multistatement request consists of two or more statements separated by SEMICOLON
characters.
Multistatement requests are non-ANSI standard.
For more information, see “Multistatement Requests” on page 124.

Multistatement Syntax
statement ; statement [ ; statement ]… [ ; ]

Iterated Requests
An iterated request is a single DML statement with multiple data records.
Iterated requests do not directly impact the syntax of SQL statements. They provide a more
efficient way of processing DML statements that specify the USING row descriptor to import
data to or export data from the Teradata Database.
For more information, see “Iterated Requests” on page 127.

ANSI Session Mode
If an error is found in a request, then that request is aborted, but not the entire transaction.
Note: Some failures will abort the entire transaction.

Teradata Session Mode
A multistatement request is treated as an implicit transaction. That is, if an error is found in
any statement in the request, then the entire transaction is aborted.
Abort processing proceeds as follows:
1 Backs out any changes made to the database as a result of any preceding statements.
2 Deletes any associated spooled output.
3 Releases any associated locks.
4 Bypasses any remaining statements in the transaction.
Complete Requests
A request is considered complete when either an End of Text character or the request
terminator is encountered. The request terminator is a SEMICOLON character. It is the last
nonpad character on an input line.
A request terminator is optional except when the request is embedded in an SQL macro or
trigger or when it is entered through BTEQ.
In a stored procedure, each SQL statement is treated as a request. Stored procedures do not
support multistatement requests.
Transactions
Introduction
A transaction is a logical unit of work where the statements nested within the transaction
either execute successfully as a group or do not execute.
Transaction Processing Mode
You can perform transaction processing in either of the following session modes:
• ANSI
• Teradata
In ANSI session mode, transaction processing adheres to the rules defined by the ANSI SQL
specification. In Teradata session mode, transaction processing follows the rules defined by
Teradata Database over years of evolution.
To set the transaction processing mode, use the:
• SessionMode field of the DBS Control Record
• BTEQ command .SET SESSION TRANSACTION
• Preprocessor2 TRANSACT() option
• ODBC SessionMode option in the .odbc.ini file
• JDBC TeraDataSource.setTransactMode() method
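For example, in BTEQ the session mode must be set before logging on. This is a sketch; the logon string is illustrative:

```
.SET SESSION TRANSACTION ANSI
.LOGON tdpid/username,password
```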
Related Topics
The next few pages highlight some of the differences between transaction processing in ANSI
session mode and transaction processing in Teradata session mode.
For detailed information on statement and transaction processing, see SQL Reference:
Statement and Transaction Processing.
Transaction Processing in ANSI Session Mode
Introduction
Transactions are always implicit in ANSI session mode.
A transaction initiates when one of the following happens:
• The first SQL statement in a session executes
• The first statement following the close of a transaction executes
The COMMIT or ROLLBACK/ABORT statements close a transaction.
If a transaction includes a DDL statement, it must be the last statement in the transaction.
Note that DATABASE and SET SESSION are DDL statements. See “Rollback Processing” in
SQL Reference: Statement and Transaction Processing.
If a session terminates with an open transaction, then any effects of that transaction are rolled
back.
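For example, in ANSI session mode the following UPDATE implicitly opens a transaction, and its effects are not made permanent until COMMIT executes (the table and column names are illustrative):

```sql
UPDATE employee
SET salary = salary * 1.05
WHERE deptno = 100;

COMMIT;   -- or ROLLBACK, to undo the update
```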
Two-Phase Commit (2PC)
Sessions in ANSI session mode do not support 2PC. If an attempt is made to use the 2PC
protocol in ANSI session mode, the Logon process aborts and an error returns to the
requestor.
Transaction Processing in Teradata Session
Mode
Introduction
A Teradata SQL transaction can be a single Teradata SQL statement, or a sequence of Teradata
SQL statements, treated as a single unit of work.
Each request is processed as one of the following transaction types:
• Implicit
• Explicit
• Two-phase commit (2PC)
Implicit Transactions
An implicit transaction is a request that does not include the BEGIN TRANSACTION and
END TRANSACTION statements. The implicit transaction starts and completes all within the
SQL request: it is self-contained.
An implicit transaction can be one of the following:
• A single DML statement that affects one or more rows of one or more tables
• A macro or trigger containing one or more statements
• A request containing multiple statements separated by SEMICOLON characters. Each
SEMICOLON character can appear anywhere in the input line. The Parser interprets a
SEMICOLON character at the end of an input line as a transaction terminator.
DDL statements are not valid in an implicit multistatement transaction.
Explicit Transactions
In Teradata session mode, an explicit transaction contains one or more statements enclosed by
BEGIN TRANSACTION and END TRANSACTION statements. The first BEGIN
TRANSACTION initiates a transaction and the last END TRANSACTION terminates the
transaction.
When multiple statements are included in an explicit transaction, you can only specify a DDL
statement if it is the last statement in the series.
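For example, the following explicit transaction groups a DELETE and an UPDATE into a single unit of work (a sketch with illustrative table names):

```sql
BEGIN TRANSACTION;
DELETE FROM employee
WHERE employee_number = 1015;
UPDATE department
SET employee_count = employee_count - 1
WHERE department_number = 500;
END TRANSACTION;
```

If either statement fails, the effects of both are rolled back.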
Two-Phase Commit (2PC) Rules
Two-phase commit (2PC) protocol is supported in Teradata session mode:
• A 2PC transaction contains one or more DML statements that affect multiple databases
and are coordinated externally using the 2PC protocol.
• A DDL statement is not valid in a two-phase commit transaction.
Multistatement Requests
Definition
An atomic request containing more than one SQL statement, each terminated by a
SEMICOLON character.
Syntax
statement ; statement [ ; statement ]… [ ; ]
ANSI Compliance
Multistatement requests are non-ANSI SQL-2003 standard.
Rules
The Teradata Database imposes restrictions on the use of multistatement requests:
• Only one USING row descriptor is permitted per request, so a multistatement request can
specify at most one USING row descriptor.
This rule applies to interactive SQL only, because embedded SQL and stored procedures
do not permit the USING row descriptor.
• A multistatement request cannot include a DDL statement.
• The keywords BEGIN REQUEST and END REQUEST must delimit a multistatement
request in a stored procedure.
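Within a stored procedure, a multistatement request might look like the following sketch (the table names are illustrative):

```sql
BEGIN REQUEST
  INSERT INTO order_hdr VALUES (1001, DATE);
  INSERT INTO order_log VALUES (1001, 'created');
END REQUEST;
```

Both INSERT statements are submitted as one atomic request.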
Power of Multistatement Requests
The multistatement request is application-independent. It improves performance for a variety
of applications that can package more than one SQL statement at a time. BTEQ, CLI, and the
SQL preprocessor all support multistatement requests.
Multistatement requests improve system performance by reducing processing overhead. By
performing a series of statements as one request, performance is enhanced for the client, the
Parser, and the Database Manager alike.
Because of this reduced overhead, using multistatement requests also decreases response time.
A multistatement request that contains 10 SQL statements could be as much as 10 times more
efficient than the 10 statements entered separately (depending on the types of statements
submitted).
Multistatement Requests Treated as Transaction
A multistatement request is treated as a single unit of work: either all statements in the
request complete successfully, or the entire request is aborted.
In ANSI session mode, the request is rolled back if aborted. In Teradata session mode, any
updates to the database up to that point for the transaction are rolled back.
Parallel Step Processing
Teradata Database can perform some requests in parallel (see “Parallel Steps” on page 126).
This capability applies both to implicit transactions, such as macros and multistatement
requests, and to Teradata-style transactions explicitly defined by BEGIN/END
TRANSACTION statements.
Statements in a multistatement request are broken down by the Parser into one or more steps
that direct the execution performed by the AMPs. It is these steps, not the actual statements,
that are executed in parallel.
A handshaking protocol between the PE and the AMP allows the AMP to determine when the
PE can dispatch the next parallel step.
Up to twenty parallel steps can be processed per request if channels are not required, such as a
request with an equality constraint based on a primary index value. Up to ten channels can be
used for parallel processing when a request is not constrained to a primary index value.
For example, if an INSERT step and a DELETE step are allowed to run in parallel, the AMP
informs the PE that the DELETE step has progressed to the point where the INSERT step will
not impact it adversely. This handshaking protocol also reduces the chance of a deadlock.
“Parallel Steps” on page 126 illustrates the following process:
1 The statements in a multistatement request are broken down into a series of steps.
2 The Optimizer determines which steps in the series can be executed in parallel.
3 The steps are processed.
Each step undergoes some preliminary processing before it is executed, such as placing locks
on the objects involved. These preliminary processes are not performed in parallel with the
steps.
Parallel Steps
Time
Step 1
2
3
4
5
6
7
8
9
(1)
Time
(2)
Time
Step 1
2
5
6
9
7
8
(3)
FF02A001
Step 1
2
3
4
5
6
7
8
9
3 4Chapter 4: SQL Data Handling
Iterated Requests
Definition
A single DML statement with multiple data records.
Usage
An iterated request is an atomic request consisting of a single SQL DML statement with
multiple sets (records) of data.
Iterated requests do not affect the syntax of SQL statements. They provide an efficient way to
execute the same single-statement DML operation on multiple data records, much as ODBC
applications execute parameterized statements for arrays of parameter values.
Several Teradata Database client tools and interfaces provide facilities to pack multiple data
records in a single buffer with a single DML statement.
For example, suppose you use BTEQ to import rows of data into table ptable using the
following INSERT statement and USING row descriptor:
USING (pid INTEGER, pname CHAR(12))
INSERT INTO ptable VALUES(:pid, :pname);
To repeat the request as many times as necessary to read up to 200 data records and pack a
maximum of 100 data records with each request, precede the INSERT statement with the
following BTEQ command:
.REPEAT RECS 200 PACK 100
Note: The PACK option is ignored if the database being used does not support iterated
requests or if the request that follows the REPEAT command is not a DML statement
supported by iterated requests. For details, see “Rules” on page 128.
The following tools and interfaces provide facilities that you can use to execute iterated
requests.
Tool/Interface                          Facility
CLIv2 for network-attached systems      using_data_count field in the DBCAREA data area
CLIv2 for channel-attached systems      Using-data-count field in the DBCAREA data area
ODBC                                    Parameter arrays
JDBC type 4 driver                      Batch operations
OLE DB Provider for Teradata            Parameter sets
BTEQ                                    • .REPEAT command
                                        • .SET PACK command
Rules
The following rules apply to iterated requests:
• The iterated request must consist of a single DML statement from the following list:
• ABORT
• DELETE (excluding the positioned form of DELETE)
• EXECUTE macro_name
The fully-expanded macro must be equivalent to a single DML statement that is
qualified to be in an iterated request.
• INSERT
• MERGE
• ROLLBACK
• SELECT
• UPDATE (including atomic UPSERT, but excluding the positioned form of UPDATE)
• The DML statement must reference user-supplied input data, either as named fields in a
USING row descriptor or as '?' parameter markers in a parameterized request.
• All the data records in a given request must use the same record layout. This restriction
applies by necessity to requests where the record layout is given by a single USING row
descriptor in the request text itself; but note that the restriction also applies to
parameterized requests, where the request text has no USING descriptor and does not
fully specify the input record.
• The server processes the iterated request as if it were a single multi-statement request, with
each iteration and its response associated with a corresponding statement number.
Related Topics
FOR more information on …                      SEE …
iterated request processing                    SQL Reference: Statement and Transaction
                                               Processing
which DML statements can be specified in       SQL Reference: Data Manipulation Statements
an iterated request
CLIv2                                          • Teradata Call-Level Interface Version 2
                                                 Reference for Channel-Attached Systems
                                               • Teradata Call-Level Interface Version 2
                                                 Reference for Network-Attached Systems
ODBC parameter arrays                          ODBC Driver for Teradata User Guide
JDBC driver batch operations                   Teradata Driver for the JDBC Interface
                                               User Guide
OLE DB Provider for Teradata parameter sets    OLE DB Provider for Teradata Installation
                                               and User Guide
BTEQ PACK command                              Basic Teradata Query Reference
Dynamic and Static SQL
Definitions
Term          Definition
Dynamic SQL   A method of invoking an SQL statement by compiling and performing it at
              runtime from within an embedded SQL application program or a stored
              procedure. The specification of data to be manipulated by the statement
              is also determined at runtime.
Static SQL    By default, any method of invoking an SQL statement that is not dynamic.
ANSI Compliance
Dynamic SQL is ANSI SQL-2003-compliant.
The ANSI SQL standard does not define the expression static SQL, but the relational database
management industry commonly uses it to contrast with the ANSI-defined expression dynamic SQL.
Ad Hoc and Hard-Coded Invocation of SQL Statements
Perhaps the best way to think of dynamic SQL is to contrast it with ad hoc SQL statements
created and executed from a terminal and with preprogrammed SQL statements created by an
application programmer and executed by an application program.
In the case of the ad hoc query, everything legal is available to the requester: choice of SQL
statements and clauses, variables and their names, databases, tables, and columns to
manipulate, and literals.
In the case of the application programmer, the choices are made in advance and hard-coded
into the source code of the application. Once the program is compiled, nothing can be
changed short of editing and recompiling the application.
Dynamic Invocation of SQL Statements
Dynamic SQL offers a compromise between these two extremes. By choosing to code dynamic
SQL statements in the application, the programmer has the flexibility to allow an end user to
select not only the variables to be manipulated at run time, but also the SQL statement to be
executed.
As you might expect, the flexibility that dynamic SQL offers a user is offset by more work and
increased attention to detail on the part of the application programmer, who needs to set up
additional dynamic SQL statements and manipulate information in the SQLDA to ensure a
correct result.
This is done by first preparing, or compiling, an SQL text string containing placeholder tokens
at run time and then executing the prepared statement, allowing the application to prompt the
user for values to be substituted for the placeholders.
SQL Statements to Set Up and Invoke Dynamic SQL
The embedded SQL statements for preparing and executing an SQL statement dynamically
are:
• PREPARE
• EXECUTE
• EXECUTE IMMEDIATE.
EXECUTE IMMEDIATE is a special form that combines PREPARE and EXECUTE into one
statement. EXECUTE IMMEDIATE can only be used in the case where there are no input host
variables.
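The pattern can be sketched in embedded SQL. This is a hedged illustration, not verbatim from this manual; the statement name stmt1 and host variables :stmt_text and :dept_num are hypothetical:

```sql
-- Build the statement text at run time, with ? as a placeholder.
-- :stmt_text might contain:
--   'UPDATE employee SET salary = salary * 1.1 WHERE deptno = ?'

EXEC SQL PREPARE stmt1 FROM :stmt_text;    -- compile the dynamic statement
EXEC SQL EXECUTE stmt1 USING :dept_num;    -- supply a value for the ? marker

-- With no input host variables, the two steps collapse into one:
EXEC SQL EXECUTE IMMEDIATE :stmt_text;
```

The two-step form lets the application prepare once and execute many times with different placeholder values; EXECUTE IMMEDIATE re-prepares on every invocation.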
This description applies directly to all executable SQL statements except SELECT, which
requires additional handling.
Note that SELECT INTO cannot be invoked dynamically.
For details, see SQL Reference: Stored Procedures and Embedded SQL.
Related Topics
For more information on …                    See …
examples of dynamic SQL code in C, COBOL,    Teradata Preprocessor2 for Embedded SQL
and PL/I                                     Programmer Guide
Dynamic SQL in Stored Procedures
Overview
The way stored procedures support dynamic SQL statements is different from the way
embedded SQL does.
Use the following statement to set up and invoke dynamic SQL in a stored procedure:
CALL DBC.SysExecSQL(string_expression)
where string_expression is any valid string expression that builds an SQL statement.
The string expression is composed of string literals, status variables, local variables, input (IN
and INOUT) parameters, and for-loop aliases. Dynamic SQL statements are not validated at
compile time.
The resulting SQL statement cannot have status variables, local variables, parameters, for-loop
aliases, or a USING or EXPLAIN modifier.
Example
The following example uses dynamic SQL within stored procedure source text:
CREATE PROCEDURE new_sales_table( my_table VARCHAR(30),
my_database VARCHAR(30))
BEGIN
DECLARE sales_columns VARCHAR(128)
DEFAULT '(item INTEGER, price DECIMAL(8,2), sold INTEGER)';
CALL DBC.SysExecSQL('CREATE TABLE ' || my_database ||
'.' || my_table || sales_columns);
END;
Any number of calls to SysExecSQL can be made in a stored procedure, and the request text in
the string expression can specify a multistatement request.
Because the request text of dynamic SQL statements can vary from execution to execution,
dynamic SQL makes a stored procedure definition more flexible and concise.
Restrictions
Dynamic SQL statements can be specified in a stored procedure only when the creator is the
same as the immediate "owner" of the stored procedure.
The following SQL statements cannot be specified as dynamic SQL in stored procedures:
• CALL
• CREATE PROCEDURE
• DATABASE
• EXPLAIN
• HELP
• REPLACE PROCEDURE
• SELECT
• SELECT INTO
• SET SESSION ACCOUNT
• SET SESSION COLLATION
• SET SESSION DATEFORM
• SET TIME ZONE
• SHOW
Related Topics
For rules and usage examples of dynamic SQL statements in stored procedures, see SQL
Reference: Stored Procedures and Embedded SQL.
Using SELECT With Dynamic SQL
Unlike other executable SQL statements, SELECT returns information beyond statement
responses and return codes to the requester.
DESCRIBE Statement
Because the requesting application needs to know how much (if any) data will be returned by
a dynamically prepared SELECT, you must use an additional SQL statement, DESCRIBE, to
make the application aware of the demographics of the data to be returned by the SELECT
statement (see “DESCRIBE” in SQL Reference: Stored Procedures and Embedded SQL).
DESCRIBE writes this information to the SQLDA declared for the SELECT statement, as
shown in the table following the general procedure.
General Procedure
An application must use the following general procedure to set up, execute, and retrieve the
results of a SELECT statement invoked as dynamic SQL.
1 Declare a dynamic cursor for the SELECT in the form:
DECLARE cursor_name CURSOR FOR sql_statement_name
2 Declare the SQLDA, preferably using an INCLUDE SQLDA statement.
3 Build and PREPARE the SELECT statement.
4 Issue a DESCRIBE statement in the form:
DESCRIBE sql_statement_name INTO SQLDA
DESCRIBE performs the following actions:
a Interrogate the database for the demographics of the expected results.
b Write the addresses of the target variables to receive those results to the SQLDA.
This step is bypassed if any of the following occurs:
• The request does not return any data.
• An INTO clause was present in the PREPARE statement.
• The statement returns known columns and the INTO clause is used on the
corresponding FETCH statement.
• The application code defines the SQLDA.
5 Allocate storage for target variables to receive the returned data based on the
demographics reported by DESCRIBE.
6 Retrieve the result rows using the following SQL cursor control statements:
• OPEN cursor_name
• FETCH cursor_name USING DESCRIPTOR SQLDA
• CLOSE cursor_name
Note that in step 6, results tables are examined one row at a time using the selection cursor.
This is because client programming languages do not support data in terms of sets, but only as
individual records.
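The six steps above can be sketched in embedded SQL. This is a hedged outline under assumed names (stmt1, cur1, and the host variable :stmt_text are illustrative), not a complete program:

```sql
EXEC SQL DECLARE cur1 CURSOR FOR stmt1;    -- 1: dynamic cursor for the statement
EXEC SQL INCLUDE SQLDA;                    -- 2: declare the SQLDA

-- 3: build and prepare the SELECT; :stmt_text might contain:
--    'SELECT name, salary FROM employee WHERE deptno = ?'
EXEC SQL PREPARE stmt1 FROM :stmt_text;

EXEC SQL DESCRIBE stmt1 INTO SQLDA;        -- 4: obtain result demographics

-- 5: allocate storage for target variables based on the SQLDA, then:
EXEC SQL OPEN cur1;                        -- 6: retrieve result rows
EXEC SQL FETCH cur1 USING DESCRIPTOR SQLDA;
EXEC SQL CLOSE cur1;
```

In practice the FETCH is repeated in a loop until the no-more-rows condition is raised.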
THIS information …                             IS written to this field of SQLDA …
number of values to be returned                SQLN
column name or label, column data type, and    SQLVAR (the nth row in the SQLVAR(n) array)
column length of the nth value
Event Processing Using Queue Tables
Introduction
Teradata Database provides queue tables that you can use for event processing. Queue tables
are base tables with first-in-first-out (FIFO) queue properties.
When you create a queue table, you define a timestamp column. You can query the queue
table to retrieve data from the row with the oldest timestamp.
Usage
An application can perform FIFO push, pop, and peek operations on queue tables.
Here is an example of how an application can process events using queue tables:
• Internally, you can define a trigger on a base table to insert a row into the queue table when
the trigger fires.
• Externally, your application can submit a SELECT AND CONSUME statement that waits
for data in the queue table.
• When data arrives in the queue table, the waiting SELECT AND CONSUME statement
returns a result to the external application, which processes the event. Additionally, the
row is deleted from the queue table.
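A minimal sketch of this pattern follows. The table, trigger, and column names are hypothetical, and the exact CREATE TABLE and CREATE TRIGGER options should be checked against SQL Reference: Data Definition Statements; the queue insertion timestamp (QITS) column is assumed to be the first column of the queue table:

```sql
-- Queue table: the first column holds the queue insertion timestamp.
CREATE TABLE order_events, QUEUE (
   qits      TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
   order_id  INTEGER,
   status    VARCHAR(10));

-- Trigger on a base table pushes a row into the queue for each new order.
CREATE TRIGGER order_insert AFTER INSERT ON orders
   REFERENCING NEW AS n
   FOR EACH ROW
   (INSERT INTO order_events (order_id, status)
    VALUES (n.order_id, 'NEW'););

-- External application: pop — wait for, return, and delete the oldest row.
SELECT AND CONSUME TOP 1 * FROM order_events;

-- Peek at queued rows without deleting them.
SELECT * FROM order_events;
```

The SELECT AND CONSUME blocks until a row is available, which is what lets the external application wait on the event rather than poll.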
Related Topics
TO perform a FIFO …   USE the …
push                  INSERT statement
pop                   SELECT AND CONSUME statement
peek                  SELECT statement
FOR more information on …   SEE …
creating queue tables       the CREATE/REPLACE TABLE statement in
                            SQL Reference: Data Definition Statements
SELECT AND CONSUME          SQL Reference: Data Manipulation Statements
Manipulating Nulls
Introduction
A null represents any of three things:
• An empty field
• An unknown value
• An unknowable value
Nulls are not values, nor do they signify values; they represent the absence of value. A null is a
placeholder indicating that no value is present.
You cannot solve for the value of a null because, by definition, it has no value. For example, the
expression NULL = NULL has no meaning and therefore can never be true. A query that
specifies the predicate WHERE NULL = NULL is not valid because it can never be true. The
meaning of the comparison it specifies is not only unknown, but unknowable.
These properties make the use and interpretation of nulls in SQL problematic. The following
sections outline the behavior of nulls for various SQL operations to help you to understand
how to use them in data manipulation statements and to interpret the results those statements
affect.
NULL Literals
See “NULL Keyword as a Literal” on page 90 for information on how to use the NULL
keyword as a literal.
Nulls and DateTime and Interval Data
A DateTime or Interval value is either atomically null or it is not null. For example, you
cannot have an interval of YEAR TO MONTH in which YEAR is null and MONTH is not.
Result of Expressions That Contain Nulls
Here are some general rules for the result of expressions that contain nulls:
• When any component of a value expression is null, then the result is null.
• The result of a conditional expression that has a null component is unknown.
• If an operand of any arithmetic operator (such as + or -) or function (such as ABS or
SQRT) is null, then the result of the operation or function is null with the exception of
ZEROIFNULL. If the argument to ZEROIFNULL is NULL, then the result is 0.
• COALESCE, a special shorthand variant of the CASE expression, returns NULL if all its
arguments evaluate to null. Otherwise, COALESCE returns the value of the first non-null
argument.
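These rules can be seen in a few simple expressions. This is a hedged sketch against a hypothetical table t with one row in which column x is null and column y is 2:

```sql
SELECT x + y FROM t;                 -- null: any null component nullifies the expression
SELECT SQRT(x) FROM t;               -- null: a function of a null argument is null
SELECT ZEROIFNULL(x) FROM t;         -- 0: the exception to the rule
SELECT COALESCE(x, NULL, 3) FROM t;  -- 3: the value of the first non-null argument
```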
For more rules on the result of expressions containing nulls, see the sections that follow and
SQL Reference: Functions and Operators.
Nulls and Comparison Operators
If either operand of a comparison operator is null, then the result is unknown. If either
operand is the keyword NULL, an error is returned that recommends using IS NULL or IS
NOT NULL instead. The following examples indicate this behavior.
5 = NULL
5 <> NULL
NULL = NULL
NULL <> NULL
5 = NULL + 5
Note that if the argument of the NOT operator is unknown, the result is also unknown. This
translates to FALSE as a final boolean result.
Instead of using comparison operators, use the IS NULL operator to search for fields that
contain nulls and the IS NOT NULL operator to search for fields that do not contain nulls. For
details, see “Searching for Nulls” on page 135 and “Excluding Nulls” on page 135.
Using IS NULL is different from using the comparison operator =. When you use an operator
like =, you specify a comparison between values or value expressions, whereas when you use
the IS NULL operator, you specify an existence condition.
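The difference can be shown with a hedged sketch against the employee table used elsewhere in this chapter:

```sql
SELECT name FROM employee WHERE deptno = NULL;   -- error: the keyword NULL cannot be a
                                                 -- comparison operand; use IS NULL instead
SELECT name FROM employee WHERE deptno IS NULL;  -- valid: tests for the absence of a value
```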
Nulls and CASE Expressions
The following rules apply to nulls and CASE expressions:
• CASE and its related expressions COALESCE and NULLIF can return a null.
• NULL and null expressions are valid as the CASE test expression in a valued CASE
expression.
• When testing for NULL, it is best to use a searched CASE expression using the IS NULL or
IS NOT NULL operators in the WHEN clause.
• NULL and null expressions are valid as THEN clause conditions.
For details on the rules for nulls in CASE, NULLIF, and COALESCE expressions, see SQL
Reference: Functions and Operators.
Excluding Nulls
To exclude nulls from the results of a query, use the operator IS NOT NULL.
For example, to search for the names of all employees with a value other than null in the
jobtitle column, enter the statement:
SELECT name
FROM employee
WHERE jobtitle IS NOT NULL ;
Searching for Nulls
To search for columns that contain nulls, use the operator IS NULL.
The IS NULL operator tests row data for the presence of nulls.
For example, to search for the names of all employees who have a null in the deptno column,
you could enter the statement:
SELECT name
FROM employee
WHERE deptno IS NULL ;
This query produces the names of all employees with a null in the deptno field.
Searching for Nulls and Non-Nulls Together
To search for nulls and non-nulls in the same statement, the search condition for nulls must
be separate from any other search conditions.
For example, to select the names of all employees with the job title of Vice Pres, Manager, or
null, enter the following SELECT statement.
SELECT name, jobtitle
FROM employee
WHERE jobtitle IN ('Manager', 'Vice Pres') OR jobtitle IS NULL ;
Including NULL in the IN list has no effect because NULL never equals NULL or any value.
Null Sorts as the Lowest Value in a Collation
When you use an ORDER BY clause to sort records, Teradata Database sorts null as the lowest
value. Sorting nulls can vary from RDBMS to RDBMS. Other systems may sort null as the
highest value.
If any row has a null in the column being grouped, then all rows having a null are placed into
one group.
NULL and Unique Indexes
For unique indexes, Teradata Database treats nulls as if they are equal rather than unknown
(and therefore false).
For single-column unique indexes, only one row may have null for the index value; otherwise
a uniqueness violation error occurs.
For multi-column unique indexes, no two rows can have nulls in the same columns of the
index and also have non-null values that are equal in the other columns of the index.
For example, consider a two-column index. Rows can occur with the following index values:
Value of First Column in Index   Value of Second Column in Index
1                                null
null                             1
null                             null
An attempt to insert a row that matches any of these rows results in a uniqueness violation.
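The rule can be illustrated with a hedged sketch; the table and column names are hypothetical:

```sql
CREATE TABLE t2 (a INTEGER, b INTEGER) UNIQUE PRIMARY INDEX (a, b);

INSERT INTO t2 VALUES (1, NULL);     -- succeeds
INSERT INTO t2 VALUES (NULL, 1);     -- succeeds
INSERT INTO t2 VALUES (NULL, NULL);  -- succeeds
INSERT INTO t2 VALUES (1, NULL);     -- fails with a uniqueness violation, because
                                     -- nulls are treated as equal for unique indexes
```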
Teradata Database Replaces Nulls With Values on Return to Client in
Record Mode
When the Teradata Database returns information to a client system in record mode, nulls
must be replaced with some value for the underlying column because client system languages
do not recognize nulls.
The following table shows the values returned for various column data types.
The substitute values returned for nulls are not, by themselves, distinguishable from valid
non-null values. Data from CLI is normally accessed in IndicData mode, in which additional
identifying information that flags nulls is returned to the client.
BTEQ uses the identifying information, for example, to determine whether the values it
receives are values or just aliases for nulls so it can properly report the results. Note that BTEQ
displays nulls as ?, which are not by themselves distinguishable from a CHAR or VARCHAR
value of '?'.
Nulls and Aggregate Functions
With the important exception of COUNT(*), aggregate functions ignore nulls in their
arguments. This treatment of nulls is very different from the way arithmetic operators and
functions treat them.
This behavior can result in apparent nontransitive anomalies. For example, if there are nulls in
either column A or column B (or both), then the following expression is virtually always true:
SUM(A) + SUM(B) <> SUM(A+B)
Data Type             Substitute Value Returned for Null
CHARACTER(n)          Pad character (or n pad characters for CHARACTER(n), where n > 1)
DATE (ANSI)
TIME
TIMESTAMP
INTERVAL
BYTE[(n)]             Binary zero byte if n is omitted; otherwise n binary zero bytes
VARBYTE(n)            0-length byte string
VARCHARACTER(n)       0-length character string
DATE (Teradata)       0
BIGINT                0
INTEGER
SMALLINT
BYTEINT
FLOAT
DECIMAL
REAL
DOUBLE PRECISION
NUMERIC
In other words, for the case of SUM, the result is never a simple iterated addition if there are
nulls in the data being summed.
The only exception to this is the case in which the values for columns A and B are both null in
the same rows, because in those cases the entire row is disregarded in the aggregation. This is a
trivial case that does not violate the general rule.
The same is true, the necessary changes being made, for all the aggregate functions except
COUNT(*).
If this property of nulls presents a problem, you can always do either of the following
workarounds, each of which produces the desired result of the aggregate computation
SUM(A) + SUM(B) = SUM(A+B).
• Always define NUMERIC columns as NOT NULL DEFAULT 0.
• Use the ZEROIFNULL function within the aggregate function to convert any nulls to zeros
for the computation, for example
SUM(ZEROIFNULL(x) + ZEROIFNULL(y))
which produces the same result as this:
SUM(ZEROIFNULL(x)) + SUM(ZEROIFNULL(y))
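A hedged sketch of the anomaly and the workaround; the table t3 and its data are hypothetical:

```sql
-- Assume t3 contains the rows (a, b): (1, 2) and (3, NULL).
SELECT SUM(a) + SUM(b) FROM t3;   -- 6: SUM(a) = 4, SUM(b) = 2 (the null is ignored)
SELECT SUM(a + b) FROM t3;        -- 3: (3 + NULL) is null, so that row drops out

-- Workaround: convert nulls to zero before aggregating.
SELECT SUM(ZEROIFNULL(a) + ZEROIFNULL(b)) FROM t3;   -- 6
```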
COUNT(*) does include nulls in its result. For details, see SQL Reference: Functions and
Operators.
RANGE_N and CASE_N Functions
Nulls have special considerations in the RANGE_N and CASE_N functions. For details, see
SQL Reference: Functions and Operators.
Session Parameters
Introduction
The following session parameters can be controlled with keywords or predefined system
variables.
Parameter                 Valid Keywords or System Variables
SQL Flagger               ON
                          OFF
Transaction Mode          ANSI (COMMIT)
                          Teradata (BTET)
Session Collation         ASCII
                          EBCDIC
                          MULTINATIONAL
                          HOST
                          CHARSET_COLL
                          JIS_COLL
Account and Priority      Account and reprioritization. Within the account identifier,
                          you can specify a performance group or use one of the
                          following predefined performance groups:
                          • $R
                          • $H
                          • $M
                          • $L
Date Form                 ANSIDATE
                          INTEGERDATE
Character Set             Indicates the character set being used by the client.
                          You can view site-installed client character sets from
                          DBC.CharSets or DBC.CharTranslations.
                          The following client character sets are permanently enabled:
                          • ASCII
                          • EBCDIC
                          • UTF8
                          • UTF16
                          For more information on character sets, see International
                          Character Set Support.
Express Logon             ENABLE
(for network-attached     DISABLE
clients)
SQL Flagger
When enabled, the SQL Flagger assists SQL programmers by notifying them of the use of
non-ANSI and non-entry-level ANSI SQL syntax.
You can enable the SQL Flagger regardless of whether you are in ANSI or Teradata session
mode.
To set the SQL Flagger on or off for interactive SQL, use the .SET SESSION command in
BTEQ.
For more detail on using the SQL Flagger, see “SQL Flagger” on page 217.
To set the SQL Flagger on or off for embedded SQL, use the SQLCHECK or -sc and
SQLFLAGGER or -sf options when you invoke the preprocessor.
If you are using SQL in other application programs, see the reference manual for that
application for instructions on enabling the SQL Flagger.
Transaction Mode
You can run transactions in either Teradata or ANSI session modes and these modes can be set
or changed.
To set the transaction mode, use the .SET SESSION command in BTEQ.
For more detail on transaction semantics, see “Transaction Processing” in SQL Reference:
Statement and Transaction Processing.
If you are using SQL in other application programs, see the reference manual for that
application for instructions on setting or changing the transaction mode.
Session Collation
Collation of character data is an important and complex option. The Teradata Database
provides several named collations. The MULTINATIONAL and CHARSET_COLL collations
allow the system administrator to provide collation sequences tailored to the needs of the site.
The collation for the session is determined at logon from the defined default collation for the
user. You can change your collation any number of times during the session using the SET
SESSION COLLATION statement, but you cannot change your default logon in this way.
Your default collation is assigned via the COLLATION option of the CREATE USER or
MODIFY USER statement. This has no effect on any current session, only new logons.
To set this level of flagging …   Set the flag variable to this value …
None                              SQLFLAG NONE
Entry level                       SQLFLAG ENTRY
Intermediate level                SQLFLAG INTERMEDIATE

To run transactions in this mode …   Set the variable to this value …
Teradata                             TRANSACTION BTET
ANSI                                 TRANSACTION ANSI
Each named collation can be CASESPECIFIC or NOT CASESPECIFIC. NOT CASESPECIFIC
collates lowercase data as if it were converted to uppercase before the named collation is
applied.
For details, see “SET SESSION COLLATION” in SQL Reference: Data Definition Statements.
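For example, a session-level change might look like the following hedged sketch; see the statement's full syntax in SQL Reference: Data Definition Statements:

```sql
-- Switch the current session to the MULTINATIONAL collation.
-- This affects only the current session, not the user's default.
SET SESSION COLLATION MULTINATIONAL;
```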
Collation Name     Description
ASCII              Character data is collated in the order it would appear if converted
                   for an ASCII session, and a binary sort performed.
EBCDIC             Character data is collated in the order it would appear if converted
                   for an EBCDIC session, and a binary sort performed.
MULTINATIONAL      The default MULTINATIONAL collation is a two-level collation based
                   on the Unicode collation standard.
                   Your system administrator can redefine this collation to any two-level
                   collation of characters in the LATIN repertoire.
                   For backward compatibility, the following are true:
                   • MULTINATIONAL collation of KANJI1 data is single level.
                   • The system administrator can redefine single byte character
                     collation. This definition is not compatible with MULTINATIONAL
                     collation of non-KANJI1 data. CHARSET_COLL collation is usually
                     a better solution for KANJI1 data.
                   See “ORDER BY Clause” in SQL Reference: Data Manipulation
                   Statements. For information on setting up the MULTINATIONAL
                   collation sequence, see “Collation Sequences” in International
                   Character Set Support.
HOST               The default. HOST collation defaults are as follows:
                   • EBCDIC collation for channel-connected systems.
                   • ASCII collation for all others.
CHARSET_COLL       Character data is collated in the order it would appear if converted
                   to the current client character set and then sorted in binary order.
                   CHARSET_COLL collation is a system administrator-defined collation.
JIS_COLL           Character data is collated based on the Japanese Industrial
                   Standards (JIS). JIS characters collate in the following order:
                   1 JIS X 0201-defined characters in standard order
                   2 JIS X 0208-defined characters in standard order
                   3 JIS X 0212-defined characters in standard order
                   4 KanjiEBCDIC-defined characters not defined in JIS X 0201,
                     JIS X 0208, or JIS X 0212, in standard order
                   5 All remaining characters in Unicode standard order
Account and Priority
You can dynamically downgrade or upgrade the performance group priority for your account.
Priorities can be downgraded or upgraded at either the session or the request level. For more
information, see “SET SESSION ACCOUNT” in SQL Reference: Data Definition Statements.
Note that changing the performance group for your account changes the account name for
accounting purposes because a performance group is part of an account name.
Date Form
You can change the format in which DATE data is imported or exported in your current
session.
DATE data can be set to be treated either using the ANSI date format
(DATEFORM=ANSIDATE) or using the Teradata date format
(DATEFORM=INTEGERDATE).
For details, see “SET SESSION DATEFORM” in SQL Reference: Data Definition Statements.
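For example, a hedged sketch of the session-level statement; check the full syntax in SQL Reference: Data Definition Statements:

```sql
-- Import and export DATE values in ANSI format (e.g. DATE '2006-09-30')
-- for the remainder of the current session.
SET SESSION DATEFORM = ANSIDATE;
```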
Character Set
To set the client character set, use one of the following:
• From BTEQ, use the BTEQ [.] SET SESSION CHARSET ‘name’ command.
• In a CLIv2 application, call CHARSET name.
• In the URL for selecting a Teradata JDBC driver connection to a Teradata Database, use
the CHARSET=name database connection parameter.
where the ‘name’ or name value is ASCII, EBCDIC, UTF8, UTF16, or a name assigned to the
translation codes that define an available character set.
If not explicitly requested, the session default is the character set associated with the logon
client. This is either the standard client default, or the character set assigned to the client by
the database administrator.
Express Logon
Express Logon improves the logon response time for network-attached, NCR UNIX MP-RAS
clients and is especially useful in the OLTP environment where sessions are short-lived.
Express Logon allows the gateway to choose the fast path when logging users onto the
Teradata Database.
Enable or disable this mode from the Gateway Global Utility, from the XGTWGLOBAL
interface:
In this mode …   Use this command to enable or disable Express Logon …
Terminal         ENABLE EXLOGON
                 DISABLE EXLOGON
Window           EXLOGON button (via the LOGON dialog box)
The feature can be enabled or disabled for a particular host group, or for all host groups. For
details on this feature, see the Utilities book.
For channel-attached clients, see “Session Pools” on page 143.
HELP SESSION
The HELP SESSION statement identifies the transaction mode, character set, collation
sequence, and date form in effect for the current session. See “HELP SESSION” in SQL
Reference: Data Definition Statements for details.
Session Management
Introduction
Each session is logged on and off via calls to CLIv2 routines or through ODBC or JDBC, which
offer a one-step logon-connect function.
Sessions are internally managed by dividing the session control functions into a series of single
small steps that are executed in sequence to implement multi-threaded tasking. This provides
concurrent processing of multiple logon and logoff events, which can be any combination of
individual users, and one or more concurrent sessions established by one or more users and
applications.
Once connected and active, a session can be viewed as a work stream consisting of a series of
requests between the client and server.
Session Pools
For channel-connected applications, you can establish session pools, which are collections of
sessions that are logged on to the Teradata Database in advance (generally at the time of TDP
initialization) for use by applications that require a ‘fast path’ logon. This capability is
particularly advantageous for transaction processing in which interaction with the Teradata
Database consists of many single, short transactions.
TDP identifies each session with a unique session number. Teradata Database identifies a
session with a session number, the username of the initiating user, and the logical host
identification number of the connection (LAN or mainframe channel) associated with the
controlling TDP or mTDP.
For network-connected, UNIX MP-RAS applications that require fast path logons, use the
Express Logon feature. For details, see “Express Logon” on page 142.
Session Reserve
Use the ENABLE SESSION RESERVE command from an OS/390 or VM client to reserve
session capacity in the event of a PE failure. To release reserved session capacity, use the
DISABLE SESSION RESERVE command.
See Teradata Tools and Utilities Installation Guide for IBM OS/390 and z/OS and Teradata Tools
and Utilities Installation Guide for IBM VM for further information.
Session Control
The major functions of session control are session logon and logoff.
Upon receiving a session request, the logon function verifies authorization and returns a yes
or no response to the client.
The logoff function terminates any ongoing activity and deletes the session context.
Requests and Responses
Requests are sent to a server to initiate an action. Responses are sent by a server to reflect the
results of that action. Both requests and responses are associated with an established session.
A request consists of the following components:
• One or more Teradata SQL statements
• Control information
• Optional USING data
If any operation specified by an initiating request fails, the request is backed out, along with
any change that was made to the database. In this case, a failure response is returned to the
application.
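For example (a sketch only; the table names here are hypothetical, not from this manual), a BTEQ multistatement request submits two statements as a single unit. If either statement fails, the changes made by both are backed out and a failure response is returned:

```sql
-- Hypothetical multistatement request (BTEQ syntax): the leading
-- semicolon chains the INSERT onto the same request as the UPDATE.
-- If either statement fails, changes made by both are backed out.
UPDATE inventory
SET qty = qty - 1
WHERE item = 12
;INSERT INTO audit_log VALUES (12, CURRENT_TIMESTAMP);
```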
Return Codes
Introduction
SQL return codes provide information about the status of a completed executable SQL DML
statement.
Status Variables for Receiving SQL Return Codes
ANSI SQL defines two status variables for receiving return codes:
• SQLSTATE
• SQLCODE
SQLCODE is not ANSI SQL-compliant. The ANSI SQL-92 standard explicitly deprecates
SQLCODE, and the ANSI SQL-99 standard does not define SQLCODE. The ANSI SQL
committee recommends that new applications use SQLSTATE in place of SQLCODE.
Teradata Database defines a third status variable for receiving the number of rows affected by
an SQL statement in a stored procedure:
• ACTIVITY_COUNT
Teradata SQL defines a non-ANSI SQL Communications Area (SQLCA) that also has a field
named SQLCODE for receiving return codes.
Exception and Completion Conditions
ANSI SQL defines two categories of conditions that issue return codes:
• Exception conditions
• Completion conditions
Exception Conditions
An exception condition indicates a statement failure.
A statement that raises an exception condition does nothing more than return that exception
condition to the application.
There are as many exception condition return codes as there are specific exception conditions.
For more information about exception conditions, see “Failure Response” on page 150 and
“Error Response (ANSI Session Mode Only)” on page 149.
For a complete list of exception condition codes, see the Messages book.
Completion Conditions
A completion condition indicates statement success.
There are three categories of completion conditions:
• Successful completion
• Warnings
• No data found
For more information, see:
• “Statement Responses” on page 147
• “Success Response” on page 148
• “Warning Response” on page 149
A statement that raises a completion condition can take further action such as querying the
database and returning results to the requesting application, updating the database, initiating
an SQL transaction, and so on.
For information on … See …
• SQLSTATE, SQLCODE, and ACTIVITY_COUNT: “Result Code Variables” in SQL Reference: Stored Procedures and Embedded SQL
• SQLCA: “SQL Communications Area (SQLCA)” in SQL Reference: Stored Procedures and Embedded SQL
Return Codes for Stored Procedures
The return code values are different in the case of SQL control statements in stored procedures.
The return codes for stored procedures appear in the following table.

FOR this type of condition … THEN the values for the return codes are …
• Successful completion: SQLSTATE '00000', SQLCODE 0
• Warning: the SQLSTATE value corresponding to the warning code, and the Teradata Database warning code for SQLCODE
• No data found or any other exception: the SQLSTATE value corresponding to the error code, and the Teradata Database error code for SQLCODE

How an Application Uses SQL Return Codes
An application program or stored procedure tests a completed executable SQL statement to determine its status.

FOR this type of completion condition … the values for the return codes are …
• Success: SQLSTATE '00000', SQLCODE 0
• Warning: SQLSTATE '01901', SQLCODE 901; SQLSTATE '01800' to '01841', SQLCODE 901; SQLSTATE '01004', SQLCODE 902
• No data found: SQLSTATE '02000', SQLCODE 100

IF the statement raises this type of condition … THEN the application or condition handler takes the following remedial action …
• Successful completion: none.
• Warning: statement execution continues. If a warning condition handler is defined in the application, the handler executes.
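As an illustration (a hedged sketch only — the procedure, table, and column names are hypothetical, not from this manual), a stored procedure can declare a condition handler for an exception and test the ACTIVITY_COUNT status variable after a DML statement:

```sql
-- Hypothetical stored procedure: logs a message when no rows qualify.
CREATE PROCEDURE UpdateSalary (IN emp INTEGER, IN amt DECIMAL(10,2))
BEGIN
   -- CONTINUE handler: on SQLSTATE '02000' (no data found), log and go on.
   DECLARE CONTINUE HANDLER FOR SQLSTATE '02000'
      INSERT INTO proc_log VALUES ('no data found', emp);

   UPDATE employee
   SET salary_amount = salary_amount + amt
   WHERE employee_number = emp;

   -- ACTIVITY_COUNT holds the number of rows affected by the UPDATE.
   IF ACTIVITY_COUNT = 0 THEN
      INSERT INTO proc_log VALUES ('no rows updated', emp);
   END IF;
END;
```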
Statement Responses
Response Types
The Teradata Database responds to an SQL request with one of the following condition
responses:
• Success response, with optional warning
• Failure response
• Error response (ANSI session mode only)
Depending on the type of statement, the Teradata Database also responds with one or more
rows of data.
Multistatement Responses
A response to a request that contains more than one statement, such as a macro, is not
returned to the client until all statements in the request are successfully executed.
How a Response Is Returned to the User
The manner in which the response is returned depends on the interface that is being used.
For example, if an application is using a language preprocessor, then the activity count,
warning code, error code, and fields from a selected row are returned directly to the program
through its appropriately declared variables.
If the application is a stored procedure, then the activity count is returned directly in the
ACTIVITY_COUNT status variable.
If you are using BTEQ, then a success, error, or failure response is displayed automatically.
Response Condition Codes
SQL statements also return condition codes that are useful for handling errors and warnings
in embedded SQL and stored procedure applications.
For a no-data-found or any other exception condition, the application or condition handler takes whatever remedial action is required by the exception. If an EXIT handler has been defined for the exception, statement execution terminates. If a CONTINUE handler has been defined, execution continues after the remedial action.
For information about SQL response condition codes, see the following in SQL Reference:
Stored Procedures and Embedded SQL:
• SQLSTATE
• SQLCODE
• ACTIVITY_COUNT
Success Response
Definition
A success response contains an activity count that indicates the total number of rows involved
in the result.
For example, the activity count for a SELECT statement is the total number of rows selected
for the response. For a SELECT, COMMENT, or ECHO statement, the activity count is
followed by the data that completes the response.
An activity count is meaningful for statements that return a result set, for example:
• SELECT
• INSERT
• UPDATE
• DELETE
• HELP
• SHOW
• EXPLAIN
• CREATE PROCEDURE
• REPLACE PROCEDURE
For other SQL statements, activity count is meaningless.
Example
The following interactive SELECT statement returns the successful response message.
SELECT AVG(f1)
FROM Inventory;
*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
Average(f1)
-----------
14
Warning Response
Definition
A success (or OK) response with a warning indicates that an anomaly has occurred. The warning informs the user about the anomaly and indicates how it can be important to the interpretation of the results returned.
Example
Assume the current session is running in ANSI session mode.
If nulls are included in the data for column f1, then the following interactive query returns the
successful response message with a warning about the nulls.
SELECT AVG(f1) FROM Inventory;
*** Query completed. One row found. One column returned.
*** Warning: 2892 Null value eliminated in set function.
*** Total elapsed time was 1 second.
Average(f1)
-----------
14
This warning response is not generated if the session is running in Teradata session mode.
Error Response (ANSI Session Mode Only)
Definition
An error response occurs when a query anomaly is severe enough to prevent the correct
processing of the request.
In ANSI session mode, an error causes only the request to be rolled back, not the entire transaction.
Example 1
The following command returns the error message immediately following.
.SET SESSION TRANS ANSI;
*** Error: You must not be logged on .logoff to change the SQLFLAG or
TRANSACTION settings.
Example 2
Assume that the session is running in ANSI session mode, and the following table is defined:
CREATE MULTISET TABLE inv, FALLBACK,
NO BEFORE JOURNAL,
NO AFTER JOURNAL
(
item INTEGER CHECK ((item >=10) AND (item <= 20) ))
PRIMARY INDEX (item);
You insert a value of 12 into the item column of the inv table.
This is valid because the defined integer check specifies that any integer between 10 and 20
(inclusive) is valid.
INSERT INTO inv (12);
The following results message returns.
*** Insert completed. One row added....
You insert a value of 9 into the item column of the inv table.
This is not valid because the defined integer check specifies that any integer with a value less
than 10 is not valid.
INSERT INTO inv (9);
The following error response returns:
***Error 5317 Check constraint violation: Check error in field
inv.item.
You commit the current transaction:
COMMIT;
The following results message returns:
*** COMMIT done. ...
You select all rows from the inv table:
SELECT * FROM inv;
The following results message returns:
*** Query completed. One row found. One column returned.
item
-------
12
Failure Response
Definition
A failure response is a severe error. The response includes a statement number, an error code,
and an associated text string describing the cause of the failure.
Teradata Session Mode
In Teradata session mode, a failure causes the system to roll back the entire transaction.
If one statement in a macro fails, a single failure response is returned to the client, and the
results of any previous statements in the transaction are backed out.
ANSI Session Mode
In ANSI session mode, a failure causes the system to roll back the entire transaction, for
example, when the current request:
• Results in a deadlock
• Performs a DDL statement that aborts
• Executes an explicit ROLLBACK or ABORT statement
Example 1
The following SELECT statement
SELECT * FROM Inventory:;
in BTEQ, returns the failure response message:
*** Failure 3709 Syntax error, replace the ':' that follows the name
with a ';'.
Statement# 1, Info =20
*** Total elapsed time was 1 second.
Example 2
Assume that the session is running in ANSI session mode, and the following table is defined:
CREATE MULTISET TABLE inv, FALLBACK,
NO BEFORE JOURNAL,
NO AFTER JOURNAL
(
item INTEGER CHECK ((item >=10) AND (item <= 20) ))
PRIMARY INDEX (item);
You insert a value of 12 into the item column of the inv table.
This is valid because the defined integer check specifies that any integer between 10 and 20
(inclusive) is valid.
INSERT INTO inv (12);
The following results message returns.
*** Insert completed. One row added....
You commit the current transaction:
COMMIT;
The following results message returns:
*** COMMIT done. ...
You insert a valid value of 15 into the item column of the inv table:
INSERT INTO inv (15);
The following results message returns.
*** Insert completed. One row added....
You can use the ABORT statement to cause the system to roll back the transaction:
ABORT;
The following failure message returns:
*** Failure 3514 User-generated transaction ABORT.
Statement# 1, Info =0
You select all rows from the inv table:
SELECT * FROM inv;
The following results message returns:
*** Query completed. One row found. One column returned.
item
-------
12
CHAPTER 5 Query Processing
This chapter discusses query processing, including single AMP requests and all AMP requests,
and table access methods available to the Optimizer.
Topics include:
• Query processing
• Table access methods
• Full-table scans
• Collecting statistics
Query Processing
Introduction
An SQL query (the definition for “query” here includes DELETE, INSERT, MERGE, and
UPDATE as well as SELECT) can affect one AMP, several AMPs, or all AMPs in the
configuration.
IF a query … THEN …
• involves a single table and uses a unique primary index (UPI): the row hash can be used to identify a single AMP. At most one row can be returned.
• involves a single table and uses a nonunique primary index (NUPI): the row hash can be used to identify a single AMP. Any number of rows can be returned.
• uses a unique secondary index (USI): one or two AMPs are affected (one AMP if the subtable and base table are on the same AMP). At most one row can be returned.
• uses a nonunique secondary index (NUSI): if the table has a partitioned primary index (PPI) and the NUSI is the same column set as a NUPI, the query affects one AMP. Otherwise, all AMPs take part in the operation and any number of rows can be returned.
The SELECT statements in subsequent examples reference the following table data.
Single AMP Request
Assume that a PE receives the following SELECT statement:
SELECT last_name
FROM Employee
WHERE employee_number = 1008;
Because a unique primary index value is used as the search condition (the column
employee_number is the primary index for the Employee table), PE1 generates a single AMP
step requesting the row for employee 1008. The AMP step, along with the PE identification, is
put into a message, and sent via the BYNET to the relevant AMP (processor).
This process is illustrated by the graphic under “Flow Diagram of a Single AMP Request” on
page 155. Only one BYNET is shown to simplify the illustration.
Abbreviation  Meaning
PK            Primary Key
FK            Foreign Key
UPI           Unique Primary Index

Employee

Employee  Manager   Dept.   Job     Last      First    Hire    Birth   Salary
Number    Employee  Number  Code    Name      Name     Date    Date    Amount
          Number
PK/UPI    FK        FK      FK
1006      1019      301     312101  Stein     John     76105   531015  2945000
1008      1019      301     312102  Kanieski  Carol    770201  580517  2925000
1005      0801      403     431100  Ryan      Loretta  761015  550910  3120000
1004      1003      401     412101  Johnson   Darlene  761015  460423  3630000
1007      1005      403     432101  Villegas  Arnando  770102  370131  4970000
1003      0801      401     411100  Trader    James    760731  470619  3755000
1016      0801      302     321100  Rogers    Nora     780310  590904  5650000
1012      1005      403     432101  Hopkins   Paulene  770315  420218  3790000
1019      0801      301     311100  Kubic     Ron      780801  421211  5770000
1023      1017      501     512101  Rabbit    Peter    790301  621029  2650000
1083      0801      619     414221  Kimble    George   910312  410330  3620000
1017      0801      501     511100  Runyon    Irene    780501  511110  6600000
1001      1003      401     412101  Hoover    William  760818  500114  2552500
Flow Diagram of a Single AMP Request
Assuming that AMP2 has the row, it accepts the message.
As illustrated by the graphic under “Single AMP Response to Requesting PE” on page 156,
AMP2 retrieves the row from its DSU (disk storage unit), includes the row and the PE
identification in a return message, and sends the message back to PE1 via the BYNET.
PE1 accepts the message and returns the response row to the requesting application.
For an illustration of a single AMP request with partition elimination, see “Single AMP
Request With Partition Elimination” on page 160.
[Figure: a single AMP request. PE1 and PE2 connect through the BYNET to AMP1-AMP4, each with its DSU; the AMP step travels from PE1 over the BYNET to the AMP that holds the row for employee 1008.]
Single AMP Response to Requesting PE
All AMP Request
Assume PE1 receives a SELECT statement that specifies a range of primary index values as a
search condition as shown in the following example:
SELECT last_name, employee_number
FROM employee
WHERE employee_number BETWEEN 1001 AND 1010
ORDER BY last_name;
In this case, each value hashes differently, and all AMPs must search for the qualifying rows.
PE1 first parses the request and creates the following AMP steps:
• Retrieve rows between 1001 and 1010
• Sort ascending on last_name
• Merge the sorted rows to form the answer set
PE1 then builds a message for each AMP step and puts that message onto the BYNET.
Typically, each AMP step is completed before the next one begins; note, however, that some
queries can generate parallel steps.
When PE1 puts the message for the first AMP step on the BYNET, that message is broadcast to
all processors as illustrated by “Figure 1: Flow Diagram for an All AMP Request” on page 157.
[Figure: the single AMP response. AMP2 returns the row for employee 1008 over the BYNET to PE1.]
Figure 1: Flow Diagram for an All AMP Request
The process is as follows:
1 All AMPs accept the message, but the PEs do not.
2 Each AMP checks for qualifying rows on its disk storage units.
3 If any qualifying rows are found, the data in the requested columns is converted to the
client format and copied to a spool file.
4 Each AMP completes the step, whether rows were found or not, and puts a completion
message on the BYNET.
The completion messages flow across the BYNET to PE1.
5 When all AMPs have returned a completion message, PE1 transmits a message containing
AMP Step 2 to the BYNET.
Upon receipt of Step 2, the AMPs sort their individual answer sets into ascending sequence
by last_name (see “Figure 2: Flow Diagram for an AMP Sort” on page 158).
Note: If the table is partitioned on employee_number, the scan may be limited to a few partitions based on partition elimination.
[Figure: the all-AMP request. Every AMP receives the step over the BYNET, scans its rows, and copies qualifying data to a data spool.]
Figure 2: Flow Diagram for an AMP Sort
6 Each AMP sorts its answer set, then puts a completion message on the BYNET.
7 When PE1 has received all completion messages for Step 2, it sends a message containing
AMP Step 3.
8 Upon receipt of Step 3, each AMP copies the first block from its sorted spool to the
BYNET.
Because there can be multiple AMPs on a single node, each node might be required to
handle sort spools from multiple AMPs (see “Figure 3: Flow Diagram for a BYNET
Merge” on page 159).
[Figure: the AMP sort. Each AMP sorts its data spool into a sort spool, ordered ascending on last_name.]
Figure 3: Flow Diagram for a BYNET Merge
9 Nodes that contain multiple AMPs must first perform an intermediate sort of the spools
generated by each of the local AMPs.
When the local sort is complete on each node, the lowest sorting row from each node is
sent over the BYNET to PE1. From this point on, PE1 acts as the Merge coordinator
among all the participating nodes.
10 The Merge continues with PE1 building a globally sorted buffer.
When this buffer fills, PE1 forwards it to the application and begins building subsequent
buffers.
11 When a participant node has exhausted its sort spool, it sends a Done message to PE1.
This causes PE1 to prune this node from the set of Merge participants.
When there are no remaining Merge participants, PE1 sends the final buffer to the
application along with an End Of File message.
Partition Elimination
A PPI can increase query efficiency via partition elimination. The degree of partition
elimination depends on the:
• Partition expression for the primary index of the table
• Conditions in the query
• Capability of the Optimizer to detect partition elimination
It is not always required that all values of the partitioning columns be specified in a query to
have partition elimination occur.
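For instance (a sketch only; the table, columns, and partitioning scheme are hypothetical, not from this manual), a table partitioned by a date column lets the Optimizer eliminate partitions when a query constrains that column:

```sql
-- Hypothetical table with a partitioned primary index (PPI):
-- one partition per month of 2006.
CREATE TABLE orders (
   order_id   INTEGER,
   order_date DATE,
   amount     DECIMAL(10,2))
PRIMARY INDEX (order_id)
PARTITION BY RANGE_N (
   order_date BETWEEN DATE '2006-01-01' AND DATE '2006-12-31'
   EACH INTERVAL '1' MONTH);

-- The condition on the partitioning column allows partition
-- elimination: only the January 2006 partition need be scanned.
SELECT order_id, amount
FROM orders
WHERE order_date BETWEEN DATE '2006-01-01' AND DATE '2006-01-31';
```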
[Figure: a BYNET merge. Each node builds a local sort tree over the sort spools of its AMPs; PE1 merges the per-node streams into a global sort buffer.]
Single AMP Request With Partition Elimination
If a SELECT specifies values for all the primary index columns, the AMP where the rows reside
can be determined and only a single AMP is accessed.
If conditions are also specified on the partitioning columns, partition elimination may reduce
the number of partitions to be probed on that AMP.
IF a SELECT … THEN …
• specifies values for all the primary index columns: the AMP where the rows reside can be determined and only a single AMP is accessed.
   • IF conditions are not specified on the partitioning columns, each partition can be probed to find the rows based on the hash value.
   • IF conditions are also specified on the partitioning columns, partition elimination may reduce the number of partitions to be probed on that AMP. For an illustration, see “Single AMP Request With Partition Elimination” on page 160.
• does not specify the values for all the primary index columns: an all-AMP full file scan is required for a table with an NPPI. However, with a PPI, if conditions are specified on the partitioning columns, partition elimination may reduce an all-AMP full file scan to an all-AMP scan of only the non-eliminated partitions.
The following diagram illustrates this process.
The AMP step includes the list of partitions (P) to access. Partition elimination reduces access to the partitions that satisfy the query conditions. Within each noneliminated partition, the AMP looks for rows with the given row hash value (RH) of the primary index.
Table Access
Teradata Database uses indexes and partitions to access the rows of a table. If indexed or
partitioned access is not suitable for a query, the result is a full-table scan.
Access Methods
The following table access methods are available to the Optimizer:
• Unique Primary Index
• Unique Partitioned Primary Index
• Nonunique Primary Index
• Nonunique Partitioned Primary Index
• Unique Secondary Index
• Nonunique Secondary Index
• Join Index
• Hash Index
• Full-Table Scan
• Partition Scan
Effects of Conditions in WHERE Clause
Whether the system can use row hashing, perform a table scan with partition elimination, or must perform a full-table scan depends on the predicates or conditions that appear in the WHERE clause of an UPDATE, DELETE, or SELECT statement.
The following functions are applied to rows identified by the WHERE clause, and have no effect on the selection of rows from the base table:
• GROUP BY
• HAVING
• INTERSECT
• MINUS/EXCEPT
• ORDER BY
• QUALIFY
• SAMPLE
• UNION
• WITH ... BY
• WITH
Statements that specify any of the following WHERE clause conditions result in full-table scans (FTS). If the table has a PPI, partition elimination might reduce the FTS access to only the affected partitions.
• nonequality comparisons
• column_name IS NOT NULL
• column_name NOT IN (explicit list of values)
• column_name NOT IN (subquery)
• column_name BETWEEN ... AND ...
• condition_1 OR condition_2
• NOT condition_1
• column_name LIKE
• column_1 || column_2 = value
• table1.column_x = table1.column_y
• table1.column_x [computation] = value
• table1.column_x [computation] = table1.column_y
• INDEX (column_name)
• SUBSTR (column_name)
• SUM
• MIN
• MAX
• AVG
• DISTINCT
• COUNT
• ANY
• ALL
• missing WHERE clause
The type of table access that the system uses when statements specify any of the following WHERE clause conditions depends on whether the column or columns are indexed, the type of index, and its selectivity:
• column_name = value or constant expression
• column_name IS NULL
• column_name IN (explicit list of values)
• column_name IN (subquery)
• condition_1 AND condition_2
• different data types
• table1.column_x = table2.column_x
In summary, a query influences processing choices as follows:
• A full-table scan (possibly with partition elimination if the table has a PPI) is required if
the query includes an implicit range of values, such as in the following WHERE examples.
Note that when a small BETWEEN range is specified, the optimizer can use row hashing
rather than a full-table scan.
... WHERE column_name [BETWEEN <, >, <>, <=, >=]
... WHERE column_name [NOT] IN (SELECT...)
... WHERE column_name NOT IN (val1, val2 [,val3])
• Row hashing can be used if the query includes an explicit value, as shown in the following
WHERE examples:
... WHERE column_name = val
... WHERE column_name IN (val1, val2, [,val3])
Related Topics
Full-Table Scans
Introduction
A full-table scan is a retrieval mechanism that touches all rows in a table.
If you do not specify a WHERE clause in your query, then the Teradata Database always uses a
full-table scan to access the data.
Even when results are qualified using a WHERE clause, indexed or partitioned access may not
be suitable for a query, and a full-table scan may result.
A full-table scan is always an all-AMP operation, and should be avoided when possible. Full-table scans may generate spool files that can have as many rows as the base table.
Full-table scans are not something to fear, however. The architecture that the Teradata
Database uses makes a full-table scan an efficient procedure, and optimization is scalable
based on the number of AMPs defined for the system. The sorts of unplanned, ad hoc queries
that characterize the data warehouse process, and that often are not supported by indexes,
perform very effectively for Teradata Database using full-table scans.
FOR more information on … SEE …
• the efficiency, number of AMPs used, and the number of rows accessed by all table access methods: Database Design
• strengths and weaknesses of table access methods: Introduction to Teradata Warehouse
• full-table scans: “Full-Table Scans” on page 163
• index access: “Indexes” on page 17
How a Full-Table Scan Accesses Rows
Because full-table scans necessarily touch every row on every AMP, they do not use the
following mechanisms for locating rows.
• Hashing algorithm and hash map
• Primary indexes
• Secondary indexes or their subtables
• Partitioning
Instead, a full-table scan uses the file system tables known as the Master Index and Cylinder
Index to locate each data block. Each row within a data block is located by a forward scan.
Because rows from different tables are never mixed within the same data block and because
rows never span blocks, an AMP can scan up to 128K bytes of the table on each block read,
making a full-table scan a very efficient operation. Data block read-ahead and cylinder reads
can also increase efficiency.
Related Topics
Collecting Statistics
The COLLECT STATISTICS (Optimizer form) statement collects demographic data for one
or more columns of a base table, hash index, or join index, computes a statistical profile of the
collected data, and stores the synopsis in the data dictionary.
The Optimizer uses the synopsis data when it generates its table access and join plans.
Usage
You should collect statistics on newly created, empty data tables. An empty collection defines the columns, indexes, and synoptic data structure for subsequent collections. You can then easily re-collect statistics after the table is populated for prototyping, and again when it is in production.
FOR more information on … SEE …
• full-table scans: Database Design
• cylinder reads: Database Administration
• data-block read ahead: Performance Management; “DBS Control Utility” in Utilities
You can collect statistics on a:
• Unique index, which can be:
• Primary or secondary
• Single or multiple column
• Partitioned or non-partitioned
• Non-unique index, which can be:
• Primary or secondary
• Single or multiple column
• Partitioned or non-partitioned
• With or without COMPRESS fields
• Non-indexed column or set of columns, which can be:
• Partitioned or non-partitioned
• With or without COMPRESS fields
• Join index
• Hash index
• Temporary table
• If you specify the TEMPORARY keyword but a materialized table does not exist, the
system first materializes an instance based on the column names and indexes you
specify. This means that after a true instance is created, you can update (re-collect)
statistics on the columns by entering COLLECT STATISTICS and the TEMPORARY
keyword without having to specify the desired columns and index.
• If you omit the TEMPORARY keyword but the table is a temporary table, statistics are
collected for an empty base table rather than the materialized instance.
• Sample (a system-selected percentage) of the rows of a data table or index. Sampling detects data skew and dynamically increases the sample size when skew is found.
• The SAMPLE option is not supported for global temporary tables, join indexes, or
hash indexes.
• The system does not store both sampled and defined statistics for the same index or column set. Once sampled statistics have been collected, an implicit re-collection applies to the same columns and indexes and operates in the same mode. To change this, specify the desired keywords or options and name the columns and/or indexes.
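As an illustration (a sketch only; the table and column names are hypothetical, not from this manual), typical collections look like this:

```sql
-- Hypothetical examples of the COLLECT STATISTICS (Optimizer form)
-- statement on a nonindexed column, an index, and a sampled column.
COLLECT STATISTICS ON employee COLUMN dept_number;
COLLECT STATISTICS ON employee INDEX (employee_number);
COLLECT STATISTICS USING SAMPLE ON employee COLUMN job_code;

-- After the initial definitions, re-collect on all previously
-- defined columns and indexes without naming them again:
COLLECT STATISTICS ON employee;
```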
Related Topics
FOR more information on … SEE …
• using the COLLECT STATISTICS statement: SQL Reference: Data Definition Statements
• collecting statistics on a join index: Database Design
• collecting statistics on a hash index: Database Design
• when to collect statistics on base table columns instead of hash index columns: Database Design
• database administration and collecting statistics: Database Administration
APPENDIX A Notation Conventions
This appendix describes the notation conventions used in this book.
Throughout this book, three conventions are used to describe the SQL syntax and code:
• Syntax diagrams, used to describe SQL syntax form, including options. See “Syntax
Diagram Conventions” on page 167.
• Square brackets in the text, used to represent options. The indicated parentheses are required when you specify options.
For example:
• DECIMAL [(n[,m])] means the DECIMAL data type can be defined optionally:
• without specifying the precision value n or scale value m
• specifying the precision value n only
• specifying both values (n,m)
You cannot specify the scale value m without first defining the precision value n.
• CHARACTER [(n)] means that use of (n) is optional.
The values for n and m are integers in all cases.
• Japanese character code shorthand notation, used to represent unprintable Japanese
characters. See “Character Shorthand Notation Used In This Book” on page 171.
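The square-bracket convention for DECIMAL can be illustrated as follows (a hypothetical table definition, for illustration only):

```sql
-- Illustrative only: the three optional forms of DECIMAL [(n[,m])].
CREATE TABLE t1 (
   a DECIMAL,        -- neither precision n nor scale m specified
   b DECIMAL(8),     -- precision n only
   c DECIMAL(8,2));  -- both precision n and scale m
```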
Symbols from the predicate calculus are also used occasionally to describe logical operations.
See “Predicate Calculus Notation Used in This Book” on page 172.
Syntax Diagram Conventions
Notation Conventions
The following table defines the notation used in this section:
Item Definition / Comments
Letter An uppercase or lowercase alphabetic character ranging from A through Z.
Number A digit ranging from 0 through 9.
Do not use commas when entering a number with more than three digits.
Paths
The main path along the syntax diagram begins at the left, and proceeds, left to right, to the
vertical bar, which marks the end of the diagram. Paths that do not have an arrow or a vertical
bar only show portions of the syntax.
The only part of a path that reads from right to left is a loop.
Paths that are too long for one line use continuation links. Continuation links are small circles
with letters indicating the beginning and end of a link:
When you see a circled letter in a syntax diagram, go to the corresponding circled letter and
continue.
Word Variables and reserved words.
IF a word is shown in … THEN it represents …
UPPERCASE LETTERS a keyword.
Syntax diagrams show all keywords in uppercase,
unless operating system restrictions require them
to be in lowercase.
If a keyword is shown in uppercase, you may
enter it in uppercase or mixed case.
lowercase letters a keyword that you must enter in lowercase, such
as a UNIX command.
lowercase italic letters a variable such as a column or table name.
You must substitute a proper value.
lowercase bold letters a variable that is defined immediately following
the diagram that contains it.
UNDERLINED LETTERS the default value.
This applies both to uppercase and to lowercase
words.
Spaces Use one space between items, such as keywords or variables.
Punctuation Enter all punctuation exactly as it appears in the diagram.
[Diagram FE0CA002: a continuation link; the circled letter A marks where a path breaks and where it resumes.]
Required Items
Required items appear on the main path:
If you can choose from more than one item, the choices appear vertically, in a stack. The first
item appears on the main path:
Optional Items
Optional items appear below the main path:
If choosing one of the items is optional, all the choices appear below the main path:
You can choose one of the options, or you can disregard all of the options.
Abbreviations
If a keyword or a reserved word has a valid abbreviation, the unabbreviated form always
appears on the main path. The shortest valid abbreviation appears beneath.
In the above syntax, the following formats are valid:
• SHOW CONTROLS
• SHOW CONTROL
Loops
A loop is an entry or a group of entries that you can repeat one or more times. Syntax
diagrams show loops as a return path above the main path, over the item or items that you can
repeat.
[Syntax diagrams for the subsections above:
FE0CA003 (Required Items): SHOW on the main path.
FE0CA005 (Required Items, choices): SHOW followed by a vertical stack of CONTROLS and VERSIONS.
FE0CA004 (Optional Items): SHOW with CONTROLS below the main path.
FE0CA006 (Optional Items, choices): SHOW with CONTROLS and VERSIONS below the main path.
FE0CA042 (Abbreviations): SHOW CONTROLS with the abbreviation CONTROL beneath the main path.]
The following rules apply to loops:

IF there is a maximum number of entries allowed, THEN the number appears in a circle on the return path. In the example, you may enter cname a maximum of 4 times.

IF there is a minimum number of entries required, THEN the number appears in a square on the return path. In the example, you must enter at least 3 groups of column names.

IF a separator character is required between entries, THEN the character appears on the return path. If the diagram does not show a separator character, use one blank space. In the example, the separator character is a comma.

IF a delimiter character is required around entries, THEN the beginning and end characters appear outside the return path. Generally, a space is not needed between delimiter characters and entries. In the example, the delimiter characters are the left and right parentheses.

Excerpts

Sometimes a piece of a syntax phrase is too large to fit into the diagram. Such a phrase is indicated by a break in the path, marked by | terminators on either side of the break. A name for the excerpted piece appears between the break marks in boldface type.
The named phrase appears immediately after the complete diagram, as illustrated by the following example.
[Diagram JC01B012 (Loops): a loop over cname with a comma separator and a maximum of 4 entries, nested inside a loop over parenthesized groups with a comma separator and a minimum of 3 groups.]
[Diagram JC01A014 (Excerpts): LOCKING followed by the named excerpt where_cond; the excerpt definition, shown after the diagram, contains HAVING con and a comma-separated loop over col_pos.]
Character Shorthand Notation Used In This
Book
Introduction
This book uses the UNICODE naming convention for characters. For example, the lowercase
character ‘a’ is more formally specified as either LATIN SMALL LETTER A or U+0061. The
U+xxxx notation refers to a particular code point in the Unicode standard, where xxxx stands
for the hexadecimal representation of the 16-bit value defined in the standard.
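The U+xxxx notation can be computed directly from a character's code point. A minimal sketch in Python (the helper name u_notation is our own):

```python
def u_notation(ch: str) -> str:
    """Format a character as its U+xxxx Unicode code point notation."""
    return f"U+{ord(ch):04X}"

print(u_notation("a"))        # U+0061, LATIN SMALL LETTER A
print(u_notation("\u3000"))   # U+3000, IDEOGRAPHIC SPACE
```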
In parts of the book, it is convenient to use a symbol to represent a special character, or a
particular class of characters. This is particularly true in discussion of the following Japanese
character encodings.
• KanjiEBCDIC
• KanjiEUC
• KanjiShift-JIS
These encodings are further defined in the International Character Set Support book.
Symbols
The symbols, and the character sets with which they are used, are defined in the following
table.
Symbol Encoding Meaning
a..z
A..Z
0..9
Any Any single byte Latin letter or digit.
a..z
A..Z
0..9
Unicode
compatibility
zone
Any fullwidth Latin letter or digit.
< KanjiEBCDIC Shift Out [SO] (0x0E).
Indicates transition from single to multibyte character in KanjiEBCDIC.
> KanjiEBCDIC Shift In [SI] (0x0F).
Indicates transition from multibyte to single byte KanjiEBCDIC.
T Any Any multibyte character.
Its encoding depends on the current character set.
For KanjiEUC, “ss3” sometimes precedes code set 3 characters.
I Any Any single byte Hankaku Katakana character.
In KanjiEUC, it must be preceded by “ss2”, forming an individual multibyte character.
? Any Represents the graphic pad character.
For example, the string “TEST”, where each letter is intended to be a fullwidth character, is written
as ＴＥＳＴ. Occasionally, when encoding is important, hexadecimal representation is used.
For example, the following mixed single byte/multibyte character data in the KanjiEBCDIC
character set
LMN<ＴＥＳＴ>QRS
is represented as:
D3 D4 D5 0E 42E3 42C5 42E2 42E3 0F D8 D9 E2
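The Shift Out/Shift In structure of that byte string can be sketched in Python. Code page cp037 stands in for the single byte EBCDIC portion (an assumption; actual KanjiEBCDIC code pages differ), and the 0x42 lead byte of the multibyte run is stripped for illustration only.

```python
# Walk the mixed KanjiEBCDIC example: single byte EBCDIC runs, with
# Shift Out (0x0E) and Shift In (0x0F) bracketing the multibyte run.
data = bytes.fromhex("D3D4D50E42E342C542E242E30FD8D9E2")
SO, SI = 0x0E, 0x0F

parts, buf = [], bytearray()
for b in data:
    if b == SO:
        parts.append(buf.decode("cp037"))               # flush single byte run
        buf = bytearray()
    elif b == SI:
        parts.append(bytes(buf[1::2]).decode("cp037"))  # drop the 0x42 lead bytes
        buf = bytearray()
    else:
        buf.append(b)
parts.append(buf.decode("cp037"))
print("".join(parts))   # LMNTESTQRS
```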
The Symbols table above also includes the following entries:

Symbol Encoding Meaning
ss2 KanjiEUC Represents the EUC code set 2 introducer (0x8E).
ss3 KanjiEUC Represents the EUC code set 3 introducer (0x8F).
? Any Represents either a single byte or multibyte pad character, depending on context.

Pad Characters
The following table lists the pad characters for the various server character sets.

Server Character Set Pad Character Name Pad Character Value
LATIN SPACE 0x20
UNICODE SPACE U+0020
GRAPHIC IDEOGRAPHIC SPACE U+3000
KANJISJIS SPACE 0x20
KANJI1 SPACE 0x20

Predicate Calculus Notation Used in This Book
Relational databases are based on the theory of relations as developed in set theory. Predicate
calculus is often the most unambiguous way to express certain relational concepts.
Occasionally this book uses the following predicate calculus notation to explain concepts.

This symbol … Represents this phrase …
iff If and only if
∀ For all
∃ There exists
APPENDIX B Restricted Words for V2R6.2
This appendix details restrictions for Release V2R6.2 on the use of certain words in SQL
queries and in other user application programs that interface with the Teradata Database. It provides:
• A current listing of Teradata reserved keywords, non-reserved keywords, words
reserved for future use, and ANSI SQL-2003 reserved and non-reserved keywords.
• Statements about the varying usage restrictions of each type of word.
Reserved Words and Keywords for V2R6.2
The following list contains all classes of restricted words for Teradata Database Release
V2R6.2, and uses these conventions:
• Abbreviations and the full words they represent appear separately, except in cases where
the abbreviation is the only common usage, such as ASCII.
• The following definitions apply to the Teradata Database Status column:
Type Explanation
Reserved Teradata Database reserved word that cannot be used as an identifier to name host
variables, correlations, local variables in stored procedures, objects, such as
databases, tables, columns, or stored procedures, or parameters, such as macro or
stored procedure parameters, because Teradata Database already uses the word and
might misinterpret it.
Future Word reserved for future Teradata Database use and cannot be used as an identifier.
Non-Reserved Teradata Database non-reserved keyword that is permitted as an identifier but
discouraged because of possible confusion that may result.
empty If the keyword does not have a Teradata Database status, the word is permitted as an
identifier but discouraged because it is an SQL-2003 reserved or non-reserved word.
• The following definitions apply to the SQL-2003 Status column:
Type Explanation
Reserved ANSI SQL-2003 reserved word.
If the Teradata Database Status is Reserved or Future, an SQL-2003 reserved word
cannot be used as an identifier. If the Teradata Database Status is Non-Reserved or
empty, the word is permitted as an identifier but discouraged because of possible
confusion that may result.
Non-Reserved ANSI SQL-2003 non-reserved word.
If the Teradata Database Status is Reserved or Future, an SQL-2003 non-reserved
word cannot be used as an identifier. If the Teradata Database Status is
Non-Reserved or empty, the word is permitted as an identifier, but discouraged
because of the possible confusion that may result.
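The status rules above amount to a simple lookup. A sketch of such a check follows; the word sets here are a tiny illustrative sample of the full lists in the table below, not the complete lists.

```python
# Illustrative subset only; the complete lists appear in the table that follows.
TERADATA_RESERVED = {"ABORT", "SELECT", "TABLE", "WHERE"}  # cannot be identifiers
SQL2003_ONLY = {"ABS", "ABSOLUTE"}                         # permitted but discouraged

def identifier_status(name: str) -> str:
    word = name.upper()
    if word in TERADATA_RESERVED:
        return "not permitted as an identifier"
    if word in SQL2003_ONLY:
        return "permitted but discouraged"
    return "permitted"

print(identifier_status("Select"))    # not permitted as an identifier
print(identifier_status("abs"))       # permitted but discouraged
print(identifier_status("order_id"))  # permitted
```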
Keyword | Teradata Database Status (Reserved / Future / Non-Reserved) | SQL-2003 Status (Reserved / Non-Reserved)
An X in the list that follows marks each status that applies to a keyword.
A X
ABORT X
ABORTSESSION X
ABS X X
ABSOLUTE X
ACCESS X
ACCESS_LOCK X
ACCOUNT X
ACOS X
ACOSH X
ACTION X
ADA X
ADD X X
ADD_MONTHS X
ADMIN X X
AFTER X X
AG X
AGGREGATE X
ALIAS X
ALL X X
ALLOCATE X
ALLOCATION X
ALTER X X
ALWAYS X X
AMP X
ANALYSIS X
AND X X
ANSIDATE X
ANY X X
ARE X
ARGLPAREN X
ARRAY X
AS X X
ASC X X
ASCII X
ASENSITIVE X
ASIN X
ASINH X
ASSERTION X
ASSIGNMENT X X
ASYMMETRIC X
AT X X
ATAN X
ATAN2 X
ATANH X
ATOMIC X X
ATTR X
ATTRIBUTE X X
ATTRIBUTES X X
ATTRS X
AUTHORIZATION X X
AVE X
AVERAGE X
AVG X X
BEFORE X X
BEGIN X X
BERNOULLI X
BETWEEN X X
BIGINT X X
BINARY X X
BLOB X X
BOOLEAN X
BOTH X X
BREADTH X
BT X
BUT X
BY X X
BYTE X
BYTEINT X
BYTES X
C X X
CALL X X
CALLED X X
CARDINALITY X
CASCADE X
CASCADED X
CASE X X
CASE_N X
CASESPECIFIC X
CAST X X
CATALOG X
CATALOG_NAME X
CD X
CEIL X
CEILING X
CHAIN X
CHANGERATE X
CHAR X X
CHAR_LENGTH X X
CHAR2HEXINT X
CHARACTER X X
CHARACTER_LENGTH X X
CHARACTER_SET_CATALOG X
CHARACTER_SET_NAME X
CHARACTER_SET_SCHEMA X
CHARACTERISTICS X X
CHARACTERS X X
CHARS X
CHARSET_COLL X
CHECK X X
CHECKED
CHECKPOINT X
CHECKSUM X
CLASS X
CLASS_ORIGIN X
CLOB X X
CLOSE X X
CLUSTER X
CM X
COALESCE X X
COBOL X
COLLATE X
COLLATION X X
COLLATION_CATALOG X
COLLATION_NAME X
COLLATION_SCHEMA X
COLLECT X X
COLUMN X X
COLUMN_NAME X
COLUMNSPERINDEX X
COLUMNSPERJOININDEX X
COMMAND_FUNCTION X
COMMAND_FUNCTION_CODE X
COMMENT X
COMMIT X X
COMMITTED X
COMPARISON X
COMPILE X
COMPRESS X
CONDITION X
CONDITION_NUMBER X
CONNECT X
CONNECTION X
CONNECTION_NAME X
CONSTRAINT X X
CONSTRAINT_CATALOG X
CONSTRAINT_NAME X
CONSTRAINT_SCHEMA X
CONSTRAINTS X
CONSTRUCTOR X X
CONSUME X
CONTAINS X
CONTINUE X X
CONVERT X
CONVERT_TABLE_HEADER X
CORR X X
CORRESPONDING X
COS X
COSH X
COSTS X
COUNT X X
COVAR_POP X X
COVAR_SAMP X X
CPP X
CPUTIME X
CREATE X X
CROSS X X
CS X
CSUM X
CT X
CUBE X X
CUME_DIST X
CURRENT X X
CURRENT_DATE X X
CURRENT_DEFAULT_TRANSFORM_GROUP X
CURRENT_PATH X
CURRENT_ROLE X
CURRENT_TIME X X
CURRENT_TIMESTAMP X X
CURRENT_TRANSFORM_GROUP_FOR_TYPE X
CURRENT_USER X
CURSOR X X
CURSOR_NAME X
CV X
CYCLE X X
DATA X X
DATABASE X
DATABLOCKSIZE X
DATE X X
DATEFORM X
DATETIME_INTERVAL_CODE X
DATETIME_INTERVAL_PRECISION X
DAY X X
DBC X
DEALLOCATE X
DEBUG X
DEC X X
DECIMAL X X
DECLARE X X
DEFAULT X X
DEFAULTS X
DEFERRABLE X
DEFERRED X X
DEFINED X
DEFINER X X
DEGREE X
DEGREES X
DEL X
DELETE X X
DEMOGRAPHICS X
DENIALS X
DENSE_RANK X
DEPTH X
DEREF X
DERIVED X
DESC X X
DESCRIBE X
DESCRIPTOR X X
DETERMINISTIC X X
DIAGNOSTIC X
DIAGNOSTICS X
DIGITS X
DISABLED X
DISCONNECT X
DISPATCH X
DISTINCT X X
DO X
DOMAIN X X
DOUBLE X X
DR X
DROP X X
DUAL X
DUMP X
DYNAMIC X
DYNAMIC_FUNCTION X
DYNAMIC_FUNCTION_CODE X
EACH X X
EBCDIC X
ECHO X
ELEMENT X
ELSE X X
ELSEIF X
ENABLED X
ENCRYPT X
END X X
END-EXEC X
EQ X
EQUALS X X
ERROR X
ERRORFILES X
ERRORTABLES X
ESCAPE X X
ET X
EVERY X
EXCEPT X X
EXCEPTION X
EXCL X
EXCLUDE X
EXCLUDING X
EXCLUSIVE X
EXEC X X
EXECUTE X X
EXISTING
EXISTS X X
EXIT X
EXP X X
EXPIRE X
EXPLAIN X
EXTERNAL X X
EXTRACT X X
FALLBACK X
FALSE X
FASTEXPORT X
FETCH X X
FILTER X
FINAL X X
FIRST X X
FLOAT X X
FLOOR X
FOLLOWING X X
FOR X X
FOREIGN X X
FORMAT X
FORTRAN X
FOUND X X
FREE X
FREESPACE X
FROM X X
FULL X X
FUNCTION X X
FUSION X
G X X
GE X
GENERAL X
GENERATED X X
GET X
GIVE X
GLOBAL X X
GO X X
GOTO X X
GRANT X X
GRANTED X
GRAPHIC X
GROUP X X
GROUPING X X
GT X
HANDLER X
HASH X
HASHAMP X
HASHBAKAMP X
HASHBUCKET X
HASHROW X
HAVING X X
HELP X
HIERARCHY X
HIGH X
HOLD X
HOST X
HOUR X X
IDENTITY X X
IF X
IFP X
IMMEDIATE X X
IMPLEMENTATION X
IN X X
INCLUDING X
INCONSISTENT X
INCREMENT X X
INDEX X
INDEXESPERTABLE X
INDEXMAINTMODE X
INDICATOR X X
INITIALLY X
INITIATE X
INNER X X
INOUT X X
INPUT X X
INS X
INSENSITIVE X
INSERT X X
INSTANCE X X
INSTANTIABLE X X
INSTEAD X
INT X X
INTEGER X X
INTEGERDATE X
INTERSECT X X
INTERSECTION X
INTERVAL X X
INTO X X
INVOKER X X
IOCOUNT X
IS X X
ISOLATION X X
ITERATE X
JAVA X
JIS_COLL X
JOIN X X
JOURNAL X
K X X
KANJI1 X
KANJISJIS X
KBYTE X
KBYTES X
KEEP X
KEY X X
KEY_MEMBER X
KEY_TYPE X
KILOBYTES X
KURTOSIS X
LANGUAGE X X
LARGE X X
LAST X X
LATERAL X
LATIN X
LE X
LEADING X X
LEAVE X
LEFT X X
LENGTH X
LEVEL X X
LIKE X X
LIMIT X
LN X X
LOADING X
LOCAL X X
LOCALTIME X
LOCALTIMESTAMP X
LOCATOR X X
LOCK X
LOCKEDUSEREXPIRE X
LOCKING X
LOG X
LOGGING X
LOGON X
LONG X
LOOP X
LOW X
LOWER X X
LT X
M X X
MACRO X
MAP X X
MATCH X
MATCHED X X
MAVG X
MAX X X
MAXCHAR X
MAXIMUM X
MAXLOGONATTEMPTS X
MAXVALUE X X
MCHARACTERS X
MDIFF X
MEDIUM X
MEMBER X
MERGE X X
MESSAGE_LENGTH X
MESSAGE_OCTET_LENGTH X
MESSAGE_TEXT X
METHOD X X
MIN X X
MINCHAR X
MINDEX X
MINIMUM X
MINUS X
MINUTE X X
MINVALUE X X
MLINREG X
MLOAD X
MOD X X
MODE X
MODIFIED X
MODIFIES X
MODIFY X
MODULE X
MONITOR X
MONRESOURCE X
MONSESSION X
MONTH X X
MORE X
MSUBSTR X
MSUM X
MULTINATIONAL X
MULTISET X X
MUMPS X
NAME X X
NAMED X
NAMES X
NATIONAL X
NATURAL X X
NCHAR X
NCLOB X
NE X
NESTING X
NEW X X
NEW_TABLE X
NEXT X X
NO X X
NONE X X
NORMALIZE X
NORMALIZED X
NOT X X
NOWAIT X
NULL X X
NULLABLE X
NULLIF X X
NULLIFZERO X
NULLS X
NUMBER X
NUMERIC X X
OA X
OBJECT X X
OBJECTS X
OCTET_LENGTH X X
OCTETS X
OF X X
OFF X
OLD X X
OLD_TABLE X
ON X X
ONLY X X
OPEN X X
OPTION X X
OPTIONS X
OR X X
ORDER X X
ORDERED_ANALYTIC X
ORDERING X X
ORDINALITY X
OTHERS X
OUT X X
OUTER X X
OUTPUT X
OVER X X
OVERLAPS X X
OVERLAY X
OVERRIDE X
OVERRIDING X
PAD X
PARAMETER X X
PARAMETER_MODE X
PARAMETER_NAME X
PARAMETER_ORDINAL_POSITION X
PARAMETER_SPECIFIC_CATALOG X
PARAMETER_SPECIFIC_NAME X
PARAMETER_SPECIFIC_SCHEMA X
PARTIAL X
PARTITION X X
PARTITIONED X
PASCAL X
PASSWORD X
PATH X
PERCENT X
PERCENT_RANK X X
PERCENTILE_CONT X
PERCENTILE_DISC X
PERM X
PERMANENT X
PLACING X
PLI X
POSITION X X
POWER X
PRECEDING X X
PRECISION X X
PREPARE X X
PRESERVE X X
PRIMARY X X
PRINT X
PRIOR X
PRIVATE X
PRIVILEGES X X
PROCEDURE X X
PROFILE X
PROTECTED X
PROTECTION X
PUBLIC X X
QUALIFIED X
QUALIFY X
QUANTILE X
QUEUE X
QUERY X
RADIANS X
RANDOM X
RANDOMIZED X
RANGE X X
RANGE_N X
RANK X X
READ X X
READS X
REAL X X
RECALC X
RECURSIVE X X
REF X
REFERENCES X X
REFERENCING X X
REGR_AVGX X X
REGR_AVGY X X
REGR_COUNT X X
REGR_INTERCEPT X X
REGR_R2 X X
REGR_SLOPE X X
REGR_SXX X X
REGR_SXY X X
REGR_SYY X X
RELATIVE X X
RELEASE X X
RENAME X
REPEAT X
REPEATABLE X
REPLACE X
REPLACEMENT X
REPLCONTROL X
REPLICATION X
REQUEST X
RESTART X X
RESTORE X
RESTRICT X
RESULT X X
RESUME X
RET X
RETAIN X
RETRIEVE X
RETURN X
RETURNED_CARDINALITY X
RETURNED_LENGTH X
RETURNED_OCTET_LENGTH X
RETURNED_SQLSTATE X
RETURNS X X
REUSE X
REVALIDATE X
REVOKE X X
RIGHT X X
RIGHTS X
ROLE X X
ROLLBACK X X
ROLLFORWARD X
ROLLUP X X
ROUTINE X
ROUTINE_CATALOG X
ROUTINE_NAME X
ROUTINE_SCHEMA X
ROW X X
ROW_COUNT X
ROW_NUMBER X X
ROWID X
ROWS X X
RU X
SAMPLE X
SAMPLEID X
SAMPLES X
SAVEPOINT X
SCALE X
SCHEMA X
SCHEMA_NAME X
SCOPE X
SCOPE_CATALOG X
SCOPE_NAME X
SCOPE_SCHEMA X
SCROLL X X
SEARCH X
SEARCHSPACE X
SECOND X X
SECTION X
SECURITY X X
SEED X
SEL X
SELECT X X
SELF X X
SENSITIVE X
SEQUENCE X
SERIALIZABLE X X
SERVER_NAME X
SESSION X X
SESSION_USER X
SET X X
SETRESRATE X
SETS X X
SETSESSRATE X
SHARE X
SHOW X
SIMILAR X
SIMPLE X
SIN X
SINH X
SIZE X
SKEW X
SMALLINT X X
SOME X X
SOUNDEX X
SOURCE X X
SPACE X
SPECCHAR X
SPECIFIC X X
SPECIFIC_NAME X
SPECIFICTYPE X
SPL X
SPOOL X
SQL X X
SQLEXCEPTION X X
SQLSTATE X X
SQLTEXT X
SQLWARNING X X
SQRT X X
SR X
SS X
START X X
STARTUP X
STAT X
STATE X
STATEMENT X X
STATIC X
STATISTICS X
STATS X
STDDEV_POP X X
STDDEV_SAMP X X
STEPINFO X
STRING_CS X
STRUCTURE X
STYLE X X
SUBCLASS_ORIGIN X
SUBLIST
SUBMULTISET X
SUBSCRIBER X
SUBSTR X
SUBSTRING X X
SUM X X
SUMMARY X
SUMMARYONLY X
SUSPEND X
SYMMETRIC X
SYSTEM X X
SYSTEM_USER X
SYSTEMTEST X
TABLE X X
TABLE_NAME X
TABLESAMPLE X
TAN X
TANH X
TARGET X
TBL_CS X
TD_GENERAL X
TD_INTERNAL X
TEMPORARY X X
TERMINATE X
TEXT X
THAN
THEN X X
THRESHOLD X
TIES X X
TIME X X
TIMESTAMP X X
TIMEZONE_HOUR X X
TIMEZONE_MINUTE X X
TITLE X
TO X X
TOP X
TPA X
TOP_LEVEL_COUNT X
TRACE X
TRAILING X X
TRANSACTION X X
TRANSACTION_ACTIVE X
TRANSACTIONS_COMMITTED X
TRANSACTIONS_ROLLED_BACK X
TRANSFORM X X
TRANSFORMS X
TRANSLATE X X
TRANSLATE_CHK X
TRANSLATION X
TREAT X
TRIGGER X X
TRIGGER_CATALOG X
TRIGGER_NAME X
TRIGGER_SCHEMA X
TRIM X X
TRUE X
TYPE X X
UC X
UDTCASTAS X
UDTCASTLPAREN X
UDTMETHOD X
UDTTYPE X
UDTUSAGE X
UESCAPE X
UNBOUNDED X X
UNCOMMITTED X X
UNDEFINED X
UNDER X
UNDO X
UNICODE X
UNION X X
UNIQUE X X
UNKNOWN X X
UNNAMED X
UNNEST X
UNTIL X
UPD X
UPDATE X X
UPPER X X
UPPERCASE X
USAGE X
USE X
USER X X
USER_DEFINED_TYPE_CATALOG X
USER_DEFINED_TYPE_CODE X
USER_DEFINED_TYPE_NAME X
USER_DEFINED_TYPE_SCHEMA X
USING X X
VALUE X X
VALUES X X
VAR_POP X X
VAR_SAMP X X
VARBYTE X
VARCHAR X X
VARGRAPHIC X
VARYING X X
VIEW X X
VOLATILE X
WAIT X
WARNING X
WHEN X X
WHENEVER X
WHERE X X
WHILE X
WIDTH_BUCKET X X
WINDOW X
WITH X X
WITHIN X
WITHOUT X
WORK X X
WRITE X X
YEAR X X
ZEROIFNULL X
ZONE X X
APPENDIX C Teradata Database Limits
This appendix provides the following Teradata Database limits:
• System limits
• Database limits
• Session limits
System Limits
The system specifications in the following table apply to an entire Teradata Database
configuration.
Parameter Value
Maximum number of databases and users 4.2 x 10^9
Total data capacity
• Expressed as a base 10 value: 1.39 TB/AMP (1.39 x 10^12 bytes/AMP)
• Expressed as a base 2 value: 1.26 TB/AMP (1.26 x 10^12 bytes/AMP)
Maximum number of active concurrent
transactions
2048
Maximum data format descriptor size 30 characters
Maximum error message text size in
failure parcel
255 bytes
Maximum number of sectors per datablock 255 (a)
Maximum data block size 130560 bytes
Datablock header size Depends on several factors:
FOR a datablock that is … The datablock header
size is this many bytes …
new or has been updated 72
on a 64-bit system and
has not been updated
40
on a 32-bit system and
has not been updated
36
Maximum number of sessions per PE 120
Maximum number of gateways per node 1
Maximum number of sessions per Gateway Tunable (b); 1200 maximum certified
Maximum number of parcels in one message 256
Maximum message size Approximately 65000 bytes
Note: This limit applies to messages to/from host
systems and to some internal Teradata Database
messages.
Maximum number of PEs per system 1024
Maximum number of AMPs per system 16383 (c)
More generally, the maximum number of AMPs per system depends on the number of PEs in the configuration. The following equation provides the most general solution:
maximum number of AMPs = 16384 - number_of_PEs
Maximum number of AMP and PE
vprocs, in any combination, per system
16384
Number of hash buckets per system 65536 (d)
Bucket numbers range from 0 to 65535.
Number of hash values per system 4.2 x 10^9
Maximum number of external routine protected mode server tasks per PE or AMP 20 (e)
Maximum number of external routine secure mode server tasks per PE or AMP 20 (e)
Amount of private disk swap space required per protected or secure mode server per PE or AMP vproc 256 KB
a. The increase in datablock header size from 36 or 40 bytes to 64 bytes increases the size of roughly 6
percent of the datablocks by one sector (see “Datablock header size” on page 204).
b. See Utilities for details.
c. This value is derived by subtracting 1 from the maximum total of PE and AMP vprocs per system
(because each system must have at least one PE), which is 16384.
This is obviously not a practical configuration.
d. This value is fixed. The system assigns its 65536 hash buckets to AMPs as evenly as possible. For
example, a system with 1000 AMPs has 65 hash buckets on some AMPs and 66 hash buckets on others.
In this particular case, the AMPs having 66 hash buckets also perform 1.5 percent more work than
those with 65 hash buckets. The work per AMP imbalance increases as a function of the number of
AMPs in the system for those cases where 65536 is not evenly divisible by the total number of AMPs.
e. The valid range is 0 to 20, inclusive. The limit is 20 servers for each server type, not 20 combined for
both. See Utilities for details.
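Footnote d's even-as-possible assignment can be checked numerically; for 1000 AMPs, the 65536 buckets split into groups of 65 and 66, and the fuller AMPs carry about 1.5 percent more work. A sketch (the helper name bucket_distribution is our own):

```python
# Distribute the fixed 65536 hash buckets as evenly as possible over the AMPs.
def bucket_distribution(amps: int, buckets: int = 65536) -> dict:
    base, extra = divmod(buckets, amps)
    # 'extra' AMPs carry one additional bucket each
    return {base: amps - extra, base + 1: extra} if extra else {base: amps}

print(bucket_distribution(1000))                # {65: 464, 66: 536}
print(round((66 / 65 - 1) * 100, 1), "percent more work on the fuller AMPs")
```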
Database Limits
The database specifications in the following table apply to a single database. The values
presented are maxima for their respective parameters individually and not in combination.
Parameter Maximum Value
Number of journal tables per database 1
Number of data tables per database 4.2 x 10^9
Database, user, table, view, macro, index, constraint, user-defined function, stored procedure, user-defined method, user-defined type, replication group, or column name size 30 bytes
Tables and Views
Number of columns per base table or view 2048
Number of UDT columns per base table or view Approximately 1600 (g, h, i)
Number of LOB type columns per base table 32 (j)
Number of columns created over the life of a base table 2560
Number of rows per base table Limited by disk capacity
Number of bytes per table header (a) • Approximately 64000 bytes, or • Approximately 128000 bytes
Row size Approximately 65536 bytes
Logical row size (b) 67106816000 bytes (k)
Number of secondary (c), hash, and join indexes, in any combination, per table 32
Non-LOB column size • 65522 bytes (NPPI table) (l) • 65520 bytes (PPI table) (m)
Number of columns per primary or secondary index 64
SQL title size 60 characters
Size of the queue table FIFO runtime cache per PE • 100 queue table entries
• 1 MB
Size of the queue table FIFO runtime cache per table 2211 row entries
Number of primary indexes per table 1
Number of partitions for a partitioned primary index 65535
Number of table-level constraints per table 100
Number of referential constraints per table 64
Number of columns in foreign and parent keys 64
Number of compressed values per column 255 plus nulls
Predefined and User-Defined Types
BLOB object size 2097088000 bytes
CLOB object size • 2097088000 single-byte characters
• 1048544000 double-byte characters
Structured UDT size (d) • 65521 bytes (NPPI table) • 65519 bytes (PPI table)
Number of characters in a string constant 32000
Number of attributes that can be specified for a structured UDT per CREATE TYPE or ALTER TYPE statement 300 - 512 (n)
Number of attributes that can be defined for a structured UDT Approximately 4000 (o)
Number of nested attributes in a structured UDT 512
Number of methods associated with a UDT Approximately 500 (p)
Macros, Stored Procedures, and External Routines
Expanded text size for macros and views 2 MB
Length of external name string for an external routine (e) 1000 characters
Package path length for an external routine 256 characters
SQL request size in a stored procedure 64 KB
Number of parameters specified in a UDF 128
Number of parameters specified in a UDM 128
Number of parameters specified in a macro 2048
Number of parameters in a stored procedure 256
Number of nested CALL statements 15
Number of open cursors 16 for embedded SQL,
15 for a stored procedure
Queries, Requests, and Responses
SQL request size 1 MB
(Includes SQL statement text, USING
data, and parcel overhead)
SQL response size 1 MB
(Includes SQL result and parcel
overhead)
Number of columns per DML statement ORDER BY
clause
16
Number of tables that can be joined per query block 64
Number of subquery nesting levels per query 64
Number of fields in a USING row descriptor 2550
SQL activity count size 2^32 - 1 rows
Number of SELECT AND CONSUME statements in a
delayed state per PE
24
Number of partitions for a hash join operation 50
Query and Workload Analysis
Size of the Index Wizard workload cache 256 MB (q)
Number of indexes on which statistics can be collected
and maintained at one time
32
This limit is independent of the
number of pseudoindexes on which
statistics can be collected and
maintained.
Number of pseudoindexes (f) on which multicolumn statistics can be collected and maintained at one time 32
This limit is independent of the
number of indexes on which statistics
can be collected and maintained.
Number of columns and indexes on which statistics can be
recollected for a table
512
Hash and Join Indexes
Number of columns referenced per single table in a hash
or join index
64
Number of columns referenced in the fixed part of a
compressed hash or join index
64
Number of columns referenced in the repeating part of a
compressed hash or join index
64
Number of columns in an uncompressed join index 2048
Number of columns in a compressed join index 128
Replication
Row size permitted for a replication operation Approximately 25000 bytes
For details, see Teradata Replication
Solutions Overview and “CREATE
REPLICATION GROUP” in SQL
Reference: Data Definition Statements.
Number of replication groups per system 100
Number of tables that can be copied simultaneously with a
replication operation
15
Number of columns that can be defined for a replicated
table
1000
Character column data size permitted for a replication
operation
• CHARACTER(10000)
• VARCHAR(10000)
For UTF16, this translates to a
maximum of 5000 characters.
a. A table header that requires more than ~64 000 bytes uses two 64-kilobyte rows. A table
header that requires 64 000 or fewer bytes does not use the second row that would be
required to contain a table header of up to ~128 000 bytes.
b. A logical row is defined as a base table row plus the sum of the bytes stored in a LOB subtable for that
row.
c. A NUSI defined with an ORDER BY clause counts as two indexes in this calculation.
d. Based on a table having a 1 byte (BYTEINT) primary index. Because a UDT column cannot be part of
any index definition, there must be at least one non-UDT column in the table for its primary index.
Row header overhead consumes 14 bytes in an NPPI table and 16 bytes in a PPI table, so the maximum
structured UDT size is derived by subtracting 15 bytes (for an NPPI table) or 17 bytes (for a PPI table)
from the row maximum of 65 536 bytes.
e. An external routine is the portion of a UDF, external stored procedure, or method that is written in C
or C++. This is the code that defines the semantics for the UDF, procedure, or method.
f. A pseudoindex is a file structure that allows you to collect statistics on a composite, or multicolumn,
column set in the same way you collect statistics on a composite index.
g. The absolute limit is 2048, and the realizable number varies as a function of the number of other
features declared for a table that occupy table header space.
h. The figure of 1600 UDT columns assumes a FAT table header.
i. This limit is true whether the UDT is a distinct or a structured type.
j. This includes both predefined type LOB columns and UDT LOB columns.
A UDT LOB column counts as one LOB column even if the UDT is a structured type that has multiple
LOB attributes.
k. This value is derived by multiplying the maximum number of LOB columns per base table (32) times
the maximum size of a LOB field (2 097 088 000 8-bit bytes). Remember that each LOB column
consumes 39 bytes of Object ID from the base table, so 1 248 of those 67 106 816 000 bytes cannot be
used for data.
l. Based on subtracting the minimum row overhead value for an NPPI table row (14 bytes) from the
system-defined maximum row length (65 536 bytes).
m. Based on subtracting the minimum row overhead value for a PPI table row (16 bytes) from the
system-defined maximum row length (65 536 bytes).
n. The maximum is platform-dependent.
o. While you can specify no more than 300 to 512 attributes for a structured UDT per CREATE TYPE
or ALTER TYPE statement, you can submit as many ALTER TYPE statements with the ADD
ATTRIBUTE option as necessary to add further attributes to the type, up to the upper limit of
approximately 4 000.
p. There is no absolute limit on the number of methods that can be associated with a given UDT.
Methods can have a variable number of parameters, however, and the number of parameters
directly affects the practical limit, which is imposed by parser memory restrictions.
There is a workaround for this issue. See the documentation for ALTER TYPE in SQL Reference: Data
Manipulation Statements for details.
q. The default is 48 megabytes and the minimum is 32 megabytes.
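The derived figures in footnotes d, k, l, and m above can be verified with simple arithmetic. The following throwaway queries only illustrate that arithmetic; they are not part of the published limits, and the DECIMAL cast is used so the multiplication does not overflow INTEGER.

```sql
-- Footnote d: maximum structured UDT size (row maximum minus overhead plus the 1-byte PI)
SELECT 65536 - 15 AS udt_max_nppi,   -- 65521 bytes
       65536 - 17 AS udt_max_ppi;    -- 65519 bytes

-- Footnotes l and m: maximum row data (row maximum minus minimum row overhead)
SELECT 65536 - 14 AS row_max_nppi,   -- 65522 bytes
       65536 - 16 AS row_max_ppi;    -- 65520 bytes

-- Footnote k: total LOB bytes per base table and Object ID overhead
SELECT CAST(32 AS DECIMAL(18,0)) * 2097088000 AS lob_bytes,  -- 67106816000
       32 * 39 AS oid_bytes;                                 -- 1248
```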
Session Limits
The session specifications in the following table apply to a single session.
Parameter Value
Active request result spool files 16
Parallel steps
Parallel steps can be used to process a request submitted within a
transaction (which may be either explicit or implicit).
The maximum number of steps generated per request is determined as
follows:
• Per request, if no channels: 20 steps
Note: Channels are not required for a primary index request with an
equality constraint.
• A request that involves redistribution of rows to other AMPs, such as a
join or an INSERT-SELECT: requires 4 channels
• A request that does not involve row redistribution: requires 2 channels
Number of materialized global temporary tables per session 2000
Number of volatile tables per session 1000
APPENDIX D ANSI SQL Compliance
This appendix describes the ANSI SQL standard, Teradata compliance with the ANSI SQL
standard, and terminology differences between ANSI SQL and Teradata SQL.
Topics include:
• ANSI SQL Standard
• Terminology Differences Between ANSI SQL and Teradata
• SQL Flagger
• Differences Between Teradata and ANSI SQL
ANSI SQL Standard
Introduction
The American National Standards Institute (ANSI) SQL standard, formally titled
International Standard ISO/IEC 9075:2003, Database Language SQL, defines a version of
Structured Query Language that all vendors of relational database management systems
support to a greater or lesser degree.
Motivation Behind an SQL Standard
Teradata, like most vendors of relational database management systems, had its own dialect of
the SQL language for many years prior to the development of the SQL standard.
You might ask several questions like the following:
• Why should there be an industry-wide SQL standard?
• Why should any vendor with an entrenched user base consider modifying its SQL dialect
to conform with the ANSI SQL standard?
Why an SQL Standard?
National and international standards abound in the computer industry. As anyone who has
worked in the industry for any length of time knows, standardization offers both advantages
and disadvantages to users and vendors alike.
The principal advantages of having an SQL standard are the following:
• Open systems
The overwhelming trend in computer technology has been toward open systems with
publicly defined standards to facilitate third party and end user access and development
using the standardized products.
The ANSI SQL standard provides an open definition for the SQL language.
• Less training for transfer and new employees
A programmer trained in ANSI-standard SQL can move from one SQL programming
environment to another with no need to learn a new SQL dialect. When a core dialect of
the language is the lingua franca for SQL programmers, the need for retraining is
significantly reduced.
• Application portability
When there is a standardized public definition for a programming language, users can rest
assured that any applications they develop to the specifications of that standard are
portable to any environment that supports the same standard.
This is an extremely important budgetary consideration for any large scale end user
application development project.
• Definition and manipulation of heterogeneous databases is facilitated
Many user data centers support multiple merchant databases across different platforms. A
standard language for communicating with relational databases, irrespective of the vendor
offering the database management software, is an important factor in reducing the
overhead of maintaining such an environment.
• Intersystem communication is facilitated
It is common for an enterprise to exchange applications and data among different
merchant databases.
Common examples of this appear below.
• Two-phase commit transactions where rows are written to multiple databases
simultaneously.
• Bulk data import and export between different vendor databases.
These operations are made much cleaner and simpler when there is no need to translate
data types, database definitions, and other component definitions between source and
target databases.
Teradata Compliance With the ANSI Standard
Conformance to a standard presents problems for any vendor that produces an evolved
product and supports a large user base.
Teradata, in its historical development, has produced any number of innovative SQL language
elements that do not conform to the ANSI SQL standard, a standard that did not exist when
those features were conceived. The existing Teradata user base had invested substantial time,
effort, and capital into developing applications using that Teradata SQL dialect.
At the same time, new customers demand that vendors conform to open standards for
everything from chip sets to operating systems to application programming interfaces.
Meeting these divergent requirements presents a challenge that Teradata SQL solves by
following the multipronged policy outlined in the following table.
WHEN … THEN …
WHEN a new feature or feature enhancement is added to Teradata SQL, THEN that
feature conforms to the ANSI SQL standard.
WHEN the difference between the Teradata SQL dialect and the ANSI SQL standard for a
language feature is slight, THEN the ANSI SQL syntax is added to the Teradata Database
feature as an option.
WHEN the difference between the Teradata SQL dialect and the ANSI SQL standard for a
language feature is significant, THEN both syntaxes are offered, and the user has the
choice of operating in either Teradata or ANSI mode or of turning off the SQL Flagger.
The mode can be defined in the following ways:
• Persistently
Use the SessionMode field of the DBS Control Record to define
session mode characteristics.
• For a session
Use the BTEQ .SET SESSION TRANSACTION command to control
transaction semantics.
Use the BTEQ .SET SESSION SQLFLAG command to control use of
the SQL Flagger.
Use the SQL statement SET SESSION DATEFORM to control how
data typed as DATE is handled.
WHEN a new feature or feature enhancement is added to Teradata SQL and that feature is
not defined by the ANSI SQL standard, THEN that feature is designed using the following
criteria:
• IF other vendors offer a similar feature or feature extension, THEN Teradata designs
the new feature to broadly comply with other solutions, but consolidates the best ideas
from all and, where necessary, creates its own, cleaner solution.
• IF other vendors do not offer a similar feature or feature extension, THEN Teradata
designs the new feature as cleanly and generically as possible, with an eye toward
creating a language element that will not be subject to major revisions to comply with
future updates to the ANSI SQL standard, and in a way that offers the most power to
users without violating any of the basic tenets of the ANSI SQL standard.
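As an illustrative sketch only, the session-level controls listed above could be issued as follows, assuming a BTEQ session and a placeholder logon string. The first two lines are BTEQ commands (the SQL Flagger must be set before logon); the last line is an SQL statement issued after logon.

```sql
.SET SESSION TRANSACTION ANSI
.SET SESSION SQLFLAG ENTRY
.LOGON tdpid/username,password

SET SESSION DATEFORM = ANSIDATE;
```

The tdpid, user name, and password are placeholders; ANSIDATE is one of the two DATEFORM settings (the other being INTEGERDATE).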
Terminology Differences Between ANSI SQL
and Teradata
The ANSI SQL standard and Teradata occasionally use different terminology. The following
table lists the more important variances.
Note:
1) In the ANSI SQL standard, the term table has the following definitions:
• A base table
• A viewed table (view)
• A derived table
ANSI Teradata
Base table Table
1
Binding style Not defined, but implicitly includes the following:
• Interactive SQL
• Embedded SQL
• ODBC
• CLIv2
Authorization ID User ID
Catalog Dictionary
CLI ODBC
2
Direct SQL Interactive SQL
Domain Not defined
External routine function User-defined function (UDF)
Module Not defined
Persistent stored module Stored procedure
Schema User
Database
SQL database Relational database
Viewed table View
Not defined Explicit transaction
3
Not defined CLIv2
4
Not defined Macro
5
2) ANSI CLI is not exactly equivalent to ODBC, but the ANSI standard is heavily based on
the ODBC definition.
3) ANSI transactions are always implicit, beginning with an executable SQL statement and
ending with either a COMMIT or a ROLLBACK statement.
4) Teradata CLIv2 is an implementation-defined binding style.
5) The function of Teradata Database macros is similar to that of ANSI persistent stored
modules without having the loop and branch capabilities stored modules offer.
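For comparison, the following sketch shows a minimal Teradata macro; all object names are hypothetical. Like a persistent stored module it packages parameterized SQL, but, as noted above, it has no loop or branch constructs.

```sql
-- A minimal macro: parameterized SQL without control-flow constructs.
CREATE MACRO dept_list (dnum INTEGER) AS (
  SELECT emp_name
  FROM employee           -- assumed table
  WHERE dept_no = :dnum;  -- macro parameter reference
);

EXECUTE dept_list (100);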
SQL Flagger
Function
The SQL Flagger, when enabled, reports the use of non-standard SQL. The SQL Flagger always
permits statements flagged as non-entry-level or noncompliant ANSI SQL to execute. Its task
is not to enforce the standard, but rather to return a warning message to the requestor noting
the noncompliance.
The analysis includes syntax checking as well as some dictionary lookup, particularly the
implicit assignment and comparison of different data types (where ANSI requires use of the
CAST function to convert the types explicitly) as well as some semantic checks.
The SQL Flagger does not check or detect every condition for noncompliance; thus, the fact
that a statement is not flagged does not necessarily mean that it is compliant.
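For example, the implicit-conversion check mentioned above might flag a comparison such as the following sketch; the table and column names are assumptions, with order_code taken to be a CHARACTER column.

```sql
-- Flagged: implicit comparison of a CHARACTER column to an INTEGER literal
SELECT * FROM orders WHERE order_code = 100;

-- Not flagged for this condition: explicit conversion with CAST
SELECT * FROM orders WHERE CAST(order_code AS INTEGER) = 100;
```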
Enabling and Disabling the SQL Flagger
Flagging is enabled by a client application before a session is logged on and generally is used
only to assist in checking for ANSI compliance in code that must be portable across multiple
vendor environments.
The SQL Flagger is disabled by default. You can enable or disable it using any of the following
procedures, depending on your application.
FOR this software … USE these commands or options … TO turn the SQL Flagger …
BTEQ .[SET] SESSION SQLFLAG ENTRY to entry-level ANSI
.[SET] SESSION SQLFLAG NONE off
See Basic Teradata Query Reference for more detail on using BTEQ
commands.
Preprocessor2 SQLFLAGGER(ENTRY) to entry-level ANSI
SQLFLAGGER(NONE) off
See Teradata Preprocessor2 for Embedded SQL Programmer Guide for details
on setting Preprocessor options.
CLI set lang_conformance = ‘2’/
set lang_conformance to ‘2’
to entry-level ANSI
set lang_conformance = ‘N’ off
See Teradata Call-Level Interface Version 2 Reference for Channel-Attached
Systems and Teradata Call-Level Interface Version 2 Reference for Network-Attached
Systems for details on setting the conformance field.
Differences Between Teradata and ANSI SQL
For a complete list of SQL features in this release, see Appendix E. The list identifies which
features are ANSI SQL compliant and which features are Teradata extensions.
The list of features includes SQL statements and options, functions and operators, and data
types and literals.
APPENDIX E SQL Feature Summary
This appendix details the differences in SQL between this release and previous releases.
• “Statements and Modifiers” on page 219
• “Data Types and Literals” on page 277
• “Functions, Operators, and Expressions” on page 280
The intent of this appendix is to provide a way to readily identify new SQL in this release and
previous releases of Teradata Database. It is not meant as a Teradata SQL reference.
Notation Conventions
The following table describes the conventions used in this appendix.
Statements and Modifiers
The following table lists SQL statements and modifiers for this version and previous versions
of Teradata Database.
The following type codes appear in the ANSI Compliance column.
This notation … Means …
UPPERCASE a keyword
italics a variable, such as a column or table name
[ n ] that the use of n is optional
| n | that option n is described separately in this appendix
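To illustrate the notation, an entry such as ALTER TABLE with the option “ADD column_name |Data Type|” corresponds to a concrete statement like this sketch; the table and column names are hypothetical.

```sql
ALTER TABLE orders
  ADD order_note VARCHAR(100);  -- ADD column_name |Data Type|
```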
Code Definition
A ANSI SQL-2003 compliant
T Teradata extension
Statement
ANSI
Compliance V2R6.2 V2R6.1 V2R6.0
ABORT T X X X
Options
FROM option T X X X
WHERE condition T X X X
ALTER FUNCTION
ALTER SPECIFIC FUNCTION
T X X X
Options
EXECUTE PROTECTED/
EXECUTE NOT PROTECTED
T X X X
COMPILE/
COMPILE ONLY
T X X X
ALTER METHOD
ALTER CONSTRUCTOR METHOD
ALTER INSTANCE METHOD
ALTER SPECIFIC METHOD
T X X
Options
EXECUTE PROTECTED/
EXECUTE NOT PROTECTED/
COMPILE/
COMPILE ONLY
T X X
ALTER PROCEDURE (external form) T X X X
Options
LANGUAGE C/
LANGUAGE CPP
T X X X
COMPILE/
COMPILE ONLY/
EXECUTE PROTECTED/
EXECUTE NOT PROTECTED
T X X X
ALTER PROCEDURE (internal form) T X X X
Options
COMPILE T X X X
WITH PRINT/
WITH NO PRINT
T X X X
WITH SPL/
WITH NO SPL
T X X X
WITH WARNING/
WITH NO WARNING
T X X X
ALTER REPLICATION GROUP T X X X
Options
ADD table_name/
ADD database_name.table_name
T X X X
DROP table_name/
DROP database_name.table_name
T X X X
ALTER TABLE A, T X X X
Options
ADD column_name
|Data Type|
|Data Type Attributes|
A X X X
ADD column_name
|Column Storage Attributes|
T X X X
ADD column_name NO COMPRESS T X
ADD column_name
|Column Constraint Attributes|
T X X X
ADD column_name
|Table Constraint Attributes|
T X X X
ADD
|Table Constraint Attributes|
A X X X
ADD column_name NULL T X X X
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL AFTER JOURNAL/
NOT LOCAL AFTER JOURNAL
T X X X
ALTER TABLE, continued
Options
BEFORE JOURNAL/
JOURNAL/
NO BEFORE JOURNAL/
DUAL BEFORE JOURNAL
T X X X
DATABLOCKSIZE IMMEDIATE/
MINIMUM DATABLOCKSIZE/
MAXIMUM DATABLOCKSIZE/
DEFAULT DATABLOCKSIZE
T X X X
CHECKSUM = DEFAULT/
CHECKSUM = NONE/
CHECKSUM = LOW/
CHECKSUM = MEDIUM/
CHECKSUM = HIGH/
CHECKSUM = ALL
T X X X
DROP column_name A X X X
DROP CHECK/
DROP column_name CHECK/
DROP CONSTRAINT name CHECK
T X X X
DROP CONSTRAINT T X X X
DROP FOREIGN KEY REFERENCES T X X X
WITH CHECK OPTION/
WITH NO CHECK OPTION
T X X X
DROP INCONSISTENT REFERENCES T X X X
FALLBACK PROTECTION/
NO FALLBACK PROTECTION
T X X X
FREESPACE/
DEFAULT FREESPACE
T X X X
LOG/NO LOG T X X X
MODIFY CHECK/
MODIFY column_name CHECK/
MODIFY CONSTRAINT name CHECK
T X X X
MODIFY [[NOT] UNIQUE] PRIMARY INDEX index [(column)]/
MODIFY [[NOT] UNIQUE] PRIMARY INDEX NOT NAMED
[(column)]
T X X X
ALTER TABLE, continued
Options
NOT PARTITIONED/
PARTITION BY expression/
DROP RANGE WHERE expression [ADD RANGE ranges]/
DROP RANGE ranges [ADD RANGE ranges]/
ADD RANGE ranges
T X X X
ON COMMIT DELETE ROWS/
ON COMMIT PRESERVE ROWS
T X X X
RENAME column_name T X X X
REVALIDATE PRIMARY INDEX/
REVALIDATE PRIMARY INDEX WITH DELETE/
REVALIDATE PRIMARY INDEX WITH INSERT [INTO]
table_name
T X X X
WITH JOURNAL TABLE T X X X
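The partitioning options above (ADD RANGE in particular) can be sketched as follows; the table name and the date range are hypothetical, and this is only an illustration of the option syntax listed in the table.

```sql
-- Add a year of monthly partitions to an existing partitioned primary index.
ALTER TABLE sales
  MODIFY PRIMARY INDEX
  ADD RANGE BETWEEN DATE '2007-01-01' AND DATE '2007-12-31'
            EACH INTERVAL '1' MONTH;
```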
ALTER TRIGGER T X X X
Options
ENABLED/DISABLED / T X X X
TIMESTAMP T X X X
ALTER TYPE A,T X X
Options
ADD ATTRIBUTE/
DROP ATTRIBUTE/
ADD METHOD/
ADD INSTANCE METHOD/
ADD CONSTRUCTOR METHOD/
ADD SPECIFIC METHOD/
DROP METHOD/
DROP INSTANCE METHOD/
DROP CONSTRUCTOR METHOD/
DROP SPECIFIC METHOD
A,T X X
BEGIN DECLARE SECTION A X X X
BEGIN LOGGING T X X X
Options
DENIALS T X X X
WITH TEXT T X X X
FIRST/
LAST/
FIRST AND LAST/
EACH
T X X X
BY database_name T X X X
ON ALL/
ON operation/
ON GRANT
T X X X
ON DATABASE/
ON USER/
ON TABLE/
ON VIEW/
ON MACRO/
ON PROCEDURE/
ON FUNCTION/
T X X X
ON TYPE T X X
BEGIN QUERY LOGGING T X X X
Options
WITH ALL/
WITH OBJECTS/
WITH SQL/
WITH STEPINFO/
T X X X
WITH COSTS T X X X
LIMIT SQLTEXT [=n] [AND …]/
LIMIT SUMMARY = n1, n2, n3 [AND …]/
LIMIT THRESHOLD [=n] [AND …]/
T X X X
LIMIT MAXCPU [=n] [AND …] T X X X
ON ALL/
ON user_name/
ON user_name ACCOUNT = 'account_name'/
ON user_name ACCOUNT = ('account_name'
[ … ,'account_name'])
T X X X
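Combining several of the BEGIN QUERY LOGGING options above, a logging rule might be sketched as follows; the user name is hypothetical.

```sql
-- Log SQL text (up to 500 characters) and referenced objects for one user.
BEGIN QUERY LOGGING WITH SQL, OBJECTS
  LIMIT SQLTEXT = 500
  ON user1;
```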
BEGIN TRANSACTION/
BT
T X X X
CALL A X X X
Options
stored_procedure_name/ A X X X
external_stored_procedure_name A X X X
CHECKPOINT T X X X
Options
NAMED checkpoint T X X X
INTO host_variable_name T X X X
[INDICATOR] :host_indicator_name T X X X
CLOSE A X X X
COLLECT DEMOGRAPHICS T X X X
Options
FOR table_name/
FOR (table_name [ … ,table_name])
T X X X
ALL/
WITH NO INDEX
T X X X
COLLECT STATISTICS/
COLLECT STATS/
COLLECT STAT
(QCD form)
T X X X
Options
PERCENT T X X X
SET QUERY query_ID T X X
SAMPLEID statistics_ID T X X
UPDATE MODIFIED T X X
COLLECT STATISTICS (QCD form), continued
Options
INDEX (column_name [ … , column_name])/
INDEX index_name/
COLUMN (column_name [ … ,column_name])/
COLUMN column_name/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
COLLECT STATISTICS/
COLLECT STATS/
COLLECT STAT
(optimizer form)
T X X X
Options
USING SAMPLE T X X X
[ON] [TEMPORARY] table_name/
[ON] join_index_name/
[ON] hash_index_name
T X X X
INDEX (column_name [ … , column_name])/
INDEX index_name/
COLUMN (column_name [ … ,column_name])/
COLUMN column_name/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
COLLECT STATISTICS/
COLLECT STATS/
COLLECT STAT
(optimizer form, CREATE INDEX-style syntax)
T X X X
Options
USING SAMPLE T X X X
[UNIQUE] INDEX [index_name] [ALL]
(column_name [ … , column_name])
[ORDER BY [VALUES]] (column_name)/
[UNIQUE] INDEX [index_name] [ALL]
(column_name [ … , column_name])
[ORDER BY [HASH]] (column_name)/
COLUMN column_name/
COLUMN (column_name [ … , column_name])/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
ON [TEMPORARY] table_name/
ON hash_index_name/
ON join_index_name
T X X X
COMMENT T X X X
Options
[ON] COLUMN object_name/
[ON] DATABASE object_name/
[ON] FUNCTION object_name/
[ON] MACRO object_name/
[ON] PROCEDURE object_name/
[ON] TABLE object_name/
[ON] TRIGGER object_name/
[ON] USER object_name/
[ON] VIEW object_name/
[ON] PROFILE object_name/
[ON] ROLE object_name/
T X X X
[ON] GROUP group_name/ T X X X
[ON] METHOD object_name/
[ON] TYPE object_name
T X X
AS 'comment'/
IS 'comment'
T X X X
COMMENT (embedded SQL) T X X X
Options
[ON] COLUMN object_reference/
[ON] DATABASE object_reference/
[ON] FUNCTION object_name/
[ON] MACRO object_reference/
[ON] PROCEDURE object_reference/
[ON] TABLE object_reference/
[ON] TRIGGER object_reference/
[ON] USER object_reference/
[ON] VIEW object_reference/
[ON] PROFILE object_name/
[ON] ROLE object_name/
T X X X
[ON] GROUP group_name T X X X
INTO host_variable_name T X X X
[INDICATOR] :host_indicator_name T X X X
COMMIT A, T X X X
Options
WORK A X X X
RELEASE T X X X
CONNECT (embedded SQL) T X X X
Options
IDENTIFIED BY passwordvar/
IDENTIFIED BY :passwordvar
T X X X
AS connection_name/
AS :namevar
T X X X
CREATE AUTHORIZATION T X X
Options
[AS] DEFINER/
[AS] DEFINER DEFAULT/
[AS] INVOKER
T X X
DOMAIN 'domain_name' T X X
CREATE CAST A X X
Options
WITH SPECIFIC METHOD specific_method_name/
WITH METHOD method_name/
WITH INSTANCE METHOD method_name/
WITH SPECIFIC FUNCTION specific_function_name/
WITH FUNCTION function_name
A X X
AS ASSIGNMENT A X X
CREATE DATABASE T X X X
Options
PERMANENT = n [BYTES] T X X X
SPOOL = n [BYTES] T X X X
TEMPORARY = n [BYTES] T X X X
ACCOUNT T X X X
FALLBACK [PROTECTION]/
NO FALLBACK [PROTECTION]
T X X X
BEFORE JOURNAL/
JOURNAL/
NO JOURNAL
NO BEFORE JOURNAL/
DUAL JOURNAL
DUAL BEFORE JOURNAL
T X X X
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL AFTER JOURNAL/
NOT LOCAL AFTER JOURNAL
T X X X
DEFAULT JOURNAL TABLE T X X X
CREATE FUNCTION A, T X X X
Options
RETURNS data_type/
RETURNS data_type CAST FROM data_type
A X X X
LANGUAGE C/ A X X X
LANGUAGE CPP A X X X
NO SQL A X X X
SPECIFIC [database_name.] function_name A X X X
CREATE FUNCTION, continued
Options
CLASS AGGREGATE/
CLASS AG
T X X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X X
DETERMINISTIC/
NOT DETERMINISTIC
A X X X
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
A X X X
EXTERNAL/
EXTERNAL NAME function_name/
EXTERNAL NAME function_name PARAMETER STYLE SQL/
EXTERNAL NAME function_name PARAMETER STYLE
TD_GENERAL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL
PARAMETER STYLE TD_GENERAL/
EXTERNAL NAME
'[F delimiter function_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
A X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
A X X
CREATE FUNCTION (table function form) T X X X
Options
RETURNS TABLE ( column_name data_type
[ … , column_name data_type ] )
T X X X
LANGUAGE C/
LANGUAGE CPP
T X X X
NO SQL T X X X
CREATE FUNCTION (table function form), continued
Options
SPECIFIC [database_name.] function_name T X X X
PARAMETER STYLE SQL T X X X
DETERMINISTIC/
NOT DETERMINISTIC
T X X X
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
T X X X
EXTERNAL/
EXTERNAL NAME function_name/
EXTERNAL NAME function_name PARAMETER STYLE SQL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL NAME
'[F delimiter function_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
T X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
T X X
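A scalar UDF declaration using several of the CREATE FUNCTION options listed above might be sketched as follows; the function name, parameter, and C source file reference are hypothetical (the 'CS!…' string follows the [CS delimiter name delimiter source_name] form shown in the EXTERNAL NAME option).

```sql
CREATE FUNCTION add_tax (amount DECIMAL(10,2))
RETURNS DECIMAL(10,2)
LANGUAGE C
NO SQL
PARAMETER STYLE TD_GENERAL
DETERMINISTIC
CALLED ON NULL INPUT
EXTERNAL NAME 'CS!add_tax!add_tax.c';
```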
CREATE HASH INDEX T X X X
Options
FALLBACK PROTECTION/
NO FALLBACK PROTECTION
T X X X
ORDER BY VALUES/
ORDER BY HASH
T X X X
CHECKSUM = DEFAULT/
CHECKSUM = NONE/
CHECKSUM = LOW/
CHECKSUM = MEDIUM/
CHECKSUM = HIGH/
CHECKSUM = ALL
T X X X
CREATE INDEX
CREATE UNIQUE INDEX
T X X X
Options
ALL T X X X
ORDER BY VALUES/
ORDER BY HASH
T X X X
TEMPORARY T X X X
CREATE JOIN INDEX T X X X
Options
FALLBACK PROTECTION/
NO FALLBACK PROTECTION
T X X X
CHECKSUM = DEFAULT/
CHECKSUM = NONE/
CHECKSUM = LOW/
CHECKSUM = MEDIUM/
CHECKSUM = HIGH/
CHECKSUM = ALL
T X X X
ROWID T X X X
EXTRACT YEAR FROM/
EXTRACT MONTH FROM
T X X X
SUM numeric_expression T X X X
COUNT column_expression T X X X
FROM table_name/
FROM table_name correlation_name/
FROM table_name AS correlation_name
T X X X
FROM (joined_table) T X X X
FROM table JOIN table/
FROM table INNER JOIN table/
FROM table LEFT JOIN table/
FROM table LEFT OUTER JOIN table/
FROM table RIGHT JOIN table/
FROM table RIGHT OUTER JOIN table
T X X X
|WHERE statement modifier| A X X X
|GROUP BY statement modifier| T X X X
|ORDER BY statement modifier| A, T X X X
CREATE JOIN INDEX, continued
Options
INDEX [index_name] [ALL] (column_list)/
INDEX [index_name] [ALL] (column_list) ORDER BY HASH
[(column_name)]/
INDEX [index_name] [ALL] (column_list) ORDER BY VALUES
[(column_name)]/
UNIQUE INDEX [index_name] (column_list)/
PRIMARY INDEX [index_name] (column_list)/
T X X X
PRIMARY INDEX [index_name] (column_list) PARTITION BY
expression
T X
CREATE MACRO/
CM
T X X X
Options
AS statement T X X X
USING modifier T X X X
|LOCKING statement modifier| T X X X
CREATE METHOD
CREATE INSTANCE METHOD
CREATE CONSTRUCTOR METHOD
A X X
Options
EXTERNAL/
EXTERNAL NAME method_name/
EXTERNAL NAME
'[F delimiter function_entry_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
A X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
T X X
CREATE ORDERING A X X
Options
MAP WITH SPECIFIC METHOD specific_method_name/
MAP WITH METHOD method_name/
MAP WITH INSTANCE METHOD method_name/
MAP WITH SPECIFIC FUNCTION specific_function_name/
MAP WITH FUNCTION function_name
A X X
CREATE PROCEDURE (external stored procedure form) A X X X
Options
parameter_name data_type/
IN parameter_name data_type/
OUT parameter_name data_type/
INOUT parameter_name data_type
A X X X
LANGUAGE C/
LANGUAGE CPP
A X X X
NO SQL A X X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X X
EXTERNAL/
EXTERNAL NAME procedure_name/
EXTERNAL NAME procedure_name PARAMETER STYLE SQL/
EXTERNAL NAME procedure_name PARAMETER STYLE
TD_GENERAL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL PARAMETER STYLE TD_GENERAL/
EXTERNAL NAME
'[F delimiter function_entry_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
A X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
A X X
CREATE PROCEDURE (stored procedure form) A, T X X X
Options
parameter_name data_type/
IN parameter_name data_type/
OUT parameter_name data_type/
INOUT parameter_name data_type
A X X X
NOT ATOMIC T X X X
DECLARE variable-name data-type
[DEFAULT literal]
DECLARE variable-name data-type
[DEFAULT NULL]
A X X X
DECLARE cursor_name [SCROLL] CURSOR FOR
cursor_specification [FOR READ ONLY]/
DECLARE cursor_name [SCROLL] CURSOR FOR
cursor_specification [FOR UPDATE]/
DECLARE cursor_name [NO SCROLL]
CURSOR FOR cursor_specification [FOR READ ONLY]/
DECLARE cursor_name [NO SCROLL]
CURSOR FOR cursor_specification [FOR UPDATE]/
A X X X
DECLARE CONTINUE HANDLER
DECLARE EXIT HANDLER
A X X X
FOR SQLSTATE sqlstate/
FOR SQLSTATE VALUE sqlstate
A X X X
FOR SQLEXCEPTION/
FOR SQLWARNING/
FOR NOT FOUND
A X X X
SET assignment_target = assignment_source A X X X
IF expression THEN statement
[ELSEIF expression THEN statement]
[ELSE statement] END IF
A X X X
CASE operand1 WHEN operand2 THEN statement [ELSE statement]
END CASE
A X X X
CASE WHEN expression THEN statement
[ELSE statement] END CASE
A X X X
ITERATE label_name A X X X
LEAVE label_name A X X X
PRINT string_literal/
PRINT print_variable_name
T X X X
CREATE PROCEDURE, continued
Options
SQL_statement A X X X
CALL procedure_name A X X X
OPEN cursor_name A X X X
CLOSE cursor_name A X X X
FETCH [[NEXT] FROM] cursor_name INTO
local_variable_name [ … , local_variable_name]/
FETCH [[FIRST] FROM] cursor_name INTO
local_variable_name [ … , local_variable_name]/
FETCH [[NEXT] FROM] cursor_name INTO parameter_reference
[ … , parameter_reference]/
FETCH [[FIRST] FROM] cursor_name INTO parameter_reference
[ … , parameter_reference]
A X X X
WHILE expression DO statement END WHILE A X X X
LOOP statement END LOOP A X X X
FOR for_loop_variable AS
[cursor_name CURSOR FOR]
SELECT column_name [AS correlation_name]
FROM table_name
[WHERE clause] [SELECT clause]
DO statement_list END FOR/
FOR for_loop_variable AS
[cursor_name CURSOR FOR]
SELECT expression [AS correlation_name]
FROM table_name
[WHERE clause] [SELECT clause]
DO statement_list END FOR
A X X X
REPEAT statement_list
UNTIL conditional_expression END REPEAT
A X X X
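Putting several of the option rows above together, a minimal stored procedure might look like the following sketch (object and variable names are hypothetical, not from this manual):

```sql
-- Hypothetical example: a compound statement using DECLARE, SET,
-- and WHILE, as summarized in the rows above.
CREATE PROCEDURE add_up (IN upper_bound INTEGER, OUT total INTEGER)
BEGIN
   DECLARE counter INTEGER DEFAULT 0;
   SET total = 0;
   WHILE counter < upper_bound DO
      SET counter = counter + 1;
      SET total = total + counter;
   END WHILE;
END;
```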
CREATE PROFILE T X X X
Options
ACCOUNT = 'account_id'/
ACCOUNT = ('account_id' [ … ,'account_id'])/
ACCOUNT = NULL
T X X X
DEFAULT DATABASE = database_name/
DEFAULT DATABASE = NULL
T X X X
SPOOL = n [BYTES]/
SPOOL = NULL
T X X X
CREATE PROFILE, continued
Options
TEMPORARY = n [BYTES]/
TEMPORARY = NULL
T X X X
PASSWORD [ATTRIBUTES] = (
EXPIRE = n,
EXPIRE = NULL,
MINCHAR = n,
MINCHAR = NULL,
MAXCHAR = n,
MAXCHAR = NULL,
DIGITS = n,
DIGITS = NULL,
SPECCHAR = c,
SPECCHAR = NULL,
MAXLOGONATTEMPTS = n,
MAXLOGONATTEMPTS = NULL,
LOCKEDUSEREXPIRE = n,
LOCKEDUSEREXPIRE = NULL,
REUSE = n,
REUSE = NULL)/
PASSWORD [ATTRIBUTES] = NULL
T X X X
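Combining the CREATE PROFILE options above gives a statement along these lines (profile, account, and database names are hypothetical; the AS keyword and comma separators follow common Teradata usage):

```sql
-- Hypothetical sketch of CREATE PROFILE with account, spool,
-- and password-attribute options from the rows above.
CREATE PROFILE analyst_p AS
   ACCOUNT = ('acct1', 'acct2'),
   DEFAULT DATABASE = sales_db,
   SPOOL = 500000000 BYTES,
   TEMPORARY = 200000000 BYTES,
   PASSWORD = (EXPIRE = 90, MINCHAR = 8, MAXLOGONATTEMPTS = 3);
```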
CREATE REPLICATION GROUP A X X X
CREATE ROLE A X X X
CREATE TABLE/
CT
A, T X X X
Options
SET/
MULTISET
T X X X
GLOBAL TEMPORARY A X X X
GLOBAL TEMPORARY TRACE T X X X
VOLATILE T X X X
QUEUE T X X X
FALLBACK [PROTECTION]/
NO FALLBACK [PROTECTION]
T X X X
WITH JOURNAL TABLE = name T X X X
LOG/
NO LOG
T X X X
CREATE TABLE, continued
Options
[BEFORE] JOURNAL/
NO [BEFORE] JOURNAL/
DUAL [BEFORE] JOURNAL/
T X X X
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL JOURNAL/
NOT LOCAL JOURNAL
T X X X
FREESPACE = integer PERCENT T X X X
DATABLOCKSIZE = integer/
DATABLOCKSIZE = integer BYTES/
DATABLOCKSIZE = integer KBYTES/
DATABLOCKSIZE = integer KILOBYTES
T X X X
MINIMUM DATABLOCKSIZE/
MAXIMUM DATABLOCKSIZE
T X X X
CHECKSUM = DEFAULT/
CHECKSUM = NONE/
CHECKSUM = LOW/
CHECKSUM = MEDIUM/
CHECKSUM = HIGH/
CHECKSUM = ALL
T X X X
QUEUE/
NO QUEUE
T X X X
column_name
|Data Type|
|Data Type Attributes|
A X X X
column_name
|Data Type|
|Column Storage Attributes|
T X X X
column_name
|Data Type|
|Column Constraint Attributes|
A X X X
GENERATED ALWAYS AS IDENTITY/
GENERATED BY DEFAULT AS IDENTITY
A X X X
|Column Constraint Attributes| T X X X
|Table Constraint Attributes| T X X X
CREATE TABLE, continued
Options
[UNIQUE] [PRIMARY] INDEX [name] [ALL] (column_name) T X X X
[UNIQUE] PRIMARY INDEX [name] (column) PARTITION BY
expression
T X X X
INDEX [name] [ALL] (column_name)
ORDER BY VALUES (name)/
INDEX [name] [ALL] (column_name)
ORDER BY HASH (name)
T X X X
ON COMMIT DELETE ROWS/
ON COMMIT PRESERVE ROWS
A X X X
AS source_table_name WITH [NO] DATA/ A X X X
AS source_table_name WITH [NO] DATA AND [NO] STATISTICS/
AS source_table_name WITH [NO] DATA AND [NO] STATS/
AS source_table_name WITH [NO] DATA AND [NO] STAT/
T X
AS (query_expression) WITH [NO] DATA/ A X X X
AS (query_expression) WITH [NO] DATA AND [NO] STATISTICS/
AS (query_expression) WITH [NO] DATA AND [NO] STATS/
AS (query_expression) WITH [NO] DATA AND [NO] STAT
T X
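A sketch drawing on several CREATE TABLE options above (table and column names are hypothetical, not from this manual):

```sql
-- Hypothetical example: table kind, fallback protection, free space,
-- block size, and a unique primary index, per the option rows above.
CREATE MULTISET TABLE orders,
   FALLBACK,
   FREESPACE = 10 PERCENT,
   DATABLOCKSIZE = 64 KILOBYTES
   (order_id INTEGER NOT NULL,
    cust_id  INTEGER,
    amount   DECIMAL(10,2))
UNIQUE PRIMARY INDEX (order_id);
```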
CREATE TRANSFORM A X X
Options
TO SQL WITH SPECIFIC METHOD specific_method_name/
TO SQL WITH METHOD method_name/
TO SQL WITH INSTANCE METHOD method_name/
TO SQL WITH SPECIFIC FUNCTION specific_function_name/
TO SQL WITH FUNCTION function_name
A X X
FROM SQL WITH SPECIFIC METHOD specific_method_name/
FROM SQL WITH METHOD method_name/
FROM SQL WITH INSTANCE METHOD method_name/
FROM SQL WITH SPECIFIC FUNCTION specific_function_name/
FROM SQL WITH FUNCTION function_name
A X X
CREATE TRIGGER A, T X X X
Options
ENABLED/
DISABLED
T X X X
BEFORE/
AFTER
A X X X
CREATE TRIGGER, continued
Options
INSERT ON table_name [ORDER integer]/
DELETE ON table_name [ORDER integer]/
UPDATE [OF (column_list)] ON table_name [ORDER integer]
A X X X
REFERENCING OLD_TABLE [AS] identifier [NEW_TABLE [AS]
identifier]/
T X X X
REFERENCING OLD [AS] identifier
[NEW [AS] identifier]/
REFERENCING OLD TABLE [AS] identifier [NEW TABLE [AS]
identifier]/
REFERENCING OLD [ROW] [AS] identifier
[NEW [ROW] [AS] identifier]
A X X X
FOR EACH ROW/
FOR EACH STATEMENT
A X X X
WHEN (search_condition) A X X X
(SQL_proc_statement ;)/
SQL_proc_statement /
BEGIN ATOMIC (SQL_proc_statement;) END/
BEGIN ATOMIC SQL_proc_statement ; END
A,T X X X
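A row trigger assembled from the option rows above might look like this sketch (all object names are hypothetical):

```sql
-- Hypothetical trigger using AFTER UPDATE, REFERENCING,
-- FOR EACH ROW, a WHEN condition, and a parenthesized body.
CREATE TRIGGER salary_audit
AFTER UPDATE OF (salary) ON employee
REFERENCING OLD AS oldrow NEW AS newrow
FOR EACH ROW
WHEN (newrow.salary > oldrow.salary)
(INSERT INTO salary_log VALUES (newrow.emp_id, newrow.salary););
```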
CREATE TYPE (distinct form) A, T X X
Options
CHARACTER SET server_character_set T X X
METHOD [SYSUDTLIB.]method_name/
INSTANCE METHOD [SYSUDTLIB.]method_name
A, T X X
RETURNS predefined_data_type/
RETURNS predefined_data_type AS LOCATOR/
RETURNS predefined_data_type [AS LOCATOR] CAST FROM
predefined_data_type [AS LOCATOR]/
RETURNS predefined_data_type CAST FROM
[SYSUDTLIB.]UDT_name [AS LOCATOR]/
RETURNS [SYSUDTLIB.]UDT_name/
RETURNS [SYSUDTLIB.]UDT_name AS LOCATOR/
RETURNS [SYSUDTLIB.]UDT_name [AS LOCATOR] CAST
FROM predefined_data_type [AS LOCATOR]/
RETURNS [SYSUDTLIB.]UDT_name CAST FROM
[SYSUDTLIB.]UDT_name [AS LOCATOR]
A, T X X
LANGUAGE C/
LANGUAGE CPP
A X X
CREATE TYPE (distinct form), continued
Options
NO SQL A X X
SPECIFIC [SYSUDTLIB.] specific_method_name A, T X X
SELF AS RESULT A X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X
DETERMINISTIC/
NOT DETERMINISTIC
A X X
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
A X X
CREATE TYPE (structured form) A, T X X
Options
AS (attribute_name predefined_data_type)/
AS (attribute_name predefined_data_type CHARACTER SET
server_character_set)/
AS (attribute_name predefined_data_type [CHARACTER SET
server_character_set] […, attribute_name predefined_data_type
[CHARACTER SET server_character_set]] […, attribute_name
UDT_name])/
AS (attribute_name predefined_data_type [CHARACTER SET
server_character_set] […, attribute_name UDT_name]
[…, attribute_name predefined_data_type [CHARACTER SET
server_character_set]])/
AS (attribute_name UDT_name)/
AS (attribute_name UDT_name […, attribute_name UDT_name]
[…, attribute_name predefined_data_type [CHARACTER SET
server_character_set]])/
AS (attribute_name UDT_name […, attribute_name
predefined_data_type [CHARACTER SET server_character_set]]
[…, attribute_name UDT_name])
A X X
INSTANTIABLE A X X
METHOD [SYSUDTLIB.]method_name/
INSTANCE METHOD [SYSUDTLIB.]method_name
CONSTRUCTOR METHOD [SYSUDTLIB.]method_name
A, T X X
CREATE TYPE (structured form), continued
Options
RETURNS predefined_data_type/
RETURNS predefined_data_type AS LOCATOR/
RETURNS predefined_data_type [AS LOCATOR] CAST FROM
predefined_data_type [AS LOCATOR]/
RETURNS predefined_data_type CAST FROM
[SYSUDTLIB.]UDT_name [AS LOCATOR]/
RETURNS [SYSUDTLIB.]UDT_name/
RETURNS [SYSUDTLIB.]UDT_name AS LOCATOR/
RETURNS [SYSUDTLIB.]UDT_name [AS LOCATOR] CAST
FROM predefined_data_type [AS LOCATOR]/
RETURNS [SYSUDTLIB.]UDT_name CAST FROM
[SYSUDTLIB.]UDT_name [AS LOCATOR]
A, T X X
LANGUAGE C/
LANGUAGE CPP
A X X
NO SQL A X X
SPECIFIC [SYSUDTLIB.] specific_method_name A, T X X
SELF AS RESULT A X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X
DETERMINISTIC/
NOT DETERMINISTIC
A X X
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
A X X
CREATE USER T X X X
Options
FROM database_name T X X X
PERMANENT = number [BYTES]/
PERM = number [BYTES]
T X X X
PASSWORD = password/
PASSWORD = NULL
T X X X
STARTUP = 'string;' T X X X
TEMPORARY = n [bytes] T X X X
SPOOL = n [BYTES] T X X X
DEFAULT DATABASE = database_name T X X X
CREATE USER, continued
Options
COLLATION = collation_sequence T X X X
ACCOUNT = 'acct_ID'/
ACCOUNT = ('acct_ID' [ … ,'acct_ID'])
T X X X
[NO] FALLBACK [PROTECTION] T X X X
[BEFORE] JOURNAL/
NO [BEFORE] JOURNAL/
DUAL [BEFORE] JOURNAL
T X X X
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL AFTER JOURNAL/
NOT LOCAL AFTER JOURNAL
T X X X
DEFAULT JOURNAL TABLE = table_name T X X X
TIME ZONE = LOCAL/
TIME ZONE = [sign] quotestring/
TIME ZONE = NULL
T X X X
DATEFORM = INTEGERDATE/
DATEFORM = ANSIDATE
T X X X
DEFAULT CHARACTER SET data_type T X X X
DEFAULT ROLE = role_name/
DEFAULT ROLE = NONE/
DEFAULT ROLE = NULL/
DEFAULT ROLE = ALL
T X X X
PROFILE = profile_name/
PROFILE = NULL
T X X X
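The CREATE USER options above combine along these lines (user, password, and database names are hypothetical):

```sql
-- Hypothetical sketch: CREATE USER with owner, space,
-- password, default database, and profile options.
CREATE USER analyst01 FROM sales_db AS
   PERMANENT = 100000000 BYTES,
   PASSWORD = tempPass1,
   SPOOL = 500000000 BYTES,
   DEFAULT DATABASE = sales_db,
   PROFILE = analyst_p;
```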
CREATE VIEW A, T X X X
Options
(column_name [ … , column_name]) A X X X
AS [ |LOCKING statement modifier| ]
query_expression
A, T X X X
WITH CHECK OPTION A X X X
CREATE RECURSIVE VIEW A X X X
Options
(column_name [ … , column_name]) A X X X
AS (seed_statement [UNION ALL recursive_statement]
[ … [UNION ALL seed_statement] [ … UNION ALL
recursive_statement]])
A X X X
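A recursive view per the syntax summary above, with a seed statement and a recursive statement joined by UNION ALL (table and column names are hypothetical):

```sql
-- Hypothetical recursive view computing reachability over an
-- edges table, with a depth bound to terminate the recursion.
CREATE RECURSIVE VIEW reachable (src, dst, hops) AS (
   SELECT src, dst, 0
   FROM edges
   WHERE src = 'hub'
   UNION ALL
   SELECT r.src, e.dst, r.hops + 1
   FROM reachable r, edges e
   WHERE r.dst = e.src
   AND   r.hops < 10
);
```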
DATABASE T X X X
DECLARE CURSOR (selection form) A, T X X X
Options
FOR SELECT A X X X
FOR COMMENT/
FOR EXPLAIN/
FOR HELP/
FOR SHOW
T X X X
DECLARE CURSOR (request form) A X X X
Options
FOR 'request_specification' A X X X
DECLARE CURSOR (macro form) T X X X
Options
FOR EXEC macro_name T X X X
DECLARE CURSOR (dynamic SQL form) A X X X
Options
FOR statement_name A X X X
DECLARE STATEMENT T X X X
DECLARE TABLE T X X X
DELETE (basic/searched form)/
DEL
A, T X X X
Options
[FROM] table_name A X X X
[AS] alias_name A X X X
WHERE condition A X X X
ALL T X X X
DELETE (implied join condition form)/
DEL
A, T X X X
Options
delete_table_name T X X X
[FROM] table_name [ … ,[FROM] table_name] T X X X
[AS] alias_name A X X X
WHERE condition A X X X
ALL T X X X
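The implied join condition form above deletes rows from one table by joining it to another, as in this sketch (table names are hypothetical):

```sql
-- Hypothetical example of the Teradata implied-join DELETE:
-- orders rows are removed where a matching cancellation exists.
DELETE orders
FROM orders, cancelled_orders AS c
WHERE orders.order_id = c.order_id;
```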
DELETE (positioned form)/
DEL
A X X X
Options
FROM table_name A X X X
WHERE CURRENT OF cursor_name A X X X
DELETE DATABASE
DELETE USER
T X X X
Option
ALL T X X X
DESCRIBE T X X X
Options
INTO descriptor_area T X X X
USING NAMES/
USING ANY/
USING BOTH/
USING LABELS
T X X X
FOR STATEMENT statement_number/
FOR STATEMENT [:] num_var
T X X X
DIAGNOSTIC "validate index" T X X X
Option
ON/
NOT ON
T X X X
DIAGNOSTIC DUMP SAMPLES T X X X
DIAGNOSTIC HELP SAMPLES T X X X
DIAGNOSTIC SET SAMPLES T X X X
Options
ON/
NOT ON
T X X X
FOR SESSION/
FOR SYSTEM
T X X X
DROP AUTHORIZATION T X X
DROP CAST A X X
DROP DATABASE
DROP USER
T X X X
DROP FUNCTION
DROP SPECIFIC FUNCTION
A X X X
DROP HASH INDEX T X X X
DROP INDEX T X X X
Options
TEMPORARY T X X X
ORDER BY (column_name)/
ORDER BY VALUES (column_name)/
ORDER BY HASH (column_name)
T X X X
DROP JOIN INDEX T X X X
DROP MACRO T X X X
DROP ORDERING A X X
DROP PROCEDURE A X X X
DROP PROFILE T X X X
DROP REPLICATION GROUP T X X X
DROP ROLE A X X X
DROP STATISTICS/
DROP STATS/
DROP STAT
(optimizer form)
T X X X
Options
[FOR] [UNIQUE] INDEX index_name/
[FOR] [UNIQUE] INDEX [index_name] (col_name) [ORDER BY
col_name]/
[FOR] [UNIQUE] INDEX [index_name] (col_name) [ORDER BY
VALUES (col_name)]/
[FOR] [UNIQUE] INDEX [index_name] (col_name) [ORDER BY
HASH (col_name)]/
[FOR] COLUMN column_name/
[FOR] COLUMN (column_name [ … , column_name])/
T X X X
[FOR] COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
[FOR] COLUMN (PARTITION [ … , column_name])/
[FOR] COLUMN PARTITION
T X X
ON T X X X
TEMPORARY T X X X
DROP STATISTICS/
DROP STATS/
DROP STAT
(QCD form)
T X X X
Options
INDEX (column_name [ … , column_name])/
INDEX index_name/
COLUMN (column_name [ … ,column_name])/
COLUMN column_name/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
DROP TABLE A, T X X X
Options
TEMPORARY A X X X
ALL A X X X
OVERRIDE A X X X
DROP TRANSFORM A X X
DROP TRIGGER T X X X
DROP TYPE A X X
DROP VIEW T X X X
DUMP EXPLAIN T X X X
Options
AS query_plan_name T X X X
LIMIT/
LIMIT SQL/
LIMIT SQL = n
T X X X
CHECK STATISTICS T X X
ECHO T X X X
END DECLARE SECTION T X X X
END-EXEC A X X X
END LOGGING T X X X
Options
DENIALS T X X X
WITH TEXT T X X X
ALL/
operation/
GRANT
T X X X
BY database_name T X X X
ON DATABASE name/
ON FUNCTION/
ON MACRO name/
ON PROCEDURE name/
ON TABLE name/
ON TRIGGER name/
ON USER name/
ON VIEW name
T X X X
END QUERY LOGGING T X X X
Options
ON ALL/
ON user_name/
ON user_name ACCOUNT = 'account_name'/
ON user_name ACCOUNT = ('account_name'
[ … ,'account_name'])
T X X X
END TRANSACTION/
ET
T X X X
EXECUTE macro_name/
EXEC macro_name
T X X X
EXECUTE statement_name A X X X
Options
USING [:] host_variable_name A X X X
[INDICATOR] :host_indicator_name A X X X
USING DESCRIPTOR [:] descriptor_area A X X X
EXECUTE IMMEDIATE A X X X
FETCH A X X X
Options
INTO [:] host_variable_name A X X X
[INDICATOR] :host_indicator_name A X X X
USING DESCRIPTOR [:] descriptor_area A X X X
GET CRASH (embedded SQL) T X X X
GIVE T X X X
Options
database_name TO recipient_name/
user_name TO recipient_name
T X X X
GRANT A, T X X X
Options
ALL/ A X X X
ALL PRIVILEGES/
ALL BUT
T X X X
DELETE/
EXECUTE/
INSERT/
REFERENCES/
SELECT/
UPDATE/
A X X X
ALTER/
CHECKPOINT/
CREATE/
DROP/
DUMP/
INDEX/
RESTORE/
T X X X
REPLCONTROL/ T X X X
UDTMETHOD/
UDTTYPE/
UDTUSAGE
T X X
GRANT, continued
Options
ON database_name/
ON database_name.object_name/
ON object_name/
ON PROCEDURE identifier/
ON SPECIFIC FUNCTION specific_function_name/
ON FUNCTION function_name/
A X X X
ON TYPE UDT_name/
ON TYPE SYSUDTLIB.UDT_name
A X X
TO user_name/
TO ALL user_name/
T X X X
TO PUBLIC A X X X
WITH GRANT OPTION A X X X
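Two sketches combining the GRANT options above (all object and user names are hypothetical):

```sql
-- Hypothetical examples: granting table privileges with the
-- grant option, and EXECUTE on a procedure to PUBLIC.
GRANT SELECT, INSERT ON sales_db.orders TO analyst01 WITH GRANT OPTION;
GRANT EXECUTE ON PROCEDURE sales_db.add_up TO PUBLIC;
```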
GRANT LOGON T X X X
Options
ON host_id/
ON ALL
T X X X
AS DEFAULT/
TO database_name/
FROM database_name
T X X X
WITH NULL PASSWORD T X X X
GRANT MONITOR/
GRANT monitor_privilege
T X X X
Options
PRIVILEGES/
BUT NOT monitor_privilege
T X X X
TO [ALL] user_name/
TO PUBLIC
T X X X
WITH GRANT OPTION T X X X
GRANT ROLE A X X X
Options
WITH ADMIN OPTION A X X X
HELP T X X X
Options
CAST [database_name.] UDT_name/
CAST [database_name.] UDT_name SOURCE/
CAST [database_name.] UDT_name TARGET
T X X
COLUMN column_name FROM table_name/
COLUMN * FROM table_name/
COLUMN table_name.column_name/
COLUMN table_name.*/
COLUMN expression
T X X X
CONSTRAINT [database_name.] table_name.name T X X X
DATABASE database_name T X X X
FUNCTION function_name
[(data_type [ … , data_type])]/
SPECIFIC FUNCTION specific_function_name
T X X X
HASH INDEX hash_index_name T X X X
[TEMPORARY] INDEX table_name
[(column_name)]/
[TEMPORARY] INDEX join_index_name
[(column_name)]
T X X X
JOIN INDEX join_index_name T X X X
MACRO macro_name T X X X
METHOD [database_name.] method_name/
INSTANCE METHOD [database_name.] method_name/
CONSTRUCTOR METHOD [database_name.] method_name/
SPECIFIC METHOD [database_name.] specific_method_name
T X X
PROCEDURE [database_name.] procedure_name/
PROCEDURE [database_name.] procedure_name ATTRIBUTES/
PROCEDURE [database_name.] procedure_name ATTR/
PROCEDURE [database_name.] procedure_name ATTRS
T X X X
REPLICATION GROUP T X X X
SESSION T X X X
TABLE table_name/
TABLE join_index_name
T X X X
TRANSFORM [database_name.] UDT_name T X X
TRIGGER [database_name.] trigger_name/
TRIGGER [database_name.] table_name
T X X X
HELP, continued
Options
TYPE [database_name.] UDT_name/
TYPE [database_name.] UDT_name ATTRIBUTE/
TYPE [database_name.] UDT_name METHOD
T X X
USER user_name T X X X
VIEW view_name T X X X
VOLATILE TABLE T X X X
HELP STATISTICS/
HELP STATS/
HELP STAT
(optimizer form)
T X X X
Option
INDEX (column_name [ … , column_name])/
INDEX index_name/
COLUMN (column_name [ … ,column_name])/
COLUMN column_name/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
HELP STATISTICS/
HELP STATS/
HELP STAT
(QCD form)
T X X X
Options
INDEX (column_name [ … , column_name])/
INDEX index_name/
COLUMN (column_name [ … ,column_name])/
COLUMN column_name/
T X X X
COLUMN (column_name [ … , column_name], PARTITION
[ … , column_name])/
COLUMN (PARTITION [ … , column_name])/
COLUMN PARTITION
T X X
FOR QUERY query_ID T X X
SAMPLEID statistics_ID T X X
UPDATE MODIFIED T X X
INCLUDE A X X X
INCLUDE SQLCA T X X X
INCLUDE SQLDA T X X X
INITIATE INDEX ANALYSIS T X X X
Options
ON table_name [ … , table_name] T X X X
SET IndexesPerTable = value
[, SearchSpace = value]
[, ChangeRate = value]
[, ColumnsPerIndex = value]
T X X X
[, JoinIndexesPerTable = value]
[, ColumnsPerJoinIndex = value]
[, IndexMaintMode = value]
T X X
KEEP INDEX T X X X
USE MODIFIED STATISTICS/
USE MODIFIED STATS/
USE MODIFIED STAT
T X X X
WITH INDEX TYPE number/
WITH INDEX TYPE number [ … , number]/
WITH NO INDEX TYPE number/
WITH NO INDEX TYPE number [ … , number]
T X X X
CHECKPOINT checkpoint_trigger T X X X
INSERT/
INS
A, T X X X
Options
[VALUES] (expression [ … , expression]) A X X X
(column_name [ … , column_name]) VALUES (expression [ … ,
expression])
A X X X
[(column_name [ … , column_name])] subquery A X X X
DEFAULT VALUES A X X X
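The INSERT forms above, sketched with hypothetical tables and values:

```sql
-- Hypothetical examples of three INSERT forms from the rows above:
-- a bare VALUES list, a column list with VALUES, and a subquery.
INSERT INTO orders VALUES (101, 7, 49.95);
INSERT INTO orders (order_id, amount) VALUES (102, 19.99);
INSERT INTO orders_archive SELECT * FROM orders;
```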
INSERT EXPLAIN T X X X
Options
WITH [NO] STATISTICS T X X X
AND DEMOGRAPHICS T X X X
USING SAMPLE percentage/
USING SAMPLE percentage PERCENT
T X X
FOR table_name [ … , table_name] T X X X
AS query_plan_name T X X X
LIMIT/
LIMIT SQL/
LIMIT SQL = n
T X X X
FOR frequency T X X X
LOGOFF (embedded SQL) T X X X
Options
CURRENT/
ALL/
connection_name/
:host_variable_name
T X X X
LOGON (embedded SQL) T X X X
Options
AS connection_name/
AS :namevar
T X X X
MERGE A X X X
Options
INTO A X X X
AS correlation_name A X X X
VALUES using_expression/
(subquery)
A X X X
ON match_condition A X X X
WHEN MATCHED THEN UPDATE SET/
WHEN NOT MATCHED THEN INSERT
A X X X
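One plausible reading of the MERGE option rows above, as a sketch (table and column names are hypothetical, and the USING VALUES spelling is an assumption, not quoted from this manual):

```sql
-- Hypothetical MERGE: match on a key, update the matched row,
-- otherwise insert a new one.
MERGE INTO price_list AS t
USING VALUES (103, 12.50) AS s (item_id, price)
ON t.item_id = s.item_id
WHEN MATCHED THEN UPDATE SET price = s.price
WHEN NOT MATCHED THEN INSERT VALUES (s.item_id, s.price);
```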
MODIFY DATABASE T X X X
Options
PERMANENT = number [BYTES]/
PERM = number [BYTES]
T X X X
TEMPORARY = number [bytes] T X X X
SPOOL = number [BYTES] T X X X
ACCOUNT = 'account_ID' T X X X
[NO] FALLBACK [PROTECTION] T X X X
[BEFORE] JOURNAL/
NO [BEFORE] JOURNAL/
DUAL [BEFORE] JOURNAL
T X X X
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL AFTER JOURNAL/
NOT LOCAL AFTER JOURNAL
T X X X
DEFAULT JOURNAL TABLE = table_name T X X X
DROP DEFAULT JOURNAL TABLE [= table_name] T X X X
MODIFY PROFILE T X X X
Options
ACCOUNT = 'account_id'/
ACCOUNT = ('account_id' [ … ,'account_id'])/
ACCOUNT = NULL
T X X X
DEFAULT DATABASE = database_name/
DEFAULT DATABASE = NULL
T X X X
SPOOL = n [BYTES]/
SPOOL = NULL
T X X X
TEMPORARY = n [BYTES]/
TEMPORARY = NULL
T X X X
MODIFY PROFILE, continued
Options
PASSWORD [ATTRIBUTES] = (
EXPIRE = n,
EXPIRE = NULL,
MINCHAR = n,
MINCHAR = NULL,
MAXCHAR = n,
MAXCHAR = NULL,
DIGITS = n,
DIGITS = NULL,
SPECCHAR = c,
SPECCHAR = NULL,
MAXLOGONATTEMPTS = n,
MAXLOGONATTEMPTS = NULL,
LOCKEDUSEREXPIRE = n,
LOCKEDUSEREXPIRE = NULL,
REUSE = n,
REUSE = NULL)/
PASSWORD [ATTRIBUTES] = NULL
T X X X
MODIFY USER T X X X
Options
PERMANENT = number [BYTES]/
PERM = number [BYTES]
T X X X
PASSWORD = password [FOR USER] T X X X
STARTUP = 'string;'/
STARTUP = NULL
T X X X
RELEASE PASSWORD LOCK T X X X
TEMPORARY = n [bytes] T X X X
SPOOL = n [BYTES] T X X X
ACCOUNT = 'acct_ID'/
ACCOUNT = ('acct_ID' [ … ,'acct_ID'])
T X X X
DEFAULT DATABASE = database_name T X X X
COLLATION = collation_sequence T X X X
[NO] FALLBACK [PROTECTION] T X X X
[BEFORE] JOURNAL/
NO [BEFORE] JOURNAL/
DUAL [BEFORE] JOURNAL
T X X X
MODIFY USER, continued
Options
AFTER JOURNAL/
NO AFTER JOURNAL/
DUAL AFTER JOURNAL/
LOCAL AFTER JOURNAL/
NOT LOCAL AFTER JOURNAL
T X X X
DEFAULT JOURNAL TABLE = table_name T X X X
DROP DEFAULT JOURNAL TABLE [= table_name] T X X X
TIME ZONE = LOCAL/
TIME ZONE = [sign] quotestring/
TIME ZONE = NULL
T X X X
DATEFORM = INTEGERDATE/
DATEFORM = ANSIDATE
T X X X
DEFAULT CHARACTER SET data_type T X X X
DEFAULT ROLE T X X X
PROFILE T X X X
OPEN A X X X
Options
USING [:] host_variable_name A X X X
[INDICATOR] :host_indicator_name A X X X
USING DESCRIPTOR [:] descriptor_area A X X X
POSITION A X X X
Options
TO NEXT/
TO [STATEMENT] statement_number/
TO [STATEMENT] [:] numvar
A X X X
PREPARE A X X X
Options
INTO [:] descriptor_area A X X X
USING NAMES/
USING ANY/
USING BOTH/
USING LABELS
A X X X
PREPARE, continued
Options
FOR STATEMENT statement_number/
FOR STATEMENT [:] numvar
A X X X
FROM statement_string/
FROM [:] statement_string_var
A X X X
RENAME FUNCTION T X X X
RENAME MACRO T X X X
RENAME PROCEDURE T X X X
RENAME TABLE T X X X
RENAME TRIGGER T X X X
RENAME VIEW T X X X
REPLACE CAST T X X
Options
WITH SPECIFIC METHOD specific_method_name/
WITH METHOD method_name/
WITH INSTANCE METHOD method_name/
WITH SPECIFIC FUNCTION specific_function_name/
WITH FUNCTION function_name
T X X
AS ASSIGNMENT T X X
REPLACE FUNCTION T X X X
Options
RETURNS data_type/
RETURNS data_type CAST FROM data_type
A X X X
LANGUAGE C/
LANGUAGE CPP
A X X X
NO SQL A X X X
SPECIFIC [database_name.] function_name A X X X
CLASS AGGREGATE/
CLASS AG
T X X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X X
DETERMINISTIC/
NOT DETERMINISTIC
A X X X
REPLACE FUNCTION, continued
Options
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
A X X X
EXTERNAL/
EXTERNAL NAME function_name/
EXTERNAL NAME function_name PARAMETER STYLE SQL/
EXTERNAL NAME function_name PARAMETER STYLE
TD_GENERAL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL PARAMETER STYLE TD_GENERAL/
EXTERNAL NAME
'[F delimiter function_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
A X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
A X X
REPLACE FUNCTION (table function form) T X X X
Options
RETURNS TABLE ( column_name data_type
[ … , column_name data_type ] )
T X X X
LANGUAGE C/
LANGUAGE CPP
T X X X
NO SQL T X X X
SPECIFIC [database_name.] function_name T X X X
PARAMETER STYLE SQL T X X X
DETERMINISTIC/
NOT DETERMINISTIC
T X X X
CALLED ON NULL INPUT/
RETURNS NULL ON NULL INPUT
T X X X
REPLACE FUNCTION (table function form), continued
Options
EXTERNAL/
EXTERNAL NAME function_name/
EXTERNAL NAME function_name PARAMETER STYLE SQL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL NAME
'[F delimiter function_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
T X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
T X X
REPLACE MACRO T X X X
Options
AS T X X X
USING T X X X
REPLACE METHOD
REPLACE CONSTRUCTOR METHOD
REPLACE INSTANCE METHOD
REPLACE SPECIFIC METHOD
T X X
Options
parameter_name data_type/
parameter_name UDT_name
T X X
REPLACE METHOD, continued
Options
EXTERNAL/
EXTERNAL NAME method_name/
EXTERNAL NAME
'[F delimiter function_entry_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
T X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
T X X
REPLACE ORDERING A X X
Options
MAP WITH SPECIFIC METHOD specific_method_name/
MAP WITH METHOD method_name/
MAP WITH INSTANCE METHOD method_name/
MAP WITH SPECIFIC FUNCTION specific_function_name/
MAP WITH FUNCTION function_name
A X X
REPLACE PROCEDURE (external stored procedure form) A X X X
Options
parameter_name data_type/
IN parameter_name data_type/
OUT parameter_name data_type/
INOUT parameter_name data_type
A X X X
LANGUAGE C/
LANGUAGE CPP
A X X X
NO SQL A X X X
PARAMETER STYLE SQL/
PARAMETER STYLE TD_GENERAL
A X X X
REPLACE PROCEDURE (external stored procedure form), continued
Options
EXTERNAL/
EXTERNAL NAME procedure_name/
EXTERNAL NAME procedure_name PARAMETER STYLE SQL/
EXTERNAL NAME procedure_name PARAMETER STYLE
TD_GENERAL/
EXTERNAL PARAMETER STYLE SQL/
EXTERNAL PARAMETER STYLE TD_GENERAL/
EXTERNAL NAME
'[F delimiter function_entry_name]
[D]
[SI delimiter name delimiter include_name]
[CI delimiter name delimiter include_name]
[SL delimiter library_name]
[SO delimiter name delimiter object_name ]
[CO delimiter name delimiter object_name]
[SP delimiter package_name]
[SS delimiter name delimiter source_name]
[CS delimiter name delimiter source_name]'
A X X X
EXTERNAL SECURITY DEFINER/
EXTERNAL SECURITY DEFINER authorization_name/
EXTERNAL SECURITY INVOKER
A X X
REPLACE PROCEDURE (stored procedure form) T X X X
Options
parameter_name data_type/
IN parameter_name data_type/
OUT parameter_name data_type/
INOUT parameter_name data_type
T X X X
NOT ATOMIC T X X X
DECLARE variable-name data-type
[DEFAULT literal]/
DECLARE variable-name data-type
[DEFAULT NULL]
T X X X
REPLACE PROCEDURE (stored procedure form), continued
Options
DECLARE cursor_name [SCROLL] CURSOR FOR
cursor_specification [FOR READ ONLY]/
DECLARE cursor_name [SCROLL] CURSOR FOR
cursor_specification [FOR UPDATE]/
DECLARE cursor_name [NO SCROLL] CURSOR FOR
cursor_specification [FOR READ ONLY]/
DECLARE cursor_name [NO SCROLL] CURSOR FOR
cursor_specification [FOR UPDATE]
T X X X
DECLARE CONTINUE HANDLER/
DECLARE EXIT HANDLER
T X X X
FOR SQLSTATE sqlstate/
FOR SQLSTATE VALUE sqlstate
T X X X
FOR SQLEXCEPTION/
FOR SQLWARNING/
FOR NOT FOUND
T X X X
SET assignment_target = assignment_source T X X X
IF expression THEN statement
[ELSEIF expression THEN statement]
[ELSE statement] END IF
T X X X
CASE operand1 WHEN operand2 THEN statement [ELSE statement]
END CASE
T X X X
CASE WHEN expression THEN statement
[ELSE statement] END CASE
T X X X
ITERATE label_name T X X X
LEAVE label_name T X X X
PRINT string_literal/
PRINT print_variable_name
T X X X
SQL_statement T X X X
CALL procedure_name T X X X
OPEN cursor_name T X X X
CLOSE cursor_name T X X X
REPLACE PROCEDURE (stored procedure form), continued
Options
FETCH [[NEXT] FROM] cursor_name INTO
local_variable_name [ … , local_variable_name]/
FETCH [[FIRST] FROM] cursor_name INTO
local_variable_name [ … , local_variable_name]/
FETCH [[NEXT] FROM] cursor_name INTO parameter_reference
[ … , parameter_reference]/
FETCH [[FIRST] FROM] cursor_name INTO parameter_reference
[ … , parameter_reference]
T X X X
WHILE expression DO statement END WHILE T X X X
LOOP statement END LOOP T X X X
FOR for_loop_variable AS
[cursor_name CURSOR FOR]
SELECT column_name [AS correlation_name]
FROM table_name [WHERE clause]
[SELECT clause] DO statement_list END FOR/
FOR for_loop_variable AS
[cursor_name CURSOR FOR]
SELECT expression [AS correlation_name]
FROM table_name [WHERE clause]
[SELECT clause] DO statement_list END FOR
T X X X
REPEAT statement_list
UNTIL conditional_expression END REPEAT
T X X X
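The cursor-related rows above fit together as in this sketch (procedure, cursor, and table names are hypothetical):

```sql
-- Hypothetical sketch: DECLARE CURSOR, a CONTINUE handler for
-- NOT FOUND, then OPEN / FETCH ... INTO / CLOSE in a loop.
REPLACE PROCEDURE sum_amounts (OUT total DECIMAL(12,2))
BEGIN
   DECLARE amt DECIMAL(10,2);
   DECLARE done INTEGER DEFAULT 0;
   DECLARE c1 CURSOR FOR
      SELECT amount FROM orders FOR READ ONLY;
   DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
   SET total = 0;
   OPEN c1;
   FETCH c1 INTO amt;
   WHILE done = 0 DO
      SET total = total + amt;
      FETCH c1 INTO amt;
   END WHILE;
   CLOSE c1;
END;
```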
REPLACE TRANSFORM T X X
Options
TO SQL WITH SPECIFIC METHOD specific_method_name/
TO SQL WITH METHOD method_name/
TO SQL WITH INSTANCE METHOD method_name/
TO SQL WITH SPECIFIC FUNCTION specific_function_name/
TO SQL WITH FUNCTION function_name
T X X
FROM SQL WITH SPECIFIC METHOD specific_method_name/
FROM SQL WITH METHOD method_name/
FROM SQL WITH INSTANCE METHOD method_name/
FROM SQL WITH SPECIFIC FUNCTION specific_function_name/
FROM SQL WITH FUNCTION function_name
T X X
REPLACE TRIGGER T X X X
Options
ENABLED/
DISABLED
T X X X
BEFORE/
AFTER
T X X X
INSERT/
DELETE/
UPDATE [OF (column_list)]
T X X X
ORDER integer T X X X
REFERENCING OLD_TABLE [AS] identifier [NEW_TABLE [AS]
identifier]/
REFERENCING OLD [AS] identifier [NEW [AS] identifier]/
REFERENCING OLD TABLE [AS] identifier [NEW TABLE [AS]
identifier]/
REFERENCING OLD [ROW] [AS] identifier
[NEW [ROW] [AS] identifier]
T X X X
FOR EACH ROW/
FOR EACH STATEMENT
T X X X
WHEN (search_condition) T X X X
(SQL_proc_statement ;)/
SQL_proc_statement /
BEGIN ATOMIC (SQL_proc_statement;) END/
BEGIN ATOMIC SQL_proc_statement ; END
T X X X
REPLACE VIEW A, T X X X
Options
(column_name [ … , column_name]) T X X X
AS [ |LOCKING statement modifier| ] query_expression A, T X X X
WITH CHECK OPTION A X X X
RESTART INDEX ANALYSIS T X X X
REVOKE A, T X X X
Options
GRANT OPTION FOR A X X X
ALL/
ALL PRIVILEGES/
ALL BUT operation
A X X X
REVOKE, continued
Options
DELETE/
INSERT/
SELECT/
REFERENCES/
UPDATE/
A X X X
ALTER/
CHECKPOINT/
CREATE/
DROP/
DUMP/
EXECUTE/
INDEX/
RESTORE/
T X X X
REPLCONTROL/ T X X X
UDTMETHOD/
UDTTYPE/
UDTUSAGE
T X X
ON database_name/
ON database_name.object_name/
ON object_name/
ON PROCEDURE procedure_name/
ON SPECIFIC FUNCTION specific_function_name/
ON FUNCTION function_name/
A X X X
ON TYPE UDT_name/
ON TYPE SYSUDTLIB.UDT_name
A X X
TO [ALL] user_name/
TO PUBLIC/
FROM [ALL] user_name/
FROM PUBLIC
T X X X
REVOKE LOGON T X X X
Options
ON host_id/
ON ALL
T X X X
AS DEFAULT/
TO database_name/
FROM database_name
T X X X
REVOKE MONITOR/
REVOKE monitor_privilege
T X X X
Options
GRANT OPTION FOR T X X X
PRIVILEGES/
BUT NOT monitor_privilege
T X X X
TO [ALL] user_name/
TO PUBLIC/
FROM [ALL] user_name/
FROM PUBLIC
T X X X
REVOKE ROLE A X X X
Options
ADMIN OPTION FOR A X X X
REWIND T X X X
ROLLBACK A, T X X X
Options
WORK A X X X
WORK RELEASE T X X X
'abort_message' T X X X
FROM_clause T X X X
WHERE_clause T X X X
SELECT/
SEL
A, T X X X
Options
|WITH [RECURSIVE] statement modifier| A X X X
DISTINCT/
ALL
A X X X
TOP integer [WITH TIES]/
TOP integer PERCENT [WITH TIES]/
TOP decimal [WITH TIES]/
TOP decimal PERCENT [WITH TIES]
T X X X
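TOP is a Teradata extension; WITH TIES requires an ORDER BY so that ties at the cutoff can be identified. A sketch against a hypothetical daily_sales table:

```sql
-- Returns the ten highest sales rows, plus any rows tied with the tenth.
-- daily_sales, item_no, and sales are invented names.
SELECT TOP 10 WITH TIES item_no, sales
FROM daily_sales
ORDER BY sales DESC;
```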
SELECT, continued
Options
*/
expression/
expression [AS] alias_name/
table_name.*/
A X X X
*.ALL/
table_name.*.ALL/
column_name.ALL
T X X
SAMPLEID T X X X
FROM table_name/
FROM table_name [AS] alias_name/
FROM join_table_name JOIN joined_table ON search_condition/
FROM join_table_name INNER JOIN joined_table ON
search_condition/
FROM join_table_name LEFT JOIN joined_table ON
search_condition/
FROM join_table_name LEFT OUTER JOIN
joined_table ON search_condition/
FROM join_table_name RIGHT JOIN joined_table ON
search_condition/
FROM join_table_name RIGHT OUTER JOIN joined_table ON
search_condition/
FROM join_table_name FULL JOIN joined_table ON
search_condition/
FROM join_table_name FULL OUTER JOIN
joined_table ON search_condition/
FROM join_table_name CROSS JOIN/
FROM (subquery) [AS] derived_table_name/
FROM (subquery) [AS] derived_table_name (column_name)/
FROM TABLE (function_name([expression
[ … , expression]])) [AS] derived_table_name/
FROM TABLE (function_name([expression
[ … , expression]])) [AS] derived_table_name (column_name [ … ,
column_name])
A X X X
|WHERE statement modifier| A X X X
|GROUP BY statement modifier| A, T X X X
|HAVING statement modifier| A X X X
|QUALIFY statement modifier| T X X X
|SAMPLE statement modifier| T X X X
|ORDER BY statement modifier| A, T X X X
|WITH statement modifier| T X X X
SELECT AND CONSUME TOP 1 T X X X
Options
FROM queue_table_name T X X X
SELECT … INTO/
SEL … INTO
A, T X X X
Options
DISTINCT/
ALL
A X X X
AND CONSUME TOP 1 T X X X
expression/
expression [AS] alias_name
A X X X
FROM table_name/
FROM table_name [AS] alias_name/
FROM join_table_name JOIN joined_table ON search_condition/
FROM join_table_name INNER JOIN joined_table ON
search_condition/
FROM join_table_name LEFT JOIN joined_table ON
search_condition/
FROM join_table_name LEFT OUTER JOIN
joined_table ON search_condition/
FROM join_table_name RIGHT JOIN joined_table ON
search_condition/
FROM join_table_name RIGHT OUTER JOIN joined_table ON
search_condition/
FROM join_table_name FULL JOIN joined_table ON
search_condition/
FROM join_table_name FULL OUTER JOIN
joined_table ON search_condition/
FROM join_table_name CROSS JOIN/
FROM (subquery) [AS] derived_table_name/
FROM (subquery) [AS] derived_table_name (column_name)
A X X X
|WHERE statement modifier| A X X X
SET BUFFERSIZE (embedded SQL) T X X X
SET CHARSET (embedded SQL) T X X X
SET CONNECTION (embedded SQL) T X X X
SET CRASH (embedded SQL) T X X X
Options
WAIT_NOTELL/
NOWAIT_TELL
T X X X
SET ROLE A, T X X X
Options
role_name/
NONE/
A X X X
NULL/
ALL/
EXTERNAL
T X X X
SET SESSION ACCOUNT/
SS ACCOUNT
T X X X
Options
FOR SESSION/
FOR REQUEST
T X X X
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION
LEVEL/
SS CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL
A X X
Options
RU/
READ UNCOMMITTED/
SR/
SERIALIZABLE
A X X
SET SESSION COLLATION/
SS COLLATION
T X X X
SET SESSION DATABASE/
SS DATABASE
T X X X
SET SESSION DATEFORM/
SS DATEFORM
T X X X
Options
ANSIDATE/
INTEGERDATE
T X X X
SET SESSION FUNCTION TRACE/
SS FUNCTION TRACE
T X X X
Options
OFF/
USING mask FOR TABLE table_name/
USING mask FOR TRACE TABLE table_name
T X X X
SET SESSION OVERRIDE REPLICATION/
SS OVERRIDE REPLICATION
T X X X
Options
OFF/
ON
T X X X
SET TIME ZONE T X X X
Options
LOCAL/
INTERVAL offset HOUR TO MINUTE/
USER
T X X X
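The offset form of SET TIME ZONE takes an interval literal. A sketch (the offset value is arbitrary):

```sql
-- Sets the session time zone to a fixed UTC offset for this session.
SET TIME ZONE INTERVAL '08:00' HOUR TO MINUTE;
```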
SHOW T X X X
Options
QUALIFIED T X X X
SHOW CAST T X
SHOW FUNCTION
SHOW SPECIFIC FUNCTION
T X X X
SHOW HASH INDEX T X X X
SHOW JOIN INDEX T X X X
SHOW MACRO T X X X
SHOW METHOD
SHOW CONSTRUCTOR METHOD
SHOW INSTANCE METHOD
SHOW SPECIFIC METHOD
T X X
SHOW PROCEDURE T X X X
SHOW REPLICATION GROUP T X X X
SHOW [TEMPORARY] TABLE T X X X
SHOW TRIGGER T X X X
SHOW TYPE T X X
SHOW VIEW T X X X
TEST T X X X
Options
async_statement_identifier/
:namevar
T X X X
COMPLETION T X X X
UPDATE/
UPD
(searched form)
A, T X X X
Options
table_name A X X X
[AS] alias_name/
FROM table_name [[AS] alias_name]
[ … , table_name [[AS] alias_name]]
A, T X X X
SET column_name=expression [ … , column_name=expression]/ A X X X
SET column_name=expression [ … , column_name=expression]
[ … , column_name.mutator_name=expression]/
SET column_name.mutator_name=expression
[ … , column_name.mutator_name=expression]
[ … , column_name=expression]
A X X
ALL T X X X
|WHERE statement modifier| A X X X
UPDATE/
UPD
(positioned form)
A X X X
Options
table_name [alias_name] A X X X
SET column_name=expression [ … , column_name=expression] A X X X
WHERE CURRENT OF cursor_name A X X X
UPDATE/
UPD
(upsert form)
T X X X
Options
table_name_1 T X X X
UPDATE (upsert form), continued
Options
SET column_name=expression [ … , column_name=expression]/ T X X X
SET column_name=expression [ … , column_name=expression]
[ … , column_name.mutator_name=expression]/
SET column_name.mutator_name=expression
[ … , column_name.mutator_name=expression]
[ … , column_name=expression]
T X X
|WHERE statement modifier| T X X X
ELSE INSERT [INTO] table_name_2/
ELSE INS [INTO] table_name_2
T X X X
[(column_name [ … , column_name])] VALUES (expression)/
DEFAULT VALUES
T X X X
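The upsert form above performs the UPDATE and, if no row qualifies, the ELSE INSERT, as a single atomic request. A hedged sketch with an invented sales_tbl:

```sql
-- Updates the row for item 25 if it exists; otherwise inserts it.
-- sales_tbl, item_no, and amount are hypothetical names.
UPDATE sales_tbl
SET amount = amount + 100
WHERE item_no = 25
ELSE INSERT INTO sales_tbl (item_no, amount)
VALUES (25, 100);
```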
WAIT T X X X
Options
async_statement_identifier COMPLETION/
ALL COMPLETION/
ANY COMPLETION INTO [:] stmtvar, [:] sessvar
T X X X
WHENEVER A, T X X X
Request Modifier
EXPLAIN T X X X
Statement Modifiers
ASYNC T X X X
EXEC SQL A X X X
GROUP BY clause A, T X X X
Options
CUBE/
GROUPING SETS/
ROLLUP
A X X X
HAVING clause A X X X
LOCKING/
LOCK
T X X X
Options
DATABASE database_name/
TABLE table_name/
VIEW view_name/
ROW
T X X X
FOR/
IN
T X X X
ACCESS/
EXCLUSIVE/
EXCL/
SHARE/
WRITE/
CHECKSUM/
READ/
READ OVERRIDE
T X X X
MODE T X X X
NOWAIT T X X X
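The LOCKING modifier prefixes a request to override the default lock severity. A common use is an ACCESS lock for dirty reads; a sketch against a hypothetical inventory table:

```sql
-- Reads through concurrent write locks rather than waiting for them;
-- the result may include uncommitted data. inventory is an invented name.
LOCKING TABLE inventory FOR ACCESS
SELECT item_no, qty_on_hand
FROM inventory;
```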
Statement Modifiers, continued
ORDER BY clause A, T X X X
Options
expression T X X X
column_name/
column_position
A X X X
ASC/
DESC
A X X X
QUALIFY clause T X X X
SAMPLE clause T X X X
Options
WITH REPLACEMENT T X X X
RANDOMIZED ALLOCATION T X X X
USING row descriptor T X X X
Options
AS DEFERRED/
AS LOCATOR
T X X X
WHERE clause A X X X
WITH clause T X X X
Options
expression_1 T X X X
BY expression_2 T X X X
ASC/
DESC
T X X X
WITH [RECURSIVE] clause A X X X
Options
(column_name [ … , column_name]) A X X X
AS (seed_statement [UNION ALL recursive_statement]
[ … UNION ALL seed_statement]
[ … UNION ALL recursive_statement])
A X X X
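The WITH RECURSIVE form pairs a seed statement with one or more UNION ALL recursive statements. A hedged sketch that walks a hypothetical employee/manager hierarchy:

```sql
-- Illustration only: employee, emp_no, and mgr_no are invented names.
WITH RECURSIVE reports_to (emp_no, mgr_no, depth) AS (
   SELECT emp_no, mgr_no, 0
   FROM employee
   WHERE emp_no = 1001            -- seed statement
   UNION ALL
   SELECT e.emp_no, e.mgr_no, r.depth + 1
   FROM employee e, reports_to r  -- recursive statement
   WHERE e.mgr_no = r.emp_no
   AND r.depth < 10               -- guard against unbounded recursion
)
SELECT * FROM reports_to;
```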
Data Types and Literals
The following list contains all SQL data types and literals for this version and previous
versions of Teradata Database.
The following type codes appear in the ANSI Compliance column.
Code Definition
A ANSI
T Teradata extension
Data Type / Literal
ANSI
Compliance V2R6.2 V2R6.1 V2R6.0
Data Types
BIGINT A X
BINARY LARGE OBJECT, BLOB A X X X
BYTE T X X X
BYTEINT T X X X
CHAR, CHARACTER A X X X
CHAR VARYING, CHARACTER VARYING A X X X
CHARACTER LARGE OBJECT, CLOB A X X X
DATE A, T X X X
DEC, DECIMAL A X X X
DOUBLE PRECISION A X X X
FLOAT A X X X
GRAPHIC T X X X
INT, INTEGER A X X X
INTERVAL DAY A X X X
INTERVAL DAY TO HOUR A X X X
INTERVAL DAY TO MINUTE A X X X
INTERVAL DAY TO SECOND A X X X
INTERVAL HOUR A X X X
INTERVAL HOUR TO MINUTE A X X X
Data Types, continued
INTERVAL HOUR TO SECOND A X X X
INTERVAL MINUTE A X X X
INTERVAL MINUTE TO SECOND A X X X
INTERVAL MONTH A X X X
INTERVAL SECOND A X X X
INTERVAL YEAR A X X X
INTERVAL YEAR TO MONTH A X X X
LONG VARCHAR T X X X
LONG VARGRAPHIC T X X X
NUMERIC A X X X
REAL A X X X
SMALLINT A X X X
TIME A X X X
TIME WITH TIME ZONE A X X X
TIMESTAMP A X X X
TIMESTAMP WITH TIME ZONE A X X X
user-defined type (UDT) A X X
VARBYTE T X X X
VARCHAR A X X X
VARGRAPHIC T X X X
Literals
Character data A X X X
DATE A X X X
Decimal A X X X
Floating point A X X X
Graphic T X X X
Hexadecimal T X X X
Integer A X X X
Literals, continued
Interval A X X X
TIME A X X X
TIMESTAMP A X X X
Data Type Attributes
AS output format phrase A X X X
CASESPECIFIC/NOT CASESPECIFIC phrase/
CS/NOT CS phrase
T X X X
CHARACTER SET A X X X
CHECK table constraint attribute A X X X
COMPRESS/
COMPRESS NULL/
COMPRESS string/
COMPRESS value
column storage attribute
T X X X
COMPRESS (value_list) column storage attribute T X X X
CONSTRAINT/
CONSTRAINT CHECK/
CONSTRAINT PRIMARY KEY/
CONSTRAINT REFERENCES/
CONSTRAINT UNIQUE
column constraint attribute
T X X X
DEFAULT constant_value/
DEFAULT DATE quotestring/
DEFAULT INTERVAL quotestring/
DEFAULT TIME quotestring/
DEFAULT TIMESTAMP quotestring
default value control phrase
A X X X
FOREIGN KEY table constraint attribute A X X X
FORMAT output format phrase T X X X
NAMED output format phrase T X X X
NOT NULL default value control phrase A X X X
PRIMARY KEY table constraint attribute A X X X
REFERENCES table constraint attribute A X X X
TITLE output format phrase T X X X
UC, UPPERCASE phrase T X X X
Data Type Attributes, continued
UNIQUE table constraint attribute A X X X
WITH CHECK OPTION/
WITH NO CHECK OPTION
column constraint attribute
T X X X
WITH DEFAULT default value control phrase T X X X
Functions, Operators, and Expressions
The following list contains all SQL functions, operators, and expressions for this version and
previous versions of Teradata Database.
The following type codes appear in the ANSI Compliance column:
Code Definition
A ANSI
P Partially ANSI-compliant
T Teradata extension
Function / Operator / Expression
ANSI
Compliance V2R6.2 V2R6.1 V2R6.0
- (subtract) A X X X
- (unary minus) A X X X
* (multiply) A X X X
** (exponentiate) T X X X
/ (divide) A X X X
^= (inequality) T X X X
+ (add) A X X X
+ (unary plus) A X X X
< (less than) A X X X
<= (less than or equal) A X X X
<> (inequality) A X X X
= (equality) A X X X
> (greater than) A X X X
>= (greater than or equal) A X X X
ABS T X X X
ACCOUNT T X X X
ACOS T X X X
ACOSH T X X X
ADD_MONTHS T X X X
ALL A X X X
AND A X X X
ANY A X X X
ASIN T X X X
ASINH T X X X
ATAN T X X X
ATAN2 T X X X
ATANH T X X X
AVE/
AVERAGE/
T X X X
AVG A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
ROWS window_frame_extent A X X X
BETWEEN
NOT BETWEEN
A X X X
BYTE/
BYTES
T X X X
CASE A X X X
CASE_N T X X X
CAST A, T X X X
CHAR/
CHARACTERS/
CHARS
T X X X
CHAR_LENGTH/
CHARACTER_LENGTH
A X X X
CHAR2HEXINT T X X X
COALESCE A X X X
CORR A X X X
COS T X X X
COSH T X X X
COUNT A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
ROWS window_frame_extent A X X X
COVAR_POP A X X X
COVAR_SAMP A X X X
CSUM T X X X
CURRENT_DATE A X X X
CURRENT_TIME A X X X
CURRENT_TIMESTAMP A X X X
DATABASE T X X X
DATE T X X X
DEFAULT A, T X
EQ T X X X
EXCEPT A, T X X X
Options
ALL T X X X
EXISTS
NOT EXISTS
A X X X
EXP T X X X
EXTRACT P X X X
FORMAT T X X X
GE T X X X
GROUPING A X X X
GT T X X X
HASHAMP T X X X
HASHBAKAMP T X X X
HASHBUCKET T X X X
HASHROW T X X X
IN
NOT IN
A X X X
INDEX T X X X
INTERSECT A, T X X X
Options
ALL T X X X
IS NULL
IS NOT NULL
A X X X
KURTOSIS A X X X
LE T X X X
LIKE
NOT LIKE
A X X X
LN T X X X
LOG T X X X
LOWER A X X X
LT T X X X
MAVG T X X X
MAX/
MAXIMUM
A
T
X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
ROWS window_frame_extent A X X X
MCHARACTERS T X X X
MDIFF T X X X
MIN/
MINIMUM
A
T
X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
ROWS window_frame_extent A X X X
MINUS T X X X
Options
ALL T X X X
MLINREG T X X X
MOD T X X X
MSUM T X X X
NE T X X X
NEW P X X
NOT A X X X
NOT= T X X X
NULLIF A X X X
NULLIFZERO T X X X
OCTET_LENGTH A X X X
OR A X X X
OVERLAPS A X X X
PERCENT_RANK A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
POSITION A X X X
PROFILE T X X X
QUANTILE T X X X
RANDOM T X X X
RANGE_N T X X X
RANK T X X X
RANK A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
REGR_AVGX A X X X
REGR_AVGY A X X X
REGR_COUNT A X X X
REGR_INTERCEPT A X X X
REGR_R2 A X X X
REGR_SLOPE A X X X
REGR_SXX A X X X
REGR_SXY A X X X
REGR_SYY A X X X
ROLE T X X X
ROW_NUMBER A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
SESSION T X X X
SIN T X X X
SINH T X X X
SKEW A X X X
SOME A X X X
SOUNDEX T X X X
SQRT T X X X
STDDEV_POP A X X X
STDDEV_SAMP A X X X
SUBSTR T X X X
SUBSTRING A X X X
SUM A X X X
Options
OVER A X X X
PARTITION BY value_expression A X X X
ORDER BY value_expression A X X X
ROWS window_frame_extent A X X X
TAN T X X X
TANH T X X X
TIME T X X X
TITLE T X X X
TRANSLATE A X X X
TRANSLATE_CHK T X X X
TRIM P X X X
TYPE T X X X
UNION A, T X X X
Options
ALL T X X X
UPPER A X X X
USER A X X X
VAR_POP A X X X
VAR_SAMP A X X X
VARGRAPHIC T X X X
WIDTH_BUCKET A X X X
ZEROIFNULL T X X X
Glossary
AMP Access Module Processor vproc
ANSI American National Standards Institute
BLOB Binary Large Object
BTEQ Basic TEradata Query facility
BYNET Banyan Network - High speed interconnect
CJK Chinese, Japanese, and Korean
CLIv2 Call Level Interface Version 2
CLOB Character Large Object
cs0, cs1, cs2, cs3 Four code sets (codeset 0, 1, 2, and 3) used in EUC encoding.
distinct type A UDT that is based on a single predefined data type
E2I External-to-Internal
EUC Extended UNIX Code
external routine UDF, UDM, or external stored procedure that is written using C or C++
external stored procedure a stored procedure that is written using C or C++
FK Foreign Key
HI Hash Index
I2E Internal-to-External
JI Join Index
JIS Japanese Industrial Standards
LOB Large Object
LT/ST Large Table/Small Table (join)
NPPI Non-Partitioned Primary Index
NUPI Non-Unique Primary Index
NUSI Non-Unique Secondary Index
OLAP On-Line Analytical Processing
OLTP On-Line Transaction Processing
QCD Query Capture Database
PDE Parallel Database Extensions
PE Parsing Engine vproc
PI Primary Index
PK Primary Key
PPI Partitioned Primary Index
predefined type Teradata Database system type such as INTEGER and VARCHAR
RDBMS Relational Database Management System
SDF Specification for Data Formatting
stored procedure a stored procedure that is written using SQL statements
structured type A UDT that is a collection of one or more fields called attributes, each of
which is defined as a predefined data type or other UDT (which allows nesting)
UCS-2 Universal Coded Character Set containing 2 bytes
UDF User-Defined Function
UDM User-Defined Method
UDT User-Defined Type
UPI Unique Primary Index
USI Unique Secondary Index
vproc Virtual Process
Index
Numerics
2PC, request processing 124
A
ABORT statement 220
ABS function 281
ACCOUNT function 281
Account priority 141
ACOS function 281
ACOSH function 281
ACTIVITY_COUNT 144
ADD_MONTHS function 281
Aggregate join index 31
Aggregates, null and 137
ALL predicate 281
ALTER FUNCTION statement 220
ALTER METHOD statement 220
ALTER PROCEDURE statement 220
ALTER REPLICATION GROUP statement 221
ALTER SPECIFIC FUNCTION statement 220
ALTER SPECIFIC METHOD statement 220
ALTER TABLE statement 103, 221
ALTER TRIGGER statement 223
ALTER TYPE statement 223
Alternate key 37
AND operator 281
ANSI compliance and 218
ANSI DateTime, null and 134
ANSI SQL
differences 218
Teradata compliance with 214
Teradata extensions to 218
Teradata terminology and 216
terminology differences 216
ANY predicate 281
ARC
hash indexes and 35
join indexes and 32
referential integrity and 41
Archive and Recovery. See ARC
Arithmetic function, nulls and 134
Arithmetic operators, nulls and 134
AS data type attribute 279
ASCII session character set 139
ASIN function 281
ASINH function 281
ASYNC statement modifier 275
ATAN function 281
ATAN2 function 281
ATANH function 281
AVE function 281
AVERAGE function 281
AVG function 281
B
BEGIN DECLARE SECTION statement 223
BEGIN LOGGING statement 224
BEGIN QUERY LOGGING statement 224
BEGIN TRANSACTION statement 225
BETWEEN predicate 281
BIGINT data type 277
BINARY LARGE OBJECT. See BLOB
BLOB data type 277
BYTE data type 277
Byte data types 15
BYTE function 281
BYTEINT data type 277
BYTES function 281
C
CALL statement 225
Call-Level Interface. See CLI
Cardinality, defined 2
CASE expression 281
CASE_N function 281
CASESPECIFIC data type attribute 279
CAST function 281
CD-ROM images v
CHAR data type 277
CHAR function 282
CHAR VARYING data type 277
CHAR_LENGTH function 282
CHAR2HEXINT function 282
Character data literal 278
CHARACTER data type 277
Character data types 13
CHARACTER LARGE OBJECT. See CLOB
Character literals 89
Character names 77
CHARACTER SET data type attribute 279
Character set, request change of 142
Character sets, Teradata SQL lexicon 67
CHARACTER VARYING data type 277
CHARACTER_LENGTH function 282
CHARACTERS function 282
CHARS function 282
CHECK data type attribute 279
CHECKPOINT statement 225
Child table, defined 37
Circular reference, referential integrity 39
Classes of UDFs
aggregate 54
scalar 54
CLI
session management 143
CLOB data type 277
CLOSE statement 225
COALESCE expression 282
Collation sequences (SQL) 140
COLLECT DEMOGRAPHICS statement 225
COLLECT STAT INDEX statement 227
COLLECT STAT statement 226
COLLECT STATISTICS INDEX statement 227
COLLECT STATISTICS statement 226
COLLECT STATS INDEX statement 227
COLLECT STATS statement 226
Collecting statistics 164
Column alias 72
Columns
definition 12
referencing, syntax for 72
COMMENT statement 227
Comments
bracketed 96
multibyte character sets and 96
simple 95
COMMIT statement 228
Comparison operators, null and 135
COMPRESS data type attribute 279
CONNECT statement 228
Constants. See Literals
CONSTRAINT data type attribute 279
CORR function 282
COS function 282
COSH function 282
COVAR_SAMP function 282
Covering index 31
Covering, secondary index, non-unique, and 27
CREATE AUTHORIZATION statement 228
CREATE CAST statement 229
CREATE DATABASE statement 229
CREATE FUNCTION statement 55, 229
CREATE HASH INDEX statement 231
CREATE INDEX statement 232
CREATE JOIN INDEX statement 232
CREATE MACRO statement 233
CREATE METHOD statement 233
CREATE ORDERING statement 234
CREATE PROCEDURE statement 53, 234
CREATE PROFILE statement 236
CREATE RECURSIVE VIEW statement 244
CREATE REPLICATION GROUP statement 237
CREATE ROLE statement 237
CREATE TABLE statement 237
CREATE TRANSFORM statement 239
CREATE TRIGGER statement 239
CREATE TYPE statement 240, 241
CREATE USER statement 242
CREATE VIEW statement 243
CS data type attribute 279
CSUM function 282
CURRENT_DATE function 282
CURRENT_TIME function 282
CURRENT_TIMESTAMP function 282
Cylinder reads 164
D
Data Control Language. See DCL
Data Definition Language. See DDL
Data Manipulation Language. See DML
Data types
byte 15
character 13
DateTime 14
definition 13
interval 14
numeric 13
UDT 15, 58
Data, standard form of, Teradata Database 71
Database
default, establishing for session 76
default, establishing permanent 75
DATABASE function 282
DATABASE statement 244
Database, defined 1
DATE data type 277
DATE function 282
DATE literal 278
Date literals 88
Date, change format of 142
DateTime data types 14
DCL statements, defined 105
DDL
CREATE FUNCTION 55
CREATE PROCEDURE 53
REPLACE FUNCTION 55
REPLACE PROCEDURE 53
DDL statements, defined 101
DEC data type 277
DECIMAL data type 277
Decimal literal 278
Decimal literals 87
DECLARE CURSOR statement 244
DECLARE STATEMENT statement 244
DECLARE TABLE statement 244
DEFAULT data type attribute 279
DEFAULT function 282
Degree, defined 2
DELETE DATABASE statement 245
DELETE statement 245
DELETE USER statement 245
Delimiters 93
DESCRIBE statement 246
DIAGNOSTIC "validate index" statement 246
DIAGNOSTIC DUMP SAMPLES statement 246
DIAGNOSTIC HELP SAMPLES statement 246
DIAGNOSTIC SET SAMPLES statement 246
Distinct UDTs 58
DML statements, defined 106
DOUBLE PRECISION data type 277
DROP AUTHORIZATION statement 246
DROP CAST statement 246
DROP DATABASE statement 246
DROP FUNCTION statement 246
DROP HASH INDEX statement 246
DROP INDEX statement 247
DROP JOIN INDEX statement 247
DROP MACRO statement 247
DROP ORDERING statement 247
DROP PROCEDURE statement 247
DROP PROFILE statement 247, 256
DROP REPLICATION GROUP statement 247
DROP ROLE statement 247
DROP SPECIFIC FUNCTION statement 246
DROP STATISTICS statement 247
DROP TABLE statement 248
DROP TRANSFORM statement 248
DROP TRIGGER statement 248
DROP TYPE statement 248
DROP USER statement 246
DROP VIEW statement 248
DUMP EXPLAIN statement 248
E
EBCDIC session character set 139
ECHO statement 248
Embedded SQL
binding style 100
macros 46
END DECLARE SECTION statement 248
END LOGGING statement 249
END QUERY LOGGING statement 249
END TRANSACTION statement 249
END-EXEC statement 248
EQ operator 282
Event processing
SELECT AND CONSUME and 133
EXCEPT operator 282
EXEC SQL statement modifier 275
Executable SQL statements 119
EXECUTE IMMEDIATE statement 249
EXECUTE statement 249
EXISTS predicate 282
EXP function 283
EXPLAIN request modifier 19, 21, 275
Express logon 142
External stored procedures 53
usage 53
EXTRACT function 283
F
Fallback
hash indexes and 35
join indexes and 32
FastLoad
hash indexes and 35
join indexes and 32
referential integrity and 42
FETCH statement 250
FLOAT data type 277
Floating point literal 278
Floating point literals 87
Foreign key
defined 16
maintaining 40
FOREIGN KEY data type attribute 279
Foreign key. See also Key
Foreign key. See also Referential integrity
FORMAT data type attribute 279
FORMAT function 283
Full table scan 163
G
GE operator 283
general information about Teradata vi
GET CRASH statement 250
GIVE statement 250
GRANT statement 250
GRAPHIC data type 277
Graphic literal 278
Graphic literals 89
GROUP BY statement modifier 275
GROUPING function 283
GT operator 283
H
Hash buckets 18
Hash index
ARC and 35
effects of 35
MultiLoad and 35
permanent journal and 35
TPump and 35
Hash mapping 18
HASHAMP function 283
HASHBAKAMP function 283
HASHBUCKET function 283
HASHROW function 283
HAVING statement modifier 275
HELP statement 252
HELP statements 116
HELP STATISTICS statement 253
Hexadecimal
get representation of name 84
Hexadecimal literal 278
Hexadecimal literals 87
I
IN predicate 283
INCLUDE SQLCA statement 254
INCLUDE SQLDA statement 254
INCLUDE statement 254
Index
advantages of 18
covering 31
defined 17
disadvantages of 18
dropping 105
EXPLAIN, using 21
hash mapping and 18
join 20
keys and 16
maximum number of columns 206
non-unique 19
partitioned 20
row hash value and 17
RowID and 17
selectivity of 17
types of (Teradata) 19
unique 19
uniqueness value and 17
INDEX function 283
Information Products Publishing Library v
INITIATE INDEX ANALYSIS statement 254
INSERT EXPLAIN statement 255
INSERT statement 254
INT data type 277
INTEGER data type 277
Integer literal 278
Integer literals 87
INTERSECT operator 283
Interval data types 14
INTERVAL DAY data type 277
INTERVAL DAY TO HOUR data type 277
INTERVAL DAY TO MINUTE data type 277
INTERVAL DAY TO SECOND data type 277
INTERVAL HOUR data type 277
INTERVAL HOUR TO MINUTE data type 277
INTERVAL HOUR TO SECOND data type 278
Interval literal 279
Interval literals 88
INTERVAL MINUTE data type 278
INTERVAL MINUTE TO SECOND data type 278
INTERVAL MONTH data type 278
INTERVAL SECOND data type 278
INTERVAL YEAR data type 278
INTERVAL YEAR TO MONTH data type 278
IS NOT NULL predicate 283
IS NULL predicate 283
Iterated requests 127
J
Japanese character code notation, how to read 171
Japanese character names 77
JDBC 100
Join index
aggregate 31
described 30
effects of 32
multitable 31
performance and 33
queries using 33
single-table 31
sparse 32
Join Index. See also Index
K
Key
alternate 37
foreign 16
indexes and 16
primary 16
referential integrity and 16
Keywords 66
NULL 90
KURTOSIS function 283
L
LE operator 283
Lexical separators 94
LIKE predicate 283
Limits
database 206
session 211
system 204
Literals
character 89
date 88
decimal 87
floating point 87
graphic 89
hexadecimal 87
integer 87
interval 88
time 88
timestamp 88
LN function 283
LOCKING statement modifier 275
LOG function 283
LOGOFF statement 255
LOGON statement 255
Logon, express 142
LONG VARCHAR data type 278
LONG VARGRAPHIC data type 278
LOWER function 283
LT operator 283
M
Macros
contents 47
defined 46
executing 47
maximum expanded text size 207
maximum number of parameters 207
SQL statements and 46
MAVG function 283
MAX function 284
MAXIMUM function 284
MCHARACTERS function 284
MDIFF function 284
MERGE statement 255
MIN function 284
MINIMUM function 284
MINUS operator 284
MLINREG function 284
MOD operator 284
MODIFY DATABASE statement 256
MODIFY USER statement 257
MSUM function 284
MultiLoad
hash indexes and 35
join indexes and 32
referential integrity and 42
Multi-statement requests, performance 125
Multi-statement transactions 125
Multitable join index 31
N
Name
calculate length of 78
fully qualified 72
get hexadecimal representation 84
identify in logon string 86
maximum size 206
multiword 69
object 77
resolving 74
translation and storage 81
NAMED data type attribute 279
NE operator 284
NEW expression 284
Nonexecutable SQL statements 120
Non-partitioned primary index. See NPPI.
Non-unique index. See Index, Primary index, Secondary
index
NOT BETWEEN predicate 281
NOT CASESPECIFIC data type attribute 279
NOT CS data type attribute 279
NOT EXISTS predicate 282
NOT IN predicate 283
NOT LIKE predicate 283
NOT NULL data type attribute 279
NOT operator 284
NOT= operator 284
NPPI 20
Null
aggregates and 137
ANSI DateTime and 134
arithmetic functions and 134
arithmetic operators and 134
collation sequence 136
comparison operators and 135
excluding 135
operations on (SQL) 134
searching for 136
searching for, null and non-null 136
NULL keyword 90
Null statement 98
NULLIF expression 284
NULLIFZERO function 284
NUMERIC data type 278
Numeric data types 13
NUPI. See Primary index, non-unique
NUSI. See Secondary index, non-unique
O
Object names 77
Object, name comparison 82
OCTET_LENGTH function 284
ODBC 100
OPEN statement 258
Operators 91
OR operator 284
ORDER BY statement modifier 276
ordering publications v
OVERLAPS operator 285
P
Parallel step processing 125
Parameters, session 138
Parent table, defined 37
Partial cover 30
Partition elimination 159
Partitioned primary index. See PPI.
PERCENT_RANK function 285
Permanent journal
creating 2
hash indexes and 35
join indexes and 32
POSITION function 285
POSITION statement 258
PPI
defined 20
maximum number of partitions 206
partition elimination and 159
Precedence, SQL operators 91
PREPARE statement 258
Primary index
choosing 23
default 22
described 22
non-unique 23
NULL and 136
summary 24
unique 23
PRIMARY KEY data type attribute 279
Primary key, defined 16
Primary key. See also Key
Procedure, dropping 105
product-related information v
PROFILE function 285
Profiles 55
publications related to this release v
Q
QCD tables
populating 115
QUALIFY statement modifier 276
QUANTILE function 285
Query Capture Database. See QCD
Query processing
access types 162
all AMP request 156
AMP sort 158
BYNET merge 159
defined 153
full table scan 163
single AMP request 154
single AMP response 156
Query, defined 153
R
RANDOM function 285
RANGE_N function 285
RANK function 285
REAL data type 278
Recursive queries (SQL) 112
Recursive query, defined 112
REFERENCES data type attribute 279
Referential integrity
ARC and 41
circular references and 39
described 36
FastLoad and 42
foreign keys and 39
importance of 38
MultiLoad and 42
terminology 37
REGR_AVGX function 285
REGR_AVGY function 285
REGR_COUNT function 285
REGR_INTERCEPT function 285
REGR_R2 function 285
REGR_SLOPE function 285
REGR_SXX function 285
REGR_SXY function 285
REGR_SYY function 285
release definition v
RENAME FUNCTION statement 259
RENAME MACRO statement 259
RENAME PROCEDURE statement 259
RENAME TABLE statement 259
RENAME TRIGGER statement 259
RENAME VIEW statement 259
REPLACE CAST statement 259
REPLACE FUNCTION statement 55, 259
REPLACE MACRO statement 261
REPLACE METHOD statement 261
REPLACE ORDERING statement 262
REPLACE PROCEDURE statement 53, 262, 263
REPLACE TRANSFORM statement 265
REPLACE TRIGGER statement 266
REPLACE VIEW statement 266
Request processing
2PC 124
ANSI mode 123
Teradata mode 123
Request terminator 96
Requests
iterated 127
maximum size 207
multi-statement 120
single-statement 120
Requests. See also Blocked requests, Multi-statement
requests, Request processing
Reserved words 219
RESTART INDEX ANALYSIS statement 266
Restricted words 173
REVOKE statement 266
REWIND statement 268
ROLE function 285
Roles 57
ROLLBACK statement 268
ROW_NUMBER function 286
Rows, maximum size 206
S
SAMPLE statement modifier 276
Secondary index
defined 25
dual 28
non-unique 26
bit mapping 28
covering and 27
value-ordered 27
NULL and 136
summary 29
unique 26
using Teradata Index Wizard 21
Security, user-level password attributes 56
Seed statements 113
SELECT statement 268
Selectivity
high 17
low 17
Semicolon
null statement 98
request terminator 96
statement separator 94
Separator
lexical 94
statement 94
Session character set
ASCII 139
EBCDIC 139
UTF16 139
UTF8 139
Session collation 140
Session control 138
SESSION function 286
Session handling, session control 144
Session management
CLI 143
ODBC 143
requests 144
session reserve 143
Session parameters 138
SET BUFFERSIZE statement 270
SET CHARSET statement 270
SET CONNECTION statement 270
SET CRASH statement 270
SET ROLE statement 271
SET SESSION ACCOUNT statement 271
SET SESSION CHARACTERISTICS AS TRANSACTION
ISOLATION LEVEL statement 271
SET SESSION COLLATION statement 271
SET SESSION DATABASE statement 271
SET SESSION DATEFORM statement 271
SET SESSION FUNCTION TRACE statement 271
SET SESSION OVERRIDE REPLICATION statement 272
SET SESSION statement 271
SET TIME ZONE statement 272
SHOW CAST statement 272
SHOW FUNCTION statement 272
SHOW HASH INDEX statement 272
SHOW JOIN INDEX statement 272
SHOW MACRO statement 272
SHOW METHOD statement 272
SHOW PROCEDURE statement 272
SHOW REPLICATION GROUP statement 272
SHOW SPECIFIC FUNCTION statement 272
SHOW statement 272
SHOW statements 117
SHOW TABLE statement 272
SHOW TRIGGER statement 272
SHOW TYPE statement 272
SHOW VIEW statement 272
SIN function 286
Single-table join index 31
SINH function 286
SKEW function 286
SMALLINT data type 278
SOME predicate 286
SOUNDEX function 286
Sparse join index 32
Specifications
database 206
session 211
system 204
SQL
dynamic 129
dynamic, SELECT statement and 131
static 129
SQL binding styles
CLI 100
defined 100
direct 100
embedded 100
JDBC 100
ODBC 100
stored procedure 100
SQL data type attributes
AS 279
CASESPECIFIC 279
CHARACTER SET 279
CHECK 279
COMPRESS 279
CONSTRAINT 279
CS 279
DEFAULT 279
FOREIGN KEY 279
FORMAT 279
NAMED 279
NOT CASESPECIFIC 279
NOT CS 279
NOT NULL 279
PRIMARY KEY 279
REFERENCES 279
TITLE 279
UC 279
UNIQUE 280
UPPERCASE 279
WITH CHECK OPTION 280
WITH DEFAULT 280
SQL data types
BIGINT 277
BLOB 277
BYTE 277
BYTEINT 277
CHAR 277
CHAR VARYING 277
CHARACTER 277
CHARACTER VARYING 277
CLOB 277
DATE 277
DEC 277
DECIMAL 277
DOUBLE PRECISION 277
FLOAT 277
GRAPHIC 277
INT 277
INTEGER 277
INTERVAL DAY 277
INTERVAL DAY TO HOUR 277
INTERVAL DAY TO MINUTE 277
INTERVAL DAY TO SECOND 277
INTERVAL HOUR 277
INTERVAL HOUR TO MINUTE 277
INTERVAL HOUR TO SECOND 278
INTERVAL MINUTE 278
INTERVAL MINUTE TO SECOND 278
INTERVAL MONTH 278
INTERVAL SECOND 278
INTERVAL YEAR 278
INTERVAL YEAR TO MONTH 278
LONG VARCHAR 278
LONG VARGRAPHIC 278
NUMERIC 278
REAL 278
SMALLINT 278
TIME 278
TIME WITH TIMEZONE 278
TIMESTAMP 278
TIMESTAMP WITH TIMEZONE 278
UDT 278
VARBYTE 278
VARCHAR 278
VARGRAPHIC 278
SQL error response (ANSI) 149
SQL expressions
CASE 281
COALESCE 282
NEW 284
NULLIF 284
SQL Flagger
enabling and disabling 217
function 217
session control 139
SQL functional families, defined 99
SQL functions
ABS 281
ACCOUNT 281
ACOS 281
ACOSH 281
ADD_MONTHS 281
ASIN 281
ASINH 281
ATAN 281
ATAN2 281
ATANH 281
AVE 281
AVERAGE 281
AVG 281
BYTE 281
BYTES 281
CASE_N 281
CAST 281
CHAR 282
CHAR_LENGTH 282
CHAR2HEXINT 282
CHARACTER_LENGTH 282
CHARACTERS 282
CHARS 282
CORR 282
COS 282
COSH 282
COVAR_SAMP 282
CSUM 282
CURRENT_DATE 282
CURRENT_TIME 282
CURRENT_TIMESTAMP 282
DATABASE 282
DATE 282
DEFAULT 282
EXP 283
EXTRACT 283
FORMAT 283
GROUPING 283
HASHAMP 283
HASHBAKAMP 283
HASHBUCKET 283
HASHROW 283
INDEX 283
KURTOSIS 283
LN 283
LOG 283
LOWER 283
MAVG 283
MAX 284
MAXIMUM 284
MCHARACTERS 284
MDIFF 284
MIN 284
MINIMUM 284
MLINREG 284
MSUM 284
NULLIFZERO 284
OCTET_LENGTH 284
PERCENT_RANK 285
POSITION 285
PROFILE 285
QUANTILE 285
RANDOM 285
RANGE_N 285
RANK 285
REGR_AVGX 285
REGR_AVGY 285
REGR_COUNT 285
REGR_INTERCEPT 285
REGR_R2 285
REGR_SLOPE 285
REGR_SXX 285
REGR_SXY 285
REGR_SYY 285
ROLE 285
ROW_NUMBER 286
SESSION 286
SIN 286
SINH 286
SKEW 286
SOUNDEX 286
SQRT 286
STDDEV_POP 286
STDDEV_SAMP 286
SUBSTR 286
SUBSTRING 286
SUM 286
TAN 286
TANH 286
TIME 286
TITLE 286
TRANSLATE 286
TRANSLATE_CHK 286
TRIM 286
TYPE 287
UNION 287
UPPER 287
USER 287
VAR_POP 287
VAR_SAMP 287
VARGRAPHIC 287
WIDTH_BUCKET 287
ZEROIFNULL 287
SQL lexicon
character names 77
delimiters 93
Japanese character names 67, 77
keywords 66
lexical separators 94
object names 77
operators 91
request terminator 96
statement separator 94
SQL literals
Character data 278
DATE 278
Decimal 278
Floating point 278
Graphic 278
Hexadecimal 278
Integer 278
Interval 279
TIME 279
TIMESTAMP 279
SQL operators
AND 281
EQ 282
EXCEPT 282
GE 283
GT 283
INTERSECT 283
LE 283
LT 283
MINUS 284
MOD 284
NE 284
NOT 284
NOT= 284
OR 284
OVERLAPS 285
SQL predicates
ALL 281
ANY 281
BETWEEN 281
EXISTS 282
IN 283
IS NOT NULL 283
IS NULL 283
LIKE 283
NOT BETWEEN 281
NOT EXISTS 282
NOT IN 283
NOT LIKE 283
SOME 286
SQL request modifier, EXPLAIN 19, 21, 275
SQL requests
iterated 127
multi-statement 120
single-statement 120
SQL responses 147
failure 150
success 148
warning 149
SQL return codes 144
SQL statement modifiers
ASYNC 275
EXEC SQL 275
GROUP BY 275
HAVING 275
LOCKING 275
ORDER BY 276
QUALIFY 276
SAMPLE 276
USING 276
WHERE 276
WITH 276
WITH RECURSIVE 276
SQL statements
ABORT 220
ALTER FUNCTION 220
ALTER METHOD 220
ALTER PROCEDURE 220
ALTER REPLICATION GROUP 221
ALTER SPECIFIC FUNCTION 220
ALTER SPECIFIC METHOD 220
ALTER TABLE 221
ALTER TRIGGER 223
ALTER TYPE 223
BEGIN DECLARE SECTION 223
BEGIN LOGGING 224
BEGIN QUERY LOGGING 224
BEGIN TRANSACTION 225
CALL 225
CHECKPOINT 225
CLOSE 225
COLLECT DEMOGRAPHICS 225
COLLECT STAT 226
COLLECT STAT INDEX 227
COLLECT STATISTICS 226
COLLECT STATISTICS INDEX 227
COLLECT STATS 226
COLLECT STATS INDEX 227
COMMENT 227
COMMIT 228
CONNECT 228
CREATE AUTHORIZATION 228
CREATE CAST 229
CREATE DATABASE 229
CREATE FUNCTION 229
CREATE HASH INDEX 231
CREATE INDEX 232
CREATE JOIN INDEX 232
CREATE MACRO 233
CREATE METHOD 233
CREATE ORDERING 234
CREATE PROCEDURE 234
CREATE PROFILE 236
CREATE RECURSIVE VIEW 244
CREATE REPLICATION GROUP 237
CREATE ROLE 237
CREATE TABLE 237
CREATE TRANSFORM 239
CREATE TRIGGER 239
CREATE TYPE 240, 241
CREATE USER 242
CREATE VIEW 243
DATABASE 244
DECLARE CURSOR 244
DECLARE STATEMENT 244
DECLARE TABLE 244
DELETE 245
DELETE DATABASE 245
DELETE USER 245
DESCRIBE 246
DIAGNOSTIC 115, 246
DIAGNOSTIC "validate index" 246
DIAGNOSTIC DUMP SAMPLES 246
DIAGNOSTIC HELP SAMPLES 246
DIAGNOSTIC SET SAMPLES 246
DROP AUTHORIZATION 246
DROP CAST 246
DROP DATABASE 246
DROP FUNCTION 246
DROP HASH INDEX 246
DROP INDEX 247
DROP JOIN INDEX 247
DROP MACRO 247
DROP ORDERING 247
DROP PROCEDURE 247
DROP PROFILE 247, 256
DROP REPLICATION GROUP 247
DROP ROLE 247
DROP SPECIFIC FUNCTION 246
DROP STATISTICS 247
DROP TABLE 248
DROP TRANSFORM 248
DROP TRIGGER 248
DROP TYPE 248
DROP USER 246
DROP VIEW 248
DUMP EXPLAIN 248
ECHO 248
END DECLARE SECTION 248
END LOGGING 249
END QUERY LOGGING 249
END TRANSACTION 249
END-EXEC 248
executable 119
EXECUTE 249
EXECUTE IMMEDIATE 249
FETCH 250
GET CRASH 250
GIVE 250
GRANT 250
HELP 252
HELP STATISTICS 253
INCLUDE 254
INCLUDE SQLCA 254
INCLUDE SQLDA 254
INITIATE INDEX ANALYSIS 254
INSERT 254
INSERT EXPLAIN 255
invoking 119
LOGOFF 255
LOGON 255
MERGE 255
MODIFY DATABASE 256
MODIFY USER 257
name resolution 74
nonexecutable 120
OPEN 258
partial names, use of 73
POSITION 258
PREPARE 258
RENAME FUNCTION 259
RENAME MACRO 259
RENAME PROCEDURE 259
RENAME TABLE 259
RENAME TRIGGER 259
RENAME VIEW 259
REPLACE CAST 259
REPLACE FUNCTION 259
REPLACE MACRO 261
REPLACE METHOD 261
REPLACE ORDERING 262
REPLACE PROCEDURE 262, 263
REPLACE TRANSFORM 265
REPLACE TRIGGER 266
REPLACE VIEW 266
RESTART INDEX ANALYSIS 266
REVOKE 266
REWIND 268
ROLLBACK 268
SELECT 268
SELECT, dynamic SQL 131
SET BUFFERSIZE 270
SET CHARSET 270
SET CONNECTION 270
SET CRASH 270
SET ROLE 271
SET SESSION 271
SET SESSION ACCOUNT 271
SET SESSION CHARACTERISTICS AS TRANSACTION
ISOLATION LEVEL 271
SET SESSION COLLATION 271
SET SESSION DATABASE 271
SET SESSION DATEFORM 271
SET SESSION FUNCTION TRACE 271
SET SESSION OVERRIDE REPLICATION 272
SET TIME ZONE 272
SHOW 272
SHOW CAST 272
SHOW FUNCTION 272
SHOW HASH INDEX 272
SHOW JOIN INDEX 272
SHOW MACRO 272
SHOW METHOD 272
SHOW PROCEDURE 272
SHOW REPLICATION GROUP 272
SHOW SPECIFIC FUNCTION 272
SHOW TABLE 272
SHOW TRIGGER 272
SHOW TYPE 272
SHOW VIEW 272
structure 63
subqueries 110
TEST 273
UPDATE 273
WAIT 274
WHENEVER 274
SQL statements, macros and 46
SQL. See also Embedded SQL
SQL-2003 non-reserved words 174
SQL-2003 reserved words 174
SQLCA 144
SQLCODE 144
SQLSTATE 144
SQRT function 286
Statement processing. See Query processing
Statement separator 94
STDDEV_POP function 286
STDDEV_SAMP function 286
Stored procedures
ACTIVITY_COUNT 144
creating 50
deleting 52
elements of 49
executing 51
modifying 51
privileges 49
renaming 52
Structured UDTs 58
Subqueries (SQL) 110
Subquery, defined 110
SUBSTR function 286
SUBSTRING function 286
SUM function 286
Syntax, how to read 167
T
Table
cardinality of 2
creating indexes for 20
defined 2
degree of 2
dropping 105
full table scan 163
global temporary 5
global temporary trace 4
maximum number of columns 206
maximum number of rows 206
queue 4
tuple and 2
volatile temporary 9
Table structure, altering 103
Table, change structure of 103
TAN function 286
TANH function 286
Target level emulation 115
Teradata Database
database specifications 206
session specifications 211
system specifications 204
Teradata DBS, session management 143
Teradata Index Wizard 21
determining optimum secondary indexes 21
SQL diagnostic statements 115
Teradata SQL 218
Teradata SQL, ANSI SQL and 214
Terminator, request 96
TEST statement 273
TIME data type 278
TIME function 286
TIME literal 279
Time literals 88
TIME WITH TIMEZONE data type 278
TIMESTAMP data type 278
TIMESTAMP literal 279
Timestamp literals 88
TIMESTAMP WITH TIMEZONE data type 278
TITLE data type attribute 279
TITLE function 286
TITLE phrase, column definition 71
TPump
hash indexes and 35
join indexes and 32
Transaction mode, session control 140
Transaction modes (SQL) 140
Transactions
defined 122
explicit, defined 124
implicit, defined 124
TRANSLATE function 286
TRANSLATE_CHK function 286
Trigger
altering 44
creating 44
defined 44
dropping 44, 105
process flow for 44
TRIM function 286
Two-phase commit. See 2PC
TYPE function 287
U
UC data type attribute 279
UDFs
classes 54
CREATE FUNCTION 55
CREATE PROCEDURE 53
usage 55
UDT data types 15, 58, 278
creating and using 59
distinct 58
structured 58
Unicode, notation 171
UNION function 287
UNIQUE alternate key 37
UNIQUE data type attribute 280
Unique index. See Index, Primary index, Secondary index
UPDATE statement 273
UPI. See Primary index, unique
UPPER function 287
UPPERCASE data type attribute 279
USER function 287
User, defined 1
User-defined types. See UDT data types
USI. See Secondary index, unique
USING statement modifier 276
UTF16 session character set 139
UTF8 session character set 139
V
VAR_POP function 287
VAR_SAMP function 287
VARBYTE data type 278
VARCHAR data type 278
VARGRAPHIC data type 278
VARGRAPHIC function 287
View
described 42
dropping 105
maximum expanded text size 207
maximum number of columns 206
restrictions 43
W
WAIT statement 274
WHENEVER statement 274
WHERE statement modifier 276
WIDTH_BUCKET function 287
WITH DEFAULT data type attribute 280
WITH NO CHECK OPTION data type attribute 280
WITH RECURSIVE statement modifier 276
WITH statement modifier 276
Z
ZEROIFNULL function 287
Zero-table SELECT statement 108