JBoss.org Community Documentation

Teiid - Scalable Information Integration

Teiid Reference Documentation


Legal Notice

1. DML Support
1.1. Identifiers
1.1.1. Reserved Words
1.2. Expressions
1.2.1. Column Identifiers
1.2.2. Literals
1.2.3. Aggregate Functions
1.2.4. Window functions
1.2.5. Case and searched case
1.2.6. Scalar subqueries
1.2.7. Parameter references
1.3. Criteria
1.4. SQL Commands
1.4.1. SELECT Command
1.4.2. INSERT Command
1.4.3. UPDATE Command
1.4.4. DELETE Command
1.4.5. EXECUTE Command
1.4.6. Procedural Relational Command
1.5. Set Operations
1.6. Subqueries
1.6.1. Inline views
1.6.2. Subqueries can appear anywhere an expression or criteria is expected.
2. SQL Clauses
2.1. WITH Clause
2.2. SELECT Clause
2.3. FROM Clause
2.3.1. From Clause Hints
2.3.2. Nested Table Reference
2.5. WHERE Clause
2.6. GROUP BY Clause
2.7. HAVING Clause
2.8. ORDER BY Clause
2.9. LIMIT Clause
2.10. INTO Clause
2.11. OPTION Clause
3. DDL Support
3.1. Temp Tables
3.2. Alter View
3.3. Alter Procedure
3.4. Create Trigger
3.5. Alter Trigger
4. XML SELECT Command
4.1. Overview
4.2. Query Structure
4.2.1. FROM Clause
4.2.2. SELECT Clause
4.2.3. WHERE Clause
4.2.4. ORDER BY Clause
4.3. Document Generation
4.3.1. Document Validation
5. Datatypes
5.1. Supported Types
5.2. Type Conversions
5.3. Special Conversion Cases
5.3.1. Conversion of String Literals
5.3.2. Converting to Boolean
5.3.3. Date/Time/Timestamp Type Conversions
5.4. Escaped Literal Syntax
6. Scalar Functions
6.1. Numeric Functions
6.1.1. Parsing Numeric Datatypes from Strings
6.1.2. Formatting Numeric Datatypes as Strings
6.2. String Functions
6.3. Date/Time Functions
6.3.1. Parsing Date Datatypes from Strings
6.3.2. Specifying Time Zones
6.4. Type Conversion Functions
6.5. Choice Functions
6.6. Decode Functions
6.7. Lookup Function
6.8. System Functions
6.8.2. ENV
6.8.4. USER
6.9. XML Functions
6.9.7. XMLPI
6.10. Security Functions
6.10.1. HASROLE
6.11. Miscellaneous Functions
6.11.1. array_get
6.11.2. array_length
6.11.3. uuid
6.12. Nondeterministic Function Handling
7. Updatable Views
7.1. Key-preserved Table
8. Procedures
8.1. Procedure Language
8.1.1. Command Statement
8.1.2. Dynamic SQL Command
8.1.3. Declaration Statement
8.1.4. Assignment Statement
8.1.5. Compound Statement
8.1.6. If Statement
8.1.7. Loop Statement
8.1.8. While Statement
8.1.9. Continue Statement
8.1.10. Break Statement
8.1.11. Leave Statement
8.1.12. Error Statement
8.2. Virtual Procedures
8.2.1. Virtual Procedure Definition
8.2.2. Procedure Parameters
8.2.3. Example Virtual Procedures
8.2.4. Executing Virtual Procedures
8.2.5. Limitations
8.3. Update Procedures
8.3.1. Update Procedure Processing
8.3.2. For Each Row
8.3.3. Create Procedure
9. Transaction Support
9.1. AutoCommitTxn Execution Property
9.2. Updating Model Count
9.3. JDBC and Transactions
9.3.1. JDBC API Functionality
9.3.2. J2EE Usage Models
9.4. Transactional Behavior with JBoss Data Source Types
9.5. Limitations and Workarounds
10. Data Roles
10.1. Permissions
10.2. Role Mapping
10.3. XML Definition
10.4. System Functions
10.5. Customizing
11. System Schema
11.1. System Tables
11.1.1. VDB, Schema, and Properties
11.1.2. Table Metadata
11.1.3. Procedure Metadata
11.1.4. Datatype Metadata
11.2. System Procedures
11.2.1. SYS.getXMLSchemas
11.2.2. SYSADMIN.logMsg
11.2.3. SYSADMIN.isLoggable
11.2.4. SYSADMIN.refreshMatView
11.2.5. SYSADMIN.refreshMatViewRow
11.2.6. Metadata Procedures
12. VDBs
12.1. VDB Definition
12.1.1. VDB Element
12.1.2. Model Element
12.1.3. Translator Element
12.2. Dynamic VDBs
12.3. Multi-Source Models and VDB
12.3.1. Multi-source SELECTs
12.3.2. Multi-source INSERTs
12.3.3. Multi-source UPDATEs
12.3.4. Multi-source DELETEs
12.3.5. Multi-source Stored Procedures
12.3.6. Additional Concerns
13. Translators
13.1. Introduction to the Teiid Connector Architecture
13.2. Translators
13.2.1. File Translator
13.2.2. JDBC Translator
13.2.3. LDAP Translator
13.2.4. Loopback Translator
13.2.5. Salesforce Translator
13.2.6. Web Services Translator
13.2.7. OLAP Translator
13.2.8. Delegating Translators
14. Federated Planning
14.1. Overview
14.2. Federated Optimizations
14.2.1. Access Patterns
14.2.2. Pushdown
14.2.3. Dependent Joins
14.2.4. Copy Criteria
14.2.5. Projection Minimization
14.2.6. Partial Aggregate Pushdown
14.2.7. Optional Join
14.2.8. Partitioned Union
14.2.9. Standard Relational Techniques
14.3. Subquery optimization
14.4. XQuery Optimization
14.5. Federated Failure Modes
14.5.1. Partial Results
14.6. Query Plans
14.6.1. Getting a Query Plan
14.6.2. Analyzing a Query Plan
14.6.3. Relational Plans
14.6.4. Source Hints
14.7. Query Planner
14.7.1. Relational Planner
14.7.2. Procedure Planner
14.7.3. XML Planner
15. Architecture
15.1. Terminology
15.2. Data Management
15.2.1. Cursoring and Batching
15.2.2. Buffer Management
15.2.3. Cleanup
15.3. Query Termination
15.3.1. Canceling Queries
15.3.2. User Query Timeouts
15.4. Processing
15.4.1. Join Algorithms
15.4.2. Sort Based Algorithms
A. BNF for SQL Grammar

Teiid offers a highly scalable and high performance solution to information integration. By allowing integrated and enriched data to be consumed relationally or as XML over multiple protocols, Teiid simplifies data access for developers and consuming applications.

Commercial development support, production support, and training for Teiid are available through JBoss Inc. Teiid is a Professional Open Source project and a critical component of the JBoss Enterprise Data Services Platform.

Teiid supports SQL for issuing queries and for defining view transformations; see also Procedure Language for how SQL is used in virtual procedures and update procedures.

Teiid provides nearly all of the functionality of SQL-92 DML. SQL-99 and later features are constantly being added based upon community need. The following does not attempt to cover SQL exhaustively, but rather highlights SQL's usage within Teiid. See the grammar for the exact form of SQL accepted by Teiid.

SQL commands contain references to tables and columns. These references are in the form of identifiers, which uniquely identify the tables and columns in the context of the command. All queries are processed in the context of a virtual database, or VDB. Because information can be federated across multiple sources, tables and columns must be scoped in some manner to avoid conflicts. This scoping is provided by schemas, which contain the information for each data source or set of views.

Fully-qualified identifiers can always be used in SQL commands. Partially- or unqualified forms can also be used, as long as the resulting names are unambiguous in the context of the command. Different forms of qualification can be mixed in the same query.

Identifiers, literals, and functions can be combined into expressions. Expressions can be used almost anywhere in a query: in the SELECT, FROM (when specifying join criteria), WHERE, GROUP BY, HAVING, or ORDER BY clauses.

Literal values represent fixed values. These can be of any of the standard data types.

Syntax Rules:

  • Integer values will be assigned an integral data type big enough to hold the value (integer, long, or biginteger).

  • Floating point values will always be parsed as a double.

  • The keyword 'null' is used to represent an absent or unknown value and is inherently untyped. In many cases, a null literal value will be assigned an implied type based on context. For example, in the expression '5 + null', the null value will be assigned the type 'integer' to match the type of the value '5'. A null literal used in the SELECT clause of a query with no implied context will be assigned the type 'string'.

Some examples of simple literal values are:

  • 'abc'

  • 'isn''t true' - use an extra single tick to escape a tick in a string with single ticks

  • 5

  • -37.75e01 - scientific notation

  • 100.0 - parsed as double

  • true

  • false

  • '\u0027' - unicode character
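The literal forms above can be combined in a single query; the following is a minimal sketch, where the table name 'orders' is hypothetical:

```sql
-- several literal types in one statement; note the escaped tick in the string
SELECT 'abc', 5, -37.75e01, true, null
FROM orders
WHERE note = 'isn''t true';
```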

Aggregate functions take sets of values from a group produced by an explicit or implicit GROUP BY and return a single scalar value computed from the group.

Teiid supports the standard aggregate functions, including COUNT(*), COUNT, SUM, AVG, MIN, and MAX.

Syntax Rules:

  • Some aggregate functions may contain the keyword 'DISTINCT' before the expression, indicating that duplicate expression values should be ignored. DISTINCT is not allowed in COUNT(*) and is not meaningful in MIN or MAX (the result would be unchanged), so it can be used only with COUNT, SUM, and AVG.

  • Aggregate functions cannot be used in FROM, GROUP BY, or WHERE clauses without an intervening query expression.

  • Aggregate functions cannot be nested within another aggregate function without an intervening query expression.

  • Aggregate functions may be nested inside other functions.

  • Any aggregate function may take an optional FILTER clause of the form

    FILTER ( WHERE condition )

    The condition may be any boolean value expression that does not contain a subquery or a correlated variable. The filter will logically be evaluated for each row prior to the grouping operation. If false, the aggregate function will not accumulate a value for the given row.
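The FILTER clause described above can be sketched as follows; the 'orders' table and its columns are hypothetical:

```sql
-- SUM accumulates only the rows where the filter condition is true
SELECT customer_id,
       COUNT(*) AS total_orders,
       SUM(amount) FILTER (WHERE status = 'open') AS open_amount
FROM orders
GROUP BY customer_id;
```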

For more information on aggregates, see the sections on GROUP BY or HAVING.

Teiid supports ANSI SQL 2003 window functions. A window function allows an aggregate function to be applied to a subset of the result set, without the need for a GROUP BY clause. A window function is similar to an aggregate function, but requires the use of an OVER clause or window specification.


aggregate|ranking OVER ([PARTITION BY expression [, expression]*] [ORDER BY ...])

aggregate can be any of the aggregate functions described in Section 1.2.3, “Aggregate Functions”. ranking can be one of ROW_NUMBER(), RANK(), or DENSE_RANK().

Syntax Rules:

  • Window functions can only appear in the SELECT and ORDER BY clauses of a query expression.

  • Window functions cannot be nested in one another.

  • Partitioning and order by expressions cannot contain subqueries or outer references.

  • The ranking (ROW_NUMBER, RANK, DENSE_RANK) functions require the use of the window specification ORDER BY clause.

  • An XMLAGG ORDER BY clause cannot be used when windowed.

  • The window specification ORDER BY clause cannot reference alias names or use positional ordering.

  • Windowed aggregates may not use DISTINCT if the window specification is ordered.
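A minimal sketch of the window specification above, assuming a hypothetical employees table:

```sql
-- ranking functions require ORDER BY in the window specification;
-- the plain aggregate window does not
SELECT name,
       salary,
       ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS rank_in_dept,
       SUM(salary) OVER (PARTITION BY dept) AS dept_total
FROM employees;
```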


Syntax Rules:

  • The precedence ordering from lowest to highest is OR, AND, NOT, comparison.

  • Criteria nested by parentheses will be logically evaluated prior to evaluating the parent criteria.

Some examples of valid criteria are:

  • (balance > 2500.0)

  • 100*(50 - x)/(25 - y) > z

  • concat(areaCode,concat('-',phone)) LIKE '314%1'

Comparing null Values

Null values represent an unknown value. Comparison with a null value will evaluate to 'unknown', which can never be true, even when negated with NOT.
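For example, a comparison predicate against null never matches any rows; IS NULL must be used instead (the 'accounts' table is hypothetical):

```sql
SELECT * FROM accounts WHERE balance = null;       -- matches no rows: 'unknown' is never true
SELECT * FROM accounts WHERE NOT (balance = null); -- still matches no rows
SELECT * FROM accounts WHERE balance IS NULL;      -- the correct null test
```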

There are 4 basic commands for manipulating data in SQL, corresponding to the CRUD create, read, update, and delete operations: INSERT, SELECT, UPDATE, and DELETE. In addition, procedures can be executed using the EXECUTE command or through a procedural relational command.

The SELECT command is used to retrieve records from any number of relations.

All of these clauses other than OPTION are defined by the SQL specification. The specification also specifies the order that these clauses will be logically processed. Below is the processing order where each stage passes a set of rows to the following stage. Note that this processing model is logical and does not represent the way any actual database engine performs the processing, although it is a useful model for understanding questions about SQL.

  • WITH stage - gathers all rows from all WITH items in the order listed. Subsequent WITH items and the main query can reference a WITH item as if it were a table.

  • FROM stage - gathers all rows from all tables involved in the query and logically joins them with a Cartesian product, producing a single large table with all columns from all tables. Joins and join criteria are then applied to filter rows that do not match the join structure.

  • WHERE stage - applies a criteria to every output row from the FROM stage, further reducing the number of rows.

  • GROUP BY stage - groups sets of rows with matching values in the group by columns.

  • HAVING stage - applies criteria to each group of rows. Criteria can only be applied to columns that will have constant values within a group (those in the grouping columns or aggregate functions applied across the group).

  • SELECT stage - specifies the column expressions that should be returned from the query. Expressions are evaluated, including aggregate functions based on the groups of rows, which will no longer exist after this point. The output columns are named using either column aliases or an implicit name determined by the engine. If SELECT DISTINCT is specified, duplicate removal will be performed on the rows being returned from the SELECT stage.

  • ORDER BY stage - sorts the rows returned from the SELECT stage as desired. Supports sorting on multiple columns in specified order, ascending or descending. The output columns will be identical to those columns returned from the SELECT stage and will have the same name.

  • LIMIT stage - returns only the specified rows (with skip and limit values).

This model can be used to understand many questions about SQL. For example, columns aliased in the SELECT clause can only be referenced by alias in the ORDER BY clause. Without knowledge of the processing model, this can be somewhat confusing. Seen in light of the model, it is clear that the ORDER BY stage is the only stage occurring after the SELECT stage, which is where the columns are named. Because the WHERE clause is processed before the SELECT, the columns have not yet been named and the aliases are not yet known.
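The alias rule just described can be sketched as follows (table and column names hypothetical):

```sql
-- valid: the ORDER BY stage runs after SELECT, so the alias 'total' exists
SELECT price * quantity AS total FROM line_items ORDER BY total;

-- invalid: the WHERE stage runs before SELECT, so 'total' is not yet named
-- SELECT price * quantity AS total FROM line_items WHERE total > 100;
```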


The explicit table syntax TABLE x may be used as a shortcut for SELECT * FROM x.

Procedural relational commands use the syntax of a SELECT to emulate an EXEC. In a procedural relational command, a procedure group name is used in a FROM clause in place of a table. That procedure will be executed in place of a normal table access if all of the necessary input values can be found in criteria against the procedure. Each combination of input values found in the criteria results in an execution of the procedure.

Multiple Execution

The usage of 'in' or join criteria can result in the procedure being executed multiple times.
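A sketch of the equivalence described above, assuming a hypothetical procedure getOrders with a single input parameter custid:

```sql
-- EXECUTE form
EXEC getOrders(123);

-- procedural relational form: the procedure appears in the FROM clause and the
-- input value is supplied as criteria; an IN list causes one execution per value
SELECT * FROM getOrders WHERE custid = 123;
SELECT * FROM getOrders WHERE custid IN (123, 456);  -- executed twice
```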

Alternative Syntax

None of the issues listed in the syntax rules above exist if a nested table reference is used.

A subquery is a SQL query embedded within another SQL query. The query containing the subquery is the outer query.

This section describes the clauses that are used in the various SQL commands described in the previous section. Nearly all these features follow standard SQL syntax and functionality, so any SQL reference can be used for more information.

The FROM clause specifies the target table(s) for SELECT, UPDATE, and DELETE statements.

Example Syntax:

MAKEIND, MAKEDEP, and MAKENOTDEP are hints used to control dependent join behavior. They should only be used in situations where the optimizer does not choose an optimal plan based upon query structure, metadata, and costing information. The hints may appear in a comment that precedes the FROM clause. The hints can be specified against any FROM clause, not just a named table.

NO_UNNEST can be specified against a subquery FROM clause or view to instruct the planner not to merge the nested SQL into the surrounding query - also known as view flattening. This hint only applies to Teiid planning and is not passed to source queries. NO_UNNEST may appear in a comment that precedes the FROM clause.
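One possible hint placement, per the rule that hints appear in a comment preceding the FROM clause item, is sketched below; the tables and the choice of dependent side are hypothetical:

```sql
-- ask the optimizer to make 'orders' the dependent side of the join
SELECT o.id, c.name
FROM customers c, /*+ MAKEDEP */ orders o
WHERE c.id = o.customer_id;
```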

The TEXTTABLE function processes character input to produce tabular output. It supports both fixed and delimited file format parsing. The function itself defines the columns it projects. The TEXTTABLE function is implicitly a nested table and may be correlated to preceding FROM clause entries.


TEXTTABLE(expression COLUMNS <COLUMN>, ... [NO ROW DELIMITER] [DELIMITER char] [(QUOTE|ESCAPE) char] [HEADER [integer]] [SKIP integer]) AS name

COLUMN := name datatype [WIDTH integer [NO TRIM]]
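A sketch of TEXTTABLE usage following the syntax above; the inline text literal stands in for real file content, and the column names are hypothetical:

```sql
-- parse comma-delimited data with a header row into typed columns
SELECT t.name, t.age
FROM TEXTTABLE(
       'name,age
bob,42'
       COLUMNS name string, age integer
       DELIMITER ',' HEADER
     ) AS t;
```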

The XMLTABLE function uses XQuery to produce tabular output. The XMLTABLE function is implicitly a nested table and may be correlated to preceding FROM clause entries. XMLTABLE is part of the SQL/XML 2006 specification.


XMLTABLE([<NSP>,] xquery-expression [<PASSING>] [COLUMNS <COLUMN>, ...]) AS name

COLUMN := name (FOR ORDINALITY | (datatype [DEFAULT expression] [PATH string]))

See XMLELEMENT for the definition of NSP - XMLNAMESPACES.

See XMLQUERY for the definition of PASSING.



  • The optional XMLNAMESPACES clause specifies the namespaces for use in the XQuery and COLUMN path expressions.

  • The xquery-expression should be a valid XQuery. Each sequence item returned by the XQuery will be used to create a row of values as defined by the COLUMNS clause.

  • If COLUMNS is not specified, then that is the same as having the COLUMNS clause: "COLUMNS OBJECT_VALUE XML PATH '.'", which returns the entire item as an XML value.

  • A FOR ORDINALITY column is typed as integer and will return the 1-based item number as its value.

  • Each non-ordinality column specifies a type and optionally a PATH and a DEFAULT expression.

  • If PATH is not specified, then the path will be the same as the column name.

Syntax Rules:

  • Only 1 FOR ORDINALITY column may be specified.

  • Column names must not contain duplicates.


  • Use of PASSING; returns 1 row [1]:

    select * from xmltable('/a' PASSING xmlparse(document '<a id="1"/>') COLUMNS id integer PATH '@id') x

  • As a nested table:

    select x.* from t, xmltable('/x/y' PASSING t.doc COLUMNS first string, second FOR ORDINALITY) x

The ORDER BY clause specifies how records should be sorted. The options are ASC (ascending) and DESC (descending).


ORDER BY expression [ASC|DESC] [NULLS (FIRST|LAST)], ...

When the INTO clause is specified with a SELECT, the results of the query are inserted into the specified table. This is often used to insert records into a temporary table. The INTO clause immediately precedes the FROM clause.


INTO table FROM ...

Syntax Rules:

  • The INTO clause is logically applied last in processing, after the ORDER BY and LIMIT clauses.

  • Teiid's support for SELECT INTO is similar to that of MS SQL Server. The target of the INTO clause is a table into which the result of the rest of the SELECT command will be inserted. SELECT INTO should not be used in a UNION query.
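A short sketch of INTO usage with an implicitly created temp table; the source name Src1 and its columns are hypothetical:

```sql
-- the INTO clause immediately precedes the FROM clause; the result rows are
-- inserted into the implicitly created temp table #tmp
SELECT id, name INTO #tmp FROM Src1;
SELECT * FROM #tmp;
```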

Teiid supports a subset of DDL to create/drop temporary tables and to manipulate procedure and view definitions at runtime. It is not currently possible to arbitrarily drop/create non-temporary metadata entries.


A MetadataRepository must be configured to make a non-temporary metadata update persistent. See the Developers Guide Runtime Metadata Updates section for more.

Teiid supports creating temporary, or "temp", tables. Temp tables are dynamically created, but are treated like any other physical table.

Temp tables can be defined implicitly by referencing them in an INSERT statement or explicitly with a CREATE TABLE statement. Implicitly created temp tables must have a name that starts with '#'.

The following example is a series of statements that loads a temporary table with data from two sources and a manually inserted record, and then uses that temp table in a subsequent query.

CREATE LOCAL TEMPORARY TABLE TEMP (a integer, b integer, c integer);
SELECT * INTO temp FROM Src1;
SELECT * INTO temp FROM Src2;
INSERT INTO temp VALUES (1,2,3);
SELECT a,b,c FROM Src3, temp WHERE Src3.a = temp.b;

See virtual procedures for more on temp table usage.




If the default inherent update is chosen in Teiid Designer, any SQL associated with update (shown in a greyed-out text box) is not part of the VDB and cannot be enabled with an alter trigger statement.

A valid XML SELECT Command against a document model is of the form SELECT ... FROM ... [WHERE ...] [ORDER BY ...] . The use of any other SELECT command clause is not allowed.

The fully qualified name for an XML element is: "model"."document name".[path to element]."element name" .

The fully qualified name for an attribute is: "model"."document name".[path to element]."element name".[@]"attribute name"

Partially qualified names for elements and attributes can be used as long as the partial name is unique.

The WHERE clause specifies how to filter content from the generated document based upon values contained in the underlying mapping classes. Most predicates are valid in an XML SELECT Command; however, combining value references from different parts of the document may not always be allowed.

Criteria is logically applied to a context which is directly related to a mapping class. Starting with the root mapping class, there is a root context that describes all of the top-level repeated elements that will be in the output document. Criteria applied to the root or any other context will change the related mapping class query to apply the effects of the criteria, which can include checking values from any of the descendant mapping classes.

Syntax Rules:

Sibling Root Mapping Classes

User criteria applied against the implied root context of a document model with sibling root mapping classes is not generally semantically correct. The criteria is applied as if each of the conjuncts applies to only a single root mapping class. This behavior is the same as in prior releases, but may be fixed in a future release.

XML SELECT Command functions resemble scalar functions, but act as hints in the WHERE clause. These functions are only valid in an XML SELECT Command.

Teiid supports a core set of runtime types. Runtime types can be different than semantic types defined in type fields at design time. The runtime type can also be specified at design time or it will be automatically chosen as the closest base type to the semantic type.

Table 5.1. Teiid Runtime Types (Type - Description; Java Runtime Class; JDBC Type; ODBC Type)

  • string or varchar - variable-length character string with a maximum length of 4000. Note that the length cannot be explicitly set with the type literal, e.g. varchar(100). (Java: java.lang.String; JDBC: VARCHAR; ODBC: VARCHAR)

  • char - a single Unicode character. (Java: java.lang.Character; JDBC: CHAR; ODBC: CHAR)

  • boolean - a single bit, or Boolean, that can be true, false, or null (unknown). (Java: java.lang.Boolean; JDBC: BIT; ODBC: SMALLINT)

  • byte or tinyint - numeric, integral type, signed 8-bit. (Java: java.lang.Byte; JDBC: TINYINT; ODBC: SMALLINT)

  • short or smallint - numeric, integral type, signed 16-bit. (Java: java.lang.Short; JDBC: SMALLINT; ODBC: SMALLINT)

  • integer or serial - numeric, integral type, signed 32-bit. The serial type also implies not null and has an auto-incrementing value that starts at 1. serial types are not automatically UNIQUE. (Java: java.lang.Integer; JDBC: INTEGER; ODBC: INTEGER)

  • long or bigint - numeric, integral type, signed 64-bit. (Java: java.lang.Long; JDBC: BIGINT; ODBC: NUMERIC)

  • biginteger - numeric, integral type, arbitrary precision of up to 1000 digits. (Java: java.math.BigInteger; JDBC: NUMERIC; ODBC: NUMERIC)

  • float or real - numeric, floating point type, 32-bit IEEE 754 floating-point numbers. (Java: java.lang.Float; JDBC: REAL; ODBC: FLOAT)

  • double - numeric, floating point type, 64-bit IEEE 754 floating-point numbers. (Java: java.lang.Double; JDBC: DOUBLE; ODBC: DOUBLE)

  • bigdecimal or decimal - numeric, floating point type, arbitrary precision of up to 1000 digits. Note that the precision and scale cannot be explicitly set with the type literal, e.g. decimal(38, 2). (Java: java.math.BigDecimal; JDBC: NUMERIC; ODBC: NUMERIC)

  • date - datetime, representing a single day (year, month, day). (Java: java.sql.Date; JDBC: DATE; ODBC: DATE)

  • time - datetime, representing a single time (hours, minutes, seconds, milliseconds). (Java: java.sql.Time; JDBC: TIME; ODBC: TIME)

  • timestamp - datetime, representing a single date and time (year, month, day, hours, minutes, seconds, milliseconds, nanoseconds). (Java: java.sql.Timestamp; JDBC: TIMESTAMP; ODBC: TIMESTAMP)

  • object - any arbitrary Java object; must implement java.io.Serializable. (Java: Any; JDBC: JAVA_OBJECT; ODBC: VARCHAR)

  • blob - binary large object, representing a stream of bytes. (Java: java.sql.Blob [a]; JDBC: BLOB; ODBC: VARCHAR)

  • clob - character large object, representing a stream of characters. (Java: java.sql.Clob [b]; JDBC: CLOB; ODBC: VARCHAR)

  • xml - XML document. (Java: java.sql.SQLXML [c]; JDBC: JAVA_OBJECT; ODBC: VARCHAR)

[a] The concrete type is expected to be org.teiid.core.types.BlobType

[b] The concrete type is expected to be org.teiid.core.types.ClobType

[c] The concrete type is expected to be org.teiid.core.types.XMLType

Data types may be converted from one form to another either explicitly or implicitly. Implicit conversions automatically occur in criteria and expressions to ease development. Explicit datatype conversions require the use of the CONVERT function or CAST keyword.
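The two explicit conversion forms named above can be sketched as follows (table and column names hypothetical):

```sql
-- CONVERT and CAST are equivalent explicit conversions
SELECT CONVERT(order_date, string), CAST(amount AS integer)
FROM orders;
```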

Teiid provides an extensive set of built-in scalar functions. See also SQL Support and Datatypes. In addition, Teiid provides the capability for user defined functions or UDFs. See the Developers Guide for adding UDFs. Once added, UDFs may be called just like any other function.

Numeric functions return numeric values (integer, long, float, double, biginteger, bigdecimal). They generally take numeric values as inputs, though some take strings.



  • + - * / - Standard numeric operators. x in {integer, long, float, double, biginteger, bigdecimal}; return type is the same as x. [a]

  • ABS(x) - Absolute value of x. See standard numeric operators above.

  • ACOS(x) - Arc cosine of x. x in {double, bigdecimal}; return type is double.

  • ASIN(x) - Arc sine of x. x in {double, bigdecimal}; return type is double.

  • ATAN(x) - Arc tangent of x. x in {double, bigdecimal}; return type is double.

  • ATAN2(x, y) - Arc tangent of x and y. x, y in {double, bigdecimal}; return type is double.

  • CEILING(x) - Ceiling of x. x in {double, float}; return type is double.

  • COS(x) - Cosine of x. x in {double, bigdecimal}; return type is double.

  • COT(x) - Cotangent of x. x in {double, bigdecimal}; return type is double.

  • RADIANS(x) - Convert x degrees to radians. x in {double, bigdecimal}; return type is double.

  • EXP(x) - e raised to the power of x. x in {double, float}; return type is double.

  • FLOOR(x) - Floor of x. x in {double, float}; return type is double.

  • FORMATBIGDECIMAL(x, y) - Formats x using format y. x is bigdecimal, y is string; returns string.

  • FORMATBIGINTEGER(x, y) - Formats x using format y. x is biginteger, y is string; returns string.

  • FORMATDOUBLE(x, y) - Formats x using format y. x is double, y is string; returns string.

  • FORMATFLOAT(x, y) - Formats x using format y. x is float, y is string; returns string.

  • FORMATINTEGER(x, y) - Formats x using format y. x is integer, y is string; returns string.

  • FORMATLONG(x, y) - Formats x using format y. x is long, y is string; returns string.

  • LOG(x) - Natural log of x (base e). x in {double, float}; return type is double.

  • LOG10(x) - Log of x (base 10). x in {double, float}; return type is double.

  • MOD(x, y) - Modulus (remainder of x / y). x in {integer, long, float, double, biginteger, bigdecimal}; return type is the same as x.

  • PARSEBIGDECIMAL(x, y) - Parses x using format y. x, y are strings; returns bigdecimal.

  • PARSEBIGINTEGER(x, y) - Parses x using format y. x, y are strings; returns biginteger.

  • PARSEDOUBLE(x, y) - Parses x using format y. x, y are strings; returns double.

  • PARSEFLOAT(x, y) - Parses x using format y. x, y are strings; returns float.

  • PARSEINTEGER(x, y) - Parses x using format y. x, y are strings; returns integer.

  • PARSELONG(x, y) - Parses x using format y. x, y are strings; returns long.

  • PI() - Value of Pi. Return type is double.

  • POWER(x, y) - x to the y power. x in {double, bigdecimal, biginteger}; return type is the same as x.

  • DEGREES(x) - Convert x radians to degrees. x in {double, bigdecimal}; return type is double.

  • RAND() - Returns a random number, using the generator established so far in the query or initializing with the system clock if necessary. Returns double.

  • RAND(x) - Returns a random number, using a new generator seeded with x. x is integer; returns double.

  • ROUND(x, y) - Rounds x to y places; negative values of y indicate places to the left of the decimal point. x in {integer, float, double, bigdecimal}, y is integer; return type is the same as x.

  • SIGN(x) - 1 if x > 0, 0 if x = 0, -1 if x < 0. x in {integer, long, float, double, biginteger, bigdecimal}; return type is integer.

  • SIN(x) - Sine value of x. x in {double, bigdecimal}; return type is double.

  • SQRT(x) - Square root of x. x in {long, double, bigdecimal}; return type is double.

  • TAN(x) - Tangent of x. x in {double, bigdecimal}; return type is double.

  • BITAND(x, y) - Bitwise AND of x and y. x, y in {integer}; return type is integer.

  • BITOR(x, y) - Bitwise OR of x and y. x, y in {integer}; return type is integer.

  • BITXOR(x, y) - Bitwise XOR of x and y. x, y in {integer}; return type is integer.

  • BITNOT(x) - Bitwise NOT of x. x in {integer}; return type is integer.

[a] The precision and scale of non-bigdecimal arithmetic function results match those of Java. The results of bigdecimal operations match Java, except for division, which uses a preferred scale of max(16, dividend.scale + divisor.precision + 1), which then has trailing zeros removed by setting the scale to max(dividend.scale, normalized scale).
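A brief sketch using a few of the functions above; the 'accounts' table and its columns are hypothetical:

```sql
SELECT ROUND(price, 2),    -- round to 2 decimal places
       MOD(quantity, 10),  -- remainder of quantity / 10
       SIGN(balance)       -- -1, 0, or 1
FROM accounts;
```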

Teiid offers a set of functions you can use to parse numbers from strings. For each string, you need to provide the formatting of the string. These functions use the convention established by the java.text.DecimalFormat class to define the formats you can use with these functions. See the java.text.DecimalFormat documentation for more on how this class defines numeric string formats.

For example, you could use these function calls, with format strings that adhere to the java.text.DecimalFormat convention, to parse strings and return the datatype you need:

  • parseDouble(cost, '$#,##0.00;($#,##0.00)')

  • parseFloat(percent, '#,##0%')

  • parseFloat(total, '#,##0.###;-#,##0.###')

  • parseLong(amt, '0.###E0')

  • parseInteger(total, '#,##0;-#,##0')



Teiid offers a set of functions you can use to convert numeric datatypes into strings. For each string, you need to provide the formatting. These functions use the convention established within the java.text.DecimalFormat class to define the formats you can use with these functions. See the java.text.DecimalFormat documentation for more on how this class defines numeric string formats.

For example, you could use these function calls, with format strings that adhere to the java.text.DecimalFormat convention, to format numeric datatypes into strings:

  • formatDouble(cost, '$#,##0.00;($#,##0.00)')

  • formatFloat(percent, '#,##0%')

  • formatFloat(total, '#,##0.###;-#,##0.###')

  • formatLong(amt, '0.###E0')

  • formatInteger(total, '#,##0;-#,##0')
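As an illustration of the DecimalFormat-style patterns above, a sketch with literal inputs (no source table involved):

```sql
-- format a number into a grouped string, then parse such a string back
SELECT formatInteger(12345, '#,##0'),   -- '12,345'
       parseInteger('12,345', '#,##0'); -- 12345
```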


String functions generally take strings as inputs and return strings as outputs.

Unless specified, all of the arguments and return types in the following table are strings and all indexes are 1-based. The 0 index is considered to be before the start of the string.



Datatype Constraint

x || y

Concatenation operator

x,y in {string}, return type is string


Provide ASCII value of the left most character in x. The empty string will as input will return null. [a]

return type is integer

CHR(x) CHAR(x)

Provide the character for ASCII value x [a]

x in {integer}

CONCAT(x, y)

Concatenates x and y with ANSI semantics. If x and/or y is null, returns null.

x, y in {string}

CONCAT2(x, y)

Concatenates x and y with non-ANSI null semantics. If x and y is null, returns null. If only x or y is null, returns the other value.

x, y in {string}


Make first letter of each word in string x capital and all others lowercase

x in {string}

INSERT(str1, start, length, str2)

Insert string2 into string1

str1 in {string}, start in {integer}, length in {integer}, str2 in {string}


Lowercase of x

x in {string}

LEFT(x, y)

Get left y characters of x

x in {string}, y in {integer}, return string


Length of x

return type is integer

LOCATE(x, y)

Find position of x in y starting at beginning of y

x in {string}, y in {string}, return integer

LOCATE(x, y, z)

Find position of x in y starting at z

x in {string}, y in {string}, z in {integer}, return integer

LPAD(x, y)

Pad input string x with spaces on the left to the length of y

x in {string}, y in {integer}, return string

LPAD(x, y, z)

Pad input string x on the left to the length of y using character z

x in {string}, y in {integer}, z in {character}, return string


Left trim x of blank chars

x in {string}, return string

QUERYSTRING(path [, expr [AS name] ...])

Returns a properly encoded query string appended to the given path. Null valued expressions are omitted, and a null path is treated as ''.

Names are optional for column reference expressions.

e.g. QUERYSTRING('path', 'value' as "&x", ' & ' as y, null as z) returns 'path?%26x=value&y=%20%26%20'

path, expr in {string}. name is an identifier


Repeat string1 a specified number of times

str1 in {string}, instances in {integer}, return string

REPLACE(x, y, z)

Replace all y in x with z

x,y,z in {string}, return string

RIGHT(x, y)

Get right y characters of x

x in {string}, y in {integer}, return string

RPAD(input string x, pad length y)

Pad input string x with spaces on the right to the length of y

x in {string}, y in {integer}, return string

RPAD(x, y, z)

Pad input string x on the right to the length of y using character z

x in {string}, y in {integer}, z in {character}, return string


Right trim x of blank chars

x is string, return string



Get substring from x, from position y to the end of x

y in {integer}

SUBSTRING(x, y, z)


Get substring from x from position y with length z

y, z in {integer}

TO_CHARS(x, encoding)

Return a clob from the blob with the given encoding. BASE64, HEX, and the built-in Java Charset names are valid values for the encoding.[b]

x is a blob, encoding is a string, and returns a clob

TO_BYTES(x, encoding)

Return a blob from the clob with the given encoding. BASE64, HEX, and the built-in Java Charset names are valid values for the encoding.[b]

x is a clob, encoding is a string, and returns a blob

TRANSLATE(x, y, z)

Translate string x by replacing each character in y with the character in z at the same position

x in {string}


Trim the leading, trailing, or both ends of string y of character x. If LEADING/TRAILING/BOTH is not specified, BOTH is used. If no trim character x is specified, then the blank space ' ' is used.

x in {character}, y in {string}
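A sketch of the TRIM forms described above, assuming the standard SQL TRIM syntax:

```sql
-- both ends, blank space (the default): returns 'abc'
SELECT TRIM('  abc  ');
-- leading trim of a specified character: returns 'abc'
SELECT TRIM(LEADING 'x' FROM 'xxabc');
-- trailing trim: returns 'xxabc'
SELECT TRIM(TRAILING 'x' FROM 'xxabcxx');
```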


Uppercase of x

x in {string}


Unescaped version of x. Possible escape sequences are \b - backspace, \t - tab, \n - line feed, \f - form feed, \r - carriage return. \uXXXX, where X is a hex value, can be used to specify any unicode character. \XXX, where X is an octal digit, can be used to specify an octal byte value. If any other character appears after an escape character, that character will appear in the output and the escape character will be ignored.

x in {string}

[a] Non-ASCII range characters or integers used in these functions may produce different results or exceptions depending on where the function is evaluated (Teiid vs. source). Teiid uses Java's default int-to-char and char-to-int conversions, which operate over UTF-16 values.

[b] See the Charset JavaDoc for more on supported Charset names. For charsets, unmappable chars will be replaced with the charset default character. Binary formats, such as BASE64, will error in their conversion to bytes if an unrecognizable character is encountered.

Date and time functions return or operate on dates, times, or timestamps.

Parse and format Date/Time functions use the convention established within the java.text.SimpleDateFormat class to define the formats you can use with these functions. You can learn more about how this class defines formats by visiting the Sun Java Web site.



Datatype Constraint


Return current date

returns date


Return current time

returns time


Return current timestamp (date and time)

returns timestamp


Return name of day

x in {date, timestamp}, returns string


Return day of month

x in {date, timestamp}, returns integer


Return day of week (Sunday=1)

x in {date, timestamp}, returns integer


Return Julian day number

x in {date, timestamp}, returns integer


Return the given field value from the date value x. Produces the same result as the associated YEAR, MONTH, DAYOFMONTH, HOUR, MINUTE, and SECOND functions.

The SQL specification also allows for TIMEZONE_HOUR and TIMEZONE_MINUTE as extraction targets. In Teiid all date values are in the timezone of the server.

x in {date, time, timestamp}, returns integer


Format date x using format y

x is date, y is string, returns string


Format time x using format y

x is time, y is string, returns string


Format timestamp x using format y

x is timestamp, y is string, returns string

FROM_UNIXTIME (unix_timestamp)

Return the Unix timestamp (in seconds) as a Timestamp value

Unix timestamp (in seconds)


Return hour (in military 24-hour format)

x in {time, timestamp}, returns integer


Return minute

x in {time, timestamp}, returns integer

MODIFYTIMEZONE (timestamp, startTimeZone, endTimeZone)

Returns a timestamp based upon the incoming timestamp adjusted for the differential between the start and end time zones. For example, if the server is in GMT-6, then modifytimezone({ts '2006-01-10 04:00:00.0'},'GMT-7', 'GMT-8') will return the timestamp {ts '2006-01-10 05:00:00.0'} as read in GMT-6. The value has been adjusted 1 hour ahead to compensate for the difference between GMT-7 and GMT-8.

startTimeZone and endTimeZone are strings, returns a timestamp

MODIFYTIMEZONE (timestamp, endTimeZone)

Return a timestamp in the same manner as modifytimezone(timestamp, startTimeZone, endTimeZone), but will assume that the startTimeZone is the same as the server process.

Timestamp is a timestamp; endTimeZone is a string, returns a timestamp


Return month

x in {date, timestamp}, returns integer


Return name of month

x in {date, timestamp}, returns string


Parse date from x using format y

x, y in {string}, returns date


Parse time from x using format y

x, y in {string}, returns time


Parse timestamp from x using format y

x, y in {string}, returns timestamp


Return quarter

x in {date, timestamp}, returns integer


Return seconds

x in {time, timestamp}, returns integer


Create a timestamp from a date and time

date in {date}, time in {time}, returns timestamp

TIMESTAMPADD(interval, count, timestamp)

Add a specified interval (hour, day of week, month) to the timestamp, where intervals can have the following definition:

  1. SQL_TSI_FRAC_SECOND - fractional seconds (billionths of a second)

  2. SQL_TSI_SECOND - seconds

  3. SQL_TSI_MINUTE - minutes

  4. SQL_TSI_HOUR - hours

  5. SQL_TSI_DAY - days

  6. SQL_TSI_WEEK - weeks

  7. SQL_TSI_MONTH - months

  8. SQL_TSI_QUARTER - quarters (3 months)

  9. SQL_TSI_YEAR - years

The interval constant may be specified either as a string literal or a constant value. Interval in {string}, count in {integer}, timestamp in {date, time, timestamp}

TIMESTAMPDIFF(interval, startTime, endTime)

Calculate the approximate number of whole intervals in (endTime - startTime) using a specific interval type (as defined by the constants in TIMESTAMPADD). If (endTime > startTime), a positive number will be returned. If (endTime < startTime), a negative number will be returned. Calculations are approximate and may be less accurate over longer time spans.

Interval in {string}; startTime, endTime in {timestamp}, returns a long.
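The two functions can be sketched with literal timestamps:

```sql
-- add 12 hours: returns {ts '2006-01-10 16:00:00.0'}
SELECT TIMESTAMPADD(SQL_TSI_HOUR, 12, {ts '2006-01-10 04:00:00.0'});
-- whole days between the two values: returns 9
SELECT TIMESTAMPDIFF(SQL_TSI_DAY, {ts '2006-01-01 00:00:00.0'}, {ts '2006-01-10 00:00:00.0'});
```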


Return week in year

x in {date, timestamp}, returns integer


Return four-digit year

x in {date, timestamp}, returns integer

Teiid does not implicitly convert strings that contain dates presented in different formats, such as '19970101' and '31/1/1996', to date-related datatypes. You can, however, use the parseDate, parseTime, and parseTimestamp functions, described in the next section, to explicitly convert strings with a different format to the appropriate datatype. These functions use the convention established within the java.text.SimpleDateFormat class to define the formats you can use with these functions. You can learn more about how this class defines date and time string formats by visiting the Sun Java Web site.

For example, you could use these function calls, with the formatting string that adheres to the java.text.SimpleDateFormat convention, to parse strings and return the datatype you need:


Function Call To Parse String


parseDate(myDateString, 'yyyyMMdd')


parseDate(myDateString, 'dd''/''MM''/''yyyy')

'22:08:56 CST'

parseTime (myTime, 'HH:mm:ss z')

'03.24.2003 at 06:14:32'

parseTimestamp(myTimestamp, 'MM.dd.yyyy ''at'' hh:mm:ss')

Within your queries, you can convert between datatypes using the CONVERT or CAST keyword. See also Data Type Conversions .



CONVERT(x, type)

Convert x to type, where type is a Teiid Base Type

CAST(x AS type)

Convert x to type, where type is a Teiid Base Type

These functions are identical other than syntax; CAST is the standard SQL syntax, CONVERT is the standard JDBC/ODBC syntax.
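A minimal sketch of the two equivalent forms:

```sql
-- standard JDBC/ODBC syntax
SELECT CONVERT('12', integer);
-- standard SQL syntax
SELECT CAST('12' AS integer);
```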

Choice functions provide a way to select from two values based on some characteristic of one of the values.



Datatype Constraint


Returns the first non-null parameter

x and all y's can be any compatible types


If x is null, return y; else return x

x, y, and the return type must be the same type but can be any type


If x is null, return y; else return x

x, y, and the return type must be the same type but can be any type

NULLIF(param1, param2)

Equivalent to case when (param1 = param2) then null else param1

param1 and param2 must be compatible, comparable types

IFNULL and NVL are aliases of each other. They are the same function.
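A sketch of the choice functions (the table and column names are hypothetical):

```sql
-- first non-null of several alternatives
SELECT COALESCE(home_phone, work_phone, cell_phone) FROM Contacts;
-- substitute 0 for a null quantity
SELECT IFNULL(qty, 0) FROM Orders;
-- map a sentinel value back to null
SELECT NULLIF(status, 'unknown') FROM Orders;
```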

Decode functions allow you to have the Teiid Server examine the contents of a column in a result set and alter, or decode, the value so that your application can better use the results.



Datatype Constraint


Decode column x using string of value pairs y and return the decoded column as a string

all string


Decode column x using string of value pairs y with delimiter z and return the decoded column as a string

all string


Decode column x using string of value pairs y and return the decoded column as an integer

all string parameters, return integer


Decode column x using string of value pairs y with delimiter z and return the decoded column as an integer

all string parameters, return integer

Within each function call, you include the following arguments:

For example, your application might query a table called PARTS that contains a column called IS_IN_STOCK which contains a Boolean value that you need to change into an integer for your application to process. In this case, you can use the DECODEINTEGER function to change the Boolean values to integers:

SELECT DECODEINTEGER(IS_IN_STOCK, 'false, 0, true, 1') FROM PartsSupplier.PARTS;

When the Teiid System encounters the value false in the result set, it replaces the value with 0.

If, instead of using integers, your application requires string values, you can use the DECODESTRING function to return the string values you need:

SELECT DECODESTRING(IS_IN_STOCK, 'false, no, true, yes, null') FROM PartsSupplier.PARTS;

In addition to two input/output value pairs, this sample query provides a value to use if the column does not contain any of the preceding input values. If the row in the IS_IN_STOCK column does not contain true or false, the Teiid Server inserts a null into the result set.

When you use these DECODE functions, you can provide as many input/output value pairs as you want within the string. By default, the Teiid System expects a comma delimiter, but you can add a third parameter to the function call to specify a different delimiter:

SELECT DECODESTRING(IS_IN_STOCK, 'false:no:true:yes:null', ':') FROM PartsSupplier.PARTS;

You can use keyword null in the DECODE string as either an input value or an output value to represent a null value. However, if you need to use the literal string null as an input or output value (which means the word null appears in the column and not a null value) you can put the word in quotes: "null".

SELECT DECODESTRING( IS_IN_STOCK, 'null,no,"null",no,nil,no,false,no,true,yes' ) FROM PartsSupplier.PARTS;

If the DECODE function does not find a matching output value in the column and you have not specified a default value, the DECODE function will return the original value the Teiid Server found in that column.

The Lookup function allows you to cache a key value pair table and access it through a scalar function. This caching accelerates response time to queries that use these tables, known in business terminology as lookup tables or code tables.

LOOKUP(codeTable, returnColumn, keyColumn, keyValue)

In the lookup table codeTable, find the row where keyColumn has the value keyValue and return the associated returnColumn value, or null if no matching key is found. codeTable must be a string literal that is the fully-qualified name of the target table. returnColumn and keyColumn must also be string literals of just the relevant column names. keyValue can be any expression, but its datatype must match that of keyColumn. The return datatype matches that of returnColumn.

When you call this function for any combination of codeTable, returnColumn, and keyColumn for the first time, the Teiid System caches the result. The Teiid System uses this cache for all queries, in all sessions, that later access this lookup table.

The Teiid System unloads these cached lookup tables when you stop and restart the Teiid System. Thus, you should not use this function for data that is subject to updates. Instead, you can use it against static data that does not change over time.

See the Caching Guide for more on the caching aspects of the lookup function.


  • The keyColumn is expected to contain unique values. If the column contains duplicate values, an exception will be thrown.
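A sketch of a lookup call, assuming a hypothetical code table ISOCountryCodes keyed on CountryCode:

```sql
-- returns the CountryName associated with each code value, or null
SELECT LOOKUP('ISOCountryCodes', 'CountryName', 'CountryCode', code)
FROM Addresses;
```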

XML functions provide functionality for working with XML data.

Returns a concatenation of XML elements, one for each content item.

XMLFOREST(content [AS name] [, <NSP>] [, content [AS name]]*)

See XMLELEMENT for the definition of NSP - XMLNAMESPACES.

Name is an identifier. Content can be any type. Return value is xml.

If a name is not supplied for a content item, the expression must be a column reference, in which case the element name will be a partially escaped version of the column name.
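A minimal sketch (the table and column names are hypothetical); each row yields sibling elements named after the column or the supplied alias:

```sql
-- produces e.g. <firstname>John</firstname><surname>Doe</surname>
SELECT XMLFOREST(firstname, lastname AS surname) FROM Contacts;
```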

Teiid categorizes functions by varying degrees of determinism. When a function is evaluated and to what extent the result can be cached are based upon its determinism level.

  1. Deterministic - the function will always return the same result for the given inputs. Deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. Some functions, such as the lookup function, are not truly deterministic, but are treated as such for performance. All functions not categorized below are considered deterministic.

  2. User Deterministic - the function will return the same result for the given inputs for the same user. This includes the hasRole and user functions. User deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. If a user deterministic function is evaluated during the creation of a prepared processing plan, then the resulting plan will be cached only for the user.

  3. Session Deterministic - the function will return the same result for the given inputs under the same user session. This category includes the env function. Session deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. If a session deterministic function is evaluated during the creation of a prepared processing plan, then the resulting plan will be cached only for the user's session.

  4. Command Deterministic - the result of function evaluation is only deterministic within the scope of the user command. This category includes the curdate, curtime, now, and commandpayload functions. Command deterministic functions are delayed in evaluation until processing to ensure that even prepared plans utilizing these functions will be executed with relevant values. Command deterministic function evaluation will occur prior to pushdown - however, multiple occurrences of the same command deterministic time function are not guaranteed to evaluate to the same value.

  5. Nondeterministic - the result of function evaluation is fully nondeterministic. This category includes the rand function and UDFs marked as nondeterministic. Nondeterministic functions are delayed in evaluation until processing with a preference for pushdown. If the function is not pushed down, then it may be evaluated for every row in its execution context (for example, if the function is used in the select clause).

Any view may be marked as updatable. In many circumstances the view definition may allow the view to be inherently updatable without the need to manually define handling of INSERT/UPDATE/DELETE operations.

An inherently updatable view cannot be defined with a query that has:

  • A set operation (INTERSECT, EXCEPT, UNION).


  • Aggregation (aggregate functions, GROUP BY, HAVING)

  • A LIMIT clause

A UNION ALL can define an inherently updatable view only if each of the UNION branches is itself inherently updatable. A view defined by a UNION ALL can support inherent INSERTs if it is a Section 14.2.8, “Partitioned Union” and the INSERT specifies values that belong to a single partition.

Any view column that is not mapped directly to a column is not updatable and cannot be targeted by an UPDATE set clause or be an INSERT column.

If a view is defined by a join query or has a WITH clause it may still be inherently updatable. However in these situations there are further restrictions and the resulting query plan may execute multiple statements. For a non-simple query to be updatable, it is required:

If the default handling is not available or you wish to have an alternative implementation of an INSERT/UPDATE/DELETE, then you may use Section 8.3, “Update Procedures” to define procedures to handle the respective operations.

Teiid supports a procedural language for defining virtual procedures . These are similar to stored procedures in relational database management systems. You can use this language to define the transformation logic for decomposing INSERT, UPDATE, and DELETE commands against views; these are known as update procedures .

A command statement executes a SQL command , such as SELECT, INSERT, UPDATE, DELETE, or EXECUTE, against one or more data sources.

EXECUTE command statements may access IN/OUT, OUT, and RETURN parameters. To access the return value the statement will have the form var = EXEC proc.... To access OUT or IN/OUT values, named parameter syntax must be used. For example, EXEC proc(in_param=>'1', out_param=>var) will assign the value of the out parameter to the variable var. It is expected that the datatype of the parameter will be implicitly convertible to the datatype of the variable.

Dynamic SQL allows for the execution of an arbitrary SQL command in a virtual procedure. Dynamic SQL is useful in situations where the exact command form is not known prior to execution.


EXECUTE STRING <expression> [AS <variable> <type> [, <variable> <type>]* [INTO <variable>]] 
[USING <variable>=<expression> [,<variable>=<expression>]*] [UPDATE <literal>]        

Syntax Rules:

Here is an example showing a more complex approach to building criteria for the dynamic SQL string. In short, the virtual procedure AccountAccess.GetAccounts has inputs ID, LastName, and bday. If a value is specified for ID it will be the only value used in the dynamic SQL criteria. Otherwise if a value is specified for LastName the procedure will detect if the value is a search string. If bday is specified in addition to LastName, it will be used to form compound criteria with LastName.
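A simplified sketch of this pattern (the procedure, table, and variable names are illustrative, and the full compound-criteria handling is elided):

```sql
-- build the criteria string conditionally, then execute it
DECLARE string VARIABLES.sql = 'SELECT * FROM Accounts.Account';
IF (AccountAccess.GetAccounts.ID IS NOT NULL)
BEGIN
    VARIABLES.sql = VARIABLES.sql || ' WHERE id = DVARS.id';
END
EXECUTE STRING VARIABLES.sql USING id = AccountAccess.GetAccounts.ID;
```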

Known Limitations and Work-Arounds

Virtual procedures are defined using the Teiid procedural language. A virtual procedure has zero or more input parameters, and a result set return type. Virtual procedures support the ability to execute queries and other SQL commands, define temporary tables, add data to temporary tables, walk through result sets, use loops, and use conditional logic.



The CREATE VIRTUAL PROCEDURE line demarcates the beginning of the procedure. Within the body of the procedure, any valid statement may be used.

There is no explicit cursoring or return statement; rather, the last command statement executed in the procedure that returns a result set will be returned as the result. The output of that statement must match the expected result set and parameters of the procedure.

You execute procedures using the SQL EXECUTE command. If the procedure has defined inputs, you specify those in a sequential list, or using "name=value" syntax. You must use the name of the input parameter, scoped by the full procedure name if the parameter name is ambiguous in the context of other columns or variables in the procedure.

A virtual procedure call will return a result set just like any SELECT, so you can use this in many places you can use a SELECT. Typically you'll use the following syntax:


Views are abstractions above physical sources. They typically union or join information from multiple tables, often from multiple data sources or other views. Teiid can perform update operations against views. Update commands - INSERT, UPDATE, or DELETE - against a view require logic to define how the tables and views integrated by the view are affected by each type of command. This transformation logic is invoked when an update command is issued against a view. Update procedures define the logic for how a user's update command against a view should be decomposed into the individual commands to be executed against the underlying physical sources. Similar to virtual procedures , update procedures have the ability to execute queries or other commands, define temporary tables, add data to temporary tables, walk through result sets, use loops, and use conditional logic.

A FOR EACH ROW procedure will evaluate its block for each row of the view affected by the update statement. For UPDATE and DELETE statements this will be every row that passes the WHERE condition. For INSERT statements there will be one new row for each set of values from the VALUES or query expression. The updated row count is reported as this number regardless of the effect of the underlying procedure logic.

Teiid FOR EACH ROW update procedures function like INSTEAD OF triggers in traditional databases. There may only be one FOR EACH ROW procedure for each INSERT, UPDATE, or DELETE operation against a view. FOR EACH ROW update procedures can also be used to emulate BEFORE/AFTER each row triggers while still retaining the ability to perform an inherent update. This BEFORE/AFTER trigger behavior with an inherent update can be achieved by creating an additional updatable view over the target view with update procedures of the form:

	--before row logic
	--default insert/update/delete against the target view
	INSERT INTO VW (c1, c2, c3) VALUES (NEW.c1, NEW.c2, NEW.c3); 
	--after row logic



The BEGIN and END keywords are used to denote block boundaries. Within the body of the procedure, any valid statement may be used.


The use of the ATOMIC keyword is currently optional for backward compatibility, but unlike a normal block, the default for INSTEAD OF triggers is atomic.

You can use a number of special variables when defining your update procedure.

Update procedures defined by "CREATE PROCEDURE ..." have been deprecated. The TRANSLATE CRITERIA mechanism and its associated logic is typically not adequate to correctly define an updatable view.


You can use a number of special SQL clauses when defining UPDATE or DELETE procedures. These make it easier to do variable substitutions in WHERE clauses or to check on the change state of variables without using a lot of conditional logic.

You can use the HAS CRITERIA clause to check whether the user’s command has a particular kind of criteria on a particular set of attributes. This clause evaluates to either true or false. You can use it anywhere you can use a criteria within a procedure.


HAS [criteria operator] CRITERIA [ON (column list)]

Each unoptimized conjunct of the user criteria is evaluated against the criteria selector. If any conjunct matches then HAS CRITERIA evaluates to TRUE. The use of OR or NOT will prevent contained predicates from matching the criteria selector.

Some samples of the HAS CRITERIA clause:

HAS CRITERIA: Checks simply whether there was any criteria at all.
HAS CRITERIA ON (column1, column2): Checks whether the criteria uses column1 and column2.
HAS = CRITERIA ON (column1): Checks whether the criteria has a comparison criteria with the = operator on column1.
HAS LIKE CRITERIA: Checks whether the criteria has a match criteria using LIKE.

The HAS CRITERIA predicate is most commonly used in an IF clause, to determine if the user issued a particular form of command and to respond appropriately.
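For instance, a sketch of such an IF block in a deprecated-style update procedure (the source table and column names are hypothetical):

```sql
IF (HAS = CRITERIA ON (id))
BEGIN
    -- the user issued an equality predicate on id; push it to the source
    SELECT id, name FROM Source.Parts WHERE TRANSLATE = CRITERIA ON (id);
END
ELSE
BEGIN
    SELECT id, name FROM Source.Parts;
END
```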

You can use the TRANSLATE CRITERIA clause to convert the criteria from the user application’s SQL command into the form required to interact with the target source or view tables. The TRANSLATE CRITERIA statement uses the SELECT transformation to infer the column mapping. This clause evaluates to a translated criteria that is evaluated in the context of a command. You can use these mappings either to replace the default mappings generated from the SELECT transformation or to specify a reverse expression when a virtual column is defined by an expression.


TRANSLATE [criteria operator] CRITERIA [ON (column list)] [WITH (mapping list)]

If there is no user criteria, then the translated criteria is always treated as TRUE.

Each unoptimized conjunct of the user criteria is translated using the criteria selector. If a conjunct does not match the selector, it will not be translated - which effectively treats the conjunct as TRUE. The use of OR or NOT will prevent contained predicates from matching the criteria selector.


TRANSLATE CRITERIA: Translates any user criteria using the default mappings.
TRANSLATE CRITERIA WITH (column1 = 'A', column2 = INPUTS.column2 + 2): Translates any criteria with some additional mappings: column1 is always mapped to 'A' and column2 is mapped to the incoming column2 value + 2.
TRANSLATE = CRITERIA ON (column1): Translates only criteria that have the = comparison operator and involve column1.

Teiid utilizes XA transactions for participating in global transactions and for demarcating its local and command scoped transactions. JBoss Transactions is used by Teiid as its transaction manager. See this documentation for the advanced features provided by JBoss Transactions.

The default transaction isolation level for Teiid is READ_COMMITTED.

JBoss AS allows creation of different types of data sources, based on their transactional capabilities. The type of data source you create for your VDB's sources also dictates whether that data source will participate in a distributed transaction, irrespective of the transaction scope you selected from above. Here are the different types of data sources:

Suppose you have three different sources A, B, and C being used in Teiid. Here are some variations on how they behave with different types of data sources. The suffixes "xa", "local", and "no-tx" denote the different types of sources used.

If your data source is not XA, is not the only local source, and cannot use "no-tx", then you can look into extending the source to implement a compensating XA implementation; i.e. define your own resource manager for your source and manage the transaction the way you want it to behave. Note that this could be complicated, if not impossible, if your source does not natively support the distributed XA protocol. In summary:

Data roles, also called entitlements, are sets of permissions defined per VDB that dictate data access (create, read, update, delete). Data roles use a fine-grained permission system that Teiid will enforce at runtime and provide audit log entries for access violations (see the Admin and Developer's Guide for more on Audit Logging).

Prior to applying data roles, you should consider restricting source system access through the fundamental design of your VDB. Foremost, Teiid can only access source entries that are represented in imported metadata. You should narrow imported metadata to only what is necessary for use by your VDB. When using Teiid Designer, you may then go further and modify the imported metadata at a granular level to remove specific columns, mark tables as non-updatable, etc.

If the data roles feature is enabled and data roles are defined in a VDB, then access permissions will be enforced by the Teiid Server. The use of data roles may be disabled system-wide via the <jboss-install>/server/<profile>/deploy/teiid/teiid-jboss-beans.xml file, by setting the property useDataRoles to false in the configuration section of the RuntimeEngineDeployer.


Unlike previous versions of Teiid, data roles will only be checked if present in a VDB. A VDB deployed without data roles is open for use by any authenticated user.

CREATE, READ, UPDATE, DELETE (CRUD) permissions can be set for any resource path in a VDB. A resource path can be as specific as the fully qualified name of a column or as general as a top-level model (schema) name. Permissions granted to a particular path apply to it and any resource paths that share the same partial name. For example, granting read to "model" will also grant read to "model.table", "model.table.column", etc. Allowing or denying a particular action is determined by searching for permissions from the most to the least specific resource paths. The first permission found with a specific allow or deny will be used. Thus it is possible to set very general permissions at high-level resource path names and to override only as necessary at more specific resource paths.

Permission grants are only needed for resources that a role needs access to. Permissions are also only applied to the columns/tables/procedures in the user query - not to every resource accessed transitively through view and procedure definitions. It is important therefore to ensure that permission grants are applied consistently across models that access the same resources.

Permissions are not applicable to the SYS and pg_catalog schemas. These metadata reporting schemas are always accessible regardless of the user. The SYSADMIN schema however may need permissions as applicable.

Data roles are defined inside the vdb.xml file (inside the .vdb Zip archive under META-INF/vdb.xml) if you used Designer. The "vdb.xml" file is checked against the schema file vdb-deployer.xsd, which can be found in the kit under teiid-docs/schema. This example will show a sample "vdb.xml" file with few simple data roles.

For example, suppose a VDB defines a table "TableA" in schema "modelName" with columns (column1, column2) - note that the column types do not matter - and we wish to define three roles "RoleA", "RoleB", and "RoleC" with the following permissions:
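A sketch of the corresponding data-role entries in vdb.xml (element names follow vdb-deployer.xsd; the mapped JAAS role names role1 and role2 are hypothetical):

```xml
<data-role name="RoleA">
    <description>Allow everything except delete</description>
    <permission>
        <resource-name>modelName.TableA</resource-name>
        <allow-create>true</allow-create>
        <allow-read>true</allow-read>
        <allow-update>true</allow-update>
    </permission>
    <mapped-role-name>role1</mapped-role-name>
</data-role>
<data-role name="RoleC">
    <description>Read only, excluding column2</description>
    <permission>
        <resource-name>modelName.TableA</resource-name>
        <allow-read>true</allow-read>
    </permission>
    <permission>
        <resource-name>modelName.TableA.column2</resource-name>
        <allow-read>false</allow-read>
    </permission>
    <mapped-role-name>role2</mapped-role-name>
</data-role>
```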

The above XML defines two data roles: "RoleA", which allows everything except delete on the table, and "RoleC", which allows only read operations on the table. Since Teiid uses deny by default, there is no explicit data-role entry needed for "RoleB". Note that explicit column permissions are not needed for RoleA, since the parent resource path, modelName.TableA, permissions still apply. RoleC, however, must explicitly disallow read on column2.

The "mapped-role-name" defines the container JAAS roles that are assigned the data role. For assigning roles to your users in the JBoss AS, check out the instructions for the selected Login Module. Check the "Admin Guide" for configuring Login Modules. You may also choose to allow any authenticated user to have a data role by setting the any-authenticated attribute value to true on data-role element.

The "allow-create-temporary-tables" data-role boolean attribute is used to explicitly enable or disable temporary table usage for the role. If it is left unspecified, then the value will be determined by the configuration setting allowCreateTemporaryTablesByDefault located in the <jboss-install>/server/<profile>/deploy/teiid/teiid-jboss-beans.xml file in the RuntimeEngineDeployer section.

The built-in SYS and SYSADMIN schemas provide metadata tables and procedures against the current VDB.
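For example, the system tables can be queried like any other table. The sketch below assumes the column names described for SYS.Tables later in this section (VDBName, SchemaName, Name, Type, IsPhysical, SupportsUpdates); verify them against your version.

```sql
-- list all updatable virtual tables in the current VDB
SELECT SchemaName, Name, Type
FROM SYS.Tables
WHERE IsPhysical = false AND SupportsUpdates = true
ORDER BY SchemaName, Name
```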

This table provides the current VDB contents.

Column Name





The path to the contents.



The contents as a blob.

This table supplies information about all the schemas in the virtual database, including the system schema itself (System).

Column Name





VDB name



Schema name



True if this represents a source



Unique ID



Unique ID (see note below)






URI for the primary metamodel describing the model used for this schema

This table supplies user-defined properties on all objects based on metamodel extensions. Normally, this table is empty if no metamodel extensions are being used.

Column Name





Extension property name



Extension property value



Key unique ID



Unique ID (see note below)

This table supplies information about all the groups (tables, views, documents, etc) in the virtual database.

Column Name





VDB name



Schema Name



Short group name



Table type (Table, View, Document, ...)



Name of this group in the source



True if this is a source table



True if group can be updated



Group unique ID



Unique ID (see note below)



Approximate number of rows in the group






True if in system table

This table supplies information about all the materialized views in the virtual database.

Column Name





VDB name



Schema Name



Short group name



Name of the materialized table schema



Name of the materialized table



True if materialized table is currently valid.



The load state, can be one of NEEDS_LOADING, LOADING, LOADED, FAILED_LOAD



The timestamp of the last full refresh.



The number of rows in the materialized view table.

This table supplies information about all the elements (columns, tags, attributes, etc) in the virtual database.

Column Name





VDB name



Schema Name



Table name



Element name (not qualified)



Position in group (1-based)



Name of element in source



Teiid runtime data type name



Number of digits after the decimal point



Element length (mostly used for strings)



Whether the length is fixed or variable



Element can be used in SELECT



Values can be inserted or updated in the element



Element is case-sensitive



Element is signed numeric value



Element represents monetary value



Element is auto-incremented in the source



Nullability: "Nullable", "No Nulls", "Unknown"



Minimum value



Maximum value



Distinct value count, -1 can indicate unknown



Null value count, -1 can indicate unknown



Searchability: "Searchable", "All Except Like", "Like Only", "Unsearchable"



Format of string value



Default value



Java class that will be returned



Number of digits in numeric value



Measure of return value size



Radix for numeric values



Upper-case full group name



Upper-case element name



Element unique ID



Unique ID (see note below)




This table supplies information about primary, foreign, and unique keys.

Column Name





VDB name



Schema Name

Table Name


Table name



Key name






Name of key in source system



Type of key: "Primary", "Foreign", "Unique", etc



True if key is indexed



Referenced key UID (if foreign key)



Key unique ID



Unique ID (see note below)

This table supplies information about the columns referenced by a key.

Column Name





VDB name



Schema Name



Table name



Element name



Key name



Key type: "Primary", "Foreign", "Unique", etc



Referenced key UID






Unique ID (see note below)



Position in key

This table supplies information about the procedures in the virtual database.

Column Name





VDB name



Schema Name



Procedure name



Procedure name in source system



Returns a result set



Procedure UID



Unique ID (see note below)




This supplies information on procedure parameters.

Column Name





VDB name



Schema Name



Procedure name



Parameter name



Teiid runtime data type name



Position in procedure args



Parameter direction: "In", "Out", "InOut", "ResultSet", "ReturnValue"



Parameter is optional



Precision of parameter



Length of parameter value



Scale of parameter



Radix of parameter



Nullability: "Nullable", "No Nulls", "Unknown"

This table supplies information on datatypes.

Column Name





Teiid design-time type name



Always false



Always false



Design-time type name (same as Name)



Java class returned for this type



Max scale of this type



Max length of this type



Nullability: "Nullable", "No Nulls", "Unknown"



Is signed numeric?



Is auto-incremented?



Is case-sensitive?



Max precision of this type



Radix of this type



Searchability: "Searchable", "All Except Like", "Like Only", "Unsearchable"



Data type unique ID



Unique ID (see note below)



Teiid runtime data type name



Base type



Description of type


The OID column is guaranteed to be unique/consistent only for a given running instance of a VDB. If a different version of the VDB is deployed, these IDs are not guaranteed to be the same or unique across both versions of the VDB. Dynamic VDB OIDs are not cluster safe.

A VDB or virtual database definition is contained in an XML file. For .vdb archive files created in the design tool, this file is embedded in the archive and most fields can be updated through tooling. The XML schema for this file can be found in the teiid-docs/schema directory.

Multi-source models can be used to quickly access data in multiple sources with homogeneous metadata. When you have multiple instances of data using an identical schema (horizontal sharding), Teiid can help you aggregate data across all the instances using "multi-source" models. In this scenario, instead of creating/importing a model for every data source, the user defines one source model that represents the schema and configures multiple data "sources" underneath it. At runtime, when a query is issued against this model, the query engine analyzes the information, gathers the required data from all the configured sources, aggregates the results, and returns a single result set. Since all sources utilize the same physical metadata, this feature is most appropriate for accessing multiple instances of the same source type.

To mark a model as multi-source, the user needs to supply a property called supports-multi-source-bindings in the "vdb.xml" file and define multiple sources. Here is an example showing a dynamic VDB with a single model that has multiple sources defined.

<vdb name="vdbname" version="1">
    <model visible="true" type="PHYSICAL" name="Customers" path="/Test/Customers.xmi">
        <property name="supports-multi-source-bindings" value="true"/>
        <source name="chicago" 
            translator-name="oracle" connection-jndi-name="chicago-customers"/>
        <source name="newyork" 
            translator-name="oracle" connection-jndi-name="newyork-customers"/>
        <source name="la" 
            translator-name="oracle" connection-jndi-name="la-customers"/>
    </model>
</vdb>

In the above example, the VDB has a single model called Customers, with multiple sources (chicago, newyork, and la) that define different instances of data. Whenever a model is marked as "multi-source", the runtime engine adds an additional column called "SOURCE_NAME" to every table in that model. This column maps to the source names from the XML (chicago, la, and newyork in the above example). This allows queries like the following:

select * from table where SOURCE_NAME = 'newyork'
update table set column=value where SOURCE_NAME='chicago'
delete from table where column = x and SOURCE_NAME='la'
insert into table (column, SOURCE_NAME) VALUES ('value', 'newyork')

Note that when the user does not supply "SOURCE_NAME" in the criteria, the command applies to all the sources. If SOURCE_NAME is supplied, the query is executed only against the specified source. Another useful feature to combine with this one is "partial results", which skips unavailable sources if they are down.

More complex partitioning scenarios, such as heterogeneous sources or list partitioning will require the use of a Section 14.2.8, “Partitioned Union”.


Currently the tooling support for managing the multi-source feature is limited, so if you need to use this feature, build the VDB as usual in Teiid Designer and then edit the "vdb.xml" file in the VDB archive with a text editor to add the additional sources as defined above. You must deploy a separate data source for each source defined in the xml file.


If you would like to use "SOURCE_NAME" in your transformations to control which sources are accessed or updated, you need to manually add this extra column to your view table in the Designer. This column will not be automatically added to the source table when you import the metadata from the source. It is important to understand that a column or IN procedure parameter named source_name in multi-source mode will always be treated as the explicit form of the multi-source source_name column and will no longer be treated as an actual physical column or procedure parameter.

A Translator is typically paired with a particular JCA resource adapter. In instances where pooling, environment dependent configuration management, advanced security handling, etc. are not needed, a JCA resource adapter is not needed. The configuration of JCA ConnectionFactories for needed resource adapters is not part of this guide; please see the Teiid Administrator Guide and the kit examples for configuring resource adapters for use in JBoss AS.

Translators can have a number of configurable properties. These are broken down into execution properties, which determine aspects of how data is retrieved, and import settings, which determine what metadata is read for import.

The execution properties for a translator typically have reasonable defaults. For specific translator types, e.g. the Derby translator, base execution properties are already tuned to match the source. In most cases the user will not need to adjust their values.


Only a subset of the supports metadata can be set through execution properties. If more control is needed, please consult the Teiid Developers Guide.

There are no base importer settings.

The file translator, known by the type name file, exposes stored procedures to leverage file system resources exposed by the file resource adapter. It will commonly be used with the TEXTTABLE or XMLTABLE table functions to use CSV or XML formatted data.

The JDBC translator bridges the SQL semantic and data type differences between Teiid and a target RDBMS. Teiid has a range of specific translators that target the most popular open source and proprietary databases.

Type names:

Table 13.4. Importer Properties - shared by all JDBC Translators

  • catalog - See DatabaseMetaData.getTables[1]. Default: null

  • schemaPattern - See DatabaseMetaData.getTables[1]. Default: null

  • tableNamePattern - See DatabaseMetaData.getTables[1]. Default: null

  • procedurePatternName - See DatabaseMetaData.getProcedures[1]. Default: null

  • tableTypes - Comma separated list - without spaces - of imported table types. See DatabaseMetaData.getTables[1]. Default: null

  • useFullSchemaName - When false, directs the importer to drop the source catalog/schema from the Teiid object name, so that the Teiid fully qualified name will be in the form of <model name>.<table name>. Note that this may lead to objects with duplicate names when importing from multiple schemas, which results in an exception. Default: true

  • importKeys - true to import primary and foreign keys. Default: true

  • importIndexes - true to import index/unique key/cardinality information. Default: true

  • importApproximateIndexes - true to import approximate index information. See DatabaseMetaData.getIndexInfo[1]. Default: true

  • importProcedures - true to import procedures and procedure columns. Note that it is not always possible to import procedure result set columns due to database limitations. It is also not currently possible to import overloaded procedures. Default: true

  • widenUnsignedTypes - true to convert unsigned types to the next widest type. For example SQL Server reports tinyint as an unsigned type. With this option enabled, tinyint would be imported as a short instead of a byte. Default: true

  • quoteNameInSource - false will override the default and direct Teiid to create source queries using unquoted identifiers. Default: true

  • useProcedureSpecificName - true will allow the import of overloaded procedures (which will normally result in a duplicate procedure error) by using the unique procedure specific name as the Teiid name. This option will only work with JDBC 4.0 compatible drivers that report specific names. Default: false

  • useCatalogName - true will use any non-null/non-empty catalog name as part of the name in source, e.g. "catalog"."table"."column", and in the Teiid runtime name if useFullSchemaName is true. false will not use the catalog name in either the name in source or the Teiid runtime name. Should be set to false for sources that do not fully support a catalog concept, but return a non-null catalog name in their metadata - such as HSQL. Default: true

[1] Full JavaDoc for DatabaseMetaData


The default import settings will crawl all available metadata. This import process is time consuming and full metadata import is not needed in most situations. Most commonly you'll want to limit import by schemaPattern and tableTypes.

Example importer settings to only import tables and views from my-schema.

<property name="importer.tableTypes" value="TABLE,VIEW"/>
<property name="importer.schemaPattern" value="my-schema"/>

Both physical tables and procedures may optionally have native queries associated with them. No validation of the native query is performed; it is simply used in a straightforward manner to generate the source SQL. For a physical table, setting the teiid:native-query extension metadata to the desired query string will have Teiid execute the native query as an inline view in the source query. This feature should only be used against sources that support inline views. For example, on a physical table y with nameInSource=x and teiid:native-query=select c from g, the Teiid source query "SELECT c FROM y" would generate the SQL query "SELECT c FROM (select c from g) as x". Note that the column names in the native query must match the nameInSource of the physical table columns for the resulting SQL to be valid.
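Restating the inline view example above as code:

```sql
-- physical table y with nameInSource=x and
-- teiid:native-query=select c from g

-- Teiid source query:
SELECT c FROM y

-- generated source SQL:
SELECT c FROM (select c from g) as x
```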

For physical procedures you may also set the teiid:native-query extension metadata to a desired query string, with the added ability to positionally reference IN parameters. A parameter reference has the form $integer, e.g. $1. Note that 1-based indexing is used and that only IN parameters may be referenced. Dollar-sign integer is therefore reserved in physical procedure native queries; to use a $integer literal directly, it must be escaped with another $, e.g. $$1. By default, bind values will be used for parameter values. In some situations you may wish to bind values directly into the resulting SQL; the teiid:non-prepared extension metadata property may be set to true to turn off parameter binding. Note this option should be used with caution, as it may allow for SQL injection attacks if values are not properly validated. The native query does not need to call a stored procedure. Any SQL that returns a result set positionally matching the result set expected by the physical stored procedure metadata will work. For example, on a stored procedure x with teiid:native-query=select c from g where c1 = $1 and c2 = '$$1', the Teiid source query "CALL x(?)" would generate the SQL query "select c from g where c1 = ? and c2 = '$1'". Note that ? in this example will be replaced with the actual value bound to parameter 1.
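The procedure case from the paragraph above, as code:

```sql
-- stored procedure x with
-- teiid:native-query=select c from g where c1 = $1 and c2 = '$$1'

-- Teiid source query (? is bound to parameter 1):
CALL x(?)

-- generated source SQL ($$1 is unescaped to the literal $1):
select c from g where c1 = ? and c2 = '$1'
```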

The Salesforce translator, known by the type name salesforce, supports the SELECT, DELETE, INSERT and UPDATE operations against a Salesforce.com account. It is designed for use with the Teiid Salesforce resource adapter.

The Salesforce translator can import metadata, but does not currently have import settings.

Salesforce does not provide the same set of functionality as a relational database. For example, Salesforce does not support arbitrary joins between tables. However, working in combination with the Teiid Query Planner, the Salesforce connector supports nearly all of the SQL syntax supported by Teiid.

The Salesforce Connector executes SQL commands by “pushing down” the command to Salesforce whenever possible, based on the supported capabilities. Teiid will automatically provide additional database functionality when the Salesforce Connector does not explicitly provide support for a given SQL construct. In these cases, the SQL construct cannot be “pushed down” to the data source, so it will be evaluated in Teiid, in order to ensure that the operation is performed.

In cases where certain SQL capabilities cannot be pushed down to Salesforce, Teiid will push down the capabilities that are supported, and fetch a set of data from Salesforce. Then, Teiid will evaluate the additional capabilities, creating a subset of the original data set. Finally, Teiid will pass the result to the client.

SELECT sum(Reports) FROM Supervisor where Division = 'customer support';

Neither Salesforce nor the Salesforce Connector support the sum() scalar function, but they do support CompareCriteriaEquals, so the query that is passed to Salesforce by the connector will be transformed to this query.

SELECT Reports FROM Supervisor where Division = 'customer support';

The sum() scalar function will be applied by the Teiid Query Engine to the result set returned by the connector.

In some cases multiple calls to the Salesforce application will be made to support the SQL passed to the connector.

DELETE From Case WHERE Status = 'Closed';

The API in Salesforce to delete objects only supports deleting by ID. In order to accomplish this the Salesforce connector will first execute a query to get the IDs of the correct objects, and then delete those objects. So the above DELETE command will result in the following two commands.

SELECT ID From Case WHERE Status = 'Closed';
DELETE From Case where ID IN (<result of query>);

*The Salesforce API DELETE call is not expressed in SQL, but the above is an SQL equivalent expression.

It is useful to be aware of unsupported capabilities, in order to avoid fetching large data sets from Salesforce and to make your queries as performant as possible. See all Supported Capabilities.

A multi-select picklist is a field type in Salesforce that can contain multiple values in a single field. Query criteria operators for fields of this type in SOQL are limited to EQ, NE, includes and excludes. The full Salesforce documentation for selecting from multi-select picklists can be found at the following link: Querying Multi-select Picklists.

Teiid SQL does not support the includes or excludes operators, but the Salesforce connector provides user defined function definitions for these operators that provide equivalent functionality for fields of type multi-select. The definitions for the functions are:

boolean includes(Column column, String param)
boolean excludes(Column column, String param)

For example, take a single multi-select picklist column called Status that contains all of these values:

  • current

  • working

  • critical

For that column, all of the below are valid queries:

SELECT * FROM Issue WHERE true = includes (Status, 'current, working' );
SELECT * FROM Issue WHERE true = excludes (Status, 'current, working' );
SELECT * FROM Issue WHERE true = includes (Status, 'current;working, critical' );

EQ and NE criteria will pass to Salesforce as supplied. For example, these queries will not be modified by the connector.

SELECT * FROM Issue WHERE Status = 'current';
SELECT * FROM Issue WHERE Status = 'current;critical';
SELECT * FROM Issue WHERE Status != 'current;working';

The Web Services translator, known by the type name ws, exposes stored procedures for calling web services backed by a Teiid WS resource adapter. It will commonly be used with the TEXTTABLE or XMLTABLE table functions to use CSV or XML formatted data.


Setting the proper binding value on the translator is recommended, as it removes the need for callers to pass an explicit value. If your service actually uses SOAP11 but the binding is set to SOAP12, you will receive execution failures.

There are no ws importer settings, but the translator does provide metadata for dynamic VDBs.

The WS translator exposes low level procedures for accessing web services. See also the ws-weather example in the kit.

The OLAP Services translator, known by the type name olap, exposes stored procedures for calling analysis services backed by an OLAP server using the MDX query language. This translator exposes a stored procedure, invokeMDX, that returns a result set containing tuple array values for a given MDX query. invokeMDX will commonly be used with the ARRAYTABLE table function to extract the results.

Since the cube metadata exposed by OLAP servers and relational database metadata are so different, there is no single way to map the metadata from one to the other. It is best to query an OLAP system using its own native MDX language. MDX queries may be defined statically or built dynamically in Teiid's abstraction layers.
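A sketch of how invokeMDX might be combined with ARRAYTABLE; the MDX query, cube name, and result column names are illustrative, and the exact procedure signature should be verified against your version:

```sql
SELECT t.* FROM
  (EXEC invokeMDX(
     'SELECT [Measures].[Sales] ON COLUMNS,
             [Time].[Year].Members ON ROWS
      FROM [SalesCube]')) AS r,
  ARRAYTABLE(r.tuple COLUMNS year string, sales double) AS t
```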

Teiid at its core is a federated relational query engine. This query engine allows you to treat all of your data sources as one virtual database and access them in a single SQL query. This allows you to focus on building your application, not on hand-coding joins, and other relational operations, between data sources.

When the query engine receives an incoming SQL query it performs the following operations:

The logical query plan is a tree of operations used to transform data in source tables to the expected result set. In the tree, data flows from the bottom (tables) to the top (output). The primary logical operations are select (select or filter rows based on a criteria), project (project or compute column values), join , source (retrieve data from a table), sort (ORDER BY), duplicate removal (SELECT DISTINCT), group (GROUP BY), and union (UNION).

For example, consider the following query that retrieves all engineering employees born since 1970.
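A sketch of such a query, with illustrative column names on the Employees and Departments tables:

```sql
SELECT e.Name, e.Birthday
FROM Employees AS e
  JOIN Departments AS d ON e.Dept_ID = d.Dept_ID
WHERE d.Dept_Name = 'Engineering'
  AND e.Birthday >= {d '1970-01-01'}
```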

Logically, the data from the Employees and Departments tables are retrieved, then joined, then filtered as specified, and finally the output columns are projected. The canonical query plan thus looks like this:

Data flows from the tables at the bottom upwards through the join, through the select, and finally through the project to produce the final results. The data passed between each node is logically a result set with columns and rows.

Of course, this is what happens logically, not how the plan is actually executed. Starting from this initial plan, the query planner performs transformations on the query plan tree to produce an equivalent plan that retrieves the same results faster. Both a federated query planner and a relational database planner deal with the same concepts and many of the same plan transformations. In this example, the criteria on the Departments and Employees tables will be pushed down the tree to filter the results as early as possible.

In both cases, the goal is to retrieve the query results in the fastest possible time. However, the relational database planner does this primarily by optimizing the access paths in pulling data from storage.

In contrast, a federated query planner is less concerned about storage access because it is typically pushing that burden to the data source. The most important consideration for a federated query planner is minimizing data transfer.

A special optimization called a dependent join is used to reduce the rows returned from one of the two relations involved in a multi-source join. In a dependent join, queries are issued to each source sequentially rather than in parallel, with the results obtained from the first source used to restrict the records returned from the second. Dependent joins can perform some joins much faster by drastically reducing the amount of data retrieved from the second source and the number of join comparisons that must be performed.

The conditions when a dependent join is used are determined by the query planner based on Section 14.2.1, “Access Patterns”, hints, and costing information.

Teiid supports hints to control dependent join behavior:

  • MAKEIND - indicates that the clause should be the independent side of a dependent join.

  • MAKEDEP - indicates that the clause should be the dependent side of a join.

  • MAKENOTDEP - prevents the clause from being the dependent side of a join.

These can be placed in either the OPTION clause or directly in the FROM clause. As long as all Section 14.2.1, “Access Patterns” can be met, the MAKEIND, MAKEDEP, and MAKENOTDEP hints override any use of costing information. MAKENOTDEP supersedes the other hints.
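For example, a MAKEDEP hint placed in the OPTION clause; the model and table names are hypothetical:

```sql
SELECT a.id, b.val
FROM modelA.tblA AS a, modelB.tblB AS b
WHERE a.id = b.id
OPTION MAKEDEP modelB.tblB
```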


The MAKEDEP/MAKEIND hint should only be used if the proper query plan is not chosen by default. You should ensure that your costing information is representative of the actual source cardinality. An inappropriate MAKEDEP/MAKEIND hint can force an inefficient join structure and may result in many source queries.

The engine will use IN clauses to filter the values coming from the dependent side. If the number of values from the independent side exceeds the translator's MaxInCriteriaSize, the values will be split into multiple IN predicates up to MaxDependentPredicates. When the number of independent values exceeds MaxInCriteriaSize*MaxDependentPredicates, then multiple dependent queries will be issued in parallel.
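Conceptually, if MaxInCriteriaSize were 3 and the independent side produced the ids 1 through 6, the dependent source query would be split as sketched below (table and column names hypothetical):

```sql
SELECT b.id, b.val
FROM tblB AS b
WHERE b.id IN (1, 2, 3) OR b.id IN (4, 5, 6)
```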

The optional join hint indicates to the optimizer that a joined table should be omitted if none of its columns are used by the output of the user query or in a meaningful way to construct the results of the user query. This hint is typically only used in view layers containing multi-source joins.

The optional join hint is applied as a comment on a join clause. It can be applied in both ANSI and non-ANSI joins. With non-ANSI joins an entire joined table may be marked as optional.

The optional join hint will not remove a bridging table that is still required.

A technique known as document projection is used to reduce the memory footprint of the context item document. Document projection loads only the parts of the document needed by the relevant XQuery and path expressions. Since document projection analysis uses all relevant path expressions, even one expression that could potentially use many nodes, e.g. //x rather than /a/b/x, will cause a larger memory footprint. Even with irrelevant content removed, the entire projected document will still be loaded into memory for processing. Document projection will only be used when there is a context item (unnamed PASSING clause item) passed to XMLTABLE/XMLQUERY. A named variable will not have document projection performed. In some cases the expressions used may be too complex for the optimizer to use document projection. You should check the SHOWPLAN DEBUG full plan output to see if the appropriate optimization has been performed.

With additional restrictions, simple context path expressions allow the processor to evaluate document subtrees independently, without loading the full document in memory. A simple context path expression can be of the form "[/][ns:]root/[ns1:]elem/...", where a namespace prefix or element name can also be the * wild card. As with normal XQuery processing, if namespace prefixes are used in the XQuery expression, they should be declared using the XMLNAMESPACES clause.

When using XMLTABLE, the COLUMN PATH expressions have additional restrictions. They are allowed to reference any part of the element subtree formed by the context expression, and they may use any attribute value from their direct parentage. Any path expression where it is possible to reference a non-direct ancestor or sibling of the current context item prevents streaming from being used.
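A sketch of a streamable XMLTABLE invocation with a simple context path; the document value doc and the element names are illustrative:

```sql
SELECT t.name, t.price FROM
  XMLTABLE('/catalog/item' PASSING doc
           COLUMNS name  string     PATH 'name',
                   price bigdecimal PATH '@price') AS t
```

Because each COLUMN PATH references only the item subtree, every item element can be processed independently without loading the whole document.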

When integrating information using a federated query planner, it is useful to be able to view the query plans that are created, to better understand how information is being accessed and processed, and to troubleshoot problems.

A query plan is a set of instructions created by a query engine for executing a command submitted by a user or application. The purpose of the query plan is to execute the user's query in as efficient a way as possible.

Relational plans represent the actual processing plan that is composed of nodes that are the basic building blocks of logical relational operations. Physical relational plans differ from logical relational plans in that they will contain additional operations and execution specifics that were chosen by the optimizer.

The nodes for a relational query plan are:

Every node has a set of statistics that are output. These can be used to determine the amount of data flowing through the node.




Node Output Rows

Number of records output from the node


Node Process Time

Time processing in this node only


Node Cumulative Process Time

Elapsed time from beginning of processing to end


Node Cumulative Next Batch Process Time

Time processing in this node + child nodes


Node Next Batch Calls

Number of times a node was called for processing


Node Blocks

Number of times a blocked exception was thrown by this node or a child


In addition to node statistics, some nodes display cost estimates computed at the node.

Cost Estimates



Estimated Node Cardinality

Estimated number of records that will be output from the node; -1 if unknown


For each sub-command in the user command an appropriate kind of sub-planner is used (relational, XML, procedure, etc).

Each planner has three primary phases:

The GenerateCanonical class generates the initial (or “canonical”) plan.  This plan is based on the typical logical order that a SQL query gets executed.  A SQL select query has the following possible clauses (all but SELECT are optional):  SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, LIMIT.  These clauses are logically executed in the following order: FROM (including joins), WHERE, GROUP BY, HAVING, SELECT (projection), ORDER BY, and LIMIT.

These clauses translate into the following types of planning nodes:

There is also a Null Node that can be created as the result of rewrite or planning optimizations. It represents a node that produces no rows.

Relational optimization is based upon rule execution that evolves the initial plan into the execution plan.  There are a set of pre-defined rules that are dynamically assembled into a rule stack for every query.  The rule stack is assembled based on the contents of the user’s query and its transformations.  For example, if there are no view layers, then RuleMergeVirtual, which merges view layers together, is not needed and will not be added to the stack.  This allows the rule stack to reflect the complexity of the query.

Logically the plan node data structure represents a tree of nodes where the source data comes up from the leaf nodes (typically Access nodes in the final plan), flows up through the tree, and produces the user’s results out the top.  The nodes in the plan structure can have bidirectional links, dynamic properties, and allow any number of child nodes.  Processing plan nodes, in contrast, typically have fixed properties and only allow for binary operations, due to algorithmic limitations.

Below are some of the rules included in the planner:

  • RuleRemoveSorts - removes sort nodes that do not have an effect on the result.  This is most common when a view has a non-limited ORDER BY.

  • RulePlaceAccess - insert an Access node above every physical Source node.  The source node typically represents a table.  An access node represents the point at which everything below the access node gets pushed to the source.  Later rules focus on either pushing work under the access node or pulling the access node up the tree to move more work down to the data sources.  This rule is also responsible for placing Section 14.2.1, “Access Patterns”.

  • RulePushSelectCriteria - pushes select criteria down through unions, joins, and views into the source below the access node.  In most cases movement down the tree is good, as this filters rows earlier in the plan.  We currently do not undo the decisions made by RulePushSelectCriteria.  However, in situations where criteria cannot be evaluated by the source, this can lead to suboptimal plans.

    One of the most important optimizations related to pushing criteria is how criteria are pushed through joins.  Consider the following plan tree that represents a subtree of the plan for the query "select ... from A inner join B on (A.x = B.x) where B.y = 3"

              SELECT (B.y = 3)
              JOIN - Inner Join on (A.x = B.x)
             /     \    
          SRC (A)   SRC (B)

    Note: SELECT nodes represent criteria, and SRC stands for SOURCE.

    It is always valid for inner and cross joins to push (single-source) criteria that are above the join, below the join.  This allows criteria originating in the user query to eventually be present in source queries below the joins.  This result can be represented visually as:

              JOIN - Inner Join on (A.x = B.x)
              /    \
             /   SELECT (B.y = 3)
            |        |
          SRC (A)   SRC (B)

    The same optimization is valid for criteria specified against the outer side of an outer join.  For example:

              SELECT (B.y = 3)
              JOIN - Right Outer Join on (A.x = B.x)
             /     \
          SRC (A)   SRC (B)

    becomes:

              JOIN - Right Outer Join on (A.x = B.x)
              /    \
             /   SELECT (B.y = 3)
            |        |
          SRC (A)   SRC (B)

    However, criteria specified against the inner side of an outer join need special consideration.  The above scenario with a left or full outer join is not the same.  For example:

              SELECT (B.y = 3)
              JOIN - Left Outer Join on (A.x = B.x)
             /     \    
          SRC (A)   SRC (B)

    Can become (available only after 5.0.2):

              JOIN - Inner Join on (A.x = B.x)
              /    \
             /   SELECT (B.y = 3)
            |        |
          SRC (A)   SRC (B)

    Since the criterion is not dependent upon the null values that may be populated from the inner side of the join, the criterion is eligible to be pushed below the join – but only if the join type is also changed to an inner join.  

    On the other hand, criteria that are dependent upon the presence of null values CANNOT be moved.  For example:

              SELECT (B.y is null)
              JOIN - Left Outer Join on (A.x = B.x)
             /     \   
          SRC (A)   SRC (B)

    This plan tree must have the criteria remain above the join, since the outer join may be introducing null values itself.  This will be true regardless of which version of Teiid is used.
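    The push decisions described above can be condensed into a single predicate-placement function.  This is an illustrative sketch with hypothetical names, not Teiid's actual API; it captures only the join-type and null-dependence cases discussed in this section.

```python
def push_below_join(join_type, criterion_side, null_dependent):
    """Decide whether a criterion referencing only one side of a join
    ('left' or 'right') may be pushed below it.
    Returns (pushable, resulting_join_type)."""
    if join_type in ("INNER", "CROSS"):
        return True, join_type                    # always safe to push
    # the preserved (outer) side of each outer join type
    outer_side = {"LEFT OUTER": "left", "RIGHT OUTER": "right"}.get(join_type)
    if outer_side is None:
        return False, join_type                   # FULL OUTER: never push
    if criterion_side == outer_side:
        return True, join_type                    # outer-side criteria push as-is
    if not null_dependent:
        # inner-side, null-intolerant criterion: push, but the join
        # must be converted to an inner join
        return True, "INNER"
    return False, join_type                       # e.g. B.y IS NULL stays above
```

    For instance, B.y = 3 on the inner side of a left outer join is pushable with a join-type conversion, while B.y IS NULL is not pushable at all.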

  • RulePushNonJoinCriteria - this rule will push criteria out of an on clause if it is not necessary for the correctness of the join.

  • RuleRaiseNull - this rule will raise null nodes to their highest possible point.  Raising a null node removes the need to consider any part of the old plan that was below the null node.

  • RuleMergeVirtual - merges view layers together.  View layers are connected by nesting canonical plans under source leaf nodes of the parent plan.  Each canonical plan is also sometimes referred to as a “query frame”.  RuleMergeVirtual attempts to merge child frames into the parent frame.   The merge involves renaming any symbols in the lower frame that overlap with symbols in the upper frame.  It also involves merging the join information together.

  • RuleRemoveOptionalJoins - removes optional join nodes from the plan tree as soon as possible so that planning will be more optimal.

  • RulePlanJoins - this rule attempts to find an optimal ordering of the joins performed in the plan, while ensuring that Section 14.2.1, “Access Patterns” dependencies are met.  This rule has three main steps.  First, it must determine an ordering of joins that satisfies the access patterns present.  Second, it will heuristically create joins that can be pushed to the source (if a set of joins is pushed to the source, we will not attempt to create an optimal ordering within that set; more than likely it will be sent to the source in non-ANSI multi-join syntax and optimized by the database).  Third, it will use costing information to determine the best left-linear ordering of joins performed in the processing engine.  This third step does an exhaustive search for 6 or fewer join sources and is heuristically driven by join selectivity for 7 or more sources.
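    The third step's exhaustive-versus-heuristic split can be sketched as follows.  This is a simplified illustration with a stand-in cost function, not Teiid's actual search; the greedy branch stands in for the selectivity-driven heuristic.

```python
import itertools

def order_joins(sources, cost_of_order, exhaustive_limit=6):
    """Pick a left-linear join ordering: exhaustive for few sources,
    greedy cheapest-next-extension for many (illustrative only)."""
    if len(sources) <= exhaustive_limit:
        # exhaustive search over all left-linear orderings
        return min(itertools.permutations(sources), key=cost_of_order)
    # heuristic: greedily extend with the cheapest next source
    remaining = list(sources)
    order = [min(remaining, key=lambda s: cost_of_order((s,)))]
    remaining.remove(order[0])
    while remaining:
        nxt = min(remaining, key=lambda s: cost_of_order(tuple(order) + (s,)))
        order.append(nxt)
        remaining.remove(nxt)
    return tuple(order)
```

    The greedy path trades optimality for a polynomial number of cost evaluations, which is why the exhaustive search is capped at a small number of sources.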

  • RuleCopyCriteria - this rule copies criteria across an equality criterion present in the criteria of a join.  Since the equality defines an equivalence, this is a valid way to create a new criterion that may limit results on the other side of the join (especially in the case of a multi-source join).
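    The equivalence copy can be sketched with a toy criteria representation (hypothetical, not Teiid's internal form): for each join equality, any single-column predicate on one side is mirrored onto the other.

```python
def copy_criteria(join_equalities, predicates):
    """join_equalities: iterable of (left_col, right_col) pairs;
    predicates: list of (col, op, literal) triples.
    Returns the derived (copied) predicates."""
    derived = []
    for left, right in join_equalities:
        for col, op, val in predicates:
            if col == left:
                derived.append((right, op, val))   # mirror onto the right side
            elif col == right:
                derived.append((left, op, val))    # mirror onto the left side
    return derived
```

    For example, A.x = B.x combined with A.x = 3 yields B.x = 3, which can then be pushed toward B's source independently of A.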

  • RuleCleanCriteria - this rule cleans up criteria after all the other rules.

  • RuleMergeCriteria - looks for adjacent criteria nodes and merges them together.  It looks for adjacent identical conjuncts and removes duplicates.  
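    The duplicate-removal part of the merge can be sketched as an order-preserving union of conjuncts (illustrative representation, with criteria as plain strings):

```python
def merge_conjuncts(upper, lower):
    """Merge the conjuncts of two adjacent criteria nodes,
    dropping identical duplicates while preserving order."""
    merged = []
    for conjunct in upper + lower:
        if conjunct not in merged:
            merged.append(conjunct)
    return merged
```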

  • RuleRaiseAccess - this rule attempts to raise the Access nodes as far up the plan as possible.  This is mostly done by looking at the source’s capabilities and determining whether the operations can be achieved in the source or not.

  • RuleChooseDependent - this rule looks at each join node and determines whether the join should be made dependent and in which direction.  Cardinality, the number of distinct values, and primary key information are used in several formulas to determine whether a dependent join is likely to be worthwhile.  Ideally the dependent join performs better because fewer values will be returned from the dependent side.  Also, we must consider the number of values passed from the independent to the dependent side.  If that set is larger than the maximum number of values allowed in an IN criteria on the dependent side, then the query must be broken into a set of queries whose results are combined.  Executing each query in the connector has some overhead, and that is taken into account.  Without costing information, many common cases where the only criterion specified is on a non-unique (but strongly limiting) field are missed.  A join is eligible to be dependent if:

    1. there is at least one equi-join criterion, i.e. tablea.col = tableb.col

    2. the join is not a full outer join and the dependent side of the join is on the inner side of the join

    The join will be made dependent if one of the following conditions, listed in precedence order, holds:

    1. There is an unsatisfied access pattern that can be satisfied with the dependent join criteria

    2. The potential dependent side of the join is marked with an option makedep

    3. (4.3.2) If costing was enabled, the estimated cost of the dependent join (5.0+ possibly in each direction in the case of inner joins) is computed and compared to not performing the dependent join.  If the costs were all determined (which requires all relevant table cardinality, column NDV, and possibly NNV values to be populated), the lowest is chosen.

    4. If key metadata information indicates that the potential dependent side is not “small” and the other side is “not small” or (5.0.1) the potential dependent side is the inner side of a left outer join.

    The dependent join is the key optimization we use to efficiently process multi-source joins.

    Instead of reading all of source A and all of source B and joining them on A.x = B.x, we read all of A then build a set of A.x that are passed as a criteria when querying B.  In cases where A is small and B is large, this can drastically reduce the data retrieved from B, thus greatly speeding the overall query.
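    The batching described above can be sketched as follows.  This is a hypothetical illustration (query_b stands in for issuing a source query with an IN criteria), not Teiid's actual processing code.

```python
def dependent_join(a_rows, a_key, query_b, max_in_size):
    """Gather the independent side's distinct join values, then query the
    dependent side in IN-list chunks no larger than the source's limit."""
    values = sorted({row[a_key] for row in a_rows})   # distinct join values from A
    b_rows = []
    for i in range(0, len(values), max_in_size):
        chunk = values[i:i + max_in_size]             # one source query per chunk,
        b_rows.extend(query_b(chunk))                 # e.g. WHERE B.x IN (<chunk>)
    return b_rows
```

    When A yields few distinct values, a single chunk suffices and only the matching B rows are ever retrieved; when A yields many, each extra chunk adds a source query's worth of overhead, which is why the cost comparison above matters.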

  • RuleChooseJoinStrategy - determines the base join strategy.  Currently this is a decision as to whether to use a merge join rather than the default strategy, which is a nested loop join.  Ideally the choice of a hash join would also be evaluated here, and costing should be used to determine the strategy's cost.

  • RuleDecomposeJoin - this rule performs a partition-wise join optimization on joins of Section 14.2.8, “Partitioned Union”. The decision to decompose is based upon detecting that each side of the join is a partitioned union (note that non-ANSI joins of more than two tables may prevent the optimization from detecting the appropriate join). The rule currently only looks for situations where at most one partition matches from each side.

  • RuleCollapseSource - this rule removes all nodes below an Access node and collapses them into an equivalent query that is placed in the Access node.

  • RuleAssignOutputElements - this rule walks top down through every node and calculates the output columns for each node.  Columns that are not needed are dropped at every node.  This is done by keeping track of both the columns needed to feed the parent node and the columns that are “created” at a given node.
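    The top-down walk can be sketched on a toy dict-based tree (hypothetical representation): each node's outputs are exactly what its parent needs, and the requirement passed to children removes columns the node creates while adding columns it consumes.

```python
def assign_outputs(node, needed):
    """Top-down output-column assignment (illustrative only)."""
    created = set(node.get("creates", ()))    # columns this node introduces
    consumed = set(node.get("consumes", ()))  # columns this node reads itself
    node["outputs"] = sorted(needed)          # columns fed to the parent
    child_needed = (set(needed) - created) | consumed
    for child in node.get("children", ()):
        assign_outputs(child, child_needed)
```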

  • RuleValidateWhereAll - this rule validates a rarely used model option.

  • RuleAccessPatternValidation - validates that all access patterns have been satisfied.

  • RulePushLimit - pushes limit and offset information as far as possible in the plan.

The XML Planner creates an XML plan that is relatively close to the end result of the Procedure Planner – a program with instructions.  Many of the instructions are even similar (while loop, execute SQL, etc.).  Additional instructions deal with producing the output result document (adding elements and attributes).

The XML planner does several types of planning (not necessarily in this order):

- Document selection - determines which tags of the virtual document should be excluded from the output document.  This is done based on a combination of the model (which may mark parts of the document as excluded) and the query (which may specify a subset of columns to include in the SELECT clause).

- Criteria evaluation - breaks apart the user's criteria, determines which result set each criterion should be applied to, and adds that criterion to the corresponding result set query.

- Result set ordering - the query's ORDER BY clause is broken up and applied to each result set as necessary.

- Result set planning - ultimately, each result set is planned using the relational planner, taking into account all the impacts of the user's query.

- Program generation - a set of instructions to produce the desired output document is produced, taking into account the final result set queries and the excluded parts of the document.  Generally, this involves walking through the virtual document in document order, executing queries as necessary and emitting elements and attributes.

XML programs can also be recursive, which involves using the same document fragment for both the initial fragment and a set of repeated fragments (each a new query) until some termination criteria or limit is met.

<DEFAULT> SKIP : { " " | "\t" | "\n" | "\r" }




<DEFAULT> TOKEN : { <STRING: "string"> | <VARCHAR: "varchar"> | <BOOLEAN: "boolean"> | <BYTE: "byte"> | <TINYINT: "tinyint"> | <SHORT: "short"> | <SMALLINT: "smallint"> | <CHAR: "char"> | <INTEGER: "integer"> | <LONG: "long"> | <BIGINT: "bigint"> | <BIGINTEGER: "biginteger"> | <FLOAT: "float"> | <REAL: "real"> | <DOUBLE: "double"> | <BIGDECIMAL: "bigdecimal"> | <DECIMAL: "decimal"> | <DATE: "date"> | <TIME: "time"> | <TIMESTAMP: "timestamp"> | <OBJECT: "object"> | <BLOB: "blob"> | <CLOB: "clob"> | <XML: "xml"> }

<DEFAULT> TOKEN : { <CAST: "cast"> | <CONVERT: "convert"> }

<DEFAULT> TOKEN : { <ADD: "add"> | <ALL: "all"> | <ALTER: "alter"> | <AND: "and"> | <ANY: "any"> | <ARRAY: "array"> | <ARRAY_AGG: "array_agg"> | <AS: "as"> | <ASC: "asc"> | <ATOMIC: "atomic"> | <AUTORIZATION: "authorization"> | <BEGIN: "begin"> | <BETWEEN: "between"> | <BINARY: "binary"> | <BOTH: "both"> | <BREAK: "break"> | <BY: "by"> | <CALL: "call"> | <CALLED: "called"> | <CASCADED: "cascaded"> | <CASE: "case"> | <CHARACTER: "character"> | <CHECK: "check"> | <CLOSE: "close"> | <COLLATE: "collate"> | <COLUMN: "column"> | <COMMIT: "commit"> | <CONNECT: "connect"> | <CONSTRAINT: "constraint"> | <CONTINUE: "continue"> | <CORRESPONDING: "corresponding"> | <CURRENT_DATE: "current_date"> | <CURRENT_TIME: "current_time"> | <CURRENT_TIMESTAMP: "current_timestamp"> | <CURRENT_USER: "current_user"> | <CREATE: "create"> | <CRITERIA: "criteria"> | <CROSS: "cross"> | <CURSOR: "cursor"> | <DAY: "day"> | <DEALLOCATE: "deallocate"> | <DEFAULT_KEYWORD: "default"> | <DECLARE: "declare"> | <DELETE: "delete"> | <DESC: "desc"> | <DESCRIBE: "describe"> | <DETERMINISTIC: "deterministic"> | <DISCONNECT: "disconnect"> | <DISTINCT: "distinct"> | <DROP: "drop"> | <EACH: "each"> | <ELSE: "else"> | <END: "end"> | <ERROR: "error"> | <ESCAPE: "escape"> | <EXCEPT: "except"> | <EXEC: "exec"> | <EXECUTE: "execute"> | <EXTERNAL: "external"> | <EXISTS: "exists"> | <FALSE: "false"> | <FETCH: "fetch"> | <FILTER: "filter"> | <FOR: "for"> | <FORIEGN: "foriegn"> | <FROM: "from"> | <FULL: "full"> | <FUNCTION: "function"> | <GET: "get"> | <GLOBAL: "global"> | <GRANT: "grant"> | <GROUP: "group"> | <HAS: "has"> | <HAVING: "having"> | <HOLD: "hold"> | <HOUR: "hour"> | <IF: "if"> | <IDENTITY: "identity"> | <IMMEDIATE: "immediate"> | <IN: "in"> | <INDICATOR: "indicator"> | <INNER: "inner"> | <INPUT: "input"> | <INOUT: "inout"> | <INSENSITIVE: "insensitive"> | <INSERT: "insert"> | <INTERSECT: "intersect"> | <INTERVAL: "interval"> | <INTO: "into"> | <IS: "is"> | <ISOLATION: "isolation"> | <JOIN: "join"> | <LEFT: 
"left"> | <LANGUAGE: "language"> | <LARGE: "large"> | <LEADING: "leading"> | <LEAVE: "leave"> | <LIKE: "like"> | <LIKE_REGEX: "like_regex"> | <LIMIT: "limit"> | <LOCAL: "local"> | <LOOP: "loop"> | <MAKEDEP: "makedep"> | <MAKENOTDEP: "makenotdep"> | <MATCH: "match"> | <MERGE: "merge"> | <METHOD: "method"> | <MINUTE: "minute"> | <MODIFIES: "modifies"> | <MODULE: "module"> | <MONTH: "month"> | <NATURAL: "natural"> | <NEW: "new"> | <NOCACHE: "nocache"> | <NO: "no"> | <NONE: "none"> | <NOT: "not"> | <NULL: "null"> | <OF: "of"> | <OFFSET: "offset"> | <OLD: "old"> | <ON: "on"> | <ONLY: "only"> | <OPEN: "open"> | <OPTION: "option"> | <OR: "or"> | <ORDER: "order"> | <OUTER: "outer"> | <OUTPUT: "output"> | <OVER: "over"> | <OVERLAPS: "OVERLAPS"> | <PARAMETER: "parameter"> | <PARTITION: "partition"> | <PRECISION: "precision"> | <PREPARE: "prepare"> | <PRIMARY: "primary"> | <PROCEDURE: "procedure"> | <RANGE: "range"> | <READS: "reads"> | <RECURSIVE: "recursive"> | <REFERENCES: "REFERENCES"> | <REFERENCING: "REFERENCING"> | <RETURN: "return"> | <RETURNS: "returns"> | <REVOKE: "REVOKE"> | <RIGHT: "right"> | <ROLLBACK: "ROLLBACK"> | <ROLLUP: "ROLLUP"> | <ROW: "row"> | <ROWS: "rows"> | <SAVEPOINT: "savepoint"> | <SCROLL: "scroll"> | <SEARCH: "search"> | <SECOND: "second"> | <SELECT: "select"> | <SENSITIVE: "sensitive"> | <SESSION_USER: "session_user"> | <SET: "set"> | <SIMILAR: "similar"> | <SPECIFIC: "specific"> | <SOME: "some"> | <SQL: "sql"> | <SQLEXCEPTION: "sqlexception"> | <SQLSTATE: "sqlstate"> | <SQLWARNING: "sqlwarning"> | <START: "start"> | <STATIC: "static"> | <SYSTEM: "system"> | <SYSTEM_USER: "system_user"> | <TABLE: "table"> | <TEMPORARY: "temporary"> | <THEN: "then"> | <TIMEZONE_HOUR: "timezone_hour"> | <TIMEZONE_MINUTE: "timezone_minute"> | <TO: "to"> | <TRAILING: "trailing"> | <TRANSLATE: "translate"> | <TRIGGER: "trigger"> | <TRUE: "true"> | <UNION: "union"> | <UNIQUE: "unique"> | <UNKNOWN: "unknown"> | <USER: "user"> | <UPDATE: "update"> | <USING: "using"> | 
<VALUE: "value"> | <VALUES: "values"> | <VIRTUAL: "virtual"> | <WHEN: "when"> | <WHENEVER: "whenever"> | <WHERE: "where"> | <WITH: "with"> | <WHILE: "while"> | <WINDOW: "window"> | <WITHIN: "within"> | <WITHOUT: "without"> | <YEAR: "year"> | <ALLOCATE: "allocate"> | <ARE: "are"> | <ASENSITIVE: "asensitive"> | <ASYMETRIC: "asymetric"> | <CYCLE: "cycle"> | <DEC: "dec"> | <DEREF: "deref"> | <DYNAMIC: "dynamic"> | <ELEMENT: "element"> | <FREE: "free"> | <INT: "int"> | <LATERAL: "lateral"> | <LOCALTIME: "localtime"> | <LOCALTIMESTAMP: "localtimestamp"> | <MEMBER: "member"> | <MULTISET: "multiset"> | <NATIONAL: "national"> | <NCHAR: "nchar"> | <NCLOB: "nclob"> | <NUMERIC: "numeric"> | <RELEASE: "release"> | <SPECIFICTYPE: "specifictype"> | <SYMETRIC: "symetric"> | <SUBMULTILIST: "submultilist"> | <TRANSLATION: "translation"> | <TREAT: "treat"> | <VARYING: "varying"> }

<DEFAULT> TOKEN : { <XMLAGG: "xmlagg"> | <XMLATTRIBUTES: "xmlattributes"> | <XMLBINARY: "xmlbinary"> | <XMLCAST: "xmlcast"> | <XMLCONCAT: "xmlconcat"> | <XMLCOMMENT: "xmlcomment"> | <XMLDOCUMENT: "xmldocument"> | <XMLELEMENT: "xmlelement"> | <XMLEXISTS: "xmlexists"> | <XMLFOREST: "xmlforest"> | <XMLITERATE: "xmliterate"> | <XMLNAMESPACES: "xmlnamespaces"> | <XMLPARSE: "xmlparse"> | <XMLPI: "xmlpi"> | <XMLQUERY: "xmlquery"> | <XMLSERIALIZE: "xmlserialize"> | <XMLTABLE: "xmltable"> | <XMLTEXT: "xmltext"> | <XMLVALIDATE: "xmlvalidate"> }

<DEFAULT> TOKEN : { <DATALINK: "datalink"> | <DLNEWCOPY: "dlnewcopy"> | <DLPREVIOUSCOPY: "dlpreviouscopy"> | <DLURLCOMPLETE: "dlurlcomplete"> | <DLURLCOMPLETEWRITE: "dlurlcompletewrite"> | <DLURLCOMPLETEONLY: "dlurlcompleteonly"> | <DLURLPATH: "dlurlpath"> | <DLURLPATHWRITE: "dlurlpathwrite"> | <DLURLPATHONLY: "dlurlpathonly"> | <DLURLSCHEME: "dlurlscheme"> | <DLURLSERVER: "dlurlserver"> | <DLVALUE: "dlvalue"> | <IMPORT: "import"> }

<DEFAULT> TOKEN : { <ALL_IN_GROUP: <ID> <PERIOD> <STAR>> | <ID: <QUOTED_ID> (<PERIOD> <QUOTED_ID>)*> | <#QUOTED_ID: <ID_PART> | "\"" ("\"\"" | ~["\""])+ "\""> | <#ID_PART: ("@" | "#" | <LETTER>) (<LETTER> | "_" | <DIGIT>)*> | <DATETYPE: "{" "d"> | <TIMETYPE: "{" "t"> | <TIMESTAMPTYPE: "{" "ts"> | <BOOLEANTYPE: "{" "b"> | <POS_REF: ["$"] (<DIGIT>)+> | <INTEGERVAL: (<MINUS>)? (<DIGIT>)+> | <FLOATVAL: (<MINUS>)? (<DIGIT>)* <PERIOD> (<DIGIT>)+ (["e","E"] (["+","-"])? (<DIGIT>)+)?> | <STRINGVAL: ("N" | "E")? "\'" ("\'\'" | ~["\'"])* "\'"> | <#LETTER: ["a"-"z","A"-"Z"] | ["\u0153"-"\ufffd"]> | <#DIGIT: ["0"-"9"]> }

<DEFAULT> TOKEN : { <COMMA: ","> | <PERIOD: "."> | <LPAREN: "("> | <RPAREN: ")"> | <LBRACE: "{"> | <RBRACE: "}"> | <LSBRACE: "["> | <RSBRACE: "]"> | <EQ: "="> | <NE: "<>"> | <NE2: "!="> | <LT: "<"> | <LE: "<="> | <GT: ">"> | <GE: ">="> | <STAR: "*"> | <SLASH: "/"> | <PLUS: "+"> | <MINUS: "-"> | <QMARK: "?"> | <DOLLAR: "$"> | <SEMICOLON: ";"> | <COLON: ":"> | <CONCAT_OP: "||"> }


::= ( <STRINGVAL> )


::= ( <ID> )


::= ( createUpdateProcedure | userCommand | callableStatement ) ( <SEMICOLON> )? <EOF>


::= ( updateProcedure | userCommand ) ( <SEMICOLON> )? <EOF>


::= ( createUpdateProcedure | forEachRowTriggerAction ) <EOF>


::= <CREATE> <TRIGGER> <ON> id nonReserved <OF> ( <INSERT> | <UPDATE> | <DELETE> ) <AS> forEachRowTriggerAction


::= <ALTER> ( ( nonReserved id <AS> queryExpression ) | ( <PROCEDURE> id <AS> statement ) | ( <TRIGGER> <ON> id nonReserved <OF> ( <INSERT> | <UPDATE> | <DELETE> ) ( ( <AS> forEachRowTriggerAction ) | nonReserved ) ) )


::= <FOR> <EACH> <ROW> ( ( <BEGIN> ( <ATOMIC> )? ( statement )* <END> ) | statement )


::= ( queryExpression | storedProcedure | insert | update | delete | dropTable | createTempTable | alter | createTrigger )


::= <DROP> <TABLE> id


::= <CREATE> <LOCAL> <TEMPORARY> <TABLE> id <LPAREN> tableElement ( <COMMA> tableElement )* ( <COMMA> <PRIMARY> nonReserved <LPAREN> id ( <COMMA> id )* <RPAREN> )? <RPAREN>


::= id ( dataTypeString | nonReserved ) ( <NOT> <NULL> )?


::= <ERROR> expression


::= ( ( ( id <COLON> )? ( loopStatement | whileStatement | compoundStatement ) ) | ( ifStatement | delimitedStatement ) )


::= ( assignStatement | sqlStatement | errorStatement | declareStatement | branchingStatement ) <SEMICOLON>


::= <BEGIN> ( ( <NOT> )? <ATOMIC> )? ( statement )* <END>


::= ( ( ( <BREAK> | <CONTINUE> ) ( id )? ) | ( <LEAVE> id ) )


::= <WHILE> <LPAREN> criteria <RPAREN> statement


::= <LOOP> <ON> <LPAREN> queryExpression <RPAREN> <AS> id statement


::= <IF> <LPAREN> criteria <RPAREN> statement ( <ELSE> statement )?


::= ( ( <EQ> | <NE> | <NE2> | <LE> | <GE> | <LT> | <GT> | <IN> | <LIKE> | ( <IS> <NULL> ) | <BETWEEN> ) )? <CRITERIA> ( <ON> <LPAREN> id ( <COMMA> id )* <RPAREN> )?


::= <HAS> criteriaSelector


::= <DECLARE> dataType id ( ( nonReserved | <EQ> ) assignStatementOperand )?


::= id ( nonReserved | <EQ> ) assignStatementOperand


::= ( ( insert ) | update | delete | ( expression ) | queryExpression )


::= ( ( userCommand ) | dynamicCommand | ( id ( nonReserved | <EQ> ) storedProcedure ) )


::= <TRANSLATE> criteriaSelector ( <WITH> <LPAREN> id <EQ> expression ( <COMMA> id <EQ> expression )* <RPAREN> )?


::= <CREATE> ( <VIRTUAL> )? ( <UPDATE> )? <PROCEDURE> statement


::= ( <EXECUTE> | <EXEC> ) ( ( <STRING> | <IMMEDIATE> ) )? expression ( <AS> createElementsWithTypes ( <INTO> id )? )? ( <USING> setClauseList )? ( <UPDATE> ( ( <INTEGERVAL> ) | ( <STAR> ) ) )?


::= id <EQ> ( <COMMA> id <EQ> )*


::= id dataTypeString ( <COMMA> id dataTypeString )*


::= <LBRACE> ( <QMARK> <EQ> )? <CALL> id ( <LPAREN> ( executeUnnamedParams ) <RPAREN> )? <RBRACE> ( option )?


::= ( ( <EXEC> | <EXECUTE> | <CALL> ) id <LPAREN> ( executeNamedParams | executeUnnamedParams ) <RPAREN> ) ( option )?


::= ( expression ( <COMMA> expression )* )?


::= ( id <EQ> ( <GT> )? expression ( <COMMA> id <EQ> ( <GT> )? expression )* )


::= <INSERT> <INTO> id ( columnList )? ( ( <VALUES> <LPAREN> expressionList <RPAREN> ) | ( queryExpression ) ) ( option )?


::= <LPAREN> id ( <COMMA> id )* <RPAREN>


::= expression ( <COMMA> expression )*


::= <UPDATE> id <SET> setClauseList ( where )? ( option )?


::= <DELETE> <FROM> id ( where )? ( option )?


::= ( <WITH> withListElement ( <COMMA> withListElement )* )? queryExpressionBody


::= id ( columnList )? <AS> <LPAREN> queryExpression <RPAREN>


::= queryTerm ( ( <UNION> | <EXCEPT> ) ( <ALL> | <DISTINCT> )? queryTerm )* ( orderby )? ( limit )? ( option )?


::= queryPrimary ( <INTERSECT> ( <ALL> | <DISTINCT> )? queryPrimary )*


::= ( query | ( <TABLE> id ) | ( <LPAREN> queryExpressionBody <RPAREN> ) )


::= select ( into )? ( from ( where )? ( groupBy )? ( having )? )?


::= <INTO> ( id )


::= <SELECT> ( <ALL> | ( <DISTINCT> ) )? ( <STAR> | ( selectSymbol ( <COMMA> selectSymbol )* ) )


::= ( selectExpression | allInGroupSymbol )


::= ( expression ( ( <AS> )? id )? )


::= ( expression ( <AS> id )? )




::= ( <XMLAGG> | <ARRAY_AGG> ) <LPAREN> expression ( orderby )? <RPAREN> filterClause


::= nonReserved <LPAREN> <FOR> derivedColumn ( <COMMA> derivedColumn )* ( <ID> charVal )? ( ( <ID> charVal ) )? ( <ID> )? ( ( <ID> id ) )? ( orderby )? <RPAREN> filterClause


::= ( ( ( nonReserved <LPAREN> <STAR> <RPAREN> ) | ( nonReserved <LPAREN> <RPAREN> ) | ( ( nonReserved | <ANY> | <SOME> ) <LPAREN> ( <DISTINCT> | <ALL> )? expression <RPAREN> ) ) filterClause )


::= ( <FILTER> <LPAREN> <WHERE> booleanPrimary <RPAREN> )?


::= <FROM> ( tableReference ( <COMMA> tableReference )* )


::= ( ( <LBRACE> nonReserved joinedTable <RBRACE> ) | joinedTable )


::= tablePrimary ( ( crossJoin | qualifiedJoin ) )*


::= ( ( <CROSS> | <UNION> ) <JOIN> tablePrimary )


::= ( ( ( <RIGHT> ( <OUTER> )? ) | ( <LEFT> ( <OUTER> )? ) | ( <FULL> ( <OUTER> )? ) | <INNER> )? <JOIN> tableReference <ON> criteria )


::= ( textTable | arrayTable | xmlTable | unaryFromClause | subqueryFromClause | ( <LPAREN> joinedTable <RPAREN> ) ) ( ( <MAKEDEP> ) | ( <MAKENOTDEP> ) )?


::= <XMLSERIALIZE> <LPAREN> ( nonReserved )? expression ( <AS> ( <STRING> | <VARCHAR> | <CLOB> ) )? <RPAREN>


::= <ID>


::= <ID> <LPAREN> expression nonReserved createElementsWithTypes <RPAREN> ( <AS> )? id


::= <ID> <LPAREN> expression nonReserved textColumn ( <COMMA> textColumn )* ( <NO> <ROW> nonReserved )? ( <ID> charVal )? ( ( <ESCAPE> charVal ) | ( <ID> charVal ) )? ( <ID> ( intVal )? )? ( <ID> intVal )? <RPAREN> ( <AS> )? id


::= id dataType ( <ID> intVal ( <NO> nonReserved )? )?


::= <XMLQUERY> <LPAREN> ( xmlNamespaces <COMMA> )? stringVal ( <ID> derivedColumn ( <COMMA> derivedColumn )* )? ( ( <NULL> | nonReserved ) <ON> nonReserved )? <RPAREN>


::= <XMLTABLE> <LPAREN> ( xmlNamespaces <COMMA> )? stringVal ( <ID> derivedColumn ( <COMMA> derivedColumn )* )? ( <ID> xmlColumn ( <COMMA> xmlColumn )* )? <RPAREN> ( <AS> )? id


::= id ( ( <FOR> nonReserved ) | ( dataType ( <DEFAULT_KEYWORD> expression )? ( nonReserved stringVal )? ) )




::= ( <TABLE> )? <LPAREN> ( queryExpression | storedProcedure ) <RPAREN> ( <AS> )? id


::= ( <ID> ( ( <AS> )? id )? )


::= <WHERE> criteria


::= compoundCritOr


::= compoundCritAnd ( <OR> compoundCritAnd )*


::= notCrit ( <AND> notCrit )*


::= ( <NOT> )? booleanPrimary


::= ( translateCriteria | ( commonValueExpression ( ( betweenCrit | matchCrit | regexMatchCrit | setCrit | isNullCrit | subqueryCompareCriteria | compareCrit ) )? ) | existsCriteria | hasCriteria )


::= ( <EQ> | <NE> | <NE2> | <LT> | <LE> | <GT> | <GE> )


::= operator commonValueExpression


::= <LPAREN> ( queryExpression | ( storedProcedure ) ) <RPAREN>


::= subquery


::= operator ( <ANY> | <SOME> | <ALL> ) subquery


::= ( <NOT> )? ( <LIKE> | ( <SIMILAR> <TO> ) ) commonValueExpression ( <ESCAPE> charVal | ( <LBRACE> <ESCAPE> charVal <RBRACE> ) )?


::= ( <NOT> )? <LIKE_REGEX> commonValueExpression


::= stringVal


::= ( <NOT> )? <BETWEEN> commonValueExpression <AND> commonValueExpression


::= <IS> ( <NOT> )? <NULL>


::= ( <NOT> )? <IN> ( ( subqueryAndHint ) | ( <LPAREN> commonValueExpression ( <COMMA> commonValueExpression )* <RPAREN> ) )


::= <EXISTS> subqueryAndHint


::= <GROUP> <BY> expressionList


::= <HAVING> criteria


::= <ORDER> <BY> sortSpecification ( <COMMA> sortSpecification )*


::= sortKey ( <ASC> | <DESC> )? ( nonReserved nonReserved )?


::= expression


::= ( intVal | <QMARK> )


::= ( ( <LIMIT> intParam ( <COMMA> intParam )? ) | ( <OFFSET> intParam ( <ROW> | <ROWS> ) ( fetchLimit )? ) | ( fetchLimit ) )


::= <FETCH> nonReserved ( intParam )? ( <ROW> | <ROWS> ) <ONLY>


::= <OPTION> ( <MAKEDEP> id ( <COMMA> id )* | <MAKENOTDEP> id ( <COMMA> id )* | <NOCACHE> ( id ( <COMMA> id )* )? )*


::= criteria


::= ( plusExpression ( <CONCAT_OP> plusExpression )* )


::= ( timesExpression ( plusOperator timesExpression )* )


::= ( <PLUS> | <MINUS> )


::= ( valueExpressionPrimary ( timesOperator valueExpressionPrimary )* )


::= ( <STAR> | <SLASH> )


::= ( <QMARK> | <POS_REF> | literal | ( <LBRACE> nonReserved function <RBRACE> ) | ( textAgg ( windowSpecification )? ) | ( aggregateSymbol ( windowSpecification )? ) | ( aggregateSymbol ( windowSpecification )? ) | orderedAgg ( windowSpecification )? | ( aggregateSymbol windowSpecification ) | ( function ) | ( <ID> ( <LSBRACE> intVal <RSBRACE> )? ) | subquery | ( <LPAREN> expression <RPAREN> ( <LSBRACE> intVal <RSBRACE> )? ) | searchedCaseExpression | caseExpression )


::= <OVER> <LPAREN> ( <PARTITION> <BY> expressionList )? ( orderby )? <RPAREN>


::= <CASE> expression ( <WHEN> expression <THEN> expression )+ ( <ELSE> expression )? <END>


::= <CASE> ( <WHEN> criteria <THEN> expression )+ ( <ELSE> expression )? <END>


::= ( ( <CONVERT> <LPAREN> expression <COMMA> dataType <RPAREN> ) | ( <CAST> <LPAREN> expression <AS> dataType <RPAREN> ) | ( nonReserved <LPAREN> expression <FROM> expression ( <FOR> expression )? <RPAREN> ) | ( nonReserved <LPAREN> ( <YEAR> | <MONTH> | <DAY> | <HOUR> | <MINUTE> | <SECOND> ) <FROM> expression <RPAREN> ) | ( nonReserved <LPAREN> ( ( ( ( <LEADING> | <TRAILING> | <BOTH> ) ( expression )? ) | expression ) <FROM> )? expression <RPAREN> ) | ( nonReserved <LPAREN> expression <COMMA> stringConstant <RPAREN> ) | ( nonReserved <LPAREN> intervalType <COMMA> expression <COMMA> expression <RPAREN> ) | queryString | ( ( <LEFT> | <RIGHT> | <CHAR> | <USER> | <YEAR> | <MONTH> | <HOUR> | <MINUTE> | <SECOND> | <XMLCONCAT> | <XMLCOMMENT> ) <LPAREN> ( expressionList )? <RPAREN> ) | ( ( <TRANSLATE> | <INSERT> ) <LPAREN> ( expressionList )? <RPAREN> ) | xmlParse | xmlElement | ( <XMLPI> <LPAREN> ( <ID> idExpression | idExpression ) ( <COMMA> expression )? <RPAREN> ) | xmlForest | xmlSerialize | xmlQuery | ( id <LPAREN> ( expressionList )? <RPAREN> ) )


::= stringVal


::= <XMLPARSE> <LPAREN> nonReserved expression ( nonReserved )? <RPAREN>


::= nonReserved <LPAREN> expression ( <COMMA> derivedColumn )* <RPAREN>


::= <XMLELEMENT> <LPAREN> ( <ID> id | id ) ( <COMMA> xmlNamespaces )? ( <COMMA> xmlAttributes )? ( <COMMA> expression )* <RPAREN>


::= <XMLATTRIBUTES> <LPAREN> derivedColumn ( <COMMA> derivedColumn )* <RPAREN>


::= <XMLFOREST> <LPAREN> ( xmlNamespaces <COMMA> )? derivedColumn ( <COMMA> derivedColumn )* <RPAREN>


::= <XMLNAMESPACES> <LPAREN> namespaceItem ( <COMMA> namespaceItem )* <RPAREN>


::= ( stringVal <AS> id )


::= ( <DEFAULT_KEYWORD> stringVal )


::= id




::= dataTypeString


::= ( nonReserved )


::= ( stringVal | <INTEGERVAL> | <FLOATVAL> | <FALSE> | <TRUE> | <UNKNOWN> | <NULL> | ( ( <BOOLEANTYPE> | <TIMESTAMPTYPE> | <DATETYPE> | <TIMETYPE> ) stringVal <RBRACE> ) )