Transferring data from one place to another is the cornerstone of almost every data transformation job. XDTL has a very flexible and feature-laden fetch command for exactly that. The fetch element has the following attributes:

source

a query against the source database. source can contain either an inline query text or a variable containing the query.

target

target table or output text file. Specifying the destination connection attribute indicates that target is a table. If target is a text file, all the common text file attributes (delimiter, quote, null, encoding) also apply and can optionally be present.

overwrite

overwrite="1" causes an automatic TRUNCATE command against the target table. 

rowset

specifies an XDTL variable (internally a Java array) to store the result set of the source query. rowset and target are mutually exclusive and should not be present in the same fetch element.

connection

connection of the source query. 

destination 

connection of the target table. 

 

The most obvious use case for fetch is transferring data from a database table to a text file. The following XDTL command fetches the data specified by the query (source attribute) into a CSV file (target attribute), with ; as the delimiter and " as the quote symbol (the defaults, which can be overridden):

<xdtl:fetch source="SELECT col, col, ... FROM table WHERE cond" target="textfile" connection="$conn"/> 

If we need to further process the results of the fetch operation inside the XDTL package, fetch can store the result set in a variable (technically a Java array):

<xdtl:fetch source="SELECT col, col, ... FROM table WHERE cond" rowset="myrowset" connection="$conn"/>
<xdtl:for item="myrow" rowset="$myrowset"> 
  <!-- do something here; columns are addressed with Java array syntax: ${myrow[0]}, ${myrow[1]}, etc. -->
</xdtl:for>

We can go even further and let the read command, normally used to read data from text files into database tables, treat this rowset variable as its input and store the data into a table with a compliant structure, i.e. one matching the rowset by position (type="ROWSET" tells read that the input data comes from a Java array instead of a text file):

<xdtl:fetch source="SELECT * FROM sourcetable WHERE cond" rowset="myrowset" connection="$conn"/>
<xdtl:read source="$myrowset" target="targettable" type="ROWSET" overwrite="1" connection="$dest"/>

Finally, if we have no need to process the data inside the XDTL package, the fetch command can replicate the behavior of a typical DataPump task found in most graphical ETL tools in a single step (here, the second connection attribute, destination, specifies the target connection):

<xdtl:fetch source="SELECT * FROM sourcetable WHERE cond" target="targettable" connection="$conn" destination="$dest"/> 

Caution! fetch can transfer data between different servers, possibly from different vendors. It can do automatic datatype conversions within reasonable limits (*), but all 'weird', non-standard data types (**) need to be taken care of in the source query statement. Source and target data structures are assumed to be identical; this requirement must be handled in the source query by presenting the data in a format and structure suitable for the target table. And, most importantly: data columns are matched by position, not by column names!
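
For example, if the target table expects its columns in a particular order and with standard types, that alignment has to happen in the source query itself. A hedged sketch with hypothetical table and column names, where the explicit column list fixes the positions and the cast stands in for whatever conversion a non-standard source type needs:

<xdtl:fetch source="SELECT id, CAST(created_at AS VARCHAR(19)), amount FROM sourcetable" target="targettable" connection="$conn" destination="$dest"/>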

Taking all this into consideration, all basic data transfer tasks can be handled with ease. For more complex transformations, extensions or customized library tasks should be used. Naturally, fetch works best for reasonable data volumes; for large data sets other approaches (e.g. bulk load tools) should be employed.

(*) Datatype conversion is done in JDBC via Java Primitive Data Types.

(**)