JPumpDX - Relational Data Migration with XML

JPumpDX is an effort to implement best practices in relational data migration. It uses XML as its external data store. Table and record data can be migrated from one database to another, and from a database to XML and vice versa. Ten databases have been tested so far.
You can use JPumpDX as an INSERT and UPDATE tool.
The contained SQL dialect is configurable with XML. You may be able to adapt it to your database without coding Java, just by writing an XML dialect configuration.
Export and import behaviour is configurable via XML.
JPumpDX is written in Java. It builds on JDBC database connections and on the meta data delivered by JDBC. You need a Java runtime with an XML parser (JDK 1.3 or later). At the moment there is no GUI, just a command line migration tool.

This package is open source, published under the GNU Lesser General Public License (LGPL).
Thanks to all database manufacturers for making it possible to test JPumpDX with their products (although some products were too big for my hard disk).


Author: Ritzberger Fritz, May 2005



Contents


Features


Terms

Definitions of some terms used in the implementation and in this document.

Migration
Copying relational data to some other store (database or XML), including foreign key associations, preserving referential integrity. Used as the base term for both export and import.
Mapping:
datatype-mapper,
identifier-mapper,
primarykey-mapper
Used in three different contexts:
1. The finding of a new data type for a source data type that the target database does not support. Typically this happens on BOOLEAN (see class DataTypeMapper).
2. Renaming catalogs/schemas/tables/columns on migration (see class DbIdentifierMapper).
3. Primary key mapping happens when a key generator must be used on import. The old and new values of a key are stored in a Map to resolve foreign key values (see interface PrimaryKeyMapper).
Filter
Excluding or including certain schemas, tables, columns on migration. Necessary for selective migration. See class DbIdentifierMapper.
Condition
An SQL-abstracted WHERE clause that reduces the number of records to be exported from a table. Configurable via XML on export.
Appender
When a JDBC driver does not report indexes and unique constraints correctly, you need to append them to retrieved MetaData. This can be done by XML configuration, using the elements UniqueConstraintAppender, IndexAppender and ForeignKeyAppender.
Meta data
Data about other data, needed to understand and process the referenced data. In this context it means catalogs, schemas, tables, constraints, indexes: everything that must be in the database to enable record import; everything that is contained in JDBC DatabaseMetaData.
Instance data
Non-metadata, records and their column values. Instance data would include column values, but not their data types.
To ensure consistency, JPumpDX instance data are always accompanied by their meta data, on any level (Record, NamedValue) and in every context (XML, Java). So in fact they are not pure instance data, which makes these terms somewhat redundant.
Catalog
An SQL-92 term that defines a container for schemas. Here it is understood in the JDBC meaning, everything that is returned from DatabaseMetaData.getCatalogs() call. Some vendors call this "database" or "database instance". You always need vendor tools to create a catalog. Sometimes the catalog name must be included in the JDBC database URL. Old databases like Oracle use this term for other contexts.
Some databases support catalogs but not schemas (MySQL), some support both (Frontbase, Daffodil), some support only schemas (Oracle, McKoi, Pointbase), some support neither (HSQL, InstantDB), and for some this is not quite clear (Postgres, where every created table appears in every catalog).
Schema
An SQL-92 term that defines a container for tables. Here it is understood in the JDBC meaning, everything that is returned from DatabaseMetaData.getSchemas() call. Many databases provide the SQL-92 command to create a schema. Nevertheless you might need privileges to do this.
Natural foreign key
One or more significant columns (with values) that characterize a record of a non-exported table. Needed to find that record in another database when a foreign key relation must be established. The best natural foreign key is a unique column that is not a key column (as key columns might get mapped). A natural foreign key includes a catalog/schema/table name, the column name in referencing table, referenced column names with their data types and values.
Precision
JDBC term, used in two different contexts:
1. Defines the length of character/binary data types, and the digits for numbers, including digits before and after decimal point, excluding the point itself. Scale gives the digits after the decimal point. JDBC even provides a radix to report this, most databases do this with radix 10. Sun has published recommendations for JDBC data type precisions.
2. Defines the actual maximum or fixed (CHAR) length of some character/binary fields, and the maximum digits for number fields. The better term would have been "maximum length".
Native
Here this means database- or product-native (not Java-native). For example, some databases provide only uppercase identifiers, some define a limit of 30 characters for identifiers, ...
Identifier
SQL and JDBC term that includes schema-, table-, index-, constraint- and column-names. Most databases provide identifier quoting to allow names that would conflict with SQL keywords, e.g. "ORDER".
Key recognizer
When keys are alphanumeric, TableImporter will not recognize them as mappable keys. When they must be mapped anyway, by providing an alphanumeric key generator through a PrimaryKeyMapper implementation, configuring a KeyRecognizer helps to mark these keys as mappable. Mind that this is not a workaround for the fact that some relational models do not declare keys (either for performance tuning or because they are temporal).





Motivation and aims

The following was the motivation to write this library.
JPumpDX supports continuity and connectivity by providing an XML data format, and an SQL dialect configurable by XML. It supports most JDBC data types listed in java.sql.Types. Data of type ARRAY, DATALINK, REF, JAVA_OBJECT, NULL, OTHER could work from database to database, but not with XML (unless you implement the String-Object conversion).

These were the targets:



Quick Start

You need a Java that has an XML parser; everything since JDK 1.3 will do. No libraries except the JDBC drivers for your databases are required.

All necessary XML document type definitions are in the dtds subdirectory of the distribution. They can be used to create the XML files passed as arguments to the application.

Write every needed JDBC connection into an XML file that looks like the following. You can load ConnectionParameters.dtd when using an XML editor.
<?xml version="1.0"?>
<ConnectionParameters
    Url="jdbc:mckoi://localhost:9158"
    DriverClass="com.mckoi.JDBCDriver"
    User="test"
    Password="test"
/>

Import (from XML)

Before you import any data, make sure that the target catalog and schema exist in the database. JPumpDX has a CREATE SCHEMA command, but many databases require special privileges for it. Creating catalogs is mostly not possible without native tools, as "catalog" is a synonym for "database" and often appears in the JDBC URL.

Write your meta- and instance-data into a file that looks like the following. You can load DatabaseInstanceData.dtd when using an XML editor. The TypeId attributes must be appropriate for your data values, but the TypeName need not match exactly what your database provides; the column type will be found via the TypeId when the TypeName can not be found. You find a list of all possible JDBC TypeIds in the JDK reference of java.sql.Types, or in DialectConfiguration.dtd.
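For reference, the most frequently used TypeId values in java.sql.Types are: CHAR = 1, NUMERIC = 2, DECIMAL = 3, INTEGER = 4, SMALLINT = 5, FLOAT = 6, REAL = 7, DOUBLE = 8, VARCHAR = 12, DATE = 91, TIME = 92, TIMESTAMP = 93, LONGVARCHAR = -1, BINARY = -2, VARBINARY = -3, LONGVARBINARY = -4, BIGINT = -5, TINYINT = -6, BIT = -7, BLOB = 2004, CLOB = 2005 (these are facts of the JDBC standard, not specific to JPumpDX).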

(Alternatively you can use one of the XML files in the sample directory.)
<?xml version='1.0'?>
<DatabaseData>

    <TableData>
        <Table Name='AirCraft' Schema='TEST'>
            <Column Name='id' TypeId='-5' TypeName='BIGINT' PrimaryKey='true' Nullable='false'>
                <ReferencedBy Table='Flight' Column='aircraft_id' Schema='TEST' Constraint='fk_aircraft'/>
            </Column>
            <Column Name='name' TypeId='12' TypeName='VARCHAR' MaxLength='50'/>
            <Column Name='description' TypeId='12' TypeName='VARCHAR' MaxLength='250'/>
        </Table>
        <Rec>
            <Nv Name='id'>1</Nv>
            <Nv Name='name'>DC-10-A</Nv>
            <Nv Name='description'>DC-10 Passenger Plane</Nv>
        </Rec>
    </TableData>

    <TableData>
        <Table Name='Flight' Schema='TEST'>
            <Column Name='id' TypeId='-5' TypeName='BIGINT' PrimaryKey='true' Nullable='false'/>
            <Column Name='number' TypeId='12' TypeName='VARCHAR' MaxLength='50'/>
            <Column Name='from' TypeId='12' TypeName='VARCHAR' MaxLength='150'/>
            <Column Name='to' TypeId='12' TypeName='VARCHAR' MaxLength='150'/>
            <Column Name='departs' TypeId='93' TypeName='TIMESTAMP'/>
            <Column Name='arrives' TypeId='93' TypeName='TIMESTAMP'/>
            <Column Name='aircraft_id' TypeId='-5' TypeName='BIGINT'>
                <ReferenceTo Table='AirCraft' Column='id' Schema='TEST' Constraint='fk_aircraft'/>
            </Column>
        </Table>
        <Rec>
            <Nv Name='id'>2</Nv>
            <Nv Name='number'>EN-15</Nv>
            <Nv Name='from'>Sheffield, England</Nv>
            <Nv Name='to'>Isle Of Wight, England</Nv>
            <Nv Name='aircraft_id'>1</Nv>
            <Nv Name='departs'>2001-09-07 10:45:00:000 CEST</Nv>
            <Nv Name='arrives'>2001-09-07 13:25:00:000 CEST</Nv>
        </Rec>
    </TableData>

</DatabaseData>


Now write the following command line (syntax: from - to), assuming you want to import into schema APP.
You must have jpumpdx.jar and the JDBC driver JAR on your CLASSPATH, e.g.:  java -cp jpumpdx.jar;mckoidb.jar ...
java -cp ... fri.util.database.tools.Migration -xml importdata.xml -db connection.xml -schema APP

The data will be imported into the database; tables are created when they do not exist, and foreign keys are established for newly created tables. When tables and records already exist, they will be updated.

Export (to XML)

Write the following command line (syntax: from - to), assuming you want to export the schema APP.
java -cp ... fri.util.database.tools.Migration -db connection.xml -schema APP -xml exportdata.xml -schema APP

The data of all catalogs/schemas/tables/columns in the database will appear in exportdata.xml. Of course you can select certain schema(s) or table(s) using the -schema option. See below for the syntax.

Migration (database to database)

Write the following command line (syntax: from - to), assuming you want to copy the schema APP into schema TST of another database:
java -cp ... fri.util.database.tools.Migration -db source-connection.xml -schema APP -db target-connection.xml -schema TST

All catalogs and schemas of the source database will be copied to the target database. You can select a certain catalog/schema/table using the appropriate options (-catalog, -schema, -table), or using an export and import configuration (-config, see the manual).

Tool Syntax

The Migration tool can quickly migrate from database to database, from XML to database, or from database to XML. It can optionally migrate a selected catalog/schema/table; when none is given, it migrates all catalogs/schemas/tables. The migration can even be configured by XML. All XML must be in files.

This is the full syntax of the migration tool ('copy from - to'):
java -cp jpumpdx.jar;jdbcDriver.jar  fri.util.database.tools.Migration
       { -xml instanceData.xml | -db connectionParameters.xml }
           { [-catalog CATALOGNAME] [-schema SCHEMANAME] [-table TABLENAME] } | [-config exportConfig.xml [-graph]]
       { -db connectionParameters.xml | -xml instanceData.xml }
           { [-catalog CATALOGNAME] [-schema SCHEMANAME] [-table TABLENAME] } | [-config importConfig.xml]

The source (from) can be XML or database, the target (to) can be XML or database. XML for both source and target will not be accepted. One catalog, schema and table can be passed as selection. Alternatively a compound XML configuration can be given. Use the -graph option for exporting a graph of related records, which will be built from contained table conditions.

IMPORTANT:

The -catalog and -schema options perform (1) source selection and (2) target mapping of catalog/schema names. Mapping to null is possible: when you map an export from schema APP to null, no schema name will appear in the XML. When you re-import such data, you must map it to a target schema, else the import will fail!
So when exporting schema APP, and you want the schema name to appear in the XML, you must write the -schema option twice: -db connection.xml -schema APP -xml exportdata.xml -schema APP

You find DTDs for all configuration XML files in the dtds subdirectory of the distribution (-config option).
Do not forget to put the JDBC driver JAR into CLASSPATH!

To drop your test tables, there is another tool:
java -cp jpumpdx.jar;jdbcDriver.jar   fri.util.database.tools.DropTables
           [-catalog CATALOGNAME] [-schema SCHEMANAME] [-table TABLENAME]
Be careful with the -table option: you can not drop a single table that others depend on. SqlBridge relies on compound dropping in dependency order. It does not check whether the dropped table is referenced by others; it just drops the foreign key constraints pointing to other tables. Otherwise it would have to set foreign keys to null, and if these are mandatory, this would cause an error.

When errors occur, you might have to work on a dialect for your database. Here is the tool to do this:
java -cp jpumpdx.jar;jdbcDriver.jar  fri.util.database.tools.CheckDataTypes  -db ConnectionParameters.xml [-dialect DialectConfiguration.xml]
This tests all supported JDBC data types by creating a table with one column of a certain data type, inserting a record and reading it back. The created data type is checked (whether it is the given one or a compatible one), and the read value is checked for equality. If no dialect is passed, the generic Dialect is used. Interpreting the resulting errors is quite difficult. The tool proposes a dialect configuration at the end, but do not take this too seriously, as the errors are too subtle.



Hope this works! If not, it's time to look at the Manual.






Manual

JPumpDX is a database administration tool running on JDBC. You can load databases with data and/or metadata from XML, you can export database data and/or metadata to XML, and you can migrate from one database to another. You can use JPumpDX either as a black-box tool, doing basic things by command line and XML configuration, or as an API to implement special migration logic or dialects. Several capabilities, like establishing MetaData, defining PrimaryKeyMappers or Dialect configurations, are not available via the command line. For API usage you should also read the Design chapters, and of course the JavaDoc.

Configuring JDBC connections

The well-known connection parameters for JDBC are:
These can be written to and read from XML; the DTD is ConnectionParameters.dtd, and a connection looks like the one shown above.
Write this to a file named myConnection.xml, and then pass it as an argument to the command line Migration tool.

Or, coding Java, you can read this into a Java object by
ConnectionParameters connectionParameters = (ConnectionParameters)
    JaxbObjectFactory.getFileInstance("myConnection.xml", ConnectionParameters.class);
You can allocate an SqlBridge directly using the XML file:
SqlBridge sqlBridge = SqlBridgeFactory.getInstance("myConnection.xml");    // could be already in use
or
SqlBridge sqlBridge = SqlBridgeFactory.newInstance(new File("myConnection.xml"));    // ensure new connection

Do not forget to put the JDBC driver JAR into CLASSPATH!
Some databases ship their JDBC driver in a product directory (Oracle, DB2, ...), some have open source drivers (MySQL, Postgres, MS SQL Server via jtds.sourceforge.net). Go to the Sun JDBC page to find your driver.

Table design only (using XML)

You can design your tables database-independently with XML. The DTD for table metadata is DatabaseMetaData.dtd. Use this when you want to define just meta data; do not use it when you want to define records, too.

You do not have to care about import dependency order, as foreign keys are established at the end of the import. But when tables already exist, or when records are contained (using DatabaseInstanceData.dtd), you should write your data in dependency order. That means: when Flight depends on AirCraft by referencing it with a foreign key, AirCraft must be defined before Flight.

<?xml version='1.0'?>
<Database>
        <Table Name='AirCraft'>
            <Column Name='id' TypeId='-5' TypeName='BIGINT' PrimaryKey='true' Nullable='false'>
                <ReferencedBy Table='Flight' Column='aircraft_id' Constraint='fk_aircraft'/>
            </Column>
            <Column Name='name' TypeId='12' TypeName='VARCHAR' MaxLength='50'/>
            <Column Name='description' TypeId='12' TypeName='VARCHAR' MaxLength='250'/>
        </Table>

        <Table Name='Flight' >
            <Column Name='id' TypeId='-5' TypeName='BIGINT' PrimaryKey='true' Nullable='false' />
            <Column Name='number' TypeId='12' TypeName='VARCHAR' MaxLength='50'/>
            <Column Name='from' TypeId='12' TypeName='VARCHAR' MaxLength='150'/>
            <Column Name='to' TypeId='12' TypeName='VARCHAR' MaxLength='150'/>
            <Column Name='aircraft_id' TypeId='-5' TypeName='BIGINT'>
                 <ReferenceTo Table='AirCraft' Column='id'  Constraint='fk_aircraft'/>
            </Column>
            <Column Name='departs' TypeId='93' TypeName='TIMESTAMP'/>
            <Column Name='arrives' TypeId='93' TypeName='TIMESTAMP'/>
        </Table>

</Database>

The TypeId attribute is mandatory and must be appropriate for your data values. You find a list of possible numeric values in the JDK reference of class java.sql.Types, or in DialectConfiguration.dtd. The TypeName attribute need not match the type in your database exactly; it is optional and can force a specific product data type. All names will be converted to the database product's naming conventions. This influences upper/lower case and the length of identifiers. If the database supports mixed case, the names remain as they are. If a table name is too long for the product, it will be truncated.

Every association is given bidirectionally, by defining ReferencedBy in the referenced table and ReferenceTo in the table holding the foreign key. Some databases do not support associations across schemas, but it is possible to define them here.

And this is the Java code that establishes the MetaData within the database:
String sourceCatalog = null;    // assume the catalog is not defined in XML
String sourceSchema = null;    // assume the schema is not defined in XML
String targetCatalog = "users";    // assume database has such a catalog
String targetSchema = "test";    // assume database lets create a schema or has such a schema
ImportConfiguration importConfig = new ImportConfiguration();
importConfig.getDbIdentifierMapper().getCatalogMapper().setMapping(sourceCatalog, targetCatalog);
importConfig.getDbIdentifierMapper().getSchemaMapper().setMapping(sourceSchema, targetSchema);
String fileName = "MyMetadata.xml";
SqlBridge sqlBridge = new SqlBridge(url, driverClass, user, password);
try    {
    new XmlToMetaData(fileName, sqlBridge, importConfig);
}
finally    {
    sqlBridge.close();
}

Record editing with table design (using XML)

Text and numbers

You should mix table metadata with instance data (records), using DatabaseInstanceData.dtd. Doing so, you will not need to define MetaData separately. The Table element is the same as in DatabaseMetaData.dtd.
<?xml version='1.0'?>
<DatabaseData>
    <TableData>
        <Table Name='AirCraft'>
            <Column Name='id' TypeId='-5' TypeName='BIGINT' PrimaryKey='true' Nullable='false'>
                <ReferencedBy Table='Flight' Column='aircraft_id' Constraint='fk_aircraft'/>
            </Column>
            <Column Name='name' TypeId='12' TypeName='VARCHAR' MaxLength='50'/>
            <Column Name='description' TypeId='12' TypeName='VARCHAR' MaxLength='250'/>
        </Table>
        <Rec>
            <Nv Name='id'>1</Nv>
            <Nv Name='name'>DC-10-A</Nv>
            <Nv Name='description'>DC-10 Passenger Plane</Nv>
        </Rec>
        <Rec>
            <Nv Name='id'>2</Nv>
            <Nv Name='name'>DC-10-B</Nv>
            <Nv Name='description'>DC-10 Passenger Plane</Nv>
        </Rec>

    </TableData>

</DatabaseData>

Rec stands for record, Nv for named value (these abbreviations save a lot of space). Natural foreign keys are named Nfk, their named values Fnv (foreign named value).

Numbers must be written with "." as decimal point; national symbols are not supported. Double.valueOf() is called to decode decimals from XML text, Double.toString() to encode them. See the AbstractNamedValue overrides of convertStringToObject() and convertObjectToString(), and their base implementations in AbstractReflectionXmlElement, to learn more about value conversion.

If the tables already exist, it is possible to write the records as follows. But keep in mind that you then depend on a database! It is better to keep metadata and instance data in one place. Furthermore it is not possible to construct a regular TableData object from such an XML file (without importing the data into a database), as the record values do not know their data types and can not be converted to Java objects.
<!--
     Possible, but strongly discouraged, as record values can only be converted
     to Java objects when an import database prepares their data types!
-->
<DatabaseData>
    <TableData>
        <Table Name='AirCraft' />
        <Rec>
            <Nv Name='id'>1</Nv>
            <Nv Name='name'>DC-10-A</Nv>
            <Nv Name='description'>DC-10 Passenger Plane</Nv>
        </Rec>
    </TableData>
</DatabaseData>

Binary content

When you need to put binary content into the database, you need a BINARY, VARBINARY, LONGVARBINARY or BLOB column. You can write the data for that column into the XML file using Base-64 encoding. JPumpDX provides a tool to convert a binary file into a Base-64 encoded file. Take the contents of the converted Base-64 file and paste it into the XML tag. Take care not to add any spaces or newlines (except the normal closing newline of Base-64).

The syntax is "convert from - to" (text to binary):
java fri.util.database.tools.Base64Converter -txt base64file -bin binaryfile
or (binary to text):
java fri.util.database.tools.Base64Converter -bin binaryfile -txt base64file

This tool will never overwrite an existing file; you have to remove it first.
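On a modern JDK (8 or later) you can alternatively do the conversion with the standard java.util.Base64 class; a minimal sketch, file names being placeholders:

import java.nio.file.*;
import java.util.Base64;

// Minimal alternative to the bundled Base64Converter tool (requires JDK 8+).
public class Base64Sketch {
    public static void main(String[] args) throws Exception {
        // binary to text: the result can be pasted into the Nv tag
        byte[] binary = Files.readAllBytes(Paths.get("image.jpg"));
        Files.write(Paths.get("image.b64"),
                Base64.getEncoder().encodeToString(binary).getBytes("US-ASCII"));

        // text to binary
        String encoded = new String(Files.readAllBytes(Paths.get("image.b64")), "US-ASCII");
        Files.write(Paths.get("image-copy.jpg"), Base64.getDecoder().decode(encoded));
    }
}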

After pasting the contents of the text file into XML, your data will look similar to this:

<Nv Name='image'>jnicdrhSlejNde1KVllnsnT4s+M7TPE5h9RsmjlS0WeMoUzjfizhkUlwtYf94Z8/2WtH9l80klth
/J3+V+0SdB7q0f4SSWkumEBjlC/7RnzSSRHoT4Kb7J3sq0H2ZSSRLo4/klx/7tH/ANg/ZSpJIArz
/EE1v2aSSaJlxDI+ZViPkEkk5DiMPMqF6SSS6N8KbPtJv/c/gKRv23ySSQxR4Wj8bPmndEklK6Ni
HJa3CPgd7pJLm8z/AFGuD9kXaT/pp91yTeSSS8zB9OjN0//Z</Nv>

Java code

Following is Java code to import manually written XML data into a database.
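A minimal sketch, analogous to the "XML to database" example later in this manual (schema and file names are placeholders):

ImportConfiguration importConfig = new ImportConfiguration();
// map the schema name used in the XML file to an existing target schema
importConfig.getDbIdentifierMapper().getSchemaMapper().setMapping("TEST", "TEST");
SqlBridge sqlBridge = SqlBridgeFactory.newInstance(new File("myConnection.xml"));
importConfig.setPrimaryKeyMapper(sqlBridge.createPrimaryKeyMapper());    // optional: primary key mapping
String fileName = "importdata.xml";
try    {
    new XmlToInstanceData(fileName, sqlBridge, importConfig);
}
finally    {
    sqlBridge.close();
}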

Table and record migration

You can use the Migration command line utility to migrate meta data and records: see Quick Start - Tool Syntax.

The basic migration behaviour of JPumpDX is as follows:
Currently there is no automatic DELETE service implemented. JPumpDX will never delete a record. Use the SqlBridge API for that.

Most of the responsibilities are implemented in the TableExporter and TableImporter classes. TableImporter receives any Table via its setTable(Table) call, and any Record for that Table via addRecord(Record). The same holds for TableExporter with getTable() and getRecord(). Please read the source to learn more about the migration logic.
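Schematically, a hypothetical pump loop using only these four methods; how the objects are constructed, and whether getTable()/getRecord() really signal the end with null, are assumptions here:

// Hypothetical: pumps all tables and records from an exporter to an importer.
static void pump(TableExporter exporter, TableImporter importer) throws Exception {
    Table table;
    while ((table = exporter.getTable()) != null) {
        importer.setTable(table);    // announce the table to the importer
        Record record;
        while ((record = exporter.getRecord()) != null)
            importer.addRecord(record);    // insert or update the record
    }
}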

Java sample code

Database to database:
    SqlBridge sourceSqlBridge = SqlBridgeFactory.newInstance(new File("sourceConnection.xml"));
    SqlBridge targetSqlBridge = SqlBridgeFactory.newInstance(new File("targetConnection.xml"));
    String sourceCatalog = "myCatalog";    // assume such a catalog exists in source DB
    String sourceSchema = "mySchema";    // assume such a schema exists in source DB
    ExportConfiguration exportConfig = new ExportConfiguration();
    exportConfig.getDbIdentifierMapper().getCatalogFilter().include(sourceCatalog);    // export only "myCatalog"
    exportConfig.getDbIdentifierMapper().getSchemaFilter().include(sourceSchema);    // export only "mySchema"
    ImportConfiguration importConfig = new ImportConfiguration();
    importConfig.getDbIdentifierMapper().getCatalogMapper().setMapping(sourceCatalog, null);    // assume target DB does not support catalogs
    importConfig.getDbIdentifierMapper().getSchemaMapper().setMapping(sourceSchema, "test");    // assume target DB has such a target schema
    importConfig.setPrimaryKeyMapper(targetSqlBridge.createPrimaryKeyMapper());    // perform primary key mapping
    try    {
        new InstanceDataToInstanceData(sourceSqlBridge, targetSqlBridge, exportConfig, importConfig);
    }
    catch (Exception e)    {
        targetSqlBridge.rollback();
        throw e;
    }
    finally    {
        sourceSqlBridge.close();
        targetSqlBridge.close();    // includes commit
    }
Database to XML:
    SqlBridge sqlBridge = SqlBridgeFactory.newInstance(new File("myConnection.xml"));
    String fileName = "myInstancedata.xml";
    String catalog = "myCatalog";
    String schema = "mySchema";
    ExportConfiguration exportConfig = new ExportConfiguration();
    exportConfig.getDbIdentifierMapper().getCatalogFilter().include(catalog);    // export only "myCatalog"
    exportConfig.getDbIdentifierMapper().getSchemaFilter().include(schema);    // export only "mySchema"
    exportConfig.getDbIdentifierMapper().getCatalogMapper().setMapping(catalog, null);    // no catalog will appear in XML
    exportConfig.getDbIdentifierMapper().getSchemaMapper().setMapping(schema, null);    // no schema will appear in XML
    try    {
        new InstanceDataToXml(new FileOutputStream(fileName), sqlBridge, exportConfig);
    }
    finally    {
        sqlBridge.close();
    }

XML to database:
String sourceCatalog = null;    // assume the catalog is not defined in XML
String sourceSchema = null;    // assume the schema is not defined in XML
ImportConfiguration importConfig = new ImportConfiguration();
importConfig.getDbIdentifierMapper().getCatalogMapper().setMapping(sourceCatalog, "users");    // assume DB has such a catalog
importConfig.getDbIdentifierMapper().getSchemaMapper().setMapping(sourceSchema, "test");    // DB lets create schemas or has such a schema
SqlBridge sqlBridge = new SqlBridge(url, driverClass, user, password);
importConfig.setPrimaryKeyMapper(sqlBridge.createPrimaryKeyMapper());    // perform primary key mapping
String fileName = "MyInstancedata.xml";
try    {
    new XmlToInstanceData(fileName, sqlBridge, importConfig);
}
finally    {
    sqlBridge.close();
}

For porting meta data, analogous classes exist (e.g. XmlToMetaData, shown above). Arguments and configurations are the same as with instance data. Mind that you can do many more things with ExportConfiguration and ImportConfiguration.


Establishing foreign keys

Relational migration is complicated, primarily because of table associations through foreign keys. There are a lot of cases to consider.
It would take a lot of space to discuss all these topics and their solutions in JPumpDX. Please read the source of TableImporter and TableExporter to learn more about this.

Configuration

Like the data format, all configuration happens via XML. You find the needed DTDs in the dtds subdirectory of the distribution; they should be well documented and easy to understand. In Java you use the ExportConfiguration/ExportGraphConfiguration and ImportConfiguration classes, and their inner classes.

Migration

When porting data from one database to another, or from a database to XML or vice versa, a lot of settings can be used to customize the migration behaviour. The migration will never delete a record, so "move" is not supported, only "copy". Nevertheless you can use the SqlBridge to delete records programmatically. You can even drop tables (or schemas, when the database supports this). Or you might use the method TableExporter.deleteExportedRecords() (experts only, needs programming/overriding).

When copying data, you may want to rename or filter (include/exclude) catalogs, schemas, tables and columns.
Both import and export have these facilities, provided by DbIdentifierMapper, which is available from ImportConfiguration or ExportConfiguration (both extending MigrationConfiguration). The filter options work exclusively: when includes are defined, no excludes are possible. Wildcards are not supported at this time.

Further, you may want to append manually written unique constraints, indexes and foreign keys when the JDBC driver does not report them correctly.
Both import and export have these facilities, provided by UniqueConstraintAppender, IndexAppender and ForeignKeyAppender.

Import

Look at ImportConfiguration.dtd. You install the import configuration on construction of one of the classes XmlToInstanceData or InstanceDataToInstanceData. Custom primary key mappers can be installed in ImportConfiguration.

On import, you may want to control record existence checks, exclude tables from primary key mapping, and control inserts when a foreign key is not evaluable.
This is done by using RecordExistenceCheck, NoPrimaryKeyMappingTable and NoInsertWhenForeignKeyNotEvaluable.

The primary key mapping is done by one of the three default key generator implementations of the PrimaryKeyMapper interface, or you can install your own implementation via ImportConfiguration (done by coding Java; this can not be configured using XML).

The SqlBridge provides one of the default implementations according to the XML settings of the dialect (SqlBridge.createPrimaryKeyMapper()). But the implementation has to be set into the MigrationConfiguration programmatically, else no mapping will happen. The Migration tool does this by default, so primary key mapping always happens when using that tool.

Furthermore it is possible to customize the identifiers and design of the SequenceTable by overriding TableSequencePrimaryKeyMapper (e.g. when you must be compatible with some JDO mapper).

Export

Look at ExportConfiguration.dtd, or ExportGraphConfiguration.dtd. You install the export configuration on construction of one of the classes InstanceDataToXml or InstanceDataToInstanceData.

On export, you may want to restrict the exported records per table, and to configure natural foreign keys for tables that are not exported themselves.
This is done using NaturalKeyReferencedTable, ReferencedColumnsCsvList and TableCondition. Mind that there is no SQL contained in TableCondition; you have to mask XML meta characters. You can use key-value comparison operators (such as EQUAL or LIKE), which can be logically associated (AND, OR).

Additionally, you might like to export a graph of related records; you can do this using ExportGraphConfiguration. This is nearly the same as ExportConfiguration, except for the condition flags TopDownBindingCsvColumns (binding records that an exported one references; there can be as many as foreign keys exist) and BottomUpBindingCsvColumns (binding records that reference an exported one; there can be as many as there are foreign keys referencing that table). These are like "JOIN" conditions. You find a sample XML configuration in samples/configs/FlightGraphConfiguration.xml.

Default "top down" binding is set to all foreign keys ("*"), which means that every referenced record will be exported (when its table is among exported ones). No bottom-up binding will happen by default. This favours hierarchies of records. When a record references a record in another table, and that record is is excluded by the Condition on that table, it will not be exported when that Condition has an AND operator, and it will be exported when it has an OR operator, or none (default OR is assumed). Then the foreign key will be set to null on import, except when you configure DoInsertWhenForeignKeyNotEvaluable to false on import (then it will not be imported, which again can cause dependent records not to be imported).

When binding top-down, you can restrict the hierarchy to certain foreign keys by using TopDownBindingCsvColumns, where you can name the foreign key attributes that should perform binding, e.g. for table "Ticket":
TopDownBindingCsvColumns="customer_id, flight_id"
The same holds for BottomUpBindingCsvColumns, except that you must name the foreign key attributes of the foreign table (as several tables could point to the primary key of one table). You do this by naming these foreign keys with their "qualified" name, which means "catalog.schema.table.column", e.g. for table "Customer":
BottomUpBindingCsvColumns="APP.Ticket.customer_id, APP.WaitingList.customer_id"
(names separated by dots; leave out catalog and schema when the database does not support them). You can use the wildcard "*" for all references.
The strategy the export configuration then performs is propagation. The exported tables are looped until no more propagation happens. Within that loop, the records of any table that has a condition are read, and their foreign keys (or primary key, for bottom-up binding) are propagated to the peer tables, which means these get a condition. Recursively those tables then read their records and propagate their configured bindings to other tables. When key values are propagated that already have been read, no new propagation takes place. A table that gets no condition will not be in the export data.
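To illustrate, a hypothetical configuration fragment follows; the element nesting is guessed, only the attribute names are taken from the text above, so consult ExportGraphConfiguration.dtd and samples/configs/FlightGraphConfiguration.xml for the real structure:

<!-- Illustrative sketch only, element structure not verified against the DTD. -->
<ExportGraphConfiguration>
    <Table Name='Ticket' TopDownBindingCsvColumns='customer_id, flight_id' />
    <Table Name='Customer' BottomUpBindingCsvColumns='APP.Ticket.customer_id, APP.WaitingList.customer_id' />
</ExportGraphConfiguration>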

Dialect

To provide connectivity to every database without programming effort, the SQL dialect implementation is configurable via XML. Look at the well-documented DTD DialectConfiguration.dtd. Here is the Java code to register a dialect with a new XML configuration:
String dialectName = "sql_server";    // must be contained case-insensitive in product name
String xmlConfigFile = "mydialects/sql_server.xml";    // your dialect configuration
ConfiguredDialectFactory.singleton.registerDialect(new ConfiguredDialect(dialectName, xmlConfigFile));

When there is a problem that is not covered by the XML configuration, you can extend ConfiguredDialect (or Dialect) and register the new dialect at the factory:
Dialect myDialect = new ConfiguredDialect("mysql", "mydialects/mysql.xml")    {
    // do some overrides
};
ConfiguredDialectFactory.singleton.registerDialect(myDialect);
Every dialect must have a no-argument constructor. Once a dialect is registered, the SqlBridge can instantiate it through the ConfiguredDialectFactory. The factory finds it by the product name contained in the MetaData passed by the SqlBridge, and instantiates it by calling the no-arg constructor. The registered dialect instance itself will never be used; it is only a "template".


The XML configuration mainly exists to fix bugs in JDBC MetaData. You need not write anything that is already correct in MetaData; if all JDBC drivers delivered correct metadata, this configuration would be dispensable. The following things can be customized with an XML dialect configuration (see DialectConfiguration.dtd in the dtds subdirectory for details):
To get the flavor, here is the DTD without comments; see the dtds subdirectory for the full version. Every element and attribute is documented there.
<!ELEMENT DialectConfiguration (Supports?, Lengths?, TypeNameToIdMap?, DataDefinition?, DataManipulation?)>
    <!ATTLIST DialectConfiguration
        ProductName CDATA #IMPLIED
        ProductVersion CDATA #IMPLIED
    >
    <!ELEMENT Supports (Schemas?, Catalogs?)>
        <!ATTLIST Supports
            Sequences (oldStyle | newStyle | impliedPerTable | none) 'newStyle'
            AutoIncrementColumns (true | false) 'false'
            IdentifierQuote CDATA #IMPLIED
            IdentifierCase (mixed | lower | upper) 'upper'
            PreparedStatements (true | false) 'true'
        >
        <!ELEMENT Catalogs EMPTY>
            <!ATTLIST Catalogs
                Supports (true | false) 'true'
                InTableDefiniton (true | false) 'true'
                InConstraintDefiniton (true | false) 'true'
                InIndexDefiniton (true | false) 'true'
                InSequenceDefiniton (true | false) 'true'
                InDataManipulation (true | false) 'true'
                Separator CDATA #IMPLIED
                SystemCatalogs CDATA #IMPLIED
            >
        <!ELEMENT Schemas EMPTY>
            <!ATTLIST Schemas
                Supports (true | false) 'true'
                InTableDefiniton (true | false) 'true'
                InConstraintDefiniton (true | false) 'true'
                InIndexDefiniton (true | false) 'true'
                InSequenceDefiniton (true | false) 'true'
                InDataManipulation (true | false) 'true'
                Separator CDATA #IMPLIED
                SystemSchemas CDATA #IMPLIED
            >
    <!ELEMENT Lengths EMPTY>
        <!ATTLIST Lengths
            MaxCatalogName CDATA #IMPLIED
            MaxSchemaName CDATA #IMPLIED
            MaxTableName CDATA #IMPLIED
            MaxColumnName CDATA #IMPLIED
        >
    <!ELEMENT TypeNameToIdMap (Type*)>
        <!ELEMENT Type EMPTY>
            <!ATTLIST Type
                TypeName CDATA #REQUIRED
                TypeId CDATA #IMPLIED
                InsertValueTypeId CDATA #IMPLIED
                Precision CDATA #IMPLIED
                PrecisionRadix CDATA #IMPLIED
                MinScale CDATA #IMPLIED
                MaxScale CDATA #IMPLIED
                MustHaveLength (true | false) 'false'
                CanHaveLength (true | false) 'true'
                CanHavePrecision (true | false) 'true'
                CanHaveScale (true | false) 'true'
                CheckBinCharMaximum (true | false) 'true'
                CheckNumericMaximum (true | false) 'true'
                ColumnDefinitionMask CDATA #IMPLIED
                AppendCreateParamsLiterally (true | false) 'false'
                RetrieveProxyMethod CDATA #IMPLIED
            >
    <!ELEMENT DataDefinition EMPTY>
        <!ATTLIST DataDefinition
            CreateCatalogStatement CDATA #IMPLIED
            CreateSchemaStatement CDATA #IMPLIED
            AddToCreateSchema CDATA #IMPLIED
            AddToCreateTable CDATA #IMPLIED
            AddToCreateIndex CDATA #IMPLIED
            DropCatalogStatement CDATA #IMPLIED
            DropSchemaStatement CDATA #IMPLIED
            AddToDropSchema CDATA #IMPLIED
            AddToDropTable CDATA #IMPLIED
            AddToDropConstraint CDATA #IMPLIED
            AddToDropIndex CDATA #IMPLIED
            AddToDropSequence CDATA #IMPLIED
            AutoIncrementPrimaryKeyTypeName CDATA #IMPLIED
            AddToAutoIncrementPrimaryKeyDefinition CDATA #IMPLIED
            CreateIndexesOnAllKeyColumns  (true | false) 'false'
            NeedsTableDotIndexOnDropIndex (true | false) 'false'
            ShouldDropForeignKeyConstraintsBeforeDropTable (true | false) 'true'
            ShouldDropIndexesBeforeDropTable (true | false) 'true'
            RequiredNumericPrimaryKeyType CDATA #IMPLIED
            SequenceNamesQuery CDATA #IMPLIED
            CreateSequenceStatement CDATA #IMPLIED
            MustCommitCreateSequence (true | false) 'false'
            DropSequenceStatement CDATA #IMPLIED
        >
    <!ELEMENT DataManipulation EMPTY>
        <!ATTLIST DataManipulation
            NextSequenceValueStatement CDATA #IMPLIED
            NextAutoIncrementedValueStatement CDATA #IMPLIED
            CanInsertNullOnAutoIncrementColumn (true | false) 'true'
            WriteNullAs_IS_NULL_InConditions (true | false) 'true'
            MustPrefetchBlob (true | false) 'false'
            TrimChar (true | false) 'false'
        >
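To make this concrete, here is a small example instance that validates against the DTD above; all attribute values are made up for illustration and do not describe any particular product:

<?xml version='1.0'?>
<DialectConfiguration ProductName='SomeDB' ProductVersion='1.0'>
    <Supports Sequences='none' AutoIncrementColumns='true' IdentifierCase='upper'>
        <Schemas Supports='true' SystemSchemas='INFORMATION_SCHEMA' />
        <Catalogs Supports='false' />
    </Supports>
    <Lengths MaxTableName='30' MaxColumnName='30' />
    <TypeNameToIdMap>
        <Type TypeName='BOOLEAN' TypeId='-7' />
    </TypeNameToIdMap>
    <DataManipulation TrimChar='true' />
</DialectConfiguration>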

Developing an XML dialect configuration
DataTypeChecker is a tool made to check the basic CREATE TABLE and INSERT behaviour of the database and the XML dialect configuration. If all tests succeed, a database migration will likely work, except when there are problems with autoincrement key columns or with inserting null values.

The checker creates a table with one column for every JDBC data type, checks whether the type (or a compatible one) was created, inserts a value, re-reads the value and checks for equality. Do not take the XML configuration proposal at the end too seriously. For the syntax see above. You must provide a ConnectionParameters.xml file and the dialect file as arguments. When you do not pass a dialect, the generic Dialect is used (which will likely fail).

First the checker runs through the meta data and checks whether the database type names are plausible for the represented JDBC type. It outputs the types and reports implausible names, mostly occurring at BOOLEAN, as some databases provide only TINYINT for that type. Then it starts to create test tables. Following are the most frequent type check errors.

When all tests with DataTypeChecker succeed, you must find out whether catalog/schema names are supported in CONSTRAINT and INDEX definitions. Work with the XML Supports - Schemas - Catalogs elements, using the attributes InConstraintDefiniton and InIndexDefiniton. Some databases show strange error messages on such mistakes. Then create a test schema in your database and try to import one of the XML files in the samples directory. Following are some import error examples.




Software Design

The first version has about 15000 lines of Java and XML code, 67 Java source files and 20 XML and DTD files. There are no other open source components integrated; I decided to implement a small framework for Java-XML binding. As there is no open source SQL dialect implementation, I tried to design a new kind of SQL dialect that is configurable via XML. As things go, this XML-based configuration is the union of all workarounds that were necessary during testing. JPumpDX comes with several JUnit test cases that show the usage of the API. I spent much more time on unit testing and bugfixing than on writing source code.

Here are the main "components":
Here are the most important classes and their responsibilities. The classes probably needed for programming are marked bold.

Java-XML binding

It has no dependencies on the database package. It works with SAX to avoid big DOM models in memory.
The Java-XML binding is based on reflection. There is a factory to get a Java object hierarchy from an XML file or URL, and to store a Java object hierarchy to an XML file.
The package is
fri.util.xml.jaxb
The factory is
fri.util.xml.jaxb.JaxbObjectFactory
// load a Java object from an XML file
ExportConfiguration exportConfig = (ExportConfiguration)
    JaxbObjectFactory.getFileInstance("exportConf.xml", ExportConfiguration.class);

// do some programmatical changes
...

// save the Java object to an XML file
JaxbObjectFactory.putFileInstance(exportConfig, "exportConf.xml");

Java classes that are XML loadable and saveable must extend the AbstractReflectionXmlElement base class, or implement one or both of the XmlLoadable and XmlSaveable interfaces.

This mini-JAXB is used to load handcoded Java classes that simply reflect parameters and do not contain any logic (except returning null when an impossible element index is requested). These classes could be generated by a source generator (not yet implemented).
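For illustration, such a parameter-holder might look like the following sketch; the exact binding convention (bean properties matching XML attribute names) is an assumption here, the shipped classes like ConnectionParameters are the authoritative examples:

// Hypothetical parameter-holder in the style described above:
// it only reflects XML attributes as bean properties and contains no logic.
public class MyParameters extends AbstractReflectionXmlElement {
    private String url;
    private String user;

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }
}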

XML configurable SQL dialect

The XML configurable dialect is in package
fri.util.database.dialect.*
The ConfiguredDialect is configured by DialectConfiguration and extends the generic Dialect implementation in
fri.util.database.Dialect
The Dialect class implements generic SQL behaviour and exposes a lot of protected methods to subclasses.
fri.util.database.dialect.ConfiguredDialect
overrides the protected methods to control behaviour through its XML configuration parameters. This class can be extended again to override methods for solving more complicated problems. Such subclasses must provide a no-argument constructor and pass a dialect name to the superclass. The ConfiguredDialectFactory uses every registered dialect as a template and constructs new instances from its class, using getClass().newInstance() (which calls the no-arg constructor), passing the MetaData object after construction.

All ready-made dialects are in this package:
fri/util/database/dialect/*.xml
Not all dialect XML files have been tested yet. See the list of tested databases.
An XML configuration can be loaded from some File or URL. Look at the constructors of ConfiguredDialect.

The SQL bridge

The SqlBridge class encapsulates the database connection and the dialect. It provides methods to create and drop tables, constraints, indexes and sequences, and it provides select, insert, update and delete of records. It does not generate any SQL; that is the responsibility of the dialect. It can work on only one table; no joins are possible (use RecordGraph for that purpose). An SqlBridge establishes a JDBC Connection on construction, and closes it on close() or on garbage collection via finalize().
fri.util.database.SqlBridge
To express a WHERE clause, the class QueryFilter is used. This abstracts away any SQL and uses a QueryWriter class to generate it (which is out of the control of the dialect, as it uses only the generic keywords AND, OR, IN, and parentheses). Several QueryFilter instances can be logically associated using QueryFilterGroup. You could use these filters for JDO or any medium that supports a query language, provided a QueryWriter can be written for it. The QueryFilter classes are in
fri.util.database.queryfilter.*
fri.util.database.queryfilter.jdbc.PreparedStatementQueryWriter
To manage and cache prepared statements, the package
fri.util.database.statement.*
contains classes for prepared and literal statements. Every Dialect has its own statement cache instance. The literal statements do not support LOBs and are not very well tested, as most databases provide prepared statements.

Following is a query specification using the abstraction with SqlBridge and PreparedStatementQueryWriter.
QueryFilterGroup filter = new QueryFilterGroup(new QueryFilter []    {
    new QueryFilter("SURNAME", QueryFilter.LIKE, "Goslin%"),
    new QueryFilterGroup(QueryFilter.AND, new QueryFilter [] {
         new QueryFilter(                "DEVGROUP", QueryFilter.EQUAL, new Character('J')),
         new QueryFilter(QueryFilter.OR, "DEVGROUP", QueryFilter.EQUAL, new Character('C')),
    }),
    new QueryFilter(QueryFilter.AND_NOT, "JOB", QueryFilter.EQUAL, null)
});

sqlBridge.selectFromTable(table, filter);


Data type mapping

The DataTypeMapper is part of the Dialect.
fri.util.database.DataTypeMapper
Data types are the biggest problem when migrating data from one database to another. For example, the REAL, FLOAT and DOUBLE types have different precisions in almost every database. Some databases provide only seconds precision in TIMESTAMP. VARCHAR length is restricted differently in every database, and sometimes not at all. Data types require different arguments in column definitions. Sometimes a CHAR column without precision is only one character long (which is definitely not what we want). Most CHAR implementations pad with spaces when defined with a length. Etc., etc.

JPumpDX just checks precision on character and binary types. For numbers it relies on JDBC standards. Errors can happen when some data require a high precision. When a binary or character target type is too narrow for the source type, JPumpDX scales up to the next bigger one, e.g. from LONGVARCHAR to CLOB.
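The idea of scaling up, illustrated as a generic sketch (this is not JPumpDX's actual code; see the method DataTypeMapper.findBestType() mentioned below for the real algorithm):

// Generic illustration: when the target's maximum length for a character
// type is too small, move to the next "wider" JDBC type.
static int scaleUpCharType(int jdbcType, long requiredLength, long targetMaximum) {
    if (requiredLength <= targetMaximum)
        return jdbcType;    // fits, keep the type
    switch (jdbcType) {
        case java.sql.Types.CHAR:        return java.sql.Types.VARCHAR;
        case java.sql.Types.VARCHAR:     return java.sql.Types.LONGVARCHAR;
        case java.sql.Types.LONGVARCHAR: return java.sql.Types.CLOB;
        default:                         return jdbcType;    // no wider type known
    }
}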

For the following data types a length argument is assumed to be mandatory (implemented in DataTypeMapper, can be configured using XML):
For the following number types, precision and scale arguments are assumed to be optional:
Look at DialectConfiguration.dtd for customizing data types.
The algorithm to map data types from one database product to another is implemented in the method DataTypeMapper.findBestType().

Primary key generation and mapping

The primary key mapping classes are
fri.util.database.migration.PrimaryKeyMapper (interface)
fri.util.database.migration.NumericPrimaryKeyMapper (all style SEQUENCEs)
fri.util.database.migration.AutoIncrementPrimaryKeyMapper (for IDENTITY columns, needs a statement to retrieve the generated key after insert)
fri.util.database.migration.TableSequencePrimaryKeyMapper (workaround for databases that do not support SEQUENCES)
The problem solved by primary key mapping is that associations between records are made through keys that are unique only per database instance, or even only per table. That means when importing records from another database you have to change these values, else they will conflict with existing ones. And even when the target database holds no data yet, you will likely want to create new records after the import: where should their primary key values come from? You need a key generator that knows about the already created records per table.

Primary key generation is mostly done by three methods:
Primary key mapping then means generating a new value, putting it into the record (about to be inserted), and storing it into a Map with key=oldValue and value=newValue. Or, for autoincrement mapping, providing null for the key column, inserting the record, and retrieving the generated value by a database-specific SQL statement. Foreign keys can then be mapped by looking up the newValue by their oldValue in the Map. Composed keys are not supported.
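Schematically, a generic sketch of this idea (not JPumpDX's actual classes):

import java.util.HashMap;
import java.util.Map;

// Every inserted record gets a newly generated key; oldKey -> newKey is
// remembered so that foreign key values can be resolved afterwards.
public class KeyMappingSketch {
    private final Map keyMap = new HashMap();    // one mapper per table
    private long nextKey = 1;    // stands in for a SEQUENCE or IDENTITY value

    public Object mapPrimaryKey(Object oldKey) {
        Object newKey = new Long(nextKey++);
        keyMap.put(oldKey, newKey);
        return newKey;    // put this into the record about to be inserted
    }

    public Object mapForeignKey(Object oldKey) {
        return keyMap.get(oldKey);    // null when the referenced key was never mapped
    }
}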

Primary key mapping can be switched on and off by configuration. When using one of the service classes like XmlToInstanceData, you have to set the PrimaryKeyMapper explicitly to turn on the mapping. You can use sqlBridge.createPrimaryKeyMapper() for that purpose, as SqlBridge knows from the XML configuration how the database supports key generation.

The class that caches and maps all key values is TableImporter; the Map object lives in the PrimaryKeyMapper implementation (an AbstractPrimaryKeyMapper subclass). So you should use the same TableImporter (or ImportConfiguration) instance for all imported data, else foreign keys will get no value or a wrong one.

For adapting the mapper classes to already existing key generators, or to an already existing virtual sequence table, extend NumericPrimaryKeyMapper or TableSequencePrimaryKeyMapper. You can install such a subclass by passing it to one of the Configuration classes: importConfig.setPrimaryKeyMapper(myMapper);

Source code hints

To make it easier to find the classes to override for certain purposes, here is a small HOWTO

How to switch off certain data types by XML configuration

A sample is in oracle.xml. Oracle provides LONG as a character datatype, but it allows only one LONG column per table. So it is better not to use it in a migration when you do not know how many LONGVARCHAR columns are in the source tables.

You can switch off the unwanted data type by setting its JDBC type to Types.OTHER (1111). The dialect will find the data type by its name ('LONG') and assign that other JDBC TypeId to it.
<DialectConfiguration>
    <TypeNameToIdMap>
        <Type TypeName='LONG' TypeId='1111' />
        <Type TypeName='LONG RAW' TypeId='1111' />
    </TypeNameToIdMap>
</DialectConfiguration>

How to use another Java object than the default on INSERT.

Every JDBC type has its default Java pendant. These defaults are hardcoded in the DbUtil class, following the Sun recommendations. Some database drivers do not accept these default types; e.g. FrontBase insists on getting java.lang.Integer as the value for a SMALLINT column, and throws an exception on getting the default java.lang.Short.

You must reassign the inserted JDBC type using the dialect configuration. This looks like the following.
<DialectConfiguration>
    <TypeNameToIdMap>
        <Type TypeName='TINYINT' InsertValueTypeId='4' /> <!-- must have Types.INTEGER (4) on INSERT -->
        <Type TypeName='SMALLINT' InsertValueTypeId='4' />
    </TypeNameToIdMap>
</DialectConfiguration>

How to exclude system catalogs or schemas by configuration

Most databases keep their meta data in system catalogs or schemas. When exporting a database, you do not want these to appear in the export data.

You can explicitly exclude them using ExportConfiguration/DbIdentifierMapper, or by telling the dialect their names:
<DialectConfiguration>
    <Supports>
        <Schemas SystemSchemas = "SYS_JDBC, SYS_INFO" />
    </Supports>
</DialectConfiguration>

How to dedicate a dialect configuration to one or more specific database versions

When the ConfiguredDialectFactory allocates a dialect, it matches all registered dialects against the MetaData it got, by calling Dialect.matchesProductName(String productName). When this matches, it calls Dialect.matchesProductVersion(String productVersion). Both the product name and the version come from JDBC DatabaseMetaData. When both calls return true, the dialect is instantiated using the no-arg constructor of its class.

The ConfiguredDialect class overrides these methods to ask the XML-driven dialect configuration for the values of the attributes ProductName and ProductVersion. These must match exactly. When these attributes are not configured, it delegates to super. Dialect implements a "lower-case contains" match against the product name; that means, for matching a product name like "Mckoi 1.0.3" the dialect's name must be something like "mckoi" (spaces are replaced by "_"). The Dialect version match always returns true (no version check).

So all you have to do is write the version into the XML dialect configuration. You can match several versions by concatenating them with commas:
<DialectConfiguration ProductVersion="1.2.1, 1.2.2, 1.3.0">
...
As you can not derive an XML file from another one, you have to copy the version-specific XML file from the unspecific one.
Do not forget to register a newly constructed ConfiguredDialect instance with the modified XML configuration at the ConfiguredDialectFactory.
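For example (the file name is a placeholder):

ConfiguredDialectFactory.singleton.registerDialect(
        new ConfiguredDialect("mckoi", "mydialects/mckoi-1.2.xml"));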


Please read the DialectConfiguration.dtd comments to learn more about dialect configuration!


How to add STORAGE clauses per table to CREATE TABLE

You can add the same storage clause to every CREATE TABLE / CREATE INDEX statement by using the XML attributes AddToCreateTable / AddToCreateIndex (which would be enough to just put tables and indexes into separate tablespaces), but you must write a Java ConfiguredDialect subclass to do this per table/index.

public class StorageClauseAppendingDialect extends ConfiguredDialect
{
    public StorageClauseAppendingDialect()    {
        super("oracle");
    }

    protected void appendToEndOfCreateTableStatement(StringBuffer sb, Table table, List unsupportedAttributes)    {
        if (table.getName().equals("CUSTOMER"))
            sb.append(" TABLESPACE TS_CUSTOMER_TABLES STORAGE (INITIAL 20 M PCTINCREASE 2 M");
    }
   
    protected void appendToEndOfIndexDefinition(StringBuffer sb, Table table, String indexName)    {
        if (indexName.equals("INDEX_CUSTOMER"))
            sb.append(" TABLESPACE TS_CUSTOMER_INDEXES STORAGE (INITIAL 5 M)");
    }
}

....
ConfiguredDialectFactory.singleton.registerDialect(new StorageClauseAppendingDialect());
SqlBridge sqlBridge = SqlBridgeFactory.newInstance(new File("oracleConnectionParameters.xml"));
....


How to implement a new String (XML) to Object (database) conversion (e.g. for Types.ARRAY, or some other currently unsupported JDBC type)

You can use the implementation of Base-64 encoding for binary data types as a sample. You find it in AbstractNamedValue.convertStringToObject(String value, Class targetClass) and convertObjectToString(Object value). To extend the actually used subclass NamedValue, you must override SqlBridge.createNamedValue(Table.Column column, Object value).

SqlBridge sqlBridge = new SqlBridge(...)    {
    protected NamedValue createNamedValue(Table.Column column, Object value)    {
        return new NamedValue(column, value)    {
            protected Object convertStringToObject(String value, Class targetClass)
                throws JaxbException
            {
                if (targetClass.equals(Object.class))    {    // value of NamedValue is arriving
                    try    {
                        switch (getType().intValue())    {
                            case Types.JAVA_OBJECT:
                                ....   // convert String to a Java object
                            default:
                                return super.convertStringToObject(value, targetClass);
                        }
                    }
                    catch (Exception e)    {
                        throw new JaxbException(e);
                    }
                }
                else    {    // name arriving
                    return super.convertStringToObject(value, targetClass);
                }
            }

            protected String convertObjectToString(Object value)
                throws JaxbException
            {
                try    {
                    if (value instanceof MyJavaObject)    {
                        ...    // convert the Java object to a String
                    }
                    else    {
                        return super.convertObjectToString(value);
                    }
                }
                catch (JaxbException e)    {
                    throw e;
                }
                catch (Exception e)    {
                    throw new JaxbException(e);
                }
            }
        };
    }
};

How to introduce another XML configuration attribute

Assuming you need another configuration parameter for your database migration, you have to do the following:




APPENDIX

List of tested databases

InstantDB Version 3.26, McKoi SQL Database 1.0.3, HSQL Database Engine 1.7.2, DB2/NT 07.02.0005, Oracle 9.2.0.1.0, Firebird 1.5 (WI-V6.3.1.4481), FrontBase 3.6.41, MySQL 4.0.12-standard, PointBase 4.4GA, DaffodilDB 3.4, PostgreSQL 7.2, Microsoft SQL Server 7.0.623.

Database Type Table

This table was created directly (literally, including all driver errors) from the JDBC drivers' DatabaseMetaData using XSLT. Each JDBC type code is followed by the native type names of every database, in the order of the header row; a dash means the database reports no type for that JDBC code. Data type arguments are written in italics.
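A raw listing like this can be produced from any JDBC connection. A minimal sketch (the connection URL and credentials are placeholders, and the XSLT rendering step of the original tooling is omitted):

import java.sql.*;

// Prints every native data type the driver reports, together with
// its java.sql.Types code, via DatabaseMetaData.getTypeInfo().
public class TypeInfoDump
{
    public static void main(String [] args) throws Exception    {
        Connection conn = DriverManager.getConnection("jdbc:yourdb:...", "user", "password");
        ResultSet rs = conn.getMetaData().getTypeInfo();
        while (rs.next())
            System.out.println(rs.getInt("DATA_TYPE") + ": " + rs.getString("TYPE_NAME"));
        rs.close();
        conn.close();
    }
}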

JDBC InstantDB Version 3.26 Mckoi SQL Database ( 1.0.3 ) 1.0.3 HSQL Database Engine 1.7.2 DB2/NT 07.02.0005 Oracle 9.2.0.1.0 Firebird WI-V6.3.1.4481 Firebird 1.5 FrontBase 3.6.41 MySQL 4.0.12-standard PointBase 4.4GA DaffodilDB 3.4 PostgreSQL 7.2 Microsoft SQL Server Product Version: 7.0.623
-6: TINYINT
BYTE
TINYINT
TINYINT
-
NUMBER (3)
-
TINYINT
BOOLEAN
TINYINT [(M)] [UNSIGNED] [ZEROFILL]
-
tinyint
byte
-
tinyint
tinyint identity
-6: TINYINT
4: INTEGER
INTEGER
INTEGER
INTEGER
INTeger
NUMBER (10)
INTEGER
INTEGER
INTEGER [(M)] [UNSIGNED] [ZEROFILL]
INT [(M)] [UNSIGNED] [ZEROFILL]
MEDIUMINT [(M)] [UNSIGNED] [ZEROFILL]
INTEGER
int
integer
int4
oid
int
int identity
4: INTEGER
-5: BIGINT
LONG
CURRENCY
BIGINT
BIGINT
BIGINT
NUMBER
BIGINT
LONGINT
BIGINT [(M)] [UNSIGNED] [ZEROFILL]
-
long
bigint
int8
- -5: BIGINT
8: DOUBLE
DOUBLE
DOUBLE
DOUBLE
FLOAT
-
DOUBLE PRECISION
DOUBLE PRECISION
DOUBLE [(M,D)] [ZEROFILL]
DOUBLE PRECISION [(M,D)] [ZEROFILL]
REAL [(M,D)] [ZEROFILL]
DOUBLE PRECISION
double precision
float8
money
- 8: DOUBLE
6: FLOAT
FLOAT
FLOAT
FLOAT
FLOAT
FLOAT
FLOAT
FLOAT [(precision)]
-
FLOAT Optional Precision
float
-
float
6: FLOAT
91: DATE
DATE
DATE
DATE
DATE
DATE
DATE
DATE
DATE
DATE
date
date
- 91: DATE
12: VARCHAR
CHAR length
VARCHAR
VARCHAR LENGTH
VARCHAR_IGNORECASE LENGTH
VARCHAR (length)
VARCHAR2
VARCHAR length
CHARACTER VARYING (length)
VARCHAR (M)
ENUM
SET
VARCHAR Length
character varying max length
char varying max length
varchar max length
name
text
varchar
varchar max length
12: VARCHAR
-4: LONGVARBINARY
BINARY
LONGVARBINARY
LONGVARBINARY
LONG VARCHAR FOR BIT DATA
LONG RAW
BLOB SUB_TYPE 0
-
LONG VARBINARY
MEDIUMBLOB
LONGBLOB
BLOB
TINYBLOB
-
long varbinary length
-
image
-4: LONGVARBINARY
-7: BIT -
BIT
BOOLEAN
- -
NUMBER (1)
-
BIT [(length)]
BIT
BOOL
-
boolean
bool
bit
-7: BIT
5: SMALLINT -
SMALLINT
SMALLINT
SMALLINT
NUMBER (5)
SMALLINT
SMALLINT
SMALLINT [(M)] [UNSIGNED] [ZEROFILL]
SMALLINT
smallint
int2
smallint
smallint identity
5: SMALLINT
7: REAL -
REAL
REAL
REAL
REAL
-
REAL
FLOAT [(M,D)] [ZEROFILL]
REAL
real
float4
real
7: REAL
2: NUMERIC -
NUMERIC
NUMERIC PRECISION,SCALE
NUMeric (precision,scale)
NUMBER
NUMERIC precision,scale
NUMERIC [ (precision[,scale]) ]
NUMERIC [(M[,D])] [ZEROFILL]
NUMERIC Optional Precision and Scale
numeric precision,scale
numeric
numeric precision,scale
numeric() identity precision
2: NUMERIC
3: DECIMAL -
DECIMAL
DECIMAL PRECISION,SCALE
DECimal (precision,scale)
-
DECIMAL precision,scale
DECIMAL [ (precision[,scale]) ]
DECIMAL [(M[,D])] [ZEROFILL]
DECIMAL Optional Precision and Scale
decimal precision,scale
dec precision,scale
-
decimal precision,scale
money
smallmoney
decimal() identity precision
3: DECIMAL
1: CHAR -
CHAR
CHAR LENGTH
CHARacter (length)
CHAR
CHAR length
CHARACTER [(length)]
CHAR (M)
CHARACTER Optional Length
character length
char length
char
bpchar
char length
1: CHAR
-1: LONGVARCHAR -
LONGVARCHAR
LONGVARCHAR
LONG VARCHAR
LONG
BLOB SUB_TYPE 1
-
LONG VARCHAR
MEDIUMTEXT
LONGTEXT
TEXT
TINYTEXT
-
long varchar length
-
text
-1: LONGVARCHAR
92: TIME -
TIME
TIME
TIME
DATE
TIME
TIME
TIME WITH TIME ZONE
TIME
TIME
time timeprecision
time
- 92: TIME
93: TIMESTAMP -
TIMESTAMP
TIMESTAMP
TIMESTAMP
TIMESTAMP
TIMESTAMP
TIMESTAMP
TIMESTAMP WITH TIME ZONE
DATETIME
TIMESTAMP [(M)]
TIMESTAMP
timeStamp timestampprecision
abstime
timestamp
timestamptz
datetime
smalldatetime
93: TIMESTAMP
-2: BINARY -
BINARY
BINARY
CHARacter () FOR BIT DATA (length) FOR BIT DATA
- - -
BINARY (M)
-
bit length
binary length
bytea
binary length
timestamp
-2: BINARY
-3: VARBINARY -
VARBINARY
VARBINARY
VARCHAR () FOR BIT DATA (length) FOR BIT DATA
RAW
-
BIT VARYING (length)
VARBINARY (M)
-
bit varying length
varbinary length
-
varbinary max length
-3: VARBINARY
2000: JAVA_OBJECT -
JAVA_OBJECT
- - - - - - -
long varbinary length
- - 2000: JAVA_OBJECT
16: BOOLEAN - -
BOOLEAN
- - - - -
BOOLEAN
boolean
- - 16: BOOLEAN
1111: OTHER - -
OTHER
- -
ARRAY
INTERVAL
- -
long varbinary length
int2vector
regproc
tid
xid
cid
oidvector
SET
pg_type
pg_attribute
pg_proc
pg_class
pg_shadow
pg_group
pg_database
smgr
point
lseg
path
box
polygon
line
_line
reltime
tinterval
unknown
circle
_circle
_money
macaddr
inet
cidr
_name
_int2vector
_regproc
_tid
_xid
_cid
_oidvector
_bpchar
_point
_lseg
_path
_box
_reltime
_tinterval
_polygon
aclitem
_aclitem
_macaddr
_inet
_cidr
_timestamptz
interval
_interval
timetz
_timetz
bit
_bit
varbit
_varbit
refcursor
pg_attrdef
pg_relcheck
pg_inherits
pg_index
pg_operator
pg_opclass
pg_am
pg_amop
pg_amproc
pg_language
pg_largeobject
pg_aggregate
pg_statistic
pg_rewrite
pg_trigger
pg_listener
pg_description
pg_toast_16384
pg_toast_16416
pg_toast_1255
pg_toast_16386
pg_toast_16410
pg_toast_16408
pg_user
pg_rules
pg_views
pg_tables
pg_indexes
pg_stats
pg_stat_all_tables
pg_stat_sys_tables
pg_stat_user_tables
pg_statio_all_tables
pg_statio_sys_tables
pg_statio_user_tables
pg_stat_all_indexes
pg_stat_sys_indexes
pg_stat_user_indexes
pg_statio_all_indexes
pg_statio_sys_indexes
pg_statio_user_indexes
pg_statio_all_sequences
pg_statio_sys_sequences
pg_statio_user_sequences
pg_stat_activity
pg_stat_database
meinetesttabelle
pg_toast_16557
- 1111: OTHER
2004: BLOB - - -
BLOB (length)
BLOB
BLOB SUB_TYPE <0
BLOB
-
BLOB Optional Large Object Length
blob max length
- - 2004: BLOB
2005: CLOB - - -
CLOB (length)
CLOB
-
CLOB
-
CLOB Optional Large Object Length
character large object max length
char large object max length
clob max length
- - 2005: CLOB
70: DATALINK - - -
DATALINK
- - - - - - - - 70: DATALINK
-104: - - - -
INTERVALDS
- - - - - - - -104:
-103: - - - -
INTERVALYM
- - - - - - - -103:
-102: - - - -
TIMESTAMP WITH LOCAL TIME ZONE
- - - - - - - -102:
-101: - - - -
TIMESTAMP WITH TIME ZONE
- - - - - - - -101:
2002: STRUCT - - - -
STRUCT
- - - - - - - 2002: STRUCT
2003: ARRAY - - - -
ARRAY
- - - - -
_bool
_bytea
_char
_int2
_int4
_text
_oid
_varchar
_int8
_float4
_float8
_abstime
_timestamp
_date
_time
_numeric
- 2003: ARRAY
2006: REF - - - -
REF
- - - - - - - 2006: REF
9: - - - - - - - -
BIGINT
- - - 9:
0: NULL - - - - - - - - -
null
- - 0: NULL
-11: - - - - - - - - - - -
uniqueidentifier
-11:
-10: - - - - - - - - - - -
ntext
-10:
-9: - - - - - - - - - - -
nvarchar max length
sysname
-9:
-8: - - - - - - - - - - -
nchar length
-8:
InstantDB Version 3.26 Mckoi SQL Database ( 1.0.3 ) 1.0.3 HSQL Database Engine 1.7.2 DB2/NT 07.02.0005 Oracle 9.2.0.1.0 Firebird WI-V6.3.1.4481 Firebird 1.5 FrontBase 3.6.41 MySQL 4.0.12-standard PointBase 4.4GA DaffodilDB 3.4 PostgreSQL 7.2 Microsoft SQL Server Product Version: 7.0.623





JDBC Meta Data Table

Each DatabaseMetaData property below is listed with its values for every database, in the order of the header row; the property name is repeated at the end of each block. A dash means the driver delivered no value for that property (typically the method does not exist in the driver's JDBC version).
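A listing like this can be gathered with a few lines of reflection over java.sql.DatabaseMetaData. A minimal sketch (URL and credentials are placeholders, the XML/XSLT rendering of the original tooling is omitted):

import java.lang.reflect.Method;
import java.sql.*;

// Dumps all argument-less DatabaseMetaData properties of a connection.
public class MetaDataDump
{
    public static void main(String [] args) throws Exception    {
        Connection conn = DriverManager.getConnection("jdbc:yourdb:...", "user", "password");
        DatabaseMetaData meta = conn.getMetaData();
        Method [] methods = DatabaseMetaData.class.getMethods();
        for (int i = 0; i < methods.length; i++)    {
            Method m = methods[i];
            // skip methods that take arguments or return whole result sets
            if (m.getParameterTypes().length > 0 || ResultSet.class.isAssignableFrom(m.getReturnType()))
                continue;
            try    {
                System.out.println(m.getName() + " = " + m.invoke(meta, new Object[0]));
            }
            catch (Exception e)    {    // driver does not implement this method
                System.out.println(m.getName() + " = -");
            }
        }
        conn.close();
    }
}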


db2 mysql hsql frontbase instantdb firebird postgres pointbase mssql oracle mckoi daffodil
allProceduresAreCallable
false
false
true
true
false
false
true
true
true
false
false
true
allProceduresAreCallable
allTablesAreSelectable
false
false
true
true
true
false
true
true
true
false
false
true
allTablesAreSelectable
dataDefinitionCausesTransactionCommit
false
true
true
false
true
false
false
false
false
true
false
true
dataDefinitionCausesTransactionCommit
dataDefinitionIgnoredInTransactions
false
false
false
false
true
false
true
false
false
false
false
false
dataDefinitionIgnoredInTransactions
doesMaxRowSizeIncludeBlobs
false
true
true
false
false
true
false
true
false
true
false
false
doesMaxRowSizeIncludeBlobs
getCatalogSeparator
.
.

.
.

.
.
.


.
getCatalogSeparator
getCatalogTerm
stored procedure
database

CATALOG
Catalog

database
CATALOG
database

Catalog
Catalog
getCatalogTerm
getDatabaseProductName
DB2/NT
MySQL
HSQL Database Engine
FrontBase
InstantDB
Firebird
PostgreSQL
PointBase
Microsoft SQL Server
Oracle
Mckoi SQL Database ( 1.0.3 )
DaffodilDB
getDatabaseProductName
getDatabaseProductVersion
07.02.0006
4.0.12-standard
1.7.2
3.6.41
Version 3.26
WI-V6.3.1.4481 Firebird 1.5
7.2
4.4GA
7.0.623
Oracle9i Enterprise Edition Release 9.2.0.1.0
1.0.3
3.4
getDatabaseProductVersion
getDefaultTransactionIsolation
READ_COMMITTED
READ_COMMITTED
READ_UNCOMMITTED
SERIALIZABLE
NONE
READ_COMMITTED
READ_COMMITTED
READ_COMMITTED
READ_COMMITTED
READ_COMMITTED
SERIALIZABLE
NONE
getDefaultTransactionIsolation
getDriverMajorVersion
7
3
1
2
3
1
7
4
6
9
1
1
getDriverMajorVersion
getDriverMinorVersion
1
0
7
4
26
5
2
4
3
2
0
0
getDriverMinorVersion
getDriverName
IBM DB2 JDBC 2.0 Type 2
MySQL-AB JDBC Driver
HSQL Database Engine Driver
FBJDriver
InstantDB JDBC Driver
JayBird JCA/JDBC driver
PostgreSQL Native Driver
PointBase JDBC Driver
i-net SPRINTA
Oracle JDBC driver
Mckoi JDBC Driver
DaffodilDBDriver
getDriverName
getDriverVersion
07.02.0005
mysql-connector-java-3.0.15-ga ( $Date: 2004/08/09 22:15:11 $, $Revision: 1.27.2.43 $ )
1.7.2
2.4.7
Version 3.26
1.5
PostgreSQL 7.2 JDBC2
4.4GA
6.03
9.2.0.1.0
1.0
1.0
getDriverVersion
getExtraNameCharacters
@#
#@



$

#@$
#@
$#


getExtraNameCharacters
getIdentifierQuoteString
"
`
"
"
"
"
"
"
"
"
"

getIdentifierQuoteString
getMaxBinaryLiteralLength
4000
16777208
0
268435456
256
0
0
65536
0
1000
0
0
getMaxBinaryLiteralLength
getMaxCatalogNameLength
0
32
0
128
0
0
0
128
128
0
0
128
getMaxCatalogNameLength
getMaxCharLiteralLength
4000
16777208
0
2147483647
256
32767
0
65536
0
2000
32768
4192
getMaxCharLiteralLength
getMaxColumnNameLength
30
64
0
128
256
31
32
128
128
30
256
128
getMaxColumnNameLength
getMaxColumnsInGroupBy
500
64
0
0
0
0
0
0
16
0
4096
0
getMaxColumnsInGroupBy
getMaxColumnsInIndex
16
16
0
0
8
0
1600
0
16
32
1
0
getMaxColumnsInIndex
getMaxColumnsInOrderBy
500
64
0
0
0
0
0
0
16
0
4096
0
getMaxColumnsInOrderBy
getMaxColumnsInSelect
500
256
0
0
2048
0
0
0
0
0
4096
0
getMaxColumnsInSelect
getMaxColumnsInTable
500
512
0
0
2048
32767
1600
0
250
1000
4096
0
getMaxColumnsInTable
getMaxConnections
0
0
0
0
128
0
8192
0
0
0
8000
0
getMaxConnections
getMaxCursorNameLength
18
64
0
128
0
31
32
0
18
0
0
0
getMaxCursorNameLength
getMaxIndexLength
255
256
0
0
2147483647
0
0
0
0
0
0
0
getMaxIndexLength
getMaxProcedureNameLength
256
0
0
128
0
31
32
128
128
30
0
128
getMaxProcedureNameLength
getMaxRowSize
4005
2147483639
0
0
65536
65531
1073741824
0
0
2000
16777216
0
getMaxRowSize
getMaxSchemaNameLength
30
0
0
128
0
0
0
128
128
30
0
128
getMaxSchemaNameLength
getMaxStatementLength
65535
65531
0
0
65536
0
0
65536
0
65535
60000
0
getMaxStatementLength
getMaxStatements
0
0
0
0
512
0
1
0
0
0
1024
0
getMaxStatements
getMaxTableNameLength
128
64
0
128
256
31
32
128
128
30
50
128
getMaxTableNameLength
getMaxTablesInSelect
0
256
0
0
256
0
0
0
0
0
512
0
getMaxTablesInSelect
getMaxUserNameLength
30
16
0
128
256
31
32
128
128
30
50
128
getMaxUserNameLength
getProcedureTerm
stored procedure


PROCEDURE
Procedure
PROCEDURE
function
PROCEDURE
stored procedure
procedure
Procedure
Procedure
getProcedureTerm
getSchemaTerm
schema


SCHEMA
Schema

schema
SCHEMA
owner
schema
Schema
Schema
getSchemaTerm
getSearchStringEscape
\
\
\

\
\
\
\
?
//
\
_%
getSearchStringEscape
isCatalogAtStart
false
true
false
true
false
false
true
true
true
false
false
true
isCatalogAtStart
isReadOnly
false
false
false
false
false
false
false
false
false
false
false
false
isReadOnly
nullPlusNonNullIsNull
true
true
true
true
true
true
true
true
true
true
true
true
nullPlusNonNullIsNull
nullsAreSortedAtEnd
false
false
false
false
false
true
false
false
false
false
false
false
nullsAreSortedAtEnd
nullsAreSortedAtStart
false
false
false
false
false
false
false
false
false
false
false
false
nullsAreSortedAtStart
nullsAreSortedHigh
true
false
false
false
false
false
true
false
true
false
false
false
nullsAreSortedHigh
nullsAreSortedLow
false
true
true
true
false
false
false
true
false
true
true
true
nullsAreSortedLow
storesLowerCaseIdentifiers
false
false
false
false
false
false
true
false
false
false
false
false
storesLowerCaseIdentifiers
storesLowerCaseQuotedIdentifiers
false
false
false
false
false
false
false
false
false
false
false
false
storesLowerCaseQuotedIdentifiers
storesMixedCaseIdentifiers
false
true
false
false
true
false
false
false
true
false
true
true
storesMixedCaseIdentifiers
storesMixedCaseQuotedIdentifiers
false
false
false
false
true
false
false
true
true
true
true
false
storesMixedCaseQuotedIdentifiers
storesUpperCaseIdentifiers
true
false
true
true
false
true
false
true
false
true
false
false
storesUpperCaseIdentifiers
storesUpperCaseQuotedIdentifiers
false
false
false
true
false
false
false
false
false
false
false
false
storesUpperCaseQuotedIdentifiers
supportsANSI92EntryLevelSQL
true
true
false
true
true
true
false
true
true
true
false
true
supportsANSI92EntryLevelSQL
supportsANSI92FullSQL
false
false
false
true
false
false
false
false
false
false
false
true
supportsANSI92FullSQL
supportsANSI92IntermediateSQL
false
false
false
true
false
false
false
true
false
false
false
true
supportsANSI92IntermediateSQL
supportsAlterTableWithAddColumn
true
true
true
true
false
true
true
true
true
true
true
true
supportsAlterTableWithAddColumn
supportsAlterTableWithDropColumn
false
true
true
true
false
true
false
true
false
false
true
true
supportsAlterTableWithDropColumn
supportsBatchUpdates
true
true
true
true
-
true
true
true
true
true
true
true
supportsBatchUpdates
supportsCatalogsInDataManipulation
false
true
false
true
false
false
false
false
true
false
false
true
supportsCatalogsInDataManipulation
supportsCatalogsInIndexDefinitions
false
false
false
true
false
false
false
false
true
false
false
true
supportsCatalogsInIndexDefinitions
supportsCatalogsInPrivilegeDefinitions
false
false
false
true
false
false
false
false
true
false
false
true
supportsCatalogsInPrivilegeDefinitions
supportsCatalogsInProcedureCalls
false
false
false
true
false
false
false
false
false
false
false
true
supportsCatalogsInProcedureCalls
supportsCatalogsInTableDefinitions
false
false
false
true
false
false
false
false
false
false
false
true
supportsCatalogsInTableDefinitions
supportsColumnAliasing
true
true
true
true
true
true
true
true
true
true
true
true
supportsColumnAliasing
supportsConvert
true
true
true
false
false
false
false
false
true
false
false
true
supportsConvert
supportsCoreSQLGrammar
false
true
true
true
false
true
false
true
true
true
false
true
supportsCoreSQLGrammar
supportsCorrelatedSubqueries
true
false
true
true
true
true
true
false
true
true
false
true
supportsCorrelatedSubqueries
supportsDataDefinitionAndDataManipulationTransactions
true
false
false
true
true
true
true
true
true
true
true
true
supportsDataDefinitionAndDataManipulationTransactions
supportsDataManipulationTransactionsOnly
false
false
true
false
false
false
false
false
false
true
false
true
supportsDataManipulationTransactionsOnly
supportsDifferentTableCorrelationNames
false
true
true
false
false
false
false
true
false
true
false
true
supportsDifferentTableCorrelationNames
supportsExpressionsInOrderBy
true
true
true
false
false
false
true
true
true
true
true
true
supportsExpressionsInOrderBy
supportsExtendedSQLGrammar
true
false
false
true
false
true
false
true
false
true
false
true
supportsExtendedSQLGrammar
supportsFullOuterJoins
true
false
false
true
false
true
true
false
true
true
false
false
supportsFullOuterJoins
supportsGroupBy
true
true
true
true
false
true
true
true
true
true
true
true
supportsGroupBy
supportsGroupByBeyondSelect
true
true
true
false
false
false
true
true
false
true
false
true
supportsGroupByBeyondSelect
supportsGroupByUnrelated
false
false
true
false
false
false
true
false
true
true
true
false
supportsGroupByUnrelated
supportsIntegrityEnhancementFacility
true
false
true
true
false
true
false
true
true
true
false
true
supportsIntegrityEnhancementFacility
supportsLikeEscapeClause
true
true
true
true
true
true
true
true
true
true
true
true
supportsLikeEscapeClause
supportsLimitedOuterJoins
true
true
true
true
false
true
true
true
true
true
true
true
supportsLimitedOuterJoins
supportsMinimumSQLGrammar
false
true
false
true
true
true
true
true
true
true
false
true
supportsMinimumSQLGrammar
supportsMixedCaseIdentifiers
false
false
false
false
false
false
false
false
false
false
true
false
supportsMixedCaseIdentifiers
supportsMixedCaseQuotedIdentifiers
true
false
true
false
true
true
true
true
true
true
true
false
supportsMixedCaseQuotedIdentifiers
supportsMultipleResultSets
true
false
false
true
false
false
false
true
true
false
false
true
supportsMultipleResultSets
supportsMultipleTransactions
true
true
true
true
true
true
true
true
true
true
true
true
supportsMultipleTransactions
supportsNonNullableColumns
true
true
true
true
true
true
true
true
true
true
true
true
supportsNonNullableColumns
supportsOpenCursorsAcrossCommit
true
false
false
false
true
false
false
false
true
false
true
true
supportsOpenCursorsAcrossCommit
supportsOpenCursorsAcrossRollback
false
false
false
false
true
false
false
false
true
false
true
true
supportsOpenCursorsAcrossRollback
supportsOpenStatementsAcrossCommit
true
false
true
true
true
true
true
true
true
false
true
true
supportsOpenStatementsAcrossCommit
supportsOpenStatementsAcrossRollback
true
false
true
true
true
true
true
true
true
false
true
true
supportsOpenStatementsAcrossRollback
supportsOrderByUnrelated
true
false
true
false
false
true
true
true
true
true
true
true
supportsOrderByUnrelated
supportsOuterJoins
true
true
true
true
true
true
true
true
true
true
true
true
supportsOuterJoins
supportsPositionedDelete
true
false
false
true
false
true
false
false
true
false
false
false
supportsPositionedDelete
supportsPositionedUpdate
true
false
false
true
false
true
false
false
true
false
false
false
supportsPositionedUpdate
supportsSchemasInDataManipulation
true
false
false
true
false
false
false
true
true
true
true
true
supportsSchemasInDataManipulation
supportsSchemasInIndexDefinitions
true
false
false
true
false
false
false
false
true
true
true
true
supportsSchemasInIndexDefinitions
supportsSchemasInPrivilegeDefinitions
true
false
false
true
false
false
false
false
true
true
true
true
supportsSchemasInPrivilegeDefinitions
supportsSchemasInProcedureCalls
false
false
false
true
false
false
false
true
true
true
true
true
supportsSchemasInProcedureCalls
supportsSchemasInTableDefinitions
true
false
false
true
false
false
false
true
true
true
true
true
supportsSchemasInTableDefinitions
supportsSelectForUpdate
true
false
false
true
false
true
true
false
true
true
false
true
supportsSelectForUpdate
supportsStoredProcedures
true
false
true
true
false
true
false
true
true
true
false
true
supportsStoredProcedures
supportsSubqueriesInComparisons
true
false
true
true
true
true
true
true
true
true
false
true
supportsSubqueriesInComparisons
supportsSubqueriesInExists
true
false
true
true
true
true
true
false
true
true
false
true
supportsSubqueriesInExists
supportsSubqueriesInIns
true
false
true
true
true
true
true
true
true
true
true
true
supportsSubqueriesInIns
supportsSubqueriesInQuantifieds
true
false
true
true
true
true
true
false
true
true
false
true
supportsSubqueriesInQuantifieds
supportsTableCorrelationNames
true
true
true
true
true
true
true
true
true
true
true
true
supportsTableCorrelationNames
supportsTransactions
true
true
true
true
true
true
true
true
true
true
true
true
supportsTransactions
supportsUnion
true
true
true
true
false
true
true
false
true
true
false
true
supportsUnion
supportsUnionAll
true
true
true
true
false
false
true
true
true
true
false
true
supportsUnionAll
usesLocalFilePerTable
false
false
false
false
true
false
false
false
false
false
false
- usesLocalFilePerTable
usesLocalFiles
false
false
false
false
true
false
false
false
false
false
false
- usesLocalFiles
getDatabaseMajorVersion -
4
1
- -
1
-
4
- - -
3
getDatabaseMajorVersion
getDatabaseMinorVersion -
0
7
- -
5
-
4
- - -
4
getDatabaseMinorVersion
getJDBCMajorVersion -
3
3
- -
2
-
3
- -
3
3
getJDBCMajorVersion
getJDBCMinorVersion -
0
0
- -
0
-
0
- -
0
0
getJDBCMinorVersion
getResultSetHoldability -
2
1
- -
2
-
2
- -
1
2
getResultSetHoldability
getSQLStateType -
1
2
- -
1
-
2
- - -
2
getSQLStateType
locatorsUpdateCopy -
true
false
- -
false
- - - - - - locatorsUpdateCopy
supportsGetGeneratedKeys -
true
false
- -
false
-
true
- -
false
- supportsGetGeneratedKeys
supportsMultipleOpenResults -
false
false
- -
false
-
false
- -
false
- supportsMultipleOpenResults
supportsNamedParameters -
false
true
- -
false
-
false
- -
false
- supportsNamedParameters
supportsSavepoints -
false
true
- -
true
-
false
-
true
false
- supportsSavepoints
supportsStatementPooling -
false
false
- -
false
-
false
- -
true
- supportsStatementPooling
getDatabaseCreationVersion - - -
3.6.41
- - - - - - - - getDatabaseCreationVersion
getDriverSubVersion - - -
7
- - - - - - - - getDriverSubVersion
getObjectId - - - - - - -
1
- - - - getObjectId
getDriverMajorVersionInfo - - - - - - - - -
9
- - getDriverMajorVersionInfo
getDriverMinorVersionInfo - - - - - - - - -
2
- - getDriverMinorVersionInfo
getDriverNameInfo - - - - - - - - -
Oracle JDBC driver
- - getDriverNameInfo
getDriverVersionInfo - - - - - - - - -
9.2.0.1.0
- - getDriverVersionInfo
getGetLobPrecision - - - - - - - - -
true
- - getGetLobPrecision
getLobPrecision - - - - - - - - -
4294967295
- - getLobPrecision

db2 mysql hsql frontbase instantdb firebird postgres pointbase mssql oracle mckoi daffodil





Author: Ritzberger Fritz, April 2005