mmx metadata framework
...the DNA of your data
MMX metadata framework is a lightweight implementation of the OMG Meta-Object Facility built on relational database technology. The MMX framework is based on three general concepts:
Metamodel | The MMX Metamodel provides a storage mechanism for various knowledge models. The data model underlying the metadata framework is more abstract than metadata models in general and consists of only a few abstract entities.
Access layer | Object-oriented methods can be exploited, using inheritance to derive the whole data access layer from a small set of primitives created in SQL. The MMX Metadata Framework provides several diverse methods of data access to fulfill different requirements.
Generic transformation | Many of the relationships between objects in a metadata model are too complex to be described by simple static relations. Instead, a universal data transformation concept is put to use, enabling the definition of transformations, mappings and transitions of any complexity.

XDTL (eXtensible Data Transformation Language)

April 12, 2009 21:30 by marx

Traditional ETL (Extract Transform Load) tools widely used in Data Warehouse environments tend to share two common deficiencies:

- emphasis on graphical user interface (and lack of a more efficient code interface) makes the design process slow and inflexible;

- dedicated ETL server generally means one extra hop for the data being transferred, which might be unacceptable considering today's data loads.

Enter XDTL (eXtensible Data Transformation Language). XDTL is an XML-based descriptive language designed for specifying data transformations from one database or storage to another. XDTL syntax is defined in an XML Schema document; the XML Schema of XDTL carries semantic annotations linking it to the XDTL ontology model.

XDTL documents are interpreted by an XDTL Runtime Engine. XDTL and its Runtime Engine are built not from the perspective of a slick IDE or a cool engine, but of an efficient language for describing data transformations. The goal is a lightweight ETL development/runtime environment that handles most common requirements more efficiently than traditional jack-of-all-trades tools. The XDTL Runtime Engine is currently under development for both .NET and Linux environments. The XDTL language is free for anyone to use.

XDTL documents are stored in well-formed XML files that can be validated with the XDTL XML Schema. A Package is the primary unit of execution that can be addressed by the runtime engine. A single XDTL document can contain several packages. Every package has a unique identifier in the form of a URI. There are also special non-executable packages (libraries) that serve as containers of tasks callable by other packages. A package contains an arbitrary number of Variables, an unordered collection of Connections and an ordered collection of Tasks.

Variables are name-value pairs mostly used for parameterization and are accessible by transformations. Connections are used to define data sources and targets used in transformations. Connections can refer to database resources (tables, views, result sets), text files in various formats (CSV, fixed format, Excel files) or Internet resources in tabular format.

Tasks are the smallest units of execution that have a unique identifier. A task is an ordered collection of Transformations that move, create or change data; every task contains at least one transformation. Tasks can be parameterized when one or several of their transformations have parameters; in that case all the parameters should have default values defined as package variables.
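The package structure described above (a package identified by a URI, holding variables, connections and tasks with ordered transformations) can be sketched as a small XML document. Note that the element and attribute names below are illustrative assumptions, not the actual XDTL schema:

```python
import xml.etree.ElementTree as ET

# A minimal XDTL-like document; element and attribute names are
# hypothetical, chosen only to mirror the structure described in the text.
doc = """
<package id="urn:example:load_customers">
  <variable name="schema" value="staging"/>
  <connection name="src" type="csv" path="customers.csv"/>
  <connection name="tgt" type="db" dsn="dwh"/>
  <task id="load">
    <transformation source="src" target="tgt"/>
  </task>
</package>
"""

root = ET.fromstring(doc)
# Every package carries a URI identifier; tasks are ordered
# collections of one or more transformations.
print(root.get("id"))
for task in root.findall("task"):
    for tr in task.findall("transformation"):
        print(task.get("id"), "->", tr.get("source"), "to", tr.get("target"))
```

Because the document is plain XML, it can be generated, validated and transformed with any standard XML tooling, which is exactly the point the next section makes.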

What sets XDTL apart from traditional ETL tools? 
- while ETL tools in general focus on the graphical IDE and the entry-level user, the needs of professional users go unaddressed and they have to struggle with an inefficient workflow. XDTL relies on XML as its development vehicle, making it easy to generate data transformation documents automatically or with the XML tools of your choice.

- as data amounts grow, the paradigm shifts from ETL to ELT, where the bulk of the transformations takes place inside the (target) database. Most of the fancy features provided by heavyweight ETL tools are therefore rarely or never used, and the main workhorse is SQL. However, there is very little to boost the productivity of SQL generation and reuse. XDTL addresses this with metadata-based mappings and transformations served from a metadata repository, and with transformation templates instead of SQL generation, capturing the typical scenarios in task libraries for easy reuse.

- most of the heavyweight tools try to address every single conceivable problem, which makes solving trivial tasks obscure and overly complex. They also aim to support every single database product, even if the chances of ever encountering most of them are almost zero. XDTL focuses on the most frequent scenarios and mainstream brands and puts the emphasis on productivity and efficiency.

- XDTL takes advantage of the general-purpose metadata repository of MMX Framework targeting a broad range of metadata-related activities and not locking the user into an ETL-specific and ETL-only repository from <insert your ETL tool vendor>.
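The transformation-template idea mentioned above can be illustrated with a generic insert-select pattern. The template text and parameter names here are hypothetical, standing in for mappings that would be served from the metadata repository:

```python
from string import Template

# A hypothetical reusable transformation template: a typical
# "delete and reload" ELT step, parameterized with mapping metadata
# instead of being generated as one-off SQL.
load_template = Template("""
DELETE FROM $target;
INSERT INTO $target ($columns)
SELECT $columns FROM $source WHERE $filter;
""")

# In an XDTL setting these values would come from the metadata
# repository; here they are hard-coded for illustration.
sql = load_template.substitute(
    target="dwh.customer",
    source="staging.customer",
    columns="id, name, country",
    filter="country IS NOT NULL",
)
print(sql)
```

The same template serves any table pair with a compatible column mapping, which is what makes a library of such templates reusable across loads.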


Trees and Hierarchies the MMX Way

March 30, 2009 22:34 by marx

Implementing trees and hierarchies in a relational database is an issue that has been puzzling many and has triggered numerous posts, articles and even some books on the topic. 

As stated by Joe Celko in Chapter 26, Trees [1]: "Unfortunately, SQL provides poor support for such data. It does not directly map hierarchical data into tables, because tables are based on sets rather than on graphs. SQL directly supports neither the retrieval of the raw data in a meaningful recursive or hierarchical fashion nor computation of recursively defined functions that commonly occur in these types of applications. <...> Since the nodes contain the data, we can add columns to represent the edges of a tree. This is usually done in one of two ways in SQL: a single table or two tables." The single-table representation enables one-to-many relationships via self-references (parent-child), while the more general two-table representation handles many-to-many relationships of arbitrary cardinality. Based on the principles of the Meta-Object Facility (MOF), MMX implements both the M1 (model) and M2 (metamodel) layers of abstraction. The two most important relationship types defined by UML, Generalization and Association, are realized.

Generalization is defined on M2 level and is implemented via SQL self-relationship mechanism. Each class defined in M2 must belong to one class hierarchy, and only single inheritance is allowed. In terms of semantic relationship types in Controlled Vocabularies [2], this is an 'isA' relationship. Associations (as well as aggregations and compositions) are realized as a relationship table (an associative or a 'join table') allowing any class to be related to any other class with an arbitrary number of associations of different type (with support for mandatory and multiplicity constraints). This implementation enables straightforward translation of metamodels expressed as UML class diagrams into equivalent representation as MMX M2 level class objects.
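The two representations can be sketched in plain SQL: a self-referencing column for single-inheritance Generalization, and an associative table for Associations. Table and column names below are illustrative, not the actual MMX repository schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Single-table representation: a self-reference (parent-child)
# supports the one-to-many M2 class hierarchy; single inheritance
# means exactly one parent_id per class.
cur.execute("""
CREATE TABLE object_class (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    parent_id INTEGER REFERENCES object_class(id)
)""")

# Two-table representation: an associative ("join") table supports
# many-to-many associations of arbitrary cardinality between classes.
cur.execute("""
CREATE TABLE association (
    class_id         INTEGER REFERENCES object_class(id),
    related_class_id INTEGER REFERENCES object_class(id),
    assoc_type       TEXT NOT NULL
)""")

cur.executemany("INSERT INTO object_class VALUES (?, ?, ?)",
                [(1, "Element", None),      # root of the class hierarchy
                 (2, "Table", 1),           # Table isA Element
                 (3, "Column", 1)])         # Column isA Element
cur.execute("INSERT INTO association VALUES (2, 3, 'AGGR')")  # Table aggregates Column
con.commit()
```

With this shape, a UML class diagram translates row by row: each class becomes an `object_class` row, each association an `association` row.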

M1 level deals with instances of M2 classes and parent-child hierarchies here denote 'inclusion', 'broader-narrower' or structural relationships between objects ('partOf' relationship in Controlled Vocabularies world). UML Links are implemented as a many-to-many relationship table, with both parent-child and link relationships being inherited from associations defined on M2 level. This inheritance enables automatic validation of M1 models against M2 metamodels by defining general rules to reinforce the integrity of models based on the characteristics of respective metamodel elements.

|    | 'single table' (parent-child, one-to-many) | 'two tables' (relationship table, many-to-many) |
|----|--------------------------------------------|-------------------------------------------------|
| M2 | Class hierarchy ('isA'), UML Generalization | UML Associations |
| M1 | Object hierarchies ('whole-part'), UML Links | UML Links |

There seems to be a huge controversy in the data management community over whether implementing hierarchies in SQL should employ the recursion support built into modern database systems. While Joe Celko proposes a technique of manual traversal and management of tree structures in [1], the book is 15 years old, and meanwhile the world (and databases) have changed a bit. Recursion is now part of ANSI SQL-99, with most big players providing at least basic support for it, and in many cases the arguable gain in performance from avoiding recursive processing gives way to the gain in ease and speed of application development with it.
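With SQL-99 recursion, traversing an adjacency-list tree becomes a single statement. A minimal sketch using SQLite's `WITH RECURSIVE` (table and column names assumed for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Adjacency-list ("single table") tree: each row points at its parent.
con.execute("""
CREATE TABLE node (
    id        INTEGER PRIMARY KEY,
    name      TEXT,
    parent_id INTEGER REFERENCES node(id)
)""")
con.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, "root", None),
    (2, "child", 1),
    (3, "grandchild", 2),
])

# SQL-99 style recursive query: start at the root and walk down,
# carrying the depth along with each step.
rows = con.execute("""
WITH RECURSIVE subtree(id, name, depth) AS (
    SELECT id, name, 0 FROM node WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.name, s.depth + 1
    FROM node n JOIN subtree s ON n.parent_id = s.id
)
SELECT name, depth FROM subtree ORDER BY depth
""").fetchall()
print(rows)  # [('root', 0), ('child', 1), ('grandchild', 2)]
```

The same shape of query answers ancestor, descendant and path questions without any manual bookkeeping of tree structure in the application.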

The MMX Framework encapsulates all the details of handling inheritance, traversing hierarchies, navigating linked object paths etc. in the MMX Metadata API, realized as a set of table functions (database functions that return a table as their result) that can be easily mapped by Object-Relational Mappers [3]. The performance penalty paid for recursion, which might be an issue in an enterprise-scale DWH, is not an issue here: after all, the MMX Framework is designed for (and mostly used in) metadata management, where data amounts are not beyond comprehension.

[1] Joe Celko's SQL For Smarties: Advanced SQL Programming, 1995.

[2] Zeng, Marcia Lei. Construction of Controlled Vocabularies, A Primer (based on Z39.19), 2005.

[3] Scott W. Ambler. Mapping Objects to Relational Databases, 2000.

Metamodel-based validation of models

December 30, 2008 13:01 by marx
When creating a metamodel instance (i.e. a model), the structure of the metamodel can be used to automatically validate the structure of the corresponding model. Essentially, every class, association and attribute value can be interpreted as a constraint (validation rule) enforcing certain properties and characteristics of the model (metadata). As metamodels are often seen as 'defining languages for model descriptions', we might consider these rules a syntax check for a model expressed in this language.
Constraints (validation rules) can be materialized and enforced during the metadata scanning/loading process, during a dedicated validation maintenance task, on demand, etc. In the MMX framework the rules are implemented at the database level as a set of data constraints of the metadata repository and form a protective layer transparent to a user or an application built on the framework. Only the 'structural' properties of a metamodel have been implemented; 'semantic' properties (homonyms, synonyms, reflexivity, transitivity etc.) and their use as validation rules are a separate (and much more complex) topic not covered yet. The rules for model validation implemented in MMX (and how they are enforced through constraints) are as follows:
:{M1} objects inherit their type codes from corresponding classes in {M2} metamodel(s). Only concrete classes can have corresponding objects.
object.Type *partof(objectClass.Type) & objectClass.IsAbstractClass = False
relation.Type *partof(relationClass.Type)
property.Type *partof(propertyClass.Type)
:{M2} class names are unique within a namespace, ie. {M2} metamodel and are never empty.

objectClass.Name *isunique(objectClass.Name) & objectClass.Name <> nil

:{M1} parent-child relations between objects are derived from designated associations between their superclasses in {M2} metamodel(s). 
object.parent.Type *partof(*tree(relationClass.relatedObject.Type)) & relationClass.IsTaxonomy = True

:{M1} related objects inherit their type codes from {M2} classes and/or their superclasses related through {M2} associations and/or {M2} attributes.
relation.object.Type *partof(*tree(relationClass.object.Type))
relation.relatedObject.Type *partof(*tree(relationClass.relatedObject.Type))
property.object.Type *partof(*tree(propertyClass.object.Type))
property.domain.Type *partof(*tree(propertyClass.domain.Type))

:{M1} linear properties, as well as significant elements of hierarchical properties, with an empty (null) value inherit the default value from the corresponding {M2} attributes.

property.Value *coalesce(property.Value, propertyClass.defaultValue)

:The number of {M1} objects participating in a relation cannot exceed or fall below the bounds expressed by the multiplicity property of the corresponding {M2} association, on either end.

*numberof(relation.object) *ge(relationClass.multiplicity.minValue)
*numberof(relation.relatedObject) *ge(relationClass.multiplicity.minValue)
*numberof(relation.object) *le(relationClass.multiplicity.maxValue)
*numberof(relation.relatedObject) *le(relationClass.multiplicity.maxValue)

:When the 'whole' object of a {M1} relation (descending from a {M2} association of type 'aggregation') is deleted, the relation itself is also deleted. When the 'whole' object of a {M1} relation (descending from a {M2} association of type 'composition') is deleted, both the relation and the 'parts' object are deleted.

*isdeleted(relation.object) & relationClass.Type = 'AGGR' -> relation := nil
*isdeleted(relation.object) & relationClass.Type = 'COMP' -> relation := nil
*isdeleted(relation.object) & relationClass.Type = 'COMP' -> relation.relatedObject := nil

The implementations are defined in an intuitive semi-formal notation. The operators *isunique, *partof, *tree, *isdeleted, *numberof, *ge and *le denote abstract pseudo-operations of uniqueness, being part of, tree of ancestors, deletion, count, greater-than-or-equal and less-than-or-equal.