Happy to see this MySQL JSON momentum!

Last week at Percona Live, Facebook presented Docstore for the first time, a native JSON implementation in MySQL. Oracle also presented a MySQL 5.7 lab release that includes the implementation of a native JSON type.
This is an important move, as MySQL was behind other RDBMSs regarding JSON (PostgreSQL already had a nice and complete implementation). JSON being widely adopted in various areas (JavaScript development, data exchange, database storage formats, ...), this move was awaited.

The two implementations of a native JSON type are different, but they both provide validation of input data and a binary JSON storage format.

If we look at Oracle's native JSON implementation, the binary format is optimized for property retrieval. To achieve good performance, the binary JSON storage format includes a preamble with a lookup table. This lookup table speeds up key/value pair access when executing functions like JSN_EXTRACT.
Speeding up JSN_EXTRACT is important for indexing JSON fields. In the Oracle implementation, JSON indexes are indexes on non-materialized virtual columns (functional indexes). Each such virtual column is a call to JSN_EXTRACT on the JSON property we want to index.
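A sketch of what this could look like, based on the generated-column syntax of the MySQL 5.7 lab release (table and column names here are made up, and the exact function names may differ in the final release):

```sql
-- Hypothetical sketch: a non-materialized virtual column extracting one
-- JSON property, plus a functional index built on top of it.
CREATE TABLE product (
  id   INT PRIMARY KEY,
  doc  JSON,
  -- virtual column is a call to JSN_EXTRACT on the property to index
  name VARCHAR(100) AS (JSN_EXTRACT(doc, '$.name')) VIRTUAL
);
-- the "JSON index" is simply an index on the virtual column
CREATE INDEX product_name_idx ON product (name);
```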

The Facebook native JSON implementation uses a docpath syntax that is understood by the MySQL parser. It is possible to have a secondary composite index containing a JSON index part. The optimizer can use the secondary index for covering indexes, index-only range scans, and so on. There is not yet full optimizer support, but this is a good start.
Contrary to Facebook, Oracle has not modified the parser: everything is wrapped in JSON helper functions.

The main difference I see between these two implementations is that with the Oracle implementation the secondary index is bound to a virtual column that needs to be in the DDL. With the Oracle implementation it is not possible to have a composite index mixing in a regular column. So for the moment the Facebook implementation seems more flexible on this point.
What I like about the Oracle implementation is that the provided JSON functions work both on the native JSON format and on JSON text. That might explain why the JSON UDF lab release has been removed from the downloads area.

So will it still make sense to store JSON as a text field?
It depends on the use case. Having no validation and no binary format conversion in and out to present the JSON payload to the application makes inserts faster. For small JSON documents the speedup from the lookup table might be marginal when running JSN_EXTRACT. Building an index entry is done only once, so access through the index will not have to redo the extraction.

What I really like about the addition of a JSON format to MySQL is that we get a hybrid approach mixing tabular data with JSON columns. Contrary to JSON-only, schemaless datastores like MongoDB, this hybrid data store might turn out to be the best choice for a wide range of applications and might please developers.

Select from a MariaDB table into JSON format?

Can we output the content of a MariaDB table in JSON using the CONNECT storage engine?
MariaDB and MySQL are not currently JSON-friendly databases. The usage of JSON in MySQL is almost nonexistent, with the exception of the EXPLAIN JSON output and of the MariaDB dynamic column dump.
There also exist a few UDFs (in the MySQL lab area) to help manipulate JSON strings (searching, modification, ...).

A basic question often raised is how to output the content of a MariaDB table in JSON format. Can the MariaDB CONNECT storage engine help do that?

Suppose we have a simple table:

MariaDB > create table book (
  id       integer,
  isbn     varchar(20),
  lang     varchar(20),
  subject  varchar(80),
  title    varchar(80),
  datepub  date
)  engine=innodb;

Then insert a few rows.

insert into book values (1,'9782212090819','fr','applications','Construire une application XML','2008-7-04');
insert into book values (2,'9782840825685','fr','applications','XML en Action','1999-7-04');

There is no direct way to query this InnoDB table and get the result in JSON format. Many users have written PHP (or other language) scripts to do this in a more or less generic way.
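Without any engine support, the same result can be hand-rolled in plain SQL by concatenating strings. A fragile sketch, shown only for comparison:

```sql
-- Hand-built JSON output in plain SQL (sketch; quoting is naive and
-- would break on values containing double quotes or backslashes).
SELECT CONCAT('{"id":', id,
              ',"isbn":"', isbn,
              '","lang":"', lang,
              '","subject":"', subject,
              '","title":"', title,
              '","datepub":"', datepub, '"}') AS json_row
FROM book;
```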
Let us now create a CONNECT/JSON table based on this InnoDB table.

MariaDB > create table bookj  
  engine=CONNECT table_type=JSON 
  as select * from book;

This CREATE TABLE produces a JSON file with the JSON representation of the table content:

bookj.jsn:
[
	{
		"id": 1,
		"isbn": "9782212090819",
		"lang": "fr",
		"subject": "applications",
		"title": "Construire une application XML",
		"datepub": 1215129600
	},
	{
		"id": 2,
		"isbn": "9782840825685",
		"lang": "fr",
		"subject": "applications",
		"title": "XML en Action",
		"datepub": 931046400
	}
]

But how can we directly get the JSON representation through a select?
We will create a second CONNECT/JSON table based on the same underlying file produced by the previous step.

create table bookj2 (
	ROW varchar(500) field_format='*')
engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/bookj.jsn';

The field_format='*' is a JSONPath expression meaning that we take the JSON as is and do not try to navigate its properties.

MariaDB > select * from bookj2;
| ROW                                                                                                                                |
| {"id":1,"isbn":"9782212090819","lang":"fr","subject":"applications","title":"Construire une application XML","datepub":1215129600} |
| {"id":2,"isbn":"9782840825685","lang":"fr","subject":"applications","title":"XML en Action","datepub":931046400}                   |

That is not really simple, but it does the job. The structure produced is a flat structure without arrays or embedded JSON.
In the real world, the equivalent of a JSON hierarchy is a set of tables linked through foreign keys. With MariaDB this can also involve dynamic columns for multi-valued columns.
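For the dynamic column case, MariaDB already ships a JSON dump function, COLUMN_JSON(). A small sketch (table and attribute names are made up):

```sql
-- MariaDB dynamic columns: schemaless attributes packed into a blob.
CREATE TABLE item (
  id    INT PRIMARY KEY,
  attrs BLOB  -- dynamic columns live in a blob column
);
INSERT INTO item VALUES
  (1, COLUMN_CREATE('color', 'red', 'size', 'L'));
-- COLUMN_JSON dumps the dynamic columns of a row as a JSON string
SELECT id, COLUMN_JSON(attrs) FROM item;
```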

MariaDB CONNECT Storage Engine and JSONPath

The MariaDB CONNECT storage engine allows accessing a JSON file as if it were a local table.
The mapping between a hierarchical structure and a relational structure is not 'natural' and needs to be defined. To achieve that we need a specific syntax: we associate with each column a 'field_format' attribute. This 'field_format' column attribute defines which property in the JSON hierarchy we want to map to the MariaDB table column.

With field_format='AUTHOR:*' the AUTHOR column, which is an array of objects, will remain in JSON format (we use the same JSON file example as in the previous post).

MariaDB > create table bibop1 (
     ISBN char(15),
     AUTHOR char(200) field_format='AUTHOR:*',
     PUBLISHER char(20) field_format='PUBLISHER:NAME')
     engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/biblio3.jsn';
MariaDB > select * from bibop1;
| ISBN          | AUTHOR                                            | PUBLISHER       |
| 9782212090819 | [{"FIRSTNAME":"Jean-Christophe","LASTNAME":"Bernadac"},

With field_format='AUTHOR:[]' the AUTHOR column will be represented as a comma-separated list. Each JSON object is represented as a concatenated string of its properties.

| ISBN          | AUTHOR                                               | PUBLISHER       |
| 9782212090819 | Jean-Christophe Bernadac, John Knab, Charles Dickens | Eyrolles        |

To obtain a normalized view of author first and last names, we associate the AuthorFN column with field_format='AUTHOR:[X]:FIRSTNAME' (and AuthorLN likewise with LASTNAME) and obtain a presentation with each author split onto its own line.

| ISBN          | AuthorFN        | AuthorLN |
| 9782212090819 | Jean-Christophe | Bernadac |

Here is a more exhaustive view of the various possibilities offered for the mapping. We will see later that we can also use this syntax to modify a JSON document, including arrays embedded in objects.

Specification  Array Type  Limit  Description
[n]            All         N.A.   Take the nth value of the array. Ignore it if n is 0.
[X] or [x]     All                Expand. Generate one row for each array value.
["string"]     String             Concatenate all values separated by the specified string.
[+]            Numeric            Make the sum of all the array values.
[*]            Numeric            Make the product of all array values.
[!]            Numeric            Make the average of all the array values.
[>] or [<]     All                Return the greatest or least value of the array.
[#]            All         N.A.   Return the number of values in the array.
[]             All                Expand if under an expanded object. Otherwise sum if numeric, else concatenate separated by ", ".
(none)         All         N.A.   If an array, expand it if under an expanded object, or take its first value.
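To make one of these specifications concrete, here is a hypothetical sketch using the [+] aggregation (the file name, column names and SCORES array are made up for illustration):

```sql
-- Sketch: TOTAL receives the sum of each object's SCORES numeric array,
-- computed by the CONNECT mapping itself via the '[+]' specification.
create table scoresum (
  NAME  char(32),
  TOTAL double field_format='SCORES:[+]')
engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/scores.jsn';
```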

We all know how XPath works for XML documents. There has been some work done to define a JSONPath syntax to access JSON document content. The scope of what has been defined by JSONPath is broader than what the MariaDB CONNECT storage engine needs to define the mapping between table columns and JSON object properties. Here is the syntax available with JSONPath (with the equivalent XPath syntax):

XPath  JSONPath          Description
/      $                 the root object/element
.      @                 the current object/element
/      . or []           child operator
..     n/a               parent operator
//     ..                recursive descent. JSONPath borrows this syntax from E4X.
*      *                 wildcard. All objects/elements regardless of their names.
@      n/a               attribute access. JSON structures don't have attributes.
[]     []                subscript operator. XPath uses it to iterate over element collections and for predicates. In JavaScript and JSON it is the native array operator.
|      [,]               union operator. In XPath it results in a combination of node sets; JSONPath allows alternate names or array indices as a set.
n/a    [start:end:step]  array slice operator borrowed from ES4.
[]     ?()               applies a filter (script) expression.
n/a    ()                script expression, using the underlying script engine.
()     n/a               grouping in XPath

A more complete description of what has been proposed for JSONPath is here.
MySQL JSON UDF functions version 0.4.0 use part of this syntax for functions like JSON_EXTRACT or JSON_SEARCH.
With the MySQL JSON UDFs you can also express a path as a variable list of keyparts, each keypart being a JSON property or an index in a JSON array: select json_extract(doc, keypart1, keypart2 ...)
PostgreSQL with its native JSON type also uses a variable list of path elements to define a JSON path. For example,
json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4','f6'), where the key path is 'f4','f6', returns foo.

We will discover at Percona Live what syntax and set of helper functions have been chosen by Facebook and Oracle for their respective MySQL native JSON type implementations. In fact I do not know whether the two implementations are different. We will see.

The JSONPath syntax also allows defining queries on JSON data. The following JSONPath query

$..book[?(@.price<10)]

will filter all books cheaper than 10 when applied to the following JSON document:

{ "store": {
    "book": [
      { "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95
      },
      { "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99
      },
      { "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99
      },
      { "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95
    }
  }
}

With CONNECT we will first map the JSON properties to table columns. We will then use plain SQL to do the query.
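A hedged sketch of what this could look like, assuming the store document above is saved as /var/lib/mysql/json/store.jsn (the file path and table name are made up; the paths follow the field_format syntax shown earlier):

```sql
-- Map each element of the book array to a row (sketch).
create table storebook (
  category char(16) field_format='store:book:[X]:category',
  author   char(64) field_format='store:book:[X]:author',
  title    char(64) field_format='store:book:[X]:title',
  price    double   field_format='store:book:[X]:price')
engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/store.jsn';

-- The "books cheaper than 10" filter then becomes plain SQL:
select title, price from storebook where price < 10;
```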

As the MariaDB CONNECT storage engine JSON functionality is still experimental, your feedback on bugs, improvements and features is welcome. To file bugs: https://mariadb.atlassian.net/projects/MDEV

Additional info:
MariaDB Knowledge Base documentation for the CONNECT storage engine: CONNECT
GitHub repository for MariaDB 10.0, including the CONNECT storage engine
Presentation at the upcoming Percona Live 2015 Santa Clara conference:
MariaDB CONNECT Storage Engine to simplify heterogeneous data access (JSON focus)

MariaDB CONNECT storage engine now offers access to JSON

The MariaDB CONNECT storage engine now offers access to JSON files and allows you to see an external JSON file as a MariaDB table. JSON (JavaScript Object Notation) is a lightweight data-interchange format widely used on the Internet.
JSON, like XML, represents data hierarchically. The mapping from hierarchical data to tabular data needs to be specified.

To use the CONNECT storage engine JSON capabilities, the 'create table' statement describes what should be retrieved from the file and how it should be processed. Let us start from the file "biblio3.json". This file consists of an array of objects; some object properties have object or array values:

[
  {
    "ISBN": "9782212090819",
    "LANG": "fr",
    "SUBJECT": "applications",
    "AUTHOR": [
      {
        "FIRSTNAME": "Jean-Christophe",
        "LASTNAME": "Bernadac"
      },
      {
        "FIRSTNAME": "François",
        "LASTNAME": "Knab"
      }
    ],
    "TITLE": "Construire une application XML",
    "PUBLISHER": {
      "NAME": "Eyrolles",
      "PLACE": "Paris"
    },
    "DATEPUB": 1999
  },
  {
    "ISBN": "9782840825685",
    "LANG": "fr",
    "SUBJECT": "applications",
    "AUTHOR": [
      {
        "FIRSTNAME": "William J.",
        "LASTNAME": "Pardi"
      }
    ],
    "TITLE": "XML en Action",
    "TRANSLATED": {
      "PREFIX": "adapté de l'anglais par",
      "TRANSLATOR": {
        "FIRSTNAME": "James",
        "LASTNAME": "Guerin"
      }
    },
    "PUBLISHER": {
      "NAME": "Microsoft Press",
      "PLACE": "Paris"
    },
    "DATEPUB": 1999
  }
]

We can see this JSON hierarchical collection of objects as a tree: the file is an array of BOOK objects, each carrying scalar properties (ISBN, LANG, SUBJECT, TITLE, DATEPUB), an AUTHOR array of objects (FIRSTNAME, LASTNAME) and a nested PUBLISHER object (NAME, PLACE).

The create table options allow specifying how the JSON file's properties will be mapped to table columns:

MariaDB > create table bibop1 (
     ISBN char(15),
     AUTHOR char(200) field_format='AUTHOR:*',
     PUBLISHER char(20) field_format='PUBLISHER:NAME')
     engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/biblio3.jsn';
MariaDB > select * from bibop1;
| ISBN          | AUTHOR                                                       | PUBLISHER       |
| 9782212090819 | [{"FIRSTNAME":"Jean-Christophe","LASTNAME":"Bernadac"}, ...] | Eyrolles        |
| 9782840825685 | [{"FIRSTNAME":"William J.","LASTNAME":"Pardi"}]              | Microsoft Press |

In that table, part of the JSON format is preserved as JSON text. This happens because we have defined the AUTHOR column with field_format='AUTHOR:*'.

MariaDB > create table bibop2 (
     ISBN char(15),
     AUTHOR char(200) field_format='AUTHOR:[]',
     PUBLISHER char(20) field_format='PUBLISHER:NAME')
     engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/biblio3.jsn';

In that case the array is represented as comma-separated values (we could choose a different separator, such as the word 'AND'). This happens because we have defined the AUTHOR column with field_format='AUTHOR:[]'.

MariaDB > select * from bibop2;
| ISBN          | AUTHOR                                               | PUBLISHER       |
| 9782212090819 | Jean-Christophe Bernadac, John Knab, Charles Dickens | Eyrolles        |
| 9782840825685 | William J. Pardi                                     | Microsoft Press |

We can also choose to normalize the presentation with one author name per row. For that we define the AuthorFN column with field_format='AUTHOR:[X]:FIRSTNAME'.

MariaDB > create table bibop3 (
     ISBN char(15),
     AuthorFN char(128) field_format='AUTHOR:[X]:FIRSTNAME',
     AuthorLN char(128) field_format='AUTHOR:[X]:LASTNAME')
     engine=CONNECT table_type=JSON file_name='/var/lib/mysql/json/biblio3.jsn';
MariaDB > select * from bibop3;
| ISBN          | AuthorFN        | AuthorLN |
| 9782212090819 | Jean-Christophe | Bernadac |
| 9782212090819 | John            | Knab     |
| 9782212090819 | Charles         | Dickens  |
| 9782840825685 | William J.      | Pardi    |

All these CONNECT tables can exist simultaneously. They can be considered as views on the same underlying raw JSON file. With JSON, the CONNECT storage engine keeps the same principle it applies to other data sources: the data remains fully external.

MySQL going to challenge MongoDB with native JSON?

When looking at the Percona Live Santa Clara 2015 agenda, I saw two intriguing conferences:
- JSON support in MySQL 5.7
- Docstore: document database for MySQL at Facebook

Both conferences talk about supporting a native JSON type in MySQL! This sounds really great. I had never seen any blog post or GitHub repository around this subject, so this is a good surprise. Having in MySQL a native JSON datatype with a set of functions for handling JSON documents, with indexing capabilities, and with good underlying storage sounds very cool!

MongoDB and Couchbase, as document stores, have opened the road to JSON usage by a database. They built their databases on a JSON storage format. PostgreSQL has also introduced a native JSON type with associated functions.

But MongoDB's storage foundations were not that solid. Memory-mapped files were the critical piece of the MongoDB storage engine. Locking was done at the database level, and the atomicity of a commit was at the document level. All this limited the concurrency capabilities of the data store.
The NoSQL paradigm of MongoDB also means no join capabilities, which has led to highly denormalized data models. Excessive denormalization can hurt, as it incurs high space consumption. Efficient indexing required padding to avoid too much index rebuilding work when data changed. The schemaless nature of the database, where the schema lives in the code, is problematic for application evolution.

Then came Tokutek, which built TokuMX to solve some of the flaws of MongoDB. The nice idea was to reuse the efficient storage capabilities of TokuDB. The storage was more solid, with true recoverability, transactions, excellent compression and online operations.

MongoDB, maybe inspired by what TokuMX achieved, has recently decided to change its product's architecture: MongoDB can now use the WiredTiger storage engine, which provides document-level locking and compression. But as we all know, changing the foundations of a house while you live in it is not that easy.

The MySQL approach is different. The foundations are very solid. The InnoDB transactional storage engine has huge qualities: high concurrency, reliability, safe crash recovery, online operations, reliable replication capabilities ...

Building a document store on these foundations might lead to a good end result. We have to wait for feedback from the field, and benchmarks need to be run to show how much one can expect from the MySQL/JSON approach. With the newly added JSON support in MySQL, you can combine the flexibility of NoSQL with the strength of a relational database. I am a big fan of this hybrid approach: mixing relational content with the flexibility of schemaless JSON might turn out very attractive from a developer's perspective.
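The hybrid approach can be sketched in one table definition (hypothetical names; the JSON column type is the one announced in the MySQL 5.7 lab release):

```sql
-- Fixed, relational columns side by side with a flexible JSON document.
CREATE TABLE customer (
  id      INT PRIMARY KEY,
  email   VARCHAR(100) NOT NULL,  -- strict, indexed, relational part
  profile JSON                    -- schemaless part, free to evolve
) ENGINE=InnoDB;
```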



From now on, GitHub is the place for new MariaDB development!

Today while browsing through my emails I was very happy to read this email from Sergei Golubchik:

I'm happy to announce that MariaDB-10.1 tree has been completely migrated to github. Since now on
 we'll use github for the new development in MariaDB. It's https://github.com/MariaDB/server, 
go on, fork it, hack around, submit pull requests. Have fun!
Older trees (10.0, 5.5, 5.3, 5.2, 5.1) are not on github - we do hope to migrate 
them too eventually, but at the moment they are still on launchpad.
If you're a maria-captain on launchpad - for you to get write access to the mariadb repository 
on github you need to tell me your github account name.
P.S.: Don't forget that 10.1 is still *pre-alpha*. You've been warned.
Mailing list: https://launchpad.net/~maria-developers
Post to     : maria-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help   : https://help.launchpad.net/ListHelp

So if you want to embark on the MariaDB adventure,
it is here: https://github.com/MariaDB/server

Go on,
Fork it,
Hack around,
Submit pull requests.
Have fun!

MaxScale now has its own public IRC channel

MaxScale is a proxy for the MySQL protocol built with a modular architecture.
The underlying concept of modules allows extending the MaxScale proxy services. The current version implements read/write splitting and connection load balancing. Internally, MySQL queries go through a SQL parsing phase, which gives MaxScale great capabilities regarding query routing.
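A minimal configuration sketch for a read/write splitting service might look like this (section names and credentials are made up; check the MaxScale documentation for the exact options of your version):

```ini
# Hypothetical maxscale.cnf fragment (all names and credentials are examples)
[server1]
type=server
address=192.168.0.10
port=3306
protocol=MySQLBackend

[Splitter Service]
type=service
router=readwritesplit
servers=server1
user=maxuser
passwd=maxpwd

[Splitter Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=4006
```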

So if you are interested in MaxScale, do not hesitate to enter the chat room #maxscale on irc.freenode.net. Its developers will be there. Bring your ideas and problems. You are welcome.

The MaxScale source code is on GitHub.

There is a MaxScale Google group available.

And a good starting point on how to install and configure MaxScale is Ivan Zoratti's blog post.

MaxScale, ProxySQL and MySQL Proxy

At FOSDEM 2014, ProxySQL and MaxScale were both presented. Both are proxies that can help build sophisticated MariaDB/MySQL architectures.
But currently, what is the most used proxy with MySQL? It is HAProxy. HAProxy is a layer 4 proxy that has no knowledge of the MySQL protocol. Being low level makes it very fast, but it cannot accomplish any advanced proxy task. In many cases this is enough for pure load balancing, but it cannot handle filtering, routing or query rewriting. Those require understanding the MySQL protocol and looking at the queries passing through the proxy.

ProxySQL and MaxScale, contrary to HAProxy, are layer 7 proxies. This means that they both understand the MySQL network protocol and have access to the query payload.

At FOSDEM, ProxySQL was presented by its developer René Cannaò. ProxySQL has interesting features like a very efficient query cache, query rewriting capabilities and a request queuing mechanism for when the backend is not available. ProxySQL uses regular expressions to do query filtering and rewriting; it does not provide full query parsing.

MaxScale was presented by two of its developers, Massimiliano Pinto and Vilho Raatikka. MaxScale is a more ambitious project with a modular architecture. The underlying concept of modules allows extending the MaxScale proxy services. The current version implements read/write splitting and load balancing. Internally, MySQL queries go through a SQL parsing phase, which gives MaxScale great capabilities regarding query routing.

Talking about proxy technology without mentioning MySQL Proxy would not be complete, and MySQL Proxy was indeed mentioned during the presentations. When MySQL Proxy was born it attracted a lot of users, as it was a very promising technology. Its extensibility through the Lua scripting language made it very flexible. MySQL Proxy does not embed a SQL parser; basic tokenization was done in Lua. Unfortunately, MySQL Proxy stayed forever in alpha status, with no more improvements and no visibility on its future.

So if you are interested in proxy technology and want to know more about MaxScale, do not miss tomorrow's webinar "MariaDB Enterprise and MaxScale – why all the buzz?" delivered by Ivan Zoratti.

To get the MaxScale sources.
To file bugs.

MariaDB CONNECT Storage Engine as an ETL (or ELT)?

The MariaDB CONNECT storage engine allows access to heterogeneous data sources. In my previous post I showed you how to use the MariaDB CONNECT storage engine to access an Oracle database. This is quite easy through the CONNECT ODBC table type.

For most architectures where heterogeneous databases are involved, an ETL (Extract-Transform-Load) is used to move data across databases. Most ETL tools, like Informatica, DataStage, Talend, GoldenGate or Tungsten, perform complex data transformations using middle-tier ETL engines. They require describing the transformation process in a proprietary manner. This approach needs an extra product with extra expertise, and data is generally duplicated when it is moved around.

In many cases this extra product and competence requirement can be avoided. Data can be moved directly from database to database, and transformations can be applied directly inside the source or target database through standard SQL commands.
This concept was successfully brought to the market by products like Sunopsis (bought by Oracle). The approach is known as ELT (Extract-Load-Transform): data is moved directly from database to database, and the database itself is used to transform the data through standard SQL commands. This makes sense, as some databases have very strong data transformation capabilities through powerful SQL extensions (analytical SQL) and parallel execution capabilities.

With the MariaDB CONNECT storage engine, linking the two databases directly through an ODBC connection allows implementing this ELT pattern. Doing data transformation at the database level is very beneficial: all the advanced features of the source or target databases can be used.
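A hedged sketch of the pattern (the DSN, credentials, table and column names below are all made up): the remote table is declared once through ODBC, then a plain INSERT ... SELECT performs the load-and-transform step.

```sql
-- Declare the remote Oracle table through ODBC (sketch; names are examples).
create table src_orders engine=CONNECT table_type=ODBC
  CONNECTION='DSN=orcl;UID=scott;PWD=manager1'
  SRCDEF='select order_id, amount, status from app.orders';

-- ELT step: load into a local InnoDB table, transforming with plain SQL.
insert into orders_fact (order_id, amount_eur)
  select order_id, amount * 0.92 from src_orders where status = 'SHIPPED';
```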

In our case, transformations will be applied both at the source and at the target database. Let us consider the following ODBC table created with the MariaDB CONNECT storage engine:

create table lineitem2 ENGINE=CONNECT TABLE_TYPE=ODBC
SRCDEF='select l_suppkey, sum(l_quantity) qt from dbt3.lineitem3 group by l_suppkey';

We can now query this table. All the aggregation specified in the SRCDEF argument will be done on the remote Oracle node, not on the MariaDB node. Some extra computation can then be done locally. Here we compute the sum of the 10 biggest aggregated quantities.

MariaDB [dbt3]> select sum(qt) from (select qt  from lineitem2  order by qt DESC limit 10) as derivedT;
| sum(qt)   |
|     31444 |

This example is very simplistic, but it gives you an idea of what can be achieved with this pattern. So if you want to move data from an Oracle (or any ODBC-accessible) database to MariaDB, you should give the MariaDB CONNECT storage engine a try.

We already have a few customers using the MariaDB CONNECT Storage Engine to simplify their data integration processes (currently with Oracle and Sybase databases).

If you come to FOSDEM 2014, I will present in the MySQL devroom on Sunday, Feb 2nd, and will be available to answer your questions. If you are there, do not hesitate to register for the MySQL Community dinner on Saturday, Feb 1st.

Slides on the MariaDB CONNECT Storage Engine

Let us know about your tests. You just need to download MariaDB 10.0.7, which includes the CONNECT storage engine. Give us your feedback: positive, negative, improvement requests, bugs, documentation problems, ...

Download the latest MariaDB 10.0.7 Beta release
CONNECT Storage Engine Documentation

MariaDB CONNECT Storage Engine and non MySQL syntax selects

When you pass a SQL command through MySQL, it has to be compliant with MySQL syntax, as it will go through the MySQL parser. Can we pass selects that are not MySQL-syntax compatible to a target ODBC database? The MariaDB CONNECT storage engine allows access through ODBC to heterogeneous data sources like Oracle or Microsoft SQL Server.
In my previous post I showed you how to use the CONNECT ODBC table type to access an Oracle database. It was possible to run select commands against these remote ODBC tables. In many cases it would be very interesting to run the SQL command directly on the target database.
The first reason is efficiency, as this avoids a lot of network round trips between the Oracle server and the MariaDB server. The second reason is to avoid the restriction to MySQL-compliant syntax.

With the MariaDB CONNECT storage engine it is possible to define a remote ODBC table and to associate a SQL request with this table definition. This SQL request will be executed on the remote server, so for a remote Oracle database it is possible to use proprietary syntax. Suppose we want to use the Oracle proprietary "CONNECT BY" syntax, which does not exist in MySQL. We do it this way:

create table emp_hierarchy engine=CONNECT
TABLE_TYPE=ODBC tabname='EMP' CONNECTION='DSN=orcl;UID=scott;PWD=manager1'
srcdef='SELECT empno, ename, mgr, LEVEL FROM emp CONNECT BY PRIOR empno = mgr';

The table definition includes a srcdef argument that defines the SQL request that will be executed on the Oracle target database. When we query this table on MariaDB, everything is in fact done on the Oracle server.

MariaDB [test]> select * from emp_hierarchy;
|  7788 | SCOTT  | 7566 |     1 |
|  7876 | ADAMS  | 7788 |     2 |
|  7839 | KING   | NULL |     1 |
|  7521 | WARD   | 7698 |     3 |
43 rows in set (2.54 sec)
MariaDB [test]>

Beyond this basic example, this opens the door to executing any select on the target database. It allows, for example, benefiting from Oracle's in-database SQL analytics, which include a lot of powerful functions (ranking, windowing, reporting aggregates, LAG/LEAD, FIRST/LAST, inverse percentile, hypothetical rank and distribution, pattern matching, modeling, advanced aggregations ...). The same idea applies to powerful spatial or full-text capabilities.
And do not forget that with the MariaDB CONNECT storage engine you can aggregate many of these tables located on different servers so that they are seen as a single table. For that you use the CONNECT TBL table type, which also gives the benefit of parallel execution on the remote servers.
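A sketch of the TBL type (table names are made up, and the exact option name should be checked against the CONNECT documentation; each listed table can itself be a CONNECT table pointing at a different server):

```sql
-- Combine several tables into one logical table (sketch; names are examples).
-- emp_paris and emp_london could each be ODBC tables on different servers.
create table emp_all engine=CONNECT table_type=TBL
  table_list='emp_paris,emp_london';

-- Queries against emp_all fan out to the underlying tables.
select count(*) from emp_all;
```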

What would be great would be to have someone write a MySQL stored procedure wrapper that transparently creates the CONNECT table to run the query against (something like "callRemoteSQL") ;-)