
Syslog is widely used for logging and supports distributed logging out of the box. Having MySQL/MariaDB audit data logged to a remote syslog server is a strong guarantee for the security of the audit data. PCI compliance requires separation of duties, and separating the DBA role from a security officer role is a way to guarantee that audit data is tamper-proof from the DBA.

Setting up the MariaDB Audit Plugin to log to a remote syslog server is quite simple. First you install the MariaDB Audit Plugin: you download it, copy it to lib/plugin in your MySQL/MariaDB install directory, and activate it:

MariaDB [(none)]> INSTALL PLUGIN server_audit SONAME 'server_audit.so';
MariaDB [test]> SET GLOBAL server_audit_output_type=SYSLOG;
MariaDB [test]> SET GLOBAL server_audit_events='CONNECT,QUERY';
MariaDB [test]> SET GLOBAL server_audit_logging=on;
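Note that SET GLOBAL settings are lost on a server restart. To make the setup permanent you can put the equivalent options in the server configuration file; a minimal sketch, assuming the usual /etc/my.cnf location and the option names shown above:

```ini
# /etc/my.cnf -- persist the audit plugin setup across restarts
[mysqld]
plugin_load=server_audit=server_audit.so
server_audit_output_type=SYSLOG
server_audit_events=CONNECT,QUERY
server_audit_logging=ON
```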

To have the audit data sent to a remote server, you first need to configure the remote syslog server to accept messages from the network (here on port 514) by editing /etc/rsyslog.conf:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
 
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

Restart the syslog daemon :

service rsyslog restart

To check that your syslog system accepts data from a remote source, verify that it is listening on the configured port 514:

[root@centos2 etc]# netstat -anp|grep 514
tcp        0      0 0.0.0.0:514                 0.0.0.0:*                   LISTEN      11467/rsyslogd      
tcp        0      0 :::514                      :::*                        LISTEN      11467/rsyslogd      
udp        0      0 0.0.0.0:514                 0.0.0.0:*                               11467/rsyslogd      
udp        0      0 :::514                      :::*                                    11467/rsyslogd

On the source server, where your MariaDB/MySQL server produces the audit entries, configure syslog to push log entries to the remote system (here 192.168.56.11). Edit /etc/rsyslog.conf as follows:

*.info;mail.none;authpriv.none;cron.none                @192.168.56.11
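The single @ in front of the address means the messages are forwarded over UDP. rsyslog also accepts @@ for TCP forwarding, which is more reliable for audit data; the port can be given explicitly (514 being the default):

```
*.info;mail.none;authpriv.none;cron.none                @@192.168.56.11:514
```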

And on the target system you now get the audit records tagged with the originating system:

Sep 21 00:52:37 centos1 kernel: imklog 5.8.10, log source = /proc/kmsg started.
Sep 21 00:52:37 centos1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1647" x-info="http://www.rsyslog.com"] start
Sep 21 00:52:59 centos1 mysql-server_auditing:  centos1.localdomain,root,localhost,1,19,QUERY,test,'show tables',0
Sep 21 00:53:14 centos1 mysql-server_auditing:  centos1.localdomain,root,localhost,1,20,QUERY,test,'show tables',0
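After the syslog prefix, each audit record is a comma-separated list of fields: server host, user, client host, connection id, query id, operation, database, query and return code. A small shell sketch to pull fields out of a record like the ones above (the sample line is copied from the output shown; field positions follow that format):

```shell
# One audit line as it appears on the target syslog server.
line="Sep 21 00:52:59 centos1 mysql-server_auditing:  centos1.localdomain,root,localhost,1,19,QUERY,test,'show tables',0"

# Strip everything up to and including the syslog ident.
record="${line#*mysql-server_auditing: }"
record="${record# }"                      # drop the remaining leading space

# Extract individual audit fields by position.
user=$(echo "$record" | cut -d, -f2)
operation=$(echo "$record" | cut -d, -f6)
echo "user=$user operation=$operation"
```

The same splitting can be used in a log-watching script to alert on specific users or operations.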

So this is quite simple to set up. Of course you can have multiple MariaDB/MySQL servers sending audit data to a single syslog server.

Syslog is widely used for logging. It allows the storage of generated messages to be separated from the system that generated them. Many solutions exist to aggregate, monitor, query and analyze syslog data. This separation of data from its source is a key element of security auditing, as it makes it harder to repudiate or wipe out logging data. Syslog data can be consolidated in a central repository, which makes it easier to correlate various events. Consolidated logs can go through reporting, analytics or artificial-intelligence algorithms to detect patterns and alert customers of problems.

Setting up the MariaDB Audit Plugin to log to syslog is quite simple:

You download the MariaDB audit plugin (a single shared library file) from:
http://www.skysql.com/downloads/mariadb-audit-plugin-beta
You copy the server_audit.so file to the lib/plugin directory in your MySQL/MariaDB install directory and activate it with:

MariaDB [(none)]> INSTALL PLUGIN server_audit SONAME 'server_audit.so';

By default logging goes to a file, so you need to switch it to syslog:

MariaDB [test]> SET GLOBAL server_audit_output_type=SYSLOG;
MariaDB [test]> SET GLOBAL server_audit_events='CONNECT,QUERY';
MariaDB [test]> SET GLOBAL server_audit_logging=on;

You can then see all the variables that configure it:

MariaDB [test]> show variables like '%audit%';
+-------------------------------------+-----------------------+
| Variable_name                       | Value                 |
+-------------------------------------+-----------------------+
| server_audit_events                 | CONNECT,QUERY         |
| server_audit_excl_users             |                       |
| server_audit_file_path              | server_audit.log      |
| server_audit_file_rotate_now        | OFF                   |
| server_audit_file_rotate_size       | 1000000               |
| server_audit_file_rotations         | 9                     |
| server_audit_incl_users             |                       |
| server_audit_logging                | ON                    |
| server_audit_mode                   | 0                     |
| server_audit_output_type            | syslog                |
| server_audit_syslog_facility        | LOG_USER              |
| server_audit_syslog_ident           | mysql-server_auditing |
| server_audit_syslog_info            |                       |
| server_audit_syslog_priority        | LOG_INFO              |
+-------------------------------------+-----------------------+
14 rows in set (0.00 sec)
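If you only want to audit specific accounts, the server_audit_incl_users and server_audit_excl_users variables shown above take a comma-separated list of user names; a sketch (the user name app1 is just an illustration):

```
MariaDB [test]> SET GLOBAL server_audit_incl_users='app1,root';
```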

You also get a few status values:

MariaDB [test]> show status like '%audit%';
+-------------------------------+--------------+
| Variable_name                 | Value        |
+-------------------------------+--------------+
| server_audit_active           | ON           |
| server_audit_current_log      | [SYSLOG]     |
| server_audit_last_error       |              |
| server_audit_writes_failed    | 0            |
+-------------------------------+--------------+
4 rows in set (0.00 sec)

Restart rsyslog to make sure it is running:

[root@centos1 log]# service rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]

Now all MariaDB connect and query actions are written to the audit log via syslog.

[root@centos1 log]# tail -f /var/log/messages
Sep 21 00:07:07 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,10,QUERY,,'set global server_audit_logging=on',0
Sep 21 00:07:11 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,11,QUERY,,'show status like \'%audit%\'',0
Sep 21 00:07:21 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,12,QUERY,,'show variables like \'%audit%\'',0
Sep 21 00:10:06 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,13,QUERY,,'set global server_audit_events=\'CONNECT,QUERY\'',0
Sep 21 00:13:09 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,14,QUERY,,'SELECT DATABASE()',0
Sep 21 00:13:09 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,16,QUERY,test,'show databases',0
Sep 21 00:13:09 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,17,QUERY,test,'show tables',0
Sep 21 00:13:14 centos1 mysql-server_auditing: centos1.localdomain,root,localhost,1,18,QUERY,test,'show tables',0

The log entries go to /var/log/messages because of this line in the /etc/rsyslog.conf config file:

*.info;mail.none;authpriv.none;cron.none /var/log/messages

To log to a different file, you just add a line to the /etc/rsyslog.conf config file:

if $programname == 'mysql-server_auditing' then /var/log/mariadbaudit1
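With this legacy rsyslog syntax, the message will still also be written to /var/log/messages by the *.info rule above. To stop further processing once the dedicated file has been written, you can add a discard action on the following line (standard rsyslog v5 syntax, where & repeats the last filter and ~ discards the message):

```
if $programname == 'mysql-server_auditing' then /var/log/mariadbaudit1
& ~
```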

Using the MariaDB Audit Plugin with the syslog system is a good choice for security officers and sysadmins.

 

By going to the download section of the SkySQL website, some users have noticed the "MariaDB Audit Plugin". This auditing feature for MySQL has been requested by more and more customers. Legal constraints make it mandatory for more and more companies to keep logging information about database access and activity.

It is very important for the MySQL community to have an open source audit plugin available. The MariaDB team has always stuck to the principle of keeping MySQL 100% open source and has developed the MariaDB Audit Plugin according to these principles. The MariaDB Audit Plugin has been developed using the standard MySQL Audit Plugin API. Being based on a standard API makes it run both with MariaDB and with Oracle MySQL Server. The MariaDB Audit Plugin also has some unique features only available with MariaDB.

To develop this plugin we stuck to the principle of listening to our big customers to define the specifications. This is the first version and more improvements will come. We will be pleased to hear your feedback; all bug reports and criticism are welcome. The MariaDB Audit Plugin already covers the main requests that have been expressed, but we have more to come and your improvement requests are welcome.

The purpose of the MariaDB Audit Plugin is to log the server's activity. Who connected to the server, what queries ran and what tables were touched are stored in a rotating log file. The MariaDB Audit Plugin also supports output to the widely used syslogd system, and it offers the option to include or exclude users from auditing.

Oracle has released an audit plugin available through its MySQL Enterprise Server version. This makes an active subscription mandatory, and the version you run is then a fully closed source server under a commercial license. In that case you cannot rely on third-party support like SkySQL or Percona.

The MariaDB Audit Plugin is delivered as a very light shared library that can be very easily downloaded and installed on any MariaDB/MySQL server.

Where to download the MariaDB Audit Plugin?
http://www.skysql.com/downloads/mariadb-audit-plugin-beta

Source Code of the MariaDB Audit Plugin :
https://code.launchpad.net/~maria-captains/maria/5.5-noga-hf

Pointers:
MariaDB Audit Plugin Getting started guide
MariaDB Audit Plugin documentation

My blog post on other auditing techniques
Oracle MySQL Audit Plugin
Oracle MySQL Audit Plugin doc
McAfee Audit Plugin
McAfee Audit Plugin doc

 

Engine condition pushdown is a MySQL internal mechanism intended to avoid sending non-matching rows from the storage engine to the SQL layer. This is very important when the storage engine involves traffic over the network.
This mechanism was initially created to optimize MySQL Cluster (NDB) behavior.
For the NDB storage engine it avoids a lot of round trips over the network: part of the filtering is done on the remote data node. Without engine condition pushdown, the filtering is done by the SQL layer after the rows have been sent by the storage engine.

The MariaDB CONNECT storage engine implements various table types. Some of these table types correspond to remote access; this is the case for the ODBC and MYSQL table types, which can be considered a more powerful evolution of the FEDERATED storage engine.
In both cases engine condition pushdown gives a great benefit by avoiding network round trips.

The conditions are trapped by the storage engine handler through this API implementation:
const COND *ha_connect::cond_push(const COND *cond)

How to use engine condition pushdown with the MariaDB Connect Storage Engine.

The "engine condition pushdown" mechanism should not be confused with the "index condition pushdown" optimization introduced in MariaDB 5.5 and MySQL 5.6. The index condition pushdown was introduced to use as much as possible the index to filter rows.

As Stewart mentioned in his post where-are-they-now-mysql-storage-engines : "Federated It’s still there… but is effectively unmaintained and dead. There’s even FederatedX in MariaDB which is an improvement, but still, the MySQL server really doesn’t lend itself kindly to this type of engine… it’s always been an oddity only suitable for very specific tasks."

The CONNECT storage engine lifts the FEDERATED(X) limitations. It is no longer restricted to MySQL tables.
It gives MySQL access to local or remote MySQL tables: in that case you can use a subset of the columns, reorder them, and have the LIMIT clause applied directly on the target.
But you now also have the capability to access an ODBC data source.
This data source can be a local one accessed through a DSN: Excel, Access, Firebird... In that case the multiple-table ODBC concept applies: an ODBC table can be physically represented by multiple files. For instance, with Excel or Access tables you can view a set of monthly files as a single table.
It can also be any remote data source for which there is an ODBC driver: Oracle, SQL Server, DB2 ...

What is nice is that these MYSQL or ODBC type CONNECT tables can be mixed with the CONNECT TBL (Table List) type. This allows queries to be distributed across multiple remote or local servers.
This is a very powerful extension of the concept of the MERGE table: the underlying tables can be of any type, and a subset of columns can be used.
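As a hedged sketch of mixing these types (table, host and credential names are invented for illustration; the exact option names are documented in the MariaDB knowledge base), a MYSQL-type CONNECT table pointing at a remote server can be combined with a TBL table listing several of them:

```sql
-- Remote table accessed through the MYSQL table type
-- (hypothetical host db1 and credentials).
CREATE TABLE sales1 ENGINE=CONNECT TABLE_TYPE=MYSQL
  CONNECTION='mysql://user:pwd@db1/test/sales';

-- Table list combining local and remote CONNECT tables.
CREATE TABLE all_sales ENGINE=CONNECT TABLE_TYPE=TBL
  TABLE_LIST='sales1,sales2';
```

A SELECT on all_sales then transparently queries every listed table, local or remote.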

SkySQL and MariaDB are very pleased to announce a new baby in the storage engine family: the CONNECT storage engine.
I have had the pleasure of working with Olivier Bertrand, the creator of the CONNECT storage engine. Olivier has over 50 years of experience in database development as an ex-IBM database researcher. He was very interested in adding the result of his experience around BI data access to MariaDB, and wanted a more versatile way to access external data sources in various formats. The CONNECT storage engine enables MariaDB to directly use external data as if it were tables in the server, without being obliged to actually load it into the physical database.
Olivier has been welcomed by the MariaDB team and has been working with them to integrate his work, build test cases and fix remaining bugs. The CONNECT storage engine is 100% open source and released under the GPL license. 

The integration of CONNECT into MariaDB is yet another example of the innovation going on around the MariaDB server. The "MariaDB CONNECT storage engine" adds one more innovative differentiation to the MariaDB ecosystem (Cassandra, LevelDB, TokuDB and Spider being other nice examples). All these extra features are available under the GPL and come with the standard MariaDB release.

So yes, the storage engine arena is very lively. Stewart was worried in his post Where are they now: MySQL Storage Engines. I think storage engines are where innovation is taking place to turn MariaDB into a database platform.

The CONNECT storage engine gives you access to various external data sources, either local or remote. This includes files in various formats such as DBase, CSV, DOS, FMT and XML.
It also includes access to remote MySQL tables and almost any other data source through an ODBC-supported target.

You can also build a table based on a list of tables (the equivalent of the MERGE storage engine). These table lists (TBL type) can be built on any type of table, including any CONNECT table. This means you can build a table whose data is distributed across multiple remote servers.

You can also define a table with per-column storage (VEC type). This can greatly benefit tables with many columns where only a limited subset is queried.
The CONNECT storage engine is a major enhancement of what was previously offered by a combination of the CSV, MERGE and FEDERATED storage engines.

This storage engine is smart :
it can auto discover the structure of your data based on the metadata contained within the file.
it can index your data, optimizing for read only access.
it can work on compressed data.
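As a hedged illustration of the file access and auto-discovery (the file path is hypothetical; the option names follow the CONNECT documentation), a CSV file with a header line can be exposed as a table without even declaring the columns:

```sql
-- CONNECT discovers the column layout from the file's header line.
CREATE TABLE people ENGINE=CONNECT TABLE_TYPE=CSV
  FILE_NAME='/var/data/people.csv' HEADER=1;
```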

It is a storage engine built into MariaDB 10, which means it can be used as soon as the server is up and running. Just start creating tables with the "engine=connect" option and specify the files or remote tables you want to access. This is documented with various examples.
As with any storage engine, it is built using the MySQL plugin API. It is a smart storage engine that uses condition pushdown to filter at the target data source, which can greatly increase performance by limiting data transfer. It also implements the LIMIT clause of SELECT. This storage engine has been designed with Business Intelligence (BI) queries in mind; it is not designed for Online Transaction Processing (OLTP) activity. This choice allows the code to be optimized by leaving out a lot of costly internal mechanisms needed only for OLTP. As with any storage engine, your external data can be joined with your local data.

When should you use the CONNECT storage engine? When you have data in various file formats, or remotely available, or both, and you would like to query this data. The main benefit of this approach to BI is that there is no need to go through a lengthy ETL (Extract, Transform, Load) process: the data can be accessed directly in place.

If you want to talk about it, you are welcome at the free SkySQL/MariaDB event immediately following Percona Live 2013: "MySQL & Cloud Database Solutions Day".
To try it
CONNECT storage Engine documentation in MariaDB knowledge base
SkySQL supports the CONNECT storage engine
Get the white paper

In recent years a lot of emphasis has been put on the InnoDB storage engine. This is a great thing: it has turned InnoDB from a very limited transactional storage engine into a very powerful piece of the MySQL architecture. InnoDB is now scalable to many cores. It is also very resilient and supports advanced features (compression, buffer pool saving ... the list is long). No doubt this was the direction to go, and there are still a lot of improvements to come, which is definitely needed for enterprise-class applications.

But other parts of the server also need attention and improvement. In the early days of the MySQL plugin architecture some nice concepts were implemented as storage engines: the CSV storage engine, the FEDERATED storage engine, the MERGE storage engine, ARCHIVE ...
In my personal opinion these implementations are very useful but have not been much improved since their first implementation. They can be considered excellent proofs of concept that should have been pushed much further.

If we look at them:

The MERGE storage engine implemented the concept of a table composed of multiple underlying tables. The first constraint is that all the tables have to be in the MyISAM format. With all the improvements InnoDB has gone through regarding performance, including read-only workloads and complex query execution, this is frustrating. The second big limitation is that all the tables need to be strictly identical in structure. The third constraint is that there is no parallel execution of the query; this is not a big surprise, as there is currently no parallelism at all in MySQL except in marginal areas (slave apply, InnoDB full-text indexing). One other big limitation of the MERGE storage engine is that it cannot handle remotely distributed tables.

The FEDERATED storage engine implemented the concept of a MySQL table located on a remote server. The local and remote tables must have an identical structure. The FEDERATED storage engine was reengineered into the FederatedX storage engine, which is included in MariaDB. This new design was meant to eventually allow heterogeneous database access, for example through an ODBC implementation; this never happened. The Federated(X) storage engine retains a lot of limitations: when you issue a SELECT through a federated table, a SELECT * is done to access the remote data; there is no condition pushdown implementation; it works only with remote MySQL databases; and the LIMIT clause of the SQL statement is not pushed to the underlying system. These limitations make the FEDERATED storage engine of limited use. The Oracle MySQL version of FEDERATED is old and does not implement the FederatedX improvements.

The CSV storage engine implemented the concept of a table representing a CSV file external to the database. This was a great idea, as CSV is a very common file format. Unfortunately the CSV storage engine is very basic: it does not implement the LIMIT clause in SELECT, there is no index associated with the file, and condition pushdown is not implemented either.

Auditing of a MySQL server's activity is requested more and more often, as compliance rules become stricter for companies and MySQL is used in more and more critical areas.

What are the various logging technologies available on a MySQL server that could be used for auditing?

  • The error log
  • The slow query log
  • The binary log
  • Custom made triggers
  • Using MySQL Proxy
  • The general  log

Using the error log:

The error log contains information indicating when mysqld was started and stopped, as well as any critical errors.
The log_warnings system variable can be used to control warning logging to the error log. If enabled, aborted connections and access-denied errors for new connection attempts are written to the error log.

mysql> set global log_warnings=2;

If we generate a connection with a wrong password, we get this in the error log:

$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pxxx2
 
$ tail -f serge.err
...
130403 15:24:19 [Warning] Access denied for user 'root'@'localhost' (using password: YES)

Unfortunately the error log does not contain any information about the queries run against the database, so it cannot be used as the basis of an auditing solution.

Using the slow query log:

Can we use the MySQL slow query log as an audit solution? It gives some information, but unfortunately not enough to be considered an audit solution.
The slow query log consists of SQL statements that took more than long_query_time seconds to execute. long_query_time can be set to 0 to log everything. Catching all queries through the slow query log can be costly, as timing information is also collected and written.

mysql> set global slow_query_log=on;
mysql> set global long_query_time=0;
mysql> set global log_queries_not_using_indexes=on;

For a given query we get in the log:

Time Id Command Argument
# Time: 130403 15:51:43
# User@Host: root[root] @ localhost [127.0.0.1] Id: 1
# Query_time: 0.591516 Lock_time: 0.043317 Rows_sent: 100 Rows_examined: 29214
use test;
SET timestamp=1364997103;
select * from (SELECT title, post_url,date(post_date) dt, author, MATCH(title,extract) AGAINST (' backup restore' in boolean mode) as Relevance FROM post where post_date > '2010-01-01'ORDER BY Relevance DESC limit 100) xx order by dt desc;

All statements are logged (SELECT, INSERT, UPDATE, DELETE), but connections are missing and we do not get a trace of failed requests. There is no notion of filtering on users/schemas/tables; it is an all-or-nothing mechanism. The impact on performance can be big.

Using the binary log:

The binary log contains only queries that have modified data through committed transactions, so a suspicious SELECT will not appear there. It can however (if no other audit information is available) be used to discover when a modification was made: the binary log can be read and filtered to find the modification.

Using triggers:

Another solution sometimes used is triggers on critical tables. This solution is painful to maintain. As with the binary log, it only covers data-modification tracking: there is no trigger on connection or on SELECT. Moreover, as MySQL allows only one trigger per table for a given event and timing, this can become complicated for applications requiring triggers for other purposes.

Using MySQL Proxy:

MySQL Proxy (http://dev.mysql.com/doc/refman/5.6/en/mysql-proxy.html) raised a lot of expectations when it first appeared. It can effectively be used to implement auditing and policy enforcement. Unfortunately the MySQL Proxy product has remained in alpha status for many years. Because of this alpha status and the total lack of visibility, it cannot be part of a long-term production solution.

Using the general log:

The general log is a logging capability of the MySQL server that logs all queries received by the server. As it logs all queries, it gives more information than the preceding techniques.

Let us try it:

[sfrezefo@serge ~]$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pmanager1
...
mysql> use test;
mysql> show tables;
mysql> create table titi(col1 int);
mysql> insert into titi values(3);
mysql> commit;
mysql> exit;

Let's do a failing connection:

[sfrezefo@serge ~]$ mysql --protocol=TCP -hlocalhost -u root -P3309 -pmanager2
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

This is what we get in the general log:

Time Id Command Argument
130327 10:51:34 2 Connect root@localhost on
                2 Query select @@version_comment limit 1
130327 10:55:03 2 Quit
130327 10:55:07 3 Connect root@localhost on
                3 Query select @@version_comment limit 1
130327 10:55:15 3 Query SELECT DATABASE()
                3 Init DB test
                3 Query show databases
                3 Query show tables
                3 Field List planetpost
130327 10:55:26 3 Query show tables
130327 10:55:50 3 Query create table titi(col1 int)
130327 10:56:08 3 Query insert into titi values(3)
130327 10:56:13 3 Query commit
130327 13:33:41 3 Quit
130327 13:33:47 4 Connect root@localhost on
                4 Connect Access denied for user 'root'@'localhost' (using password: YES)

What is missing? We do get the successful and failed connections, and subsequent queries are linked to the opened sessions. We get a trace of failed requests, except when they are syntactically invalid. But there is no notion of filtering on users/schemas/tables; it is an all-or-nothing mechanism. The impact on performance can be big.
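One mitigation worth noting: the general log can also be written to a table (mysql.general_log) instead of a file, which at least lets you filter the captured entries with SQL after the fact:

```
mysql> SET GLOBAL log_output='TABLE';
mysql> SET GLOBAL general_log=ON;
mysql> SELECT event_time, user_host, argument
    ->   FROM mysql.general_log
    ->  WHERE argument LIKE '%titi%';
```

This does not change what is captured, only how conveniently it can be queried.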

Conclusion:

None of the techniques presented carries enough information or flexibility to be considered an auditing solution.

 

As always with new technology, there are different approaches to adoption: you can try the bleeding-edge features, or start with a very standard configuration.
My personal advice to new users is to start with the most basic configuration.

This allows you to get familiar with the fundamentals:
- how to install
- how to operate
- how to monitor

For MariaDB Galera Cluster the most basic configuration is a 3-node cluster. You can choose to use it:

as an HA solution
Galera Cluster is currently the easiest way to solve the HA problem. When you think HA, think Galera Cluster: it is so much simpler. Failover is totally transparent and there is nothing to do, unlike with standard replication. If you have a load balancer in front, you just have to remove the failed node from the configuration and you are done.

as a scale-out solution
The usual MySQL scale-out architecture is based on a master/slaves setup. That solution pushes onto the application the choice of where to send each read (the master or the most up-to-date slave…). Scaling out reads with Galera Cluster's synchronous replication is much simpler: nothing needs to be done at the application layer.
You have synchronous, up-to-date data available on all nodes, so you do not risk reading stale data when replication lags, and the application does not need tricks such as reading where you write to get correct data.

Compared to usual HA solutions or to MySQL Cluster, Galera Cluster is very simple to set up and operate.
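A minimal per-node configuration sketch for a 3-node cluster (the IP addresses and cluster name are examples, and the wsrep_provider path depends on your distribution):

```ini
[mysqld]
# Galera requires row-based replication and interleaved autoinc locking.
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_cluster
wsrep_cluster_address=gcomm://192.168.0.1,192.168.0.2,192.168.0.3
wsrep_node_address=192.168.0.1
```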
Getting Started with MariaDB Galera Cluster

Then of course the next step is to push Galera usage a little bit further. One main area is of course write scalability.
A few benchmarks have been produced, but we do not yet have much experience, so you have to be more careful. With write/write configurations you have to watch out for hot spots in the database: they can lead to deadlocks, and the behavior has to be correctly understood.

A good distribution to download for your testing is: MariaDB Galera Cluster 5.5.29 Stable

Some useful pointers to understand various behaviours of Galera Cluster.

 

------- upcoming events :

Tomorrow Seppo Jaakola, Codership, will present Galera Cluster for MySQL & MariaDB at the Meetup SkySQL & MariaDB - Paris

Henrik Ingo, Max Mether and Colin Charles will present "MariaDB Galera Cluster Overview" at the free MySQL & Cloud Solutions Day taking place in Santa Clara the 26th of April. You can register for free.

 

Who has not experienced a fast-growing system tablespace? This is related to InnoDB undos (rollback segments).

In MySQL 5.5, and by default in 5.6, undos are located in the InnoDB system tablespace. Besides the space consumption issue, the second penalty is that access to the undos generates a lot of random I/O on the system tablespace.

Once the system tablespace has grown too much there is no way to reclaim the space; the only solution is to recreate the database. This is very problematic for online 24/7 databases. It happens most of the time because of overgrowing undos. Under normal circumstances undos are cleaned up by the InnoDB purge mechanism, and in MySQL 5.6 the multi-threaded purge has greatly improved this cleanup process. But even with this mechanism, undo growth can still get out of control. The main reasons are long-running transactions and very slow queries: both oblige InnoDB to keep undos for consistent reads.
A new feature introduced in 5.6 allows the undos to be kept out of the system tablespace. The undo tablespace can be split into multiple files located in a user-defined directory.

One of the advantages of this separate directory for undos is the ability to move the random undo-related I/O to a different place. This allows, for example, placing the undos on SSD for better random I/O.

This feature is available through these 3 new configuration variables:
innodb_undo_directory – where on disk to place the undos
innodb_undo_tablespaces – the number of undo tablespaces; this number must be set at database creation
innodb_undo_logs – the number of rollback segments. The number of rollback segments physically present in the system never decreases.
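A sketch of what this looks like in the configuration file for a freshly created instance (the values and path are examples; as noted, innodb_undo_tablespaces only takes effect if set before the database is created):

```ini
[mysqld]
# Keep undos out of the system tablespace, e.g. on an SSD volume.
innodb_undo_directory=/ssd/mysql-undo
innodb_undo_tablespaces=4
innodb_undo_logs=128
```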

If you have a running MySQL 5.6 instance and you shut it down and try to change innodb_undo_tablespaces to 4, as I did, you will get:

130322 22:33:57 InnoDB: Expected to open 4 undo tablespaces but was able
130322 22:33:57 InnoDB: to find only 0 undo tablespaces.
130322 22:33:57 InnoDB: Set the innodb_undo_tablespaces parameter to the
130322 22:33:57 InnoDB: correct value and retry. Suggested value is 0
130322 22:33:57 [ERROR] Plugin 'InnoDB' init function returned error.
130322 22:33:57 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

So yes, MySQL 5.6 innodb_undo_tablespaces is very useful, but only if this choice was made at database creation. The second point to note is that this feature unfortunately does not help reclaim space. So yes, this feature is great, but it is only the first step toward correct management of undo space.

innodb_undo_directory and innodb_undo_tablespaces are part of the numerous MySQL 5.6 server configuration variables. MySQL 5.6 is a major release with a lot of obsoleted parameters, new parameters, and parameters with different default values. It is very important for users moving to MySQL 5.6 to understand all these setting changes.
Sheeri Cabral has tracked deprecated variables. Peter Zaitsev has done a complete comparison of the default value differences between MySQL 5.5 and 5.6.

-------------------- Useful Pointers -------------------------------

Reasons-for-run-away-main-innodb-tablespace June 10, 2010 Peter Zaitsev
Purge-thread-spiral-of-death June 10, 2010 Peter Zaitsev
Being-highly-irresponsible-or-howto-dos-nearly-all-rdbmss   Stewart Smith
Finding-and-killing-long-running-innodb-transactions-with-events  
Tracking-long-running-transactions APRIL 6, 2011 Yoshinori Matsunobu
Kill-mysql-procs 2012-12-27  rugmonster.org
How-to-debug-long-running-transactions-in-mysql March 8, 2011 by Baron Schwartz
Killing-idle-transactions-to-increase-database-uptime  August 13th, 2012 by Xaprb