

MariaDB and MySQL storage engines: an overview

by Federico Razzoli | Jul 26, 2023 | MySQL InnoDB, MariaDB Storage Engines


MySQL was the first DBMS to introduce the concept of storage engines, in the early 2000s; it was one of its main distinguishing features. Later, MariaDB extended the storage engine API and included some storage engines maintained by third parties as part of its official distribution. This means that support for these third-party engines is offered by MariaDB itself.

The idea of storage engines was later adopted by other databases. For example, MongoDB has supported them for a long time, and PostgreSQL 12 introduced basic support for pluggable storage (table access methods).

Here we’ll discuss which storage engines exist in MariaDB and MySQL. The two products don’t support exactly the same storage engines, so this page might also be useful to evaluate whether a project should use MariaDB or MySQL.

What storage engines are

The idea is simple: MySQL doesn’t know how to physically read data, maintain indexes, cache data and indexes, and so on. It delegates these operations to a special type of plugin: storage engines.
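The available engines, and the per-table choice, can be inspected with ordinary SQL. A minimal sketch (the table name is illustrative; the list returned by SHOW ENGINES varies by server and version):

```sql
-- List the storage engines available in this server
SHOW ENGINES;

-- Choose an engine explicitly when creating a table
CREATE TABLE t1 (
    id INT PRIMARY KEY
) ENGINE = InnoDB;

-- Check which engine each existing table uses
SELECT TABLE_NAME, ENGINE
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE();
```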

This gives storage engines a lot of flexibility. When MySQL says “please, Mr. Storage Engine, insert this row”, the storage engine can do literally anything, as long as it reports success or an error. For example, InnoDB starts a transaction, even though it was built when MySQL had absolutely no knowledge of transactions; CONNECT could write to a file, or send a query to a remote server; BLACKHOLE does… nothing. Really!

MariaDB started its journey as a fork of MySQL 5.1, so it had support for storage engines from the very beginning. It also includes by default some storage engines that are produced by third parties, and therefore are not included in MySQL or supported by Oracle.

Let’s see which storage engines exist for MariaDB and MySQL!

Main storage engines

We recommend using one of these storage engines as the default. Once you pick a default, your tables should normally be built with it; only in special cases should you use one of the other engines.

Don’t use more than one storage engine from this section. InnoDB is usually the best choice.

InnoDB / XtraDB

InnoDB was initially developed to implement features that were missing in MySQL, particularly transactions and foreign keys. Nowadays it’s the default choice. Users who don’t know about storage engines just use InnoDB, and there are great reasons for that! InnoDB is the only general-purpose storage engine in vanilla MySQL to support transactions, and its performance characteristics make it the best choice for the general case.
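A minimal sketch of the two features InnoDB brought to MySQL, transactions and foreign keys (table names are illustrative):

```sql
CREATE TABLE customer (
    id INT PRIMARY KEY
) ENGINE = InnoDB;

CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT NOT NULL,
    -- Foreign keys: orders.customer_id must reference an existing customer
    FOREIGN KEY (customer_id) REFERENCES customer (id)
) ENGINE = InnoDB;

-- Transactions: either both rows are written, or neither is
START TRANSACTION;
INSERT INTO customer VALUES (1);
INSERT INTO orders VALUES (100, 1);
COMMIT;
```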

The term XtraDB is only present in Percona Server and very old MariaDB versions. XtraDB is a Percona fork of InnoDB.

See our InnoDB articles.

RocksDB / MyRocks (MariaDB, Percona Server)

Originally created by Facebook, this engine provides an interface between MySQL and the RocksDB technology. In other words, regular SQL and transactions can be used on top of RocksDB. MyRocks is shipped with MariaDB (under the name RocksDB) and with Percona Server; it is not shipped with MySQL. Both the MariaDB and Percona teams independently modified RocksDB/MyRocks.

The main purpose of RocksDB/MyRocks is to provide an alternative to InnoDB that uses fewer resources and compresses data better. The reason why it is not widely used is that… most companies are not Facebook. You’re probably not very concerned about the lifespan of your SSD devices. But if resource usage is a problem for you, and your configuration and queries are already well optimised, RocksDB/MyRocks is one of the solutions you might want to consider.

Initially MariaDB called the engine MariaRocks, and this name can be found in some articles or videos. MariaRocks is just an older name for MyRocks in MariaDB.

ColumnStore (MariaDB)

MariaDB can be used as a columnar, distributed database for OLAP and data warehousing, by using the ColumnStore storage engine. ColumnStore can run on a single node, or as a cluster. A MariaDB ColumnStore cluster has a primary node that accepts writes and secondary nodes that replicate data. Queries are broken into jobs distributed over the whole cluster and, within each node, over a pool of threads.

Despite its special architecture, ColumnStore can be used in combination with other storage engines. So we can join columnar data with purely relational tables, such as InnoDB tables.
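As a sketch, a ColumnStore fact table can be joined to a regular InnoDB table in a single query (table and column names are illustrative):

```sql
-- Columnar table for analytical data
CREATE TABLE sales_facts (
    product_id INT,
    sold_at DATE,
    amount DECIMAL(10, 2)
) ENGINE = ColumnStore;

-- Regular row-based table
CREATE TABLE product (
    id INT PRIMARY KEY,
    name VARCHAR(100)
) ENGINE = InnoDB;

-- Join columnar and relational data in one query
SELECT p.name, SUM(f.amount) AS total
    FROM sales_facts f
    JOIN product p ON p.id = f.product_id
    GROUP BY p.name;
```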

The main reasons to use MariaDB ColumnStore instead of specialised columnar databases are actually related to its tight integration with MariaDB. The ability to replicate from regular MariaDB nodes, or to import data via CONNECT or SPIDER, eliminates the need for complex pipelines or ETL processes. The ability to join ColumnStore tables with other tables makes analytics flexible and agile.

ColumnStore derives from an older technology, InfiniDB, which was a MySQL fork. MariaDB acquired InfiniDB and initially used it to produce a separate MariaDB edition. With version 10.5, ColumnStore became part of the regular MariaDB distribution.

See our MariaDB ColumnStore articles.

NDB Cluster (MySQL)

NDB, or NDBCLUSTER, is a storage engine that makes it possible to form an in-memory database cluster that also persists data on disk. It shards data, and each fragment can have multiple replicas for redundancy. Its data model is key/value, though it has an SQL interface. In an NDB cluster, MySQL is only one part of the overall architecture. The components of a cluster are:

  • SQL nodes: an SQL server with the NDB engine;
  • Data (storage) nodes, each holding one or more fragment replicas;
  • Management nodes that orchestrate the cluster.

NDB only runs on a modified version of MySQL: a regular MySQL binary can’t be used with NDB. Each of these node types is managed with utilities that are distributed with NDB.

NDB has many limitations compared to regular MySQL.


NDB has an open source fork, started by its original creator, called RonDB; it’s part of Hopsworks. It regularly incorporates the latest changes from MySQL Cluster, as well as some unique improvements.

Other useful storage engines

These storage engines are not for general use, but they are extremely useful in some particular cases.

MEMORY

MEMORY writes rows in… memory. When MariaDB/MySQL is restarted or crashes, the contents of these tables are lost. Another limitation is that BLOB and TEXT columns are not supported, and other variable-length types are stored inefficiently. This limitation was removed from Percona Server, and might be removed from MariaDB at some point (see MDEV-19).

MEMORY is mainly useful as a cache.

Some people argue that better technologies exist for caching, but different levels of caching can coexist. Using something like Redis or Memcached to store the results of a query makes sense. But you might have a query that generates results that are often joined to other tables, or used as a subquery. In that case, caching those results in a MEMORY table could be a good idea.
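A sketch of this kind of intermediate cache (table and column names are illustrative):

```sql
-- Materialise an expensive intermediate result in memory
CREATE TABLE top_customers (
    customer_id INT PRIMARY KEY,
    total DECIMAL(12, 2)
) ENGINE = MEMORY;

INSERT INTO top_customers
    SELECT customer_id, SUM(amount)
        FROM orders
        GROUP BY customer_id
        ORDER BY SUM(amount) DESC
        LIMIT 100;

-- Reuse the cached result in later joins
SELECT o.*
    FROM orders o
    JOIN top_customers t ON t.customer_id = o.customer_id;
```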

CONNECT (MariaDB)

Generally speaking, CONNECT allows us to read and write to remote data sources as if they were local tables.

In particular, it can access these classes of data sources:

  • Remote databases
    • MariaDB/MySQL native protocol
    • ODBC
    • JDBC
    • MongoDB
  • Web APIs
  • Files (CSV, JSON, XML, HTML, custom logs…)
  • Special sources (directory contents, MAC address, Windows WMI)
  • Query transformation (pivot, raw data to summary, summary to raw data)
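For example, an existing CSV file can be exposed as a table. A sketch, with an illustrative file name (the exact option syntax is documented in the CONNECT manual):

```sql
-- Map an existing CSV file to a table
CREATE TABLE visits (
    day DATE,
    hits INT
) ENGINE = CONNECT
  TABLE_TYPE = CSV
  FILE_NAME = '/var/data/visits.csv'
  SEP_CHAR = ',';

-- Query the file with regular SQL
SELECT * FROM visits WHERE hits > 1000;
```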

See our CONNECT articles.

SPIDER (MariaDB)

SPIDER is a storage engine that implements data sharding, shipped with MariaDB. It was originally built for MySQL, and its website used to distribute MySQL builds with SPIDER. I can’t find the SPIDER website anymore, so I suppose that nowadays it is only available in MariaDB.

If used in the most basic way, a SPIDER table is linked to a remote MariaDB or MySQL table. But SPIDER supports partitioning, and it allows linking each partition to a different table, possibly on different servers. In this way, we can have identical tables on different servers, each containing a different set of data.

Also, currently MariaDB doesn’t parallelise queries internally, not even for joins or partitioned tables. However, SPIDER allows creating a partitioned table that points to multiple local tables, or to the partitions of a local partitioned table. When we run a query against the SPIDER table, it will be parallelised: for every partition that we read, SPIDER starts a separate connection to localhost.
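A sharding sketch, where each partition points to a table on a different backend server (server names, credentials and the sharding key are illustrative):

```sql
-- Register the backend servers
CREATE SERVER backend1 FOREIGN DATA WRAPPER mysql
    OPTIONS (HOST '10.0.0.1', PORT 3306, USER 'spider', PASSWORD 'secret', DATABASE 'shop');
CREATE SERVER backend2 FOREIGN DATA WRAPPER mysql
    OPTIONS (HOST '10.0.0.2', PORT 3306, USER 'spider', PASSWORD 'secret', DATABASE 'shop');

-- A SPIDER table whose partitions live on different servers
CREATE TABLE orders (
    id INT PRIMARY KEY,
    amount DECIMAL(10, 2)
) ENGINE = SPIDER
PARTITION BY HASH (id) (
    PARTITION p1 COMMENT = 'srv "backend1", table "orders"',
    PARTITION p2 COMMENT = 'srv "backend2", table "orders"'
);
```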

S3 (MariaDB)

When we convert a regular table to an S3 table, its data is sent to Amazon S3, or to any storage service that implements the S3 protocol. This is one of the easiest ways to archive historical data and, as a plus, it allows running queries on the archived data. Querying S3 data is much slower than querying local data, but this is meant to be an occasional operation. Normally, to query data that was sent to S3, you would need to download it and insert it into a database first; querying the S3 storage engine is incomparably faster and simpler.
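Archiving a table is a single statement. A sketch, with an illustrative table name (the bucket and credentials are configured separately, via system variables such as s3_bucket; check the MariaDB docs for details):

```sql
-- Archive a historical table to S3: the table becomes read-only
ALTER TABLE sales_2019 ENGINE = S3;

-- It can still be queried, though more slowly than a local table
SELECT COUNT(*) FROM sales_2019;
```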

As mentioned, Amazon S3 is not the only technology to implement the S3 protocol. An open source alternative is MinIO.

ARCHIVE

ARCHIVE is the storage engine that provides the best data compression rate for most types of data. Note that it has many limitations: for example, it doesn’t support transactions, tables are append-only, and its support for indexes is very limited. This storage engine is meant to contain data that is normally only read, and occasionally appended in big batches.

Also, note that ARCHIVE tables don’t have a size limit. But this shouldn’t be a reason to use ARCHIVE: InnoDB tables have a default maximum size of 64 TB and, depending on the page size, can grow up to 256 TB. With files of this size, some operations are a nightmare anyway.
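A sketch of a typical append-only use (table and columns are illustrative; ARCHIVE only allows an index on the AUTO_INCREMENT column):

```sql
CREATE TABLE access_log (
    id INT AUTO_INCREMENT,
    logged_at DATETIME,
    message TEXT,
    KEY (id)  -- the only index ARCHIVE allows
) ENGINE = ARCHIVE;

-- Rows can be appended, but never updated or deleted
INSERT INTO access_log (logged_at, message)
    VALUES (NOW(), 'batch import started');
```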

SEQUENCE (MariaDB)

SEQUENCE tables are virtual tables. We can’t create them, drop them, alter them, or write into them. We can only query them, as if they existed. They return a numerical sequence. The table name that we mention in the query determines the boundaries of the sequence and (optionally) the increment. For example:

  • SELECT seq FROM seq_10_to_15;
    returns: 10, 11, 12, 13, 14, 15.
  • SELECT seq FROM seq_15_to_10;
    returns: 15, 14, 13, 12, 11, 10.
  • SELECT seq FROM seq_1_to_10_step_2;
    returns 1, 3, 5, 7, 9.

MariaDB also supports sequences, just like most other DBMSs. The difference is that a sequence allows us to advance the current value and read it, usually to assign it to a primary key; whereas a SEQUENCE table allows us to read an entire numerical sequence with a single query.

The SEQUENCE engine is mostly useful to generate test data (not necessarily numerical data), and insert it into a regular table.
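As a sketch, generating test rows from a SEQUENCE table and inserting them into a regular table (the target table and expressions are illustrative):

```sql
-- Insert 1000 test rows into a regular table
INSERT INTO test_data (id, label)
    SELECT seq, CONCAT('row-', seq)
        FROM seq_1_to_1000;
```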

OQGRAPH (MariaDB)

OQGRAPH builds virtual tables that are based on regular tables (for example, InnoDB tables) and allows us to see the data as graphs. Some people think that this engine was made obsolete when MariaDB added support for recursive common table expressions (the WITH RECURSIVE ... SELECT syntax), but it actually addresses different problems. In particular, OQGRAPH can find the shortest path between two nodes using two different algorithms. Arcs between nodes can also have a weight, which is useful when representing, for example, geographical points at different distances from each other.

OQGRAPH should not be used in place of a graph database such as Neo4j. But introducing a new database technology in a company comes with costs and risks, so in some cases it’s not worth it. For example, if this kind of query is only needed for a nightly job, it is probably better to run it in MariaDB with OQGRAPH rather than introduce a graph database.

BLACKHOLE

A BLACKHOLE table is similar to the Linux /dev/null file: it is always empty. Trying to insert data into BLACKHOLE won’t return any error, but the table will remain empty.

However, INSERTs that target a BLACKHOLE table are written to the binary log, and are therefore replicated to the replicas (if any). So, if a table is of type BLACKHOLE on the master and of type InnoDB on the replicas, INSERTs run on the master will only insert rows on the replicas. Vice versa, if the table is InnoDB on the master and BLACKHOLE on the replicas, data is only written on the master.
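A sketch of the first setup, where rows are only stored on the replicas (table name is illustrative):

```sql
-- On the master: rows are discarded locally, but logged in the binary log
CREATE TABLE audit_trail (
    id INT PRIMARY KEY,
    event VARCHAR(200)
) ENGINE = BLACKHOLE;

-- On each replica, the same table is created with a real engine:
-- CREATE TABLE audit_trail (id INT PRIMARY KEY, event VARCHAR(200)) ENGINE = InnoDB;

INSERT INTO audit_trail VALUES (1, 'user logged in');
-- On the master this table stays empty; the replicas store the row
```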

Mroonga (MariaDB)

Groonga describes itself as an open source fulltext search engine and column store. Its main characteristics are:

  • Fast fulltext capabilities that support Chinese, Japanese and Korean. These languages are normally not supported by fulltext engines, because they don’t use a character to separate words.
  • Extended GIS capabilities that can be accessed using regular MariaDB syntax.
  • Groonga only runs on Linux.

Mroonga is the MariaDB/MySQL storage engine that embeds Groonga. It is distributed with MariaDB, but not with MySQL. Similarly, a PostgreSQL extension exists, and it’s called PGroonga.

Sphinx (MariaDB)

Sphinx is a fulltext search technology that was popular some years ago. It is simple but fast; it uses the MySQL protocol and an SQL dialect that is very similar to MySQL’s, supporting some typical features of relational databases. It offers much more power and much better performance than MySQL and MariaDB when it comes to fulltext searches.

MariaDB comes with a Sphinx storage engine that allows us to query tables from a remote Sphinx server as if they were local MariaDB tables. The main advantages of using this approach are:

  • Updating Sphinx from MariaDB is simpler;
  • The ability to join Sphinx indexes with MariaDB tables;
  • Simpler query caching if we use external cache technologies like Redis.

Aria (MariaDB)

Aria is similar to MyISAM, but it’s crash-safe. Just like MyISAM, it doesn’t support transactions. Furthermore, writing to an Aria table is slower than writing to, for example, InnoDB.

Aria is used for internal temporary tables, as explained below. I don’t recommend using it for other purposes.

See our Aria articles.

EXAMPLE

This is an example storage engine that does nothing. Those who are interested in developing a new storage engine can use EXAMPLE as a boilerplate. It is included in the source code, but not compiled into the binaries by default.

Legacy storage engines

These storage engines are old but still present in MariaDB or MySQL. There can still be reasons to use them, but rarely. If we still use them, we should probably consider migrating those tables to InnoDB or another storage engine.

MyISAM

This is the most ancient storage engine shipped with MariaDB and MySQL. It used to be the default storage engine before it was replaced by InnoDB.

MyISAM is not transactional. It has an index cache called the key buffer, and it relies on filesystem buffers for the data. When MariaDB crashes, MyISAM loses all data that has not yet been flushed to disk. If some data is only partially written at the time of the crash, the table becomes corrupted and the partially written data is lost.

MyISAM supports compression, and compresses data very well. However, compressed tables are read-only, and the DBMS needs to be stopped before compressing tables.

See our MyISAM articles.

MERGE

MERGE, or MRG_MyISAM, was developed to present several MyISAM tables as one. This was particularly useful years ago, when MyISAM tables could exceed the file size limit imposed by the operating system. This problem is extremely unlikely nowadays.

MariaDB and MySQL have supported partitioned tables and views for many years now. These features can of course be used with MyISAM tables, so MERGE is no longer needed.

CSV

The CSV storage engine treats CSV files as tables. When we create a new table, we can base it on an existing file, or let the engine create a new file automatically.

CSV has very important limitations: it doesn’t support indexes, and it can’t store NULL values.

In MariaDB, CSV was superseded by CONNECT, but it can still be useful with MySQL. Both MariaDB and MySQL use it for the slow log and the general log when log_output=TABLE (not recommended).

FEDERATED / FEDERATEDX

FEDERATED allows querying a remote MySQL or MariaDB table as if it were a local table. It is no longer actively maintained.

MariaDB replaced FEDERATED with FEDERATEDX, which is essentially a higher quality refactoring of the engine.

Using a remote table as if it were a local table is still useful, especially for one-time operations such as moving a table from one server to another. But I recommend doing it with SPIDER or CONNECT.

Storage engines for internal temporary tables

Internal temporary tables are created to materialise the intermediate results of a query when necessary, for example to perform a two-step sort. They can be in-memory or on-disk.

For in-memory temporary tables, MariaDB uses MEMORY. MySQL used MEMORY before version 8.0. Now it uses TEMPTABLE, a special storage engine that is not on this list because it can only be used for this purpose. Users can’t create TEMPTABLE tables.

For on-disk temporary tables, MariaDB uses Aria. It is theoretically possible to use MyISAM, as old MySQL versions did, but this requires recompiling MariaDB with aria_used_for_temp_tables=OFF. This could change at some point: see MDEV-6630. Modern MySQL versions use InnoDB for on-disk temporary tables.

A reason why vanilla MySQL introduced the TEMPTABLE engine is probably MEMORY’s lack of support for the BLOB and TEXT types: a query can’t materialise intermediate results with the MEMORY engine if BLOB or TEXT columns are present. Note that this limitation was removed from Percona Server, where MEMORY was extended to support these types.


Federico Razzoli

All content in this blog is distributed under the Creative Commons Attribution-ShareAlike 4.0 International license. You can use it for your needs and even modify it, but please credit Vettabase and the author of the original post.

About Federico Razzoli
Federico Razzoli is a database professional, with a preference for open source databases, who has been working with DBMSs since the year 2000. In the past 20+ years, he has served in a number of companies as a DBA, Database Engineer, Database Consultant and Software Developer. In 2016, Federico summarised his extensive experience with MariaDB in the “Mastering MariaDB” book, published by Packt. An experienced speaker, Federico talks at professional conferences and meetups and conducts database trainings. He is also a supporter and advocate of open source software. As the Director of Vettabase, Federico does business worldwide, but prefers to do it from Scotland, where he lives.


