Can Manticore work without MySQL?

A common question, and a common misconception, is that both Sphinx and Manticore require a MySQL database. This is not true, and in this article we will look at how the search engine can be used independently of MySQL.

While most users run MySQL (or one of its variants, such as MariaDB or Percona Server) and Sphinx was targeted toward MySQL users – a storage engine plugin (SphinxSE) was implemented and one of the querying protocols is MySQL-based – the software is not a MySQL-specific solution but a general-purpose search tool.

Compiling Sphinx or Manticore does not require (and never required) any MySQL library. The MySQL client library is needed only by indexer's MySQL driver. Even without it, you can still index data from a MySQL database using the ODBC driver. The MySQL protocol used by SphinxQL is implemented natively and does not depend on MySQL headers or libraries.

Indexing

Manticore implements several drivers that can fetch data from a database. Dedicated drivers are available for MySQL, MSSQL and PostgreSQL, and generic ODBC support is also available. The MySQL and PostgreSQL drivers require their respective client libraries, while the MSSQL driver requires the ODBC client library. Manticore can also be built without some of these drivers, or without any of them at all.
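As an illustration, a `source` section in the configuration selects a driver via its `type`; the hostnames, credentials, DSN and table below are placeholders, so adjust them to your setup:

```
source src_mysql
{
    type      = mysql
    sql_host  = localhost
    sql_user  = test
    sql_pass  = secret
    sql_db    = mydb
    sql_query = SELECT id, title, content FROM documents
}

source src_odbc
{
    type      = odbc
    odbc_dsn  = DSN=mydb
    sql_query = SELECT id, title, content FROM documents
}
```

The `sql_query` directive is shared between drivers, so switching from the native MySQL driver to ODBC mostly means swapping the connection settings.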

Besides the database drivers, even more general data sources are available through the XML pipe and CSV/TSV drivers. With these, data can be taken from a non-relational store, such as a NoSQL database, or from plain files. The XML driver requires the XML file(s) to follow a specific format. For CSV/TSV, the only requirement is that the first column is the document ID. With these drivers, Manticore does not take a path to a file; instead it executes a command and expects its output to be an XML/CSV/TSV stream. This can be used to process multiple files or entire folders.
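As a sketch of how a pipe source works: since the indexer runs a command and reads its output, the command can be any executable. A minimal Python script (the field names and documents here are made up for illustration) that emits TSV rows with the document ID in the first column might look like:

```python
#!/usr/bin/env python3
# Minimal TSV emitter for a TSV pipe source: each output line is one
# document, tab-separated, with the document ID in the first column.
import sys

# Hypothetical documents; in practice these could be read from files,
# a NoSQL store, or any other backend.
docs = [
    (1, "First document", "hello world"),
    (2, "Second document", "manticore without mysql"),
]

for doc_id, title, content in docs:
    sys.stdout.write(f"{doc_id}\t{title}\t{content}\n")
```

Pointing the source's pipe command at this script (instead of at a static file) is what allows a single configuration to process many files or whole folders.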

Searching

The first API implemented was SphinxAPI, a custom binary protocol for which client libraries were officially developed in several popular languages (PHP, Python, Java and Ruby), with third-party ports available for others. While it doesn't expose every feature, SphinxAPI does offer a full implementation for searching. The issue with the SphinxAPI protocol (which is also used internally by distributed indexes) is keeping the client libraries up to date. The SphinxQL protocol became more popular, as most users were happy to use an existing MySQL client/connector that required no maintenance between daemon upgrades – whereas a SphinxAPI client has to be kept up to date, especially to access newly implemented features.
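Because SphinxQL is plain SQL text carried over the MySQL wire protocol, any MySQL client or connector can send it; no Sphinx-specific library is needed. The snippet below is only a sketch of composing such a query as a string (the index and column names are made up), which any MySQL connector would then transmit:

```python
# SphinxQL is ordinary SQL text sent over the MySQL protocol (commonly
# on port 9306), so any MySQL connector can transmit it.
index_name = "myindex"      # hypothetical index name
keywords = "hello world"    # user search terms

# Single quotes inside MATCH() must be escaped, as in regular SQL.
escaped = keywords.replace("'", "\\'")
query = f"SELECT id, title FROM {index_name} WHERE MATCH('{escaped}') LIMIT 10"

print(query)
```

This is the practical upside mentioned above: the "client library" is whatever MySQL connector your language already ships, so nothing needs updating between daemon upgrades.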

Starting with Sphinx 2.3, an HTTP protocol was added, but it only worked as a proxy for searches in SQL or SphinxAPI format. Manticore improved on this with new endpoints that use JSON for both request and response payloads. Since it works like a regular web service, responses can be proxied or cached by a reverse proxy server (such as nginx). Currently the JSON API offers endpoints for searching, data manipulation and percolation, with more commands to be implemented in the future.
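To illustrate the web-service style of the JSON API, the sketch below builds a search request using only Python's standard library. The host, port, index name and payload fields are assumptions for illustration; the exact schema of the search endpoint should be checked against the Manticore documentation:

```python
import json
from urllib import request

# Sketch of a JSON search request against Manticore's HTTP endpoint.
# Index name, fields and the /search payload shape are assumptions here;
# consult the Manticore docs for the authoritative schema.
payload = {
    "index": "myindex",
    "query": {"match": {"title": "hello world"}},
    "limit": 10,
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:9308/search",   # assumed host/port of the HTTP listener
    data=body,
    headers={"Content-Type": "application/json"},
)
# resp = request.urlopen(req)         # uncomment with a running server
# print(json.load(resp))
```

Because this is an ordinary HTTP POST with a JSON body, the same request can be issued with curl, cached by nginx, or generated by any HTTP client.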
