The existing HTTP interface offers only basic functionality. The new endpoints added in Manticore Search 2.5.1 introduce a whole new API.
Until now, two endpoints existed:
- /search - execute search queries in a simple format
- /sql - execute a SphinxQL SELECT, allowing you to reuse a search in SphinxQL format without a MySQL client
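For comparison, the /sql endpoint takes the statement as a form parameter. A minimal sketch, assuming the statement goes in a `query` parameter and using the `geodemo` index from the search example later in this post:

```shell
# /sql expects form-encoded data with the SphinxQL statement in the "query"
# parameter (parameter name assumed; the index name "geodemo" is from the
# search example below).
sql_body='query=SELECT id, name FROM geodemo LIMIT 5'
echo "$sql_body"
# Sent with: curl -X POST 'http://localhost:9308/sql' -d "$sql_body"
```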
The new endpoint, /json, unlike the first two, aims to provide a new API of its own, using JSON over HTTP in both requests and responses.
Some may ask: why add a new API when we already have SphinxQL? What would its advantages be?
The common way to use Manticore Search is via SphinxQL. Its syntax is almost the same as existing SQL dialects, and you only need a MySQL client or library. An HTTP interface, on the other hand, requires just an HTTP client, and most languages already ship with one, so there are no extra packages to install. Even better, you can simply use a browser (with a REST API extension) to connect to the engine and run tests. There are also cases where users don't use MySQL or a traditional database at all and expect, as other projects provide, an HTTP protocol.
The second, and more important, advantage is that using JSON over HTTP allows more complex requests and responses. In the future we could add endpoints that do more than a single query, such as performing multiple queries connected to each other (say, one could use the results of another in its parameters) and returning responses with more complex structures than a collection of rows.
Current endpoints for data manipulation include /json/insert, /json/update, /json/replace and /json/delete. There is also /json/bulk, which allows batching data manipulation operations of different types in one request. Note that /json/bulk requires the request body to be newline-delimited JSON.
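As a sketch of what these requests look like (the index name `products` and its fields are hypothetical), a single insert takes one JSON object, while a bulk body is built as newline-delimited JSON with one operation per line:

```shell
# Single document insert (hypothetical index "products" with hypothetical fields).
insert_body='{"index":"products","id":1,"doc":{"title":"first item","price":10}}'
echo "$insert_body"
# Sent with: curl -X POST 'http://localhost:9308/json/insert' -d "$insert_body"

# /json/bulk takes newline-delimited JSON: one operation per line, and the
# operation types may be mixed. printf guarantees real newlines between them.
printf '%s\n' \
  '{"insert":{"index":"products","id":2,"doc":{"title":"second item","price":20}}}' \
  '{"update":{"index":"products","id":1,"doc":{"price":15}}}' \
  '{"delete":{"index":"products","id":2}}' > bulk_body.ndjson
cat bulk_body.ndjson
# Sent with: curl -X POST 'http://localhost:9308/json/bulk' --data-binary @bulk_body.ndjson
```

Using `--data-binary` (rather than `-d`) matters for the bulk request, since it preserves the newlines that separate the operations.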
Searching is available on the /json/search endpoint. A search is organized as an abstract syntax tree of queries expressed as a JSON object. It can contain leaf query clauses such as "match", "range" and "sort" (used for full-text matching and attribute filtering), compound query clauses such as "bool" (to combine other queries in a logical fashion), and behaviour clauses (such as "profile", which enables query profiling in the response). Text highlighting can also be declared in the request to get highlighted snippets back in the response. Geo distance is functional as well and can be used in sorting. Improvements and new commands will be added in future releases.
A simple example of a phrase search:
curl -X POST 'http://localhost:9308/json/search' -d '{"index":"geodemo","query":{"match_phrase":{"name":"Gloucester City Middle School"}}}'
And the response:
{
  "took": 1,
  "timed_out": false,
  "hits": {
    "total": 1,
    "hits": [
      {
        "_id": "7142245",
        "_score": 4609,
        "_source": {
          "elevation": 0,
          "population": 0,
          "latitude": 0.69612807035446167,
          "longitude": -1.3109307289123535,
          "latitude_deg": 39.885200500488281,
          "longitude_deg": -75.110801696777344,
          "name": "Gloucester City Middle School and High School",
          "feature_code": "SCH",
          "country_code": "US",
          "state_code": "NJ",
          "level3_code": "",
          "level4_code": "",
          "dem": "",
          "timezone": ""
        }
      }
    ]
  }
}
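The phrase query above uses a single leaf clause. As a hedged sketch of a compound query against the same geodemo index (the `latitude_deg` attribute comes from the response above; the exact set of clauses supported in 2.5.1 may differ), a "bool" clause could combine a full-text match with a range filter:

```shell
# Compound query: all "must" clauses have to match. "bool"/"must"/"range"
# mirror Query DSL conventions; clause support in 2.5.1 may differ.
query_body='{
  "index": "geodemo",
  "query": {
    "bool": {
      "must": [
        { "match": { "name": "school" } },
        { "range": { "latitude_deg": { "gte": 39.0, "lte": 41.0 } } }
      ]
    }
  }
}'
echo "$query_body"
# Sent with: curl -X POST 'http://localhost:9308/json/search' -d "$query_body"
```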
The API is still at an early stage. We aim for a syntax similar to Query DSL, and a lot of what is available in SphinxQL has not been ported yet. For this we need feedback from users interested in using JSON queries instead of the existing APIs. Give it a try and let us know what you think!