A new way of tokenizing Chinese

Why is the Chinese language considered difficult for search engines to handle? Chinese belongs to the so-called CJK languages, which include, alongside Chinese, Japanese and Korean. CJK languages have no spaces or other separators between words. Why is this a problem? Because every sentence, whether divided by spaces or not, still consists of words, and to find a correct match in full-text search we need to tokenize the text, i.e. determine the boundaries between its words. […]
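To illustrate why explicit tokenization is needed for unspaced text, here is a minimal sketch of forward maximum matching, a classic dictionary-based segmentation approach. The dictionary and sample sentence are hypothetical; production engines use far larger dictionaries or statistical models.

```python
# Naive forward maximum-matching segmenter for unspaced (CJK) text.
# The tiny dictionary below is purely illustrative.
DICTIONARY = {"我们", "喜欢", "全文", "搜索"}

def segment(text, dictionary=DICTIONARY, max_len=4):
    """Greedily match the longest dictionary word at each position."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, then shorter ones
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in dictionary:
                tokens.append(candidate)
                i += length
                break
        else:
            # Unknown character: emit it as a single-character token
            tokens.append(text[i])
            i += 1
    return tokens

print(segment("我们喜欢全文搜索"))  # ['我们', '喜欢', '全文', '搜索']
```

With the words split out like this, the search engine can index and match them exactly as it would space-separated words in English.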


Personal and team training will maximize your team's performance.

Custom development

Need some custom or individual features?

Fill in the form and don't forget to describe what you need.

Free config review

There are often optimizations that can be made to a Sphinx / Manticore setup by changing some simple directives in the configuration or making quick changes to an index definition.

Some common mistakes and issues can include:

  • running a main+delta scheme without kill-lists, even though the delta includes updated versions of records already present in the main index
  • using wildcarding with very short prefix/infix lengths, which can severely hurt performance in some cases
  • unintentionally disabled seamless rotation, causing stalls on index rotations
  • adding texts as string attributes even though they are not used in any operation (filtering, grouping, sorting) and are not required in the results
  • using deprecated settings
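As a sketch of the first point, a main+delta setup in a Sphinx-style configuration typically attaches a kill-list to the delta source so that stale copies of updated documents are suppressed in the main index at search time. The table name, columns, and freshness condition below are hypothetical:

```ini
source delta : main
{
    # Pick up rows changed since the last full rebuild (hypothetical schema)
    sql_query = SELECT id, title, body FROM documents \
        WHERE updated_at > @last_main_build

    # Kill-list: ids listed here are suppressed in the main index
    # at search time, so updated documents are not returned twice
    sql_query_killlist = SELECT id FROM documents \
        WHERE updated_at > @last_main_build
}
```

In recent Manticore versions the delta index additionally declares which index its kill-list applies to via the `killlist_target` setting.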

A quick look at the configuration can reveal existing or potential issues, which is why we want to offer this gift to our growing community!

When uploading your configuration file, we recommend removing any database credentials first.

We also suggest giving as many details as possible about your setup: how big your data is, what typical queries look like, and what issues you are experiencing.

Contact us