TL;DR:
- DX1 uses Manticore Search for customer and parts search with a fast typeahead UX
- Chosen for open-source licensing and speed
- Deployed on Azure VMs running Ubuntu, aligned with DX1’s existing Azure footprint
- Handles 20M+ parts; best typeahead performance requires indexes in memory
- Scales by upgrading VM memory or adding nodes to a Manticore cluster
- Day-to-day operations are low touch and low maintenance
Context
This article is based on direct input from Damir Tresnjo at DX1. It describes how DX1 runs Manticore Search in production on Microsoft Azure today, focusing on why they chose Manticore, how they deploy it, and what they have learned about performance and scaling.
DX1 in One Paragraph
DX1 uses Manticore Search as a fast, user-facing search layer for customers and a parts catalog that has grown beyond 20 million records. The setup is intentionally simple: Manticore runs on Ubuntu-based Azure VMs alongside the rest of their Azure infrastructure, delivering responsive typeahead while staying “low touch” operationally. As their data and traffic grow, they scale in a straightforward way by upgrading VM sizes or adding more nodes.
Search That Customers Actually Enjoy Using
DX1 uses Manticore Search to power search across customer and parts data. Typeahead is a core part of the experience, and according to Damir, it is one of the most appreciated features by their users.
“We use it for searching through customers and parts data, we have a type ahead functionality that our customers love.”
This is a practical, user-facing use case where milliseconds matter, and it has shaped both infrastructure and operational choices.
If you're exploring autocomplete in Manticore, there are multiple ways to implement it depending on your data and UX requirements. For a deeper dive, see our overview of fuzzy search and autocomplete: New fuzzy search and autocomplete.
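As a rough illustration of the pattern behind a typeahead search, here is a minimal sketch of building a prefix-wildcard query for Manticore. The table and field names (`parts`, `name`) are hypothetical, not DX1's actual schema; Manticore speaks the MySQL protocol, so any MySQL client can send the resulting statement, and recent Manticore versions also ship a built-in `CALL AUTOCOMPLETE` statement as an alternative.

```python
# Sketch of building a Manticore typeahead query string.
# Assumes the table is configured with min_prefix_len (or min_infix_len)
# so that wildcard matching is enabled.

SPECIAL = '\\!"$\'()-/<@^|~*'  # characters with operator meaning in Manticore query syntax

def escape(term: str) -> str:
    """Backslash-escape Manticore full-text operators in user input."""
    return ''.join('\\' + ch if ch in SPECIAL else ch for ch in term)

def typeahead_query(prefix: str, limit: int = 10) -> str:
    """Build a prefix-wildcard MATCH query for as-you-type search."""
    safe = escape(prefix.strip())
    return (f"SELECT id, name FROM parts "
            f"WHERE MATCH('{safe}*') LIMIT {limit}")

print(typeahead_query("brake pa"))
# -> SELECT id, name FROM parts WHERE MATCH('brake pa*') LIMIT 10
```

Escaping the user's raw input before it reaches `MATCH()` matters in practice: part numbers often contain characters like `-` or `/` that Manticore would otherwise interpret as query operators.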
Why DX1 Chose Manticore Search
The decision to use Manticore Search was straightforward: it is open source and fast.
“Open source and very fast.”
That combination made it a good fit for DX1’s search workload and cost expectations, while keeping the stack approachable for a lean team.
Deployment on Azure VMs
DX1 runs all of its infrastructure on Azure, so deploying Manticore there was the natural choice. The team runs Manticore Search on Azure virtual machines using Ubuntu.
“We run everything on Azure, so we deployed Manticore there as well.”
No expensive Azure-specific managed services were required; VMs provided the flexibility they needed while staying consistent with the rest of their environment.
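For reference, installing Manticore on an Ubuntu VM follows the standard upstream packaging steps; this is a generic setup fragment based on Manticore's official APT repository, not DX1's exact provisioning scripts.

```shell
# Add the official Manticore APT repository and install the server
# (requires root; URL and package names per Manticore's packaging docs).
wget https://repo.manticoresearch.com/manticore-repo.noarch.deb
sudo dpkg -i manticore-repo.noarch.deb
sudo apt update
sudo apt install manticore manticore-extra

# Start the daemon; by default it listens on 9306 (MySQL protocol)
# and 9308 (HTTP/JSON).
sudo systemctl enable --now manticore
```

The same steps work on any Ubuntu host, which is part of why a plain Azure VM is enough: there is nothing cloud-specific in the deployment itself.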
Performance, Memory, and Scale
Manticore has been fast and stable for DX1, even at large scale. Their production dataset includes over 20 million parts.
“It performs very fast, we have over 20 million parts we search through.”
One practical consideration is memory. Typeahead performance benefits from indexes being in memory, which means VM memory may need to grow alongside the index.
“It does need the database to be in memory for the type ahead performance. As soon as index outgrows available memory, we need to upgrade the VM memory.”
This creates a clear scaling path: grow memory on existing VMs or add more nodes to a cluster.
“We can scale each VM or we can add more VMs to a Manticore cluster.”
Day-to-Day Operations
Operationally, DX1 describes Manticore as low touch and low maintenance.
“Low touch, low maintenance, most of the time it just runs.”
There are no special Azure features involved; the setup is deliberately simple, focused on VMs and predictable operations.
Recommendation
DX1 would recommend Manticore Search to other teams looking for a fast and cost-effective search engine.
“Yes, I would recommend Manticore to anyone looking for a fast, reliable and cost effective search engine.”
For DX1, the combination of speed, open-source flexibility, and straightforward VM-based deployment on Azure has been a dependable foundation for search at scale.
Conclusion
DX1’s story is a good fit for teams who want a fast, reliable search engine without turning search infrastructure into a project of its own: run Manticore on straightforward Linux VMs, keep operations simple, and scale predictably. For low-latency typeahead in particular, it’s normal to plan for sufficient RAM headroom, so scaling often starts with memory (scale up) and later expands to adding nodes (scale out) as data and traffic grow.
Talk to Us About Migrating to Manticore
If you're considering a migration to Manticore Search and want a quick architecture review (for example, a VM-based setup on Azure), get in touch with us. Share a bit about your dataset size, query patterns, and latency targets, and we will help you validate an approach and plan the next steps.
