Post Query Refinement Suggestions in Search UX, and an Algolia Demo App

What are Post-Query Refinement Suggestions?

One of my favorite search UX patterns is post-query refinement suggestions — buttons that appear between the search box and results, adjusting the query in various ways. See, for instance, these suggestions on Etsy, which recommend that I filter by shipping speed, seller type, cost range, style, etc.

Filter Suggestions on Etsy

PQRS is a mouthful of an abbreviation, but most of what I've seen as alternatives either describes the UI element ("refinement pills") rather than the function, or is excessively vague ("guided search"). So PQRSs it is, sorry.

PQRSs differ from other search components. Auto-suggest and auto-complete help users finish queries before submitting. Faceted search is a full filtering interface using a taxonomy. On Etsy, faceted search appears when you press the Show Filters button.

PQRSs work like a recommendation system, but applied to searches rather than items or categories. The semantics can vary: "did you mean?" suggestions to fix typos or reformulate without changing scope, or "narrow your search" suggestions. Here, I'm focusing on a specific case: "narrow your search by adding filters."

Suggesting filters helps in a couple of ways. It reduces friction when users try to narrow an overwhelming set of results to something manageable. It also helps communicate the range or diversity of items in the search results. Research suggests that users often struggle when presented with very long lists of results, or with complex and new-to-them taxonomies (i.e., the paradox of choice). Compared with approaches such as faceted search, PQRSs are simpler to use, more curated, and less intimidating, but less flexible for power users. In many domains (e.g., e-commerce) that's the right trade-off to make.

Queries that particularly need refinement are what Daniel Tunkelang has called broad and ambiguous queries. Broad queries ("shoes") yield results that are too long to be practically reviewed, while ambiguous ones ("mixer") might return several very different groups of results. Filters can address both of these, if only users could quickly identify which filter to apply.

PQRSs also address word-proximity issues. When users type multi-word queries, search systems prefer items where the phrase appears as-is or with few intervening words, especially in high-priority fields like the item title. But this is highly dependent on how your item titles and descriptions are generated or worded. In one recent case, I helped a client address an issue where several brands had the equivalent of "blue sneakers" in the title, but one brand had "blue X123 sneakers", with a product code in between. As a result, that brand was buried in the search results. If the user had instead used filters, with a query like color:blue category:sneakers, this issue would not have arisen.

In Algolia, this proximity issue can be partially mitigated by the minProximity setting, but allowing intermediate words (setting minProximity to 2 or more) can cause less-relevant matches to intrude in cases where the query actually was a phrase. There are trade-offs.
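As a sketch (the value here is illustrative, not a recommendation), the relevant index setting is a single key; with Algolia's official Python client, a dict like this would be passed to the index's set-settings call:

```python
# Illustrative Algolia index settings fragment.
settings = {
    # minProximity ranges from 1 (default, strictest) to 7.
    # Raising it to 2 scores words that are one word apart as if they were
    # adjacent, so "blue X123 sneakers" matches a "blue sneakers" query about
    # as well as an exact phrase -- at the cost of admitting looser matches
    # for queries that really were meant as phrases.
    "minProximity": 2,
}
```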

By highlighting useful filters, PQRSs reduce query length and proximity issues. Traditional query-suggest patterns add more search terms (typing "blue" suggests "blue sneakers"), not filters.

How do you identify which filters to suggest to users? A typical e-commerce site may have a dozen or more facets (color, brand, size, style, price range, etc.), each with many values that could be applied as filters. Search engines will happily tell you how many items in the result set have each facet value, but wading through a facet menu is time-consuming. And you don't want to give users a huge list of filters that they could apply -- Baymard has good research on this -- we want a tighter, more-curated alternative to faceted search.

PQRSs are that alternative, if you can find a data source and algorithm that gives your users suggestions they actually want to use.

I think there are four main sources for PQRS suggestions: 1) the result set, 2) interaction popularity, 3) business rules, and 4) personalization. The result set is all items matching the query (retrieved, not necessarily ranked). Interaction popularity surfaces filters commonly applied by users — often effective. Business rules highlight categories the business wants to promote (high-margin items, new categories). Personalization uses the user's history to find relevant refinements. I'll be focusing on result-set suggestions here, but the others are also worth investigating and incorporating in practice.

Also worth noting: research has found that removing superfluous search terms can be helpful in search, too. One option is auto-filter. The UI can be almost the same, but instead of adding a filter, the system both adds a filter and removes the equivalent search term. This is the "did you mean?" pattern applied to query refinement. E.g., rugged shoes becomes rugged category:shoes. Recent work on AI-driven UX patterns discusses interesting smart auto-filter approaches.
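A minimal sketch of that auto-filter rewrite, assuming a hand-built mapping from search terms to facet filters (the mapping and function name are hypothetical; in practice the mapping might come from your taxonomy or a query-categorization service):

```python
# Hypothetical term-to-filter mapping for illustration only.
TERM_TO_FILTER = {
    "shoes": "category:shoes",
    "sneakers": "category:sneakers",
    "blue": "color:blue",
}

def auto_filter(query: str) -> tuple[str, list[str]]:
    """Remove terms that map to filters; return the reduced query plus filters."""
    remaining, filters = [], []
    for term in query.split():
        mapped = TERM_TO_FILTER.get(term.lower())
        if mapped:
            filters.append(mapped)   # apply the term as a filter...
        else:
            remaining.append(term)   # ...or keep it as a search term
    return " ".join(remaining), filters

# "rugged shoes" becomes the query "rugged" plus the filter category:shoes
print(auto_filter("rugged shoes"))
```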

For a client, I recently built (well, vibe-coded, so designed and product-managed) a tool to help them see what various PQRS algorithms might look like on their specific catalog. That tool is of course work-for-hire, so I can't share it, but I've created a similar, generic demo tool, with no proprietary elements, that many Algolia customers can use to explore a few PQRS approaches for their catalog.

Algolia Post-Query Filter Suggestion Demo

A few features:

  • Interactive setup with your App ID, API Key, and Index Name -- nothing is shared with me
  • Visual algorithm comparison: switch strategies, see real-time suggestions and effects
  • Easy customization: choose which fields to display, hide irrelevant facets
  • Demonstrates click-to-apply filters pattern for faster result refinement
  • Transparent algorithm logic -- step-by-step breakdowns for Coverage Diversity available in-app

The app's About page gives more details.

When properly configured, the suggested filters reduce ambiguity while clarifying the catalog structure. All of the algorithms in the demo are based on the result set -- the top items that were returned as well as the facet counts. Following the principle that users should understand what's happening, the demo does not auto-apply any filters -- users can see exactly what's happening to their search. (Algolia can be used to set up auto-filter rules, but they are generally opaque to the user, which can cause problems if the filters are applied improperly. Query Categorization is another related Algolia feature, which can be used to implement a type of PQRSs, or to auto-filter.)

I implemented four algorithms for this demo app: popularity, information gain, inverse frequency, and coverage diversity. There are many more algorithms and variations possible -- these were just easy to implement and represent different approaches.

Popularity simply suggests the most common facet values in the result set as filters. This sort of popularity is distinct from which filters users apply most often -- both have value. But suggestions can be redundant with the query or with each other, so popularity alone isn't ideal. Information gain suggests filters that cover as close to 1/2 of the result set as possible. It's good for efficiently narrowing the results, but its suggestions can often be unexpected. Inverse frequency suggests uncommon facet values, which is rarely useful on its own, but is worth seeing for comparison.
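These three strategies can all be framed as scoring functions over facet-value counts. This sketch is my own illustration (not the demo app's code) and the facet values are made up; it ranks each candidate filter under the chosen strategy:

```python
from collections import Counter

def score_candidates(facet_counts: Counter, total: int, strategy: str) -> list:
    """Rank (facet_value, count) pairs under one of three simple strategies."""
    def score(count: int) -> float:
        if strategy == "popularity":        # most common value wins
            return count
        if strategy == "information_gain":  # closest to half the result set wins
            return -abs(count / total - 0.5)
        if strategy == "inverse_frequency": # rarest value wins
            return -count
        raise ValueError(strategy)
    return sorted(facet_counts.items(), key=lambda kv: score(kv[1]), reverse=True)

# 100 results: 80 are brand:Acme, 48 are category:sneakers, 3 are color:teal
counts = Counter({"brand:Acme": 80, "category:sneakers": 48, "color:teal": 3})
print(score_candidates(counts, 100, "popularity")[0][0])        # brand:Acme
print(score_candidates(counts, 100, "information_gain")[0][0])  # category:sneakers
print(score_candidates(counts, 100, "inverse_frequency")[0][0]) # color:teal
```

Note how the same counts yield three different top suggestions: popularity rewards the dominant brand, information gain rewards the near-even split, and inverse frequency rewards the rarity.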

Coverage diversity is my preferred primary approach -- it suggests a diverse set of facet filters that together cover the result set well. Intuitively, it finds a filter that "carves off" about half of the items in the result. Then, it looks at the remaining items, and tries to find another filter that applies to half of the remainder. Compared with Popularity or Information Gain, Coverage Diversity identifies non-redundant filters. For an Algolia implementation, only the top items are available, so the algorithm uses those items as a proxy for the full set.
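A greedy sketch of that "carve off about half, then repeat on the remainder" idea, over the top items as described above. This is my own simplified illustration, not the demo's implementation, and the item data is invented:

```python
def coverage_diversity(items: list, k: int = 3) -> list:
    """Greedily pick up to k filters that together cover the item list.
    Each item is a set of facet-value strings; at each step, choose the
    filter covering closest to half of the still-uncovered items."""
    remaining = list(items)
    picks = []
    for _ in range(k):
        if not remaining:
            break
        # Count how many remaining items each candidate filter covers.
        counts = {}
        for item in remaining:
            for facet_value in item:
                counts[facet_value] = counts.get(facet_value, 0) + 1
        # Pick the value closest to half the remainder, skipping prior picks.
        best = min(
            (fv for fv in counts if fv not in picks),
            key=lambda fv: abs(counts[fv] / len(remaining) - 0.5),
            default=None,
        )
        if best is None:
            break
        picks.append(best)
        # "Carve off" the covered items and repeat on what's left.
        remaining = [item for item in remaining if best not in item]
    return picks

# Nine top results: five share a brand, and two of the rest are unlocked phones.
items = (
    [{"brand:BLU"}] * 5
    + [{"category:Unlocked Cell Phones"}] * 2
    + [{"color:blue"}, {"color:red"}]
)
print(coverage_diversity(items, k=2))
# ['brand:BLU', 'category:Unlocked Cell Phones']
```

Because each pick removes the items it covers before the next iteration, the second suggestion is scored only against what the first one left behind, which is what makes the results non-redundant.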

Demo app's explanation of coverage diversity algorithm.

In the example above, from an Algolia demo dataset, where the user typed "blue", the algorithm identifies that more than half of the top results are from a Brand called BLU. Of the rest, many are Unlocked Cell Phones. Clicking either of those buttons would apply the filter and re-run the algorithm with the resulting, narrower results.

Coverage diversity is related to Maximum Marginal Relevance (MMR), which diversifies search results by penalizing items similar to already-shown ones. (Elastic has a good explanation.) MMR applies to items instead of facet values, but has similar intuitions.

In practice, combining coverage diversity with other approaches will likely work best. The exact algorithm depends on the contents and size of your catalog, your taxonomy, and the expertise, biases, and patience of your users. This tool is a good first step -- if what you see is promising, vibe-coding something specific to test your chosen algorithm(s) is quick.


Advertisement: Does your organization struggle to get the relevance, user experience, and business impact you need from Algolia? I'm a freelance consultant with years of Algolia experience who can help you get the most out of advanced tools and search algorithms. Get in touch!


Note: This post was primarily human-authored, with AI assistance for research, editing, and organization. The AI filled a Secondary author role. The core ideas and final voice are mine.