# Natural Language Search
One of the most powerful capabilities of Large Language Models (LLMs) is their ability to turn natural language into structured data. In this guide, we will learn how to make use of this capability to understand a user's search query and convert it into a structured Typesense search query.
# Use-case
Let's take an example of a public cars dataset. Using Google's Gemini LLM along with Typesense we can support natural language queries like the following:
- "A honda or BMW with at least 200hp, rear-wheel drive, from 20K to 50K, must be newer than 2014"
- "Show me the most powerful car you have"
- "High performance Italian cars, above 700hp"
- "I don't know how to drive a manual"
Notice how in some queries there might be multiple criteria mentioned, and in some cases the keyword itself might not be present in the dataset.
Here's a sample record from this dataset for context:
```json
{
  "city_mpg": 13,
  "driven_wheels": "rear wheel drive",
  "engine_cylinders": 8,
  "engine_fuel_type": "premium unleaded (recommended)",
  "engine_hp": 707,
  "highway_mpg": 22,
  "id": "1480",
  "make": "Dodge",
  "market_category": ["Factory Tuner", "High-Performance"],
  "model": "Charger",
  "msrp": 65945,
  "number_of_doors": 4,
  "popularity": 1851,
  "transmission_type": "AUTOMATIC",
  "vehicle_size": "Large",
  "vehicle_style": "Sedan",
  "year": 2017
}
```
# Data flow
The key idea is this:
- Take the natural language query that the user types in
- Send it to the LLM with specific instructions on how to convert it into a Typesense search query with the `filter_by`, `sort_by` and `q` search parameters
- Execute a query in Typesense with those search parameters returned by the LLM and return the results
We're essentially doing something similar to Text-to-SQL, except that we're now doing Text-to-Typesense-Query, running the query and returning results.
This seemingly simple concept helps build powerful natural language search experiences. The trick with LLMs, though, is to refine the prompt well enough that it consistently produces a good translation of the text into valid Typesense syntax.
# Live Demo
Here's a video of what we'll be building in this guide:
You can also play around with it live here: https://natural-language-search-cars-genkit.typesense.org/
Let's now see how to build this application end-to-end.
# Setting up the project
We will be using Next.js and Genkit, a framework that makes it easy to add generative AI to our applications.
Follow the instructions in Genkit's documentation to learn how to initialize Genkit in a Next.js app.
Next, let's install the Typesense client and zod into our app:

```shell
npm i typesense@next zod
```
The dataset we will use can be downloaded from GitHub.
# Initializing Typesense client
We will need two separate Typesense API keys:
- A search-only API key for use on the front end
- A backend API key with write access
Please refer to the API Keys docs for how to generate a search-only API key.
If you're using Typesense Cloud, click on the "Generate API key" button on the cluster page. This will give you a set of hostnames and API keys to use.
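As a sketch, the two client configurations could look like the following. The environment variable names here are our own placeholders, not values prescribed by this guide:

```typescript
// Shared connection settings; host and port are illustrative placeholders.
const nodeConfig = {
  nodes: [
    {
      host: process.env.NEXT_PUBLIC_TYPESENSE_HOST ?? "localhost",
      port: 443,
      protocol: "https",
    },
  ],
  connectionTimeoutSeconds: 5,
};

// Front end: the search-only key, safe to expose in the browser.
export const searchClientConfig = {
  ...nodeConfig,
  apiKey: process.env.NEXT_PUBLIC_TYPESENSE_SEARCH_ONLY_API_KEY ?? "",
};

// Server only: the key with write access, for creating the collection and importing data.
export const adminClientConfig = {
  ...nodeConfig,
  apiKey: process.env.TYPESENSE_ADMIN_API_KEY ?? "",
};

// Each config is then passed to the client constructor, e.g.:
// import Typesense from "typesense";
// const typesense = new Typesense.Client(adminClientConfig);
```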
# Create the Typesense collection
We'll use the following schema to create a Typesense Collection and import our cars dataset into it:
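The schema can be reconstructed from the field table shown later in this guide: numeric fields are plain, and string fields are marked `facet: true` so their values can later be summarized for the LLM. Treat this as an approximation of the demo's actual schema:

```typescript
// Collection schema reconstructed from the car properties table in this guide.
export const carsSchema = {
  name: "cars",
  fields: [
    { name: "year", type: "int32" },
    { name: "engine_hp", type: "float" },
    { name: "engine_cylinders", type: "int32" },
    { name: "number_of_doors", type: "int32" },
    { name: "highway_mpg", type: "int32" },
    { name: "city_mpg", type: "int32" },
    { name: "popularity", type: "int32" },
    { name: "msrp", type: "int32" },
    { name: "make", type: "string", facet: true },
    { name: "model", type: "string", facet: true },
    { name: "engine_fuel_type", type: "string", facet: true },
    { name: "transmission_type", type: "string", facet: true },
    { name: "driven_wheels", type: "string", facet: true },
    { name: "market_category", type: "string[]", facet: true },
    { name: "vehicle_size", type: "string", facet: true },
    { name: "vehicle_style", type: "string", facet: true },
  ],
};

// Creating the collection with an admin client:
// await typesense.collections().create(carsSchema);
```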
We're now ready to index the dataset into the collection we just created:
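Indexing can be a one-off script. The sketch below assumes the dataset was saved as newline-delimited JSON (the filename is a placeholder) and that an admin client with write access is in scope:

```typescript
// Parse a newline-delimited JSON (JSONL) export into an array of documents.
export function parseJsonl(jsonl: string): object[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// One-off indexing script (sketch):
// import { readFileSync } from "fs";
// const documents = parseJsonl(readFileSync("cars.jsonl", "utf-8"));
// await typesense.collections("cars").documents().import(documents, { action: "upsert" });
```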
# Writing the prompt
Our goal is to translate a natural language query, e.g. 'Latest Ford under $40K', into Typesense's query format:
```json
{
  "filter_by": "make:Ford && msrp:<40000",
  "sort_by": "year:desc"
}
```
In Genkit, the model output schema is defined using Zod.
We can make the LLM output conform to our `TypesenseQuerySchema` by specifying it in `defineDotprompt()`:
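The prompt template itself might look something like the sketch below. This is illustrative, not the demo's exact prompt; `{{query}}` and `{{collectionProperties}}` stand for the user's input and the interpolated field table described in the next section:

```handlebars
You are converting a user's natural-language car search into a Typesense
query. Respond only with JSON containing q, filter_by and sort_by.
Use only the fields listed below, and leave filter_by or sort_by empty
when the query doesn't call for them.

{{collectionProperties}}

User query: {{query}}
```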
# Dynamic Prompt based on the Schema
Notice the `getCachedCollectionProperties()` function in the prompt above.
That function essentially converts the Typesense collection schema into a tabular format with a list of field names and sample enum values in each field. We're using a markdown format to help the LLM recognize these field values in the query and convert them into appropriate field filters.
Here's an example of what the output of that function could look like:
## Car properties
| Name | Data Type | Filter | Sort | Enum Values | Description |
| ----------------- | --------- | ------ | ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------- |
| year | int32 | Yes | Yes | | |
| engine_hp | float | Yes | Yes | | |
| engine_cylinders | int32 | Yes | Yes | | |
| number_of_doors | int32 | Yes | Yes | | |
| highway_mpg | int32 | Yes | Yes | | |
| city_mpg | int32 | Yes | Yes | | |
| popularity | int32 | Yes | Yes | | |
| msrp | int32 | Yes | Yes | | in USD |
| make | string | Yes | No | Chevrolet, Ford, Dodge, Mercedes-Benz, BMW, Toyota, Infiniti, GMC, Volkswagen, Nissan, Mazda, Audi, Cadillac, Lexus, Volvo, Honda, Suzuki, Hyundai, Pontiac, Mitsubishi, Chrysler, Kia, Porsche, Subaru, Acura, Buick, Oldsmobile, Saab, Lincoln, Bentley, Ferrari, Plymouth, Aston Martin, Land Rover, Lamborghini, Maserati, Scion, FIAT, Rolls-Royce, Lotus, Maybach, HUMMER, McLaren, Alfa Romeo, Genesis, Spyker, Bugatti | |
| model | string | Yes | No | 911, F-150, Tundra, E-Class, Silverado 1500, 3 Series, Sierra 1500, Tacoma, B-Series Pickup, Truck, Accord, Colorado, 300-Class, 9-3, Civic, Q50, Forte, Canyon, Frontier, Ram Pickup 1500, R8, C-Class, 4 Series, 3, S-Class, Gallardo, 6 Series, Dakota, Golf GTI, Jetta, Camaro, 900, 850, S-10, Colt, Charger, Continental GT, G6, Juke, 370Z, Jimmy, Pickup, Sidekick, Corvette, Q70, Shadow, Ranger, Mustang, G Coupe, Durango, Silverado 1500 Classic | There are more enum values for this field |
| engine_fuel_type | string | Yes | No | regular unleaded, premium unleaded (required), premium unleaded (recommended), flex-fuel (unleaded/E85), diesel, flex-fuel (premium unleaded required/E85), flex-fuel (premium unleaded recommended/E85), electric, natural gas | |
| transmission_type | string | Yes | No | AUTOMATIC, MANUAL, AUTOMATED_MANUAL, UNKNOWN, DIRECT_DRIVE | |
| driven_wheels | string | Yes | No | front wheel drive, rear wheel drive, all wheel drive, four wheel drive | |
| market_category | string[] | Yes | No | Luxury, Performance, High-Performance, Crossover, Hatchback, Factory Tuner, Flex Fuel, Exotic, Hybrid, Diesel | |
| vehicle_size | string | Yes | No | Compact, Midsize, Large | |
| vehicle_style | string | Yes | No | Sedan, 4dr SUV, Coupe, Convertible, Wagon, 4dr Hatchback, Extended Cab Pickup, 2dr Hatchback, Crew Cab Pickup, Passenger Minivan, Regular Cab Pickup, 2dr SUV, Cargo Van, Passenger Van, Cargo Minivan, Convertible SUV | |
Here's how that function looks:
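Below is a simplified, uncached sketch of the core idea. The names and shapes here are our assumptions; the demo's real function also fetches the schema and facet values from Typesense before building the table:

```typescript
// Field metadata gathered from the collection schema and facet queries.
interface FieldInfo {
  name: string;
  type: string;
  facet?: boolean;
  sort?: boolean;
  enumValues?: string[];
  description?: string;
}

// Render the fields as the markdown table shown above. Numeric fields are
// treated as sortable; string fields are filterable but not sortable.
export function collectionPropertiesTable(fields: FieldInfo[]): string {
  const header =
    "| Name | Data Type | Filter | Sort | Enum Values | Description |\n| --- | --- | --- | --- | --- | --- |";
  const rows = fields.map((f) => {
    const sortable = f.sort ?? (f.type !== "string" && f.type !== "string[]");
    return `| ${f.name} | ${f.type} | Yes | ${sortable ? "Yes" : "No"} | ${(
      f.enumValues ?? []
    ).join(", ")} | ${f.description ?? ""} |`;
  });
  return [header, ...rows].join("\n");
}
```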
For facet-enabled fields, we can supply sample values to the LLM via the `Enum Values` column by making a search request with `q: "*"` and `facet_by: <field>`. When a collection has too many facet values to fit in the prompt, the number of values returned can be limited using the `max_facet_values` parameter.
We can add and update field descriptions by updating our collection metadata. Let's specify the currency of our `msrp` field (manufacturer's suggested retail price) as USD.
Since fetching the collection properties every time the user makes a search request is expensive, we will cache the response using Next.js's `unstable_cache`.
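In plain TypeScript, the caching idea amounts to memoizing the async lookup. The helper below is a simplified stand-in for `unstable_cache` (which additionally handles revalidation across requests), shown only to illustrate the pattern:

```typescript
// Wrap an async producer so repeated calls within the TTL reuse the result.
export function cacheFor<T>(
  ttlMs: number,
  fn: () => Promise<T>
): () => Promise<T> {
  let cached: { value: T; expires: number } | undefined;
  return async () => {
    if (cached && cached.expires > Date.now()) return cached.value;
    const value = await fn();
    cached = { value, expires: Date.now() + ttlMs };
    return value;
  };
}

// Usage sketch (getCollectionProperties is the expensive lookup):
// const getCachedCollectionProperties = cacheFor(60_000, getCollectionProperties);
```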
# Integrate with Typesense
Let's now integrate our dynamic prompt into our application:
Finally, we can call the server action and use its response to make a search request to Typesense.
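As a sketch, turning the LLM's structured answer into actual search parameters could look like this. The helper name, the `query_by` fields, and the page size are our assumptions, not the demo's exact values:

```typescript
// Shape of the server action's response (matching TypesenseQuerySchema).
interface GeneratedQuery {
  q?: string;
  filter_by?: string;
  sort_by?: string;
}

// Merge the generated query into Typesense search parameters, falling back
// to a match-all query when the LLM produced only filters.
export function toSearchParameters(gen: GeneratedQuery) {
  return {
    q: gen.q && gen.q.trim() !== "" ? gen.q : "*",
    query_by: "make,model,market_category", // assumption for this dataset
    filter_by: gen.filter_by ?? "",
    sort_by: gen.sort_by ?? "",
    per_page: 12,
  };
}

// With a search client in scope:
// const results = await typesense
//   .collections("cars").documents().search(toSearchParameters(generated));
```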
That's it! Our users can now use natural language to search for cars and the intended filters will automatically be applied!
Keep in mind that LLMs may occasionally misunderstand queries or generate Typesense queries that are invalid. In such cases, tweaking the prompt to handle specific edge cases or incorporating fallback logic can ensure better results over time.
You can find the full source code of the demo application on GitHub, along with a live demo.