# Documents

# Index a document

A document to be indexed in a given collection must conform to the schema of the collection.

If the document contains an id field of type string, Typesense will use that field as the identifier for the document. Otherwise, Typesense will assign an identifier of its choice to the document. Note that the id should not include spaces or any other characters that require encoding in URLs.
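
For illustration, here's a minimal curl sketch that indexes a single document. The companies collection name, the document fields, and the TYPESENSE_API_KEY variable are assumptions based on the examples later on this page:

```bash
# Index a single document (collection name and fields are placeholders).
curl "${TYPESENSE_HOST}/collections/companies/documents" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{
        "id": "124",
        "company_name": "Stark Industries",
        "num_employees": 5215,
        "country": "US"
      }'
```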

# Upsert

You can also upsert a document.
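
A hedged sketch of an upsert using the action=upsert query parameter (the collection name and API key variable are assumptions):

```bash
# Upsert: creates the document if id 124 doesn't exist, otherwise updates it.
curl "${TYPESENSE_HOST}/collections/companies/documents?action=upsert" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{"id": "124", "company_name": "Stark Industries", "num_employees": 5500, "country": "US"}'
```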

To index multiple documents at the same time, in a batch/bulk operation, see importing documents.

# Sample Response

# Definition

POST ${TYPESENSE_HOST}/collections/:collection/documents

# Search

In Typesense, a search consists of a query against one or more text fields and a list of filters against numerical or facet fields. You can also sort and facet your results.
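
As a rough sketch of a search request using the arguments described below (the companies collection, its fields, and the API key variable are assumptions):

```bash
# Search the companies collection, filtering and sorting on num_employees.
curl -G -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/search" \
  --data-urlencode "q=stark" \
  --data-urlencode "query_by=company_name" \
  --data-urlencode "filter_by=num_employees:>100" \
  --data-urlencode "sort_by=num_employees:desc"
```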

# Sample Response

When a string[] field is queried, the highlights structure also includes the matching array indices of the snippets.

# Group by

You can aggregate search results into groups or buckets by specifying one or more group_by fields.

Grouping hits this way is useful for:

- Deduplication: By using one or more group_by fields, you can consolidate items and remove duplicates from the search results. For example, if there are multiple shoes of the same size, doing a group_by=size&group_limit=1 ensures that only a single shoe of each size is returned in the search results.
- Correcting skew: When your results are dominated by documents of a particular type, you can use group_by and group_limit to correct that skew. For example, if your search results for a query contain too many documents of the same brand, you can do a group_by=brand&group_limit=3 to ensure that only the top 3 results of each brand are returned in the search results.

TIP

To group on a particular field, it must be a faceted field.

Grouping returns the hits in a nested structure that's different from the plain JSON response format we saw earlier. Let's repeat the query we made earlier with a group_by parameter:
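
A hedged sketch of such a grouped query (assuming the companies collection from earlier, with country declared as a facet field):

```bash
# Group results by country, returning at most one hit per group.
curl -G -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/search" \
  --data-urlencode "q=stark" \
  --data-urlencode "query_by=company_name" \
  --data-urlencode "group_by=country" \
  --data-urlencode "group_limit=1"
```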

# Definition

GET ${TYPESENSE_HOST}/collections/:collection/documents/search

# Arguments

q (required): The query text to search for in the collection.

Use * as the search string to return all documents. This is typically useful when used in conjunction with filter_by.

For example, to return all documents that match a filter, use: q=*&filter_by=num_employees:10.

To exclude words in your query explicitly, prefix the word with the - operator, e.g. q: 'electric car -tesla'.
query_by (required): One or more string / string[] fields that should be queried against. Separate multiple fields with a comma: company_name, country

The order of the fields is important: a record that matches on a field earlier in the list is considered more relevant than a record matched on a field later in the list. So, in the example above, documents that match on the company_name field are ranked above documents matched on the country field.
query_by_weights (optional): The relative weight to give each query_by field when ranking results. This can be used to boost certain fields in priority when looking for matches.

Separate each weight with a comma, in the same order as the query_by fields. For example, query_by_weights: 1,1,2 with query_by: field_a,field_b,field_c gives equal weight to field_a and field_b, and twice the weight to field_c.
prefix (optional): Boolean field to indicate that the last word in the query should be treated as a prefix, and not as a whole word. This is necessary for building autocomplete and instant search interfaces.

Default: true
filter_by (optional): Filter conditions for refining your search results.

A field can be matched against one or more values.

country: USA

country: [USA, UK] - returns documents whose country is USA OR UK.

To match a string field exactly, you have to mark the field as a facet and use the := operator.

For example, category:=Shoe will match documents from the category Shoe and not from a category like shoe rack. You can also filter using multiple values: category:= [Shoe, Sneaker].

Get numeric values between a min and max value using the range operator [min..max].

For example, num_employees:[10..100]

Separate multiple conditions with the && operator.

For example, num_employees:>100 && country: [USA, UK]

More examples:

num_employees:10
num_employees:<=10
sort_by (optional): A list of numerical fields and their corresponding sort orders that will be used for ordering your results. Separate multiple fields with a comma.

Up to 3 sort fields can be specified in a single search query, and they'll be used as a tie-breaker - if the first value in the first sort_by field ties for a set of documents, the value in the second sort_by field is used to break the tie, and if that also ties, the value in the 3rd field is used to break the tie between documents. If all 3 fields tie, the document insertion order is used to break the final tie.

E.g. num_employees:desc,year_started:asc

The text similarity score is exposed as a special _text_match field that you can use in the list of sorting fields.

If one or two sorting fields are specified, _text_match is used for tie breaking, as the last sorting field.

Default:

If no sort_by parameter is specified, results are sorted by: _text_match:desc,default_sorting_field:desc.
facet_by (optional): A list of fields that will be used for faceting your results. Separate multiple fields with a comma.

max_facet_values (optional): Maximum number of facet values to be returned.

facet_query (optional): Facet values that are returned can be filtered via this parameter. The matching facet text is also highlighted. For example, when faceting by category, you can set facet_query=category:shoe to return only facet values that contain the prefix "shoe".

num_typos (optional): Number of typographical errors (1 or 2) that will be tolerated.

Damerau–Levenshtein distance is used to calculate the number of errors.

Default: 2
page (optional): Results from this specific page number will be fetched.

per_page (optional): Number of results to fetch per page.

Default: 10

NOTE: Only up to 250 hits can be fetched per page.
group_by (optional): You can aggregate search results into groups or buckets by specifying one or more group_by fields. Separate multiple fields with a comma.

NOTE: To group on a particular field, it must be a faceted field.

E.g. group_by=country,company_name
group_limit (optional): Maximum number of hits to be returned for every group. If group_limit is set to K, then only the top K hits in each group are returned in the response.

Default: 3
include_fields (optional): Comma-separated list of fields from the document to include in the search result.

exclude_fields (optional): Comma-separated list of fields from the document to exclude from the search result.

highlight_full_fields (optional): Comma-separated list of fields which should be highlighted fully without snippeting.

Default: all fields will be snippeted.
highlight_affix_num_tokens (optional): The number of tokens that should surround the highlighted text on each side.

Default: 4
highlight_start_tag (optional): The start tag used for the highlighted snippets.

Default: <mark>
highlight_end_tag (optional): The end tag used for the highlighted snippets.

Default: </mark>
snippet_threshold (optional): Field values under this length will be fully highlighted, instead of showing a snippet of the relevant portion.

Default: 30
drop_tokens_threshold (optional): If the number of results found for a specific query is less than this number, Typesense will attempt to drop tokens from the query until enough results are found. Tokens that have the fewest individual hits are dropped first. Set drop_tokens_threshold to 0 to disable dropping of tokens.

Default: 10
typo_tokens_threshold (optional): If the number of results found for a specific query is less than this number, Typesense will attempt to look for tokens with more typos until enough results are found.

Default: 100
pinned_hits (optional): A list of records to unconditionally include in the search results at specific positions.

An example use case would be to feature or promote certain items on the top of search results.

A comma-separated list of record_id:hit_position pairs. E.g. to include a record with ID 123 at position 1 and another record with ID 456 at position 5, you'd specify 123:1,456:5.

You could also use the Overrides feature to override search results based on rules. Overrides are applied first, followed by pinned_hits and finally hidden_hits.
hidden_hits (optional): A list of records to unconditionally hide from search results.

A comma-separated list of record_ids to hide. E.g. to hide records with IDs 123 and 456, you'd specify 123,456.

You could also use the Overrides feature to override search results based on rules. Overrides are applied first, followed by pinned_hits and finally hidden_hits.
limit_hits (optional): Maximum number of hits that can be fetched from the collection. E.g. 200

page * per_page should be less than this number for the search request to return results.

Default: no limit

You'd typically want to generate a scoped API key with this parameter embedded and use that API key to perform the search, so it's automatically applied and can't be changed at search time.
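
To tie a few of these parameters together, here's a hedged sketch of a faceted search. It assumes the companies collection with country declared as a facet field; the parameter values are illustrative:

```bash
# Facet on country, return at most 5 facet values, and keep only facet values
# that start with the prefix "u".
curl -G -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/search" \
  --data-urlencode "q=*" \
  --data-urlencode "query_by=company_name" \
  --data-urlencode "facet_by=country" \
  --data-urlencode "max_facet_values=5" \
  --data-urlencode "facet_query=country:u"
```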

# Federated / multi search

You can send multiple search requests in a single HTTP request, using the Multi-Search feature. This is especially useful to avoid the round-trip network latencies that would otherwise be incurred if each of these requests were sent as separate HTTP requests.

You can also use this feature to do a federated search across multiple collections in a single HTTP request.
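
Here's a rough sketch of what a multi-search request could look like. The collection names, fields, and API key variable are assumptions; each entry in the searches array takes the same parameters as a regular search, plus the collection to search in:

```bash
# Send two searches (against two different collections) in one HTTP request.
curl "${TYPESENSE_HOST}/multi_search" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{
        "searches": [
          {"collection": "companies", "q": "stark", "query_by": "company_name"},
          {"collection": "brands", "q": "stark", "query_by": "brand_name"}
        ]
      }'
```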

# Sample Response

# Definition

POST ${TYPESENSE_HOST}/multi_search

# Arguments

limit_multi_searches (optional): Max number of search requests that can be sent in a multi-search request. E.g. 20

Default: 50

You'd typically want to generate a scoped API key with this parameter embedded and use that API key to perform the search, so it's automatically applied and can't be changed at search time.

TIP

The results array in a multi_search response is guaranteed to be in the same order as the queries you send in the searches array in your request.

# Retrieve a document

Fetch an individual document from a collection by using its id.
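
For instance (assuming the companies collection and a document with id 124):

```bash
# Fetch the document with id 124.
curl -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/124"
```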

# Sample Response

# Definition

GET ${TYPESENSE_HOST}/collections/:collection/documents/:id

# Update a document

Update an individual document from a collection by using its id. The update can be partial, as shown below:
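
A minimal sketch of a partial update (assuming the companies collection and document id 124; only the fields present in the request body are changed):

```bash
# Partially update document 124: only num_employees is modified.
curl "${TYPESENSE_HOST}/collections/companies/documents/124" \
  -X PATCH \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{"num_employees": 5500}'
```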

# Sample Response

# Definition

PATCH ${TYPESENSE_HOST}/collections/:collection/documents/:id

# Delete documents

Delete an individual document from a collection by using its id.
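
For instance (assuming the companies collection and document id 124):

```bash
# Delete the document with id 124.
curl -X DELETE -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/124"
```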

# Sample Response

# Definition

DELETE ${TYPESENSE_HOST}/collections/:collection/documents/:id

# Delete by query

You can also delete a bunch of documents that match a specific filter condition:

Use the batch_size parameter to control the number of documents that should be deleted at a time. A larger value will speed up deletions, but will impact the performance of other operations running on the server.
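
A hedged sketch of a delete-by-filter request (the collection name, filter, and batch size are illustrative):

```bash
# Delete all documents matching the filter, 100 documents at a time.
curl -X DELETE -G -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents" \
  --data-urlencode "filter_by=num_employees:>100" \
  --data-urlencode "batch_size=100"
```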

# Sample Response

# Definition

DELETE ${TYPESENSE_HOST}/collections/:collection/documents?filter_by=X&batch_size=N

# Export documents

# Sample Response

# Definition

GET ${TYPESENSE_HOST}/collections/:collection/documents/export
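
For instance, a minimal sketch that writes a collection's documents to a local JSONL file (the companies collection name and API key variable are assumptions):

```bash
# Export all documents in the companies collection as JSONL.
curl -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  "${TYPESENSE_HOST}/collections/companies/documents/export" > documents.jsonl
```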

# Import documents

You can index multiple documents in a batch using the import API.

# Definition

POST ${TYPESENSE_HOST}/collections/:collection/documents/import

The documents to import need to be formatted as newline-delimited JSON, aka the JSONLines (JSONL) format. This is essentially one JSON object per line, without commas between documents. For example, here is a set of 3 documents represented in JSONL format.

{"id": "124", "company_name": "Stark Industries", "num_employees": 5215, "country": "US"}
{"id": "125", "company_name": "Future Technology", "num_employees": 1232, "country": "UK"}
{"id": "126", "company_name": "Random Corp.", "num_employees": 531, "country": "AU"}

If you are using one of our client libraries, you can also pass in an array of documents and the library will take care of converting it into JSONL.

Besides create, the other allowed action modes are upsert and update.

# Action modes

| Action | Behavior |
| --- | --- |
| create (default) | Creates a new document. Fails if a document with the same id already exists. |
| upsert | Creates a new document or updates an existing document if a document with the same id already exists. |
| update | Updates an existing document. Fails if a document with the given id does not exist. |

# Import a JSONL file

You can feed the output of a Typesense export operation directly into the import endpoint, since both use the JSONL format.

You can import a JSONL file, such as the documents.jsonl example shown earlier, by posting it to the import endpoint.
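
For instance, a hedged curl sketch (assuming a companies collection and an API key in TYPESENSE_API_KEY):

```bash
# Import documents.jsonl in create mode (the default action).
curl "${TYPESENSE_HOST}/collections/companies/documents/import?action=create" \
  -X POST \
  -H "Content-Type: text/plain" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  --data-binary @documents.jsonl
```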


# Import a JSON file

If you have a file in JSON format, you can convert it into JSONL format using jq:

cat documents.json | jq -c .[] > documents.jsonl

Once you have the JSONL file, you can then import it following the instructions above to import a JSONL file.

# Import a CSV file

If you have a CSV file with column headers, you can convert it into JSONL format using mlr:

mlr --icsv --ojsonl cat documents.csv > documents.jsonl

Once you have the JSONL file, you can then import it following the instructions above to import a JSONL file.

# Configure batch size

By default, Typesense ingests 40 documents at a time during an import. To increase this value, use the batch_size parameter.

NOTE: Larger batch sizes will consume larger transient memory during import.
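
For example, a sketch that imports with a larger batch size (the value 200 and the collection name are illustrative):

```bash
# Import with a larger server-side ingestion batch size.
curl "${TYPESENSE_HOST}/collections/companies/documents/import?batch_size=200" \
  -X POST \
  -H "Content-Type: text/plain" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  --data-binary @documents.jsonl
```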

# Sample Response

Each line of the response indicates the result of each document present in the request body (in the same order). If the import of a single document fails, it does not affect the other documents.

If there is a failure, the response line for that document will include an error message as well as the original document content. For example, if the second document in a batch fails to import, the second line of the response will contain its error message and document.

# Dealing with Dirty Data

The dirty_values parameter determines what Typesense should do when the type of a particular field being indexed does not match the previously inferred type for that field.

| Value | Behavior |
| --- | --- |
| coerce_or_reject | Attempt coercion of the field's value to the previously inferred type. If coercion fails, reject the write outright with an error message. |
| coerce_or_drop | Attempt coercion of the field's value to the previously inferred type. If coercion fails, drop the particular field and index the rest of the document. |
| drop | Drop the particular field and index the rest of the document. |
| reject | Reject the document outright. |

Default behavior

If a wildcard (.*) field is defined in the schema, or if the schema contains any field name with a regular expression (e.g. a field named .*_name), the default behavior is coerce_or_reject. Otherwise, the default behavior is reject (this ensures backward compatibility with older Typesense versions).

# Indexing a document with dirty data

Let's now attempt to index a document with a title field that contains an integer. We will assume that this field was previously inferred to be of type string. Let's use the coerce_or_reject behavior here:
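
A sketch of what that request could look like (the books collection, document fields, and values here are hypothetical):

```bash
# title is an integer here; with coerce_or_reject it is coerced to the
# previously inferred string type, or the write is rejected if coercion fails.
curl "${TYPESENSE_HOST}/collections/books/documents?dirty_values=coerce_or_reject" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{"title": 1984, "points": 100}'
```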

Similarly, we can use the dirty_values parameter for the update and import operations as well.

# Indexing all values as string

Typesense provides a convenient way to store all fields as strings through the use of the string* field type.

Defining a type as string* allows Typesense to accept both singular and multi-value/array values.

Let's say we want to ingest data from multiple devices but want to store the fields as strings, since each device could be using a different data type for the same field name (e.g. one device could send a record_id as an integer, while another device could send a record_id as a string).

To do that, we can define a schema as follows:

{
  "name": "device_data",
  "fields": [
    {"name": ".*", "type": "string*" }
  ]
}

Now, Typesense will automatically convert any single- or multi-valued data into the corresponding string representations when data is indexed with the dirty_values: "coerce_or_reject" mode.

You can see how they will be transformed below:
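
As a rough illustration (the device payload below is hypothetical, and the exact stored representation may differ), indexing a document with non-string values into this collection:

```bash
# record_id arrives as an integer and values as numbers; with the string*
# wildcard field they should be stored as their string representations,
# e.g. "100" and ["1", "2.5"].
curl "${TYPESENSE_HOST}/collections/device_data/documents?dirty_values=coerce_or_reject" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "X-TYPESENSE-API-KEY: ${TYPESENSE_API_KEY}" \
  -d '{"record_id": 100, "values": [1, 2.5]}'
```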
