add documentation for new query preprocessing
@@ -4,12 +4,11 @@ The tokenizer module in Nominatim is responsible for analysing the names given
to OSM objects and the terms of an incoming query in order to make sure they
can be matched appropriately.

Nominatim currently offers only one tokenizer module, the ICU tokenizer. This section
describes the tokenizer and how it can be configured.

!!! important
    The selection of tokenizer is tied to a database installation. You need to choose
    and configure the tokenizer before starting the initial import. Once the import
    is done, you cannot switch to another tokenizer anymore. Reconfiguring the
    chosen tokenizer is very limited as well. See the comments in each tokenizer
@@ -43,10 +42,19 @@ On import the tokenizer processes names in the following three stages:
See the [Token analysis](#token-analysis) section below for more
information.

During query time, the tokenizer is responsible for processing incoming
queries. This happens in two stages:

1. During **query preprocessing** the incoming text is split into name
   chunks and normalised. This usually means applying the same normalisation
   as during the import process but may involve other processing like,
   for example, word break detection.
2. The **token analysis** step breaks down the query parts into tokens,
   looks them up in the database and assigns them possible functions and
   probabilities.

Query preprocessing can be further customized while the rest of the analysis
is hard-coded.

### Configuration

@@ -58,6 +66,8 @@ have no effect.

Here is an example configuration file:

``` yaml
query-preprocessing:
    - normalize
normalization:
    - ":: lower ()"
    - "ß > 'ss'" # German eszett is unambiguously equal to double ss
@@ -81,6 +91,22 @@ token-analysis:

The configuration file contains five sections: `query-preprocessing`,
`normalization`, `transliteration`, `sanitizers` and `token-analysis`.

#### Query preprocessing

The section for `query-preprocessing` defines an ordered list of functions
that are applied to the query before the token analysis.

The following is a list of preprocessors that are shipped with Nominatim.

##### normalize

::: nominatim_api.query_preprocessing.normalize
    options:
        members: False
        heading_level: 6
        docstring_section_style: spacy


#### Normalization and Transliteration

The normalization and transliteration sections each define a set of
@@ -14,10 +14,11 @@ of sanitizers and token analysis.
implemented, it is not guaranteed to be stable at the moment.


## Using non-standard modules

Sanitizer names (in the `step` property), token analysis names (in the
`analyzer`) and query preprocessor names (in the `step` property)
may refer to externally supplied modules. There are two ways
to include external modules: through a library or from the project directory.

To include a module from a library, use the absolute import path as name and
@@ -27,6 +28,47 @@ To use a custom module without creating a library, you can put the module
somewhere in your project directory and then use the relative path to the
file. Include the whole name of the file including the `.py` ending.

## Custom query preprocessors

A query preprocessor must export a single factory function `create` with
the following signature:

``` python
create(config: QueryConfig) -> Callable[[list[Phrase]], list[Phrase]]
```

The function receives the custom configuration for the preprocessor and
returns a callable (function or class) with the actual preprocessing
code. When a query comes in, the callable gets a list of phrases
and needs to return the transformed list of phrases. The list and phrases
may be changed in place or a completely new list may be generated.

The `QueryConfig` is a simple dictionary which contains all configuration
options given in the yaml configuration of the ICU tokenizer. It is up to
the function to interpret the values.

A `nominatim_api.search.Phrase` describes a part of the query that contains
one or more independent search terms. Breaking a query into phrases helps
reduce the number of possible tokens Nominatim has to take into account.
However, a phrase break is definitive: a multi-term search word cannot go
over a phrase break. A Phrase object has two fields:

* `ptype` further refines the type of phrase (see list below)
* `text` contains the query text for the phrase

The order of phrases matters to Nominatim when doing further processing.
Thus, while you may split or join phrases, you should not reorder them
unless you really know what you are doing.

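The following is a minimal sketch of such a preprocessor. It collapses runs
of whitespace inside each phrase and drops phrases that become empty. The
import path for `QueryConfig` and the `min-length` option are illustrative
assumptions; check the preprocessors shipped with Nominatim for the
authoritative interfaces.

``` python
from typing import Callable

from nominatim_api.search import Phrase
from nominatim_api.query_preprocessing.config import QueryConfig


def create(config: QueryConfig) -> Callable[[list[Phrase]], list[Phrase]]:
    # QueryConfig behaves like a dictionary; 'min-length' is a hypothetical
    # option used here only to show how configuration can be read.
    min_length = config.get('min-length', 1)

    def _process(phrases: list[Phrase]) -> list[Phrase]:
        out: list[Phrase] = []
        for phrase in phrases:
            # collapse any run of whitespace into a single space
            text = ' '.join(phrase.text.split())
            if len(text) >= min_length:
                # build a new phrase, keeping the original phrase type
                out.append(Phrase(phrase.ptype, text))
        return out   # the relative order of the phrases is preserved

    return _process
```

Saved as, say, `collapse_whitespace.py` in the project directory, the module
could then be listed by its file name under `query-preprocessing` in the ICU
tokenizer configuration.
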
Phrase types (`nominatim_api.search.PhraseType`) can further help narrow
down how the tokens in the phrase are interpreted. The following phrase types
are known:

::: nominatim_api.search.PhraseType
    options:
        heading_level: 6

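To show how the phrase type can be taken into account, here is another small
sketch. It rewrites only free-form phrases and passes structured ones through
untouched; the assumption that `PhraseType.NONE` marks a phrase without a
special role should be verified against the enumeration above.

``` python
from typing import Callable

from nominatim_api.search import Phrase, PhraseType
from nominatim_api.query_preprocessing.config import QueryConfig


def create(config: QueryConfig) -> Callable[[list[Phrase]], list[Phrase]]:
    def _process(phrases: list[Phrase]) -> list[Phrase]:
        # leave phrases with a dedicated type (street, city, ...) untouched;
        # lower-casing here is purely illustrative, as the shipped 'normalize'
        # preprocessor already takes care of proper normalization
        return [Phrase(p.ptype, p.text.lower())
                if p.ptype == PhraseType.NONE else p
                for p in phrases]

    return _process
```
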
## Custom sanitizer modules

A sanitizer module must export a single factory function `create` with the
@@ -90,21 +132,22 @@ adding extra attributes) or completely replace the list with a different one.

The following sanitizer removes the directional prefixes from street names
in the US:

!!! example
    ``` python
    import re

    def _filter_function(obj):
        if obj.place.country_code == 'us' \
           and obj.place.rank_address >= 26 and obj.place.rank_address <= 27:
            for name in obj.names:
                name.name = re.sub(r'^(north|south|west|east) ',
                                   '',
                                   name.name,
                                   flags=re.IGNORECASE)

    def create(config):
        return _filter_function
    ```

This is the simplest form of a sanitizer module. It defines a single
filter function and implements the required `create()` function by returning
@@ -128,13 +171,13 @@ sanitizers:

!!! warning
    This example is just a simplified showcase on how to create a sanitizer.
    It is not really meant for real-world use: while the sanitizer would
    correctly transform `West 5th Street` into `5th Street`, it would also
    shorten a simple `North Street` to `Street`.

For more sanitizer examples, have a look at the sanitizers provided by Nominatim.
They can be found in the directory
[`src/nominatim_db/tokenizer/sanitizers`](https://github.com/osm-search/Nominatim/tree/master/src/nominatim_db/tokenizer/sanitizers).

## Custom token analysis module

@@ -5,7 +5,12 @@
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Normalize query text using the same ICU normalization rules that are
applied during import. If a phrase becomes empty because the normalization
removes all terms, then the phrase is deleted.

This preprocessor does not take any extra configuration. Instead it
uses the configuration from the `normalization` section.
"""
from typing import cast