Mirror of https://github.com/osm-search/Nominatim.git
Synced 2026-02-15 19:07:58 +00:00

Compare commits: 8a96e4f802...docs-5.1.x (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 76b8b07f16 |  |

.flake8 — 2 changes
````diff
@@ -7,5 +7,5 @@ extend-ignore =
 per-file-ignores =
     __init__.py: F401
     test/python/utils/test_json_writer.py: E131
-    **/conftest.py: E402
+    test/python/conftest.py: E402
     test/bdd/*: F821
````
.github/workflows/ci-tests.yml (vendored) — 43 changes
````diff
@@ -68,34 +68,26 @@ jobs:
         with:
           dependencies: ${{ matrix.dependencies }}

-      - uses: actions/cache@v4
-        with:
-          path: |
-            /usr/local/bin/osm2pgsql
-          key: osm2pgsql-bin-22-1
-        if: matrix.ubuntu == '22'
-
       - name: Compile osm2pgsql
         run: |
-          if [ ! -f /usr/local/bin/osm2pgsql ]; then
-            sudo apt-get install -y -qq libboost-system-dev libboost-filesystem-dev libexpat1-dev zlib1g-dev libbz2-dev libpq-dev libproj-dev libicu-dev liblua${LUA_VERSION}-dev lua-dkjson nlohmann-json3-dev
-            mkdir osm2pgsql-build
-            cd osm2pgsql-build
-            git clone https://github.com/osm2pgsql-dev/osm2pgsql
-            mkdir build
-            cd build
-            cmake ../osm2pgsql
-            make
-            sudo make install
-            cd ../..
-            rm -rf osm2pgsql-build
-          else
-            sudo apt-get install -y -qq libexpat1 liblua${LUA_VERSION}
-          fi
+          sudo apt-get install -y -qq libboost-system-dev libboost-filesystem-dev libexpat1-dev zlib1g-dev libbz2-dev libpq-dev libproj-dev libicu-dev liblua${LUA_VERSION}-dev lua-dkjson nlohmann-json3-dev
+          mkdir osm2pgsql-build
+          cd osm2pgsql-build
+          git clone https://github.com/osm2pgsql-dev/osm2pgsql
+          mkdir build
+          cd build
+          cmake ../osm2pgsql
+          make
+          sudo make install
+          cd ../..
+          rm -rf osm2pgsql-build
+        if: matrix.ubuntu == '22'
+        env:
+          LUA_VERSION: ${{ matrix.lua }}

       - name: Install test prerequisites
         run: ./venv/bin/pip install behave==1.2.6

       - name: Install test prerequisites (apt)
         run: sudo apt-get install -y -qq python3-pytest python3-pytest-asyncio uvicorn python3-falcon python3-aiosqlite python3-pyosmium
         if: matrix.dependencies == 'apt'

@@ -104,9 +96,6 @@ jobs:
         run: ./venv/bin/pip install pytest-asyncio falcon starlette asgi_lifespan aiosqlite osmium uvicorn
         if: matrix.dependencies == 'pip'

-      - name: Install test prerequisites
-        run: ./venv/bin/pip install pytest-bdd
-
       - name: Install latest flake8
         run: ./venv/bin/pip install -U flake8

@@ -129,8 +118,8 @@ jobs:

       - name: BDD tests
         run: |
-          ../venv/bin/python -m pytest test/bdd --nominatim-purge
-        working-directory: Nominatim
+          ../../../venv/bin/python -m behave -DREMOVE_TEMPLATE=1 --format=progress3
+        working-directory: Nominatim/test/bdd

   install:
     runs-on: ubuntu-latest
````
````diff
@@ -113,5 +113,3 @@ Checklist for releases:
 * run `nominatim --version` to confirm correct version
 * [ ] tag new release and add a release on github.com
 * [ ] build pip packages and upload to pypi
-  * `make build`
-  * `twine upload dist/*`
````
Makefile — 2 changes

````diff
@@ -27,7 +27,7 @@ lint:
 	flake8 src test/python test/bdd

 bdd:
-	pytest test/bdd --nominatim-purge
+	cd test/bdd; behave -DREMOVE_TEMPLATE=1

 # Documentation
````
README.md — 15 changes

````diff
@@ -27,25 +27,18 @@ can be found at nominatim.org as well.

 A quick summary of the necessary steps:

-1. Clone this git repository and download the country grid
-
-        git clone https://github.com/osm-search/Nominatim.git
-        wget -O Nominatim/data/country_osm_grid.sql.gz https://nominatim.org/data/country_grid.sql.gz
-
-2. Create a Python virtualenv and install the packages:
+1. Create a Python virtualenv and install the packages:

        python3 -m venv nominatim-venv
        ./nominatim-venv/bin/pip install packaging/nominatim-{api,db}

-3. Create a project directory, get OSM data and import:
+2. Create a project directory, get OSM data and import:

        mkdir nominatim-project
        cd nominatim-project
-       ../nominatim-venv/bin/nominatim import --osm-file <your planet file> 2>&1 | tee setup.log
+       ../nominatim-venv/bin/nominatim import --osm-file <your planet file>

-4. Start the webserver:
+3. Start the webserver:

        ./nominatim-venv/bin/pip install uvicorn falcon
        ../nominatim-venv/bin/nominatim serve

@@ -110,14 +110,17 @@ Then you can install Nominatim with:

     pip install nominatim-db nominatim-api

-## Downloading and building Nominatim
+## Downloading and building Nominatim from source

-### Downloading the latest release
+The following instructions are only relevant, if you want to build and
+install Nominatim **from source**.
+
+### Downloading the source for the latest release

 You can download the [latest release from nominatim.org](https://nominatim.org/downloads/).
 The release contains all necessary files. Just unpack it.

-### Downloading the latest development version
+### Downloading the source for the latest development version

 If you want to install latest development version from github:

@@ -131,7 +134,7 @@ The development version does not include the country grid. Download it separatel
 wget -O Nominatim/data/country_osm_grid.sql.gz https://nominatim.org/data/country_grid.sql.gz
 ```

-### Building Nominatim
+### Building Nominatim from source

 Nominatim is easiest to run from its own virtual environment. To create one, run:
````
````diff
@@ -36,11 +36,11 @@ The website is now available at `http://localhost:8765`.

 ## Forwarding searches to nominatim-ui

 Nominatim used to provide the search interface directly by itself when
-`format=html` was requested. For the `/search` endpoint this even used
-to be the default.
+`format=html` was requested. For all endpoints except for `/reverse` and
+`/lookup` this even used to be the default.

 The following section describes how to set up Apache or nginx, so that your
-users are forwarded to nominatim-ui when they go to a URL that formerly presented
+users are forwarded to nominatim-ui when they go to URL that formerly presented
 the UI.

 ### Setting up forwarding in Nginx
````
````diff
@@ -73,28 +73,41 @@ map $args $format {

 # Determine from the URI and the format parameter above if forwarding is needed.
 map $uri/$format $forward_to_ui {
-    default 0;             # no forwarding by default
-    ~/search.*/default 1;  # Use this line only, if search should go to UI by default.
-    ~/reverse.*/html 1;    # Forward API calls that UI supports, when
-    ~/status.*/html 1;     # format=html is explicitly requested.
-    ~/search.*/html 1;
-    ~/details.*/html 1;
+    default 1;             # The default is to forward.
+    ~^/ui 0;               # If the URI point to the UI already, we are done.
+    ~/other$ 0;            # An explicit non-html format parameter. No forwarding.
+    ~/reverse.*/default 0; # Reverse and lookup assume xml format when
+    ~/lookup.*/default 0;  # no format parameter is given. No forwarding.
 }
 ```

 The `$forward_to_ui` parameter can now be used to conditionally forward the
 calls:

 ``` nginx
 location / {
-    if ($forward_to_ui) {
-        rewrite ^(/[^/.]*) https://$http_host/ui$1.html redirect;
-    }
-```
+    # When no endpoint is given, default to search.
+    # Need to add a rewrite so that the rewrite rules below catch it correctly.
+    rewrite ^/$ /search;
+
+    # proxy_pass commands
+}
+
+location @php {
+    # fastcgi stuff..
+    if ($forward_to_ui) {
+        rewrite ^(/[^/]*) https://yourserver.com/ui$1.html redirect;
+    }
+}
+
+location ~ [^/]\.php(/|$) {
+    # fastcgi stuff..
+    if ($forward_to_ui) {
+        rewrite (.*).php https://yourserver.com/ui$1.html redirect;
+    }
+}
+```

 !!! warning
     Be aware that the rewrite commands are slightly different for URIs with and
     without the .php suffix.

 Reload nginx and the UI should be available.

 ### Setting up forwarding in Apache
````
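The two `map` variants above differ only in their default: one forwards nothing unless a pattern matches, the other forwards everything except the listed exceptions. As an illustration only (not part of the change itself; the function name and inputs are hypothetical), the default-to-forward decision can be sketched in Python:

```python
import re

def forward_to_ui(uri: str, fmt: str) -> bool:
    """Decide whether a request should be redirected to nominatim-ui.

    fmt is 'html', 'default' (no format parameter given) or
    'other' (an explicit non-html format parameter)."""
    key = f"{uri}/{fmt}"
    if re.search(r'^/ui', uri):                # URI points to the UI already
        return False
    if re.search(r'/other$', key):             # explicit non-html format
        return False
    if re.search(r'/reverse.*/default', key):  # reverse assumes xml by default
        return False
    if re.search(r'/lookup.*/default', key):   # lookup assumes xml by default
        return False
    return True                                # default: forward

print(forward_to_ui('/search', 'default'))   # True
print(forward_to_ui('/reverse', 'default'))  # False
```

Note that nginx evaluates `map` patterns against the combined `$uri/$format` key in much the same way, which is why `/reverse` and `/lookup` without a format parameter stay on the API.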
````diff
@@ -146,16 +159,18 @@ directory like this:
   RewriteBase "/nominatim/"

   # If no endpoint is given, then use search.
-  RewriteRule ^(/|$)   "search"
+  RewriteRule ^(/|$)   "search.php"

   # If format-html is explicitly requested, forward to the UI.
   RewriteCond %{QUERY_STRING} "format=html"
-  RewriteRule ^([^/.]+) ui/$1.html [R,END]
+  RewriteRule ^([^/]+)(.php)? ui/$1.html [R,END]

-  # Optionally: if no format parameter is there then forward /search.
+  # If no format parameter is there then forward anything
+  # but /reverse and /lookup to the UI.
   RewriteCond %{QUERY_STRING} "!format="
-  RewriteCond %{REQUEST_URI} "/search"
-  RewriteRule ^([^/.]+) ui/$1.html [R,END]
+  RewriteCond %{REQUEST_URI} "!/lookup"
+  RewriteCond %{REQUEST_URI} "!/reverse"
+  RewriteRule ^([^/]+)(.php)? ui/$1.html [R,END]
 </Directory>
 ```
````
````diff
@@ -212,7 +212,7 @@ other layers.
 The featureType allows to have a more fine-grained selection for places
 from the address layer. Results can be restricted to places that make up
 the 'state', 'country' or 'city' part of an address. A featureType of
-`settlement` selects any human inhabited feature from 'state' down to
+settlement selects any human inhabited feature from 'state' down to
 'neighbourhood'.

 When featureType is set, then results are automatically restricted
````
````diff
@@ -556,6 +556,16 @@ the Nominatim topic.
 ```
 Discarding country-level boundaries when running under themepark.

+## osm2pgsql gazetteer output
+
+Nominatim still allows you to configure the gazetteer output to remain
+backwards compatible with older imports. It will be automatically used
+when the style file name ends in `.style`. For documentation of the
+old import style, please refer to the documentation of older releases
+of Nominatim. Do not use the gazetteer output for new imports. There is no
+guarantee that new versions of Nominatim are fully compatible with the
+gazetteer output.
+
 ## Changing the style of existing databases

 There is usually no issue changing the style of a database that is already
````
````diff
@@ -602,44 +602,25 @@ results gathered so far.
 Note that under high load you may observe that users receive different results
 than usual without seeing an error. This may cause some confusion.

-#### NOMINATIM_OUTPUT_NAMES
+### Logging Settings
+
+#### NOMINATIM_LOG_DB

 | Summary            |                                                     |
 | --------------     | --------------------------------------------------- |
-| **Description:**   | Specifies order of name tags |
-| **Format:**        | string: comma-separated list of tag names |
-| **Default:**       | name:XX,name,brand,official_name:XX,short_name:XX,official_name,short_name,ref |
+| **Description:**   | Log requests into the database |
+| **Format:**        | boolean |
+| **Default:**       | no |
+| **After Changes:** | run `nominatim refresh --website` |

-Specifies the order in which different name tags are used.
-The values in this list determine the preferred order of name variants,
-including language-specific names (in OSM: the name tag with and without any language suffix).
+Enable logging requests into a database table with this setting. The logs
+can be found in the table `new_query_log`.

-Comma-separated list, where :XX stands for language suffix
-(e.g. name:en) and no :XX stands for general tags (e.g. name).
+When using this logging method, it is advisable to set up a job that
+regularly clears out old logging information. Nominatim will not do that
+on its own.

-See also [NOMINATIM_DEFAULT_LANGUAGE](#nominatim_default_language).
-
-!!! note
-    If NOMINATIM_OUTPUT_NAMES = `name:XX,name,short_name:XX,short_name` the search follows
-
-    ```
-    'name', 'short_name'
-    ```
-
-    if we have no preferred language order for showing search results.
-
-    For languages ['en', 'es'] the search follows
-
-    ```
-    'name:en', 'name:es',
-    'name',
-    'short_name:en', 'short_name:es',
-    'short_name'
-    ```
-
-    For those familiar with the internal implementation, the `_place_*` expansion is added, but to simplify, it is not included in this example.
-
-### Logging Settings
+Can be used as the same time as NOMINATIM_LOG_FILE.

 #### NOMINATIM_LOG_FILE

@@ -664,6 +645,8 @@ given in seconds and includes the entire time the query was queued and executed
 in the frontend.
 type contains the name of the endpoint used.

+Can be used as the same time as NOMINATIM_LOG_DB.
+
 #### NOMINATIM_DEBUG_SQL

 | Summary            |                                                     |
````
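The name-tag ordering documented for NOMINATIM_OUTPUT_NAMES (each `:XX` entry fanning out over the preferred languages) can be sketched in a few lines. This is only an illustration of the documented expansion, not the actual Nominatim code; the helper name is made up:

```python
from typing import List

def expand_name_tags(output_names: str, languages: List[str]) -> List[str]:
    """Expand a NOMINATIM_OUTPUT_NAMES-style list: an entry ending in
    ':XX' becomes one tag per preferred language, plain entries are
    kept as they are."""
    tags: List[str] = []
    for part in (p.strip() for p in output_names.split(',')):
        if part.endswith(':XX'):
            tags.extend(f"{part[:-3]}:{lang}" for lang in languages)
        else:
            tags.append(part)
    return tags

# No preferred languages: only the plain tags remain.
print(expand_name_tags('name:XX,name,short_name:XX,short_name', []))
# → ['name', 'short_name']

# Languages ['en', 'es']: language-specific variants come first.
print(expand_name_tags('name:XX,name,short_name:XX,short_name', ['en', 'es']))
# → ['name:en', 'name:es', 'name', 'short_name:en', 'short_name:es', 'short_name']
```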
````diff
@@ -67,13 +67,7 @@ Here is an example configuration file:

 ``` yaml
 query-preprocessing:
-    - step: split_japanese_phrases
-    - step: regex_replace
-      replacements:
-          - pattern: https?://[^\s]* # Filter URLs starting with http or https
-            replace: ''
-    - step: normalize
+    - normalize
 normalization:
     - ":: lower ()"
     - "ß > 'ss'" # German szet is unambiguously equal to double ss
````
````diff
@@ -94,8 +88,8 @@ token-analysis:
           replacements: ['ä', 'ae']
 ```

-The configuration file contains five sections:
-`query-preprocessing`, `normalization`, `transliteration`, `sanitizers` and `token-analysis`.
+The configuration file contains four sections:
+`normalization`, `transliteration`, `sanitizers` and `token-analysis`.

 #### Query preprocessing
````
````diff
@@ -112,19 +106,6 @@ The following is a list of preprocessors that are shipped with Nominatim.
         heading_level: 6
         docstring_section_style: spacy

-##### regex-replace
-
-::: nominatim_api.query_preprocessing.regex_replace
-    options:
-        members: False
-        heading_level: 6
-        docstring_section_style: spacy
-    description:
-        This option runs any given regex pattern on the input and replaces values accordingly
-    replacements:
-        - pattern: regex pattern
-          replace: string to replace with
-
-
 #### Normalization and Transliteration
````
````diff
@@ -3,7 +3,8 @@
 ### Import tables

 OSM data is initially imported using [osm2pgsql](https://osm2pgsql.org).
-Nominatim uses a custom flex style to create the initial import tables.
+Nominatim uses its own data output style 'gazetteer', which differs from the
+output style created for map rendering.

 The import process creates the following tables:

@@ -13,7 +14,7 @@ The `planet_osm_*` tables are the usual backing tables for OSM data. Note
 that Nominatim uses them to look up special relations and to find nodes on
 ways.

-The osm2pgsql import produces a single table `place` as output with the following
+The gazetteer style produces a single table `place` as output with the following
 columns:

 * `osm_type` - kind of OSM object (**N** - node, **W** - way, **R** - relation)
````
````diff
@@ -25,15 +25,15 @@ following packages should get you started:

 ## Prerequisites for testing and documentation

-The Nominatim test suite consists of behavioural tests (using pytest-bdd) and
+The Nominatim test suite consists of behavioural tests (using behave) and
 unit tests (using pytest). It has the following additional requirements:

+* [behave test framework](https://behave.readthedocs.io) >= 1.2.6
 * [flake8](https://flake8.pycqa.org/en/stable/) (CI always runs the latest version from pip)
 * [mypy](http://mypy-lang.org/) (plus typing information for external libs)
 * [Python Typing Extensions](https://github.com/python/typing_extensions) (for Python < 3.9)
 * [pytest](https://pytest.org)
 * [pytest-asyncio](https://pytest-asyncio.readthedocs.io)
-* [pytest-bdd](https://pytest-bdd.readthedocs.io)

 For testing the Python search frontend, you need to install extra dependencies
 depending on your choice of webserver framework:

@@ -48,6 +48,9 @@ The documentation is built with mkdocs:
 * [mkdocs-material](https://squidfunk.github.io/mkdocs-material/)
 * [mkdocs-gen-files](https://oprypin.github.io/mkdocs-gen-files/)

+Please be aware that tests always run against the globally installed
+osm2pgsql, so you need to have this set up. If you want to test against
+the vendored version of osm2pgsql, you need to set the PATH accordingly.

 ### Installing prerequisites on Ubuntu/Debian

@@ -67,9 +70,8 @@ To set up the virtual environment with all necessary packages run:
    virtualenv ~/nominatim-dev-venv
    ~/nominatim-dev-venv/bin/pip install\
        psutil 'psycopg[binary]' PyICU SQLAlchemy \
-       python-dotenv jinja2 pyYAML \
-       mkdocs 'mkdocstrings[python]' mkdocs-gen-files \
-       pytest pytest-asyncio pytest-bdd flake8 \
+       python-dotenv jinja2 pyYAML behave \
+       mkdocs 'mkdocstrings[python]' mkdocs-gen-files pytest pytest-asyncio flake8 \
        types-jinja2 types-markupsafe types-psutil types-psycopg2 \
        types-pygments types-pyyaml types-requests types-ujson \
        types-urllib3 typing-extensions unicorn falcon starlette \
````
````diff
@@ -43,53 +43,53 @@ The name of the pytest binary depends on your installation.
 ## BDD Functional Tests (`test/bdd`)

 Functional tests are written as BDD instructions. For more information on
-the philosophy of BDD testing, read the Wikipedia article on
-[Behaviour-driven development](https://en.wikipedia.org/wiki/Behavior-driven_development).
+the philosophy of BDD testing, see the
+[Behave manual](http://pythonhosted.org/behave/philosophy.html).

 The following explanation assume that the reader is familiar with the BDD
 notations of features, scenarios and steps.

 All possible steps can be found in the `steps` directory and should ideally
 be documented.

 ### General Usage

 To run the functional tests, do

-    pytest test/bdd
+    cd test/bdd
+    behave

-The BDD tests create databases for the tests. You can set name of the databases
-through configuration variables in your `pytest.ini`:
+The tests can be configured with a set of environment variables (`behave -D key=val`):

-* `nominatim_test_db` defines the name of the temporary database created for
-  a single test (default: `test_nominatim`)
-* `nominatim_api_test_db` defines the name of the database containing
-  the API test data, see also below (default: `test_api_nominatim`)
-* `nominatim_template_db` defines the name of the template database used
-  for creating the temporary test databases. It contains some static setup
-  which usually doesn't change between imports of OSM data
-  (default: `test_template_nominatim`)
-
-To change other connection parameters for the PostgreSQL database, use
-the [libpq enivronment variables](https://www.postgresql.org/docs/current/libpq-envars.html).
-Never set a password through these variables. Use a
-[password file](https://www.postgresql.org/docs/current/libpq-pgpass.html) instead.
-
-The API test database and the template database are only created once and then
-left untouched. This is usually what you want because it speeds up subsequent
-runs of BDD tests. If you do change code that has an influence on the content
-of these databases, you can run pytest with the `--nominatim-purge` parameter
-and the databases will be dropped and recreated from scratch.
-
-When running the BDD tests with make (using `make tests` or `make bdd`), then
-the databases will always be purged.
-
-The temporary test database is usually dropped directly after the test, so
-it does not take up unnecessary space. If you want to keep the database around,
-for example while debugging a specific BDD test, use the parameter
-`--nominatim-keep-db`.
+* `TEMPLATE_DB` - name of template database used as a skeleton for
+                  the test databases (db tests)
+* `TEST_DB` - name of test database (db tests)
+* `API_TEST_DB` - name of the database containing the API test data (api tests)
+* `API_TEST_FILE` - OSM file to be imported into the API test database (api tests)
+* `API_ENGINE` - webframe to use for running search queries, same values as
+                 `nominatim serve --engine` parameter
+* `DB_HOST` - (optional) hostname of database host
+* `DB_PORT` - (optional) port of database on host
+* `DB_USER` - (optional) username of database login
+* `DB_PASS` - (optional) password for database login
+* `REMOVE_TEMPLATE` - if true, the template and API database will not be reused
+                      during the next run. Reusing the base templates speeds
+                      up tests considerably but might lead to outdated errors
+                      for some changes in the database layout.
+* `KEEP_TEST_DB` - if true, the test database will not be dropped after a test
+                   is finished. Should only be used if one single scenario is
+                   run, otherwise the result is undefined.
+
+Logging can be defined through command line parameters of behave itself. Check
+out `behave --help` for details. Also have a look at the 'work-in-progress'
+feature of behave which comes in handy when writing new tests.

 ### API Tests (`test/bdd/api`)

 These tests are meant to test the different API endpoints and their parameters.
 They require to import several datasets into a test database. This is normally
 done automatically during setup of the test. The API test database is then
-kept around and reused in subsequent runs of behave. Use `--nominatim-purge`
+kept around and reused in subsequent runs of behave. Use `behave -DREMOVE_TEMPLATE`
 to force a reimport of the database.

 The official test dataset is saved in the file `test/testdb/apidb-test-data.pbf`
````
````diff
@@ -109,12 +109,12 @@ test the correctness of osm2pgsql. Each test will write some data into the `place`
 table (and optionally the `planet_osm_*` tables if required) and then run
 Nominatim's processing functions on that.

-These tests use the template database and create temporary test databases for
-each test.
+These tests need to create their own test databases. By default they will be
+called `test_template_nominatim` and `test_nominatim`. Names can be changed with
+the environment variables `TEMPLATE_DB` and `TEST_DB`. The user running the tests
+needs superuser rights for postgres.

 ### Import Tests (`test/bdd/osm2pgsql`)

-These tests check that data is imported correctly into the place table.
-
-These tests also use the template database and create temporary test databases
-for each test.
+These tests check that data is imported correctly into the place table. They
+use the same template database as the DB Creation tests, so the same remarks apply.
````
````diff
@@ -9,7 +9,7 @@ the address computation and the search frontend.

 The __data import__ stage reads the raw OSM data and extracts all information
 that is useful for geocoding. This part is done by osm2pgsql, the same tool
 that can also be used to import a rendering database. It uses the special
-flex output style defined in the directory `/lib-lua`. The result of
+gazetteer output plugin in `osm2pgsql/src/output-gazetter.[ch]pp`. The result of
 the import can be found in the database table `place`.

 The __address computation__ or __indexing__ stage takes the data from `place`
````
````diff
@@ -187,7 +187,7 @@ module.MAIN_TAGS_POIS = function (group)
                   passing_place = group,
                   street_lamp = 'named',
                   traffic_signals = 'named'},
-        historic = {'fallback',
+        historic = {'always',
                     yes = group,
                     no = group},
         information = {include_when_tag_present('tourism', 'information'),

@@ -196,7 +196,6 @@ module.MAIN_TAGS_POIS = function (group)
                        trail_blaze = 'never'},
         junction = {'fallback',
                     no = group},
-        landuse = {cemetery = 'always'},
         leisure = {'always',
                    nature_reserve = 'fallback',
                    swimming_pool = 'named',

@@ -230,7 +229,6 @@ module.MAIN_TAGS_POIS = function (group)
         shop = {'always',
                 no = group},
         tourism = {'always',
-                   attraction = 'fallback',
                    no = group,
                    yes = group,
                    information = exclude_when_key_present('information')},

@@ -332,7 +330,7 @@ module.NAME_TAGS.core = {main = {'name', 'name:*',
                         }
 module.NAME_TAGS.address = {house = {'addr:housename'}}
 module.NAME_TAGS.poi = group_merge({main = {'brand'},
-                                    extra = {'iata', 'icao', 'faa'}},
+                                    extra = {'iata', 'icao'}},
                                    module.NAME_TAGS.core)

 -- Address tagging
````
````diff
@@ -638,10 +638,8 @@ BEGIN

       -- Add it to the list of search terms
       {% if not db.reverse_only %}
-      IF location.rank_address != 11 AND location.rank_address != 5 THEN
-        nameaddress_vector := array_merge(nameaddress_vector,
-                                          location.keywords::integer[]);
-      END IF;
+      nameaddress_vector := array_merge(nameaddress_vector,
+                                        location.keywords::integer[]);
       {% endif %}

       INSERT INTO place_addressline (place_id, address_place_id, fromarea,
````
````diff
@@ -88,10 +88,6 @@ BEGIN
     area := area / 3;
   ELSIF country_code IN ('bo', 'ar', 'sd', 'mn', 'in', 'et', 'cd', 'mz', 'ly', 'cl', 'zm') THEN
     area := area / 2;
-  ELSIF country_code IN ('sg', 'ws', 'st', 'kn') THEN
-    area := area * 5;
-  ELSIF country_code IN ('dm', 'mt', 'lc', 'gg', 'sc', 'nr') THEN
-    area := area * 20;
   END IF;

   IF area > 1 THEN
````
````diff
@@ -1,4 +1,4 @@
-site_name: Nominatim Manual
+site_name: Nominatim 5.1.0 Manual
 theme:
     font: false
     name: material
````
````diff
@@ -944,7 +944,7 @@ kp:
 # South Korea (대한민국)
 kr:
     partition: 49
-    languages: ko
+    languages: ko, en
     names: !include country-names/kr.yaml
     postcode:
       pattern: "ddddd"
````
````diff
@@ -5,6 +5,7 @@
 # Database connection string.
 # Add host, port, user etc through additional semicolon-separated attributes.
 # e.g. ;host=...;port=...;user=...;password=...
+# Changing this variable requires to run 'nominatim refresh --website'.
 NOMINATIM_DATABASE_DSN="pgsql:dbname=nominatim"

 # Database web user.

@@ -35,11 +36,11 @@ NOMINATIM_TOKENIZER_CONFIG=

 # Search in the Tiger house number data for the US.
 # Note: The tables must already exist or queries will throw errors.
-# Changing this value requires to run ./utils/setup --create-functions.
+# Changing this value requires to run ./utils/setup --create-functions --setup-website.
 NOMINATIM_USE_US_TIGER_DATA=no

 # Search in the auxiliary housenumber table.
-# Changing this value requires to run ./utils/setup --create-functions.
+# Changing this value requires to run ./utils/setup --create-functions --setup-website.
 NOMINATIM_USE_AUX_LOCATION_DATA=no

 # Proxy settings

@@ -142,7 +143,8 @@ NOMINATIM_REPLICATION_RECHECK_INTERVAL=60

 ### API settings
 #
-# The following settings configure the API responses.
+# The following settings configure the API responses. You must rerun
+# 'nominatim refresh --website' after changing any of them.

 # Send permissive CORS access headers.
 # When enabled, send CORS headers to allow access to everybody.

@@ -190,17 +192,16 @@ NOMINATIM_REQUEST_TIMEOUT=60
 # to geocode" instead.
 NOMINATIM_SEARCH_WITHIN_COUNTRIES=False

-# Specifies the order in which different name tags are used.
-# The values in this list determine the preferred order of name variants,
-# including language-specific names.
-# Comma-separated list, where :XX stands for language-specific tags
-# (e.g. name:en) and no :XX stands for general tags (e.g. name).
-NOMINATIM_OUTPUT_NAMES=name:XX,name,brand,official_name:XX,short_name:XX,official_name,short_name,ref
-
 ### Log settings
 #
 # The following options allow to enable logging of API requests.
+# You must rerun 'nominatim refresh --website' after changing any of them.
 #
+# Enable logging of requests into the DB.
+# The request will be logged into the new_query_log table.
+# You should set up a cron job that regularly clears out this table.
+NOMINATIM_LOG_DB=no

 # Enable logging of requests into a file.
 # To enable logging set this setting to the file to log to.
 NOMINATIM_LOG_FILE=
````
````diff
@@ -8,7 +8,6 @@
 Helper functions for localizing names of results.
 """
 from typing import Mapping, List, Optional
-from .config import Configuration

 import re

@@ -21,18 +20,14 @@ class Locales:
     """

     def __init__(self, langs: Optional[List[str]] = None):
-        self.config = Configuration(None)
         self.languages = langs or []
         self.name_tags: List[str] = []

-        parts = self.config.OUTPUT_NAMES.split(',')
-
-        for part in parts:
-            part = part.strip()
-            if part.endswith(":XX"):
-                self._add_lang_tags(part[:-3])
-            else:
-                self._add_tags(part)
+        # Build the list of supported tags. It is currently hard-coded.
+        self._add_lang_tags('name')
+        self._add_tags('name', 'brand')
+        self._add_lang_tags('official_name', 'short_name')
+        self._add_tags('official_name', 'short_name', 'ref')

     def __bool__(self) -> bool:
         return len(self.languages) > 0
````
````diff
@@ -342,8 +342,7 @@ HTML_HEADER: str = """<!DOCTYPE html>
   <title>Nominatim - Debug</title>
   <style>
 """ + \
-    (HtmlFormatter(nobackground=True).get_style_defs('.highlight')  # type: ignore[no-untyped-call]
-     if CODE_HIGHLIGHT else '') + \
+    (HtmlFormatter(nobackground=True).get_style_defs('.highlight') if CODE_HIGHLIGHT else '') + \
 """
   h2 { font-size: x-large }
````
@@ -1,52 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
This preprocessor replaces values in a given input based on pre-defined regex rules.

Arguments:
    pattern: Regex pattern to be applied on the input
    replace: The string that it is to be replaced with
"""
from typing import List
import re

from .config import QueryConfig
from .base import QueryProcessingFunc
from ..search.query import Phrase


class _GenericPreprocessing:
    """Perform replacements to input phrases using custom regex patterns."""

    def __init__(self, config: QueryConfig) -> None:
        """Initialise the _GenericPreprocessing class with patterns from the ICU config file."""
        self.config = config

        match_patterns = self.config.get('replacements', 'Key not found')
        self.compiled_patterns = [
            (re.compile(item['pattern']), item['replace']) for item in match_patterns
        ]

    def split_phrase(self, phrase: Phrase) -> Phrase:
        """This function performs replacements on the given text using regex patterns."""
        for item in self.compiled_patterns:
            phrase.text = item[0].sub(item[1], phrase.text)

        return phrase

    def __call__(self, phrases: List[Phrase]) -> List[Phrase]:
        """
        Return the final Phrase list.
        Returns an empty list if there is nothing left after split_phrase.
        """
        result = [p for p in map(self.split_phrase, phrases) if p.text.strip()]
        return result


def create(config: QueryConfig) -> QueryProcessingFunc:
    """ Create a function for generic preprocessing."""
    return _GenericPreprocessing(config)
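The preprocessor removed above boils down to a small pattern: compile the configured regex rules once, apply every rule to each phrase in order, and drop phrases that the replacements leave empty. A minimal standalone sketch of that idea (the `Phrase` dataclass and `make_preprocessor` helper here are illustrative stand-ins, not the Nominatim API):

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Phrase:
    """Stand-in for Nominatim's Phrase: just the query text."""
    text: str


def make_preprocessor(rules: List[Tuple[str, str]]) -> Callable[[List[Phrase]], List[Phrase]]:
    """Compile (pattern, replacement) rules once; return a phrase filter."""
    compiled = [(re.compile(pattern), repl) for pattern, repl in rules]

    def process(phrases: List[Phrase]) -> List[Phrase]:
        out = []
        for phrase in phrases:
            for pattern, repl in compiled:
                phrase.text = pattern.sub(repl, phrase.text)
            if phrase.text.strip():  # drop phrases emptied by the replacements
                out.append(phrase)
        return out

    return process


proc = make_preprocessor([(r'\bstr\b', 'street'), (r'\s+', ' ')])
cleaned = proc([Phrase('Main str  5'), Phrase('   ')])
print([p.text for p in cleaned])  # ['Main street 5']
```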
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Conversion from token assignment to an abstract DB search.

@@ -146,7 +146,7 @@ class SearchBuilder:
        if address:
            sdata.lookups = [dbf.FieldLookup('nameaddress_vector',
                                             [t.token for r in address
                                              for t in self.query.iter_partials(r)],
                                              for t in self.query.get_partials_list(r)],
                                             lookups.Restrict)]
        yield dbs.PostcodeSearch(penalty, sdata)
@@ -155,39 +155,32 @@ class SearchBuilder:
        """ Build a simple address search for special entries where the
            housenumber is the main name token.
        """
        partials = dbf.CountedTokenIDs((t for trange in address
                                        for t in self.query.iter_partials(trange)),
                                       'addr_count')
        sdata.lookups = [dbf.FieldLookup('name_vector', [t.token for t in hnrs], lookups.LookupAny)]
        expected_count = sum(t.count for t in hnrs)

        partials = {t.token: t.addr_count for trange in address
                    for t in self.query.get_partials_list(trange)}

        if not partials:
            # can happen when none of the partials is indexed
            return

        expected_count = sum(t.count for t in hnrs)
        hnr_tokens = [t.token for t in hnrs]

        if expected_count < 10000:
            sdata.lookups = [dbf.FieldLookup('name_vector', hnr_tokens, lookups.LookupAny),
                             dbf.FieldLookup('nameaddress_vector',
                                             partials.get_tokens(),
                                             lookups.Restrict)]
        if expected_count < 8000:
            sdata.lookups.append(dbf.FieldLookup('nameaddress_vector',
                                                 list(partials), lookups.Restrict))
        elif len(partials) != 1 or list(partials.values())[0] < 10000:
            sdata.lookups.append(dbf.FieldLookup('nameaddress_vector',
                                                 list(partials), lookups.LookupAll))
        else:
            split = partials.get_num_lookup_tokens(20000, 5)
            if split > 0:
                sdata.lookups = partials.split_lookup(split, 'nameaddress_vector')
                sdata.lookups.append(
                    dbf.FieldLookup('name_vector', hnr_tokens, lookups.Restrict))
            else:
                addr_fulls = [t.token for t in
                              self.query.get_tokens(address[0], qmod.TOKEN_WORD)]
                if len(addr_fulls) > 5:
                    return
                sdata.lookups = [
                    dbf.FieldLookup('name_vector', hnr_tokens, lookups.LookupAny),
                    dbf.FieldLookup('nameaddress_vector', addr_fulls, lookups.LookupAny)]
            addr_fulls = [t.token for t
                          in self.query.get_tokens(address[0], qmod.TOKEN_WORD)]
            if len(addr_fulls) > 5:
                return
            sdata.lookups.append(
                dbf.FieldLookup('nameaddress_vector', addr_fulls, lookups.LookupAny))

        sdata.housenumbers = dbf.WeightedStrings([], [])
        yield dbs.PlaceSearch(0.05, sdata, expected_count, True)
        yield dbs.PlaceSearch(0.05, sdata, expected_count)
    def build_name_search(self, sdata: dbf.SearchData,
                          name: qmod.TokenRange, address: List[qmod.TokenRange],
@@ -201,10 +194,7 @@ class SearchBuilder:
        sdata.rankings.append(ranking)
        for penalty, count, lookup in self.yield_lookups(name, address):
            sdata.lookups = lookup
            if sdata.housenumbers:
                yield dbs.AddressSearch(penalty + name_penalty, sdata, count, bool(address))
            else:
                yield dbs.PlaceSearch(penalty + name_penalty, sdata, count, bool(address))
            yield dbs.PlaceSearch(penalty + name_penalty, sdata, count)

    def yield_lookups(self, name: qmod.TokenRange, address: List[qmod.TokenRange]
                      ) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
@@ -212,88 +202,37 @@ class SearchBuilder:
            be searched for. This takes into account how frequent the terms
            are and tries to find a lookup that optimizes index use.
        """
        name_partials = dbf.CountedTokenIDs(self.query.iter_partials(name))
        addr_partials = dbf.CountedTokenIDs((t for r in address
                                             for t in self.query.iter_partials(r)),
                                            'addr_count')

        if not addr_partials:
            yield from self.yield_name_only_lookups(name_partials, name)
        else:
            yield from self.yield_address_lookups(name_partials, addr_partials, name)

    def yield_name_only_lookups(self, partials: dbf.CountedTokenIDs, name: qmod.TokenRange
                                ) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
        """ Yield the best lookup for a name-only search.
        """
        split = partials.get_num_lookup_tokens(30000, 6)

        if split > 0:
            yield 0.0, partials.expected_for_all_search(5), \
                partials.split_lookup(split, 'name_vector')
        else:
            # lots of results expected: try lookup by full names first
            name_fulls = list(filter(lambda t: t.count < 50000,
                                     self.query.get_tokens(name, qmod.TOKEN_WORD)))
            if name_fulls:
                yield 0.0, sum(t.count for t in name_fulls), \
                    dbf.lookup_by_any_name([t.token for t in name_fulls], [], [])

            # look the name up by its partials
            exp_count = partials.expected_for_all_search(5)
            if exp_count < 50000:
                yield 1.0, exp_count, \
                    [dbf.FieldLookup('name_vector', partials.get_tokens(), lookups.LookupAll)]

    def yield_address_lookups(self, name_partials: dbf.CountedTokenIDs,
                              addr_partials: dbf.CountedTokenIDs, name: qmod.TokenRange,
                              ) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
        penalty = 0.0  # extra penalty
        name_partials = {t.token: t for t in self.query.get_partials_list(name)}

        name_split = name_partials.get_num_lookup_tokens(20000, 6)
        addr_split = addr_partials.get_num_lookup_tokens(10000, 3)
        addr_partials = [t for r in address for t in self.query.get_partials_list(r)]
        addr_tokens = list({t.token for t in addr_partials})

        if name_split < 0 and addr_split < 0:
            # Partial term too frequent. Try looking up by rare full names first.
            name_fulls = self.query.get_tokens(name, qmod.TOKEN_WORD)
            if name_fulls:
                fulls_count = sum(t.count for t in name_fulls)
        exp_count = min(t.count for t in name_partials.values()) / (3**(len(name_partials) - 1))

                if fulls_count < 80000:
                    yield 0.0, fulls_count, \
                        dbf.lookup_by_any_name([t.token for t in name_fulls],
                                               addr_partials.get_tokens(),
                                               [])
            penalty += 0.2
        penalty += 0.4
        if (len(name_partials) > 3 or exp_count < 8000):
            yield penalty, exp_count, dbf.lookup_by_names(list(name_partials.keys()), addr_tokens)
            return

        name_split = name_partials.get_num_lookup_tokens(50000, 10)
        addr_split = addr_partials.get_num_lookup_tokens(30000, 5)
        addr_count = min(t.addr_count for t in addr_partials) if addr_partials else 50000
        # Partial term too frequent. Try looking up by rare full names first.
        name_fulls = self.query.get_tokens(name, qmod.TOKEN_WORD)
        if name_fulls:
            fulls_count = sum(t.count for t in name_fulls)

        if name_split > 0 \
                and (addr_split < 0 or name_partials.min_count() <= addr_partials.min_count()):
            # lookup by name
            lookup = name_partials.split_lookup(name_split, 'name_vector')
            lookup.append(dbf.FieldLookup('nameaddress_vector',
                                          addr_partials.get_tokens(), lookups.Restrict))
            yield penalty, name_partials.expected_for_all_search(5), lookup
        elif addr_split > 0:
            # lookup by address
            lookup = addr_partials.split_lookup(addr_split, 'nameaddress_vector')
            lookup.append(dbf.FieldLookup('name_vector',
                                          name_partials.get_tokens(), lookups.Restrict))
            yield penalty, addr_partials.expected_for_all_search(3), lookup
        elif len(name_partials) > 1:
            penalty += 0.5
            # To catch remaining results, lookup by name and address
            # We only do this if there is a reasonable number of results expected.
            exp_count = min(name_partials.min_count(), addr_partials.min_count())
            exp_count = int(exp_count / (min(3, len(name_partials)) + min(3, len(addr_partials))))
            if exp_count < 50000:
                lookup = name_partials.split_lookup(3, 'name_vector')
                lookup.extend(addr_partials.split_lookup(3, 'nameaddress_vector'))
        if fulls_count < 50000 or addr_count < 50000:
            yield penalty, fulls_count / (2**len(addr_tokens)), \
                self.get_full_name_ranking(name_fulls, addr_partials,
                                           fulls_count > 30000 / max(1, len(addr_tokens)))

                yield penalty, exp_count, lookup
        # To catch remaining results, lookup by name and address
        # We only do this if there is a reasonable number of results expected.
        exp_count /= 2**len(addr_tokens)
        if exp_count < 10000 and addr_count < 20000:
            penalty += 0.35 * max(1 if name_fulls else 0.1,
                                  5 - len(name_partials) - len(addr_tokens))
            yield penalty, exp_count, \
                self.get_name_address_ranking(list(name_partials.keys()), addr_partials)
    def get_name_address_ranking(self, name_tokens: List[int],
                                 addr_partials: List[qmod.Token]) -> List[dbf.FieldLookup]:
@@ -340,14 +279,11 @@ class SearchBuilder:
        """ Create a ranking expression for a name term in the given range.
        """
        name_fulls = self.query.get_tokens(trange, qmod.TOKEN_WORD)
        full_word_penalty = self.query.get_in_word_penalty(trange)
        ranks = [dbf.RankedTokens(t.penalty + full_word_penalty, [t.token])
                 for t in name_fulls]
        ranks = [dbf.RankedTokens(t.penalty, [t.token]) for t in name_fulls]
        ranks.sort(key=lambda r: r.penalty)
        # Fallback, sum of penalty for partials
        default = sum(t.penalty for t in self.query.iter_partials(trange)) + 0.2
        default += sum(n.word_break_penalty
                       for n in self.query.nodes[trange.start + 1:trange.end])
        name_partials = self.query.get_partials_list(trange)
        default = sum(t.penalty for t in name_partials) + 0.2
        return dbf.FieldRanking(db_field, default, ranks)

    def get_addr_ranking(self, trange: qmod.TokenRange) -> dbf.FieldRanking:
@@ -359,40 +295,36 @@ class SearchBuilder:
        ranks: List[dbf.RankedTokens] = []

        while todo:
            _, pos, rank = heapq.heappop(todo)
            # partial node
            partial = self.query.nodes[pos].partial
            if partial is not None:
                if pos + 1 < trange.end:
                    penalty = rank.penalty + partial.penalty \
                        + self.query.nodes[pos + 1].word_break_penalty
                    heapq.heappush(todo, (-(pos + 1), pos + 1,
                                          dbf.RankedTokens(penalty, rank.tokens)))
                else:
                    ranks.append(dbf.RankedTokens(rank.penalty + partial.penalty,
                                                  rank.tokens))
            # full words
            neglen, pos, rank = heapq.heappop(todo)
            for tlist in self.query.nodes[pos].starting:
                if tlist.ttype == qmod.TOKEN_WORD:
                if tlist.ttype in (qmod.TOKEN_PARTIAL, qmod.TOKEN_WORD):
                    if tlist.end < trange.end:
                        chgpenalty = self.query.nodes[tlist.end].word_break_penalty \
                            + self.query.get_in_word_penalty(
                                qmod.TokenRange(pos, tlist.end))
                        for t in tlist.tokens:
                            heapq.heappush(todo, (-tlist.end, tlist.end,
                                                  rank.with_token(t, chgpenalty)))
                        chgpenalty = PENALTY_WORDCHANGE[self.query.nodes[tlist.end].btype]
                        if tlist.ttype == qmod.TOKEN_PARTIAL:
                            penalty = rank.penalty + chgpenalty \
                                + max(t.penalty for t in tlist.tokens)
                            heapq.heappush(todo, (neglen - 1, tlist.end,
                                                  dbf.RankedTokens(penalty, rank.tokens)))
                        else:
                            for t in tlist.tokens:
                                heapq.heappush(todo, (neglen - 1, tlist.end,
                                                      rank.with_token(t, chgpenalty)))
                    elif tlist.end == trange.end:
                        ranks.extend(rank.with_token(t, 0.0) for t in tlist.tokens)

                        if len(ranks) >= 10:
                            # Too many variants, bail out and only add
                            # Worst-case Fallback: sum of penalty of partials
                            default = sum(t.penalty for t in self.query.iter_partials(trange)) + 0.2
                            default += sum(n.word_break_penalty
                                           for n in self.query.nodes[trange.start + 1:trange.end])
                            ranks.append(dbf.RankedTokens(rank.penalty + default, []))
                            # Bail out of outer loop
                            break
                        if tlist.ttype == qmod.TOKEN_PARTIAL:
                            ranks.append(dbf.RankedTokens(rank.penalty
                                                          + max(t.penalty for t in tlist.tokens),
                                                          rank.tokens))
                        else:
                            ranks.extend(rank.with_token(t, 0.0) for t in tlist.tokens)
                        if len(ranks) >= 10:
                            # Too many variants, bail out and only add
                            # Worst-case Fallback: sum of penalty of partials
                            name_partials = self.query.get_partials_list(trange)
                            default = sum(t.penalty for t in name_partials) + 0.2
                            ranks.append(dbf.RankedTokens(rank.penalty + default, []))
                            # Bail out of outer loop
                            todo.clear()
                            break

        ranks.sort(key=lambda r: len(r.tokens))
        default = ranks[0].penalty + 0.3
@@ -412,7 +344,6 @@ class SearchBuilder:
            if not tokens:
                return None
            sdata.set_strings('countries', tokens)
            sdata.penalty += self.query.get_in_word_penalty(assignment.country)
        elif self.details.countries:
            sdata.countries = dbf.WeightedStrings(self.details.countries,
                                                  [0.0] * len(self.details.countries))
@@ -420,24 +351,29 @@ class SearchBuilder:
            sdata.set_strings('housenumbers',
                              self.query.get_tokens(assignment.housenumber,
                                                    qmod.TOKEN_HOUSENUMBER))
            sdata.penalty += self.query.get_in_word_penalty(assignment.housenumber)
        if assignment.postcode:
            sdata.set_strings('postcodes',
                              self.query.get_tokens(assignment.postcode,
                                                    qmod.TOKEN_POSTCODE))
            sdata.penalty += self.query.get_in_word_penalty(assignment.postcode)
        if assignment.qualifier:
            tokens = self.get_qualifier_tokens(assignment.qualifier)
            if not tokens:
                return None
            sdata.set_qualifiers(tokens)
            sdata.penalty += self.query.get_in_word_penalty(assignment.qualifier)
        elif self.details.categories:
            sdata.qualifiers = dbf.WeightedCategories(self.details.categories,
                                                      [0.0] * len(self.details.categories))

        if assignment.address:
            sdata.set_ranking([self.get_addr_ranking(r) for r in assignment.address])
            if not assignment.name and assignment.housenumber:
                # housenumber search: the first item needs to be handled like
                # a name in ranking or penalties are not comparable with
                # normal searches.
                sdata.set_ranking([self.get_name_ranking(assignment.address[0],
                                                         db_field='nameaddress_vector')]
                                  + [self.get_addr_ranking(r) for r in assignment.address[1:]])
            else:
                sdata.set_ranking([self.get_addr_ranking(r) for r in assignment.address])
        else:
            sdata.rankings = []

@@ -483,3 +419,14 @@ class SearchBuilder:
        return dbf.WeightedCategories(list(tokens.keys()), list(tokens.values()))

        return None


PENALTY_WORDCHANGE = {
    qmod.BREAK_START: 0.0,
    qmod.BREAK_END: 0.0,
    qmod.BREAK_PHRASE: 0.0,
    qmod.BREAK_SOFT_PHRASE: 0.0,
    qmod.BREAK_WORD: 0.1,
    qmod.BREAK_PART: 0.2,
    qmod.BREAK_TOKEN: 0.4
}
@@ -7,7 +7,7 @@
"""
Data structures for more complex fields in abstract search descriptions.
"""
from typing import List, Tuple, Iterator, Dict, Type, cast
from typing import List, Tuple, Iterator, Dict, Type
import dataclasses

import sqlalchemy as sa
@@ -18,66 +18,6 @@ from .query import Token
from . import db_search_lookups as lookups


class CountedTokenIDs:
    """ A list of token IDs with their respective counts, sorted
        from least frequent to most frequent.

        If a token count is one, then statistics are likely to be unavailable
        and a relatively high count is assumed instead.
    """

    def __init__(self, tokens: Iterator[Token], count_column: str = 'count'):
        self.tokens = list({(cast(int, getattr(t, count_column)), t.token) for t in tokens})
        self.tokens.sort(key=lambda t: t[0] if t[0] > 1 else 100000)

    def __len__(self) -> int:
        return len(self.tokens)

    def get_num_lookup_tokens(self, limit: int, fac: int) -> int:
        """ Suggest the number of tokens to be used for an index lookup.
            The idea here is to use as few items as possible while making
            sure the number of rows returned stays below 'limit' which
            makes recheck of the returned rows more expensive than adding
            another item for the index lookup. 'fac' is the factor by which
            the limit is increased every time a lookup item is added.

            If the list of tokens doesn't seem suitable at all for index
            lookup, -1 is returned.
        """
        length = len(self.tokens)
        min_count = self.tokens[0][0]
        if min_count == 1:
            return min(length, 3)  # no statistics available, use index

        for i in range(min(length, 3)):
            if min_count < limit:
                return i + 1
            limit = limit * fac

        return -1

    def min_count(self) -> int:
        return self.tokens[0][0]

    def expected_for_all_search(self, fac: int = 5) -> int:
        return int(self.tokens[0][0] / (fac**(len(self.tokens) - 1)))

    def get_tokens(self) -> List[int]:
        return [t[1] for t in self.tokens]

    def get_head_tokens(self, num_tokens: int) -> List[int]:
        return [t[1] for t in self.tokens[:num_tokens]]

    def get_tail_tokens(self, first: int) -> List[int]:
        return [t[1] for t in self.tokens[first:]]

    def split_lookup(self, split: int, column: str) -> 'List[FieldLookup]':
        lookup = [FieldLookup(column, self.get_head_tokens(split), lookups.LookupAll)]
        if split < len(self.tokens):
            lookup.append(FieldLookup(column, self.get_tail_tokens(split), lookups.Restrict))
        return lookup

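The heuristic in `get_num_lookup_tokens` above can be illustrated in isolation: walk the rarest tokens, relaxing the row limit by `fac` for each extra token, and give up when even the rarest token is too frequent. This standalone sketch uses a plain list of counts instead of the class (the function name and interface are illustrative, not the Nominatim API):

```python
from typing import List


def num_lookup_tokens(sorted_counts: List[int], limit: int, fac: int) -> int:
    """How many of the rarest tokens to use for an index lookup.

    sorted_counts must be ascending. Each additional lookup token
    relaxes the acceptable row limit by 'fac'. Returns -1 when even
    the rarest token is too frequent for an index lookup to pay off.
    """
    if not sorted_counts:
        return -1
    min_count = sorted_counts[0]
    if min_count == 1:
        # no statistics available, use the index anyway
        return min(len(sorted_counts), 3)

    for i in range(min(len(sorted_counts), 3)):
        if min_count < limit:
            return i + 1
        limit *= fac

    return -1


print(num_lookup_tokens([500, 40000], 30000, 6))           # 1
print(num_lookup_tokens([50000, 60000, 70000], 30000, 6))  # 2
print(num_lookup_tokens([10_000_000] * 3, 30000, 6))       # -1
```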
@dataclasses.dataclass
class WeightedStrings:
    """ A list of strings together with a penalty.
867 src/nominatim_api/search/db_searches.py Normal file
@@ -0,0 +1,867 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of the actual database accesses for forward search.
"""
from typing import List, Tuple, AsyncIterator, Dict, Any, Callable, cast
import abc

import sqlalchemy as sa

from ..typing import SaFromClause, SaScalarSelect, SaColumn, \
    SaExpression, SaSelect, SaLambdaSelect, SaRow, SaBind
from ..sql.sqlalchemy_types import Geometry, IntArray
from ..connection import SearchConnection
from ..types import SearchDetails, DataLayer, GeometryFormat, Bbox
from .. import results as nres
from .db_search_fields import SearchData, WeightedCategories


def no_index(expr: SaColumn) -> SaColumn:
    """ Wrap the given expression, so that the query planner will
        refrain from using the expression for index lookup.
    """
    return sa.func.coalesce(sa.null(), expr)


def _details_to_bind_params(details: SearchDetails) -> Dict[str, Any]:
    """ Create a dictionary from search parameters that can be used
        as bind parameter for SQL execute.
    """
    return {'limit': details.max_results,
            'min_rank': details.min_rank,
            'max_rank': details.max_rank,
            'viewbox': details.viewbox,
            'viewbox2': details.viewbox_x2,
            'near': details.near,
            'near_radius': details.near_radius,
            'excluded': details.excluded,
            'countries': details.countries}


LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
def filter_by_area(sql: SaSelect, t: SaFromClause,
                   details: SearchDetails, avoid_index: bool = False) -> SaSelect:
    """ Apply SQL statements for filtering by viewbox and near point,
        if applicable.
    """
    if details.near is not None and details.near_radius is not None:
        if details.near_radius < 0.1 and not avoid_index:
            sql = sql.where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM))
        else:
            sql = sql.where(t.c.geometry.ST_Distance(NEAR_PARAM) <= NEAR_RADIUS_PARAM)
    if details.viewbox is not None and details.bounded_viewbox:
        sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM,
                                                use_index=not avoid_index and
                                                details.viewbox.area < 0.2))

    return sql


def _exclude_places(t: SaFromClause) -> Callable[[], SaExpression]:
    return lambda: t.c.place_id.not_in(sa.bindparam('excluded'))


def _select_placex(t: SaFromClause) -> SaSelect:
    return sa.select(t.c.place_id, t.c.osm_type, t.c.osm_id, t.c.name,
                     t.c.class_, t.c.type,
                     t.c.address, t.c.extratags,
                     t.c.housenumber, t.c.postcode, t.c.country_code,
                     t.c.wikipedia,
                     t.c.parent_place_id, t.c.rank_address, t.c.rank_search,
                     t.c.linked_place_id, t.c.admin_level,
                     t.c.centroid,
                     t.c.geometry.ST_Expand(0).label('bbox'))


def _add_geometry_columns(sql: SaLambdaSelect, col: SaColumn, details: SearchDetails) -> SaSelect:
    out = []

    if details.geometry_simplification > 0.0:
        col = sa.func.ST_SimplifyPreserveTopology(col, details.geometry_simplification)

    if details.geometry_output & GeometryFormat.GEOJSON:
        out.append(sa.func.ST_AsGeoJSON(col, 7).label('geometry_geojson'))
    if details.geometry_output & GeometryFormat.TEXT:
        out.append(sa.func.ST_AsText(col).label('geometry_text'))
    if details.geometry_output & GeometryFormat.KML:
        out.append(sa.func.ST_AsKML(col, 7).label('geometry_kml'))
    if details.geometry_output & GeometryFormat.SVG:
        out.append(sa.func.ST_AsSVG(col, 0, 7).label('geometry_svg'))

    return sql.add_columns(*out)


def _make_interpolation_subquery(table: SaFromClause, inner: SaFromClause,
                                 numerals: List[int], details: SearchDetails) -> SaScalarSelect:
    all_ids = sa.func.ArrayAgg(table.c.place_id)
    sql = sa.select(all_ids).where(table.c.parent_place_id == inner.c.place_id)

    if len(numerals) == 1:
        sql = sql.where(sa.between(numerals[0], table.c.startnumber, table.c.endnumber))\
                 .where((numerals[0] - table.c.startnumber) % table.c.step == 0)
    else:
        sql = sql.where(sa.or_(
            *(sa.and_(sa.between(n, table.c.startnumber, table.c.endnumber),
                      (n - table.c.startnumber) % table.c.step == 0)
              for n in numerals)))

    if details.excluded:
        sql = sql.where(_exclude_places(table))

    return sql.scalar_subquery()


def _filter_by_layer(table: SaFromClause, layers: DataLayer) -> SaColumn:
    orexpr: List[SaExpression] = []
    if layers & DataLayer.ADDRESS and layers & DataLayer.POI:
        orexpr.append(no_index(table.c.rank_address).between(1, 30))
    elif layers & DataLayer.ADDRESS:
        orexpr.append(no_index(table.c.rank_address).between(1, 29))
        orexpr.append(sa.func.IsAddressPoint(table))
    elif layers & DataLayer.POI:
        orexpr.append(sa.and_(no_index(table.c.rank_address) == 30,
                              table.c.class_.not_in(('place', 'building'))))

    if layers & DataLayer.MANMADE:
        exclude = []
        if not layers & DataLayer.RAILWAY:
            exclude.append('railway')
        if not layers & DataLayer.NATURAL:
            exclude.extend(('natural', 'water', 'waterway'))
        orexpr.append(sa.and_(table.c.class_.not_in(tuple(exclude)),
                              no_index(table.c.rank_address) == 0))
    else:
        include = []
        if layers & DataLayer.RAILWAY:
            include.append('railway')
        if layers & DataLayer.NATURAL:
            include.extend(('natural', 'water', 'waterway'))
        orexpr.append(sa.and_(table.c.class_.in_(tuple(include)),
                              no_index(table.c.rank_address) == 0))

    if len(orexpr) == 1:
        return orexpr[0]

    return sa.or_(*orexpr)


def _interpolated_position(table: SaFromClause, nr: SaColumn) -> SaColumn:
    pos = sa.cast(nr - table.c.startnumber, sa.Float) / (table.c.endnumber - table.c.startnumber)
    return sa.case(
        (table.c.endnumber == table.c.startnumber, table.c.linegeo.ST_Centroid()),
        else_=table.c.linegeo.ST_LineInterpolatePoint(pos)).label('centroid')
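The SQL in `_interpolated_position` is plain linear interpolation along the housenumber line, with the degenerate case of a single-number line falling back to the centroid. A plain-Python sketch of the fraction computation (the function name is illustrative, not part of the module):

```python
def interpolated_fraction(nr: int, start: int, end: int) -> float:
    """Fraction along the interpolation line where housenumber nr sits.

    Mirrors the SQL CASE above: a degenerate line with a single
    housenumber uses the centroid, i.e. the midpoint here.
    """
    if end == start:
        return 0.5
    return (nr - start) / (end - start)


print(interpolated_fraction(10, 2, 18))  # 0.5
print(interpolated_fraction(4, 2, 18))   # 0.125
```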

async def _get_placex_housenumbers(conn: SearchConnection,
                                   place_ids: List[int],
                                   details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.placex
    sql = _select_placex(t).add_columns(t.c.importance)\
        .where(t.c.place_id.in_(place_ids))

    if details.geometry_output:
        sql = _add_geometry_columns(sql, t.c.geometry, details)

    for row in await conn.execute(sql):
        result = nres.create_from_placex_row(row, nres.SearchResult)
        assert result
        result.bbox = Bbox.from_wkb(row.bbox)
        yield result


def _int_list_to_subquery(inp: List[int]) -> 'sa.Subquery':
    """ Create a subselect that returns the given list of integers
        as rows in the column 'nr'.
    """
    vtab = sa.func.JsonArrayEach(sa.type_coerce(inp, sa.JSON))\
             .table_valued(sa.column('value', type_=sa.JSON))
    return sa.select(sa.cast(sa.cast(vtab.c.value, sa.Text), sa.Integer).label('nr')).subquery()


async def _get_osmline(conn: SearchConnection, place_ids: List[int],
                       numerals: List[int],
                       details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.osmline

    values = _int_list_to_subquery(numerals)
    sql = sa.select(t.c.place_id, t.c.osm_id,
                    t.c.parent_place_id, t.c.address,
                    values.c.nr.label('housenumber'),
                    _interpolated_position(t, values.c.nr),
                    t.c.postcode, t.c.country_code)\
        .where(t.c.place_id.in_(place_ids))\
        .join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))

    if details.geometry_output:
        sub = sql.subquery()
        sql = _add_geometry_columns(sa.select(sub), sub.c.centroid, details)

    for row in await conn.execute(sql):
        result = nres.create_from_osmline_row(row, nres.SearchResult)
        assert result
        yield result


async def _get_tiger(conn: SearchConnection, place_ids: List[int],
                     numerals: List[int], osm_id: int,
                     details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.tiger
    values = _int_list_to_subquery(numerals)
    sql = sa.select(t.c.place_id, t.c.parent_place_id,
                    sa.literal('W').label('osm_type'),
                    sa.literal(osm_id).label('osm_id'),
                    values.c.nr.label('housenumber'),
                    _interpolated_position(t, values.c.nr),
                    t.c.postcode)\
        .where(t.c.place_id.in_(place_ids))\
        .join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))

    if details.geometry_output:
        sub = sql.subquery()
        sql = _add_geometry_columns(sa.select(sub), sub.c.centroid, details)

    for row in await conn.execute(sql):
        result = nres.create_from_tiger_row(row, nres.SearchResult)
        assert result
        yield result
class AbstractSearch(abc.ABC):
    """ Encapsulation of a single lookup in the database.
    """
    SEARCH_PRIO: int = 2

    def __init__(self, penalty: float) -> None:
        self.penalty = penalty

    @abc.abstractmethod
    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """


class NearSearch(AbstractSearch):
    """ Category search of a place type near the result of another search.
    """
    def __init__(self, penalty: float, categories: WeightedCategories,
                 search: AbstractSearch) -> None:
        super().__init__(penalty)
        self.search = search
        self.categories = categories

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        results = nres.SearchResults()
        base = await self.search.lookup(conn, details)

        if not base:
            return results

        base.sort(key=lambda r: (r.accuracy, r.rank_search))
        max_accuracy = base[0].accuracy + 0.5
        if base[0].rank_address == 0:
            min_rank = 0
            max_rank = 0
        elif base[0].rank_address < 26:
            min_rank = 1
            max_rank = min(25, base[0].rank_address + 4)
        else:
            min_rank = 26
            max_rank = 30
        base = nres.SearchResults(r for r in base
                                  if (r.source_table == nres.SourceTable.PLACEX
                                      and r.accuracy <= max_accuracy
                                      and r.bbox and r.bbox.area < 20
                                      and r.rank_address >= min_rank
                                      and r.rank_address <= max_rank))

        if base:
            baseids = [b.place_id for b in base[:5] if b.place_id]

            for category, penalty in self.categories:
                await self.lookup_category(results, conn, baseids, category, penalty, details)
                if len(results) >= details.max_results:
                    break

        return results
|
||||
async def lookup_category(self, results: nres.SearchResults,
|
||||
conn: SearchConnection, ids: List[int],
|
||||
category: Tuple[str, str], penalty: float,
|
||||
details: SearchDetails) -> None:
|
||||
""" Find places of the given category near the list of
|
||||
place ids and add the results to 'results'.
|
||||
"""
|
||||
table = await conn.get_class_table(*category)
|
||||
|
||||
tgeom = conn.t.placex.alias('pgeom')
|
||||
|
||||
if table is None:
|
||||
# No classtype table available, do a simplified lookup in placex.
|
||||
table = conn.t.placex
|
||||
sql = sa.select(table.c.place_id,
|
||||
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
|
||||
.label('dist'))\
|
||||
.join(tgeom, table.c.geometry.intersects(tgeom.c.centroid.ST_Expand(0.01)))\
|
||||
.where(table.c.class_ == category[0])\
|
||||
.where(table.c.type == category[1])
|
||||
else:
|
||||
# Use classtype table. We can afford to use a larger
|
||||
# radius for the lookup.
|
||||
sql = sa.select(table.c.place_id,
|
||||
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
|
||||
.label('dist'))\
|
||||
.join(tgeom,
|
||||
table.c.centroid.ST_CoveredBy(
|
||||
sa.case((sa.and_(tgeom.c.rank_address > 9,
|
||||
tgeom.c.geometry.is_area()),
|
||||
tgeom.c.geometry),
|
||||
else_=tgeom.c.centroid.ST_Expand(0.05))))
|
||||
|
||||
inner = sql.where(tgeom.c.place_id.in_(ids))\
|
||||
.group_by(table.c.place_id).subquery()
|
||||
|
||||
t = conn.t.placex
|
||||
sql = _select_placex(t).add_columns((-inner.c.dist).label('importance'))\
|
||||
.join(inner, inner.c.place_id == t.c.place_id)\
|
||||
.order_by(inner.c.dist)
|
||||
|
||||
sql = sql.where(no_index(t.c.rank_address).between(MIN_RANK_PARAM, MAX_RANK_PARAM))
|
||||
if details.countries:
|
||||
sql = sql.where(t.c.country_code.in_(COUNTRIES_PARAM))
|
||||
if details.excluded:
|
||||
sql = sql.where(_exclude_places(t))
|
||||
if details.layers is not None:
|
||||
sql = sql.where(_filter_by_layer(t, details.layers))
|
||||
|
||||
sql = sql.limit(LIMIT_PARAM)
|
||||
for row in await conn.execute(sql, _details_to_bind_params(details)):
|
||||
result = nres.create_from_placex_row(row, nres.SearchResult)
|
||||
assert result
|
||||
result.accuracy = self.penalty + penalty
|
||||
result.bbox = Bbox.from_wkb(row.bbox)
|
||||
results.append(result)
|
||||
|
||||
|
||||
class PoiSearch(AbstractSearch):
    """ Category search in a geographic area.
    """
    def __init__(self, sdata: SearchData) -> None:
        super().__init__(sdata.penalty)
        self.qualifiers = sdata.qualifiers
        self.countries = sdata.countries

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        bind_params = _details_to_bind_params(details)
        t = conn.t.placex

        rows: List[SaRow] = []

        if details.near and details.near_radius is not None and details.near_radius < 0.2:
            # simply search in placex table
            def _base_query() -> SaSelect:
                return _select_placex(t) \
                    .add_columns((-t.c.centroid.ST_Distance(NEAR_PARAM))
                                 .label('importance'))\
                    .where(t.c.linked_place_id == None) \
                    .where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM)) \
                    .order_by(t.c.centroid.ST_Distance(NEAR_PARAM)) \
                    .limit(LIMIT_PARAM)

            classtype = self.qualifiers.values
            if len(classtype) == 1:
                cclass, ctype = classtype[0]
                sql: SaLambdaSelect = sa.lambda_stmt(
                    lambda: _base_query().where(t.c.class_ == cclass)
                                         .where(t.c.type == ctype))
            else:
                sql = _base_query().where(sa.or_(*(sa.and_(t.c.class_ == cls, t.c.type == typ)
                                                   for cls, typ in classtype)))

            if self.countries:
                sql = sql.where(t.c.country_code.in_(self.countries.values))

            if details.viewbox is not None and details.bounded_viewbox:
                sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM))

            rows.extend(await conn.execute(sql, bind_params))
        else:
            # use the class type tables
            for category in self.qualifiers.values:
                table = await conn.get_class_table(*category)
                if table is not None:
                    sql = _select_placex(t)\
                        .add_columns(t.c.importance)\
                        .join(table, t.c.place_id == table.c.place_id)\
                        .where(t.c.class_ == category[0])\
                        .where(t.c.type == category[1])

                    if details.viewbox is not None and details.bounded_viewbox:
                        sql = sql.where(table.c.centroid.intersects(VIEWBOX_PARAM))

                    if details.near and details.near_radius is not None:
                        sql = sql.order_by(table.c.centroid.ST_Distance(NEAR_PARAM))\
                                 .where(table.c.centroid.within_distance(NEAR_PARAM,
                                                                         NEAR_RADIUS_PARAM))

                    if self.countries:
                        sql = sql.where(t.c.country_code.in_(self.countries.values))

                    sql = sql.limit(LIMIT_PARAM)
                    rows.extend(await conn.execute(sql, bind_params))

        results = nres.SearchResults()
        for row in rows:
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.accuracy = self.penalty + self.qualifiers.get_penalty((row.class_, row.type))
            result.bbox = Bbox.from_wkb(row.bbox)
            results.append(result)

        return results


class CountrySearch(AbstractSearch):
    """ Search for a country name or country code.
    """
    SEARCH_PRIO = 0

    def __init__(self, sdata: SearchData) -> None:
        super().__init__(sdata.penalty)
        self.countries = sdata.countries

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.placex

        ccodes = self.countries.values
        sql = _select_placex(t)\
            .add_columns(t.c.importance)\
            .where(t.c.country_code.in_(ccodes))\
            .where(t.c.rank_address == 4)

        if details.geometry_output:
            sql = _add_geometry_columns(sql, t.c.geometry, details)

        if details.excluded:
            sql = sql.where(_exclude_places(t))

        sql = filter_by_area(sql, t, details)

        results = nres.SearchResults()
        for row in await conn.execute(sql, _details_to_bind_params(details)):
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
            result.bbox = Bbox.from_wkb(row.bbox)
            results.append(result)

        if not results:
            results = await self.lookup_in_country_table(conn, details)

        if results:
            details.min_rank = min(5, details.max_rank)
            details.max_rank = min(25, details.max_rank)

        return results

    async def lookup_in_country_table(self, conn: SearchConnection,
                                      details: SearchDetails) -> nres.SearchResults:
        """ Look up the country in the fallback country tables.
        """
        # Avoid the fallback search when this is a more search. Country results
        # usually are in the first batch of results and it is not possible
        # to exclude these fallbacks.
        if details.excluded:
            return nres.SearchResults()

        t = conn.t.country_name
        tgrid = conn.t.country_grid

        sql = sa.select(tgrid.c.country_code,
                        tgrid.c.geometry.ST_Centroid().ST_Collect().ST_Centroid()
                             .label('centroid'),
                        tgrid.c.geometry.ST_Collect().ST_Expand(0).label('bbox'))\
            .where(tgrid.c.country_code.in_(self.countries.values))\
            .group_by(tgrid.c.country_code)

        sql = filter_by_area(sql, tgrid, details, avoid_index=True)

        sub = sql.subquery('grid')

        sql = sa.select(t.c.country_code,
                        t.c.name.merge(t.c.derived_name).label('name'),
                        sub.c.centroid, sub.c.bbox)\
            .join(sub, t.c.country_code == sub.c.country_code)

        if details.geometry_output:
            sql = _add_geometry_columns(sql, sub.c.centroid, details)

        results = nres.SearchResults()
        for row in await conn.execute(sql, _details_to_bind_params(details)):
            result = nres.create_from_country_row(row, nres.SearchResult)
            assert result
            result.bbox = Bbox.from_wkb(row.bbox)
            result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
            results.append(result)

        return results


class PostcodeSearch(AbstractSearch):
    """ Search for a postcode.
    """
    def __init__(self, extra_penalty: float, sdata: SearchData) -> None:
        super().__init__(sdata.penalty + extra_penalty)
        self.countries = sdata.countries
        self.postcodes = sdata.postcodes
        self.lookups = sdata.lookups
        self.rankings = sdata.rankings

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.postcode
        pcs = self.postcodes.values

        sql = sa.select(t.c.place_id, t.c.parent_place_id,
                        t.c.rank_search, t.c.rank_address,
                        t.c.postcode, t.c.country_code,
                        t.c.geometry.label('centroid'))\
            .where(t.c.postcode.in_(pcs))

        if details.geometry_output:
            sql = _add_geometry_columns(sql, t.c.geometry, details)

        penalty: SaExpression = sa.literal(self.penalty)

        if details.viewbox is not None and not details.bounded_viewbox:
            penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM), 0.0),
                               (t.c.geometry.intersects(VIEWBOX2_PARAM), 0.5),
                               else_=1.0)

        if details.near is not None:
            sql = sql.order_by(t.c.geometry.ST_Distance(NEAR_PARAM))

        sql = filter_by_area(sql, t, details)

        if self.countries:
            sql = sql.where(t.c.country_code.in_(self.countries.values))

        if details.excluded:
            sql = sql.where(_exclude_places(t))

        if self.lookups:
            assert len(self.lookups) == 1
            tsearch = conn.t.search_name
            sql = sql.where(tsearch.c.place_id == t.c.parent_place_id)\
                     .where((tsearch.c.name_vector + tsearch.c.nameaddress_vector)
                            .contains(sa.type_coerce(self.lookups[0].tokens,
                                                     IntArray)))
            # Do NOT add rerank penalties based on the address terms.
            # The standard rerank penalty only checks the address vector
            # while terms may appear in name and address vector. This would
            # lead to overly high penalties.
            # We assume that a postcode is precise enough to not require
            # additional full name matches.

        penalty += sa.case(*((t.c.postcode == v, p) for v, p in self.postcodes),
                           else_=1.0)

        sql = sql.add_columns(penalty.label('accuracy'))
        sql = sql.order_by('accuracy').limit(LIMIT_PARAM)

        results = nres.SearchResults()
        for row in await conn.execute(sql, _details_to_bind_params(details)):
            p = conn.t.placex
            placex_sql = _select_placex(p)\
                .add_columns(p.c.importance)\
                .where(sa.text("""class = 'boundary'
                                  AND type = 'postal_code'
                                  AND osm_type = 'R'"""))\
                .where(p.c.country_code == row.country_code)\
                .where(p.c.postcode == row.postcode)\
                .limit(1)

            if details.geometry_output:
                placex_sql = _add_geometry_columns(placex_sql, p.c.geometry, details)

            for prow in await conn.execute(placex_sql, _details_to_bind_params(details)):
                result = nres.create_from_placex_row(prow, nres.SearchResult)
                if result is not None:
                    result.bbox = Bbox.from_wkb(prow.bbox)
                break
            else:
                result = nres.create_from_postcode_row(row, nres.SearchResult)

            assert result
            if result.place_id not in details.excluded:
                result.accuracy = row.accuracy
                results.append(result)

        return results


class PlaceSearch(AbstractSearch):
    """ Generic search for an address or named place.
    """
    SEARCH_PRIO = 1

    def __init__(self, extra_penalty: float, sdata: SearchData, expected_count: int) -> None:
        super().__init__(sdata.penalty + extra_penalty)
        self.countries = sdata.countries
        self.postcodes = sdata.postcodes
        self.housenumbers = sdata.housenumbers
        self.qualifiers = sdata.qualifiers
        self.lookups = sdata.lookups
        self.rankings = sdata.rankings
        self.expected_count = expected_count

    def _inner_search_name_cte(self, conn: SearchConnection,
                               details: SearchDetails) -> 'sa.CTE':
        """ Create a subquery that preselects the rows in the search_name
            table.
        """
        t = conn.t.search_name

        penalty: SaExpression = sa.literal(self.penalty)
        for ranking in self.rankings:
            penalty += ranking.sql_penalty(t)

        sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
                        t.c.country_code, t.c.centroid,
                        t.c.name_vector, t.c.nameaddress_vector,
                        sa.case((t.c.importance > 0, t.c.importance),
                                else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
                          .label('importance'),
                        penalty.label('penalty'))

        for lookup in self.lookups:
            sql = sql.where(lookup.sql_condition(t))

        if self.countries:
            sql = sql.where(t.c.country_code.in_(self.countries.values))

        if self.postcodes:
            # if a postcode is given, don't search for state or country level objects
            sql = sql.where(t.c.address_rank > 9)
            if self.expected_count > 10000:
                # Many results expected. Restrict by postcode.
                tpc = conn.t.postcode
                sql = sql.where(sa.select(tpc.c.postcode)
                                  .where(tpc.c.postcode.in_(self.postcodes.values))
                                  .where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
                                  .exists())

        if details.viewbox is not None:
            if details.bounded_viewbox:
                sql = sql.where(t.c.centroid
                                 .intersects(VIEWBOX_PARAM,
                                             use_index=details.viewbox.area < 0.2))
            elif not self.postcodes and not self.housenumbers and self.expected_count >= 10000:
                sql = sql.where(t.c.centroid
                                 .intersects(VIEWBOX2_PARAM,
                                             use_index=details.viewbox.area < 0.5))

        if details.near is not None and details.near_radius is not None:
            if details.near_radius < 0.1:
                sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
                                                             NEAR_RADIUS_PARAM))
            else:
                sql = sql.where(t.c.centroid
                                 .ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)

        if self.housenumbers:
            sql = sql.where(t.c.address_rank.between(16, 30))
        else:
            if details.excluded:
                sql = sql.where(_exclude_places(t))
            if details.min_rank > 0:
                sql = sql.where(sa.or_(t.c.address_rank >= MIN_RANK_PARAM,
                                       t.c.search_rank >= MIN_RANK_PARAM))
            if details.max_rank < 30:
                sql = sql.where(sa.or_(t.c.address_rank <= MAX_RANK_PARAM,
                                       t.c.search_rank <= MAX_RANK_PARAM))

        inner = sql.limit(10000).order_by(sa.desc(sa.text('importance'))).subquery()

        sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
                        inner.c.country_code, inner.c.centroid, inner.c.importance,
                        inner.c.penalty)

        # If the query is not an address search or has a geographic preference,
        # preselect most important items to restrict the number of places
        # that need to be looked up in placex.
        if not self.housenumbers\
           and (details.viewbox is None or details.bounded_viewbox)\
           and (details.near is None or details.near_radius is not None)\
           and not self.qualifiers:
            sql = sql.add_columns(sa.func.first_value(inner.c.penalty - inner.c.importance)
                                    .over(order_by=inner.c.penalty - inner.c.importance)
                                    .label('min_penalty'))

            inner = sql.subquery()

            sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
                            inner.c.country_code, inner.c.centroid, inner.c.importance,
                            inner.c.penalty)\
                .where(inner.c.penalty - inner.c.importance < inner.c.min_penalty + 0.5)

        return sql.cte('searches')

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.placex
        tsearch = self._inner_search_name_cte(conn, details)

        sql = _select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)

        if details.geometry_output:
            sql = _add_geometry_columns(sql, t.c.geometry, details)

        penalty: SaExpression = tsearch.c.penalty

        if self.postcodes:
            tpc = conn.t.postcode
            pcs = self.postcodes.values

            pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)))\
                .where(tpc.c.postcode.in_(pcs))\
                .scalar_subquery()
            penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
                               else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))

        if details.viewbox is not None and not details.bounded_viewbox:
            penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
                               (t.c.geometry.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
                               else_=1.0)

        if details.near is not None:
            sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
                                  .label('importance'))
            sql = sql.order_by(sa.desc(sa.text('importance')))
        else:
            sql = sql.order_by(penalty - tsearch.c.importance)
            sql = sql.add_columns(tsearch.c.importance)

        sql = sql.add_columns(penalty.label('accuracy'))\
                 .order_by(sa.text('accuracy'))

        if self.housenumbers:
            hnr_list = '|'.join(self.housenumbers.values)
            inner = sql.where(sa.or_(tsearch.c.address_rank < 30,
                                     sa.func.RegexpWord(hnr_list, t.c.housenumber)))\
                       .subquery()

            # Housenumbers from placex
            thnr = conn.t.placex.alias('hnr')
            pid_list = sa.func.ArrayAgg(thnr.c.place_id)
            place_sql = sa.select(pid_list)\
                .where(thnr.c.parent_place_id == inner.c.place_id)\
                .where(sa.func.RegexpWord(hnr_list, thnr.c.housenumber))\
                .where(thnr.c.linked_place_id == None)\
                .where(thnr.c.indexed_status == 0)

            if details.excluded:
                place_sql = place_sql.where(thnr.c.place_id.not_in(sa.bindparam('excluded')))
            if self.qualifiers:
                place_sql = place_sql.where(self.qualifiers.sql_restrict(thnr))

            numerals = [int(n) for n in self.housenumbers.values
                        if n.isdigit() and len(n) < 8]
            interpol_sql: SaColumn
            tiger_sql: SaColumn
            if numerals and \
               (not self.qualifiers or ('place', 'house') in self.qualifiers.values):
                # Housenumbers from interpolations
                interpol_sql = _make_interpolation_subquery(conn.t.osmline, inner,
                                                            numerals, details)
                # Housenumbers from Tiger
                tiger_sql = sa.case((inner.c.country_code == 'us',
                                     _make_interpolation_subquery(conn.t.tiger, inner,
                                                                  numerals, details)
                                     ), else_=None)
            else:
                interpol_sql = sa.null()
                tiger_sql = sa.null()

            unsort = sa.select(inner, place_sql.scalar_subquery().label('placex_hnr'),
                               interpol_sql.label('interpol_hnr'),
                               tiger_sql.label('tiger_hnr')).subquery('unsort')
            sql = sa.select(unsort)\
                .order_by(sa.case((unsort.c.placex_hnr != None, 1),
                                  (unsort.c.interpol_hnr != None, 2),
                                  (unsort.c.tiger_hnr != None, 3),
                                  else_=4),
                          unsort.c.accuracy)
        else:
            sql = sql.where(t.c.linked_place_id == None)\
                     .where(t.c.indexed_status == 0)
            if self.qualifiers:
                sql = sql.where(self.qualifiers.sql_restrict(t))
            if details.layers is not None:
                sql = sql.where(_filter_by_layer(t, details.layers))

        sql = sql.limit(LIMIT_PARAM)

        results = nres.SearchResults()
        for row in await conn.execute(sql, _details_to_bind_params(details)):
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.bbox = Bbox.from_wkb(row.bbox)
            result.accuracy = row.accuracy
            if self.housenumbers and row.rank_address < 30:
                if row.placex_hnr:
                    subs = _get_placex_housenumbers(conn, row.placex_hnr, details)
                elif row.interpol_hnr:
                    subs = _get_osmline(conn, row.interpol_hnr, numerals, details)
                elif row.tiger_hnr:
                    subs = _get_tiger(conn, row.tiger_hnr, numerals, row.osm_id, details)
                else:
                    subs = None

                if subs is not None:
                    async for sub in subs:
                        assert sub.housenumber
                        sub.accuracy = result.accuracy
                        if not any(nr in self.housenumbers.values
                                   for nr in sub.housenumber.split(';')):
                            sub.accuracy += 0.6
                        results.append(sub)

                # Only add the street as a result, if it meets all other
                # filter conditions.
                if (not details.excluded or result.place_id not in details.excluded)\
                   and (not self.qualifiers or result.category in self.qualifiers.values)\
                   and result.rank_address >= details.min_rank:
                    result.accuracy += 1.0  # penalty for missing housenumber
                    results.append(result)
            else:
                results.append(result)

        return results
@@ -1,17 +0,0 @@
|
||||
# SPDX-License-Identifier: GPL-3.0-or-later
|
||||
#
|
||||
# This file is part of Nominatim. (https://nominatim.org)
|
||||
#
|
||||
# Copyright (C) 2025 by the Nominatim developer community.
|
||||
# For a full list of authors see the git log.
|
||||
"""
|
||||
Module implementing the actual database accesses for forward search.
|
||||
"""
|
||||
|
||||
from .base import AbstractSearch as AbstractSearch
|
||||
from .near_search import NearSearch as NearSearch
|
||||
from .poi_search import PoiSearch as PoiSearch
|
||||
from .country_search import CountrySearch as CountrySearch
|
||||
from .postcode_search import PostcodeSearch as PostcodeSearch
|
||||
from .place_search import PlaceSearch as PlaceSearch
|
||||
from .address_search import AddressSearch as AddressSearch
|
||||
@@ -1,360 +0,0 @@
|
||||
# SPDX-License-Identifier: GPL-3.0-or-later
|
||||
#
|
||||
# This file is part of Nominatim. (https://nominatim.org)
|
||||
#
|
||||
# Copyright (C) 2025 by the Nominatim developer community.
|
||||
# For a full list of authors see the git log.
|
||||
"""
|
||||
Implementation of search for an address (search with housenumber).
|
||||
"""
|
||||
from typing import cast, List, AsyncIterator
|
||||
|
||||
import sqlalchemy as sa
|
||||
|
||||
from . import base
|
||||
from ...typing import SaBind, SaExpression, SaColumn, SaFromClause, SaScalarSelect
|
||||
from ...types import SearchDetails, Bbox
|
||||
from ...sql.sqlalchemy_types import Geometry
|
||||
from ...connection import SearchConnection
|
||||
from ... import results as nres
|
||||
from ..db_search_fields import SearchData
|
||||
|
||||
|
||||
LIMIT_PARAM: SaBind = sa.bindparam('limit')
|
||||
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
|
||||
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
|
||||
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
|
||||
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
|
||||
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
|
||||
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
|
||||
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
|
||||
|
||||
|
||||
def _int_list_to_subquery(inp: List[int]) -> 'sa.Subquery':
    """ Create a subselect that returns the given list of integers
        as rows in the column 'nr'.
    """
    vtab = sa.func.JsonArrayEach(sa.type_coerce(inp, sa.JSON))\
             .table_valued(sa.column('value', type_=sa.JSON))
    return sa.select(sa.cast(sa.cast(vtab.c.value, sa.Text), sa.Integer).label('nr')).subquery()


def _interpolated_position(table: SaFromClause, nr: SaColumn) -> SaColumn:
    pos = sa.cast(nr - table.c.startnumber, sa.Float) / (table.c.endnumber - table.c.startnumber)
    return sa.case(
        (table.c.endnumber == table.c.startnumber, table.c.linegeo.ST_Centroid()),
        else_=table.c.linegeo.ST_LineInterpolatePoint(pos)).label('centroid')


def _make_interpolation_subquery(table: SaFromClause, inner: SaFromClause,
                                 numerals: List[int], details: SearchDetails) -> SaScalarSelect:
    all_ids = sa.func.ArrayAgg(table.c.place_id)
    sql = sa.select(all_ids).where(table.c.parent_place_id == inner.c.place_id)

    if len(numerals) == 1:
        sql = sql.where(sa.between(numerals[0], table.c.startnumber, table.c.endnumber))\
                 .where((numerals[0] - table.c.startnumber) % table.c.step == 0)
    else:
        sql = sql.where(sa.or_(
            *(sa.and_(sa.between(n, table.c.startnumber, table.c.endnumber),
                      (n - table.c.startnumber) % table.c.step == 0)
              for n in numerals)))

    if details.excluded:
        sql = sql.where(base.exclude_places(table))

    return sql.scalar_subquery()


async def _get_placex_housenumbers(conn: SearchConnection,
                                   place_ids: List[int],
                                   details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.placex
    sql = base.select_placex(t).add_columns(t.c.importance)\
              .where(t.c.place_id.in_(place_ids))

    if details.geometry_output:
        sql = base.add_geometry_columns(sql, t.c.geometry, details)

    for row in await conn.execute(sql):
        result = nres.create_from_placex_row(row, nres.SearchResult)
        assert result
        result.bbox = Bbox.from_wkb(row.bbox)
        yield result


async def _get_osmline(conn: SearchConnection, place_ids: List[int],
                       numerals: List[int],
                       details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.osmline

    values = _int_list_to_subquery(numerals)
    sql = sa.select(t.c.place_id, t.c.osm_id,
                    t.c.parent_place_id, t.c.address,
                    values.c.nr.label('housenumber'),
                    _interpolated_position(t, values.c.nr),
                    t.c.postcode, t.c.country_code)\
        .where(t.c.place_id.in_(place_ids))\
        .join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))

    if details.geometry_output:
        sub = sql.subquery()
        sql = base.add_geometry_columns(sa.select(sub), sub.c.centroid, details)

    for row in await conn.execute(sql):
        result = nres.create_from_osmline_row(row, nres.SearchResult)
        assert result
        yield result


async def _get_tiger(conn: SearchConnection, place_ids: List[int],
                     numerals: List[int], osm_id: int,
                     details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
    t = conn.t.tiger
    values = _int_list_to_subquery(numerals)
    sql = sa.select(t.c.place_id, t.c.parent_place_id,
                    sa.literal('W').label('osm_type'),
                    sa.literal(osm_id).label('osm_id'),
                    values.c.nr.label('housenumber'),
                    _interpolated_position(t, values.c.nr),
                    t.c.postcode)\
        .where(t.c.place_id.in_(place_ids))\
        .join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))

    if details.geometry_output:
        sub = sql.subquery()
        sql = base.add_geometry_columns(sa.select(sub), sub.c.centroid, details)

    for row in await conn.execute(sql):
        result = nres.create_from_tiger_row(row, nres.SearchResult)
        assert result
        yield result


class AddressSearch(base.AbstractSearch):
    """ Generic search for an address or named place.
    """
    SEARCH_PRIO = 1

    def __init__(self, extra_penalty: float, sdata: SearchData,
                 expected_count: int, has_address_terms: bool) -> None:
        assert sdata.housenumbers
        super().__init__(sdata.penalty + extra_penalty)
        self.countries = sdata.countries
        self.postcodes = sdata.postcodes
        self.housenumbers = sdata.housenumbers
        self.qualifiers = sdata.qualifiers
        self.lookups = sdata.lookups
        self.rankings = sdata.rankings
        self.expected_count = expected_count
        self.has_address_terms = has_address_terms

    def _inner_search_name_cte(self, conn: SearchConnection,
                               details: SearchDetails) -> 'sa.CTE':
        """ Create a subquery that preselects the rows in the search_name
            table.
        """
        t = conn.t.search_name

        penalty: SaExpression = sa.literal(self.penalty)
        for ranking in self.rankings:
            penalty += ranking.sql_penalty(t)

        sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
                        t.c.country_code, t.c.centroid,
                        t.c.name_vector, t.c.nameaddress_vector,
                        sa.case((t.c.importance > 0, t.c.importance),
                                else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
                          .label('importance'),
                        penalty.label('penalty'))

        for lookup in self.lookups:
            sql = sql.where(lookup.sql_condition(t))

        if self.countries:
            sql = sql.where(t.c.country_code.in_(self.countries.values))

        if self.postcodes:
            if self.expected_count > 10000:
                tpc = conn.t.postcode
                sql = sql.where(sa.select(tpc.c.postcode)
                                  .where(tpc.c.postcode.in_(self.postcodes.values))
                                  .where(tpc.c.country_code == t.c.country_code)
                                  .where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
                                  .exists())

        if details.viewbox is not None:
            if details.bounded_viewbox:
                sql = sql.where(t.c.centroid
                                 .intersects(VIEWBOX_PARAM,
                                             use_index=details.viewbox.area < 0.2))

        if details.near is not None and details.near_radius is not None:
            if details.near_radius < 0.1:
                sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
                                                             NEAR_RADIUS_PARAM))
            else:
                sql = sql.where(t.c.centroid
                                 .ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)

        if self.has_address_terms:
            sql = sql.where(t.c.address_rank.between(16, 30))
        else:
            # If no further address terms are given, then the base street must
            # be in the name. No search for named POIs with the given house number.
            sql = sql.where(t.c.address_rank.between(16, 27))

        inner = sql.limit(10000).order_by(sa.desc(sa.text('importance'))).subquery()

        sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
                        inner.c.country_code, inner.c.centroid, inner.c.importance,
                        inner.c.penalty)

        return sql.cte('searches')

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.placex
        tsearch = self._inner_search_name_cte(conn, details)

        sql = base.select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)

        if details.geometry_output:
            sql = base.add_geometry_columns(sql, t.c.geometry, details)

        penalty: SaExpression = tsearch.c.penalty

        if self.postcodes:
            tpc = conn.t.postcode
            pcs = self.postcodes.values

            pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)
                                            * (tpc.c.rank_search - 19)))\
                .where(tpc.c.postcode.in_(pcs))\
                .where(tpc.c.country_code == t.c.country_code)\
                .scalar_subquery()
            penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
                               else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))

        if details.viewbox is not None and not details.bounded_viewbox:
            penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
                               (t.c.geometry.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
                               else_=1.0)

        if details.near is not None:
            sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
                                  .label('importance'))
            sql = sql.order_by(sa.desc(sa.text('importance')))
        else:
            sql = sql.order_by(penalty - tsearch.c.importance)
            sql = sql.add_columns(tsearch.c.importance)

        sql = sql.add_columns(penalty.label('accuracy'))\
                 .order_by(sa.text('accuracy'))

        hnr_list = '|'.join(self.housenumbers.values)

        if self.has_address_terms:
            sql = sql.where(sa.or_(tsearch.c.address_rank < 30,
                                   sa.func.RegexpWord(hnr_list, t.c.housenumber)))

        inner = sql.subquery()

        # Housenumbers from placex
        thnr = conn.t.placex.alias('hnr')
        pid_list = sa.func.ArrayAgg(thnr.c.place_id)
        place_sql = sa.select(pid_list)\
            .where(thnr.c.parent_place_id == inner.c.place_id)\
            .where(sa.func.RegexpWord(hnr_list, thnr.c.housenumber))\
            .where(thnr.c.linked_place_id == None)\
            .where(thnr.c.indexed_status == 0)

        if details.excluded:
            place_sql = place_sql.where(thnr.c.place_id.not_in(sa.bindparam('excluded')))
        if self.qualifiers:
            place_sql = place_sql.where(self.qualifiers.sql_restrict(thnr))
|
||||
|
||||
numerals = [int(n) for n in self.housenumbers.values
|
||||
if n.isdigit() and len(n) < 8]
|
||||
interpol_sql: SaColumn
|
||||
tiger_sql: SaColumn
|
||||
if numerals and \
|
||||
(not self.qualifiers or ('place', 'house') in self.qualifiers.values):
|
||||
# Housenumbers from interpolations
|
||||
interpol_sql = _make_interpolation_subquery(conn.t.osmline, inner,
|
||||
numerals, details)
|
||||
# Housenumbers from Tiger
|
||||
tiger_sql = sa.case((inner.c.country_code == 'us',
|
||||
_make_interpolation_subquery(conn.t.tiger, inner,
|
||||
numerals, details)
|
||||
), else_=None)
|
||||
else:
|
||||
interpol_sql = sa.null()
|
||||
tiger_sql = sa.null()
|
||||
|
||||
unsort = sa.select(inner, place_sql.scalar_subquery().label('placex_hnr'),
|
||||
interpol_sql.label('interpol_hnr'),
|
||||
tiger_sql.label('tiger_hnr')).subquery('unsort')
|
||||
sql = sa.select(unsort)\
|
||||
.order_by(unsort.c.accuracy +
|
||||
sa.case((unsort.c.placex_hnr != None, 0),
|
||||
(unsort.c.interpol_hnr != None, 0),
|
||||
(unsort.c.tiger_hnr != None, 0),
|
||||
else_=1),
|
||||
sa.case((unsort.c.placex_hnr != None, 1),
|
||||
(unsort.c.interpol_hnr != None, 2),
|
||||
(unsort.c.tiger_hnr != None, 3),
|
||||
else_=4))
|
||||
|
||||
sql = sql.limit(LIMIT_PARAM)
|
||||
|
||||
bind_params = {
|
||||
'limit': details.max_results,
|
||||
'min_rank': details.min_rank,
|
||||
'max_rank': details.max_rank,
|
||||
'viewbox': details.viewbox,
|
||||
'viewbox2': details.viewbox_x2,
|
||||
'near': details.near,
|
||||
'near_radius': details.near_radius,
|
||||
'excluded': details.excluded,
|
||||
'countries': details.countries
|
||||
}
|
||||
|
||||
results = nres.SearchResults()
|
||||
for row in await conn.execute(sql, bind_params):
|
||||
result = nres.create_from_placex_row(row, nres.SearchResult)
|
||||
assert result
|
||||
result.bbox = Bbox.from_wkb(row.bbox)
|
||||
result.accuracy = row.accuracy
|
||||
if row.rank_address < 30:
|
||||
if row.placex_hnr:
|
||||
subs = _get_placex_housenumbers(conn, row.placex_hnr, details)
|
||||
elif row.interpol_hnr:
|
||||
subs = _get_osmline(conn, row.interpol_hnr, numerals, details)
|
||||
elif row.tiger_hnr:
|
||||
subs = _get_tiger(conn, row.tiger_hnr, numerals, row.osm_id, details)
|
||||
else:
|
||||
subs = None
|
||||
|
||||
if subs is not None:
|
||||
async for sub in subs:
|
||||
assert sub.housenumber
|
||||
sub.accuracy = result.accuracy
|
||||
if not any(nr in self.housenumbers.values
|
||||
for nr in sub.housenumber.split(';')):
|
||||
sub.accuracy += 0.6
|
||||
results.append(sub)
|
||||
|
||||
# Only add the street as a result, if it meets all other
|
||||
# filter conditions.
|
||||
if (not details.excluded or result.place_id not in details.excluded)\
|
||||
and (not self.qualifiers or result.category in self.qualifiers.values)\
|
||||
and result.rank_address >= details.min_rank:
|
||||
result.accuracy += 1.0 # penalty for missing housenumber
|
||||
results.append(result)
|
||||
else:
|
||||
results.append(result)
|
||||
|
||||
return results
|
||||
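The final `ORDER BY` above sorts candidates by accuracy plus a penalty of 1 when no housenumber source matched, and then prefers placex hits over interpolation hits over Tiger hits. A minimal pure-Python sketch of that two-key ordering (the tuple layout and `sort_key` helper are hypothetical stand-ins for the SQL `sa.case` expressions, not part of the real code):

```python
# Sketch of the two-key ordering in the final SELECT above.
# Each candidate carries its accuracy and the optional housenumber
# hits from the three sources; rows without any hit get a +1 penalty
# and sources rank placex (1) before interpolation (2) before Tiger (3).

def sort_key(row):
    accuracy, placex_hnr, interpol_hnr, tiger_hnr = row
    has_hnr_penalty = 0 if (placex_hnr or interpol_hnr or tiger_hnr) else 1
    if placex_hnr:
        source = 1
    elif interpol_hnr:
        source = 2
    elif tiger_hnr:
        source = 3
    else:
        source = 4
    return (accuracy + has_hnr_penalty, source)

rows = [
    (0.3, None, None, [11]),   # Tiger hit
    (0.3, [7], None, None),    # placex hit with same accuracy wins
    (0.1, None, None, None),   # best accuracy but no housenumber match
]
ordered = sorted(rows, key=sort_key)
```

Note how the missing-housenumber penalty pushes the otherwise best row to the end, mirroring the `else_=1` branch of the first `sa.case`.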
@@ -1,144 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Interface for classes implementing a database search.
"""
from typing import Callable, List
import abc

import sqlalchemy as sa

from ...typing import SaFromClause, SaSelect, SaColumn, SaExpression, SaLambdaSelect
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ...types import SearchDetails, DataLayer, GeometryFormat
from ...results import SearchResults


class AbstractSearch(abc.ABC):
    """ Encapsulation of a single lookup in the database.
    """
    SEARCH_PRIO: int = 2

    def __init__(self, penalty: float) -> None:
        self.penalty = penalty

    @abc.abstractmethod
    async def lookup(self, conn: SearchConnection, details: SearchDetails) -> SearchResults:
        """ Find results for the search in the database.
        """


def select_placex(t: SaFromClause) -> SaSelect:
    """ Return the basic select query for placex which returns all
        fields necessary to fill a Nominatim result. 't' must either be
        the placex table or a subquery returning appropriate fields from
        a placex-related query.
    """
    return sa.select(t.c.place_id, t.c.osm_type, t.c.osm_id, t.c.name,
                     t.c.class_, t.c.type,
                     t.c.address, t.c.extratags,
                     t.c.housenumber, t.c.postcode, t.c.country_code,
                     t.c.wikipedia,
                     t.c.parent_place_id, t.c.rank_address, t.c.rank_search,
                     t.c.linked_place_id, t.c.admin_level,
                     t.c.centroid,
                     t.c.geometry.ST_Expand(0).label('bbox'))


def exclude_places(t: SaFromClause) -> Callable[[], SaExpression]:
    """ Return an expression to exclude place IDs from the list in the
        SearchDetails.

        Requires the excluded IDs to be supplied as a bind parameter in SQL.
    """
    return lambda: t.c.place_id.not_in(sa.bindparam('excluded'))


def filter_by_layer(table: SaFromClause, layers: DataLayer) -> SaColumn:
    """ Return an expression that filters the given table by layers.
    """
    orexpr: List[SaExpression] = []
    if layers & DataLayer.ADDRESS and layers & DataLayer.POI:
        orexpr.append(no_index(table.c.rank_address).between(1, 30))
    elif layers & DataLayer.ADDRESS:
        orexpr.append(no_index(table.c.rank_address).between(1, 29))
        orexpr.append(sa.func.IsAddressPoint(table))
    elif layers & DataLayer.POI:
        orexpr.append(sa.and_(no_index(table.c.rank_address) == 30,
                              table.c.class_.not_in(('place', 'building'))))

    if layers & DataLayer.MANMADE:
        exclude = []
        if not layers & DataLayer.RAILWAY:
            exclude.append('railway')
        if not layers & DataLayer.NATURAL:
            exclude.extend(('natural', 'water', 'waterway'))
        orexpr.append(sa.and_(table.c.class_.not_in(tuple(exclude)),
                              no_index(table.c.rank_address) == 0))
    else:
        include = []
        if layers & DataLayer.RAILWAY:
            include.append('railway')
        if layers & DataLayer.NATURAL:
            include.extend(('natural', 'water', 'waterway'))
        orexpr.append(sa.and_(table.c.class_.in_(tuple(include)),
                              no_index(table.c.rank_address) == 0))

    if len(orexpr) == 1:
        return orexpr[0]

    return sa.or_(*orexpr)


def no_index(expr: SaColumn) -> SaColumn:
    """ Wrap the given expression, so that the query planner will
        refrain from using the expression for index lookup.
    """
    return sa.func.coalesce(sa.null(), expr)


def filter_by_area(sql: SaSelect, t: SaFromClause,
                   details: SearchDetails, avoid_index: bool = False) -> SaSelect:
    """ Apply SQL statements for filtering by viewbox and near point,
        if applicable.
    """
    if details.near is not None and details.near_radius is not None:
        if details.near_radius < 0.1 and not avoid_index:
            sql = sql.where(
                t.c.geometry.within_distance(sa.bindparam('near', type_=Geometry),
                                             sa.bindparam('near_radius')))
        else:
            sql = sql.where(
                t.c.geometry.ST_Distance(
                    sa.bindparam('near', type_=Geometry)) <= sa.bindparam('near_radius'))
    if details.viewbox is not None and details.bounded_viewbox:
        sql = sql.where(t.c.geometry.intersects(sa.bindparam('viewbox', type_=Geometry),
                                                use_index=not avoid_index and
                                                details.viewbox.area < 0.2))

    return sql


def add_geometry_columns(sql: SaLambdaSelect, col: SaColumn, details: SearchDetails) -> SaSelect:
    """ Add columns for requested geometry formats and return the new query.
    """
    out = []

    if details.geometry_simplification > 0.0:
        col = sa.func.ST_SimplifyPreserveTopology(col, details.geometry_simplification)

    if details.geometry_output & GeometryFormat.GEOJSON:
        out.append(sa.func.ST_AsGeoJSON(col, 7).label('geometry_geojson'))
    if details.geometry_output & GeometryFormat.TEXT:
        out.append(sa.func.ST_AsText(col).label('geometry_text'))
    if details.geometry_output & GeometryFormat.KML:
        out.append(sa.func.ST_AsKML(col, 7).label('geometry_kml'))
    if details.geometry_output & GeometryFormat.SVG:
        out.append(sa.func.ST_AsSVG(col, 0, 7).label('geometry_svg'))

    return sql.add_columns(*out)
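The `filter_by_layer()` function above combines layer flags: `MANMADE` builds a class blacklist that shrinks when `RAILWAY` or `NATURAL` are also requested. A pure-Python sketch of just that branch, using a stand-in `enum.Flag` (the enum here only illustrates the flag arithmetic; the real `DataLayer` lives in `...types`):

```python
import enum

# Stand-in for the DataLayer flag consumed by filter_by_layer();
# member names mirror the real enum, but this class is illustrative only.
class DataLayer(enum.Flag):
    ADDRESS = enum.auto()
    POI = enum.auto()
    RAILWAY = enum.auto()
    MANMADE = enum.auto()
    NATURAL = enum.auto()

def manmade_excluded_classes(layers: DataLayer) -> tuple:
    # Mirrors the MANMADE branch above: railway and natural classes
    # are excluded unless their layer was requested alongside MANMADE.
    exclude = []
    if not layers & DataLayer.RAILWAY:
        exclude.append('railway')
    if not layers & DataLayer.NATURAL:
        exclude.extend(('natural', 'water', 'waterway'))
    return tuple(exclude)

print(manmade_excluded_classes(DataLayer.MANMADE))
print(manmade_excluded_classes(DataLayer.MANMADE | DataLayer.NATURAL))
```

Requesting additional layers only ever removes entries from the blacklist, which is why the branch needs no special-casing for combined flags.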
@@ -1,119 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of searches for a country.
"""

import sqlalchemy as sa

from . import base
from ..db_search_fields import SearchData
from ... import results as nres
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox


class CountrySearch(base.AbstractSearch):
    """ Search for a country name or country code.
    """
    SEARCH_PRIO = 0

    def __init__(self, sdata: SearchData) -> None:
        super().__init__(sdata.penalty)
        self.countries = sdata.countries

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.placex

        ccodes = self.countries.values
        sql = base.select_placex(t)\
            .add_columns(t.c.importance)\
            .where(t.c.country_code.in_(ccodes))\
            .where(t.c.rank_address == 4)

        if details.geometry_output:
            sql = base.add_geometry_columns(sql, t.c.geometry, details)

        if details.excluded:
            sql = sql.where(base.exclude_places(t))

        sql = base.filter_by_area(sql, t, details)

        bind_params = {
            'excluded': details.excluded,
            'viewbox': details.viewbox,
            'near': details.near,
            'near_radius': details.near_radius
        }

        results = nres.SearchResults()
        for row in await conn.execute(sql, bind_params):
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
            result.bbox = Bbox.from_wkb(row.bbox)
            results.append(result)

        if not results:
            results = await self.lookup_in_country_table(conn, details)

        if results:
            details.min_rank = min(5, details.max_rank)
            details.max_rank = min(25, details.max_rank)

        return results

    async def lookup_in_country_table(self, conn: SearchConnection,
                                      details: SearchDetails) -> nres.SearchResults:
        """ Look up the country in the fallback country tables.
        """
        # Avoid the fallback search when this is a more search. Country results
        # usually are in the first batch of results and it is not possible
        # to exclude these fallbacks.
        if details.excluded:
            return nres.SearchResults()

        t = conn.t.country_name
        tgrid = conn.t.country_grid

        sql = sa.select(tgrid.c.country_code,
                        tgrid.c.geometry.ST_Centroid().ST_Collect().ST_Centroid()
                             .label('centroid'),
                        tgrid.c.geometry.ST_Collect().ST_Expand(0).label('bbox'))\
                .where(tgrid.c.country_code.in_(self.countries.values))\
                .group_by(tgrid.c.country_code)

        sql = base.filter_by_area(sql, tgrid, details, avoid_index=True)

        sub = sql.subquery('grid')

        sql = sa.select(t.c.country_code,
                        t.c.name.merge(t.c.derived_name).label('name'),
                        sub.c.centroid, sub.c.bbox)\
                .join(sub, t.c.country_code == sub.c.country_code)

        if details.geometry_output:
            sql = base.add_geometry_columns(sql, sub.c.centroid, details)

        bind_params = {
            'viewbox': details.viewbox,
            'near': details.near,
            'near_radius': details.near_radius
        }

        results = nres.SearchResults()
        for row in await conn.execute(sql, bind_params):
            result = nres.create_from_country_row(row, nres.SearchResult)
            assert result
            result.bbox = Bbox.from_wkb(row.bbox)
            result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
            results.append(result)

        return results
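In both lookup paths above, a result's accuracy is the search penalty plus a per-country penalty, with 5.0 as the default for matched codes that carry no explicit weight. A small sketch of that computation, with a hypothetical `WeightedCountries` class standing in for the real weighted-value container from `db_search_fields`:

```python
# Illustrative stand-in for the weighted country list used by
# CountrySearch: essentially a mapping country code -> penalty
# with a caller-supplied default for unweighted codes.
class WeightedCountries:
    def __init__(self, penalties):
        self.penalties = penalties          # e.g. {'de': 0.0, 'at': 0.3}

    def get_penalty(self, ccode, default):
        return self.penalties.get(ccode, default)

countries = WeightedCountries({'de': 0.0, 'at': 0.3})
base_penalty = 0.2

accuracy_de = base_penalty + countries.get_penalty('de', 5.0)
accuracy_fr = base_penalty + countries.get_penalty('fr', 5.0)
```

The large default keeps unweighted matches far behind explicitly weighted ones when results are sorted by accuracy.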
@@ -1,136 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of a category search around a place.
"""
from typing import List, Tuple

import sqlalchemy as sa

from . import base
from ...typing import SaBind
from ...types import SearchDetails, Bbox
from ...connection import SearchConnection
from ... import results as nres
from ..db_search_fields import WeightedCategories


LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')


class NearSearch(base.AbstractSearch):
    """ Category search of a place type near the result of another search.
    """
    def __init__(self, penalty: float, categories: WeightedCategories,
                 search: base.AbstractSearch) -> None:
        super().__init__(penalty)
        self.search = search
        self.categories = categories

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        results = nres.SearchResults()
        base = await self.search.lookup(conn, details)

        if not base:
            return results

        base.sort(key=lambda r: (r.accuracy, r.rank_search))
        max_accuracy = base[0].accuracy + 0.5
        if base[0].rank_address == 0:
            min_rank = 0
            max_rank = 0
        elif base[0].rank_address < 26:
            min_rank = 1
            max_rank = min(25, base[0].rank_address + 4)
        else:
            min_rank = 26
            max_rank = 30
        base = nres.SearchResults(r for r in base
                                  if (r.source_table == nres.SourceTable.PLACEX
                                      and r.accuracy <= max_accuracy
                                      and r.bbox and r.bbox.area < 20
                                      and r.rank_address >= min_rank
                                      and r.rank_address <= max_rank))

        if base:
            baseids = [b.place_id for b in base[:5] if b.place_id]

            for category, penalty in self.categories:
                await self.lookup_category(results, conn, baseids, category, penalty, details)
                if len(results) >= details.max_results:
                    break

        return results

    async def lookup_category(self, results: nres.SearchResults,
                              conn: SearchConnection, ids: List[int],
                              category: Tuple[str, str], penalty: float,
                              details: SearchDetails) -> None:
        """ Find places of the given category near the list of
            place ids and add the results to 'results'.
        """
        table = await conn.get_class_table(*category)

        tgeom = conn.t.placex.alias('pgeom')

        if table is None:
            # No classtype table available, do a simplified lookup in placex.
            table = conn.t.placex
            sql = sa.select(table.c.place_id,
                            sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
                              .label('dist'))\
                    .join(tgeom, table.c.geometry.intersects(tgeom.c.centroid.ST_Expand(0.01)))\
                    .where(table.c.class_ == category[0])\
                    .where(table.c.type == category[1])
        else:
            # Use classtype table. We can afford to use a larger
            # radius for the lookup.
            sql = sa.select(table.c.place_id,
                            sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
                              .label('dist'))\
                    .join(tgeom,
                          table.c.centroid.ST_CoveredBy(
                              sa.case((sa.and_(tgeom.c.rank_address > 9,
                                               tgeom.c.geometry.is_area()),
                                       tgeom.c.geometry),
                                      else_=tgeom.c.centroid.ST_Expand(0.05))))

        inner = sql.where(tgeom.c.place_id.in_(ids))\
                   .group_by(table.c.place_id).subquery()

        t = conn.t.placex
        sql = base.select_placex(t).add_columns((-inner.c.dist).label('importance'))\
                  .join(inner, inner.c.place_id == t.c.place_id)\
                  .order_by(inner.c.dist)

        sql = sql.where(base.no_index(t.c.rank_address).between(MIN_RANK_PARAM, MAX_RANK_PARAM))
        if details.countries:
            sql = sql.where(t.c.country_code.in_(COUNTRIES_PARAM))
        if details.excluded:
            sql = sql.where(base.exclude_places(t))
        if details.layers is not None:
            sql = sql.where(base.filter_by_layer(t, details.layers))

        sql = sql.limit(LIMIT_PARAM)

        bind_params = {'limit': details.max_results,
                       'min_rank': details.min_rank,
                       'max_rank': details.max_rank,
                       'excluded': details.excluded,
                       'countries': details.countries}
        for row in await conn.execute(sql, bind_params):
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.accuracy = self.penalty + penalty
            result.bbox = Bbox.from_wkb(row.bbox)
            results.append(result)
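The rank window that `NearSearch.lookup()` derives from the best base result can be summarised as a small function. This `rank_window` helper is a hypothetical mirror of the three-way branch above, not part of the source:

```python
# Mirror of the min_rank/max_rank selection in NearSearch.lookup():
# a rank-0 (country-less) anchor only matches rank 0, a mid-level
# anchor matches a narrow band just above its own rank (capped at 25),
# and street-level anchors (>= 26) match streets and POIs.
def rank_window(rank_address: int) -> tuple:
    if rank_address == 0:
        return (0, 0)
    if rank_address < 26:
        return (1, min(25, rank_address + 4))
    return (26, 30)

print(rank_window(16))
print(rank_window(27))
```

Keeping the window tied to the anchor's rank prevents, say, a city-level base result from pulling in POIs that belong to individual streets.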
@@ -1,214 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of search for a named place (without housenumber).
"""
from typing import cast

import sqlalchemy as sa

from . import base
from ...typing import SaBind, SaExpression, SaColumn
from ...types import SearchDetails, Bbox
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ... import results as nres
from ..db_search_fields import SearchData


LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')


class PlaceSearch(base.AbstractSearch):
    """ Generic search for a named place.
    """
    SEARCH_PRIO = 1

    def __init__(self, extra_penalty: float, sdata: SearchData,
                 expected_count: int, has_address_terms: bool) -> None:
        assert not sdata.housenumbers
        super().__init__(sdata.penalty + extra_penalty)
        self.countries = sdata.countries
        self.postcodes = sdata.postcodes
        self.qualifiers = sdata.qualifiers
        self.lookups = sdata.lookups
        self.rankings = sdata.rankings
        self.expected_count = expected_count
        self.has_address_terms = has_address_terms

    def _inner_search_name_cte(self, conn: SearchConnection,
                               details: SearchDetails) -> 'sa.CTE':
        """ Create a subquery that preselects the rows in the search_name
            table.
        """
        t = conn.t.search_name

        penalty: SaExpression = sa.literal(self.penalty)
        for ranking in self.rankings:
            penalty += ranking.sql_penalty(t)

        sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
                        t.c.country_code, t.c.centroid,
                        t.c.name_vector, t.c.nameaddress_vector,
                        sa.case((t.c.importance > 0, t.c.importance),
                                else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
                          .label('importance'),
                        penalty.label('penalty'))

        for lookup in self.lookups:
            sql = sql.where(lookup.sql_condition(t))

        if self.countries:
            sql = sql.where(t.c.country_code.in_(self.countries.values))

        if self.postcodes:
            # if a postcode is given, don't search for state or country level objects
            sql = sql.where(t.c.address_rank > 9)
            if self.expected_count > 10000:
                # Many results expected. Restrict by postcode.
                tpc = conn.t.postcode
                sql = sql.where(sa.select(tpc.c.postcode)
                                .where(tpc.c.postcode.in_(self.postcodes.values))
                                .where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
                                .exists())

        if details.viewbox is not None:
            if details.bounded_viewbox:
                sql = sql.where(t.c.centroid
                                .intersects(VIEWBOX_PARAM,
                                            use_index=details.viewbox.area < 0.2))
            elif not self.postcodes and self.expected_count >= 10000:
                sql = sql.where(t.c.centroid
                                .intersects(VIEWBOX2_PARAM,
                                            use_index=details.viewbox.area < 0.5))

        if details.near is not None and details.near_radius is not None:
            if details.near_radius < 0.1:
                sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
                                                             NEAR_RADIUS_PARAM))
            else:
                sql = sql.where(t.c.centroid
                                .ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)

        if details.excluded:
            sql = sql.where(base.exclude_places(t))
        if details.min_rank > 0:
            sql = sql.where(sa.or_(t.c.address_rank >= MIN_RANK_PARAM,
                                   t.c.search_rank >= MIN_RANK_PARAM))
        if details.max_rank < 30:
            sql = sql.where(sa.or_(t.c.address_rank <= MAX_RANK_PARAM,
                                   t.c.search_rank <= MAX_RANK_PARAM))

        inner = sql.limit(5000 if self.qualifiers else 1000)\
                   .order_by(sa.desc(sa.text('importance')))\
                   .subquery()

        sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
                        inner.c.country_code, inner.c.centroid, inner.c.importance,
                        inner.c.penalty)

        # If the query is not an address search or has a geographic preference,
        # preselect most important items to restrict the number of places
        # that need to be looked up in placex.
        if (details.viewbox is None or details.bounded_viewbox)\
           and (details.near is None or details.near_radius is not None)\
           and not self.qualifiers:
            sql = sql.add_columns(sa.func.first_value(inner.c.penalty - inner.c.importance)
                                    .over(order_by=inner.c.penalty - inner.c.importance)
                                    .label('min_penalty'))

            inner = sql.subquery()

            sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
                            inner.c.country_code, inner.c.centroid, inner.c.importance,
                            inner.c.penalty)\
                    .where(inner.c.penalty - inner.c.importance < inner.c.min_penalty + 0.5)

        return sql.cte('searches')

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.placex
        tsearch = self._inner_search_name_cte(conn, details)

        sql = base.select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)

        if details.geometry_output:
            sql = base.add_geometry_columns(sql, t.c.geometry, details)

        penalty: SaExpression = tsearch.c.penalty

        if self.postcodes:
            if self.has_address_terms:
                tpc = conn.t.postcode
                pcs = self.postcodes.values

                pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)))\
                            .where(tpc.c.postcode.in_(pcs))\
                            .scalar_subquery()
                penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
                                   else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))
            else:
                # High penalty if the postcode is not an exact match.
                # The postcode search needs to get priority here.
                penalty += sa.case((t.c.postcode.in_(self.postcodes.values), 0.0), else_=1.0)

        if details.viewbox is not None and not details.bounded_viewbox:
            penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
                               (t.c.geometry.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
                               else_=1.0)

        if details.near is not None:
            sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
                                  .label('importance'))
            sql = sql.order_by(sa.desc(sa.text('importance')))
        else:
            sql = sql.order_by(penalty - tsearch.c.importance)
            sql = sql.add_columns(tsearch.c.importance)

        sql = sql.add_columns(penalty.label('accuracy'))\
                 .order_by(sa.text('accuracy'))

        sql = sql.where(t.c.linked_place_id == None)\
                 .where(t.c.indexed_status == 0)
        if self.qualifiers:
            sql = sql.where(self.qualifiers.sql_restrict(t))
        if details.layers is not None:
            sql = sql.where(base.filter_by_layer(t, details.layers))

        sql = sql.limit(LIMIT_PARAM)

        bind_params = {
            'limit': details.max_results,
            'min_rank': details.min_rank,
            'max_rank': details.max_rank,
            'viewbox': details.viewbox,
            'viewbox2': details.viewbox_x2,
            'near': details.near,
            'near_radius': details.near_radius,
            'excluded': details.excluded,
            'countries': details.countries
        }

        results = nres.SearchResults()
        for row in await conn.execute(sql, bind_params):
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.bbox = Bbox.from_wkb(row.bbox)
            result.accuracy = row.accuracy
            results.append(result)

        return results
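The CTE above substitutes a synthetic importance of `0.40001 - search_rank/75` for rows without a stored value, so more prominent (lower-rank) places still sort first. A quick numeric sketch of that fallback (`effective_importance` is a hypothetical helper name):

```python
# Mirror of the importance fallback in _inner_search_name_cte():
# a stored positive importance wins; otherwise a synthetic value
# derived from the search rank keeps low-rank (prominent) places
# ahead of high-rank ones.
def effective_importance(importance, search_rank):
    if importance is not None and importance > 0:
        return importance
    return 0.40001 - search_rank / 75

# a rank-4 country outranks a rank-26 street when neither has
# a stored importance
country = effective_importance(None, 4)
street = effective_importance(None, 26)
```

The 0.40001 offset keeps the synthetic values below typical stored importances, so curated importance data still dominates the ordering.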
@@ -1,114 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of category search.
"""
from typing import List

import sqlalchemy as sa

from . import base
from ..db_search_fields import SearchData
from ... import results as nres
from ...typing import SaBind, SaRow, SaSelect, SaLambdaSelect
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox


LIMIT_PARAM: SaBind = sa.bindparam('limit')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')


class PoiSearch(base.AbstractSearch):
    """ Category search in a geographic area.
    """
    def __init__(self, sdata: SearchData) -> None:
        super().__init__(sdata.penalty)
        self.qualifiers = sdata.qualifiers
        self.countries = sdata.countries

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        bind_params = {
            'limit': details.max_results,
            'viewbox': details.viewbox,
            'near': details.near,
            'near_radius': details.near_radius,
            'excluded': details.excluded
        }

        t = conn.t.placex

        rows: List[SaRow] = []

        if details.near and details.near_radius is not None and details.near_radius < 0.2:
            # simply search in placex table
            def _base_query() -> SaSelect:
                return base.select_placex(t) \
                    .add_columns((-t.c.centroid.ST_Distance(NEAR_PARAM))
                                 .label('importance'))\
                    .where(t.c.linked_place_id == None) \
                    .where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM)) \
                    .order_by(t.c.centroid.ST_Distance(NEAR_PARAM)) \
                    .limit(LIMIT_PARAM)

            classtype = self.qualifiers.values
            if len(classtype) == 1:
                cclass, ctype = classtype[0]
                sql: SaLambdaSelect = sa.lambda_stmt(
                    lambda: _base_query().where(t.c.class_ == cclass)
                                         .where(t.c.type == ctype))
            else:
                sql = _base_query().where(sa.or_(*(sa.and_(t.c.class_ == cls, t.c.type == typ)
                                                   for cls, typ in classtype)))

            if self.countries:
                sql = sql.where(t.c.country_code.in_(self.countries.values))

            if details.viewbox is not None and details.bounded_viewbox:
                sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM))

            rows.extend(await conn.execute(sql, bind_params))
        else:
            # use the class type tables
            for category in self.qualifiers.values:
                table = await conn.get_class_table(*category)
                if table is not None:
                    sql = base.select_placex(t)\
                        .add_columns(t.c.importance)\
                        .join(table, t.c.place_id == table.c.place_id)\
                        .where(t.c.class_ == category[0])\
                        .where(t.c.type == category[1])

                    if details.viewbox is not None and details.bounded_viewbox:
                        sql = sql.where(table.c.centroid.intersects(VIEWBOX_PARAM))

                    if details.near and details.near_radius is not None:
                        sql = sql.order_by(table.c.centroid.ST_Distance(NEAR_PARAM))\
                                 .where(table.c.centroid.within_distance(NEAR_PARAM,
                                                                         NEAR_RADIUS_PARAM))

                    if self.countries:
                        sql = sql.where(t.c.country_code.in_(self.countries.values))

                    sql = sql.limit(LIMIT_PARAM)
                    rows.extend(await conn.execute(sql, bind_params))

        results = nres.SearchResults()
        for row in rows:
            result = nres.create_from_placex_row(row, nres.SearchResult)
            assert result
            result.accuracy = self.penalty + self.qualifiers.get_penalty((row.class_, row.type))
            result.bbox = Bbox.from_wkb(row.bbox)
            results.append(result)

        return results
@@ -1,129 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of search for a postcode.
"""

import sqlalchemy as sa

from . import base
from ...typing import SaBind, SaExpression
from ...sql.sqlalchemy_types import Geometry, IntArray
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox
from ... import results as nres
from ..db_search_fields import SearchData


LIMIT_PARAM: SaBind = sa.bindparam('limit')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)


class PostcodeSearch(base.AbstractSearch):
    """ Search for a postcode.
    """
    def __init__(self, extra_penalty: float, sdata: SearchData) -> None:
        super().__init__(sdata.penalty + extra_penalty)
        self.countries = sdata.countries
        self.postcodes = sdata.postcodes
        self.lookups = sdata.lookups
        self.rankings = sdata.rankings

    async def lookup(self, conn: SearchConnection,
                     details: SearchDetails) -> nres.SearchResults:
        """ Find results for the search in the database.
        """
        t = conn.t.postcode
        pcs = self.postcodes.values

        sql = sa.select(t.c.place_id, t.c.parent_place_id,
                        t.c.rank_search, t.c.rank_address,
                        t.c.postcode, t.c.country_code,
                        t.c.geometry.label('centroid'))\
                .where(t.c.postcode.in_(pcs))

        if details.geometry_output:
            sql = base.add_geometry_columns(sql, t.c.geometry, details)

        penalty: SaExpression = sa.literal(self.penalty)

        if details.viewbox is not None and not details.bounded_viewbox:
            penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM), 0.0),
                               (t.c.geometry.intersects(VIEWBOX2_PARAM), 0.5),
                               else_=1.0)

        if details.near is not None:
            sql = sql.order_by(t.c.geometry.ST_Distance(NEAR_PARAM))

        sql = base.filter_by_area(sql, t, details)

        if self.countries:
            sql = sql.where(t.c.country_code.in_(self.countries.values))

        if details.excluded:
            sql = sql.where(base.exclude_places(t))

        if self.lookups:
            assert len(self.lookups) == 1
            tsearch = conn.t.search_name
            sql = sql.where(tsearch.c.place_id == t.c.parent_place_id)\
                     .where((tsearch.c.name_vector + tsearch.c.nameaddress_vector)
                            .contains(sa.type_coerce(self.lookups[0].tokens,
                                                     IntArray)))
            # Do NOT add rerank penalties based on the address terms.
            # The standard rerank penalty only checks the address vector
            # while terms may appear in name and address vector. This would
            # lead to overly high penalties.
            # We assume that a postcode is precise enough to not require
            # additional full name matches.

        penalty += sa.case(*((t.c.postcode == v, p) for v, p in self.postcodes),
                           else_=1.0)

        sql = sql.add_columns(penalty.label('accuracy'))
        sql = sql.order_by('accuracy').limit(LIMIT_PARAM)

        bind_params = {
            'limit': details.max_results,
            'viewbox': details.viewbox,
            'viewbox2': details.viewbox_x2,
            'near': details.near,
            'near_radius': details.near_radius,
            'excluded': details.excluded
        }

        results = nres.SearchResults()
        for row in await conn.execute(sql, bind_params):
            p = conn.t.placex
            placex_sql = base.select_placex(p)\
                .add_columns(p.c.importance)\
                .where(sa.text("""class = 'boundary'
                                  AND type = 'postal_code'
                                  AND osm_type = 'R'"""))\
                .where(p.c.country_code == row.country_code)\
                .where(p.c.postcode == row.postcode)\
                .limit(1)

            if details.geometry_output:
                placex_sql = base.add_geometry_columns(placex_sql, p.c.geometry, details)

            for prow in await conn.execute(placex_sql, bind_params):
                result = nres.create_from_placex_row(prow, nres.SearchResult)
                if result is not None:
                    result.bbox = Bbox.from_wkb(prow.bbox)
                break
            else:
                result = nres.create_from_postcode_row(row, nres.SearchResult)

            assert result
            if result.place_id not in details.excluded:
                result.accuracy = row.accuracy
                results.append(result)

        return results

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Public interface to the search code.
@@ -50,9 +50,6 @@ class ForwardGeocoder:
            self.query_analyzer = await make_query_analyzer(self.conn)

        query = await self.query_analyzer.analyze_query(phrases)
        query.compute_direction_penalty()
        log().var_dump('Query direction penalty',
                       lambda: f"[{'LR' if query.dir_penalty < 0 else 'RL'}] {query.dir_penalty}")

        searches: List[AbstractSearch] = []
        if query.num_token_slots() > 0:
@@ -83,7 +80,7 @@ class ForwardGeocoder:
            min_ranking = searches[0].penalty + 2.0
            prev_penalty = 0.0
            for i, search in enumerate(searches):
                if search.penalty > prev_penalty and (search.penalty > min_ranking or i > 15):
                if search.penalty > prev_penalty and (search.penalty > min_ranking or i > 20):
                    break
                log().table_dump(f"{i + 1}. Search", _dump_searches([search], query))
                log().var_dump('Params', self.params)
@@ -118,20 +115,17 @@ class ForwardGeocoder:
        """ Remove badly matching results, sort by ranking and
            limit to the configured number of results.
        """
        results.sort(key=lambda r: (r.ranking, 0 if r.bbox is None else -r.bbox.area))
        if results:
            results.sort(key=lambda r: (r.ranking, 0 if r.bbox is None else -r.bbox.area))
            min_rank = results[0].rank_search
            min_ranking = results[0].ranking
            results = SearchResults(r for r in results
                                    if (r.ranking + 0.03 * (r.rank_search - min_rank)
                                        < min_ranking + 0.5))

            final = SearchResults()
            min_rank = results[0].rank_search
            min_ranking = results[0].ranking
            results = SearchResults(results[:self.limit])

            for r in results:
                if r.ranking + 0.03 * (r.rank_search - min_rank) < min_ranking + 0.5:
                    final.append(r)
                    min_rank = min(r.rank_search, min_rank)
                    if len(final) == self.limit:
                        break

            return final
        return results

    def rerank_by_query(self, query: QueryStruct, results: SearchResults) -> None:
        """ Adjust the accuracy of the localized result according to how well
@@ -156,16 +150,17 @@ class ForwardGeocoder:
                if not words:
                    continue
                for qword in qwords:
                    # only add distance penalty if there is no perfect match
                    if qword not in words:
                        wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio() for w in words)
                        distance += len(qword) if wdist < 0.4 else 1
                    wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio() for w in words)
                    if wdist < 0.5:
                        distance += len(qword)
                    else:
                        distance += (1.0 - wdist) * len(qword)
                # Compensate for the fact that country names do not get a
                # match penalty yet by the tokenizer.
                # Temporary hack that needs to be removed!
                if result.rank_address == 4:
                    distance *= 2
                result.accuracy += distance * 0.3 / sum(len(w) for w in qwords)
                result.accuracy += distance * 0.4 / sum(len(w) for w in qwords)

    async def lookup_pois(self, categories: List[Tuple[str, str]],
                          phrases: List[Phrase]) -> SearchResults:
@@ -213,10 +208,9 @@ class ForwardGeocoder:
            results = self.pre_filter_results(results)
            await add_result_details(self.conn, results, self.params)
            log().result_dump('Preliminary Results', ((r.accuracy, r) for r in results))
            if len(results) > 1:
                self.rerank_by_query(query, results)
                log().result_dump('Results after reranking', ((r.accuracy, r) for r in results))
            results = self.sort_and_cut_results(results)
            self.rerank_by_query(query, results)
            log().result_dump('Results after reranking', ((r.accuracy, r) for r in results))
            results = self.sort_and_cut_results(results)
            log().result_dump('Final Results', ((r.accuracy, r) for r in results))

        return results

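The `rerank_by_query` hunk above scores how closely each query word matches the result's words with `difflib.SequenceMatcher.quick_ratio`. As a rough standalone sketch of the graded variant shown in the diff (the function name `word_distance` is invented here for illustration; the thresholds 0.5 and the length scaling come from the hunk):

```python
import difflib


def word_distance(qword: str, words: list) -> float:
    # Best character-level similarity between the query word and any
    # candidate word; quick_ratio() is an upper bound on ratio().
    wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio() for w in words)
    if wdist < 0.5:
        # very dissimilar: penalize with the full word length
        return len(qword)
    # otherwise scale the length penalty by the remaining dissimilarity
    return (1.0 - wdist) * len(qword)
```

An exact match yields a distance of 0, a completely unrelated word the full length of the query word, and a near-match something small in between.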
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of query analysis for the ICU tokenizer.
@@ -37,14 +37,14 @@ DB_TO_TOKEN_TYPE = {
    'C': qmod.TOKEN_COUNTRY
}

PENALTY_BREAK = {
    qmod.BREAK_START: -0.5,
    qmod.BREAK_END: -0.5,
    qmod.BREAK_PHRASE: -0.5,
    qmod.BREAK_SOFT_PHRASE: -0.5,
PENALTY_IN_TOKEN_BREAK = {
    qmod.BREAK_START: 0.5,
    qmod.BREAK_END: 0.5,
    qmod.BREAK_PHRASE: 0.5,
    qmod.BREAK_SOFT_PHRASE: 0.5,
    qmod.BREAK_WORD: 0.1,
    qmod.BREAK_PART: 0.2,
    qmod.BREAK_TOKEN: 0.4
    qmod.BREAK_PART: 0.0,
    qmod.BREAK_TOKEN: 0.0
}


@@ -78,13 +78,13 @@ class ICUToken(qmod.Token):
        self.penalty += (distance/len(self.lookup_word))

    @staticmethod
    def from_db_row(row: SaRow) -> 'ICUToken':
    def from_db_row(row: SaRow, base_penalty: float = 0.0) -> 'ICUToken':
        """ Create a ICUToken from the row of the word table.
        """
        count = 1 if row.info is None else row.info.get('count', 1)
        addr_count = 1 if row.info is None else row.info.get('addr_count', 1)

        penalty = 0.0
        penalty = base_penalty
        if row.type == 'w':
            penalty += 0.3
        elif row.type == 'W':
@@ -174,14 +174,11 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):

        self.split_query(query)
        log().var_dump('Transliterated query', lambda: query.get_transliterated_query())
        words = query.extract_words()
        words = query.extract_words(base_penalty=PENALTY_IN_TOKEN_BREAK[qmod.BREAK_WORD])

        for row in await self.lookup_in_db(list(words.keys())):
            for trange in words[row.word_token]:
                # Create a new token for each position because the token
                # penalty can vary depending on the position in the query.
                # (See rerank_tokens() below.)
                token = ICUToken.from_db_row(row)
                token = ICUToken.from_db_row(row, trange.penalty or 0.0)
                if row.type == 'S':
                    if row.info['op'] in ('in', 'near'):
                        if trange.start == 0:
@@ -203,7 +200,6 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
                                            lookup_word=pc, word_token=term,
                                            info=None))
        self.rerank_tokens(query)
        self.compute_break_penalties(query)

        log().table_dump('Word tokens', _dump_word_tokens(query))

@@ -233,10 +229,13 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
            if trans:
                for term in trans.split(' '):
                    if term:
                        query.add_node(qmod.BREAK_TOKEN, phrase.ptype, term, word)
                query.nodes[-1].btype = breakchar
                        query.add_node(qmod.BREAK_TOKEN, phrase.ptype,
                                       PENALTY_IN_TOKEN_BREAK[qmod.BREAK_TOKEN],
                                       term, word)
                query.nodes[-1].adjust_break(breakchar,
                                             PENALTY_IN_TOKEN_BREAK[breakchar])

        query.nodes[-1].btype = qmod.BREAK_END
        query.nodes[-1].adjust_break(qmod.BREAK_END, PENALTY_IN_TOKEN_BREAK[qmod.BREAK_END])

    async def lookup_in_db(self, words: List[str]) -> 'sa.Result[Any]':
        """ Return the token information from the database for the
@@ -268,53 +267,32 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
    def rerank_tokens(self, query: qmod.QueryStruct) -> None:
        """ Add penalties to tokens that depend on presence of other token.
        """
        for start, end, tlist in query.iter_tokens_by_edge():
            if len(tlist) > 1:
                # If it looks like a Postcode, give preference.
                if qmod.TOKEN_POSTCODE in tlist:
                    for ttype, tokens in tlist.items():
                        if ttype != qmod.TOKEN_POSTCODE and \
                           (ttype != qmod.TOKEN_HOUSENUMBER or
                            start + 1 > end or
                            len(query.nodes[end].term_lookup) > 4):
                            for token in tokens:
                                token.penalty += 0.39

                # If it looks like a simple housenumber, prefer that.
                if qmod.TOKEN_HOUSENUMBER in tlist:
                    hnr_lookup = tlist[qmod.TOKEN_HOUSENUMBER][0].lookup_word
                    if len(hnr_lookup) <= 3 and any(c.isdigit() for c in hnr_lookup):
                        penalty = 0.5 - tlist[qmod.TOKEN_HOUSENUMBER][0].penalty
                        for ttype, tokens in tlist.items():
                            if ttype != qmod.TOKEN_HOUSENUMBER:
                                for token in tokens:
                                    token.penalty += penalty

            # rerank tokens against the normalized form
            norm = ' '.join(n.term_normalized for n in query.nodes[start + 1:end + 1]
                            if n.btype != qmod.BREAK_TOKEN)
            if not norm:
                # Can happen when the token only covers a partial term
                norm = query.nodes[start + 1].term_normalized
            for ttype, tokens in tlist.items():
                if ttype != qmod.TOKEN_COUNTRY:
                    for token in tokens:
                        cast(ICUToken, token).rematch(norm)

    def compute_break_penalties(self, query: qmod.QueryStruct) -> None:
        """ Set the break penalties for the nodes in the query.
        """
        for node in query.nodes:
            node.penalty = PENALTY_BREAK[node.btype]
        for i, node, tlist in query.iter_token_lists():
            if tlist.ttype == qmod.TOKEN_POSTCODE:
                tlen = len(cast(ICUToken, tlist.tokens[0]).word_token)
                for repl in node.starting:
                    if repl.end == tlist.end and repl.ttype != qmod.TOKEN_POSTCODE \
                       and (repl.ttype != qmod.TOKEN_HOUSENUMBER or tlen > 4):
                        repl.add_penalty(0.39)
            elif (tlist.ttype == qmod.TOKEN_HOUSENUMBER
                  and len(tlist.tokens[0].lookup_word) <= 3):
                if any(c.isdigit() for c in tlist.tokens[0].lookup_word):
                    for repl in node.starting:
                        if repl.end == tlist.end and repl.ttype != qmod.TOKEN_HOUSENUMBER:
                            repl.add_penalty(0.5 - tlist.tokens[0].penalty)
            elif tlist.ttype not in (qmod.TOKEN_COUNTRY, qmod.TOKEN_PARTIAL):
                norm = ' '.join(n.term_normalized for n in query.nodes[i + 1:tlist.end + 1]
                                if n.btype != qmod.BREAK_TOKEN)
                if not norm:
                    # Can happen when the token only covers a partial term
                    norm = query.nodes[i + 1].term_normalized
                for token in tlist.tokens:
                    cast(ICUToken, token).rematch(norm)


def _dump_word_tokens(query: qmod.QueryStruct) -> Iterator[List[Any]]:
    yield ['type', 'from', 'to', 'token', 'word_token', 'lookup_word', 'penalty', 'count', 'info']
    for i, node in enumerate(query.nodes):
        if node.partial is not None:
            t = cast(ICUToken, node.partial)
            yield [qmod.TOKEN_PARTIAL, str(i), str(i + 1), t.token,
                   t.word_token, t.lookup_word, t.penalty, t.count, t.info]
        for tlist in node.starting:
            for token in tlist.tokens:
                t = cast(ICUToken, token)

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Datastructures for a tokenized query.
@@ -12,17 +12,6 @@ from abc import ABC, abstractmethod
from collections import defaultdict
import dataclasses

# Precomputed denominator for the computation of the linear regression slope
# used to determine the query direction.
# The x value for the regression computation will be the position of the
# token in the query. Thus we know the x values will be [0, query length).
# As the denominator only depends on the x values, we can pre-compute here
# the denominator to use for a given query length.
# Note that query length of two or less is special cased and will not use
# the values from this array. Thus it is not a problem that they are 0.
LINFAC = [i * (sum(si * si for si in range(i)) - (i - 1) * i * (i - 1) / 4)
          for i in range(50)]
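The `LINFAC` table above is an algebraic rearrangement of the textbook least-squares slope denominator n·Σx² − (Σx)² for x = 0 … n−1 (since Σx = n(n−1)/2, its square is n²(n−1)²/4). A quick sketch checking that the two forms agree:

```python
def slope_denominator(n: int) -> float:
    # textbook form: n * sum(x^2) - (sum(x))^2 over x = 0 .. n-1
    xs = range(n)
    return n * sum(x * x for x in xs) - sum(xs) ** 2


# the precomputed table, exactly as in the hunk above
LINFAC = [i * (sum(si * si for si in range(i)) - (i - 1) * i * (i - 1) / 4)
          for i in range(50)]
```

Both expressions evaluate to the same value for every supported query length.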

BreakType = str
""" Type of break between tokens.
"""
@@ -134,6 +123,7 @@ class TokenRange:
    """
    start: int
    end: int
    penalty: Optional[float] = None

    def __lt__(self, other: 'TokenRange') -> bool:
        return self.end <= other.start
@@ -190,50 +180,24 @@ class QueryNode:
    ptype: PhraseType

    penalty: float
    """ Penalty for having a word break at this position. The penalty
        may be negative, when a word break is more likely than continuing
        the word after the node.
    """ Penalty for the break at this node.
    """
    term_lookup: str
    """ Transliterated term ending at this node.
    """ Transliterated term following this node.
    """
    term_normalized: str
    """ Normalised form of term ending at this node.
    """ Normalised form of term following this node.
        When the token resulted from a split during transliteration,
        then this string contains the complete source term.
    """

    starting: List[TokenList] = dataclasses.field(default_factory=list)
    """ List of all full tokens starting at this node.
    """
    partial: Optional[Token] = None
    """ Base token going to the next node.
        May be None when the query has parts for which no words are known.
        Note that the query may still be parsable when there are other
        types of tokens spanning over the gap.
    """

    @property
    def word_break_penalty(self) -> float:
        """ Penalty to apply when a word ends at this node.
    def adjust_break(self, btype: BreakType, penalty: float) -> None:
        """ Change the break type and penalty for this node.
        """
        return max(0, self.penalty)

    @property
    def word_continuation_penalty(self) -> float:
        """ Penalty to apply when a word continues over this node
            (i.e. is a multi-term word).
        """
        return max(0, -self.penalty)

    def name_address_ratio(self) -> float:
        """ Return the probability that the partial token belonging to
            this node forms part of a name (as opposed to part of the address).
        """
        if self.partial is None:
            return 0.5

        return self.partial.count / (self.partial.count + self.partial.addr_count)
        self.btype = btype
        self.penalty = penalty

    def has_tokens(self, end: int, *ttypes: TokenType) -> bool:
        """ Check if there are tokens of the given types ending at the
@@ -270,20 +234,12 @@ class QueryStruct:
        need to be direct neighbours. Thus the query is represented as a
        directed acyclic graph.

        A query also has a direction penalty 'dir_penalty'. This describes
        the likelihood if the query should be read from left-to-right or
        vice versa. A negative 'dir_penalty' should be read as a penalty on
        right-to-left reading, while a positive value represents a penalty
        for left-to-right reading. The default value is 0, which is equivalent
        to having no information about the reading.

        When created, a query contains a single node: the start of the
        query. Further nodes can be added by appending to 'nodes'.
    """

    def __init__(self, source: List[Phrase]) -> None:
        self.source = source
        self.dir_penalty = 0.0
        self.nodes: List[QueryNode] = \
            [QueryNode(BREAK_START, source[0].ptype if source else PHRASE_ANY,
                       0.0, '', '')]
@@ -294,12 +250,13 @@ class QueryStruct:
        return len(self.nodes) - 1

    def add_node(self, btype: BreakType, ptype: PhraseType,
                 break_penalty: float = 0.0,
                 term_lookup: str = '', term_normalized: str = '') -> None:
        """ Append a new break node with the given break type.
            The phrase type denotes the type for any tokens starting
            at the node.
        """
        self.nodes.append(QueryNode(btype, ptype, 0.0, term_lookup, term_normalized))
        self.nodes.append(QueryNode(btype, ptype, break_penalty, term_lookup, term_normalized))

    def add_token(self, trange: TokenRange, ttype: TokenType, token: Token) -> None:
        """ Add a token to the query. 'start' and 'end' are the indexes of the
@@ -312,70 +269,37 @@ class QueryStruct:
            be added to, then the token is silently dropped.
        """
        snode = self.nodes[trange.start]
        if ttype == TOKEN_PARTIAL:
            assert snode.partial is None
            if _phrase_compatible_with(snode.ptype, TOKEN_PARTIAL, False):
                snode.partial = token
        else:
            full_phrase = snode.btype in (BREAK_START, BREAK_PHRASE)\
                and self.nodes[trange.end].btype in (BREAK_PHRASE, BREAK_END)
            if _phrase_compatible_with(snode.ptype, ttype, full_phrase):
                tlist = snode.get_tokens(trange.end, ttype)
                if tlist is None:
                    snode.starting.append(TokenList(trange.end, ttype, [token]))
                else:
                    tlist.append(token)

    def compute_direction_penalty(self) -> None:
        """ Recompute the direction probability from the partial tokens
            of each node.
        """
        n = len(self.nodes) - 1
        if n <= 1 or n >= 50:
            self.dir_penalty = 0
        elif n == 2:
            self.dir_penalty = (self.nodes[1].name_address_ratio()
                                - self.nodes[0].name_address_ratio()) / 3
        else:
            ratios = [n.name_address_ratio() for n in self.nodes[:-1]]
            self.dir_penalty = (n * sum(i * r for i, r in enumerate(ratios))
                                - sum(ratios) * n * (n - 1) / 2) / LINFAC[n]
        full_phrase = snode.btype in (BREAK_START, BREAK_PHRASE)\
            and self.nodes[trange.end].btype in (BREAK_PHRASE, BREAK_END)
        if _phrase_compatible_with(snode.ptype, ttype, full_phrase):
            tlist = snode.get_tokens(trange.end, ttype)
            if tlist is None:
                snode.starting.append(TokenList(trange.end, ttype, [token]))
            else:
                tlist.append(token)

    def get_tokens(self, trange: TokenRange, ttype: TokenType) -> List[Token]:
        """ Get the list of tokens of a given type, spanning the given
            nodes. The nodes must exist. If no tokens exist, an
            empty list is returned.

            Cannot be used to get the partial token.
        """
        assert ttype != TOKEN_PARTIAL
        return self.nodes[trange.start].get_tokens(trange.end, ttype) or []

    def get_in_word_penalty(self, trange: TokenRange) -> float:
        """ Get the sum of penalties for all token transitions
            within the given range.
    def get_partials_list(self, trange: TokenRange) -> List[Token]:
        """ Create a list of partial tokens between the given nodes.
            The list is composed of the first token of type PARTIAL
            going to the subsequent node. Such PARTIAL tokens are
            assumed to exist.
        """
        return sum(n.word_continuation_penalty
                   for n in self.nodes[trange.start + 1:trange.end])
        return [next(iter(self.get_tokens(TokenRange(i, i+1), TOKEN_PARTIAL)))
                for i in range(trange.start, trange.end)]

    def iter_partials(self, trange: TokenRange) -> Iterator[Token]:
        """ Iterate over the partial tokens between the given nodes.
            Missing partials are ignored.
        """
        return (n.partial for n in self.nodes[trange.start:trange.end] if n.partial is not None)

    def iter_tokens_by_edge(self) -> Iterator[Tuple[int, int, Dict[TokenType, List[Token]]]]:
        """ Iterator over all tokens except partial ones grouped by edge.

            Returns the start and end node indexes and a dictionary
            of list of tokens by token type.
    def iter_token_lists(self) -> Iterator[Tuple[int, QueryNode, TokenList]]:
        """ Iterator over all token lists in the query.
        """
        for i, node in enumerate(self.nodes):
            by_end: Dict[int, Dict[TokenType, List[Token]]] = defaultdict(dict)
            for tlist in node.starting:
                by_end[tlist.end][tlist.ttype] = tlist.tokens
            for end, endlist in by_end.items():
                yield i, end, endlist
                yield i, node, tlist

    def find_lookup_word_by_id(self, token: int) -> str:
        """ Find the first token with the given token ID and return
@@ -384,8 +308,6 @@ class QueryStruct:
            debugging.
        """
        for node in self.nodes:
            if node.partial is not None and node.partial.token == token:
                return f"[P]{node.partial.lookup_word}"
            for tlist in node.starting:
                for t in tlist.tokens:
                    if t.token == token:
@@ -400,29 +322,33 @@ class QueryStruct:
        """
        return ''.join(''.join((n.term_lookup, n.btype)) for n in self.nodes)

    def extract_words(self, start: int = 0,
    def extract_words(self, base_penalty: float = 0.0,
                      start: int = 0,
                      endpos: Optional[int] = None) -> Dict[str, List[TokenRange]]:
        """ Add all combinations of words that can be formed from the terms
            between the given start and endnode. The terms are joined with
            spaces for each break. Words can never go across a BREAK_PHRASE.

            The function returns a dictionary of possible words with their
            position within the query.
            position within the query and a penalty. The penalty is computed
            from the base_penalty plus the penalty for each node the word
            crosses.
        """
        if endpos is None:
            endpos = len(self.nodes)

        words: Dict[str, List[TokenRange]] = defaultdict(list)

        for first, first_node in enumerate(self.nodes[start + 1:endpos], start):
            word = first_node.term_lookup
            words[word].append(TokenRange(first, first + 1))
            if first_node.btype != BREAK_PHRASE:
                max_last = min(first + 20, endpos)
                for last, last_node in enumerate(self.nodes[first + 2:max_last], first + 2):
                    word = ' '.join((word, last_node.term_lookup))
                    words[word].append(TokenRange(first, last))
                    if last_node.btype == BREAK_PHRASE:
        for first in range(start, endpos - 1):
            word = self.nodes[first + 1].term_lookup
            penalty = base_penalty
            words[word].append(TokenRange(first, first + 1, penalty=penalty))
            if self.nodes[first + 1].btype != BREAK_PHRASE:
                for last in range(first + 2, min(first + 20, endpos)):
                    word = ' '.join((word, self.nodes[last].term_lookup))
                    penalty += self.nodes[last - 1].penalty
                    words[word].append(TokenRange(first, last, penalty=penalty))
                    if self.nodes[last].btype == BREAK_PHRASE:
                        break

        return words

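The `extract_words` variant above accumulates a break penalty for every node a multi-term word crosses, on top of a caller-supplied base penalty. A rough standalone model of that accumulation (all names here are invented for illustration; phrase-break handling and `TokenRange` objects are simplified to plain tuples):

```python
def extract_words(terms, penalties, base_penalty=0.0, max_terms=20):
    """Model: terms[i] is the term after node i, penalties[i] the break
    penalty at node i. Returns word -> list of (start, end, penalty)."""
    words = {}
    for first in range(len(terms)):
        word = terms[first]
        penalty = base_penalty
        words.setdefault(word, []).append((first, first + 1, penalty))
        for last in range(first + 1, min(first + max_terms, len(terms))):
            # joining a further term crosses the break after the previous one
            word = ' '.join((word, terms[last]))
            penalty += penalties[last - 1]
            words.setdefault(word, []).append((first, last + 1, penalty))
    return words
```

Single terms carry only the base penalty; each additional joined term adds the penalty of the break it spans.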
@@ -23,6 +23,16 @@ class TypedRange:
    trange: qmod.TokenRange


PENALTY_TOKENCHANGE = {
    qmod.BREAK_START: 0.0,
    qmod.BREAK_END: 0.0,
    qmod.BREAK_PHRASE: 0.0,
    qmod.BREAK_SOFT_PHRASE: 0.0,
    qmod.BREAK_WORD: 0.1,
    qmod.BREAK_PART: 0.2,
    qmod.BREAK_TOKEN: 0.4
}

TypedRangeSeq = List[TypedRange]


@@ -182,7 +192,7 @@ class _TokenSequence:
        return None

    def advance(self, ttype: qmod.TokenType, end_pos: int,
                force_break: bool, break_penalty: float) -> Optional['_TokenSequence']:
                btype: qmod.BreakType) -> Optional['_TokenSequence']:
        """ Return a new token sequence state with the given token type
            extended.
        """
@@ -195,7 +205,7 @@ class _TokenSequence:
            new_penalty = 0.0
        else:
            last = self.seq[-1]
            if not force_break and last.ttype == ttype:
            if btype != qmod.BREAK_PHRASE and last.ttype == ttype:
                # extend the existing range
                newseq = self.seq[:-1] + [TypedRange(ttype, last.trange.replace_end(end_pos))]
                new_penalty = 0.0
@@ -203,7 +213,7 @@ class _TokenSequence:
                # start a new range
                newseq = list(self.seq) + [TypedRange(ttype,
                                                      qmod.TokenRange(last.trange.end, end_pos))]
                new_penalty = break_penalty
                new_penalty = PENALTY_TOKENCHANGE[btype]

        return _TokenSequence(newseq, newdir, self.penalty + new_penalty)

@@ -276,12 +286,8 @@ class _TokenSequence:
            log().var_dump('skip forward', (base.postcode, first))
            return

        penalty = self.penalty
        if not base.country and self.direction == 1 and query.dir_penalty > 0:
            penalty += query.dir_penalty

        log().comment('first word = name')
        yield dataclasses.replace(base, penalty=penalty,
        yield dataclasses.replace(base, penalty=self.penalty,
                                  name=first, address=base.address[1:])

        # To paraphrase:
@@ -294,20 +300,19 @@ class _TokenSequence:
           or (query.nodes[first.start].ptype != qmod.PHRASE_ANY):
            return

        penalty = self.penalty

        # Penalty for:
        # * <name>, <street>, <housenumber> , ...
        # * queries that are comma-separated
        if (base.housenumber and base.housenumber > first) or len(query.source) > 1:
            penalty += 0.25

        if self.direction == 0 and query.dir_penalty > 0:
            penalty += query.dir_penalty

        for i in range(first.start + 1, first.end):
            name, addr = first.split(i)
            log().comment(f'split first word = name ({i - first.start})')
            yield dataclasses.replace(base, name=name, address=[addr] + base.address[1:],
                                      penalty=penalty + query.nodes[i].word_break_penalty)
                                      penalty=penalty + PENALTY_TOKENCHANGE[query.nodes[i].btype])

    def _get_assignments_address_backward(self, base: TokenAssignment,
                                          query: qmod.QueryStruct) -> Iterator[TokenAssignment]:
@@ -321,13 +326,9 @@ class _TokenSequence:
            log().var_dump('skip backward', (base.postcode, last))
            return

        penalty = self.penalty
        if not base.country and self.direction == -1 and query.dir_penalty < 0:
            penalty -= query.dir_penalty

        if self.direction == -1 or len(base.address) > 1 or base.postcode:
            log().comment('last word = name')
            yield dataclasses.replace(base, penalty=penalty,
            yield dataclasses.replace(base, penalty=self.penalty,
                                      name=last, address=base.address[:-1])

        # To paraphrase:
@@ -340,19 +341,17 @@ class _TokenSequence:
           or (query.nodes[last.start].ptype != qmod.PHRASE_ANY):
            return

        penalty = self.penalty
        if base.housenumber and base.housenumber < last:
            penalty += 0.4
        if len(query.source) > 1:
            penalty += 0.25

        if self.direction == 0 and query.dir_penalty < 0:
            penalty -= query.dir_penalty

        for i in range(last.start + 1, last.end):
            addr, name = last.split(i)
            log().comment(f'split last word = name ({i - last.start})')
            yield dataclasses.replace(base, name=name, address=base.address[:-1] + [addr],
                                      penalty=penalty + query.nodes[i].word_break_penalty)
                                      penalty=penalty + PENALTY_TOKENCHANGE[query.nodes[i].btype])

    def get_assignments(self, query: qmod.QueryStruct) -> Iterator[TokenAssignment]:
        """ Yield possible assignments for the current sequence.
@@ -380,11 +379,11 @@ class _TokenSequence:
        if base.postcode and base.postcode.start == 0:
            self.penalty += 0.1

        # Left-to-right reading of the address
        # Right-to-left reading of the address
        if self.direction != -1:
            yield from self._get_assignments_address_forward(base, query)

        # Right-to-left reading of the address
        # Left-to-right reading of the address
        if self.direction != 1:
            yield from self._get_assignments_address_backward(base, query)

@@ -410,25 +409,11 @@ def yield_token_assignments(query: qmod.QueryStruct) -> Iterator[TokenAssignment
        node = query.nodes[state.end_pos]

        for tlist in node.starting:
            yield from _append_state_to_todo(
                query, todo,
                state.advance(tlist.ttype, tlist.end,
                              True, node.word_break_penalty))
||||
|
||||
if node.partial is not None:
|
||||
yield from _append_state_to_todo(
|
||||
query, todo,
|
||||
state.advance(qmod.TOKEN_PARTIAL, state.end_pos + 1,
|
||||
node.btype == qmod.BREAK_PHRASE,
|
||||
node.word_break_penalty))
|
||||
|
||||
|
||||
def _append_state_to_todo(query: qmod.QueryStruct, todo: List[_TokenSequence],
|
||||
newstate: Optional[_TokenSequence]) -> Iterator[TokenAssignment]:
|
||||
if newstate is not None:
|
||||
if newstate.end_pos == query.num_token_slots():
|
||||
if newstate.recheck_sequence():
|
||||
log().var_dump('Assignment', newstate)
|
||||
yield from newstate.get_assignments(query)
|
||||
elif not newstate.is_final():
|
||||
todo.append(newstate)
|
||||
newstate = state.advance(tlist.ttype, tlist.end, node.btype)
|
||||
if newstate is not None:
|
||||
if newstate.end_pos == query.num_token_slots():
|
||||
if newstate.recheck_sequence():
|
||||
log().var_dump('Assignment', newstate)
|
||||
yield from newstate.get_assignments(query)
|
||||
elif not newstate.is_final():
|
||||
todo.append(newstate)
|
||||
|
||||
@@ -143,7 +143,7 @@ def get_application(project_dir: Path,

    log_file = config.LOG_FILE
    if log_file:
        middleware.append(Middleware(FileLoggingMiddleware, file_name=log_file))  # type: ignore
        middleware.append(Middleware(FileLoggingMiddleware, file_name=log_file))

    exceptions: Dict[Any, Callable[[Request, Exception], Awaitable[Response]]] = {
        TimeoutError: timeout_error,
@@ -122,18 +122,15 @@ class IsAddressPoint(sa.sql.functions.GenericFunction[Any]):

    def __init__(self, table: sa.Table) -> None:
        super().__init__(table.c.rank_address,
                         table.c.housenumber, table.c.name, table.c.address)
                         table.c.housenumber, table.c.name)


@compiles(IsAddressPoint)
def default_is_address_point(element: IsAddressPoint,
                             compiler: 'sa.Compiled', **kw: Any) -> str:
    rank, hnr, name, address = list(element.clauses)
    return "(%s = 30 AND (%s IS NULL OR NOT %s ? '_inherited')" \
           " AND (%s IS NOT NULL OR %s ? 'addr:housename'))" % (
    rank, hnr, name = list(element.clauses)
    return "(%s = 30 AND (%s IS NOT NULL OR %s ? 'addr:housename'))" % (
        compiler.process(rank, **kw),
        compiler.process(address, **kw),
        compiler.process(address, **kw),
        compiler.process(hnr, **kw),
        compiler.process(name, **kw))

@@ -141,11 +138,9 @@ def default_is_address_point(element: IsAddressPoint,
@compiles(IsAddressPoint, 'sqlite')
def sqlite_is_address_point(element: IsAddressPoint,
                            compiler: 'sa.Compiled', **kw: Any) -> str:
    rank, hnr, name, address = list(element.clauses)
    return "(%s = 30 AND json_extract(%s, '$._inherited') IS NULL" \
           " AND coalesce(%s, json_extract(%s, '$.addr:housename')) IS NOT NULL)" % (
    rank, hnr, name = list(element.clauses)
    return "(%s = 30 AND coalesce(%s, json_extract(%s, '$.addr:housename')) IS NOT NULL)" % (
        compiler.process(rank, **kw),
        compiler.process(address, **kw),
        compiler.process(hnr, **kw),
        compiler.process(name, **kw))
@@ -84,9 +84,8 @@ def format_base_json(results: Union[ReverseResults, SearchResults],

        _write_osm_id(out, result.osm_object)

        # lat and lon must be string values
        out.keyval('lat', f"{result.centroid.lat:0.7f}")\
           .keyval('lon', f"{result.centroid.lon:0.7f}")\
        out.keyval('lat', f"{result.centroid.lat}")\
           .keyval('lon', f"{result.centroid.lon}")\
           .keyval(class_label, result.category[0])\
           .keyval('type', result.category[1])\
           .keyval('place_rank', result.rank_search)\

@@ -113,7 +112,6 @@ def format_base_json(results: Union[ReverseResults, SearchResults],
        if options.get('namedetails', False):
            out.keyval('namedetails', result.names)

        # must be string values
        bbox = cl.bbox_from_result(result)
        out.key('boundingbox').start_array()\
           .value(f"{bbox.minlat:0.7f}").next()\
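The JSON hunk above switches lat/lon serialisation from plain f-string interpolation to fixed 7-decimal formatting, matching the boundingbox values below it. A minimal stdlib sketch of the difference (the helper name is hypothetical, not part of Nominatim):

```python
def coord_str(value: float) -> str:
    # lat and lon must be serialised as strings; fixed 7-decimal
    # formatting keeps the output width stable regardless of float repr.
    return f"{value:0.7f}"
```

Plain interpolation would emit `'47.13803'` for `47.13803`, while the fixed format pads it to seven decimals.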
@@ -90,7 +90,7 @@ def format_base_xml(results: Union[ReverseResults, SearchResults],
        result will be output, otherwise a list.
    """
    root = ET.Element(xml_root_tag)
    root.set('timestamp', dt.datetime.now(dt.timezone.utc).strftime('%a, %d %b %Y %H:%M:%S +00:00'))
    root.set('timestamp', dt.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S +00:00'))
    root.set('attribution', cl.OSM_ATTRIBUTION)
    for k, v in xml_extra_info.items():
        root.set(k, v)
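The timestamp hunk replaces the deprecated naive `datetime.utcnow()` with the timezone-aware `datetime.now(dt.timezone.utc)`; both render the same string here because the format hard-codes the `+00:00` offset. A small stdlib sketch of the format (the `now` parameter is added only for testability):

```python
import datetime as dt
from typing import Optional

def xml_timestamp(now: Optional[dt.datetime] = None) -> str:
    # Timezone-aware UTC timestamp in the format used for the XML root.
    if now is None:
        now = dt.datetime.now(dt.timezone.utc)
    return now.strftime('%a, %d %b %Y %H:%M:%S +00:00')
```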
@@ -374,17 +374,14 @@ async def deletable_endpoint(api: NominatimAPIAsync, params: ASGIAdaptor) -> Any
    """
    fmt = parse_format(params, RawDataList, 'json')

    results = RawDataList()
    async with api.begin() as conn:
        for osm_type in ('N', 'W', 'R'):
            sql = sa.text(""" SELECT p.place_id, country_code,
                                     name->'name' as name, i.*
                              FROM placex p, import_polygon_delete i
                              WHERE i.osm_type = :osm_type
                                    AND p.osm_id = i.osm_id AND p.osm_type = :osm_type
                                    AND p.class = i.class AND p.type = i.type
                          """)
            results.extend(r._asdict() for r in await conn.execute(sql, {'osm_type': osm_type}))
        sql = sa.text(""" SELECT p.place_id, country_code,
                                 name->'name' as name, i.*
                          FROM placex p, import_polygon_delete i
                          WHERE p.osm_id = i.osm_id AND p.osm_type = i.osm_type
                                AND p.class = i.class AND p.type = i.type
                      """)
        results = RawDataList(r._asdict() for r in await conn.execute(sql))

    return build_response(params, params.formatting().format_result(results, fmt, {}))
@@ -136,7 +136,6 @@ class NominatimArgs:
    import_from_wiki: bool
    import_from_csv: Optional[str]
    no_replace: bool
    min: int

    # Arguments to all query functions
    format: str

@@ -58,8 +58,6 @@ class ImportSpecialPhrases:
                           help='Import special phrases from a CSV file')
        group.add_argument('--no-replace', action='store_true',
                           help='Keep the old phrases and only add the new ones')
        group.add_argument('--min', type=int, default=0,
                           help='Restrict special phrases by minimum occurance')

    def run(self, args: NominatimArgs) -> int:

@@ -84,9 +82,7 @@ class ImportSpecialPhrases:

        tokenizer = tokenizer_factory.get_tokenizer_for_db(args.config)
        should_replace = not args.no_replace
        min = args.min

        with connect(args.config.get_libpq_dsn()) as db_connection:
            SPImporter(
                args.config, db_connection, loader
            ).import_phrases(tokenizer, should_replace, min)
            ).import_phrases(tokenizer, should_replace)
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Abstract class definitions for tokenizers. These base classes are here

@@ -10,6 +10,7 @@ mainly for documentation purposes.
"""
from abc import ABC, abstractmethod
from typing import List, Tuple, Dict, Any, Optional, Iterable
from pathlib import Path

from ..typing import Protocol
from ..config import Configuration

@@ -37,7 +38,7 @@ class AbstractAnalyzer(ABC):
    """

    @abstractmethod
    def get_word_token_info(self, words: List[str]) -> List[Tuple[str, str, Optional[int]]]:
    def get_word_token_info(self, words: List[str]) -> List[Tuple[str, str, int]]:
        """ Return token information for the given list of words.

            The function is used for testing and debugging only

@@ -231,6 +232,6 @@ class TokenizerModule(Protocol):
        own tokenizer.
    """

    def create(self, dsn: str) -> AbstractTokenizer:
    def create(self, dsn: str, data_dir: Path) -> AbstractTokenizer:
        """ Factory for new tokenizers.
        """
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for creating a tokenizer or initialising the right one for an

@@ -52,10 +52,19 @@ def create_tokenizer(config: Configuration, init_db: bool = True,
    if module_name is None:
        module_name = config.TOKENIZER

    # Create the directory for the tokenizer data
    assert config.project_dir is not None
    basedir = config.project_dir / 'tokenizer'
    if not basedir.exists():
        basedir.mkdir()
    elif not basedir.is_dir():
        LOG.fatal("Tokenizer directory '%s' cannot be created.", basedir)
        raise UsageError("Tokenizer setup failed.")

    # Import and initialize the tokenizer.
    tokenizer_module = _import_tokenizer(module_name)

    tokenizer = tokenizer_module.create(config.get_libpq_dsn())
    tokenizer = tokenizer_module.create(config.get_libpq_dsn(), basedir)
    tokenizer.init_new_db(config, init_db=init_db)

    with connect(config.get_libpq_dsn()) as conn:

@@ -70,6 +79,12 @@ def get_tokenizer_for_db(config: Configuration) -> AbstractTokenizer:
        The function looks up the appropriate tokenizer in the database
        and initialises it.
    """
    assert config.project_dir is not None
    basedir = config.project_dir / 'tokenizer'
    if not basedir.is_dir():
        # Directory will be repopulated by tokenizer below.
        basedir.mkdir()

    with connect(config.get_libpq_dsn()) as conn:
        name = properties.get_property(conn, 'tokenizer')

@@ -79,7 +94,7 @@ def get_tokenizer_for_db(config: Configuration) -> AbstractTokenizer:

    tokenizer_module = _import_tokenizer(name)

    tokenizer = tokenizer_module.create(config.get_libpq_dsn())
    tokenizer = tokenizer_module.create(config.get_libpq_dsn(), basedir)
    tokenizer.init_from_project(config)

    return tokenizer
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Tokenizer implementing normalisation as used before Nominatim 4 but using

@@ -12,6 +12,7 @@ from typing import Optional, Sequence, List, Tuple, Mapping, Any, cast, \
    Dict, Set, Iterable
import itertools
import logging
from pathlib import Path

from psycopg.types.json import Jsonb
from psycopg import sql as pysql

@@ -37,10 +38,10 @@ WORD_TYPES = (('country_names', 'C'),
              ('housenumbers', 'H'))


def create(dsn: str) -> 'ICUTokenizer':
def create(dsn: str, data_dir: Path) -> 'ICUTokenizer':
    """ Create a new instance of the tokenizer provided by this module.
    """
    return ICUTokenizer(dsn)
    return ICUTokenizer(dsn, data_dir)


class ICUTokenizer(AbstractTokenizer):

@@ -49,8 +50,9 @@ class ICUTokenizer(AbstractTokenizer):
        normalization routines in Nominatim 3.
    """

    def __init__(self, dsn: str) -> None:
    def __init__(self, dsn: str, data_dir: Path) -> None:
        self.dsn = dsn
        self.data_dir = data_dir
        self.loader: Optional[ICURuleLoader] = None

    def init_new_db(self, config: Configuration, init_db: bool = True) -> None:

@@ -338,7 +340,7 @@ class ICUNameAnalyzer(AbstractAnalyzer):
        """
        return cast(str, self.token_analysis.normalizer.transliterate(name)).strip()

    def get_word_token_info(self, words: Sequence[str]) -> List[Tuple[str, str, Optional[int]]]:
    def get_word_token_info(self, words: Sequence[str]) -> List[Tuple[str, str, int]]:
        """ Return token information for the given list of words.
            If a word starts with # it is assumed to be a full name
            otherwise is a partial name.

@@ -362,11 +364,11 @@ class ICUNameAnalyzer(AbstractAnalyzer):
            cur.execute("""SELECT word_token, word_id
                           FROM word WHERE word_token = ANY(%s) and type = 'W'
                        """, (list(full_tokens.values()),))
            full_ids = {r[0]: cast(int, r[1]) for r in cur}
            full_ids = {r[0]: r[1] for r in cur}
            cur.execute("""SELECT word_token, word_id
                           FROM word WHERE word_token = ANY(%s) and type = 'w'""",
                        (list(partial_tokens.values()),))
            part_ids = {r[0]: cast(int, r[1]) for r in cur}
            part_ids = {r[0]: r[1] for r in cur}

        return [(k, v, full_ids.get(v, None)) for k, v in full_tokens.items()] \
            + [(k, v, part_ids.get(v, None)) for k, v in partial_tokens.items()]
@@ -127,7 +127,7 @@ def import_osm_data(osm_files: Union[Path, Sequence[Path]],
            fsize += os.stat(str(fname)).st_size
    else:
        fsize = os.stat(str(osm_files)).st_size
    options['osm2pgsql_cache'] = int(min((mem.available + getattr(mem, 'cached', 0)) * 0.75,
    options['osm2pgsql_cache'] = int(min((mem.available + mem.cached) * 0.75,
                                         fsize * 2) / 1024 / 1024) + 1

    run_osm2pgsql(options)
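The cache hunk wraps `mem.cached` in `getattr(..., 'cached', 0)` because the memory-stats object does not expose a `cached` field on every platform. The sizing rule itself can be sketched with plain integers (the function name is illustrative, not part of Nominatim):

```python
def osm2pgsql_cache_mb(available: int, cached: int, file_size: int) -> int:
    # Use at most 75% of available-plus-cached memory, capped at twice
    # the OSM input file size; result is in MB, +1 so it is never zero.
    return int(min((available + cached) * 0.75, file_size * 2) / 1024 / 1024) + 1
```

For example, with 8 GiB available and a 1 GiB input file the file-size cap wins and the cache is limited to about 2 GiB.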
@@ -37,17 +37,21 @@ def run_osm2pgsql(options: Mapping[str, Any]) -> None:
           '--style', str(options['osm2pgsql_style'])
           ]

    env['LUA_PATH'] = ';'.join((str(options['osm2pgsql_style_path'] / '?.lua'),
                                os.environ.get('LUA_PATH', ';')))
    env['THEMEPARK_PATH'] = str(options['osm2pgsql_style_path'] / 'themes')
    if 'THEMEPARK_PATH' in os.environ:
        env['THEMEPARK_PATH'] += ':' + os.environ['THEMEPARK_PATH']
    cmd.extend(('--output', 'flex'))
    if str(options['osm2pgsql_style']).endswith('.lua'):
        env['LUA_PATH'] = ';'.join((str(options['osm2pgsql_style_path'] / '?.lua'),
                                    os.environ.get('LUA_PATH', ';')))
        env['THEMEPARK_PATH'] = str(options['osm2pgsql_style_path'] / 'themes')
        if 'THEMEPARK_PATH' in os.environ:
            env['THEMEPARK_PATH'] += ':' + os.environ['THEMEPARK_PATH']
        cmd.extend(('--output', 'flex'))

    for flavour in ('data', 'index'):
        if options['tablespaces'][f"main_{flavour}"]:
            env[f"NOMINATIM_TABLESPACE_PLACE_{flavour.upper()}"] = \
                options['tablespaces'][f"main_{flavour}"]
        for flavour in ('data', 'index'):
            if options['tablespaces'][f"main_{flavour}"]:
                env[f"NOMINATIM_TABLESPACE_PLACE_{flavour.upper()}"] = \
                    options['tablespaces'][f"main_{flavour}"]
    else:
        cmd.extend(('--output', 'gazetteer', '--hstore', '--latlon'))
        cmd.extend(_mk_tablespace_options('main', options))

    if options['flatnode_file']:
        cmd.extend(('--flat-nodes', options['flatnode_file']))
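The flex-output branch above prepends the style directory to Lua's module search path. A stdlib sketch of just the `LUA_PATH` construction (helper name is hypothetical; the default `';'` relies on Lua's convention that a trailing `;;` expands to the built-in search path):

```python
import os
from pathlib import Path

def lua_search_path(style_path: Path) -> str:
    # Prepend '<style_path>/?.lua' to any LUA_PATH already set in the
    # environment; with no LUA_PATH set, the ';' default leaves a
    # trailing ';;', which tells Lua to append its default path.
    return ';'.join((str(style_path / '?.lua'),
                     os.environ.get('LUA_PATH', ';')))
```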
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for importing, updating and otherwise maintaining the table

@@ -64,15 +64,11 @@ class _PostcodeCollector:
            if normalized:
                self.collected[normalized] += (x, y)

    def commit(self, conn: Connection, analyzer: AbstractAnalyzer,
               project_dir: Optional[Path]) -> None:
        """ Update postcodes for the country from the postcodes selected so far.

            When 'project_dir' is set, then any postcode files found in this
            directory are taken into account as well.
    def commit(self, conn: Connection, analyzer: AbstractAnalyzer, project_dir: Path) -> None:
        """ Update postcodes for the country from the postcodes selected so far
            as well as any externally supplied postcodes.
        """
        if project_dir is not None:
            self._update_from_external(analyzer, project_dir)
        self._update_from_external(analyzer, project_dir)
        to_add, to_delete, to_update = self._compute_changes(conn)

        LOG.info("Processing country '%s' (%s added, %s deleted, %s updated).",

@@ -174,7 +170,7 @@ class _PostcodeCollector:
        return None


def update_postcodes(dsn: str, project_dir: Optional[Path], tokenizer: AbstractTokenizer) -> None:
def update_postcodes(dsn: str, project_dir: Path, tokenizer: AbstractTokenizer) -> None:
    """ Update the table of artificial postcodes.

        Computes artificial postcode centroids from the placex table,
@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for bringing auxiliary data in the database up-to-date.

@@ -212,11 +212,6 @@ def recompute_importance(conn: Connection) -> None:
                  WHERE s.place_id = d.linked_place_id and d.wikipedia is not null
                        and (s.wikipedia is null or s.importance < d.importance);
            """)
        cur.execute("""
            UPDATE search_name s SET importance = p.importance
              FROM placex p
             WHERE s.place_id = p.place_id AND s.importance != p.importance
            """)

        cur.execute('ALTER TABLE placex ENABLE TRIGGER ALL')
    conn.commit()
@@ -16,6 +16,7 @@
from typing import Iterable, Tuple, Mapping, Sequence, Optional, Set
import logging
import re

from psycopg.sql import Identifier, SQL

from ...typing import Protocol

@@ -64,32 +65,7 @@ class SPImporter():
        # special phrases class/type on the wiki.
        self.table_phrases_to_delete: Set[str] = set()

    def get_classtype_pairs(self, min: int = 0) -> Set[Tuple[str, str]]:
        """
        Returns list of allowed special phrases from the database,
        restricting to a list of combinations of classes and types
        which occur equal to or more than a specified amount of times.

        Default value for this is 0, which allows everything in database.
        """
        db_combinations = set()

        query = f"""
            SELECT class AS CLS, type AS typ
            FROM placex
            GROUP BY class, type
            HAVING COUNT(*) >= {min}
        """

        with self.db_connection.cursor() as db_cursor:
            db_cursor.execute(SQL(query))
            for row in db_cursor:
                db_combinations.add((row[0], row[1]))

        return db_combinations

    def import_phrases(self, tokenizer: AbstractTokenizer, should_replace: bool,
                       min: int = 0) -> None:
    def import_phrases(self, tokenizer: AbstractTokenizer, should_replace: bool) -> None:
        """
        Iterate through all SpecialPhrases extracted from the
        loader and import them into the database.

@@ -109,10 +85,9 @@ class SPImporter():
            if result:
                class_type_pairs.add(result)

        self._create_classtype_table_and_indexes(class_type_pairs, min)
        self._create_classtype_table_and_indexes(class_type_pairs)
        if should_replace:
            self._remove_non_existent_tables_from_db()

        self.db_connection.commit()

        with tokenizer.name_analyzer() as analyzer:

@@ -188,8 +163,7 @@ class SPImporter():
        return (phrase.p_class, phrase.p_type)

    def _create_classtype_table_and_indexes(self,
                                            class_type_pairs: Iterable[Tuple[str, str]],
                                            min: int = 0) -> None:
                                            class_type_pairs: Iterable[Tuple[str, str]]) -> None:
        """
        Create table place_classtype for each given pair.
        Also create indexes on place_id and centroid.

@@ -203,19 +177,10 @@ class SPImporter():
            with self.db_connection.cursor() as db_cursor:
                db_cursor.execute("CREATE INDEX idx_placex_classtype ON placex (class, type)")

        if min:
            allowed_special_phrases = self.get_classtype_pairs(min)

        for pair in class_type_pairs:
            phrase_class = pair[0]
            phrase_type = pair[1]

            # Will only filter if min is not 0
            if min and (phrase_class, phrase_type) not in allowed_special_phrases:
                LOG.warning("Skipping phrase %s=%s: not in allowed special phrases",
                            phrase_class, phrase_type)
                continue

            table_name = _classtype_table(phrase_class, phrase_type)

            if table_name in self.table_phrases_to_delete:
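The removed `get_classtype_pairs` filters class/type combinations with a `HAVING COUNT(*) >= {min}` clause against placex. The effect can be demonstrated stand-alone with sqlite3 from the standard library (table contents are made up; this sketch uses a bound parameter rather than the original's f-string interpolation, which avoids SQL injection for untrusted input):

```python
import sqlite3

def classtype_pairs(conn: sqlite3.Connection, min_count: int = 0) -> set:
    # class/type combinations that occur at least min_count times;
    # min_count=0 allows every combination in the table.
    rows = conn.execute(
        '''SELECT "class", "type" FROM placex
           GROUP BY "class", "type"
           HAVING COUNT(*) >= ?''', (min_count,))
    return set(rows)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE placex ("class" TEXT, "type" TEXT)')
conn.executemany('INSERT INTO placex VALUES (?, ?)',
                 [('amenity', 'cafe'), ('amenity', 'cafe'),
                  ('tourism', 'hotel')])
```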
test/Makefile (new file, 10 lines)
@@ -0,0 +1,10 @@
all: bdd python

bdd:
	cd bdd && behave -DREMOVE_TEMPLATE=1

python:
	pytest python


.PHONY: bdd python
test/bdd/.behaverc (new file, 3 lines)
@@ -0,0 +1,3 @@
[behave]
show_skipped=False
default_tags=~@Fail
test/bdd/api/details/language.feature (new file, 63 lines)
@@ -0,0 +1,63 @@
@SQLITE
@APIDB
Feature: Localization of search results

    Scenario: default language
        When sending details query for R1155955
        Then results contain
          | ID | localname |
          | 0  | Liechtenstein |

    Scenario: accept-language first
        When sending details query for R1155955
          | accept-language |
          | zh,de |
        Then results contain
          | ID | localname |
          | 0  | 列支敦士登 |

    Scenario: accept-language missing
        When sending details query for R1155955
          | accept-language |
          | xx,fr,en,de |
        Then results contain
          | ID | localname |
          | 0  | Liechtenstein |

    Scenario: http accept language header first
        Given the HTTP header
          | accept-language |
          | fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending details query for R1155955
        Then results contain
          | ID | localname |
          | 0  | Liktinstein |

    Scenario: http accept language header and accept-language
        Given the HTTP header
          | accept-language |
          | fr-ca,fr;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending details query for R1155955
          | accept-language |
          | fo,en |
        Then results contain
          | ID | localname |
          | 0  | Liktinstein |

    Scenario: http accept language header fallback
        Given the HTTP header
          | accept-language |
          | fo-ca,en-ca;q=0.5 |
        When sending details query for R1155955
        Then results contain
          | ID | localname |
          | 0  | Liktinstein |

    Scenario: http accept language header fallback (upper case)
        Given the HTTP header
          | accept-language |
          | fo-FR;q=0.8,en-ca;q=0.5 |
        When sending details query for R1155955
        Then results contain
          | ID | localname |
          | 0  | Liktinstein |
test/bdd/api/details/params.feature (new file, 96 lines)
@@ -0,0 +1,96 @@
@APIDB
Feature: Object details
    Testing different parameter options for details API.

    @SQLITE
    Scenario: JSON Details
        When sending json details query for W297699560
        Then the result is valid json
        And result has attributes geometry
        And result has not attributes keywords,address,linked_places,parentof
        And results contain in field geometry
          | type |
          | Point |

    @SQLITE
    Scenario: JSON Details with pretty printing
        When sending json details query for W297699560
          | pretty |
          | 1 |
        Then the result is valid json
        And result has attributes geometry
        And result has not attributes keywords,address,linked_places,parentof

    @SQLITE
    Scenario: JSON Details with addressdetails
        When sending json details query for W297699560
          | addressdetails |
          | 1 |
        Then the result is valid json
        And result has attributes address

    @SQLITE
    Scenario: JSON Details with linkedplaces
        When sending json details query for R123924
          | linkedplaces |
          | 1 |
        Then the result is valid json
        And result has attributes linked_places

    @SQLITE
    Scenario: JSON Details with hierarchy
        When sending json details query for W297699560
          | hierarchy |
          | 1 |
        Then the result is valid json
        And result has attributes hierarchy

    @SQLITE
    Scenario: JSON Details with grouped hierarchy
        When sending json details query for W297699560
          | hierarchy | group_hierarchy |
          | 1         | 1 |
        Then the result is valid json
        And result has attributes hierarchy

    Scenario Outline: JSON Details with keywords
        When sending json details query for <osmid>
          | keywords |
          | 1 |
        Then the result is valid json
        And result has attributes keywords

    Examples:
          | osmid |
          | W297699560 |
          | W243055645 |
          | W243055716 |
          | W43327921 |

    # ticket #1343
    Scenario: Details of a country with keywords
        When sending details query for R1155955
          | keywords |
          | 1 |
        Then the result is valid json
        And result has attributes keywords

    @SQLITE
    Scenario Outline: JSON details with full geometry
        When sending json details query for <osmid>
          | polygon_geojson |
          | 1 |
        Then the result is valid json
        And result has attributes geometry
        And results contain in field geometry
          | type |
          | <geometry> |

    Examples:
          | osmid      | geometry |
          | W297699560 | LineString |
          | W243055645 | Polygon |
          | W243055716 | Polygon |
          | W43327921  | LineString |
test/bdd/api/details/simple.feature (new file, 81 lines)
@@ -0,0 +1,81 @@
@SQLITE
@APIDB
Feature: Object details
    Check details page for correctness

    Scenario Outline: Details via OSM id
        When sending details query for <type><id>
        Then the result is valid json
        And results contain
          | osm_type | osm_id |
          | <type>   | <id> |

    Examples:
          | type | id |
          | N    | 5484325405 |
          | W    | 43327921 |
          | R    | 123924 |


    Scenario Outline: Details for different class types for the same OSM id
        When sending details query for N300209696:<class>
        Then the result is valid json
        And results contain
          | osm_type | osm_id    | category |
          | N        | 300209696 | <class> |

    Examples:
          | class |
          | tourism |
          | mountain_pass |


    Scenario Outline: Details via unknown OSM id
        When sending details query for <object>
        Then a HTTP 404 is returned

    Examples:
          | object |
          | 1 |
          | R1 |
          | N300209696:highway |


    Scenario: Details for interpolation way return the interpolation
        When sending details query for W1
        Then the result is valid json
        And results contain
          | category | type   | osm_type | osm_id | admin_level |
          | place    | houses | W        | 1      | 15 |


    @Fail
    Scenario: Details for interpolation way return the interpolation
        When sending details query for 112871
        Then the result is valid json
        And results contain
          | category | type   | admin_level |
          | place    | houses | 15 |
        And result has not attributes osm_type,osm_id


    @Fail
    Scenario: Details for interpolation way return the interpolation
        When sending details query for 112820
        Then the result is valid json
        And results contain
          | category | type     | admin_level |
          | place    | postcode | 15 |
        And result has not attributes osm_type,osm_id


    Scenario Outline: Details debug output returns no errors
        When sending debug details query for <feature>
        Then the result is valid html

    Examples:
          | feature |
          | N5484325405 |
          | W1 |
          | 112820 |
          | 112871 |
test/bdd/api/errors/formats.feature (new file, 14 lines)
@@ -0,0 +1,14 @@
@SQLITE
@APIDB
Feature: Places by osm_type and osm_id Tests
    Simple tests for errors in various response formats.

    Scenario Outline: Force error by providing too many ids
        When sending <format> lookup query for N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34,N35,N36,N37,N38,N39,N40,N41,N42,N43,N44,N45,N46,N47,N48,N49,N50,N51
        Then a <format> user error is returned

    Examples:
          | format |
          | xml |
          | json |
          | geojson |
42
test/bdd/api/lookup/simple.feature
Normal file
42
test/bdd/api/lookup/simple.feature
Normal file
@@ -0,0 +1,42 @@
@SQLITE
@APIDB
Feature: Places by osm_type and osm_id Tests
    Simple tests for response format.

    Scenario Outline: address lookup for existing node, way, relation
        When sending <format> lookup query for N5484325405,W43327921,,R123924,X99,N0
        Then the result is valid <outformat>
        And exactly 3 results are returned

        Examples:
            | format      | outformat   |
            | xml         | xml         |
            | json        | json        |
            | jsonv2      | json        |
            | geojson     | geojson     |
            | geocodejson | geocodejson |

    Scenario: address lookup for non-existing or invalid node, way, relation
        When sending xml lookup query for X99,,N0,nN158845944,ABC,,W9
        Then exactly 0 results are returned

    Scenario Outline: Boundingbox is returned
        When sending <format> lookup query for N5484325405,W43327921
        Then exactly 2 results are returned
        And result 0 has bounding box in 47.135,47.14,9.52,9.525
        And result 1 has bounding box in 47.07,47.08,9.50,9.52

        Examples:
            | format  |
            | json    |
            | jsonv2  |
            | geojson |
            | xml     |


    Scenario: Lookup of a linked place
        When sending geocodejson lookup query for N1932181216
        Then exactly 1 result is returned
        And results contain
            | name  |
            | Vaduz |
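The scenarios above mix well-formed ids (N5484325405, W43327921, R123924) with malformed ones (X99, N0, nN158845944, ABC) that the endpoint silently drops. A client-side pre-filter with the same shape — an uppercase N/W/R prefix followed by a positive decimal id — can be sketched like this (the helper and regex are ours, not Nominatim's own code):

```python
import re

# osm_type prefix (node/way/relation) followed by a positive id
OSM_ID_RE = re.compile(r"^[NWR][1-9][0-9]*$")

def filter_osm_ids(raw):
    """Keep only well-formed osm_type+osm_id strings such as 'N123' or 'W43327921'."""
    return [part for part in raw.split(",") if OSM_ID_RE.match(part)]

print(filter_osm_ids("X99,,N0,nN158845944,ABC,,W9"))  # → ['W9']
```

Applied to the first scenario's query string, the filter keeps exactly the three ids that produce results.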
45
test/bdd/api/reverse/geometry.feature
Normal file
@@ -0,0 +1,45 @@
@SQLITE
@APIDB
Feature: Geometries for reverse geocoding
    Tests for returning geometries with reverse


    Scenario: Polygons are returned fully by default
        When sending v1/reverse at 47.13803,9.52264
            | polygon_text |
            | 1            |
        Then results contain
            | geotext |
            | ^POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226143 47.1379257, ?9.522615 47.137917, ?9.5226225 47.1379098, ?9.5226334 47.1379052, ?9.5226461 47.1379037, ?9.5226588 47.1379056, ?9.5226693 47.1379107, ?9.5226762 47.1379181, ?9.5226762 47.1379268, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |


    Scenario: Polygons can be slightly simplified
        When sending v1/reverse at 47.13803,9.52264
            | polygon_text | polygon_threshold |
            | 1            | 0.00001           |
        Then results contain
            | geotext |
            | ^POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226225 47.1379098, ?9.5226588 47.1379056, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |


    Scenario: Polygons can be much simplified
        When sending v1/reverse at 47.13803,9.52264
            | polygon_text | polygon_threshold |
            | 1            | 0.9               |
        Then results contain
            | geotext |
            | ^POLYGON\(\([0-9. ]+, ?[0-9. ]+, ?[0-9. ]+, ?[0-9. ]+(, ?[0-9. ]+)?\)\) |


    Scenario: For polygons return the centroid as center point
        When sending v1/reverse at 47.13836,9.52304
        Then results contain
            | centroid                |
            | 9.52271080 47.13818045 |


    Scenario: For streets return the closest point as center point
        When sending v1/reverse at 47.13368,9.52942
        Then results contain
            | centroid                 |
            | 9.529431527 47.13368172 |
37
test/bdd/api/reverse/language.feature
Normal file
@@ -0,0 +1,37 @@
@SQLITE
@APIDB
Feature: Localization of reverse search results

    Scenario: default language
        When sending v1/reverse at 47.14,9.55
        Then result addresses contain
            | ID | country       |
            | 0  | Liechtenstein |

    Scenario: accept-language parameter
        When sending v1/reverse at 47.14,9.55
            | accept-language |
            | ja,en           |
        Then result addresses contain
            | ID | country            |
            | 0  | リヒテンシュタイン |

    Scenario: HTTP accept language header
        Given the HTTP header
            | accept-language                     |
            | fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending v1/reverse at 47.14,9.55
        Then result addresses contain
            | ID | country    |
            | 0  | Liktinstein |

    Scenario: accept-language parameter and HTTP header
        Given the HTTP header
            | accept-language                     |
            | fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending v1/reverse at 47.14,9.55
            | accept-language |
            | en              |
        Then result addresses contain
            | ID | country       |
            | 0  | Liechtenstein |
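The header used in the scenarios above carries RFC-style quality weights (`fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3`). A sketch of how such a header is ordered by preference — standard q-value parsing, not a reproduction of Nominatim's own implementation:

```python
def parse_accept_language(header):
    """Return language tags from an Accept-Language header, best first."""
    langs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            langs.append((tag.strip(), float(q)))
        else:
            langs.append((piece, 1.0))  # no q-value means q=1.0
    # sort is stable, so equally weighted tags keep their original order
    return [tag for tag, q in sorted(langs, key=lambda item: -item[1])]

print(parse_accept_language("fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3"))
# → ['fo-ca', 'fo', 'en-ca', 'en']
```

This ordering explains the Faroese localisation ("Liktinstein") winning in the HTTP-header scenario, while the explicit `accept-language=en` parameter overrides the header in the last one.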
@@ -1,20 +1,24 @@
|
||||
@SQLITE
|
||||
@APIDB
|
||||
Feature: Layer parameter in reverse geocoding
|
||||
Testing correct function of layer selection while reverse geocoding
|
||||
|
||||
Scenario: POIs are selected by default
|
||||
When reverse geocoding 47.14077,9.52414
|
||||
Then the result contains
|
||||
When sending v1/reverse at 47.14077,9.52414
|
||||
Then results contain
|
||||
| category | type |
|
||||
| tourism | viewpoint |
|
||||
|
||||
|
||||
Scenario Outline: Same address level POI with different layers
|
||||
When reverse geocoding 47.14077,9.52414
|
||||
When sending v1/reverse at 47.14077,9.52414
|
||||
| layer |
|
||||
| <layer> |
|
||||
Then the result contains
|
||||
Then results contain
|
||||
| category |
|
||||
| <category> |
|
||||
|
||||
|
||||
Examples:
|
||||
| layer | category |
|
||||
| address | highway |
|
||||
@@ -24,11 +28,12 @@ Feature: Layer parameter in reverse geocoding
|
||||
| address,natural | highway |
|
||||
| natural,poi | tourism |
|
||||
|
||||
|
||||
Scenario Outline: POIs are not selected without housenumber for address layer
|
||||
When reverse geocoding 47.13816,9.52168
|
||||
When sending v1/reverse at 47.13816,9.52168
|
||||
| layer |
|
||||
| <layer> |
|
||||
Then the result contains
|
||||
Then results contain
|
||||
| category | type |
|
||||
| <category> | <type> |
|
||||
|
||||
@@ -37,19 +42,21 @@ Feature: Layer parameter in reverse geocoding
|
||||
| address,poi | highway | bus_stop |
|
||||
| address | amenity | parking |
|
||||
|
||||
|
||||
Scenario: Between natural and low-zoom address prefer natural
|
||||
When reverse geocoding 47.13636,9.52094
|
||||
When sending v1/reverse at 47.13636,9.52094
|
||||
| layer | zoom |
|
||||
| natural,address | 15 |
|
||||
Then the result contains
|
||||
Then results contain
|
||||
| category |
|
||||
| waterway |
|
||||
|
||||
|
||||
Scenario Outline: Search for mountain peaks begins at level 12
|
||||
When reverse geocoding 47.08293,9.57109
|
||||
When sending v1/reverse at 47.08293,9.57109
|
||||
| layer | zoom |
|
||||
| natural | <zoom> |
|
||||
Then the result contains
|
||||
Then results contain
|
||||
| category | type |
|
||||
| <category> | <type> |
|
||||
|
||||
@@ -58,11 +65,12 @@ Feature: Layer parameter in reverse geocoding
|
||||
| 12 | natural | peak |
|
||||
| 13 | waterway | river |
|
||||
|
||||
|
||||
Scenario Outline: Reverse search with manmade layers
|
||||
When reverse geocoding 32.46904,-86.44439
|
||||
When sending v1/reverse at 32.46904,-86.44439
|
||||
| layer |
|
||||
| <layer> |
|
||||
Then the result contains
|
||||
Then results contain
|
||||
| category | type |
|
||||
| <category> | <type> |
|
||||
|
||||
117
test/bdd/api/reverse/queries.feature
Normal file
@@ -0,0 +1,117 @@
@SQLITE
@APIDB
Feature: Reverse geocoding
    Testing the reverse function

    Scenario Outline: Simple reverse-geocoding with no results
        When sending v1/reverse at <lat>,<lon>
        Then exactly 0 results are returned

        Examples:
            | lat  | lon    |
            | 0.0  | 0.0    |
            | 91.3 | 0.4    |
            | -700 | 0.4    |
            | 0.2  | 324.44 |
            | 0.2  | -180.4 |


    Scenario: Unknown countries fall back to default country grid
        When sending v1/reverse at 45.174,-103.072
        Then results contain
            | category | type    | display_name  |
            | place    | country | United States |


    @Tiger
    Scenario: TIGER house number
        When sending v1/reverse at 32.4752389363,-86.4810198619
        Then results contain
            | category | type  |
            | place    | house |
        And result addresses contain
            | house_number | road                | postcode | country_code |
            | 707          | Upper Kingston Road | 36067    | us           |

    @Tiger
    Scenario: No TIGER house number for zoom < 18
        When sending v1/reverse at 32.4752389363,-86.4810198619
            | zoom |
            | 17   |
        Then results contain
            | osm_type | category |
            | way      | highway  |
        And result addresses contain
            | road                | postcode | country_code |
            | Upper Kingston Road | 36067    | us           |

    Scenario: Interpolated house number
        When sending v1/reverse at 47.118533,9.57056562
        Then results contain
            | osm_type | category | type  |
            | way      | place    | house |
        And result addresses contain
            | house_number | road      |
            | 1019         | Grosssteg |

    Scenario: Address with non-numerical house number
        When sending v1/reverse at 47.107465,9.52838521614
        Then result addresses contain
            | house_number | road        |
            | 39A/B        | Dorfstrasse |


    Scenario: Address with numerical house number
        When sending v1/reverse at 47.168440329479594,9.511551699184338
        Then result addresses contain
            | house_number | road         |
            | 6            | Schmedgässle |

    Scenario Outline: Zoom levels below 5 result in country
        When sending v1/reverse at 47.16,9.51
            | zoom   |
            | <zoom> |
        Then results contain
            | display_name  |
            | Liechtenstein |

        Examples:
            | zoom |
            | 0    |
            | 1    |
            | 2    |
            | 3    |
            | 4    |

    Scenario: When on a street, the closest interpolation is shown
        When sending v1/reverse at 47.118457166193245,9.570678289621355
            | zoom |
            | 18   |
        Then results contain
            | display_name |
            | 1021, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |

    # github 2214
    Scenario: Interpolations do not override house numbers when they are closer
        When sending v1/reverse at 47.11778,9.57255
            | zoom |
            | 18   |
        Then results contain
            | display_name |
            | 5, Grosssteg, Steg, Triesenberg, Oberland, 9497, Liechtenstein |

    Scenario: Interpolations do not override house numbers when they are closer (2)
        When sending v1/reverse at 47.11834,9.57167
            | zoom |
            | 18   |
        Then results contain
            | display_name |
            | 3, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |

    Scenario: When on a street with zoom 18, the closest housenumber is returned
        When sending v1/reverse at 47.11755503977281,9.572722250405036
            | zoom |
            | 18   |
        Then result addresses contain
            | house_number |
            | 7            |
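The first scenario outline in the feature above feeds out-of-range coordinates (91.3, -700, 324.44, -180.4) to the reverse endpoint and expects zero results. A range check with the same semantics — latitude within ±90, longitude within ±180 — can be sketched as follows (helper name is ours):

```python
def in_range(lat, lon):
    """True when the coordinate pair is a valid WGS84 position."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0

probes = [(0.0, 0.0), (91.3, 0.4), (-700, 0.4), (0.2, 324.44), (0.2, -180.4)]
for lat, lon in probes:
    print(lat, lon, in_range(lat, lon))  # only 0.0,0.0 is inside the valid range
```

Note that 0.0,0.0 passes the range check; it returns no results in the scenario simply because the test extract contains nothing near that point.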
107
test/bdd/api/reverse/v1_geocodejson.feature
Normal file
@@ -0,0 +1,107 @@
@SQLITE
@APIDB
Feature: Geocodejson for Reverse API
    Testing correctness of geocodejson output (API version v1).

    Scenario Outline: Simple OSM result
        When sending v1/reverse at 47.066,9.504 with format geocodejson
            | addressdetails |
            | <has_address>  |
        Then result has attributes place_id, accuracy
        And result has <attributes> country,postcode,county,city,district,street,housenumber,admin
        Then results contain
            | osm_type | osm_id     | osm_key | osm_value | type  |
            | node     | 6522627624 | shop    | bakery    | house |
        And results contain
            | name | label |
            | Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
        And results contain in field geojson
            | type  | coordinates             |
            | Point | [9.5036065, 47.0660892] |
        And results contain in field __geocoding
            | version | licence | attribution |
            | 0.1.0   | ODbL    | ^Data © OpenStreetMap contributors, ODbL 1.0. https?://osm.org/copyright$ |

        Examples:
            | has_address | attributes     |
            | 1           | attributes     |
            | 0           | not attributes |


    Scenario: City housenumber-level address with street
        When sending v1/reverse at 47.1068011,9.52810091 with format geocodejson
        Then results contain
            | housenumber | street    | postcode | city    | country       |
            | 8           | Im Winkel | 9495     | Triesen | Liechtenstein |
        And results contain in field admin
            | level6   | level8  |
            | Oberland | Triesen |


    Scenario: Town street-level address with street
        When sending v1/reverse at 47.066,9.504 with format geocodejson
            | zoom |
            | 16   |
        Then results contain
            | name    | city    | postcode | country       |
            | Gnetsch | Balzers | 9496     | Liechtenstein |


    Scenario: Poi street-level address with footway
        When sending v1/reverse at 47.06515,9.50083 with format geocodejson
        Then results contain
            | street  | city    | postcode | country       |
            | Burgweg | Balzers | 9496     | Liechtenstein |


    Scenario: City address with suburb
        When sending v1/reverse at 47.146861,9.511771 with format geocodejson
        Then results contain
            | housenumber | street   | district | city  | postcode | country       |
            | 5           | Lochgass | Ebenholz | Vaduz | 9490     | Liechtenstein |


    @Tiger
    Scenario: Tiger address
        When sending v1/reverse at 32.4752389363,-86.4810198619 with format geocodejson
        Then results contain
            | osm_type | osm_id    | osm_key | osm_value | type  |
            | way      | 396009653 | place   | house     | house |
        And results contain
            | housenumber | street              | city       | county         | postcode | country       |
            | 707         | Upper Kingston Road | Prattville | Autauga County | 36067    | United States |


    Scenario: Interpolation address
        When sending v1/reverse at 47.118533,9.57056562 with format geocodejson
        Then results contain
            | osm_type | osm_id | osm_key | osm_value | type  |
            | way      | 1      | place   | house     | house |
        And results contain
            | label |
            | 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
        And result has not attributes name


    Scenario: Line geometry output is supported
        When sending v1/reverse at 47.06597,9.50467 with format geocodejson
            | param           | value |
            | polygon_geojson | 1     |
        Then results contain in field geojson
            | type       |
            | LineString |


    Scenario Outline: Only geojson polygons are supported
        When sending v1/reverse at 47.06597,9.50467 with format geocodejson
            | param   | value |
            | <param> | 1     |
        Then results contain in field geojson
            | type  |
            | Point |

        Examples:
            | param        |
            | polygon_text |
            | polygon_svg  |
            | polygon_kml  |
73
test/bdd/api/reverse/v1_geojson.feature
Normal file
@@ -0,0 +1,73 @@
@SQLITE
@APIDB
Feature: Geojson for Reverse API
    Testing correctness of geojson output (API version v1).

    Scenario Outline: Simple OSM result
        When sending v1/reverse at 47.066,9.504 with format geojson
            | addressdetails |
            | <has_address>  |
        Then result has attributes place_id, importance, __licence
        And result has <attributes> address
        And results contain
            | osm_type | osm_id     | place_rank | category | type   | addresstype |
            | node     | 6522627624 | 30         | shop     | bakery | shop        |
        And results contain
            | name | display_name |
            | Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
        And results contain
            | boundingbox |
            | [47.0660392, 47.0661392, 9.5035565, 9.5036565] |
        And results contain in field geojson
            | type  | coordinates             |
            | Point | [9.5036065, 47.0660892] |

        Examples:
            | has_address | attributes     |
            | 1           | attributes     |
            | 0           | not attributes |


    @Tiger
    Scenario: Tiger address
        When sending v1/reverse at 32.4752389363,-86.4810198619 with format geojson
        Then results contain
            | osm_type | osm_id    | category | type  | addresstype | place_rank |
            | way      | 396009653 | place    | house | place       | 30         |


    Scenario: Interpolation address
        When sending v1/reverse at 47.118533,9.57056562 with format geojson
        Then results contain
            | osm_type | osm_id | place_rank | category | type  | addresstype |
            | way      | 1      | 30         | place    | house | place       |
        And results contain
            | boundingbox |
            | ^\[47.118495\d*, 47.118595\d*, 9.570496\d*, 9.570596\d*\] |
        And results contain
            | display_name |
            | 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |


    Scenario: Line geometry output is supported
        When sending v1/reverse at 47.06597,9.50467 with format geojson
            | param           | value |
            | polygon_geojson | 1     |
        Then results contain in field geojson
            | type       |
            | LineString |


    Scenario Outline: Only geojson polygons are supported
        When sending v1/reverse at 47.06597,9.50467 with format geojson
            | param   | value |
            | <param> | 1     |
        Then results contain in field geojson
            | type  |
            | Point |

        Examples:
            | param        |
            | polygon_text |
            | polygon_svg  |
            | polygon_kml  |
130
test/bdd/api/reverse/v1_json.feature
Normal file
@@ -0,0 +1,130 @@
@SQLITE
@APIDB
Feature: Json output for Reverse API
    Testing correctness of json and jsonv2 output (API version v1).

    Scenario Outline: OSM result with and without addresses
        When sending v1/reverse at 47.066,9.504 with format json
            | addressdetails |
            | <has_address>  |
        Then result has <attributes> address
        When sending v1/reverse at 47.066,9.504 with format jsonv2
            | addressdetails |
            | <has_address>  |
        Then result has <attributes> address

        Examples:
            | has_address | attributes     |
            | 1           | attributes     |
            | 0           | not attributes |

    Scenario Outline: Simple OSM result
        When sending v1/reverse at 47.066,9.504 with format <format>
        Then result has attributes place_id
        And results contain
            | licence |
            | ^Data © OpenStreetMap contributors, ODbL 1.0. https?://osm.org/copyright$ |
        And results contain
            | osm_type | osm_id     |
            | node     | 6522627624 |
        And results contain
            | centroid             | boundingbox |
            | 9.5036065 47.0660892 | ['47.0660392', '47.0661392', '9.5035565', '9.5036565'] |
        And results contain
            | display_name |
            | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
        And result has not attributes namedetails,extratags

        Examples:
            | format |
            | json   |
            | jsonv2 |

    Scenario: Extra attributes of jsonv2 result
        When sending v1/reverse at 47.066,9.504 with format jsonv2
        Then result has attributes importance
        Then results contain
            | category | type   | name                  | place_rank | addresstype |
            | shop     | bakery | Dorfbäckerei Herrmann | 30         | shop        |


    @Tiger
    Scenario: Tiger address
        When sending v1/reverse at 32.4752389363,-86.4810198619 with format jsonv2
        Then results contain
            | osm_type | osm_id    | category | type  | addresstype |
            | way      | 396009653 | place    | house | place       |


    Scenario Outline: Interpolation address
        When sending v1/reverse at 47.118533,9.57056562 with format <format>
        Then results contain
            | osm_type | osm_id |
            | way      | 1      |
        And results contain
            | centroid                | boundingbox |
            | 9.57054676 47.118545392 | ^\['47.118495\d*', '47.118595\d*', '9.570496\d*', '9.570596\d*'\] |
        And results contain
            | display_name |
            | 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |

        Examples:
            | format |
            | json   |
            | jsonv2 |


    Scenario Outline: Output of geojson
        When sending v1/reverse at 47.06597,9.50467 with format <format>
            | param           | value |
            | polygon_geojson | 1     |
        Then results contain in field geojson
            | type       | coordinates |
            | LineString | [[9.5039353, 47.0657546], [9.5040437, 47.0657781], [9.5040808, 47.065787], [9.5054298, 47.0661407]] |

        Examples:
            | format |
            | json   |
            | jsonv2 |


    Scenario Outline: Output of WKT
        When sending v1/reverse at 47.06597,9.50467 with format <format>
            | param        | value |
            | polygon_text | 1     |
        Then results contain
            | geotext |
            | ^LINESTRING\(9.5039353 47.0657546, ?9.5040437 47.0657781, ?9.5040808 47.065787, ?9.5054298 47.0661407\) |

        Examples:
            | format |
            | json   |
            | jsonv2 |


    Scenario Outline: Output of SVG
        When sending v1/reverse at 47.06597,9.50467 with format <format>
            | param       | value |
            | polygon_svg | 1     |
        Then results contain
            | svg |
            | M 9.5039353 -47.0657546 L 9.5040437 -47.0657781 9.5040808 -47.065787 9.5054298 -47.0661407 |

        Examples:
            | format |
            | json   |
            | jsonv2 |


    Scenario Outline: Output of KML
        When sending v1/reverse at 47.06597,9.50467 with format <format>
            | param       | value |
            | polygon_kml | 1     |
        Then results contain
            | geokml |
            | ^<LineString><coordinates>9.5039\d*,47.0657\d* 9.5040\d*,47.0657\d* 9.5040\d*,47.065\d* 9.5054\d*,47.0661\d*</coordinates></LineString> |

        Examples:
            | format |
            | json   |
            | jsonv2 |
206
test/bdd/api/reverse/v1_params.feature
Normal file
@@ -0,0 +1,206 @@
@SQLITE
@APIDB
Feature: v1/reverse Parameter Tests
    Tests for parameter inputs for the v1 reverse endpoint.
    This file contains mostly bad parameter input. Valid parameters
    are tested in the format tests.

    Scenario: Bad format
        When sending v1/reverse at 47.14122383,9.52169581334 with format sdf
        Then a HTTP 400 is returned

    Scenario: Missing lon parameter
        When sending v1/reverse at 52.52,
        Then a HTTP 400 is returned


    Scenario: Missing lat parameter
        When sending v1/reverse at ,52.52
        Then a HTTP 400 is returned


    Scenario Outline: Bad format for lat or lon
        When sending v1/reverse at ,
            | lat   | lon   |
            | <lat> | <lon> |
        Then a HTTP 400 is returned

        Examples:
            | lat               | lon               |
            | 48.9660           | 8,4482            |
            | 48,9660           | 8.4482            |
            | 48,9660           | 8,4482            |
            | 48.966.0          | 8.4482            |
            | 48.966            | 8.448.2           |
            | Nan               | 8.448             |
            | 48.966            | Nan               |
            | Inf               | 5.6               |
            | 5.6               | -Inf              |
            | <script></script> | 3.4               |
            | 3.4               | <script></script> |
            | -45.3             | ;                 |
            | gkjd              | 50                |
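Python's built-in `float()` would happily accept several of the values rejected above ("Nan", "Inf", "-Inf"), which is why reproducing these rejections needs a strict pattern rather than a plain conversion. A sketch with a regex of our own choosing, not Nominatim's actual implementation:

```python
import re

# plain decimal numbers only: no NaN/Inf, no locale commas, no markup
COORD_RE = re.compile(r"^-?\d+(\.\d+)?$")

def parse_coord(value):
    """Parse a coordinate string strictly, rejecting everything non-numeric."""
    if not COORD_RE.match(value):
        raise ValueError(f"bad coordinate: {value!r}")
    return float(value)

print(parse_coord("48.9660"))  # → 48.966
for bad in ("8,4482", "48.966.0", "Nan", "-Inf", "<script></script>", ";", "gkjd"):
    try:
        parse_coord(bad)
    except ValueError:
        pass  # each value from the Examples table above is rejected
```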
    Scenario: Non-numerical zoom levels return an error
        When sending v1/reverse at 47.14122383,9.52169581334
            | zoom |
            | adfe |
        Then a HTTP 400 is returned


    Scenario Outline: Truthy values for boolean parameters
        When sending v1/reverse at 47.14122383,9.52169581334
            | addressdetails |
            | <value>        |
        Then exactly 1 result is returned
        And result has attributes address

        When sending v1/reverse at 47.14122383,9.52169581334
            | extratags |
            | <value>   |
        Then exactly 1 result is returned
        And result has attributes extratags

        When sending v1/reverse at 47.14122383,9.52169581334
            | namedetails |
            | <value>     |
        Then exactly 1 result is returned
        And result has attributes namedetails

        When sending v1/reverse at 47.14122383,9.52169581334
            | polygon_geojson |
            | <value>         |
        Then exactly 1 result is returned
        And result has attributes geojson

        When sending v1/reverse at 47.14122383,9.52169581334
            | polygon_kml |
            | <value>     |
        Then exactly 1 result is returned
        And result has attributes geokml

        When sending v1/reverse at 47.14122383,9.52169581334
            | polygon_svg |
            | <value>     |
        Then exactly 1 result is returned
        And result has attributes svg

        When sending v1/reverse at 47.14122383,9.52169581334
            | polygon_text |
            | <value>      |
        Then exactly 1 result is returned
        And result has attributes geotext

        Examples:
            | value |
            | yes   |
            | no    |
            | -1    |
            | 100   |
            | false |
            | 00    |
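Note that every value in the Examples table above — including "no" and "false" — switches the feature on. The outline is consistent with the v1 endpoint treating any value other than the literal string "0" as true (the string "00" being truthy rules out a numeric comparison). Under that assumption, a sketch:

```python
def api_bool(value):
    """Truthiness as exercised by the scenario: only the exact string '0' is false (our assumption)."""
    return value != "0"

for v in ("yes", "no", "-1", "100", "false", "00"):
    assert api_bool(v)   # all values from the Examples table enable the feature
print(api_bool("0"))  # → False
```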
    Scenario: Only one geometry can be requested
        When sending v1/reverse at 47.165989816710066,9.515774846076965
            | polygon_text | polygon_svg |
            | 1            | 1           |
        Then a HTTP 400 is returned


    Scenario Outline: Wrapping of legal jsonp requests
        When sending v1/reverse at 67.3245,0.456 with format <format>
            | json_callback |
            | foo           |
        Then the result is valid <outformat>

        Examples:
            | format      | outformat   |
            | json        | json        |
            | jsonv2      | json        |
            | geojson     | geojson     |
            | geocodejson | geocodejson |


    Scenario Outline: Illegal jsonp are not allowed
        When sending v1/reverse at 47.165989816710066,9.515774846076965
            | param         | value  |
            | json_callback | <data> |
        Then a HTTP 400 is returned

        Examples:
            | data       |
            | 1asd       |
            | bar(foo)   |
            | XXX['bad'] |
            | foo; evil  |
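A callback name that survives the two JSONP scenarios above must look like a plain identifier: "foo" is accepted, while names starting with a digit or containing parentheses, brackets, or semicolons are rejected with HTTP 400. A check consistent with those examples (our regex, not necessarily the server's exact rule):

```python
import re

# identifier-like names only: letter/underscore/$ first, then alphanumerics
CALLBACK_RE = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$]*$")

def valid_callback(name):
    """Accept identifier-like JSONP callback names, reject injection attempts."""
    return bool(CALLBACK_RE.match(name))

print(valid_callback("foo"))  # → True
for bad in ("1asd", "bar(foo)", "XXX['bad']", "foo; evil"):
    print(bad, valid_callback(bad))  # each prints False
```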
    Scenario Outline: Reverse debug mode produces valid HTML
        When sending v1/reverse at , with format debug
            | lat   | lon   |
            | <lat> | <lon> |
        Then the result is valid html

        Examples:
            | lat      | lon     |
            | 0.0      | 0.0     |
            | 47.06645 | 9.56601 |
            | 47.14081 | 9.52267 |


    Scenario Outline: Full address display for city housenumber-level address with street
        When sending v1/reverse at 47.1068011,9.52810091 with format <format>
        Then address of result 0 is
            | type           | value         |
            | house_number   | 8             |
            | road           | Im Winkel     |
            | neighbourhood  | Oberdorf      |
            | village        | Triesen       |
            | ISO3166-2-lvl8 | LI-09         |
            | county         | Oberland      |
            | postcode       | 9495          |
            | country        | Liechtenstein |
            | country_code   | li            |

        Examples:
            | format  |
            | json    |
            | jsonv2  |
            | geojson |
            | xml     |


    Scenario Outline: Results with name details
        When sending v1/reverse at 47.14052,9.52202 with format <format>
            | zoom | namedetails |
            | 14   | 1           |
        Then results contain in field namedetails
            | name     |
            | Ebenholz |

        Examples:
            | format  |
            | json    |
            | jsonv2  |
            | xml     |
            | geojson |


    Scenario Outline: Results with extratags
        When sending v1/reverse at 47.14052,9.52202 with format <format>
            | zoom | extratags |
            | 14   | 1         |
        Then results contain in field extratags
            | wikidata |
            | Q4529531 |

        Examples:
            | format  |
            | json    |
            | jsonv2  |
            | xml     |
            | geojson |
88
test/bdd/api/reverse/v1_xml.feature
Normal file
@@ -0,0 +1,88 @@
@SQLITE
@APIDB
Feature: XML output for Reverse API
    Testing correctness of xml output (API version v1).

    Scenario Outline: OSM result with and without addresses
        When sending v1/reverse at 47.066,9.504 with format xml
            | addressdetails |
            | <has_address>  |
        Then result has attributes place_id
        Then result has <attributes> address
        And results contain
            | osm_type | osm_id     | place_rank | address_rank |
            | node     | 6522627624 | 30         | 30           |
        And results contain
            | centroid             | boundingbox |
            | 9.5036065 47.0660892 | 47.0660392,47.0661392,9.5035565,9.5036565 |
        And results contain
            | ref | display_name |
            | Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |

        Examples:
            | has_address | attributes     |
            | 1           | attributes     |
            | 0           | not attributes |


    @Tiger
    Scenario: Tiger address
        When sending v1/reverse at 32.4752389363,-86.4810198619 with format xml
        Then results contain
            | osm_type | osm_id    | place_rank | address_rank |
            | way      | 396009653 | 30         | 30           |
        And results contain
            | centroid                | boundingbox |
            | -86.4808553 32.4753580 | ^32.4753080\d*,32.4754080\d*,-86.4809053\d*,-86.4808053\d* |
        And results contain
            | display_name |
            | 707, Upper Kingston Road, Upper Kingston, Prattville, Autauga County, 36067, United States |


    Scenario: Interpolation address
        When sending v1/reverse at 47.118533,9.57056562 with format xml
        Then results contain
            | osm_type | osm_id | place_rank | address_rank |
            | way      | 1      | 30         | 30           |
        And results contain
            | centroid                | boundingbox |
            | 9.57054676 47.118545392 | ^47.118495\d*,47.118595\d*,9.570496\d*,9.570596\d* |
        And results contain
            | display_name |
            | 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |


    Scenario: Output of geojson
        When sending v1/reverse at 47.06597,9.50467 with format xml
            | param           | value |
            | polygon_geojson | 1     |
        Then results contain
            | geojson |
            | {"type":"LineString","coordinates":[[9.5039353,47.0657546],[9.5040437,47.0657781],[9.5040808,47.065787],[9.5054298,47.0661407]]} |


    Scenario: Output of WKT
        When sending v1/reverse at 47.06597,9.50467 with format xml
            | param        | value |
            | polygon_text | 1     |
        Then results contain
            | geotext |
            | ^LINESTRING\(9.5039353 47.0657546, ?9.5040437 47.0657781, ?9.5040808 47.065787, ?9.5054298 47.0661407\) |


    Scenario: Output of SVG
        When sending v1/reverse at 47.06597,9.50467 with format xml
            | param       | value |
            | polygon_svg | 1     |
        Then results contain
            | geosvg |
            | M 9.5039353 -47.0657546 L 9.5040437 -47.0657781 9.5040808 -47.065787 9.5054298 -47.0661407 |


    Scenario: Output of KML
        When sending v1/reverse at 47.06597,9.50467 with format xml
            | param       | value |
            | polygon_kml | 1     |
        Then results contain
            | geokml |
            | ^<geokml><LineString><coordinates>9.5039\d*,47.0657\d* 9.5040\d*,47.0657\d* 9.5040\d*,47.065\d* 9.5054\d*,47.0661\d*</coordinates></LineString></geokml> |
28
test/bdd/api/search/geocodejson.feature
Normal file
@@ -0,0 +1,28 @@
@SQLITE
@APIDB
Feature: Parameters for Search API
    Testing correctness of geocodejson output.

    Scenario: City housenumber-level address with street
        When sending geocodejson search query "Im Winkel 8, Triesen" with address
        Then results contain
          | housenumber | street    | postcode | city    | country       |
          | 8           | Im Winkel | 9495     | Triesen | Liechtenstein |

    Scenario: Town street-level address with street
        When sending geocodejson search query "Gnetsch, Balzers" with address
        Then results contain
          | name    | city    | postcode | country       |
          | Gnetsch | Balzers | 9496     | Liechtenstein |

    Scenario: Town street-level address with footway
        When sending geocodejson search query "burg gutenberg 6000 jahre geschichte" with address
        Then results contain
          | street  | city    | postcode | country       |
          | Burgweg | Balzers | 9496     | Liechtenstein |

    Scenario: City address with suburb
        When sending geocodejson search query "Lochgass 5, Ebenholz, Vaduz" with address
        Then results contain
          | housenumber | street   | district | city  | postcode | country       |
          | 5           | Lochgass | Ebenholz | Vaduz | 9490     | Liechtenstein |
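The scenarios above compare flat address columns against geocodejson output; in that format the address parts sit under `properties.geocoding` of each GeoJSON feature. A minimal sketch of extracting exactly those columns (the sample feature is hand-written to mirror the first scenario, not a recorded server response):

```python
# Hand-written sample mirroring the first scenario above -- illustrative
# only. In geocodejson the address parts live under properties.geocoding.
sample = {
    "type": "Feature",
    "properties": {"geocoding": {
        "housenumber": "8", "street": "Im Winkel",
        "postcode": "9495", "city": "Triesen", "country": "Liechtenstein",
    }},
}

def address_fields(feature,
                   keys=("housenumber", "street", "postcode", "city", "country")):
    # Pull out exactly the columns the scenarios compare against.
    geocoding = feature["properties"]["geocoding"]
    return {k: geocoding.get(k) for k in keys}

print(address_fields(sample)["city"])
```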
63  test/bdd/api/search/language.feature  Normal file
@@ -0,0 +1,63 @@
@SQLITE
@APIDB
Feature: Localization of search results

    Scenario: default language
        When sending json search query "Liechtenstein"
        Then results contain
          | ID | display_name |
          | 0 | Liechtenstein |

    Scenario: accept-language first
        When sending json search query "Liechtenstein"
          | accept-language |
          | zh,de |
        Then results contain
          | ID | display_name |
          | 0 | 列支敦士登 |

    Scenario: accept-language missing
        When sending json search query "Liechtenstein"
          | accept-language |
          | xx,fr,en,de |
        Then results contain
          | ID | display_name |
          | 0 | Liechtenstein |

    Scenario: http accept language header first
        Given the HTTP header
          | accept-language |
          | fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending json search query "Liechtenstein"
        Then results contain
          | ID | display_name |
          | 0 | Liktinstein |

    Scenario: http accept language header and accept-language
        Given the HTTP header
          | accept-language |
          | fr-ca,fr;q=0.8,en-ca;q=0.5,en;q=0.3 |
        When sending json search query "Liechtenstein"
          | accept-language |
          | fo,en |
        Then results contain
          | ID | display_name |
          | 0 | Liktinstein |

    Scenario: http accept language header fallback
        Given the HTTP header
          | accept-language |
          | fo-ca,en-ca;q=0.5 |
        When sending json search query "Liechtenstein"
        Then results contain
          | ID | display_name |
          | 0 | Liktinstein |

    Scenario: http accept language header fallback (upper case)
        Given the HTTP header
          | accept-language |
          | fo-FR;q=0.8,en-ca;q=0.5 |
        When sending json search query "Liechtenstein"
        Then results contain
          | ID | display_name |
          | 0 | Liktinstein |
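Taken together, the scenarios above describe a fallback chain: languages from the `accept-language` parameter (which overrides the HTTP header) are tried in order, and the default name is used when none of them has a translation. A minimal sketch of that selection logic — not Nominatim's actual implementation:

```python
def pick_localized_name(names, accept_languages):
    # names maps a language code to a name; "" holds the default name.
    # Try the requested languages in order, then fall back to the default,
    # as the "accept-language missing" scenario above illustrates.
    for lang in accept_languages:
        if lang in names:
            return names[lang]
    return names[""]

names = {"": "Liechtenstein", "zh": "列支敦士登"}
print(pick_localized_name(names, ["zh", "de"]))   # 列支敦士登
print(pick_localized_name(names, ["xx", "fr"]))   # Liechtenstein
```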
362  test/bdd/api/search/params.feature  Normal file
@@ -0,0 +1,362 @@
@SQLITE
@APIDB
Feature: Search queries
    Testing different queries and parameters

    Scenario: Simple XML search
        When sending xml search query "Schaan"
        Then result 0 has attributes place_id,osm_type,osm_id
        And result 0 has attributes place_rank,boundingbox
        And result 0 has attributes lat,lon,display_name
        And result 0 has attributes class,type,importance
        And result 0 has not attributes address
        And result 0 has bounding box in 46.5,47.5,9,10

    Scenario: Simple JSON search
        When sending json search query "Vaduz"
        Then result 0 has attributes place_id,licence,class,type
        And result 0 has attributes osm_type,osm_id,boundingbox
        And result 0 has attributes lat,lon,display_name,importance
        And result 0 has not attributes address
        And result 0 has bounding box in 46.5,47.5,9,10

    Scenario: Unknown format returns a user error
        When sending search query "Vaduz"
          | format |
          | x45 |
        Then a HTTP 400 is returned

    Scenario Outline: Search with addressdetails
        When sending <format> search query "Triesen" with address
        Then address of result 0 is
          | type | value |
          | village | Triesen |
          | county | Oberland |
          | postcode | 9495 |
          | country | Liechtenstein |
          | country_code | li |
          | ISO3166-2-lvl8 | LI-09 |

        Examples:
          | format |
          | json |
          | jsonv2 |
          | geojson |
          | xml |

    Scenario: Coordinate search with addressdetails
        When sending json search query "47.12400621,9.6047552"
          | accept-language |
          | en |
        Then results contain
          | display_name |
          | Guschg, Valorschstrasse, Balzers, Oberland, 9497, Liechtenstein |

    Scenario: Address details with unknown class types
        When sending json search query "Kloster St. Elisabeth" with address
        Then results contain
          | ID | class | type |
          | 0 | amenity | monastery |
        And result addresses contain
          | ID | amenity |
          | 0 | Kloster St. Elisabeth |

    Scenario: Disabling deduplication
        When sending json search query "Malbunstr"
        Then there are no duplicates
        When sending json search query "Malbunstr"
          | dedupe |
          | 0 |
        Then there are duplicates
    Scenario: Search with bounded viewbox in right area
        When sending json search query "post" with address
          | bounded | viewbox |
          | 1 | 9,47,10,48 |
        Then result addresses contain
          | ID | town |
          | 0 | Vaduz |
        When sending json search query "post" with address
          | bounded | viewbox |
          | 1 | 9.49712,47.17122,9.52605,47.16242 |
        Then result addresses contain
          | town |
          | Schaan |

    Scenario: Country search with bounded viewbox remains in the area
        When sending json search query "" with address
          | bounded | viewbox | country |
          | 1 | 9.49712,47.17122,9.52605,47.16242 | de |
        Then less than 1 result is returned

    Scenario: Search with bounded viewboxlbrt in right area
        When sending json search query "bar" with address
          | bounded | viewboxlbrt |
          | 1 | 9.49712,47.16242,9.52605,47.17122 |
        Then result addresses contain
          | town |
          | Schaan |

    @Fail
    Scenario: No POI search with unbounded viewbox
        When sending json search query "restaurant"
          | viewbox |
          | 9.93027,53.61634,10.10073,53.54500 |
        Then results contain
          | display_name |
          | ^[^,]*[Rr]estaurant.* |

    Scenario: bounded search remains within viewbox, even with no results
        When sending json search query "[restaurant]"
          | bounded | viewbox |
          | 1 | 43.5403125,-5.6563282,43.54285,-5.662003 |
        Then less than 1 result is returned

    Scenario: bounded search remains within viewbox with results
        When sending json search query "restaurant"
          | bounded | viewbox |
          | 1 | 9.49712,47.17122,9.52605,47.16242 |
        Then result has centroid in 9.49712,47.16242,9.52605,47.17122

    Scenario: Prefer results within viewbox
        When sending json search query "Gässle" with address
          | accept-language | viewbox |
          | en | 9.52413,47.10759,9.53140,47.10539 |
        Then result addresses contain
          | ID | village |
          | 0 | Triesen |
        When sending json search query "Gässle" with address
          | accept-language | viewbox |
          | en | 9.45949,47.08421,9.54094,47.05466 |
        Then result addresses contain
          | ID | town |
          | 0 | Balzers |

    Scenario: viewboxes cannot be points
        When sending json search query "foo"
          | viewbox |
          | 1.01,34.6,1.01,34.6 |
        Then a HTTP 400 is returned
    Scenario Outline: viewbox must have four coordinate numbers
        When sending json search query "foo"
          | viewbox |
          | <viewbox> |
        Then a HTTP 400 is returned

        Examples:
          | viewbox |
          | 34 |
          | 0.003,-84.4 |
          | 5.2,4.5542,12.4 |
          | 23.1,-6,0.11,44.2,9.1 |

    Scenario Outline: viewboxlbrt must have four coordinate numbers
        When sending json search query "foo"
          | viewboxlbrt |
          | <viewbox> |
        Then a HTTP 400 is returned

        Examples:
          | viewbox |
          | 34 |
          | 0.003,-84.4 |
          | 5.2,4.5542,12.4 |
          | 23.1,-6,0.11,44.2,9.1 |

    Scenario: Overly large limit number for search results
        When sending json search query "restaurant"
          | limit |
          | 1000 |
        Then at most 50 results are returned

    Scenario: Limit number of search results
        When sending json search query "landstr"
          | dedupe |
          | 0 |
        Then more than 4 results are returned
        When sending json search query "landstr"
          | limit | dedupe |
          | 4 | 0 |
        Then exactly 4 results are returned

    Scenario: Limit parameter must be a number
        When sending search query "Blue Laguna"
          | limit |
          | ); |
        Then a HTTP 400 is returned

    Scenario: Restrict to feature type country
        When sending xml search query "fürstentum"
          | featureType |
          | country |
        Then results contain
          | place_rank |
          | 4 |

    Scenario: Restrict to feature type state
        When sending xml search query "Wangerberg"
        Then at least 1 result is returned
        When sending xml search query "Wangerberg"
          | featureType |
          | state |
        Then exactly 0 results are returned

    Scenario: Restrict to feature type city
        When sending xml search query "vaduz"
        Then at least 1 result is returned
        When sending xml search query "vaduz"
          | featureType |
          | city |
        Then results contain
          | place_rank |
          | 16 |

    Scenario: Restrict to feature type settlement
        When sending json search query "Malbun"
        Then results contain
          | ID | class |
          | 1 | landuse |
        When sending json search query "Malbun"
          | featureType |
          | settlement |
        Then results contain
          | class | type |
          | place | village |

    Scenario Outline: Search with polygon threshold (json)
        When sending json search query "triesenberg"
          | polygon_geojson | polygon_threshold |
          | 1 | <th> |
        Then at least 1 result is returned
        And result 0 has attributes geojson

        Examples:
          | th |
          | -1 |
          | 0.0 |
          | 0.5 |
          | 999 |

    Scenario Outline: Search with polygon threshold (xml)
        When sending xml search query "triesenberg"
          | polygon_geojson | polygon_threshold |
          | 1 | <th> |
        Then at least 1 result is returned
        And result 0 has attributes geojson

        Examples:
          | th |
          | -1 |
          | 0.0 |
          | 0.5 |
          | 999 |

    Scenario Outline: Search with invalid polygon threshold (xml)
        When sending xml search query "triesenberg"
          | polygon_geojson | polygon_threshold |
          | 1 | <th> |
        Then a HTTP 400 is returned

        Examples:
          | th |
          | x |
          | ;; |
          | 1m |
    Scenario Outline: Search with extratags
        When sending <format> search query "Landstr"
          | extratags |
          | 1 |
        Then result has attributes extratags

        Examples:
          | format |
          | xml |
          | json |
          | jsonv2 |
          | geojson |

    Scenario Outline: Search with namedetails
        When sending <format> search query "Landstr"
          | namedetails |
          | 1 |
        Then result has attributes namedetails

        Examples:
          | format |
          | xml |
          | json |
          | jsonv2 |
          | geojson |

    Scenario Outline: Search result contains TEXT geometry
        When sending <format> search query "triesenberg"
          | polygon_text |
          | 1 |
        Then result has attributes <response_attribute>

        Examples:
          | format | response_attribute |
          | xml | geotext |
          | json | geotext |
          | jsonv2 | geotext |

    Scenario Outline: Search result contains SVG geometry
        When sending <format> search query "triesenberg"
          | polygon_svg |
          | 1 |
        Then result has attributes <response_attribute>

        Examples:
          | format | response_attribute |
          | xml | geosvg |
          | json | svg |
          | jsonv2 | svg |

    Scenario Outline: Search result contains KML geometry
        When sending <format> search query "triesenberg"
          | polygon_kml |
          | 1 |
        Then result has attributes <response_attribute>

        Examples:
          | format | response_attribute |
          | xml | geokml |
          | json | geokml |
          | jsonv2 | geokml |

    Scenario Outline: Search result contains GEOJSON geometry
        When sending <format> search query "triesenberg"
          | polygon_geojson |
          | 1 |
        Then result has attributes <response_attribute>

        Examples:
          | format | response_attribute |
          | xml | geojson |
          | json | geojson |
          | jsonv2 | geojson |
          | geojson | geojson |

    Scenario Outline: Search result in geojson format contains no non-geojson geometry
        When sending geojson search query "triesenberg"
          | polygon_text | polygon_svg | polygon_geokml |
          | 1 | 1 | 1 |
        Then result 0 has not attributes <response_attribute>

        Examples:
          | response_attribute |
          | geotext |
          | polygonpoints |
          | svg |
          | geokml |

    Scenario: Array parameters are ignored
        When sending json search query "Vaduz" with address
          | countrycodes[] | polygon_svg[] | limit[] | polygon_threshold[] |
          | IT | 1 | 3 | 3.4 |
        Then result addresses contain
          | ID | country_code |
          | 0 | li |
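The viewbox scenarios above imply a small validation contract: the value must be exactly four comma-separated numbers, the box must not collapse to a point, and the corners may be given in either order. A sketch of that check under those assumptions — not Nominatim's production parser:

```python
def parse_viewbox(raw):
    # Exactly four comma-separated numbers, and the box must not be a
    # point -- the invalid inputs in the scenarios above all trip one of
    # these checks (non-numbers would raise in float()).
    parts = raw.split(",")
    if len(parts) != 4:
        raise ValueError("viewbox must have four coordinate numbers")
    x1, y1, x2, y2 = (float(p) for p in parts)
    if x1 == x2 or y1 == y2:
        raise ValueError("viewbox cannot be a point")
    # Corners may be given in either order; normalise to min/max.
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

print(parse_viewbox("9,47,10,48"))
```

A failed check corresponds to the `HTTP 400` outcomes in the scenarios.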
@@ -1,51 +1,51 @@
 @SQLITE
 @APIDB
 Feature: Searches with postcodes
     Various searches involving postcodes

     Scenario: US 5+4 ZIP codes are shortened to 5 ZIP codes if not found
-        When geocoding "36067-1111, us"
-        Then all results contain in field address
+        When sending json search query "36067-1111, us" with address
+        Then result addresses contain
           | postcode |
           | 36067 |
-        And all results contain
+        And results contain
           | type |
           | postcode |

     Scenario: Postcode search with address
-        When geocoding "9486, mauren"
-        Then result 0 contains
-          | type |
-          | postcode |
+        When sending json search query "9486, mauren"
+        Then at least 1 result is returned

     Scenario: Postcode search with country
-        When geocoding "9486, li"
-        Then all results contain in field address
+        When sending json search query "9486, li" with address
+        Then result addresses contain
           | country_code |
           | li |

     Scenario: Postcode search with country code restriction
-        When geocoding "9490"
+        When sending json search query "9490" with address
           | countrycodes |
           | li |
-        Then all results contain in field address
+        Then result addresses contain
           | country_code |
           | li |

     Scenario: Postcode search with bounded viewbox restriction
-        When geocoding "9486"
+        When sending json search query "9486" with address
           | bounded | viewbox |
           | 1 | 9.55,47.20,9.58,47.22 |
-        Then all results contain in field address
+        Then result addresses contain
           | postcode |
           | 9486 |
-        When geocoding "9486"
+        When sending json search query "9486" with address
           | bounded | viewbox |
           | 1 | 5.00,20.00,6.00,21.00 |
-        Then exactly 0 result is returned
+        Then exactly 0 results are returned

     Scenario: Postcode search with structured query
-        When geocoding ""
+        When sending json search query "" with address
           | postalcode | country |
           | 9490 | li |
-        Then all results contain in field address
+        Then result addresses contain
           | country_code | postcode |
           | li | 9490 |
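The first scenario in the hunk above describes retrying a US ZIP+4 code as its plain five-digit ZIP when no direct match exists. A hypothetical helper illustrating just that shortening step (the function name is an assumption; this is not Nominatim's actual lookup code):

```python
import re

def postcode_candidates(postcode):
    # A US ZIP+4 such as "36067-1111" with no direct match is retried as
    # the bare five-digit ZIP, as the scenario above describes.
    # Hypothetical helper for illustration only.
    candidates = [postcode]
    m = re.fullmatch(r"(\d{5})-\d{4}", postcode)
    if m:
        candidates.append(m.group(1))
    return candidates

print(postcode_candidates("36067-1111"))  # ['36067-1111', '36067']
```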
221  test/bdd/api/search/queries.feature  Normal file
@@ -0,0 +1,221 @@
@SQLITE
@APIDB
Feature: Search queries
    Generic search result correctness

    Scenario: Search for natural object
        When sending json search query "Samina"
          | accept-language |
          | en |
        Then results contain
          | ID | class | type | display_name |
          | 0 | waterway | river | Samina, Austria |

    Scenario: House number search for non-street address
        When sending json search query "6 Silum, Liechtenstein" with address
          | accept-language |
          | en |
        Then address of result 0 is
          | type | value |
          | house_number | 6 |
          | village | Silum |
          | town | Triesenberg |
          | county | Oberland |
          | postcode | 9497 |
          | country | Liechtenstein |
          | country_code | li |
          | ISO3166-2-lvl8 | LI-10 |

    Scenario: House number interpolation
        When sending json search query "Grosssteg 1023, Triesenberg" with address
          | accept-language |
          | de |
        Then address of result 0 contains
          | type | value |
          | house_number | 1023 |
          | road | Grosssteg |
          | village | Sücka |
          | postcode | 9497 |
          | town | Triesenberg |
          | country | Liechtenstein |
          | country_code | li |

    Scenario: With missing housenumber search falls back to road
        When sending json search query "Bündaweg 555" with address
        Then address of result 0 is
          | type | value |
          | road | Bündaweg |
          | village | Silum |
          | postcode | 9497 |
          | county | Oberland |
          | town | Triesenberg |
          | country | Liechtenstein |
          | country_code | li |
          | ISO3166-2-lvl8 | LI-10 |

    Scenario Outline: Housenumber 0 can be found
        When sending <format> search query "Gnalpstrasse 0" with address
        Then results contain
          | display_name |
          | ^0,.* |
        And result addresses contain
          | house_number |
          | 0 |

        Examples:
          | format |
          | xml |
          | json |
          | jsonv2 |
          | geojson |

    @Tiger
    Scenario: TIGER house number
        When sending json search query "697 Upper Kingston Road"
        Then results contain
          | osm_type | display_name |
          | way | ^697,.* |

    Scenario: Search with class-type feature
        When sending jsonv2 search query "bars in ebenholz"
        Then results contain
          | place_rank |
          | 30 |

    Scenario: Search with specific amenity
        When sending json search query "[restaurant] Vaduz" with address
        Then result addresses contain
          | country |
          | Liechtenstein |
        And results contain
          | class | type |
          | amenity | restaurant |

    Scenario: Search with specific amenity also works in country
        When sending json search query "restaurants in liechtenstein" with address
        Then result addresses contain
          | country |
          | Liechtenstein |
        And results contain
          | class | type |
          | amenity | restaurant |

    Scenario: Search with key-value amenity
        When sending json search query "[club=scout] Vaduz"
        Then results contain
          | class | type |
          | club | scout |

    Scenario: POI search near given coordinate
        When sending json search query "restaurant near 47.16712,9.51100"
        Then results contain
          | class | type |
          | amenity | restaurant |

    Scenario: Arbitrary key/value search near given coordinate
        When sending json search query "[leisure=firepit] 47.150° N 9.5340493° E"
        Then results contain
          | class | type |
          | leisure | firepit |
    Scenario: POI search in a bounded viewbox
        When sending json search query "restaurants"
          | viewbox | bounded |
          | 9.50830,47.15253,9.52043,47.14866 | 1 |
        Then results contain
          | class | type |
          | amenity | restaurant |

    Scenario Outline: Key/value search near given coordinate can be restricted to country
        When sending json search query "[natural=peak] 47.06512,9.53965" with address
          | countrycodes |
          | <cc> |
        Then result addresses contain
          | country_code |
          | <cc> |

        Examples:
          | cc |
          | li |
          | ch |

    Scenario: Name search near given coordinate
        When sending json search query "sporry" with address
        Then result addresses contain
          | ID | town |
          | 0 | Vaduz |
        When sending json search query "sporry, 47.10791,9.52676" with address
        Then result addresses contain
          | ID | village |
          | 0 | Triesen |

    Scenario: Name search near given coordinate without result
        When sending json search query "sporry, N 47 15 7 W 9 61 26"
        Then exactly 0 results are returned

    Scenario: Arbitrary key/value search near a road
        When sending json search query "[amenity=drinking_water] Wissfläckaweg"
        Then results contain
          | class | type |
          | amenity | drinking_water |

    Scenario: Ignore other country codes in structured search with country
        When sending json search query ""
          | city | country |
          | li | de |
        Then exactly 0 results are returned

    Scenario: Ignore country searches when query is restricted to countries
        When sending json search query "fr"
          | countrycodes |
          | li |
        Then exactly 0 results are returned

    Scenario: Country searches only return results for the given country
        When sending search query "Ans Trail" with address
          | countrycodes |
          | li |
        Then result addresses contain
          | country_code |
          | li |

    # https://trac.openstreetmap.org/ticket/5094
    Scenario: housenumbers are ordered by complete match first
        When sending json search query "Austrasse 11, Vaduz" with address
        Then result addresses contain
          | ID | house_number |
          | 0 | 11 |

    Scenario Outline: Coordinate searches with white spaces
        When sending json search query "<data>"
        Then exactly 1 result is returned
        And results contain
          | class |
          | water |

        Examples:
          | data |
          | sporry weiher, N 47.10791° E 9.52676° |
          | sporry weiher, N 47.10791° E 9.52676° |
          | sporry weiher , N 47.10791° E 9.52676° |
          | sporry weiher, N 47.10791° E 9.52676° |
          | sporry weiher, N 47.10791° E 9.52676° |

    Scenario: Searches with white spaces
        When sending json search query "52 Bodastr,Triesenberg"
        Then results contain
          | class | type |
          | highway | residential |

    # github #1949
    Scenario: Addressdetails always returns the place type
        When sending json search query "Vaduz" with address
        Then result addresses contain
          | ID | town |
          | 0 | Vaduz |

    Scenario: Search can handle complex query word sets
        When sending search query "aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus"
        Then a HTTP 200 is returned
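Several scenarios above use the bracketed filter syntax (`[restaurant] Vaduz`, `[club=scout] Vaduz`, `[leisure=firepit] …`), which attaches an explicit class/type filter to the free-text query. A hypothetical splitter showing the shape of that syntax — not the production query tokenizer:

```python
import re

# Splits an optional leading "[key]" or "[key=value]" filter off a query,
# mirroring the bracketed queries in the scenarios above. Hypothetical
# helper for illustration only.
SPECIAL = re.compile(r"\[\s*([^\]=]+?)\s*(?:=\s*([^\]]+?)\s*)?\]\s*(.*)")

def split_special_term(query):
    m = SPECIAL.match(query)
    if not m:
        return None, None, query
    return m.groups()  # (key, value or None, remaining free text)

print(split_special_term("[club=scout] Vaduz"))
```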
208  test/bdd/api/search/simple.feature  Normal file
@@ -0,0 +1,208 @@
@SQLITE
@APIDB
Feature: Simple Tests
    Simple tests for internal server errors and response format.

    Scenario Outline: Testing different parameters
        When sending search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned
        When sending xml search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned
        When sending json search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned
        When sending jsonv2 search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned
        When sending geojson search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned
        When sending geocodejson search query "Vaduz"
          | param | value |
          | <parameter> | <value> |
        Then at least 1 result is returned

        Examples:
          | parameter | value |
          | addressdetails | 0 |
          | polygon_text | 0 |
          | polygon_kml | 0 |
          | polygon_geojson | 0 |
          | polygon_svg | 0 |
          | accept-language | de,en |
          | countrycodes | li |
          | bounded | 1 |
          | bounded | 0 |
          | exclude_place_ids | 385252,1234515 |
          | limit | 1000 |
          | dedupe | 1 |
          | dedupe | 0 |
          | extratags | 0 |
          | namedetails | 0 |
    Scenario: Search with invalid output format
        When sending search query "Berlin"
          | format |
          | fd$# |
        Then a HTTP 400 is returned

    Scenario Outline: Simple Searches
        When sending search query "<query>"
        Then the result is valid json
        When sending xml search query "<query>"
        Then the result is valid xml
        When sending json search query "<query>"
        Then the result is valid json
        When sending jsonv2 search query "<query>"
        Then the result is valid json
        When sending geojson search query "<query>"
        Then the result is valid geojson

        Examples:
          | query |
          | New York, New York |
          | France |
          | 12, Main Street, Houston |
          | München |
          | 東京都 |
          | hotels in nantes |
          | xywxkrf |
          | gh; foo() |
          | %#$@*&l;der#$! |
          | 234 |
          | 47.4,8.3 |

    Scenario: Empty XML search
        When sending xml search query "xnznxvcx"
        Then result header contains
          | attr | value |
          | querystring | xnznxvcx |
          | more_url | .*q=xnznxvcx.*format=xml |

    Scenario: Empty XML search with special XML characters
        When sending xml search query "xfdghn&zxn"xvbyx<vxx>cssdex"
        Then result header contains
          | attr | value |
          | querystring | xfdghn&zxn"xvbyx<vxx>cssdex |
          | more_url | .*q=xfdghn%26zxn%22xvbyx%3Cvxx%3Ecssdex.*format=xml |

    Scenario: Empty XML search with viewbox
        When sending xml search query "xnznxvcx"
          | viewbox |
          | 12,33,77,45.13 |
        Then result header contains
          | attr | value |
          | querystring | xnznxvcx |
          | viewbox | 12,33,77,45.13 |

    Scenario: Empty XML search with viewboxlbrt
        When sending xml search query "xnznxvcx"
          | viewboxlbrt |
          | 12,34.13,77,45 |
        Then result header contains
          | attr | value |
          | querystring | xnznxvcx |
          | viewbox | 12,34.13,77,45 |

    Scenario: Empty XML search with viewboxlbrt and viewbox
        When sending xml search query "pub"
          | viewbox | viewboxblrt |
          | 12,33,77,45.13 | 1,2,3,4 |
        Then result header contains
          | attr | value |
          | querystring | pub |
          | viewbox | 12,33,77,45.13 |

    Scenario: Empty XML search with excluded place ids
        When sending xml search query "jghrleoxsbwjer"
          | exclude_place_ids |
          | 123,76,342565 |
        Then result header contains
          | attr | value |
          | exclude_place_ids | 123,76,342565 |

    Scenario: Empty XML search with bad excluded place ids
        When sending xml search query "jghrleoxsbwjer"
          | exclude_place_ids |
          | , |
        Then result header has not attributes exclude_place_ids

    Scenario Outline: Wrapping of legal jsonp search requests
        When sending json search query "Tokyo"
          | param | value |
          | json_callback | <data> |
        Then result header contains
          | attr | value |
          | json_func | <result> |

        Examples:
          | data | result |
          | foo | foo |
          | FOO | FOO |
          | __world | __world |

    Scenario Outline: Wrapping of illegal jsonp search requests
        When sending json search query "Tokyo"
          | param | value |
          | json_callback | <data> |
        Then a json user error is returned

        Examples:
          | data |
          | 1asd |
          | bar(foo) |
          | XXX['bad'] |
          | foo; evil |

    Scenario: Ignore jsonp parameter for anything but json
        When sending json search query "Malibu"
          | json_callback |
          | 234 |
        Then a HTTP 400 is returned
        When sending xml search query "Malibu"
          | json_callback |
          | 234 |
        Then the result is valid xml

    Scenario Outline: Empty search
        When sending <format> search query "YHlERzzx"
        Then exactly 0 results are returned

        Examples:
          | format |
          | json |
          | jsonv2 |
          | geojson |
          | geocodejson |

    Scenario: Search for non-existing coordinates
        When sending json search query "-21.0,-33.0"
        Then exactly 0 results are returned

    Scenario: Country code selection is retained in more URL (#596)
        When sending xml search query "Vaduz"
          | countrycodes |
          | pl,1,,invalid,undefined,%3Cb%3E,bo,, |
        Then result header contains
          | attr | value |
          | more_url | .*&countrycodes=pl%2Cbo&.* |

    Scenario Outline: Search debug output does not return errors
        When sending debug search query "<query>"
        Then a HTTP 200 is returned

        Examples:
          | query |
          | Liechtenstein |
          | Triesen |
          | Pfarrkirche |
          | Landstr 27 Steinort, Triesenberg, 9495 |
          | 9497 |
          | restaurant in triesen |
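The jsonp scenarios above accept callbacks like `foo`, `FOO` and `__world` while rejecting `1asd`, `bar(foo)`, `XXX['bad']` and `foo; evil`. A plain-identifier rule makes exactly that split; this is a conservative sketch for illustration, not the server's actual validation code:

```python
import re

# Accept only plain identifiers as jsonp callback names, reproducing the
# accept/reject split of the Examples tables above. Sketch only.
VALID_CALLBACK = re.compile(r"[A-Za-z_][A-Za-z0-9_]*\Z")

def is_valid_jsonp_callback(name):
    return VALID_CALLBACK.match(name) is not None

print(is_valid_jsonp_callback("__world"))   # True
print(is_valid_jsonp_callback("bar(foo)"))  # False
```

Rejecting anything beyond an identifier prevents a reflected `json_callback` value from injecting script into the wrapped response.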
79  test/bdd/api/search/structured.feature  Normal file
@@ -0,0 +1,79 @@
@SQLITE
@APIDB
Feature: Structured search queries
    Testing correctness of results with
    structured queries

    Scenario: Country only
        When sending json search query "" with address
          | country |
          | Liechtenstein |
        Then address of result 0 is
          | type | value |
          | country | Liechtenstein |
          | country_code | li |

    Scenario: Postcode only
        When sending json search query "" with address
          | postalcode |
          | 9495 |
        Then results contain
          | type |
          | ^post(al_)?code |
        And result addresses contain
          | postcode |
          | 9495 |

    Scenario: Street, postcode and country
        When sending xml search query "" with address
          | street | postalcode | country |
          | Old Palace Road | GU2 7UP | United Kingdom |
        Then result header contains
          | attr | value |
          | querystring | Old Palace Road, GU2 7UP, United Kingdom |

    Scenario: Street with housenumber, city and postcode
        When sending xml search query "" with address
          | street | city | postalcode |
          | 19 Am schrägen Weg | Vaduz | 9490 |
        Then result addresses contain
          | house_number | road |
          | 19 | Am Schrägen Weg |

    Scenario: Street with housenumber, city and bad postcode
        When sending xml search query "" with address
          | street | city | postalcode |
          | 19 Am schrägen Weg | Vaduz | 9491 |
        Then result addresses contain
          | house_number | road |
          | 19 | Am Schrägen Weg |

    Scenario: Amenity, city
        When sending json search query "" with address
          | city | amenity |
          | Vaduz | bar |
        Then result addresses contain
          | country |
          | Liechtenstein |
        And results contain
          | class | type |
          | amenity | ^(pub)\|(bar)\|(restaurant) |

    #176
    Scenario: Structured search restricts rank
        When sending json search query "" with address
          | city |
          | Vaduz |
        Then result addresses contain
          | town |
          | Vaduz |

    #3651
    Scenario: Structured search with surrounding extra characters
        When sending xml search query "" with address
          | street | city | postalcode |
          | "19 Am schrägen Weg" | "Vaduz" | "9491" |
        Then result addresses contain
          | house_number | road |
          | 19 | Am Schrägen Weg |
17 test/bdd/api/status/failures.feature Normal file
@@ -0,0 +1,17 @@
@UNKNOWNDB
Feature: Status queries against unknown database
    Testing status query

    Scenario: Failed status as text
        When sending text status query
        Then a HTTP 500 is returned
        And the page contents equals "ERROR: Database connection failed"

    Scenario: Failed status as json
        When sending json status query
        Then a HTTP 200 is returned
        And the result is valid json
        And results contain
            | status | message |
            | 700 | Database connection failed |
        And result has not attributes data_updated
17 test/bdd/api/status/simple.feature Normal file
@@ -0,0 +1,17 @@
@SQLITE
@APIDB
Feature: Status queries
    Testing status query

    Scenario: Status as text
        When sending status query
        Then a HTTP 200 is returned
        And the page contents equals "OK"

    Scenario: Status as json
        When sending json status query
        Then the result is valid json
        And results contain
            | status | message |
            | 0 | OK |
        And result has attributes data_updated
@@ -1,358 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Fixtures for BDD test steps
"""
import sys
import json
from pathlib import Path

import psycopg
from psycopg import sql as pysql

# always test against the source
SRC_DIR = (Path(__file__) / '..' / '..' / '..').resolve()
sys.path.insert(0, str(SRC_DIR / 'src'))

import pytest
from pytest_bdd.parsers import re as step_parse
from pytest_bdd import given, when, then

pytest.register_assert_rewrite('utils')

from utils.api_runner import APIRunner
from utils.api_result import APIResult
from utils.checks import ResultAttr, COMPARATOR_TERMS
from utils.geometry_alias import ALIASES
from utils.grid import Grid
from utils.db import DBManager

from nominatim_db.config import Configuration
from nominatim_db.data.country_info import setup_country_config


def _strlist(inp):
    return [s.strip() for s in inp.split(',')]


def _pretty_json(inp):
    return json.dumps(inp, indent=2)


def pytest_addoption(parser, pluginmanager):
    parser.addoption('--nominatim-purge', dest='NOMINATIM_PURGE', action='store_true',
                     help='Force recreation of test databases from scratch.')
    parser.addoption('--nominatim-keep-db', dest='NOMINATIM_KEEP_DB', action='store_true',
                     help='Do not drop the database after tests are finished.')
    parser.addoption('--nominatim-api-engine', dest='NOMINATIM_API_ENGINE',
                     default='falcon',
                     help='Chose the API engine to use when sending requests.')
    parser.addoption('--nominatim-tokenizer', dest='NOMINATIM_TOKENIZER',
                     metavar='TOKENIZER',
                     help='Use the specified tokenizer for importing data into '
                          'a Nominatim database.')

    parser.addini('nominatim_test_db', default='test_nominatim',
                  help='Name of the database used for running a single test.')
    parser.addini('nominatim_api_test_db', default='test_api_nominatim',
                  help='Name of the database for storing API test data.')
    parser.addini('nominatim_template_db', default='test_template_nominatim',
                  help='Name of database used as a template for test databases.')


@pytest.fixture
def datatable():
    """ Default fixture for datatables, so that their presence can be optional.
    """
    return None


@pytest.fixture
def node_grid():
    """ Default fixture for node grids. Nothing set.
    """
    return Grid([[]], None, None)


@pytest.fixture(scope='session', autouse=True)
def setup_country_info():
    setup_country_config(Configuration(None))


@pytest.fixture(scope='session')
def template_db(pytestconfig):
    """ Create a template database containing the extensions and base data
        needed by Nominatim. Using the template instead of doing the full
        setup can speed up the tests.

        The template database will only be created if it does not exist yet
        or a purge has been explicitly requested.
    """
    dbm = DBManager(purge=pytestconfig.option.NOMINATIM_PURGE)

    template_db = pytestconfig.getini('nominatim_template_db')

    template_config = Configuration(
        None, environ={'NOMINATIM_DATABASE_DSN': f"pgsql:dbname={template_db}"})

    dbm.setup_template_db(template_config)

    return template_db


@pytest.fixture
def def_config(pytestconfig):
    dbname = pytestconfig.getini('nominatim_test_db')

    return Configuration(None,
                         environ={'NOMINATIM_DATABASE_DSN': f"pgsql:dbname={dbname}"})


@pytest.fixture
def db(template_db, pytestconfig):
    """ Set up an empty database for use with osm2pgsql.
    """
    dbm = DBManager(purge=pytestconfig.option.NOMINATIM_PURGE)

    dbname = pytestconfig.getini('nominatim_test_db')

    dbm.create_db_from_template(dbname, template_db)

    yield dbname

    if not pytestconfig.option.NOMINATIM_KEEP_DB:
        dbm.drop_db(dbname)


@pytest.fixture
def db_conn(db, def_config):
    with psycopg.connect(def_config.get_libpq_dsn()) as conn:
        info = psycopg.types.TypeInfo.fetch(conn, "hstore")
        psycopg.types.hstore.register_hstore(info, conn)
        yield conn


@when(step_parse(r'reverse geocoding (?P<lat>[\d.-]*),(?P<lon>[\d.-]*)'),
      target_fixture='nominatim_result')
def reverse_geocode_via_api(test_config_env, pytestconfig, datatable, lat, lon):
    runner = APIRunner(test_config_env, pytestconfig.option.NOMINATIM_API_ENGINE)
    api_response = runner.run_step('reverse',
                                   {'lat': float(lat), 'lon': float(lon)},
                                   datatable, 'jsonv2', {})

    assert api_response.status == 200
    assert api_response.headers['content-type'] == 'application/json; charset=utf-8'

    result = APIResult('json', 'reverse', api_response.body)
    assert result.is_simple()

    assert isinstance(result.result['lat'], str)
    assert isinstance(result.result['lon'], str)
    result.result['centroid'] = f"POINT({result.result['lon']} {result.result['lat']})"

    return result


@when(step_parse(r'reverse geocoding at node (?P<node>[\d]+)'),
      target_fixture='nominatim_result')
def reverse_geocode_via_api_and_grid(test_config_env, pytestconfig, node_grid, datatable, node):
    coords = node_grid.get(node)
    if coords is None:
        raise ValueError('Unknown node id')

    return reverse_geocode_via_api(test_config_env, pytestconfig, datatable, coords[1], coords[0])


@when(step_parse(r'geocoding(?: "(?P<query>.*)")?'),
      target_fixture='nominatim_result')
def forward_geocode_via_api(test_config_env, pytestconfig, datatable, query):
    runner = APIRunner(test_config_env, pytestconfig.option.NOMINATIM_API_ENGINE)

    params = {'addressdetails': '1'}
    if query:
        params['q'] = query

    api_response = runner.run_step('search', params, datatable, 'jsonv2', {})

    assert api_response.status == 200
    assert api_response.headers['content-type'] == 'application/json; charset=utf-8'

    result = APIResult('json', 'search', api_response.body)
    assert not result.is_simple()

    for res in result.result:
        assert isinstance(res['lat'], str)
        assert isinstance(res['lon'], str)
        res['centroid'] = f"POINT({res['lon']} {res['lat']})"

    return result


@then(step_parse(r'(?P<op>[a-z ]+) (?P<num>\d+) results? (?:are|is) returned'),
      converters={'num': int})
def check_number_of_results(nominatim_result, op, num):
    assert not nominatim_result.is_simple()
    assert COMPARATOR_TERMS[op](num, len(nominatim_result))


@then(step_parse('the result metadata contains'))
def check_metadata_for_fields(nominatim_result, datatable):
    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    for k, v in pairs:
        assert ResultAttr(nominatim_result.meta, k) == v


@then(step_parse('the result metadata has no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_metadata_for_field_presence(nominatim_result, attributes):
    assert all(a not in nominatim_result.meta for a in attributes), \
        f"Unexpectedly have one of the attributes '{attributes}' in\n" \
        f"{_pretty_json(nominatim_result.meta)}"


@then(step_parse(r'the result contains(?: in field (?P<field>\S+))?'))
def check_result_for_fields(nominatim_result, datatable, node_grid, field):
    assert nominatim_result.is_simple()

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        assert ResultAttr(nominatim_result.result, prefix + k, grid=node_grid) == v


@then(step_parse('the result has attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_result_for_field_presence(nominatim_result, attributes):
    assert nominatim_result.is_simple()
    assert all(a in nominatim_result.result for a in attributes)


@then(step_parse('the result has no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_result_for_field_absence(nominatim_result, attributes):
    assert nominatim_result.is_simple()
    assert all(a not in nominatim_result.result for a in attributes)


@then(step_parse('the result set contains(?P<exact> exactly)?'))
def check_result_list_match(nominatim_result, datatable, exact):
    assert not nominatim_result.is_simple()

    result_set = set(range(len(nominatim_result.result)))

    for row in datatable[1:]:
        for idx in result_set:
            for key, value in zip(datatable[0], row):
                if ResultAttr(nominatim_result.result[idx], key) != value:
                    break
            else:
                # found a match
                result_set.remove(idx)
                break
        else:
            assert False, f"Missing data row {row}. Full response:\n{nominatim_result}"

    if exact:
        assert not [nominatim_result.result[i] for i in result_set]


@then(step_parse('all results have attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_all_results_for_field_presence(nominatim_result, attributes):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0
    for res in nominatim_result.result:
        assert all(a in res for a in attributes), \
            f"Missing one of the attributes '{attributes}' in\n{_pretty_json(res)}"


@then(step_parse('all results have no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_all_result_for_field_absence(nominatim_result, attributes):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0
    for res in nominatim_result.result:
        assert all(a not in res for a in attributes), \
            f"Unexpectedly have one of the attributes '{attributes}' in\n{_pretty_json(res)}"


@then(step_parse(r'all results contain(?: in field (?P<field>\S+))?'))
def check_all_results_contain(nominatim_result, datatable, node_grid, field):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        for r in nominatim_result.result:
            assert ResultAttr(r, prefix + k, grid=node_grid) == v


@then(step_parse(r'result (?P<num>\d+) contains(?: in field (?P<field>\S+))?'),
      converters={'num': int})
def check_specific_result_for_fields(nominatim_result, datatable, num, field):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > num

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        assert ResultAttr(nominatim_result.result[num], prefix + k) == v


@given(step_parse(r'the (?P<step>[0-9.]+ )?grid(?: with origin (?P<origin>.*))?'),
       target_fixture='node_grid')
def set_node_grid(datatable, step, origin):
    if step is not None:
        step = float(step)

    if origin:
        if ',' in origin:
            coords = origin.split(',')
            if len(coords) != 2:
                raise RuntimeError('Grid origin expects origin with x,y coordinates.')
            origin = list(map(float, coords))
        elif origin in ALIASES:
            origin = ALIASES[origin]
        else:
            raise RuntimeError('Grid origin must be either coordinate or alias.')

    return Grid(datatable, step, origin)


@then(step_parse('(?P<table>placex?) has no entry for '
                 r'(?P<osm_type>[NRW])(?P<osm_id>\d+)(?::(?P<osm_class>\S+))?'),
      converters={'osm_id': int})
def check_place_missing_lines(db_conn, table, osm_type, osm_id, osm_class):
    sql = pysql.SQL("""SELECT count(*) FROM {}
                       WHERE osm_type = %s and osm_id = %s""").format(pysql.Identifier(table))
    params = [osm_type, int(osm_id)]
    if osm_class:
        sql += pysql.SQL(' AND class = %s')
        params.append(osm_class)

    with db_conn.cursor() as cur:
        assert cur.execute(sql, params).fetchone()[0] == 0
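Several of the step definitions above repeat the same datatable normalization: a table whose header row is `param`/`value` is read as one key-value pair per row, while any other table is read as a header row zipped with the first data row. The pattern in isolation (the function name is ours; in the conftest the logic is inlined in each step):

```python
# The header-normalization pattern used by check_metadata_for_fields,
# check_result_for_fields and related steps in the conftest.py above.
def datatable_pairs(datatable):
    if datatable[0] == ['param', 'value']:
        # vertical layout: one (key, value) pair per row
        return list(datatable[1:])
    # horizontal layout: header row zipped with the first data row
    return list(zip(datatable[0], datatable[1]))


# vertical form: | param | value | / | status | 0 |
assert datatable_pairs([['param', 'value'], ['status', '0']]) == [['status', '0']]
# horizontal form: | status | message | / | 0 | OK |
assert datatable_pairs([['status', 'message'], ['0', 'OK']]) == [('status', '0'), ('message', 'OK')]
```

This is why the feature files can freely mix both table orientations for the same step.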
@@ -1,3 +1,4 @@
@DB
Feature: Address computation
    Tests for filling of place_addressline

@@ -10,13 +11,16 @@ Feature: Address computation
            | N2 | place | hamlet | West Farm | 2 |
            | N3 | place | hamlet | East Farm | 3 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | fromarea |
            | N1 | N3 | False |
        When geocoding "Square"
        Then the result set contains
            | object | display_name |
            | N1 | Square, East Farm |
        Then place_addressline doesn't contain
            | object | address |
            | N1 | N2 |
        When sending search query "Square"
        Then results contain
            | osm | display_name |
            | N1 | Square, East Farm |

    Scenario: given two place nodes, the closer one wins for the address
        Given the grid
@@ -98,9 +102,12 @@ Feature: Address computation
            | N2 | place | city | 15 | 9 |
            | R1 | place | city | 8 | (1,2,3,4,1) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress | cached_rank_address |
            | N1 | R1 | True | 16 |
        And place_addressline doesn't contain
            | object | address |
            | N1 | N2 |


    Scenario: place nodes close enough to smaller ranked place nodes are included
@@ -184,9 +191,12 @@ Feature: Address computation
            | W10 | boundary | administrative | 5 | (1, 2, 8, 5, 4, 1) |
            | W11 | boundary | administrative | 5 | (2, 3, 6, 5, 8, 2) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | cached_rank_address |
            | W1 | W10 | 10 |
        Then place_addressline doesn't contain
            | object | address |
            | W1 | W11 |

    Scenario: Roads should not contain boundaries they touch in a middle point
        Given the grid
@@ -201,9 +211,12 @@ Feature: Address computation
            | W10 | boundary | administrative | 5 | (1, 2, 8, 5, 4, 1) |
            | W11 | boundary | administrative | 5 | (2, 3, 6, 5, 8, 2) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | cached_rank_address |
            | W1 | W10 | 10 |
        Then place_addressline doesn't contain
            | object | address |
            | W1 | W11 |

    Scenario: Locality points should contain all boundaries they touch
        Given the 0.001 grid
@@ -235,8 +248,9 @@ Feature: Address computation
            | osm | class | type | admin | geometry |
            | W10 | boundary | administrative | 5 | (2, 3, 6, 5, 2) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline doesn't contain
            | object | address |
            | W1 | W10 |

    Scenario: buildings with only addr:postcodes do not appear in the address of a way
        Given the grid with origin DE
@@ -259,14 +273,9 @@ Feature: Address computation
            | osm | class | type | addr+postcode | geometry |
            | W22 | place | postcode | 11234 | (10,11,12,13,10) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline doesn't contain
            | object | address |
            | R4 | R1 |
            | R4 | R34 |
            | R34 | R1 |
            | W93 | R1 |
            | W93 | R34 |
            | W93 | R4 |
            | W93 | W22 |

    Scenario: postcode boundaries do appear in the address of a way
        Given the grid with origin DE
@@ -305,8 +314,9 @@ Feature: Address computation
            | W1 | highway | residential | 8, 9 |
            | W2 | place | square | (1, 2, 3 ,4, 1) |
        When importing
        Then place_addressline contains exactly
        Then place_addressline doesn't contain
            | object | address |
            | W1 | W2 |

    Scenario: addr:* tags are honored even when a street is far away from the place
        Given the grid
@@ -322,11 +332,14 @@ Feature: Address computation
            | W1 | highway | primary | Left | 8,9 |
            | W2 | highway | primary | Right | 8,9 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | True |
            | W1 | R2 | False |
            | W2 | R2 | True |
        And place_addressline doesn't contain
            | object | address |
            | W2 | R1 |


    Scenario: addr:* tags are honored even when a POI is far away from the place
@@ -343,14 +356,17 @@ Feature: Address computation
            | W1 | highway | primary | Wonderway | Right | 8,9 |
            | N1 | amenity | cafe | Bolder | Left | 9 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R2 | True |
            | N1 | R1 | True |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Left |
        And place_addressline doesn't contain
            | object | address |
            | W1 | R1 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Left |

    Scenario: addr:* tags do not produce addresslines when the parent has the address part
        Given the grid
@@ -365,13 +381,16 @@ Feature: Address computation
            | W1 | highway | primary | Wonderway | Outer | 8,9 |
            | N1 | amenity | cafe | Bolder | Outer | 9 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | True |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Outer |
        And place_addressline doesn't contain
            | object | address |
            | N1 | R1 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Outer |

    Scenario: addr:* tags on outside do not produce addresslines when the parent has the address part
        Given the grid
@@ -387,14 +406,17 @@ Feature: Address computation
            | W1 | highway | primary | Wonderway | Left | 8,9 |
            | N1 | amenity | cafe | Bolder | Left | 9 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | True |
            | W1 | R2 | False |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Left |
        And place_addressline doesn't contain
            | object | address |
            | N1 | R1 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Left |

    Scenario: POIs can correct address parts on the fly
        Given the grid
@@ -411,18 +433,22 @@ Feature: Address computation
            | N1 | amenity | cafe | Bolder | 9 |
            | N2 | amenity | cafe | Leftside | 8 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | False |
            | W1 | R2 | True |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Left |
        When geocoding "Leftside"
        Then the result set contains
            | object | display_name |
            | N2 | Leftside, Wonderway, Right |
        And place_addressline doesn't contain
            | object | address |
            | N1 | R1 |
            | N2 | R2 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Left |
        When sending search query "Leftside"
        Then results contain
            | osm | display_name |
            | N2 | Leftside, Wonderway, Right |


    Scenario: POIs can correct address parts on the fly (with partial unmatching address)
@@ -443,18 +469,22 @@ Feature: Address computation
            | N1 | amenity | cafe | Bolder | Boring | 9 |
            | N2 | amenity | cafe | Leftside | Boring | 8 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | True |
            | W1 | R2 | False |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Left |
        When geocoding "Leftside"
        Then the result set contains
            | object | display_name |
            | N2 | Leftside, Wonderway, Right |
        And place_addressline doesn't contain
            | object | address |
            | N1 | R1 |
            | N2 | R2 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Left |
        When sending search query "Leftside"
        Then results contain
            | osm | display_name |
            | N2 | Leftside, Wonderway, Right |


@@ -476,26 +506,30 @@ Feature: Address computation
            | N1 | amenity | cafe | Bolder | Left | 9 |
            | N2 | amenity | cafe | Leftside | Left | 8 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline contains
            | object | address | isaddress |
            | W1 | R1 | True |
            | W1 | R2 | False |
        When geocoding "Bolder"
        Then the result set contains
            | object | display_name |
            | N1 | Bolder, Wonderway, Left |
        When geocoding "Leftside"
        Then the result set contains
            | object | display_name |
            | N2 | Leftside, Wonderway, Left |
        And place_addressline doesn't contain
            | object | address |
            | N1 | R1 |
            | N2 | R2 |
        When sending search query "Bolder"
        Then results contain
            | osm | display_name |
            | N1 | Bolder, Wonderway, Left |
        When sending search query "Leftside"
        Then results contain
            | osm | display_name |
            | N2 | Leftside, Wonderway, Left |

    Scenario: addr:* tags always match the closer area
        Given the grid
            | 1 | | | | 2 | | 5 |
            | | | | | | | |
            | 4 | | | | 3 | | 6 |
            | | 10| 11| | | | |
            | 4 | | | | 3 | | 6 |
        And the places
            | osm | class | type | admin | name | geometry |
            | R1 | boundary | administrative | 8 | Left | (1,2,3,4,1) |
@@ -504,9 +538,9 @@ Feature: Address computation
            | osm | class | type | name | addr+city | geometry |
            | W1 | highway | primary | Wonderway | Left | 10,11 |
        When importing
        Then place_addressline contains exactly
        Then place_addressline doesn't contain
            | object | address |
            | W1 | R1 |
            | W1 | R2 |

    Scenario: Full name is prefered for unlisted addr:place tags
        Given the grid
@@ -525,7 +559,7 @@ Feature: Address computation
            | osm | class | type | housenr | addr+street | geometry |
            | N2 | place | house | 2 | Royal Terrace | 2 |
        When importing
        When geocoding "1, Royal Terrace Gardens"
        Then result 0 contains
            | object |
            | N1 |
        When sending search query "1, Royal Terrace Gardens"
        Then results contain
            | ID | osm |
            | 0 | N1 |
@@ -1,3 +1,4 @@
@DB
Feature: Country handling
    Tests for import and use of country information

@@ -9,16 +10,16 @@ Feature: Country handling
            | osm | class | type | name | geometry |
            | N1 | place | town | Wenig | country:de |
        When importing
        When geocoding "Wenig, Loudou"
        Then the result set contains
            | object | display_name |
            | N1 | Wenig, Deutschland |
        When geocoding "Wenig"
        When sending search query "Wenig, Loudou"
        Then results contain
            | osm | display_name |
            | N1 | Wenig, Deutschland |
        When sending search query "Wenig"
            | accept-language |
            | xy,en |
        Then the result set contains
            | object | display_name |
            | N1 | Wenig, Loudou |
        Then results contain
            | osm | display_name |
            | N1 | Wenig, Loudou |

    Scenario: OSM country relations outside expected boundaries are ignored for naming
        Given the grid
@@ -31,12 +32,12 @@ Feature: Country handling
            | osm | class | type | name | geometry |
            | N1 | place | town | Wenig | country:de |
        When importing
        When geocoding "Wenig"
        When sending search query "Wenig"
            | accept-language |
            | xy,en |
        Then the result set contains
            | object | display_name |
            | N1 | Wenig, Germany |
        Then results contain
            | osm | display_name |
            | N1 | Wenig, Germany |

    Scenario: Pre-defined country names are used
        Given the grid with origin CH
@@ -45,12 +46,12 @@ Feature: Country handling
            | osm | class | type | name | geometry |
            | N1 | place | town | Ingb | 1 |
        When importing
        And geocoding "Ingb"
        And sending search query "Ingb"
            | accept-language |
            | en,de |
        Then the result set contains
            | object | display_name |
            | N1 | Ingb, Switzerland |
        Then results contain
            | osm | display_name |
            | N1 | Ingb, Switzerland |

    Scenario: For overlapping countries, pre-defined countries are tie-breakers
        Given the grid with origin US
@@ -1,3 +1,4 @@
@DB
Feature: Import of address interpolations
Tests that interpolated addresses are added correctly

@@ -59,7 +60,7 @@ Feature: Import of address interpolations
When importing
Then W1 expands to interpolation
| start | end | geometry |
| 4 | 6 | 8,9 |
| 4 | 6 | 9,8 |

Scenario: Simple odd two point interpolation
Given the grid with origin 1,1
@@ -226,8 +227,8 @@ Feature: Import of address interpolations

Scenario: Even three point interpolation line with odd center point
Given the grid
| 1 | | 10 | | 11 | 3 | 2 |
| 4 | | | | | | 5 |
| 1 | | 10 | | | 11 | 3 | 2 |
| 4 | | | | | | | 5 |
Given the places
| osm | class | type | housenr |
| N1 | place | house | 2 |
@@ -331,14 +332,14 @@ Feature: Import of address interpolations
Then W11 expands to interpolation
| parent_place_id | start | end |
| W3 | 14 | 14 |
When geocoding "16 Cloud Street"
Then result 0 contains
| object |
| N4 |
When geocoding "14 Cloud Street"
Then result 0 contains
| object |
| W11 |
When sending search query "16 Cloud Street"
Then results contain
| ID | osm |
| 0 | N4 |
When sending search query "14 Cloud Street"
Then results contain
| ID | osm |
| 0 | W11 |

Scenario: addr:street on housenumber way
Given the grid
@@ -376,14 +377,14 @@ Feature: Import of address interpolations
Then W11 expands to interpolation
| parent_place_id | start | end |
| W3 | 14 | 14 |
When geocoding "16 Cloud Street"
Then result 0 contains
| object |
| N4 |
When geocoding "14 Cloud Street"
Then result 0 contains
| object |
| W11 |
When sending search query "16 Cloud Street"
Then results contain
| ID | osm |
| 0 | N4 |
When sending search query "14 Cloud Street"
Then results contain
| ID | osm |
| 0 | W11 |

Scenario: Geometry of points and way don't match (github #253)
Given the places
@@ -403,7 +404,7 @@ Feature: Import of address interpolations
When importing
Then W1 expands to interpolation
| start | end | geometry |
| 4 | 4 | 144.96301672 -37.76294644 |
| 4 | 4 | 144.963016 -37.762946 |
| 8 | 8 | 144.96314407 -37.762223692 |

Scenario: Place with missing address information
@@ -427,7 +428,7 @@ Feature: Import of address interpolations
When importing
Then W1 expands to interpolation
| start | end | geometry |
| 25 | 27 | 0.0000166 0,0.00002 0,0.0000333 0 |
| 25 | 27 | 0.000016 0,0.00002 0,0.000033 0 |

Scenario: Ways without node entries are ignored
Given the places
@@ -477,10 +478,10 @@ Feature: Import of address interpolations
Then W1 expands to interpolation
| start | end | geometry |
| 2 | 8 | 10,11 |
When reverse geocoding 1,1
Then the result contains
| object | type | display_name |
| N1 | house | 0, London Road |
When sending v1/reverse at 1,1
Then results contain
| ID | osm | type | display_name |
| 0 | N1 | house | 0, London Road |

Scenario: Parenting of interpolation with additional tags
Given the grid
@@ -1,3 +1,4 @@
@DB
Feature: Linking of places
Tests for correctly determining linked places

@@ -52,10 +53,10 @@ Feature: Linking of places
| W2 | R13 |
| R13 | - |
| R23 | - |
When geocoding "rhein"
Then the result set contains
| object |
| R13 |
When sending search query "rhein"
Then results contain
| osm |
| R13 |

Scenario: Relations are not linked when in waterway relations
Given the grid
@@ -78,13 +79,11 @@ Feature: Linking of places
| W2 | - |
| R1 | - |
| R2 | - |
When geocoding "rhein"
Then result 0 contains
| object |
| R1 |
And result 1 contains
| object |
| W2 |
When sending search query "rhein"
Then results contain
| ID | osm |
| 0 | R1 |
| 1 | W2 |


Scenario: Empty waterway relations are handled correctly
@@ -137,9 +136,9 @@ Feature: Linking of places
| object | linked_place_id |
| W1 | - |
| W2 | R1 |
When geocoding "rhein2"
Then the result set contains
| object |
When sending search query "rhein2"
Then results contain
| osm |
| W1 |

# github #573
@@ -181,8 +180,8 @@ Feature: Linking of places
| object | linked_place_id |
| N2 | R13 |
And placex contains
| object | centroid!wkt | name+name | extratags+linked_place |
| R13 | 9 | Garbo | hamlet |
| object | centroid | name+name | extratags+linked_place |
| R13 | 9 | Garbo | hamlet |

Scenario: Boundaries with place tags are linked against places with same type
Given the 0.01 grid
@@ -202,18 +201,18 @@ Feature: Linking of places
And placex contains
| object | rank_address |
| R13 | 16 |
When geocoding ""
When sending search query ""
| city |
| Berlin |
Then result 0 contains
| object |
| R13 |
When geocoding ""
Then results contain
| ID | osm |
| 0 | R13 |
When sending search query ""
| state |
| Berlin |
Then result 0 contains
| object |
| R13 |
Then results contain
| ID | osm |
| 0 | R13 |


Scenario: Boundaries without place tags only link against same admin level
@@ -234,18 +233,18 @@ Feature: Linking of places
And placex contains
| object | rank_address |
| R13 | 8 |
When geocoding ""
When sending search query ""
| state |
| Berlin |
Then result 0 contains
| object |
| R13 |
When geocoding ""
Then results contain
| ID | osm |
| 0 | R13 |
When sending search query ""
| city |
| Berlin |
Then result 0 contains
| object |
| N2 |
Then results contain
| ID | osm |
| 0 | N2 |

# github #1352
Scenario: Do not use linked centroid when it is outside the area
@@ -267,8 +266,8 @@ Feature: Linking of places
| object | linked_place_id |
| N2 | R13 |
And placex contains
| object | centroid!in_box |
| R13 | 0,0,0.1,0.1 |
| object | centroid |
| R13 | in geometry |

Scenario: Place nodes can only be linked once
Given the 0.02 grid
@@ -287,7 +286,7 @@ Feature: Linking of places
| object | linked_place_id |
| N2 | R1 |
And placex contains
| object | extratags!dict |
| object | extratags |
| R1 | 'linked_place' : 'city', 'wikidata': 'Q1234' |
| R2 | 'wikidata': 'Q1234' |

@@ -311,22 +310,3 @@ Feature: Linking of places
| object | name+_place_name |
| R1 | LabelPlace |


@skip
Scenario: Linked places expand default language names
Given the grid
| 1 | | 2 |
| | 9 | |
| 4 | | 3 |
Given the places
| osm | class | type | name+name | geometry |
| N9 | place | city | Popayán | 9 |
| R1 | boundary | administrative | Perímetro Urbano Popayán | (1,2,3,4,1) |
And the relations
| id | members |
| 1 | N9:label |
When importing
Then placex contains
| object | name+_place_name | name+_place_name:es |
| R1 | Popayán | Popayán |
105
test/bdd/db/import/naming.feature
Normal file
@@ -0,0 +1,105 @@
@DB
Feature: Import and search of names
Tests all naming related import issues

Scenario: No copying name tag if only one name
Given the places
| osm | class | type | name | geometry |
| N1 | place | locality | german | country:de |
When importing
Then placex contains
| object | country_code | name+name |
| N1 | de | german |

Scenario: Copying name tag to default language if it does not exist
Given the places
| osm | class | type | name | name+name:fi | geometry |
| N1 | place | locality | german | finnish | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | german |

Scenario: Copying default language name tag to name if it does not exist
Given the places
| osm | class | type | name+name:de | name+name:fi | geometry |
| N1 | place | locality | german | finnish | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | german |

Scenario: Do not overwrite default language with name tag
Given the places
| osm | class | type | name | name+name:fi | name+name:de | geometry |
| N1 | place | locality | german | finnish | local | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | local |

Scenario Outline: Names in any script can be found
Given the places
| osm | class | type | name |
| N1 | place | hamlet | <name> |
When importing
And sending search query "<name>"
Then results contain
| osm |
| N1 |

Examples:
| name |
| Berlin |
| 北京 |
| Вологда |
| Αθήνα |
| القاهرة |
| រាជធានីភ្នំពេញ |
| 東京都 |
| ပုဗ္ဗသီရိ |


Scenario: German umlauts can be found when expanded
Given the places
| osm | class | type | name+name:de |
| N1 | place | city | Münster |
| N2 | place | city | Köln |
| N3 | place | city | Gräfenroda |
When importing
When sending search query "münster"
Then results contain
| osm |
| N1 |
When sending search query "muenster"
Then results contain
| osm |
| N1 |
When sending search query "munster"
Then results contain
| osm |
| N1 |
When sending search query "Köln"
Then results contain
| osm |
| N2 |
When sending search query "Koeln"
Then results contain
| osm |
| N2 |
When sending search query "Koln"
Then results contain
| osm |
| N2 |
When sending search query "gräfenroda"
Then results contain
| osm |
| N3 |
When sending search query "graefenroda"
Then results contain
| osm |
| N3 |
When sending search query "grafenroda"
Then results contain
| osm |
| N3 |
@@ -1,3 +1,4 @@
@DB
Feature: Parenting of objects
Tests that the correct parent is chosen

@@ -20,14 +21,14 @@ Feature: Parenting of objects
| object | parent_place_id |
| N1 | W1 |
| N2 | W1 |
When geocoding "4 galoo"
Then result 0 contains
| object | display_name |
| N1 | 4, galoo, 12345, Deutschland |
When geocoding "5 galoo"
Then result 0 contains
| object | display_name |
| N2 | 5, galoo, 99999, Deutschland |
When sending search query "4 galoo"
Then results contain
| ID | osm | display_name |
| 0 | N1 | 4, galoo, 12345, Deutschland |
When sending search query "5 galoo"
Then results contain
| ID | osm | display_name |
| 0 | N2 | 5, galoo, 99999, Deutschland |

Scenario: Address without tags, closest street
Given the grid
@@ -483,9 +484,9 @@ Feature: Parenting of objects
| N1 | W3 | 3 |
| N2 | W3 | 3 |
| N3 | W3 | 3 |
When geocoding "3, foo"
Then the result set contains
| address+house_number |
When sending geocodejson search query "3, foo" with address
Then results contain
| housenumber |
| 3 |

Scenario: POIs don't inherit from streets
@@ -1,3 +1,4 @@
@DB
Feature: Import into placex
Tests that data in placex is completed correctly.

@@ -7,8 +8,8 @@ Feature: Import into placex
| N1 | highway | primary | country:us |
When importing
Then placex contains
| object | address | country_code |
| N1 | - | us |
| object | addr+country | country_code |
| N1 | - | us |

Scenario: Location overwrites country code tag
Given the named places
@@ -1,3 +1,4 @@
@DB
Feature: Import of postcodes
Tests for postcode estimation

@@ -175,10 +176,11 @@ Feature: Import of postcodes
| N34 | place | house | 01982 | 111 |country:de |
When importing
Then location_postcode contains exactly
| country_code | postcode | geometry!wkt |
| de | 01982 | country:de |
| country | postcode | geometry |
| de | 01982 | country:de |

@skip

@Fail
Scenario: search and address ranks for GB post codes correctly assigned
Given the places
| osm | class | type | postcode | geometry |
@@ -187,10 +189,10 @@ Feature: Import of postcodes
| N3 | place | postcode | Y45 | country:gb |
When importing
Then location_postcode contains exactly
| postcode | country_code | rank_search | rank_address |
| E45 2CD | gb | 25 | 5 |
| E45 2 | gb | 23 | 5 |
| Y45 | gb | 21 | 5 |
| postcode | country | rank_search | rank_address |
| E45 2CD | gb | 25 | 5 |
| E45 2 | gb | 23 | 5 |
| Y45 | gb | 21 | 5 |

Scenario: Postcodes outside all countries are not added to the postcode table
Given the places
@@ -201,8 +203,8 @@ Feature: Import of postcodes
| N1 | place | hamlet | Null Island | 0 0 |
When importing
Then location_postcode contains exactly
| place_id |
When geocoding "111, 01982 Null Island"
Then the result set contains
| object | display_name |
| N34 | 111, Null Island, 01982 |
| country | postcode | geometry |
When sending search query "111, 01982 Null Island"
Then results contain
| osm | display_name |
| N34 | 111, Null Island, 01982 |
@@ -1,3 +1,4 @@
@DB
Feature: Rank assignment
Tests for assignment of search and address ranks.

@@ -172,9 +173,13 @@ Feature: Rank assignment
| R23 | 20 | 0 |
| R21 | 18 | 0 |
| R22 | 16 | 16 |
Then place_addressline contains exactly
Then place_addressline contains
| object | address | cached_rank_address |
| N20 | R22 | 16 |
Then place_addressline doesn't contain
| object | address |
| N20 | R21 |
| N20 | R23 |

Scenario: adjacent admin_levels are considered different objects when they have different wikidata
Given the named places
@@ -1,3 +1,4 @@
@DB
Feature: Creation of search terms
Tests that search_name table is filled correctly

@@ -5,40 +6,43 @@ Feature: Creation of search terms
Given the places
| osm | class | type | name+alt_name |
| N1 | place | city | New York; Big Apple |
| N2 | place | town | New York Big Apple |
When importing
And geocoding "New York Big Apple"
Then result 0 contains
| object |
| N2 |
Then search_name contains
| object | name_vector |
| N1 | #New York, #Big Apple |

Scenario: Comma-separated names appear as a single full name
Given the places
| osm | class | type | name+name |
| osm | class | type | name+alt_name |
| N1 | place | city | New York, Big Apple |
| N2 | place | town | New York Big Apple |
When importing
And geocoding "New York Big Apple"
Then result 0 contains
| object |
| N1 |
Then search_name contains
| object | name_vector |
| N1 | #New York Big Apple |

Scenario: Name parts before brackets appear as full names
Given the places
| osm | class | type | name+name |
| N1 | place | city | Halle (Saale) |
| N2 | place | town | Halle |
When importing
And geocoding "Halle"
Then result 0 contains
| object |
| N1 |
When geocoding "Halle (Saale)"
Then the result set contains
| object |
| N1 |
Then search_name contains
| object | name_vector |
| N1 | #Halle Saale, #Halle |

Scenario: Unknown addr: tags can be found for unnamed POIs
Scenario: Unnamed POIs have no search entry
Given the grid
| | 1 | | |
| 10 | | | 11 |
And the places
| osm | class | type |
| N1 | place | house |
And the named places
| osm | class | type | geometry |
| W1 | highway | residential | 10,11 |
When importing
Then search_name has no entry for N1

Scenario: Unnamed POI has a search entry when it has unknown addr: tags
Given the grid
| | 1 | | |
| 10 | | | 11 |
@@ -49,18 +53,21 @@ Feature: Creation of search terms
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
When geocoding "23 Rose Street, Walltown"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "Walltown, Rose Street 23"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "Rose Street 23, Walltown"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
Then search_name contains
| object | nameaddress_vector |
| N1 | #Rose Street, Walltown |
When sending search query "23 Rose Street, Walltown"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "Walltown, Rose Street 23"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "Rose Street 23, Walltown"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |

Scenario: Searching for unknown addr: tags also works for multiple words
Given the grid
@@ -73,20 +80,23 @@ Feature: Creation of search terms
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
When geocoding "23 Rose Street, Little Big Town"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "Rose Street 23, Little Big Town"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "Little big Town, Rose Street 23"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
Then search_name contains
| object | nameaddress_vector |
| N1 | #Rose Street, rose, Little, Big, Town |
When sending search query "23 Rose Street, Little Big Town"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "Rose Street 23, Little Big Town"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "Little big Town, Rose Street 23"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |

Scenario: Unnamed POI can be found when it has known addr: tags
Scenario: Unnamed POI has no search entry when it has known addr: tags
Given the grid
| | 1 | | |
| 10 | | | 11 |
@@ -97,10 +107,24 @@ Feature: Creation of search terms
| osm | class | type | name+name | addr+city | geometry |
| W1 | highway | residential | Rose Street | Walltown | 10,11 |
When importing
When geocoding "23 Rose Street, Walltown"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
Then search_name has no entry for N1
When sending search query "23 Rose Street, Walltown"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |

Scenario: Unnamed POI must have a house number to get a search entry
Given the grid
| | 1 | | |
| 10 | | | 11 |
And the places
| osm | class | type | addr+city |
| N1 | place | house | Walltown |
And the places
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
Then search_name has no entry for N1

Scenario: Unnamed POIs inherit parent name when unknown addr:place is present
Given the grid
@@ -118,22 +142,23 @@ Feature: Creation of search terms
Then placex contains
| object | parent_place_id |
| N1 | R1 |
When geocoding "23 Rose Street"
Then all results contain
| object | display_name |
| W1 | Rose Street, Strange Town |
When geocoding "23 Walltown, Strange Town"
Then the result set contains
| object | display_name |
| N1 | 23, Walltown, Strange Town |
When geocoding "Walltown 23, Strange Town"
Then the result set contains
| object | display_name |
| N1 | 23, Walltown, Strange Town |
When geocoding "Strange Town, Walltown 23"
Then the result set contains
| object | display_name |
| N1 | 23, Walltown, Strange Town |
When sending search query "23 Rose Street"
Then exactly 1 results are returned
And results contain
| osm | display_name |
| W1 | Rose Street, Strange Town |
When sending search query "23 Walltown, Strange Town"
Then results contain
| osm | display_name |
| N1 | 23, Walltown, Strange Town |
When sending search query "Walltown 23, Strange Town"
Then results contain
| osm | display_name |
| N1 | 23, Walltown, Strange Town |
When sending search query "Strange Town, Walltown 23"
Then results contain
| osm | display_name |
| N1 | 23, Walltown, Strange Town |

Scenario: Named POIs can be searched by housenumber when unknown addr:place is present
Given the grid
@@ -148,26 +173,26 @@ Feature: Creation of search terms
| W1 | highway | residential | Rose Street | 10,11 |
| R1 | place | city | Strange Town | (100,101,102,103,100) |
When importing
When geocoding "23 Walltown, Strange Town"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When geocoding "Walltown 23, Strange Town"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When geocoding "Strange Town, Walltown 23"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When geocoding "Strange Town, Walltown 23, Blue house"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When geocoding "Strange Town, Walltown, Blue house"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When sending search query "23 Walltown, Strange Town"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When sending search query "Walltown 23, Strange Town"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When sending search query "Strange Town, Walltown 23"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When sending search query "Strange Town, Walltown 23, Blue house"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |
When sending search query "Strange Town, Walltown, Blue house"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Walltown, Strange Town |

Scenario: Named POIs can be found when unknown multi-word addr:place is present
Given the grid
@@ -182,14 +207,14 @@ Feature: Creation of search terms
| W1 | highway | residential | Rose Street | 10,11 |
| R1 | place | city | Strange Town | (100,101,102,103,100) |
When importing
When geocoding "23 Moon Sun, Strange Town"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Moon sun, Strange Town |
When geocoding "Blue house, Moon Sun, Strange Town"
Then the result set contains
| object | display_name |
| N1 | Blue house, 23, Moon sun, Strange Town |
When sending search query "23 Moon Sun, Strange Town"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Moon sun, Strange Town |
When sending search query "Blue house, Moon Sun, Strange Town"
Then results contain
| osm | display_name |
| N1 | Blue house, 23, Moon sun, Strange Town |

Scenario: Unnamed POIs doesn't inherit parent name when addr:place is present only in parent address
Given the grid
@@ -204,14 +229,16 @@ Feature: Creation of search terms
| W1 | highway | residential | Rose Street | Walltown | 10,11 |
| R1 | place | suburb | Strange Town | Walltown | (100,101,102,103,100) |
When importing
When geocoding "23 Rose Street, Walltown"
Then all results contain
| object | display_name |
| W1 | Rose Street, Strange Town |
When geocoding "23 Walltown"
Then all results contain
| object | display_name |
| N1 | 23, Walltown, Strange Town |
When sending search query "23 Rose Street, Walltown"
Then exactly 1 result is returned
And results contain
| osm | display_name |
| W1 | Rose Street, Strange Town |
When sending search query "23 Walltown"
Then exactly 1 result is returned
And results contain
| osm | display_name |
| N1 | 23, Walltown, Strange Town |

Scenario: Unnamed POIs does inherit parent name when unknown addr:place and addr:street is present
Given the grid
@@ -224,11 +251,12 @@ Feature: Creation of search terms
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
When geocoding "23 Rose Street"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "23 Lily Street"
Then search_name has no entry for N1
When sending search query "23 Rose Street"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "23 Lily Street"
Then exactly 0 results are returned

Scenario: An unknown addr:street is ignored
@@ -242,14 +270,15 @@ Feature: Creation of search terms
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
When geocoding "23 Rose Street"
Then the result set contains
| object | display_name |
| N1 | 23, Rose Street |
When geocoding "23 Lily Street"
Then search_name has no entry for N1
When sending search query "23 Rose Street"
Then results contain
| osm | display_name |
| N1 | 23, Rose Street |
When sending search query "23 Lily Street"
Then exactly 0 results are returned

Scenario: Named POIs can be found through unknown address tags
Scenario: Named POIs get unknown address tags added in the search_name table
Given the grid
| | 1 | | |
| 10 | | | 11 |
@@ -260,26 +289,29 @@ Feature: Creation of search terms
| osm | class | type | name+name | geometry |
| W1 | highway | residential | Rose Street | 10,11 |
When importing
When geocoding "Green Moss, Rose Street, Walltown"
Then the result set contains
| object | display_name |
| N1 | Green Moss, 26, Rose Street |
When geocoding "Green Moss, 26, Rose Street, Walltown"
Then the result set contains
| object | display_name |
| N1 | Green Moss, 26, Rose Street |
When geocoding "26, Rose Street, Walltown"
Then the result set contains
| object | display_name |
| N1 | Green Moss, 26, Rose Street |
When geocoding "Rose Street 26, Walltown"
Then the result set contains
| object | display_name |
| N1 | Green Moss, 26, Rose Street |
When geocoding "Walltown, Rose Street 26"
Then the result set contains
| object | display_name |
| N1 | Green Moss, 26, Rose Street |
Then search_name contains
| object | name_vector | nameaddress_vector |
| N1 | #Green Moss | #Rose Street, Walltown |
When sending search query "Green Moss, Rose Street, Walltown"
Then results contain
| osm | display_name |
| N1 | Green Moss, 26, Rose Street |
When sending search query "Green Moss, 26, Rose Street, Walltown"
Then results contain
| osm | display_name |
| N1 | Green Moss, 26, Rose Street |
When sending search query "26, Rose Street, Walltown"
Then results contain
| osm | display_name |
| N1 | Green Moss, 26, Rose Street |
When sending search query "Rose Street 26, Walltown"
Then results contain
| osm | display_name |
| N1 | Green Moss, 26, Rose Street |
When sending search query "Walltown, Rose Street 26"
Then results contain
| osm | display_name |
| N1 | Green Moss, 26, Rose Street |

Scenario: Named POI doesn't inherit parent name when addr:place is present only in parent address
Given the grid
@@ -294,12 +326,12 @@ Feature: Creation of search terms
| W1 | highway | residential | Rose Street | 10,11 |
| R1 | place | suburb | Strange Town | (100,101,102,103,100) |
When importing
When geocoding "Green Moss, Rose Street, Walltown"
Then exactly 0 results are returned
When geocoding "Green Moss, Walltown"
Then the result set contains
| object | display_name |
| N1 | Green Moss, Walltown, Strange Town |
When sending search query "Green Moss, Rose Street, Walltown"
Then exactly 0 result is returned
When sending search query "Green Moss, Walltown"
Then results contain
| osm | display_name |
| N1 | Green Moss, Walltown, Strange Town |

Scenario: Named POIs inherit address from parent
Given the grid
@@ -310,10 +342,9 @@ Feature: Creation of search terms
| N1 | place | house | foo | 1 |
| W1 | highway | residential | the road | 10,11 |
When importing
When geocoding "foo, the road"
Then all results contain
| object |
| N1 |
Then search_name contains
| object | name_vector | nameaddress_vector |
| N1 | foo | #the road |

Scenario: Some addr: tags are added to address
Given the grid
@@ -323,14 +354,13 @@ Feature: Creation of search terms
| osm | class | type | name |
| N2 | place | city | bonn |
| N3 | place | suburb | smalltown|
And the places
| osm | class | type | name | addr+city | addr+municipality | addr+suburb | geometry |
| W1 | highway | service | the end | bonn | New York | Smalltown | 10,11 |
And the named places
| osm | class | type | addr+city | addr+municipality | addr+suburb | geometry |
| W1 | highway | service | bonn | New York | Smalltown | 10,11 |
When importing
When geocoding "the end, new york, bonn, smalltown"
Then all results contain
| object |
| W1 |
Then search_name contains
| object | nameaddress_vector |
| W1 | bonn, new, york, smalltown |

Scenario: A known addr:* tag is added even if the name is unknown
Given the grid
@@ -339,22 +369,36 @@ Feature: Creation of search terms
| osm | class | type | name | addr+city | geometry |
| W1 | highway | residential | Road | Nandu | 10,11 |
When importing
And geocoding "Road, Nandu"
Then all results contain
| object |
| W1 |
Then search_name contains
| object | nameaddress_vector |
| W1 | nandu |

Scenario: addr:postcode is not added to the address terms
Given the grid with origin DE
| | 1 | | |
| 10 | | | 11 |
And the places
| osm | class | type | name+ref |
| N1 | place | state | 12345 |
And the named places
| osm | class | type | addr+postcode | geometry |
| W1 | highway | residential | 12345 | 10,11 |
When importing
Then search_name contains not
| object | nameaddress_vector |
| W1 | 12345 |

Scenario: a linked place does not show up in search name
Given the 0.01 grid
| 10 | | 11 |
| | 2 | |
| 13 | | 12 |
Given the places
| osm | class | type | name | admin | geometry |
| R13 | boundary | administrative | Roma | 9 | (10,11,12,13,10) |
And the places
| osm | class | type | name |
| N2 | place | city | Cite |
Given the named places
| osm | class | type | admin | geometry |
| R13 | boundary | administrative | 9 | (10,11,12,13,10) |
And the named places
| osm | class | type |
| N2 | place | city |
And the relations
| id | members | tags+type |
| 13 | N2:label | boundary |
@@ -362,10 +406,7 @@ Feature: Creation of search terms
Then placex contains
| object | linked_place_id |
|
||||
| N2 | R13 |
|
||||
When geocoding "Cite"
|
||||
Then all results contain
|
||||
| object |
|
||||
| R13 |
|
||||
And search_name has no entry for N2
|
||||
|
||||
Scenario: a linked waterway does not show up in search name
|
||||
Given the grid
|
||||
@@ -383,7 +424,5 @@ Feature: Creation of search terms
|
||||
| object | linked_place_id |
|
||||
| W1 | R13 |
|
||||
| W2 | R13 |
|
||||
When geocoding "Rhein"
|
||||
Then all results contain
|
||||
| object |
|
||||
| R13 |
|
||||
And search_name has no entry for W1
|
||||
And search_name has no entry for W2
|
||||
@@ -1,3 +1,4 @@
@DB
Feature: Searching of house numbers
Test for specialised treatment of housenumbers

@@ -16,13 +17,13 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | North Road | 1,2,3 |
When importing
And geocoding "45, North Road"
Then the result set contains
| object |
And sending search query "45, North Road"
Then results contain
| osm |
| N1 |
When geocoding "North Road 45"
Then the result set contains
| object |
When sending search query "North Road 45"
Then results contain
| osm |
| N1 |


@@ -34,17 +35,17 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | North Road | 1,2,3 |
When importing
And geocoding "45, North Road"
Then the result set contains
| object |
And sending search query "45, North Road"
Then results contain
| osm |
| N1 |
When geocoding "North Road ④⑤"
Then the result set contains
| object |
When sending search query "North Road ④⑤"
Then results contain
| osm |
| N1 |
When geocoding "North Road 𑁪𑁫"
Then the result set contains
| object |
When sending search query "North Road 𑁪𑁫"
Then results contain
| osm |
| N1 |

Examples:
@@ -62,17 +63,17 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Multistr | 1,2,3 |
When importing
When geocoding "2 Multistr"
Then the result set contains
| object |
When sending search query "2 Multistr"
Then results contain
| osm |
| N1 |
When geocoding "4 Multistr"
Then the result set contains
| object |
When sending search query "4 Multistr"
Then results contain
| osm |
| N1 |
When geocoding "12 Multistr"
Then the result set contains
| object |
When sending search query "12 Multistr"
Then results contain
| osm |
| N1 |

Examples:
@@ -90,21 +91,21 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Multistr | 1,2,3 |
When importing
When geocoding "2A Multistr"
Then the result set contains
| object |
When sending search query "2A Multistr"
Then results contain
| osm |
| N1 |
When geocoding "2 a Multistr"
Then the result set contains
| object |
When sending search query "2 a Multistr"
Then results contain
| osm |
| N1 |
When geocoding "2-A Multistr"
Then the result set contains
| object |
When sending search query "2-A Multistr"
Then results contain
| osm |
| N1 |
When geocoding "Multistr 2 A"
Then the result set contains
| object |
When sending search query "Multistr 2 A"
Then results contain
| osm |
| N1 |

Examples:
@@ -123,21 +124,21 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Chester St | 1,2,3 |
When importing
When geocoding "34-10 Chester St"
Then the result set contains
| object |
When sending search query "34-10 Chester St"
Then results contain
| osm |
| N1 |
When geocoding "34/10 Chester St"
Then the result set contains
| object |
When sending search query "34/10 Chester St"
Then results contain
| osm |
| N1 |
When geocoding "34 10 Chester St"
Then the result set contains
| object |
When sending search query "34 10 Chester St"
Then results contain
| osm |
| N1 |
When geocoding "3410 Chester St"
Then the result set contains
| object |
When sending search query "3410 Chester St"
Then results contain
| osm |
| W10 |

Examples:
@@ -155,21 +156,21 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Rue Paris | 1,2,3 |
When importing
When geocoding "Rue Paris 45bis"
Then the result set contains
| object |
When sending search query "Rue Paris 45bis"
Then results contain
| osm |
| N1 |
When geocoding "Rue Paris 45 BIS"
Then the result set contains
| object |
When sending search query "Rue Paris 45 BIS"
Then results contain
| osm |
| N1 |
When geocoding "Rue Paris 45BIS"
Then the result set contains
| object |
When sending search query "Rue Paris 45BIS"
Then results contain
| osm |
| N1 |
When geocoding "Rue Paris 45 bis"
Then the result set contains
| object |
When sending search query "Rue Paris 45 bis"
Then results contain
| osm |
| N1 |

Examples:
@@ -188,21 +189,21 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Rue du Berger | 1,2,3 |
When importing
When geocoding "Rue du Berger 45ter"
Then the result set contains
| object |
When sending search query "Rue du Berger 45ter"
Then results contain
| osm |
| N1 |
When geocoding "Rue du Berger 45 TER"
Then the result set contains
| object |
When sending search query "Rue du Berger 45 TER"
Then results contain
| osm |
| N1 |
When geocoding "Rue du Berger 45TER"
Then the result set contains
| object |
When sending search query "Rue du Berger 45TER"
Then results contain
| osm |
| N1 |
When geocoding "Rue du Berger 45 ter"
Then the result set contains
| object |
When sending search query "Rue du Berger 45 ter"
Then results contain
| osm |
| N1 |

Examples:
@@ -221,21 +222,21 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Herengracht | 1,2,3 |
When importing
When geocoding "501-H 1 Herengracht"
Then the result set contains
| object |
When sending search query "501-H 1 Herengracht"
Then results contain
| osm |
| N1 |
When geocoding "501H-1 Herengracht"
Then the result set contains
| object |
When sending search query "501H-1 Herengracht"
Then results contain
| osm |
| N1 |
When geocoding "501H1 Herengracht"
Then the result set contains
| object |
When sending search query "501H1 Herengracht"
Then results contain
| osm |
| N1 |
When geocoding "501-H1 Herengracht"
Then the result set contains
| object |
When sending search query "501-H1 Herengracht"
Then results contain
| osm |
| N1 |

Examples:
@@ -254,17 +255,17 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Голубинская улица | 1,2,3 |
When importing
When geocoding "Голубинская улица 55к3"
Then the result set contains
| object |
When sending search query "Голубинская улица 55к3"
Then results contain
| osm |
| N1 |
When geocoding "Голубинская улица 55 k3"
Then the result set contains
| object |
When sending search query "Голубинская улица 55 k3"
Then results contain
| osm |
| N1 |
When geocoding "Голубинская улица 55 к-3"
Then the result set contains
| object |
When sending search query "Голубинская улица 55 к-3"
Then results contain
| osm |
| N1 |

Examples:
@@ -281,9 +282,9 @@ Feature: Searching of house numbers
| osm | class | type | name | geometry |
| W10 | highway | path | Chester St | 1,2,3 |
When importing
When geocoding "Chester St Warring"
Then the result set contains
| object |
When sending search query "Chester St Warring"
Then results contain
| osm |
| N1 |


@@ -310,11 +311,11 @@ Feature: Searching of house numbers
| 10 | 10, 11 |
| 20 | 20, 21 |
When importing
When geocoding "Ringstr 12"
Then the result set contains
| object |
When sending search query "Ringstr 12"
Then results contain
| osm |
| W10 |
When geocoding "Ringstr 13"
Then the result set contains
| object |
When sending search query "Ringstr 13"
Then results contain
| osm |
| W20 |
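The housenumber scenarios above all probe the same behaviour: queries like "45 BIS", "2-A" or "④⑤" should match the same object as their canonical form. As an illustration only, here is a minimal sketch of that kind of normalisation in Python; `normalize_housenumber` is a hypothetical helper, not Nominatim's actual ICU tokenizer, which handles far more cases.

```python
import unicodedata


def normalize_housenumber(hnr: str) -> str:
    """Reduce a housenumber to a canonical form: ASCII digits,
    lower-case letters, and no separators between number and suffix."""
    out = []
    for ch in hnr:
        try:
            # map non-ASCII digits (e.g. circled '④') to their value
            out.append(str(unicodedata.digit(ch)))
        except ValueError:
            if ch.isalnum():
                out.append(ch.lower())
            # spaces, '-' and '/' between number and suffix are dropped
    return ''.join(out)
```

With this sketch, "45 BIS", "45bis" and "45 bis" all normalise to `45bis`, and "④⑤" normalises to `45`, mirroring the equivalences the scenarios assert.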
@@ -1,3 +1,4 @@
@DB
Feature: Query of address interpolations
Tests that interpolated addresses can be queried correctly

@@ -22,14 +23,14 @@ Feature: Query of address interpolations
| id | nodes |
| 1 | 1,3 |
When importing
When reverse geocoding at node 2
Then the result contains
| display_name |
| 3, Nickway |
When geocoding "Nickway 3"
Then all results contain
| object | display_name |
| W1 | 3, Nickway |
When sending v1/reverse N2
Then results contain
| ID | display_name |
| 0 | 3, Nickway |
When sending search query "Nickway 3"
Then results contain
| osm | display_name |
| W1 | 3, Nickway |


Scenario: Find interpolations with multiple numbers
@@ -47,11 +48,11 @@ Feature: Query of address interpolations
| id | nodes |
| 1 | 1,3 |
When importing
When reverse geocoding at node 2
Then the result contains
| display_name | centroid!wkt |
| 10, Nickway | 2 |
When geocoding "Nickway 10"
Then all results contain
| object | display_name | centroid!wkt |
| W1 | 10, Nickway | 2 |
When sending v1/reverse N2
Then results contain
| ID | display_name | centroid |
| 0 | 10, Nickway | 2 |
When sending search query "Nickway 10"
Then results contain
| osm | display_name | centroid |
| W1 | 10, Nickway | 2 |
@@ -1,3 +1,4 @@
@DB
Feature: Searches in Japan
Test specifically for searches of Japanese addresses and in Japanese language.
Scenario: A block house-number is parented to the neighbourhood
@@ -22,7 +23,7 @@ Feature: Searches in Japan
Then placex contains
| object | parent_place_id |
| N3 | N9 |
When geocoding "2丁目 6-2"
Then all results contain
| object |
When sending search query "2丁目 6-2"
Then results contain
| osm |
| N3 |
@@ -1,3 +1,4 @@
@DB
Feature: Searching linked places
Tests that information from linked places can be searched correctly

@@ -19,18 +20,18 @@ Feature: Searching linked places
Then placex contains
| object | linked_place_id |
| N2 | R13 |
When geocoding "Vario"
When sending search query "Vario"
| namedetails |
| 1 |
Then all results contain
| object | display_name | namedetails!dict |
| R13 | Garbo | "name": "Garbo", "name:it": "Vario" |
When geocoding "Vario"
Then results contain
| osm | display_name | namedetails |
| R13 | Garbo | "name": "Garbo", "name:it": "Vario" |
When sending search query "Vario"
| accept-language |
| it |
Then all results contain
| object | display_name |
| R13 | Vario |
Then results contain
| osm | display_name |
| R13 | Vario |


Scenario: Differing names from linked places are searchable
@@ -51,13 +52,13 @@ Feature: Searching linked places
Then placex contains
| object | linked_place_id |
| N2 | R13 |
When geocoding "Vario"
When sending search query "Vario"
| namedetails |
| 1 |
Then all results contain
| object | display_name | namedetails!dict |
| R13 | Garbo | "name": "Garbo", "_place_name": "Vario" |
When geocoding "Garbo"
Then all results contain
| object | display_name |
| R13 | Garbo |
Then results contain
| osm | display_name | namedetails |
| R13 | Garbo | "name": "Garbo", "_place_name": "Vario" |
When sending search query "Garbo"
Then results contain
| osm | display_name |
| R13 | Garbo |
226
test/bdd/db/query/normalization.feature
Normal file
@@ -0,0 +1,226 @@
@DB
Feature: Import and search of names
Tests all naming related issues: normalisation,
abbreviations, internationalisation, etc.

Scenario: non-latin scripts can be found
Given the places
| osm | class | type | name |
| N1 | place | locality | Речицкий район |
| N2 | place | locality | Refugio de montaña |
| N3 | place | locality | 高槻市 |
| N4 | place | locality | الدوحة |
When importing
When sending search query "Речицкий район"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Refugio de montaña"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "高槻市"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "الدوحة"
Then results contain
| ID | osm |
| 0 | N4 |

Scenario: Case-insensitivity of search
Given the places
| osm | class | type | name |
| N1 | place | locality | FooBar |
When importing
Then placex contains
| object | class | type | name+name |
| N1 | place | locality | FooBar |
When sending search query "FooBar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "foobar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "fOObar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "FOOBAR"
Then results contain
| ID | osm |
| 0 | N1 |

Scenario: Multiple spaces in name
Given the places
| osm | class | type | name |
| N1 | place | locality | one two three |
When importing
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query " one two three"
Then results contain
| ID | osm |
| 0 | N1 |

Scenario: Special characters in name
Given the places
| osm | class | type | name+name:de |
| N1 | place | locality | Jim-Knopf-Straße |
| N2 | place | locality | Smith/Weston |
| N3 | place | locality | space mountain |
| N4 | place | locality | space |
| N5 | place | locality | mountain |
When importing
When sending search query "Jim-Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim Knopf Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim/Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim-Knopfstr"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Smith/Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "Smith Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "Smith-Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "space mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space-mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space/mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space\mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space(mountain)"
Then results contain
| ID | osm |
| 0 | N3 |

Scenario: Landuse with name are found
Given the grid
| 1 | 2 |
| 3 | |
Given the places
| osm | class | type | name | geometry |
| R1 | natural | meadow | landuse1 | (1,2,3,1) |
| R2 | landuse | industrial | landuse2 | (2,3,1,2) |
When importing
When sending search query "landuse1"
Then results contain
| ID | osm |
| 0 | R1 |
When sending search query "landuse2"
Then results contain
| ID | osm |
| 0 | R2 |

Scenario: Postcode boundaries without ref
Given the grid with origin FR
| | 2 | |
| 1 | | 3 |
Given the places
| osm | class | type | postcode | geometry |
| R1 | boundary | postal_code | 123-45 | (1,2,3,1) |
When importing
When sending search query "123-45"
Then results contain
| ID | osm |
| 0 | R1 |

Scenario Outline: Housenumbers with special characters are found
Given the grid
| 1 | | | | 2 |
| | | 9 | | |
And the places
| osm | class | type | name | geometry |
| W1 | highway | primary | Main St | 1,2 |
And the places
| osm | class | type | housenr | geometry |
| N1 | building | yes | <nr> | 9 |
When importing
And sending search query "Main St <nr>"
Then results contain
| osm | display_name |
| N1 | <nr>, Main St |

Examples:
| nr |
| 1 |
| 3456 |
| 1 a |
| 56b |
| 1 A |
| 2號 |
| 1Б |
| 1 к1 |
| 23-123 |

Scenario Outline: Housenumbers in lists are found
Given the grid
| 1 | | | | 2 |
| | | 9 | | |
And the places
| osm | class | type | name | geometry |
| W1 | highway | primary | Main St | 1,2 |
And the places
| osm | class | type | housenr | geometry |
| N1 | building | yes | <nr-list> | 9 |
When importing
And sending search query "Main St <nr>"
Then results contain
| ID | osm | display_name |
| 0 | N1 | <nr-list>, Main St |

Examples:
| nr-list | nr |
| 1,2,3 | 1 |
| 1,2,3 | 2 |
| 1, 2, 3 | 3 |
| 45 ;67;3 | 45 |
| 45 ;67;3 | 67 |
| 1a;1k | 1a |
| 1a;1k | 1k |
| 34/678 | 34 |
| 34/678 | 678 |
| 34/678 | 34/678 |
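The "Housenumbers in lists are found" outline expects every entry of a multi-valued addr:housenumber (separated by `,` or `;`, with slash-numbers matched both whole and per component) to be searchable. A minimal sketch of that splitting, for illustration only; `housenumber_variants` is a hypothetical helper, not the tokenizer Nominatim actually uses.

```python
import re


def housenumber_variants(hnr_list: str) -> set:
    """Split an addr:housenumber value that lists several numbers
    into the individual tokens that should be searchable."""
    variants = set()
    for part in re.split(r'[,;]', hnr_list):
        part = part.strip()
        if not part:
            continue
        variants.add(part)
        # a value like '34/678' also matches its components
        if '/' in part:
            variants.update(p.strip() for p in part.split('/'))
    return variants
```

For example, `housenumber_variants("45 ;67;3")` yields `{"45", "67", "3"}`, and `housenumber_variants("34/678")` yields `{"34", "678", "34/678"}`, matching the example rows of the outline.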
@@ -1,3 +1,4 @@
@DB
Feature: Querying for postcode variants

Scenario: Postcodes in Singapore (6-digit postcode)
@@ -7,10 +8,10 @@ Feature: Querying for postcode variants
| osm | class | type | name | addr+postcode | geometry |
| W1 | highway | path | Lorang | 399174 | 10,11 |
When importing
When geocoding "399174"
Then result 0 contains
| type | display_name |
| postcode | 399174, Singapore |
When sending search query "399174"
Then results contain
| ID | type | display_name |
| 0 | postcode | 399174, Singapore |


Scenario Outline: Postcodes in the Netherlands (mixed postcode with spaces)
@@ -20,14 +21,14 @@ Feature: Querying for postcode variants
| osm | class | type | name | addr+postcode | geometry |
| W1 | highway | path | De Weide | 3993 DX | 10,11 |
When importing
When geocoding "3993 DX"
Then result 0 contains
| type | display_name |
| postcode | 3993 DX, Nederland |
When geocoding "3993dx"
Then result 0 contains
| type | display_name |
| postcode | 3993 DX, Nederland |
When sending search query "3993 DX"
Then results contain
| ID | type | display_name |
| 0 | postcode | 3993 DX, Nederland |
When sending search query "3993dx"
Then results contain
| ID | type | display_name |
| 0 | postcode | 3993 DX, Nederland |

Examples:
| postcode |
@@ -43,10 +44,10 @@ Feature: Querying for postcode variants
| osm | class | type | name | addr+postcode | geometry |
| W1 | highway | path | Lorang | 399174 | 10,11 |
When importing
When geocoding "399174"
Then result 0 contains
| type | display_name |
| postcode | 399174, Singapore |
When sending search query "399174"
Then results contain
| ID | type | display_name |
| 0 | postcode | 399174, Singapore |


Scenario Outline: Postcodes in Andorra (with country code)
@@ -56,14 +57,14 @@ Feature: Querying for postcode variants
| osm | class | type | name | addr+postcode | geometry |
| W1 | highway | path | Lorang | <postcode> | 10,11 |
When importing
When geocoding "675"
Then result 0 contains
| type | display_name |
| postcode | AD675, Andorra |
When geocoding "AD675"
Then result 0 contains
| type | display_name |
| postcode | AD675, Andorra |
When sending search query "675"
Then results contain
| ID | type | display_name |
| 0 | postcode | AD675, Andorra |
When sending search query "AD675"
Then results contain
| ID | type | display_name |
| 0 | postcode | AD675, Andorra |

Examples:
| postcode |
@@ -79,15 +80,15 @@ Feature: Querying for postcode variants
| N35 | place | house | E4 7EA | 111 | country:gb |
When importing
Then location_postcode contains exactly
| country_code | postcode | geometry!wkt |
| gb | EH4 7EA | country:gb |
| gb | E4 7EA | country:gb |
When geocoding "EH4 7EA"
Then result 0 contains
| country | postcode | geometry |
| gb | EH4 7EA | country:gb |
| gb | E4 7EA | country:gb |
When sending search query "EH4 7EA"
Then results contain
| type | display_name |
| postcode | EH4 7EA, United Kingdom |
When geocoding "E4 7EA"
Then result 0 contains
When sending search query "E4 7EA"
Then results contain
| type | display_name |
| postcode | E4 7EA, United Kingdom |

@@ -101,9 +102,9 @@ Feature: Querying for postcode variants
| R23 | boundary | postal_code | 12345 | (1,2,3,4,1) |
When importing
Then location_postcode contains exactly
| country_code | postcode |
| de | 12345 |
When geocoding "12345, de"
Then result 0 contains
| object |
| country | postcode |
| de | 12345 |
When sending search query "12345, de"
Then results contain
| osm |
| R23 |
@@ -1,3 +1,4 @@
@DB
Feature: Reverse searches
Test results of reverse queries

@@ -11,11 +12,11 @@ Feature: Reverse searches
| W1 | aeroway | terminal | (1,2,3,4,1) |
| N1 | amenity | restaurant | 9 |
When importing
And reverse geocoding 1.0001,1.0001
Then the result contains
| object |
And sending v1/reverse at 1.0001,1.0001
Then results contain
| osm |
| N1 |
When reverse geocoding 1.0003,1.0001
Then the result contains
| object |
When sending v1/reverse at 1.0003,1.0001
Then results contain
| osm |
| W1 |
@@ -1,3 +1,4 @@
@DB
Feature: Searching of simple objects
Testing simple stuff

@@ -6,10 +7,32 @@ Feature: Searching of simple objects
| osm | class | type | name+name | geometry |
| N1 | place | village | Foo | 10.0 -10.0 |
When importing
And geocoding "Foo"
Then result 0 contains
| object | category | type | centroid!wkt |
| N1 | place | village | 10 -10 |
And sending search query "Foo"
Then results contain
| ID | osm | category | type | centroid |
| 0 | N1 | place | village | 10 -10 |

Scenario: Updating postcode in postcode boundaries without ref
Given the grid
| 1 | 2 |
| 4 | 3 |
Given the places
| osm | class | type | postcode | geometry |
| R1 | boundary | postal_code | 12345 | (1,2,3,4,1) |
When importing
And sending search query "12345"
Then results contain
| ID | osm |
| 0 | R1 |
When updating places
| osm | class | type | postcode | geometry |
| R1 | boundary | postal_code | 54321 | (1,2,3,4,1) |
And sending search query "12345"
Then exactly 0 results are returned
When sending search query "54321"
Then results contain
| ID | osm |
| 0 | R1 |

# github #1763
Scenario: Correct translation of highways under construction
@@ -21,8 +44,8 @@ Feature: Searching of simple objects
| W1 | highway | construction | The build | 1,2 |
| N1 | amenity | cafe | Bean | 9 |
When importing
And geocoding "Bean"
Then result 0 contains in field address
And sending json search query "Bean" with address
Then result addresses contain
| amenity | road |
| Bean | The build |

@@ -34,10 +57,12 @@ Feature: Searching of simple objects
| osm | class | type | name | housenr |
| N20 | amenity | restaurant | Red Way | 34 |
When importing
And geocoding "Wood Street 45"
Then exactly 0 results are returned
When geocoding "Red Way 34"
And sending search query "Wood Street 45"
Then exactly 0 results are returned
When sending search query "Red Way 34"
Then results contain
| osm |
| N20 |

Scenario: when the housenumber is missing the street is still returned
Given the grid
@@ -46,11 +71,12 @@ Feature: Searching of simple objects
| osm | class | type | name | geometry |
| W1 | highway | residential | Wood Street | 1, 2 |
When importing
And geocoding "Wood Street"
Then all results contain
| object |
And sending search query "Wood Street"
Then results contain
| osm |
| W1 |


Scenario Outline: Special cased american states will be found
Given the grid
| 1 | | 2 |
@@ -64,15 +90,15 @@ Feature: Searching of simple objects
| N2 | place | town | <city> | 10 |
| N3 | place | city | <city> | country:ca |
When importing
And geocoding "<city>, <state>"
Then all results contain
| object |
And sending search query "<city>, <state>"
Then results contain
| osm |
| N2 |
When geocoding "<city>, <ref>"
When sending search query "<city>, <ref>"
| accept-language |
| en |
Then all results contain
| object |
Then results contain
| osm |
| N2 |

Examples:
@@ -1,3 +1,4 @@
@DB
Feature: Country handling
Tests for update of country information

@@ -15,14 +16,14 @@ Feature: Country handling
| osm | class | type | name |
| N10 | place | town | Wenig |
When importing
When geocoding "Wenig, Loudou"
Then all results contain
| object |
When sending search query "Wenig, Loudou"
Then results contain
| osm |
| N10 |
When updating places
| osm | class | type | admin | name+name:xy | country | geometry |
| R1 | boundary | administrative | 2 | Germany | de | (1,2,3,4,1) |
When geocoding "Wenig, Loudou"
When sending search query "Wenig, Loudou"
Then exactly 0 results are returned

Scenario: When country names are deleted they are no longer searchable
@@ -33,21 +34,21 @@ Feature: Country handling
| osm | class | type | name |
| N10 | place | town | Wenig |
When importing
When geocoding "Wenig, Loudou"
Then all results contain
| object |
When sending search query "Wenig, Loudou"
Then results contain
| osm |
| N10 |
When updating places
| osm | class | type | admin | name+name:en | country | geometry |
| R1 | boundary | administrative | 2 | Germany | de | (1,2,3,4,1) |
When geocoding "Wenig, Loudou"
When sending search query "Wenig, Loudou"
Then exactly 0 results are returned
When geocoding "Wenig"
When sending search query "Wenig"
| accept-language |
| xy,en |
Then all results contain
| object | display_name |
| N10 | Wenig, Germany |
Then results contain
| osm | display_name |
| N10 | Wenig, Germany |


Scenario: Default country names are always searchable
@@ -55,29 +56,29 @@ Feature: Country handling
| osm | class | type | name |
| N10 | place | town | Wenig |
When importing
When geocoding "Wenig, Germany"
Then all results contain
| object |
When sending search query "Wenig, Germany"
Then results contain
| osm |
| N10 |
When geocoding "Wenig, de"
Then all results contain
| object |
When sending search query "Wenig, de"
Then results contain
| osm |
| N10 |
When updating places
| osm | class | type | admin | name+name:en | country | geometry |
| R1 | boundary | administrative | 2 | Lilly | de | (1,2,3,4,1) |
When geocoding "Wenig, Germany"
When sending search query "Wenig, Germany"
| accept-language |
| en,de |
Then all results contain
| object | display_name |
Then results contain
| osm | display_name |
| N10 | Wenig, Lilly |
When geocoding "Wenig, de"
||||
When sending search query "Wenig, de"
|
||||
| accept-language |
|
||||
| en,de |
|
||||
Then all results contain
|
||||
| object | display_name |
|
||||
| N10 | Wenig, Lilly |
|
||||
Then results contain
|
||||
| osm | display_name |
|
||||
| N10 | Wenig, Lilly |
|
||||
|
||||
|
||||
Scenario: When a localised name is deleted, the standard name takes over
|
||||
@@ -88,21 +89,21 @@ Feature: Country handling
|
||||
| osm | class | type | name |
|
||||
| N10 | place | town | Wenig |
|
||||
When importing
|
||||
When geocoding "Wenig, Loudou"
|
||||
When sending search query "Wenig, Loudou"
|
||||
| accept-language |
|
||||
| de,en |
|
||||
Then all results contain
|
||||
| object | display_name |
|
||||
Then results contain
|
||||
| osm | display_name |
|
||||
| N10 | Wenig, Loudou |
|
||||
When updating places
|
||||
| osm | class | type | admin | name+name:en | country | geometry |
|
||||
| R1 | boundary | administrative | 2 | Germany | de | (1,2,3,4,1) |
|
||||
When geocoding "Wenig, Loudou"
|
||||
When sending search query "Wenig, Loudou"
|
||||
Then exactly 0 results are returned
|
||||
When geocoding "Wenig"
|
||||
When sending search query "Wenig"
|
||||
| accept-language |
|
||||
| de,en |
|
||||
Then all results contain
|
||||
| object | display_name |
|
||||
| N10 | Wenig, Deutschland |
|
||||
Then results contain
|
||||
| osm | display_name |
|
||||
| N10 | Wenig, Deutschland |
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
@DB
|
||||
Feature: Update of address interpolations
|
||||
Test the interpolated address are updated correctly
|
||||
|
||||
@@ -333,7 +334,7 @@ Feature: Update of address interpolations
|
||||
| W1 | 4 | 4 |
|
||||
| W1 | 8 | 8 |
|
||||
|
||||
@skip
|
||||
@Fail
|
||||
Scenario: housenumber removed in middle of interpolation
|
||||
Given the grid
|
||||
| 1 | | | | | 2 |
|
||||
Some files were not shown because too many files have changed in this diff.