Compare commits

...

141 Commits

Author SHA1 Message Date
Sarah Hoffmann
b7d77b9b43 avoid symbolic link to files in packaging
Hatch cannot handle those correctly and will add a symbolic link to the
source package.
2025-08-06 21:59:13 +02:00
Sarah Hoffmann
7e84d38a92 Merge pull request #3811 from lonvia/fix-frequent-terms-with-viewbox
Don't restrict to viewbox for frequent terms
2025-08-06 21:10:07 +02:00
Sarah Hoffmann
c7df8738ed fix typing issue with latest falcon version 2025-08-06 20:08:10 +02:00
Sarah Hoffmann
0045203092 don't restrict to viewbox for frequent terms
All searched places may be outside the viewbox, in which case the
restriction means that there are no results at all. Add the penalty for
being outside the viewbox earlier instead and then cut the list.
2025-08-06 17:27:52 +02:00
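
A minimal sketch of the approach described above: candidates outside the viewbox get a penalty instead of being filtered out, and the ranked list is cut afterwards. All names (`Candidate`, `rank_with_viewbox`, `outside_penalty`) are illustrative assumptions, not the actual Nominatim code.

``` python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    penalty: float
    lon: float
    lat: float

def rank_with_viewbox(cands, viewbox, limit, outside_penalty=1.0):
    """Rank candidates; viewbox = (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = viewbox
    for c in cands:
        inside = min_lon <= c.lon <= max_lon and min_lat <= c.lat <= max_lat
        if not inside:
            c.penalty += outside_penalty  # penalize instead of excluding
    # cut the list only after the penalty has been applied
    return sorted(cands, key=lambda c: c.penalty)[:limit]

# 'B' is outside the viewbox but can still be returned if nothing better exists.
print(rank_with_viewbox([Candidate('A', 0.0, 8.0, 47.0),
                         Candidate('B', 0.1, 20.0, 60.0)],
                        viewbox=(7.0, 46.0, 9.0, 48.0), limit=5))
```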
Sarah Hoffmann
b325413486 Merge pull request #3808 from lonvia/avoid-st-relate
Replace ST_Relate by shortcut functions
2025-08-06 16:28:51 +02:00
Sarah Hoffmann
6270c90052 replace ST_Relate by shortcut functions
For some reason ST_Relate returns wrong results in the context of
the trigger on Debian Trixie. It works fine with the PostGIS version
from postgresql.org.
2025-08-06 14:43:07 +02:00
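
A small sketch of the predicate equivalence behind this commit, using shapely (an assumption for illustration; the real change is in Nominatim's PL/pgSQL triggers, visible in the diffs further below): the DE-9IM pattern `'T*T***FF*'` ("contains but not equal") matches the combination of the two shortcut predicates for polygon cases like this one.

``` python
from shapely.geometry import Polygon

outer = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
inner = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])

# ST_Relate(a, b, 'T*T***FF*')  ~  ST_Contains(a, b) AND NOT ST_Equals(a, b)
via_relate = outer.relate_pattern(inner, 'T*T***FF*')
via_shortcuts = outer.contains(inner) and not outer.equals(inner)
assert via_relate == via_shortcuts == True
```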
Sarah Hoffmann
a7709c768d add test for reverse with address layer and inherited address 2025-07-31 22:25:55 +02:00
Sarah Hoffmann
47c0a101b9 Merge pull request #3799 from lonvia/reduce-coordinate-precision
Reduce coordinate precision of centroids and interpolation lines
2025-07-30 14:50:36 +02:00
Sarah Hoffmann
64bb8c2a9c Merge pull request #3800 from lonvia/improve-style-docs
Improvements to documentation for custom import styles
2025-07-30 14:50:17 +02:00
Sarah Hoffmann
194b607491 Merge pull request #3797 from mtmail/database-version-not-found
Better hint to user if database import didn't finish
2025-07-30 12:08:10 +02:00
marc tobias
9bad3b1e61 Better hint to user if database import didn't finish 2025-07-30 10:25:14 +02:00
Sarah Hoffmann
69e882096c clarify what merging means 2025-07-29 23:04:14 +02:00
Sarah Hoffmann
f300b00c2d docs: add a list of available topics 2025-07-29 22:59:02 +02:00
Sarah Hoffmann
242fcc6e4d adapt BDD tests to different rounding of reduce precision 2025-07-29 22:35:55 +02:00
Sarah Hoffmann
83c6f27f5c reduce precision of interpolations to OSM precision 2025-07-29 22:35:47 +02:00
Sarah Hoffmann
1111597db5 reduce precision of computed centroids to 7 digits 2025-07-29 21:25:14 +02:00
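
As a rough illustration of what "7 digits" means here: OSM itself stores node coordinates with seven decimal places (roughly centimetre resolution), so nothing meaningful is lost. The actual change uses PostGIS `ST_ReducePrecision`, as the diffs below show; this sketch only demonstrates the equivalent rounding.

``` python
def reduce_precision(lon: float, lat: float, digits: int = 7) -> tuple[float, float]:
    """Round a coordinate pair to OSM's native precision."""
    return round(lon, digits), round(lat, digits)

print(reduce_precision(8.123456789, 47.987654321))  # (8.1234568, 47.9876543)
```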
Sarah Hoffmann
866e6bade9 Merge pull request #3789 from lonvia/align-deferred-delete-limits
Align limits for deferring delete and reindexing on insert
2025-07-22 11:15:56 +02:00
Sarah Hoffmann
4cbbe04f7f align limits for deferring delete and reindexing on insert
Until now, when a boundary with an area between 1 and 2 broke, it
was deleted, but on reinsert after repair the addresses were not updated,
resulting in inconsistent data.
2025-07-21 16:11:06 +02:00
Sarah Hoffmann
e1cef3de0a remove unused code 2025-07-21 11:36:57 +02:00
Sarah Hoffmann
c6088cb4e7 Merge pull request #3785 from lonvia/raise-python-to-39
Raise minimum required Python version to 3.9
2025-07-19 23:02:13 +02:00
Sarah Hoffmann
a725cab2fc run old-version CI against oldest supported Python 2025-07-19 19:50:01 +02:00
Sarah Hoffmann
8bb53c22be raise minimum supported Python version to 3.9 2025-07-19 15:23:17 +02:00
Sarah Hoffmann
8a96e4f802 Merge pull request #3781 from lonvia/partial-address-index-lookup
Reduce number of tokens used for index lookups during search
2025-07-15 10:11:12 +02:00
Sarah Hoffmann
a9cd706bb6 adapt test to new lookup limits 2025-07-14 14:21:09 +02:00
Sarah Hoffmann
09b5ea097b restrict pre-selection by postcode to country 2025-07-14 14:21:09 +02:00
Sarah Hoffmann
e111257644 restrict name-only address searches early by postcode 2025-07-14 14:21:09 +02:00
Sarah Hoffmann
93ac1023f7 restrict name-only search more 2025-07-14 14:21:09 +02:00
Sarah Hoffmann
1fe2353682 restrict postcode distance computation to within country 2025-07-14 14:21:09 +02:00
Sarah Hoffmann
6d2b79870c only use most infrequent tokens for search index lookup 2025-07-14 14:18:22 +02:00
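
The idea in this commit, sketched under assumptions (function and parameter names are made up): rare tokens narrow down the candidate set the most, so only the least frequent ones are used for the index lookup.

``` python
def index_lookup_tokens(tokens, max_tokens=3):
    """tokens: list of (token_id, frequency); return ids of the rarest tokens."""
    return [tid for tid, _ in sorted(tokens, key=lambda t: t[1])[:max_tokens]]

# token 1 is very frequent and gets dropped from the lookup
print(index_lookup_tokens([(1, 50000), (2, 12), (3, 900), (4, 3)]))  # [4, 2, 3]
```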
Sarah Hoffmann
621d8e785b Merge pull request #3779 from lonvia/fix-zero-devision-direction
Fix direction factor computation on empty strings
2025-07-11 14:51:00 +02:00
Sarah Hoffmann
830307484b Merge pull request #3777 from lonvia/harmonize-transition-penalties
Clean up word transition penalty assignment for searches
2025-07-11 14:17:48 +02:00
Sarah Hoffmann
5d6967a1d0 Merge pull request #3778 from lonvia/remove-log-db-setting
Remove defaults and documentations for LOG_DB setting
2025-07-11 14:17:24 +02:00
Sarah Hoffmann
26903aec0b add BDD test for empty queries 2025-07-11 14:16:48 +02:00
Sarah Hoffmann
c39183e3a5 remove any references to website setup or refresh
These no longer exist.
2025-07-11 11:51:49 +02:00
Sarah Hoffmann
21ef3be433 fix direction factor computation on empty strings 2025-07-11 11:25:14 +02:00
Sarah Hoffmann
99562a197e remove LOG_DB setting, not implemented anymore 2025-07-11 11:15:41 +02:00
Sarah Hoffmann
fe30663b21 remove penalty from TokenRanges
The parameter is no longer needed.
2025-07-11 11:01:22 +02:00
Sarah Hoffmann
73ee17af95 adapt tests for new function signatures 2025-07-11 11:01:22 +02:00
Sarah Hoffmann
b9252cc348 reduce maximum number of SQL queries per search 2025-07-11 11:01:22 +02:00
Sarah Hoffmann
71025f3f43 fix order of address rankings preferring longest words 2025-07-11 11:01:21 +02:00
Sarah Hoffmann
e4b671f8b1 reinstate penalty for partial only matches 2025-07-11 11:01:21 +02:00
Sarah Hoffmann
7ebd121abc give word break a slight advantage over continuation
This prefers longer words.
2025-07-11 11:01:21 +02:00
Sarah Hoffmann
4634ad0720 rebalance word transition penalties 2025-07-11 11:01:21 +02:00
Sarah Hoffmann
4a9253a0a9 simplify QueryNode penalty and initial assignment 2025-07-11 11:01:09 +02:00
Sarah Hoffmann
1aeb8a262c Merge pull request #3774 from lonvia/remove-postcodes-from-nameaddressvector
Do not add postcodes from postcode boundaries to address vector
2025-07-08 17:23:05 +02:00
Sarah Hoffmann
ef7e842702 Merge pull request #3773 from lonvia/small-countries
Reduce area for geometry rank for very small countries
2025-07-08 15:01:37 +02:00
Sarah Hoffmann
ec42fda1bd do not add postcodes from postcode boundaries to address vector
Postcodes will be found through a special search, so we can save
the space.
2025-07-08 14:49:16 +02:00
Sarah Hoffmann
287ba2570e reduce area for geometry rank for very small countries 2025-07-08 13:50:20 +02:00
Sarah Hoffmann
4711deeccb Merge pull request #3772 from lonvia/fix-index-use-deletable
split up query for deletable endpoint by osm type
2025-07-08 13:49:31 +02:00
Sarah Hoffmann
cf9e8d6b8e split up query for deletable endpoint by osm type
This is needed to ensure index use on placex.
2025-07-08 11:03:29 +02:00
Sarah Hoffmann
06d5ab4c2d Merge pull request #3770 from lonvia/split-place-search
Split up SQL generation code for searches with and without housenumbers
2025-07-07 17:52:47 +02:00
Sarah Hoffmann
e327512667 adapt BDD test to refusal to search POI names with hnr only 2025-07-07 16:14:58 +02:00
Sarah Hoffmann
3e04eb2ffe increase penalty on mismatching postcodes for address searches
Otherwise there is an imbalance towards matching housenumbers
instead of the actual street (where no housenumber exists).
2025-07-07 16:07:32 +02:00
Sarah Hoffmann
970d81fb27 sort housenumber parents by accuracy first
Sorting them by presence of housenumber only will give an undue
preference to results with a housenumber while disregarding other
factors like matching postcodes.
2025-07-07 12:06:06 +02:00
Sarah Hoffmann
cecdbeb7cf reduce candidates for place search 2025-07-07 12:03:56 +02:00
Sarah Hoffmann
c634e9fc5f differentiate between place searches with and without address 2025-07-07 12:03:56 +02:00
Sarah Hoffmann
13eaea8aae split place search into address search and named search
The presence/absence of housenumbers makes quite a difference for search.
2025-07-07 09:13:48 +02:00
Sarah Hoffmann
ab5f348a4a Merge pull request #3769 from lonvia/refactor-api-searches
Refactor code around creating SQL for search queries
2025-07-02 20:08:11 +02:00
Sarah Hoffmann
11d624e92a split db_searches moving each class in its own file 2025-07-01 22:57:04 +02:00
Sarah Hoffmann
a7797f8b37 Merge pull request #3765 from lonvia/update-ui-docs
Update instructions for UI integration
2025-06-27 20:01:28 +02:00
Sarah Hoffmann
c4dd0d4f95 update instructions for UI integration
Switches from forwarding to the UI by default to forwarding only
when requested. This avoids issues with auto-forwarding illegal URLs.
Also adapts to the much simplified nginx configuration.
2025-06-27 11:22:28 +02:00
Sarah Hoffmann
f43fec0d57 Merge pull request #3764 from lonvia/update-importance
'refresh --importance' also needs to refresh importances in search_name table
2025-06-27 10:02:18 +02:00
Sarah Hoffmann
af82c3debb remove duplicated test
There is a more extensive test of recompute_importance with
result check in test_refresh_wiki_data.py
2025-06-26 22:35:38 +02:00
Sarah Hoffmann
1ab4d445ea Merge pull request #3762 from lonvia/remove-gazetteer-output-support
Remove support for deprecated gazetteer osm2pgsql output
2025-06-26 20:28:16 +02:00
Sarah Hoffmann
678702ceb7 rewrite importances in search_name after updating in placex 2025-06-26 20:27:37 +02:00
Sarah Hoffmann
f9eb93c4ab remove support for deprecated gazetteer osm2pgsql output 2025-06-25 23:09:08 +02:00
Sarah Hoffmann
f97a0a76f2 Merge pull request #3747 from anqixxx/fix-special-phrases-filtering
Special Phrases Filtering: Add Command Line Functionality
2025-06-06 21:37:17 +02:00
anqixxx
cf9b946eba Added skip for when min = 0 2025-06-05 09:25:14 +08:00
anqixxx
7dc3924a3c Added default min = 0 argument for private functions
2025-06-04 01:12:36 -07:00
anqixxx
20cf4b56b9 Refactored min and associated tests to follow greater-than-or-equal logic, so that min=0 accounts for no filtering
2025-06-04 00:53:52 -07:00
anqixxx
40d5b78eb8 Added command line (default 0) min argument for minimum filtering, updated args.py to reflect this 2025-06-04 00:53:52 -07:00
Sarah Hoffmann
8d0e767826 Merge pull request #3748 from lonvia/airports
Improve finding airports by their codes
2025-06-02 14:39:02 +02:00
Sarah Hoffmann
87a8c246a0 improve result cutting when a POI comes out with top importance 2025-06-01 12:00:36 +02:00
Sarah Hoffmann
90050de717 only rerank results if there is more than one
With one result, the order is obvious.
2025-06-01 11:55:27 +02:00
Sarah Hoffmann
10a7d1106d reduce influence of query rematching a little bit 2025-06-01 11:54:21 +02:00
Sarah Hoffmann
f2236f68f1 when rematching only distinguish between perfect, somewhat and bad match 2025-06-01 11:53:23 +02:00
Sarah Hoffmann
831fccdaee add FAA codes (US version of IATA codes) for airports 2025-06-01 11:49:55 +02:00
Sarah Hoffmann
d2e691b63f work around bogus type error in latest starlette 2025-05-31 09:43:48 +02:00
Sarah Hoffmann
2a508b6c99 fix missing optional return 2025-05-30 12:03:00 +02:00
Sarah Hoffmann
02c3a6fffa Merge pull request #3744 from lonvia/add-unnamed-cemetries
Include unnamed cemeteries in POIs
2025-05-28 11:51:23 +02:00
Sarah Hoffmann
26348764d4 add landuse=cemetery as POI even when unnamed 2025-05-28 09:48:08 +02:00
Sarah Hoffmann
f8a56ab6e6 Merge pull request #3742 from lonvia/korean-defaults
Remove English as default language for South Korea
2025-05-26 14:13:54 +02:00
Sarah Hoffmann
75b4c7e56b adapt to changed loop handling of pytest_asyncio 2025-05-26 11:51:20 +02:00
Sarah Hoffmann
9f1dfb1876 remove English as default language for South Korea 2025-05-26 10:28:14 +02:00
Sarah Hoffmann
730b4204f6 Merge pull request #3741 from dave-meyer/patch-1
docs: Added missing code span for search API parameter value
2025-05-26 09:21:40 +02:00
Dave Meyer
4898704b5a docs: Added missing code span for search API parameter value 2025-05-25 20:42:09 +02:00
Sarah Hoffmann
0cf470f863 Merge pull request #3710 from anqixxx/fix-special-phrases-filtering
Fix special phrases filtering
2025-05-21 21:34:28 +02:00
anqixxx
6220bde2d6 Added mypy ignore fix for logging.py (library change), as well as quick mac fix on mem.cached 2025-05-21 11:11:56 -07:00
Sarah Hoffmann
a4d3b57f37 Merge pull request #3709 from anqixxx/update-readme
Improve README formatting and add install steps
2025-05-21 19:49:12 +02:00
anqixxx
618fbc63d7 Added testing to test get classtype pairs in import special phrases 2025-05-21 10:39:51 -07:00
anqixxx
3f51cb3fd1 Made the limit configurable with an optional argument, updating the testing as well to reflect this. Default is now 0, meaning that it will return everything that occurs more than once. Removed mock database test, and got rid of fetch all. Rebased all tests to monkeypatch 2025-05-21 10:38:34 -07:00
anqixxx
59a947c5f5 Removed class type pair getter that used style sheets from both spi_importer and the associated testing function 2025-05-21 10:38:08 -07:00
anqixxx
1952290359 Removed magic mocking, using monkeypatch instead, and using a placex table to simulate a 'real database' 2025-05-21 10:37:42 -07:00
anqixxx
1a323165f9 Filter special phrases by style and frequency to fix #235 2025-05-21 10:36:46 -07:00
anqixxx
9c2fdf5eae Improve README formatting and add install steps, adding a general cloning step before the virtual environment. This would have been helpful for me during Nominatim setup 2025-05-21 10:14:36 -07:00
Sarah Hoffmann
800c56642b tweak full count cut-off (as per deployment on osm.org) 2025-05-11 11:48:07 +02:00
Sarah Hoffmann
b51fed025c Merge pull request #3732 from lonvia/exclude-country-from-direction-penalty
Exclude address searches with country from direction penalty
2025-04-30 10:45:37 +02:00
Sarah Hoffmann
34b72591cc exclude address searches with country from direction penalty
Countries are not adequately represented by partial term counts.
2025-04-29 17:37:31 +02:00
Sarah Hoffmann
bc450d110c Merge pull request #3722 from emmanuel-ferdman/master
resolve datetime deprecation warnings
2025-04-22 14:21:05 +02:00
Sarah Hoffmann
388acf4727 Merge pull request #3726 from lonvia/revert-json-format-change
Revert accidental change in json output format
2025-04-18 14:43:51 +02:00
Sarah Hoffmann
3999977941 revert accidental change in json output format 2025-04-18 12:05:25 +02:00
Emmanuel Ferdman
df58870e3f resolve datetime deprecation warnings
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-04-17 11:15:16 -07:00
Sarah Hoffmann
478a8741db Merge pull request #3719 from lonvia/query-direction
Estimate query direction
2025-04-17 15:17:56 +02:00
Sarah Hoffmann
7f710d2394 add a comment about the precomputed denominator 2025-04-15 09:38:05 +02:00
Sarah Hoffmann
06e39e42d8 add direction penalties
Direction penalties are estimated by taking the name-to-address
usage ratio for each partial term in the query and computing the
linear regression of that ratio over the entire phrase. Or to put
it in other words: we try to determine if the terms at the beginning
or the end of the query are more likely to constitute a name.

Direction penalties are currently used only in classic name queries.
2025-04-11 20:41:06 +02:00
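
A sketch of the estimation described above, with assumed names and a plain least-squares fit: take the name-vs-address usage ratio of each partial term and read the slope of the fitted line as the query direction.

``` python
def direction_slope(ratios):
    """ratios[i] = name_count / (name_count + address_count) for term i."""
    n = len(ratios)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(ratios) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ratios))
    den = sum((x - mean_x) ** 2 for x in range(n))  # the precomputed denominator
    return num / den

# Negative slope: name-like terms at the front, address terms at the back.
print(direction_slope([0.9, 0.6, 0.2]))  # -0.35
```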
Sarah Hoffmann
2ef0e20a3f reorganise token reranking
As the reranking is about changing penalties in the presence of other
tokens, change the data structure to have the other tokens readily
available.
2025-04-11 13:38:34 +02:00
Sarah Hoffmann
b680d81f0a ensure that bailout-check is done after each iteration 2025-04-11 11:02:11 +02:00
Sarah Hoffmann
e0e067b1d6 replace use of range when computing word list 2025-04-11 09:59:04 +02:00
Sarah Hoffmann
3980791cfd use iterator instead of list to go over partials 2025-04-11 09:38:24 +02:00
Sarah Hoffmann
497e27bb9a move partial token into a separate field in the query struct
There is exactly one token to be expected and the token is usually
present.
2025-04-11 08:57:34 +02:00
Sarah Hoffmann
1db717b886 Merge pull request #3716 from lonvia/github-cache-osm2pgsql-binary
Github actions: cache compiled osm2pgsql binary

For the tests on Ubuntu 22.04 we need to compile osm2pgsql because the version it ships is too old. This adds caching of the compiled binary, so that we don't need to recompile for each CI run. Together with the new BDD tests that shaves around 10 min off a CI run.
2025-04-10 17:20:32 +02:00
Sarah Hoffmann
b47c8ccfb1 actions: cache compiled osm2pgsql binary 2025-04-10 16:06:27 +02:00
Sarah Hoffmann
63b055283d Merge pull request #3714 from lonvia/postcode-update-without-project-dir
Change postcode update function to work without a project directory
2025-04-10 08:51:22 +02:00
Sarah Hoffmann
b80e6914e7 Merge pull request #3715 from lonvia/demote-tags-to-fallbacks
Demote historic and tourism=attraction to fallback tags
2025-04-10 08:51:06 +02:00
Sarah Hoffmann
9d00a137fe demote historic and tourism=attraction to fallback tags 2025-04-09 20:15:18 +02:00
Sarah Hoffmann
97d9e3c548 allow updating postcodes without a project directory
Postcodes will then be updated without looking for external postcodes.
2025-04-09 20:04:01 +02:00
Sarah Hoffmann
e4180936c1 Merge pull request #3713 from lonvia/bdd-pytest-db-test
Move BDD tests to pytest-bdd
2025-04-09 19:37:30 +02:00
Sarah Hoffmann
34e0ecb44f update documentation for BDD tests 2025-04-09 15:21:50 +02:00
Sarah Hoffmann
d95e9737da remove usage of behave 2025-04-09 14:57:39 +02:00
Sarah Hoffmann
b34991d85f add BDD tests for DB 2025-04-09 14:52:34 +02:00
Sarah Hoffmann
5f44aa2873 improve table comparison 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
dae643c040 move database setup to generic conftest.py 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
ee62d5e1cf remove old behave osm2pgsql BDD tests 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
fb440f29a2 implement BDD osm2pgsql tests with pytest-bdd 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
0f725b1880 enable python-bdd for github actions 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
39f56ba4b8 restrict coordinate output to 7 digits 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
6959577aa4 replace behave BDD API tests with pytest-bdd tests 2025-04-04 11:02:51 +02:00
Sarah Hoffmann
50d4b0a386 Merge pull request #3687 from asharmalik19/test-linked-places-language
test: linked places expand default language names
2025-04-04 10:58:53 +02:00
Ashar
9ff93bdb3d Update linked places name test
Clean up test scenario by removing extra language variations and
improving table readability.
2025-04-03 14:30:18 -04:00
Ashar
e0bf553aa5 test: linked places expand default language names
Add failing test for issue #2714 to verify default language expansion
2025-04-03 14:30:18 -04:00
Sarah Hoffmann
2ce2d031fa Merge pull request #3702 from lonvia/remove-tokenizer-dir
Remove automatic setup of tokenizer directory

So far the tokenizer factory would create a directory for private data for the tokenizer and then hand in the directory location to the tokenizer.

ICU tokenizer doesn't need any extra data anymore, so it doesn't make sense to create a directory which then remains empty. If a tokenizer needs such a directory in the future, it needs to create it on its own and make sure to handle the situation correctly where no project directory is used at all.
2025-04-03 09:04:48 +02:00
Sarah Hoffmann
186f562dd7 remove automatic setup of tokenizer directory
ICU tokenizer doesn't need any extra data anymore, so it doesn't
make sense to create a directory which then remains empty. If a
tokenizer needs such a directory in the future, it needs to create
it on its own and make sure to handle the situation correctly where
no project directory is used at all.
2025-04-02 20:20:04 +02:00
Sarah Hoffmann
c5bbeb626f Merge pull request #3700 from lonvia/ignore-inherited-addresses
Ignore POIs with inherited addresses for the address layer
2025-04-02 12:00:45 +02:00
Sarah Hoffmann
3bc77629c8 ignore POIs with inherited addresses for the address layer
We know that there is a building which describes the address as a
polygon and is therefore more suitable.
2025-04-02 10:30:45 +02:00
Sarah Hoffmann
6cf1287c4e Merge pull request #3686 from astridx/output_names
Output names as setting
2025-04-01 20:16:15 +02:00
Sarah Hoffmann
a49e8b9cf7 Merge pull request #3675 from TuringVerified/generic-preprocessors
Add generic preprocessors
2025-04-01 20:14:43 +02:00
TuringVerified
2eeec46040 Remove unnecessary assert statement, Fix regex_replace docstring and simplify regex_replace 2025-04-01 18:54:30 +05:30
TuringVerified
6d5a4a20c5 Update documentation, optimise regex_replace, add tests 2025-04-01 18:54:30 +05:30
TuringVerified
4665ea3e77 Add generic preprocessor 2025-04-01 18:54:30 +05:30
Sarah Hoffmann
9cf5eee5d4 add instructions for pip package upload 2025-04-01 11:59:03 +02:00
astridx
12ad95067d output names as setting 2025-03-31 16:55:05 +02:00
193 changed files with 8412 additions and 7314 deletions

View File

@@ -7,5 +7,5 @@ extend-ignore =
per-file-ignores =
__init__.py: F401
test/python/utils/test_json_writer.py: E131
test/python/conftest.py: E402
**/conftest.py: E402
test/bdd/*: F821

View File

@@ -44,11 +44,13 @@ jobs:
postgresql: 12
lua: '5.1'
dependencies: pip
python: '3.9'
- flavour: ubuntu-24
ubuntu: 24
postgresql: 17
lua: '5.3'
dependencies: apt
python: 'builtin'
runs-on: ubuntu-${{ matrix.ubuntu }}.04
@@ -68,26 +70,40 @@ jobs:
with:
dependencies: ${{ matrix.dependencies }}
- uses: actions/cache@v4
with:
path: |
/usr/local/bin/osm2pgsql
key: osm2pgsql-bin-22-1
if: matrix.ubuntu == '22'
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python }}
if: matrix.python != 'builtin'
- name: Compile osm2pgsql
run: |
sudo apt-get install -y -qq libboost-system-dev libboost-filesystem-dev libexpat1-dev zlib1g-dev libbz2-dev libpq-dev libproj-dev libicu-dev liblua${LUA_VERSION}-dev lua-dkjson nlohmann-json3-dev
mkdir osm2pgsql-build
cd osm2pgsql-build
git clone https://github.com/osm2pgsql-dev/osm2pgsql
mkdir build
cd build
cmake ../osm2pgsql
make
sudo make install
cd ../..
rm -rf osm2pgsql-build
if [ ! -f /usr/local/bin/osm2pgsql ]; then
sudo apt-get install -y -qq libboost-system-dev libboost-filesystem-dev libexpat1-dev zlib1g-dev libbz2-dev libpq-dev libproj-dev libicu-dev liblua${LUA_VERSION}-dev lua-dkjson nlohmann-json3-dev
mkdir osm2pgsql-build
cd osm2pgsql-build
git clone https://github.com/osm2pgsql-dev/osm2pgsql
mkdir build
cd build
cmake ../osm2pgsql
make
sudo make install
cd ../..
rm -rf osm2pgsql-build
else
sudo apt-get install -y -qq libexpat1 liblua${LUA_VERSION}
fi
if: matrix.ubuntu == '22'
env:
LUA_VERSION: ${{ matrix.lua }}
- name: Install test prerequisites
run: ./venv/bin/pip install behave==1.2.6
- name: Install test prerequisites (apt)
run: sudo apt-get install -y -qq python3-pytest python3-pytest-asyncio uvicorn python3-falcon python3-aiosqlite python3-pyosmium
if: matrix.dependencies == 'apt'
@@ -96,6 +112,9 @@ jobs:
run: ./venv/bin/pip install pytest-asyncio falcon starlette asgi_lifespan aiosqlite osmium uvicorn
if: matrix.dependencies == 'pip'
- name: Install test prerequisites
run: ./venv/bin/pip install pytest-bdd
- name: Install latest flake8
run: ./venv/bin/pip install -U flake8
@@ -108,7 +127,7 @@ jobs:
if: matrix.dependencies == 'pip'
- name: Python static typechecking
run: ../venv/bin/python -m mypy --strict --python-version 3.8 src
run: ../venv/bin/python -m mypy --strict --python-version 3.9 src
working-directory: Nominatim
if: matrix.dependencies == 'pip'
@@ -118,8 +137,8 @@ jobs:
- name: BDD tests
run: |
../../../venv/bin/python -m behave -DREMOVE_TEMPLATE=1 --format=progress3
working-directory: Nominatim/test/bdd
../venv/bin/python -m pytest test/bdd --nominatim-purge
working-directory: Nominatim
install:
runs-on: ubuntu-latest

View File

@@ -113,3 +113,5 @@ Checklist for releases:
* run `nominatim --version` to confirm correct version
* [ ] tag new release and add a release on github.com
* [ ] build pip packages and upload to pypi
* `make build`
* `twine upload dist/*`

View File

@@ -18,7 +18,7 @@ build-api:
tests: mypy lint pytest bdd
mypy:
mypy --strict --python-version 3.8 src
mypy --strict --python-version 3.9 src
pytest:
pytest test/python
@@ -27,7 +27,7 @@ lint:
flake8 src test/python test/bdd
bdd:
cd test/bdd; behave -DREMOVE_TEMPLATE=1
pytest test/bdd --nominatim-purge
# Documentation

View File

@@ -27,18 +27,25 @@ can be found at nominatim.org as well.
A quick summary of the necessary steps:
1. Create a Python virtualenv and install the packages:
1. Clone this git repository and download the country grid
git clone https://github.com/osm-search/Nominatim.git
wget -O Nominatim/data/country_osm_grid.sql.gz https://nominatim.org/data/country_grid.sql.gz
2. Create a Python virtualenv and install the packages:
python3 -m venv nominatim-venv
./nominatim-venv/bin/pip install packaging/nominatim-{api,db}
2. Create a project directory, get OSM data and import:
3. Create a project directory, get OSM data and import:
mkdir nominatim-project
cd nominatim-project
../nominatim-venv/bin/nominatim import --osm-file <your planet file>
../nominatim-venv/bin/nominatim import --osm-file <your planet file> 2>&1 | tee setup.log
3. Start the webserver:
4. Start the webserver:
./nominatim-venv/bin/pip install uvicorn falcon
../nominatim-venv/bin/nominatim serve

View File

@@ -27,7 +27,7 @@ For running Nominatim:
* [PostgreSQL](https://www.postgresql.org) (12+ will work, 13+ strongly recommended)
* [PostGIS](https://postgis.net) (3.0+ will work, 3.2+ strongly recommended)
* [osm2pgsql](https://osm2pgsql.org) (1.8+)
* [Python 3](https://www.python.org/) (3.7+)
* [Python 3](https://www.python.org/) (3.9+)
Furthermore the following Python libraries are required:

View File

@@ -36,11 +36,11 @@ The website is now available at `http://localhost:8765`.
## Forwarding searches to nominatim-ui
Nominatim used to provide the search interface directly by itself when
`format=html` was requested. For all endpoints except for `/reverse` and
`/lookup` this even used to be the default.
`format=html` was requested. For the `/search` endpoint this even used
to be the default.
The following section describes how to set up Apache or nginx, so that your
users are forwarded to nominatim-ui when they go to URL that formerly presented
users are forwarded to nominatim-ui when they go to a URL that formerly presented
the UI.
### Setting up forwarding in Nginx
@@ -73,41 +73,28 @@ map $args $format {
# Determine from the URI and the format parameter above if forwarding is needed.
map $uri/$format $forward_to_ui {
default 1; # The default is to forward.
~^/ui 0; # If the URI point to the UI already, we are done.
~/other$ 0; # An explicit non-html format parameter. No forwarding.
~/reverse.*/default 0; # Reverse and lookup assume xml format when
~/lookup.*/default 0; # no format parameter is given. No forwarding.
default 0; # no forwarding by default
~/search.*/default 1; # Use this line only, if search should go to UI by default.
~/reverse.*/html 1; # Forward API calls that UI supports, when
~/status.*/html 1; # format=html is explicitly requested.
~/search.*/html 1;
~/details.*/html 1;
}
```
The `$forward_to_ui` parameter can now be used to conditionally forward the
calls:
```
# When no endpoint is given, default to search.
# Need to add a rewrite so that the rewrite rules below catch it correctly.
rewrite ^/$ /search;
location @php {
# fastcgi stuff..
``` nginx
location / {
if ($forward_to_ui) {
rewrite ^(/[^/]*) https://yourserver.com/ui$1.html redirect;
rewrite ^(/[^/.]*) https://$http_host/ui$1.html redirect;
}
}
location ~ [^/]\.php(/|$) {
# fastcgi stuff..
if ($forward_to_ui) {
rewrite (.*).php https://yourserver.com/ui$1.html redirect;
}
# proxy_pass commands
}
```
!!! warning
Be aware that the rewrite commands are slightly different for URIs with and
without the .php suffix.
Reload nginx and the UI should be available.
### Setting up forwarding in Apache
@@ -159,18 +146,16 @@ directory like this:
RewriteBase "/nominatim/"
# If no endpoint is given, then use search.
RewriteRule ^(/|$) "search.php"
RewriteRule ^(/|$) "search"
# If format-html is explicitly requested, forward to the UI.
RewriteCond %{QUERY_STRING} "format=html"
RewriteRule ^([^/]+)(.php)? ui/$1.html [R,END]
RewriteRule ^([^/.]+) ui/$1.html [R,END]
# If no format parameter is there then forward anything
# but /reverse and /lookup to the UI.
# Optionally: if no format parameter is there then forward /search.
RewriteCond %{QUERY_STRING} "!format="
RewriteCond %{REQUEST_URI} "!/lookup"
RewriteCond %{REQUEST_URI} "!/reverse"
RewriteRule ^([^/]+)(.php)? ui/$1.html [R,END]
RewriteCond %{REQUEST_URI} "/search"
RewriteRule ^([^/.]+) ui/$1.html [R,END]
</Directory>
```

View File

@@ -212,7 +212,7 @@ other layers.
The featureType allows to have a more fine-grained selection for places
from the address layer. Results can be restricted to places that make up
the 'state', 'country' or 'city' part of an address. A featureType of
settlement selects any human inhabited feature from 'state' down to
`settlement` selects any human inhabited feature from 'state' down to
'neighbourhood'.
When featureType is set, then results are automatically restricted

View File

@@ -36,18 +36,27 @@ local flex = require('flex-base')
### Using preset configurations
If you want to start with one of the existing presets, then you can import
its settings using the `import_topic()` function:
its settings using the `load_topic()` function:
```
``` lua
local flex = require('flex-base')
flex.import_topic('streets')
flex.load_topic('streets')
```
The `import_topic` function takes an optional second configuration
The `load_topic` function takes an optional second configuration
parameter. The available options are explained in the
[themepark section](#using-osm2pgsql-themepark).
Available topics are: `admin`, `street`, `address`, `full`. These topics
correspond to the [import styles](../admin/Import.md#filtering-imported-data)
you can choose during import. To start with the 'extratags' style, use the
`full` topic with the appropriate config parameter:
``` lua
flex.load_topic('full', {with_extratags = true})
```
!!! note
You can also directly import the preset style files, e.g.
`local flex = require('import-street')`. It is not possible to
@@ -116,8 +125,10 @@ value without key, then this is used as default for values that are not listed.
`set_main_tags()` will completely replace the current main tag configuration
with the new configuration. `modify_main_tags()` will merge the new
configuration with the existing one. Otherwise, the two functions do exactly
the same.
configuration with the existing one. Merging is done at value level.
For example, when the current setting is `highway = {'always', primary = 'named'}`,
then `set_main_tags{highway = 'delete'}` will result in a rule
`highway = {'delete', primary = 'named'}`.
!!! example
``` lua
@@ -134,9 +145,9 @@ the same.
when it has a value of `administrative`. Objects with `highway` tags are
always included with two exceptions: the troll tag `highway=no` is
deleted on the spot. And when the value is `street_lamp` then the object
must have a name, too. Finally, if a `landuse` tag is present then
it will be used independently of the concrete value when neither boundary
nor highway tags were found and the object is named.
must also have a name to be included. Finally, if a `landuse` tag is
present then it will be used independently of the concrete value when
neither boundary nor highway tags were found and the object is named.
##### Presets
@@ -556,16 +567,6 @@ the Nominatim topic.
```
Discarding country-level boundaries when running under themepark.
## osm2pgsql gazetteer output
Nominatim still allows you to configure the gazetteer output to remain
backwards compatible with older imports. It will be automatically used
when the style file name ends in `.style`. For documentation of the
old import style, please refer to the documentation of older releases
of Nominatim. Do not use the gazetteer output for new imports. There is no
guarantee that new versions of Nominatim are fully compatible with the
gazetteer output.
## Changing the style of existing databases
There is usually no issue changing the style of a database that is already

View File

@@ -602,25 +602,44 @@ results gathered so far.
Note that under high load you may observe that users receive different results
than usual without seeing an error. This may cause some confusion.
### Logging Settings
#### NOMINATIM_LOG_DB
#### NOMINATIM_OUTPUT_NAMES
| Summary | |
| -------------- | --------------------------------------------------- |
| **Description:** | Log requests into the database |
| **Format:** | boolean |
| **Default:** | no |
| **After Changes:** | run `nominatim refresh --website` |
| **Description:** | Specifies order of name tags |
| **Format:** | string: comma-separated list of tag names |
| **Default:** | name:XX,name,brand,official_name:XX,short_name:XX,official_name,short_name,ref |
Enable logging requests into a database table with this setting. The logs
can be found in the table `new_query_log`.
Specifies the order in which different name tags are used.
The values in this list determine the preferred order of name variants,
including language-specific names (in OSM: the name tag with and without any language suffix).
When using this logging method, it is advisable to set up a job that
regularly clears out old logging information. Nominatim will not do that
on its own.
Comma-separated list, where :XX stands for language suffix
(e.g. name:en) and no :XX stands for general tags (e.g. name).
Can be used as the same time as NOMINATIM_LOG_FILE.
See also [NOMINATIM_DEFAULT_LANGUAGE](#nominatim_default_language).
!!! note
If NOMINATIM_OUTPUT_NAMES = `name:XX,name,short_name:XX,short_name` the search follows
```
'name', 'short_name'
```
if we have no preferred language order for showing search results.
For languages ['en', 'es'] the search follows
```
'name:en', 'name:es',
'name',
'short_name:en', 'short_name:es',
'short_name'
```
For those familiar with the internal implementation, the `_place_*` expansion is added, but to simplify, it is not included in this example.
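
A short sketch of the expansion rule described above (leaving out the `_place_*` variants, as the example does); `expand_output_names` is an illustrative name, not the internal function:

``` python
def expand_output_names(setting: str, languages: list[str]) -> list[str]:
    result = []
    for tag in setting.split(','):
        if tag.endswith(':XX'):  # language-specific tag: expand per language
            result.extend(tag[:-3] + ':' + lang for lang in languages)
        else:                    # general tag: keep as-is
            result.append(tag)
    return result

print(expand_output_names('name:XX,name,short_name:XX,short_name', ['en', 'es']))
# ['name:en', 'name:es', 'name', 'short_name:en', 'short_name:es', 'short_name']
```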
### Logging Settings
#### NOMINATIM_LOG_FILE
@@ -645,8 +664,6 @@ given in seconds and includes the entire time the query was queued and executed
in the frontend.
type contains the name of the endpoint used.
Can be used as the same time as NOMINATIM_LOG_DB.
#### NOMINATIM_DEBUG_SQL
| Summary | |

View File

@@ -67,7 +67,13 @@ Here is an example configuration file:
``` yaml
query-preprocessing:
- normalize
- step: split_japanese_phrases
- step: regex_replace
replacements:
- pattern: https?://[^\s]* # Filter URLs starting with http or https
replace: ''
- step: normalize
normalization:
- ":: lower ()"
- "ß > 'ss'" # German szet is unambiguously equal to double ss
@@ -88,8 +94,8 @@ token-analysis:
replacements: ['ä', 'ae']
```
The configuration file contains four sections:
`normalization`, `transliteration`, `sanitizers` and `token-analysis`.
The configuration file contains five sections:
`query-preprocessing`, `normalization`, `transliteration`, `sanitizers` and `token-analysis`.
#### Query preprocessing
@@ -106,6 +112,19 @@ The following is a list of preprocessors that are shipped with Nominatim.
heading_level: 6
docstring_section_style: spacy
##### regex-replace
::: nominatim_api.query_preprocessing.regex_replace
options:
members: False
heading_level: 6
docstring_section_style: spacy
description:
This option runs any given regex pattern on the input and replaces values accordingly
replacements:
- pattern: regex pattern
replace: string to replace with
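
In essence, a step configured like this applies each pattern/replace pair to the incoming query. A minimal sketch of that behaviour (the real implementation lives in `nominatim_api.query_preprocessing.regex_replace`; this stand-alone version is an assumption for illustration):

``` python
import re

def regex_replace(query: str, replacements: list[dict]) -> str:
    for rule in replacements:
        query = re.sub(rule['pattern'], rule['replace'], query)
    return query

print(regex_replace('see https://example.com for details',
                    [{'pattern': r'https?://[^\s]*', 'replace': ''}]))
# 'see  for details'
```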
#### Normalization and Transliteration

View File

@@ -3,8 +3,7 @@
### Import tables
OSM data is initially imported using [osm2pgsql](https://osm2pgsql.org).
Nominatim uses its own data output style 'gazetteer', which differs from the
output style created for map rendering.
Nominatim uses a custom flex style to create the initial import tables.
The import process creates the following tables:
@@ -14,7 +13,7 @@ The `planet_osm_*` tables are the usual backing tables for OSM data. Note
that Nominatim uses them to look up special relations and to find nodes on
ways.
The gazetteer style produces a single table `place` as output with the following
The osm2pgsql import produces a single table `place` as output with the following
columns:
* `osm_type` - kind of OSM object (**N** - node, **W** - way, **R** - relation)

View File

@@ -25,15 +25,15 @@ following packages should get you started:
## Prerequisites for testing and documentation
The Nominatim test suite consists of behavioural tests (using behave) and
The Nominatim test suite consists of behavioural tests (using pytest-bdd) and
unit tests (using pytest). It has the following additional requirements:
* [behave test framework](https://behave.readthedocs.io) >= 1.2.6
* [flake8](https://flake8.pycqa.org/en/stable/) (CI always runs the latest version from pip)
* [mypy](http://mypy-lang.org/) (plus typing information for external libs)
* [Python Typing Extensions](https://github.com/python/typing_extensions) (for Python < 3.9)
* [pytest](https://pytest.org)
* [pytest-asyncio](https://pytest-asyncio.readthedocs.io)
* [pytest-bdd](https://pytest-bdd.readthedocs.io)
For testing the Python search frontend, you need to install extra dependencies
depending on your choice of webserver framework:
@@ -48,9 +48,6 @@ The documentation is built with mkdocs:
* [mkdocs-material](https://squidfunk.github.io/mkdocs-material/)
* [mkdocs-gen-files](https://oprypin.github.io/mkdocs-gen-files/)
Please be aware that tests always run against the globally installed
osm2pgsql, so you need to have this set up. If you want to test against
the vendored version of osm2pgsql, you need to set the PATH accordingly.
### Installing prerequisites on Ubuntu/Debian
@@ -70,8 +67,9 @@ To set up the virtual environment with all necessary packages run:
virtualenv ~/nominatim-dev-venv
~/nominatim-dev-venv/bin/pip install\
psutil 'psycopg[binary]' PyICU SQLAlchemy \
python-dotenv jinja2 pyYAML behave \
mkdocs 'mkdocstrings[python]' mkdocs-gen-files pytest pytest-asyncio flake8 \
python-dotenv jinja2 pyYAML \
mkdocs 'mkdocstrings[python]' mkdocs-gen-files \
pytest pytest-asyncio pytest-bdd flake8 \
types-jinja2 types-markupsafe types-psutil types-psycopg2 \
types-pygments types-pyyaml types-requests types-ujson \
types-urllib3 typing-extensions unicorn falcon starlette \
@@ -94,7 +92,7 @@ but executes against the code in the source tree. For example:
```
me@machine:~$ cd Nominatim
me@machine:~Nominatim$ ./nominatim-cli.py --version
Nominatim version 4.4.99-1
Nominatim version 5.1.0-0
```
Make sure you have activated the virtual environment holding all

View File

@@ -43,53 +43,53 @@ The name of the pytest binary depends on your installation.
## BDD Functional Tests (`test/bdd`)
Functional tests are written as BDD instructions. For more information on
the philosophy of BDD testing, see the
[Behave manual](http://pythonhosted.org/behave/philosophy.html).
The following explanation assume that the reader is familiar with the BDD
notations of features, scenarios and steps.
All possible steps can be found in the `steps` directory and should ideally
be documented.
the philosophy of BDD testing, read the Wikipedia article on
[Behaviour-driven development](https://en.wikipedia.org/wiki/Behavior-driven_development).
### General Usage
To run the functional tests, do
cd test/bdd
behave
pytest test/bdd
The tests can be configured with a set of environment variables (`behave -D key=val`):
The BDD tests create databases for the tests. You can set the names of the
databases through configuration variables in your `pytest.ini`:
* `TEMPLATE_DB` - name of template database used as a skeleton for
the test databases (db tests)
* `TEST_DB` - name of test database (db tests)
* `API_TEST_DB` - name of the database containing the API test data (api tests)
* `API_TEST_FILE` - OSM file to be imported into the API test database (api tests)
* `API_ENGINE` - webframe to use for running search queries, same values as
`nominatim serve --engine` parameter
* `DB_HOST` - (optional) hostname of database host
* `DB_PORT` - (optional) port of database on host
* `DB_USER` - (optional) username of database login
* `DB_PASS` - (optional) password for database login
* `REMOVE_TEMPLATE` - if true, the template and API database will not be reused
during the next run. Reusing the base templates speeds
up tests considerably but might lead to outdated errors
for some changes in the database layout.
* `KEEP_TEST_DB` - if true, the test database will not be dropped after a test
is finished. Should only be used if one single scenario is
run, otherwise the result is undefined.
* `nominatim_test_db` defines the name of the temporary database created for
a single test (default: `test_nominatim`)
* `nominatim_api_test_db` defines the name of the database containing
the API test data, see also below (default: `test_api_nominatim`)
* `nominatim_template_db` defines the name of the template database used
for creating the temporary test databases. It contains some static setup
which usually doesn't change between imports of OSM data
(default: `test_template_nominatim`)
To change other connection parameters for the PostgreSQL database, use
the [libpq environment variables](https://www.postgresql.org/docs/current/libpq-envars.html).
Never set a password through these variables. Use a
[password file](https://www.postgresql.org/docs/current/libpq-pgpass.html) instead.
The API test database and the template database are only created once and then
left untouched. This is usually what you want because it speeds up subsequent
runs of BDD tests. If you do change code that has an influence on the content
of these databases, you can run pytest with the `--nominatim-purge` parameter
and the databases will be dropped and recreated from scratch.
When running the BDD tests with make (using `make tests` or `make bdd`), then
the databases will always be purged.
The temporary test database is usually dropped directly after the test, so
it does not take up unnecessary space. If you want to keep the database around,
for example while debugging a specific BDD test, use the parameter
`--nominatim-keep-db`.
Logging can be defined through command line parameters of behave itself. Check
out `behave --help` for details. Also have a look at the 'work-in-progress'
feature of behave which comes in handy when writing new tests.
### API Tests (`test/bdd/api`)
These tests are meant to test the different API endpoints and their parameters.
They require to import several datasets into a test database. This is normally
done automatically during setup of the test. The API test database is then
kept around and reused in subsequent runs of behave. Use `behave -DREMOVE_TEMPLATE`
kept around and reused in subsequent runs of behave. Use `--nominatim-purge`
to force a reimport of the database.
The official test dataset is saved in the file `test/testdb/apidb-test-data.pbf`
@@ -109,12 +109,12 @@ test the correctness of osm2pgsql. Each test will write some data into the `plac
table (and optionally the `planet_osm_*` tables if required) and then run
Nominatim's processing functions on that.
These tests need to create their own test databases. By default they will be
called `test_template_nominatim` and `test_nominatim`. Names can be changed with
the environment variables `TEMPLATE_DB` and `TEST_DB`. The user running the tests
needs superuser rights for postgres.
These tests use the template database and create temporary test databases for
each test.
### Import Tests (`test/bdd/osm2pgsql`)
These tests check that data is imported correctly into the place table. They
use the same template database as the DB Creation tests, so the same remarks apply.
These tests check that data is imported correctly into the place table.
These tests also use the template database and create temporary test databases
for each test.

View File

@@ -9,7 +9,7 @@ the address computation and the search frontend.
The __data import__ stage reads the raw OSM data and extracts all information
that is useful for geocoding. This part is done by osm2pgsql, the same tool
that can also be used to import a rendering database. It uses the special
gazetteer output plugin in `osm2pgsql/src/output-gazetter.[ch]pp`. The result of
flex output style defined in the directory `/lib-lua`. The result of
the import can be found in the database table `place`.
The __address computation__ or __indexing__ stage takes the data from `place`

View File

@@ -187,7 +187,7 @@ module.MAIN_TAGS_POIS = function (group)
passing_place = group,
street_lamp = 'named',
traffic_signals = 'named'},
historic = {'always',
historic = {'fallback',
yes = group,
no = group},
information = {include_when_tag_present('tourism', 'information'),
@@ -196,6 +196,7 @@ module.MAIN_TAGS_POIS = function (group)
trail_blaze = 'never'},
junction = {'fallback',
no = group},
landuse = {cemetery = 'always'},
leisure = {'always',
nature_reserve = 'fallback',
swimming_pool = 'named',
@@ -229,6 +230,7 @@ module.MAIN_TAGS_POIS = function (group)
shop = {'always',
no = group},
tourism = {'always',
attraction = 'fallback',
no = group,
yes = group,
information = exclude_when_key_present('information')},
@@ -330,7 +332,7 @@ module.NAME_TAGS.core = {main = {'name', 'name:*',
}
module.NAME_TAGS.address = {house = {'addr:housename'}}
module.NAME_TAGS.poi = group_merge({main = {'brand'},
extra = {'iata', 'icao'}},
extra = {'iata', 'icao', 'faa'}},
module.NAME_TAGS.core)
-- Address tagging

View File

@@ -309,7 +309,7 @@ BEGIN
IF NEW.startnumber IS NULL THEN
NEW.startnumber := startnumber;
NEW.endnumber := endnumber;
NEW.linegeo := sectiongeo;
NEW.linegeo := ST_ReducePrecision(sectiongeo, 0.0000001);
NEW.postcode := postcode;
ELSE
INSERT INTO location_property_osmline
@@ -317,7 +317,8 @@ BEGIN
startnumber, endnumber, step,
address, postcode, country_code,
geometry_sector, indexed_status)
VALUES (sectiongeo, NEW.partition, NEW.osm_id, NEW.parent_place_id,
VALUES (ST_ReducePrecision(sectiongeo, 0.0000001),
NEW.partition, NEW.osm_id, NEW.parent_place_id,
startnumber, endnumber, NEW.step,
NEW.address, postcode,
NEW.country_code, NEW.geometry_sector, 0);

View File

@@ -2,7 +2,7 @@
--
-- This file is part of Nominatim. (https://nominatim.org)
--
-- Copyright (C) 2024 by the Nominatim developer community.
-- Copyright (C) 2025 by the Nominatim developer community.
-- For a full list of authors see the git log.
-- Trigger functions for the placex table.
@@ -638,8 +638,10 @@ BEGIN
-- Add it to the list of search terms
{% if not db.reverse_only %}
nameaddress_vector := array_merge(nameaddress_vector,
location.keywords::integer[]);
IF location.rank_address != 11 AND location.rank_address != 5 THEN
nameaddress_vector := array_merge(nameaddress_vector,
location.keywords::integer[]);
END IF;
{% endif %}
INSERT INTO place_addressline (place_id, address_place_id, fromarea,
@@ -730,7 +732,7 @@ BEGIN
IF NEW.rank_address between 2 and 27 THEN
IF (ST_GeometryType(NEW.geometry) in ('ST_Polygon','ST_MultiPolygon') AND ST_IsValid(NEW.geometry)) THEN
-- Performance: We just can't handle re-indexing for country level changes
IF (NEW.rank_address < 26 and st_area(NEW.geometry) < 1)
IF (NEW.rank_address < 26 and st_area(NEW.geometry) <= 2)
OR (NEW.rank_address >= 26 and st_area(NEW.geometry) < 0.01)
THEN
-- mark items within the geometry for re-indexing
@@ -778,7 +780,7 @@ BEGIN
SELECT count(*)>0 FROM pg_tables WHERE tablename = classtable and schemaname = current_schema() INTO result;
IF result THEN
EXECUTE 'INSERT INTO ' || classtable::regclass || ' (place_id, centroid) VALUES ($1,$2)'
USING NEW.place_id, ST_Centroid(NEW.geometry);
USING NEW.place_id, NEW.centroid;
END IF;
{% endif %} -- not disable_diff_updates
@@ -960,9 +962,8 @@ BEGIN
WHERE class = 'place' and rank_address between 1 and 23
and prank.address_rank >= NEW.rank_address
and ST_GeometryType(geometry) in ('ST_Polygon','ST_MultiPolygon') -- select right index
and geometry && NEW.geometry
and geometry ~ NEW.geometry -- needed because ST_Relate does not do bbox cover test
and ST_Relate(geometry, NEW.geometry, 'T*T***FF*') -- contains but not equal
and ST_Contains(geometry, NEW.geometry)
and not ST_Equals(geometry, NEW.geometry)
ORDER BY prank.address_rank desc LIMIT 1
LOOP
NEW.rank_address := location.rank_address + 2;
@@ -983,9 +984,8 @@ BEGIN
and rank_address between 1 and 25 -- select right index
and ST_GeometryType(geometry) in ('ST_Polygon','ST_MultiPolygon') -- select right index
and prank.address_rank >= NEW.rank_address
and geometry && NEW.geometry
and geometry ~ NEW.geometry -- needed because ST_Relate does not do bbox cover test
and ST_Relate(geometry, NEW.geometry, 'T*T***FF*') -- contains but not equal
and ST_Contains(geometry, NEW.geometry)
and not ST_Equals(geometry, NEW.geometry)
ORDER BY prank.address_rank desc LIMIT 1
LOOP
NEW.rank_address := location.rank_address + 2;

View File

@@ -88,6 +88,10 @@ BEGIN
area := area / 3;
ELSIF country_code IN ('bo', 'ar', 'sd', 'mn', 'in', 'et', 'cd', 'mz', 'ly', 'cl', 'zm') THEN
area := area / 2;
ELSIF country_code IN ('sg', 'ws', 'st', 'kn') THEN
area := area * 5;
ELSIF country_code IN ('dm', 'mt', 'lc', 'gg', 'sc', 'nr') THEN
area := area * 20;
END IF;
IF area > 1 THEN

View File

@@ -2,7 +2,7 @@
--
-- This file is part of Nominatim. (https://nominatim.org)
--
-- Copyright (C) 2022 by the Nominatim developer community.
-- Copyright (C) 2025 by the Nominatim developer community.
-- For a full list of authors see the git log.
-- Assorted helper functions for the triggers.
@@ -14,14 +14,14 @@ DECLARE
geom_type TEXT;
BEGIN
geom_type := ST_GeometryType(place);
IF geom_type = ' ST_Point' THEN
IF geom_type = 'ST_Point' THEN
RETURN place;
END IF;
IF geom_type = 'ST_LineString' THEN
RETURN ST_LineInterpolatePoint(place, 0.5);
RETURN ST_ReducePrecision(ST_LineInterpolatePoint(place, 0.5), 0.0000001);
END IF;
RETURN ST_PointOnSurface(place);
RETURN ST_ReducePrecision(ST_PointOnSurface(place), 0.0000001);
END;
$$
LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE;

View File

@@ -1 +0,0 @@
../../COPYING

View File

@@ -0,0 +1,232 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
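For illustration only (this example is not part of the license text): in a Python project, the notice could sit at the top of each source file as a comment header. The angle-bracket placeholders are to be replaced with the actual program name, year, and author.

#!/usr/bin/env python3
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.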
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.
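A minimal sketch of how this might look in a Python program (purely illustrative and not part of the license; the prompt, the command handling, and the bracketed stand-ins printed below are placeholders, and a real program would display the relevant sections of the License in full):

import sys

NOTICE = (
    "<program> Copyright (C) <year> <name of author>\n"
    "This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n"
    "This is free software, and you are welcome to redistribute it\n"
    "under certain conditions; type `show c' for details."
)

def main():
    # Print the short notice only when started in an interactive mode.
    if sys.stdin.isatty():
        print(NOTICE)
    while True:
        try:
            cmd = input("> ").strip()
        except EOFError:
            break
        if cmd == "show w":
            print("[warranty disclaimer: sections 15-17 of the GNU GPL]")
        elif cmd == "show c":
            print("[redistribution conditions: sections 4-6 of the GNU GPL]")
        elif cmd in ("quit", "exit"):
            break

if __name__ == "__main__":
    main()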
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>.

@@ -2,7 +2,7 @@
name = "nominatim-api"
description = "A tool for building a database of OpenStreetMap for geocoding and for searching the database. Search library."
readme = "README.md"
-requires-python = ">=3.7"
+requires-python = ">=3.9"
license = 'GPL-3.0-or-later'
maintainers = [
{ name = "Sarah Hoffmann", email = "lonvia@denofr.de" },

@@ -1 +0,0 @@
../../COPYING

@@ -0,0 +1,232 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>.

View File

@@ -2,7 +2,7 @@
name = "nominatim-db"
description = "A tool for building a database of OpenStreetMap for geocoding and for searching the database. Database backend."
readme = "README.md"
requires-python = ">=3.7"
requires-python = ">=3.9"
license = 'GPL-3.0-or-later'
maintainers = [
{ name = "Sarah Hoffmann", email = "lonvia@denofr.de" },

View File

@@ -944,7 +944,7 @@ kp:
# South Korea (대한민국)
kr:
partition: 49
languages: ko, en
languages: ko
names: !include country-names/kr.yaml
postcode:
pattern: "ddddd"

View File

@@ -5,7 +5,6 @@
# Database connection string.
# Add host, port, user etc through additional semicolon-separated attributes.
# e.g. ;host=...;port=...;user=...;password=...
# Changing this variable requires to run 'nominatim refresh --website'.
NOMINATIM_DATABASE_DSN="pgsql:dbname=nominatim"
# Database web user.
@@ -36,11 +35,11 @@ NOMINATIM_TOKENIZER_CONFIG=
# Search in the Tiger house number data for the US.
# Note: The tables must already exist or queries will throw errors.
# Changing this value requires to run ./utils/setup --create-functions --setup-website.
# Changing this value requires to run ./utils/setup --create-functions.
NOMINATIM_USE_US_TIGER_DATA=no
# Search in the auxiliary housenumber table.
# Changing this value requires to run ./utils/setup --create-functions --setup-website.
# Changing this value requires to run ./utils/setup --create-functions.
NOMINATIM_USE_AUX_LOCATION_DATA=no
# Proxy settings
@@ -143,8 +142,7 @@ NOMINATIM_REPLICATION_RECHECK_INTERVAL=60
### API settings
#
# The following settings configure the API responses. You must rerun
# 'nominatim refresh --website' after changing any of them.
# The following settings configure the API responses.
# Send permissive CORS access headers.
# When enabled, send CORS headers to allow access to everybody.
@@ -192,16 +190,17 @@ NOMINATIM_REQUEST_TIMEOUT=60
# to geocode" instead.
NOMINATIM_SEARCH_WITHIN_COUNTRIES=False
# Specifies the order in which different name tags are used.
# The values in this list determine the preferred order of name variants,
# including language-specific names.
# Comma-separated list, where :XX stands for language-specific tags
# (e.g. name:en) and no :XX stands for general tags (e.g. name).
NOMINATIM_OUTPUT_NAMES=name:XX,name,brand,official_name:XX,short_name:XX,official_name,short_name,ref
### Log settings
#
# The following options allow you to enable logging of API requests.
# You must rerun 'nominatim refresh --website' after changing any of them.
#
# Enable logging of requests into the DB.
# The request will be logged into the new_query_log table.
# You should set up a cron job that regularly clears out this table.
NOMINATIM_LOG_DB=no
# Enable logging of requests into a file.
# To enable logging set this setting to the file to log to.
NOMINATIM_LOG_FILE=

View File

@@ -8,6 +8,7 @@
Helper functions for localizing names of results.
"""
from typing import Mapping, List, Optional
from .config import Configuration
import re
@@ -20,14 +21,18 @@ class Locales:
"""
def __init__(self, langs: Optional[List[str]] = None):
self.config = Configuration(None)
self.languages = langs or []
self.name_tags: List[str] = []
# Build the list of supported tags. It is currently hard-coded.
self._add_lang_tags('name')
self._add_tags('name', 'brand')
self._add_lang_tags('official_name', 'short_name')
self._add_tags('official_name', 'short_name', 'ref')
parts = self.config.OUTPUT_NAMES.split(',')
for part in parts:
part = part.strip()
if part.endswith(":XX"):
self._add_lang_tags(part[:-3])
else:
self._add_tags(part)
def __bool__(self) -> bool:
return len(self.languages) > 0

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for specialised logging with HTML output.
@@ -106,9 +106,6 @@ class BaseLogger:
except TypeError:
return sqlstr
# Fixes an odd issue with Python 3.7 where percentages are not
# quoted correctly.
sqlstr = re.sub(r'%(?!\()', '%%', sqlstr)
sqlstr = re.sub(r'__\[POSTCOMPILE_([^]]*)\]', r'%(\1)s', sqlstr)
return sqlstr % params
@@ -342,7 +339,8 @@ HTML_HEADER: str = """<!DOCTYPE html>
<title>Nominatim - Debug</title>
<style>
""" + \
(HtmlFormatter(nobackground=True).get_style_defs('.highlight') if CODE_HIGHLIGHT else '') + \
(HtmlFormatter(nobackground=True).get_style_defs('.highlight') # type: ignore[no-untyped-call]
if CODE_HIGHLIGHT else '') + \
"""
h2 { font-size: x-large }

View File

@@ -0,0 +1,52 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
This preprocessor replaces values in a given input based on pre-defined regex rules.
Arguments:
pattern: Regex pattern to be applied to the input
replace: The string that matches are replaced with
"""
from typing import List
import re
from .config import QueryConfig
from .base import QueryProcessingFunc
from ..search.query import Phrase
class _GenericPreprocessing:
"""Perform replacements to input phrases using custom regex patterns."""
def __init__(self, config: QueryConfig) -> None:
"""Initialise the _GenericPreprocessing class with patterns from the ICU config file."""
self.config = config
match_patterns = self.config.get('replacements', [])
self.compiled_patterns = [
(re.compile(item['pattern']), item['replace']) for item in match_patterns
]
def split_phrase(self, phrase: Phrase) -> Phrase:
"""This function performs replacements on the given text using regex patterns."""
for item in self.compiled_patterns:
phrase.text = item[0].sub(item[1], phrase.text)
return phrase
def __call__(self, phrases: List[Phrase]) -> List[Phrase]:
"""
Return the final Phrase list.
Returns an empty list if there is nothing left after split_phrase.
"""
result = [p for p in map(self.split_phrase, phrases) if p.text.strip()]
return result
def create(config: QueryConfig) -> QueryProcessingFunc:
""" Create a function for generic preprocessing."""
return _GenericPreprocessing(config)

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Conversion from token assignment to an abstract DB search.
@@ -146,7 +146,7 @@ class SearchBuilder:
if address:
sdata.lookups = [dbf.FieldLookup('nameaddress_vector',
[t.token for r in address
for t in self.query.get_partials_list(r)],
for t in self.query.iter_partials(r)],
lookups.Restrict)]
yield dbs.PostcodeSearch(penalty, sdata)
@@ -155,32 +155,39 @@ class SearchBuilder:
""" Build a simple address search for special entries where the
housenumber is the main name token.
"""
sdata.lookups = [dbf.FieldLookup('name_vector', [t.token for t in hnrs], lookups.LookupAny)]
expected_count = sum(t.count for t in hnrs)
partials = {t.token: t.addr_count for trange in address
for t in self.query.get_partials_list(trange)}
partials = dbf.CountedTokenIDs((t for trange in address
for t in self.query.iter_partials(trange)),
'addr_count')
if not partials:
# can happen when none of the partials is indexed
return
if expected_count < 8000:
sdata.lookups.append(dbf.FieldLookup('nameaddress_vector',
list(partials), lookups.Restrict))
elif len(partials) != 1 or list(partials.values())[0] < 10000:
sdata.lookups.append(dbf.FieldLookup('nameaddress_vector',
list(partials), lookups.LookupAll))
expected_count = sum(t.count for t in hnrs)
hnr_tokens = [t.token for t in hnrs]
if expected_count < 10000:
sdata.lookups = [dbf.FieldLookup('name_vector', hnr_tokens, lookups.LookupAny),
dbf.FieldLookup('nameaddress_vector',
partials.get_tokens(),
lookups.Restrict)]
else:
addr_fulls = [t.token for t
in self.query.get_tokens(address[0], qmod.TOKEN_WORD)]
if len(addr_fulls) > 5:
return
sdata.lookups.append(
dbf.FieldLookup('nameaddress_vector', addr_fulls, lookups.LookupAny))
split = partials.get_num_lookup_tokens(20000, 5)
if split > 0:
sdata.lookups = partials.split_lookup(split, 'nameaddress_vector')
sdata.lookups.append(
dbf.FieldLookup('name_vector', hnr_tokens, lookups.Restrict))
else:
addr_fulls = [t.token for t in
self.query.get_tokens(address[0], qmod.TOKEN_WORD)]
if len(addr_fulls) > 5:
return
sdata.lookups = [
dbf.FieldLookup('name_vector', hnr_tokens, lookups.LookupAny),
dbf.FieldLookup('nameaddress_vector', addr_fulls, lookups.LookupAny)]
sdata.housenumbers = dbf.WeightedStrings([], [])
yield dbs.PlaceSearch(0.05, sdata, expected_count)
yield dbs.PlaceSearch(0.05, sdata, expected_count, True)
def build_name_search(self, sdata: dbf.SearchData,
name: qmod.TokenRange, address: List[qmod.TokenRange],
@@ -194,7 +201,10 @@ class SearchBuilder:
sdata.rankings.append(ranking)
for penalty, count, lookup in self.yield_lookups(name, address):
sdata.lookups = lookup
yield dbs.PlaceSearch(penalty + name_penalty, sdata, count)
if sdata.housenumbers:
yield dbs.AddressSearch(penalty + name_penalty, sdata, count, bool(address))
else:
yield dbs.PlaceSearch(penalty + name_penalty, sdata, count, bool(address))
def yield_lookups(self, name: qmod.TokenRange, address: List[qmod.TokenRange]
) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
@@ -202,37 +212,88 @@ class SearchBuilder:
be searched for. This takes into account how frequent the terms
are and tries to find a lookup that optimizes index use.
"""
name_partials = dbf.CountedTokenIDs(self.query.iter_partials(name))
addr_partials = dbf.CountedTokenIDs((t for r in address
for t in self.query.iter_partials(r)),
'addr_count')
if not addr_partials:
yield from self.yield_name_only_lookups(name_partials, name)
else:
yield from self.yield_address_lookups(name_partials, addr_partials, name)
def yield_name_only_lookups(self, partials: dbf.CountedTokenIDs, name: qmod.TokenRange
) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
""" Yield the best lookup for a name-only search.
"""
split = partials.get_num_lookup_tokens(30000, 6)
if split > 0:
yield 0.0, partials.expected_for_all_search(5), \
partials.split_lookup(split, 'name_vector')
else:
# lots of results expected: try lookup by full names first
name_fulls = list(filter(lambda t: t.count < 50000,
self.query.get_tokens(name, qmod.TOKEN_WORD)))
if name_fulls:
yield 0.0, sum(t.count for t in name_fulls), \
dbf.lookup_by_any_name([t.token for t in name_fulls], [], [])
# look the name up by its partials
exp_count = partials.expected_for_all_search(5)
if exp_count < 50000:
yield 1.0, exp_count, \
[dbf.FieldLookup('name_vector', partials.get_tokens(), lookups.LookupAll)]
def yield_address_lookups(self, name_partials: dbf.CountedTokenIDs,
addr_partials: dbf.CountedTokenIDs, name: qmod.TokenRange,
) -> Iterator[Tuple[float, int, List[dbf.FieldLookup]]]:
penalty = 0.0 # extra penalty
name_partials = {t.token: t for t in self.query.get_partials_list(name)}
addr_partials = [t for r in address for t in self.query.get_partials_list(r)]
addr_tokens = list({t.token for t in addr_partials})
name_split = name_partials.get_num_lookup_tokens(20000, 6)
addr_split = addr_partials.get_num_lookup_tokens(10000, 3)
exp_count = min(t.count for t in name_partials.values()) / (3**(len(name_partials) - 1))
if name_split < 0 and addr_split < 0:
# Partial term too frequent. Try looking up by rare full names first.
name_fulls = self.query.get_tokens(name, qmod.TOKEN_WORD)
if name_fulls:
fulls_count = sum(t.count for t in name_fulls)
if (len(name_partials) > 3 or exp_count < 8000):
yield penalty, exp_count, dbf.lookup_by_names(list(name_partials.keys()), addr_tokens)
return
if fulls_count < 80000:
yield 0.0, fulls_count, \
dbf.lookup_by_any_name([t.token for t in name_fulls],
addr_partials.get_tokens(),
[])
penalty += 0.2
penalty += 0.4
addr_count = min(t.addr_count for t in addr_partials) if addr_partials else 50000
# Partial term too frequent. Try looking up by rare full names first.
name_fulls = self.query.get_tokens(name, qmod.TOKEN_WORD)
if name_fulls:
fulls_count = sum(t.count for t in name_fulls)
name_split = name_partials.get_num_lookup_tokens(50000, 10)
addr_split = addr_partials.get_num_lookup_tokens(30000, 5)
if fulls_count < 50000 or addr_count < 50000:
yield penalty, fulls_count / (2**len(addr_tokens)), \
self.get_full_name_ranking(name_fulls, addr_partials,
fulls_count > 30000 / max(1, len(addr_tokens)))
if name_split > 0 \
and (addr_split < 0 or name_partials.min_count() <= addr_partials.min_count()):
# lookup by name
lookup = name_partials.split_lookup(name_split, 'name_vector')
lookup.append(dbf.FieldLookup('nameaddress_vector',
addr_partials.get_tokens(), lookups.Restrict))
yield penalty, name_partials.expected_for_all_search(5), lookup
elif addr_split > 0:
# lookup by address
lookup = addr_partials.split_lookup(addr_split, 'nameaddress_vector')
lookup.append(dbf.FieldLookup('name_vector',
name_partials.get_tokens(), lookups.Restrict))
yield penalty, addr_partials.expected_for_all_search(3), lookup
elif len(name_partials) > 1:
penalty += 0.5
# To catch remaining results, lookup by name and address
# We only do this if there is a reasonable number of results expected.
exp_count = min(name_partials.min_count(), addr_partials.min_count())
exp_count = int(exp_count / (min(3, len(name_partials)) + min(3, len(addr_partials))))
if exp_count < 50000:
lookup = name_partials.split_lookup(3, 'name_vector')
lookup.extend(addr_partials.split_lookup(3, 'nameaddress_vector'))
# To catch remaining results, lookup by name and address
# We only do this if there is a reasonable number of results expected.
exp_count /= 2**len(addr_tokens)
if exp_count < 10000 and addr_count < 20000:
penalty += 0.35 * max(1 if name_fulls else 0.1,
5 - len(name_partials) - len(addr_tokens))
yield penalty, exp_count, \
self.get_name_address_ranking(list(name_partials.keys()), addr_partials)
yield penalty, exp_count, lookup
def get_name_address_ranking(self, name_tokens: List[int],
addr_partials: List[qmod.Token]) -> List[dbf.FieldLookup]:
@@ -279,11 +340,14 @@ class SearchBuilder:
""" Create a ranking expression for a name term in the given range.
"""
name_fulls = self.query.get_tokens(trange, qmod.TOKEN_WORD)
ranks = [dbf.RankedTokens(t.penalty, [t.token]) for t in name_fulls]
full_word_penalty = self.query.get_in_word_penalty(trange)
ranks = [dbf.RankedTokens(t.penalty + full_word_penalty, [t.token])
for t in name_fulls]
ranks.sort(key=lambda r: r.penalty)
# Fallback, sum of penalty for partials
name_partials = self.query.get_partials_list(trange)
default = sum(t.penalty for t in name_partials) + 0.2
default = sum(t.penalty for t in self.query.iter_partials(trange)) + 0.2
default += sum(n.word_break_penalty
for n in self.query.nodes[trange.start + 1:trange.end])
return dbf.FieldRanking(db_field, default, ranks)
def get_addr_ranking(self, trange: qmod.TokenRange) -> dbf.FieldRanking:
@@ -295,36 +359,40 @@ class SearchBuilder:
ranks: List[dbf.RankedTokens] = []
while todo:
neglen, pos, rank = heapq.heappop(todo)
_, pos, rank = heapq.heappop(todo)
# partial node
partial = self.query.nodes[pos].partial
if partial is not None:
if pos + 1 < trange.end:
penalty = rank.penalty + partial.penalty \
+ self.query.nodes[pos + 1].word_break_penalty
heapq.heappush(todo, (-(pos + 1), pos + 1,
dbf.RankedTokens(penalty, rank.tokens)))
else:
ranks.append(dbf.RankedTokens(rank.penalty + partial.penalty,
rank.tokens))
# full words
for tlist in self.query.nodes[pos].starting:
if tlist.ttype in (qmod.TOKEN_PARTIAL, qmod.TOKEN_WORD):
if tlist.ttype == qmod.TOKEN_WORD:
if tlist.end < trange.end:
chgpenalty = PENALTY_WORDCHANGE[self.query.nodes[tlist.end].btype]
if tlist.ttype == qmod.TOKEN_PARTIAL:
penalty = rank.penalty + chgpenalty \
+ max(t.penalty for t in tlist.tokens)
heapq.heappush(todo, (neglen - 1, tlist.end,
dbf.RankedTokens(penalty, rank.tokens)))
else:
for t in tlist.tokens:
heapq.heappush(todo, (neglen - 1, tlist.end,
rank.with_token(t, chgpenalty)))
chgpenalty = self.query.nodes[tlist.end].word_break_penalty \
+ self.query.get_in_word_penalty(
qmod.TokenRange(pos, tlist.end))
for t in tlist.tokens:
heapq.heappush(todo, (-tlist.end, tlist.end,
rank.with_token(t, chgpenalty)))
elif tlist.end == trange.end:
if tlist.ttype == qmod.TOKEN_PARTIAL:
ranks.append(dbf.RankedTokens(rank.penalty
+ max(t.penalty for t in tlist.tokens),
rank.tokens))
else:
ranks.extend(rank.with_token(t, 0.0) for t in tlist.tokens)
if len(ranks) >= 10:
# Too many variants, bail out and only add the
# worst-case fallback: the sum of the partials' penalties
name_partials = self.query.get_partials_list(trange)
default = sum(t.penalty for t in name_partials) + 0.2
ranks.append(dbf.RankedTokens(rank.penalty + default, []))
# Bail out of outer loop
todo.clear()
break
ranks.extend(rank.with_token(t, 0.0) for t in tlist.tokens)
if len(ranks) >= 10:
# Too many variants, bail out and only add the
# worst-case fallback: the sum of the partials' penalties
default = sum(t.penalty for t in self.query.iter_partials(trange)) + 0.2
default += sum(n.word_break_penalty
for n in self.query.nodes[trange.start + 1:trange.end])
ranks.append(dbf.RankedTokens(rank.penalty + default, []))
# Bail out of outer loop
break
ranks.sort(key=lambda r: len(r.tokens))
default = ranks[0].penalty + 0.3
@@ -344,6 +412,7 @@ class SearchBuilder:
if not tokens:
return None
sdata.set_strings('countries', tokens)
sdata.penalty += self.query.get_in_word_penalty(assignment.country)
elif self.details.countries:
sdata.countries = dbf.WeightedStrings(self.details.countries,
[0.0] * len(self.details.countries))
@@ -351,29 +420,24 @@ class SearchBuilder:
sdata.set_strings('housenumbers',
self.query.get_tokens(assignment.housenumber,
qmod.TOKEN_HOUSENUMBER))
sdata.penalty += self.query.get_in_word_penalty(assignment.housenumber)
if assignment.postcode:
sdata.set_strings('postcodes',
self.query.get_tokens(assignment.postcode,
qmod.TOKEN_POSTCODE))
sdata.penalty += self.query.get_in_word_penalty(assignment.postcode)
if assignment.qualifier:
tokens = self.get_qualifier_tokens(assignment.qualifier)
if not tokens:
return None
sdata.set_qualifiers(tokens)
sdata.penalty += self.query.get_in_word_penalty(assignment.qualifier)
elif self.details.categories:
sdata.qualifiers = dbf.WeightedCategories(self.details.categories,
[0.0] * len(self.details.categories))
if assignment.address:
if not assignment.name and assignment.housenumber:
# housenumber search: the first item needs to be handled like
# a name in ranking or penalties are not comparable with
# normal searches.
sdata.set_ranking([self.get_name_ranking(assignment.address[0],
db_field='nameaddress_vector')]
+ [self.get_addr_ranking(r) for r in assignment.address[1:]])
else:
sdata.set_ranking([self.get_addr_ranking(r) for r in assignment.address])
sdata.set_ranking([self.get_addr_ranking(r) for r in assignment.address])
else:
sdata.rankings = []
@@ -419,14 +483,3 @@ class SearchBuilder:
return dbf.WeightedCategories(list(tokens.keys()), list(tokens.values()))
return None
PENALTY_WORDCHANGE = {
qmod.BREAK_START: 0.0,
qmod.BREAK_END: 0.0,
qmod.BREAK_PHRASE: 0.0,
qmod.BREAK_SOFT_PHRASE: 0.0,
qmod.BREAK_WORD: 0.1,
qmod.BREAK_PART: 0.2,
qmod.BREAK_TOKEN: 0.4
}

View File

@@ -7,7 +7,7 @@
"""
Data structures for more complex fields in abstract search descriptions.
"""
from typing import List, Tuple, Iterator, Dict, Type
from typing import List, Tuple, Iterator, Dict, Type, cast
import dataclasses
import sqlalchemy as sa
@@ -18,6 +18,66 @@ from .query import Token
from . import db_search_lookups as lookups
class CountedTokenIDs:
""" A list of token IDs with their respective counts, sorted
from least frequent to most frequent.
If a token count is one, then statistics are likely to be unavailable
and a relatively high count is assumed instead.
"""
def __init__(self, tokens: Iterator[Token], count_column: str = 'count'):
self.tokens = list({(cast(int, getattr(t, count_column)), t.token) for t in tokens})
self.tokens.sort(key=lambda t: t[0] if t[0] > 1 else 100000)
def __len__(self) -> int:
return len(self.tokens)
def get_num_lookup_tokens(self, limit: int, fac: int) -> int:
""" Suggest the number of tokens to be used for an index lookup.
The idea here is to use as few items as possible while making
sure the number of rows returned stays below 'limit', the point
beyond which rechecking the returned rows becomes more expensive
than adding another item to the index lookup. 'fac' is the factor
by which the limit is increased every time a lookup item is added.
If the list of tokens doesn't seem suitable at all for index
lookup, -1 is returned.
"""
length = len(self.tokens)
min_count = self.tokens[0][0]
if min_count == 1:
return min(length, 3) # no statistics available, use index
for i in range(min(length, 3)):
if min_count < limit:
return i + 1
limit = limit * fac
return -1
def min_count(self) -> int:
return self.tokens[0][0]
def expected_for_all_search(self, fac: int = 5) -> int:
return int(self.tokens[0][0] / (fac**(len(self.tokens) - 1)))
def get_tokens(self) -> List[int]:
return [t[1] for t in self.tokens]
def get_head_tokens(self, num_tokens: int) -> List[int]:
return [t[1] for t in self.tokens[:num_tokens]]
def get_tail_tokens(self, first: int) -> List[int]:
return [t[1] for t in self.tokens[first:]]
def split_lookup(self, split: int, column: str) -> 'List[FieldLookup]':
lookup = [FieldLookup(column, self.get_head_tokens(split), lookups.LookupAll)]
if split < len(self.tokens):
lookup.append(FieldLookup(column, self.get_tail_tokens(split), lookups.Restrict))
return lookup
@dataclasses.dataclass
class WeightedStrings:
""" A list of strings together with a penalty.

View File

@@ -1,867 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of the actual database accesses for forward search.
"""
from typing import List, Tuple, AsyncIterator, Dict, Any, Callable, cast
import abc
import sqlalchemy as sa
from ..typing import SaFromClause, SaScalarSelect, SaColumn, \
SaExpression, SaSelect, SaLambdaSelect, SaRow, SaBind
from ..sql.sqlalchemy_types import Geometry, IntArray
from ..connection import SearchConnection
from ..types import SearchDetails, DataLayer, GeometryFormat, Bbox
from .. import results as nres
from .db_search_fields import SearchData, WeightedCategories
def no_index(expr: SaColumn) -> SaColumn:
""" Wrap the given expression, so that the query planner will
refrain from using the expression for index lookup.
"""
return sa.func.coalesce(sa.null(), expr)
def _details_to_bind_params(details: SearchDetails) -> Dict[str, Any]:
""" Create a dictionary from search parameters that can be used
as bind parameter for SQL execute.
"""
return {'limit': details.max_results,
'min_rank': details.min_rank,
'max_rank': details.max_rank,
'viewbox': details.viewbox,
'viewbox2': details.viewbox_x2,
'near': details.near,
'near_radius': details.near_radius,
'excluded': details.excluded,
'countries': details.countries}
LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
def filter_by_area(sql: SaSelect, t: SaFromClause,
details: SearchDetails, avoid_index: bool = False) -> SaSelect:
""" Apply SQL statements for filtering by viewbox and near point,
if applicable.
"""
if details.near is not None and details.near_radius is not None:
if details.near_radius < 0.1 and not avoid_index:
sql = sql.where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM))
else:
sql = sql.where(t.c.geometry.ST_Distance(NEAR_PARAM) <= NEAR_RADIUS_PARAM)
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM,
use_index=not avoid_index and
details.viewbox.area < 0.2))
return sql
def _exclude_places(t: SaFromClause) -> Callable[[], SaExpression]:
return lambda: t.c.place_id.not_in(sa.bindparam('excluded'))
def _select_placex(t: SaFromClause) -> SaSelect:
return sa.select(t.c.place_id, t.c.osm_type, t.c.osm_id, t.c.name,
t.c.class_, t.c.type,
t.c.address, t.c.extratags,
t.c.housenumber, t.c.postcode, t.c.country_code,
t.c.wikipedia,
t.c.parent_place_id, t.c.rank_address, t.c.rank_search,
t.c.linked_place_id, t.c.admin_level,
t.c.centroid,
t.c.geometry.ST_Expand(0).label('bbox'))
def _add_geometry_columns(sql: SaLambdaSelect, col: SaColumn, details: SearchDetails) -> SaSelect:
out = []
if details.geometry_simplification > 0.0:
col = sa.func.ST_SimplifyPreserveTopology(col, details.geometry_simplification)
if details.geometry_output & GeometryFormat.GEOJSON:
out.append(sa.func.ST_AsGeoJSON(col, 7).label('geometry_geojson'))
if details.geometry_output & GeometryFormat.TEXT:
out.append(sa.func.ST_AsText(col).label('geometry_text'))
if details.geometry_output & GeometryFormat.KML:
out.append(sa.func.ST_AsKML(col, 7).label('geometry_kml'))
if details.geometry_output & GeometryFormat.SVG:
out.append(sa.func.ST_AsSVG(col, 0, 7).label('geometry_svg'))
return sql.add_columns(*out)
def _make_interpolation_subquery(table: SaFromClause, inner: SaFromClause,
numerals: List[int], details: SearchDetails) -> SaScalarSelect:
all_ids = sa.func.ArrayAgg(table.c.place_id)
sql = sa.select(all_ids).where(table.c.parent_place_id == inner.c.place_id)
if len(numerals) == 1:
sql = sql.where(sa.between(numerals[0], table.c.startnumber, table.c.endnumber))\
.where((numerals[0] - table.c.startnumber) % table.c.step == 0)
else:
sql = sql.where(sa.or_(
*(sa.and_(sa.between(n, table.c.startnumber, table.c.endnumber),
(n - table.c.startnumber) % table.c.step == 0)
for n in numerals)))
if details.excluded:
sql = sql.where(_exclude_places(table))
return sql.scalar_subquery()
def _filter_by_layer(table: SaFromClause, layers: DataLayer) -> SaColumn:
orexpr: List[SaExpression] = []
if layers & DataLayer.ADDRESS and layers & DataLayer.POI:
orexpr.append(no_index(table.c.rank_address).between(1, 30))
elif layers & DataLayer.ADDRESS:
orexpr.append(no_index(table.c.rank_address).between(1, 29))
orexpr.append(sa.func.IsAddressPoint(table))
elif layers & DataLayer.POI:
orexpr.append(sa.and_(no_index(table.c.rank_address) == 30,
table.c.class_.not_in(('place', 'building'))))
if layers & DataLayer.MANMADE:
exclude = []
if not layers & DataLayer.RAILWAY:
exclude.append('railway')
if not layers & DataLayer.NATURAL:
exclude.extend(('natural', 'water', 'waterway'))
orexpr.append(sa.and_(table.c.class_.not_in(tuple(exclude)),
no_index(table.c.rank_address) == 0))
else:
include = []
if layers & DataLayer.RAILWAY:
include.append('railway')
if layers & DataLayer.NATURAL:
include.extend(('natural', 'water', 'waterway'))
orexpr.append(sa.and_(table.c.class_.in_(tuple(include)),
no_index(table.c.rank_address) == 0))
if len(orexpr) == 1:
return orexpr[0]
return sa.or_(*orexpr)
def _interpolated_position(table: SaFromClause, nr: SaColumn) -> SaColumn:
pos = sa.cast(nr - table.c.startnumber, sa.Float) / (table.c.endnumber - table.c.startnumber)
return sa.case(
(table.c.endnumber == table.c.startnumber, table.c.linegeo.ST_Centroid()),
else_=table.c.linegeo.ST_LineInterpolatePoint(pos)).label('centroid')
async def _get_placex_housenumbers(conn: SearchConnection,
place_ids: List[int],
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.placex
sql = _select_placex(t).add_columns(t.c.importance)\
.where(t.c.place_id.in_(place_ids))
if details.geometry_output:
sql = _add_geometry_columns(sql, t.c.geometry, details)
for row in await conn.execute(sql):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
yield result
def _int_list_to_subquery(inp: List[int]) -> 'sa.Subquery':
""" Create a subselect that returns the given list of integers
as rows in the column 'nr'.
"""
vtab = sa.func.JsonArrayEach(sa.type_coerce(inp, sa.JSON))\
.table_valued(sa.column('value', type_=sa.JSON))
return sa.select(sa.cast(sa.cast(vtab.c.value, sa.Text), sa.Integer).label('nr')).subquery()
async def _get_osmline(conn: SearchConnection, place_ids: List[int],
numerals: List[int],
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.osmline
values = _int_list_to_subquery(numerals)
sql = sa.select(t.c.place_id, t.c.osm_id,
t.c.parent_place_id, t.c.address,
values.c.nr.label('housenumber'),
_interpolated_position(t, values.c.nr),
t.c.postcode, t.c.country_code)\
.where(t.c.place_id.in_(place_ids))\
.join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))
if details.geometry_output:
sub = sql.subquery()
sql = _add_geometry_columns(sa.select(sub), sub.c.centroid, details)
for row in await conn.execute(sql):
result = nres.create_from_osmline_row(row, nres.SearchResult)
assert result
yield result
async def _get_tiger(conn: SearchConnection, place_ids: List[int],
numerals: List[int], osm_id: int,
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.tiger
values = _int_list_to_subquery(numerals)
sql = sa.select(t.c.place_id, t.c.parent_place_id,
sa.literal('W').label('osm_type'),
sa.literal(osm_id).label('osm_id'),
values.c.nr.label('housenumber'),
_interpolated_position(t, values.c.nr),
t.c.postcode)\
.where(t.c.place_id.in_(place_ids))\
.join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))
if details.geometry_output:
sub = sql.subquery()
sql = _add_geometry_columns(sa.select(sub), sub.c.centroid, details)
for row in await conn.execute(sql):
result = nres.create_from_tiger_row(row, nres.SearchResult)
assert result
yield result
class AbstractSearch(abc.ABC):
""" Encapuslation of a single lookup in the database.
"""
SEARCH_PRIO: int = 2
def __init__(self, penalty: float) -> None:
self.penalty = penalty
@abc.abstractmethod
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
class NearSearch(AbstractSearch):
""" Category search of a place type near the result of another search.
"""
def __init__(self, penalty: float, categories: WeightedCategories,
search: AbstractSearch) -> None:
super().__init__(penalty)
self.search = search
self.categories = categories
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
results = nres.SearchResults()
base = await self.search.lookup(conn, details)
if not base:
return results
base.sort(key=lambda r: (r.accuracy, r.rank_search))
max_accuracy = base[0].accuracy + 0.5
if base[0].rank_address == 0:
min_rank = 0
max_rank = 0
elif base[0].rank_address < 26:
min_rank = 1
max_rank = min(25, base[0].rank_address + 4)
else:
min_rank = 26
max_rank = 30
base = nres.SearchResults(r for r in base
if (r.source_table == nres.SourceTable.PLACEX
and r.accuracy <= max_accuracy
and r.bbox and r.bbox.area < 20
and r.rank_address >= min_rank
and r.rank_address <= max_rank))
if base:
baseids = [b.place_id for b in base[:5] if b.place_id]
for category, penalty in self.categories:
await self.lookup_category(results, conn, baseids, category, penalty, details)
if len(results) >= details.max_results:
break
return results
async def lookup_category(self, results: nres.SearchResults,
conn: SearchConnection, ids: List[int],
category: Tuple[str, str], penalty: float,
details: SearchDetails) -> None:
""" Find places of the given category near the list of
place ids and add the results to 'results'.
"""
table = await conn.get_class_table(*category)
tgeom = conn.t.placex.alias('pgeom')
if table is None:
# No classtype table available, do a simplified lookup in placex.
table = conn.t.placex
sql = sa.select(table.c.place_id,
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
.label('dist'))\
.join(tgeom, table.c.geometry.intersects(tgeom.c.centroid.ST_Expand(0.01)))\
.where(table.c.class_ == category[0])\
.where(table.c.type == category[1])
else:
# Use classtype table. We can afford to use a larger
# radius for the lookup.
sql = sa.select(table.c.place_id,
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
.label('dist'))\
.join(tgeom,
table.c.centroid.ST_CoveredBy(
sa.case((sa.and_(tgeom.c.rank_address > 9,
tgeom.c.geometry.is_area()),
tgeom.c.geometry),
else_=tgeom.c.centroid.ST_Expand(0.05))))
inner = sql.where(tgeom.c.place_id.in_(ids))\
.group_by(table.c.place_id).subquery()
t = conn.t.placex
sql = _select_placex(t).add_columns((-inner.c.dist).label('importance'))\
.join(inner, inner.c.place_id == t.c.place_id)\
.order_by(inner.c.dist)
sql = sql.where(no_index(t.c.rank_address).between(MIN_RANK_PARAM, MAX_RANK_PARAM))
if details.countries:
sql = sql.where(t.c.country_code.in_(COUNTRIES_PARAM))
if details.excluded:
sql = sql.where(_exclude_places(t))
if details.layers is not None:
sql = sql.where(_filter_by_layer(t, details.layers))
sql = sql.limit(LIMIT_PARAM)
for row in await conn.execute(sql, _details_to_bind_params(details)):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + penalty
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
class PoiSearch(AbstractSearch):
""" Category search in a geographic area.
"""
def __init__(self, sdata: SearchData) -> None:
super().__init__(sdata.penalty)
self.qualifiers = sdata.qualifiers
self.countries = sdata.countries
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
bind_params = _details_to_bind_params(details)
t = conn.t.placex
rows: List[SaRow] = []
if details.near and details.near_radius is not None and details.near_radius < 0.2:
# simply search in placex table
def _base_query() -> SaSelect:
return _select_placex(t) \
.add_columns((-t.c.centroid.ST_Distance(NEAR_PARAM))
.label('importance'))\
.where(t.c.linked_place_id == None) \
.where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM)) \
.order_by(t.c.centroid.ST_Distance(NEAR_PARAM)) \
.limit(LIMIT_PARAM)
classtype = self.qualifiers.values
if len(classtype) == 1:
cclass, ctype = classtype[0]
sql: SaLambdaSelect = sa.lambda_stmt(
lambda: _base_query().where(t.c.class_ == cclass)
.where(t.c.type == ctype))
else:
sql = _base_query().where(sa.or_(*(sa.and_(t.c.class_ == cls, t.c.type == typ)
for cls, typ in classtype)))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM))
rows.extend(await conn.execute(sql, bind_params))
else:
# use the class type tables
for category in self.qualifiers.values:
table = await conn.get_class_table(*category)
if table is not None:
sql = _select_placex(t)\
.add_columns(t.c.importance)\
.join(table, t.c.place_id == table.c.place_id)\
.where(t.c.class_ == category[0])\
.where(t.c.type == category[1])
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(table.c.centroid.intersects(VIEWBOX_PARAM))
if details.near and details.near_radius is not None:
sql = sql.order_by(table.c.centroid.ST_Distance(NEAR_PARAM))\
.where(table.c.centroid.within_distance(NEAR_PARAM,
NEAR_RADIUS_PARAM))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
sql = sql.limit(LIMIT_PARAM)
rows.extend(await conn.execute(sql, bind_params))
results = nres.SearchResults()
for row in rows:
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + self.qualifiers.get_penalty((row.class_, row.type))
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
return results
class CountrySearch(AbstractSearch):
""" Search for a country name or country code.
"""
SEARCH_PRIO = 0
def __init__(self, sdata: SearchData) -> None:
super().__init__(sdata.penalty)
self.countries = sdata.countries
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.placex
ccodes = self.countries.values
sql = _select_placex(t)\
.add_columns(t.c.importance)\
.where(t.c.country_code.in_(ccodes))\
.where(t.c.rank_address == 4)
if details.geometry_output:
sql = _add_geometry_columns(sql, t.c.geometry, details)
if details.excluded:
sql = sql.where(_exclude_places(t))
sql = filter_by_area(sql, t, details)
results = nres.SearchResults()
for row in await conn.execute(sql, _details_to_bind_params(details)):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
if not results:
results = await self.lookup_in_country_table(conn, details)
if results:
details.min_rank = min(5, details.max_rank)
details.max_rank = min(25, details.max_rank)
return results
async def lookup_in_country_table(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Look up the country in the fallback country tables.
"""
# Avoid the fallback search when this is a 'more results' search
# (i.e. excluded places are given). Country results usually are in
# the first batch of results and these fallbacks cannot be excluded.
if details.excluded:
return nres.SearchResults()
t = conn.t.country_name
tgrid = conn.t.country_grid
sql = sa.select(tgrid.c.country_code,
tgrid.c.geometry.ST_Centroid().ST_Collect().ST_Centroid()
.label('centroid'),
tgrid.c.geometry.ST_Collect().ST_Expand(0).label('bbox'))\
.where(tgrid.c.country_code.in_(self.countries.values))\
.group_by(tgrid.c.country_code)
sql = filter_by_area(sql, tgrid, details, avoid_index=True)
sub = sql.subquery('grid')
sql = sa.select(t.c.country_code,
t.c.name.merge(t.c.derived_name).label('name'),
sub.c.centroid, sub.c.bbox)\
.join(sub, t.c.country_code == sub.c.country_code)
if details.geometry_output:
sql = _add_geometry_columns(sql, sub.c.centroid, details)
results = nres.SearchResults()
for row in await conn.execute(sql, _details_to_bind_params(details)):
result = nres.create_from_country_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
results.append(result)
return results
class PostcodeSearch(AbstractSearch):
""" Search for a postcode.
"""
def __init__(self, extra_penalty: float, sdata: SearchData) -> None:
super().__init__(sdata.penalty + extra_penalty)
self.countries = sdata.countries
self.postcodes = sdata.postcodes
self.lookups = sdata.lookups
self.rankings = sdata.rankings
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.postcode
pcs = self.postcodes.values
sql = sa.select(t.c.place_id, t.c.parent_place_id,
t.c.rank_search, t.c.rank_address,
t.c.postcode, t.c.country_code,
t.c.geometry.label('centroid'))\
.where(t.c.postcode.in_(pcs))
if details.geometry_output:
sql = _add_geometry_columns(sql, t.c.geometry, details)
penalty: SaExpression = sa.literal(self.penalty)
if details.viewbox is not None and not details.bounded_viewbox:
penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM), 0.0),
(t.c.geometry.intersects(VIEWBOX2_PARAM), 0.5),
else_=1.0)
if details.near is not None:
sql = sql.order_by(t.c.geometry.ST_Distance(NEAR_PARAM))
sql = filter_by_area(sql, t, details)
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if details.excluded:
sql = sql.where(_exclude_places(t))
if self.lookups:
assert len(self.lookups) == 1
tsearch = conn.t.search_name
sql = sql.where(tsearch.c.place_id == t.c.parent_place_id)\
.where((tsearch.c.name_vector + tsearch.c.nameaddress_vector)
.contains(sa.type_coerce(self.lookups[0].tokens,
IntArray)))
# Do NOT add rerank penalties based on the address terms.
# The standard rerank penalty only checks the address vector
# while terms may appear in name and address vector. This would
# lead to overly high penalties.
# We assume that a postcode is precise enough to not require
# additional full name matches.
penalty += sa.case(*((t.c.postcode == v, p) for v, p in self.postcodes),
else_=1.0)
sql = sql.add_columns(penalty.label('accuracy'))
sql = sql.order_by('accuracy').limit(LIMIT_PARAM)
results = nres.SearchResults()
for row in await conn.execute(sql, _details_to_bind_params(details)):
p = conn.t.placex
placex_sql = _select_placex(p)\
.add_columns(p.c.importance)\
.where(sa.text("""class = 'boundary'
AND type = 'postal_code'
AND osm_type = 'R'"""))\
.where(p.c.country_code == row.country_code)\
.where(p.c.postcode == row.postcode)\
.limit(1)
if details.geometry_output:
placex_sql = _add_geometry_columns(placex_sql, p.c.geometry, details)
for prow in await conn.execute(placex_sql, _details_to_bind_params(details)):
result = nres.create_from_placex_row(prow, nres.SearchResult)
if result is not None:
result.bbox = Bbox.from_wkb(prow.bbox)
break
else:
result = nres.create_from_postcode_row(row, nres.SearchResult)
assert result
if result.place_id not in details.excluded:
result.accuracy = row.accuracy
results.append(result)
return results
class PlaceSearch(AbstractSearch):
""" Generic search for an address or named place.
"""
SEARCH_PRIO = 1
def __init__(self, extra_penalty: float, sdata: SearchData, expected_count: int) -> None:
super().__init__(sdata.penalty + extra_penalty)
self.countries = sdata.countries
self.postcodes = sdata.postcodes
self.housenumbers = sdata.housenumbers
self.qualifiers = sdata.qualifiers
self.lookups = sdata.lookups
self.rankings = sdata.rankings
self.expected_count = expected_count
def _inner_search_name_cte(self, conn: SearchConnection,
details: SearchDetails) -> 'sa.CTE':
""" Create a subquery that preselects the rows in the search_name
table.
"""
t = conn.t.search_name
penalty: SaExpression = sa.literal(self.penalty)
for ranking in self.rankings:
penalty += ranking.sql_penalty(t)
sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
t.c.country_code, t.c.centroid,
t.c.name_vector, t.c.nameaddress_vector,
sa.case((t.c.importance > 0, t.c.importance),
else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
.label('importance'),
penalty.label('penalty'))
for lookup in self.lookups:
sql = sql.where(lookup.sql_condition(t))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if self.postcodes:
# if a postcode is given, don't search for state or country level objects
sql = sql.where(t.c.address_rank > 9)
if self.expected_count > 10000:
# Many results expected. Restrict by postcode.
tpc = conn.t.postcode
sql = sql.where(sa.select(tpc.c.postcode)
.where(tpc.c.postcode.in_(self.postcodes.values))
.where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
.exists())
if details.viewbox is not None:
if details.bounded_viewbox:
sql = sql.where(t.c.centroid
.intersects(VIEWBOX_PARAM,
use_index=details.viewbox.area < 0.2))
elif not self.postcodes and not self.housenumbers and self.expected_count >= 10000:
sql = sql.where(t.c.centroid
.intersects(VIEWBOX2_PARAM,
use_index=details.viewbox.area < 0.5))
if details.near is not None and details.near_radius is not None:
if details.near_radius < 0.1:
sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
NEAR_RADIUS_PARAM))
else:
sql = sql.where(t.c.centroid
.ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)
if self.housenumbers:
sql = sql.where(t.c.address_rank.between(16, 30))
else:
if details.excluded:
sql = sql.where(_exclude_places(t))
if details.min_rank > 0:
sql = sql.where(sa.or_(t.c.address_rank >= MIN_RANK_PARAM,
t.c.search_rank >= MIN_RANK_PARAM))
if details.max_rank < 30:
sql = sql.where(sa.or_(t.c.address_rank <= MAX_RANK_PARAM,
t.c.search_rank <= MAX_RANK_PARAM))
inner = sql.limit(10000).order_by(sa.desc(sa.text('importance'))).subquery()
sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
inner.c.country_code, inner.c.centroid, inner.c.importance,
inner.c.penalty)
# If the query is not an address search or has a geographic preference,
# preselect most important items to restrict the number of places
# that need to be looked up in placex.
if not self.housenumbers\
and (details.viewbox is None or details.bounded_viewbox)\
and (details.near is None or details.near_radius is not None)\
and not self.qualifiers:
sql = sql.add_columns(sa.func.first_value(inner.c.penalty - inner.c.importance)
.over(order_by=inner.c.penalty - inner.c.importance)
.label('min_penalty'))
inner = sql.subquery()
sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
inner.c.country_code, inner.c.centroid, inner.c.importance,
inner.c.penalty)\
.where(inner.c.penalty - inner.c.importance < inner.c.min_penalty + 0.5)
return sql.cte('searches')
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.placex
tsearch = self._inner_search_name_cte(conn, details)
sql = _select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)
if details.geometry_output:
sql = _add_geometry_columns(sql, t.c.geometry, details)
penalty: SaExpression = tsearch.c.penalty
if self.postcodes:
tpc = conn.t.postcode
pcs = self.postcodes.values
pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)))\
.where(tpc.c.postcode.in_(pcs))\
.scalar_subquery()
penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))
if details.viewbox is not None and not details.bounded_viewbox:
penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
(t.c.geometry.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
else_=1.0)
if details.near is not None:
sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
.label('importance'))
sql = sql.order_by(sa.desc(sa.text('importance')))
else:
sql = sql.order_by(penalty - tsearch.c.importance)
sql = sql.add_columns(tsearch.c.importance)
sql = sql.add_columns(penalty.label('accuracy'))\
.order_by(sa.text('accuracy'))
if self.housenumbers:
hnr_list = '|'.join(self.housenumbers.values)
inner = sql.where(sa.or_(tsearch.c.address_rank < 30,
sa.func.RegexpWord(hnr_list, t.c.housenumber)))\
.subquery()
# Housenumbers from placex
thnr = conn.t.placex.alias('hnr')
pid_list = sa.func.ArrayAgg(thnr.c.place_id)
place_sql = sa.select(pid_list)\
.where(thnr.c.parent_place_id == inner.c.place_id)\
.where(sa.func.RegexpWord(hnr_list, thnr.c.housenumber))\
.where(thnr.c.linked_place_id == None)\
.where(thnr.c.indexed_status == 0)
if details.excluded:
place_sql = place_sql.where(thnr.c.place_id.not_in(sa.bindparam('excluded')))
if self.qualifiers:
place_sql = place_sql.where(self.qualifiers.sql_restrict(thnr))
numerals = [int(n) for n in self.housenumbers.values
if n.isdigit() and len(n) < 8]
interpol_sql: SaColumn
tiger_sql: SaColumn
if numerals and \
(not self.qualifiers or ('place', 'house') in self.qualifiers.values):
# Housenumbers from interpolations
interpol_sql = _make_interpolation_subquery(conn.t.osmline, inner,
numerals, details)
# Housenumbers from Tiger
tiger_sql = sa.case((inner.c.country_code == 'us',
_make_interpolation_subquery(conn.t.tiger, inner,
numerals, details)
), else_=None)
else:
interpol_sql = sa.null()
tiger_sql = sa.null()
unsort = sa.select(inner, place_sql.scalar_subquery().label('placex_hnr'),
interpol_sql.label('interpol_hnr'),
tiger_sql.label('tiger_hnr')).subquery('unsort')
sql = sa.select(unsort)\
.order_by(sa.case((unsort.c.placex_hnr != None, 1),
(unsort.c.interpol_hnr != None, 2),
(unsort.c.tiger_hnr != None, 3),
else_=4),
unsort.c.accuracy)
else:
sql = sql.where(t.c.linked_place_id == None)\
.where(t.c.indexed_status == 0)
if self.qualifiers:
sql = sql.where(self.qualifiers.sql_restrict(t))
if details.layers is not None:
sql = sql.where(_filter_by_layer(t, details.layers))
sql = sql.limit(LIMIT_PARAM)
results = nres.SearchResults()
for row in await conn.execute(sql, _details_to_bind_params(details)):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
result.accuracy = row.accuracy
if self.housenumbers and row.rank_address < 30:
if row.placex_hnr:
subs = _get_placex_housenumbers(conn, row.placex_hnr, details)
elif row.interpol_hnr:
subs = _get_osmline(conn, row.interpol_hnr, numerals, details)
elif row.tiger_hnr:
subs = _get_tiger(conn, row.tiger_hnr, numerals, row.osm_id, details)
else:
subs = None
if subs is not None:
async for sub in subs:
assert sub.housenumber
sub.accuracy = result.accuracy
if not any(nr in self.housenumbers.values
for nr in sub.housenumber.split(';')):
sub.accuracy += 0.6
results.append(sub)
# Only add the street as a result, if it meets all other
# filter conditions.
if (not details.excluded or result.place_id not in details.excluded)\
and (not self.qualifiers or result.category in self.qualifiers.values)\
and result.rank_address >= details.min_rank:
result.accuracy += 1.0 # penalty for missing housenumber
results.append(result)
else:
results.append(result)
return results

View File

@@ -0,0 +1,17 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Module implementing the actual database accesses for forward search.
"""
from .base import AbstractSearch as AbstractSearch
from .near_search import NearSearch as NearSearch
from .poi_search import PoiSearch as PoiSearch
from .country_search import CountrySearch as CountrySearch
from .postcode_search import PostcodeSearch as PostcodeSearch
from .place_search import PlaceSearch as PlaceSearch
from .address_search import AddressSearch as AddressSearch

View File

@@ -0,0 +1,360 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of search for an address (search with housenumber).
"""
from typing import cast, List, AsyncIterator
import sqlalchemy as sa
from . import base
from ...typing import SaBind, SaExpression, SaColumn, SaFromClause, SaScalarSelect
from ...types import SearchDetails, Bbox
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ... import results as nres
from ..db_search_fields import SearchData
LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
def _int_list_to_subquery(inp: List[int]) -> 'sa.Subquery':
""" Create a subselect that returns the given list of integers
as rows in the column 'nr'.
"""
vtab = sa.func.JsonArrayEach(sa.type_coerce(inp, sa.JSON))\
.table_valued(sa.column('value', type_=sa.JSON))
return sa.select(sa.cast(sa.cast(vtab.c.value, sa.Text), sa.Integer).label('nr')).subquery()
def _interpolated_position(table: SaFromClause, nr: SaColumn) -> SaColumn:
pos = sa.cast(nr - table.c.startnumber, sa.Float) / (table.c.endnumber - table.c.startnumber)
return sa.case(
(table.c.endnumber == table.c.startnumber, table.c.linegeo.ST_Centroid()),
else_=table.c.linegeo.ST_LineInterpolatePoint(pos)).label('centroid')
def _make_interpolation_subquery(table: SaFromClause, inner: SaFromClause,
numerals: List[int], details: SearchDetails) -> SaScalarSelect:
all_ids = sa.func.ArrayAgg(table.c.place_id)
sql = sa.select(all_ids).where(table.c.parent_place_id == inner.c.place_id)
if len(numerals) == 1:
sql = sql.where(sa.between(numerals[0], table.c.startnumber, table.c.endnumber))\
.where((numerals[0] - table.c.startnumber) % table.c.step == 0)
else:
sql = sql.where(sa.or_(
*(sa.and_(sa.between(n, table.c.startnumber, table.c.endnumber),
(n - table.c.startnumber) % table.c.step == 0)
for n in numerals)))
if details.excluded:
sql = sql.where(base.exclude_places(table))
return sql.scalar_subquery()
async def _get_placex_housenumbers(conn: SearchConnection,
place_ids: List[int],
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.placex
sql = base.select_placex(t).add_columns(t.c.importance)\
.where(t.c.place_id.in_(place_ids))
if details.geometry_output:
sql = base.add_geometry_columns(sql, t.c.geometry, details)
for row in await conn.execute(sql):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
yield result
async def _get_osmline(conn: SearchConnection, place_ids: List[int],
numerals: List[int],
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.osmline
values = _int_list_to_subquery(numerals)
sql = sa.select(t.c.place_id, t.c.osm_id,
t.c.parent_place_id, t.c.address,
values.c.nr.label('housenumber'),
_interpolated_position(t, values.c.nr),
t.c.postcode, t.c.country_code)\
.where(t.c.place_id.in_(place_ids))\
.join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))
if details.geometry_output:
sub = sql.subquery()
sql = base.add_geometry_columns(sa.select(sub), sub.c.centroid, details)
for row in await conn.execute(sql):
result = nres.create_from_osmline_row(row, nres.SearchResult)
assert result
yield result
async def _get_tiger(conn: SearchConnection, place_ids: List[int],
numerals: List[int], osm_id: int,
details: SearchDetails) -> AsyncIterator[nres.SearchResult]:
t = conn.t.tiger
values = _int_list_to_subquery(numerals)
sql = sa.select(t.c.place_id, t.c.parent_place_id,
sa.literal('W').label('osm_type'),
sa.literal(osm_id).label('osm_id'),
values.c.nr.label('housenumber'),
_interpolated_position(t, values.c.nr),
t.c.postcode)\
.where(t.c.place_id.in_(place_ids))\
.join(values, values.c.nr.between(t.c.startnumber, t.c.endnumber))
if details.geometry_output:
sub = sql.subquery()
sql = base.add_geometry_columns(sa.select(sub), sub.c.centroid, details)
for row in await conn.execute(sql):
result = nres.create_from_tiger_row(row, nres.SearchResult)
assert result
yield result
class AddressSearch(base.AbstractSearch):
""" Generic search for an address or named place.
"""
SEARCH_PRIO = 1
def __init__(self, extra_penalty: float, sdata: SearchData,
expected_count: int, has_address_terms: bool) -> None:
assert sdata.housenumbers
super().__init__(sdata.penalty + extra_penalty)
self.countries = sdata.countries
self.postcodes = sdata.postcodes
self.housenumbers = sdata.housenumbers
self.qualifiers = sdata.qualifiers
self.lookups = sdata.lookups
self.rankings = sdata.rankings
self.expected_count = expected_count
self.has_address_terms = has_address_terms
def _inner_search_name_cte(self, conn: SearchConnection,
details: SearchDetails) -> 'sa.CTE':
""" Create a subquery that preselects the rows in the search_name
table.
"""
t = conn.t.search_name
penalty: SaExpression = sa.literal(self.penalty)
for ranking in self.rankings:
penalty += ranking.sql_penalty(t)
sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
t.c.country_code, t.c.centroid,
t.c.name_vector, t.c.nameaddress_vector,
sa.case((t.c.importance > 0, t.c.importance),
else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
.label('importance'),
penalty.label('penalty'))
for lookup in self.lookups:
sql = sql.where(lookup.sql_condition(t))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if self.postcodes:
if self.expected_count > 10000:
tpc = conn.t.postcode
sql = sql.where(sa.select(tpc.c.postcode)
.where(tpc.c.postcode.in_(self.postcodes.values))
.where(tpc.c.country_code == t.c.country_code)
.where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
.exists())
if details.viewbox is not None:
if details.bounded_viewbox:
sql = sql.where(t.c.centroid
.intersects(VIEWBOX_PARAM,
use_index=details.viewbox.area < 0.2))
if details.near is not None and details.near_radius is not None:
if details.near_radius < 0.1:
sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
NEAR_RADIUS_PARAM))
else:
sql = sql.where(t.c.centroid
.ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)
if self.has_address_terms:
sql = sql.where(t.c.address_rank.between(16, 30))
else:
# If no further address terms are given, then the base street must
# be in the name. No search for named POIs with the given house number.
sql = sql.where(t.c.address_rank.between(16, 27))
inner = sql.limit(10000).order_by(sa.desc(sa.text('importance'))).subquery()
sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
inner.c.country_code, inner.c.centroid, inner.c.importance,
inner.c.penalty)
return sql.cte('searches')
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.placex
tsearch = self._inner_search_name_cte(conn, details)
sql = base.select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)
if details.geometry_output:
sql = base.add_geometry_columns(sql, t.c.geometry, details)
penalty: SaExpression = tsearch.c.penalty
if self.postcodes:
tpc = conn.t.postcode
pcs = self.postcodes.values
pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)
* (tpc.c.rank_search - 19)))\
.where(tpc.c.postcode.in_(pcs))\
.where(tpc.c.country_code == t.c.country_code)\
.scalar_subquery()
penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))
if details.viewbox is not None and not details.bounded_viewbox:
penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
(t.c.geometry.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
else_=1.0)
if details.near is not None:
sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
.label('importance'))
sql = sql.order_by(sa.desc(sa.text('importance')))
else:
sql = sql.order_by(penalty - tsearch.c.importance)
sql = sql.add_columns(tsearch.c.importance)
sql = sql.add_columns(penalty.label('accuracy'))\
.order_by(sa.text('accuracy'))
hnr_list = '|'.join(self.housenumbers.values)
if self.has_address_terms:
sql = sql.where(sa.or_(tsearch.c.address_rank < 30,
sa.func.RegexpWord(hnr_list, t.c.housenumber)))
inner = sql.subquery()
# Housenumbers from placex
thnr = conn.t.placex.alias('hnr')
pid_list = sa.func.ArrayAgg(thnr.c.place_id)
place_sql = sa.select(pid_list)\
.where(thnr.c.parent_place_id == inner.c.place_id)\
.where(sa.func.RegexpWord(hnr_list, thnr.c.housenumber))\
.where(thnr.c.linked_place_id == None)\
.where(thnr.c.indexed_status == 0)
if details.excluded:
place_sql = place_sql.where(thnr.c.place_id.not_in(sa.bindparam('excluded')))
if self.qualifiers:
place_sql = place_sql.where(self.qualifiers.sql_restrict(thnr))
numerals = [int(n) for n in self.housenumbers.values
if n.isdigit() and len(n) < 8]
interpol_sql: SaColumn
tiger_sql: SaColumn
if numerals and \
(not self.qualifiers or ('place', 'house') in self.qualifiers.values):
# Housenumbers from interpolations
interpol_sql = _make_interpolation_subquery(conn.t.osmline, inner,
numerals, details)
# Housenumbers from Tiger
tiger_sql = sa.case((inner.c.country_code == 'us',
_make_interpolation_subquery(conn.t.tiger, inner,
numerals, details)
), else_=None)
else:
interpol_sql = sa.null()
tiger_sql = sa.null()
unsort = sa.select(inner, place_sql.scalar_subquery().label('placex_hnr'),
interpol_sql.label('interpol_hnr'),
tiger_sql.label('tiger_hnr')).subquery('unsort')
sql = sa.select(unsort)\
.order_by(unsort.c.accuracy +
sa.case((unsort.c.placex_hnr != None, 0),
(unsort.c.interpol_hnr != None, 0),
(unsort.c.tiger_hnr != None, 0),
else_=1),
sa.case((unsort.c.placex_hnr != None, 1),
(unsort.c.interpol_hnr != None, 2),
(unsort.c.tiger_hnr != None, 3),
else_=4))
sql = sql.limit(LIMIT_PARAM)
bind_params = {
'limit': details.max_results,
'min_rank': details.min_rank,
'max_rank': details.max_rank,
'viewbox': details.viewbox,
'viewbox2': details.viewbox_x2,
'near': details.near,
'near_radius': details.near_radius,
'excluded': details.excluded,
'countries': details.countries
}
results = nres.SearchResults()
for row in await conn.execute(sql, bind_params):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
result.accuracy = row.accuracy
if row.rank_address < 30:
if row.placex_hnr:
subs = _get_placex_housenumbers(conn, row.placex_hnr, details)
elif row.interpol_hnr:
subs = _get_osmline(conn, row.interpol_hnr, numerals, details)
elif row.tiger_hnr:
subs = _get_tiger(conn, row.tiger_hnr, numerals, row.osm_id, details)
else:
subs = None
if subs is not None:
async for sub in subs:
assert sub.housenumber
sub.accuracy = result.accuracy
if not any(nr in self.housenumbers.values
for nr in sub.housenumber.split(';')):
sub.accuracy += 0.6
results.append(sub)
# Only add the street as a result if it meets all other
# filter conditions.
if (not details.excluded or result.place_id not in details.excluded)\
and (not self.qualifiers or result.category in self.qualifiers.values)\
and result.rank_address >= details.min_rank:
result.accuracy += 1.0 # penalty for missing housenumber
results.append(result)
else:
results.append(result)
return results
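
The interpolation arithmetic in _interpolated_position() above, restated in plain Python for a straight two-point line (a sketch; the SQL version delegates to PostGIS ST_LineInterpolatePoint and handles arbitrary linestrings):

def interpolated_position(start_nr: int, end_nr: int, nr: int,
                          p0: tuple, p1: tuple) -> tuple:
    if end_nr == start_nr:
        # degenerate line: a single house number, use the midpoint (centroid)
        return ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    frac = (nr - start_nr) / (end_nr - start_nr)
    return (p0[0] + frac * (p1[0] - p0[0]),
            p0[1] + frac * (p1[1] - p0[1]))

# House number 7 on an interpolation line 1-13 sits halfway along it:
assert interpolated_position(1, 13, 7, (0.0, 0.0), (2.0, 2.0)) == (1.0, 1.0)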


@@ -0,0 +1,144 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Interface for classes implementing a database search.
"""
from typing import Callable, List
import abc
import sqlalchemy as sa
from ...typing import SaFromClause, SaSelect, SaColumn, SaExpression, SaLambdaSelect
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ...types import SearchDetails, DataLayer, GeometryFormat
from ...results import SearchResults
class AbstractSearch(abc.ABC):
""" Encapuslation of a single lookup in the database.
"""
SEARCH_PRIO: int = 2
def __init__(self, penalty: float) -> None:
self.penalty = penalty
@abc.abstractmethod
async def lookup(self, conn: SearchConnection, details: SearchDetails) -> SearchResults:
""" Find results for the search in the database.
"""
def select_placex(t: SaFromClause) -> SaSelect:
""" Return the basic select query for placex which returns all
fields necessary to fill a Nominatim result. 't' must be either
the placex table or a subquery returning appropriate fields from
a placex-related query.
"""
return sa.select(t.c.place_id, t.c.osm_type, t.c.osm_id, t.c.name,
t.c.class_, t.c.type,
t.c.address, t.c.extratags,
t.c.housenumber, t.c.postcode, t.c.country_code,
t.c.wikipedia,
t.c.parent_place_id, t.c.rank_address, t.c.rank_search,
t.c.linked_place_id, t.c.admin_level,
t.c.centroid,
t.c.geometry.ST_Expand(0).label('bbox'))
def exclude_places(t: SaFromClause) -> Callable[[], SaExpression]:
""" Return an expression to exclude place IDs from the list in the
SearchDetails.
Requires the excluded IDs to be supplied as a bind parameter in SQL.
"""
return lambda: t.c.place_id.not_in(sa.bindparam('excluded'))
def filter_by_layer(table: SaFromClause, layers: DataLayer) -> SaColumn:
""" Return an expression that filters the given table by layers.
"""
orexpr: List[SaExpression] = []
if layers & DataLayer.ADDRESS and layers & DataLayer.POI:
orexpr.append(no_index(table.c.rank_address).between(1, 30))
elif layers & DataLayer.ADDRESS:
orexpr.append(no_index(table.c.rank_address).between(1, 29))
orexpr.append(sa.func.IsAddressPoint(table))
elif layers & DataLayer.POI:
orexpr.append(sa.and_(no_index(table.c.rank_address) == 30,
table.c.class_.not_in(('place', 'building'))))
if layers & DataLayer.MANMADE:
exclude = []
if not layers & DataLayer.RAILWAY:
exclude.append('railway')
if not layers & DataLayer.NATURAL:
exclude.extend(('natural', 'water', 'waterway'))
orexpr.append(sa.and_(table.c.class_.not_in(tuple(exclude)),
no_index(table.c.rank_address) == 0))
else:
include = []
if layers & DataLayer.RAILWAY:
include.append('railway')
if layers & DataLayer.NATURAL:
include.extend(('natural', 'water', 'waterway'))
orexpr.append(sa.and_(table.c.class_.in_(tuple(include)),
no_index(table.c.rank_address) == 0))
if len(orexpr) == 1:
return orexpr[0]
return sa.or_(*orexpr)
def no_index(expr: SaColumn) -> SaColumn:
""" Wrap the given expression, so that the query planner will
refrain from using the expression for index lookup.
"""
return sa.func.coalesce(sa.null(), expr)
def filter_by_area(sql: SaSelect, t: SaFromClause,
details: SearchDetails, avoid_index: bool = False) -> SaSelect:
""" Apply SQL statements for filtering by viewbox and near point,
if applicable.
"""
if details.near is not None and details.near_radius is not None:
if details.near_radius < 0.1 and not avoid_index:
sql = sql.where(
t.c.geometry.within_distance(sa.bindparam('near', type_=Geometry),
sa.bindparam('near_radius')))
else:
sql = sql.where(
t.c.geometry.ST_Distance(
sa.bindparam('near', type_=Geometry)) <= sa.bindparam('near_radius'))
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(t.c.geometry.intersects(sa.bindparam('viewbox', type_=Geometry),
use_index=not avoid_index and
details.viewbox.area < 0.2))
return sql
def add_geometry_columns(sql: SaLambdaSelect, col: SaColumn, details: SearchDetails) -> SaSelect:
""" Add columns for requested geometry formats and return the new query.
"""
out = []
if details.geometry_simplification > 0.0:
col = sa.func.ST_SimplifyPreserveTopology(col, details.geometry_simplification)
if details.geometry_output & GeometryFormat.GEOJSON:
out.append(sa.func.ST_AsGeoJSON(col, 7).label('geometry_geojson'))
if details.geometry_output & GeometryFormat.TEXT:
out.append(sa.func.ST_AsText(col).label('geometry_text'))
if details.geometry_output & GeometryFormat.KML:
out.append(sa.func.ST_AsKML(col, 7).label('geometry_kml'))
if details.geometry_output & GeometryFormat.SVG:
out.append(sa.func.ST_AsSVG(col, 0, 7).label('geometry_svg'))
return sql.add_columns(*out)
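
The no_index() helper above relies on a well-known PostgreSQL planner property: COALESCE(NULL, col) is semantically just the bare column but opaque to the planner, so no index on the column is considered. A self-contained sketch of the SQL it produces (table and column are made up for the example):

import sqlalchemy as sa

t = sa.table('placex', sa.column('rank_address', sa.Integer))
expr = sa.func.coalesce(sa.null(), t.c.rank_address) == 30
print(expr.compile(compile_kwargs={'literal_binds': True}))
# prints roughly: coalesce(NULL, placex.rank_address) = 30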


@@ -0,0 +1,119 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of searches for a country.
"""
import sqlalchemy as sa
from . import base
from ..db_search_fields import SearchData
from ... import results as nres
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox
class CountrySearch(base.AbstractSearch):
""" Search for a country name or country code.
"""
SEARCH_PRIO = 0
def __init__(self, sdata: SearchData) -> None:
super().__init__(sdata.penalty)
self.countries = sdata.countries
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.placex
ccodes = self.countries.values
sql = base.select_placex(t)\
.add_columns(t.c.importance)\
.where(t.c.country_code.in_(ccodes))\
.where(t.c.rank_address == 4)
if details.geometry_output:
sql = base.add_geometry_columns(sql, t.c.geometry, details)
if details.excluded:
sql = sql.where(base.exclude_places(t))
sql = base.filter_by_area(sql, t, details)
bind_params = {
'excluded': details.excluded,
'viewbox': details.viewbox,
'near': details.near,
'near_radius': details.near_radius
}
results = nres.SearchResults()
for row in await conn.execute(sql, bind_params):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
if not results:
results = await self.lookup_in_country_table(conn, details)
if results:
details.min_rank = min(5, details.max_rank)
details.max_rank = min(25, details.max_rank)
return results
async def lookup_in_country_table(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Look up the country in the fallback country tables.
"""
# Avoid the fallback search when this is a 'more' search with excluded
# places. Country results usually appear in the first batch of results
# and it is not possible to exclude the fallback results.
if details.excluded:
return nres.SearchResults()
t = conn.t.country_name
tgrid = conn.t.country_grid
sql = sa.select(tgrid.c.country_code,
tgrid.c.geometry.ST_Centroid().ST_Collect().ST_Centroid()
.label('centroid'),
tgrid.c.geometry.ST_Collect().ST_Expand(0).label('bbox'))\
.where(tgrid.c.country_code.in_(self.countries.values))\
.group_by(tgrid.c.country_code)
sql = base.filter_by_area(sql, tgrid, details, avoid_index=True)
sub = sql.subquery('grid')
sql = sa.select(t.c.country_code,
t.c.name.merge(t.c.derived_name).label('name'),
sub.c.centroid, sub.c.bbox)\
.join(sub, t.c.country_code == sub.c.country_code)
if details.geometry_output:
sql = base.add_geometry_columns(sql, sub.c.centroid, details)
bind_params = {
'viewbox': details.viewbox,
'near': details.near,
'near_radius': details.near_radius
}
results = nres.SearchResults()
for row in await conn.execute(sql, bind_params):
result = nres.create_from_country_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
result.accuracy = self.penalty + self.countries.get_penalty(row.country_code, 5.0)
results.append(result)
return results
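
What the grid aggregation in lookup_in_country_table() computes, restated in plain Python under the assumption of rectangular grid cells (coordinates are made up; PostGIS ST_Centroid over a collection of points is the arithmetic mean of the points):

cells = [((0.0, 0.0), (2.0, 2.0)), ((2.0, 0.0), (4.0, 2.0))]  # cell bbox corners
centroids = [((x0 + x1) / 2, (y0 + y1) / 2) for (x0, y0), (x1, y1) in cells]
centroid = (sum(x for x, _ in centroids) / len(centroids),
            sum(y for _, y in centroids) / len(centroids))
assert centroid == (2.0, 1.0)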


@@ -0,0 +1,136 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of a category search around a place.
"""
from typing import List, Tuple
import sqlalchemy as sa
from . import base
from ...typing import SaBind
from ...types import SearchDetails, Bbox
from ...connection import SearchConnection
from ... import results as nres
from ..db_search_fields import WeightedCategories
LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
class NearSearch(base.AbstractSearch):
""" Category search of a place type near the result of another search.
"""
def __init__(self, penalty: float, categories: WeightedCategories,
search: base.AbstractSearch) -> None:
super().__init__(penalty)
self.search = search
self.categories = categories
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
results = nres.SearchResults()
base = await self.search.lookup(conn, details)
if not base:
return results
base.sort(key=lambda r: (r.accuracy, r.rank_search))
max_accuracy = base[0].accuracy + 0.5
if base[0].rank_address == 0:
min_rank = 0
max_rank = 0
elif base[0].rank_address < 26:
min_rank = 1
max_rank = min(25, base[0].rank_address + 4)
else:
min_rank = 26
max_rank = 30
base = nres.SearchResults(r for r in base
if (r.source_table == nres.SourceTable.PLACEX
and r.accuracy <= max_accuracy
and r.bbox and r.bbox.area < 20
and r.rank_address >= min_rank
and r.rank_address <= max_rank))
if base:
baseids = [b.place_id for b in base[:5] if b.place_id]
for category, penalty in self.categories:
await self.lookup_category(results, conn, baseids, category, penalty, details)
if len(results) >= details.max_results:
break
return results
async def lookup_category(self, results: nres.SearchResults,
conn: SearchConnection, ids: List[int],
category: Tuple[str, str], penalty: float,
details: SearchDetails) -> None:
""" Find places of the given category near the list of
place ids and add the results to 'results'.
"""
table = await conn.get_class_table(*category)
tgeom = conn.t.placex.alias('pgeom')
if table is None:
# No classtype table available, do a simplified lookup in placex.
table = conn.t.placex
sql = sa.select(table.c.place_id,
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
.label('dist'))\
.join(tgeom, table.c.geometry.intersects(tgeom.c.centroid.ST_Expand(0.01)))\
.where(table.c.class_ == category[0])\
.where(table.c.type == category[1])
else:
# Use classtype table. We can afford to use a larger
# radius for the lookup.
sql = sa.select(table.c.place_id,
sa.func.min(tgeom.c.centroid.ST_Distance(table.c.centroid))
.label('dist'))\
.join(tgeom,
table.c.centroid.ST_CoveredBy(
sa.case((sa.and_(tgeom.c.rank_address > 9,
tgeom.c.geometry.is_area()),
tgeom.c.geometry),
else_=tgeom.c.centroid.ST_Expand(0.05))))
inner = sql.where(tgeom.c.place_id.in_(ids))\
.group_by(table.c.place_id).subquery()
t = conn.t.placex
sql = base.select_placex(t).add_columns((-inner.c.dist).label('importance'))\
.join(inner, inner.c.place_id == t.c.place_id)\
.order_by(inner.c.dist)
sql = sql.where(base.no_index(t.c.rank_address).between(MIN_RANK_PARAM, MAX_RANK_PARAM))
if details.countries:
sql = sql.where(t.c.country_code.in_(COUNTRIES_PARAM))
if details.excluded:
sql = sql.where(base.exclude_places(t))
if details.layers is not None:
sql = sql.where(base.filter_by_layer(t, details.layers))
sql = sql.limit(LIMIT_PARAM)
bind_params = {'limit': details.max_results,
'min_rank': details.min_rank,
'max_rank': details.max_rank,
'excluded': details.excluded,
'countries': details.countries}
for row in await conn.execute(sql, bind_params):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + penalty
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
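
The rank window used above to filter the base results, restated as a small helper (a sketch; the function name is illustrative):

def rank_window(base_rank_address: int) -> tuple:
    if base_rank_address == 0:
        return 0, 0                               # no address rank at all
    if base_rank_address < 26:                    # area-like places
        return 1, min(25, base_rank_address + 4)
    return 26, 30                                 # street level and below

assert rank_window(16) == (1, 20)
assert rank_window(27) == (26, 30)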


@@ -0,0 +1,210 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of search for a named place (without housenumber).
"""
from typing import cast
import sqlalchemy as sa
from . import base
from ...typing import SaBind, SaExpression, SaColumn
from ...types import SearchDetails, Bbox
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ... import results as nres
from ..db_search_fields import SearchData
LIMIT_PARAM: SaBind = sa.bindparam('limit')
MIN_RANK_PARAM: SaBind = sa.bindparam('min_rank')
MAX_RANK_PARAM: SaBind = sa.bindparam('max_rank')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
COUNTRIES_PARAM: SaBind = sa.bindparam('countries')
class PlaceSearch(base.AbstractSearch):
""" Generic search for a named place.
"""
SEARCH_PRIO = 1
def __init__(self, extra_penalty: float, sdata: SearchData,
expected_count: int, has_address_terms: bool) -> None:
assert not sdata.housenumbers
super().__init__(sdata.penalty + extra_penalty)
self.countries = sdata.countries
self.postcodes = sdata.postcodes
self.qualifiers = sdata.qualifiers
self.lookups = sdata.lookups
self.rankings = sdata.rankings
self.expected_count = expected_count
self.has_address_terms = has_address_terms
def _inner_search_name_cte(self, conn: SearchConnection,
details: SearchDetails) -> 'sa.CTE':
""" Create a subquery that preselects the rows in the search_name
table.
"""
t = conn.t.search_name
penalty: SaExpression = sa.literal(self.penalty)
for ranking in self.rankings:
penalty += ranking.sql_penalty(t)
sql = sa.select(t.c.place_id, t.c.search_rank, t.c.address_rank,
t.c.country_code, t.c.centroid,
t.c.name_vector, t.c.nameaddress_vector,
sa.case((t.c.importance > 0, t.c.importance),
else_=0.40001-(sa.cast(t.c.search_rank, sa.Float())/75))
.label('importance'))
for lookup in self.lookups:
sql = sql.where(lookup.sql_condition(t))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if self.postcodes:
# if a postcode is given, don't search for state or country level objects
sql = sql.where(t.c.address_rank > 9)
if self.expected_count > 10000:
# Many results expected. Restrict by postcode.
tpc = conn.t.postcode
sql = sql.where(sa.select(tpc.c.postcode)
.where(tpc.c.postcode.in_(self.postcodes.values))
.where(t.c.centroid.within_distance(tpc.c.geometry, 0.4))
.exists())
if details.viewbox is not None:
if details.bounded_viewbox:
sql = sql.where(t.c.centroid
.intersects(VIEWBOX_PARAM,
use_index=details.viewbox.area < 0.2))
else:
penalty += sa.case((t.c.centroid.intersects(VIEWBOX_PARAM, use_index=False), 0.0),
(t.c.centroid.intersects(VIEWBOX2_PARAM, use_index=False), 0.5),
else_=1.0)
if details.near is not None and details.near_radius is not None:
if details.near_radius < 0.1:
sql = sql.where(t.c.centroid.within_distance(NEAR_PARAM,
NEAR_RADIUS_PARAM))
else:
sql = sql.where(t.c.centroid
.ST_Distance(NEAR_PARAM) < NEAR_RADIUS_PARAM)
if details.excluded:
sql = sql.where(base.exclude_places(t))
if details.min_rank > 0:
sql = sql.where(sa.or_(t.c.address_rank >= MIN_RANK_PARAM,
t.c.search_rank >= MIN_RANK_PARAM))
if details.max_rank < 30:
sql = sql.where(sa.or_(t.c.address_rank <= MAX_RANK_PARAM,
t.c.search_rank <= MAX_RANK_PARAM))
sql = sql.add_columns(penalty.label('penalty'))
inner = sql.limit(5000 if self.qualifiers else 1000)\
.order_by(sa.desc(sa.text('importance')))\
.subquery()
sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
inner.c.country_code, inner.c.centroid, inner.c.importance,
inner.c.penalty)
# If the query has neither a bounded viewbox nor a near filter and
# no qualifiers, preselect the most important items to restrict the
# number of places that need to be looked up in placex.
if (details.viewbox is None or not details.bounded_viewbox)\
and (details.near is None or details.near_radius is None)\
and not self.qualifiers:
sql = sql.add_columns(sa.func.first_value(inner.c.penalty - inner.c.importance)
.over(order_by=inner.c.penalty - inner.c.importance)
.label('min_penalty'))
inner = sql.subquery()
sql = sa.select(inner.c.place_id, inner.c.search_rank, inner.c.address_rank,
inner.c.country_code, inner.c.centroid, inner.c.importance,
inner.c.penalty)\
.where(inner.c.penalty - inner.c.importance < inner.c.min_penalty + 0.5)
return sql.cte('searches')
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.placex
tsearch = self._inner_search_name_cte(conn, details)
sql = base.select_placex(t).join(tsearch, t.c.place_id == tsearch.c.place_id)
if details.geometry_output:
sql = base.add_geometry_columns(sql, t.c.geometry, details)
penalty: SaExpression = tsearch.c.penalty
if self.postcodes:
if self.has_address_terms:
tpc = conn.t.postcode
pcs = self.postcodes.values
pc_near = sa.select(sa.func.min(tpc.c.geometry.ST_Distance(t.c.centroid)))\
.where(tpc.c.postcode.in_(pcs))\
.scalar_subquery()
penalty += sa.case((t.c.postcode.in_(pcs), 0.0),
else_=sa.func.coalesce(pc_near, cast(SaColumn, 2.0)))
else:
# High penalty if the postcode is not an exact match.
# The postcode search needs to get priority here.
penalty += sa.case((t.c.postcode.in_(self.postcodes.values), 0.0), else_=1.0)
if details.near is not None:
sql = sql.add_columns((-tsearch.c.centroid.ST_Distance(NEAR_PARAM))
.label('importance'))
sql = sql.order_by(sa.desc(sa.text('importance')))
else:
sql = sql.order_by(penalty - tsearch.c.importance)
sql = sql.add_columns(tsearch.c.importance)
sql = sql.add_columns(penalty.label('accuracy'))\
.order_by(sa.text('accuracy'))
sql = sql.where(t.c.linked_place_id == None)\
.where(t.c.indexed_status == 0)
if self.qualifiers:
sql = sql.where(self.qualifiers.sql_restrict(t))
if details.layers is not None:
sql = sql.where(base.filter_by_layer(t, details.layers))
sql = sql.limit(LIMIT_PARAM)
bind_params = {
'limit': details.max_results,
'min_rank': details.min_rank,
'max_rank': details.max_rank,
'viewbox': details.viewbox,
'viewbox2': details.viewbox_x2,
'near': details.near,
'near_radius': details.near_radius,
'excluded': details.excluded,
'countries': details.countries
}
results = nres.SearchResults()
for row in await conn.execute(sql, bind_params):
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.bbox = Bbox.from_wkb(row.bbox)
result.accuracy = row.accuracy
results.append(result)
return results
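
The windowed pre-selection above in plain Python: only candidates whose penalty-adjusted score lies within 0.5 of the best score survive into the placex join (a sketch with made-up rows):

rows = [('a', 0.2, 0.1), ('b', 0.4, 0.05), ('c', 1.2, 0.0)]  # (id, penalty, importance)
min_penalty = min(p - i for _, p, i in rows)   # the first_value() window above
kept = [r for r in rows if r[1] - r[2] < min_penalty + 0.5]
assert [r[0] for r in kept] == ['a', 'b']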


@@ -0,0 +1,114 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of category search.
"""
from typing import List
import sqlalchemy as sa
from . import base
from ..db_search_fields import SearchData
from ... import results as nres
from ...typing import SaBind, SaRow, SaSelect, SaLambdaSelect
from ...sql.sqlalchemy_types import Geometry
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox
LIMIT_PARAM: SaBind = sa.bindparam('limit')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
NEAR_RADIUS_PARAM: SaBind = sa.bindparam('near_radius')
class PoiSearch(base.AbstractSearch):
""" Category search in a geographic area.
"""
def __init__(self, sdata: SearchData) -> None:
super().__init__(sdata.penalty)
self.qualifiers = sdata.qualifiers
self.countries = sdata.countries
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
bind_params = {
'limit': details.max_results,
'viewbox': details.viewbox,
'near': details.near,
'near_radius': details.near_radius,
'excluded': details.excluded
}
t = conn.t.placex
rows: List[SaRow] = []
if details.near and details.near_radius is not None and details.near_radius < 0.2:
# simply search in placex table
def _base_query() -> SaSelect:
return base.select_placex(t) \
.add_columns((-t.c.centroid.ST_Distance(NEAR_PARAM))
.label('importance'))\
.where(t.c.linked_place_id == None) \
.where(t.c.geometry.within_distance(NEAR_PARAM, NEAR_RADIUS_PARAM)) \
.order_by(t.c.centroid.ST_Distance(NEAR_PARAM)) \
.limit(LIMIT_PARAM)
classtype = self.qualifiers.values
if len(classtype) == 1:
cclass, ctype = classtype[0]
sql: SaLambdaSelect = sa.lambda_stmt(
lambda: _base_query().where(t.c.class_ == cclass)
.where(t.c.type == ctype))
else:
sql = _base_query().where(sa.or_(*(sa.and_(t.c.class_ == cls, t.c.type == typ)
for cls, typ in classtype)))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(t.c.geometry.intersects(VIEWBOX_PARAM))
rows.extend(await conn.execute(sql, bind_params))
else:
# use the class type tables
for category in self.qualifiers.values:
table = await conn.get_class_table(*category)
if table is not None:
sql = base.select_placex(t)\
.add_columns(t.c.importance)\
.join(table, t.c.place_id == table.c.place_id)\
.where(t.c.class_ == category[0])\
.where(t.c.type == category[1])
if details.viewbox is not None and details.bounded_viewbox:
sql = sql.where(table.c.centroid.intersects(VIEWBOX_PARAM))
if details.near and details.near_radius is not None:
sql = sql.order_by(table.c.centroid.ST_Distance(NEAR_PARAM))\
.where(table.c.centroid.within_distance(NEAR_PARAM,
NEAR_RADIUS_PARAM))
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
sql = sql.limit(LIMIT_PARAM)
rows.extend(await conn.execute(sql, bind_params))
results = nres.SearchResults()
for row in rows:
result = nres.create_from_placex_row(row, nres.SearchResult)
assert result
result.accuracy = self.penalty + self.qualifiers.get_penalty((row.class_, row.type))
result.bbox = Bbox.from_wkb(row.bbox)
results.append(result)
return results
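
A note on the sa.lambda_stmt() use above: SQLAlchemy lambda statements cache the compiled SQL keyed on the lambda's code object and treat closure variables such as cclass and ctype as bound parameters, so repeated single-category lookups skip statement construction. A minimal standalone sketch (table and columns are made up):

import sqlalchemy as sa

t = sa.table('placex', sa.column('class_'), sa.column('type'))

def poi_filter(cclass: str, ctype: str):
    # the statement structure is cached; cclass/ctype become parameters
    return sa.lambda_stmt(lambda: sa.select(t).where(t.c.class_ == cclass)
                          .where(t.c.type == ctype))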


@@ -0,0 +1,129 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of search for a postcode.
"""
import sqlalchemy as sa
from . import base
from ...typing import SaBind, SaExpression
from ...sql.sqlalchemy_types import Geometry, IntArray
from ...connection import SearchConnection
from ...types import SearchDetails, Bbox
from ... import results as nres
from ..db_search_fields import SearchData
LIMIT_PARAM: SaBind = sa.bindparam('limit')
VIEWBOX_PARAM: SaBind = sa.bindparam('viewbox', type_=Geometry)
VIEWBOX2_PARAM: SaBind = sa.bindparam('viewbox2', type_=Geometry)
NEAR_PARAM: SaBind = sa.bindparam('near', type_=Geometry)
class PostcodeSearch(base.AbstractSearch):
""" Search for a postcode.
"""
def __init__(self, extra_penalty: float, sdata: SearchData) -> None:
super().__init__(sdata.penalty + extra_penalty)
self.countries = sdata.countries
self.postcodes = sdata.postcodes
self.lookups = sdata.lookups
self.rankings = sdata.rankings
async def lookup(self, conn: SearchConnection,
details: SearchDetails) -> nres.SearchResults:
""" Find results for the search in the database.
"""
t = conn.t.postcode
pcs = self.postcodes.values
sql = sa.select(t.c.place_id, t.c.parent_place_id,
t.c.rank_search, t.c.rank_address,
t.c.postcode, t.c.country_code,
t.c.geometry.label('centroid'))\
.where(t.c.postcode.in_(pcs))
if details.geometry_output:
sql = base.add_geometry_columns(sql, t.c.geometry, details)
penalty: SaExpression = sa.literal(self.penalty)
if details.viewbox is not None and not details.bounded_viewbox:
penalty += sa.case((t.c.geometry.intersects(VIEWBOX_PARAM), 0.0),
(t.c.geometry.intersects(VIEWBOX2_PARAM), 0.5),
else_=1.0)
if details.near is not None:
sql = sql.order_by(t.c.geometry.ST_Distance(NEAR_PARAM))
sql = base.filter_by_area(sql, t, details)
if self.countries:
sql = sql.where(t.c.country_code.in_(self.countries.values))
if details.excluded:
sql = sql.where(base.exclude_places(t))
if self.lookups:
assert len(self.lookups) == 1
tsearch = conn.t.search_name
sql = sql.where(tsearch.c.place_id == t.c.parent_place_id)\
.where((tsearch.c.name_vector + tsearch.c.nameaddress_vector)
.contains(sa.type_coerce(self.lookups[0].tokens,
IntArray)))
# Do NOT add rerank penalties based on the address terms.
# The standard rerank penalty only checks the address vector,
# while the terms may appear in both the name and the address
# vector. This would lead to overly high penalties.
# We assume that a postcode is precise enough not to require
# additional full-name matches.
penalty += sa.case(*((t.c.postcode == v, p) for v, p in self.postcodes),
else_=1.0)
sql = sql.add_columns(penalty.label('accuracy'))
sql = sql.order_by('accuracy').limit(LIMIT_PARAM)
bind_params = {
'limit': details.max_results,
'viewbox': details.viewbox,
'viewbox2': details.viewbox_x2,
'near': details.near,
'near_radius': details.near_radius,
'excluded': details.excluded
}
results = nres.SearchResults()
for row in await conn.execute(sql, bind_params):
p = conn.t.placex
placex_sql = base.select_placex(p)\
.add_columns(p.c.importance)\
.where(sa.text("""class = 'boundary'
AND type = 'postal_code'
AND osm_type = 'R'"""))\
.where(p.c.country_code == row.country_code)\
.where(p.c.postcode == row.postcode)\
.limit(1)
if details.geometry_output:
placex_sql = base.add_geometry_columns(placex_sql, p.c.geometry, details)
for prow in await conn.execute(placex_sql, bind_params):
result = nres.create_from_placex_row(prow, nres.SearchResult)
if result is not None:
result.bbox = Bbox.from_wkb(prow.bbox)
break
else:
result = nres.create_from_postcode_row(row, nres.SearchResult)
assert result
if result.place_id not in details.excluded:
result.accuracy = row.accuracy
results.append(result)
return results
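
The postcode penalty CASE above, restated in plain Python (a sketch; 'variants' pairs each accepted spelling with its penalty):

def postcode_penalty(postcode: str, variants: list) -> float:
    for value, penalty in variants:        # [(postcode, penalty), ...]
        if postcode == value:
            return penalty
    return 1.0                             # no exact match

assert postcode_penalty('12345', [('12345', 0.0), ('12 345', 0.1)]) == 0.0
assert postcode_penalty('99999', [('12345', 0.0)]) == 1.0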


@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Public interface to the search code.
@@ -50,6 +50,9 @@ class ForwardGeocoder:
self.query_analyzer = await make_query_analyzer(self.conn)
query = await self.query_analyzer.analyze_query(phrases)
query.compute_direction_penalty()
log().var_dump('Query direction penalty',
lambda: f"[{'LR' if query.dir_penalty < 0 else 'RL'}] {query.dir_penalty}")
searches: List[AbstractSearch] = []
if query.num_token_slots() > 0:
@@ -80,7 +83,7 @@ class ForwardGeocoder:
min_ranking = searches[0].penalty + 2.0
prev_penalty = 0.0
for i, search in enumerate(searches):
if search.penalty > prev_penalty and (search.penalty > min_ranking or i > 20):
if search.penalty > prev_penalty and (search.penalty > min_ranking or i > 15):
break
log().table_dump(f"{i + 1}. Search", _dump_searches([search], query))
log().var_dump('Params', self.params)
@@ -115,17 +118,20 @@ class ForwardGeocoder:
""" Remove badly matching results, sort by ranking and
limit to the configured number of results.
"""
if results:
results.sort(key=lambda r: (r.ranking, 0 if r.bbox is None else -r.bbox.area))
min_rank = results[0].rank_search
min_ranking = results[0].ranking
results = SearchResults(r for r in results
if (r.ranking + 0.03 * (r.rank_search - min_rank)
< min_ranking + 0.5))
results.sort(key=lambda r: (r.ranking, 0 if r.bbox is None else -r.bbox.area))
results = SearchResults(results[:self.limit])
final = SearchResults()
min_rank = results[0].rank_search
min_ranking = results[0].ranking
return results
for r in results:
if r.ranking + 0.03 * (r.rank_search - min_rank) < min_ranking + 0.5:
final.append(r)
min_rank = min(r.rank_search, min_rank)
if len(final) == self.limit:
break
return final
def rerank_by_query(self, query: QueryStruct, results: SearchResults) -> None:
""" Adjust the accuracy of the localized result according to how well
@@ -150,17 +156,16 @@ class ForwardGeocoder:
if not words:
continue
for qword in qwords:
wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio() for w in words)
if wdist < 0.5:
distance += len(qword)
else:
distance += (1.0 - wdist) * len(qword)
# only add distance penalty if there is no perfect match
if qword not in words:
wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio() for w in words)
distance += len(qword) if wdist < 0.4 else 1
# Compensate for the fact that country names do not get a
# match penalty yet by the tokenizer.
# Temporary hack that needs to be removed!
if result.rank_address == 4:
distance *= 2
result.accuracy += distance * 0.4 / sum(len(w) for w in qwords)
result.accuracy += distance * 0.3 / sum(len(w) for w in qwords)
async def lookup_pois(self, categories: List[Tuple[str, str]],
phrases: List[Phrase]) -> SearchResults:
@@ -208,9 +213,10 @@ class ForwardGeocoder:
results = self.pre_filter_results(results)
await add_result_details(self.conn, results, self.params)
log().result_dump('Preliminary Results', ((r.accuracy, r) for r in results))
self.rerank_by_query(query, results)
log().result_dump('Results after reranking', ((r.accuracy, r) for r in results))
results = self.sort_and_cut_results(results)
if len(results) > 1:
self.rerank_by_query(query, results)
log().result_dump('Results after reranking', ((r.accuracy, r) for r in results))
results = self.sort_and_cut_results(results)
log().result_dump('Final Results', ((r.accuracy, r) for r in results))
return results
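
The reworked reranking distance above, extracted into a standalone function for illustration (assumes only the standard library):

import difflib

def word_distance(qwords: list, words: list) -> float:
    distance = 0.0
    for qword in qwords:
        if qword not in words:  # no penalty on a perfect match
            wdist = max(difflib.SequenceMatcher(a=qword, b=w).quick_ratio()
                        for w in words)
            distance += len(qword) if wdist < 0.4 else 1
    return distance

assert word_distance(['berlin'], ['berlin']) == 0   # exact match
assert word_distance(['berlin'], ['berin']) == 1    # close match, flat cost
assert word_distance(['berlin'], ['xyz']) == 6      # no match: full word length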


@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Implementation of query analysis for the ICU tokenizer.
@@ -37,14 +37,14 @@ DB_TO_TOKEN_TYPE = {
'C': qmod.TOKEN_COUNTRY
}
PENALTY_IN_TOKEN_BREAK = {
qmod.BREAK_START: 0.5,
qmod.BREAK_END: 0.5,
qmod.BREAK_PHRASE: 0.5,
qmod.BREAK_SOFT_PHRASE: 0.5,
PENALTY_BREAK = {
qmod.BREAK_START: -0.5,
qmod.BREAK_END: -0.5,
qmod.BREAK_PHRASE: -0.5,
qmod.BREAK_SOFT_PHRASE: -0.5,
qmod.BREAK_WORD: 0.1,
qmod.BREAK_PART: 0.0,
qmod.BREAK_TOKEN: 0.0
qmod.BREAK_PART: 0.2,
qmod.BREAK_TOKEN: 0.4
}
@@ -78,13 +78,13 @@ class ICUToken(qmod.Token):
self.penalty += (distance/len(self.lookup_word))
@staticmethod
def from_db_row(row: SaRow, base_penalty: float = 0.0) -> 'ICUToken':
def from_db_row(row: SaRow) -> 'ICUToken':
""" Create a ICUToken from the row of the word table.
"""
count = 1 if row.info is None else row.info.get('count', 1)
addr_count = 1 if row.info is None else row.info.get('addr_count', 1)
penalty = base_penalty
penalty = 0.0
if row.type == 'w':
penalty += 0.3
elif row.type == 'W':
@@ -174,11 +174,14 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
self.split_query(query)
log().var_dump('Transliterated query', lambda: query.get_transliterated_query())
words = query.extract_words(base_penalty=PENALTY_IN_TOKEN_BREAK[qmod.BREAK_WORD])
words = query.extract_words()
for row in await self.lookup_in_db(list(words.keys())):
for trange in words[row.word_token]:
token = ICUToken.from_db_row(row, trange.penalty or 0.0)
# Create a new token for each position because the token
# penalty can vary depending on the position in the query.
# (See rerank_tokens() below.)
token = ICUToken.from_db_row(row)
if row.type == 'S':
if row.info['op'] in ('in', 'near'):
if trange.start == 0:
@@ -200,6 +203,7 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
lookup_word=pc, word_token=term,
info=None))
self.rerank_tokens(query)
self.compute_break_penalties(query)
log().table_dump('Word tokens', _dump_word_tokens(query))
@@ -229,13 +233,10 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
if trans:
for term in trans.split(' '):
if term:
query.add_node(qmod.BREAK_TOKEN, phrase.ptype,
PENALTY_IN_TOKEN_BREAK[qmod.BREAK_TOKEN],
term, word)
query.nodes[-1].adjust_break(breakchar,
PENALTY_IN_TOKEN_BREAK[breakchar])
query.add_node(qmod.BREAK_TOKEN, phrase.ptype, term, word)
query.nodes[-1].btype = breakchar
query.nodes[-1].adjust_break(qmod.BREAK_END, PENALTY_IN_TOKEN_BREAK[qmod.BREAK_END])
query.nodes[-1].btype = qmod.BREAK_END
async def lookup_in_db(self, words: List[str]) -> 'sa.Result[Any]':
""" Return the token information from the database for the
@@ -267,32 +268,53 @@ class ICUQueryAnalyzer(AbstractQueryAnalyzer):
def rerank_tokens(self, query: qmod.QueryStruct) -> None:
""" Add penalties to tokens that depend on presence of other token.
"""
for i, node, tlist in query.iter_token_lists():
if tlist.ttype == qmod.TOKEN_POSTCODE:
tlen = len(cast(ICUToken, tlist.tokens[0]).word_token)
for repl in node.starting:
if repl.end == tlist.end and repl.ttype != qmod.TOKEN_POSTCODE \
and (repl.ttype != qmod.TOKEN_HOUSENUMBER or tlen > 4):
repl.add_penalty(0.39)
elif (tlist.ttype == qmod.TOKEN_HOUSENUMBER
and len(tlist.tokens[0].lookup_word) <= 3):
if any(c.isdigit() for c in tlist.tokens[0].lookup_word):
for repl in node.starting:
if repl.end == tlist.end and repl.ttype != qmod.TOKEN_HOUSENUMBER:
repl.add_penalty(0.5 - tlist.tokens[0].penalty)
elif tlist.ttype not in (qmod.TOKEN_COUNTRY, qmod.TOKEN_PARTIAL):
norm = ' '.join(n.term_normalized for n in query.nodes[i + 1:tlist.end + 1]
if n.btype != qmod.BREAK_TOKEN)
if not norm:
# Can happen when the token only covers a partial term
norm = query.nodes[i + 1].term_normalized
for token in tlist.tokens:
cast(ICUToken, token).rematch(norm)
for start, end, tlist in query.iter_tokens_by_edge():
if len(tlist) > 1:
# If it looks like a Postcode, give preference.
if qmod.TOKEN_POSTCODE in tlist:
for ttype, tokens in tlist.items():
if ttype != qmod.TOKEN_POSTCODE and \
(ttype != qmod.TOKEN_HOUSENUMBER or
start + 1 > end or
len(query.nodes[end].term_lookup) > 4):
for token in tokens:
token.penalty += 0.39
# If it looks like a simple housenumber, prefer that.
if qmod.TOKEN_HOUSENUMBER in tlist:
hnr_lookup = tlist[qmod.TOKEN_HOUSENUMBER][0].lookup_word
if len(hnr_lookup) <= 3 and any(c.isdigit() for c in hnr_lookup):
penalty = 0.5 - tlist[qmod.TOKEN_HOUSENUMBER][0].penalty
for ttype, tokens in tlist.items():
if ttype != qmod.TOKEN_HOUSENUMBER:
for token in tokens:
token.penalty += penalty
# rerank tokens against the normalized form
norm = ' '.join(n.term_normalized for n in query.nodes[start + 1:end + 1]
if n.btype != qmod.BREAK_TOKEN)
if not norm:
# Can happen when the token only covers a partial term
norm = query.nodes[start + 1].term_normalized
for ttype, tokens in tlist.items():
if ttype != qmod.TOKEN_COUNTRY:
for token in tokens:
cast(ICUToken, token).rematch(norm)
def compute_break_penalties(self, query: qmod.QueryStruct) -> None:
""" Set the break penalties for the nodes in the query.
"""
for node in query.nodes:
node.penalty = PENALTY_BREAK[node.btype]
def _dump_word_tokens(query: qmod.QueryStruct) -> Iterator[List[Any]]:
yield ['type', 'from', 'to', 'token', 'word_token', 'lookup_word', 'penalty', 'count', 'info']
for i, node in enumerate(query.nodes):
if node.partial is not None:
t = cast(ICUToken, node.partial)
yield [qmod.TOKEN_PARTIAL, str(i), str(i + 1), t.token,
t.word_token, t.lookup_word, t.penalty, t.count, t.info]
for tlist in node.starting:
for token in tlist.tokens:
t = cast(ICUToken, token)


@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Datastructures for a tokenized query.
@@ -12,6 +12,17 @@ from abc import ABC, abstractmethod
from collections import defaultdict
import dataclasses
# Precomputed denominator for the computation of the linear regression slope
# used to determine the query direction.
# The x values for the regression are the positions of the tokens in the
# query, i.e. they are known to be [0, query length). As the denominator
# only depends on the x values, it can be precomputed for each query length.
# Queries of length two or less are special-cased and do not use the values
# from this array, so it does not matter that those entries are 0.
LINFAC = [i * (sum(si * si for si in range(i)) - (i - 1) * i * (i - 1) / 4)
for i in range(50)]
BreakType = str
""" Type of break between tokens.
@@ -123,7 +134,6 @@ class TokenRange:
"""
start: int
end: int
penalty: Optional[float] = None
def __lt__(self, other: 'TokenRange') -> bool:
return self.end <= other.start
@@ -180,24 +190,50 @@ class QueryNode:
ptype: PhraseType
penalty: float
""" Penalty for the break at this node.
""" Penalty for having a word break at this position. The penalty
may be negative when a word break is more likely than continuing
the word after the node.
"""
term_lookup: str
""" Transliterated term following this node.
""" Transliterated term ending at this node.
"""
term_normalized: str
""" Normalised form of term following this node.
""" Normalised form of term ending at this node.
When the token resulted from a split during transliteration,
then this string contains the complete source term.
"""
starting: List[TokenList] = dataclasses.field(default_factory=list)
""" List of all full tokens starting at this node.
"""
partial: Optional[Token] = None
""" Base token going to the next node.
May be None when the query has parts for which no words are known.
Note that the query may still be parsable when there are other
types of tokens spanning the gap.
"""
def adjust_break(self, btype: BreakType, penalty: float) -> None:
""" Change the break type and penalty for this node.
@property
def word_break_penalty(self) -> float:
""" Penalty to apply when a words ends at this node.
"""
self.btype = btype
self.penalty = penalty
return max(0, self.penalty)
@property
def word_continuation_penalty(self) -> float:
""" Penalty to apply when a word continues over this node
(i.e. is a multi-term word).
"""
return max(0, -self.penalty)
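# To illustrate the two penalty views above with the PENALTY_BREAK values
# from the ICU tokenizer (an illustrative sketch): a BREAK_TOKEN node
# (penalty 0.4) is expensive to end a word at but free to continue over,
# while a BREAK_PHRASE node (penalty -0.5) is free to break at but
# expensive to cross:
#
#   word_break_penalty        = max(0, 0.4)  -> 0.4 | max(0, -0.5) -> 0
#   word_continuation_penalty = max(0, -0.4) -> 0   | max(0, 0.5)  -> 0.5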
def name_address_ratio(self) -> float:
""" Return the propability that the partial token belonging to
this node forms part of a name (as opposed of part of the address).
"""
if self.partial is None:
return 0.5
return self.partial.count / (self.partial.count + self.partial.addr_count)
def has_tokens(self, end: int, *ttypes: TokenType) -> bool:
""" Check if there are tokens of the given types ending at the
@@ -234,12 +270,20 @@ class QueryStruct:
need to be direct neighbours. Thus the query is represented as a
directed acyclic graph.
A query also has a direction penalty 'dir_penalty'. This describes
the likelihood that the query should be read from left-to-right or
vice versa. A negative 'dir_penalty' should be read as a penalty on
right-to-left reading, while a positive value represents a penalty
for left-to-right reading. The default value is 0, which is equivalent
to having no information about the reading.
When created, a query contains a single node: the start of the
query. Further nodes can be added by appending to 'nodes'.
"""
def __init__(self, source: List[Phrase]) -> None:
self.source = source
self.dir_penalty = 0.0
self.nodes: List[QueryNode] = \
[QueryNode(BREAK_START, source[0].ptype if source else PHRASE_ANY,
0.0, '', '')]
@@ -250,13 +294,12 @@ class QueryStruct:
return len(self.nodes) - 1
def add_node(self, btype: BreakType, ptype: PhraseType,
break_penalty: float = 0.0,
term_lookup: str = '', term_normalized: str = '') -> None:
""" Append a new break node with the given break type.
The phrase type denotes the type for any tokens starting
at the node.
"""
self.nodes.append(QueryNode(btype, ptype, break_penalty, term_lookup, term_normalized))
self.nodes.append(QueryNode(btype, ptype, 0.0, term_lookup, term_normalized))
def add_token(self, trange: TokenRange, ttype: TokenType, token: Token) -> None:
""" Add a token to the query. 'start' and 'end' are the indexes of the
@@ -269,37 +312,70 @@ class QueryStruct:
be added to, then the token is silently dropped.
"""
snode = self.nodes[trange.start]
full_phrase = snode.btype in (BREAK_START, BREAK_PHRASE)\
and self.nodes[trange.end].btype in (BREAK_PHRASE, BREAK_END)
if _phrase_compatible_with(snode.ptype, ttype, full_phrase):
tlist = snode.get_tokens(trange.end, ttype)
if tlist is None:
snode.starting.append(TokenList(trange.end, ttype, [token]))
else:
tlist.append(token)
if ttype == TOKEN_PARTIAL:
assert snode.partial is None
if _phrase_compatible_with(snode.ptype, TOKEN_PARTIAL, False):
snode.partial = token
else:
full_phrase = snode.btype in (BREAK_START, BREAK_PHRASE)\
and self.nodes[trange.end].btype in (BREAK_PHRASE, BREAK_END)
if _phrase_compatible_with(snode.ptype, ttype, full_phrase):
tlist = snode.get_tokens(trange.end, ttype)
if tlist is None:
snode.starting.append(TokenList(trange.end, ttype, [token]))
else:
tlist.append(token)
def compute_direction_penalty(self) -> None:
""" Recompute the direction probability from the partial tokens
of each node.
"""
n = len(self.nodes) - 1
if n <= 1 or n >= 50:
self.dir_penalty = 0
elif n == 2:
self.dir_penalty = (self.nodes[1].name_address_ratio()
- self.nodes[0].name_address_ratio()) / 3
else:
ratios = [n.name_address_ratio() for n in self.nodes[:-1]]
self.dir_penalty = (n * sum(i * r for i, r in enumerate(ratios))
- sum(ratios) * n * (n - 1) / 2) / LINFAC[n]
def get_tokens(self, trange: TokenRange, ttype: TokenType) -> List[Token]:
""" Get the list of tokens of a given type, spanning the given
nodes. The nodes must exist. If no tokens exist, an
empty list is returned.
Cannot be used to get the partial token.
"""
assert ttype != TOKEN_PARTIAL
return self.nodes[trange.start].get_tokens(trange.end, ttype) or []
def get_partials_list(self, trange: TokenRange) -> List[Token]:
""" Create a list of partial tokens between the given nodes.
The list is composed of the first token of type PARTIAL
going to the subsequent node. Such PARTIAL tokens are
assumed to exist.
def get_in_word_penalty(self, trange: TokenRange) -> float:
""" Gets the sum of penalties for all token transitions
within the given range.
"""
return [next(iter(self.get_tokens(TokenRange(i, i+1), TOKEN_PARTIAL)))
for i in range(trange.start, trange.end)]
return sum(n.word_continuation_penalty
for n in self.nodes[trange.start + 1:trange.end])
def iter_token_lists(self) -> Iterator[Tuple[int, QueryNode, TokenList]]:
""" Iterator over all token lists in the query.
def iter_partials(self, trange: TokenRange) -> Iterator[Token]:
""" Iterate over the partial tokens between the given nodes.
Missing partials are ignored.
"""
return (n.partial for n in self.nodes[trange.start:trange.end] if n.partial is not None)
def iter_tokens_by_edge(self) -> Iterator[Tuple[int, int, Dict[TokenType, List[Token]]]]:
""" Iterator over all tokens except partial ones grouped by edge.
Returns the start and end node indexes and a dictionary
of token lists keyed by token type.
"""
for i, node in enumerate(self.nodes):
by_end: Dict[int, Dict[TokenType, List[Token]]] = defaultdict(dict)
for tlist in node.starting:
yield i, node, tlist
by_end[tlist.end][tlist.ttype] = tlist.tokens
for end, endlist in by_end.items():
yield i, end, endlist
def find_lookup_word_by_id(self, token: int) -> str:
""" Find the first token with the given token ID and return
@@ -308,6 +384,8 @@ class QueryStruct:
debugging.
"""
for node in self.nodes:
if node.partial is not None and node.partial.token == token:
return f"[P]{node.partial.lookup_word}"
for tlist in node.starting:
for t in tlist.tokens:
if t.token == token:
@@ -322,33 +400,29 @@ class QueryStruct:
"""
return ''.join(''.join((n.term_lookup, n.btype)) for n in self.nodes)
def extract_words(self, base_penalty: float = 0.0,
start: int = 0,
def extract_words(self, start: int = 0,
endpos: Optional[int] = None) -> Dict[str, List[TokenRange]]:
""" Add all combinations of words that can be formed from the terms
between the given start and endnode. The terms are joined with
spaces for each break. Words can never go across a BREAK_PHRASE.
The function returns a dictionary of possible words with their
position within the query and a penalty. The penalty is computed
from the base_penalty plus the penalty for each node the word
crosses.
position within the query.
"""
if endpos is None:
endpos = len(self.nodes)
words: Dict[str, List[TokenRange]] = defaultdict(list)
for first in range(start, endpos - 1):
word = self.nodes[first + 1].term_lookup
penalty = base_penalty
words[word].append(TokenRange(first, first + 1, penalty=penalty))
if self.nodes[first + 1].btype != BREAK_PHRASE:
for last in range(first + 2, min(first + 20, endpos)):
word = ' '.join((word, self.nodes[last].term_lookup))
penalty += self.nodes[last - 1].penalty
words[word].append(TokenRange(first, last, penalty=penalty))
if self.nodes[last].btype == BREAK_PHRASE:
for first, first_node in enumerate(self.nodes[start + 1:endpos], start):
word = first_node.term_lookup
words[word].append(TokenRange(first, first + 1))
if first_node.btype != BREAK_PHRASE:
max_last = min(first + 20, endpos)
for last, last_node in enumerate(self.nodes[first + 2:max_last], first + 2):
word = ' '.join((word, last_node.term_lookup))
words[word].append(TokenRange(first, last))
if last_node.btype == BREAK_PHRASE:
break
return words
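
A small self-contained check of the LINFAC shortcut used by compute_direction_penalty() above: the direction penalty is the ordinary least-squares slope of name_address_ratio() against token position, with the position-only denominator precomputed (illustrative values):

def ols_slope(ratios: list) -> float:
    n = len(ratios)
    xbar = (n - 1) / 2
    ybar = sum(ratios) / n
    num = sum((i - xbar) * (r - ybar) for i, r in enumerate(ratios))
    return num / sum((i - xbar) ** 2 for i in range(n))

LINFAC = [i * (sum(si * si for si in range(i)) - (i - 1) * i * (i - 1) / 4)
          for i in range(50)]

ratios = [0.2, 0.5, 0.4, 0.9]
n = len(ratios)
shortcut = (n * sum(i * r for i, r in enumerate(ratios))
            - sum(ratios) * n * (n - 1) / 2) / LINFAC[n]
assert abs(shortcut - ols_slope(ratios)) < 1e-12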


@@ -23,16 +23,6 @@ class TypedRange:
trange: qmod.TokenRange
PENALTY_TOKENCHANGE = {
qmod.BREAK_START: 0.0,
qmod.BREAK_END: 0.0,
qmod.BREAK_PHRASE: 0.0,
qmod.BREAK_SOFT_PHRASE: 0.0,
qmod.BREAK_WORD: 0.1,
qmod.BREAK_PART: 0.2,
qmod.BREAK_TOKEN: 0.4
}
TypedRangeSeq = List[TypedRange]
@@ -192,7 +182,7 @@ class _TokenSequence:
return None
def advance(self, ttype: qmod.TokenType, end_pos: int,
btype: qmod.BreakType) -> Optional['_TokenSequence']:
force_break: bool, break_penalty: float) -> Optional['_TokenSequence']:
""" Return a new token sequence state with the given token type
extended.
"""
@@ -205,7 +195,7 @@ class _TokenSequence:
new_penalty = 0.0
else:
last = self.seq[-1]
if btype != qmod.BREAK_PHRASE and last.ttype == ttype:
if not force_break and last.ttype == ttype:
# extend the existing range
newseq = self.seq[:-1] + [TypedRange(ttype, last.trange.replace_end(end_pos))]
new_penalty = 0.0
@@ -213,7 +203,7 @@ class _TokenSequence:
# start a new range
newseq = list(self.seq) + [TypedRange(ttype,
qmod.TokenRange(last.trange.end, end_pos))]
new_penalty = PENALTY_TOKENCHANGE[btype]
new_penalty = break_penalty
return _TokenSequence(newseq, newdir, self.penalty + new_penalty)
@@ -286,8 +276,12 @@ class _TokenSequence:
log().var_dump('skip forward', (base.postcode, first))
return
penalty = self.penalty
if not base.country and self.direction == 1 and query.dir_penalty > 0:
penalty += query.dir_penalty
log().comment('first word = name')
yield dataclasses.replace(base, penalty=self.penalty,
yield dataclasses.replace(base, penalty=penalty,
name=first, address=base.address[1:])
# To paraphrase:
@@ -300,19 +294,20 @@ class _TokenSequence:
or (query.nodes[first.start].ptype != qmod.PHRASE_ANY):
return
penalty = self.penalty
# Penalty for:
# * <name>, <street>, <housenumber> , ...
# * queries that are comma-separated
if (base.housenumber and base.housenumber > first) or len(query.source) > 1:
penalty += 0.25
if self.direction == 0 and query.dir_penalty > 0:
penalty += query.dir_penalty
for i in range(first.start + 1, first.end):
name, addr = first.split(i)
log().comment(f'split first word = name ({i - first.start})')
yield dataclasses.replace(base, name=name, address=[addr] + base.address[1:],
penalty=penalty + PENALTY_TOKENCHANGE[query.nodes[i].btype])
penalty=penalty + query.nodes[i].word_break_penalty)
def _get_assignments_address_backward(self, base: TokenAssignment,
query: qmod.QueryStruct) -> Iterator[TokenAssignment]:
@@ -326,9 +321,13 @@ class _TokenSequence:
log().var_dump('skip backward', (base.postcode, last))
return
penalty = self.penalty
if not base.country and self.direction == -1 and query.dir_penalty < 0:
penalty -= query.dir_penalty
if self.direction == -1 or len(base.address) > 1 or base.postcode:
log().comment('last word = name')
yield dataclasses.replace(base, penalty=self.penalty,
yield dataclasses.replace(base, penalty=penalty,
name=last, address=base.address[:-1])
# To paraphrase:
@@ -341,17 +340,19 @@ class _TokenSequence:
or (query.nodes[last.start].ptype != qmod.PHRASE_ANY):
return
penalty = self.penalty
if base.housenumber and base.housenumber < last:
penalty += 0.4
if len(query.source) > 1:
penalty += 0.25
if self.direction == 0 and query.dir_penalty < 0:
penalty -= query.dir_penalty
for i in range(last.start + 1, last.end):
addr, name = last.split(i)
log().comment(f'split last word = name ({i - last.start})')
yield dataclasses.replace(base, name=name, address=base.address[:-1] + [addr],
penalty=penalty + PENALTY_TOKENCHANGE[query.nodes[i].btype])
penalty=penalty + query.nodes[i].word_break_penalty)
def get_assignments(self, query: qmod.QueryStruct) -> Iterator[TokenAssignment]:
""" Yield possible assignments for the current sequence.
@@ -379,11 +380,11 @@ class _TokenSequence:
if base.postcode and base.postcode.start == 0:
self.penalty += 0.1
# Right-to-left reading of the address
# Left-to-right reading of the address
if self.direction != -1:
yield from self._get_assignments_address_forward(base, query)
# Left-to-right reading of the address
# Right-to-left reading of the address
if self.direction != 1:
yield from self._get_assignments_address_backward(base, query)
@@ -409,11 +410,25 @@ def yield_token_assignments(query: qmod.QueryStruct) -> Iterator[TokenAssignment
node = query.nodes[state.end_pos]
for tlist in node.starting:
newstate = state.advance(tlist.ttype, tlist.end, node.btype)
if newstate is not None:
if newstate.end_pos == query.num_token_slots():
if newstate.recheck_sequence():
log().var_dump('Assignment', newstate)
yield from newstate.get_assignments(query)
elif not newstate.is_final():
todo.append(newstate)
yield from _append_state_to_todo(
query, todo,
state.advance(tlist.ttype, tlist.end,
True, node.word_break_penalty))
if node.partial is not None:
yield from _append_state_to_todo(
query, todo,
state.advance(qmod.TOKEN_PARTIAL, state.end_pos + 1,
node.btype == qmod.BREAK_PHRASE,
node.word_break_penalty))
def _append_state_to_todo(query: qmod.QueryStruct, todo: List[_TokenSequence],
newstate: Optional[_TokenSequence]) -> Iterator[TokenAssignment]:
if newstate is not None:
if newstate.end_pos == query.num_token_slots():
if newstate.recheck_sequence():
log().var_dump('Assignment', newstate)
yield from newstate.get_assignments(query)
elif not newstate.is_final():
todo.append(newstate)

View File

@@ -190,7 +190,7 @@ def get_application(project_dir: Path,
"""
apimw = APIMiddleware(project_dir, environ)
middleware: List[object] = [apimw]
middleware: List[Any] = [apimw]
log_file = apimw.config.LOG_FILE
if log_file:
middleware.append(FileLoggingMiddleware(log_file))

View File

@@ -143,7 +143,7 @@ def get_application(project_dir: Path,
log_file = config.LOG_FILE
if log_file:
middleware.append(Middleware(FileLoggingMiddleware, file_name=log_file))
middleware.append(Middleware(FileLoggingMiddleware, file_name=log_file)) # type: ignore
exceptions: Dict[Any, Callable[[Request, Exception], Awaitable[Response]]] = {
TimeoutError: timeout_error,

View File

@@ -122,15 +122,18 @@ class IsAddressPoint(sa.sql.functions.GenericFunction[Any]):
def __init__(self, table: sa.Table) -> None:
super().__init__(table.c.rank_address,
table.c.housenumber, table.c.name)
table.c.housenumber, table.c.name, table.c.address)
@compiles(IsAddressPoint)
def default_is_address_point(element: IsAddressPoint,
compiler: 'sa.Compiled', **kw: Any) -> str:
rank, hnr, name = list(element.clauses)
return "(%s = 30 AND (%s IS NOT NULL OR %s ? 'addr:housename'))" % (
rank, hnr, name, address = list(element.clauses)
return "(%s = 30 AND (%s IS NULL OR NOT %s ? '_inherited')" \
" AND (%s IS NOT NULL OR %s ? 'addr:housename'))" % (
compiler.process(rank, **kw),
compiler.process(address, **kw),
compiler.process(address, **kw),
compiler.process(hnr, **kw),
compiler.process(name, **kw))
@@ -138,9 +141,11 @@ def default_is_address_point(element: IsAddressPoint,
@compiles(IsAddressPoint, 'sqlite')
def sqlite_is_address_point(element: IsAddressPoint,
compiler: 'sa.Compiled', **kw: Any) -> str:
rank, hnr, name = list(element.clauses)
return "(%s = 30 AND coalesce(%s, json_extract(%s, '$.addr:housename')) IS NOT NULL)" % (
rank, hnr, name, address = list(element.clauses)
return "(%s = 30 AND json_extract(%s, '$._inherited') IS NULL" \
" AND coalesce(%s, json_extract(%s, '$.addr:housename')) IS NOT NULL)" % (
compiler.process(rank, **kw),
compiler.process(address, **kw),
compiler.process(hnr, **kw),
compiler.process(name, **kw))
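For reference, substituting the placex columns into the format strings, the two compilers now render roughly (sketch, whitespace added for readability):

    # PostgreSQL:
    #   (rank_address = 30 AND (address IS NULL OR NOT address ? '_inherited')
    #        AND (housenumber IS NOT NULL OR name ? 'addr:housename'))
    #
    # SQLite:
    #   (rank_address = 30 AND json_extract(address, '$._inherited') IS NULL
    #        AND coalesce(housenumber, json_extract(name, '$.addr:housename')) IS NOT NULL)

so a rank-30 point whose address tags were merely inherited (marked '_inherited') is no longer treated as an address point.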

View File

@@ -84,8 +84,9 @@ def format_base_json(results: Union[ReverseResults, SearchResults],
_write_osm_id(out, result.osm_object)
out.keyval('lat', f"{result.centroid.lat}")\
.keyval('lon', f"{result.centroid.lon}")\
# lat and lon must be string values
out.keyval('lat', f"{result.centroid.lat:0.7f}")\
.keyval('lon', f"{result.centroid.lon:0.7f}")\
.keyval(class_label, result.category[0])\
.keyval('type', result.category[1])\
.keyval('place_rank', result.rank_search)\
@@ -112,6 +113,7 @@ def format_base_json(results: Union[ReverseResults, SearchResults],
if options.get('namedetails', False):
out.keyval('namedetails', result.names)
# must be string values
bbox = cl.bbox_from_result(result)
out.key('boundingbox').start_array()\
.value(f"{bbox.minlat:0.7f}").next()\

View File

@@ -90,7 +90,7 @@ def format_base_xml(results: Union[ReverseResults, SearchResults],
result will be output, otherwise a list.
"""
root = ET.Element(xml_root_tag)
root.set('timestamp', dt.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S +00:00'))
root.set('timestamp', dt.datetime.now(dt.timezone.utc).strftime('%a, %d %b %Y %H:%M:%S +00:00'))
root.set('attribution', cl.OSM_ATTRIBUTION)
for k, v in xml_extra_info.items():
root.set(k, v)
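datetime.utcnow() has been deprecated since Python 3.12; the timezone-aware call produces the same UTC wall-clock time:

    import datetime as dt

    # naive, deprecated:  dt.datetime.utcnow()
    # aware, preferred:   dt.datetime.now(dt.timezone.utc)
    ts = dt.datetime.now(dt.timezone.utc).strftime('%a, %d %b %Y %H:%M:%S +00:00')
    # e.g. 'Wed, 06 Aug 2025 19:59:13 +00:00'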

View File

@@ -374,14 +374,17 @@ async def deletable_endpoint(api: NominatimAPIAsync, params: ASGIAdaptor) -> Any
"""
fmt = parse_format(params, RawDataList, 'json')
results = RawDataList()
async with api.begin() as conn:
sql = sa.text(""" SELECT p.place_id, country_code,
name->'name' as name, i.*
FROM placex p, import_polygon_delete i
WHERE p.osm_id = i.osm_id AND p.osm_type = i.osm_type
AND p.class = i.class AND p.type = i.type
""")
results = RawDataList(r._asdict() for r in await conn.execute(sql))
for osm_type in ('N', 'W', 'R'):
sql = sa.text(""" SELECT p.place_id, country_code,
name->'name' as name, i.*
FROM placex p, import_polygon_delete i
WHERE i.osm_type = :osm_type
AND p.osm_id = i.osm_id AND p.osm_type = :osm_type
AND p.class = i.class AND p.type = i.type
""")
results.extend(r._asdict() for r in await conn.execute(sql, {'osm_type': osm_type}))
return build_response(params, params.formatting().format_result(results, fmt, {}))

View File

@@ -136,6 +136,7 @@ class NominatimArgs:
import_from_wiki: bool
import_from_csv: Optional[str]
no_replace: bool
min: int
# Arguments to all query functions
format: str

View File

@@ -58,6 +58,8 @@ class ImportSpecialPhrases:
help='Import special phrases from a CSV file')
group.add_argument('--no-replace', action='store_true',
help='Keep the old phrases and only add the new ones')
group.add_argument('--min', type=int, default=0,
help='Restrict special phrases by minimum occurrence')
def run(self, args: NominatimArgs) -> int:
@@ -82,7 +84,9 @@ class ImportSpecialPhrases:
tokenizer = tokenizer_factory.get_tokenizer_for_db(args.config)
should_replace = not args.no_replace
min = args.min
with connect(args.config.get_libpq_dsn()) as db_connection:
SPImporter(
args.config, db_connection, loader
).import_phrases(tokenizer, should_replace)
).import_phrases(tokenizer, should_replace, min)
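Assuming the usual CLI entry point, the new option would be used along these lines (the threshold 100 is just an example value):

    # only keep special-phrase tables for class/type pairs that occur
    # at least 100 times in placex; --min 0 (the default) keeps everything
    nominatim special-phrases --import-from-wiki --min 100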

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Abstract class definitions for tokenizers. These base classes are here
@@ -10,7 +10,6 @@ mainly for documentation purposes.
"""
from abc import ABC, abstractmethod
from typing import List, Tuple, Dict, Any, Optional, Iterable
from pathlib import Path
from ..typing import Protocol
from ..config import Configuration
@@ -38,7 +37,7 @@ class AbstractAnalyzer(ABC):
"""
@abstractmethod
def get_word_token_info(self, words: List[str]) -> List[Tuple[str, str, int]]:
def get_word_token_info(self, words: List[str]) -> List[Tuple[str, str, Optional[int]]]:
""" Return token information for the given list of words.
The function is used for testing and debugging only
@@ -232,6 +231,6 @@ class TokenizerModule(Protocol):
own tokenizer.
"""
def create(self, dsn: str, data_dir: Path) -> AbstractTokenizer:
def create(self, dsn: str) -> AbstractTokenizer:
""" Factory for new tokenizers.
"""

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for creating a tokenizer or initialising the right one for an
@@ -52,19 +52,10 @@ def create_tokenizer(config: Configuration, init_db: bool = True,
if module_name is None:
module_name = config.TOKENIZER
# Create the directory for the tokenizer data
assert config.project_dir is not None
basedir = config.project_dir / 'tokenizer'
if not basedir.exists():
basedir.mkdir()
elif not basedir.is_dir():
LOG.fatal("Tokenizer directory '%s' cannot be created.", basedir)
raise UsageError("Tokenizer setup failed.")
# Import and initialize the tokenizer.
tokenizer_module = _import_tokenizer(module_name)
tokenizer = tokenizer_module.create(config.get_libpq_dsn(), basedir)
tokenizer = tokenizer_module.create(config.get_libpq_dsn())
tokenizer.init_new_db(config, init_db=init_db)
with connect(config.get_libpq_dsn()) as conn:
@@ -79,12 +70,6 @@ def get_tokenizer_for_db(config: Configuration) -> AbstractTokenizer:
The function looks up the appropriate tokenizer in the database
and initialises it.
"""
assert config.project_dir is not None
basedir = config.project_dir / 'tokenizer'
if not basedir.is_dir():
# Directory will be repopulated by tokenizer below.
basedir.mkdir()
with connect(config.get_libpq_dsn()) as conn:
name = properties.get_property(conn, 'tokenizer')
@@ -94,7 +79,7 @@ def get_tokenizer_for_db(config: Configuration) -> AbstractTokenizer:
tokenizer_module = _import_tokenizer(name)
tokenizer = tokenizer_module.create(config.get_libpq_dsn(), basedir)
tokenizer = tokenizer_module.create(config.get_libpq_dsn())
tokenizer.init_from_project(config)
return tokenizer
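With the tokenizer data directory gone, a custom tokenizer module only needs the reduced factory (sketch; MyTokenizer is a hypothetical implementation of AbstractTokenizer):

    def create(dsn: str) -> 'MyTokenizer':
        # no data_dir argument and no tokenizer/ directory to manage any more
        return MyTokenizer(dsn)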

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Tokenizer implementing normalisation as used before Nominatim 4 but using
@@ -12,7 +12,6 @@ from typing import Optional, Sequence, List, Tuple, Mapping, Any, cast, \
Dict, Set, Iterable
import itertools
import logging
from pathlib import Path
from psycopg.types.json import Jsonb
from psycopg import sql as pysql
@@ -38,10 +37,10 @@ WORD_TYPES = (('country_names', 'C'),
('housenumbers', 'H'))
def create(dsn: str, data_dir: Path) -> 'ICUTokenizer':
def create(dsn: str) -> 'ICUTokenizer':
""" Create a new instance of the tokenizer provided by this module.
"""
return ICUTokenizer(dsn, data_dir)
return ICUTokenizer(dsn)
class ICUTokenizer(AbstractTokenizer):
@@ -50,9 +49,8 @@ class ICUTokenizer(AbstractTokenizer):
normalization routines in Nominatim 3.
"""
def __init__(self, dsn: str, data_dir: Path) -> None:
def __init__(self, dsn: str) -> None:
self.dsn = dsn
self.data_dir = data_dir
self.loader: Optional[ICURuleLoader] = None
def init_new_db(self, config: Configuration, init_db: bool = True) -> None:
@@ -340,7 +338,7 @@ class ICUNameAnalyzer(AbstractAnalyzer):
"""
return cast(str, self.token_analysis.normalizer.transliterate(name)).strip()
def get_word_token_info(self, words: Sequence[str]) -> List[Tuple[str, str, int]]:
def get_word_token_info(self, words: Sequence[str]) -> List[Tuple[str, str, Optional[int]]]:
""" Return token information for the given list of words.
If a word starts with # it is assumed to be a full name
otherwise it is assumed to be a partial name.
@@ -364,11 +362,11 @@ class ICUNameAnalyzer(AbstractAnalyzer):
cur.execute("""SELECT word_token, word_id
FROM word WHERE word_token = ANY(%s) and type = 'W'
""", (list(full_tokens.values()),))
full_ids = {r[0]: r[1] for r in cur}
full_ids = {r[0]: cast(int, r[1]) for r in cur}
cur.execute("""SELECT word_token, word_id
FROM word WHERE word_token = ANY(%s) and type = 'w'""",
(list(partial_tokens.values()),))
part_ids = {r[0]: r[1] for r in cur}
part_ids = {r[0]: cast(int, r[1]) for r in cur}
return [(k, v, full_ids.get(v, None)) for k, v in full_tokens.items()] \
+ [(k, v, part_ids.get(v, None)) for k, v in partial_tokens.items()]

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Collection of functions that check if the database is complete and functional.
@@ -163,12 +163,8 @@ def check_connection(conn: Any, config: Configuration) -> CheckResult:
Database version ({db_version}) doesn't match Nominatim version ({nom_version})
Hints:
* Are you connecting to the correct database?
{instruction}
Check the Migration chapter of the Administration Guide.
Project directory: {config.project_dir}
Current setting of NOMINATIM_DATABASE_DSN: {config.DATABASE_DSN}
""")
@@ -176,24 +172,25 @@ def check_database_version(conn: Connection, config: Configuration) -> CheckResu
""" Checking database_version matches Nominatim software version
"""
if table_exists(conn, 'nominatim_properties'):
db_version_str = None
if not table_exists(conn, 'nominatim_properties'):
instruction = 'Are you connecting to the correct database?'
else:
db_version_str = properties.get_property(conn, 'database_version')
else:
db_version_str = None
if db_version_str is not None:
db_version = parse_version(db_version_str)
if db_version_str is None:
instruction = 'Database version not found. Did the import finish?'
else:
db_version = parse_version(db_version_str)
if db_version == NOMINATIM_VERSION:
return CheckState.OK
if db_version == NOMINATIM_VERSION:
return CheckState.OK
instruction = (
'Run migrations: nominatim admin --migrate'
if db_version < NOMINATIM_VERSION
else 'You need to upgrade the Nominatim software.'
)
else:
instruction = ''
instruction = (
"Run migrations: 'nominatim admin --migrate'"
if db_version < NOMINATIM_VERSION
else 'You need to upgrade the Nominatim software.'
) + ' Check the Migration chapter of the Administration Guide.'
return CheckState.FATAL, dict(db_version=db_version_str,
nom_version=NOMINATIM_VERSION,

View File

@@ -127,7 +127,7 @@ def import_osm_data(osm_files: Union[Path, Sequence[Path]],
fsize += os.stat(str(fname)).st_size
else:
fsize = os.stat(str(osm_files)).st_size
options['osm2pgsql_cache'] = int(min((mem.available + mem.cached) * 0.75,
options['osm2pgsql_cache'] = int(min((mem.available + getattr(mem, 'cached', 0)) * 0.75,
fsize * 2) / 1024 / 1024) + 1
run_osm2pgsql(options)
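Spelled out, the cache sizing amounts to the following (a sketch; mem comes from psutil.virtual_memory(), whose cached attribute is missing on non-Linux platforms, hence the getattr):

    import os
    import psutil

    mem = psutil.virtual_memory()
    fsize = os.stat('planet.osm.pbf').st_size   # hypothetical input file
    cache_mb = int(min((mem.available + getattr(mem, 'cached', 0)) * 0.75,
                       fsize * 2) / 1024 / 1024) + 1
    # capped at twice the input size and at 75% of the memory currently free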

View File

@@ -37,21 +37,17 @@ def run_osm2pgsql(options: Mapping[str, Any]) -> None:
'--style', str(options['osm2pgsql_style'])
]
if str(options['osm2pgsql_style']).endswith('.lua'):
env['LUA_PATH'] = ';'.join((str(options['osm2pgsql_style_path'] / '?.lua'),
os.environ.get('LUA_PATH', ';')))
env['THEMEPARK_PATH'] = str(options['osm2pgsql_style_path'] / 'themes')
if 'THEMEPARK_PATH' in os.environ:
env['THEMEPARK_PATH'] += ':' + os.environ['THEMEPARK_PATH']
cmd.extend(('--output', 'flex'))
env['LUA_PATH'] = ';'.join((str(options['osm2pgsql_style_path'] / '?.lua'),
os.environ.get('LUA_PATH', ';')))
env['THEMEPARK_PATH'] = str(options['osm2pgsql_style_path'] / 'themes')
if 'THEMEPARK_PATH' in os.environ:
env['THEMEPARK_PATH'] += ':' + os.environ['THEMEPARK_PATH']
cmd.extend(('--output', 'flex'))
for flavour in ('data', 'index'):
if options['tablespaces'][f"main_{flavour}"]:
env[f"NOMINATIM_TABLESPACE_PLACE_{flavour.upper()}"] = \
options['tablespaces'][f"main_{flavour}"]
else:
cmd.extend(('--output', 'gazetteer', '--hstore', '--latlon'))
cmd.extend(_mk_tablespace_options('main', options))
for flavour in ('data', 'index'):
if options['tablespaces'][f"main_{flavour}"]:
env[f"NOMINATIM_TABLESPACE_PLACE_{flavour.upper()}"] = \
options['tablespaces'][f"main_{flavour}"]
if options['flatnode_file']:
cmd.extend(('--flat-nodes', options['flatnode_file']))

View File

@@ -2,7 +2,7 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for importing, updating and otherwise maintaining the table
@@ -64,11 +64,15 @@ class _PostcodeCollector:
if normalized:
self.collected[normalized] += (x, y)
def commit(self, conn: Connection, analyzer: AbstractAnalyzer, project_dir: Path) -> None:
""" Update postcodes for the country from the postcodes selected so far
as well as any externally supplied postcodes.
def commit(self, conn: Connection, analyzer: AbstractAnalyzer,
project_dir: Optional[Path]) -> None:
""" Update postcodes for the country from the postcodes selected so far.
When 'project_dir' is set, then any postcode files found in this
directory are taken into account as well.
"""
self._update_from_external(analyzer, project_dir)
if project_dir is not None:
self._update_from_external(analyzer, project_dir)
to_add, to_delete, to_update = self._compute_changes(conn)
LOG.info("Processing country '%s' (%s added, %s deleted, %s updated).",
@@ -170,7 +174,7 @@ class _PostcodeCollector:
return None
def update_postcodes(dsn: str, project_dir: Path, tokenizer: AbstractTokenizer) -> None:
def update_postcodes(dsn: str, project_dir: Optional[Path], tokenizer: AbstractTokenizer) -> None:
""" Update the table of artificial postcodes.
Computes artificial postcode centroids from the placex table,

View File

@@ -2,12 +2,12 @@
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2024 by the Nominatim developer community.
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Functions for bringing auxiliary data in the database up-to-date.
"""
from typing import MutableSequence, Tuple, Any, Type, Mapping, Sequence, List, cast
from typing import MutableSequence, Tuple, Any, Mapping, Sequence, List
import csv
import gzip
import logging
@@ -212,31 +212,16 @@ def recompute_importance(conn: Connection) -> None:
WHERE s.place_id = d.linked_place_id and d.wikipedia is not null
and (s.wikipedia is null or s.importance < d.importance);
""")
cur.execute("""
UPDATE search_name s SET importance = p.importance
FROM placex p
WHERE s.place_id = p.place_id AND s.importance != p.importance
""")
cur.execute('ALTER TABLE placex ENABLE TRIGGER ALL')
conn.commit()
def _quote_php_variable(var_type: Type[Any], config: Configuration,
conf_name: str) -> str:
if var_type == bool:
return 'true' if config.get_bool(conf_name) else 'false'
if var_type == int:
return cast(str, getattr(config, conf_name))
if not getattr(config, conf_name):
return 'false'
if var_type == Path:
value = str(config.get_path(conf_name) or '')
else:
value = getattr(config, conf_name)
quoted = value.replace("'", "\\'")
return f"'{quoted}'"
def invalidate_osm_object(osm_type: str, osm_id: int, conn: Connection,
recursive: bool = True) -> None:
""" Mark the given OSM object for reindexing. When 'recursive' is set

View File

@@ -16,7 +16,6 @@
from typing import Iterable, Tuple, Mapping, Sequence, Optional, Set
import logging
import re
from psycopg.sql import Identifier, SQL
from ...typing import Protocol
@@ -65,7 +64,32 @@ class SPImporter():
# special phrases class/type on the wiki.
self.table_phrases_to_delete: Set[str] = set()
def import_phrases(self, tokenizer: AbstractTokenizer, should_replace: bool) -> None:
def get_classtype_pairs(self, min: int = 0) -> Set[Tuple[str, str]]:
"""
Return the set of class/type combinations found in the database
that occur at least the given number of times and are therefore
allowed as special phrases. The default of 0 allows every
combination in the database.
"""
db_combinations = set()
query = f"""
SELECT class AS CLS, type AS typ
FROM placex
GROUP BY class, type
HAVING COUNT(*) >= {min}
"""
with self.db_connection.cursor() as db_cursor:
db_cursor.execute(SQL(query))
for row in db_cursor:
db_combinations.add((row[0], row[1]))
return db_combinations
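A hypothetical usage of the new helper (min arrives as a typed int from the --min CLI argument, so the f-string interpolation above only ever sees an integer):

    allowed = importer.get_classtype_pairs(100)   # importer: an SPImporter instance
    # e.g. {('amenity', 'restaurant'), ('shop', 'bakery'), ...}
    if ('amenity', 'restaurant') in allowed:
        pass  # frequent enough to get its own place_classtype table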
def import_phrases(self, tokenizer: AbstractTokenizer, should_replace: bool,
min: int = 0) -> None:
"""
Iterate through all SpecialPhrases extracted from the
loader and import them into the database.
@@ -85,9 +109,10 @@ class SPImporter():
if result:
class_type_pairs.add(result)
self._create_classtype_table_and_indexes(class_type_pairs)
self._create_classtype_table_and_indexes(class_type_pairs, min)
if should_replace:
self._remove_non_existent_tables_from_db()
self.db_connection.commit()
with tokenizer.name_analyzer() as analyzer:
@@ -163,7 +188,8 @@ class SPImporter():
return (phrase.p_class, phrase.p_type)
def _create_classtype_table_and_indexes(self,
class_type_pairs: Iterable[Tuple[str, str]]) -> None:
class_type_pairs: Iterable[Tuple[str, str]],
min: int = 0) -> None:
"""
Create table place_classtype for each given pair.
Also create indexes on place_id and centroid.
@@ -177,10 +203,19 @@ class SPImporter():
with self.db_connection.cursor() as db_cursor:
db_cursor.execute("CREATE INDEX idx_placex_classtype ON placex (class, type)")
if min:
allowed_special_phrases = self.get_classtype_pairs(min)
for pair in class_type_pairs:
phrase_class = pair[0]
phrase_type = pair[1]
# Will only filter if min is not 0
if min and (phrase_class, phrase_type) not in allowed_special_phrases:
LOG.warning("Skipping phrase %s=%s: not in allowed special phrases",
phrase_class, phrase_type)
continue
table_name = _classtype_table(phrase_class, phrase_type)
if table_name in self.table_phrases_to_delete:

View File

@@ -1,10 +0,0 @@
all: bdd python
bdd:
cd bdd && behave -DREMOVE_TEMPLATE=1
python:
pytest python
.PHONY: bdd python

View File

@@ -1,3 +0,0 @@
[behave]
show_skipped=False
default_tags=~@Fail

View File

@@ -1,63 +0,0 @@
@SQLITE
@APIDB
Feature: Localization of search results
Scenario: default language
When sending details query for R1155955
Then results contain
| ID | localname |
| 0 | Liechtenstein |
Scenario: accept-language first
When sending details query for R1155955
| accept-language |
| zh,de |
Then results contain
| ID | localname |
| 0 | |
Scenario: accept-language missing
When sending details query for R1155955
| accept-language |
| xx,fr,en,de |
Then results contain
| ID | localname |
| 0 | Liechtenstein |
Scenario: http accept language header first
Given the HTTP header
| accept-language |
| fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending details query for R1155955
Then results contain
| ID | localname |
| 0 | Liktinstein |
Scenario: http accept language header and accept-language
Given the HTTP header
| accept-language |
| fr-ca,fr;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending details query for R1155955
| accept-language |
| fo,en |
Then results contain
| ID | localname |
| 0 | Liktinstein |
Scenario: http accept language header fallback
Given the HTTP header
| accept-language |
| fo-ca,en-ca;q=0.5 |
When sending details query for R1155955
Then results contain
| ID | localname |
| 0 | Liktinstein |
Scenario: http accept language header fallback (upper case)
Given the HTTP header
| accept-language |
| fo-FR;q=0.8,en-ca;q=0.5 |
When sending details query for R1155955
Then results contain
| ID | localname |
| 0 | Liktinstein |

View File

@@ -1,96 +0,0 @@
@APIDB
Feature: Object details
Testing different parameter options for details API.
@SQLITE
Scenario: JSON Details
When sending json details query for W297699560
Then the result is valid json
And result has attributes geometry
And result has not attributes keywords,address,linked_places,parentof
And results contain in field geometry
| type |
| Point |
@SQLITE
Scenario: JSON Details with pretty printing
When sending json details query for W297699560
| pretty |
| 1 |
Then the result is valid json
And result has attributes geometry
And result has not attributes keywords,address,linked_places,parentof
@SQLITE
Scenario: JSON Details with addressdetails
When sending json details query for W297699560
| addressdetails |
| 1 |
Then the result is valid json
And result has attributes address
@SQLITE
Scenario: JSON Details with linkedplaces
When sending json details query for R123924
| linkedplaces |
| 1 |
Then the result is valid json
And result has attributes linked_places
@SQLITE
Scenario: JSON Details with hierarchy
When sending json details query for W297699560
| hierarchy |
| 1 |
Then the result is valid json
And result has attributes hierarchy
@SQLITE
Scenario: JSON Details with grouped hierarchy
When sending json details query for W297699560
| hierarchy | group_hierarchy |
| 1 | 1 |
Then the result is valid json
And result has attributes hierarchy
Scenario Outline: JSON Details with keywords
When sending json details query for <osmid>
| keywords |
| 1 |
Then the result is valid json
And result has attributes keywords
Examples:
| osmid |
| W297699560 |
| W243055645 |
| W243055716 |
| W43327921 |
# ticket #1343
Scenario: Details of a country with keywords
When sending details query for R1155955
| keywords |
| 1 |
Then the result is valid json
And result has attributes keywords
@SQLITE
Scenario Outline: JSON details with full geometry
When sending json details query for <osmid>
| polygon_geojson |
| 1 |
Then the result is valid json
And result has attributes geometry
And results contain in field geometry
| type |
| <geometry> |
Examples:
| osmid | geometry |
| W297699560 | LineString |
| W243055645 | Polygon |
| W243055716 | Polygon |
| W43327921 | LineString |

View File

@@ -1,81 +0,0 @@
@SQLITE
@APIDB
Feature: Object details
Check details page for correctness
Scenario Outline: Details via OSM id
When sending details query for <type><id>
Then the result is valid json
And results contain
| osm_type | osm_id |
| <type> | <id> |
Examples:
| type | id |
| N | 5484325405 |
| W | 43327921 |
| R | 123924 |
Scenario Outline: Details for different class types for the same OSM id
When sending details query for N300209696:<class>
Then the result is valid json
And results contain
| osm_type | osm_id | category |
| N | 300209696 | <class> |
Examples:
| class |
| tourism |
| mountain_pass |
Scenario Outline: Details via unknown OSM id
When sending details query for <object>
Then a HTTP 404 is returned
Examples:
| object |
| 1 |
| R1 |
| N300209696:highway |
Scenario: Details for interpolation way return the interpolation
When sending details query for W1
Then the result is valid json
And results contain
| category | type | osm_type | osm_id | admin_level |
| place | houses | W | 1 | 15 |
@Fail
Scenario: Details for interpolation way return the interpolation
When sending details query for 112871
Then the result is valid json
And results contain
| category | type | admin_level |
| place | houses | 15 |
And result has not attributes osm_type,osm_id
@Fail
Scenario: Details for interpolation way return the interpolation
When sending details query for 112820
Then the result is valid json
And results contain
| category | type | admin_level |
| place | postcode | 15 |
And result has not attributes osm_type,osm_id
Scenario Outline: Details debug output returns no errors
When sending debug details query for <feature>
Then the result is valid html
Examples:
| feature |
| N5484325405 |
| W1 |
| 112820 |
| 112871 |

View File

@@ -1,14 +0,0 @@
@SQLITE
@APIDB
Feature: Places by osm_type and osm_id Tests
Simple tests for errors in various response formats.
Scenario Outline: Force error by providing too many ids
When sending <format> lookup query for N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34,N35,N36,N37,N38,N39,N40,N41,N42,N43,N44,N45,N46,N47,N48,N49,N50,N51
Then a <format> user error is returned
Examples:
| format |
| xml |
| json |
| geojson |

View File

@@ -1,42 +0,0 @@
@SQLITE
@APIDB
Feature: Places by osm_type and osm_id Tests
Simple tests for response format.
Scenario Outline: address lookup for existing node, way, relation
When sending <format> lookup query for N5484325405,W43327921,,R123924,X99,N0
Then the result is valid <outformat>
And exactly 3 results are returned
Examples:
| format | outformat |
| xml | xml |
| json | json |
| jsonv2 | json |
| geojson | geojson |
| geocodejson | geocodejson |
Scenario: address lookup for non-existing or invalid node, way, relation
When sending xml lookup query for X99,,N0,nN158845944,ABC,,W9
Then exactly 0 results are returned
Scenario Outline: Boundingbox is returned
When sending <format> lookup query for N5484325405,W43327921
Then exactly 2 results are returned
And result 0 has bounding box in 47.135,47.14,9.52,9.525
And result 1 has bounding box in 47.07,47.08,9.50,9.52
Examples:
| format |
| json |
| jsonv2 |
| geojson |
| xml |
Scenario: Lookup of a linked place
When sending geocodejson lookup query for N1932181216
Then exactly 1 result is returned
And results contain
| name |
| Vaduz |

View File

@@ -1,45 +0,0 @@
@SQLITE
@APIDB
Feature: Geometries for reverse geocoding
Tests for returning geometries with reverse
Scenario: Polygons are returned fully by default
When sending v1/reverse at 47.13803,9.52264
| polygon_text |
| 1 |
Then results contain
| geotext |
| ^POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226143 47.1379257, ?9.522615 47.137917, ?9.5226225 47.1379098, ?9.5226334 47.1379052, ?9.5226461 47.1379037, ?9.5226588 47.1379056, ?9.5226693 47.1379107, ?9.5226762 47.1379181, ?9.5226762 47.1379268, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |
Scenario: Polygons can be slightly simplified
When sending v1/reverse at 47.13803,9.52264
| polygon_text | polygon_threshold |
| 1 | 0.00001 |
Then results contain
| geotext |
| ^POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226225 47.1379098, ?9.5226588 47.1379056, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |
Scenario: Polygons can be much simplified
When sending v1/reverse at 47.13803,9.52264
| polygon_text | polygon_threshold |
| 1 | 0.9 |
Then results contain
| geotext |
| ^POLYGON\(\([0-9. ]+, ?[0-9. ]+, ?[0-9. ]+, ?[0-9. ]+(, ?[0-9. ]+)?\)\) |
Scenario: For polygons return the centroid as center point
When sending v1/reverse at 47.13836,9.52304
Then results contain
| centroid |
| 9.52271080 47.13818045 |
Scenario: For streets return the closest point as center point
When sending v1/reverse at 47.13368,9.52942
Then results contain
| centroid |
| 9.529431527 47.13368172 |

View File

@@ -1,37 +0,0 @@
@SQLITE
@APIDB
Feature: Localization of reverse search results
Scenario: default language
When sending v1/reverse at 47.14,9.55
Then result addresses contain
| ID | country |
| 0 | Liechtenstein |
Scenario: accept-language parameter
When sending v1/reverse at 47.14,9.55
| accept-language |
| ja,en |
Then result addresses contain
| ID | country |
| 0 | |
Scenario: HTTP accept language header
Given the HTTP header
| accept-language |
| fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/reverse at 47.14,9.55
Then result addresses contain
| ID | country |
| 0 | Liktinstein |
Scenario: accept-language parameter and HTTP header
Given the HTTP header
| accept-language |
| fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/reverse at 47.14,9.55
| accept-language |
| en |
Then result addresses contain
| ID | country |
| 0 | Liechtenstein |

View File

@@ -1,117 +0,0 @@
@SQLITE
@APIDB
Feature: Reverse geocoding
Testing the reverse function
Scenario Outline: Simple reverse-geocoding with no results
When sending v1/reverse at <lat>,<lon>
Then exactly 0 results are returned
Examples:
| lat | lon |
| 0.0 | 0.0 |
| 91.3 | 0.4 |
| -700 | 0.4 |
| 0.2 | 324.44 |
| 0.2 | -180.4 |
Scenario: Unknown countries fall back to default country grid
When sending v1/reverse at 45.174,-103.072
Then results contain
| category | type | display_name |
| place | country | United States |
@Tiger
Scenario: TIGER house number
When sending v1/reverse at 32.4752389363,-86.4810198619
Then results contain
| category | type |
| place | house |
And result addresses contain
| house_number | road | postcode | country_code |
| 707 | Upper Kingston Road | 36067 | us |
@Tiger
Scenario: No TIGER house number for zoom < 18
When sending v1/reverse at 32.4752389363,-86.4810198619
| zoom |
| 17 |
Then results contain
| osm_type | category |
| way | highway |
And result addresses contain
| road | postcode | country_code |
| Upper Kingston Road | 36067 | us |
Scenario: Interpolated house number
When sending v1/reverse at 47.118533,9.57056562
Then results contain
| osm_type | category | type |
| way | place | house |
And result addresses contain
| house_number | road |
| 1019 | Grosssteg |
Scenario: Address with non-numerical house number
When sending v1/reverse at 47.107465,9.52838521614
Then result addresses contain
| house_number | road |
| 39A/B | Dorfstrasse |
Scenario: Address with numerical house number
When sending v1/reverse at 47.168440329479594,9.511551699184338
Then result addresses contain
| house_number | road |
| 6 | Schmedgässle |
Scenario Outline: Zoom levels below 5 result in country
When sending v1/reverse at 47.16,9.51
| zoom |
| <zoom> |
Then results contain
| display_name |
| Liechtenstein |
Examples:
| zoom |
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
Scenario: When on a street, the closest interpolation is shown
When sending v1/reverse at 47.118457166193245,9.570678289621355
| zoom |
| 18 |
Then results contain
| display_name |
| 1021, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
# github 2214
Scenario: Interpolations do not override house numbers when they are closer
When sending v1/reverse at 47.11778,9.57255
| zoom |
| 18 |
Then results contain
| display_name |
| 5, Grosssteg, Steg, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Interpolations do not override house numbers when they are closer (2)
When sending v1/reverse at 47.11834,9.57167
| zoom |
| 18 |
Then results contain
| display_name |
| 3, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: When on a street with zoom 18, the closest housenumber is returned
When sending v1/reverse at 47.11755503977281,9.572722250405036
| zoom |
| 18 |
Then result addresses contain
| house_number |
| 7 |

View File

@@ -1,107 +0,0 @@
@SQLITE
@APIDB
Feature: Geocodejson for Reverse API
Testing correctness of geocodejson output (API version v1).
Scenario Outline: Simple OSM result
When sending v1/reverse at 47.066,9.504 with format geocodejson
| addressdetails |
| <has_address> |
Then result has attributes place_id, accuracy
And result has <attributes> country,postcode,county,city,district,street,housenumber, admin
Then results contain
| osm_type | osm_id | osm_key | osm_value | type |
| node | 6522627624 | shop | bakery | house |
And results contain
| name | label |
| Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
And results contain in field geojson
| type | coordinates |
| Point | [9.5036065, 47.0660892] |
And results contain in field __geocoding
| version | licence | attribution |
| 0.1.0 | ODbL | ^Data © OpenStreetMap contributors, ODbL 1.0. https?://osm.org/copyright$ |
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | not attributes |
Scenario: City housenumber-level address with street
When sending v1/reverse at 47.1068011,9.52810091 with format geocodejson
Then results contain
| housenumber | street | postcode | city | country |
| 8 | Im Winkel | 9495 | Triesen | Liechtenstein |
And results contain in field admin
| level6 | level8 |
| Oberland | Triesen |
Scenario: Town street-level address with street
When sending v1/reverse at 47.066,9.504 with format geocodejson
| zoom |
| 16 |
Then results contain
| name | city | postcode | country |
| Gnetsch | Balzers | 9496 | Liechtenstein |
Scenario: Poi street-level address with footway
When sending v1/reverse at 47.06515,9.50083 with format geocodejson
Then results contain
| street | city | postcode | country |
| Burgweg | Balzers | 9496 | Liechtenstein |
Scenario: City address with suburb
When sending v1/reverse at 47.146861,9.511771 with format geocodejson
Then results contain
| housenumber | street | district | city | postcode | country |
| 5 | Lochgass | Ebenholz | Vaduz | 9490 | Liechtenstein |
@Tiger
Scenario: Tiger address
When sending v1/reverse at 32.4752389363,-86.4810198619 with format geocodejson
Then results contain
| osm_type | osm_id | osm_key | osm_value | type |
| way | 396009653 | place | house | house |
And results contain
| housenumber | street | city | county | postcode | country |
| 707 | Upper Kingston Road | Prattville | Autauga County | 36067 | United States |
Scenario: Interpolation address
When sending v1/reverse at 47.118533,9.57056562 with format geocodejson
Then results contain
| osm_type | osm_id | osm_key | osm_value | type |
| way | 1 | place | house | house |
And results contain
| label |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
And result has not attributes name
Scenario: Line geometry output is supported
When sending v1/reverse at 47.06597,9.50467 with format geocodejson
| param | value |
| polygon_geojson | 1 |
Then results contain in field geojson
| type |
| LineString |
Scenario Outline: Only geojson polygons are supported
When sending v1/reverse at 47.06597,9.50467 with format geocodejson
| param | value |
| <param> | 1 |
Then results contain in field geojson
| type |
| Point |
Examples:
| param |
| polygon_text |
| polygon_svg |
| polygon_kml |

View File

@@ -1,73 +0,0 @@
@SQLITE
@APIDB
Feature: Geojson for Reverse API
Testing correctness of geojson output (API version v1).
Scenario Outline: Simple OSM result
When sending v1/reverse at 47.066,9.504 with format geojson
| addressdetails |
| <has_address> |
Then result has attributes place_id, importance, __licence
And result has <attributes> address
And results contain
| osm_type | osm_id | place_rank | category | type | addresstype |
| node | 6522627624 | 30 | shop | bakery | shop |
And results contain
| name | display_name |
| Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
And results contain
| boundingbox |
| [47.0660392, 47.0661392, 9.5035565, 9.5036565] |
And results contain in field geojson
| type | coordinates |
| Point | [9.5036065, 47.0660892] |
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | not attributes |
@Tiger
Scenario: Tiger address
When sending v1/reverse at 32.4752389363,-86.4810198619 with format geojson
Then results contain
| osm_type | osm_id | category | type | addresstype | place_rank |
| way | 396009653 | place | house | place | 30 |
Scenario: Interpolation address
When sending v1/reverse at 47.118533,9.57056562 with format geojson
Then results contain
| osm_type | osm_id | place_rank | category | type | addresstype |
| way | 1 | 30 | place | house | place |
And results contain
| boundingbox |
| ^\[47.118495\d*, 47.118595\d*, 9.570496\d*, 9.570596\d*\] |
And results contain
| display_name |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Line geometry output is supported
When sending v1/reverse at 47.06597,9.50467 with format geojson
| param | value |
| polygon_geojson | 1 |
Then results contain in field geojson
| type |
| LineString |
Scenario Outline: Only geojson polygons are supported
When sending v1/reverse at 47.06597,9.50467 with format geojson
| param | value |
| <param> | 1 |
Then results contain in field geojson
| type |
| Point |
Examples:
| param |
| polygon_text |
| polygon_svg |
| polygon_kml |

View File

@@ -1,130 +0,0 @@
@SQLITE
@APIDB
Feature: Json output for Reverse API
Testing correctness of json and jsonv2 output (API version v1).
Scenario Outline: OSM result with and without addresses
When sending v1/reverse at 47.066,9.504 with format json
| addressdetails |
| <has_address> |
Then result has <attributes> address
When sending v1/reverse at 47.066,9.504 with format jsonv2
| addressdetails |
| <has_address> |
Then result has <attributes> address
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | not attributes |
Scenario Outline: Simple OSM result
When sending v1/reverse at 47.066,9.504 with format <format>
Then result has attributes place_id
And results contain
| licence |
| ^Data © OpenStreetMap contributors, ODbL 1.0. https?://osm.org/copyright$ |
And results contain
| osm_type | osm_id |
| node | 6522627624 |
And results contain
| centroid | boundingbox |
| 9.5036065 47.0660892 | ['47.0660392', '47.0661392', '9.5035565', '9.5036565'] |
And results contain
| display_name |
| Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
And result has not attributes namedetails,extratags
Examples:
| format |
| json |
| jsonv2 |
Scenario: Extra attributes of jsonv2 result
When sending v1/reverse at 47.066,9.504 with format jsonv2
Then result has attributes importance
Then results contain
| category | type | name | place_rank | addresstype |
| shop | bakery | Dorfbäckerei Herrmann | 30 | shop |
@Tiger
Scenario: Tiger address
When sending v1/reverse at 32.4752389363,-86.4810198619 with format jsonv2
Then results contain
| osm_type | osm_id | category | type | addresstype |
| way | 396009653 | place | house | place |
Scenario Outline: Interpolation address
When sending v1/reverse at 47.118533,9.57056562 with format <format>
Then results contain
| osm_type | osm_id |
| way | 1 |
And results contain
| centroid | boundingbox |
| 9.57054676 47.118545392 | ^\['47.118495\d*', '47.118595\d*', '9.570496\d*', '9.570596\d*'\] |
And results contain
| display_name |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Examples:
| format |
| json |
| jsonv2 |
Scenario Outline: Output of geojson
When sending v1/reverse at 47.06597,9.50467 with format <format>
| param | value |
| polygon_geojson | 1 |
Then results contain in field geojson
| type | coordinates |
| LineString | [[9.5039353, 47.0657546], [9.5040437, 47.0657781], [9.5040808, 47.065787], [9.5054298, 47.0661407]] |
Examples:
| format |
| json |
| jsonv2 |
Scenario Outline: Output of WKT
When sending v1/reverse at 47.06597,9.50467 with format <format>
| param | value |
| polygon_text | 1 |
Then results contain
| geotext |
| ^LINESTRING\(9.5039353 47.0657546, ?9.5040437 47.0657781, ?9.5040808 47.065787, ?9.5054298 47.0661407\) |
Examples:
| format |
| json |
| jsonv2 |
Scenario Outline: Output of SVG
When sending v1/reverse at 47.06597,9.50467 with format <format>
| param | value |
| polygon_svg | 1 |
Then results contain
| svg |
| M 9.5039353 -47.0657546 L 9.5040437 -47.0657781 9.5040808 -47.065787 9.5054298 -47.0661407 |
Examples:
| format |
| json |
| jsonv2 |
Scenario Outline: Output of KML
When sending v1/reverse at 47.06597,9.50467 with format <format>
| param | value |
| polygon_kml | 1 |
Then results contain
| geokml |
| ^<LineString><coordinates>9.5039\d*,47.0657\d* 9.5040\d*,47.0657\d* 9.5040\d*,47.065\d* 9.5054\d*,47.0661\d*</coordinates></LineString> |
Examples:
| format |
| json |
| jsonv2 |

View File

@@ -1,206 +0,0 @@
@SQLITE
@APIDB
Feature: v1/reverse Parameter Tests
Tests for parameter inputs for the v1 reverse endpoint.
This file contains mostly bad parameter input. Valid parameters
are tested in the format tests.
Scenario: Bad format
When sending v1/reverse at 47.14122383,9.52169581334 with format sdf
Then a HTTP 400 is returned
Scenario: Missing lon parameter
When sending v1/reverse at 52.52,
Then a HTTP 400 is returned
Scenario: Missing lat parameter
When sending v1/reverse at ,52.52
Then a HTTP 400 is returned
Scenario Outline: Bad format for lat or lon
When sending v1/reverse at ,
| lat | lon |
| <lat> | <lon> |
Then a HTTP 400 is returned
Examples:
| lat | lon |
| 48.9660 | 8,4482 |
| 48,9660 | 8.4482 |
| 48,9660 | 8,4482 |
| 48.966.0 | 8.4482 |
| 48.966 | 8.448.2 |
| Nan | 8.448 |
| 48.966 | Nan |
| Inf | 5.6 |
| 5.6 | -Inf |
| <script></script> | 3.4 |
| 3.4 | <script></script> |
| -45.3 | ; |
| gkjd | 50 |
Scenario: Non-numerical zoom levels return an error
When sending v1/reverse at 47.14122383,9.52169581334
| zoom |
| adfe |
Then a HTTP 400 is returned
Scenario Outline: Truthy values for boolean parameters
When sending v1/reverse at 47.14122383,9.52169581334
| addressdetails |
| <value> |
Then exactly 1 result is returned
And result has attributes address
When sending v1/reverse at 47.14122383,9.52169581334
| extratags |
| <value> |
Then exactly 1 result is returned
And result has attributes extratags
When sending v1/reverse at 47.14122383,9.52169581334
| namedetails |
| <value> |
Then exactly 1 result is returned
And result has attributes namedetails
When sending v1/reverse at 47.14122383,9.52169581334
| polygon_geojson |
| <value> |
Then exactly 1 result is returned
And result has attributes geojson
When sending v1/reverse at 47.14122383,9.52169581334
| polygon_kml |
| <value> |
Then exactly 1 result is returned
And result has attributes geokml
When sending v1/reverse at 47.14122383,9.52169581334
| polygon_svg |
| <value> |
Then exactly 1 result is returned
And result has attributes svg
When sending v1/reverse at 47.14122383,9.52169581334
| polygon_text |
| <value> |
Then exactly 1 result is returned
And result has attributes geotext
Examples:
| value |
| yes |
| no |
| -1 |
| 100 |
| false |
| 00 |
Scenario: Only one geometry can be requested
When sending v1/reverse at 47.165989816710066,9.515774846076965
| polygon_text | polygon_svg |
| 1 | 1 |
Then a HTTP 400 is returned
Scenario Outline: Wrapping of legal jsonp requests
When sending v1/reverse at 67.3245,0.456 with format <format>
| json_callback |
| foo |
Then the result is valid <outformat>
Examples:
| format | outformat |
| json | json |
| jsonv2 | json |
| geojson | geojson |
| geocodejson | geocodejson |
Scenario Outline: Illegal jsonp are not allowed
When sending v1/reverse at 47.165989816710066,9.515774846076965
| param | value |
|json_callback | <data> |
Then a HTTP 400 is returned
Examples:
| data |
| 1asd |
| bar(foo) |
| XXX['bad'] |
| foo; evil |
Scenario Outline: Reverse debug mode produces valid HTML
When sending v1/reverse at , with format debug
| lat | lon |
| <lat> | <lon> |
Then the result is valid html
Examples:
| lat | lon |
| 0.0 | 0.0 |
| 47.06645 | 9.56601 |
| 47.14081 | 9.52267 |
Scenario Outline: Full address display for city housenumber-level address with street
When sending v1/reverse at 47.1068011,9.52810091 with format <format>
Then address of result 0 is
| type | value |
| house_number | 8 |
| road | Im Winkel |
| neighbourhood | Oberdorf |
| village | Triesen |
| ISO3166-2-lvl8 | LI-09 |
| county | Oberland |
| postcode | 9495 |
| country | Liechtenstein |
| country_code | li |
Examples:
| format |
| json |
| jsonv2 |
| geojson |
| xml |
Scenario Outline: Results with name details
When sending v1/reverse at 47.14052,9.52202 with format <format>
| zoom | namedetails |
| 14 | 1 |
Then results contain in field namedetails
| name |
| Ebenholz |
Examples:
| format |
| json |
| jsonv2 |
| xml |
| geojson |
Scenario Outline: Results with extratags
When sending v1/reverse at 47.14052,9.52202 with format <format>
| zoom | extratags |
| 14 | 1 |
Then results contain in field extratags
| wikidata |
| Q4529531 |
Examples:
| format |
| json |
| jsonv2 |
| xml |
| geojson |

View File

@@ -1,88 +0,0 @@
@SQLITE
@APIDB
Feature: XML output for Reverse API
Testing correctness of xml output (API version v1).
Scenario Outline: OSM result with and without addresses
When sending v1/reverse at 47.066,9.504 with format xml
| addressdetails |
| <has_address> |
Then result has attributes place_id
Then result has <attributes> address
And results contain
| osm_type | osm_id | place_rank | address_rank |
| node | 6522627624 | 30 | 30 |
And results contain
| centroid | boundingbox |
| 9.5036065 47.0660892 | 47.0660392,47.0661392,9.5035565,9.5036565 |
And results contain
| ref | display_name |
| Dorfbäckerei Herrmann | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | not attributes |
@Tiger
Scenario: Tiger address
When sending v1/reverse at 32.4752389363,-86.4810198619 with format xml
Then results contain
| osm_type | osm_id | place_rank | address_rank |
| way | 396009653 | 30 | 30 |
And results contain
| centroid | boundingbox |
| -86.4808553 32.4753580 | ^32.4753080\d*,32.4754080\d*,-86.4809053\d*,-86.4808053\d* |
And results contain
| display_name |
| 707, Upper Kingston Road, Upper Kingston, Prattville, Autauga County, 36067, United States |
Scenario: Interpolation address
When sending v1/reverse at 47.118533,9.57056562 with format xml
Then results contain
| osm_type | osm_id | place_rank | address_rank |
| way | 1 | 30 | 30 |
And results contain
| centroid | boundingbox |
| 9.57054676 47.118545392 | ^47.118495\d*,47.118595\d*,9.570496\d*,9.570596\d* |
And results contain
| display_name |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Output of geojson
When sending v1/reverse at 47.06597,9.50467 with format xml
| param | value |
| polygon_geojson | 1 |
Then results contain
| geojson |
| {"type":"LineString","coordinates":[[9.5039353,47.0657546],[9.5040437,47.0657781],[9.5040808,47.065787],[9.5054298,47.0661407]]} |
Scenario: Output of WKT
When sending v1/reverse at 47.06597,9.50467 with format xml
| param | value |
| polygon_text | 1 |
Then results contain
| geotext |
| ^LINESTRING\(9.5039353 47.0657546, ?9.5040437 47.0657781, ?9.5040808 47.065787, ?9.5054298 47.0661407\) |
Scenario: Output of SVG
When sending v1/reverse at 47.06597,9.50467 with format xml
| param | value |
| polygon_svg | 1 |
Then results contain
| geosvg |
| M 9.5039353 -47.0657546 L 9.5040437 -47.0657781 9.5040808 -47.065787 9.5054298 -47.0661407 |
Scenario: Output of KML
When sending v1/reverse at 47.06597,9.50467 with format xml
| param | value |
| polygon_kml | 1 |
Then results contain
| geokml |
| ^<geokml><LineString><coordinates>9.5039\d*,47.0657\d* 9.5040\d*,47.0657\d* 9.5040\d*,47.065\d* 9.5054\d*,47.0661\d*</coordinates></LineString></geokml> |

View File

@@ -1,28 +0,0 @@
@SQLITE
@APIDB
Feature: Parameters for Search API
Testing correctness of geocodejson output.
Scenario: City housenumber-level address with street
When sending geocodejson search query "Im Winkel 8, Triesen" with address
Then results contain
| housenumber | street | postcode | city | country |
| 8 | Im Winkel | 9495 | Triesen | Liechtenstein |
Scenario: Town street-level address with street
When sending geocodejson search query "Gnetsch, Balzers" with address
Then results contain
| name | city | postcode | country |
| Gnetsch | Balzers | 9496 | Liechtenstein |
Scenario: Town street-level address with footway
When sending geocodejson search query "burg gutenberg 6000 jahre geschichte" with address
Then results contain
| street | city | postcode | country |
| Burgweg | Balzers | 9496 | Liechtenstein |
Scenario: City address with suburb
When sending geocodejson search query "Lochgass 5, Ebenholz, Vaduz" with address
Then results contain
| housenumber | street | district | city | postcode | country |
| 5 | Lochgass | Ebenholz | Vaduz | 9490 | Liechtenstein |

View File

@@ -1,63 +0,0 @@
@SQLITE
@APIDB
Feature: Localization of search results
Scenario: default language
When sending json search query "Liechtenstein"
Then results contain
| ID | display_name |
| 0 | Liechtenstein |
Scenario: accept-language first
When sending json search query "Liechtenstein"
| accept-language |
| zh,de |
Then results contain
| ID | display_name |
| 0 | |
Scenario: accept-language missing
When sending json search query "Liechtenstein"
| accept-language |
| xx,fr,en,de |
Then results contain
| ID | display_name |
| 0 | Liechtenstein |
Scenario: http accept language header first
Given the HTTP header
| accept-language |
| fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending json search query "Liechtenstein"
Then results contain
| ID | display_name |
| 0 | Liktinstein |
Scenario: http accept language header and accept-language
Given the HTTP header
| accept-language |
| fr-ca,fr;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending json search query "Liechtenstein"
| accept-language |
| fo,en |
Then results contain
| ID | display_name |
| 0 | Liktinstein |
Scenario: http accept language header fallback
Given the HTTP header
| accept-language |
| fo-ca,en-ca;q=0.5 |
When sending json search query "Liechtenstein"
Then results contain
| ID | display_name |
| 0 | Liktinstein |
Scenario: http accept language header fallback (upper case)
Given the HTTP header
| accept-language |
| fo-FR;q=0.8,en-ca;q=0.5 |
When sending json search query "Liechtenstein"
Then results contain
| ID | display_name |
| 0 | Liktinstein |

View File

@@ -1,362 +0,0 @@
@SQLITE
@APIDB
Feature: Search queries
Testing different queries and parameters
Scenario: Simple XML search
When sending xml search query "Schaan"
Then result 0 has attributes place_id,osm_type,osm_id
And result 0 has attributes place_rank,boundingbox
And result 0 has attributes lat,lon,display_name
And result 0 has attributes class,type,importance
And result 0 has not attributes address
And result 0 has bounding box in 46.5,47.5,9,10
Scenario: Simple JSON search
When sending json search query "Vaduz"
Then result 0 has attributes place_id,licence,class,type
And result 0 has attributes osm_type,osm_id,boundingbox
And result 0 has attributes lat,lon,display_name,importance
And result 0 has not attributes address
And result 0 has bounding box in 46.5,47.5,9,10
Scenario: Unknown formats returns a user error
When sending search query "Vaduz"
| format |
| x45 |
Then a HTTP 400 is returned
Scenario Outline: Search with addressdetails
When sending <format> search query "Triesen" with address
Then address of result 0 is
| type | value |
| village | Triesen |
| county | Oberland |
| postcode | 9495 |
| country | Liechtenstein |
| country_code | li |
| ISO3166-2-lvl8 | LI-09 |
Examples:
| format |
| json |
| jsonv2 |
| geojson |
| xml |
Scenario: Coordinate search with addressdetails
When sending json search query "47.12400621,9.6047552"
| accept-language |
| en |
Then results contain
| display_name |
| Guschg, Valorschstrasse, Balzers, Oberland, 9497, Liechtenstein |
Scenario: Address details with unknown class types
When sending json search query "Kloster St. Elisabeth" with address
Then results contain
| ID | class | type |
| 0 | amenity | monastery |
And result addresses contain
| ID | amenity |
| 0 | Kloster St. Elisabeth |
Scenario: Disabling deduplication
When sending json search query "Malbunstr"
Then there are no duplicates
When sending json search query "Malbunstr"
| dedupe |
| 0 |
Then there are duplicates
Scenario: Search with bounded viewbox in right area
When sending json search query "post" with address
| bounded | viewbox |
| 1 | 9,47,10,48 |
Then result addresses contain
| ID | town |
| 0 | Vaduz |
When sending json search query "post" with address
| bounded | viewbox |
| 1 | 9.49712,47.17122,9.52605,47.16242 |
Then result addresses contain
| town |
| Schaan |
Scenario: Country search with bounded viewbox remains in the area
When sending json search query "" with address
| bounded | viewbox | country |
| 1 | 9.49712,47.17122,9.52605,47.16242 | de |
Then less than 1 result is returned
Scenario: Search with bounded viewboxlbrt in right area
When sending json search query "bar" with address
| bounded | viewboxlbrt |
| 1 | 9.49712,47.16242,9.52605,47.17122 |
Then result addresses contain
| town |
| Schaan |
@Fail
Scenario: No POI search with unbounded viewbox
When sending json search query "restaurant"
| viewbox |
| 9.93027,53.61634,10.10073,53.54500 |
Then results contain
| display_name |
| ^[^,]*[Rr]estaurant.* |
Scenario: bounded search remains within viewbox, even with no results
When sending json search query "[restaurant]"
| bounded | viewbox |
| 1 | 43.5403125,-5.6563282,43.54285,-5.662003 |
Then less than 1 result is returned
Scenario: bounded search remains within viewbox with results
When sending json search query "restaurant"
| bounded | viewbox |
| 1 | 9.49712,47.17122,9.52605,47.16242 |
Then result has centroid in 9.49712,47.16242,9.52605,47.17122
Scenario: Prefer results within viewbox
When sending json search query "Gässle" with address
| accept-language | viewbox |
| en | 9.52413,47.10759,9.53140,47.10539 |
Then result addresses contain
| ID | village |
| 0 | Triesen |
When sending json search query "Gässle" with address
| accept-language | viewbox |
| en | 9.45949,47.08421,9.54094,47.05466 |
Then result addresses contain
| ID | town |
| 0 | Balzers |
Scenario: viewboxes cannot be points
When sending json search query "foo"
| viewbox |
| 1.01,34.6,1.01,34.6 |
Then a HTTP 400 is returned
Scenario Outline: viewbox must have four coordinate numbers
When sending json search query "foo"
| viewbox |
| <viewbox> |
Then a HTTP 400 is returned
Examples:
| viewbox |
| 34 |
| 0.003,-84.4 |
| 5.2,4.5542,12.4 |
| 23.1,-6,0.11,44.2,9.1 |
Scenario Outline: viewboxlbrt must have four coordinate numbers
When sending json search query "foo"
| viewboxlbrt |
| <viewbox> |
Then a HTTP 400 is returned
Examples:
| viewbox |
| 34 |
| 0.003,-84.4 |
| 5.2,4.5542,12.4 |
| 23.1,-6,0.11,44.2,9.1 |
Scenario: Overly large limit number for search results
When sending json search query "restaurant"
| limit |
| 1000 |
Then at most 50 results are returned
Scenario: Limit number of search results
When sending json search query "landstr"
| dedupe |
| 0 |
Then more than 4 results are returned
When sending json search query "landstr"
| limit | dedupe |
| 4 | 0 |
Then exactly 4 results are returned
Scenario: Limit parameter must be a number
When sending search query "Blue Laguna"
| limit |
| ); |
Then a HTTP 400 is returned
Scenario: Restrict to feature type country
When sending xml search query "fürstentum"
| featureType |
| country |
Then results contain
| place_rank |
| 4 |
Scenario: Restrict to feature type state
When sending xml search query "Wangerberg"
Then at least 1 result is returned
When sending xml search query "Wangerberg"
| featureType |
| state |
Then exactly 0 results are returned
Scenario: Restrict to feature type city
When sending xml search query "vaduz"
Then at least 1 result is returned
When sending xml search query "vaduz"
| featureType |
| city |
Then results contain
| place_rank |
| 16 |
Scenario: Restrict to feature type settlement
When sending json search query "Malbun"
Then results contain
| ID | class |
| 1 | landuse |
When sending json search query "Malbun"
| featureType |
| settlement |
Then results contain
| class | type |
| place | village |
Scenario Outline: Search with polygon threshold (json)
When sending json search query "triesenberg"
| polygon_geojson | polygon_threshold |
| 1 | <th> |
Then at least 1 result is returned
And result 0 has attributes geojson
Examples:
| th |
| -1 |
| 0.0 |
| 0.5 |
| 999 |
Scenario Outline: Search with polygon threshold (xml)
When sending xml search query "triesenberg"
| polygon_geojson | polygon_threshold |
| 1 | <th> |
Then at least 1 result is returned
And result 0 has attributes geojson
Examples:
| th |
| -1 |
| 0.0 |
| 0.5 |
| 999 |
Scenario Outline: Search with invalid polygon threshold (xml)
When sending xml search query "triesenberg"
| polygon_geojson | polygon_threshold |
| 1 | <th> |
Then a HTTP 400 is returned
Examples:
| th |
| x |
| ;; |
| 1m |
Scenario Outline: Search with extratags
When sending <format> search query "Landstr"
| extratags |
| 1 |
Then result has attributes extratags
Examples:
| format |
| xml |
| json |
| jsonv2 |
| geojson |
Scenario Outline: Search with namedetails
When sending <format> search query "Landstr"
| namedetails |
| 1 |
Then result has attributes namedetails
Examples:
| format |
| xml |
| json |
| jsonv2 |
| geojson |
Scenario Outline: Search result contains TEXT geometry
When sending <format> search query "triesenberg"
| polygon_text |
| 1 |
Then result has attributes <response_attribute>
Examples:
| format | response_attribute |
| xml | geotext |
| json | geotext |
| jsonv2 | geotext |
Scenario Outline: Search result contains SVG geometry
When sending <format> search query "triesenberg"
| polygon_svg |
| 1 |
Then result has attributes <response_attribute>
Examples:
| format | response_attribute |
| xml | geosvg |
| json | svg |
| jsonv2 | svg |
Scenario Outline: Search result contains KML geometry
When sending <format> search query "triesenberg"
| polygon_kml |
| 1 |
Then result has attributes <response_attribute>
Examples:
| format | response_attribute |
| xml | geokml |
| json | geokml |
| jsonv2 | geokml |
Scenario Outline: Search result contains GEOJSON geometry
When sending <format> search query "triesenberg"
| polygon_geojson |
| 1 |
Then result has attributes <response_attribute>
Examples:
| format | response_attribute |
| xml | geojson |
| json | geojson |
| jsonv2 | geojson |
| geojson | geojson |
Scenario Outline: Search result in geojson format contains no non-geojson geometry
When sending geojson search query "triesenberg"
| polygon_text | polygon_svg | polygon_geokml |
| 1 | 1 | 1 |
Then result 0 has not attributes <response_attribute>
Examples:
| response_attribute |
| geotext |
| polygonpoints |
| svg |
| geokml |
Scenario: Array parameters are ignored
When sending json search query "Vaduz" with address
| countrycodes[] | polygon_svg[] | limit[] | polygon_threshold[] |
| IT | 1 | 3 | 3.4 |
Then result addresses contain
| ID | country_code |
| 0 | li |
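
Note: the viewbox scenarios above exercise the q, viewbox and bounded parameters of the search endpoint. The following is a minimal illustrative sketch of the same request outside the test harness; it assumes a local Nominatim instance listening on http://localhost:8088 (the default of `nominatim serve`), which is not part of this diff.

# Illustrative sketch only: assumes a local Nominatim instance at
# http://localhost:8088 (e.g. started with `nominatim serve`).
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    'q': 'post',                                     # free-text query
    'format': 'jsonv2',
    'viewbox': '9.49712,47.17122,9.52605,47.16242',  # x1,y1,x2,y2
    'bounded': 1,                                    # drop results outside the viewbox
    'addressdetails': 1,
})
with urllib.request.urlopen(f'http://localhost:8088/search?{params}') as resp:
    for place in json.load(resp):
        print(place['display_name'])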


@@ -1,221 +0,0 @@
@SQLITE
@APIDB
Feature: Search queries
Generic search result correctness
Scenario: Search for natural object
When sending json search query "Samina"
| accept-language |
| en |
Then results contain
| ID | class | type | display_name |
| 0 | waterway | river | Samina, Austria |
Scenario: House number search for non-street address
When sending json search query "6 Silum, Liechtenstein" with address
| accept-language |
| en |
Then address of result 0 is
| type | value |
| house_number | 6 |
| village | Silum |
| town | Triesenberg |
| county | Oberland |
| postcode | 9497 |
| country | Liechtenstein |
| country_code | li |
| ISO3166-2-lvl8 | LI-10 |
Scenario: House number interpolation
When sending json search query "Grosssteg 1023, Triesenberg" with address
| accept-language |
| de |
Then address of result 0 contains
| type | value |
| house_number | 1023 |
| road | Grosssteg |
| village | Sücka |
| postcode | 9497 |
| town | Triesenberg |
| country | Liechtenstein |
| country_code | li |
Scenario: With missing housenumber search falls back to road
When sending json search query "Bündaweg 555" with address
Then address of result 0 is
| type | value |
| road | Bündaweg |
| village | Silum |
| postcode | 9497 |
| county | Oberland |
| town | Triesenberg |
| country | Liechtenstein |
| country_code | li |
| ISO3166-2-lvl8 | LI-10 |
Scenario Outline: Housenumber 0 can be found
When sending <format> search query "Gnalpstrasse 0" with address
Then results contain
| display_name |
| ^0,.* |
And result addresses contain
| house_number |
| 0 |
Examples:
| format |
| xml |
| json |
| jsonv2 |
| geojson |
@Tiger
Scenario: TIGER house number
When sending json search query "697 Upper Kingston Road"
Then results contain
| osm_type | display_name |
| way | ^697,.* |
Scenario: Search with class-type feature
When sending jsonv2 search query "bars in ebenholz"
Then results contain
| place_rank |
| 30 |
Scenario: Search with specific amenity
When sending json search query "[restaurant] Vaduz" with address
Then result addresses contain
| country |
| Liechtenstein |
And results contain
| class | type |
| amenity | restaurant |
Scenario: Search with specific amenity also works in country
When sending json search query "restaurants in liechtenstein" with address
Then result addresses contain
| country |
| Liechtenstein |
And results contain
| class | type |
| amenity | restaurant |
Scenario: Search with key-value amenity
When sending json search query "[club=scout] Vaduz"
Then results contain
| class | type |
| club | scout |
Scenario: POI search near given coordinate
When sending json search query "restaurant near 47.16712,9.51100"
Then results contain
| class | type |
| amenity | restaurant |
Scenario: Arbitrary key/value search near given coordinate
When sending json search query "[leisure=firepit] 47.150° N 9.5340493° E"
Then results contain
| class | type |
| leisure | firepit |
Scenario: POI search in a bounded viewbox
When sending json search query "restaurants"
| viewbox | bounded |
| 9.50830,47.15253,9.52043,47.14866 | 1 |
Then results contain
| class | type |
| amenity | restaurant |
Scenario Outline: Key/value search near given coordinate can be restricted to country
When sending json search query "[natural=peak] 47.06512,9.53965" with address
| countrycodes |
| <cc> |
Then result addresses contain
| country_code |
| <cc> |
Examples:
| cc |
| li |
| ch |
Scenario: Name search near given coordinate
When sending json search query "sporry" with address
Then result addresses contain
| ID | town |
| 0 | Vaduz |
When sending json search query "sporry, 47.10791,9.52676" with address
Then result addresses contain
| ID | village |
| 0 | Triesen |
Scenario: Name search near given coordinate without result
When sending json search query "sporry, N 47 15 7 W 9 61 26"
Then exactly 0 results are returned
Scenario: Arbitrary key/value search near a road
When sending json search query "[amenity=drinking_water] Wissfläckaweg"
Then results contain
| class | type |
| amenity | drinking_water |
Scenario: Ignore other country codes in structured search with country
When sending json search query ""
| city | country |
| li | de |
Then exactly 0 results are returned
Scenario: Ignore country searches when query is restricted to countries
When sending json search query "fr"
| countrycodes |
| li |
Then exactly 0 results are returned
Scenario: Country searches only return results for the given country
When sending search query "Ans Trail" with address
| countrycodes |
| li |
Then result addresses contain
| country_code |
| li |
# https://trac.openstreetmap.org/ticket/5094
Scenario: housenumbers are ordered by complete match first
When sending json search query "Austrasse 11, Vaduz" with address
Then result addresses contain
| ID | house_number |
| 0 | 11 |
Scenario Outline: Coordinate searches with white spaces
When sending json search query "<data>"
Then exactly 1 result is returned
And results contain
| class |
| water |
Examples:
| data |
| sporry weiher, N 47.10791° E 9.52676° |
| sporry weiher, N 47.10791° E 9.52676° |
| sporry weiher , N 47.10791° E 9.52676° |
| sporry weiher, N 47.10791° E 9.52676° |
| sporry weiher , N 47.10791° E 9.52676° |
Scenario: Searches with white spaces
When sending json search query "52 Bodastr , Triesenberg"
Then results contain
| class | type |
| highway | residential |
# github #1949
Scenario: Addressdetails always return the place type
When sending json search query "Vaduz" with address
Then result addresses contain
| ID | town |
| 0 | Vaduz |
Scenario: Search can handle complex query word sets
When sending search query "aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus aussenstelle universitat lichtenstein wachterhaus"
Then a HTTP 200 is returned


@@ -1,208 +0,0 @@
@SQLITE
@APIDB
Feature: Simple Tests
Simple tests for internal server errors and response format.
Scenario Outline: Testing different parameters
When sending search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
When sending xml search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
When sending json search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
When sending jsonv2 search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
When sending geojson search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
When sending geocodejson search query "Vaduz"
| param | value |
| <parameter> | <value> |
Then at least 1 result is returned
Examples:
| parameter | value |
| addressdetails | 0 |
| polygon_text | 0 |
| polygon_kml | 0 |
| polygon_geojson | 0 |
| polygon_svg | 0 |
| accept-language | de,en |
| countrycodes | li |
| bounded | 1 |
| bounded | 0 |
| exclude_place_ids| 385252,1234515 |
| limit | 1000 |
| dedupe | 1 |
| dedupe | 0 |
| extratags | 0 |
| namedetails | 0 |
Scenario: Search with invalid output format
When sending search query "Berlin"
| format |
| fd$# |
Then a HTTP 400 is returned
Scenario Outline: Simple Searches
When sending search query "<query>"
Then the result is valid json
When sending xml search query "<query>"
Then the result is valid xml
When sending json search query "<query>"
Then the result is valid json
When sending jsonv2 search query "<query>"
Then the result is valid json
When sending geojson search query "<query>"
Then the result is valid geojson
Examples:
| query |
| New York, New York |
| France |
| 12, Main Street, Houston |
| München |
| |
| hotels in nantes |
| xywxkrf |
| gh; foo() |
| %#$@*&l;der#$! |
| 234 |
| 47.4,8.3 |
Scenario: Empty XML search
When sending xml search query "xnznxvcx"
Then result header contains
| attr | value |
| querystring | xnznxvcx |
| more_url | .*q=xnznxvcx.*format=xml |
Scenario: Empty XML search with special XML characters
When sending xml search query "xfdghn&zxn"xvbyx<vxx>cssdex"
Then result header contains
| attr | value |
| querystring | xfdghn&zxn"xvbyx<vxx>cssdex |
| more_url | .*q=xfdghn%26zxn%22xvbyx%3Cvxx%3Ecssdex.*format=xml |
Scenario: Empty XML search with viewbox
When sending xml search query "xnznxvcx"
| viewbox |
| 12,33,77,45.13 |
Then result header contains
| attr | value |
| querystring | xnznxvcx |
| viewbox | 12,33,77,45.13 |
Scenario: Empty XML search with viewboxlbrt
When sending xml search query "xnznxvcx"
| viewboxlbrt |
| 12,34.13,77,45 |
Then result header contains
| attr | value |
| querystring | xnznxvcx |
| viewbox | 12,34.13,77,45 |
Scenario: Empty XML search with viewboxlbrt and viewbox
When sending xml search query "pub"
| viewbox | viewboxblrt |
| 12,33,77,45.13 | 1,2,3,4 |
Then result header contains
| attr | value |
| querystring | pub |
| viewbox | 12,33,77,45.13 |
Scenario: Empty XML search with excluded place ids
When sending xml search query "jghrleoxsbwjer"
| exclude_place_ids |
| 123,76,342565 |
Then result header contains
| attr | value |
| exclude_place_ids | 123,76,342565 |
Scenario: Empty XML search with bad excluded place ids
When sending xml search query "jghrleoxsbwjer"
| exclude_place_ids |
| , |
Then result header has not attributes exclude_place_ids
Scenario Outline: Wrapping of legal jsonp search requests
When sending json search query "Tokyo"
| param | value |
|json_callback | <data> |
Then result header contains
| attr | value |
| json_func | <result> |
Examples:
| data | result |
| foo | foo |
| FOO | FOO |
| __world | __world |
Scenario Outline: Wrapping of illegal jsonp search requests
When sending json search query "Tokyo"
| param | value |
|json_callback | <data> |
Then a json user error is returned
Examples:
| data |
| 1asd |
| bar(foo) |
| XXX['bad'] |
| foo; evil |
Scenario: Ignore jsonp parameter for anything but json
When sending json search query "Malibu"
| json_callback |
| 234 |
Then a HTTP 400 is returned
When sending xml search query "Malibu"
| json_callback |
| 234 |
Then the result is valid xml
Scenario Outline: Empty search
When sending <format> search query "YHlERzzx"
Then exactly 0 results are returned
Examples:
| format |
| json |
| jsonv2 |
| geojson |
| geocodejson |
Scenario: Search for non-existing coordinates
When sending json search query "-21.0,-33.0"
Then exactly 0 results are returned
Scenario: Country code selection is retained in more URL (#596)
When sending xml search query "Vaduz"
| countrycodes |
| pl,1,,invalid,undefined,%3Cb%3E,bo,, |
Then result header contains
| attr | value |
| more_url | .*&countrycodes=pl%2Cbo&.* |
Scenario Outline: Search debug output does not return errors
When sending debug search query "<query>"
Then a HTTP 200 is returned
Examples:
| query |
| Liechtenstein |
| Triesen |
| Pfarrkirche |
| Landstr 27 Steinort, Triesenberg, 9495 |
| 9497 |
| restaurant in triesen |
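
Note: the jsonp scenarios above accept plain identifiers such as foo, FOO and __world and reject anything containing punctuation or spaces. A callback filter with the same effect on these examples can be written as a single regular expression; this is an illustrative sketch, not the code Nominatim itself uses.

import re

# Accept only plain JavaScript identifiers (letters, digits, _ and $,
# not starting with a digit); anything else would get a 400.
_CALLBACK_RE = re.compile(r'^[A-Za-z_$][A-Za-z0-9_$]*$')

def is_valid_jsonp_callback(name: str) -> bool:
    return bool(_CALLBACK_RE.match(name))

assert is_valid_jsonp_callback('__world')
assert not is_valid_jsonp_callback('bar(foo)')
assert not is_valid_jsonp_callback('1asd')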


@@ -1,79 +0,0 @@
@SQLITE
@APIDB
Feature: Structured search queries
Testing correctness of results with
structured queries
Scenario: Country only
When sending json search query "" with address
| country |
| Liechtenstein |
Then address of result 0 is
| type | value |
| country | Liechtenstein |
| country_code | li |
Scenario: Postcode only
When sending json search query "" with address
| postalcode |
| 9495 |
Then results contain
| type |
| ^post(al_)?code |
And result addresses contain
| postcode |
| 9495 |
Scenario: Street, postcode and country
When sending xml search query "" with address
| street | postalcode | country |
| Old Palace Road | GU2 7UP | United Kingdom |
Then result header contains
| attr | value |
| querystring | Old Palace Road, GU2 7UP, United Kingdom |
Scenario: Street with housenumber, city and postcode
When sending xml search query "" with address
| street | city | postalcode |
| 19 Am schrägen Weg | Vaduz | 9490 |
Then result addresses contain
| house_number | road |
| 19 | Am Schrägen Weg |
Scenario: Street with housenumber, city and bad postcode
When sending xml search query "" with address
| street | city | postalcode |
| 19 Am schrägen Weg | Vaduz | 9491 |
Then result addresses contain
| house_number | road |
| 19 | Am Schrägen Weg |
Scenario: Amenity, city
When sending json search query "" with address
| city | amenity |
| Vaduz | bar |
Then result addresses contain
| country |
| Liechtenstein |
And results contain
| class | type |
| amenity | ^(pub)\|(bar)\|(restaurant) |
#176
Scenario: Structured search restricts rank
When sending json search query "" with address
| city |
| Vaduz |
Then result addresses contain
| town |
| Vaduz |
#3651
Scenario: Structured search with surrounding extra characters
When sending xml search query "" with address
| street | city | postalcode |
| "19 Am schrägen Weg" | "Vaduz" | "9491" |
Then result addresses contain
| house_number | road |
| 19 | Am Schrägen Weg |
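
Note: in the structured form the free-text query stays empty and the address parts travel as separate parameters; the querystring assertion above shows how the server reassembles them. A sketch of the request shape, using the parameter names from the scenarios and the same assumed local instance as elsewhere in these notes:

# Illustrative sketch: structured search sends address parts as
# separate parameters instead of a single q= string.
import urllib.parse

params = urllib.parse.urlencode({
    'street': '19 Am schrägen Weg',
    'city': 'Vaduz',
    'postalcode': '9490',
    'format': 'jsonv2',
    'addressdetails': 1,
})
print(f'http://localhost:8088/search?{params}')  # assumed local instance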


@@ -1,17 +0,0 @@
@UNKNOWNDB
Feature: Status queries against unknown database
Testing status query
Scenario: Failed status as text
When sending text status query
Then a HTTP 500 is returned
And the page contents equals "ERROR: Database connection failed"
Scenario: Failed status as json
When sending json status query
Then a HTTP 200 is returned
And the result is valid json
And results contain
| status | message |
| 700 | Database connection failed |
And result has not attributes data_updated


@@ -1,17 +0,0 @@
@SQLITE
@APIDB
Feature: Status queries
Testing status query
Scenario: Status as text
When sending status query
Then a HTTP 200 is returned
And the page contents equals "OK"
Scenario: Status as json
When sending json status query
Then the result is valid json
And results contain
| status | message |
| 0 | OK |
And result has attributes data_updated

test/bdd/conftest.py Normal file

@@ -0,0 +1,358 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
"""
Fixtures for BDD test steps
"""
import sys
import json
from pathlib import Path

import psycopg
from psycopg import sql as pysql

# always test against the source
SRC_DIR = (Path(__file__) / '..' / '..' / '..').resolve()
sys.path.insert(0, str(SRC_DIR / 'src'))

import pytest
from pytest_bdd.parsers import re as step_parse
from pytest_bdd import given, when, then

pytest.register_assert_rewrite('utils')

from utils.api_runner import APIRunner
from utils.api_result import APIResult
from utils.checks import ResultAttr, COMPARATOR_TERMS
from utils.geometry_alias import ALIASES
from utils.grid import Grid
from utils.db import DBManager

from nominatim_db.config import Configuration
from nominatim_db.data.country_info import setup_country_config


def _strlist(inp):
    return [s.strip() for s in inp.split(',')]


def _pretty_json(inp):
    return json.dumps(inp, indent=2)


def pytest_addoption(parser, pluginmanager):
    parser.addoption('--nominatim-purge', dest='NOMINATIM_PURGE', action='store_true',
                     help='Force recreation of test databases from scratch.')
    parser.addoption('--nominatim-keep-db', dest='NOMINATIM_KEEP_DB', action='store_true',
                     help='Do not drop the database after tests are finished.')
    parser.addoption('--nominatim-api-engine', dest='NOMINATIM_API_ENGINE',
                     default='falcon',
                     help='Choose the API engine to use when sending requests.')
    parser.addoption('--nominatim-tokenizer', dest='NOMINATIM_TOKENIZER',
                     metavar='TOKENIZER',
                     help='Use the specified tokenizer for importing data into '
                          'a Nominatim database.')

    parser.addini('nominatim_test_db', default='test_nominatim',
                  help='Name of the database used for running a single test.')
    parser.addini('nominatim_api_test_db', default='test_api_nominatim',
                  help='Name of the database for storing API test data.')
    parser.addini('nominatim_template_db', default='test_template_nominatim',
                  help='Name of database used as a template for test databases.')


@pytest.fixture
def datatable():
    """ Default fixture for datatables, so that their presence can be optional.
    """
    return None


@pytest.fixture
def node_grid():
    """ Default fixture for node grids. Nothing set.
    """
    return Grid([[]], None, None)


@pytest.fixture(scope='session', autouse=True)
def setup_country_info():
    setup_country_config(Configuration(None))


@pytest.fixture(scope='session')
def template_db(pytestconfig):
    """ Create a template database containing the extensions and base data
        needed by Nominatim. Using the template instead of doing the full
        setup can speed up the tests.

        The template database will only be created if it does not exist yet
        or a purge has been explicitly requested.
    """
    dbm = DBManager(purge=pytestconfig.option.NOMINATIM_PURGE)

    template_db = pytestconfig.getini('nominatim_template_db')

    template_config = Configuration(
        None, environ={'NOMINATIM_DATABASE_DSN': f"pgsql:dbname={template_db}"})

    dbm.setup_template_db(template_config)

    return template_db


@pytest.fixture
def def_config(pytestconfig):
    dbname = pytestconfig.getini('nominatim_test_db')

    return Configuration(None,
                         environ={'NOMINATIM_DATABASE_DSN': f"pgsql:dbname={dbname}"})


@pytest.fixture
def db(template_db, pytestconfig):
    """ Set up an empty database for use with osm2pgsql.
    """
    dbm = DBManager(purge=pytestconfig.option.NOMINATIM_PURGE)

    dbname = pytestconfig.getini('nominatim_test_db')

    dbm.create_db_from_template(dbname, template_db)

    yield dbname

    if not pytestconfig.option.NOMINATIM_KEEP_DB:
        dbm.drop_db(dbname)


@pytest.fixture
def db_conn(db, def_config):
    with psycopg.connect(def_config.get_libpq_dsn()) as conn:
        info = psycopg.types.TypeInfo.fetch(conn, "hstore")
        psycopg.types.hstore.register_hstore(info, conn)

        yield conn


@when(step_parse(r'reverse geocoding (?P<lat>[\d.-]*),(?P<lon>[\d.-]*)'),
      target_fixture='nominatim_result')
def reverse_geocode_via_api(test_config_env, pytestconfig, datatable, lat, lon):
    runner = APIRunner(test_config_env, pytestconfig.option.NOMINATIM_API_ENGINE)
    api_response = runner.run_step('reverse',
                                   {'lat': float(lat), 'lon': float(lon)},
                                   datatable, 'jsonv2', {})

    assert api_response.status == 200
    assert api_response.headers['content-type'] == 'application/json; charset=utf-8'

    result = APIResult('json', 'reverse', api_response.body)
    assert result.is_simple()
    assert isinstance(result.result['lat'], str)
    assert isinstance(result.result['lon'], str)

    result.result['centroid'] = f"POINT({result.result['lon']} {result.result['lat']})"

    return result


@when(step_parse(r'reverse geocoding at node (?P<node>[\d]+)'),
      target_fixture='nominatim_result')
def reverse_geocode_via_api_and_grid(test_config_env, pytestconfig, node_grid, datatable, node):
    coords = node_grid.get(node)
    if coords is None:
        raise ValueError('Unknown node id')

    return reverse_geocode_via_api(test_config_env, pytestconfig, datatable, coords[1], coords[0])


@when(step_parse(r'geocoding(?: "(?P<query>.*)")?'),
      target_fixture='nominatim_result')
def forward_geocode_via_api(test_config_env, pytestconfig, datatable, query):
    runner = APIRunner(test_config_env, pytestconfig.option.NOMINATIM_API_ENGINE)

    params = {'addressdetails': '1'}
    if query:
        params['q'] = query

    api_response = runner.run_step('search', params, datatable, 'jsonv2', {})

    assert api_response.status == 200
    assert api_response.headers['content-type'] == 'application/json; charset=utf-8'

    result = APIResult('json', 'search', api_response.body)
    assert not result.is_simple()

    for res in result.result:
        assert isinstance(res['lat'], str)
        assert isinstance(res['lon'], str)
        res['centroid'] = f"POINT({res['lon']} {res['lat']})"

    return result


@then(step_parse(r'(?P<op>[a-z ]+) (?P<num>\d+) results? (?:are|is) returned'),
      converters={'num': int})
def check_number_of_results(nominatim_result, op, num):
    assert not nominatim_result.is_simple()
    assert COMPARATOR_TERMS[op](num, len(nominatim_result))


@then(step_parse('the result metadata contains'))
def check_metadata_for_fields(nominatim_result, datatable):
    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    for k, v in pairs:
        assert ResultAttr(nominatim_result.meta, k) == v


@then(step_parse('the result metadata has no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_metadata_for_field_presence(nominatim_result, attributes):
    assert all(a not in nominatim_result.meta for a in attributes), \
        f"Unexpectedly have one of the attributes '{attributes}' in\n" \
        f"{_pretty_json(nominatim_result.meta)}"


@then(step_parse(r'the result contains(?: in field (?P<field>\S+))?'))
def check_result_for_fields(nominatim_result, datatable, node_grid, field):
    assert nominatim_result.is_simple()

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        assert ResultAttr(nominatim_result.result, prefix + k, grid=node_grid) == v


@then(step_parse('the result has attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_result_for_field_presence(nominatim_result, attributes):
    assert nominatim_result.is_simple()
    assert all(a in nominatim_result.result for a in attributes)


@then(step_parse('the result has no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_result_for_field_absence(nominatim_result, attributes):
    assert nominatim_result.is_simple()
    assert all(a not in nominatim_result.result for a in attributes)


@then(step_parse('the result set contains(?P<exact> exactly)?'))
def check_result_list_match(nominatim_result, datatable, exact):
    assert not nominatim_result.is_simple()

    result_set = set(range(len(nominatim_result.result)))

    for row in datatable[1:]:
        for idx in result_set:
            for key, value in zip(datatable[0], row):
                if ResultAttr(nominatim_result.result[idx], key) != value:
                    break
            else:
                # found a match
                result_set.remove(idx)
                break
        else:
            assert False, f"Missing data row {row}. Full response:\n{nominatim_result}"

    if exact:
        assert not [nominatim_result.result[i] for i in result_set]


@then(step_parse('all results have attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_all_results_for_field_presence(nominatim_result, attributes):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0
    for res in nominatim_result.result:
        assert all(a in res for a in attributes), \
            f"Missing one of the attributes '{attributes}' in\n{_pretty_json(res)}"


@then(step_parse('all results have no attributes (?P<attributes>.*)'),
      converters={'attributes': _strlist})
def check_all_result_for_field_absence(nominatim_result, attributes):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0
    for res in nominatim_result.result:
        assert all(a not in res for a in attributes), \
            f"Unexpectedly have one of the attributes '{attributes}' in\n{_pretty_json(res)}"


@then(step_parse(r'all results contain(?: in field (?P<field>\S+))?'))
def check_all_results_contain(nominatim_result, datatable, node_grid, field):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > 0

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        for r in nominatim_result.result:
            assert ResultAttr(r, prefix + k, grid=node_grid) == v


@then(step_parse(r'result (?P<num>\d+) contains(?: in field (?P<field>\S+))?'),
      converters={'num': int})
def check_specific_result_for_fields(nominatim_result, datatable, num, field):
    assert not nominatim_result.is_simple()
    assert len(nominatim_result) > num

    if datatable[0] == ['param', 'value']:
        pairs = datatable[1:]
    else:
        pairs = zip(datatable[0], datatable[1])

    prefix = field + '+' if field else ''

    for k, v in pairs:
        assert ResultAttr(nominatim_result.result[num], prefix + k) == v


@given(step_parse(r'the (?P<step>[0-9.]+ )?grid(?: with origin (?P<origin>.*))?'),
       target_fixture='node_grid')
def set_node_grid(datatable, step, origin):
    if step is not None:
        step = float(step)

    if origin:
        if ',' in origin:
            coords = origin.split(',')
            if len(coords) != 2:
                raise RuntimeError('Grid origin expects origin with x,y coordinates.')
            origin = list(map(float, coords))
        elif origin in ALIASES:
            origin = ALIASES[origin]
        else:
            raise RuntimeError('Grid origin must be either coordinate or alias.')

    return Grid(datatable, step, origin)


@then(step_parse('(?P<table>placex?) has no entry for '
                 r'(?P<osm_type>[NRW])(?P<osm_id>\d+)(?::(?P<osm_class>\S+))?'),
      converters={'osm_id': int})
def check_place_missing_lines(db_conn, table, osm_type, osm_id, osm_class):
    sql = pysql.SQL("""SELECT count(*) FROM {}
                       WHERE osm_type = %s and osm_id = %s""").format(pysql.Identifier(table))
    params = [osm_type, int(osm_id)]
    if osm_class:
        sql += pysql.SQL(' AND class = %s')
        params.append(osm_class)

    with db_conn.cursor() as cur:
        assert cur.execute(sql, params).fetchone()[0] == 0
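
Note: new steps follow the same pattern throughout this file: a pytest-bdd decorator with a regular-expression parser, optional converters for the captured groups, and assertions against the nominatim_result fixture. A hypothetical additional step, shown only to illustrate that pattern (the step text and the address check are invented, not part of the suite):

# Hypothetical example step, not part of this test suite. Would match
# e.g. "Then the result has address with 4 entries".
@then(step_parse(r'the result has address with (?P<num>\d+) entries'),
      converters={'num': int})
def check_address_entry_count(nominatim_result, num):
    assert nominatim_result.is_simple()
    assert len(nominatim_result.result['address']) == num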


@@ -1,105 +0,0 @@
@DB
Feature: Import and search of names
Tests all naming related import issues
Scenario: No copying name tag if only one name
Given the places
| osm | class | type | name | geometry |
| N1 | place | locality | german | country:de |
When importing
Then placex contains
| object | country_code | name+name |
| N1 | de | german |
Scenario: Copying name tag to default language if it does not exist
Given the places
| osm | class | type | name | name+name:fi | geometry |
| N1 | place | locality | german | finnish | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | german |
Scenario: Copying default language name tag to name if it does not exist
Given the places
| osm | class | type | name+name:de | name+name:fi | geometry |
| N1 | place | locality | german | finnish | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | german |
Scenario: Do not overwrite default language with name tag
Given the places
| osm | class | type | name | name+name:fi | name+name:de | geometry |
| N1 | place | locality | german | finnish | local | country:de |
When importing
Then placex contains
| object | country_code | name | name+name:fi | name+name:de |
| N1 | de | german | finnish | local |
Scenario Outline: Names in any script can be found
Given the places
| osm | class | type | name |
| N1 | place | hamlet | <name> |
When importing
And sending search query "<name>"
Then results contain
| osm |
| N1 |
Examples:
| name |
| Berlin |
| |
| Вологда |
| Αθήνα |
| القاهرة |
| |
| |
| |
Scenario: German umlauts can be found when expanded
Given the places
| osm | class | type | name+name:de |
| N1 | place | city | Münster |
| N2 | place | city | Köln |
| N3 | place | city | Gräfenroda |
When importing
When sending search query "münster"
Then results contain
| osm |
| N1 |
When sending search query "muenster"
Then results contain
| osm |
| N1 |
When sending search query "munster"
Then results contain
| osm |
| N1 |
When sending search query "Köln"
Then results contain
| osm |
| N2 |
When sending search query "Koeln"
Then results contain
| osm |
| N2 |
When sending search query "Koln"
Then results contain
| osm |
| N2 |
When sending search query "gräfenroda"
Then results contain
| osm |
| N3 |
When sending search query "graefenroda"
Then results contain
| osm |
| N3 |
When sending search query "grafenroda"
Then results contain
| osm |
| N3 |


@@ -1,226 +0,0 @@
@DB
Feature: Import and search of names
Tests all naming related issues: normalisation,
abbreviations, internationalisation, etc.
Scenario: non-latin scripts can be found
Given the places
| osm | class | type | name |
| N1 | place | locality | Речицкий район |
| N2 | place | locality | Refugio de montaña |
| N3 | place | locality | |
| N4 | place | locality | الدوحة |
When importing
When sending search query "Речицкий район"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Refugio de montaña"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query ""
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "الدوحة"
Then results contain
| ID | osm |
| 0 | N4 |
Scenario: Case-insensitivity of search
Given the places
| osm | class | type | name |
| N1 | place | locality | FooBar |
When importing
Then placex contains
| object | class | type | name+name |
| N1 | place | locality | FooBar |
When sending search query "FooBar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "foobar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "fOObar"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "FOOBAR"
Then results contain
| ID | osm |
| 0 | N1 |
Scenario: Multiple spaces in name
Given the places
| osm | class | type | name |
| N1 | place | locality | one two three |
When importing
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "one two three"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query " one two three"
Then results contain
| ID | osm |
| 0 | N1 |
Scenario: Special characters in name
Given the places
| osm | class | type | name+name:de |
| N1 | place | locality | Jim-Knopf-Straße |
| N2 | place | locality | Smith/Weston |
| N3 | place | locality | space mountain |
| N4 | place | locality | space |
| N5 | place | locality | mountain |
When importing
When sending search query "Jim-Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim Knopf Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim/Knopf-Str"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Jim-Knopfstr"
Then results contain
| ID | osm |
| 0 | N1 |
When sending search query "Smith/Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "Smith Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "Smith-Weston"
Then results contain
| ID | osm |
| 0 | N2 |
When sending search query "space mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space-mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space/mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space\mountain"
Then results contain
| ID | osm |
| 0 | N3 |
When sending search query "space(mountain)"
Then results contain
| ID | osm |
| 0 | N3 |
Scenario: Landuse with name is found
Given the grid
| 1 | 2 |
| 3 | |
Given the places
| osm | class | type | name | geometry |
| R1 | natural | meadow | landuse1 | (1,2,3,1) |
| R2 | landuse | industrial | landuse2 | (2,3,1,2) |
When importing
When sending search query "landuse1"
Then results contain
| ID | osm |
| 0 | R1 |
When sending search query "landuse2"
Then results contain
| ID | osm |
| 0 | R2 |
Scenario: Postcode boundaries without ref
Given the grid with origin FR
| | 2 | |
| 1 | | 3 |
Given the places
| osm | class | type | postcode | geometry |
| R1 | boundary | postal_code | 123-45 | (1,2,3,1) |
When importing
When sending search query "123-45"
Then results contain
| ID | osm |
| 0 | R1 |
Scenario Outline: Housenumbers with special characters are found
Given the grid
| 1 | | | | 2 |
| | | 9 | | |
And the places
| osm | class | type | name | geometry |
| W1 | highway | primary | Main St | 1,2 |
And the places
| osm | class | type | housenr | geometry |
| N1 | building | yes | <nr> | 9 |
When importing
And sending search query "Main St <nr>"
Then results contain
| osm | display_name |
| N1 | <nr>, Main St |
Examples:
| nr |
| 1 |
| 3456 |
| 1 a |
| 56b |
| 1 A |
| 2 |
| 1Б |
| 1 к1 |
| 23-123 |
Scenario Outline: Housenumbers in lists are found
Given the grid
| 1 | | | | 2 |
| | | 9 | | |
And the places
| osm | class | type | name | geometry |
| W1 | highway | primary | Main St | 1,2 |
And the places
| osm | class | type | housenr | geometry |
| N1 | building | yes | <nr-list> | 9 |
When importing
And sending search query "Main St <nr>"
Then results contain
| ID | osm | display_name |
| 0 | N1 | <nr-list>, Main St |
Examples:
| nr-list | nr |
| 1,2,3 | 1 |
| 1,2,3 | 2 |
| 1, 2, 3 | 3 |
| 45 ;67;3 | 45 |
| 45 ;67;3 | 67 |
| 1a;1k | 1a |
| 1a;1k | 1k |
| 34/678 | 34 |
| 34/678 | 678 |
| 34/678 | 34/678 |


@@ -1,64 +0,0 @@
# SPDX-License-Identifier: GPL-3.0-or-later
#
# This file is part of Nominatim. (https://nominatim.org)
#
# Copyright (C) 2025 by the Nominatim developer community.
# For a full list of authors see the git log.
from pathlib import Path
import sys

from behave import *  # noqa

sys.path.insert(1, str(Path(__file__, '..', '..', '..', 'src').resolve()))

from steps.geometry_factory import GeometryFactory  # noqa: E402
from steps.nominatim_environment import NominatimEnvironment  # noqa: E402

TEST_BASE_DIR = Path(__file__, '..', '..').resolve()

userconfig = {
    'REMOVE_TEMPLATE': False,
    'KEEP_TEST_DB': False,
    'DB_HOST': None,
    'DB_PORT': None,
    'DB_USER': None,
    'DB_PASS': None,
    'TEMPLATE_DB': 'test_template_nominatim',
    'TEST_DB': 'test_nominatim',
    'API_TEST_DB': 'test_api_nominatim',
    'API_TEST_FILE': TEST_BASE_DIR / 'testdb' / 'apidb-test-data.pbf',
    'TOKENIZER': None,  # Test with a custom tokenizer
    'STYLE': 'extratags',
    'API_ENGINE': 'falcon'
}

use_step_matcher("re")  # noqa: F405


def before_all(context):
    # logging setup
    context.config.setup_logging()
    # set up -D options
    for k, v in userconfig.items():
        context.config.userdata.setdefault(k, v)
    # Nominatim test setup
    context.nominatim = NominatimEnvironment(context.config.userdata)
    context.osm = GeometryFactory()


def before_scenario(context, scenario):
    if 'SQLITE' not in context.tags \
       and context.config.userdata['API_TEST_DB'].startswith('sqlite:'):
        context.scenario.skip("Not usable with Sqlite database.")
    elif 'DB' in context.tags:
        context.nominatim.setup_db(context)
    elif 'APIDB' in context.tags:
        context.nominatim.setup_api_db()
    elif 'UNKNOWNDB' in context.tags:
        context.nominatim.setup_unknown_db()


def after_scenario(context, scenario):
    if 'DB' in context.tags:
        context.nominatim.teardown_db(context)


@@ -0,0 +1,83 @@
Feature: Localization of search results
Scenario: default language
When sending v1/details
| osmtype | osmid |
| R | 1155955 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liechtenstein |
Scenario: accept-language first
When sending v1/details
| osmtype | osmid | accept-language |
| R | 1155955 | zh,de |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| 列支敦士登 |
Scenario: accept-language missing
When sending v1/details
| osmtype | osmid | accept-language |
| R | 1155955 | xx,fr,en,de |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liechtenstein |
Scenario: http accept language header first
Given the HTTP header
| accept-language |
| fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/details
| osmtype | osmid |
| R | 1155955 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liktinstein |
Scenario: http accept language header and accept-language
Given the HTTP header
| accept-language |
| fr-ca,fr;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/details
| osmtype | osmid | accept-language |
| R | 1155955 | fo,en |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liktinstein |
Scenario: http accept language header fallback
Given the HTTP header
| accept-language |
| fo-ca,en-ca;q=0.5 |
When sending v1/details
| osmtype | osmid |
| R | 1155955 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liktinstein |
Scenario: http accept language header fallback (upper case)
Given the HTTP header
| accept-language |
| fo-FR;q=0.8,en-ca;q=0.5 |
When sending v1/details
| osmtype | osmid |
| R | 1155955 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| localname |
| Liktinstein |


@@ -0,0 +1,99 @@
Feature: Object details
Testing different parameter options for details API.
Scenario: Basic details
When sending v1/details
| osmtype | osmid |
| W | 297699560 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes geometry
And the result has no attributes keywords,address,linked_places,parentof
And the result contains
| geometry+type |
| Point |
Scenario: Basic details with pretty printing
When sending v1/details
| osmtype | osmid | pretty |
| W | 297699560 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes geometry
And the result has no attributes keywords,address,linked_places,parentof
Scenario: Details with addressdetails
When sending v1/details
| osmtype | osmid | addressdetails |
| W | 297699560 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes address
Scenario: Details with linkedplaces
When sending v1/details
| osmtype | osmid | linkedplaces |
| R | 123924 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes linked_places
Scenario: Details with hierarchy
When sending v1/details
| osmtype | osmid | hierarchy |
| W | 297699560 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes hierarchy
Scenario: Details with grouped hierarchy
When sending v1/details
| osmtype | osmid | hierarchy | group_hierarchy |
| W | 297699560 | 1 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes hierarchy
Scenario Outline: Details with keywords
When sending v1/details
| osmtype | osmid | keywords |
| <type> | <id> | 1 |
Then a HTTP 200 is returned
Then the result is valid json
And the result has attributes keywords
Examples:
| type | id |
| W | 297699560 |
| W | 243055645 |
| W | 243055716 |
| W | 43327921 |
# ticket #1343
Scenario: Details of a country with keywords
When sending v1/details
| osmtype | osmid | keywords |
| R | 1155955 | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes keywords
Scenario Outline: Details with full geometry
When sending v1/details
| osmtype | osmid | polygon_geojson |
| <type> | <id> | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result has attributes geometry
And the result contains
| geometry+type |
| <geometry> |
Examples:
| type | id | geometry |
| W | 297699560 | LineString |
| W | 243055645 | Polygon |
| W | 243055716 | Polygon |
| W | 43327921 | LineString |
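
Note: the geometry assertions above read nested attributes with a + path (geometry+type). Outside the harness, the equivalent check against the details endpoint is plain JSON navigation; a sketch assuming the same local instance as in the earlier notes:

# Illustrative sketch: fetch details for a way and inspect the GeoJSON
# geometry type, assuming a local instance at http://localhost:8088.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    'osmtype': 'W',
    'osmid': 297699560,
    'polygon_geojson': 1,   # request the full geometry instead of a point
})
with urllib.request.urlopen(f'http://localhost:8088/details?{params}') as resp:
    details = json.load(resp)
print(details['geometry']['type'])   # e.g. 'LineString'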


@@ -0,0 +1,99 @@
Feature: Object details
Check details page for correctness
Scenario Outline: Details request with OSM id
When sending v1/details
| osmtype | osmid |
| <type> | <id> |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| osm_type | osm_id |
| <type> | <id> |
Examples:
| type | id |
| N | 5484325405 |
| W | 43327921 |
| R | 123924 |
Scenario Outline: Details request with different class types for the same OSM id
When sending v1/details
| osmtype | osmid | class |
| N | 300209696 | <class> |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| osm_type | osm_id | category |
| N | 300209696 | <class> |
Examples:
| class |
| tourism |
| mountain_pass |
Scenario: Details request without osmtype
When sending v1/details
| osmid |
| <id> |
Then a HTTP 400 is returned
And the result is valid json
Scenario: Details request with unknown OSM id
When sending v1/details
| osmtype | osmid |
| R | 1 |
Then a HTTP 404 is returned
And the result is valid json
Scenario: Details request with unknown class
When sending v1/details
| osmtype | osmid | class |
| N | 300209696 | highway |
Then a HTTP 404 is returned
And the result is valid json
Scenario: Details for interpolation way return the interpolation
When sending v1/details
| osmtype | osmid |
| W | 1 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| category | type | osm_type | osm_id | admin_level |
| place | houses | W | 1 | 15 |
@skip
Scenario: Details for interpolation way return the interpolation
When sending details query for 112871
Then the result is valid json
And the result contains
| category | type | admin_level |
| place | houses | 15 |
And result has not attributes osm_type,osm_id
@skip
Scenario: Details for postcode
When sending details query for 112820
Then the result is valid json
And the result contains
| category | type | admin_level |
| place | postcode | 15 |
And result has not attributes osm_type,osm_id
Scenario Outline: Details debug output returns no errors
When sending v1/details
| osmtype | osmid | debug |
| <type> | <id> | 1 |
Then a HTTP 200 is returned
And the result is valid html
Examples:
| type | id |
| N | 5484325405 |
| W | 43327921 |
| R | 123924 |


@@ -0,0 +1,71 @@
Feature: Tests for finding places by osm_type and osm_id
Simple tests for response format.
Scenario Outline: Address lookup for existing object
When sending v1/lookup with format <format>
| osm_ids |
| N5484325405,W43327921,,R123924,X99,N0 |
Then a HTTP 200 is returned
And the result is valid <outformat>
And exactly 3 results are returned
Examples:
| format | outformat |
| xml | xml |
| json | json |
| jsonv2 | json |
| geojson | geojson |
| geocodejson | geocodejson |
Scenario: Address lookup for non-existing or invalid object
When sending v1/lookup
| osm_ids |
| X99,,N0,nN158845944,ABC,,W9 |
Then a HTTP 200 is returned
And the result is valid xml
And exactly 0 results are returned
Scenario Outline: Boundingbox is returned
When sending v1/lookup with format <format>
| osm_ids |
| N5484325405,W43327921 |
Then the result is valid <outformat>
And the result set contains exactly
| object | boundingbox!in_box |
| N5484325405 | 47.135,47.14,9.52,9.525 |
| W43327921 | 47.07,47.08,9.50,9.52 |
Examples:
| format | outformat |
| xml | xml |
| json | json |
| jsonv2 | json |
| geojson | geojson |
Scenario: Linked places return information from the linkee
When sending v1/lookup with format geocodejson
| osm_ids |
| N1932181216 |
Then the result is valid geocodejson
And exactly 1 result is returned
And all results contain
| name |
| Vaduz |
Scenario Outline: Force error by providing too many ids
When sending v1/lookup with format <format>
| osm_ids |
| N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34,N35,N36,N37,N38,N39,N40,N41,N42,N43,N44,N45,N46,N47,N48,N49,N50,N51 |
Then a HTTP 400 is returned
And the result is valid <outformat>
And the result contains
| error+code | error+message |
| 400 | Too many object IDs. |
Examples:
| format | outformat |
| xml | xml |
| json | json |
| jsonv2 | json |
| geojson | json |
| geocodejson | json |


@@ -0,0 +1,56 @@
Feature: Geometries for reverse geocoding
Tests for returning geometries with reverse
Scenario: Reverse - polygons are returned fully by default
When sending v1/reverse
| lat | lon | polygon_text |
| 47.13803 | 9.52264 | 1 |
Then a HTTP 200 is returned
And the result is valid xml
And the result contains
| geotext!fm |
| POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226143 47.1379257, ?9.522615 47.137917, ?9.5226225 47.1379098, ?9.5226334 47.1379052, ?9.5226461 47.1379037, ?9.5226588 47.1379056, ?9.5226693 47.1379107, ?9.5226762 47.1379181, ?9.5226762 47.1379268, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |
Scenario: Reverse - polygons can be slightly simplified
When sending v1/reverse
| lat | lon | polygon_text | polygon_threshold |
| 47.13803 | 9.52264 | 1 | 0.00001 |
Then a HTTP 200 is returned
And the result is valid xml
And the result contains
| geotext!fm |
| POLYGON\(\(9.5225302 47.138066, ?9.5225348 47.1379282, ?9.5226142 47.1379294, ?9.5226225 47.1379098, ?9.5226588 47.1379056, ?9.5226761 47.1379308, ?9.5227366 47.1379317, ?9.5227352 47.1379753, ?9.5227608 47.1379757, ?9.5227595 47.1380148, ?9.5227355 47.1380145, ?9.5227337 47.1380692, ?9.5225302 47.138066\)\) |
Scenario: Reverse - polygons can be much simplified
When sending v1/reverse
| lat | lon | polygon_text | polygon_threshold |
| 47.13803 | 9.52264 | 1 | 0.9 |
Then a HTTP 200 is returned
And the result is valid xml
And the result contains
| geotext!fm |
| POLYGON\(\([0-9. ]+, ?[0-9. ]+, ?[0-9. ]+, ?[0-9. ]+(, ?[0-9. ]+)?\)\) |
Scenario: Reverse - for polygons return the centroid as center point
When sending v1/reverse
| lat | lon |
| 47.13836 | 9.52304 |
Then a HTTP 200 is returned
And the result is valid xml
And the result contains
| lon | lat |
| 9.5227108 | 47.1381805 |
Scenario: Reverse - for streets return the closest point as center point
When sending v1/reverse
| lat | lon |
| 47.13368 | 9.52942 |
Then a HTTP 200 is returned
And the result is valid xml
And the result contains
| lon | lat |
| 9.5294315 | 47.1336817 |


@@ -0,0 +1,47 @@
Feature: Localization of reverse search results
Scenario: Reverse - default language
When sending v1/reverse with format jsonv2
| lat | lon |
| 47.14 | 9.55 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| address+country |
| Liechtenstein |
Scenario: Reverse - accept-language parameter
When sending v1/reverse with format jsonv2
| lat | lon | accept-language |
| 47.14 | 9.55 | ja,en |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| address+country |
| リヒテンシュタイン |
Scenario: Reverse - HTTP accept language header
Given the HTTP header
| accept-language |
| fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/reverse with format jsonv2
| lat | lon |
| 47.14 | 9.55 |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| address+country |
| Liktinstein |
Scenario: Reverse - accept-language parameter and HTTP header
Given the HTTP header
| accept-language |
| fo-ca,fo;q=0.8,en-ca;q=0.5,en;q=0.3 |
When sending v1/reverse with format jsonv2
| lat | lon | accept-language |
| 47.14 | 9.55 | en |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| address+country |
| Liechtenstein |


@@ -1,24 +1,20 @@
@SQLITE
@APIDB
Feature: Layer parameter in reverse geocoding
Testing correct function of layer selection while reverse geocoding
Scenario: POIs are selected by default
When sending v1/reverse at 47.14077,9.52414
Then results contain
When reverse geocoding 47.14077,9.52414
Then the result contains
| category | type |
| tourism | viewpoint |
Scenario Outline: Same address level POI with different layers
When sending v1/reverse at 47.14077,9.52414
When reverse geocoding 47.14077,9.52414
| layer |
| <layer> |
Then results contain
Then the result contains
| category |
| <category> |
Examples:
| layer | category |
| address | highway |
@@ -28,12 +24,11 @@ Feature: Layer parameter in reverse geocoding
| address,natural | highway |
| natural,poi | tourism |
Scenario Outline: POIs are not selected without housenumber for address layer
When sending v1/reverse at 47.13816,9.52168
When reverse geocoding 47.13816,9.52168
| layer |
| <layer> |
Then results contain
Then the result contains
| category | type |
| <category> | <type> |
@@ -42,21 +37,19 @@ Feature: Layer parameter in reverse geocoding
| address,poi | highway | bus_stop |
| address | amenity | parking |
Scenario: Between natural and low-zoom address prefer natural
When sending v1/reverse at 47.13636,9.52094
When reverse geocoding 47.13636,9.52094
| layer | zoom |
| natural,address | 15 |
Then results contain
Then the result contains
| category |
| waterway |
Scenario Outline: Search for mountain peaks begins at level 12
When sending v1/reverse at 47.08293,9.57109
When reverse geocoding 47.08293,9.57109
| layer | zoom |
| natural | <zoom> |
Then results contain
Then the result contains
| category | type |
| <category> | <type> |
@@ -65,12 +58,11 @@ Feature: Layer parameter in reverse geocoding
| 12 | natural | peak |
| 13 | waterway | river |
Scenario Outline: Reverse search with manmade layers
When sending v1/reverse at 32.46904,-86.44439
When reverse geocoding 32.46904,-86.44439
| layer |
| <layer> |
Then results contain
Then the result contains
| category | type |
| <category> | <type> |


@@ -0,0 +1,94 @@
Feature: Reverse geocoding
Testing the reverse function
Scenario: Reverse - Unknown countries fall back to default country grid
When reverse geocoding 45.174,-103.072
Then the result contains
| category | type | display_name |
| place | country | United States |
Scenario: Reverse - No TIGER house number for zoom < 18
When reverse geocoding 32.4752389363,-86.4810198619
| zoom |
| 17 |
Then the result contains
| osm_type | category |
| way | highway |
And the result contains in field address
| road | postcode | country_code |
| Upper Kingston Road | 36067 | us |
Scenario: Reverse - Address with non-numerical house number
When reverse geocoding 47.107465,9.52838521614
Then the result contains in field address
| house_number | road |
| 39A/B | Dorfstrasse |
Scenario: Reverse - Address with numerical house number
When reverse geocoding 47.168440329479594,9.511551699184338
Then the result contains in field address
| house_number | road |
| 6 | Schmedgässle |
Scenario Outline: Reverse - Zoom levels below 5 result in country
When reverse geocoding 47.16,9.51
| zoom |
| <zoom> |
Then the result contains
| display_name |
| Liechtenstein |
Examples:
| zoom |
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
Scenario: Reverse - When on a street, the closest interpolation is shown
When reverse geocoding 47.118457166193245,9.570678289621355
| zoom |
| 18 |
Then the result contains
| display_name |
| 1021, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
# github 2214
Scenario: Reverse - Interpolations do not override house numbers when they are closer
When reverse geocoding 47.11778,9.57255
| zoom |
| 18 |
Then the result contains
| display_name |
| 5, Grosssteg, Steg, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Reverse - Interpolations do not override house numbers when they are closer (2)
When reverse geocoding 47.11834,9.57167
| zoom |
| 18 |
Then the result contains
| display_name |
| 3, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Reverse - When on a street with zoom 18, the closest housenumber is returned
When reverse geocoding 47.11755503977281,9.572722250405036
| zoom |
| 18 |
Then the result contains in field address
| house_number |
| 7 |
Scenario: Reverse - inherited address is shown by default
When reverse geocoding 47.0629071,9.4879694
Then the result contains
| osm_type | category | display_name |
| node | office | foo.li, 64, Hampfländer, Mäls, Balzers, Oberland, 9496, Liechtenstein |
Scenario: Reverse - inherited address is not shown with address layer
When reverse geocoding 47.0629071,9.4879694
| layer |
| address |
Then the result contains
| osm_type | category | display_name |
| way | building | 64, Hampfländer, Mäls, Balzers, Oberland, 9496, Liechtenstein |
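
The zoom scenarios above pin down how zoom selects the detail level of the result: anything below 5 collapses to the country, while 18 resolves to the closest house number or interpolation. A small sketch of that progression, reusing the same hypothetical BASE_URL assumption:

    # Sketch: the zoom parameter controls how fine-grained the reverse
    # result is. BASE_URL is an assumed local Nominatim server.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost:8080"

    def display_name(lat: float, lon: float, zoom: int) -> str:
        query = urllib.parse.urlencode(
            {"lat": lat, "lon": lon, "zoom": zoom, "format": "jsonv2"})
        with urllib.request.urlopen(f"{BASE_URL}/reverse?{query}") as resp:
            return json.load(resp)["display_name"]

    # Zoom 0 and 4 should both print "Liechtenstein" for this coordinate,
    # as in the scenario outline; zoom 18 returns the most detailed object.
    for zoom in (0, 4, 18):
        print(zoom, display_name(47.16, 9.51, zoom))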


@@ -0,0 +1,143 @@
Feature: Geocodejson for Reverse API
Testing correctness of geocodejson output (API version v1).
Scenario Outline: Reverse geocodejson - Simple with no results
When sending v1/reverse with format geocodejson
| lat | lon |
| <lat> | <lon> |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| error |
| Unable to geocode |
Examples:
| lat | lon |
| 0.0 | 0.0 |
| 91.3 | 0.4 |
| -700 | 0.4 |
| 0.2 | 324.44 |
| 0.2 | -180.4 |
Scenario Outline: Reverse geocodejson - Simple OSM result
When sending v1/reverse with format geocodejson
| lat | lon | addressdetails |
| 47.066 | 9.504 | <has_address> |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And the result metadata contains
| version | licence | attribution!fm |
| 0.1.0 | ODbL | Data © OpenStreetMap contributors, ODbL 1.0. https?://osm.org/copyright |
And all results have <attributes> country,postcode,county,city,district,street,housenumber,admin
And all results contain
| param | value |
| osm_type | node |
| osm_id | 6522627624 |
| osm_key | shop |
| osm_value | bakery |
| type | house |
| name | Dorfbäckerei Herrmann |
| label | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
| geojson+type | Point |
| geojson+coordinates | [9.5036065, 47.0660892] |
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | no attributes |
Scenario: Reverse geocodejson - City housenumber-level address with street
When sending v1/reverse with format geocodejson
| lat | lon |
| 47.1068011 | 9.52810091 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| housenumber | street | postcode | city | country |
| 8 | Im Winkel | 9495 | Triesen | Liechtenstein |
And all results contain
| admin+level6 | admin+level8 |
| Oberland | Triesen |
Scenario: Reverse geocodejson - Town street-level address with street
When sending v1/reverse with format geocodejson
| lat | lon | zoom |
| 47.066 | 9.504 | 16 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| name | city | postcode | country |
| Gnetsch | Balzers | 9496 | Liechtenstein |
Scenario: Reverse geocodejson - Poi street-level address with footway
When sending v1/reverse with format geocodejson
| lat | lon |
| 47.06515 | 9.50083 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| street | city | postcode | country |
| Burgweg | Balzers | 9496 | Liechtenstein |
Scenario: Reverse geocodejson - City address with suburb
When sending v1/reverse with format geocodejson
| lat | lon |
| 47.146861 | 9.511771 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| housenumber | street | district | city | postcode | country |
| 5 | Lochgass | Ebenholz | Vaduz | 9490 | Liechtenstein |
Scenario: Reverse geocodejson - Tiger address
When sending v1/reverse with format geocodejson
| lat | lon |
| 32.4752389363 | -86.4810198619 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| osm_type | osm_id | osm_key | osm_value | type |
| way | 396009653 | place | house | house |
And all results contain
| housenumber | street | city | county | postcode | country |
| 707 | Upper Kingston Road | Prattville | Autauga County | 36067 | United States |
Scenario: Reverse geocodejson - Interpolation address
When sending v1/reverse with format geocodejson
| lat | lon |
| 47.118533 | 9.57056562 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| osm_type | osm_id | osm_key | osm_value | type |
| way | 1 | place | house | house |
And all results contain
| label |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
And all results have no attributes name
Scenario: Reverse geocodejson - Line geometry output is supported
When sending v1/reverse with format geocodejson
| lat | lon | polygon_geojson |
| 47.06597 | 9.50467 | 1 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| geojson+type |
| LineString |
Scenario Outline: Reverse geocodejson - Only geojson polygons are supported
When sending v1/reverse with format geocodejson
| lat | lon | <param> |
| 47.06597 | 9.50467 | 1 |
Then a HTTP 200 is returned
And the result is valid geocodejson with 1 result
And all results contain
| geojson+type |
| Point |
Examples:
| param |
| polygon_text |
| polygon_svg |
| polygon_kml |
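
The assertions in this feature map onto the nested geocodejson layout: the metadata lives in a top-level geocoding object, the per-result fields under features[n].properties.geocoding (the admin+levelN rows refer to its admin sub-object), and the geojson+ rows to the feature geometry. A sketch of pulling those fields out, under the same hypothetical BASE_URL assumption:

    # Sketch: where the asserted fields sit in a format=geocodejson response.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost:8080"  # assumed local Nominatim server

    query = urllib.parse.urlencode(
        {"lat": 47.066, "lon": 9.504, "format": "geocodejson"})
    with urllib.request.urlopen(f"{BASE_URL}/reverse?{query}") as resp:
        doc = json.load(resp)

    meta = doc["geocoding"]                # version, licence, attribution
    props = doc["features"][0]["properties"]["geocoding"]
    geom = doc["features"][0]["geometry"]  # geojson+type, geojson+coordinates

    print(meta["version"], meta["licence"])
    print(props["osm_key"], props["osm_value"], props["label"])
    print(props.get("admin"))              # e.g. {"level6": "Oberland", ...}
    print(geom["type"], geom["coordinates"])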


@@ -0,0 +1,102 @@
Feature: Geojson for Reverse API
Testing correctness of geojson output (API version v1).
Scenario Outline: Reverse geojson - Simple with no results
When sending v1/reverse with format geojson
| lat | lon |
| <lat> | <lon> |
Then a HTTP 200 is returned
And the result is valid json
And the result contains
| error |
| Unable to geocode |
Examples:
| lat | lon |
| 0.0 | 0.0 |
| 91.3 | 0.4 |
| -700 | 0.4 |
| 0.2 | 324.44 |
| 0.2 | -180.4 |
Scenario Outline: Reverse geojson - Simple OSM result
When sending v1/reverse with format geojson
| lat | lon | addressdetails |
| 47.066 | 9.504 | <has_address> |
Then a HTTP 200 is returned
And the result is valid geojson with 1 result
And the result metadata contains
| licence!fm |
| Data © OpenStreetMap contributors, ODbL 1.0. http://osm.org/copyright |
And all results have attributes place_id, importance
And all results have <attributes> address
And all results contain
| param | value |
| osm_type | node |
| osm_id | 6522627624 |
| place_rank | 30 |
| category | shop |
| type | bakery |
| addresstype | shop |
| name | Dorfbäckerei Herrmann |
| display_name | Dorfbäckerei Herrmann, 29, Gnetsch, Mäls, Balzers, Oberland, 9496, Liechtenstein |
| boundingbox | [47.0660392, 47.0661392, 9.5035565, 9.5036565] |
| geojson+type | Point |
| geojson+coordinates | [9.5036065, 47.0660892] |
Examples:
| has_address | attributes |
| 1 | attributes |
| 0 | no attributes |
Scenario: Reverse geojson - Tiger address
When sending v1/reverse with format geojson
| lat | lon |
| 32.4752389363 | -86.4810198619 |
Then a HTTP 200 is returned
And the result is valid geojson with 1 result
And all results contain
| osm_type | osm_id | category | type | addresstype | place_rank |
| way | 396009653 | place | house | place | 30 |
Scenario: Reverse geojson - Interpolation address
When sending v1/reverse with format geojson
| lat | lon |
| 47.118533 | 9.57056562 |
Then a HTTP 200 is returned
And the result is valid geojson with 1 result
And all results contain
| osm_type | osm_id | place_rank | category | type | addresstype |
| way | 1 | 30 | place | house | place |
And all results contain
| boundingbox!in_box |
| 47.118494, 47.118596, 9.570495, 9.570597 |
And all results contain
| display_name |
| 1019, Grosssteg, Sücka, Triesenberg, Oberland, 9497, Liechtenstein |
Scenario: Reverse geojson - Line geometry output is supported
When sending v1/reverse with format geojson
| lat | lon | polygon_geojson |
| 47.06597 | 9.50467 | 1 |
Then a HTTP 200 is returned
And the result is valid geojson with 1 result
And all results contain
| geojson+type |
| LineString |
Scenario Outline: Reverse geojson - Only geojson polygons are supported
When sending v1/reverse with format geojson
| lat | lon | <param> |
| 47.06597 | 9.50467 | 1 |
Then a HTTP 200 is returned
And the result is valid geojson with 1 result
And all results contain
| geojson+type |
| Point |
Examples:
| param |
| polygon_text |
| polygon_svg |
| polygon_kml |
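
The last two scenarios encode an asymmetry worth noting: with format=geojson, only polygon_geojson switches the geometry away from a point, while polygon_text, polygon_svg and polygon_kml are accepted but leave the geometry untouched. A minimal check, again assuming the hypothetical BASE_URL:

    # Sketch: only polygon_geojson changes the returned geometry for
    # format=geojson, exactly as the two scenario outlines assert.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost:8080"  # assumed local Nominatim server

    def geometry_type(**extra) -> str:
        query = {"lat": 47.06597, "lon": 9.50467, "format": "geojson", **extra}
        url = f"{BASE_URL}/reverse?{urllib.parse.urlencode(query)}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["features"][0]["geometry"]["type"]

    print(geometry_type(polygon_geojson=1))  # LineString
    print(geometry_type(polygon_text=1))     # still Point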

Some files were not shown because too many files have changed in this diff.