Wikikamus
idwiktionary
https://id.wiktionary.org/wiki/Wikikamus:Halaman_Utama
MediaWiki 1.46.0-wmf.23
case-sensitive
Media
Istimewa
Pembicaraan
Pengguna
Pembicaraan Pengguna
Wikikamus
Pembicaraan Wikikamus
Berkas
Pembicaraan Berkas
MediaWiki
Pembicaraan MediaWiki
Templat
Pembicaraan Templat
Bantuan
Pembicaraan Bantuan
Kategori
Pembicaraan Kategori
Indeks
Pembicaraan Indeks
Lampiran
Pembicaraan Lampiran
TimedText
TimedText talk
Modul
Pembicaraan Modul
Acara
Pembicaraan Acara
kami
0
1798
1349290
1283808
2026-04-10T20:22:59Z
Swarabakti
18192
1349290
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{inh|id|ms|kami}}, dari {{inh|id|poz-mly-pro|*kami}}, dari {{inh|id|poz-pro|*kami}}, dari {{inh|id|map-pro|*kami}}
{{-pron-|id}}
# yang berbicara/menulis beserta orang lain, tetapi tidak termasuk lawan bicara/pembaca; berbeda dengan [[kita]]
# yang berbicara (bersifat menunjukkan kehormatan); yang menulis (penulis)
{{-terjemahan-}}
{{t-atas|kata ganti orang pertama jamak eksklusif}}
*{{en}}: {{t+|en|we}} (sebagai subjek), {{t+|en|us}} (sebagai objek), {{t+|en|our}} (kepemilikan)
{{t-bawah}}
=={{bahasa|bac}}==
{{kepala|bac}}
{{-pron-|bac}}
# [[saya]]
{{-terjemahan-}}
{{t-atas}}
*{{id}}: {{t+|id|saya}}, {{t+|id|aku}}
*{{en}}: {{t+|en|I}}
{{t-bawah}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-pron-|bjn}}
# kata ganti orang pertama jamak atau tunggal yang bersifat menunjukkan kehormatan si pembicara yang bersifat [[eksklusif]]. Dengan kata lain, lawan bicara tidak termasuk, berbeda dengan [[kita]].
{{-lafal-|bjn}}
* {{suara|bjn|LL-Q33151 (bjn)-Roniyansyah (Roniyronron)-kami.wav }}
[[Kategori:WikiTutur - Banjar]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
#: {{ux|bjn|saban hari kami bagawian di pahumaan ngini.| setiap hari kami bekerja di ladang sawah ini.}}
[[Kategori:WikiTutur 3.0 - Banjar]]
[[Kategori:WikiTutur 3.0 Banjarmasin 15 Februari 2026]]
=={{bahasa|kaw}}==
{{kepala|kaw}}
{{-etimologi-}}
: Dari {{inh+|kaw|poz-pro|*kami}}, dari {{inh|id|map-pro|*kami}}.
{{-pron-|kaw}}
# kata ganti orang pertama (tunggal dan jamak, termasuk lawan bicara/pembaca)
=={{bahasa|jv}}==
{{kepala|jv}}
{{-etimologi-}}
: {{inh+|jv|kaw|kami}}, dari {{inh|jv|poz-pro|*kami}}, dari {{Inh|jv|map-pro|*kami}}.
{{-pron-|jv}}
#{{ngoko}} kami
#:'''''Kami''' lagi mangan sega''
#:'''Kami''' sedang makan nasi
{{-lafal-|jv}}
*{{suara|jv|LL-Q33549 (jav)-Dessyil-Kami.wav|"Suara"}}
{{Pronomina bahasa Jawa}}
[[Kategori:WikiTutur - Jawa]]
[[Kategori:WikiTutur Yogyakarta 18 Februari 2024]]
=={{bahasa|mad}}==
{{kepala|mad}}
{{-pron-|mad}}
# kami
{{-lafal-|mad}}
*{{suara|mad|LL-Q36213 (mad)-Nuraini Aan-Kami.wav|Audio}}
[[Kategori:WikiTutur - Madura]]
[[Kategori:WikiTutur Jakarta 3 Februari 2024]]
=={{bahasa|ms}}==
{{kepala|ms}}
{{-pron-|ms}}
# kata ganti orang pertama jamak (tidak termasuk orang kedua)
#: '''''Kami''' sudah lama menunggu awak di sini.''
# [[saya]], [[aku]]
{{-etimologi-}}
* Dari {{inh|id|poz-mly-pro|*kami}}, dari {{inh|id|poz-pro|*kami}}, dari {{inh|id|map-pro|*kami}}
=={{bahasa|jax}}==
{{kepala|jax}}
{{-lafal-}}
*{{suara|jax|LL-Q3915769_(jax)-Retno_KD-kami.wav|q=''Kota Jambi''}}
* {{suara|pron|LL-Q3915769 (jax)-Rurublue-kami.wav|q=''Jambi Seberang''}}
{{-pron-|jax}}
# [[saya]]; bentuk sopan dari kata [[aku]]:
#: ''kami agek izin balek cepat yo Bu..''
#:: saya nanti izin pulang cepat ya Bu..
=={{bahasa|pse}}==
{{kepala|pse}}
{{-pron-|pse}}
# kata ganti orang pertama jamak; kami
{{-lafal-|pse}}
* {{suara|pse| LL-Q3367751 (pse)-Rezzzyy-die.wav |q={{enim}}}}
*{{suara|pse|LL-Q3367751 (pse)-Meiilnd-kami.wav|q={{ogan}}}}
* {{suara|pse|LL-Q3367751 (pse)-Tarijushi-kami.wav |q={{semende}}}}
* {{suara|pse|LL-Q3367751 (pse)-naura (Sky rajata)-kami.wav|q={{lintang}}}}
[[Kategori:WikiTutur - Lintang]]
[[Kategori:WikiTutur - Semende]]
[[Kategori:WikiTutur - Ogan]]
[[Kategori:WikiTutur Palembang 18 Februari 2024]]
=={{bahasa|btd}}==
{{kepala|btd}}
{{-n-|btd}}
# kami
[[Kategori:WikiTutur 3.0 - Pakpak]]
[[Kategori:WikiTutur 3.0 Kopdar Medan 2025-11-02]]
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
{{-pron-|mui-plm}}
# kata ganti orang pertama jamak eksklusif (tidak termasuk orang kedua); [[#bahasa Indonesia|kami]]
# kata ganti orang pertama tunggal dalam ragam formal; [[saya]]
#: {{syn|mui-plm|aku|kulo|tubu}}
{{-lafal-|mui-plm}}
* {{IPA|[ka.mi]}}
{{-ragam-}}
* {{l|mui-plm|kamek}}
=={{bahasa|smw}}==
{{kepala|smw}}
{{-pron-|smw}}
# kami
{{-lafal-|smw}}
* {{suara|smw|LL-Q3182585 (smw)-Salmaanelghazi-kami.wav}}
[[Kategori:WikiTutur Kopdar Jakarta 22 Juni 2024]]
[[Kategori:WikiTutur - Sumbawa]]
=={{bahasa|su}}==
{{kepala|su}}
{{-pron-|su}}
# kata pengganti orang pertama tunggal, dipakai dalam pembicaraan dengan orang yang sudah sangat akrab
{{-etimologi-}}
* Dari {{inh|kaw|poz-pro|*kami}}, dari {{inh|id|map-pro|*kami}}
=={{bahasa|ljl}}==
{{kepala|ljl}}
{{-pron-|ljl}}
# kata ganti orang pertama jamak , menggambarkan beberapa orang atau sekelompok orang yang sedang bersama - sama
#: ''kami mbana da lau uma''
#: kami pergi ke kebun
{{-lafal-|ljl}}
* {{suara|ljl|LL-Q2697010_(ljl)-Ingin_Bunga-kami.wav}}
[[Kategori:WikiTutur - Lio]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
=={{bahasa|lbx}}==
{{kepala|lbx}}
{{-n-|lbx}}
# tangan
9nid99ccp27kybeyndk6eocsdqfvmvg
manusia
0
2009
1349284
1280455
2026-04-10T19:20:29Z
Swarabakti
18192
1349284
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
# {{rfdef|id}}
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
{{-etimologi-}}
* Dari {{der|id|sa|sc=Deva|मनुष|tr=manuṣa||manusia}}.
{{-rujukan-}}
*Russell Jones, Loan-words in Indonesian and Malay, (Jakarta: Yayasan Obor Indonesia, 2008)
*Sir Monier Monier-Williams, M.A., K.C.I.E (1899) Sanskrit-English Dictionary Etymologically and Philologically Arranged with Special Reference to Cognate Indo-European Languages. Oxford: University Press
*Arthur Anthony Macdonell (1929) A Practical Sanskrit Dictionary With Transliteration, Accentuation, and Etymological Analysis Throughout. London: Oxford University Press
*{{R:KBBI Daring}}
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
{{-turunan-|id}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m|sc=Cyrl}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
4ux8t9gabwjmzjbn4k7fcwpy0ukk6ry
1349285
1349284
2026-04-10T19:21:39Z
Swarabakti
18192
1349285
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{der|id|sa|sc=Deva|मनुष|tr=manuṣa||manusia}}.
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m|sc=Cyrl}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
35dpuxuk896y0c9mcvw780ur4nj13l6
1349286
1349285
2026-04-10T19:22:12Z
Swarabakti
18192
1349286
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{der|id|sa|मनुष||manusia}}.
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
77l3234wq3elavyo9rjzh6q9dn2jc38
1349287
1349286
2026-04-10T19:22:43Z
Swarabakti
18192
1349287
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{der|id|sa|मनुष||manusia}}.
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
{{-terjemahan-}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
6f62syir4xozro3qmk69s6reecjnoyd
1349288
1349287
2026-04-10T19:23:10Z
Swarabakti
18192
1349288
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{der|id|sa|मनुष||manusia}}.
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
{{-terjemahan-}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
lgbhzjqxgweu2ro5axhr9k7cy9murwl
1349289
1349288
2026-04-10T19:24:24Z
Swarabakti
18192
1349289
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-etimologi-}}
: Dari {{der|id|sa|मनुष||manusia}}.
{{-n-|id}}
# makhluk yang berakal budi (mampu menguasai makhluk lain); insan; orang
#* {{RQ:Pantjasila
|text=manusia itu berasal dari beralihnja sekonjong-konjong seekor hewan, jg bertingkat - hidup tinggi, mendjadi „manusia jg pertama".
|page=17
|url=https://id.wikisource.org/wiki/Halaman:Pantjasila_oleh_Ki_Hadjar_Dewantara.pdf/23#:~:text=manusia%20itu%20berasal%20dari%20beralihnja%20sekonjong%2Dkonjong%20seekor%20hewan%2C%20jg%20bertingkat%20%2D%20hidup%20tinggi%2C%20mendjadi%20%E2%80%9Emanusia%20jg%20pertama%22.
}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page=1
|text=Adapun yang dimaksud dengan permainan rakyat adalah suatu kegiatan yang dilakukan oleh '''manusia''' pendukungnya guna kepentingan pembinaan jasmani dan sikap mental yang bersangkutan.
|url=https://id.wikisource.org/wiki/Halaman:Permainan_rakyat_daerah_Kalimantan_Selatan.pdf/9#:~:text=Adapun%20yang%20dimaksud%20dengan%20permainan%20rakyat%20adalah%20suatu%20kegiatan%20yang%20dilakukan%20oleh%20manusia%20pendukungnya%20guna%20kepentingan%20pembinaan%20jasmani%20dan%20sikap%20mental%20yang%20bersangkutan.
}}
#* {{RQ:Sejarah Daerah Bengkulu
|page=3
|text=kehidupan manusia yang sekarang ini merupakan mata rantai tak terpisahkan dari kehidupan manusia generasi sebelumnya.
|url=https://id.wikisource.org/wiki/Halaman:Sejarah_Daerah_Bengkulu.pdf/14#:~:text=kehidupan%20manusia%20yang%20sekarang%20ini%20merupakan%20mata%20rantai%20tak%20terpisahkan%20dari%20kehidupan%20manusia%20generasi%20sebelumnya.
}}
{{-terjemahan-}}
{{t-atas}}
* {{bhs|ar}}: {{t|ar|فَصْل}}, {{t|ar|مَوْسِم}}
* {{bhs|ms}}: {{t|ms|manusia}}
* {{bhs|en}}: {{t|en|human}}
* {{bhs|ru}}: {{t|ru|человек|m}}
* {{bhs|th}}: {{t|th|มนุษย์}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
7gm5kwdy4yk9jgdka4ut2juslj3lq25
Templat
0
2550
1349291
1023508
2026-04-10T21:04:55Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Wikikamus:Templat]]
1349291
wikitext
text/x-wiki
#ALIH [[Wikikamus:Templat]]
t128mwbddyo6v143lm2tp410abmnbd2
Template
0
2551
1349292
1023509
2026-04-10T21:05:05Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Wikikamus:Templat]]
1349292
wikitext
text/x-wiki
#ALIH [[Wikikamus:Templat]]
t128mwbddyo6v143lm2tp410abmnbd2
adas
0
21033
1349306
1109892
2026-04-11T04:07:57Z
Sekar Jarwo Soekarno
46072
1349306
wikitext
text/x-wiki
'''Bahasa Sasaq'''
adas (nama sejenis biji-bijian, untukbumbu atau obat);
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# tumbuhan bergetah yang tingginya kira-kira 1,5 m, bijinya dijadikan minyak untuk obat; ''Foeniculum vulgare''
# tanaman serupa rempah
{{-turunan-|id}}
{{-terjemahan-}}
{{t-atas}}
* {{fr}} : {{trad-|fr|anis}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
8vw973i2oo0cbfxlbgahwkkr06m5dq3
adat
0
21035
1349304
1292070
2026-04-11T04:06:43Z
Sekar Jarwo Soekarno
46072
1349304
wikitext
text/x-wiki
Bahasa Sasaq
endiqne taoq
tidak tahu atur
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# aturan (perbuatan dsb) yang lazim diturut atau dilakukan sejak dahulu kala
#:''menurut '''adat''' daerah ini, laki-lakilah yang berhak sebagai ahli waris''
# cara (kelakuan dsb) yang sudah menjadi kebiasaan; kebiasaan
#:''demikianlah '''adatnya''' apabila ia marah; (pada) adatnya''
# wujud gagasan kebudayaan yang terdiri atas nilai-nilai budaya, norma, hukum, dan aturan yang satu dengan lainnya berkaitan menjadi suatu sistem
#* {{RQ:20 Mei Pelopor 17 Agustus
|page=n.d
|text=Tentang Daeng Aru Palaka ditjeriterakan, bahwa kemudian dia dapat dipegang kembali oleh Trunodjojo dan dihukum mati menurut '''adat'''-perang.
|norm=Tentang Daeng Aru Palaka diceriterakan, bahwa kemudian dia dapat dipegang kembali oleh Trunojoyo dan dihukum mati menurut '''adat'''-perang.
|url=https://id.wikisource.org/wiki/20_Mei_Pelopor_17_Agustus#:~:text=Tentang%20Daeng%20Aru%20Palaka%20ditjeriterakan%2C%20bahwa%20kemudian%20dia%20dapat%20dipegang%20kembali%20oleh%20Trunodjojo%20dan%20dihukum%20mati%20menurut%20adat%2Dperang
}}
# {{klasik}} cukai menurut peraturan yang berlaku (di pelabuhan dsb)
{{-etimologi-}}
* Dari [''[[Persia]]'' '''''عادة ‘ādat''''' 'kebiasaan; cara; penggunaan; upacara; observasi', dari ''[[Arab]]'' '''''عَادَةٌ ‘ādah''''' 'terus-menerus melakukan sebuah aktivitas sampai menjadi karakter dan kebiasaan', dari ''[[Arab]]'' '''''عَوَّدَ ‘awwada''''' 'membiasakan', dari ''[[Arab]]'' '''''عَادَ ‘āda''''' 'kembali']
{{-rujukan-}}
* Russell Jones, Loan-words in Indonesian and Malay, (Jakarta: Yayasan Obor Indonesia, 2008)
* John Richardson, A Dictionary Persian, Arabic, English, (London, 1806)
* al-Khalīl, al-‘Ain, (Beirut: Dār Maktabah al-Hilāl, t.th)
* Ibn Manẓūr, Lisan al-'Arab, (Cairo: Dār al-Ma‘ārif, 1431 H)
* {{R:KBBI Daring}}
{{-turunan-|id}}
{{-terjemahan-}}
{{t-atas}}
*{{gor}}: {{t+|gor|aadati}}
* {{fr}} : {{trad-|fr|coutume}}, {{trad-|fr|tradition}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-n-|bjn}}
# sopan santun;
# adat istiadat
=={{bahasa|bts}}==
{{kepala|bts}}
{{-n-|bts}} ᯀᯑᯖ᯲
# {{l|id|adat}}
=={{bahasa|pey}}==
{{kepala|pey}}
{{suara|pey|LL-Q940486 (pey)-Bangrapip-adat.wav}}
{{-n-|pey}}
# [[adat]], [[kebiasaan]]
=={{bahasa|su}}==
{{kepala|su}}
{{-n-|su}} ᮃᮓᮒ᮪
{{suword|hb=panganggo|hs=adat|l=parangi, tabéat|ch=dodongés}}
# {{sedeng}} [[perangai]], [[tabiat]]
[[Kategori:WikiTutur - Peco]]
[[Kategori:WikiTutur Jakarta 3 Februari 2024]]
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-adj-|bkr}}
# [[adat;adab]]
#: ''ukeh jite ba adat''
#: orang itu tahu adat
[[Kategori:WikiBalalah - Bakumpai]]
69eqcnfszgskw6c87q9zb00zeo4mlq6
1349305
1349304
2026-04-11T04:07:06Z
Sekar Jarwo Soekarno
46072
1349305
wikitext
text/x-wiki
'''Bahasa Sasaq'''
endiqne taoq
tidak tahu atur
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# aturan (perbuatan dsb) yang lazim diturut atau dilakukan sejak dahulu kala
#:''menurut '''adat''' daerah ini, laki-lakilah yang berhak sebagai ahli waris''
# cara (kelakuan dsb) yang sudah menjadi kebiasaan; kebiasaan
#:''demikianlah '''adatnya''' apabila ia marah; (pada) adatnya''
# wujud gagasan kebudayaan yang terdiri atas nilai-nilai budaya, norma, hukum, dan aturan yang satu dengan lainnya berkaitan menjadi suatu sistem
#* {{RQ:20 Mei Pelopor 17 Agustus
|page=n.d
|text=Tentang Daeng Aru Palaka ditjeriterakan, bahwa kemudian dia dapat dipegang kembali oleh Trunodjojo dan dihukum mati menurut '''adat'''-perang.
|norm=Tentang Daeng Aru Palaka diceriterakan, bahwa kemudian dia dapat dipegang kembali oleh Trunojoyo dan dihukum mati menurut '''adat'''-perang.
|url=https://id.wikisource.org/wiki/20_Mei_Pelopor_17_Agustus#:~:text=Tentang%20Daeng%20Aru%20Palaka%20ditjeriterakan%2C%20bahwa%20kemudian%20dia%20dapat%20dipegang%20kembali%20oleh%20Trunodjojo%20dan%20dihukum%20mati%20menurut%20adat%2Dperang
}}
# {{klasik}} cukai menurut peraturan yang berlaku (di pelabuhan dsb)
{{-etimologi-}}
* Dari [''[[Persia]]'' '''''عادة ‘ādat''''' 'kebiasaan; cara; penggunaan; upacara; observasi', dari ''[[Arab]]'' '''''عَادَةٌ ‘ādah''''' 'terus-menerus melakukan sebuah aktivitas sampai menjadi karakter dan kebiasaan', dari ''[[Arab]]'' '''''عَوَّدَ ‘awwada''''' 'membiasakan', dari ''[[Arab]]'' '''''عَادَ ‘āda''''' 'kembali']
{{-rujukan-}}
* Russell Jones, Loan-words in Indonesian and Malay, (Jakarta: Yayasan Obor Indonesia, 2008)
* John Richardson, A Dictionary Persian, Arabic, English, (London, 1806)
* al-Khalīl, al-‘Ain, (Beirut: Dār Maktabah al-Hilāl, t.th)
* Ibn Manẓūr, Lisan al-'Arab, (Cairo: Dār al-Ma‘ārif, 1431 H)
* {{R:KBBI Daring}}
{{-turunan-|id}}
{{-terjemahan-}}
{{t-atas}}
*{{gor}}: {{t+|gor|aadati}}
* {{fr}} : {{trad-|fr|coutume}}, {{trad-|fr|tradition}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-n-|bjn}}
# sopan santun;
# adat istiadat
=={{bahasa|bts}}==
{{kepala|bts}}
{{-n-|bts}} ᯀᯑᯖ᯲
# {{l|id|adat}}
=={{bahasa|pey}}==
{{kepala|pey}}
{{suara|pey|LL-Q940486 (pey)-Bangrapip-adat.wav}}
{{-n-|pey}}
# [[adat]], [[kebiasaan]]
=={{bahasa|su}}==
{{kepala|su}}
{{-n-|su}} ᮃᮓᮒ᮪
{{suword|hb=panganggo|hs=adat|l=parangi, tabéat|ch=dodongés}}
# {{sedeng}} [[perangai]], [[tabiat]]
[[Kategori:WikiTutur - Peco]]
[[Kategori:WikiTutur Jakarta 3 Februari 2024]]
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-adj-|bkr}}
# [[adat;adab]]
#: ''ukeh jite ba adat''
#: orang itu tahu adat
[[Kategori:WikiBalalah - Bakumpai]]
7h6mckf3i4kk61vm19ls35rgnni4bxs
adil
0
21040
1349302
1292116
2026-04-11T04:03:52Z
Sekar Jarwo Soekarno
46072
1349302
wikitext
text/x-wiki
Bahasa Sasaq
"kepale dese nu tedemen
isiq dengan Xueq sengaqne kepala
desa itu disukai banyak orang krn
adil; pusake sino uah tebagi secure"
=={{bahasa|id}}==
{{kepala|id}}
{{-a-|id}} (''[[superlatif]]'' '''[[{{ter-|{{PAGENAME}}}}]]''')
# sama berat; tidak berat sebelah; tidak memihak: <br />''keputusan hakim itu adil''
# berpihak kepada yang benar; berpegang pada kebenaran
# sepantasnya; sepatutnya; tidak sewenang-wenang: <br />''para buruh mengemukakan tuntutan yang adil''
{{-turunan-|id}}
* [[diadili]]
* [[keadilan]]
* [[mengadili]]
* [[pengadilan]]
* [[peradilan]]
{{-terjemahan-}}
{{t-atas}}
*{{gor}}: {{t+|gor|aadili}}
* {{en}} : {{trad-|en|fair}}
* {{fr}} : {{trad-|fr|équitable}}, {{trad-|fr|juste}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
: {{suara|mui-plm|LL-Q12497929_(mis)-Iteng734-Adil.wav}}
{{-a-|mui-plm}}
# {{l|id|adil}}
[[Kategori:WikiTutur - Palembang]]
[[Kategori:WikiTutur Kopdar Palembang 19 Mei 2024]]
=={{bahasa|pey}}==
{{kepala|pey}}
{{suara|pey|LL-Q940486 (pey)-Bangrapip-adil.wav}}
{{-n-|pey}}
# {{l|id|adil}}
[[Kategori:WikiTutur Jakarta 3 Februari 2024]]
[[Kategori:WikiTutur - Peco]]
=={{bahasa|jv}}==
{{kepala|jv}}
{{-a-|jv}}
# {{kedu}} [[adil]]; adil:
#: ''bapakku'' ngomong'' aku'' kudu'' adil'' karo'' rak oleh'' pelit
#: bapakku'' ngomong'' aku'' harus'' adil'' dan'' ga boleh'' pelit
[[Kategori:WikiTutur - Jawa]]
[[Kategori:WikiTutur Kopdar Bandar Lampung 29 Juni 2024]]
ro1zooa6obdn8emhtcu8b78w9amf5sh
1349303
1349302
2026-04-11T04:04:33Z
Sekar Jarwo Soekarno
46072
1349303
wikitext
text/x-wiki
'''Bahasa Sasaq'''
"kepale dese nu tedemen
isiq dengan Xueq sengaqne kepala
desa itu disukai banyak orang krn
adil; pusake sino uah tebagi secure"
=={{bahasa|id}}==
{{kepala|id}}
{{-a-|id}} (''[[superlatif]]'' '''[[{{ter-|{{PAGENAME}}}}]]''')
# sama berat; tidak berat sebelah; tidak memihak: <br />''keputusan hakim itu adil''
# berpihak kepada yang benar; berpegang pada kebenaran
# sepantasnya; sepatutnya; tidak sewenang-wenang: <br />''para buruh mengemukakan tuntutan yang adil''
{{-turunan-|id}}
* [[diadili]]
* [[keadilan]]
* [[mengadili]]
* [[pengadilan]]
* [[peradilan]]
{{-terjemahan-}}
{{t-atas}}
*{{gor}}: {{t+|gor|aadili}}
* {{en}} : {{trad-|en|fair}}
* {{fr}} : {{trad-|fr|équitable}}, {{trad-|fr|juste}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
: {{suara|mui-plm|LL-Q12497929_(mis)-Iteng734-Adil.wav}}
{{-a-|mui-plm}}
# {{l|id|adil}}
[[Kategori:WikiTutur - Palembang]]
[[Kategori:WikiTutur Kopdar Palembang 19 Mei 2024]]
=={{bahasa|pey}}==
{{kepala|pey}}
{{suara|pey|LL-Q940486 (pey)-Bangrapip-adil.wav}}
{{-n-|pey}}
# {{l|id|adil}}
[[Kategori:WikiTutur Jakarta 3 Februari 2024]]
[[Kategori:WikiTutur - Peco]]
=={{bahasa|jv}}==
{{kepala|jv}}
{{-a-|jv}}
# {{kedu}} [[adil]]; adil:
#: ''bapakku'' ngomong'' aku'' kudu'' adil'' karo'' rak oleh'' pelit
#: bapakku'' ngomong'' aku'' harus'' adil'' dan'' ga boleh'' pelit
[[Kategori:WikiTutur - Jawa]]
[[Kategori:WikiTutur Kopdar Bandar Lampung 29 Juni 2024]]
p2a24em2sjwgtbnaucul0s7exh6lbh0
anak
0
21314
1349326
1335422
2026-04-11T06:53:33Z
Pitchrigi
38796
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [bew]
1349326
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# seseorang yang belum dewasa, baik berdasarkan jasmani, budaya, atau hukum
#* {{RQ:Perahu Tulis
|page= 174
|text= Jarak dari rumah ke kota menghabiskan waktu sekitar tiga jam, namun efek kota tidak sedikit berpengaruh pada masyarakat di sini, dimana gaya hidup '''anak''' kota sudah banyak ditemukan.
|url= https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/186#:~:text=Jarak%20dari%20rumah%20ke%20kota%20menghabiskan%20waktu%20sekitar%20tiga%20jam%2C%20namun%20efek%20kota%20tidak%20sedikit%20berpengaruh%20pada%20masyarakat%20di%20sini%2C%20dimana%20gaya%20hidup%20anak%20kota%20sudah%20banyak%20ditemukan.
}}
#: {{sinonim|id|bayi|orok|bocah}}
# keturunan yang kedua
#: {{sinonim|id|putra|putri}}
#: ''Ini bukan '''anak'''nya, melainkan cucunya.''
# manusia yang masih kecil
#: {{sinonim|id|ananda|arek|awang|bani|bocah|budak|bujang|buyung|entong|ujang}}
#: '''''Anak''' itu baru berumur enam tahun.''
# binatang yang masih kecil
#: '''''Anak''' ayam itu berciap-ciap mencari induknya.''
#: {{sinonim|id|piyik}}
# pohon kecil yang tumbuh pada umbi atau rumpun tumbuh-tumbuhan yang besar
#: {{sinonim|id|anak cabang|cabang|cawang|pecahan|ranting|tunas|jipang|sangkak}}
#: '''''Anak''' pisang''
# orang yang berasal dari atau dilahirkan di (suatu negeri, daerah, dsb.)
#: {{sinonim|id|orang|penduduk|warga}}
#: '''''Anak''' Jakarta
#: '''''Anak''' Medan''
# orang yang termasuk dalam suatu golongan pekerjaan (keluarga dsb.)
#: {{sinonim|id|anggota}}
#: '''''Anak''' kapal
#: '''''Anak''' komidi''
# bagian yang kecil (pada suatu benda)
#: '''''Anak''' baju''
# yang lebih kecil daripada yang lain
#: '''''Anak''' bukit''
{{-turunan-}}
{{kotak daftar|id|title=Kata turunan
|anak-anak
|anak-anakan
|anak-beranak
|anakan
|beranak
|memperanakkan
|peranakan
}}
{{kotak daftar|id|title=Gabungan kata
|anak Adam
|anak air
|anak ajaib
|anak alang
|anak Allah
|anak ampang
|anak andaman
|anak angin
|anak angkat
|anak anjing
|anak asuh
|anak baju
|anak bala
|anak balam
|anak bangsa
|anak bangsawan
|anak bapak
|anak batu tulis
|anak bawang
|anak bedil
|anak benua
|anak berbakat
|anak berkundang
|anak bersagar
|anak bilik
|anak bini
|anak buah
|anak bukit
|anak buncit
|anak bungsu
|anak busur
|anak buyung
|anak cabang
|anak cobek
|anak cucu
|anak dabus
|anak dacin
|anak dagang
|anak dapat
|anak dara
|anak Daud
|anak daun
|anak dayung
|anak didik
|anak domba
|anak emas
|anak gadis
|anak gahara
|anak gedongan
|anak geladak
|anak genta
|anak ginjal
|anak gobek
|anak gugur
|anak haram
|anak hitam
|anak ibu
|anak jadah
|anak jari
|anak jawi
|anak jentera
|anak judul
|anak kalimat
|anak kandung
|anak kapal
|anak kembar
|anak kemenakan
|anak kencing
|anak keti
|anak kolong
|anak komidi
|anak kuar
|anak kukut
|anak kunci
|anak lidah
|anak limpa
|anak liplap
|anak lombong
|anak luar nikah
|anak lumpang
|anak mampu didik
|anak mampu latih
|anak mampu rawat
|anak manusia
|anak mas
|anak mata
|anak meja
|anak mentimun
|anak muda
|anak murid
|anak nakal
|anak negara
|anak negeri
|anak obat
|anak orang
|anak panah
|anak pandak
|anak panggung
|anak panjang
|anak pelor
|anak penak
|anak perahu
|anak perusahaan
|anak piara
|anak piatu
|anak pinak
|anak pisang
|anak prasekolah
|anak pungut
|anak putih
|anak ragil
|anak rambut
|anak rantau
|anak ronggeng
|anak saku
|anak sanak
|anak sandiwara
|anak sapihan
|anak sasian
|anak sekolah
|anak semang
|anak sipil
|anak sukar
|anak sulung
|anak sumbang
|anak sundal
|anak sungai
|anak sunti
|anak susuan
|anak tangan
|anak tangga
|anak tari
|anak taruhan
|anak tekak
|anak timbangan
|anak tiri
|anak tolakan
|anak tonil
|anak torak
|anak tuna
|anak tunalaras
|anak tunggal
|anak uang
|anak wayang
|anak yatim
|anak zadah
|bagi anak
|dokter anak
|membuang anak
|menanggang anak
|mengambil anak
|menjolok anak
|merintang anak
|pemakan anak
|ransum anak
|rasio anak wanita
|sel anak
|selusuh anak
|tulah anak
|tunjangan anak
}}
{{kotak daftar|id|title=Peribahasa
|anak ayam kehilangan induk
|anak harimau takkan menjadi anak kambing
|anak orang, anak orang juga
|bagai anak bercerai susu
|bagai kucing kehilangan anak
|biar mati anak asal jangan mati adat
|kasih ibu sepanjang jalan, kasih anak sepanjang penggalan
|mati anak berkalang bapak, mati bapak berkalang anak
|menggantang anak ayam
|rusak anak oleh menantu
}}
{{-terkait-}}
* {{l|id|kanak-kanak}}
{{-terjemahan-}}
{{t-atas|orang yang belum dewasa}}
*{{bhs|nl}} : {{t|nl|kind}}
*{{bhs|gor}}: {{t|gor|wala'o}}
*{{bhs|en}} : {{t|en|child}}, {{t|en|infant}}
*{{bhs|de}} : {{t|de|Kind}}
*{{bhs|fr}} : {{t|fr|enfant}}
*{{bhs|ru}} : {{t|ru|ребёнок}}
*{{bhs|su}}: {{t|su|budak}}, {{t|su|murangkalih}}
{{t-bawah}}
{{-rujukan-}}
* {{R:KBBI}}
{{rfv|id|impor dari KBBI}}
[[Kategori:id:Keluarga]]
=={{bahasa|akg}}==
{{kepala|akg}}
{{-n-|akg}}
# {{l|id|anak}}
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-n-|bkr}}
# {{l|id|anak}}
[[Kategori:WikiBalalah - Bakumpai]]
=={{bahasa|xkl}}==
{{kepala|xkl}}
{{-n-|xkl}}
# {{l|id|anak}}
=={{bahasa|ban}}==
{{kepala|ban}}
{{-n-|ban}}
# [[orang]]
# pemarkah kontras
# (''Bali Kuno'') [[warga]], [[penduduk]]
# (''Bali Kuno'') [[cabang]]
{{-pronomina-|ban}}
# {{q|penunjuk}} {{sinonim dari|ban|ana|t=ada}}
{{-rujukan-}}
* {{R:KBB}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-n-|bjn}}
# {{l|id|anak}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-lafal-|bjn}}
* {{suara|bjn|LL-Q33151_(bjn)-Malamilai-anak.wav}}
#: {{ux|bjn|anak ikam bagawi di mana wayah ni?| anakmu bekerja di mana sekarang?}}
[[Kategori:WikiTutur 3.0 - Banjar]]
[[Kategori:WikiTutur 3.0 Banjarmasin 15 Februari 2026]]
=={{bahasa|bew}}==
{{kepala|bew}}
{{-n-|bew}}
# anak
=={{bahasa|bqr}}==
{{kepala|bqr}}
: {{suara|bqr|LL-Q5001028 (bqr)-Apriana (Egie Allinskie)-anak.wav}}
{{-n-|bqr}}
# {{l|id|anak}}
=={{bahasa|gay}}==
{{kepala|gay}}
{{-n-|gay}}
# {{l|id|anak}}
=={{bahasa|iba}}==
{{kepala|iba}}
{{-n-|iba}}
# {{l|id|anak}}
# {{label|iba|Kristen}} {{w|Allah Anak}}
=={{bahasa|pea}}==
{{kepala|pea}}
{{-n-|pea}}
# {{l|id|anak}}
=={{bahasa|jv}}==
{{kepala|jv}}
{{-etimologi-}}
:Diwariskan dari {{Inh*|jv|poz-pro|anak}}, dari {{Inh*|jv|map-pro|aNak}}.
{{-n-|jv}}
# {{l|id|anak}}
{{-rujukan-}}
*{{R:map:ACD|aNak}}
=={{bahasa|kaw}}==
{{kepala|kaw}}
{{-n-|kaw}}
# {{l|id|anak}}
=={{bahasa|kkv}}==
{{kepala|kkv}}
{{-n-|kkv}}
# {{sinonim dari|kkv|nanak}}
=={{bahasa|btx}}==
{{kepala|btx}}
{{-n-|btx}}
# {{l|id|anak}}
=={{bahasa|kys}}==
{{kepala|kys}}
{{-n-|kys}}
# {{l|id|anak}}
=={{bahasa|kzi}}==
{{kepala|kzi}}
{{-n-|kzi}}
# {{l|id|anak}}
=={{bahasa|mad}}==
{{kepala|mad}}
{{-n-|mad}}
# {{sinonim dari|mad|kanak}}
=={{bahasa|mqy}}==
{{kepala|mqy}}
{{-n-|mqy}}
# {{l|id|anak}}
=={{bahasa|ms}}==
{{kepala|ms}}
: {{AFI|ms|/anak/}}
:* {{suara|ms|LL-Q9237 (msa)-Caca (Muhammad Rifqi Saputra)-anak.wav|q=Sambas}}
:* {{suara|ms|LL-Q9237 (msa)-Khairunisa (Melepok)-anak.wav|q=Pontianak}}
{{-n-|ms}}
# {{l|id|anak}}
[[Kategori:WikiTutur 2.0 - Melayu Sambas]]
[[Kategori:WikiTutur 2.0 - Melayu Pontianak]]
[[Kategori:Kopdar WikiTutur 2.0 Jakarta 29 April 2025]]
=={{bahasa|jax}}==
{{kepala|jax}}
: {{suara|jax|LL-Q3915769 (jax)-Sultan Toktik-anak.wav|q=Jambi Seberang}}
{{-n-|jax}}
# {{l|id|anak}}
=={{bahasa|kvb}}==
{{kepala|kvb}}
:{{pemenggalan|kvb|a|nak}}
{{-n-|kvb}}
# {{l|id|anak}}
{{-turunan-|kvb}}
# [[anak ambik]]= anak angkat
# [[anak balok]]= orang yang ahli menebang pohon-pohon besar dan paham seluk-beluk hutan di sekitar tempat tinggalnya
# [[anak dalam]]= jabatan dalam organisasi sosial SAD yang bertugas menjemput tumenggung ke sidang adat
# [[anak tanggo]]= anak tangga
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
{{-n-|mui-plm}}
# {{ragam bentuk dari|mui-plm|ana'}}
=={{bahasa|pag}}==
{{kepala|pag}}
{{-n-|pag}}
# {{l|id|anak}}
=={{bahasa|smw}}==
{{kepala|smw}}
{{-n-|smw}}
# {{l|id|anak}}
=={{bahasa|tsg}}==
{{kepala|tsg}}
{{-n-|tsg}}
# {{l|id|anak}}
=={{bahasa|tes}}==
{{kepala|tes}}
: {{suara|tes|LL-Q9240 (ind)-Romo Eko (Amidaxaviera)-anak.wav}}
{{-n-|tes}}
# {{l|id|anak}}
[[Kategori:Kopdar WikiTutur 2.0 Jakarta 20 April 2025]]
[[Kategori:WikiTutur 2.0 - Tengger]]
=={{bahasa|osi}}==
{{kepala|osi}}
{{-n-|osi}}
# {{l|id|anak}}
{{-rujukan-}}
* {{R:KBOB}}
=={{bahasa|tao}}==
{{kepala|tao}}
{{-n-|tao}}
# {{l|id|anak}}
5gxu0znkw7at1cvsxfixwfwax41xghy
matahari
0
23053
1349271
1337200
2026-04-10T16:02:08Z
Swarabakti
18192
1349271
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-ragam-}}
* {{ragam|id|Matahari||kapitalisasi, khusus untuk makna nama diri}}
{{-etimologi-}}
: {{inh+|id|ms|matahari}}, dari {{inh|id|poz-mly-pro|*matahari}}, {{sebut|poz-mly-pro|*mata}} + {{sebut|poz-mly-pro|*hari}}.
{{-n-|id}}
# [[benda langit]] yang [[menyinari]] [[bumi]] di [[siang hari]]
#: {{sinonim|id|mentari|surya}}
#* {{RQ:Perahu Tulis
|page= 173
|text= Pagi itu cukup cerah dengan sinar '''mentari''' yang menyinari bumi.
|url= https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/185#:~:text=Pagi%20itu%20cukup%20cerah%20dengan%20sinar%20mentari%20yang%20menyinari%20bumi.%20
}}
# [[cahaya]] dan [[panas]] dari matahari
{{-pn-|id}}
# [[bintang]] yang paling [[dekat]] dari [[bumi]]
{{-rujukan-}}
* {{R:KBBI Daring}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-n-|bjn}}
# matahari
{{-lafal-|bjn}}
* {{suara|bjn|LL-Q33151 (bjn)-Salsa66syifa-matahari.wav|‘‘Suara’’}}
[[Kategori:WikiTutur - Banjar]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
=={{bahasa|lbx}}==
{{kepala|lbx}}
{{-lafal-}}
*{{suara|lbx|LL-Q3120345 (lbx)-Mardiansyah (Hair alex)-mato’ lou.wav}}
{{-n-|lbx}}
#: {{label|lbx|Balik}} [[matahari]]
5jywa7t8gemocu90f0ujmb5h1tean7z
1349272
1349271
2026-04-10T16:02:59Z
Swarabakti
18192
1349272
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-ragam-}}
* {{ragam|id|Matahari||kapitalisasi, khusus untuk makna nama diri}}
{{-etimologi-}}
: {{inh+|id|ms|matahari}}, dari {{inh|id|poz-mly-pro|*matahari}}, {{sebut|poz-mly-pro|*mata}} + {{sebut|poz-mly-pro|*hari}}.
{{-n-|id}}
# [[benda langit]] yang [[menyinari]] [[bumi]] di [[siang hari]]
#: {{sinonim|id|mentari|surya}}
#* {{RQ:Perahu Tulis
|page= 173
|text= Pagi itu cukup cerah dengan sinar '''mentari''' yang menyinari bumi.
|url= https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/185#:~:text=Pagi%20itu%20cukup%20cerah%20dengan%20sinar%20mentari%20yang%20menyinari%20bumi.%20
}}
# [[cahaya]] dan [[panas]] dari matahari
{{-pn-|id}}
# [[bintang]] yang paling [[dekat]] dari [[bumi]]
{{-bacaan-}}
* {{R:KBBI Daring}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
: {{suara|bjn|LL-Q33151 (bjn)-Salsa66syifa-matahari.wav}}
{{-n-|bjn}}
# matahari
[[Kategori:WikiTutur - Banjar]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
=={{bahasa|lbx}}==
{{kepala|lbx}}
: {{suara|lbx|LL-Q3120345 (lbx)-Mardiansyah (Hair alex)-mato’ lou.wav}}
{{-n-|lbx}}
# {{label|lbx|Balik}} [[matahari]]
tivkdubmsgnuf3vtfwuue02mz6nq16y
1349273
1349272
2026-04-10T16:03:40Z
Swarabakti
18192
1349273
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-ragam-}}
* {{ragam|id|Matahari||kapitalisasi, khusus untuk makna nama diri}}
{{-etimologi-}}
: {{inh+|id|ms|matahari}}, dari {{inh|id|poz-mly-pro|*matahari}}, {{sebut|poz-mly-pro|*mata}} + {{sebut|poz-mly-pro|*hari}}.
{{-n-|id}}
# [[benda langit]] yang [[menyinari]] [[bumi]] di [[siang hari]]
#: {{sinonim|id|mentari|surya}}
#* {{RQ:Perahu Tulis
|page= 173
|text= Pagi itu cukup cerah dengan sinar '''mentari''' yang menyinari bumi.
|url= https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/185#:~:text=Pagi%20itu%20cukup%20cerah%20dengan%20sinar%20mentari%20yang%20menyinari%20bumi.%20
}}
# [[cahaya]] dan [[panas]] dari matahari
{{-pn-|id}}
# [[bintang]] yang paling [[dekat]] dari [[bumi]]
{{-bacaan-}}
* {{R:KBBI Daring}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
: {{suara|bjn|LL-Q33151 (bjn)-Salsa66syifa-matahari.wav}}
{{-n-|bjn}}
# {{l|id|matahari}}
[[Kategori:WikiTutur - Banjar]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
=={{bahasa|lbx}}==
{{kepala|lbx}}
: {{suara|lbx|LL-Q3120345 (lbx)-Mardiansyah (Hair alex)-mato’ lou.wav}}
{{-n-|lbx}}
# {{label|lbx|Balik}} {{l|id|matahari}}
okjgnip006qwydvhnb19h6ogj652t9x
1349274
1349273
2026-04-10T16:06:23Z
Swarabakti
18192
@[[Pengguna:Losstreak|Losstreak]]: bantu periksa, ini kutipannya semestinya di "mentari" yaa
1349274
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-ragam-}}
* {{ragam|id|Matahari||kapitalisasi, khusus untuk makna nama diri}}
{{-etimologi-}}
: {{inh+|id|ms|matahari}}, dari {{inh|id|poz-mly-pro|*matahari}}, {{sebut|poz-mly-pro|*mata}} + {{sebut|poz-mly-pro|*hari}}.
{{-n-|id}}
# [[benda langit]] yang [[menyinari]] [[bumi]] di [[siang hari]]
#: {{sinonim|id|mentari|surya}}
<!--Pindahkan ke mentari: #* {{RQ:Perahu Tulis
|page= 173
|text= Pagi itu cukup cerah dengan sinar '''mentari''' yang menyinari bumi.
|url= https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/185#:~:text=Pagi%20itu%20cukup%20cerah%20dengan%20sinar%20mentari%20yang%20menyinari%20bumi.%20
}}-->
# [[cahaya]] dan [[panas]] dari matahari
{{-pn-|id}}
# [[bintang]] yang paling [[dekat]] dari [[bumi]]
{{-bacaan-}}
* {{R:KBBI Daring}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
: {{suara|bjn|LL-Q33151 (bjn)-Salsa66syifa-matahari.wav}}
{{-n-|bjn}}
# {{l|id|matahari}}
[[Kategori:WikiTutur - Banjar]]
[[Kategori:WikiTutur Daring 24 Maret 2024]]
=={{bahasa|lbx}}==
{{kepala|lbx}}
: {{suara|lbx|LL-Q3120345 (lbx)-Mardiansyah (Hair alex)-mato’ lou.wav}}
{{-n-|lbx}}
# {{label|lbx|Balik}} {{l|id|matahari}}
lbccgipvtm3ijy2pd4xfokos07sm3lr
gula
0
23056
1349299
1349152
2026-04-11T02:07:56Z
Biyanto R
38480
1349299
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
: {{pemenggalan|id|gu|la}} {{IPA|id|/gu.la/}}
* {{suara|id|:LL-Q9240 (ind)-Xbypass-gula.wav|Audio}}
{{-etimologi-}}
: Sanskerta ''guḍa'' (गुड)
{{-n-|id}}
# bahan [[pemanis]] dalam bentuk [[padatan]] atau [[butiran]] yang dibuat dari air [[tebu]], [[aren]], atau [[kelapa]]
#* {{RQ:Mustikarasa
|page= 58
|text= '''Gula''' ini dibuat dari sari tebu, jang setelah disaring lalu dikristalkan dengan dimasak
|norm= '''Gula''' ini dibuat dari sari tebu, yang setelah disaring lalu dikristalkan dengan dimasak
|url= https://id.wikisource.org/wiki/Halaman:Mustikarasa.pdf/66#:~:text=Gula%20ini%20dibuat%20dari%20sari%20tebu%2C%20jang%20setelah%20disaring%20lalu%20dikristalkan%20dengan%20dimasak
}}
{{-terjemahan-}}
{{t-atas}}
* {{bhs|de}}: {{t|de|Zucker}}
* {{bhs|en}}: {{t|en|sugar}}
* {{bhs|fr}}: {{t|fr|sucre}}
* {{bhs|th}}: {{t|th|น้ำตาล}}
{{t-bawah}}
{{id-cat|Food}}
=={{bahasa|jv}}==
{{kepala|jv}}
{{-n-|jv}}
# [[Gula]]
#: ''Nggawa '''gula''' sak kilo'
#:Membawa gula satu kilo
{{-lafal-|jv}}
* {{suara|jv|LL-Q33549 (jav)-Tetheow-gula.wav}}
[[Kategori:WikiTutur Daring Umum 11 Februari 2024]]
[[Kategori:WikiTutur - Jawa]]
=={{bahasa|tes}}==
{{kepala|tes}}
{{-n-|tes}}
# {{l|id|gula}}
{{-lafal-|tes}}
* {{suara|tes|LL-Q12473479 (tes)-Resmi (Bangrapip)-gula.wav}}
[[Kategori:Kopdar WikiTutur 2.0 Jakarta 20 April 2025]]
[[Kategori:WikiTutur 2.0 - Tengger]]
[[Category:WikiMaknyus Manado]]
[[Category:WikiMaknyus]]
=={{bahasa|xmm}}==
{{kepala|xmm}}
: {{suara|xmm|LL-Q9240 (ind)-Manadonese-Gula.wav}}
{{-n-|xmm}}
# gula, bahan pemanis
[[Category:WikiMaknyus Manado]]
[[Category:WikiMaknyus]]
gku227aflyb4idcoencsrbly9cq6k19
kemungkinan
0
24911
1349377
1202070
2026-04-11T10:42:10Z
Sofi Solihah
23681
1349377
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan ke-an|mungkin|kelas=n}}
# keadaan yang mungkin; keadaan yang memungkinkan sesuatu terjadi: <br />''kemungkinan untuk menyelusup tanpa diketahui masih ada''
# sesuatu yang mungkin terjadi: <br />''masih banyak kemungkinan untuk menang''
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Hal ini '''kemungkinan''' di samping permainan ini merupakan sarana hiburan bagi anak anak mereka juga sebenarnya masih terkandung unsur-unsur pembinaan latihan ketrampilan melempar.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Badurit#:~:text=Hal%20ini%20kemungkinan%20di%20samping%20permainan%20ini%20merupakan%20sarana%20hiburan%20bagi%20anak%20anak%20mereka%20juga%20sebenarnya%20masih%20terkandung%20unsur%2Dunsur%20pembinaan%20latihan%20ketrampilan%20melempar.
}}
{{-terjemahan-}}
{{t-atas}}
* {{en}} : {{trad-|en|possibility}}
* {{fr}} : {{trad-|fr|possibilité}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
2lpmuxhs69duwzhlw2fyju3sgqumqtw
kamu
0
39718
1349371
1342186
2026-04-11T07:39:27Z
Alfiyah Rizzy Afdiquni
40651
Arabic and madurese
1349371
wikitext
text/x-wiki
[[File:Kamu.webm|thumb|250px|start=1|end=4|Bahasa isyarat kata "Kamu" ]]
=={{bahasa|id}}==
{{kepala|id}}
{{-pron-|id}}
# kata ganti orang kedua tunggal (yang lebih bersifat akrab)
{{-lafal-|id}}
* {{IPA|/ka.mu/}}
* {{suara|id|LL-Q9240 (ind)-Swarabakti-kamu.wav}}
{{-sinonim-}}
* [[Anda]], [[engkau]] (formal)
* [[kau]], [[dirimu]] (non-formal)
* (plural) [[kalian]]
{{-antonim-|id}}
* [[aku]] (orang pertama)
* [[dia]] (orang ketiga)
=== Penggunaan ===
* Kata ganti "kamu" biasa digunakan dalam konteks non-formal, untuk orang yang sudah kenal dekat. Dalam konteks formal, atau dalam percakapan antara orang yang belum terlalu dekat, bisa digantikan dengan kata sapaan atau nama orang yang diajak bicara:
** ''Kamu pergi ke mana?'' (nonformal) - ''Bapak/Ibu/Kakak/Adik/Mas/Mbak pergi ke mana''? (formal)
** ''Sepedamu kamu bawa ke sini aja'' (kenal dekat) - ''Sepedanya mas Budi dibawa ke sini saja'' (belum kenal dekat)
:Lihat pula penggunaan di lema [[-mu]]/[[-nya]].
{{-terjemahan-}}
{{t-atas}}
* {{nl}} : {{trad-|nl|u}}
* {{bew}} : {{trad-|bew|elu}}, {{trad-|bew|énté}}
* {{meo}} : {{trad-|meo|hang}}
* {{zlm-sar}} : {{trad-|zlm-sar|kitak}}
*{{jv}} : {{trad-|jv|kowe}}
* {{en}} : {{trad-|en|you}}
* {{bhs|ar}}: {{t|ar|أنت}}
* {{bhs|mad}}: {{t|mad|bâ'na}}
* bahasa di Indonesia:
</strong>|alt=sampeyan}}
{{t-bawah}}
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
{{-pron-|mui-plm}}
# kata ganti orang kedua jamak; [[kalian]]
#: ''aku melok '''kamu''' bae''
#:: aku ikut '''kalian''' saja
{{-lafal-|mui-plm}}
* {{IPA|[ka.mu]}}
** {{suara|mui-plm|LL-Q12497929 (mis)-Clysmic-kamu.wav}}
{{-etimologi-}}
* Dari {{inh|mui-plm|poz-mly-pro|*kamu|gloss=kalian}}, dari {{inh|mui-plm|poz-pro|*kamu|gloss=kalian [nominatif]}}, dari {{inh|mui-plm|map-pro|*k-amu|gloss=kalian [nominatif]}}
{{-turunan-}}
* {{l|mui-plm|kamu-kamu}}
* {{l|mui-plm|sekamuan}}
[[Kategori:WikiTutur - Palembang]]
[[Kategori:WikiTutur Palembang 18 Februari 2024]]
=={{bahasa|kvb}}==
{{kepala|kvb}}
: {{pemenggalan|kvb|ka|mu}}
{{-pron-|kvb}}
# [[kalian]]
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
g75y3cymkzfy2jo8d4p0pjdyqkph40z
siapa
0
40030
1349373
1348389
2026-04-11T08:15:52Z
Alfiyah Rizzy Afdiquni
40651
Arabic and madurese
1349373
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-pron-|id}}
{{-lafal-|id}}
* {{IPA|/si.a.pa/}}
* {{suara|id|LL-Q9240 (ind)-Swarabakti-siapa.wav}}
# seseorang yang tidak tentu: <br />''Siapa yang bersalah harus dihukum.''
# kata untuk menanyakan nama orang
#: (kalimat tanya langsung) ''Siapa yang bersalah?''
#: (kalimat tanya tidak langsung) ''Kita tidak tahu siapa yang bersalah.''
# kata tanya untuk menanyakan nomina insan: <br />anak siapa dia?; adik siapa yang nakal itu?''
#* {{RQ:Perahu Tulis
| page = 121
| author = Balai Bahasa Sumatera Barat
| chapter =
| url = https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/133#:~:text=Kalau%20aku%20turun%2C%20nanti%20kau%20malah%20pulang.%20Dengan%20siapa%20lagi%20aku%20bisa%20belajar%3F%E2%80%9D
| text = Kalau aku turun, nanti kau malah pulang. Dengan '''siapa''' lagi aku bisa belajar?
}}
{{-etimologi-}}
* Pinjaman dari [[Bahasa Jawa]] (Kuno) '''''syapa''''' ['siapa']
{{-rujukan-}}
* Zoetmulder, P.J. 1982. Old Javanese-English dictionary. (Koninklijk Instituut voor Taal-, Land- en Volkenkunde.) The Hague: Martinus Nijhoff. (2 vols).
* Gericke, J.F.C., T. Roorda. 1847. Javaansch-Nederduitsch Woordenboek. Johannes Müller, Amsterdam & Brill. Leiden.
* L'Abbé P. Favre. 1870. Dictionnaire Javanais-Français. Imprimerie Impériale et Royale.
* Juynboll, H.H. 1923. Oudjavaansch-Nederlandsche Woordenlijst. E.J. Brill.
* Pigeaud, Th. 1938. Javaans-Nederlands handwoordenboek. Groningen: J.B. Wolters.
* Poerwadarminta, W.J.S. 1939. Bausastra Jawa. Groningen: J.B. Wolters.
* Robson, S.O & Wibisono, S. 1941. Old Javanese-English Dictionary. Periplus Editions, Hongkong.
* Horne, E.C. 1974. Javanese-English Dictionary. New Haven: Yale University Press, London.
* Zoetmulder, P.J & Robson, S.0. (2006). Kamus Jawa Kuna-Indonesia. (Penerjemah: Darusuprapta dan Sumarti Suprayitna). Jakarta. Gramedia Pustaka Utama.
* {{R:KBBI Daring}}
{{-terjemahan-}}
{{t-atas}}
* {{nl}} : {{trad-|nl|wie}}
* {{en}} : {{trad-|en|who}}
* {{ban}} : [[sira]], [[nyen]]
* {{jv}} : {{trad-|jv|sinten}} (krama), {{trad-|jv|sapa}} sopo (ngoko)
* {{fr}} : {{trad-|fr|qui}}
* {{es}} : {{trad+|es|quién}}
* {{de}} : {{trad+|de|wer}}
* {{bhs|ar}}: {{t|ar|من}}
* {{bhs|mad}}: {{t|mad|sapa}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
qwihg2rjusqzh7zr6x8157hktsnr3v7
kapan
0
44372
1349372
1348268
2026-04-11T08:12:19Z
Alfiyah Rizzy Afdiquni
40651
Arabic and madurese
1349372
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# {{ragam dari|id|kafan}}
{{-pron-|id}}
# [[kata tanya]] untuk menanyakan [[waktu]]
#: '''''Kapan''' dia akan pergi?''
{{-lafal-|id}}
* {{IPA|/ka.pan/}}
* {{suara|id|LL-Q9240 (ind)-Swarabakti-kapan.wav}}
* {{suara|id|Kapan.wav}}
{{-etimologi-}}
* Berasal dari ''[[bahasa Jawa Kuno]]'' ''"kapan"'' 'kata tanya untuk menanyakan waktu'
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 5
|text= Hanya '''kapan''' dimulai berkembangnya tidak dinyatakan dengan pasti.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Babanga#:~:text=Hanya%20kapan%20dimulai%20berkembangnya%20tidak%20dinyatakan%20dengan%20pasti.
}}
{{-rujukan-}}
* Zoetmulder, P.J., dan Robson, S.O. (2006). Kamus Jawa Kuna-Indonesia. (Darusuprapta dan Sumarti Suprayitna, Penerjemah). Jakarta: Gramedia Pustaka Utama.
* Wilkinson, R. J. 1959. A Malay-English Dictionary (Romanised). London: Macmillan.
* Stevens, A. M., Schmidgall-Tellings, A. E., & American Indonesian Chamber of Commerce. (2004). A comprehensive Indonesian-English dictionary (Second edition). Ohio University Press.
* Klinkert, H.C. 1892. Nieuw Maleisch-Nederlandsch zawoordenboek, ten behoove van hen, die het Maleisch met Latijnsch karakter beoefenen. Leiden, E.J. Brill.
* Adelaar, K. Alexander. (1985). Proto-Malayic: The reconstruction of its phonology aparts of its lexicon and morphology. Ph.D. thesis, University of Leiden (Netherland).
* Adelaar, K. Alexander. 1994. Bahasa Melayik Purba: Rekontruksi Fonologi dan Sebagian dari Leksikon dan Morfologi. Jakarta: RUL. Diterbitkan atas kerjasama dengan Universitas Leiden, Belanda.
* {{R:KBBI Daring}}
{{-terjemahan-}}
{{t-atas}}
* bahasa Inggris : {{t|en|when}}
* bahasa Esperanto : {{t|eo|kiam}}
* bahasa Swedia : {{t|eo|när}}
* {{bhs|ar}}: {{t|ar|متى}}
* {{bhs|mad}}: {{t|mad|bilâ}}
{{t-bawah}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|bew}}==
{{kepala|bew}}
{{-lafal-}}
*{{suara|bew|LL-Q33014 (bew)-Ahmad Mawardi (Swarabakti)-kapan.wav|Suara {{a|Tanah Abang}}}}
{{-pron-|bew}}
# [[sini]]
[[Kategori:WikiTutur 2.0 - Betawi]]
[[Kategori:Kolaborasi Tim WikiTutur 2.0]]
=={{bahasa|jv}}==
{{kepala|jv}}
{{-pron-|jv}} /ka.pan
# kapan; bila
#: ''Kapan kowe bali?''
#: Kapan kamu pulang?
{{-lafal-|jv}}
* {{suara|jv|LL-Q33549 (jav)-Srengenge nyunar-kapan.wav|q=Yogyakarta}}
* {{suara|id|LL-Q33549 (jav)-Raizan1-Kapan.wav|q=Banten}}
* {{suara|id|LL-Q33549_(jav)-Annidafattiya-kapan.wav|q=malang}}
[[Kategori:WikiTutur - Jawa]]
[[Kategori:WikiTutur - Jawa Banten]]
[[Kategori:WikiTutur Daring Umum 11 Februari 2024]]
[[Kategori:WikiTutur Daring 10 Maret 2024]]
=={{bahasa|su}}==
{{kepala|su}}
{{-pron-|su}}
# kapan; bila
=={{bahasa|ms}}==
{{kepala|ms}}
{{-pron-|ms}}
# kapan; bila
=={{bahasa|bew}}==
{{kepala|bew}}
{{-pron-|bew}}
# kapan; bila
=={{bahasa|ban}}==
{{kepala|ban}}
{{-pron-|ban}}
# kapan; bila
=={{bahasa|mad}}==
{{kepala|mad}}
{{-pron-|mad}}
# kapan; bila
=={{bahasa|osi}}==
{{kepala|osi}}
{{-pron-|osi}}
# kapan; bila
=={{bahasa|sas}}==
{{kepala|sas}}
{{-pron-|sas}}
# kapan; bila
=={{bahasa|tes}}==
{{kepala|tes}}
{{-pron-|tes}}
# kapan; bila
=={{bahasa|kkv}}==
{{kepala|kkv}}
{{-pron-|kkv}}
# kapan; bila
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-pron-|bjn}}
# kapan; bila
=={{bahasa|bug}}==
{{kepala|bug}}
{{-pron-|bug}}
# kapan; bila
=={{bahasa|mak}}==
{{kepala|mak}}
{{-pron-|mak}}
# kapan; bila
=={{bahasa|min}}==
{{kepala|min}}
{{-pron-|min}}
# kapan; bila
=={{batak}}==
{{-pron-|min}}
# kapan; bila
=={{bahasa|ljp}}==
{{kepala|ljp}}
{{-pron-|ljp}}
# kapan
{{-lafal-|ljp}}
* {{suara|ljp|LL-Q49215 (ljp)-WanaraLima-kapan.wav}}
[[Kategori:WikiTutur - Lampung Api]]
[[Kategori:WikiTutur Yogyakarta 18 Februari 2024]]
=={{bahasa|pmy}}==
{{kepala|pmy}}
{{-intj-|pmy}}
# kapan
#: '''''kapan''' tong pi kota ini''
#:: '''kapan''' kita pergi ke kota?
{{-lafal-|pmy}}
*{{suara|pmy|LL-Q12473446 (pmy)-Empat Tilda-dong.wav}}
=={{bahasa|mui-plm}}==
{{kepala|mui-plm}}
{{-pron-|mui-plm}}
# kata tanya untuk menanyakan waktu; [[#bahasa Indonesia|kapan]]
#: {{syn|mui-plm|bilo}}
#: '''''Kapan''' kamu nak besanjo?''
#:: '''Kapan''' kalian mau berkunjung?
{{-adv-|mui-plm}}
# pada saat; [[apabila]], [[ketika]]
#: {{syn|mui-plm|pangko|pas}}
#: ''Nyengeh bae dio '''kapan''' ditanyo gawean.''
#:: Dia hanya tersenyum '''ketika''' ditanya soal pekerjaan.
3fzep60o1m7x95hp1f3yhk1ewhdzsv5
didukung
0
45706
1349381
1318735
2026-04-11T10:59:14Z
Sofi Solihah
23681
1349381
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan di-|dukung}}
# {{rfdef|id}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Permainan ini merupakan permainan yang berkembang di masyarakat yaag tidak membedakan kelompok sosial tertentu. Karenanya permainan ini '''didukung''' oleh masyarakat, baik masyarakat petani, buruh, nelayan, pedagang dan lain - lainnya.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Permainan%20ini%20merupakan%20permainan%20yang%20berkembang%20di%20masyarakat%20yaag%20tidak%20membedakan%20kelompok%20sosial%20tertentu.%20Karenanya%20permainan%20ini%20didukung%20oleh%20masyarakat%2C%20baik%20masyarakat%20petani.%20buruh%2C%20nelayan%2C%20pedagang%20dan%20lain%20%2D%20lainnya.
}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
9020ce9nqw9jgp510p3dzew1vr4q4yh
peribahasa
0
45948
1349267
1123659
2026-04-10T12:19:02Z
~2026-22085-22
47548
/* {{bahasa|id}} */ kesiya robbiya
1349267
wikitext
text/x-wiki
===
* Teks judul
robbiya kesiya ===
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
{{indonesia-majemuk|peri|bahasa}}
# kelompok kata atau kalimat yang tetap susunannya, biasanya mengiaskan maksud tertentu (dalam peribahasa termasuk juga bidal, ungkapan, perumpamaan);
# ungkapan atau kalimat ringkas padat, berisi perbandingan, perumpamaan, nasihat, prinsip hidup atau aturan tingkah laku
[[Kategori:Turunan kata bahasa]]
l580axyfzsbm1iug77q4o6nnf6xyioa
1349301
1349267
2026-04-11T03:58:43Z
OrangKalideres
35065
Suntingan [[Special:Contributions/~2026-22085-22|~2026-22085-22]] ([[User talk:~2026-22085-22|bicara]]) dikembalikan ke versi terakhir oleh [[Special:Contributions/SwarabaktiBot|SwarabaktiBot]]
1123659
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
{{indonesia-majemuk|peri|bahasa}}
# kelompok kata atau kalimat yang tetap susunannya, biasanya mengiaskan maksud tertentu (dalam peribahasa termasuk juga bidal, ungkapan, perumpamaan);
# ungkapan atau kalimat ringkas padat, berisi perbandingan, perumpamaan, nasihat, prinsip hidup atau aturan tingkah laku
[[Kategori:Turunan kata bahasa]]
6i24l92lj0bpefq112bakmsedeyfpfo
sabi
0
47792
1349347
1125487
2026-04-11T07:11:10Z
Iripseudocorus
40083
1349347
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# kemeja
{{-v-|id}}
# {{cak}} [[bisa]]
=={{bahasa|gor}}==
{{kepala|gor}}
{{-n-|gor}}
# [[sabit]]
{{-turunan-|id}}
{{-terjemahan-}}
<!--Anda dapat menyalin templat {{t-atas}} -- {{t-bawah}} di bawah berulang kali untuk masing masing arti kata, masing-masing dibedakan melalui parameter pertamanya (misalkan {{t-atas|arti 1}} dan {{t-atas|arti 2}} dst). Lihat [[Wiktionary:Terjemahan]] untuk panduan membuat lebih dari satu kolom terjemahan-->
{{t-atas}}
{{t-bawah}}
o7io9d4kx4maw1lpynvo76son7574y3
1349359
1349347
2026-04-11T07:22:10Z
Iripseudocorus
40083
/* Bahasa Indonesia */
1349359
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# kemeja
{{-v-|id}}
# {{label|id|cakapan}} {{sinonim dari|id|bisa}}
=={{bahasa|gor}}==
{{kepala|gor}}
{{-n-|gor}}
# [[sabit]]
{{-turunan-|id}}
{{-terjemahan-}}
<!--Anda dapat menyalin templat {{t-atas}} -- {{t-bawah}} di bawah berulang kali untuk masing masing arti kata, masing-masing dibedakan melalui parameter pertamanya (misalkan {{t-atas|arti 1}} dan {{t-atas|arti 2}} dst). Lihat [[Wiktionary:Terjemahan]] untuk panduan membuat lebih dari satu kolom terjemahan-->
{{t-atas}}
{{t-bawah}}
ojtie2odh9ghwj2ih14twmktalk9q8s
kosek
0
104074
1349332
1206497
2026-04-11T07:05:29Z
Riiiv
40737
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [jv]
1349332
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{lihat 2|v=y}}
{{-turunan-|id}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|jv}}==
{{kepala|jv}}
{{-n-|jv}}
# [[sebentar]], [[nanti dulu]]
eyi7hzw9xf1izuxs8jfvii64yqf0xlc
gelang
0
106997
1349374
1346374
2026-04-11T10:34:27Z
Sofi Solihah
23681
1349374
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# barang yang berbentuk lingkaran atau cincin besar
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 31
|text= Selama masih ada orang atau warung yang menjual karet '''gelang''' maka permainan ini kemungkinan akan tetap masih berkembang.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Badurit#:~:text=Selama%20masih%20ada%20orang%20atau%20warung%20yang%20menjual%20karet%20gelang%20maka%20permainan%20ini%20kemungkinan%20akan%20tetap%20masih%20berkembang.
}}
# perhiasan (dari emas, perak, dsb.) berbentuk lingkaran yang dipakai di lengan atau di kaki
# {{Bio}} sisa cadar dalam yang melingkari tangkai cendawan tertentu, tudungnya mekar
{{-turunan-|id}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|tes}}==
{{kepala|tes}}
{{-n-|tes}}
# {{l|id|gelang}}
{{-lafal-|tes}}
* {{suara|tes|LL-Q12473479 (tes)-Resmi (Bangrapip)-gelang.wav}}
[[Kategori:Kopdar WikiTutur 2.0 Jakarta 20 April 2025]]
[[Kategori:WikiTutur 2.0 - Tengger]]
=={{bahasa|osi}}==
{{kepala|osi}}
: {{pemenggalan|osi|ge|lang}}
{{-etimologi-}}
: {{l|id|Jawa Kuno}}
{{-n-|osi}}
# {{l|id|gelang}}; benda yang berbentuk lingkaran atau cincin besar
{{-rujukan-}}
* Ali, Hasan. (2002). ''[https://web.archive.org/web/20260115111844/https://ebookbanyuwangi.id/assets/2022/kamus_using.pdf Kamus Bahasa Daerah Using-Indonesia]''. Banyuwangi: Pemerintah Kabupaten Banyuwangi.
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
4mnipbqhysgg9ima1abl174fvcyojbv
bidan
0
112030
1349300
1334141
2026-04-11T03:40:39Z
~2026-22282-04
47553
1349300
wikitext
text/x-wiki
=={{bahasa|ace}}==
{{kepala|ace}}
{{-lafal-|ace}}
* {{suara|ace|LL-Q27683 (ace)-Muislate-bidan.wav}}
{{-n-|ace}}
# {{l|id|bidan}}
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# wanita yang mempunyai kepandaian menolong dan merawat orang melahirkan dan bayinya
{{-etimologi-}}
* Berasal dari bahasa Hindi विद्वस् vidvas 'bijaksana, terpelajar, mahir'
{{-rujukan-}}
* Russell Jones, Loan-words in Indonesian and Malay, (Jakarta: Yayasan Obor Indonesia, 2008)
* Sir Monier Monier-Williams, M.A., K.C.I.E (1899) Sanskrit-English Dictionary Etymologically and Philologically Arranged with Special Reference to Cognate Indo-European Languages. Oxford: University Press
{{-turunan-|id}}
{{-ragam-}}
=={{bahasa|btd}}==
{{kepala|btd}}
{{-n-|btd}}
# {{l|id|bidan}}
[[Kategori:WikiTutur 3.0 - Pakpak]]
[[Kategori:WikiTutur 3.0 Kopdar Medan 2025-11-02]]
=={{bahasa|tes}}==
{{kepala|tes}}
{{-lafal-|tes}}
{{-n-|tes}}
# wanita yang mempunyai kepandaian menolong dan merawat orang melahirkan dan bayinya
#: {{ux|tes|bidan ngrewangi wong manak|bidan membantu orang melahirkan}}
[[Kategori:WikiTutur 3.0 - Bidan]]
[[Kategori:WikiTutur 3.0 KlubWikiUB 1 November 2025]]
[[Kategori:WikiTutur 3.0 - Aceh]]
[[Kategori:WikiTutur 3.0 Kopdar Medan 2025-11-02]]
=={{bahasa|bqr}}==
{{kepala|bqr}}
: {{suara|bqr|LL-Q5001028 (bqr)-Apriana (Egie Allinskie)-bidan.wav}}
{{-n-|bqr}}
# {{l|id|bidan}}
=={{bahasa|bjn}}==
{{kepala|bjn}}
{{-lafal-|bjn}}
* {{suara|bjn|LL-Q33151 (bjn)-Opalegam-bidan.wav}}
{{-n-|bjn}}
# {{l|id|bidan}}
#: {{ux|bjn|kasi kita ke bidan.| ayo kita ke bidan.}}
[[Kategori:WikiTutur 3.0 - Banjar]]
[[Kategori:WikiTutur 3.0 Banjarmasin 15 Februari 2026]]
=={{bahasa|kvb}}==
{{kepala|kvb}}
: {{pemenggalan|kvb|bi|dan}}
{{-v-|kvb}}
# [[dukun]] [[beranak]]
=={{bahasa|sas}}==
{{kepala|sas}}
{{-n-|sas}}
# belian
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
hf3qey2d5jlxlxereveeoajakgvcfzv
ocehan
0
138135
1349383
1231104
2026-04-11T11:16:55Z
Sofi Solihah
23681
1349383
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan -an|oceh}}
# perkataan yang bukan-bukan; celoteh; omongan: <br>''jangan dengarkan ocehannya''
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Dan bagi anak yang usianya lebih dari 15 tahun, biasanya mereka enggan turut bermain, karena takut mendapat '''ocehan''' dari masyarakatnya.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Dan%20bagi%20anak%20yang%20usianya%20lebih%20dari%2015%20tahun%2C%20biasanya%20mereka%20enggan%20turut%20bermain%2C%20karena%20takut%20mendapat%20ocehan%20dari%20masyarakatnya.
}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
95hvxidszboq3cv6dtmyxicab9if1ml
tanggapan
0
139266
1349376
1240742
2026-04-11T10:40:59Z
Sofi Solihah
23681
1349376
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan -an|tanggap}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= '''Tanggapan''' masyarakat
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Badurit#:~:text=Tanggapan%20masyarakat.
}}
# sambutan terhadap ucapan (kritik, komentar, dsb)
# apa yang diterima oleh pancaindra; bayangan dalam angan-angan
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
7q38xu0pdr3tuc4wd4slrl9po33rgf6
melakukan
0
142491
1349379
1279990
2026-04-11T10:48:50Z
Sofi Solihah
23681
1349379
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan me-kan|laku}}
# [[mengerjakan]] ([[menjalankan]] dsb)
#:''ia gugur dalam '''melakukan''' tugasnya''
# [[mengadakan]] (suatu perbuatan, [[tindakan]], dsb)
#:'' '''melakukan''' pendaratan darurat; '''melakukan''' demonstrasi''
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Jadi Bagimpar berarti '''melakukan''' permainan Gimpar.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Jadi%20Bagimpar%20berarti%20melakukan%20permainan%20Gimpar.
}}
# [[melaksanakan]]; [[mempraktikkan]]; [[menunaikan]]
#:''Pemerintah akan '''melakukan''' tindakan tegas terhadap setiap penyelewengan yang terjadi''
# [[melazimkan]] ([[kebiasaan]], [[cara]], dsb)
#:''kepala sekolah bermaksud '''melakukan''' "Senam Pagi Indonesia" di sekolahnya''
# menjadikan (membuat dsb) berlaku; menjadikan laku
#:'' '''melakukan''' uang palsu adalah perbuatan yang melanggar hukum''
# berbuat sesuatu terhadap (suatu hal, [[orang]], dsb)
#:''ia '''melakukan''' anak yatim itu sebagai anaknya sendiri''
# [[mengabulkan]] ([[permintaan]], [[doa]], dsb); [[meluluskan]]
#:''orang tuanya selalu '''melakukan''' permintaan anak itu''
{{-terjemahan-}}
{{t-atas}}
*bahasa Finlandia: {{t+|fi|tehdä}}
*bahasa Prancis: {{t+|fr|faire}}
*bahasa Swedia: {{t+|sv|göra}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
g8kqkpq2q8ywcrmh6dj2ow7q8ura384
pedesaan
0
143834
1349380
1224200
2026-04-11T10:55:36Z
Sofi Solihah
23681
1349380
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan pe-an|desa}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Pada masa lalu, terutama di daerah '''pedesaan''', hiburan anak anak sangat terbatas sekali. Satu - satunya hiburan bagi anak - anak adalah bermain.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Pada%20masa%20lalu%2C%20terutama%20di%20daerah%20pedesaan%2C%20hiburan%20anak%20anak%20sangat%20terbatas%20sekali.%20Satu%20%2D%20satunya%20hiburan%20bagi%20anak%20%2D%20anak%20adalah%20bermain.
}}
# daerah permukiman penduduk yang sangat dipengaruhi oleh kondisi tanah, iklim, dan air sebagai syarat penting bagi terwujudnya pola kehidupan agraris penduduk di tempat itu
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
gyq71nztjjd0csutkd4v3ib92r8bxw2
bergabung
0
147899
1349384
1149533
2026-04-11T11:26:00Z
Sofi Solihah
23681
1349384
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan ber-|gabung}}
# menjadi satu (dengan); berkumpul menjadi satu: <br />''lebih baik kita bergabung dengan rombongan itu''
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Begitu pula dalam melakukan permainan, mereka '''bergabung''' menjadi satu dengan tidak membedakan apakah dia dari anak petani, anak pedagang, anak pegawai dan sebagainya.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Begitu%20pula%20dalam%20melakukan%20permainan%2C%20mereka%20begabung%20menjadi%20satu%20dengan%20tidak%20membedakan%20apakah%20dia%20dari%20anak%20petani%2C%20anak%20pedagang%2C%20anak%20pegawai%20dan%20sebagainya.
}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
hd3tycfq5vmbr1rromofefhh07lc7tc
pembinaan
0
149879
1349378
1222997
2026-04-11T10:46:11Z
Sofi Solihah
23681
1349378
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan peng-an|bina}}
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Hal ini kemungkinan di samping permainan ini merupakan sarana hiburan bagi anak anak mereka juga sebenarnya masih terkandung unsur-unsur '''pembinaan''' latihan ketrampilan melempar.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Badurit#:~:text=Hal%20ini%20kemungkinan%20di%20samping%20permainan%20ini%20merupakan%20sarana%20hiburan%20bagi%20anak%20anak%20mereka%20juga%20sebenarnya%20masih%20terkandung%20unsur%2Dunsur%20pembinaan%20latihan%20ketrampilan%20melempar.
}}
# proses, cara, perbuatan membina (negara dsb)
# pembaharuan; penyempurnaan
# usaha, tindakan, dan kegiatan yang dilakukan secara efisien dan efektif untuk memperoleh hasil yang lebih baik
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
3bot2ygasflqtr4ty9re9jf757a0e4n
berebutan
0
151876
1349275
1349209
2026-04-10T16:51:20Z
Iripseudocorus
40083
Melengkapi konteks
1349275
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan ber-an|rebut}}
# berganti-ganti mengambil sesuatu (dengan kekerasan); saling mendahului dengan paksa: <br />''anak-anak itu sangat senang makan berebutan; kedua partai itu berebutan kursi''
#* {{RQ:Perahu Tulis
| page = 140
| author = Balai Bahasa Sumatera Barat
| chapter =
| text = Bahkan untuk transportasi saja masih sangat sulit. Kalaupun ada itu hanya dua kali seminggu yaitu pada hari Selasa dan Kamis. Itu pun harus '''berebutan''' dengan para petani yang harus membawa hasil panen mereka ke pasar.
| url =https://id.wikisource.org/wiki/Halaman:Antologi_Cerpen_Remaja_Sumatera_Barat_Perahu_Tulis.pdf/152#:~:text=Bahkan%20untuk%20transportasi%20saja%20masih%20sangat%20sulit.%20Kalaupun%20ada%20itu%20hanya%20dua%20kali%20seminggu%20yaitu%20pada%20hari%20Selasa%20dan%20Kamis.%20Itu%20pun%20harus%20berebutan%20dengan%20para%20petani%20yang%20harus%20membawa%20hasil%20panen%20mereka%20ke%20pasar.
}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
[[Kategori:Awalan ber yang luluh]]
fewd6xpflnxwvqomo64p0zveyw132xr
berkembang
0
152769
1349375
1155706
2026-04-11T10:36:38Z
Sofi Solihah
23681
1349375
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan ber-|kembang}}
{{imbuhan ber-|kembang|kelas=adj}}
# mekar terbuka atau membentang (tentang barang yang berlipat atau kuncup): <br />''parasutnya tidak berkembang''
# menjadi besar (luas, banyak, dsb); memuai: <br />''perusahaan itu berkembang pesat''
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Selama masih ada orang atau warung yang menjual karet gelang maka permainan ini kemungkinan akan tetap masih '''berkembang'''.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Badurit#:~:text=Selama%20masih%20ada%20orang%20atau%20warung%20yang%20menjual%20karet%20gelang%20maka%20permainan%20ini%20kemungkinan%20akan%20tetap%20masih%20berkembang.
}}
# menjadi bertambah sempurna (tentang pribadi, pikiran, pengetahuan, dsb): <br />''dengan kemampuan kosakata yang terbatas, pikiran seseorang tidak dapat berkembang''
# menjadi banyak (merata, meluas, dsb): <br />''usaha kerajinan tangan dan industri kecil berkembang dengan pesat di daerah ini''
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
n5elna32n4xnng8wqsf407eec2e8rfl
menyulitkan
0
153600
1349382
1239774
2026-04-11T11:05:36Z
Sofi Solihah
23681
1349382
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{imbuhan me-kan|sulit}}
# menjadikan sulit; menyukarkan; menyusahkan
#* {{RQ:Permainan Rakyat Daerah Kalimantan Selatan
|page= 55
|text= Mengenai tidak melebihi dari 8 orang, karena kalau terlalu banyak akan '''menyulitkan''' dalam penempatan pasangan dan juga dalam waktu melakukan permainan tersebut.
|url= https://id.wikisource.org/wiki/Permainan_Rakyat_Daerah_Kalimantan_Selatan/Bagimpar#:~:text=Mengenai%20tidak%20melebihi%20dari%208%20orang%2C%20karena%20kalau%20terlalu%20banyak%20akan%20menyulitkan%20dalam%20penempatan%20pasangan%20dan%20juga%20dalam%20waktu%20melakukan%20permainan%20tersebut.
}}
{{-terjemahan-}}
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
jfneg9aqy5apwobrbn2llnuferp952z
Modul:translations
828
200720
1349280
1105881
2026-04-10T19:12:26Z
Swarabakti
18192
coba mutakhirkan
1349280
Scribunto
text/plain
local export = {}
local anchors_module = "Module:anchors"
local debug_track_module = "Module:debug/track"
local languages_module = "Module:languages"
local links_module = "Module:links"
local pages_module = "Module:pages"
local parameters_module = "Module:parameters"
local string_utilities_module = "Module:string utilities"
local templatestyles_module = "Module:TemplateStyles"
local utilities_module = "Module:utilities"
local wikimedia_languages_module = "Module:wikimedia languages"
local concat = table.concat
local html_create = mw.html.create
local insert = table.insert
local load_data = mw.loadData
local new_title = mw.title.new
local require = require
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function decode_uri(...)
decode_uri = require(string_utilities_module).decode_uri
return decode_uri(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function full_link(...)
full_link = require(links_module).full_link
return full_link(...)
end
local function get_link_page(...)
get_link_page = require(links_module).get_link_page
return get_link_page(...)
end
local function get_wikimedia_lang(...)
get_wikimedia_lang = require(wikimedia_languages_module).getByCode
return get_wikimedia_lang(...)
end
local function language_link(...)
language_link = require(links_module).language_link
return language_link(...)
end
local function normalize_anchor(...)
normalize_anchor = require(anchors_module).normalize_anchor
return normalize_anchor(...)
end
local function plain_link(...)
plain_link = require(links_module).plain_link
return plain_link(...)
end
local function process_params(...)
process_params = require(parameters_module).process
return process_params(...)
end
local function remove_links(...)
remove_links = require(links_module).remove_links
return remove_links(...)
end
local function split_on_slashes(...)
split_on_slashes = require(links_module).split_on_slashes
return split_on_slashes(...)
end
local function templatestyles(...)
templatestyles = require(templatestyles_module)
return templatestyles(...)
end
local function track(...)
track = require(debug_track_module)
return track(...)
end
--[==[
Loaders for objects, which load data (or some other object) into some variable, which can then be accessed as "foo or get_foo()", where the function get_foo sets the object to "foo" and then returns it. This ensures they are only loaded when needed, and avoids the need to check for the existence of the object each time, since once "foo" has been set, "get_foo" will not be called again.]==]
local en
local function get_en()
en, get_en = require(languages_module).getByCode("en"), nil
return en
end
local headword_data
local function get_headword_data()
headword_data, get_headword_data = load_data("Module:headword/data"), nil
return headword_data
end
local parameters_data
local function get_parameters_data()
parameters_data, get_parameters_data = load_data("Module:parameters/data"), nil
return parameters_data
end
local translations_data
local function get_translations_data()
translations_data, get_translations_data = load_data("Module:translations/data"), nil
return translations_data
end
local function is_translation_subpage(pagename)
if (headword_data or get_headword_data()).page.namespace ~= "" then
return false
elseif not pagename then
pagename = (headword_data or get_headword_data()).encoded_pagename
end
return pagename:match("./translations$") and true or false
end
local function canonical_pagename()
local pagename = (headword_data or get_headword_data()).encoded_pagename
return is_translation_subpage(pagename) and pagename:sub(1, -14) or pagename
end
local function interwiki(terminfo, term, lang, langcode)
-- No interwiki link if term is empty/missing
if not term or #term < 1 then
terminfo.interwiki = false
return
end
-- Percent-decode the term.
term = decode_uri(terminfo.term, "PATH")
-- Don't show an interwiki link if it's an invalid title.
if not new_title(term) then
terminfo.interwiki = false
return
end
local interwiki_langcode = (translations_data or get_translations_data()).interwiki_langs[langcode]
local wmlangs = interwiki_langcode and {get_wikimedia_lang(interwiki_langcode)} or lang:getWikimediaLanguages()
-- Don't show the interwiki link if the language is not recognised by Wikimedia.
if #wmlangs == 0 then
terminfo.interwiki = false
return
end
local sc = terminfo.sc
local target_page = get_link_page(term, lang, sc)
local split = split_on_slashes(target_page)
if not split[1] then
terminfo.interwiki = false
return
end
target_page = split[1]
local wmlangcode = wmlangs[1]:getCode()
local interwiki_link = language_link{
lang = lang,
sc = sc,
term = wmlangcode .. ":" .. target_page,
alt = "(" .. wmlangcode .. ")",
tr = "-"
}
terminfo.interwiki = tostring(html_create("span")
:addClass("tpos")
:wikitext(" " .. interwiki_link)
)
end
function export.show_terminfo(terminfo, check)
local lang = terminfo.lang
local langcode, langname = lang:getCode(), lang:getCanonicalName()
-- Translations must be for mainspace languages.
if not lang:hasType("regular") then
error("Terjemahan hanya dapat diberikan bagi bahasa yang tercatat dan sudah disahkan sebagai bahasa di ruang nama utama.")
else
local disallowed = (translations_data or get_translations_data()).disallowed
local err_msg = disallowed[langcode]
if err_msg then
error("Terjemahan tidak diperbolehkan di " .. langname .. " (" .. langcode .. "). Terjemahan " .. langname .. " semestinya " .. err_msg)
end
local fullcode = lang:getFullCode()
if fullcode ~= langcode then
err_msg = disallowed[fullcode]
if err_msg then
langname = lang:getCanonicalNameLower()
error("Terjemahan tidak diizinkan di " .. langname .. " (" .. fullcode .. "). " .. " terjemahan " .. langname .. " perlu " .. err_msg)
end
end
end
if langcode == "en" then
if terminfo.interwiki then
error("Interwiki translations not allowed for English; they should always link to a different Wiktionary")
end
local current_L2 = require(pages_module).get_current_L2()
if current_L2 ~= "Translingual" and mw.title.getCurrentTitle().nsText ~= "Wiktionary" then
if current_L2 then
error("English translations only allowed in Translingual section, not in " .. current_L2)
else
error("English translations only allowed in Translingual section, not outside of any L2")
end
end
end
local term = terminfo.term
-- Check if there is a term. Don't show the interwiki link if there is nothing to link to.
if not term then
-- Track entries that don't provide a term.
-- FIXME: This should be a category.
track("translations/no term")
track("translations/no term/" .. langcode)
end
if terminfo.interwiki then
interwiki(terminfo, term, lang, langcode)
end
langcode = lang:getFullCode()
if (translations_data or get_translations_data()).need_super[langcode] then
local tr = terminfo.tr
if tr ~= nil then
terminfo.tr = tr:gsub("%d[%d%*%-]*%f[^%d%*]", "<sup>%0</sup>")
end
end
terminfo.show_qualifiers = true
local link = full_link(terminfo, "terjemahan")
local canonical_name = lang:getCanonicalNameLower()
local full_name = lang:getFullName()
local categories = {"Istilah dengan terjemahan " .. canonical_name}
if canonical_name ~= full_name then
insert(categories, "Istilah dengan terjemahan " .. full_name)
end
if check then
link = tostring(html_create("span")
:addClass("ttbc")
:tag("sup")
:addClass("ttbc")
:wikitext("(tolong [[WT:Warung Kopi|pastikan]])")
:done()
:wikitext(" " .. link)
)
insert(categories, "Permintaan pemastian terjemahan " .. langname)
end
return link .. format_categories(categories, en or get_en(), nil, canonical_pagename())
end
-- Implements {{t}}, {{t+}}, {{t-check}} and {{t+check}}.
function export.show(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation"])
local check = frame.args.check
return export.show_terminfo({
lang = args[1],
sc = args.sc,
track_sc = true,
term = args[2],
alt = args.alt,
id = args.id,
genders = args[3],
tr = args.tr,
ts = args.ts,
lit = args.lit,
q = args.q,
qq = args.qq,
l = args.l,
ll = args.ll,
refs = args.ref,
interwiki = frame.args.interwiki,
}, check and check ~= "")
end
local function add_id(div, id)
return id and div:attr("id", normalize_anchor("Terjemahan-" .. id)) or div
end
-- Implements {{trans-top}} and part of {{trans-top-also}}.
local function top(args, title, id, navhead)
local column_width = (args["column-width"] == "wide" or args["column-width"] == "narrow") and "-" .. args["column-width"] or ""
local div = html_create("div")
:addClass("NavFrame")
:node(navhead)
:tag("div")
:addClass("NavContent")
:tag("table")
:addClass("translations")
:attr("role", "presentation")
:attr("data-gloss", title or "")
:tag("tr")
:tag("td")
:addClass("translations-cell")
:addClass("multicolumn-list" .. column_width)
:attr("colspan", "3")
:allDone()
div = add_id(div, id)
local categories = {}
if not title then
insert(categories, "Tabel terjemahan tidak memiliki glos pada kop")
end
local pagename = canonical_pagename()
if is_translation_subpage() then
insert(categories, "Subhalaman terjemahan")
end
return (tostring(div):gsub("</td></tr></table></div></div>$", "")) ..
(#categories > 0 and format_categories(categories, en or get_en(), nil, pagename) or "") ..
-- Category to trigger [[MediaWiki:Gadget-TranslationAdder.js]]; we want this even on
-- user pages and such.
format_categories("Entries with translation boxes", nil, nil, nil, true) ..
templatestyles("Module:translations/styles.css")
end
-- Entry point for {{trans-top}}.
function export.top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top"])
local title = args[1]
local id = args.id or title
title = title and remove_links(title)
return top(args, title, id, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Entry point for {{checktrans-top}}.
function export.check_top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["checktrans-top"])
local text = "\n:''Terjemahan di bawah ini perlu diperiksa dan dimasukkan ke dalam tabel terjemahan yang sesuai." .. --Lihat instruksi di " ..
frame:expandTemplate{
title = "section link",
args = {"WT:TLE#Terjemahan"}
} ..
".''\n"
local header = html_create("div")
:addClass("checktrans")
:wikitext(text)
local subtitle = args[1]
local title = "Terjemahan yang perlu diperiksa"
if subtitle then
title = title .. "‌: \"" .. subtitle .. "\""
end
-- No ID, since these should always accompany proper translation tables, and can't be trusted anyway (i.e. there's no use-case for links).
return tostring(header) .. "\n" .. top(args, title, nil, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Implements {{trans-bottom}}.
function export.bottom(frame)
-- Check nothing is being passed as a parameter.
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-bottom"])
return "</table></div></div>"
end
-- Implements {{trans-see}} and part of {{trans-top-also}}.
local function see(args, see_text)
local navhead = html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(args[1] .. " ")
:tag("span")
:css("font-weight", "normal")
:wikitext("— ")
:tag("i")
:wikitext(see_text)
:allDone()
local terms, id = args[2], args.id
if #terms == 0 then
terms[1] = args[1]
end
for i = 1, #terms do
local term_id = id[i] or id.default
local data = {
term = terms[i],
id = term_id and "Translations-" .. term_id or "Translations",
}
terms[i] = plain_link(data)
end
return navhead:wikitext(concat(terms, ",‎ "))
end
-- Entry point for {{trans-see}}.
function export.see(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-see"])
local div = html_create("div")
:addClass("pseudo")
:addClass("NavFrame")
:node(see(args, "see "))
return tostring(add_id(div, args.id.default or args[1]))
end
-- Entry point for {{trans-top-also}}.
function export.top_also(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top-also"])
local navhead = see(args, "see also ")
local title = args[1]
local id = args.id.default or title
title = remove_links(title)
return top(args, title, id, navhead)
end
-- Implements {{translation subpage}}.
function export.subpage(frame)
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation subpage"])
if not is_translation_subpage() then
error("This template should only be used on translation subpages, which have titles that end with '/translations'.")
end
-- "Translation subpages" category is handled by {{trans-top}}.
return ("''This page contains translations for ''%s''. See the main entry for more information.''"):format(full_link{
lang = en or get_en(),
term = canonical_pagename(),
})
end
-- Implements {{t-needed}}.
function export.needed(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["t-needed"])
local lang, category = args[1], ""
local span = html_create("span")
:addClass("trreq")
:attr("data-lang", lang:getCode())
:tag("i")
:wikitext("please add this translation if you can")
:done()
if not args.nocat then
local type, sort = args[2], args.sort
if type == "quote" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " quotations"
elseif type == "usex" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " usage examples"
else
category = "Requests for translations into " .. lang:getCanonicalName()
lang = en or get_en()
end
category = format_categories(category, lang, sort, not sort and canonical_pagename() or nil)
end
return tostring(span) .. category
end
-- Implements {{no equivalent translation}}.
function export.no_equivalent(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["tidak ada padanannya"])
local text = "tidak ada padanannya di " .. args[1]:getCanonicalNameLower()
if not args.noend then
text = text .. ", tapi lihat"
end
return tostring(html_create("i"):wikitext(text))
end
-- Implements {{no attested translation}}.
function export.no_attested(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["no attested translation"])
local langname = args[1]:getCanonicalName()
local text = "no [[WT:ATTEST|attested]] term in " .. langname
local category = ""
if not args.noend then
text = text .. ", but see"
local sort = args.sort
category = format_categories(langname .. " unattested translations", en or get_en(), sort, not sort and canonical_pagename() or nil)
end
return tostring(html_create("i"):wikitext(text)) .. category
end
-- Implements {{not used}}.
function export.not_used(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["not used"])
return tostring(html_create("i"):wikitext((args[2] or "not used") .. " in " .. args[1]:getCanonicalName()))
end
return export
ha3292vq9xv3uopw8laxppfe5o745ld
1349281
1349280
2026-04-10T19:13:39Z
Swarabakti
18192
1349281
Scribunto
text/plain
local export = {}
local anchors_module = "Module:anchors"
local debug_track_module = "Module:debug/track"
local languages_module = "Module:languages"
local links_module = "Module:links"
local pages_module = "Module:pages"
local parameters_module = "Module:parameters"
local string_utilities_module = "Module:string utilities"
local templatestyles_module = "Module:TemplateStyles"
local utilities_module = "Module:utilities"
local wikimedia_languages_module = "Module:wikimedia languages"
local concat = table.concat
local html_create = mw.html.create
local insert = table.insert
local load_data = mw.loadData
local new_title = mw.title.new
local require = require
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function decode_uri(...)
decode_uri = require(string_utilities_module).decode_uri
return decode_uri(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function full_link(...)
full_link = require(links_module).full_link
return full_link(...)
end
local function get_link_page(...)
get_link_page = require(links_module).get_link_page
return get_link_page(...)
end
local function get_wikimedia_lang(...)
get_wikimedia_lang = require(wikimedia_languages_module).getByCode
return get_wikimedia_lang(...)
end
local function language_link(...)
language_link = require(links_module).language_link
return language_link(...)
end
local function normalize_anchor(...)
normalize_anchor = require(anchors_module).normalize_anchor
return normalize_anchor(...)
end
local function plain_link(...)
plain_link = require(links_module).plain_link
return plain_link(...)
end
local function process_params(...)
process_params = require(parameters_module).process
return process_params(...)
end
local function remove_links(...)
remove_links = require(links_module).remove_links
return remove_links(...)
end
local function split_on_slashes(...)
split_on_slashes = require(links_module).split_on_slashes
return split_on_slashes(...)
end
local function templatestyles(...)
templatestyles = require(templatestyles_module)
return templatestyles(...)
end
local function track(...)
track = require(debug_track_module)
return track(...)
end
--[==[
Loaders for objects, which load data (or some other object) into some variable, which can then be accessed as "foo or get_foo()", where the function get_foo sets the object to "foo" and then returns it. This ensures they are only loaded when needed, and avoids the need to check for the existence of the object each time, since once "foo" has been set, "get_foo" will not be called again.]==]
local en
local function get_en()
en, get_en = require(languages_module).getByCode("en"), nil
return en
end
local headword_data
local function get_headword_data()
headword_data, get_headword_data = load_data("Module:headword/data"), nil
return headword_data
end
local parameters_data
local function get_parameters_data()
parameters_data, get_parameters_data = load_data("Module:parameters/data"), nil
return parameters_data
end
local translations_data
local function get_translations_data()
translations_data, get_translations_data = load_data("Module:translations/data"), nil
return translations_data
end
local function is_translation_subpage(pagename)
if (headword_data or get_headword_data()).page.namespace ~= "" then
return false
elseif not pagename then
pagename = (headword_data or get_headword_data()).encoded_pagename
end
return pagename:match("./translations$") and true or false
end
local function canonical_pagename()
local pagename = (headword_data or get_headword_data()).encoded_pagename
return is_translation_subpage(pagename) and pagename:sub(1, -14) or pagename
end
local function interwiki(terminfo, term, lang, langcode)
-- No interwiki link if term is empty/missing
if not term or #term < 1 then
terminfo.interwiki = false
return
end
-- Percent-decode the term.
term = decode_uri(terminfo.term, "PATH")
-- Don't show an interwiki link if it's an invalid title.
if not new_title(term) then
terminfo.interwiki = false
return
end
local interwiki_langcode = (translations_data or get_translations_data()).interwiki_langs[langcode]
local wmlangs = interwiki_langcode and {get_wikimedia_lang(interwiki_langcode)} or lang:getWikimediaLanguages()
-- Don't show the interwiki link if the language is not recognised by Wikimedia.
if #wmlangs == 0 then
terminfo.interwiki = false
return
end
local sc = terminfo.sc
local target_page = get_link_page(term, lang, sc)
local split = split_on_slashes(target_page)
if not split[1] then
terminfo.interwiki = false
return
end
target_page = split[1]
local wmlangcode = wmlangs[1]:getCode()
local interwiki_link = language_link{
lang = lang,
sc = sc,
term = wmlangcode .. ":" .. target_page,
alt = "(" .. wmlangcode .. ")",
tr = "-"
}
terminfo.interwiki = tostring(html_create("span")
:addClass("tpos")
:wikitext(" " .. interwiki_link)
)
end
function export.show_terminfo(terminfo, check)
local lang = terminfo.lang
local langcode, langname = lang:getCode(), lang:getCanonicalName()
-- Translations must be for mainspace languages.
if not lang:hasType("regular") then
error("Terjemahan hanya dapat diberikan bagi bahasa yang tercatat dan sudah disahkan sebagai bahasa di ruang nama utama.")
else
local disallowed = (translations_data or get_translations_data()).disallowed
local err_msg = disallowed[langcode]
if err_msg then
error("Terjemahan tidak diperbolehkan di " .. langname .. " (" .. langcode .. "). Terjemahan " .. langname .. " semestinya " .. err_msg)
end
local fullcode = lang:getFullCode()
if fullcode ~= langcode then
err_msg = disallowed[fullcode]
if err_msg then
langname = lang:getCanonicalNameLower()
error("Terjemahan tidak diizinkan di " .. langname .. " (" .. fullcode .. "). " .. " terjemahan " .. langname .. " perlu " .. err_msg)
end
end
end
if langcode == "en" then
if terminfo.interwiki then
error("Interwiki translations not allowed for English; they should always link to a different Wiktionary")
end
local current_L2 = require(pages_module).get_current_L2()
if current_L2 ~= "Translingual" and mw.title.getCurrentTitle().nsText ~= "Wiktionary" then
if current_L2 then
error("English translations only allowed in Translingual section, not in " .. current_L2)
else
error("English translations only allowed in Translingual section, not outside of any L2")
end
end
end
local term = terminfo.term
-- Check if there is a term. Don't show the interwiki link if there is nothing to link to.
if not term then
-- Track entries that don't provide a term.
-- FIXME: This should be a category.
track("translations/no term")
track("translations/no term/" .. langcode)
end
if terminfo.interwiki then
interwiki(terminfo, term, lang, langcode)
end
langcode = lang:getFullCode()
if (translations_data or get_translations_data()).need_super[langcode] then
local tr = terminfo.tr
if tr ~= nil then
terminfo.tr = tr:gsub("%d[%d%*%-]*%f[^%d%*]", "<sup>%0</sup>")
end
end
terminfo.show_qualifiers = true
local link = full_link(terminfo, "translation")
local canonical_name = lang:getCanonicalNameLower()
local full_name = lang:getFullName()
local categories = {"Istilah dengan terjemahan " .. canonical_name}
if canonical_name ~= full_name then
insert(categories, "Istilah dengan terjemahan " .. full_name)
end
if check then
link = tostring(html_create("span")
:addClass("ttbc")
:tag("sup")
:addClass("ttbc")
:wikitext("(tolong [[WT:Warung Kopi|pastikan]])")
:done()
:wikitext(" " .. link)
)
insert(categories, "Permintaan pemastian terjemahan " .. langname)
end
return link .. format_categories(categories, en or get_en(), nil, canonical_pagename())
end
-- Implements {{t}}, {{t+}}, {{t-check}} and {{t+check}}.
function export.show(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation"])
local check = frame.args.check
return export.show_terminfo({
lang = args[1],
sc = args.sc,
track_sc = true,
term = args[2],
alt = args.alt,
id = args.id,
genders = args[3],
tr = args.tr,
ts = args.ts,
lit = args.lit,
q = args.q,
qq = args.qq,
l = args.l,
ll = args.ll,
refs = args.ref,
interwiki = frame.args.interwiki,
}, check and check ~= "")
end
local function add_id(div, id)
return id and div:attr("id", normalize_anchor("Terjemahan-" .. id)) or div
end
-- Implements {{trans-top}} and part of {{trans-top-also}}.
local function top(args, title, id, navhead)
local column_width = (args["column-width"] == "wide" or args["column-width"] == "narrow") and "-" .. args["column-width"] or ""
local div = html_create("div")
:addClass("NavFrame")
:node(navhead)
:tag("div")
:addClass("NavContent")
:tag("table")
:addClass("translations")
:attr("role", "presentation")
:attr("data-gloss", title or "")
:tag("tr")
:tag("td")
:addClass("translations-cell")
:addClass("multicolumn-list" .. column_width)
:attr("colspan", "3")
:allDone()
div = add_id(div, id)
local categories = {}
if not title then
insert(categories, "Tabel terjemahan tidak memiliki glos pada kop")
end
local pagename = canonical_pagename()
if is_translation_subpage() then
insert(categories, "Subhalaman terjemahan")
end
return (tostring(div):gsub("</td></tr></table></div></div>$", "")) ..
(#categories > 0 and format_categories(categories, en or get_en(), nil, pagename) or "") ..
-- Category to trigger [[MediaWiki:Gadget-TranslationAdder.js]]; we want this even on
-- user pages and such.
format_categories("Entries with translation boxes", nil, nil, nil, true) ..
templatestyles("Module:translations/styles.css")
end
-- Entry point for {{trans-top}}.
function export.top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top"])
local title = args[1]
local id = args.id or title
title = title and remove_links(title)
return top(args, title, id, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Entry point for {{checktrans-top}}.
function export.check_top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["checktrans-top"])
local text = "\n:''Terjemahan di bawah ini perlu diperiksa dan dimasukkan ke dalam tabel terjemahan yang sesuai." .. --Lihat instruksi di " ..
frame:expandTemplate{
title = "section link",
args = {"WT:TLE#Terjemahan"}
} ..
".''\n"
local header = html_create("div")
:addClass("checktrans")
:wikitext(text)
local subtitle = args[1]
local title = "Terjemahan yang perlu diperiksa"
if subtitle then
title = title .. "‌: \"" .. subtitle .. "\""
end
-- No ID, since these should always accompany proper translation tables, and can't be trusted anyway (i.e. there's no use-case for links).
return tostring(header) .. "\n" .. top(args, title, nil, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Implements {{trans-bottom}}.
function export.bottom(frame)
-- Check nothing is being passed as a parameter.
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-bottom"])
return "</table></div></div>"
end
-- Implements {{trans-see}} and part of {{trans-top-also}}.
local function see(args, see_text)
local navhead = html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(args[1] .. " ")
:tag("span")
:css("font-weight", "normal")
:wikitext("— ")
:tag("i")
:wikitext(see_text)
:allDone()
local terms, id = args[2], args.id
if #terms == 0 then
terms[1] = args[1]
end
for i = 1, #terms do
local term_id = id[i] or id.default
local data = {
term = terms[i],
id = term_id and "Translations-" .. term_id or "Translations",
}
terms[i] = plain_link(data)
end
return navhead:wikitext(concat(terms, ",‎ "))
end
-- Entry point for {{trans-see}}.
function export.see(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-see"])
local div = html_create("div")
:addClass("pseudo")
:addClass("NavFrame")
:node(see(args, "see "))
return tostring(add_id(div, args.id.default or args[1]))
end
-- Entry point for {{trans-top-also}}.
function export.top_also(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top-also"])
local navhead = see(args, "see also ")
local title = args[1]
local id = args.id.default or title
title = remove_links(title)
return top(args, title, id, navhead)
end
-- Implements {{translation subpage}}.
function export.subpage(frame)
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation subpage"])
if not is_translation_subpage() then
error("This template should only be used on translation subpages, which have titles that end with '/translations'.")
end
-- "Translation subpages" category is handled by {{trans-top}}.
return ("''This page contains translations for ''%s''. See the main entry for more information.''"):format(full_link{
lang = en or get_en(),
term = canonical_pagename(),
})
end
-- Implements {{t-needed}}.
function export.needed(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["t-needed"])
local lang, category = args[1], ""
local span = html_create("span")
:addClass("trreq")
:attr("data-lang", lang:getCode())
:tag("i")
:wikitext("please add this translation if you can")
:done()
if not args.nocat then
local type, sort = args[2], args.sort
if type == "quote" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " quotations"
elseif type == "usex" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " usage examples"
else
category = "Requests for translations into " .. lang:getCanonicalName()
lang = en or get_en()
end
category = format_categories(category, lang, sort, not sort and canonical_pagename() or nil)
end
return tostring(span) .. category
end
-- Implements {{no equivalent translation}}.
function export.no_equivalent(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["tidak ada padanannya"])
local text = "tidak ada padanannya di " .. args[1]:getCanonicalNameLower()
if not args.noend then
text = text .. ", tapi lihat"
end
return tostring(html_create("i"):wikitext(text))
end
-- Implements {{no attested translation}}.
function export.no_attested(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["no attested translation"])
local langname = args[1]:getCanonicalName()
local text = "no [[WT:ATTEST|attested]] term in " .. langname
local category = ""
if not args.noend then
text = text .. ", but see"
local sort = args.sort
category = format_categories(langname .. " unattested translations", en or get_en(), sort, not sort and canonical_pagename() or nil)
end
return tostring(html_create("i"):wikitext(text)) .. category
end
-- Implements {{not used}}.
function export.not_used(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["not used"])
return tostring(html_create("i"):wikitext((args[2] or "not used") .. " in " .. args[1]:getCanonicalName()))
end
return export
jiv8jwl66pvtpex714usj7d5vp6nbwe
1349282
1349281
2026-04-10T19:16:13Z
Swarabakti
18192
1349282
Scribunto
text/plain
local export = {}
local anchors_module = "Module:anchors"
local debug_track_module = "Module:debug/track"
local languages_module = "Module:languages"
local links_module = "Module:links"
local pages_module = "Module:pages"
local parameters_module = "Module:parameters"
local string_utilities_module = "Module:string utilities"
local templatestyles_module = "Module:TemplateStyles"
local utilities_module = "Module:utilities"
local wikimedia_languages_module = "Module:wikimedia languages"
local concat = table.concat
local html_create = mw.html.create
local insert = table.insert
local load_data = mw.loadData
local new_title = mw.title.new
local require = require
--[==[
Loaders for functions in other modules, which overwrite themselves with the target function when called. This ensures modules are only loaded when needed, retains the speed/convenience of locally-declared pre-loaded functions, and has no overhead after the first call, since the target functions are called directly in any subsequent calls.]==]
local function decode_uri(...)
decode_uri = require(string_utilities_module).decode_uri
return decode_uri(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function full_link(...)
full_link = require(links_module).full_link
return full_link(...)
end
local function get_link_page(...)
get_link_page = require(links_module).get_link_page
return get_link_page(...)
end
local function get_wikimedia_lang(...)
get_wikimedia_lang = require(wikimedia_languages_module).getByCode
return get_wikimedia_lang(...)
end
local function language_link(...)
language_link = require(links_module).language_link
return language_link(...)
end
local function normalize_anchor(...)
normalize_anchor = require(anchors_module).normalize_anchor
return normalize_anchor(...)
end
local function plain_link(...)
plain_link = require(links_module).plain_link
return plain_link(...)
end
local function process_params(...)
process_params = require(parameters_module).process
return process_params(...)
end
local function remove_links(...)
remove_links = require(links_module).remove_links
return remove_links(...)
end
local function split_on_slashes(...)
split_on_slashes = require(links_module).split_on_slashes
return split_on_slashes(...)
end
local function templatestyles(...)
templatestyles = require(templatestyles_module)
return templatestyles(...)
end
local function track(...)
track = require(debug_track_module)
return track(...)
end
--[==[
Loaders for objects, which load data (or some other object) into some variable, which can then be accessed as "foo or get_foo()", where the function get_foo sets the object to "foo" and then returns it. This ensures they are only loaded when needed, and avoids the need to check for the existence of the object each time, since once "foo" has been set, "get_foo" will not be called again.]==]
local en
local function get_en()
en, get_en = require(languages_module).getByCode("en"), nil
return en
end
local headword_data
local function get_headword_data()
headword_data, get_headword_data = load_data("Module:headword/data"), nil
return headword_data
end
local parameters_data
local function get_parameters_data()
parameters_data, get_parameters_data = load_data("Module:parameters/data"), nil
return parameters_data
end
local translations_data
local function get_translations_data()
translations_data, get_translations_data = load_data("Module:translations/data"), nil
return translations_data
end
local function is_translation_subpage(pagename)
if (headword_data or get_headword_data()).page.namespace ~= "" then
return false
elseif not pagename then
pagename = (headword_data or get_headword_data()).encoded_pagename
end
return pagename:match("./translations$") and true or false
end
local function canonical_pagename()
local pagename = (headword_data or get_headword_data()).encoded_pagename
return is_translation_subpage(pagename) and pagename:sub(1, -14) or pagename
end
local function interwiki(terminfo, term, lang, langcode)
-- No interwiki link if term is empty/missing
if not term or #term < 1 then
terminfo.interwiki = false
return
end
-- Percent-decode the term.
term = decode_uri(terminfo.term, "PATH")
-- Don't show an interwiki link if it's an invalid title.
if not new_title(term) then
terminfo.interwiki = false
return
end
local interwiki_langcode = (translations_data or get_translations_data()).interwiki_langs[langcode]
local wmlangs = interwiki_langcode and {get_wikimedia_lang(interwiki_langcode)} or lang:getWikimediaLanguages()
-- Don't show the interwiki link if the language is not recognised by Wikimedia.
if #wmlangs == 0 then
terminfo.interwiki = false
return
end
local sc = terminfo.sc
local target_page = get_link_page(term, lang, sc)
local split = split_on_slashes(target_page)
if not split[1] then
terminfo.interwiki = false
return
end
target_page = split[1]
local wmlangcode = wmlangs[1]:getCode()
local interwiki_link = language_link{
lang = lang,
sc = sc,
term = wmlangcode .. ":" .. target_page,
alt = "(" .. wmlangcode .. ")",
tr = "-"
}
terminfo.interwiki = tostring(html_create("span")
:addClass("tpos")
:wikitext(" " .. interwiki_link)
)
end
function export.show_terminfo(terminfo, check)
local lang = terminfo.lang
local langcode, langname = lang:getCode(), lang:getCanonicalName()
-- Translations must be for mainspace languages.
if not lang:hasType("regular") then
error("Terjemahan hanya dapat diberikan bagi bahasa yang tercatat dan sudah disahkan sebagai bahasa di ruang nama utama.")
else
local disallowed = (translations_data or get_translations_data()).disallowed
local err_msg = disallowed[langcode]
if err_msg then
error("Terjemahan tidak diperbolehkan di " .. langname .. " (" .. langcode .. "). Terjemahan " .. langname .. " semestinya " .. err_msg)
end
local fullcode = lang:getFullCode()
if fullcode ~= langcode then
err_msg = disallowed[fullcode]
if err_msg then
langname = lang:getCanonicalNameLower()
error("Terjemahan tidak diizinkan di " .. langname .. " (" .. fullcode .. "). " .. " terjemahan " .. langname .. " perlu " .. err_msg)
end
end
end
if langcode == "id" then
if terminfo.interwiki then
error("Galat")
end
local current_L2 = require(pages_module).get_current_L2()
if current_L2 ~= "Lintas bahasa" and mw.title.getCurrentTitle().nsText ~= "Wikikamus" then
if current_L2 then
error("Terjemahan ke bahasa Indonesia hanya diperbolehkan pada subjudul Lintas bahasa, bukan di " .. current_L2)
else
error("Terjemahan ke bahasa Indonesia hanya diperbolehkan pada subjudul Lintas bahasa, bukan di luar semua subjudul")
end
end
end
local term = terminfo.term
-- Check if there is a term. Don't show the interwiki link if there is nothing to link to.
if not term then
-- Track entries that don't provide a term.
-- FIXME: This should be a category.
track("translations/no term")
track("translations/no term/" .. langcode)
end
if terminfo.interwiki then
interwiki(terminfo, term, lang, langcode)
end
langcode = lang:getFullCode()
if (translations_data or get_translations_data()).need_super[langcode] then
local tr = terminfo.tr
if tr ~= nil then
terminfo.tr = tr:gsub("%d[%d%*%-]*%f[^%d%*]", "<sup>%0</sup>")
end
end
terminfo.show_qualifiers = true
local link = full_link(terminfo, "translation")
local canonical_name = lang:getCanonicalNameLower()
local full_name = lang:getFullName()
local categories = {"Istilah dengan terjemahan " .. canonical_name}
if canonical_name ~= full_name then
insert(categories, "Istilah dengan terjemahan " .. full_name)
end
if check then
link = tostring(html_create("span")
:addClass("ttbc")
:tag("sup")
:addClass("ttbc")
:wikitext("(tolong [[WT:Warung Kopi|pastikan]])")
:done()
:wikitext(" " .. link)
)
insert(categories, "Permintaan pemastian terjemahan " .. langname)
end
return link .. format_categories(categories, en or get_en(), nil, canonical_pagename())
end
-- Implements {{t}}, {{t+}}, {{t-check}} and {{t+check}}.
function export.show(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation"])
local check = frame.args.check
return export.show_terminfo({
lang = args[1],
sc = args.sc,
track_sc = true,
term = args[2],
alt = args.alt,
id = args.id,
genders = args[3],
tr = args.tr,
ts = args.ts,
lit = args.lit,
q = args.q,
qq = args.qq,
l = args.l,
ll = args.ll,
refs = args.ref,
interwiki = frame.args.interwiki,
}, check and check ~= "")
end
local function add_id(div, id)
return id and div:attr("id", normalize_anchor("Terjemahan-" .. id)) or div
end
-- Implements {{trans-top}} and part of {{trans-top-also}}.
local function top(args, title, id, navhead)
local column_width = (args["column-width"] == "wide" or args["column-width"] == "narrow") and "-" .. args["column-width"] or ""
local div = html_create("div")
:addClass("NavFrame")
:node(navhead)
:tag("div")
:addClass("NavContent")
:tag("table")
:addClass("translations")
:attr("role", "presentation")
:attr("data-gloss", title or "")
:tag("tr")
:tag("td")
:addClass("translations-cell")
:addClass("multicolumn-list" .. column_width)
:attr("colspan", "3")
:allDone()
div = add_id(div, id)
local categories = {}
if not title then
insert(categories, "Tabel terjemahan tidak memiliki glos pada kop")
end
local pagename = canonical_pagename()
if is_translation_subpage() then
insert(categories, "Subhalaman terjemahan")
end
return (tostring(div):gsub("</td></tr></table></div></div>$", "")) ..
(#categories > 0 and format_categories(categories, en or get_en(), nil, pagename) or "") ..
-- Category to trigger [[MediaWiki:Gadget-TranslationAdder.js]]; we want this even on
-- user pages and such.
format_categories("Entries with translation boxes", nil, nil, nil, true) ..
templatestyles("Module:translations/styles.css")
end
-- Entry point for {{trans-top}}.
function export.top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top"])
local title = args[1]
local id = args.id or title
title = title and remove_links(title)
return top(args, title, id, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Entry point for {{checktrans-top}}.
function export.check_top(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["checktrans-top"])
local text = "\n:''Terjemahan di bawah ini perlu diperiksa dan dimasukkan ke dalam tabel terjemahan yang sesuai." .. --Lihat instruksi di " ..
frame:expandTemplate{
title = "section link",
args = {"WT:TLE#Terjemahan"}
} ..
".''\n"
local header = html_create("div")
:addClass("checktrans")
:wikitext(text)
local subtitle = args[1]
local title = "Terjemahan yang perlu diperiksa"
if subtitle then
title = title .. "‌: \"" .. subtitle .. "\""
end
-- No ID, since these should always accompany proper translation tables, and can't be trusted anyway (i.e. there's no use-case for links).
return tostring(header) .. "\n" .. top(args, title, nil, html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(title or "Terjemahan")
)
end
-- Implements {{trans-bottom}}.
function export.bottom(frame)
-- Check nothing is being passed as a parameter.
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-bottom"])
return "</table></div></div>"
end
-- Implements {{trans-see}} and part of {{trans-top-also}}.
local function see(args, see_text)
local navhead = html_create("div")
:addClass("NavHead")
:css("text-align", "left")
:wikitext(args[1] .. " ")
:tag("span")
:css("font-weight", "normal")
:wikitext("— ")
:tag("i")
:wikitext(see_text)
:allDone()
local terms, id = args[2], args.id
if #terms == 0 then
terms[1] = args[1]
end
for i = 1, #terms do
local term_id = id[i] or id.default
local data = {
term = terms[i],
id = term_id and "Translations-" .. term_id or "Translations",
}
terms[i] = plain_link(data)
end
return navhead:wikitext(concat(terms, ",‎ "))
end
-- Entry point for {{trans-see}}.
function export.see(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-see"])
local div = html_create("div")
:addClass("pseudo")
:addClass("NavFrame")
:node(see(args, "see "))
return tostring(add_id(div, args.id.default or args[1]))
end
-- Entry point for {{trans-top-also}}.
function export.top_also(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["trans-top-also"])
local navhead = see(args, "see also ")
local title = args[1]
local id = args.id.default or title
title = remove_links(title)
return top(args, title, id, navhead)
end
-- Implements {{translation subpage}}.
function export.subpage(frame)
process_params(frame:getParent().args, (parameters_data or get_parameters_data())["translation subpage"])
if not is_translation_subpage() then
error("This template should only be used on translation subpages, which have titles that end with '/translations'.")
end
-- "Translation subpages" category is handled by {{trans-top}}.
return ("''This page contains translations for ''%s''. See the main entry for more information.''"):format(full_link{
lang = en or get_en(),
term = canonical_pagename(),
})
end
-- Implements {{t-needed}}.
function export.needed(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["t-needed"])
local lang, category = args[1], ""
local span = html_create("span")
:addClass("trreq")
:attr("data-lang", lang:getCode())
:tag("i")
:wikitext("please add this translation if you can")
:done()
if not args.nocat then
local type, sort = args[2], args.sort
if type == "quote" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " quotations"
elseif type == "usex" then
category = "Requests for translations of " .. lang:getCanonicalName() .. " usage examples"
else
category = "Requests for translations into " .. lang:getCanonicalName()
lang = en or get_en()
end
category = format_categories(category, lang, sort, not sort and canonical_pagename() or nil)
end
return tostring(span) .. category
end
-- Implements {{no equivalent translation}}.
function export.no_equivalent(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["tidak ada padanannya"])
local text = "tidak ada padanannya di " .. args[1]:getCanonicalNameLower()
if not args.noend then
text = text .. ", tapi lihat"
end
return tostring(html_create("i"):wikitext(text))
end
-- Implements {{no attested translation}}.
function export.no_attested(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["no attested translation"])
local langname = args[1]:getCanonicalName()
local text = "no [[WT:ATTEST|attested]] term in " .. langname
local category = ""
if not args.noend then
text = text .. ", but see"
local sort = args.sort
category = format_categories(langname .. " unattested translations", en or get_en(), sort, not sort and canonical_pagename() or nil)
end
return tostring(html_create("i"):wikitext(text)) .. category
end
-- Implements {{not used}}.
function export.not_used(frame)
local args = process_params(frame:getParent().args, (parameters_data or get_parameters_data())["not used"])
return tostring(html_create("i"):wikitext((args[2] or "not used") .. " in " .. args[1]:getCanonicalName()))
end
return export
chfpr4r4pjd34qzin8wn90ilkhu117d
Modul:links
828
200721
1349279
1111213
2026-04-10T19:00:15Z
Swarabakti
18192
mutakhirkan
1349279
Scribunto
text/plain
local export = {}
--[=[
[[Unsupported titles]], pages with high memory usage,
extraction modules and part-of-speech names are listed
at [[Module:links/data]].
Other modules used:
[[Module:script utilities]]
[[Module:scripts]]
[[Module:languages]] and its submodules
[[Module:gender and number]]
[[Module:debug/track]]
]=]
local anchors_module = "Module:anchors"
local debug_track_module = "Module:debug/track"
local form_of_module = "Module:form of"
local gender_and_number_module = "Module:gender and number"
local languages_module = "Module:languages"
local load_module = "Module:load"
local memoize_module = "Module:memoize"
local pages_module = "Module:pages"
local pron_qualifier_module = "Module:pron qualifier"
local scripts_module = "Module:scripts"
local script_utilities_module = "Module:script utilities"
local string_encode_entities_module = "Module:string/encode entities"
local string_utilities_module = "Module:string utilities"
local table_module = "Module:table"
local utilities_module = "Module:utilities"
local concat = table.concat
local find = string.find
local get_current_title = mw.title.getCurrentTitle
local insert = table.insert
local ipairs = ipairs
local match = string.match
local new_title = mw.title.new
local pairs = pairs
local remove = table.remove
local sub = string.sub
local toNFC = mw.ustring.toNFC
local tostring = tostring
local type = type
local unstrip = mw.text.unstrip
local NAMESPACE = get_current_title().nsText
local function anchor_encode(...)
anchor_encode = require(memoize_module)(mw.uri.anchorEncode, true)
return anchor_encode(...)
end
local function debug_track(...)
debug_track = require(debug_track_module)
return debug_track(...)
end
local function decode_entities(...)
decode_entities = require(string_utilities_module).decode_entities
return decode_entities(...)
end
local function decode_uri(...)
decode_uri = require(string_utilities_module).decode_uri
return decode_uri(...)
end
-- Can't yet replace, as the [[Module:string utilities]] version no longer has automatic double-encoding prevention, which requires changes here to account for.
local function encode_entities(...)
encode_entities = require(string_encode_entities_module)
return encode_entities(...)
end
local function extend(...)
extend = require(table_module).extend
return extend(...)
end
local function find_best_script_without_lang(...)
find_best_script_without_lang = require(scripts_module).findBestScriptWithoutLang
return find_best_script_without_lang(...)
end
local function format_categories(...)
format_categories = require(utilities_module).format_categories
return format_categories(...)
end
local function format_genders(...)
format_genders = require(gender_and_number_module).format_genders
return format_genders(...)
end
local function format_qualifiers(...)
format_qualifiers = require(pron_qualifier_module).format_qualifiers
return format_qualifiers(...)
end
local function get_current_L2(...)
get_current_L2 = require(pages_module).get_current_L2
return get_current_L2(...)
end
local function get_lang(...)
get_lang = require(languages_module).getByCode
return get_lang(...)
end
local function get_script(...)
get_script = require(scripts_module).getByCode
return get_script(...)
end
local function language_anchor(...)
language_anchor = require(anchors_module).language_anchor
return language_anchor(...)
end
local function load_data(...)
load_data = require(load_module).load_data
return load_data(...)
end
local function request_script(...)
request_script = require(script_utilities_module).request_script
return request_script(...)
end
local function shallow_copy(...)
shallow_copy = require(table_module).shallowCopy
return shallow_copy(...)
end
local function split(...)
split = require(string_utilities_module).split
return split(...)
end
local function tag_text(...)
tag_text = require(script_utilities_module).tag_text
return tag_text(...)
end
local function tag_translit(...)
tag_translit = require(script_utilities_module).tag_translit
return tag_translit(...)
end
local function trim(...)
trim = require(string_utilities_module).trim
return trim(...)
end
local function u(...)
u = require(string_utilities_module).char
return u(...)
end
local function ulower(...)
ulower = require(string_utilities_module).lower
return ulower(...)
end
local function umatch(...)
umatch = require(string_utilities_module).match
return umatch(...)
end
local m_headword_data
local function get_headword_data()
m_headword_data = load_data("Module:headword/data")
return m_headword_data
end
local function track(page, code)
local tracking_page = "links/" .. page
debug_track(tracking_page)
if code then
debug_track(tracking_page .. "/" .. code)
end
end
local function selective_trim(...)
-- Unconditionally trimmed charset.
local always_trim =
"\194\128-\194\159" .. -- U+0080-009F (C1 control characters)
"\194\173" .. -- U+00AD (soft hyphen)
"\226\128\170-\226\128\174" .. -- U+202A-202E (directionality formatting characters)
"\226\129\166-\226\129\169" -- U+2066-2069 (directionality formatting characters)
-- Standard trimmed charset.
local standard_trim = "%s" .. -- (default whitespace charset)
"\226\128\139-\226\128\141" .. -- U+200B-200D (zero-width spaces)
always_trim
-- If there are non-whitespace characters, trim all characters in `standard_trim`.
-- Otherwise, only trim the characters in `always_trim`.
selective_trim = function(text)
if text == "" then
return text
end
local trimmed = trim(text, standard_trim)
if trimmed ~= "" then
return trimmed
end
return trim(text, always_trim)
end
return selective_trim(...)
end
local function escape(text, str)
local rep
repeat
text, rep = text:gsub("\\\\(\\*" .. str .. ")", "\5%1")
until rep == 0
return (text:gsub("\\" .. str, "\6"))
end
local function unescape(text, str)
return (text
:gsub("\5", "\\")
:gsub("\6", str))
end
-- Remove bold, italics, soft hyphens, strip markers and HTML tags.
local function remove_formatting(str)
str = str
:gsub("('*)'''(.-'*)'''", "%1%2")
:gsub("('*)''(.-'*)''", "%1%2")
:gsub("", "")
return (unstrip(str)
:gsub("<[^<>]+>", ""))
end
--[==[Takes an input and splits on a double slash (taking account of escaping backslashes).]==]
function export.split_on_slashes(text)
if text:find("\\", nil, true) then
track("escaped", "split_on_slashes")
end
text = split(escape(text, "//"), "//", true) or {}
for i, v in ipairs(text) do
text[i] = unescape(v, "//")
if v == "" then
text[i] = false
end
end
return text
end
--[==[Takes a wikilink and outputs the link target and display text. By default, the link target will be returned as a title object, but if `allow_bad_target` is set it will be returned as a string, and no check will be performed as to whether it is a valid link target.]==]
function export.get_wikilink_parts(text, allow_bad_target)
-- TODO: replace `allow_bad_target` with `allow_unsupported`, with support for links to unsupported titles, including escape sequences.
if ( -- Filters out anything but "[[...]]" with no intermediate "[[" or "]]".
not match(text, "^()%[%[") or -- Faster than sub(text, 1, 2) ~= "[[".
find(text, "[[", 3, true) or
find(text, "]]", 3, true) ~= #text - 1
) then
return nil, nil
end
local pipe, title, display = find(text, "|", 3, true)
if pipe then
title, display = sub(text, 3, pipe - 1), sub(text, pipe + 1, -3)
else
title = sub(text, 3, -3)
display = title
end
if allow_bad_target then
return title, display
end
title = new_title(title)
-- No title object means the target is invalid.
if title == nil then
return nil, nil
-- If the link target starts with "#" then mw.title.new returns a broken
-- title object, so grab the current title and give it the correct fragment.
elseif title.prefixedText == "" then
local fragment = title.fragment
if fragment == "" then -- [[#]] isn't valid
return nil, nil
end
title = get_current_title()
title.fragment = fragment
end
return title, display
end
-- Does the work of export.get_fragment, but can be called directly to avoid unnecessary checks for embedded links.
local function get_fragment(text)
text = escape(text, "#")
-- Replace numeric character references with the corresponding character (' → '),
-- as they contain #, which causes the numeric character reference to be
-- misparsed (wa'a → wa'a → pagename wa&, fragment 39;a).
text = decode_entities(text)
local target, fragment = text:match("^(.-)#(.+)$")
target = target or text
target = unescape(target, "#")
fragment = fragment and unescape(fragment, "#")
return target, fragment
end
--[==[Takes a link target and outputs the actual target and the fragment (if any).]==]
function export.get_fragment(text)
if text:find("\\", nil, true) then
track("escaped", "get_fragment")
end
-- If there are no embedded links, process input.
local open = find(text, "[[", nil, true)
if not open then
return get_fragment(text)
end
local close = find(text, "]]", open + 2, true)
if not close then
return get_fragment(text)
-- If there is one, but it's redundant (i.e. encloses everything with no pipe), remove and process.
elseif open == 1 and close == #text - 1 and not find(text, "|", 3, true) then
return get_fragment(sub(text, 3, -3))
end
-- Otherwise, return the input.
return text
end
--[==[
Given a link target as passed to `full_link()`, get the actual page that the target refers to. This removes
bold, italics, strip markets and HTML; calls `makeEntryName()` for the language in question; converts targets
beginning with `*` to the Reconstruction namespace; and converts appendix-constructed languages to the Appendix
namespace. Returns up to three values:
# the actual page to link to, or {nil} to not link to anything;
# how the target should be displayed as, if the user didn't explicitly specify any display text; generally the
same as the original target, but minus any anti-asterisk !!;
# the value `true` if the target had a backslash-escaped * in it (FIXME: explain this more clearly).
]==]
function export.get_link_page_with_auto_display(target, lang, sc, plain)
local orig_target = target
if not target then
return nil
elseif target:find("\\", nil, true) then
track("escaped", "get_link_page")
end
target = remove_formatting(target)
if target:sub(1, 1) == ":" then
track("initial colon")
-- FIXME, the auto_display (second return value) should probably remove the colon
return target:sub(2), orig_target
end
local prefix = target:match("^(.-):")
-- Convert any escaped colons
target = target:gsub("\\:", ":")
if prefix then
-- If this is an a link to another namespace or an interwiki link, ensure there's an initial colon and then
-- return what we have (so that it works as a conventional link, and doesn't do anything weird like add the term
-- to a category.)
prefix = ulower(trim(prefix))
if prefix ~= "" and (
load_data("Module:data/namespaces")[prefix] or
load_data("Module:data/interwikis")[prefix]
) then
return target, orig_target
end
end
-- Check if the term is reconstructed and remove any asterisk. Also check for anti-asterisk (!!).
-- Otherwise, handle the escapes.
local reconstructed, escaped, anti_asterisk
if not plain then
target, reconstructed = target:gsub("^%*(.)", "%1")
if reconstructed == 0 then
target, anti_asterisk = target:gsub("^!!(.)", "%1")
if anti_asterisk == 1 then
-- Remove !! from original. FIXME! We do it this way because the call to remove_formatting() above
-- may cause non-initial !! to be interpreted as anti-asterisks. We should surely move the
-- remove_formatting() call later.
orig_target = orig_target:gsub("^!!", "")
end
end
end
target, escaped = target:gsub("^(\\-)\\%*", "%1*")
if not (sc and sc:getCode() ~= "None") then
sc = lang:findBestScript(target)
end
-- Remove carets if they are used to capitalize parts of transliterations (unless they have been escaped).
if (not sc:hasCapitalization()) and sc:isTransliterated() and target:match("%^") then
target = escape(target, "^")
:gsub("%^", "")
target = unescape(target, "^")
end
-- Get the entry name for the language.
target = lang:makeEntryName(target, sc, reconstructed == 1 or lang:hasType("appendix-constructed"))
-- If the link contains unexpanded template parameters, then don't create a link.
if target:match("{{{.-}}}") then
-- FIXME: Should we return the original target as the default display value (second return value)?
return nil
end
-- Link to appendix for reconstructed terms and terms in appendix-only languages. Plain links interpret *
-- literally, however.
if reconstructed == 1 then
if lang:getFullCode() == "und" then
-- Return the original target as default display value. If we don't do this, we wrongly get
-- [Term?] displayed instead.
return nil, orig_target
end
target = "Lampiran:Rekonstruksi " .. lang:getCanonicalNameLower() .. "/" .. target
-- Reconstructed languages and substrates require an initial *.
elseif anti_asterisk ~= 1 and (lang:hasType("reconstructed") or lang:getFamilyCode() == "qfa-sub") then
error(("The specified language %s is unattested, while the term '%s' does not begin with '*' to indicate that it is reconstructed.")
:
format(lang:getCanonicalName(), orig_target))
elseif lang:hasType("appendix-constructed") then
target = "Lampiran:" .. lang:getFullName() .. "/" .. target
else
target = target
end
return target, orig_target, escaped > 0
end
function export.get_link_page(target, lang, sc, plain)
local target, auto_display, escaped = export.get_link_page_with_auto_display(target, lang, sc, plain)
return target, escaped
end
-- Make a link from a given link's parts
local function make_link(link, lang, sc, id, isolated, cats, no_alt_ast, plain)
-- Convert percent encoding to plaintext.
link.target = link.target and decode_uri(link.target, "PATH")
link.fragment = link.fragment and decode_uri(link.fragment, "PATH")
-- Find fragments (if one isn't already set).
-- Prevents {{l|en|word#Etymology 2|word}} from linking to [[word#Etymology 2#English]].
-- # can be escaped as \#.
if link.target and link.fragment == nil then
link.target, link.fragment = get_fragment(link.target)
end
-- Process the target
local auto_display, escaped
link.target, auto_display, escaped = export.get_link_page_with_auto_display(link.target, lang, sc, plain)
-- Create a default display form.
-- If the target is "" then it's a link like [[#English]], which refers to the current page.
if auto_display == "" then
auto_display = (m_headword_data or get_headword_data()).pagename
end
-- If the display is the target and the reconstruction * has been escaped, remove the escaping backslash.
if escaped then
auto_display = auto_display:gsub("\\([^\\]*%*)", "%1", 1)
end
-- Process the display form.
if link.display then
local orig_display = link.display
link.display = lang:makeDisplayText(link.display, sc, true)
if cats then
auto_display = lang:makeDisplayText(auto_display, sc)
-- If the alt text is the same as what would have been automatically generated, then the alt parameter is redundant (e.g. {{l|en|foo|foo}}, {{l|en|w:foo|foo}}, but not {{l|en|w:foo|w:foo}}).
-- If they're different, but the alt text could have been entered as the term parameter without it affecting the target page, then the target parameter is redundant (e.g. {{l|ru|фу|фу́}}).
-- If `no_alt_ast` is true, use pcall to catch the error which will be thrown if this is a reconstructed lang and the alt text doesn't have *.
if link.display == auto_display then
insert(cats, lang:getCode() .. ":Istilah dengan parameter alt lewah")
else
local ok, check
if no_alt_ast then
ok, check = pcall(export.get_link_page, orig_display, lang, sc, plain)
else
ok = true
check = export.get_link_page(orig_display, lang, sc, plain)
end
if ok and link.target == check then
insert(cats, lang:getCode() .. ":Istilah dengan parameter sasaran lewah")
end
end
end
else
link.display = lang:makeDisplayText(auto_display, sc)
end
if not link.target then
return link.display
end
-- If the target is the same as the current page, there is no sense id
-- and either the language code is "und" or the current L2 is the current
-- language then return a "self-link" like the software does.
if link.target == get_current_title().prefixedText then
local fragment, current_L2 = link.fragment, get_current_L2()
if (
fragment and fragment == current_L2 or
not (id or fragment) and (lang:getFullCode() == "und" or lang:getFullName() == current_L2)
) then
return tostring(mw.html.create("strong")
:addClass("selflink")
:wikitext(link.display))
end
end
-- Add fragment. Do not add a section link to "Undetermined", as such sections do not exist and are invalid.
-- TabbedLanguages handles links without a section by linking to the "last visited" section, but adding
-- "Undetermined" would break that feature. For localized prefixes that make syntax error, please use the
-- format: ["xyz"] = true.
local prefix = link.target:match("^:*([^:]+):")
prefix = prefix and ulower(prefix)
if prefix ~= "category" and not (prefix and load_data("Module:data/interwikis")[prefix]) then
if (link.fragment or link.target:sub(-1) == "#") and not plain then
track("fragment", lang:getFullCode())
if cats then
insert(cats, lang:getCode() .. ":Istilah dengan fragmen manual")
end
end
if not link.fragment then
if id then
link.fragment = lang:getFullCode() == "und" and anchor_encode(id) or language_anchor(lang, id)
elseif lang:getFullCode() ~= "und" and not (link.target:match("^Lampiran:") or link.target:match("^Wikikamus:")) then
link.fragment = anchor_encode(lang:getFullName())
end
end
end
-- Put inward-facing square brackets around a link to isolated spacing character(s).
if isolated and #link.display > 0 and not umatch(decode_entities(link.display), "%S") then
link.display = "]" .. link.display .. "["
end
link.target = link.target:gsub("^(:?)(.*)", function(m1, m2)
return m1 .. encode_entities(m2, "#%&+/:<=>@[\\]_{|}")
end)
link.fragment = link.fragment and encode_entities(remove_formatting(link.fragment), "#%&+/:<=>@[\\]_{|}")
return "[[" ..
link.target:gsub("^[^:]", ":%0") .. (link.fragment and "#" .. link.fragment or "") .. "|" .. link.display .. "]]"
end
-- Split a link into its parts
local function parse_link(linktext)
local link = { target = linktext }
local target = link.target
link.target, link.display = target:match("^(..-)|(.+)$")
if not link.target then
link.target = target
link.display = target
end
-- There's no point in processing these, as they aren't real links.
local target_lower = link.target:lower()
for _, false_positive in ipairs({ "category", "cat", "file", "image" }) do
if target_lower:match("^" .. false_positive .. ":") then
return nil
end
end
link.display = decode_entities(link.display)
link.target, link.fragment = get_fragment(link.target)
-- So that make_link does not look for a fragment again.
if not link.fragment then
link.fragment = false
end
return link
end
local function check_params_ignored_when_embedded(alt, lang, id, cats)
if alt then
track("alt-ignored")
if cats then
insert(cats, lang:getCode() .. ":Istilah dengan parameter alt tak diacuhkan")
end
end
if id then
track("id-ignored")
if cats then
insert(cats, lang:getCode() .. "Istilah dengan parameter id tak diacuhkan")
end
end
end
-- Find embedded links and ensure they link to the correct section.
local function process_embedded_links(text, alt, lang, sc, id, cats, no_alt_ast, plain)
-- Process the non-linked text.
text = lang:makeDisplayText(text, sc, true)
-- If the text begins with * and another character, then act as if each link begins with *. However, don't do this if the * is contained within a link at the start. E.g. `|*[[foo]]` would set all_reconstructed to true, while `|[[*foo]]` would not.
local all_reconstructed = false
if not plain then
-- anchor_encode removes links etc.
if anchor_encode(text):sub(1, 1) == "*" then
all_reconstructed = true
end
-- Otherwise, handle any escapes.
text = text:gsub("^(\\-)\\%*", "%1*")
end
check_params_ignored_when_embedded(alt, lang, id, cats)
local function process_link(space1, linktext, space2)
local capture = "[[" .. linktext .. "]]"
local link = parse_link(linktext)
-- Return unprocessed false positives untouched (e.g. categories).
if not link then
return capture
end
if all_reconstructed then
if link.target:find("^!!") then
-- Check for anti-asterisk !! at the beginning of a target, indicating that a reconstructed term
-- wants a part of the term to link to a non-reconstructed term, e.g. Old English
-- {{ang-noun|m|head=*[[!!Crist|Cristes]] [[!!mæsseǣfen]]}}.
link.target = link.target:sub(3)
-- Also remove !! from the display, which may have been copied from the target (as in mæsseǣfen in
-- the example above).
link.display = link.display:gsub("^!!", "")
elseif not link.target:match("^%*") then
link.target = "*" .. link.target
end
end
linktext = make_link(link, lang, sc, id, false, nil, no_alt_ast, plain)
:gsub("^%[%[", "\3")
:gsub("%]%]$", "\4")
return space1 .. linktext .. space2
end
-- Use chars 1 and 2 as temporary substitutions, so that we can use charsets. These are converted to chars 3 and 4 by process_link, which means we can convert any remaining chars 1 and 2 back to square brackets (i.e. those not part of a link).
text = text
:gsub("%[%[", "\1")
:gsub("%]%]", "\2")
-- If the script uses ^ to capitalize transliterations, make sure that any carets preceding links are on the inside, so that they get processed with the following text.
if (
text:find("^", nil, true) and
not sc:hasCapitalization() and
sc:isTransliterated()
) then
text = escape(text, "^")
:gsub("%^\1", "\1%^")
text = unescape(text, "^")
end
text = text:gsub("\1(%s*)([^\1\2]-)(%s*)\2", process_link)
-- Remove the extra * at the beginning of a language link if it's immediately followed by a link whose display begins with * too.
if all_reconstructed then
text = text:gsub("^%*\3([^|\1-\4]+)|%*", "\3%1|*")
end
return (text
:gsub("[\1\3]", "[[")
:gsub("[\2\4]", "]]")
)
end
local function simple_link(term, fragment, alt, lang, sc, id, cats, no_alt_ast, srwc)
local plain
if lang == nil then
lang, plain = get_lang("und"), true
end
-- Get the link target and display text. If the term is the empty string, treat the input as a link to the current page.
if term == "" then
term = get_current_title().prefixedText
elseif term then
local new_term, new_alt = export.get_wikilink_parts(term, true)
if new_term then
check_params_ignored_when_embedded(alt, lang, id, cats)
-- [[|foo]] links are treated as plaintext "[[|foo]]".
-- FIXME: Pipes should be handled via a proper escape sequence, as they can occur in unsupported titles.
if new_term == "" then
term, alt = nil, term
else
local title = new_title(new_term)
if title then
local ns = title.namespace
-- File: and Category: links should be returned as-is.
if ns == 6 or ns == 14 then
return term
end
end
term, alt = new_term, new_alt
if cats then
if not (srwc and srwc(term, alt)) then
insert(cats, lang:getCode() .. ":Istilah dengan pranala wiki lewah")
end
end
end
end
end
if alt then
alt = selective_trim(alt)
if alt == "" then
alt = nil
end
end
-- If there's nothing to process, return nil.
if not (term or alt) then
return nil
end
-- If there is no script, get one.
if not sc then
sc = lang:findBestScript(alt or term)
end
-- Embedded wikilinks need to be processed individually.
if term then
local open = find(term, "[[", nil, true)
if open and find(term, "]]", open + 2, true) then
return process_embedded_links(term, alt, lang, sc, id, cats, no_alt_ast, plain)
end
term = selective_trim(term)
end
-- If not, make a link using the parameters.
return make_link({
target = term,
display = alt,
fragment = fragment
}, lang, sc, id, true, cats, no_alt_ast, plain)
end
--[==[Creates a basic link to the given term. It links to the language section (such as <code>==English==</code>), but it does not add language and script wrappers, so any code that uses this function should call the <code class="n">[[Module:script utilities#tag_text|tag_text]]</code> from [[Module:script utilities]] to add such wrappers itself at some point.
The first argument, <code class="n">data</code>, may contain the following items, a subset of the items used in the <code class="n">data</code> argument of <code class="n">full_link</code>. If any other items are included, they are ignored.
{ {
term = entry_to_link_to,
alt = link_text_or_displayed_text,
lang = language_object,
id = sense_id,
} }
; <code class="n">term</code>
: Text to turn into a link. This is generally the name of a page. The text can contain wikilinks already embedded in it. These are processed individually just like a single link would be. The <code class="n">alt</code> argument is ignored in this case.
; <code class="n">alt</code> (''optional'')
: The alternative display for the link, if different from the linked page. If this is {{code|lua|nil}}, the <code class="n">text</code> argument is used instead (much like regular wikilinks). If <code class="n">text</code> contains wikilinks in it, this argument is ignored and has no effect. (Links in which the alt is ignored are tracked with the tracking template {{whatlinkshere|tracking=links/alt-ignored}}.)
; <code class="n">lang</code>
: The [[Module:languages#Language objects|language object]] for the term being linked. If this argument is defined, the function will determine the language's canonical name (see [[Template:language data documentation]]), and point the link or links in the <code class="n">term</code> to the language's section of an entry, or to a language-specific senseid if the <code class="n">id</code> argument is defined.
; <code class="n">id</code> (''optional'')
: Sense id string. If this argument is defined, the link will point to a language-specific sense id ({{ll|en|identifier|id=HTML}}) created by the template {{temp|senseid}}. A sense id consists of the language's canonical name, a hyphen (<code>-</code>), and the string that was supplied as the <code class="n">id</code> argument. This is useful when a term has more than one sense in a language. If the <code class="n">term</code> argument contains wikilinks, this argument is ignored. (Links in which the sense id is ignored are tracked with the tracking template {{whatlinkshere|tracking=links/id-ignored}}.)
The second argument is as follows:
; <code class="n">allow_self_link</code>
: If {{code|lua|true}}, the function will also generate links to the current page. The default ({{code|lua|false}}) will not generate a link but generate a bolded "self link" instead.
The following special options are processed for each link (both simple text and with embedded wikilinks):
* The target page name will be processed to generate the correct entry name. This is done by the [[Module:languages#makeEntryName|makeEntryName]] function in [[Module:languages]], using the <code class="n">entry_name</code> replacements in the language's data file (see [[Template:language data documentation]] for more information). This function is generally used to automatically strip dictionary-only diacritics that are not part of the normal written form of a language.
* If the text starts with <code class="n">*</code>, then the term is considered a reconstructed term, and a link to the Reconstruction: namespace will be created. If the text contains embedded wikilinks, then <code class="n">*</code> is automatically applied to each one individually, while preserving the displayed form of each link as it was given. This allows linking to phrases containing multiple reconstructed terms, while only showing the * once at the beginning.
* If the text starts with <code class="n">:</code>, then the link is treated as "raw" and the above steps are skipped. This can be used in rare cases where the page name begins with <code class="n">*</code> or if diacritics should not be stripped. For example:
** {{temp|l|en|*nix}} links to the nonexistent page [[Reconstruction:English/nix]] (<code class="n">*</code> is interpreted as a reconstruction), but {{temp|l|en|:*nix}} links to [[*nix]].
** {{temp|l|sl|Franche-Comté}} links to the nonexistent page [[Franche-Comte]] (<code>é</code> is converted to <code>e</code> by <code class="n">makeEntryName</code>), but {{temp|l|sl|:Franche-Comté}} links to [[Franche-Comté]].]==]
function export.language_link(data)
if type(data) ~= "table" then
error(
"The first argument to the function language_link must be a table. See Module:links/documentation for more information.")
elseif data.term and data.term:find("\\", nil, true) or data.alt and data.alt:find("\\", nil, true) then
track("escaped", "language_link")
end
-- Categorize links to "und".
local lang, cats = data.lang, data.cats
if cats and lang:getCode() == "und" then
insert(cats, "Pranala bahasa tak ditentukan")
end
return simple_link(
data.term,
data.fragment,
data.alt,
lang,
data.sc,
data.id,
cats,
data.no_alt_ast,
data.suppress_redundant_wikilink_cat
)
end
function export.plain_link(data)
if type(data) ~= "table" then
error(
"The first argument to the function plain_link must be a table. See Module:links/documentation for more information.")
elseif data.term and data.term:find("\\", nil, true) or data.alt and data.alt:find("\\", nil, true) then
track("escaped", "plain_link")
end
return simple_link(
data.term,
data.fragment,
data.alt,
nil,
data.sc,
data.id,
data.cats,
data.no_alt_ast,
data.suppress_redundant_wikilink_cat
)
end
--[==[Replace any links with links to the correct section, but don't link the whole text if no embedded links are found. Returns the display text form.]==]
function export.embedded_language_links(data)
if type(data) ~= "table" then
error(
"The first argument to the function embedded_language_links must be a table. See Module:links/documentation for more information.")
elseif data.term and data.term:find("\\", nil, true) or data.alt and data.alt:find("\\", nil, true) then
track("escaped", "embedded_language_links")
end
local term, lang, sc = data.term, data.lang, data.sc
-- If we don't have a script, get one.
if not sc then
sc = lang:findBestScript(term)
end
-- Do we have embedded wikilinks? If so, they need to be processed individually.
local open = find(term, "[[", nil, true)
if open and find(term, "]]", open + 2, true) then
return process_embedded_links(term, data.alt, lang, sc, data.id, data.cats, data.no_alt_ast)
end
-- If not, return the display text.
term = selective_trim(term)
-- FIXME: Double-escape any percent-signs, because we don't want to treat non-linked text as having percent-encoded characters. This is a hack: percent-decoding should come out of [[Module:languages]] and only dealt with in this module, as it's specific to links.
term = term:gsub("%%", "%%25")
return lang:makeDisplayText(term, sc, true)
end
function export.mark(text, item_type, face, lang)
local tag = { "", "" }
if item_type == "gloss" then
tag = { '<span class="mention-gloss-double-quote">“</span><span class="mention-gloss">',
'</span><span class="mention-gloss-double-quote">”</span>' }
if type(text) == "string" and text:match("^''[^'].*''$") then
-- Temporary tracking for mention glosses that are entirely italicized or bolded, which is probably
-- wrong. (Note that this will also find bolded mention glosses since they use triple apostrophes.)
track("italicized-mention-gloss", lang and lang:getFullCode() or nil)
end
elseif item_type == "tr" then
if face == "term" then
tag = { '<span lang="' .. lang:getFullCode() .. '" class="tr mention-tr Latn">',
'</span>' }
else
tag = { '<span lang="' .. lang:getFullCode() .. '" class="tr Latn">', '</span>' }
end
elseif item_type == "ts" then
-- \226\129\160 = word joiner (zero-width non-breaking space) U+2060
tag = { '<span class="ts mention-ts Latn">/\226\129\160', '\226\129\160/</span>' }
elseif item_type == "pos" then
tag = { '<span class="ann-pos">', '</span>' }
elseif item_type == "non-gloss" then
tag = { '<span class="ann-non-gloss">', '</span>' }
elseif item_type == "annotations" then
tag = { '<span class="mention-gloss-paren annotation-paren">(</span>',
'<span class="mention-gloss-paren annotation-paren">)</span>' }
elseif item_type == "infl" then
tag = { '<span class="ann-infl">', '</span>' }
end
if type(text) == "string" then
return tag[1] .. text .. tag[2]
else
return ""
end
end
local pos_tags
--[==[Formats the annotations that are displayed with a link created by {{code|lua|full_link}}. Annotations are the extra bits of information that are displayed following the linked term, and include things such as gender, transliteration, gloss and so on.
* The first argument is a table possessing some or all of the following keys:
*:; <code class="n">genders</code>
*:: Table containing a list of gender specifications in the style of [[Module:gender and number]].
*:; <code class="n">tr</code>
*:: Transliteration.
*:; <code class="n">gloss</code>
*:: Gloss that translates the term in the link, or gives some other descriptive information.
*:; <code class="n">pos</code>
*:: Part of speech of the linked term. If the given argument matches one of the aliases in `pos_aliases` in [[Module:headword/data]], or consists of a part of speech or alias followed by `f` (for a non-lemma form), expand it appropriately. Otherwise, just show the given text as it is.
*:; <code class="n">ng</code>
*:: Arbitrary non-gloss descriptive text for the link. This should be used in preference to putting descriptive text in `gloss` or `pos`.
*:; <code class="n">lit</code>
*:: Literal meaning of the term, if the usual meaning is figurative or idiomatic.
*:; <code class="n">infl</code>
*:: Table containing a list of grammar tags in the style of [[Module:form of]] `tagged_inflections`.
*:Any of the above values can be omitted from the <code class="n">info</code> argument. If a completely empty table is given (with no annotations at all), then an empty string is returned.
* The second argument is a string. Valid values are listed in [[Module:script utilities/data]] "data.translit" table.]==]
function export.format_link_annotations(data, face)
local output = {}
-- Interwiki link
if data.interwiki then
insert(output, data.interwiki)
end
-- Genders
if type(data.genders) ~= "table" then
data.genders = { data.genders }
end
if data.genders and #data.genders > 0 then
local genders, gender_cats = format_genders(data.genders, data.lang)
insert(output, " " .. genders)
if gender_cats then
local cats = data.cats
if cats then
extend(cats, gender_cats)
end
end
end
local annotations = {}
-- Transliteration and transcription
if data.tr and data.tr[1] or data.ts and data.ts[1] then
local kind
if face == "term" then
kind = face
else
kind = "default"
end
if data.tr[1] and data.ts[1] then
insert(annotations, tag_translit(data.tr[1], data.lang, kind) .. " " .. export.mark(data.ts[1], "ts"))
elseif data.ts[1] then
insert(annotations, export.mark(data.ts[1], "ts"))
else
insert(annotations, tag_translit(data.tr[1], data.lang, kind))
end
end
-- Gloss/translation
if data.gloss then
insert(annotations, export.mark(data.gloss, "gloss"))
end
-- Part of speech
if data.pos then
-- debug category for pos= containing transcriptions
if data.pos:match("/[^><]-/") then
data.pos = data.pos .. "[[Kategori:Pranala mengandung transkripsi pada kelas kata]]"
end
-- Canonicalize part of speech aliases as well as non-lemma aliases like 'nf' or 'nounf' for "noun form".
pos_tags = pos_tags or (m_headword_data or get_headword_data()).pos_aliases
local pos = pos_tags[data.pos]
if not pos and data.pos:find("f$") then
local pos_form = data.pos:sub(1, -2)
-- We only expand something ending in 'f' if the result is a recognized non-lemma POS.
pos_form = (pos_tags[pos_form] or pos_form) .. " form"
if (m_headword_data or get_headword_data()).nonlemmas[pos_form .. "s"] then
pos = pos_form
end
end
insert(annotations, export.mark(pos or data.pos, "pos"))
end
-- Inflection data
if data.infl then
local m_form_of = require(form_of_module)
-- Split tag sets manually, since tagged_inflections creates a numbered list, and we do not want that.
local infl_outputs = {}
local tag_sets = m_form_of.split_tag_set(data.infl)
for _, tag_set in ipairs(tag_sets) do
table.insert(infl_outputs,
m_form_of.tagged_inflections({ tags = tag_set, lang = data.lang, nocat = true, nolink = true, nowrap = true }))
end
insert(annotations, export.mark(table.concat(infl_outputs, "; "), "infl"))
end
-- Non-gloss text
if data.ng then
insert(annotations, export.mark(data.ng, "non-gloss"))
end
-- Literal/sum-of-parts meaning
if data.lit then
insert(annotations, "literally " .. export.mark(data.lit, "gloss"))
end
-- Provide a hook to insert additional annotations such as nested inflections.
if data.postprocess_annotations then
data.postprocess_annotations {
data = data,
annotations = annotations
}
end
if #annotations > 0 then
insert(output, " " .. export.mark(concat(annotations, ", "), "annotations"))
end
return concat(output)
end
-- Encode certain characters to avoid various delimiter-related issues at various stages. We need to encode < and >
-- because they end up forming part of CSS class names inside of <span ...> and will interfere with finding the end
-- of the HTML tag. I first tried converting them to URL encoding, i.e. %3C and %3E; they then appear in the URL as
-- %253C and %253E, which get mapped back to %3C and %3E when passed to [[Module:accel]]. But mapping them to <
-- and > somehow works magically without any further work; they appear in the URL as < and >, and get passed to
-- [[Module:accel]] as < and >. I have no idea who along the chain of calls is doing the encoding and decoding. If
-- someone knows, please modify this comment appropriately!
local accel_char_map
local function get_accel_char_map()
accel_char_map = {
["%"] = ".",
[" "] = "_",
["_"] = u(0xFFF0),
["<"] = "<",
[">"] = ">",
}
return accel_char_map
end
local function encode_accel_param_chars(param)
return (param:gsub("[% <>_]", accel_char_map or get_accel_char_map()))
end
local function encode_accel_param(prefix, param)
if not param then
return ""
end
if type(param) == "table" then
local filled_params = {}
-- There may be gaps in the sequence, especially for translit params.
local maxindex = 0
for k in pairs(param) do
if type(k) == "number" and k > maxindex then
maxindex = k
end
end
for i = 1, maxindex do
filled_params[i] = param[i] or ""
end
-- [[Module:accel]] splits these up again.
param = concat(filled_params, "*~!")
end
-- This is decoded again by [[WT:ACCEL]].
return prefix .. encode_accel_param_chars(param)
end
local function insert_if_not_blank(list, item)
if item == "" then
return
end
insert(list, item)
end
local function get_class(lang, tr, accel, nowrap)
if not accel and not nowrap then
return ""
end
local classes = {}
if accel then
insert(classes, "form-of lang-" .. lang:getFullCode())
local form = accel.form
if form then
insert(classes, encode_accel_param_chars(form) .. "-form-of")
end
insert_if_not_blank(classes, encode_accel_param("gender-", accel.gender))
insert_if_not_blank(classes, encode_accel_param("pos-", accel.pos))
insert_if_not_blank(classes, encode_accel_param("transliteration-", accel.translit or (tr ~= "-" and tr or nil)))
insert_if_not_blank(classes, encode_accel_param("target-", accel.target))
insert_if_not_blank(classes, encode_accel_param("origin-", accel.lemma))
insert_if_not_blank(classes, encode_accel_param("origin_transliteration-", accel.lemma_translit))
if accel.no_store then
insert(classes, "form-of-nostore")
end
end
if nowrap then
insert(classes, nowrap)
end
return concat(classes, " ")
end
-- Add any left or right regular or accent qualifiers, labels or references to a formatted term. `data` is the object
-- specifying the term, which should optionally contain:
-- * a language object in `lang`; required if any accent qualifiers or labels are given;
-- * left regular qualifiers in `q` (an array of strings or a single string); an empty array or blank string will be
-- ignored;
-- * right regular qualifiers in `qq` (an array of strings or a single string); an empty array or blank string will be
-- ignored;
-- * left accent qualifiers in `a` (an array of strings); an empty array will be ignored;
-- * right accent qualifiers in `aa` (an array of strings); an empty array will be ignored;
-- * left labels in `l` (an array of strings); an empty array will be ignored;
-- * right labels in `ll` (an array of strings); an empty array will be ignored;
-- * references in `refs`, an array either of strings (formatted reference text) or objects containing fields `text`
-- (formatted reference text) and optionally `name` and/or `group`.
-- `formatted` is the formatted version of the term itself.
local function add_qualifiers_and_refs_to_term(data, formatted)
local q = data.q
if type(q) == "string" then
q = { q }
end
local qq = data.qq
if type(qq) == "string" then
qq = { qq }
end
if q and q[1] or qq and qq[1] or data.a and data.a[1] or data.aa and data.aa[1] or data.l and data.l[1] or
data.ll and data.ll[1] or data.refs and data.refs[1] then
formatted = format_qualifiers {
lang = data.lang,
text = formatted,
q = q,
qq = qq,
a = data.a,
aa = data.aa,
l = data.l,
ll = data.ll,
refs = data.refs,
}
end
return formatted
end
--[==[
Creates a full link, with annotations (see `[[#format_link_annotations|format_link_annotations]]`), in the style of {{tl|l}} or {{tl|m}}.
The first argument, `data`, must be a table. It contains the various elements that can be supplied as parameters to {{tl|l}} or {{tl|m}}:
{ {
term = entry_to_link_to,
alt = link_text_or_displayed_text,
lang = language_object,
sc = script_object,
track_sc = boolean,
no_nonstandard_sc_cat = boolean,
fragment = link_fragment,
id = sense_id,
genders = { "gender1", "gender2", ... },
tr = transliteration,
respect_link_tr = boolean,
ts = transcription,
gloss = gloss,
pos = part_of_speech_tag,
ng = non-gloss text,
lit = literal_translation,
infl = { "form_of_grammar_tag1", "form_of_grammar_tag2", ... },
no_alt_ast = boolean,
accel = {accelerated_creation_tags},
interwiki = interwiki,
pretext = "text_at_beginning" or nil,
posttext = "text_at_end" or nil,
q = { "left_qualifier1", "left_qualifier2", ...} or "left_qualifier",
qq = { "right_qualifier1", "right_qualifier2", ...} or "right_qualifier",
l = { "left_label1", "left_label2", ...},
ll = { "right_label1", "right_label2", ...},
a = { "left_accent_qualifier1", "left_accent_qualifier2", ...},
aa = { "right_accent_qualifier1", "right_accent_qualifier2", ...},
refs = { "formatted_ref1", "formatted_ref2", ...} or { {text = "text", name = "name", group = "group"}, ... },
show_qualifiers = boolean,
} }
Any one of the items in the `data` table may be {nil}, but an error will be shown if neither `term` nor `alt` nor `tr`
is present. Thus, calling {full_link{ term = term, lang = lang, sc = sc }}, where `term` is the page to link to (which
may have diacritics that will be stripped and/or embedded bracketed links) and `lang` is a
[[Module:languages#Language objects|language object]] from [[Module:languages]], will give a plain link similar to the
one produced by the template {{tl|l}}, and calling {full_link( { term = term, lang = lang, sc = sc }, "term" )} will
give a link similar to the one produced by the template {{tl|m}}.
The function will:
* Try to determine the script, based on the characters found in the `term` or `alt` argument, if the script was not
given. If a script is given and `track_sc` is {true}, it will check whether the input script is the same as the one
which would have been automatically generated and add the category [[:Category:LANG terms with redundant script codes]]
if yes, or [[:Category:LANG terms with non-redundant manual script codes]] if no. This should be used when the input
script object is directly determined by a template's `sc` parameter.
* Call `[[#language_link|language_link]]` on the `term` or `alt` forms, to remove diacritics in the page name, process
any embedded wikilinks and create links to Reconstruction or Appendix pages when necessary.
* Call `[[Module:script utilities#tag_text]]` to add the appropriate language and script tags to the term and
italicize terms written in the Latin script if necessary. Accelerated creation tags, as used by [[WT:ACCEL]], are
included.
* Generate a transliteration, based on the `alt` or `term` arguments, if the script is not Latin, no transliteration was
provided in `tr` and the combination of the term's language and script support automatic transliteration. The
transliteration itself will be linked if both `.respect_link_tr` is specified and the language of the term has the
`link_tr` property set for the script of the term; but not otherwise.
* Add the annotations (transliteration, gender, gloss, etc.) after the link.
* If `no_alt_ast` is specified, then the `alt` text does not need to contain an asterisk if the language is
reconstructed. This should only be used by modules which really need to allow links to reconstructions that don't
display asterisks (e.g. number boxes).
* If `pretext` or `posttext` is specified, this is text to (respectively) prepend or append to the output, directly
before processing qualifiers, labels and references. This can be used to add arbitrary extra text inside of the
qualifiers, labels and references.
* If `show_qualifiers` is specified or the `show_qualifiers` argument is given, then left and right qualifiers, accent
qualifiers, labels and references will be displayed, otherwise they will be ignored. (This is because a fair amount of
code stores qualifiers, labels and/or references in these fields and displays them itself, rather than expecting
{full_link()} to display them.)]==]
function export.full_link(data, face, allow_self_link, show_qualifiers)
if type(data) ~= "table" then
error("The first argument to the function full_link must be a table. "
.. "See Module:links/documentation for more information.")
elseif data.term and data.term:find("\\", nil, true) or data.alt and data.alt:find("\\", nil, true) then
track("escaped", "full_link")
end
-- Prevent data from being destructively modified.
local data = shallow_copy(data)
-- FIXME: this shouldn't be added to `data`, as that means the input table needs to be cloned.
data.cats = {}
-- Categorize links to "und".
local lang, cats = data.lang, data.cats
if cats and lang:getCode() == "und" then
insert(cats, "Undetermined language links")
end
local terms = { true }
-- Generate multiple forms if applicable.
for _, param in ipairs { "term", "alt" } do
if type(data[param]) == "string" and data[param]:find("//", nil, true) then
data[param] = export.split_on_slashes(data[param])
elseif type(data[param]) == "string" and not (type(data.term) == "string" and data.term:find("//", nil, true)) then
if not data.no_generate_forms then
data[param] = lang:generateForms(data[param])
else
data[param] = { data[param] }
end
else
data[param] = {}
end
end
for _, param in ipairs { "sc", "tr", "ts" } do
data[param] = { data[param] }
end
for _, param in ipairs { "term", "alt", "sc", "tr", "ts" } do
for i in pairs(data[param]) do
terms[i] = true
end
end
-- Create the link
local output = {}
local id, no_alt_ast, srwc, accel, nevercalltr = data.id, data.no_alt_ast, data.suppress_redundant_wikilink_cat,
data.accel, data.never_call_transliteration_module
local link_tr = data.respect_link_tr and lang:link_tr(data.sc[1])
for i in ipairs(terms) do
local link
-- Is there any text to show?
if (data.term[i] or data.alt[i]) then
-- Try to detect the script if it was not provided
local display_term = data.alt[i] or data.term[i]
local best = lang:findBestScript(display_term)
-- no_nonstandard_sc_cat is intended for use in [[Module:interproject]]
if (
not data.no_nonstandard_sc_cat and
best:getCode() == "None" and
find_best_script_without_lang(display_term):getCode() ~= "None"
) then
insert(cats, lang:getCode() .. ":Istilah dengan aksara tak lazim")
end
if not data.sc[i] then
data.sc[i] = best
-- Track uses of sc parameter.
elseif data.track_sc then
if data.sc[i]:getCode() == best:getCode() then
insert(cats, lang:getCode() .. ":Istilah dengan kode aksara lewah")
else
insert(cats, lang:getCode() .. ":Istilah dengan kode aksara manual tak lewah")
end
end
-- If using a discouraged character sequence, add to maintenance category
if data.sc[i]:hasNormalizationFixes() == true then
if (data.term[i] and data.sc[i]:fixDiscouragedSequences(toNFC(data.term[i])) ~= toNFC(data.term[i])) or (data.alt[i] and data.sc[i]:fixDiscouragedSequences(toNFC(data.alt[i])) ~= toNFC(data.alt[i])) then
insert(cats, "Pages using discouraged character sequences")
end
end
link = simple_link(
data.term[i],
data.fragment,
data.alt[i],
lang,
data.sc[i],
id,
cats,
no_alt_ast,
srwc
)
end
-- simple_link can return nil, so check if a link has been generated.
if link then
-- Add "nowrap" class to prefixes in order to prevent wrapping after the hyphen
local nowrap
local display_term = data.alt[i] or data.term[i]
if display_term and (display_term:find("^%-") or display_term:find("^־")) then -- Hebrew maqqef -- FIXME, use hyphens from [[Module:affix]]
nowrap = "nowrap"
end
link = tag_text(link, lang, data.sc[i], face, get_class(lang, data.tr[i], accel, nowrap))
else
--[[ No term to show.
Is there at least a transliteration we can work from? ]]
link = request_script(lang, data.sc[i])
-- No link to show, and no transliteration either. Show a term request (unless it's a substrate, as they rarely take terms).
if (link == "" or (not data.tr[i]) or data.tr[i] == "-") and lang:getFamilyCode() ~= "qfa-sub" then
-- If there are multiple terms, break the loop instead.
if i > 1 then
remove(output)
break
elseif NAMESPACE ~= "Templat" then
insert(cats, lang:getCode() .. ":Permintaan istilah")
end
link = "<small>[Term?]</small>"
end
end
insert(output, link)
if i < #terms then insert(output, "<span class=\"Zsym mention\" style=\"font-size:100%;\"> / </span>") end
end
-- When suppress_tr is true, do not show or generate any transliteration
if data.suppress_tr then
data.tr[1] = nil
else
-- TODO: Currently only handles the first transliteration, pending consensus on how to handle multiple translits for multiple forms, as this is not always desirable (e.g. traditional/simplified Chinese).
if data.tr[1] == "" or data.tr[1] == "-" then
data.tr[1] = nil
else
local phonetic_extraction = load_data("Module:links/data").phonetic_extraction
phonetic_extraction = phonetic_extraction[lang:getCode()] or phonetic_extraction[lang:getFullCode()]
if phonetic_extraction then
data.tr[1] = data.tr[1] or
require(phonetic_extraction).getTranslit(export.remove_links(data.alt[1] or data.term[1]))
elseif (data.term[1] or data.alt[1]) and data.sc[1]:isTransliterated() then
-- Track whenever there is manual translit. The categories below like 'terms with redundant transliterations'
-- aren't sufficient because they only work with reference to automatic translit and won't operate at all in
-- languages without any automatic translit, like Persian and Hebrew.
if data.tr[1] then
local full_code = lang:getFullCode()
track("manual-tr", full_code)
end
if not nevercalltr then
-- Try to generate a transliteration.
local text = data.alt[1] or data.term[1]
if not link_tr then
text = export.remove_links(text, true)
end
local automated_tr = lang:transliterate(text, data.sc[1])
if automated_tr then
local manual_tr = data.tr[1]
if manual_tr then
if export.remove_links(manual_tr) == export.remove_links(automated_tr) then
insert(cats, lang:getCode() .. ":Istilah dengan alih aksara lewah")
else
-- Prevents Arabic root categories from flooding the tracking categories.
if NAMESPACE ~= "Kategori" then
insert(cats,
lang:getCode() .. ":Istilah dengan alih aksara manual tak lewah")
end
end
end
if not manual_tr or lang:overrideManualTranslit(data.sc[1]) then
data.tr[1] = automated_tr
end
end
end
end
end
end
-- Link to the transliteration entry for languages that require this
if data.tr[1] and link_tr and not data.tr[1]:match("%[%[(.-)%]%]") then
data.tr[1] = simple_link(
data.tr[1],
nil,
nil,
lang,
get_script("Latn"),
nil,
cats,
no_alt_ast,
srwc
)
elseif data.tr[1] and not link_tr then
-- Remove the pseudo-HTML tags added by remove_links.
data.tr[1] = data.tr[1]:gsub("</?link>", "")
end
if data.tr[1] and not umatch(data.tr[1], "[^%s%p]") then data.tr[1] = nil end
insert(output, export.format_link_annotations(data, face))
if data.pretext then
insert(output, 1, data.pretext)
end
if data.posttext then
insert(output, data.posttext)
end
local categories = cats[1] and format_categories(cats, lang, "-", nil, nil, data.sc) or ""
output = concat(output)
if show_qualifiers or data.show_qualifiers then
output = add_qualifiers_and_refs_to_term(data, output)
end
return output .. categories
end
--[==[Replaces all wikilinks with their displayed text, and removes any categories. This function can be invoked either from a template or from another module.
-- Strips links: deletes category links, the targets of piped links, and any double square brackets involved in links (other than file links, which are untouched). If `tag` is set, then any links removed will be given pseudo-HTML tags, which allow the substitution functions in [[Module:languages]] to properly subdivide the text in order to reduce the chance of substitution failures in modules which scrape pages like [[Module:zh-translit]].
-- FIXME: This is quite hacky. We probably want this to be integrated into [[Module:languages]], but we can't do that until we know that nothing is pushing pipe linked transliterations through it for languages which don't have link_tr set.
* <code><nowiki>[[page|displayed text]]</nowiki></code> → <code><nowiki>displayed text</nowiki></code>
* <code><nowiki>[[page and displayed text]]</nowiki></code> → <code><nowiki>page and displayed text</nowiki></code>
* <code><nowiki>[[Kategori:id:Lema|WORD]]</nowiki></code> → ''(nothing)'']==]
function export.remove_links(text, tag)
if type(text) == "table" then
text = text.args[1]
end
if not text or text == "" then
return ""
end
text = text
:gsub("%[%[", "\1")
:gsub("%]%]", "\2")
-- Parse internal links for the display text.
text = text:gsub("(\1)([^\1\2]-)(\2)",
function(c1, c2, c3)
-- Don't remove files.
for _, false_positive in ipairs({ "file", "image" }) do
if c2:lower():match("^" .. false_positive .. ":") then return c1 .. c2 .. c3 end
end
-- Remove categories completely.
for _, false_positive in ipairs({ "category", "cat" }) do
if c2:lower():match("^" .. false_positive .. ":") then return "" end
end
-- In piped links, remove all text before the pipe, unless it's the final character (i.e. the pipe trick), in which case just remove the pipe.
c2 = c2:match("^[^|]*|(.+)") or c2:match("([^|]+)|$") or c2
if tag then
return "<link>" .. c2 .. "</link>"
else
return c2
end
end)
text = text
:gsub("\1", "[[")
:gsub("\2", "]]")
return text
end
function export.section_link(link)
if type(link) ~= "string" then
error("The first argument to section_link was a " .. type(link) .. ", but it should be a string.")
elseif link:find("\\", nil, true) then
track("escaped", "section_link")
end
local target, section = get_fragment((link:gsub("_", " ")))
if not section then
error("No \"#\" delineating a section name")
end
return simple_link(
target,
section,
target .. " § " .. section
)
end
return export
eoxg2nqf378ifvy2vd9s9wh15y2r7b7
Modul:languages
828
200722
1349277
1348993
2026-04-10T18:45:19Z
Swarabakti
18192
1349277
Scribunto
text/plain
--[==[ intro:
This module implements fetching of language-specific information and processing text in a given language.
===Types of languages===
There are two types of languages: full languages and etymology-only languages. The essential difference is that only
full languages appear in L2 headings in vocabulary entries, and hence categories like [[:Category:French nouns]] exist
only for full languages. Etymology-only languages have either a full language or another etymology-only language as
their parent (in the parent-child inheritance sense), and for etymology-only languages with another etymology-only
language as their parent, a full language can always be derived by following the parent links upwards. For example,
"Canadian French", code `fr-CA`, is an etymology-only language whose parent is the full language "French", code `fr`.
An example of an etymology-only language with another etymology-only parent is "Northumbrian Old English", code
`ang-nor`, which has "Anglian Old English", code `ang-ang` as its parent; this is an etymology-only language whose
parent is "Old English", code `ang`, which is a full language. (This is because Northumbrian Old English is considered
a variety of Anglian Old English.) Sometimes the parent is the "Undetermined" language, code `und`; this is the case,
for example, for "substrate" languages such as "Pre-Greek", code `qsb-grc`, and "the BMAC substrate", code `qsb-bma`.
It is important to distinguish language ''parents'' from language ''ancestors''. The parent-child relationship is one
of containment, i.e. if X is a child of Y, X is considered a variety of Y. On the other hand, the ancestor-descendant
relationship is one of descent in time. For example, "Classical Latin", code `la-cla`, and "Late Latin", code `la-lat`,
are both etymology-only languages with "Latin", code `la`, as their parents, because both of the former are varieties
of Latin. However, Late Latin does *NOT* have Classical Latin as its parent because Late Latin is *not* a variety of
Classical Latin; rather, it is a descendant. There is in fact a separate `ancestors` field that is used to express the
ancestor-descendant relationship, and Late Latin's ancestor is given as Classical Latin. It is also important to note
that sometimes an etymology-only language is actually the conceptual ancestor of its parent language. This happens,
for example, with "Old Italian" (code `roa-oit`), which is an etymology-only variant of full language "Italian" (code
`it`), and with "Old Latin" (code `itc-ola`), which is an etymology-only variant of Latin. In both cases, the full
language has the etymology-only variant listed as an ancestor. This allows a Latin term to inherit from Old Latin
using the {{tl|inh}} template (where in this template, "inheritance" refers to ancestral inheritance, i.e. inheritance
in time, rather than in the parent-child sense); likewise for Italian and Old Italian.
Full languages come in three subtypes:
* {regular}: This indicates a full language that is attested according to [[WT:CFI]] and therefore permitted in the
main namespace. There may also be reconstructed terms for the language, which are placed in the
{Reconstruction} namespace and must be prefixed with * to indicate a reconstruction. Most full languages
are natural (not constructed) languages, but a few constructed languages (e.g. Esperanto and Volapük,
among others) are also allowed in the mainspace and considered regular languages.
* {reconstructed}: This language is not attested according to [[WT:CFI]], and therefore is allowed only in the
{Reconstruction} namespace. All terms in this language are reconstructed, and must be prefixed with
*. Languages such as Proto-Indo-European and Proto-Germanic are in this category.
* {appendix-constructed}: This language is attested but does not meet the additional requirements set out for
constructed languages ([[WT:CFI#Constructed languages]]). Its entries must therefore be in
the Appendix namespace, but they are not reconstructed and therefore should not have *
prefixed in links. Most constructed languages are of this subtype.
Both full languages and etymology-only languages have a {Language} object associated with them, which is fetched using
the {getByCode} function in [[Module:languages]] to convert a language code to a {Language} object. Depending on the
options supplied to this function, etymology-only languages may or may not be accepted, and family codes may be
accepted (returning a {Family} object as described in [[Module:families]]). There are also separate {getByCanonicalName}
functions in [[Module:languages]] and [[Module:etymology languages]] to convert a language's canonical name to a
{Language} object (depending on whether the canonical name refers to a full or etymology-only language).
===Textual representations===
Textual strings belonging to a given language come in several different ''text variants'':
# The ''input text'' is what the user supplies in wikitext, in the parameters to {{tl|m}}, {{tl|l}}, {{tl|ux}},
{{tl|t}}, {{tl|lang}} and the like.
# The ''corrected input text'' is the input text with some corrections and/or normalizations applied, such as
bad-character replacements for certain languages, like replacing `l` or `1` to [[palochka]] in some languages written
in Cyrillic. (FIXME: This currently goes under the name ''display text'' but that will be repurposed below. Also,
[[User:Surjection]] suggests renaming this to ''normalized input text'', but "normalized" is used in a different sense
in [[Module:usex]].)
# The ''display text'' is the text in the form as it will be displayed to the user. This is what appears in headwords,
in usexes, in displayed internal links, etc. This can include accent marks that are removed to form the stripped
display text (see below), as well as embedded bracketed links that are variously processed further. The display text
is generated from the corrected input text by applying language-specific transformations; for most languages, there
will be no such transformations. The general reason for having a difference between input and display text is to allow
for extra information in the input text that is not displayed to the user but is sent to the transliteration module.
Note that having different display and input text is only supported currently through special-casing but will be
generalized. Examples of transformations are: (1) Removing the {{cd|^}} that is used in certain East Asian (and
possibly other unicameral) languages to indicate capitalization of the transliteration (which is currently
special-cased); (2) for Korean, removing or otherwise processing hyphens (which is currently special-cased); (3) for
Arabic, removing a ''sukūn'' diacritic placed over a ''tāʔ marbūṭa'' (like this: ةْ) to indicate that the
''tāʔ marbūṭa'' is pronounced and transliterated as /t/ instead of being silent [NOTE, NOT IMPLEMENTED YET]; (4) for
Thai and Khmer, converting space-separated words to bracketed words and resolving respelling substitutions such as
`[กรีน/กฺรีน]`, which indicate how to transliterate given words [NOTE, NOT IMPLEMENTED YET except in language-specific
templates like {{tl|th-usex}}].
## The ''right-resolved display text'' is the result of removing brackets around one-part embedded links and resolving
two-part embedded links into their right-hand components (i.e. converting two-part links into the displayed form).
The process of right-resolution is what happens when you call {{cd|remove_links()}} in [[Module:links]] on some text.
When applied to the display text, it produces exactly what the user sees, without any link markup.
# The ''stripped display text'' is the result of applying diacritic-stripping to the display text.
## The ''left-resolved stripped display text'' [NEED BETTER NAME] is the result of applying left-resolution to the
stripped display text, i.e. similar to right-resolution but resolving two-part embedded links into their left-hand
components (i.e. the linked-to page). If the display text refers to a single page, the resulting of applying
diacritic stripping and left-resolution produces the ''logical pagename''.
# The ''physical pagename text'' is the result of converting the stripped display text into physical page links. If the
stripped display text contains embedded links, the left side of those links is converted into physical page links;
otherwise, the entire text is considered a pagename and converted in the same fashion. The conversion does three
things: (1) converts characters not allowed in pagenames into their "unsupported title" representation, e.g.
{{cd|Unsupported titles/`gt`}} in place of the logical name {{cd|>}}; (2) handles certain special-cased
unsupported-title logical pagenames, such as {{cd|Unsupported titles/Space}} in place of {{cd|[space]}} and
{{cd|Unsupported titles/Ancient Greek dish}} in place of a very long Greek name for a gourmet dish as found in
Aristophanes; (3) converts "mammoth" pagenames such as [[a]] into their appropriate split component, e.g.
[[a/languages A to L]].
# The ''source translit text'' is the text as supplied to the language-specific {{cd|transliterate()}} method. The form
of the source translit text may need to be language-specific, e.g Thai and Khmer will need the corrected input text,
whereas other languages may need to work off the display text. [FIXME: It's still unclear to me how embedded bracketed
links are handled in the existing code.] In general, embedded links need to be right-resolved (see above), but when
this happens is unclear to me [FIXME]. Some languages have a chop-up-and-paste-together scheme that sends parts of the
text through the transliterate mechanism, and for others (those listed with "cont" in {{cd|substitution}} in
[[Module:languages/data]]) they receive the full input text, but preprocessed in certain ways. (The wisdom of this is
still unclear to me.)
# The ''transliterated text'' (or ''transliteration'') is the result of transliterating the source translit text. Unlike
for all the other text variants except the transcribed text, it is always in the Latin script.
# The ''transcribed text'' (or ''transcription'') is the result of transcribing the source translit text, where
"transcription" here means a close approximation to the phonetic form of the language in languages (e.g. Akkadian,
Sumerian, Ancient Egyptian, maybe Tibetan) that have a wide difference between the written letters and spoken form.
Unlike for all the other text variants other than the transliterated text, it is always in the Latin script.
Currently, the transcribed text is always supplied manually be the user; there is no such thing as a
{{cd|transcribe()}} method on language objects.
# The ''sort key'' is the text used in sort keys for determining the placing of pages in categories they belong to. The
sort key is generated from the pagename or a specified ''sort base'' by lowercasing, doing language-specific
transformations and then uppercasing the result. If the sort base is supplied and is generated from input text, it
needs to be converted to display text, have embedded links removed through right-resolution and have
diacritic-stripping applied.
# There are other text variants that occur in usexes (specifically, there are normalized variants of several of the
above text variants), but we can skip them for now.
The following methods exist on {Language} objects to convert between different text variants:
# {correctInputText} (currently called {makeDisplayText}): This converts input text to corrected input text.
# {stripDiacritics}: This converts to stripped display text. [FIXME: This needs some rethinking. In particular,
{stripDiacritics} is sometimes called on input text, corrected input text or display text (in various paths inside of
[[Module:links]], and, in the case of input text, usually from other modules). We need to make sure we don't try to
convert input text to display text twice, but at the same time we need to support calling it directly on input text
since so many modules do this. This means we need to add a parameter indicating whether the passed-in text is input,
corrected input, or display text; if the former two, we call {correctInputText} ourselves.]
# {logicalToPhysical}: This converts logical pagenames to physical pagenames.
# {transliterate}: This appears to convert input text with embedded brackets removed into a transliteration.
[FIXME: This needs some rethinking. In particular, it calls {processDisplayText} on its input, which won't work
for Thai and Khmer, so we may need language-specific flags indicating whether to pass the input text directly to the
language transliterate method. In addition, I'm not sure how embedded links are handled in the existing translit code;
a lot of callers remove the links themselves before calling {transliterate()}, which I assume is wrong.]
# {makeSortKey}: This converts display text (?) to a sort key. [FIXME: Clarify this.]
]==]
local export = {}
local debug_track_module = "Module:debug/track"
local etymology_languages_data_module = "Module:etymology languages/data"
local families_module = "Module:families"
local headword_page_module = "Module:headword/page"
local json_module = "Module:JSON"
local language_like_module = "Module:language-like"
local languages_data_module = "Module:languages/data"
local languages_data_patterns_module = "Module:languages/data/patterns"
local links_data_module = "Module:links/data"
local load_module = "Module:load"
local scripts_module = "Module:scripts"
local scripts_data_module = "Module:scripts/data"
local string_encode_entities_module = "Module:string/encode entities"
local string_pattern_escape_module = "Module:string/patternEscape"
local string_replacement_escape_module = "Module:string/replacementEscape"
local string_utilities_module = "Module:string utilities"
local table_module = "Module:table"
local utilities_module = "Module:utilities"
local wikimedia_languages_module = "Module:wikimedia languages"
local mw = mw
local string = string
local table = table
local char = string.char
local concat = table.concat
local find = string.find
local floor = math.floor
local get_by_code -- Defined below.
local get_data_module_name -- Defined below.
local get_extra_data_module_name -- Defined below.
local getmetatable = getmetatable
local gmatch = string.gmatch
local gsub = string.gsub
local insert = table.insert
local ipairs = ipairs
local is_known_language_tag = mw.language.isKnownLanguageTag
local make_object -- Defined below.
local match = string.match
local next = next
local pairs = pairs
local remove = table.remove
local require = require
local select = select
local setmetatable = setmetatable
local sub = string.sub
local type = type
local unstrip = mw.text.unstrip
-- Loaded as needed by findBestScript.
local Hans_chars
local Hant_chars
local function check_object(...)
check_object = require(utilities_module).check_object
return check_object(...)
end
local function debug_track(...)
debug_track = require(debug_track_module)
return debug_track(...)
end
local function decode_entities(...)
decode_entities = require(string_utilities_module).decode_entities
return decode_entities(...)
end
local function decode_uri(...)
decode_uri = require(string_utilities_module).decode_uri
return decode_uri(...)
end
local function deep_copy(...)
deep_copy = require(table_module).deepCopy
return deep_copy(...)
end
local function encode_entities(...)
encode_entities = require(string_encode_entities_module)
return encode_entities(...)
end
local function get_L2_sort_key(...)
get_L2_sort_key = require(headword_page_module).get_L2_sort_key
return get_L2_sort_key(...)
end
local function get_script(...)
get_script = require(scripts_module).getByCode
return get_script(...)
end
local function find_best_script_without_lang(...)
find_best_script_without_lang = require(scripts_module).findBestScriptWithoutLang
return find_best_script_without_lang(...)
end
local function get_family(...)
get_family = require(families_module).getByCode
return get_family(...)
end
local function get_plaintext(...)
get_plaintext = require(utilities_module).get_plaintext
return get_plaintext(...)
end
local function get_wikimedia_lang(...)
get_wikimedia_lang = require(wikimedia_languages_module).getByCode
return get_wikimedia_lang(...)
end
local function keys_to_list(...)
keys_to_list = require(table_module).keysToList
return keys_to_list(...)
end
local function list_to_set(...)
list_to_set = require(table_module).listToSet
return list_to_set(...)
end
local function load_data(...)
load_data = require(load_module).load_data
return load_data(...)
end
local function make_family_object(...)
make_family_object = require(families_module).makeObject
return make_family_object(...)
end
local function pattern_escape(...)
pattern_escape = require(string_pattern_escape_module)
return pattern_escape(...)
end
local function replacement_escape(...)
replacement_escape = require(string_replacement_escape_module)
return replacement_escape(...)
end
local function safe_require(...)
safe_require = require(load_module).safe_require
return safe_require(...)
end
local function shallow_copy(...)
shallow_copy = require(table_module).shallowCopy
return shallow_copy(...)
end
local function split(...)
split = require(string_utilities_module).split
return split(...)
end
local function to_json(...)
to_json = require(json_module).toJSON
return to_json(...)
end
local function u(...)
u = require(string_utilities_module).char
return u(...)
end
local function ugsub(...)
ugsub = require(string_utilities_module).gsub
return ugsub(...)
end
local function ulen(...)
ulen = require(string_utilities_module).len
return ulen(...)
end
local function ulower(...)
ulower = require(string_utilities_module).lower
return ulower(...)
end
local function umatch(...)
umatch = require(string_utilities_module).match
return umatch(...)
end
local function uupper(...)
uupper = require(string_utilities_module).upper
return uupper(...)
end
local function track(page)
debug_track("languages/" .. page)
return true
end
local function normalize_code(code)
return load_data(languages_data_module).aliases[code] or code
end
local function check_inputs(self, check, default, ...)
local n = select("#", ...)
if n == 0 then
return false
end
local ret = check(self, (...))
if ret ~= nil then
return ret
elseif n > 1 then
local inputs = {...}
for i = 2, n do
ret = check(self, inputs[i])
if ret ~= nil then
return ret
end
end
end
return default
end
local function make_link(self, target, display)
local prefix, main
if self:getFamilyCode() == "qfa-sub" then
prefix, main = display:match("^(the )(.*)")
if not prefix then
prefix, main = display:match("^(a )(.*)")
end
end
return (prefix or "") .. "[[" .. target .. "|" .. (main or display) .. "]]"
end
-- Convert risky characters to HTML entities, which minimizes interference once returned (e.g. for "sms:a", "<!-- -->" etc.).
local function escape_risky_characters(text)
-- Spacing characters in isolation generally need to be escaped in order to be properly processed by the MediaWiki software.
if umatch(text, "^%s*$") then
return encode_entities(text, text)
end
return encode_entities(text, "!#%&*+/:;<=>?@[\\]_{|}")
end
-- Temporarily convert various formatting characters to PUA to prevent them from being disrupted by the substitution process.
local function doTempSubstitutions(text, subbedChars, keepCarets, noTrim)
-- Clone so that we don't insert any extra patterns into the table in package.loaded. For some reason, using require seems to keep memory use down; probably because the table is always cloned.
local patterns = shallow_copy(require(languages_data_patterns_module))
if keepCarets then
insert(patterns, "((\\+)%^)")
insert(patterns, "((%^))")
end
-- Ensure any whitespace at the beginning and end is temp substituted, to prevent it from being accidentally trimmed. We only want to trim any final spaces added during the substitution process (e.g. by a module), which means we only do this during the first round of temp substitutions.
if not noTrim then
insert(patterns, "^([\128-\191\244]*(%s+))")
insert(patterns, "((%s+)[\128-\191\244]*)$")
end
-- Pre-substitution, of "[[" and "]]", which makes pattern matching more accurate.
text = gsub(text, "%f[%[]%[%[", "\1"):gsub("%f[%]]%]%]", "\2")
local i = #subbedChars
for _, pattern in ipairs(patterns) do
-- Patterns ending in \0 stand are for things like "[[" or "]]"), so the inserted PUA are treated as breaks between terms by modules that scrape info from pages.
local term_divider
pattern = gsub(pattern, "%z$", function(divider)
term_divider = divider == "\0"
return ""
end)
text = gsub(text, pattern, function(...)
local m = {...}
local m1New = m[1]
for k = 2, #m do
local n = i + k - 1
subbedChars[n] = m[k]
local byte2 = floor(n / 4096) % 64 + (term_divider and 128 or 136)
local byte3 = floor(n / 64) % 64 + 128
local byte4 = n % 64 + 128
m1New = gsub(m1New, pattern_escape(m[k]), "\244" .. char(byte2) .. char(byte3) .. char(byte4), 1)
end
i = i + #m - 1
return m1New
end)
end
text = gsub(text, "\1", "%[%["):gsub("\2", "%]%]")
return text, subbedChars
end
-- Reinsert any formatting that was temporarily substituted.
local function undoTempSubstitutions(text, subbedChars)
for i = 1, #subbedChars do
local byte2 = floor(i / 4096) % 64 + 128
local byte3 = floor(i / 64) % 64 + 128
local byte4 = i % 64 + 128
text = gsub(text, "\244[" .. char(byte2) .. char(byte2+8) .. "]" .. char(byte3) .. char(byte4),
replacement_escape(subbedChars[i]))
end
text = gsub(text, "\1", "%[%["):gsub("\2", "%]%]")
return text
end
-- Check if the raw text is an unsupported title, and if so return that. Otherwise, remove HTML entities. We do the pre-conversion to avoid loading the unsupported title list unnecessarily.
local function checkNoEntities(self, text)
local textNoEnc = decode_entities(text)
if textNoEnc ~= text and load_data(links_data_module).unsupported_titles[text] then
return text
else
return textNoEnc
end
end
-- If no script object is provided (or if it's invalid or None), get one.
local function checkScript(text, self, sc)
if not check_object("script", true, sc) or sc:getCode() == "None" then
return self:findBestScript(text)
end
return sc
end
local function normalize(text, sc)
text = sc:fixDiscouragedSequences(text)
return sc:toFixedNFD(text)
end
-- Subfunction of iterateSectionSubstitutions(). Process an individual chunk of text according to the specifications in
-- `substitution_data`. The input parameters are all as in the documentation of iterateSectionSubstitutions() except for
-- `recursed`, which is set to true if we called ourselves recursively to process a script-specific setting or
-- script-wide fallback. Returns two values: the processed text and the actual substitution data used to do the
-- substitutions (same as the `actual_substitution_data` return value to iterateSectionSubstitutions()).
local function doSubstitutions(self, text, sc, substitution_data, data_field, function_name, recursed)
-- BE CAREFUL in this function because the value at any level can be `false`, which causes no processing to be done
-- and blocks any further fallback processing.
local actual_substitution_data = substitution_data
-- If there are language-specific substitutes given in the data module, use those.
if type(substitution_data) == "table" then
-- If a script is specified, run this function with the script-specific data before continuing.
local sc_code = sc:getCode()
local has_substitution_data = false
if substitution_data[sc_code] ~= nil then
has_substitution_data = true
if substitution_data[sc_code] then
text, actual_substitution_data = doSubstitutions(self, text, sc, substitution_data[sc_code], data_field,
function_name, true)
end
-- Hant, Hans and Hani are usually treated the same, so add a special case to avoid having to specify each one
-- separately.
elseif sc_code:match("^Han") and substitution_data.Hani ~= nil then
has_substitution_data = true
if substitution_data.Hani then
text, actual_substitution_data = doSubstitutions(self, text, sc, substitution_data.Hani, data_field,
function_name, true)
end
-- Substitution data with key 1 in the outer table may be given as a fallback.
elseif substitution_data[1] ~= nil then
has_substitution_data = true
if substitution_data[1] then
text, actual_substitution_data = doSubstitutions(self, text, sc, substitution_data[1], data_field,
function_name, true)
end
end
-- Iterate over all strings in the "from" subtable, and gsub with the corresponding string in "to". We work with
-- the NFD decomposed forms, as this simplifies many substitutions.
if substitution_data.from then
has_substitution_data = true
for i, from in ipairs(substitution_data.from) do
-- Normalize each loop, to ensure multi-stage substitutions work correctly.
text = sc:toFixedNFD(text)
text = ugsub(text, sc:toFixedNFD(from), substitution_data.to[i] or "")
end
end
if substitution_data.remove_diacritics then
has_substitution_data = true
text = sc:toFixedNFD(text)
-- Convert exceptions to PUA.
local remove_exceptions, substitutes = substitution_data.remove_exceptions
if remove_exceptions then
substitutes = {}
local i = 0
for _, exception in ipairs(remove_exceptions) do
exception = sc:toFixedNFD(exception)
text = ugsub(text, exception, function(m)
i = i + 1
local subst = u(0x80000 + i)
substitutes[subst] = m
return subst
end)
end
end
-- Strip diacritics.
text = ugsub(text, "[" .. substitution_data.remove_diacritics .. "]", "")
-- Convert exceptions back.
if remove_exceptions then
text = text:gsub("\242[\128-\191]*", substitutes)
end
end
if not has_substitution_data and sc._data[data_field] then
-- If language-specific sort key (etc.) is nil, fall back to script-wide sort key (etc.).
text, actual_substitution_data = doSubstitutions(self, text, sc, sc._data[data_field], data_field,
function_name, true)
end
elseif type(substitution_data) == "string" then
-- If there is a dedicated function module, use that.
local module = safe_require("Module:" .. substitution_data)
if module then
-- TODO: translit functions should take objects, not codes.
-- TODO: translit functions should be called with form NFD.
if function_name == "tr" then
if not module[function_name] then
error(("Internal error: Module [[%s]] has no function named 'tr'"):format(substitution_data))
end
text = module[function_name](text, self._code, sc:getCode())
elseif function_name == "stripDiacritics" then
-- FIXME, get rid of this arm after renaming makeEntryName -> stripDiacritics.
if module[function_name] then
text = module[function_name](sc:toFixedNFD(text), self, sc)
elseif module.makeEntryName then
text = module.makeEntryName(sc:toFixedNFD(text), self, sc)
else
error(("Internal error: Module [[%s]] has no function named 'stripDiacritics' or 'makeEntryName'"
):format(substitution_data))
end
else
if not module[function_name] then
error(("Internal error: Module [[%s]] has no function named '%s'"):format(
substitution_data, function_name))
end
text = module[function_name](sc:toFixedNFD(text), self, sc)
end
else
error("Substitution data '" .. substitution_data .. "' does not match an existing module.")
end
elseif substitution_data == nil and sc._data[data_field] then
-- If language-specific sort key (etc.) is nil, fall back to script-wide sort key (etc.).
text, actual_substitution_data = doSubstitutions(self, text, sc, sc._data[data_field], data_field,
function_name, true)
end
-- Don't normalize to NFC if this is the inner loop or if a module returned nil.
if recursed or not text then
return text, actual_substitution_data
end
-- Fix any discouraged sequences created during the substitution process, and normalize into the final form.
return sc:toFixedNFC(sc:fixDiscouragedSequences(text)), actual_substitution_data
end
-- Split the text into sections, based on the presence of temporarily substituted formatting characters, then iterate
-- over each section to apply substitutions (e.g. transliteration or diacritic stripping). This avoids putting PUA
-- characters through language-specific modules, which may be unequipped for them. This function is passed the following
-- values:
-- * `self` (the Language object);
-- * `text` (the text to process);
-- * `sc` (the script of the text, which must be specified; callers should call checkScript() as needed to autodetect the
-- script of the text if not given explicitly by the user);
-- * `subbedChars` (an array of the same length as the text, indicating which characters have been substituted and by
-- what, or {nil} if no substitutions are to happen);
-- * `keepCarets` (DOCUMENT ME);
-- * `substitution_data` (the data indicating which substitutions to apply, taken directly from `data_field` in the
-- language's data structure in a submodule of [[Module:languages/data]]);
-- * `data_field` (the data field from which `substitution_data` was fetched, such as "sort_key" or "strip_diacritics");
-- * `function_name` (the name of the function to call to do the substitution, in case `substitution_data` specifies a
-- module to do the substitution);
-- * `notrim` (don't trim whitespace at the edges of `text`; set when computing the sort key, because whitespace at the
-- beginning of a sort key is significant and causes the resulting page to be sorted at the beginning of the category
-- it's in).
-- Returns three values:
-- (1) the processed text;
-- (2) the value of `subbedChars` that was passed in, possibly modified with additional character substitutions; will be
-- {nil} if {nil} was passed in;
-- (3) the actual substitution data that was used to apply substitutions to `text`; this may be different from the value
-- of `substitution_data` passed in if that value recursively specified script-specific substitutions or if no
-- substitution data could be found in the language-specific data (e.g. {nil} was passed in or a structure was passed
-- in that had no setting for the script given in `sc`), but a script-wide fallback value was set; currently it is
-- only used by makeSortKey().
local function iterateSectionSubstitutions(self, text, sc, subbedChars, keepCarets, substitution_data, data_field,
function_name, notrim)
local sections
-- See [[Module:languages/data]].
if not find(text, "\244") or load_data(languages_data_module).substitution[self._code] == "cont" then
sections = {text}
else
sections = split(text, "\244[\128-\143][\128-\191]*", true)
end
local actual_substitution_data
for _, section in ipairs(sections) do
-- Don't bother processing empty strings or whitespace (which may also not be handled well by dedicated
-- modules).
if gsub(section, "%s+", "") ~= "" then
local sub, this_actual_substitution_data = doSubstitutions(self, section, sc, substitution_data, data_field,
function_name)
actual_substitution_data = this_actual_substitution_data
-- Second round of temporary substitutions, in case any formatting was added by the main substitution
-- process. However, don't do this if the section contains formatting already (as it would have had to have
-- been escaped to reach this stage, and therefore should be given as raw text).
if sub and subbedChars then
local noSub
for _, pattern in ipairs(require(languages_data_patterns_module)) do
if match(section, pattern .. "%z?") then
noSub = true
end
end
if not noSub then
sub, subbedChars = doTempSubstitutions(sub, subbedChars, keepCarets, true)
end
end
if not sub then
text = sub
break
end
text = sub and gsub(text, pattern_escape(section), replacement_escape(sub), 1) or text
end
end
if not notrim then
-- Trim, unless there are only spacing characters, while ignoring any final formatting characters.
-- Do not trim sort keys because spaces at the beginning are significant.
text = text and text:gsub("^([\128-\191\244]*)%s+(%S)", "%1%2"):gsub("(%S)%s+([\128-\191\244]*)$", "%1%2") or
nil
end
return text, subbedChars, actual_substitution_data
end
-- Process carets (and any escapes). Default to simple removal, if no pattern/replacement is given.
local function processCarets(text, pattern, repl)
local rep
repeat
text, rep = gsub(text, "\\\\(\\*^)", "\3%1")
until rep == 0
return (text:gsub("\\^", "\4")
:gsub(pattern or "%^", repl or "")
:gsub("\3", "\\")
:gsub("\4", "^"))
end
-- Remove carets if they are used to capitalize parts of transliterations (unless they have been escaped).
local function removeCarets(text, sc)
if not sc:hasCapitalization() and sc:isTransliterated() and text:find("^", 1, true) then
return processCarets(text)
else
return text
end
end
local Language = {}
local Bahasa = require("Module:bahasa")
--[==[Returns the language code of the language. Example: {{code|lua|"fr"}} for French.]==]
function Language:getCode()
return Bahasa.getLangCodeByCode(self._code) or self._code
end
--[==[Returns the canonical name of the language. This is the name used to represent that language on Wiktionary, and is guaranteed to be unique to that language alone. Example: {{code|lua|"French"}} for French.]==]
function Language:getCanonicalName()
local canonical = Bahasa.getLangNameByCode(self._code)
if canonical then
self._name = canonical
return canonical
end
local name = self._name
if name == nil then
name = self._data[1]
self._name = name
end
return name
end
--lowercase version
function Language:getCanonicalNameLower()
local name = self:getCanonicalName()
local first = mw.ustring.sub(name, 1, 1)
local rest = mw.ustring.sub(name, 2)
return mw.ustring.lower(first) .. rest
end
--[==[
Return the display form of the language. The display form of a language, family or script is the form it takes when
appearing as the <code><var>source</var></code> in categories such as <code>English terms derived from
<var>source</var></code> or <code>English given names from <var>source</var></code>, and is also the displayed text
in {makeCategoryLink()} links. For full and etymology-only languages, this is the same as the canonical name, but
for families, it reads <code>"<var>name</var> languages"</code> (e.g. {"Indo-Iranian languages"}), and for scripts,
it reads <code>"<var>name</var> script"</code> (e.g. {"Arabic script"}).
]==]
function Language:getDisplayForm()
local form = self._displayForm
if form == nil then
form = self:getCanonicalNameLower()
-- Add article and " substrate" to substrates that lack them.
if self:getFamilyCode() == "qfa-sub" then
if not (sub(form, 1, 4) == "the " or sub(form, 1, 2) == "a ") then
form = "a " .. form
end
if not match(form, " [Ss]ubstrate") then
form = form .. " substrate"
end
end
self._displayForm = form
end
return form
end
--[==[Returns the value which should be used in the HTML lang= attribute for tagged text in the language.]==]
function Language:getHTMLAttribute(sc, region)
local code = self._code
if not find(code, "-", 1, true) then
return code .. "-" .. sc:getCode() .. (region and "-" .. region or "")
end
local parent = self:getParent()
region = region or match(code, "%f[%u][%u-]+%f[%U]")
if parent then
return parent:getHTMLAttribute(sc, region)
end
-- TODO: ISO family codes can also be used.
return "mis-" .. sc:getCode() .. (region and "-" .. region or "")
end
--[==[Returns a table of the aliases that the language is known by, excluding the canonical name. Aliases are synonyms for the language in question. The names are not guaranteed to be unique, in that sometimes more than one language is known by the same name. Example: {{code|lua|{"High German", "New High German", "Deutsch"} }} for [[:Category:German language|German]].]==]
function Language:getAliases()
self:loadInExtraData()
return require(language_like_module).getAliases(self)
end
--[==[
Return a table of the known subvarieties of a given language, excluding subvarieties that have been given
explicit etymology-only language codes. The names are not guaranteed to be unique, in that sometimes a given name
refers to a subvariety of more than one language. Example: {{code|lua|{"Southern Aymara", "Central Aymara"} }} for
[[:Category:Aymara language|Aymara]]. Note that the returned value can have nested tables in it, when a subvariety
goes by more than one name. Example: {{code|lua|{"North Azerbaijani", "South Azerbaijani", {"Afshar", "Afshari",
"Afshar Azerbaijani", "Afchar"}, {"Qashqa'i", "Qashqai", "Kashkay"}, "Sonqor"} }} for
[[:Category:Azerbaijani language|Azerbaijani]]. Here, for example, Afshar, Afshari, Afshar Azerbaijani and Afchar
all refer to the same subvariety, whose preferred name is Afshar (the one listed first). To avoid a return value
with nested tables in it, specify a non-{{code|lua|nil}} value for the <code>flatten</code> parameter; in that case,
the return value would be {{code|lua|{"North Azerbaijani", "South Azerbaijani", "Afshar", "Afshari",
"Afshar Azerbaijani", "Afchar", "Qashqa'i", "Qashqai", "Kashkay", "Sonqor"} }}.
]==]
function Language:getVarieties(flatten)
self:loadInExtraData()
return require(language_like_module).getVarieties(self, flatten)
end
--[==[Returns a table of the "other names" that the language is known by, which are listed in the <code>otherNames</code> field. It should be noted that the <code>otherNames</code> field itself is deprecated, and entries listed there should eventually be moved to either <code>aliases</code> or <code>varieties</code>.]==]
function Language:getOtherNames() -- To be eventually removed, once there are no more uses of the `otherNames` field.
self:loadInExtraData()
return require(language_like_module).getOtherNames(self)
end
--[==[
Return a combined table of the canonical name, aliases, varieties and other names of a given language.]==]
function Language:getAllNames()
self:loadInExtraData()
return require(language_like_module).getAllNames(self)
end
--[==[Returns a table of types as a lookup table (with the types as keys).
The possible types are
* {language}: This is a language, either full or etymology-only.
* {full}: This is a "full" (not etymology-only) language, i.e. the union of {regular}, {reconstructed} and
{appendix-constructed}. Note that the types {full} and {etymology-only} also exist for families, so if you
want to check specifically for a full language and you have an object that might be a family, you should
use {{lua|hasType("language", "full")}} and not simply {{lua|hasType("full")}}.
* {etymology-only}: This is an etymology-only (not full) language, whose parent is another etymology-only
language or a full language. Note that the types {full} and {etymology-only} also exist for
families, so if you want to check specifically for an etymology-only language and you have an
object that might be a family, you should use {{lua|hasType("language", "etymology-only")}}
and not simply {{lua|hasType("etymology-only")}}.
* {regular}: This indicates a full language that is attested according to [[WT:CFI]] and therefore permitted
in the main namespace. There may also be reconstructed terms for the language, which are placed in
the {Reconstruction} namespace and must be prefixed with * to indicate a reconstruction. Most full
languages are natural (not constructed) languages, but a few constructed languages (e.g. Esperanto
and Volapük, among others) are also allowed in the mainspace and considered regular languages.
* {reconstructed}: This language is not attested according to [[WT:CFI]], and therefore is allowed only in the
{Reconstruction} namespace. All terms in this language are reconstructed, and must be prefixed
with *. Languages such as Proto-Indo-European and Proto-Germanic are in this category.
* {appendix-constructed}: This language is attested but does not meet the additional requirements set out for
constructed languages ([[WT:CFI#Constructed languages]]). Its entries must therefore
be in the Appendix namespace, but they are not reconstructed and therefore should
not have * prefixed in links.
]==]
function Language:getTypes()
local types = self._types
if types == nil then
types = {language = true}
if self:getFullCode() == self._code then
types.full = true
else
types["etymology-only"] = true
end
for t in gmatch(self._data.type, "[^,]+") do
types[t] = true
end
self._types = types
end
return types
end
--[==[Given a list of types as strings, returns true if the language has all of them.]==]
function Language:hasType(...)
Language.hasType = require(language_like_module).hasType
return self:hasType(...)
end
--[==[Returns a table containing <code>WikimediaLanguage</code> objects (see [[Module:wikimedia languages]]), which represent languages and their codes as they are used in Wikimedia projects for interwiki linking and such. More than one object may be returned, as a single Wiktionary language may correspond to multiple Wikimedia languages. For example, Wiktionary's single code <code>sh</code> (Serbo-Croatian) maps to four Wikimedia codes: <code>sh</code> (Serbo-Croatian), <code>bs</code> (Bosnian), <code>hr</code> (Croatian) and <code>sr</code> (Serbian).
The code for the Wikimedia language is retrieved from the <code>wikimedia_codes</code> property in the data modules. If that property is not present, the code of the current language is used. If none of the available codes is actually a valid Wikimedia code, an empty table is returned.]==]
function Language:getWikimediaLanguages()
local wm_langs = self._wikimediaLanguageObjects
if wm_langs == nil then
local codes = self:getWikimediaLanguageCodes()
wm_langs = {}
for i = 1, #codes do
wm_langs[i] = get_wikimedia_lang(codes[i])
end
self._wikimediaLanguageObjects = wm_langs
end
return wm_langs
end
function Language:getWikimediaLanguageCodes()
local wm_langs = self._wikimediaLanguageCodes
if wm_langs == nil then
wm_langs = self._data.wikimedia_codes
if wm_langs then
wm_langs = split(wm_langs, ",", true, true)
else
local code = self._code
if is_known_language_tag(code) then
wm_langs = {code}
else
-- Inherit, but only if no codes are specified in the data *and*
-- the language code isn't a valid Wikimedia language code.
local parent = self:getParent()
wm_langs = parent and parent:getWikimediaLanguageCodes() or {}
end
end
self._wikimediaLanguageCodes = wm_langs
end
return wm_langs
end
--[==[
Returns the name of the Wikipedia article for the language. `project` specifies the language and project to retrieve
the article from, defaulting to {"enwiki"} for the English Wikipedia. Normally if specified it should be the project
code for a specific-language Wikipedia e.g. "zhwiki" for the Chinese Wikipedia, but it can be any project, including
non-Wikipedia ones. If the project is the English Wikipedia and the property {wikipedia_article} is present in the data
module it will be used first. In all other cases, a sitelink will be generated from {:getWikidataItem} (if set). The
resulting value (or lack of value) is cached so that subsequent calls are fast. If no value could be determined, and
`noCategoryFallback` is {false}, {:getCategoryName} is used as fallback; otherwise, {nil} is returned. Note that if
`noCategoryFallback` is {nil} or omitted, it defaults to {false} if the project is the English Wikipedia, otherwise
to {true}. In other words, under normal circumstances, if the English Wikipedia article couldn't be retrieved, the
return value will fall back to a link to the language's category, but this won't normally happen for any other project.
]==]
function Language:getWikipediaArticle(noCategoryFallback, project)
Language.getWikipediaArticle = require(language_like_module).getWikipediaArticle
return self:getWikipediaArticle(noCategoryFallback, project)
end
function Language:makeWikipediaLink()
return make_link(self, "w:id:" .. self:getWikipediaArticle(), self:getCanonicalNameLower())
end
--[==[Returns the name of the Wikimedia Commons category page for the language.]==]
function Language:getCommonsCategory()
Language.getCommonsCategory = require(language_like_module).getCommonsCategory
return self:getCommonsCategory()
end
--[==[Returns the Wikidata item id for the language or <code>nil</code>. This corresponds to the the second field in the data modules.]==]
function Language:getWikidataItem()
Language.getWikidataItem = require(language_like_module).getWikidataItem
return self:getWikidataItem()
end
--[==[Returns a table of <code>Script</code> objects for all scripts that the language is written in. See [[Module:scripts]].]==]
function Language:getScripts()
local scripts = self._scriptObjects
if scripts == nil then
local codes = self:getScriptCodes()
if codes[1] == "All" then
scripts = load_data(scripts_data_module)
else
scripts = {}
for i = 1, #codes do
scripts[i] = get_script(codes[i])
end
end
self._scriptObjects = scripts
end
return scripts
end
--[==[Returns the table of script codes in the language's data file.]==]
function Language:getScriptCodes()
local scripts = self._scriptCodes
if scripts == nil then
scripts = self._data[4]
if scripts then
local codes, n = {}, 0
for code in gmatch(scripts, "[^,]+") do
n = n + 1
-- Special handling of "Hants", which represents "Hani", "Hant" and "Hans" collectively.
if code == "Hants" then
codes[n] = "Hani"
codes[n + 1] = "Hant"
codes[n + 2] = "Hans"
n = n + 2
else
codes[n] = code
end
end
scripts = codes
else
scripts = {"None"}
end
self._scriptCodes = scripts
end
return scripts
end
--[==[Given some text, this function iterates through the scripts of a given language and tries to find the script that best matches the text. It returns a {{code|lua|Script}} object representing the script. If no match is found at all, it returns the {{code|lua|None}} script object.]==]
function Language:findBestScript(text, forceDetect)
if not text or text == "" or text == "-" then
return get_script("None")
end
-- Differs from table returned by getScriptCodes, as Hants is not normalized into its constituents.
local codes = self._bestScriptCodes
if codes == nil then
codes = self._data[4]
codes = codes and split(codes, ",", true, true) or {"None"}
self._bestScriptCodes = codes
end
local first_sc = codes[1]
if first_sc == "All" then
return find_best_script_without_lang(text)
end
local codes_len = #codes
if not (forceDetect or first_sc == "Hants" or codes_len > 1) then
first_sc = get_script(first_sc)
local charset = first_sc.characters
return charset and umatch(text, "[" .. charset .. "]") and first_sc or get_script("None")
end
-- Remove all formatting characters.
text = get_plaintext(text)
-- Remove all spaces and any ASCII punctuation. Some non-ASCII punctuation is script-specific, so can't be removed.
text = ugsub(text, "[%s!\"#%%&'()*,%-./:;?@[\\%]_{}]+", "")
if #text == 0 then
return get_script("None")
end
-- Try to match every script against the text,
-- and return the one with the most matching characters.
local bestcount, bestscript, length = 0
for i = 1, codes_len do
local sc = codes[i]
-- Special case for "Hants", which is a special code that represents whichever of "Hant" or "Hans" best matches, or "Hani" if they match equally. This avoids having to list all three. In addition, "Hants" will be treated as the best match if there is at least one matching character, under the assumption that a Han script is desirable in terms that contain a mix of Han and other scripts (not counting those which use Jpan or Kore).
if sc == "Hants" then
local Hani = get_script("Hani")
if not Hant_chars then
Hant_chars = load_data("Module:zh/data/ts")
Hans_chars = load_data("Module:zh/data/st")
end
local t, s, found = 0, 0
-- This is faster than using mw.ustring.gmatch directly.
for ch in gmatch((ugsub(text, "[" .. Hani.characters .. "]", "\255%0")), "\255(.[\128-\191]*)") do
found = true
if Hant_chars[ch] then
t = t + 1
if Hans_chars[ch] then
s = s + 1
end
elseif Hans_chars[ch] then
s = s + 1
else
t, s = t + 1, s + 1
end
end
if found then
if t == s then
return Hani
end
return get_script(t > s and "Hant" or "Hans")
end
else
sc = get_script(sc)
if not length then
length = ulen(text)
end
-- Count characters by removing everything in the script's charset and comparing to the original length.
local charset = sc.characters
local count = charset and length - ulen((ugsub(text, "[" .. charset .. "]+", ""))) or 0
if count >= length then
return sc
elseif count > bestcount then
bestcount = count
bestscript = sc
end
end
end
-- Return best matching script, or otherwise None.
return bestscript or get_script("None")
end
--[==[Returns a <code>Family</code> object for the language family that the language belongs to. See [[Module:families]].]==]
function Language:getFamily()
local family = self._familyObject
if family == nil then
family = self:getFamilyCode()
-- If the value is nil, it's cached as false.
family = family and get_family(family) or false
self._familyObject = family
end
return family or nil
end
--[==[Returns the family code in the language's data file.]==]
function Language:getFamilyCode()
local family = self._familyCode
if family == nil then
-- If the value is nil, it's cached as false.
family = self._data[3] or false
self._familyCode = family
end
return family or nil
end
function Language:getFamilyName()
local family = self._familyName
if family == nil then
family = self:getFamily()
-- If the value is nil, it's cached as false.
family = family and family:getCanonicalName() or false
self._familyName = family
end
return family or nil
end
do
local function check_family(self, family)
if type(family) == "table" then
family = family:getCode()
end
if self:getFamilyCode() == family then
return true
end
local self_family = self:getFamily()
if self_family:inFamily(family) then
return true
-- If the family isn't a real family (e.g. creoles) check any ancestors.
elseif self_family:inFamily("qfa-not") then
local ancestors = self:getAncestors()
for _, ancestor in ipairs(ancestors) do
if ancestor:inFamily(family) then
return true
end
end
end
end
--[==[Check whether the language belongs to `family` (which can be a family code or object). A list of objects can be given in place of `family`; in that case, return true if the language belongs to any of the specified families. Note that some languages (in particular, certain creoles) can have multiple immediate ancestors potentially belonging to different families; in that case, return true if the language belongs to any of the specified families.]==]
function Language:inFamily(...)
if self:getFamilyCode() == nil then
return false
end
return check_inputs(self, check_family, false, ...)
end
end
function Language:getParent()
local parent = self._parentObject
if parent == nil then
parent = self:getParentCode()
-- If the value is nil, it's cached as false.
parent = parent and get_by_code(parent, nil, true, true) or false
self._parentObject = parent
end
return parent or nil
end
function Language:getParentCode()
local parent = self._parentCode
if parent == nil then
-- If the value is nil, it's cached as false.
parent = self._data.parent or false
self._parentCode = parent
end
return parent or nil
end
function Language:getParentName()
local parent = self._parentName
if parent == nil then
parent = self:getParent()
-- If the value is nil, it's cached as false.
parent = parent and parent:getCanonicalName() or false
self._parentName = parent
end
return parent or nil
end
function Language:getParentChain()
local chain = self._parentChain
if chain == nil then
chain = {}
local parent, n = self:getParent(), 0
while parent do
n = n + 1
chain[n] = parent
parent = parent:getParent()
end
self._parentChain = chain
end
return chain
end
do
local function check_lang(self, lang)
for _, parent in ipairs(self:getParentChain()) do
if (type(lang) == "string" and lang or lang:getCode()) == parent:getCode() then
return true
end
end
end
function Language:hasParent(...)
return check_inputs(self, check_lang, false, ...)
end
end
--[==[
If the language is etymology-only, this iterates through parents until a full language or family is found, and the
corresponding object is returned. If the language is a full language, then it simply returns itself.
]==]
function Language:getFull()
local full = self._fullObject
if full == nil then
full = self:getFullCode()
full = full == self._code and self or get_by_code(full)
self._fullObject = full
end
return full
end
--[==[
If the language is an etymology-only language, this iterates through parents until a full language or family is
found, and the corresponding code is returned. If the language is a full language, then it simply returns the
language code.
]==]
function Language:getFullCode()
return self._fullCode or self._code
end
--[==[
If the language is an etymology-only language, this iterates through parents until a full language or family is
found, and the corresponding canonical name is returned. If the language is a full language, then it simply returns
the canonical name of the language.
]==]
function Language:getFullName()
local full = self._fullName
if full == nil then
full = self:getFull():getCanonicalName()
self._fullName = full
end
return full
end
--[==[Returns a table of <code class="nf">Language</code> objects for all languages that this language is directly descended from. Generally this is only a single language, but creoles, pidgins and mixed languages can have multiple ancestors.]==]
function Language:getAncestors()
local ancestors = self._ancestorObjects
if ancestors == nil then
ancestors = {}
local ancestor_codes = self:getAncestorCodes()
if #ancestor_codes > 0 then
for _, ancestor in ipairs(ancestor_codes) do
insert(ancestors, get_by_code(ancestor, nil, true))
end
else
local fam = self:getFamily()
local protoLang = fam and fam:getProtoLanguage() or nil
-- For the cases where the current language is the proto-language
-- of its family, or an etymology-only language that is ancestral to that
-- proto-language, we need to step up a level higher right from the
-- start.
if protoLang and (
protoLang:getCode() == self._code or
(self:hasType("etymology-only") and protoLang:hasAncestor(self))
) then
fam = fam:getFamily()
protoLang = fam and fam:getProtoLanguage() or nil
end
while not protoLang and not (not fam or fam:getCode() == "qfa-not") do
fam = fam:getFamily()
protoLang = fam and fam:getProtoLanguage() or nil
end
insert(ancestors, protoLang)
end
self._ancestorObjects = ancestors
end
return ancestors
end
do
-- Avoid a language being its own ancestor via class inheritance. We only need to check for this if the language has inherited an ancestor table from its parent, because we never want to drop ancestors that have been explicitly set in the data.
-- Recursively iterate over ancestors until we either find self or run out. If self is found, return true.
local function check_ancestor(self, lang)
local codes = lang:getAncestorCodes()
if not codes then
return nil
end
for i = 1, #codes do
local code = codes[i]
if code == self._code then
return true
end
local anc = get_by_code(code, nil, true)
if check_ancestor(self, anc) then
return true
end
end
end
--[==[Returns a table of <code class="nf">Language</code> codes for all languages that this language is directly descended from. Generally this is only a single language, but creoles, pidgins and mixed languages can have multiple ancestors.]==]
function Language:getAncestorCodes()
if self._ancestorCodes then
return self._ancestorCodes
end
local data = self._data
local codes = data.ancestors
if codes == nil then
codes = {}
self._ancestorCodes = codes
return codes
end
codes = split(codes, ",", true, true)
self._ancestorCodes = codes
-- If there are no codes or the ancestors weren't inherited data, there's nothing left to check.
if #codes == 0 or self:getData(false, "raw").ancestors ~= nil then
return codes
end
local i, code = 1
while i <= #codes do
code = codes[i]
if check_ancestor(self, self) then
remove(codes, i)
else
i = i + 1
end
end
return codes
end
end
--[==[Given a list of language objects or codes, returns true if at least one of them is an ancestor. This includes any etymology-only children of that ancestor. If the language's ancestor(s) are etymology-only languages, it will also return true for those language parent(s) (e.g. if Vulgar Latin is the ancestor, it will also return true for its parent, Latin). However, a parent is excluded from this if the ancestor is also ancestral to that parent (e.g. if Classical Persian is the ancestor, Persian would return false, because Classical Persian is also ancestral to Persian).]==]
function Language:hasAncestor(...)
local function iterateOverAncestorTree(node, func, parent_check)
local ancestors = node:getAncestors()
local ancestorsParents = {}
for _, ancestor in ipairs(ancestors) do
-- When checking the parents of the other language, and the ancestor is also a parent, skip to the next ancestor, so that we exclude any etymology-only children of that parent that are not directly related (see below).
local ret = (parent_check or not node:hasParent(ancestor)) and
func(ancestor) or iterateOverAncestorTree(ancestor, func, parent_check)
if ret then
return ret
end
end
-- Check the parents of any ancestors. We don't do this if checking the parents of the other language, so that we exclude any etymology-only children of those parents that are not directly related (e.g. if the ancestor is Vulgar Latin and we are checking New Latin, we want it to return false because they are on different ancestral branches. As such, if we're already checking the parent of New Latin (Latin) we don't want to compare it to the parent of the ancestor (Latin), as this would be a false positive; it should be one or the other).
if not parent_check then
return nil
end
for _, ancestor in ipairs(ancestors) do
local ancestorParents = ancestor:getParentChain()
for _, ancestorParent in ipairs(ancestorParents) do
if ancestorParent:getCode() == self._code or ancestorParent:hasAncestor(ancestor) then
break
else
insert(ancestorsParents, ancestorParent)
end
end
end
for _, ancestorParent in ipairs(ancestorsParents) do
local ret = func(ancestorParent)
if ret then
return ret
end
end
end
local function do_iteration(otherlang, parent_check)
-- otherlang can't be self
if (type(otherlang) == "string" and otherlang or otherlang:getCode()) == self._code then
return false
end
repeat
if iterateOverAncestorTree(
self,
function(ancestor)
return ancestor:getCode() == (type(otherlang) == "string" and otherlang or otherlang:getCode())
end,
parent_check
) then
return true
elseif type(otherlang) == "string" then
otherlang = get_by_code(otherlang, nil, true)
end
otherlang = otherlang:getParent()
parent_check = false
until not otherlang
end
local parent_check = true
for _, otherlang in ipairs{...} do
local ret = do_iteration(otherlang, parent_check)
if ret then
return true
end
end
return false
end
do
local function construct_node(lang, memo)
local branch, ancestors = {lang = lang:getCode()}
memo[lang:getCode()] = branch
for _, ancestor in ipairs(lang:getAncestors()) do
if ancestors == nil then
ancestors = {}
end
insert(ancestors, memo[ancestor:getCode()] or construct_node(ancestor, memo))
end
branch.ancestors = ancestors
return branch
end
function Language:getAncestorChain()
local chain = self._ancestorChain
if chain == nil then
chain = construct_node(self, {})
self._ancestorChain = chain
end
return chain
end
end
function Language:getAncestorChainOld()
local chain = self._ancestorChain
if chain == nil then
chain = {}
local step = self
while true do
local ancestors = step:getAncestors()
step = #ancestors == 1 and ancestors[1] or nil
if not step then
break
end
insert(chain, step)
end
self._ancestorChain = chain
end
return chain
end
local function fetch_descendants(self, fmt)
local descendants, family = {}, self:getFamily()
-- Iterate over all three datasets.
for _, data in ipairs{
require("Module:languages/code to canonical name"),
require("Module:etymology languages/code to canonical name"),
require("Module:families/code to canonical name"),
} do
for code in pairs(data) do
local lang = get_by_code(code, nil, true, true)
-- Test for a descendant. Earlier tests weed out most candidates, while the more intensive tests are only used sparingly.
if (
code ~= self._code and -- Not self.
lang:inFamily(family) and -- In the same family.
(
family:getProtoLanguageCode() == self._code or -- Self is the protolanguage.
self:hasDescendant(lang) or -- Full hasDescendant check.
(lang:getFullCode() == self._code and not self:hasAncestor(lang)) -- Etymology-only child which isn't an ancestor.
)
) then
if fmt == "object" then
insert(descendants, lang)
elseif fmt == "code" then
insert(descendants, code)
elseif fmt == "name" then
insert(descendants, lang:getCanonicalName())
end
end
end
end
return descendants
end
function Language:getDescendants()
local descendants = self._descendantObjects
if descendants == nil then
descendants = fetch_descendants(self, "object")
self._descendantObjects = descendants
end
return descendants
end
function Language:getDescendantCodes()
local descendants = self._descendantCodes
if descendants == nil then
descendants = fetch_descendants(self, "code")
self._descendantCodes = descendants
end
return descendants
end
function Language:getDescendantNames()
local descendants = self._descendantNames
if descendants == nil then
descendants = fetch_descendants(self, "name")
self._descendantNames = descendants
end
return descendants
end
do
local function check_lang(self, lang)
if type(lang) == "string" then
lang = get_by_code(lang, nil, true)
end
if lang:hasAncestor(self) then
return true
end
end
function Language:hasDescendant(...)
return check_inputs(self, check_lang, false, ...)
end
end
local function fetch_children(self, fmt)
local m_etym_data = require(etymology_languages_data_module)
local self_code, children = self._code, {}
for code, lang in pairs(m_etym_data) do
local _lang = lang
repeat
local parent = _lang.parent
if parent == self_code then
if fmt == "object" then
insert(children, get_by_code(code, nil, true))
elseif fmt == "code" then
insert(children, code)
elseif fmt == "name" then
insert(children, lang[1])
end
break
end
_lang = m_etym_data[parent]
until not _lang
end
return children
end
function Language:getChildren()
local children = self._childObjects
if children == nil then
children = fetch_children(self, "object")
self._childObjects = children
end
return children
end
function Language:getChildrenCodes()
local children = self._childCodes
if children == nil then
children = fetch_children(self, "code")
self._childCodes = children
end
return children
end
function Language:getChildrenNames()
local children = self._childNames
if children == nil then
children = fetch_children(self, "name")
self._childNames = children
end
return children
end
function Language:hasChild(...)
local lang = ...
if not lang then
return false
elseif type(lang) == "string" then
lang = get_by_code(lang, nil, true)
end
if lang:hasParent(self) then
return true
end
return self:hasChild(select(2, ...))
end
--[==[Returns the name of the main category of that language. Example: {{code|lua|"French language"}} for French, whose category is at [[:Category:French language]]. Unless optional argument <code>nocap</code> is given, the language name at the beginning of the returned value will be capitalized. This capitalization is correct for category names, but not if the language name is lowercase and the returned value of this function is used in the middle of a sentence.]==]
function Language:getCategoryName(nocap)
local name = self._categoryName
if name == nil then
name = self:getCanonicalNameLower()
-- If a substrate, omit any leading article.
if self:getFamilyCode() == "qfa-sub" then
name = name:gsub("^the ", ""):gsub("^a ", "")
end
-- Only add " language" if a full language.
if self:hasType("full") then
-- Unless the canonical name already ends with "language", "lect" or their derivatives, add " language".
if not (match(name, "[Ll]anguage$") or match(name, "[Ll]ect$")) then
name = name .. " language"
end
end
self._categoryName = name
end
if nocap then
return name
end
return mw.getContentLanguage():ucfirst(name)
end
--[==[Creates a link to the category; the link text is the canonical name.]==]
function Language:makeCategoryLink()
return make_link(self, ":Category:" .. self:getCategoryName(), self:getDisplayForm())
end
function Language:getStandardCharacters(sc)
local standard_chars = self._data.standard_chars
if type(standard_chars) ~= "table" then
return standard_chars
elseif sc and type(sc) ~= "string" then
check_object("script", nil, sc)
sc = sc:getCode()
end
if (not sc) or sc == "None" then
local scripts = {}
for _, script in pairs(standard_chars) do
insert(scripts, script)
end
return concat(scripts)
end
if standard_chars[sc] then
return standard_chars[sc] .. (standard_chars[1] or "")
end
end
--[==[
Strip diacritics from display text `text` (in a language-specific fashion), which is in the script `sc`. If `sc` is
omitted or {nil}, the script is autodetected. This also strips certain punctuation characters from the end and (in the
case of Spanish upside-down question mark and exclamation points) from the beginning; strips any whitespace at the
end of the text or between the text and final stripped punctuation characters; and applies some language-specific
Unicode normalizations to replace discouraged characters with their prescribed alternatives. Return the stripped text.
]==]
function Language:stripDiacritics(text, sc)
if (not text) or text == "" then
return text
end
sc = checkScript(text, self, sc)
text = normalize(text, sc)
-- FIXME, rename makeEntryName to stripDiacritics and get rid of second and third return values
-- everywhere
text, _, _ = iterateSectionSubstitutions(self, text, sc, nil, nil,
self._data.strip_diacritics or self._data.entry_name, "strip_diacritics", "stripDiacritics")
text = umatch(text, "^[¿¡]?(.-[^%s%p].-)%s*[؟?!;՛՜ ՞ ՟?!︖︕।॥။၊་།]?$") or text
return text
end
--[==[
Convert a ''logical'' pagename (the pagename as it appears to the user, after diacritics and punctuation have been
stripped) to a ''physical'' pagename (the pagename as it appears in the MediaWiki database). Reasons for a difference
between the two are (a) unsupported titles such as `[ ]` (with square brackets in them), `#` (pound/hash sign) and
`¯\_(ツ)_/¯` (with underscores), as well as overly long titles of various sorts; (b) "mammoth" pages that are split into
parts (e.g. `a`, which is split into physical pagenames `a/languages A to L` and `a/languages M to Z`). For almost all
purposes, you should work with logical and not physical pagenames. But there are certain use cases that require physical
pagenames, such as checking the existence of a page or retrieving a page's contents.
`pagename` is the logical pagename to be converted. `is_reconstructed_or_appendix` indicates whether the page is in the
`Reconstruction` or `Appendix` namespaces. If it is omitted or has the value {nil}, the pagename is checked for an
initial asterisk, and if found, the page is assumed to be a `Reconstruction` page. Setting a value of `false` or `true`
to `is_reconstructed_or_appendix` disables this check and allows for mainspace pagenames that begin with an asterisk.
]==]
function Language:logicalToPhysical(pagename, is_reconstructed_or_appendix)
-- FIXME: This probably shouldn't happen but it happens when makeEntryName() receives nil.
if pagename == nil then
track("nil-passed-to-logicalToPhysical")
return nil
end
local initial_asterisk
if is_reconstructed_or_appendix == nil then
local pagename_minus_initial_asterisk
initial_asterisk, pagename_minus_initial_asterisk = pagename:match("^(%*)(.*)$")
if pagename_minus_initial_asterisk then
is_reconstructed_or_appendix = true
pagename = pagename_minus_initial_asterisk
elseif self:hasType("appendix-constructed") then
is_reconstructed_or_appendix = true
end
end
if not is_reconstructed_or_appendix then
-- Check if the pagename is a listed unsupported title.
local unsupportedTitles = load_data(links_data_module).unsupported_titles
if unsupportedTitles[pagename] then
return "Unsupported titles/" .. unsupportedTitles[pagename]
end
end
-- Set `unsupported` as true if certain conditions are met.
local unsupported
-- Check if there's an unsupported character. \239\191\189 is the replacement character U+FFFD, which can't be typed
-- directly here due to an abuse filter. Unix-style dot-slash notation is also unsupported, as it is used for
-- relative paths in links, as are 3 or more consecutive tildes. Note: match is faster with magic
-- characters/charsets; find is faster with plaintext.
if (
match(pagename, "[#<>%[%]_{|}]") or
find(pagename, "\239\191\189") or
match(pagename, "%f[^%z/]%.%.?%f[%z/]") or
find(pagename, "~~~")
) then
unsupported = true
-- If it looks like an interwiki link.
elseif find(pagename, ":") then
local prefix = gsub(pagename, "^:*(.-):.*", ulower)
if (
load_data("Module:data/namespaces")[prefix] or
load_data("Module:data/interwikis")[prefix]
) then
unsupported = true
end
end
-- Escape unsupported characters so they can be used in titles. ` is used as a delimiter for this, so a raw use of
-- it in an unsupported title is also escaped here to prevent interference; this is only done with unsupported
-- titles, though, so inclusion won't in itself mean a title is treated as unsupported (which is why it's excluded
-- from the earlier test).
if unsupported then
-- FIXME: This conversion needs to be different for reconstructed pages with unsupported characters. There
-- aren't any currently, but if there ever are, we need to fix this e.g. to put them in something like
-- Reconstruction:Proto-Indo-European/Unsupported titles/`lowbar``num`.
local unsupported_characters = load_data(links_data_module).unsupported_characters
pagename = pagename:gsub("[#<>%[%]_`{|}\239]\191?\189?", unsupported_characters)
:gsub("%f[^%z/]%.%.?%f[%z/]", function(m)
return (gsub(m, "%.", "`period`"))
end)
:gsub("~~~+", function(m)
return (gsub(m, "~", "`tilde`"))
end)
pagename = "Unsupported titles/" .. pagename
elseif not is_reconstructed_or_appendix then
-- Check if this is a mammoth page. If so, which subpage should we link to?
local m_links_data = load_data(links_data_module)
local mammoth_page_type = m_links_data.mammoth_pages[pagename]
if mammoth_page_type then
local canonical_name = self:getFullName()
if canonical_name ~= "Translingual" and canonical_name ~= "English" then
local this_subpage
local L2_sort_key = get_L2_sort_key(canonical_name)
for _, subpage_spec in ipairs(m_links_data.mammoth_page_subpage_types[mammoth_page_type]) do
-- unpack() fails utterly on data loaded using mw.loadData() even if offsets are given
local subpage, pattern = subpage_spec[1], subpage_spec[2]
if pattern == true or L2_sort_key:match(pattern) then
this_subpage = subpage
break
end
end
if not this_subpage then
error(("Internal error: Bad data in mammoth_page_subpage_pages in [[Module:links/data]] for mammoth page %s, type %s; last entry didn't have 'true' in it"):format(
pagename, mammoth_page_type))
end
pagename = pagename .. "/" .. this_subpage
end
end
end
return (initial_asterisk or "") .. pagename
end
--[==[
Strip the diacritics from a display pagename and convert the resulting logical pagename into a physical pagename.
This allows you, for example, to retrieve the contents of the page or check its existence. WARNING: This is deprecated
and will be going away. It is a simple composition of `self:stripDiacritics` and `self:logicalToPhysical`; most callers
only want the former, and if you need both, call them both yourself.
`text` and `sc` are as in `self:stripDiacritics`, and `is_reconstructed_or_appendix` is as in `self:logicalToPhysical`.
]==]
function Language:makeEntryName(text, sc, is_reconstructed_or_appendix)
return self:logicalToPhysical(self:stripDiacritics(text, sc), is_reconstructed_or_appendix)
end
--[==[Generates alternative forms using a specified method, and returns them as a table. If no method is specified, returns a table containing only the input term.]==]
function Language:generateForms(text, sc)
local generate_forms = self._data.generate_forms
if generate_forms == nil then
return {text}
end
sc = checkScript(text, self, sc)
return require("Module:" .. self._data.generate_forms).generateForms(text, self, sc)
end
--[==[Creates a sort key for the given stripped text, following the rules appropriate for the language. This removes
diacritical marks from the stripped text if they are not considered significant for sorting, and may perform some other
changes. Any initial hyphen is also removed, and anything in parentheses is removed as well.
The <code>sort_key</code> setting for each language in the data modules defines the replacements made by this function, or it gives the name of the module that takes the stripped text and returns a sortkey.]==]
function Language:makeSortKey(text, sc)
if (not text) or text == "" then
return text
end
if match(text, "<[^<>]+>") then
track("track HTML tag")
end
-- Remove directional characters, bold, italics, soft hyphens, strip markers and HTML tags.
-- FIXME: Partly duplicated with remove_formatting() in [[Module:links]].
text = ugsub(text, "[\194\173\226\128\170-\226\128\174\226\129\166-\226\129\169]", "")
text = text:gsub("('*)'''(.-'*)'''", "%1%2"):gsub("('*)''(.-'*)''", "%1%2")
text = gsub(unstrip(text), "<[^<>]+>", "")
text = decode_uri(text, "PATH")
text = checkNoEntities(self, text)
-- Remove initial hyphens and * unless the term only consists of spacing + punctuation characters.
text = ugsub(text, "^([-]*)[-־ـ᠊*]+([-]*)(.*[^%s%p].*)", "%1%2%3")
sc = checkScript(text, self, sc)
text = normalize(text, sc)
text = removeCarets(text, sc)
-- For languages with dotted dotless i, ensure that "İ" is sorted as "i", and "I" is sorted as "ı".
if self:hasDottedDotlessI() then
text = gsub(text, "I\204\135", "i") -- decomposed "İ"
:gsub("I", "ı")
text = sc:toFixedNFD(text)
end
-- Convert to lowercase, make the sortkey, then convert to uppercase. Where the language has dotted dotless i, it is
-- usually not necessary to convert "i" to "İ" and "ı" to "I" first, because "I" will always be interpreted as
-- conventional "I" (not dotless "İ") by any sorting algorithms, which will have been taken into account by the
-- sortkey substitutions themselves. However, if no sortkey substitutions have been specified, then conversion is
-- necessary so as to prevent "i" and "ı" both being sorted as "I".
--
-- An exception is made for scripts that (sometimes) sort by scraping page content, as that means they are sensitive
-- to changes in capitalization (as it changes the target page).
if not sc:sortByScraping() then
text = ulower(text)
end
local actual_substitution_data
-- Don't trim whitespace here because it's significant at the beginning of a sort key or sort base.
text, _, actual_substitution_data = iterateSectionSubstitutions(self, text, sc, nil, nil, self._data.sort_key,
"sort_key", "makeSortKey", "notrim")
if not sc:sortByScraping() then
if self:hasDottedDotlessI() and not actual_substitution_data then
text = text:gsub("ı", "I"):gsub("i", "İ")
text = sc:toFixedNFC(text)
end
text = uupper(text)
end
-- Remove parentheses, as long as they are either preceded or followed by something.
text = gsub(text, "(.)[()]+", "%1"):gsub("[()]+(.)", "%1")
text = escape_risky_characters(text)
return text
end
--[==[Create the form used as as a basis for display text and transliteration. FIXME: Rename to correctInputText().]==]
local function processDisplayText(text, self, sc, keepCarets, keepPrefixes)
local subbedChars = {}
text, subbedChars = doTempSubstitutions(text, subbedChars, keepCarets)
text = decode_uri(text, "PATH")
text = checkNoEntities(self, text)
sc = checkScript(text, self, sc)
text = normalize(text, sc)
text, subbedChars = iterateSectionSubstitutions(self, text, sc, subbedChars, keepCarets, self._data.display_text,
"display_text", "makeDisplayText")
text = removeCarets(text, sc)
-- Remove any interwiki link prefixes (unless they have been escaped or this has been disabled).
if find(text, ":") and not keepPrefixes then
local rep
repeat
text, rep = gsub(text, "\\\\(\\*:)", "\3%1")
until rep == 0
text = gsub(text, "\\:", "\4")
while true do
local prefix = gsub(text, "^(.-):.+", function(m1)
return (gsub(m1, "\244[\128-\191]*", ""))
end)
-- Check if the prefix is an interwiki, though ignore capitalised Wiktionary:, which is a namespace.
if not prefix or prefix == text or prefix == "Wiktionary"
or not (load_data("Module:data/interwikis")[ulower(prefix)] or prefix == "") then
break
end
text = gsub(text, "^(.-):(.*)", function(m1, m2)
local ret = {}
for subbedChar in gmatch(m1, "\244[\128-\191]*") do
insert(ret, subbedChar)
end
return concat(ret) .. m2
end)
end
text = gsub(text, "\3", "\\"):gsub("\4", ":")
end
return text, subbedChars
end
--[==[Make the display text (i.e. what is displayed on the page).]==]
function Language:makeDisplayText(text, sc, keepPrefixes)
if not text or text == "" then
return text
end
local subbedChars
text, subbedChars = processDisplayText(text, self, sc, nil, keepPrefixes)
text = escape_risky_characters(text)
return undoTempSubstitutions(text, subbedChars)
end
--[==[Transliterates the text from the given script into the Latin script (see
[[Wiktionary:Transliteration and romanization]]). The language must have the <code>translit</code> property for this to
work; if it is not present, {{code|lua|nil}} is returned.
The <code>sc</code> parameter is handled by the transliteration module, and how it is handled is specific to that
module. Some transliteration modules may tolerate {{code|lua|nil}} as the script, others require it to be one of the
possible scripts that the module can transliterate, and will throw an error if it's not one of them. For this reason,
the <code>sc</code> parameter should always be provided when writing non-language-specific code.
The <code>module_override</code> parameter is used to override the default module that is used to provide the
transliteration. This is useful in cases where you need to demonstrate a particular module in use, but there is no
default module yet, or you want to demonstrate an alternative version of a transliteration module before making it
official. It should not be used in real modules or templates, only for testing. All uses of this parameter are tracked
by [[Wiktionary:Tracking/languages/module_override]].
'''Known bugs''':
* This function assumes {tr(s1) .. tr(s2) == tr(s1 .. s2)}. When this assertion fails, wikitext markups like <nowiki>'''</nowiki> can cause wrong transliterations.
* HTML entities like <code>&apos;</code>, often used to escape wikitext markups, do not work.
]==]
function Language:transliterate(text, sc, module_override)
-- If there is no text, or the language doesn't have transliteration data and there's no override, return nil.
if not text or text == "" or text == "-" then
return text
end
-- If the script is not transliteratable (and no override is given), return nil.
sc = checkScript(text, self, sc)
if not (sc:isTransliterated() or module_override) then
-- temporary tracking to see if/when this gets triggered
track("non-transliterable")
track("non-transliterable/" .. self._code)
track("non-transliterable/" .. sc:getCode())
track("non-transliterable/" .. sc:getCode() .. "/" .. self._code)
return nil
end
-- Remove any strip markers.
text = unstrip(text)
-- Do not process the formatting into PUA characters for certain languages.
local processed = load_data(languages_data_module).substitution[self._code] ~= "none"
-- Get the display text with the keepCarets flag set.
local subbedChars
if processed then
text, subbedChars = processDisplayText(text, self, sc, true)
end
-- Transliterate (using the module override if applicable).
text, subbedChars = iterateSectionSubstitutions(self, text, sc, subbedChars, true, module_override or
self._data.translit, "translit", "tr")
if not text then
return nil
end
-- Incomplete transliterations return nil.
local charset = sc.characters
if charset and umatch(text, "[" .. charset .. "]") then
-- Remove any characters in Latin, which includes Latin characters also included in other scripts (as these are
-- false positives), as well as any PUA substitutions. Anything remaining should only be script code "None"
-- (e.g. numerals).
local check_text = ugsub(text, "[" .. get_script("Latn").characters .. "-]+", "")
-- Set none_is_last_resort_only flag, so that any non-None chars will cause a script other than "None" to be
-- returned.
if find_best_script_without_lang(check_text, true):getCode() ~= "None" then
return nil
end
end
if processed then
text = escape_risky_characters(text)
text = undoTempSubstitutions(text, subbedChars)
end
-- If the script does not use capitalization, then capitalize any letters of the transliteration which are
-- immediately preceded by a caret (and remove the caret).
if text and not sc:hasCapitalization() and text:find("^", 1, true) then
text = processCarets(text, "%^([\128-\191\244]*%*?)([^\128-\191\244][\128-\191]*)", function(m1, m2)
return m1 .. uupper(m2)
end)
end
-- Track module overrides.
if module_override ~= nil then
track("module_override")
end
return text
end
do
local function handle_language_spec(self, spec, sc)
local ret = self["_" .. spec]
if ret == nil then
ret = self._data[spec]
if type(ret) == "string" then
ret = list_to_set(split(ret, ",", true, true))
end
self["_" .. spec] = ret
end
if type(ret) == "table" then
ret = ret[sc:getCode()]
end
return not not ret
end
function Language:overrideManualTranslit(sc)
return handle_language_spec(self, "override_translit", sc)
end
function Language:link_tr(sc)
return handle_language_spec(self, "link_tr", sc)
end
end
--[==[Returns {{code|lua|true}} if the language has a transliteration module, or {{code|lua|false}} if it doesn't.]==]
function Language:hasTranslit()
return not not self._data.translit
end
--[==[Returns {{code|lua|true}} if the language uses the letters I/ı and İ/i, or {{code|lua|false}} if it doesn't.]==]
function Language:hasDottedDotlessI()
return not not self._data.dotted_dotless_i
end
function Language:toJSON(opts)
local strip_diacritics, strip_diacritics_patterns, strip_diacritics_remove_diacritics = self._data.strip_diacritics
if strip_diacritics then
if strip_diacritics.from then
strip_diacritics_patterns = {}
for i, from in ipairs(strip_diacritics.from) do
insert(strip_diacritics_patterns, {from = from, to = strip_diacritics.to[i] or ""})
end
end
strip_diacritics_remove_diacritics = strip_diacritics.remove_diacritics
end
-- mainCode should only end up non-nil if dontCanonicalizeAliases is passed to make_object().
-- props should either contain zero-argument functions to compute the value, or the value itself.
local props = {
ancestors = function() return self:getAncestorCodes() end,
canonicalName = function() return self:getCanonicalName() end,
categoryName = function() return self:getCategoryName("nocap") end,
code = self._code,
mainCode = self._mainCode,
parent = function() return self:getParentCode() end,
full = function() return self:getFullCode() end,
stripDiacriticsPatterns = strip_diacritics_patterns,
stripDiacriticsRemoveDiacritics = strip_diacritics_remove_diacritics,
family = function() return self:getFamilyCode() end,
aliases = function() return self:getAliases() end,
varieties = function() return self:getVarieties() end,
otherNames = function() return self:getOtherNames() end,
scripts = function() return self:getScriptCodes() end,
type = function() return keys_to_list(self:getTypes()) end,
wikimediaLanguages = function() return self:getWikimediaLanguageCodes() end,
wikidataItem = function() return self:getWikidataItem() end,
wikipediaArticle = function() return self:getWikipediaArticle(true) end,
}
local ret = {}
for prop, val in pairs(props) do
if not opts.skip_fields or not opts.skip_fields[prop] then
if type(val) == "function" then
ret[prop] = val()
else
ret[prop] = val
end
end
end
-- Use `deep_copy` when returning a table, so that there are no editing restrictions imposed by `mw.loadData`.
return opts and opts.lua_table and deep_copy(ret) or to_json(ret, opts)
end
function export.getDataModuleName(code)
local letter = match(code, "^(%l)%l%l?$")
return "Module:" .. (
letter == nil and "languages/data/exceptional" or
#code == 2 and "languages/data/2" or
"languages/data/3/" .. letter
)
end
get_data_module_name = export.getDataModuleName
function export.getExtraDataModuleName(code)
return get_data_module_name(code) .. "/extra"
end
get_extra_data_module_name = export.getExtraDataModuleName
do
local function make_stack(data)
local key_types = {
[2] = "unique",
aliases = "unique",
otherNames = "unique",
type = "append",
varieties = "unique",
wikipedia_article = "unique",
wikimedia_codes = "unique"
}
local function __index(self, k)
local stack, key_type = getmetatable(self), key_types[k]
-- Data that isn't inherited from the parent.
if key_type == "unique" then
local v = stack[stack[make_stack]][k]
if v == nil then
local layer = stack[0]
if layer then -- Could be false if there's no extra data.
v = layer[k]
end
end
return v
-- Data that is appended by each generation.
elseif key_type == "append" then
local parts, offset, n = {}, 0, stack[make_stack]
for i = 1, n do
local part = stack[i][k]
if part == nil then
offset = offset + 1
else
parts[i - offset] = part
end
end
return offset ~= n and concat(parts, ",") or nil
end
local n = stack[make_stack]
while true do
local layer = stack[n]
if not layer then -- Could be false if there's no extra data.
return nil
end
local v = layer[k]
if v ~= nil then
return v
end
n = n - 1
end
end
local function __newindex()
error("table is read-only")
end
local function __pairs(self)
-- Iterate down the stack, caching keys to avoid duplicate returns.
local stack, seen = getmetatable(self), {}
local n = stack[make_stack]
local iter, state, k, v = pairs(stack[n])
return function()
repeat
repeat
k = iter(state, k)
if k == nil then
n = n - 1
local layer = stack[n]
if not layer then -- Could be false if there's no extra data.
return nil
end
iter, state, k = pairs(layer)
end
until not (k == nil or seen[k])
-- Get the value via a lookup, as the one returned by the
-- iterator will be the raw value from the current layer,
-- which may not be the one __index will return for that
-- key. Also memoize the key in `seen` (even if the lookup
-- returns nil) so that it doesn't get looked up again.
-- TODO: store values in `self`, avoiding the need to create
-- the `seen` table. The iterator will need to iterate over
-- `self` with `next` first to find these on future loops.
v, seen[k] = self[k], true
until v ~= nil
return k, v
end
end
local __ipairs = require(table_module).indexIpairs
function make_stack(data)
local stack = {
data,
[make_stack] = 1, -- stores the length and acts as a sentinel to confirm a given metatable is a stack.
__index = __index,
__newindex = __newindex,
__pairs = __pairs,
__ipairs = __ipairs,
}
stack.__metatable = stack
return setmetatable({}, stack), stack
end
return make_stack(data)
end
local function get_stack(data)
local stack = getmetatable(data)
return stack and type(stack) == "table" and stack[make_stack] and stack or nil
end
--[==[
<span style="color: var(--wikt-palette-red,#BA0000)">This function is not for use in entries or other content pages.</span>
Returns a blob of data about the language. The format of this blob is undocumented, and perhaps unstable; it's intended for things like the module's own unit-tests, which are "close friends" with the module and will be kept up-to-date as the format changes. If `extra` is set, any extra data in the relevant `/extra` module will be included. (Note that it will be included anyway if it has already been loaded into the language object.) If `raw` is set, then the returned data will not contain any data inherited from parent objects.
-- Do NOT use these methods!
-- All uses should be pre-approved on the talk page!
]==]
function Language:getData(extra, raw)
if extra then
self:loadInExtraData()
end
local data = self._data
-- If raw is not set, just return the data.
if not raw then
return data
end
local stack = get_stack(data)
-- If there isn't a stack or its length is 1, return the data. Extra data (if any) will be included, as it's stored at key 0 and doesn't affect the reported length.
if stack == nil then
return data
end
local n = stack[make_stack]
if n == 1 then
return data
end
local extra = stack[0]
-- If there isn't any extra data, return the top layer of the stack.
if extra == nil then
return stack[n]
end
-- If there is, return a new stack which has the top layer at key 1 and the extra data at key 0.
data, stack = make_stack(stack[n])
stack[0] = extra
return data
end
function Language:loadInExtraData()
-- Only full languages have extra data.
if not self:hasType("language", "full") then
return
end
local data = self._data
-- If there's no stack, create one.
local stack = get_stack(self._data)
if stack == nil then
data, stack = make_stack(data)
-- If already loaded, return.
elseif stack[0] ~= nil then
return
end
self._data = data
-- Load extra data from the relevant module and add it to the stack at key 0, so that the __index and __pairs metamethods will pick it up, since they iterate down the stack until they run out of layers.
local code = self._code
local modulename = get_extra_data_module_name(code)
-- No data cached as false.
stack[0] = modulename and load_data(modulename)[code] or false
end
--[==[Returns the name of the module containing the language's data. Currently, this is always [[Module:scripts/data]].]==]
function Language:getDataModuleName()
local name = self._dataModuleName
if name == nil then
name = self:hasType("etymology-only") and etymology_languages_data_module or
get_data_module_name(self._mainCode or self._code)
self._dataModuleName = name
end
return name
end
--[==[Returns the name of the module containing the language's data. Currently, this is always [[Module:scripts/data]].]==]
function Language:getExtraDataModuleName()
local name = self._extraDataModuleName
if name == nil then
name = not self:hasType("etymology-only") and get_extra_data_module_name(self._mainCode or self._code) or false
self._extraDataModuleName = name
end
return name or nil
end
function export.makeObject(code, data, dontCanonicalizeAliases)
local data_type = type(data)
if data_type ~= "table" then
error(("bad argument #2 to 'makeObject' (table expected, got %s)"):format(data_type))
end
-- Convert any aliases.
local input_code = code
code = normalize_code(code)
input_code = dontCanonicalizeAliases and input_code or code
local parent
if data.parent then
parent = get_by_code(data.parent, nil, true, true)
else
parent = Language
end
parent.__index = parent
local lang = {_code = input_code}
-- This can only happen if dontCanonicalizeAliases is passed to make_object().
if code ~= input_code then
lang._mainCode = code
end
local parent_data = parent._data
if parent_data == nil then
-- Full code is the same as the code.
lang._fullCode = parent._code or code
else
-- Copy full code.
lang._fullCode = parent._fullCode
local stack = get_stack(parent_data)
if stack == nil then
parent_data, stack = make_stack(parent_data)
end
-- Insert the input data as the new top layer of the stack.
local n = stack[make_stack] + 1
data, stack[n], stack[make_stack] = parent_data, data, n
end
lang._data = data
return setmetatable(lang, parent)
end
make_object = export.makeObject
end
--[==[Finds the language whose code matches the one provided. If it exists, it returns a <code class="nf">Language</code> object representing the language. Otherwise, it returns {{code|lua|nil}}, unless <code class="n">paramForError</code> is given, in which case an error is generated. If <code class="n">paramForError</code> is {{code|lua|true}}, a generic error message mentioning the bad code is generated; otherwise <code class="n">paramForError</code> should be a string or number specifying the parameter that the code came from, and this parameter will be mentioned in the error message along with the bad code. If <code class="n">allowEtymLang</code> is specified, etymology-only language codes are allowed and looked up along with normal language codes. If <code class="n">allowFamily</code> is specified, language family codes are allowed and looked up along with normal language codes.]==]
function export.getByCode(code, paramForError, allowEtymLang, allowFamily)
-- Track uses of paramForError, ultimately so it can be removed, as error-handling should be done by [[Module:parameters]], not here.
if paramForError ~= nil then
track("paramForError")
end
if type(code) ~= "string" then
local typ
if not code then
typ = "nil"
elseif check_object("language", true, code) then
typ = "a language object"
elseif check_object("family", true, code) then
typ = "a family object"
else
typ = "a " .. type(code)
end
error("The function getByCode expects a string as its first argument, but received " .. typ .. ".")
end
local m_data = load_data(languages_data_module)
if m_data.aliases[code] or m_data.track[code] then
track(code)
end
local norm_code = normalize_code(code)
-- Get the data, checking for etymology-only languages if allowEtymLang is set.
local data = load_data(get_data_module_name(norm_code))[norm_code] or
allowEtymLang and load_data(etymology_languages_data_module)[norm_code]
-- If no data was found and allowFamily is set, check the family data. If the main family data was found, make the object with [[Module:families]] instead, as family objects have different methods. However, if it's an etymology-only family, use make_object in this module (which handles object inheritance), and the family-specific methods will be inherited from the parent object.
if data == nil and allowFamily then
data = load_data("Module:families/data")[norm_code]
if data ~= nil then
if data.parent == nil then
return make_family_object(norm_code, data)
elseif not allowEtymLang then
data = nil
end
end
end
local retval = code and data and make_object(code, data)
if not retval and paramForError then
require("Module:languages/errorGetBy").code(code, paramForError, allowEtymLang, allowFamily)
end
return retval
end
get_by_code = export.getByCode
--[==[Finds the language whose canonical name (the name used to represent that language on Wiktionary) or other name matches the one provided. If it exists, it returns a <code class="nf">Language</code> object representing the language. Otherwise, it returns {{code|lua|nil}}, unless <code class="n">paramForError</code> is given, in which case an error is generated. If <code class="n">allowEtymLang</code> is specified, etymology-only language codes are allowed and looked up along with normal language codes. If <code class="n">allowFamily</code> is specified, language family codes are allowed and looked up along with normal language codes.
The canonical name of languages should always be unique (it is an error for two languages on Wiktionary to share the same canonical name), so this is guaranteed to give at most one result.
This function is powered by [[Module:languages/canonical names]], which contains a pre-generated mapping of full-language canonical names to codes. It is generated by going through the [[:Category:Language data modules]] for full languages. When <code class="n">allowEtymLang</code> is specified for the above function, [[Module:etymology languages/canonical names]] may also be used, and when <code class="n">allowFamily</code> is specified for the above function, [[Module:families/canonical names]] may also be used.]==]
function export.getByCanonicalName(name, errorIfInvalid, allowEtymLang, allowFamily)
local byName = load_data("Module:languages/canonical names")
local code = byName and byName[name]
if not code and allowEtymLang then
byName = load_data("Module:etymology languages/canonical names")
code = byName and byName[name] or
byName[gsub(name, " [Ss]ubstrate$", "")] or
byName[gsub(name, "^a ", "")] or
byName[gsub(name, "^a ", ""):gsub(" [Ss]ubstrate$", "")] or
-- For etymology families like "ira-pro".
-- FIXME: This is not ideal, as it allows " languages" to be appended to any etymology-only language, too.
byName[match(name, "^(.*) languages$")]
end
if not code and allowFamily then
byName = load_data("Module:families/canonical names")
code = byName[name] or byName[match(name, "^(.*) languages$")]
end
local retval = code and get_by_code(code, errorIfInvalid, allowEtymLang, allowFamily)
if not retval and errorIfInvalid then
require("Module:languages/errorGetBy").canonicalName(name, allowEtymLang, allowFamily)
end
return retval
end
--[==[Used by [[Module:languages/data/2]] (et al.) and [[Module:etymology languages/data]], [[Module:families/data]], [[Module:scripts/data]] and [[Module:writing systems/data]] to finalize the data into the format that is actually returned.]==]
function export.finalizeData(data, main_type, variety)
local fields = {"type"}
if main_type == "language" then
insert(fields, 4) -- script codes
insert(fields, "ancestors")
insert(fields, "link_tr")
insert(fields, "override_translit")
insert(fields, "wikimedia_codes")
elseif main_type == "script" then
insert(fields, 3) -- writing system codes
end -- Families and writing systems have no extra fields to process.
local fields_len = #fields
for _, entity in next, data do
if variety then
-- Move parent from 3 to "parent" and family from "family" to 3. These are different for the sake of convenience, since very few varieties have the family specified, whereas all of them have a parent.
entity.parent, entity[3], entity.family = entity[3], entity.family
-- Give the type "regular" iff not a variety and no other types are assigned.
elseif not (entity.type or entity.parent) then
entity.type = "regular"
end
for i = 1, fields_len do
local key = fields[i]
local field = entity[key]
if field and type(field) == "string" then
entity[key] = gsub(field, "%s*,%s*", ",")
end
end
end
return data
end
--[==[For backwards compatibility only; modules should require the error themselves.]==]
function export.err(lang_code, param, code_desc, template_tag, not_real_lang)
return require("Module:languages/error")(lang_code, param, code_desc, template_tag, not_real_lang)
end
return export
278qio2ki35dtgm22th3ft9t3fyumb7
Modul:languages/data/2
828
200741
1349283
1099265
2026-04-10T19:19:58Z
Swarabakti
18192
1349283
Scribunto
text/plain
local m_langdata = require("Module:languages/data")
-- Loaded on demand, as it may not be needed (depending on the data).
local function u(...)
u = require("Module:string utilities").char
return u(...)
end
local c = m_langdata.chars
local p = m_langdata.puaChars
local s = m_langdata.shared
-- Ideally, we want to move these into [[Module:languages/data]], but because (a) it's necessary to use require on that module, and (b) they're only used in this data module, it's less memory-efficient to do that at the moment. If it becomes possible to use mw.loadData, then these should be moved there.
s["de-Latn-sortkey"] = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.diaer .. c.ringabove,
from = {"æ", "œ", "ß"},
to = {"ae", "oe", "ss"}
}
s["de-Latn-standardchars"] = "AaÄäBbCcDdEeFfGgHhIiJjKkLlMmNnOoÖöPpQqRrSsẞßTtUuÜüVvWwXxYyZz"
s["ka-entryname"] = {remove_diacritics = c.circ}
s["no-sortkey"] = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.dacute .. c.caron .. c.cedilla,
remove_exceptions = {"å"},
from = {"æ", "ø", "å"},
to = {"z" .. p[1], "z" .. p[2], "z" .. p[3]}
}
s["no-standardchars"] = "AaBbDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsTtUuVvYyÆæØøÅå" .. c.punc
s["tg-entryname"] = {remove_diacritics = c.grave .. c.acute}
s["tk-entryname"] = {remove_diacritics = c.macron}
local m = {}
m["aa"] = {
"Afar",
27811,
"cus-eas",
"Latn, Ethi",
entry_name = {
Latn = {remove_diacritics = c.acute},
},
}
m["ab"] = {
"Abkhaz",
5111,
"cau-abz",
"Cyrl, Geor, Latn",
translit = {
Cyrl = "ab-translit",
Geor = "Geor-translit",
},
override_translit = true,
display_text = {
Cyrl = s["cau-Cyrl-displaytext"]
},
entry_name = {
Cyrl = {
remove_diacritics = c.acute,
from = {"^а%-"},
to = {"а"},
},
Latn = s["cau-Latn-entryname"],
},
sort_key = {
Cyrl = {
from = {
"х'ә", -- 3 chars
"гь", "гә", "ӷь", "ҕь", "ӷә", "ҕә", "дә", "ё", "жь", "жә", "ҙә", "ӡә", "ӡ'", "кь", "кә", "қь", "қә", "ҟь", "ҟә", "ҫә", "тә", "ҭә", "ф'", "хь", "хә", "х'", "ҳә", "ць", "цә", "ц'", "ҵә", "ҵ'", "шь", "шә", "џь", -- 2 chars
"ӷ", "ҕ", "ҙ", "ӡ", "қ", "ҟ", "ԥ", "ҧ", "ҫ", "ҭ", "ҳ", "ҵ", "ҷ", "ҽ", "ҿ", "ҩ", "џ", "ә", -- 1 char
"^а",
},
to = {
"х" .. p[4],
"г" .. p[1], "г" .. p[2], "г" .. p[5], "г" .. p[6], "г" .. p[7], "г" .. p[8], "д" .. p[1], "е" .. p[1], "ж" .. p[1], "ж" .. p[2], "з" .. p[2], "з" .. p[4], "з" .. p[5], "к" .. p[1], "к" .. p[2], "к" .. p[4], "к" .. p[5], "к" .. p[7], "к" .. p[8], "с" .. p[2], "т" .. p[1], "т" .. p[3], "ф" .. p[1], "х" .. p[1], "х" .. p[2], "х" .. p[3], "х" .. p[6], "ц" .. p[1], "ц" .. p[2], "ц" .. p[3], "ц" .. p[5], "ц" .. p[6], "ш" .. p[1], "ш" .. p[2], "ы" .. p[3],
"г" .. p[3], "г" .. p[4], "з" .. p[1], "з" .. p[3], "к" .. p[3], "к" .. p[6], "п" .. p[1], "п" .. p[2], "с" .. p[1], "т" .. p[2], "х" .. p[5], "ц" .. p[4], "ч" .. p[1], "ч" .. p[2], "ч" .. p[3], "ы" .. p[1], "ы" .. p[2], "ь" .. p[1],
"",
}
},
},
}
m["ae"] = {
"Avestan",
29572,
"ira-cen",
"Avst, Gujr",
translit = {
Avst = "Avst-translit"
},
}
m["af"] = {
"Afrikaans",
14196,
"gmw-frk",
"Latn, Arab",
ancestors = "nl",
sort_key = {
Latn = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.diaer .. c.ringabove .. c.cedilla .. "'",
from = {"['ʼ]n"},
to = {"n" .. p[1]}
}
},
}
m["ak"] = {
"Akan",
28026,
"alv-ctn",
"Latn",
}
m["am"] = {
"Amharic",
28244,
"sem-eth",
"Ethi",
translit = "Ethi-translit",
}
m["an"] = {
"Aragonese",
8765,
"roa-nar",
"Latn",
}
m["ar"] = {
"Arab",
13955,
"sem-arb",
"Arab, Hebr, Syrc, Brai, Nbat",
translit = {
Arab = "ar-translit"
},
strip_diacritics = {
Arab = "ar-stripdiacritics",
},
-- Hebr display_text, strip_diacritics, sort_key in [[Module:scripts/data]]
}
m["as"] = {
"Assamese",
29401,
"inc-bas",
"as-Beng",
ancestors = "inc-mas",
translit = "as-translit",
}
m["av"] = {
"Avar",
29561,
"cau-ava",
"Cyrl, Latn, Arab",
ancestors = "oav",
translit = {
Cyrl = "cau-nec-translit",
Arab = "ar-translit",
},
override_translit = true,
display_text = {
Cyrl = s["cau-Cyrl-displaytext"],
},
entry_name = {
Cyrl = s["cau-Cyrl-entryname"],
Latn = s["cau-Latn-entryname"],
},
sort_key = {
Cyrl = {
from = {"гъ", "гь", "гӏ", "ё", "кк", "къ", "кь", "кӏ", "лъ", "лӏ", "тӏ", "хх", "хъ", "хь", "хӏ", "цӏ", "чӏ"},
to = {"г" .. p[1], "г" .. p[2], "г" .. p[3], "е" .. p[1], "к" .. p[1], "к" .. p[2], "к" .. p[3], "к" .. p[4], "л" .. p[1], "л" .. p[2], "т" .. p[1], "х" .. p[1], "х" .. p[2], "х" .. p[3], "х" .. p[4], "ц" .. p[1], "ч" .. p[1]}
},
},
}
m["ay"] = {
"Aymara",
4627,
"sai-aym",
"Latn",
}
m["az"] = {
"Azerbaijani",
9292,
"trk-ogz",
"Latn, Cyrl, fa-Arab",
ancestors = "trk-oat",
dotted_dotless_i = true,
entry_name = {
Latn = {
from = {"ʼ"},
to = {"'"},
},
["fa-Arab"] = {
module = "ar-entryname",
["from"] = {
"ۆ",
"ۇ",
"وْ",
"ڲ",
"ؽ",
},
["to"] = {
"و",
"و",
"و",
"گ",
"ی",
},
},
},
display_text = {
Latn = {
from = {"'"},
to = {"ʼ"}
}
},
sort_key = {
Latn = {
from = {
"i", -- Ensure "i" comes after "ı".
"ç", "ə", "ğ", "x", "ı", "q", "ö", "ş", "ü", "w"
},
to = {
"i" .. p[1],
"c" .. p[1], "e" .. p[1], "g" .. p[1], "h" .. p[1], "i", "k" .. p[1], "o" .. p[1], "s" .. p[1], "u" .. p[1], "z" .. p[1]
}
},
Cyrl = {
from = {"ғ", "ә", "ы", "ј", "ҝ", "ө", "ү", "һ", "ҹ"},
to = {"г" .. p[1], "е" .. p[1], "и" .. p[1], "и" .. p[2], "к" .. p[1], "о" .. p[1], "у" .. p[1], "х" .. p[1], "ч" .. p[1]}
},
},
}
m["ba"] = {
"Bashkir",
13389,
"trk-kbu",
"Cyrl",
translit = "ba-translit",
override_translit = true,
sort_key = {
from = {"ғ", "ҙ", "ё", "ҡ", "ң", "ө", "ҫ", "ү", "һ", "ә"},
to = {"г" .. p[1], "д" .. p[1], "е" .. p[1], "к" .. p[1], "н" .. p[1], "о" .. p[1], "с" .. p[1], "у" .. p[1], "х" .. p[1], "э" .. p[1]}
},
}
m["be"] = {
"Belarusian",
9091,
"zle",
"Cyrl, Latn",
ancestors = "zle-mbe",
translit = {
Cyrl = "be-translit",
},
entry_name = {
Cyrl = {
remove_diacritics = c.grave .. c.acute,
},
Latn = {
remove_diacritics = c.grave .. c.acute,
remove_exceptions = {"Ć", "ć", "Ń", "ń", "Ś", "ś", "Ź", "ź"},
},
},
sort_key = {
Cyrl = {
remove_diacritics = c.grave .. c.acute,
from = {"ґ", "ё", "і", "ў"},
to = {"г" .. p[1], "е" .. p[1], "и" .. p[1], "у" .. p[1]}
},
Latn = {
remove_diacritics = c.grave .. c.acute,
remove_exceptions = {"Ć", "ć", "Ń", "ń", "Ś", "ś", "Ź", "ź"},
from = {"ć", "č", "dz", "dź", "dž", "ch", "ł", "ń", "ś", "š", "ŭ", "ź", "ž"},
to = {"c" .. p[1], "c" .. p[2], "d" .. p[1], "d" .. p[2], "d" .. p[3], "h" .. p[1], "l" .. p[1], "n" .. p[1], "s" .. p[1], "s" .. p[2], "u" .. p[1], "z" .. p[1], "z" .. p[2]}
},
},
standardChars = {
Cyrl = "АаБбВвГгДдЕеЁёЖжЗзІіЙйКкЛлМмНнОоПпРрСсТтУуЎўФфХхЦцЧчШшЫыЬьЭэЮюЯя",
Latn = "AaBbCcĆćČčDdEeFfGgHhIiJjKkLlŁłMmNnŃńOoPpRrSsŚśŠšTtUuŬŭVvYyZzŹźŽž",
(c.punc:gsub("'", "")) -- Exclude apostrophe.
},
}
m["bg"] = {
"Bulgarian",
7918,
"zls",
"Cyrl",
ancestors = "cu-bgm",
translit = "bg-translit",
entry_name = {
remove_diacritics = c.grave .. c.acute,
remove_exceptions = {"%f[^%z%s]ѝ%f[%z%s]"},
},
sort_key = {
remove_diacritics = c.grave .. c.acute,
remove_exceptions = {"%f[^%z%s]ѝ%f[%z%s]"},
},
standardChars = "АаБбВвГгДдЕеЖжЗзИиЙйКкЛлМмНнОоПпРрСсТтУуФфХхЦцЧчШшЩщЪъЬьЮюЯя" .. c.punc,
}
m["bh"] = {
"Bihari",
135305,
"inc-eas",
"Deva",
}
m["bi"] = {
"Bislama",
35452,
"crp",
"Latn",
ancestors = "en",
}
m["bm"] = {
"Bambara",
33243,
"dmn-emn",
"Latn, Nkoo",
sort_key = {
Latn = {
from = {"ɛ", "ɲ", "ŋ", "ɔ"},
to = {"e" .. p[1], "n" .. p[1], "n" .. p[2], "o" .. p[1]}
},
},
}
m["bn"] = {
"Bengali",
9610,
"inc-bas",
"Beng, Newa",
ancestors = "inc-mbn",
translit = {
Beng = "bn-translit"
},
}
m["bo"] = {
"Tibetan",
34271,
"sit-tib",
"Tibt", -- sometimes Deva?
ancestors = "xct",
translit = "Tibt-translit",
override_translit = true,
display_text = s["Tibt-displaytext"],
entry_name = s["Tibt-entryname"],
sort_key = "Tibt-sortkey",
}
m["br"] = {
"Breton",
12107,
"cel-brs",
"Latn",
ancestors = "xbm",
sort_key = {
from = {"ch", "c['ʼ’]h"},
to = {"c" .. p[1], "c" .. p[2]}
},
}
m["ca"] = {
"Catalan",
7026,
"roa-ocr",
"Latn",
ancestors = "roa-oca",
sort_key = {remove_diacritics = c.grave .. c.acute .. c.diaer .. c.cedilla .. "·"},
standardChars = "AaÀàBbCcÇçDdEeÉéÈèFfGgHhIiÍíÏïJjLlMmNnOoÓóÒòPpQqRrSsTtUuÚúÜüVvXxYyZz·" .. c.punc,
}
m["ce"] = {
"Chechen",
33350,
"cau-vay",
"Cyrl, Latn, Arab",
translit = {
Cyrl = "cau-nec-translit",
Arab = "ar-translit",
},
override_translit = true,
display_text = {
Cyrl = s["cau-Cyrl-displaytext"]
},
entry_name = {
Cyrl = s["cau-Cyrl-entryname"],
Latn = s["cau-Latn-entryname"],
},
sort_key = {
Cyrl = {
from = {"аь", "гӏ", "ё", "кх", "къ", "кӏ", "оь", "пӏ", "тӏ", "уь", "хь", "хӏ", "цӏ", "чӏ", "юь", "яь"},
to = {"а" .. p[1], "г" .. p[1], "е" .. p[1], "к" .. p[1], "к" .. p[2], "к" .. p[3], "о" .. p[1], "п" .. p[1], "т" .. p[1], "у" .. p[1], "х" .. p[1], "х" .. p[2], "ц" .. p[1], "ч" .. p[1], "ю" .. p[1], "я" .. p[1]}
},
},
}
m["ch"] = {
"Chamorro",
33262,
"poz",
"Latn",
sort_key = {
remove_diacritics = "'",
from = {"å", "ch", "ñ", "ng"},
to = {"a" .. p[1], "c" .. p[1], "n" .. p[1], "n" .. p[2]}
},
}
m["co"] = {
"Corsican",
33111,
"roa-itr",
"Latn",
sort_key = {
from = {"chj", "ghj", "sc", "sg"},
to = {"c" .. p[1], "g" .. p[1], "s" .. p[1], "s" .. p[2]}
},
standardChars = "AaÀàBbCcDdEeÈèFfGgHhIiÌìÏïJjLlMmNnOoÒòPpQqRrSsTtUuÙùÜüVvZz" .. c.punc,
}
m["cr"] = {
"Cree",
33390,
"alg",
"Latn, Cans",
translit = {
Cans = "cr-translit"
},
}
m["cs"] = {
"Czech",
9056,
"zlw",
"Latn",
ancestors = "cs-ear",
sort_key = {
from = {"á", "č", "ď", "é", "ě", "ch", "í", "ň", "ó", "ř", "š", "ť", "ú", "ů", "ý", "ž"},
to = {"a" .. p[1], "c" .. p[1], "d" .. p[1], "e" .. p[1], "e" .. p[2], "h" .. p[1], "i" .. p[1], "n" .. p[1], "o" .. p[1], "r" .. p[1], "s" .. p[1], "t" .. p[1], "u" .. p[1], "u" .. p[2], "y" .. p[1], "z" .. p[1]}
},
standardChars = "AaÁáBbCcČčDdĎďEeÉéĚěFfGgHhIiÍíJjKkLlMmNnŇňOoÓóPpRrŘřSsŠšTtŤťUuÚúŮůVvYyÝýZzŽž" .. c.punc,
}
m["cu"] = {
"Old Church Slavonic",
35499,
"zls",
"Cyrs, Glag, Zname",
translit = {
Cyrs = "Cyrs-translit",
Glag = "Glag-translit"
},
entry_name = {
Cyrs = s["Cyrs-entryname"]
},
sort_key = {
Cyrs = s["Cyrs-sortkey"]
},
}
m["cv"] = {
"Chuvash",
33348,
"trk-ogr",
"Cyrl",
ancestors = "cv-mid",
translit = "cv-translit",
override_translit = true,
sort_key = {
from = {"ӑ", "ё", "ӗ", "ҫ", "ӳ"},
to = {"а" .. p[1], "е" .. p[1], "е" .. p[2], "с" .. p[1], "у" .. p[1]}
},
}
m["cy"] = {
"Welsh",
9309,
"cel-brw",
"Latn",
ancestors = "wlm",
sort_key = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.diaer .. "'",
from = {"ch", "dd", "ff", "ng", "ll", "ph", "rh", "th"},
to = {"c" .. p[1], "d" .. p[1], "f" .. p[1], "g" .. p[1], "l" .. p[1], "p" .. p[1], "r" .. p[1], "t" .. p[1]}
},
standardChars = "ÂâAaBbCcDdEeÊêFfGgHhIiÎîLlMmNnOoÔôPpRrSsTtUuÛûWwŴŵYyŶŷ" .. c.punc,
}
m["da"] = {
"Danish",
9035,
"gmq-eas",
"Latn",
ancestors = "gmq-oda",
sort_key = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.dacute .. c.caron .. c.cedilla,
remove_exceptions = {"å"},
from = {"æ", "ø", "å"},
to = {"z" .. p[1], "z" .. p[2], "z" .. p[3]}
},
standardChars = "AaBbDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsTtUuVvYyÆæØøÅå" .. c.punc,
}
m["de"] = {
"German",
188,
"gmw-hgm",
"Latn, Latf, Brai",
ancestors = "de-ear",
sort_key = {
Latn = s["de-Latn-sortkey"],
Latf = s["de-Latn-sortkey"],
},
standardChars = {
Latn = s["de-Latn-standardchars"],
Latf = s["de-Latn-standardchars"],
Brai = c.braille,
c.punc
}
}
m["dv"] = {
"Dhivehi",
32656,
"inc-ins",
"Thaa, Diak",
translit = {
Thaa = "dv-translit",
Diak = "Diak-translit",
},
override_translit = true,
}
m["dz"] = {
"Dzongkha",
33081,
"sit-tib",
"Tibt",
ancestors = "xct",
translit = "Tibt-translit",
override_translit = true,
display_text = s["Tibt-displaytext"],
entry_name = s["Tibt-entryname"],
sort_key = "Tibt-sortkey",
}
m["ee"] = {
"Ewe",
30005,
"alv-gbe",
"Latn",
sort_key = {
remove_diacritics = c.tilde,
from = {"ɖ", "dz", "ɛ", "ƒ", "gb", "ɣ", "kp", "ny", "ŋ", "ɔ", "ts", "ʋ"},
to = {"d" .. p[1], "d" .. p[2], "e" .. p[1], "f" .. p[1], "g" .. p[1], "g" .. p[2], "k" .. p[1], "n" .. p[1], "n" .. p[2], "o" .. p[1], "t" .. p[1], "v" .. p[1]}
},
}
m["el"] = {
"Greek",
9129,
"grk",
"Grek, Polyt, Brai",
ancestors = "el-kth",
translit = "el-translit",
override_translit = true,
display_text = {
Grek = s["Grek-displaytext"],
Polyt = s["Polyt-displaytext"],
},
entry_name = {
Grek = s["Grek-entryname"],
Polyt = s["Polyt-entryname"],
},
sort_key = {
Grek = s["Grek-sortkey"],
Polyt = s["Polyt-sortkey"],
},
standardChars = {
Grek = "΅·ͺ΄ΑαΆάΒβΓγΔδΕεέΈΖζΗηΉήΘθΙιΊίΪϊΐΚκΛλΜμΝνΞξΟοΌόΠπΡρΣσςΤτΥυΎύΫϋΰΦφΧχΨψΩωΏώ",
Brai = c.braille,
c.punc
},
}
m["en"] = {
"English",
1860,
"gmw-ang",
"Latn, Brai, Shaw, Dsrt", -- entries in Shaw or Dsrt might require prior discussion
wikimedia_codes = "en, simple",
ancestors = "en-ear",
sort_key = {
Latn = {
-- Many of these are needed for sorting language names.
remove_diacritics = "'\"%-%.,%s·ʻʼ" .. c.diacritics,
-- These are found in entry names.
from = {"[ɒæ🅱¢©ᴄðđəǝɜɡħʜıɨłŋɲøɔœꝑꝓꝕßʋ]"},
to = {{
["ɒ"] = "a", ["æ"] = "ae", ["🅱"] = "b", ["¢"] = "c", ["©"] = "c",
["ᴄ"] = "c", ["ð"] = "d", ["đ"] = "d", ["ə"] = "e", ["ǝ"] = "e",
["ɜ"] = "e", ["ɡ"] = "g", ["ħ"] = "h", ["ʜ"] = "h", ["ı"] = "i",
["ɨ"] = "i", ["ł"] = "l", ["ŋ"] = "n", ["ɲ"] = "n", ["ø"] = "o",
["ɔ"] = "o", ["œ"] = "oe", ["ꝑ"] = "p", ["ꝓ"] = "p", ["ꝕ"] = "p",
["ß"] = "ss", ["ʋ"] = "v",
}},
},
},
standardChars = {
Latn = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz",
Brai = c.braille,
c.punc
},
}
m["eo"] = {
"Esperanto",
143,
"art",
"Latn",
sort_key = {
remove_diacritics = c.grave .. c.acute,
from = {"ĉ", "ĝ", "ĥ", "ĵ", "ŝ", "ŭ"},
to = {"c" .. p[1], "g" .. p[1], "h" .. p[1], "j" .. p[1], "s" .. p[1], "u" .. p[1]}
},
standardChars = "AaBbCcĈĉDdEeFfGgĜĝHhĤĥIiJjĴĵKkLlMmNnOoPpRrSsŜŝTtUuŬŭVvZz" .. c.punc,
}
m["es"] = {
"Spanish",
1321,
"roa-cas",
"Latn, Brai",
ancestors = "es-ear",
sort_key = {
Latn = {
remove_exceptions = {"ñ"},
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.diaer .. c.cedilla,
from = {"ª", "æ", "ñ", "º", "œ"},
to = {"a", "ae", "n" .. p[1], "o", "oe"}
},
},
standardChars = {
Latn = "AaÁáBbCcDdEeÉéFfGgHhIiÍíJjLlMmNnÑñOoÓóPpQqRrSsTtUuÚúÜüVvXxYyZz",
Brai = c.braille,
c.punc
},
}
m["et"] = {
"Estonian",
9072,
"urj-fin",
"Latn",
sort_key = {
from = {
"š", "ž", "õ", "ä", "ö", "ü", -- 2 chars
"z" -- 1 char
},
to = {
"s" .. p[1], "s" .. p[3], "w" .. p[1], "w" .. p[2], "w" .. p[3], "w" .. p[4],
"s" .. p[2]
}
},
standardChars = "AaBbDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsTtUuVvÕõÄäÖöÜü" .. c.punc,
}
m["eu"] = {
"Basque",
8752,
"euq",
"Latn",
sort_key = {
from = {"ç", "ñ"},
to = {"c" .. p[1], "n" .. p[1]}
},
standardChars = "AaBbDdEeFfGgHhIiJjKkLlMmNnÑñOoPpRrSsTtUuXxZz" .. c.punc,
}
m["fa"] = {
"Persian",
9168,
"ira-swi",
"fa-Arab, Hebr",
ancestors = "fa-cls",
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
["fa-Arab"] = {
-- character "ۂ" code U+06C2 to "ه" and "هٔ" (U+0647 + U+0654) to "ه"; hamzatu l-waṣli to a regular alif
from = {"هٔ", "ٱ"}, -- character "ۂ" code U+06C2 to "ه"; hamzatu l-waṣli to a regular alif
to = {"ه", "ا"},
remove_diacritics = c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.shadda .. c.sukun .. c.superalef,
},
Hebr = "Hebr-common",
},
sort_key = {
Hebr = "Hebr-common",
},
}
m["ff"] = {
"Fula",
33454,
"alv-fwo",
"Latn, Adlm",
}
m["fi"] = {
"Finnish",
1412,
"urj-fin",
"Latn",
display_text = {
from = {"'"},
to = {"’"}
},
entry_name = { -- used to indicate gemination of the next consonant
remove_diacritics = "ˣ",
from = {"’"},
to = {"'"},
},
sort_key = { -- [[Appendix:Finnish alphabet#Collation]] + "aͤ" and "oͤ" as historical variants of "ä" and "ö".
remove_diacritics = "'’:" .. c.diacritics,
remove_exceptions = {
"a[" .. c.ringabove .. c.diaer .. c.small_e .. "]", -- åäaͤ
"o[" .. c.diaer .. c.tilde .. c.dacute .. c.small_e .. "]", -- öõőoͤ
"u[" .. c.diaer .. c.dacute .. "]" -- üű
},
from = {"æ", "[ðđ]", "ł", "ŋ", "œ", "ß", "þ", "u[" .. c.diaer .. c.dacute .. "]", "å", "aͤ", "o[" .. c.tilde .. c.dacute .. c.small_e .. "]", "ø", "(.)['%-]"},
to = {"ae", "d", "l", "n", "oe", "ss", "th", "y", "z" .. p[1], "ä", "ö", "ö", "%1"}
},
standardChars = "AaBbDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsTtUuVvYyÄäÖö" .. c.punc,
}
m["fj"] = {
"Fijian",
33295,
"poz-pcc",
"Latn",
}
m["fo"] = {
"Faroese",
25258,
"gmq-ins",
"Latn",
sort_key = {
from = {"á", "ð", "í", "ó", "ú", "ý", "æ", "ø"},
to = {"a" .. p[1], "d" .. p[1], "i" .. p[1], "o" .. p[1], "u" .. p[1], "y" .. p[1], "z" .. p[1], "z" .. p[2]}
},
standardChars = "AaÁáBbDdÐðEeFfGgHhIiÍíJjKkLlMmNnOoÓóPpRrSsTtUuÚúVvYyÝýÆæØø" .. c.punc,
}
m["fr"] = {
"French",
150,
"roa-oil",
"Latn, Brai",
ancestors = "frm",
sort_key = {
Latn = s["roa-oil-sortkey"]
},
standardChars = {
Latn = "AaÀàÂâBbCcÇçDdEeÉéÈèÊêËëFfGgHhIiÎîÏïJjLlMmNnOoÔôŒœPpQqRrSsTtUuÙùÛûÜüVvXxYyZz",
Brai = c.braille,
c.punc
},
}
m["fy"] = {
"West Frisian",
27175,
"gmw-fri",
"Latn",
sort_key = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.diaer,
from = {"y"},
to = {"i"}
},
standardChars = "AaâäàÆæBbCcDdEeéêëèFfGgHhIiïìYyỳJjKkLlMmNnOoôöòPpRrSsTtUuúûüùVvWwZz" .. c.punc,
}
m["ga"] = {
"Irish",
9142,
"cel-gae",
"Latn, Latg",
ancestors = "mga",
sort_key = {
remove_diacritics = c.acute,
from = {"ḃ", "ċ", "ḋ", "ḟ", "ġ", "ṁ", "ṗ", "ṡ", "ṫ"},
to = {"bh", "ch", "dh", "fh", "gh", "mh", "ph", "sh", "th"}
},
standardChars = "AaÁáBbCcDdEeÉéFfGgHhIiÍíLlMmNnOoÓóPpRrSsTtUuÚúVv" .. c.punc,
}
m["gd"] = {
"Scottish Gaelic",
9314,
"cel-gae",
"Latn, Latg",
ancestors = "mga",
sort_key = {remove_diacritics = c.grave .. c.acute},
standardChars = "AaÀàBbCcDdEeÈèFfGgHhIiÌìLlMmNnOoÒòPpRrSsTtUuÙù" .. c.punc,
}
m["gl"] = {
"Galician",
9307,
"roa-gap",
"Latn",
sort_key = {
remove_diacritics = c.acute,
from = {"ñ"},
to = {"n" .. p[1]}
},
standardChars = "AaÁáBbCcDdEeÉéFfGgHhIiÍíÏïLlMmNnÑñOoÓóPpQqRrSsTtUuÚúÜüVvXxZz" .. c.punc,
}
m["gn"] = {
"Guaraní",
35876,
"tup-gua",
"Latn",
}
m["gu"] = {
"Gujarati",
5137,
"inc-wes",
"Arab, Gujr",
ancestors = "inc-mgu",
translit = {
Gujr = "gu-translit",
},
entry_name = {
Arab = {remove_diacritics = c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.kasra .. c.shadda .. c.sukun},
Gujr = {remove_diacritics = "઼"},
},
}
m["gv"] = {
"Manx",
12175,
"cel-gae",
"Latn",
ancestors = "mga",
sort_key = {remove_diacritics = c.cedilla .. "-"},
standardChars = "AaBbCcÇçDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwYy" .. c.punc,
}
m["ha"] = {
"Hausa",
56475,
"cdc-wst",
"Latn, Arab",
entry_name = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron}
},
sort_key = {
Latn = {
from = {"ɓ", "b'", "ɗ", "d'", "ƙ", "k'", "sh", "ƴ", "'y"},
to = {"b" .. p[1], "b" .. p[2], "d" .. p[1], "d" .. p[2], "k" .. p[1], "k" .. p[2], "s" .. p[1], "y" .. p[1], "y" .. p[2]}
},
},
}
m["he"] = {
"Hebrew",
9288,
"sem-can",
"Hebr, Phnx, Brai, Samr",
ancestors = "he-med",
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
Hebr = "Hebr-common",
Samr = s["Samr-entryname"],
},
sort_key = {
Hebr = "Hebr-common",
Samr = s["Samr-sortkey"],
},
}
m["hi"] = {
"Hindi",
1568,
"inc-hnd",
"Deva, Kthi, Newa",
translit = {
Deva = "hi-translit"
},
standardChars = {
Deva = "अआइईउऊएऐओऔकखगघङचछजझञटठडढणतथदधनपफबभमयरलवशषसहत्रज्ञक्षक़ख़ग़ज़झ़ड़ढ़फ़काखागाघाङाचाछाजाझाञाटाठाडाढाणाताथादाधानापाफाबाभामायारालावाशाषासाहात्राज्ञाक्षाक़ाख़ाग़ाज़ाझ़ाड़ाढ़ाफ़ाकिखिगिघिङिचिछिजिझिञिटिठिडिढिणितिथिदिधिनिपिफिबिभिमियिरिलिविशिषिसिहित्रिज्ञिक्षिक़िख़िग़िज़िझ़िड़िढ़िफ़िकीखीगीघीङीचीछीजीझीञीटीठीडीढीणीतीथीदीधीनीपीफीबीभीमीयीरीलीवीशीषीसीहीत्रीज्ञीक्षीक़ीख़ीग़ीज़ीझ़ीड़ीढ़ीफ़ीकुखुगुघुङुचुछुजुझुञुटुठुडुढुणुतुथुदुधुनुपुफुबुभुमुयुरुलुवुशुषुसुहुत्रुज्ञुक्षुक़ुख़ुग़ुज़ुझ़ुड़ुढ़ुफ़ुकूखूगूघूङूचूछूजूझूञूटूठूडूढूणूतूथूदूधूनूपूफूबूभूमूयूरूलूवूशूषूसूहूत्रूज्ञूक्षूक़ूख़ूग़ूज़ूझ़ूड़ूढ़ूफ़ूकेखेगेघेङेचेछेजेझेञेटेठेडेढेणेतेथेदेधेनेपेफेबेभेमेयेरेलेवेशेषेसेहेत्रेज्ञेक्षेक़ेख़ेग़ेज़ेझ़ेड़ेढ़ेफ़ेकैखैगैघैङैचैछैजैझैञैटैठैडैढैणैतैथैदैधैनैपैफैबैभैमैयैरैलैवैशैषैसैहैत्रैज्ञैक्षैक़ैख़ैग़ैज़ैझ़ैड़ैढ़ैफ़ैकोखोगोघोङोचोछोजोझोञोटोठोडोढोणोतोथोदोधोनोपोफोबोभोमोयोरोलोवोशोषोसोहोत्रोज्ञोक्षोक़ोख़ोग़ोज़ोझ़ोड़ोढ़ोफ़ोकौखौगौघौङौचौछौजौझौञौटौठौडौढौणौतौथौदौधौनौपौफौबौभौमौयौरौलौवौशौषौसौहौत्रौज्ञौक्षौक़ौख़ौग़ौज़ौझ़ौड़ौढ़ौफ़ौक्ख्ग्घ्ङ्च्छ्ज्झ्ञ्ट्ठ्ड्ढ्ण्त्थ्द्ध्न्प्फ्ब्भ्म्य्र्ल्व्श्ष्स्ह्त्र्ज्ञ्क्ष्क़्ख़्ग़्ज़्झ़्ड़्ढ़्फ़्।॥०१२३४५६७८९॰",
c.punc
},
}
m["ho"] = {
"Hiri Motu",
33617,
"crp",
"Latn",
ancestors = "meu",
}
m["ht"] = {
"Haitian Creole",
33491,
"crp",
"Latn",
ancestors = "ht-sdm",
sort_key = {
from = {
"oun", -- 3 chars
"an", "ch", "è", "en", "ng", "ò", "on", "ou", "ui" -- 2 chars
},
to = {
"o" .. p[4],
"a" .. p[1], "c" .. p[1], "e" .. p[1], "e" .. p[2], "n" .. p[1], "o" .. p[1], "o" .. p[2], "o" .. p[3], "u" .. p[1]
}
},
}
m["hu"] = {
"Hungarian",
9067,
"urj-ugr",
"Latn, Hung",
ancestors = "ohu",
sort_key = {
Latn = {
from = {
"dzs", -- 3 chars
"á", "cs", "dz", "é", "gy", "í", "ly", "ny", "ó", "ö", "ő", "sz", "ty", "ú", "ü", "ű", "zs", -- 2 chars
},
to = {
"d" .. p[2],
"a" .. p[1], "c" .. p[1], "d" .. p[1], "e" .. p[1], "g" .. p[1], "i" .. p[1], "l" .. p[1], "n" .. p[1], "o" .. p[1], "o" .. p[2], "o" .. p[3], "s" .. p[1], "t" .. p[1], "u" .. p[1], "u" .. p[2], "u" .. p[3], "z" .. p[1],
}
},
},
standardChars = {
Latn = "AaÁáBbCcDdEeÉéFfGgHhIiÍíJjKkLlMmNnOoÓóÖöŐőPpQqRrSsTtUuÚúÜüŰűVvWwXxYyZz",
c.punc
},
}
m["hy"] = {
"Armenian",
8785,
"hyx",
"Armn, Brai",
ancestors = "axm",
translit = {
Armn = "Armn-translit"
},
override_translit = true,
entry_name = {
Armn = {
remove_diacritics = "՛՜՞՟",
from = {"եւ", "<sup>յ</sup>", "<sup>ի</sup>", "<sup>է</sup>", "յ̵", "ՙ", "՚"},
to = {"և", "յ", "ի", "է", "ֈ", "ʻ", "’"}
},
},
sort_key = {
Armn = {
from = {
"ու", "եւ", -- 2 chars
"և" -- 1 char
},
to = {
"ւ", "եվ",
"եվ"
}
},
},
}
m["hz"] = {
"Herero",
33315,
"bnt-swb",
"Latn",
}
m["ia"] = {
"Interlingua",
35934,
"art",
"Latn",
}
m["id"] = {
"Indonesian",
9240,
"poz-mly",
"Latn",
ancestors = "ms",
standardChars = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz" .. c.punc,
}
m["ie"] = {
"Interlingue",
35850,
"art",
"Latn",
type = "appendix-constructed",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ},
}
m["ig"] = {
"Igbo",
33578,
"alv-igb",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.macron},
sort_key = {
from = {"gb", "gh", "gw", "ị", "kp", "kw", "ṅ", "nw", "ny", "ọ", "sh", "ụ"},
to = {"g" .. p[1], "g" .. p[2], "g" .. p[3], "i" .. p[1], "k" .. p[1], "k" .. p[2], "n" .. p[1], "n" .. p[2], "n" .. p[3], "o" .. p[1], "s" .. p[1], "u" .. p[1]}
},
}
m["ii"] = {
"Nuosu",
34235,
"tbq-nlo",
"Yiii",
translit = "ii-translit",
}
m["ik"] = {
"Inupiaq",
27183,
"esx-inu",
"Latn",
sort_key = {
from = {
"ch", "ġ", "dj", "ḷ", "ł̣", "ñ", "ng", "r̂", "sr", "zr", -- 2 chars
"ł", "ŋ", "ʼ" -- 1 char
},
to = {
"c" .. p[1], "g" .. p[1], "h" .. p[1], "l" .. p[1], "l" .. p[3], "n" .. p[1], "n" .. p[2], "r" .. p[1], "s" .. p[1], "z" .. p[1],
"l" .. p[2], "n" .. p[2], "z" .. p[2]
}
},
}
m["io"] = {
"Ido",
35224,
"art",
"Latn",
}
m["is"] = {
"Icelandic",
294,
"gmq-ins",
"Latn",
sort_key = {
from = {"á", "ð", "é", "í", "ó", "ú", "ý", "þ", "æ", "ö"},
to = {"a" .. p[1], "d" .. p[1], "e" .. p[1], "i" .. p[1], "o" .. p[1], "u" .. p[1], "y" .. p[1], "z" .. p[1], "z" .. p[2], "z" .. p[3]}
},
standardChars = "AaÁáBbDdÐðEeÉéFfGgHhIiÍíJjKkLlMmNnOoÓóPpRrSsTtUuÚúVvXxYyÝýÞþÆæÖö" .. c.punc,
}
m["it"] = {
"Italian",
652,
"roa-itr",
"Latn",
ancestors = "roa-oit",
sort_key = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.diaer .. c.ringabove},
standardChars = "AaÀàBbCcDdEeÈèÉéFfGgHhIiÌìLlMmNnOoÒòPpQqRrSsTtUuÙùVvZz" .. c.punc,
}
m["iu"] = {
"Inuktitut",
29921,
"esx-inu",
"Cans, Latn",
translit = {
Cans = "cr-translit"
},
override_translit = true,
}
m["ja"] = {
"Japanese",
5287,
"jpx",
"Jpan, Latn, Brai",
ancestors = "ja-ear",
translit = s["jpx-translit"],
link_tr = true,
display_text = s["jpx-displaytext"],
entry_name = s["jpx-entryname"],
sort_key = s["jpx-sortkey"],
}
m["jv"] = {
"Javanese",
33549,
"poz",
"Latn, Java, Arab",
ancestors = "kaw",
translit = {
Java = "jv-translit"
},
link_tr = true,
entry_name = {
Latn = {remove_diacritics = c.circ} -- Modern jv don't use ê
},
sort_key = {
Latn = {
from = {"å", "dh", "é", "è", "ng", "ny", "th"},
to = {"a" .. p[1], "d" .. p[1], "e" .. p[1], "e" .. p[2], "n" .. p[1], "n" .. p[2], "t" .. p[1]}
},
},
}
m["ka"] = {
"Georgian",
8108,
"ccs-gzn",
"Geor, Geok, Hebr", -- Hebr is used to write Judeo-Georgian
ancestors = "ka-mid",
translit = {
Geor = "Geor-translit",
Geok = "Geok-translit",
},
override_translit = true,
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
Geor = s["ka-entryname"],
Geok = s["ka-entryname"],
Hebr = "Hebr-common",
},
sort_key = {
Hebr = "Hebr-common",
}
}
m["kg"] = {
"Kongo",
33702,
"bnt-kng",
"Latn",
}
m["ki"] = {
"Kikuyu",
33587,
"bnt-kka",
"Latn",
}
m["kj"] = {
"Kwanyama",
1405077,
"bnt-ova",
"Latn",
}
m["kk"] = {
"Kazakh",
9252,
"trk-kno",
"Cyrl, Latn, kk-Arab",
translit = {
Cyrl = {
from = {
"Ё", "ё", "Й", "й", "Нг", "нг", "Ӯ", "ӯ", -- 2 chars; are "Ӯ" and "ӯ" actually used?
"А", "а", "Ә", "ә", "Б", "б", "В", "в", "Г", "г", "Ғ", "ғ", "Д", "д", "Е", "е", "Ж", "ж", "З", "з", "И", "и", "К", "к", "Қ", "қ", "Л", "л", "М", "м", "Н", "н", "Ң", "ң", "О", "о", "Ө", "ө", "П", "п", "Р", "р", "С", "с", "Т", "т", "У", "у", "Ұ", "ұ", "Ү", "ү", "Ф", "ф", "Х", "х", "Һ", "һ", "Ц", "ц", "Ч", "ч", "Ш", "ш", "Щ", "щ", "Ъ", "ъ", "Ы", "ы", "І", "і", "Ь", "ь", "Э", "э", "Ю", "ю", "Я", "я", -- 1 char
},
to = {
"E", "e", "İ", "i", "Ñ", "ñ", "U", "u",
"A", "a", "Ä", "ä", "B", "b", "V", "v", "G", "g", "Ğ", "ğ", "D", "d", "E", "e", "J", "j", "Z", "z", "İ", "i", "K", "k", "Q", "q", "L", "l", "M", "m", "N", "n", "Ñ", "ñ", "O", "o", "Ö", "ö", "P", "p", "R", "r", "S", "s", "T", "t", "U", "u", "Ū", "ū", "Ü", "ü", "F", "f", "X", "x", "H", "h", "S", "s", "Ç", "ç", "Ş", "ş", "Ş", "ş", "", "", "Y", "y", "I", "ı", "", "", "É", "é", "Ü", "ü", "Ä", "ä",
}
}
},
-- override_translit = true,
sort_key = {
Cyrl = {
from = {"ә", "ғ", "ё", "қ", "ң", "ө", "ұ", "ү", "һ", "і"},
to = {"а" .. p[1], "г" .. p[1], "е" .. p[1], "к" .. p[1], "н" .. p[1], "о" .. p[1], "у" .. p[1], "у" .. p[2], "х" .. p[1], "ы" .. p[1]}
},
},
standardChars = {
Cyrl = "АаӘәБбВвГгҒғДдЕеЁёЖжЗзИиЙйКкҚқЛлМмНнҢңОоӨөПпРрСсТтУуҰұҮүФфХхҺһЦцЧчШшЩщЪъЫыІіЬьЭэЮюЯя",
c.punc
},
}
m["kl"] = {
"Greenlandic",
25355,
"esx-inu",
"Latn",
sort_key = {
from = {"æ", "ø", "å"},
to = {"z" .. p[1], "z" .. p[2], "z" .. p[3]}
}
}
m["km"] = {
"Khmer",
9205,
"mkh-kmr",
"Khmr",
ancestors = "xhm",
translit = "km-translit",
}
m["kn"] = {
"Kannada",
33673,
"dra-kan",
"Knda, Tutg",
ancestors = "dra-mkn",
translit = {
Knda = "kn-translit",
},
}
m["ko"] = {
"Korean",
9176,
"qfa-kor",
"Kore, Brai",
ancestors = "ko-ear",
translit = {
Kore = "ko-translit",
},
entry_name = {
Kore = s["Kore-entryname"],
},
}
m["kr"] = {
"Kanuri",
36094,
"ssa-sah",
"Latn, Arab",
-- the sortkey and entry_name are only for standard Kanuri; when dialectal entries get added, someone will have to work out how the dialects should be represented orthographically
entry_name = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.breve}
},
sort_key = {
Latn = {
from = {"ǝ", "ny", "ɍ", "sh"},
to = {"e" .. p[1], "n" .. p[1], "r" .. p[1], "s" .. p[1]}
},
},
}
m["ks"] = {
"Kashmiri",
33552,
"inc-kas",
"ks-Arab, Deva, Shrd, Latn",
translit = {
["ks-Arab"] = "ks-Arab-translit",
Deva = "ks-Deva-translit",
Shrd = "Shrd-translit",
},
}
-- "kv" IS TREATED AS "koi", "kpv", SEE WT:LT
m["kw"] = {
"Cornish",
25289,
"cel-brs",
"Latn",
ancestors = "cnx",
sort_key = {
from = {"ch"},
to = {"c" .. p[1]}
},
}
m["ky"] = {
"Kyrgyz",
9255,
"trk-kkp",
"Cyrl, Latn, Arab",
translit = {
Cyrl = "ky-translit"
},
override_translit = true,
sort_key = {
Cyrl = {
from = {"ё", "ң", "ө", "ү"},
to = {"е" .. p[1], "н" .. p[1], "о" .. p[1], "у" .. p[1]}
},
},
}
m["la"] = {
"Latin",
397,
"itc-laf",
"Latn, Ital",
ancestors = "itc-ola",
display_text = {
Latn = s["itc-Latn-displaytext"]
},
entry_name = {
Latn = s["itc-Latn-entryname"]
},
sort_key = {
Latn = s["itc-Latn-sortkey"]
},
standardChars = {
Latn = "AaBbCcDdEeFfGgHhIiLlMmNnOoPpQqRrSsTtUuVvXx",
c.punc
},
}
m["lb"] = {
"Luxembourgish",
9051,
"gmw-hgm",
"Latn, Brai",
ancestors = "gmw-cfr",
sort_key = {
Latn = {
from = {"ä", "ë", "é"},
to = {"z" .. p[1], "z" .. p[2], "z" .. p[3]}
},
},
}
m["lg"] = {
"Luganda",
33368,
"bnt-nyg",
"Latn",
entry_name = {remove_diacritics = c.acute .. c.circ},
sort_key = {
from = {"ŋ"},
to = {"n" .. p[1]}
},
}
m["li"] = {
"Limburgish",
102172,
"gmw-frk",
"Latn",
ancestors = "dum",
}
m["ln"] = {
"Lingala",
36217,
"bnt-bmo",
"Latn",
sort_key = {
remove_diacritics = c.acute .. c.circ .. c.caron,
from = {"ɛ", "gb", "mb", "mp", "nd", "ng", "nk", "ns", "nt", "ny", "nz", "ɔ"},
to = {"e" .. p[1], "g" .. p[1], "m" .. p[1], "m" .. p[2], "n" .. p[1], "n" .. p[2], "n" .. p[3], "n" .. p[4], "n" .. p[5], "n" .. p[6], "n" .. p[7], "o" .. p[1]}
},
}
m["lo"] = {
"Lao",
9211,
"tai-swe",
"Laoo",
translit = "lo-translit",
sort_key = "Laoo-sortkey",
standardChars = "0-9ກຂຄງຈຊຍດຕຖທນບປຜຝພຟມຢຣລວສຫອຮຯ-ໝ" .. c.punc,
}
m["lt"] = {
"Lithuanian",
9083,
"bat-eas",
"Latn",
ancestors = "olt",
display_text = "lt-common",
entry_name = "lt-common",
sort_key = "lt-common",
standardChars = "AaĄąBbCcČčDdEeĘęĖėFfGgHhIiĮįYyJjKkLlMmNnOoPpRrSsŠšTtUuŲųŪūVvZzŽž" .. c.punc,
}
m["lu"] = {
"Luba-Katanga",
36157,
"bnt-lub",
"Latn",
}
m["lv"] = {
"Latvian",
9078,
"bat-eas",
"Latn",
entry_name = {
-- This attempts to convert vowels with tone marks to vowels either with or without macrons. Specifically, there should be no macrons if the vowel is part of a diphthong (including resonant diphthongs such pìrksts -> pirksts not #pīrksts). What we do is first convert the vowel + tone mark to a vowel + tilde in a decomposed fashion, then remove the tilde in diphthongs, then convert the remaining vowel + tilde sequences to macroned vowels, then delete any other tilde. We leave already-macroned vowels alone: Both e.g. ar and ār occur before consonants. FIXME: This still might not be sufficient.
from = {"([Ee])" .. c.cedilla, "[" .. c.grave .. c.circ .. c.tilde .."]", "([aAeEiIoOuU])" .. c.tilde .."?([lrnmuiLRNMUI])" .. c.tilde .. "?([^aAeEiIoOuU])", "([aAeEiIoOuU])" .. c.tilde .."?([lrnmuiLRNMUI])" .. c.tilde .."?$", "([iI])" .. c.tilde .. "?([eE])" .. c.tilde .. "?", "([aAeEiIuU])" .. c.tilde, c.tilde},
to = {"%1", c.tilde, "%1%2%3", "%1%2", "%1%2", "%1" .. c.macron}
},
sort_key = {
from = {"ā", "č", "ē", "ģ", "ī", "ķ", "ļ", "ņ", "š", "ū", "ž"},
to = {"a" .. p[1], "c" .. p[1], "e" .. p[1], "g" .. p[1], "i" .. p[1], "k" .. p[1], "l" .. p[1], "n" .. p[1], "s" .. p[1], "u" .. p[1], "z" .. p[1]}
},
standardChars = "AaĀāBbCcČčDdEeĒēFfGgĢģHhIiĪīJjKkĶķLlĻļMmNnŅņOoPpRrSsŠšTtUuŪūVvZzŽž" .. c.punc,
}
m["mg"] = {
"Malagasy",
7930,
"poz-bre",
"Latn, Arab",
}
m["mh"] = {
"Marshallese",
36280,
"poz-mic",
"Latn",
sort_key = {
from = {"ā", "ļ", "m̧", "ņ", "n̄", "o̧", "ō", "ū"},
to = {"a" .. p[1], "l" .. p[1], "m" .. p[1], "n" .. p[1], "n" .. p[2], "o" .. p[1], "o" .. p[2], "u" .. p[1]}
},
}
m["mi"] = {
"Maori",
36451,
"poz-pep",
"Latn",
sort_key = {
remove_diacritics = c.macron,
from = {"ng", "wh"},
to = {"n" .. p[1], "w" .. p[1]}
},
}
m["mk"] = {
"Macedonian",
9296,
"zls",
"Cyrl, Polyt",
ancestors = "cu",
translit = {
Cyrl = "mk-translit"
},
display_text = {
Polyt = s["Polyt-displaytext"]
},
entry_name = {
Cyrl = {
remove_diacritics = c.acute,
remove_exceptions = {"Ѓ", "ѓ", "Ќ", "ќ"}
},
Polyt = s["Polyt-entryname"],
},
sort_key = {
Cyrl = {
remove_diacritics = c.grave,
remove_exceptions = {"ѓ", "ќ"},
from = {"ѓ", "ѕ", "ј", "љ", "њ", "ќ", "џ"},
to = {"д" .. p[1], "з" .. p[1], "и" .. p[1], "л" .. p[1], "н" .. p[1], "т" .. p[1], "ч" .. p[1]}
},
Polyt = s["Polyt-sortkey"],
},
standardChars = {
Cyrl = "АаБбВвГгДдЃѓЕеЖжЗзЅѕИиЈјКкЛлЉљМмНнЊњОоПпРрСсТтЌќУуФфХхЦцЧчЏџШш",
c.punc
},
}
m["ml"] = {
"Malayalam",
36236,
"dra-mal",
"Mlym",
translit = "ml-translit",
override_translit = true,
}
m["mn"] = {
"Mongolian",
9246,
"xgn-cen",
"Cyrl, Mong, Latn, Brai",
ancestors = "cmg",
translit = {
Cyrl = "mn-translit",
Mong = "Mong-translit",
},
override_translit = true,
display_text = {
Mong = s["Mong-displaytext"]
},
entry_name = {
Cyrl = {remove_diacritics = c.grave .. c.acute},
Mong = s["Mong-entryname"],
},
sort_key = {
Cyrl = {
remove_diacritics = c.grave,
from = {"ё", "ө", "ү"},
to = {"е" .. p[1], "о" .. p[1], "у" .. p[1]}
},
},
standardChars = {
Cyrl = "АаБбВвГгДдЕеЁёЖжЗзИиЙйЛлМмНнОоӨөРрСсТтУуҮүХхЦцЧчШшЫыЬьЭэЮюЯя—",
Brai = c.braille,
c.punc
},
}
-- "mo" IS TREATED AS "ro", SEE WT:LT
m["mr"] = {
"Marathi",
1571,
"inc-sou",
"Deva, Modi",
ancestors = "omr",
translit = {
Deva = "mr-translit",
Modi = "mr-Modi-translit",
},
entry_name = {
Deva = {
from = {"च़", "ज़", "झ़"},
to = {"च", "ज", "झ"}
},
},
}
m["ms"] = {
"Malay",
9237,
"poz-mly",
"Latn, ms-Arab",
ancestors = "ms-cla",
standardChars = {
Latn = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz",
c.punc
},
}
m["mt"] = {
"Maltese",
9166,
"sem-arb",
"Latn",
display_text = {
from = {"'"},
to = {"’"}
},
entry_name = {
from = {"’"},
to = {"'"},
},
ancestors = "sqr",
sort_key = {
from = {
"ċ", "ġ", "ż", -- Convert into PUA so that decomposed form does not get caught by the next step.
"([cgz])", -- Ensure "c" comes after "ċ", "g" comes after "ġ" and "z" comes after "ż".
"g" .. p[1] .. "ħ", -- "għ" after initial conversion of "g".
p[3], p[4], "ħ", "ie", p[5] -- Convert "ċ", "ġ", "ħ", "ie", "ż" into final output.
},
to = {
p[3], p[4], p[5],
"%1" .. p[1],
"g" .. p[2],
"c", "g", "h" .. p[1], "i" .. p[1], "z"
}
},
}
m["my"] = {
"Burmese",
9228,
"tbq-brm",
"Mymr",
ancestors = "obr",
translit = "my-translit",
override_translit = true,
sort_key = {
from = {"ျ", "ြ", "ွ", "ှ", "ဿ"},
to = {"္ယ", "္ရ", "္ဝ", "္ဟ", "သ္သ"}
},
}
m["na"] = {
"Nauruan",
13307,
"poz-mic",
"Latn",
}
m["nb"] = {
"Norwegian Bokmål",
25167,
"gmq",
"Latn",
wikimedia_codes = "no",
ancestors = "gmq-mno, da", -- da as an (but not the) ancestor of nb was agreed on - do not change without discussion
sort_key = s["no-sortkey"],
standardChars = s["no-standardchars"],
}
m["nd"] = {
"Northern Ndebele",
35613,
"bnt-ngu",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
m["ne"] = {
"Nepali",
33823,
"inc-pah",
"Deva, Newa",
translit = {
Deva = "ne-translit"
},
}
m["ng"] = {
"Ndonga",
33900,
"bnt-ova",
"Latn",
}
m["nl"] = {
"Dutch",
7411,
"gmw-frk",
"Latn, Brai",
ancestors = "dum",
sort_key = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.diaer .. c.ringabove .. c.cedilla .. "'"},
},
standardChars = {
Latn = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz",
Brai = c.braille,
c.punc
},
}
m["nn"] = {
"Norwegian Nynorsk",
25164,
"gmq-wes",
"Latn",
ancestors = "gmq-mno",
entry_name = {
remove_diacritics = c.grave .. c.acute,
},
sort_key = s["no-sortkey"],
standardChars = s["no-standardchars"],
}
m["no"] = {
"Norwegian",
9043,
"gmq-wes",
"Latn",
ancestors = "gmq-mno",
sort_key = s["no-sortkey"],
standardChars = s["no-standardchars"],
}
m["nr"] = {
"Southern Ndebele",
36785,
"bnt-ngu",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
m["nv"] = {
"Navajo",
13310,
"apa",
"Latn, Brai",
sort_key = {
remove_diacritics = c.acute .. c.ogonek,
from = {
"chʼ", "tłʼ", "tsʼ", -- 3 chars
"ch", "dl", "dz", "gh", "hw", "kʼ", "kw", "sh", "tł", "ts", "zh", -- 2 chars
"ł", "ʼ" -- 1 char
},
to = {
"c" .. p[2], "t" .. p[2], "t" .. p[4],
"c" .. p[1], "d" .. p[1], "d" .. p[2], "g" .. p[1], "h" .. p[1], "k" .. p[1], "k" .. p[2], "s" .. p[1], "t" .. p[1], "t" .. p[3], "z" .. p[1],
"l" .. p[1], "z" .. p[2]
}
},
}
m["ny"] = {
"Chichewa",
33273,
"bnt-nys",
"Latn",
entry_name = {remove_diacritics = c.acute .. c.circ},
sort_key = {
from = {"ng'"},
to = {"ng"}
},
}
m["oc"] = {
"Occitan",
14185,
"roa-ocr",
"Latn, Hebr",
ancestors = "pro",
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
Hebr = "Hebr-common",
},
sort_key = {
Latn = {
remove_diacritics = c.grave .. c.acute .. c.diaer .. c.cedilla,
from = {"([lns])·h"},
to = {"%1h"}
},
Hebr = "Hebr-common",
},
}
m["oj"] = {
"Ojibwe",
33875,
"alg",
"Cans, Latn",
sort_key = {
Latn = {
from = {"aa", "ʼ", "ii", "oo", "sh", "zh"},
to = {"a" .. p[1], "h" .. p[1], "i" .. p[1], "o" .. p[1], "s" .. p[1], "z" .. p[1]}
},
},
}
m["om"] = {
"Oromo",
33864,
"cus-eas",
"Latn, Ethi",
}
m["or"] = {
"Odia",
33810,
"inc-eas",
"Orya",
ancestors = "inc-mor",
translit = "or-translit",
}
m["os"] = {
"Ossetian",
33968,
"xsc-sar",
"Cyrl, Geor, Latn",
ancestors = "oos",
translit = {
Cyrl = "os-translit",
Geor = "Geor-translit",
},
override_translit = true,
display_text = {
Cyrl = {
from = {"æ"},
to = {"ӕ"}
},
Latn = {
from = {"ӕ"},
to = {"æ"}
},
},
entry_name = {
Cyrl = {
remove_diacritics = c.grave .. c.acute,
from = {"æ"},
to = {"ӕ"}
},
Latn = {
from = {"ӕ"},
to = {"æ"}
},
},
sort_key = {
Cyrl = {
from = {"ӕ", "гъ", "дж", "дз", "ё", "къ", "пъ", "тъ", "хъ", "цъ", "чъ"},
to = {"а" .. p[1], "г" .. p[1], "д" .. p[1], "д" .. p[2], "е" .. p[1], "к" .. p[1], "п" .. p[1], "т" .. p[1], "х" .. p[1], "ц" .. p[1], "ч" .. p[1]}
},
},
}
m["pa"] = {
"Punjabi",
58635,
"inc-pan",
"Guru, pa-Arab",
ancestors = "inc-opa",
translit = {
Guru = "Guru-translit",
["pa-Arab"] = "pa-Arab-translit",
},
entry_name = {
["pa-Arab"] = {
remove_diacritics = c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.shadda .. c.sukun .. c.nunghunna,
from = {"ݨ", "ࣇ"},
to = {"ن", "ل"}
},
},
}
m["pi"] = {
"Pali",
36727,
"inc-mid",
"Latn, Brah, Deva, Beng, Sinh, Mymr, Thai, Lana, Laoo, Khmr, Cakm", --and also Khom
ancestors = "sa",
translit = {
Brah = "Brah-translit",
Deva = "sa-translit",
Beng = "pi-translit",
Sinh = "si-translit",
Mymr = "pi-translit",
Thai = "pi-translit",
Lana = "pi-translit",
Laoo = "pi-translit",
Khmr = "pi-translit",
Cakm = "Cakm-translit",
},
entry_name = {
Thai = {
from = {"ึ", u(0xF700), u(0xF70F)}, -- FIXME: Not clear what's going on with the PUA characters here.
to = {"ิํ", "ฐ", "ญ"}
},
remove_diacritics = c.VS01
},
sort_key = { -- FIXME: This needs to be converted into the current standardized format.
from = {"ā", "ī", "ū", "ḍ", "ḷ", "m[" .. c.dotabove .. c.dotbelow .. "]", "ṅ", "ñ", "ṇ", "ṭ", "([เโ])([ก-ฮ])", "([ເໂ])([ກ-ຮ])", "ᩔ", "ᩕ", "ᩖ", "ᩘ", "([ᨭ-ᨱ])ᩛ", "([ᨷ-ᨾ])ᩛ", "ᩤ", u(0xFE00), u(0x200D)},
to = {"a~", "i~", "u~", "d~", "l~", "m~", "n~", "n~~", "n~~~", "t~", "%2%1", "%2%1", "ᩈ᩠ᩈ", "᩠ᩁ", "᩠ᩃ", "ᨦ᩠", "%1᩠ᨮ", "%1᩠ᨻ", "ᩣ"}
},
}
m["pl"] = {
"Polish",
809,
"zlw-lch",
"Latn",
ancestors = "zlw-mpl",
sort_key = {
from = {"ą", "ć", "ę", "ł", "ń", "ó", "ś", "ź", "ż"},
to = {"a" .. p[1], "c" .. p[1], "e" .. p[1], "l" .. p[1], "n" .. p[1], "o" .. p[1], "s" .. p[1], "z" .. p[1], "z" .. p[2]}
},
standardChars = "AaĄąBbCcĆćDdEeĘęFfGgHhIiJjKkLlŁłMmNnŃńOoÓóPpRrSsŚśTtUuWwYyZzŹźŻż" .. c.punc,
}
m["ps"] = {
"Pashto",
58680,
"ira-pat",
"ps-Arab",
entry_name = {remove_diacritics = c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.shadda .. c.sukun .. c.zwarakay .. c.superalef},
}
m["pt"] = {
"Portuguese",
5146,
"roa-gap",
"Latn, Brai",
sort_key = {
Latn = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.diaer .. c.cedilla,
from = {"ª", "æ", "º", "œ"},
to = {"a", "ae", "o", "oe"}
},
},
standardChars = {
Latn = "AaÁáÂâÃãBbCcÇçDdEeÉéÊêFfGgHhIiÍíJjLlMmNnOoÓóÔôÕõPpQqRrSsTtUuÚúVvXxZz",
Brai = c.braille,
c.punc
},
}
m["qu"] = {
"Quechua",
5218,
"qwe",
"Latn",
}
m["rm"] = {
"Romansch",
13199,
"roa-rhe",
"Latn",
sort_key = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.diaer .. c.small_e},
}
m["ro"] = {
"Romanian",
7913,
"roa-eas",
"Latn, Cyrl, Cyrs",
translit = {
Cyrl = "ro-translit"
},
sort_key = {
Latn = {
remove_diacritics = c.grave .. c.acute,
from = {"ă", "â", "î", "ș", "ț"},
to = {"a" .. p[1], "a" .. p[2], "i" .. p[1], "s" .. p[1], "t" .. p[1]}
},
Cyrl = {
from = {"ӂ"},
to = {"ж" .. p[1]}
},
},
standardChars = {
Latn = "AaĂăÂâBbCcDdEeFfGgHhIiÎîJjLlMmNnOoPpRrSsȘșTtȚțUuVvXxZz",
Cyrl = "АаБбВвГгДдЕеЖжӁӂЗзИиЙйКкЛлМмНнОоПпРрСсТтУуФфХхЦцЧчШшЫыЬьЭэЮюЯя",
c.punc
},
}
m["ru"] = {
"Russian",
7737,
"zle",
"Cyrl, Brai",
ancestors = "zle-mru",
translit = {
Cyrl = "ru-translit"
},
display_text = {
Cyrl = {
from = {"'"},
to = {"’"}
},
},
entry_name = {
Cyrl = {
remove_diacritics = c.grave .. c.acute .. c.diaer,
remove_exceptions = {"Ё", "ё", "Ѣ̈", "ѣ̈", "Я̈", "я̈"},
from = {"’"},
to = {"'"},
},
},
sort_key = {
Cyrl = {
remove_diacritics = c.grave .. c.acute .. c.diaer,
from = {
"і", "ѣ", "ѳ", "ѵ"
},
to = {
"и" .. p[1], "ь" .. p[1], "я" .. p[2], "я" .. p[3]
}
},
},
standardChars = {
Cyrl = "АаБбВвГгДдЕеЁёЖжЗзИиЙйКкЛлМмНнОоПпРрСсТтУуФфХхЦцЧчШшЩщЪъЫыЬьЭэЮюЯя—",
Brai = c.braille,
(c.punc:gsub("'", "")) -- Exclude apostrophe.
},
}
m["rw"] = {
"Rwanda-Rundi",
3217514,
"bnt-glb",
"Latn",
entry_name = {remove_diacritics = c.acute .. c.circ .. c.macron .. c.caron},
}
m["sa"] = {
"Sanskrit",
11059,
"inc",
"as-Beng, Bali, Beng, Bhks, Brah, Mymr, xwo-Mong, Deva, Gujr, Guru, Gran, Hani, Java, Kthi, Knda, Kawi, Khar, Khmr, Laoo, Mlym, mnc-Mong, Marc, Modi, Mong, Nand, Newa, Orya, Phag, Ranj, Saur, Shrd, Sidd, Sinh, Soyo, Lana, Takr, Taml, Tang, Telu, Thai, Tibt, Tutg, Tirh, Zanb", --and also Khom; script codes sorted by canonical name rather than code for [[MOD:sa-convert]]
translit = {
Beng = "sa-Beng-translit",
["as-Beng"] = "sa-Beng-translit",
Brah = "Brah-translit",
Deva = "sa-translit",
Gujr = "sa-Gujr-translit",
Guru = "sa-Guru-translit",
Java = "sa-Java-translit",
Kthi = "sa-Kthi-translit",
Khmr = "pi-translit",
Knda = "sa-Knda-translit",
Lana = "pi-translit",
Laoo = "pi-translit",
Mlym = "sa-Mlym-translit",
Modi = "sa-Modi-translit",
Mong = "Mong-translit",
["mnc-Mong"] = "mnc-translit",
["xwo-Mong"] = "xal-translit",
Mymr = "pi-translit",
Orya = "sa-Orya-translit",
Shrd = "Shrd-translit",
Sidd = "Sidd-translit",
Sinh = "si-translit",
Taml = "sa-Taml-translit",
Telu = "sa-Telu-translit",
Thai = "pi-translit",
Tibt = "Tibt-translit",
},
display_text = {
Mong = s["Mong-displaytext"],
Tibt = s["Tibt-displaytext"],
},
entry_name = {
Mong = s["Mong-entryname"],
Tibt = s["Tibt-entryname"],
Thai = {
from = {"ึ", u(0xF700), u(0xF70F)}, -- FIXME: Not clear what's going on with the PUA characters here.
to = {"ิํ", "ฐ", "ญ"}
},
remove_diacritics = c.VS01 .. c.udatta .. c.anudatta
},
sort_key = {
Tibt = "Tibt-sortkey",
{ -- FIXME: This needs to be converted into the current standardized format.
from = {"ā", "ī", "ū", "ḍ", "ḷ", "ḹ", "m[" .. c.dotabove .. c.dotbelow .. "]", "ṅ", "ñ", "ṇ", "ṛ", "ṝ", "ś", "ṣ", "ṭ", "([เโไ])([ก-ฮ])", "([ເໂໄ])([ກ-ຮ])", "ᩔ", "ᩕ", "ᩖ", "ᩘ", "([ᨭ-ᨱ])ᩛ", "([ᨷ-ᨾ])ᩛ", "ᩤ", u(0xFE00), u(0x200D)},
to = {"a~", "i~", "u~", "d~", "l~", "l~~", "m~", "n~", "n~~", "n~~~", "r~", "r~~", "s~", "s~~", "t~", "%2%1", "%2%1", "ᩈ᩠ᩈ", "᩠ᩁ", "᩠ᩃ", "ᨦ᩠", "%1᩠ᨮ", "%1᩠ᨻ", "ᩣ"},
},
},
}
m["sc"] = {
"Sardinian",
33976,
"roa-sou",
"Latn",
}
m["sd"] = {
"Sindhi",
33997,
"inc-snd",
"sd-Arab, Deva, Sind, Khoj",
translit = {
Sind = "Sind-translit"
},
entry_name = {
["sd-Arab"] = {
remove_diacritics = c.kashida .. c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.shadda .. c.sukun .. c.superalef,
from = {"ٱ"},
to = {"ا"}
},
},
}
m["se"] = {
"Northern Sami",
33947,
"smi",
"Latn",
display_text = {
from = {"'"},
to = {"ˈ"}
},
entry_name = {remove_diacritics = c.macron .. c.dotbelow .. "'ˈ"},
sort_key = {
from = {"á", "č", "đ", "ŋ", "š", "ŧ", "ž"},
to = {"a" .. p[1], "c" .. p[1], "d" .. p[1], "n" .. p[1], "s" .. p[1], "t" .. p[1], "z" .. p[1]}
},
standardChars = "AaÁáBbCcČčDdĐđEeFfGgHhIiJjKkLlMmNnŊŋOoPpRrSsŠšTtŦŧUuVvZzŽž" .. c.punc,
}
m["sg"] = {
"Sango",
33954,
"crp",
"Latn",
ancestors = "ngb",
}
m["sh"] = {
"Serbo-Croatian",
9301,
"zls",
"Latn, Cyrl, Glag, Arab",
ietf_subtag = "hbs", -- ISO 639-3 code, since "sh" is deprecated from ISO 639-1
wikimedia_codes = "sh, bs, hr, sr",
entry_name = {
Latn = {
remove_diacritics = c.grave .. c.acute .. c.tilde .. c.macron .. c.dgrave .. c.invbreve,
remove_exceptions = {"Ć", "ć", "Ś", "ś", "Ź", "ź"}
},
Cyrl = {
remove_diacritics = c.grave .. c.acute .. c.tilde .. c.macron .. c.dgrave .. c.invbreve,
remove_exceptions = {"З́", "з́", "С́", "с́"}
},
},
sort_key = {
Latn = {
remove_diacritics = c.grave .. c.acute .. c.tilde .. c.macron .. c.dgrave .. c.invbreve,
remove_exceptions = {"ć", "ś", "ź"},
from = {"č", "ć", "dž", "đ", "lj", "nj", "š", "ś", "ž", "ź"},
to = {"c" .. p[1], "c" .. p[2], "d" .. p[1], "d" .. p[2], "l" .. p[1], "n" .. p[1], "s" .. p[1], "s" .. p[2], "z" .. p[1], "z" .. p[2]}
},
Cyrl = {
remove_diacritics = c.grave .. c.acute .. c.tilde .. c.macron .. c.dgrave .. c.invbreve,
remove_exceptions = {"з́", "с́"},
from = {"ђ", "з́", "ј", "љ", "њ", "с́", "ћ", "џ"},
to = {"д" .. p[1], "з" .. p[1], "и" .. p[1], "л" .. p[1], "н" .. p[1], "с" .. p[1], "т" .. p[1], "ч" .. p[1]}
},
},
standardChars = {
Latn = "AaBbCcČčĆćDdĐđEeFfGgHhIiJjKkLlMmNnOoPpRrSsŠšTtUuVvZzŽž",
Cyrl = "АаБбВвГгДдЂђЕеЖжЗзИиЈјКкЛлЉљМмНнЊњОоПпРрСсТтЋћУуФфХхЦцЧчЏџШш",
c.punc
},
}
m["si"] = {
"Sinhalese",
13267,
"inc-ins",
"Sinh",
translit = "si-translit",
override_translit = true,
}
m["sk"] = {
"Slovak",
9058,
"zlw",
"Latn",
ancestors = "zlw-osk",
sort_key = {remove_diacritics = c.acute .. c.circ .. c.diaer .. c.caron},
standardChars = "AaÁáÄäBbCcČčDdĎďEeÉéFfGgHhIiÍíJjKkLlĹ弾MmNnŇňOoÓóÔôPpRrŔŕSsŠšTtŤťUuÚúVvYyÝýZzŽž" .. c.punc,
}
m["sl"] = {
"Slovene",
9063,
"zls",
"Latn",
entry_name = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.dgrave .. c.invbreve .. c.dotbelow,
remove_exceptions = {"Ć", "ć", "Ǵ", "ǵ", "Ś", "ś", "Ź", "ź"},
from = {"Ə", "ə", "Ł", "ł"},
to = {"E", "e", "L", "l"},
},
sort_key = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.dotabove .. c.ringabove .. c.dgrave .. c.invbreve .. c.dotbelow .. c.ringbelow .. c.ogonek,
remove_exceptions = {"ć", "ǵ", "ś", "ź"},
from = {"ä", "č", "ć", "đ", "ə", "ë", "ǧ", "ǵ", "ï", "ł", "ö", "š", "ś", "ü", "ž", "ź"},
to = {"a" .. p[1], "c" .. p[1], "c" .. p[2], "d" .. p[1], "e", "e" .. p[1], "g" .. p[1], "g" .. p[2], "i" .. p[1], "l", "o" .. p[1], "s" .. p[1], "s" .. p[2], "u" .. p[1], "z" .. p[1], "z" .. p[2]},
},
standardChars = "AaBbCcČčDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsŠšTtUuVvZzŽž" .. c.punc,
}
m["sm"] = {
"Samoan",
34011,
"poz-pnp",
"Latn",
}
m["sn"] = {
"Shona",
34004,
"bnt-sho",
"Latn",
entry_name = {remove_diacritics = c.acute},
}
m["so"] = {
"Somali",
13275,
"cus-som",
"Latn, Arab, Osma",
entry_name = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}
},
}
m["sq"] = {
"Albanian",
8748,
"sqj",
"Latn, Grek, ota-Arab, Elba, Todr, Vith",
translit = {
Elba = "Elba-translit",
},
display_text = {
Grek = s["Grek-displaytext"],
},
entry_name = {
Latn = {
remove_diacritics = c.acute .. c.circ,
from = {'^[ie] (%w)', '^të (%w)'}, to = {'%1', '%1'},
},
Grek = { -- Diacritic removal from Grek-entryname excluded.
from = s["Grek-entryname"].from,
to = s["Grek-entryname"].to,
},
},
sort_key = {
Latn = {
remove_diacritics = c.acute .. c.circ .. c.tilde .. c.breve .. c.caron,
from = {'^[ie] (%w)', '^të (%w)', 'ç', 'dh', 'ë', 'gj', 'll', 'nj', 'rr', 'sh', 'th', 'xh', 'zh'},
to = {'%1', '%1', 'c'..p[1], 'd'..p[1], 'e'..p[1], 'g'..p[1], 'l'..p[1], 'n'..p[1], 'r'..p[1], 's'..p[1], 't'..p[1], 'x'..p[1], 'z'..p[1]},
}
-- TODO: Grek
},
standardChars = {
Latn = "AaBbCcÇçDdEeËëFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvXxYyZz",
c.punc
},
}
m["ss"] = {
"Swazi",
34014,
"bnt-ngu",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
m["st"] = {
"Sotho",
34340,
"bnt-sts",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
m["su"] = {
"Sundanese",
34002,
"poz-msa",
"Latn, Sund, Arab",
ancestors = "osn",
translit = {
Sund = "Sund-translit"
},
}
m["sv"] = {
"Swedish",
9027,
"gmq-eas",
"Latn",
ancestors = "gmq-osw-lat",
sort_key = {
remove_diacritics = c.grave .. c.acute .. c.circ .. c.tilde .. c.macron .. c.dacute .. c.caron .. c.cedilla .. "':",
remove_exceptions = {"å"},
from = {"ø", "æ", "œ", "ß", "å", "aͤ", "oͤ"},
to = {"o", "ae", "oe", "ss", "z" .. p[1], "ä", "ö"}
},
standardChars = "AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpRrSsTtUuVvXxYyÅåÄäÖö" .. c.punc,
}
m["sw"] = {
"Swahili",
7838,
"bnt-swh",
"Latn, Arab",
sort_key = {
Latn = {
from = {"ng'"},
to = {"ng" .. p[1]}
},
},
}
m["ta"] = {
"Tamil",
5885,
"dra-tam",
"Taml",
ancestors = "ta-mid",
translit = "ta-translit",
override_translit = true,
}
m["te"] = {
"Telugu",
8097,
"dra-tel",
"Telu",
translit = "te-translit",
override_translit = true,
}
m["tg"] = {
"Tajik",
9260,
"ira-swi",
"Cyrl, fa-Arab, Latn",
ancestors = "fa-cls",
translit = {
Cyrl = "tg-translit"
},
override_translit = true,
entry_name = {
Cyrl = s["tg-entryname"],
Latn = s["tg-entryname"],
},
sort_key = {
Cyrl = {
from = {"ғ", "ё", "ӣ", "қ", "ӯ", "ҳ", "ҷ"},
to = {"г" .. p[1], "е" .. p[1], "и" .. p[1], "к" .. p[1], "у" .. p[1], "х" .. p[1], "ч" .. p[1]}
},
},
}
m["th"] = {
"Thai",
9217,
"tai-swe",
"Thai, Khomt, Brai",
translit = {
Thai = "th-translit"
},
sort_key = {
Thai = "Thai-sortkey"
},
}
m["ti"] = {
"Tigrinya",
34124,
"sem-eth",
"Ethi",
translit = "Ethi-translit",
}
m["tk"] = {
"Turkmen",
9267,
"trk-ogz",
"Latn, Cyrl, Arab",
entry_name = {
Latn = s["tk-entryname"],
Cyrl = s["tk-entryname"],
},
sort_key = {
Latn = {
from = {"ç", "ä", "ž", "ň", "ö", "ş", "ü", "ý"},
to = {"c" .. p[1], "e" .. p[1], "j" .. p[1], "n" .. p[1], "o" .. p[1], "s" .. p[1], "u" .. p[1], "y" .. p[1]}
},
Cyrl = {
from = {"ё", "җ", "ң", "ө", "ү", "ә"},
to = {"е" .. p[1], "ж" .. p[1], "н" .. p[1], "о" .. p[1], "у" .. p[1], "э" .. p[1]}
},
},
ancestors = "trk-eog",
}
m["tl"] = {
"Tagalog",
34057,
"phi",
"Latn, Tglg",
translit = {
Tglg = "tl-translit"
},
override_translit = true,
entry_name = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.circ}
},
standardChars = {
Latn = "AaBbKkDdEeGgHhIiLlMmNnOoPpRrSsTtUuWwYy",
c.punc
},
sort_key = {
Latn = "tl-sortkey",
},
}
m["tn"] = {
"Tswana",
34137,
"bnt-sts",
"Latn",
}
m["to"] = {
"Tongan",
34094,
"poz-ton",
"Latn",
entry_name = {remove_diacritics = c.acute},
sort_key = {remove_diacritics = c.macron},
}
m["tr"] = {
"Turkish",
256,
"trk-ogz",
"Latn",
ancestors = "ota",
dotted_dotless_i = true,
sort_key = {
from = {
-- Ignore circumflex, but account for capital Î wrongly becoming ı + circ due to dotted dotless I logic.
"ı" .. c.circ, c.circ,
"i", -- Ensure "i" comes after "ı".
"ç", "ğ", "ı", "ö", "ş", "ü"
},
to = {
"i", "",
"i" .. p[1],
"c" .. p[1], "g" .. p[1], "i", "o" .. p[1], "s" .. p[1], "u" .. p[1]
}
},
standardChars = "AaÂâBbCcÇçDdEeFfGgĞğHhIıİiÎîJjKkLlMmNnOoÖöPpRrSsŞşTtUuÛûÜüVvYyZz" .. c.punc,
}
m["ts"] = {
"Tsonga",
34327,
"bnt-tsr",
"Latn",
}
m["tt"] = {
"Tatar",
25285,
"trk-kbu",
"Cyrl, Latn, tt-Arab",
translit = {
Cyrl = "tt-translit"
},
override_translit = false, -- until Module code can detect Russian loans such as [[аэропорт]]
dotted_dotless_i = true,
sort_key = {
Cyrl = {
from = {"ә", "ў", "ғ", "ё", "җ", "қ", "ң", "ө", "ү", "һ"},
to = {"а" .. p[1], "в" .. p[1], "г" .. p[1], "е" .. p[1], "ж" .. p[1], "к" .. p[1], "н" .. p[1], "о" .. p[1], "у" .. p[1], "х" .. p[1]}
},
Latn = {
from = {
"i", -- Ensure "i" comes after "ı".
"ä", "ə", "ç", "ğ", "ı", "ñ", "ŋ", "ö", "ɵ", "ş", "ü"
},
to = {
"i" .. p[1],
"a" .. p[1], "a" .. p[2], "c" .. p[1], "g" .. p[1], "i", "n" .. p[1], "n" .. p[2], "o" .. p[1], "o" .. p[2], "s" .. p[1], "u" .. p[1]
}
},
},
}
-- "tw" IS TREATED AS "ak", SEE WT:LT
m["ty"] = {
"Tahitian",
34128,
"poz-pep",
"Latn",
}
m["ug"] = {
"Uyghur",
13263,
"trk-kar",
"ug-Arab, Latn, Cyrl",
ancestors = "chg",
translit = {
["ug-Arab"] = "ug-translit",
Cyrl = "ug-translit",
},
override_translit = true,
}
m["uk"] = {
"Ukrainian",
8798,
"zle",
"Cyrl",
ancestors = "zle-muk",
translit = "uk-translit",
entry_name = {remove_diacritics = c.grave .. c.acute},
sort_key = {
remove_diacritics = c.grave .. c.acute,
from = {
"ї", -- 2 chars
"ґ", "є", "і" -- 1 char
},
to = {
"и" .. p[2],
"г" .. p[1], "е" .. p[1], "и" .. p[1]
}
},
standardChars = "АаБбВвГгДдЕеЄєЖжЗзИиІіЇїЙйКкЛлМмНнОоПпРрСсТтУуФфХхЦцЧчШшЩщЬьЮюЯя" .. c.punc:gsub("'", ""), -- Exclude apostrophe.
}
m["ur"] = {
"Urdu",
1617,
"inc-hnd",
"ur-Arab, Hebr",
translit = {
["ur-Arab"] = "ur-translit"
},
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
["ur-Arab"] = {
-- character "ۂ" code U+06C2 to "ه" and "هٔ" (U+0647 + U+0654) to "ه"; hamzatu l-waṣli to a regular alif
from = {"هٔ", "ۂ", "ٱ"},
to = {"ہ", "ہ", "ا"},
remove_diacritics = c.fathatan .. c.dammatan .. c.kasratan .. c.fatha .. c.damma .. c.kasra .. c.shadda .. c.sukun .. c.nunghunna .. c.superalef
},
Hebr = "Hebr-common",
},
sort_key = {
Hebr = "Hebr-common",
},
standardChars = {
["ur-Arab"] = "ایببپتثجچحخدذرزژسشصضطظعغفقکگلࣇڷمنݨوؤہھئٹڈڑآے",
c.punc,
},
}
m["uz"] = {
"Uzbek",
9264,
"trk-kar",
"Latn, Cyrl, fa-Arab",
ancestors = "chg",
translit = {
Cyrl = "uz-translit"
},
sort_key = {
Latn = {
from = {"oʻ", "gʻ", "sh", "ch", "ng"},
to = {"z" .. p[1], "z" .. p[2], "z" .. p[3], "z" .. p[4], "z" .. p[5]}
},
Cyrl = {
from = {"ё", "ў", "қ", "ғ", "ҳ"},
to = {"е" .. p[1], "я" .. p[1], "я" .. p[2], "я" .. p[3], "я" .. p[4]}
},
},
entry_name = {
["fa-Arab"] = "ar-entryname",
},
}
m["ve"] = {
"Venda",
32704,
"bnt-bso",
"Latn",
}
m["vi"] = {
"Vietnamese",
9199,
"mkh-vie",
"Latn, Hani",
ancestors = "mkh-mvi",
sort_key = {
Latn = "vi-sortkey",
Hani = "Hani-sortkey",
},
}
m["vo"] = {
"Volapük",
36986,
"art",
"Latn",
}
m["wa"] = {
"Walloon",
34219,
"roa-oil",
"Latn",
sort_key = s["roa-oil-sortkey"],
}
m["wo"] = {
"Wolof",
34257,
"alv-fwo",
"Latn, Arab, Gara",
}
m["xh"] = {
"Xhosa",
13218,
"bnt-ngu",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
m["yi"] = {
"Yiddish",
8641,
"gmw-hgm",
"Hebr, Latn",
ancestors = "gmh",
translit = {
Hebr = "yi-translit",
},
display_text = {
Hebr = "Hebr-common",
},
entry_name = {
Hebr = "Hebr-common",
},
sort_key = {
Hebr = "Hebr-common",
},
}
m["yo"] = {
"Yoruba",
34311,
"alv-yor",
"Latn, Arab",
entry_name = {
Latn = {remove_diacritics = c.grave .. c.acute .. c.macron}
},
sort_key = {
Latn = {
from = {"ẹ", "ɛ", "gb", "ị", "kp", "ọ", "ɔ", "ṣ", "sh", "ụ"},
to = {"e" .. p[1], "e" .. p[1], "g" .. p[1], "i" .. p[1], "k" .. p[1], "o" .. p[1], "o" .. p[1], "s" .. p[1], "s" .. p[1], "u" .. p[1]}
},
},
}
m["za"] = {
"Zhuang",
13216,
"tai",
"Latn, Hani",
sort_key = {
Latn = "za-sortkey",
Hani = "Hani-sortkey",
},
}
m["zh"] = {
"Chinese",
7850,
"zhx",
"Hants, Latn, Bopo, Nshu, Brai",
ancestors = "ltc",
generate_forms = "zh-generateforms",
translit = {
Hani = "zh-translit",
Bopo = "zh-translit",
},
sort_key = {
Hani = "Hani-sortkey"
},
}
m["zu"] = {
"Zulu",
10179,
"bnt-ngu",
"Latn",
entry_name = {remove_diacritics = c.grave .. c.acute .. c.circ .. c.macron .. c.caron},
}
return require("Module:languages").finalizeData(m, "language")
i0jc3kit8zdle0lkybvkdlrypii56t1
Templat:cognate
10
214752
1349295
1083511
2026-04-10T21:05:35Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Templat:seakar]]
1349295
wikitext
text/x-wiki
#ALIH [[Templat:seakar]]
nrf594fl4jmi3vdayw6da06eojerk7f
Wikikamus:Templat-templat yang digunakan dalam KamusWiki
4
247867
1349293
1023277
2026-04-10T21:05:15Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Wikikamus:Templat]]
1349293
wikitext
text/x-wiki
#ALIH [[Wikikamus:Templat]]
t128mwbddyo6v143lm2tp410abmnbd2
mokondo
0
254486
1349268
1280350
2026-04-10T14:27:11Z
Swarabakti
18192
1349268
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# {{akronim dari|modal kontol doang}}
{{-turunan-|id}}
{{-terjemahan-}}
<!--Anda dapat menyalin templat {{t-atas}} -- {{t-bawah}} di bawah berulang kali untuk masing masing arti kata, masing-masing dibedakan melalui parameter pertamanya (misalkan {{kotak awal|arti 1}} dan {{kotak awal|arti 2}} dst). Lihat [[Wiktionary:Terjemahan]] untuk panduan membuat lebih dari satu kolom terjemahan-->
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
#seseorang yang mencoba mendapat perhatian lebih dengan memanfaatkan orang lain: <br />''Dia berteman hanya untuk pansos''
[[Kategori:WikiTutur - Indonesia]]
[[Kategori:WikiTutur Kelas Leksikografi 19 Mei 2024]]
hz1e51wix0olyx2ad99fjckmbh7rk9d
1349269
1349268
2026-04-10T14:27:32Z
Swarabakti
18192
1349269
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# {{label|id|cakapan|hinaan}} {{akronim dari|id|modal kontol doang}}
{{-turunan-|id}}
{{-terjemahan-}}
<!--Anda dapat menyalin templat {{t-atas}} -- {{t-bawah}} di bawah berulang kali untuk masing masing arti kata, masing-masing dibedakan melalui parameter pertamanya (misalkan {{kotak awal|arti 1}} dan {{kotak awal|arti 2}} dst). Lihat [[Wiktionary:Terjemahan]] untuk panduan membuat lebih dari satu kolom terjemahan-->
{{t-atas}}
{{t-bawah}}
{{-bacaan-}}
* {{R:KBBI Daring}}
{{rfv|id|impor dari KBBI}}
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
#seseorang yang mencoba mendapat perhatian lebih dengan memanfaatkan orang lain: <br />''Dia berteman hanya untuk pansos''
[[Kategori:WikiTutur - Indonesia]]
[[Kategori:WikiTutur Kelas Leksikografi 19 Mei 2024]]
dstz7w3d5f4klpq3kvoasdrnchmos8p
1349270
1349269
2026-04-10T14:28:12Z
Swarabakti
18192
1349270
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# {{label|id|cakapan|hinaan}} {{akronim dari|id|modal kontol doang}}
[[Kategori:WikiTutur - Indonesia]]
[[Kategori:WikiTutur Kelas Leksikografi 19 Mei 2024]]
hiavhci8kom2zbnsyuvgq0urpyes49v
Templat:cog+
10
263575
1349294
1083632
2026-04-10T21:05:25Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Templat:seakar+]]
1349294
wikitext
text/x-wiki
#ALIH [[Templat:seakar+]]
ejjk6zsrjlu3qi0ps6cp7cu3wjs50vx
Templat:cognate+
10
263576
1349296
1083506
2026-04-10T21:05:45Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Templat:seakar+]]
1349296
wikitext
text/x-wiki
#ALIH [[Templat:seakar+]]
ejjk6zsrjlu3qi0ps6cp7cu3wjs50vx
Templat:kog+
10
263578
1349298
1083510
2026-04-10T21:06:05Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Templat:seakar+]]
1349298
wikitext
text/x-wiki
#ALIH [[Templat:seakar+]]
ejjk6zsrjlu3qi0ps6cp7cu3wjs50vx
Templat:kog
10
263581
1349297
1083519
2026-04-10T21:05:55Z
EmausBot
16509
Memperbaiki pengalihan ganda ke [[Templat:seakar]]
1349297
wikitext
text/x-wiki
#ALIH [[Templat:seakar]]
nrf594fl4jmi3vdayw6da06eojerk7f
galer
0
266215
1349344
1346247
2026-04-11T07:10:15Z
Pitchrigi
38796
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349344
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# [[gatal]] di [[area]] [[kemaluan]] [[pria]]
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-adj-|bkr}}
# [[longgar]]
[[Kategori:WikiBalalah - Bakumpai]]
=={{bahasa|osi}}==
{{kepala|osi}}
: {{pemenggalan|osi|ga|ler}}
{{-etimologi-}}
: {{l|id|kawi}}
{{-n-|osi}}
# {{l|id|gores}}, {{l|id|garis}}, {{l|id|bilai}}, {{l|id|bilur}}
{{-rujukan-}}
* Ali, Hasan. (2002). ''[https://web.archive.org/web/20260115111844/https://ebookbanyuwangi.id/assets/2022/kamus_using.pdf Kamus Bahasa Daerah Using-Indonesia]''. Banyuwangi: Pemerintah Kabupaten Banyuwangi.
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
7vdjseu6ru6ks5y5gp50nxtqffh45t6
1349362
1349344
2026-04-11T07:23:46Z
Pitchrigi
38796
1349362
wikitext
text/x-wiki
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-adj-|bkr}}
# [[longgar]]
[[Kategori:WikiBalalah - Bakumpai]]
=={{bahasa|osi}}==
{{kepala|osi}}
: {{pemenggalan|osi|ga|ler}}
{{-etimologi-}}
: {{l|id|kawi}}
{{-n-|osi}}
# {{l|id|gores}}, {{l|id|garis}}, {{l|id|bilai}}, {{l|id|bilur}}
{{-rujukan-}}
* Ali, Hasan. (2002). ''[https://web.archive.org/web/20260115111844/https://ebookbanyuwangi.id/assets/2022/kamus_using.pdf Kamus Bahasa Daerah Using-Indonesia]''. Banyuwangi: Pemerintah Kabupaten Banyuwangi.
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
355qxet2ggqmywcblwuoqbck4e30159
1349363
1349362
2026-04-11T07:24:40Z
Pitchrigi
38796
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349363
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[menggaruk]] [[kemaluan]] [[pria]]
{{kepala|id|num=2}}
{{-adj-|id}}
# [[gatal]] di area [[kemaluan]] [[pria]]
=={{bahasa|bkr}}==
{{kepala|bkr}}
{{-adj-|bkr}}
# [[longgar]]
[[Kategori:WikiBalalah - Bakumpai]]
=={{bahasa|osi}}==
{{kepala|osi}}
: {{pemenggalan|osi|ga|ler}}
{{-etimologi-}}
: {{l|id|kawi}}
{{-n-|osi}}
# {{l|id|gores}}, {{l|id|garis}}, {{l|id|bilai}}, {{l|id|bilur}}
{{-rujukan-}}
* Ali, Hasan. (2002). ''[https://web.archive.org/web/20260115111844/https://ebookbanyuwangi.id/assets/2022/kamus_using.pdf Kamus Bahasa Daerah Using-Indonesia]''. Banyuwangi: Pemerintah Kabupaten Banyuwangi.
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
5t7z1mjkrlher6y5czlur1wavlb53ih
mahantuyi
0
273840
1349323
1342269
2026-04-11T06:45:07Z
Ezagren
2314
1349323
wikitext
text/x-wiki
[[Berkas:Nutuk Beham - Mahantuyi.jpg|thumb|120px|mahantuyi]]
=={{bahasa|mqg}}==
{{kepala|mqg}}
:{{suara|mqg|LL-Q12952778 (mqg)-Syafrudin (Robbay12)-Mhantuyi.wav}}
{{-v-|mqg}}
# [[menggoreng]] atau [[menyangrai]] [[padi]] tanpa minyak
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
mbrq6j0ha360ltof2vrvi1ddf06c4u5
1349324
1349323
2026-04-11T06:45:15Z
Ezagren
2314
Ezagren memindahkan halaman [[mhantuyi]] ke [[mahantuyi]]
1349323
wikitext
text/x-wiki
[[Berkas:Nutuk Beham - Mahantuyi.jpg|thumb|120px|mahantuyi]]
=={{bahasa|mqg}}==
{{kepala|mqg}}
:{{suara|mqg|LL-Q12952778 (mqg)-Syafrudin (Robbay12)-Mhantuyi.wav}}
{{-v-|mqg}}
# [[menggoreng]] atau [[menyangrai]] [[padi]] tanpa minyak
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
mbrq6j0ha360ltof2vrvi1ddf06c4u5
melango
0
277347
1349321
1342237
2026-04-11T06:44:27Z
Ezagren
2314
Ezagren memindahkan halaman [[mlango]] ke [[melango]]
1342237
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-v-|mqg}}
# [[menelan]] [[bulat-bulat]]
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
pdtisdj0swyxb9li8y2gdl4nvzdrvay
melaneng
0
277348
1349319
1342235
2026-04-11T06:44:18Z
Ezagren
2314
Ezagren memindahkan halaman [[mlanneng]] ke [[melaneng]]
1342235
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-v-|mqg}}
# [[melihat]] [[seseorang]] dengan [[marah]]
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
o73vmnvp8tifr9bi6oom4431ymlmez0
melanteng
0
277349
1349317
1342234
2026-04-11T06:44:08Z
Ezagren
2314
Ezagren memindahkan halaman [[mlanteng]] ke [[melanteng]]
1342234
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-n-|mqg}}
# [[bunyi]] sesuatu yang [[terpental]] saat [[terlempar]]
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
g8wbgec3zxgit9t67tqzgedr842y0h1
melasung
0
277355
1349315
1342231
2026-04-11T06:43:46Z
Ezagren
2314
Ezagren memindahkan halaman [[mlassung]] ke [[melasung]]
1342231
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-n-|mqg}}
# [[dasar]] [[rumah]] yang tidak rata bagian [[tengah]]
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
66strlfpfg50h4hmhc9txz2675y9fsq
melasur
0
277357
1349313
1337610
2026-04-11T06:43:36Z
Ezagren
2314
Ezagren memindahkan halaman [[mlassur]] ke [[melasur]]
1337610
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-v-|mqg}}
# {{l|id|terpeleset}} saat turun
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
0rdc0erpriuyc3n9jikfqsl80du0v1i
melekke
0
277359
1349311
1337612
2026-04-11T06:43:24Z
Ezagren
2314
Ezagren memindahkan halaman [[mlekke]] ke [[melekke]]
1337612
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-v-|mqg}}
# {{l|id|melihati}}
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
o88gl9pj9v5b2nfe2ixrbfhdv738omj
melensang
0
277373
1349309
1337642
2026-04-11T06:42:56Z
Ezagren
2314
Ezagren memindahkan halaman [[mlensang]] ke [[melensang]]
1337642
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-n-|mqg}}
# lewat {{l|id|tengah hari}}
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
cn7lsypkqpriz21gucjdw1272hdsxy1
meleoki
0
277374
1349307
1337643
2026-04-11T06:42:44Z
Ezagren
2314
Ezagren memindahkan halaman [[mleokki]] ke [[meleoki]]
1337643
wikitext
text/x-wiki
=={{bahasa|mqg}}==
{{kepala|mqg}}
{{-v-|mqg}}
# {{l|id|melewati}} jalan yang agak jauh
[[Kategori:Edit-a-thon WikiKathā Maret 2026]]
s6lrapznrfoi7amqtj51s1wqqbaw7c6
Modul:ar-stripdiacritics
828
280785
1349276
2026-04-10T18:43:20Z
Swarabakti
18192
←Membuat halaman berisi 'local m_str_utils = require("Module:string utilities") local find = m_str_utils.find local gsub = m_str_utils.gsub local U = m_str_utils.char local taTwiil = U(0x640) local waSla = U(0x671) -- diacritics ordinarily removed by entry_name replacements local Arabic_diacritics = U(0x64B, 0x64C, 0x64D, 0x64E, 0x64F, 0x650, 0x651, 0x652, 0x670) -- replace alif waṣl with alif -- remove tatweel and diacritics: fathatan, dammatan, kasratan, fatha, -- damma, kasra, sh...'
1349276
Scribunto
text/plain
local m_str_utils = require("Module:string utilities")
local find = m_str_utils.find
local gsub = m_str_utils.gsub
local U = m_str_utils.char
local taTwiil = U(0x640)
local waSla = U(0x671)
-- diacritics ordinarily removed by entry_name replacements
local Arabic_diacritics = U(0x64B, 0x64C, 0x64D, 0x64E, 0x64F, 0x650, 0x651, 0x652, 0x670)
-- replace alif waṣl with alif
-- remove tatweel and diacritics: fathatan, dammatan, kasratan, fatha,
-- damma, kasra, shadda, sukun, superscript (dagger) alef
local replacements = {
from = {U(0x0671), "[" .. U(0x0640, 0x064B) .. "-" .. U(0x0652, 0x0670, 0x0656) .. "]"},
to = {U(0x0627)},
}
local export = {}
function export.stripDiacritics(text, lang, sc)
if text == waSla or find(text, "^" .. taTwiil .. "?[" .. Arabic_diacritics .. "]" .. "$") then
return text
end
for i, from in ipairs(replacements.from) do
local to = replacements.to[i] or ""
text = gsub(text, from, to)
end
return text
end
return export
gmvq9118u80f6bkyoy06j4u3ine0kmf
Modul:languages/byStripDiacriticsModule
828
280786
1349278
2026-04-10T18:46:09Z
Swarabakti
18192
←Membuat halaman berisi 'return function(stripDiacriticsModule) local langs = {} for code, data in pairs(require("Module:languages/data/all")) do if data.strip_diacritics == stripDiacriticsModule then langs[code] = data elseif type(data.strip_diacritics) == "table" then for script, strip_diacritics_data in pairs(data.strip_diacritics) do if strip_diacritics_data == stripDiacriticsModule then langs[code] = data end end end end local result = {} local i...'
1349278
Scribunto
text/plain
return function(stripDiacriticsModule)
local langs = {}
for code, data in pairs(require("Module:languages/data/all")) do
if data.strip_diacritics == stripDiacriticsModule then
langs[code] = data
elseif type(data.strip_diacritics) == "table" then
for script, strip_diacritics_data in pairs(data.strip_diacritics) do
if strip_diacritics_data == stripDiacriticsModule then
langs[code] = data
end
end
end
end
local result = {}
local i = 0
for code, data in pairs(langs) do
i = i + 1
result[i] = require("Module:languages").makeObject(code, data)
end
return result
end
lk41omakl9gzgmnob1lrwt4rea7f6mb
mleokki
0
280787
1349308
2026-04-11T06:42:44Z
Ezagren
2314
Ezagren memindahkan halaman [[mleokki]] ke [[meleoki]]
1349308
wikitext
text/x-wiki
#ALIH [[meleoki]]
jdvchjq70tvr3546peuvsvayqjpyvk6
mlensang
0
280788
1349310
2026-04-11T06:42:56Z
Ezagren
2314
Ezagren memindahkan halaman [[mlensang]] ke [[melensang]]
1349310
wikitext
text/x-wiki
#ALIH [[melensang]]
1hhkev74vh60ppwg4bfbbz2asn024ba
mlekke
0
280789
1349312
2026-04-11T06:43:25Z
Ezagren
2314
Ezagren memindahkan halaman [[mlekke]] ke [[melekke]]
1349312
wikitext
text/x-wiki
#ALIH [[melekke]]
lmn2rlkswif5lyo58fur4dgatbhzmu5
mlassur
0
280790
1349314
2026-04-11T06:43:36Z
Ezagren
2314
Ezagren memindahkan halaman [[mlassur]] ke [[melasur]]
1349314
wikitext
text/x-wiki
#ALIH [[melasur]]
b98hrneqt9k9ufzn4qwzxrn4nhfxopw
mlassung
0
280791
1349316
2026-04-11T06:43:46Z
Ezagren
2314
Ezagren memindahkan halaman [[mlassung]] ke [[melasung]]
1349316
wikitext
text/x-wiki
#ALIH [[melasung]]
74zl0vuanbazw7ek5hbe666q1e2xv5p
mlanteng
0
280792
1349318
2026-04-11T06:44:08Z
Ezagren
2314
Ezagren memindahkan halaman [[mlanteng]] ke [[melanteng]]
1349318
wikitext
text/x-wiki
#ALIH [[melanteng]]
pfnw2v87ga1wuxhbge50ylfvt724fj6
mlanneng
0
280793
1349320
2026-04-11T06:44:18Z
Ezagren
2314
Ezagren memindahkan halaman [[mlanneng]] ke [[melaneng]]
1349320
wikitext
text/x-wiki
#ALIH [[melaneng]]
724zhfgc9803o5rtjyhwqo7xco5ynoa
mlango
0
280794
1349322
2026-04-11T06:44:27Z
Ezagren
2314
Ezagren memindahkan halaman [[mlango]] ke [[melango]]
1349322
wikitext
text/x-wiki
#ALIH [[melango]]
mx7p8k54hwwiwlvphqqnmvsvnhxlabk
mhantuyi
0
280795
1349325
2026-04-11T06:45:15Z
Ezagren
2314
Ezagren memindahkan halaman [[mhantuyi]] ke [[mahantuyi]]
1349325
wikitext
text/x-wiki
#ALIH [[mahantuyi]]
buy4kt5es4fno9van360g0ix2ymh8rk
padiatapa
0
280796
1349327
2026-04-11T06:57:46Z
Agus Damanik
26229
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349327
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# persetujuan yang telah diinformasikan sebelumnnya sehingga dilakukan tanpa paksaan
hrm1qyb72aijeucet2900pypc7ioodc
1349329
1349327
2026-04-11T06:59:25Z
Agus Damanik
26229
/* {{bahasa|id}} */
1349329
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# {{akr}} persetujuan atas dasar informasi di awal tanpa paksaan
h1d8dxogajrgdunz9g3qmaqwoch7qk3
1349354
1349329
2026-04-11T07:15:10Z
Agus Damanik
26229
1349354
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# {{akr}} [[persetujuan]] atas dasar informasi di awal tanpa paksaan
76f9nrwbbp0n4rlxavgc2lf5aw6qqa1
ngadi-ngadi
0
280797
1349328
2026-04-11T06:59:09Z
Hisyam (WMID)
47557
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349328
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# [[mengada-ada]]
eor9r9ogth1dt50l1dwf80rs3ww4hxv
kerad
0
280798
1349330
2026-04-11T07:02:53Z
Sibiru45
40479
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349330
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# plesetan dari kata [[keras]]
ambmb4pl007520vgl7tplnxrsiani4f
1349361
1349330
2026-04-11T07:23:46Z
Sibiru45
40479
/* {{bahasa|id}} */
1349361
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# {{label|id|cakapan}} sinonim dari [[keras]]
scplzrf8fshd7r87yp0oxjc9wb9hv23
nolep
0
280799
1349331
2026-04-11T07:05:20Z
Hasnanf
40434
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349331
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# kurang [[pergaulan]]
0f5xk30g3i7p5s4lz0wjepsmlijoor3
1349334
1349331
2026-04-11T07:05:51Z
Hasnanf
40434
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [su]
1349334
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# kurang [[pergaulan]]
=={{bahasa|su}}==
{{kepala|su}}
{{-adj-|su}}
# [[kuuleun]]
54fhhl80td3njmxzazo2ambd5iekf1t
1349360
1349334
2026-04-11T07:23:29Z
Hasnanf
40434
/* {{bahasa|id}} */
1349360
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# {{label|id|cakapan}} kurang [[pergaulan]]
=={{bahasa|su}}==
{{kepala|su}}
{{-adj-|su}}
# [[kuuleun]]
j0vzlfynj60xieacewhbxe90ph3wxk3
nyawit
0
280800
1349333
2026-04-11T07:05:36Z
Fhikri Latifi
41987
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349333
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# tindakan sesorang yang memanfaatkan segala kesempetan untuk keuntungan pribadi
g4fxxoktiwmmf70v2rdrskin13yv431
1349355
1349333
2026-04-11T07:15:45Z
Fhikri Latifi
41987
1349355
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[tindakan]] sesorang yang memanfaatkan segala kesempetan untuk keuntungan pribadi
sro9jjcbopqnj9upj4djb3ju095xclg
1349358
1349355
2026-04-11T07:21:23Z
Fhikri Latifi
41987
/* {{bahasa|id}} */
1349358
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[tindakan]] sesorang yang memanfaatkan segala kesempatan untuk keuntungan pribadi
g92p4587liq2im05r4j6s58rfllrges
1349364
1349358
2026-04-11T07:25:29Z
Fhikri Latifi
41987
/* {{bahasa|id}} */
1349364
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[tindakan]] sesorang yang memanfaatkan segala kesempatan untuk keuntungan pribadi
{{-v-|id}}
# menabur benih secara serampangan: mari ikut -- ke sawah
6znmm0gvj6drn8znixa9h6j3aud8jql
1349367
1349364
2026-04-11T07:25:48Z
Fhikri Latifi
41987
1349367
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[tindakan]] sesorang yang memanfaatkan segala kesempatan untuk keuntungan pribadi
# menabur benih secara serampangan: mari ikut -- ke sawah
2hgweqjim4ool3tcm5u17rma7hyuxkd
niha
0
280801
1349335
2026-04-11T07:06:26Z
Fatamorganaa
38747
+ entri nias
1349335
wikitext
text/x-wiki
=={{bahasa|nia}}==
{{kepala|nia}}
{{-nom-|nia}}
# [[manusia]]
5n1ogy42kfwo9f4xdtpf93gkvchjf31
bjir
0
280802
1349336
2026-04-11T07:06:56Z
Annidafattiya
41441
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349336
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# Anjing
a3848247z6tthlepsx7g7dnzui7o9lb
1349340
1349336
2026-04-11T07:09:20Z
Annidafattiya
41441
Annidafattiya memindahkan halaman [[Bjir]] ke [[bjir]]
1349336
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
# Anjing
a3848247z6tthlepsx7g7dnzui7o9lb
1349343
1349340
2026-04-11T07:10:08Z
Annidafattiya
41441
1349343
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
Varian deskripsi anjing
fq06macz47sus5dtdc8hv0tbaw0o232
1349346
1349343
2026-04-11T07:11:06Z
Annidafattiya
41441
/* {{bahasa|id}} */
1349346
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adj-|id}}
#Varian deskripsi [[anjing]]
9lgbcihm6vsknen73fw0lpxvovr466s
sotta
0
280803
1349337
2026-04-11T07:07:15Z
Iripseudocorus
40083
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [mak]
1349337
wikitext
text/x-wiki
=={{bahasa|mak}}==
{{kepala|mak}}
{{-adj-|mak}}
# [[sok]] tahu
k20awd682c49bd53lyq6tchivbsvuyi
gaskeun
0
280804
1349338
2026-04-11T07:08:19Z
Aguswirawan108
41002
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349338
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [ayo lakukan]
flbpwfrfslpa929lurojfbdu65tugg7
1349345
1349338
2026-04-11T07:10:42Z
Aguswirawan108
41002
1349345
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[ayo]] [[lakukan]]
7lndponmyikvwbkidkx3o2xvn4f60mf
1349348
1349345
2026-04-11T07:11:43Z
Aguswirawan108
41002
Aguswirawan108 memindahkan halaman [[Gaskeun]] ke [[gaskeun]]
1349345
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# [[ayo]] [[lakukan]]
7lndponmyikvwbkidkx3o2xvn4f60mf
1349365
1349348
2026-04-11T07:25:31Z
Aguswirawan108
41002
1349365
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# {{label|id|cakapan}} [[ayo]] [[lakukan]],
6n5elq339b2plxegyizw85ee7ejktzv
sokap
0
280805
1349339
2026-04-11T07:08:33Z
Bangrapip
37376
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349339
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-pron-|id}}
# siapa
fd3i5wvfjq7d8jzsbpemxdb6za3rrqc
1349357
1349339
2026-04-11T07:20:14Z
Bangrapip
37376
1349357
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-pron-|id}}
# {{label|id|cakapan}} [[siapa]]
pyf6j4jto4mwovzjemrruhsbvcxwtyg
nyabu
0
280807
1349342
2026-04-11T07:09:59Z
Volstand
24299
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349342
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# mengkonsumsi sabu
7pe0g4cxfyo3trcsdxl3q9wq90v73uh
1349353
1349342
2026-04-11T07:14:39Z
Volstand
24299
1349353
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-v-|id}}
# mengkonsumsi [[sabu]]
40bfiqnmodhdp9wncu95jn8t16r7oze
Gaskeun
0
280808
1349349
2026-04-11T07:11:43Z
Aguswirawan108
41002
Aguswirawan108 memindahkan halaman [[Gaskeun]] ke [[gaskeun]]
1349349
wikitext
text/x-wiki
#ALIH [[gaskeun]]
roi5tgom1imedyou57wfsa6f47mfioj
CTA
0
280809
1349350
2026-04-11T07:12:13Z
Istrikth
44873
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349350
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-adv-|id}}
# singkatan dari Cukup Tau Aja
b8jbzcwv1mo8ryicwlluwejdr7tcrhe
child grooming
0
280810
1349351
2026-04-11T07:14:06Z
Astari28
38522
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [en]
1349351
wikitext
text/x-wiki
=={{bahasa|en}}==
{{kepala|en}}
{{-n-|en}}
# manipulasi anak
cknoxiefhpzbkm4oc82ulx2m2y346vc
cegil
0
280811
1349352
2026-04-11T07:14:29Z
I Gede Krisna Dharmayudha
40627
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [id]
1349352
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# [[perempuan]] [[gila]]
h8ax8dpq5103cvzfccfbze98ugwty1l
1349368
1349352
2026-04-11T07:28:04Z
I Gede Krisna Dharmayudha
40627
1349368
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# {{label|id|cakapan}} [[singkatan]] [[dari]] [[perempuan]] [[gila]]
o9ykt68hxt2bihne67yehi1ro0aoycu
1349369
1349368
2026-04-11T07:28:29Z
I Gede Krisna Dharmayudha
40627
1349369
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id}}
{{-n-|id}}
# {{label|id|cakapan}} [[singkatan]] [[dari]] [[cewek]] [[gila]]
mkfrouxk7hgrr4idh0x10m0w294ifet
grooming
0
280812
1349356
2026-04-11T07:16:22Z
Astari28
38522
[[:wikt:id:Pengguna:Swarabakti/Gadget-EntryAdder.js|+entri]] [en]
1349356
wikitext
text/x-wiki
=={{bahasa|en}}==
{{kepala|en}}
{{-n-|en}}
# Manipulasi
67ix00yo6od4ye0zafyngk01949d0zz
relate
0
280813
1349366
2026-04-11T07:25:33Z
Alfinlutvianaaa
38561
←Membuat halaman berisi '=={{bahasa|id}}== {{kepala|id|num=1}}'
1349366
wikitext
text/x-wiki
=={{bahasa|id}}==
{{kepala|id|num=1}}
ahlc2mik7r6o238wpkpzciplk82zw4p
1349370
1349366
2026-04-11T07:30:52Z
Alfinlutvianaaa
38561
1349370
wikitext
text/x-wiki
=={{bahasa|id}}==
6nvimd4a2lxcfq522qpngk7tpmbkjxa