Search for your language by its native or English name, then click « Download ».
On your device, unzip the downloaded archive.
Post-processing
Refer to the relevant tutorials in #See also to mass rename, mass convert or mass denoise your downloaded audio files.
Programmatic tools
The tools below first fetch, from one or several Wikimedia Commons categories, the list of audio files they contain.
Some of them allow filtering that list further to focus on a single speaker, either by editing their code or by post-processing the resulting .csv list of audio files. The listed targets are then downloaded at a speed of 500 to 15,000 files per hour. Items already present locally and matching the latest Commons version are generally not re-downloaded.
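As an illustration of this general pattern, here is a minimal Python sketch (assuming the requests library; the category and folder names are placeholders). It lists a category's files through the public MediaWiki API and skips any file whose SHA1 already matches the local copy:

import hashlib
import os
import requests

API = "https://commons.wikimedia.org/w/api.php"
CATEGORY = "Category:Lingua Libre pronunciation"  # placeholder: your target category
OUT_DIR = "downloads"

def local_sha1(path):
    # SHA1 of an already-downloaded file, or None if absent.
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

os.makedirs(OUT_DIR, exist_ok=True)
params = {
    "action": "query",
    "generator": "categorymembers",
    "gcmtitle": CATEGORY,
    "gcmtype": "file",
    "gcmlimit": "500",
    "prop": "imageinfo",
    "iiprop": "url|sha1",
    "format": "json",
}
session = requests.Session()
while True:
    data = session.get(API, params=params).json()
    for page in data.get("query", {}).get("pages", {}).values():
        info = page["imageinfo"][0]
        path = os.path.join(OUT_DIR, page["title"].removeprefix("File:").replace("/", "_"))
        if local_sha1(path) == info["sha1"]:
            continue  # already up to date locally, skip re-download
        with open(path, "wb") as f:
            f.write(session.get(info["url"]).content)
    if "continue" not in data:
        break
    params.update(data["continue"])  # follow the API's pagination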
Find your target
Categories on Wikimedia Commons are organized as a tree: a root category (e.g. Category:Lingua Libre pronunciation) contains one subcategory per language, which in turn contains the audio files.
List target files with PetScan: given a target category on Commons, PetScan returns the list of files it contains and can export it as .csv.
Download target files with Wikiget: Wikiget then downloads the listed files. (A minimal alternative to this step is sketched below.)
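If you prefer not to install Wikiget, the exported PetScan .csv can be consumed directly with a few lines of Python. A hedged sketch, assuming the export was saved as petscan_export.csv (a placeholder name) and that the CSV has a title column, as PetScan's current CSV layout does; Special:FilePath redirects to the latest version of each file:

import csv
import requests
from urllib.parse import quote

FILEPATH = "https://commons.wikimedia.org/wiki/Special:FilePath/"

with open("petscan_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        name = row["title"].removeprefix("File:").replace("_", " ")
        # Special:FilePath redirects to the current version of the file
        r = requests.get(FILEPATH + quote(name))
        r.raise_for_status()
        with open(name.replace("/", "_"), "wb") as out:
            out.write(r.content)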
Comments:
Successfully tested in November 2021: 730,000 audio files downloaded in 20 hours, a sustained average of 10 downloads/sec.
Some deleted files on Commons may cause Wikiget to return an error and pause; the script then has to be resumed manually. The occurrence rate has been reported to be around 1 in 30,000 files. A fix is underway; support the request on GitHub.
WikiGet therefore requires a volunteer to supervise the script while it runs.
As of December 2021, WikiGet does not support multi-threaded downloads. To make downloading more efficient, it is therefore recommended to run the script in 20-30 terminal windows simultaneously (or to parallelize within a single process, as in the sketch after this list). Each terminal running WikiGet consumes an average of 20 kB/s.
WikiGet requires a stable internet connection: even a one-second disruption stops the download, which then requires a manual restart of the script.
Any question about downloading datasets can be asked on the Lingua Libre Discord server: https://discord.gg/2WECKUHj
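Because WikiGet is single-threaded, a throughput gain similar to the 20-30 terminals mentioned above can be obtained in a single process with a thread pool. A minimal Python sketch, assuming a list of direct file URLs has already been gathered by one of the listing steps:

from concurrent.futures import ThreadPoolExecutor
import os
import requests

def fetch(url, out_dir="downloads"):
    # Download one file; errors are returned rather than raised, so one
    # failed file (cf. the deleted-file issue above) does not stop the batch.
    name = os.path.join(out_dir, url.rsplit("/", 1)[-1])
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        with open(name, "wb") as f:
            f.write(r.content)
        return url, None
    except Exception as exc:
        return url, exc

urls: list[str] = []  # fill with direct Commons file URLs from a listing step above
os.makedirs("downloads", exist_ok=True)
with ThreadPoolExecutor(max_workers=25) as pool:  # ~20-30 parallel streams
    for url, err in pool.map(fetch, urls):
        if err:
            print(f"retry later: {url}: {err}")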
NodeJS
Dependencies: git, nodejs, npm.
A WikiapiJS script allows downloading a target category's files, or a root category with its subcategories and the files they contain, at about 1,400 audio files per hour.
WikiapiJS is the NodeJS / NPM package allowing scripted API calls to Wikimedia Commons and Lingua Libre.
Successfully tested in December 2021: 400 audio files downloaded in 16 minutes, a sustained average of 0.4 downloads/sec.
Successfully processes a single category's files.
Successfully processes a root category and its subcategories' files, generating ./isocode/ folders (the sketch below mirrors this traversal).
Scalability tests for resilience with larger requests (500 to 100,000 items) are still required.
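WikiapiJS call signatures vary between versions, so here is the same root-category traversal expressed against the plain MediaWiki API in Python; deriving the ISO code from the subcategory name is an assumption based on the "Lingua Libre pronunciation-xxx" naming scheme:

import os
import requests
from urllib.parse import quote

API = "https://commons.wikimedia.org/w/api.php"
ROOT = "Category:Lingua Libre pronunciation"  # assumed root category name
session = requests.Session()

def members(cat, cmtype):
    # Yield titles of a category's members of the given type ("subcat" or "file").
    params = {"action": "query", "list": "categorymembers", "cmtitle": cat,
              "cmtype": cmtype, "cmlimit": "500", "format": "json"}
    while True:
        data = session.get(API, params=params).json()
        yield from (m["title"] for m in data["query"]["categorymembers"])
        if "continue" not in data:
            return
        params.update(data["continue"])

for subcat in members(ROOT, "subcat"):
    isocode = subcat.rsplit("-", 1)[-1]  # "...pronunciation-fra" -> "fra" (naming assumed)
    os.makedirs(isocode, exist_ok=True)
    for title in members(subcat, "file"):
        name = title.removeprefix("File:")
        url = "https://commons.wikimedia.org/wiki/Special:FilePath/" + quote(name)
        with open(os.path.join(isocode, name.replace("/", "_")), "wb") as out:
            out.write(session.get(url).content)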
Python (slow)
Dependencies: python.
CommonsDownloadTool.py is a Python script which formerly created the datasets for Lingua Libre. It can be hacked and tinkered with to suit your needs, for example to download all datasets as zips.
Another script, by Languageseeker, downloads all the pronunciations added by a given user into a folder, by first querying the Lingua Libre database and then downloading the files from Commons; see its GitHub repository. Languageseeker (talk) 01:57, 24 May 2022 (UTC)
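For the per-user case, the listing step can also be done directly against the Commons API instead of the Lingua Libre database: list the user's uploads and keep the Lingua Libre recordings, whose file names start with the LL- prefix. A sketch; the username is a placeholder:

import requests

API = "https://commons.wikimedia.org/w/api.php"
params = {
    "action": "query",
    "list": "allimages",
    "aisort": "timestamp",   # required when filtering by uploader
    "aiuser": "ExampleUser", # placeholder: the speaker's Commons username
    "ailimit": "500",
    "aiprop": "url",
    "format": "json",
}
urls = []
while True:
    data = requests.get(API, params=params).json()
    for img in data["query"]["allimages"]:
        if img["name"].startswith("LL-"):  # Lingua Libre recordings use this prefix
            urls.append(img["url"])
    if "continue" not in data:
        break
    params.update(data["continue"])
print(f"{len(urls)} recordings to download")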
Anki Extension for Lingua Libre
The Lingua Libre and Forvo add-on has a number of advanced options to improve search results and can run either as a batch operation or on an individual note.
By default, it first checks Lingua Libre and, if there are no results there, it then checks Forvo. To run it as a pure Lingua Libre extension, set "disable_Forvo" to True in the add-on's configuration, as in the snippet below.
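Anki add-on options are edited as JSON via Tools > Add-ons > Config. To restrict this add-on to Lingua Libre only, the relevant key would look like the following (other keys, which vary with the add-on version, are omitted):

{
  "disable_Forvo": true
}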
Java
Dependencies: java.
Imker is a Wikimedia Commons batch downloading tool; its command-line manual is reproduced below.
$ java -jar imker-cli.jar -o ./myFolder/ -c 'CategoryName'   # downloads all media within Wikimedia Commons's category "CategoryName"
Comments:
Not yet used by any Lingua Libre member. If you try it, please share your experience of this tool.
Manual
Imker -- Wikimedia Commons batch downloading tool.
Usage: java -jar imker-cli.jar [options]
Options:
--category, -c
Use the specified Wiki category as download source.
--domain, -d
Wiki domain to fetch from
Default: commons.wikimedia.org
--file, -f
Use the specified local file as download source.
* --outfolder, -o
The output folder.
--page, -p
Use the specified Wiki page as download source.
The download source must be ONE of the following:
↳ A Wiki category (Example: --category="Denver, Colorado")
↳ A Wiki page (Example: --page="Sandboarding")
↳ A local file (Example: --file="Documents/files.txt"; One filename per line!)
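For the third source type, the input is a plain text file with one file name per line; the two names below are hypothetical placeholders:

LL-Q150 (fra)-ExampleSpeaker-bonjour.wav
LL-Q150 (fra)-ExampleSpeaker-merci.wav

$ java -jar imker-cli.jar --outfolder ./myFolder/ --file files.txt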