imageharvest.py is a script of the Pywikibot framework used to copy multiple images to a wiki. It takes a URL as an argument, finds all images (and other files whose extensions are listed in 'fileformats') that the URL refers to, and asks whether to upload each of them. Any further arguments are treated as text that is common to all the file descriptions.
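The extension filtering can be pictured with a small sketch; the tuple below is only illustrative, not the bot's actual 'fileformats' list:

```python
# Illustrative stand-in for the bot's 'fileformats' setting.
FILEFORMATS = ("jpg", "jpeg", "png", "gif", "svg", "ogg")

def is_candidate(url):
    """Return True if the URL ends in one of the accepted extensions."""
    extension = url.lower().rsplit(".", 1)[-1]
    return extension in FILEFORMATS

print(is_candidate("http://www.sitename.org/folder/photo.JPG"))   # True
print(is_candidate("http://www.sitename.org/folder/index.html"))  # False
```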
A second use is to fetch a number of images whose URLs differ only in a number. To do this, use the command-line option "-pattern" and give the URL with the variable part replaced by '$' (if that character occurs in the URL itself, you will have to change the bot code, my apologies).
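The '$' substitution can be sketched as follows; the function name and the way the number range is supplied are illustrative, not the bot's actual code:

```python
def expand_pattern(url_pattern, first, last):
    """Replace the '$' placeholder with each number in the given range."""
    return [url_pattern.replace("$", str(n)) for n in range(first, last + 1)]

for url in expand_pattern("http://www.sitename.org/img$.jpg", 1, 3):
    print(url)
```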
-shown      Choose images shown on the page as well as those linked from it
-justshown  Choose _only_ images shown on the page, not those linked from it
python imageharvest.py http://www.sitename.org/folder
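The distinction behind -shown and -justshown can be illustrated with a minimal parser sketch (this is not the bot's implementation): images shown on the page appear as &lt;img&gt; tags, while linked files appear as &lt;a href&gt; targets.

```python
from html.parser import HTMLParser

class ImageCollector(HTMLParser):
    """Collect 'shown' images (<img src>) and 'linked' files (<a href>)."""

    def __init__(self):
        super().__init__()
        self.shown = []   # what -justshown would keep
        self.linked = []  # the default set; -shown adds shown images to it

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.shown.append(attrs["src"])
        elif tag == "a" and "href" in attrs:
            self.linked.append(attrs["href"])

collector = ImageCollector()
collector.feed('<a href="scan1.png">scan</a><img src="thumb1.jpg">')
print(collector.shown)   # ['thumb1.jpg']
print(collector.linked)  # ['scan1.png']
```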
Global arguments available for all bots
{|
! Parameter !! Description !! user-config.py parameter
|-
| -family:xyz || Set the family of the wiki you want to work on, e.g. wikipedia, wiktionary, commons, wikitravel, …. This will override the configuration in user-config.py. ||
|-
| -lang:xx || Set the language of the wiki you want to work on, where xx is the language code. This will override the configuration in user-config.py. ||
|-
| -log || Enable the logfile. Logs will be stored in the logs subdirectory. ||
|-
| -log:xyz || Enable the logfile, using xyz as the filename. ||
|-
| -nolog || Disable the logfile (if it is enabled by default). ||
|-
| || Set the minimum time (in seconds) the bot will wait between saving pages. ||
|-
| || Make the program output more detailed messages than usual to the standard output about its current work or progress while it is proceeding. This may be helpful when debugging or dealing with unusual situations. || not selected
|}
Note: Commons uses 'commons' for both family and language; Meta uses 'meta' for both.
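Putting the global arguments together with the script, an invocation might look like this (the target URL and the description text are placeholders):

```shell
# Upload files linked from the page to Wikimedia Commons,
# with a logfile enabled and a shared description line.
python imageharvest.py -family:commons -lang:commons -log \
    http://www.sitename.org/folder "Scanned by the example project"
```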