A graphical implementation that displays a grid with the number of implementations for each Rosetta Code task, together with total task and implementation counts. It uses MediaWiki API calls to fetch the tasks/categories as JSON, and honours the API's data-limit and continuation requirements so that 100% of the items are consumed.
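The continuation requirement mentioned above can be sketched as follows. This is a rough Python illustration of the MediaWiki continuation protocol, not the implementation itself: `fetch_page` is a hypothetical stand-in for the real HTTP call (it simulates a paged category listing), and the category/parameter names are only examples.

```python
# Sketch of the MediaWiki API continuation protocol.
# fetch_page() is a hypothetical stand-in for a real HTTP request such as
# requests.get("https://rosettacode.org/w/api.php", params=params).json();
# here it simulates a category with 250 members served 100 at a time.

def fetch_page(params):
    members = [f"Task_{i}" for i in range(250)]
    start = int(params.get("cmcontinue", 0))
    page = members[start:start + 100]
    reply = {"query": {"categorymembers": [{"title": t} for t in page]}}
    if start + 100 < len(members):
        # The API signals more data with a "continue" object that the
        # client must echo back verbatim on the next request.
        reply["continue"] = {"cmcontinue": str(start + 100), "continue": "-||"}
    return reply

def all_category_members():
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": "Category:Programming_Tasks",
              "cmlimit": "100", "format": "json"}
    titles = []
    while True:
        reply = fetch_page(params)
        titles += [m["title"] for m in reply["query"]["categorymembers"]]
        if "continue" not in reply:
            break
        params.update(reply["continue"])  # echo continuation tokens back
    return titles
```

Stopping after the first page (i.e. ignoring the `continue` object) would silently undercount tasks, which is why the description stresses consuming 100% of the items.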

jq does not duplicate the functionality of `curl` but works seamlessly with it,
as illustrated by the following bash script. Note in particular the use of jq's
`@uri` filter in the bash function `titles`.

Retrieves counts for both Tasks and Draft Tasks, and saves/displays the results as a sortable wikitable rather than a static list. Click a column header to sort on that column; for a secondary sort, hold down the Shift key and click a second column header. Tasks have a gray (default) background; Draft Tasks have a yellow background.
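The sorting behaviour comes for free from MediaWiki's built-in `wikitable sortable` table class; a minimal sketch of the kind of markup generated (the column names and row values here are illustrative only):

```wikitext
{| class="wikitable sortable"
! Task !! Implementations
|-
| 100 doors || 300
|- style="background:yellow"
| Some draft task || 12
|}
```

The row-level `style="background:yellow"` is what distinguishes Draft Tasks from the default gray background of ordinary Tasks.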

Counts the number of occurrences of "{{header|" (nb: not "=={{header|") via the web API (but gets the task list via scraping).
Since downloading all the pages can be very slow, this uses a cache.
Limiting it (not done here) to "Phix" fairly obviously speeds it up tenfold :-)

function open_download(string filename, url)
    bool refetch = true
    if get_file_type("rc_cache")!=FILETYPE_DIRECTORY then
        if not create_directory("rc_cache") then
            crash("cannot create rc_cache directory")
        end if
    end if
    filename = join_path({"rc_cache",filename})
    if file_exists(filename) then
        -- use existing file if <= refresh_cache (365 days) old
        sequence last_mod = get_file_date(filename)     -- (0.8.1+)
        atom delta = timedate_diff(last_mod,date())
        refetch = (delta>refresh_cache) or get_file_size(filename)=0
    else
        string directory = get_file_path(filename)
        if get_file_type(directory)!=FILETYPE_DIRECTORY then
            if not create_directory(directory,make_parent:=true) then
                crash("cannot create %s directory",{directory})
            end if
        end if
    end if
    object text
    if not refetch then
        text = trim(get_text(filename))
        refetch = (not sequence(text)) or (length(text)<10)
    end if
    if refetch then
        progress("Downloading %s...\r",(unknown))
        if curl=NULL then
            curl_global_init()
            curl = curl_easy_init()
            pErrorBuffer = allocate(CURL_ERROR_SIZE)
            curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, pErrorBuffer)
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb)
        end if
        url = substitute(url,"%3A",":")
        url = substitute(url,"%2A","*")
        curl_easy_setopt(curl, CURLOPT_URL, url)
        integer fn = open(filename,"wb")
        if fn=-1 then ?9/0 end if
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, fn)
        while true do
            CURLcode res = curl_easy_perform(curl)
            if res=CURLE_OK then exit end if
            string error = sprintf("%d",res)
            if res=CURLE_COULDNT_RESOLVE_HOST then
                error &= " [CURLE_COULDNT_RESOLVE_HOST]"
            end if
            progress("Error %s downloading file, retry?(Y/N):",{error})
            if lower(wait_key())!='y' then abort(0) end if
            printf(1,"Y\n")
        end while
        close(fn)
        refresh_cache += timedelta(days:=1) -- did I mention it is slow?
        text = get_text(filename)
    end if
    return text
end function
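The cache test at the top of `open_download` (refetch when the file is missing, empty, or older than `refresh_cache` days) can be sketched in Python. This is a hedged illustration of the same idea, not a translation of the Phix script; the function and constant names are invented here:

```python
import os
import time

REFRESH_CACHE_DAYS = 365  # mirrors the script's refresh_cache setting

def needs_refetch(path, max_age_days=REFRESH_CACHE_DAYS):
    # Refetch when the cached file is absent, empty, or its last
    # modification time is older than max_age_days.
    if not os.path.exists(path):
        return True
    if os.path.getsize(path) == 0:
        return True
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return age_days > max_age_days
```

Treating an empty file as stale matters in practice: an interrupted download leaves a zero-byte file in the cache, and without that check it would be served forever as a "fresh" page.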