[00:07] *** benjins has joined #wikiteam
[00:40] *** Wingy has quit IRC (Read error: Operation timed out)
[00:44] *** Wingy has joined #wikiteam
[07:22] *** VADemon_ has quit IRC (Read error: Operation timed out)
[16:13] This seems archival-worthy: https://en.wikichip.org/wiki/WikiChip No dump on IA as far as I can see. It's a slightly concealed MW: https://en.wikichip.org/wiki/Special:Version
[17:32] *** systwi has quit IRC (Read error: Operation timed out)
[17:33] *** systwi has joined #wikiteam
[18:32] *** freeacc has joined #wikiteam
[18:32] hey there
[18:33] i would like to be able to create a .zim file for offline wiki viewing
[18:33] wiki server is a mediawiki instance
[18:34] tried to use mwoffliner, but I keep getting errors
[18:34] (connection timeout, and other network-related ones)
[18:34] i just downloaded the WikiTeam/wikiteam repo
[18:35] and am about to use the dumper python script
[18:35] would like to know if there is any tool that i could use to convert the generated dump to .zim
[18:36] thanks
[18:36] server in question: outward.gamepedia.com, in case there already is a dump somewhere
[18:49] anyone there?
[18:51] *** kiska has quit IRC (Remote host closed the connection)
[18:52] *** kiska has joined #wikiteam
[18:52] *** svchfoo1 sets mode: +o kiska
[18:57] Hi freeacc. Hang around and someone in the know might get back to you eventually. This is IRC, so expect that to take hours or even days.
[18:58] JAA: thanks
[20:23] freeacc: I'm not sure I understand. You set up your own Parsoid instance backed by an XML import of that wiki?
[20:23] Or how did you expect mwoffliner to help?
[20:24] Nemo_bis: i thought it was some sort of scraper that saves the info inside a .zim file
[20:24] freeacc: yes, but it scrapes Parsoid
[20:25] and parsoid is some kind of markup2html converter, right?
[20:25] It's the node.js version of the PHP parser
[20:25] And don't get me started on how terrible the whole idea is :D
[20:26] anyways, here's the command i'm running.
[20:26] This is a small wiki, I doubt mwoffliner is the best solution
[20:27] You might be better off with the usual old way of using dumpHTML on your own mirror of the wiki
[20:27] https://www.irccloud.com/pastebin/oLcaOGko/
[20:27] oh. ok
[20:27] Hm yes. In theory you can get the HTML from the API, but in practice that path is not used much, so who knows what happens
[20:28] Wikis are often not particularly happy about being asked to reparse their entire content for you
[20:28] If you're lucky and dumpHTML works (small chance, but who knows), you might be done in a few minutes with that route
[20:29] oh. will try that, then.
[20:29] Otherwise, please proceed to #kiwix on freenode, because the last time I used mwoffliner was possibly 5 years ago
[20:31] i also think that importing the dumpgenerator output would be much simpler
[20:31] Not "simpler", because you need to install MediaWiki and all the Gamepedia extensions and then run the parsing yourself
[20:31] However it *might* happen to be faster or more reliable on a small wiki like that
[20:32] It's the way we used to make ZIM files until 2008 or so
[20:32] Sorry, since 2008 or so and until mwoffliner was created
[20:33] it just sucks that the tooling around such a task isn't so accessible.
[20:33] Well, the alternative for most CMSes is that there's nothing at all :)
[20:34] ok thanks a lot.
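For context, the WikiTeam dumper being set up above is normally invoked along these lines (per the repo's README; the exact /api.php path for outward.gamepedia.com is an assumption, since Gamepedia wikis don't always expose it at /w/api.php):

    python dumpgenerator.py --api=https://outward.gamepedia.com/api.php --xml --images

--xml fetches the page text (full history by default) and --images fetches the media files; the resulting dump directory is what would be imported into a local MediaWiki for the dumpHTML route discussed above.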
[20:34] (Well, last time I checked, a few years ago, there was a "static website exporter" for Joomla which had gone stale a few years before)
[20:34] that's true, i mean i'm thankful that i even have the option to create a dump
[21:37] JAA: not the nicest wiki on the block: requests.exceptions.HTTPError: 521 Server Error: status code 521 for url: https://en.wikichip.org/w/api.php?format=json&continue=&meta=siteinfo%7Cuserinfo%7Cuserinfo&action=query&siprop=general%7Cnamespaces&uiprop=groups%7Crights%7Cblockinfo%7Chasmsg
[21:38] they already blocked special:export so it's probably only a matter of time before they block all relevant APIs as well
[22:58] Nemo_bis: Aw, too bad. Thanks for trying though.
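The 521 in that traceback is Cloudflare's "origin web server is down" status, which the requests library surfaces as an HTTPError. A minimal sketch of the kind of API probe that produced it, using only requests (the parameters loosely mirror the failing URL above, trimmed to the siteinfo part):

    import requests

    # Probe the MediaWiki API the same way the dump script does; a Cloudflare
    # 521 will surface here as requests.exceptions.HTTPError.
    url = "https://en.wikichip.org/w/api.php"
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "general|namespaces",
        "format": "json",
    }

    try:
        r = requests.get(url, params=params, timeout=30)
        r.raise_for_status()  # raises HTTPError on 4xx/5xx, including 521
        print(r.json()["query"]["general"]["sitename"])
    except requests.exceptions.HTTPError as e:
        print("API blocked or down:", e)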