[00:04] *** BnAboyZ has quit IRC (Read error: Operation timed out)
[00:17] *** Tenebrae has joined #wikiteam
[00:29] *** BnAboyZ has joined #wikiteam
[00:31] *** MrRadar2 has joined #wikiteam
[01:04] *** benjinsmi has quit IRC (irc.efnet.nl efnet.deic.eu)
[01:04] *** agris_ has quit IRC (irc.efnet.nl efnet.deic.eu)
[01:04] *** TC01 has quit IRC (irc.efnet.nl efnet.deic.eu)
[01:05] *** benjins has joined #wikiteam
[01:07] *** TC01_ has joined #wikiteam
[01:08] *** VADemon has quit IRC (Read error: Operation timed out)
[01:23] *** agris has joined #wikiteam
[01:41] hello
[01:41] I'm archiving a MASSIVE wiki with hundreds of thousands of articles spanning decades
[01:42] the site has been going down and coming back up for months
[01:42] because the site is so large, I need to be selective about what I download first, in case the whole thing goes offline again before I can finish the dump
[01:43] If I do a dumpgenerator --xml thewiki.whatever now, how can I do a second pass just for images later and merge the two dumps?
[02:35] *** zerkalo has quit IRC (Remote host closed the connection)
[02:39] *** VADemon has joined #wikiteam
[02:46] *** VADemon has quit IRC (Read error: Connection reset by peer)
[02:46] *** VADemon has joined #wikiteam
[04:53] *** Zerote has joined #wikiteam
[07:12] agris: yes
[07:12] usually the XML for the pages takes the most time; if you use --xmlrevisions you cannot really resume, but it might be fast enough to complete in a single sitting, depending on how the revisions are scattered
[07:14] We already have me-pedia.org https://archive.org/details/wiki-me_pediaorg it will be updated in a future run
[15:05] *** Zerote_ has joined #wikiteam
[15:11] *** Zerote has quit IRC (Read error: Operation timed out)
[18:46] *** Zerote__ has joined #wikiteam
[18:52] *** Zerote_ has quit IRC (Read error: Operation timed out)
[20:19] *** Zerote__ has quit IRC (Read error: Operation timed out)
[20:52] *** Zerote has joined #wikiteam
[22:11] *** Zerote has quit IRC (Read error: Operation timed out)
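
A minimal sketch of the two-pass approach discussed above, assuming the standard wikiteam dumpgenerator options (--xml, --xmlrevisions, --images, --resume, --path); the wiki URL is the chat's own placeholder and the dump directory name is hypothetical:

    # Pass 1 (now): grab only the page XML, the slow and most valuable part.
    # --xmlrevisions pulls revisions via the API but cannot really be resumed mid-run.
    dumpgenerator --xml --xmlrevisions thewiki.whatever

    # Pass 2 (later): fetch images into the same dump directory so both passes
    # end up in one merged dump; --resume skips anything already downloaded.
    # "thewikiwhatever-YYYYMMDD-wikidump" stands for whatever directory pass 1 created.
    dumpgenerator --images --resume --path=thewikiwhatever-YYYYMMDD-wikidump thewiki.whatever

If --resume does not cooperate with a changed option set, an alternative is to run a separate --images-only dump and merge by hand, copying the images/ directory and the *-images.txt list from the second dump into the first.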