[07:30] underscor, actually 80
[07:30] now
[07:30] Of course disk space ran out before I woke up, so many dumps were killed. I deleted all images and xml in the directories of finished (compressed) dumps and restarted everything, sigh.
[11:40] Hello emijrp
[11:40] hi
[11:40] Of course disk space ran out before I woke up, so many dumps were killed. I deleted all images and xml in the directories of finished (compressed) dumps and restarted everything, sigh.
[11:41] why don't you download just a 100-wiki list, split in 50 threads?
[11:41] For unknown reasons many jobs started compression immediately (I expected most dumps not to be completed), so I have a dozen 7z jobs now and load at 50, sigh.
[11:41] Because I want to start them all and then forget about it.
[11:42] if you don't have enough disk space, that is not possible
[11:42] I just have to clean up; I didn't expect them to download 50 GiB in a night.
[11:43] Not watch and restart continuously. Also, if a job has 10 wikis there's a higher probability that the work is split equally; otherwise one job might get a huge wiki, 5 minuscule wikis, or a wrong API that doesn't do anything, etc.
[11:43] On average 60 jobs were consuming only half a core. :)
[11:44] you are wrong, the optimal way is 1 item per list
[11:44] so, it will take the time of the longest wiki
[11:44] No, because that way I have to restart ten times as many jobs.
[11:45] Or, ten times more often.
[16:21] I discovered what cluttered my disk: a wikicafe dump and a bunch of Commons images
[20:54] underscor, what about the wikis upload script? :D
[20:54] come on, can't be that hard for you
[20:54] I already have 423 wikis to upload
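
The job-splitting tradeoff discussed above (a few large wiki lists that can run unattended versus one wiki per job, which finishes in the time of the longest wiki but needs far more restarting) can be illustrated with a short scheduling sketch. This is a minimal, hypothetical example, not the script actually used in the conversation: the wikis.txt filename, chunk size, and worker count are assumptions, and the dumpgenerator.py invocation follows wikiteam's usual --api/--xml/--images interface, which may differ from what the participants ran.

```python
#!/usr/bin/env python
# Sketch: split a list of wiki API URLs into fixed-size chunks and run a
# bounded number of dump jobs in parallel. Hypothetical filenames and
# parameters; adjust the dumpgenerator.py command line to your setup.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WIKIS_FILE = "wikis.txt"   # one API URL per line (assumed input file)
CHUNK_SIZE = 10            # wikis per job, as suggested in the chat
MAX_WORKERS = 50           # jobs allowed to run at the same time

def run_chunk(chunk):
    """Dump each wiki in the chunk sequentially; one chunk is one job."""
    for api_url in chunk:
        subprocess.call([
            "python", "dumpgenerator.py",
            "--api=" + api_url, "--xml", "--images",
        ])

def main():
    with open(WIKIS_FILE) as f:
        wikis = [line.strip() for line in f if line.strip()]
    # Chunks of CHUNK_SIZE wikis: large enough to average out a huge wiki or
    # a dead API against the small ones, small enough that a failed job does
    # not force restarting the whole list.
    chunks = [wikis[i:i + CHUNK_SIZE] for i in range(0, len(wikis), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        # list() forces the map to complete and re-raises worker exceptions.
        list(pool.map(run_chunk, chunks))

if __name__ == "__main__":
    main()
```

With one wiki per chunk the whole run would finish as soon as the longest single dump does, but every stalled or failed dump means restarting its own job; with chunks of ten, the operator can start everything once and mostly leave it alone, at the cost of some uneven job lengths.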