Time | Nickname | Message
07:30 | Nemo_bis | underscor, actually 80
07:30 | Nemo_bis | now
07:30 | Nemo_bis | Of course disk space ran out before I woke up, so many dumps were killed. I deleted all images and XML in the directories of finished (compressed) dumps and restarted everything, sigh.
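The cleanup described above (deleting raw images and XML only where a compressed dump already exists) could be scripted roughly as follows; a minimal sketch, where the dumps/ layout and the .7z file sitting next to each wiki directory are assumptions, not the actual setup used here:

```python
# Sketch: free disk space by removing raw images and XML in any wiki
# directory that already has a finished .7z archive next to it.
# Assumed layout: dumps/<wiki>/ with the compressed dump at dumps/<wiki>.7z.
import glob
import os

for archive in glob.glob('dumps/*.7z'):
    wikidir = archive[:-len('.7z')]          # dumps/<wiki>.7z -> dumps/<wiki>
    if not os.path.isdir(wikidir):
        continue
    for pattern in ('*.xml', os.path.join('images', '*')):
        for path in glob.glob(os.path.join(wikidir, pattern)):
            if os.path.isfile(path):
                os.remove(path)              # keep only the compressed dump
```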
11:40 | Nemo_bis | Hello emijrp
11:40 | emijrp | hi
11:40 | Nemo_bis | <Nemo_bis> Of course disk space ran out before I woke up, so many dumps were killed. I deleted all images and XML in the directories of finished (compressed) dumps and restarted everything, sigh
11:41 | emijrp | why don't you download just a 100-wiki list, split into 50 threads?
11:41 | Nemo_bis | For unknown reasons many jobs started compression immediately (I expected most dumps not to be completed), so I have a dozen 7z jobs now and load at 50, sigh.
11:41
🔗
|
Nemo_bis |
Because I want to start them all and the forget about it. |
11:42 | emijrp | if you don't have enough disk space, that is not possible
11:42
🔗
|
Nemo_bis |
I just have to cleanup, I didn't expect them to download 5o GiB in a night. |
11:43 | Nemo_bis | Not watch and restart continuously. Also, if a job has 10 wikis there's a higher probability that the work gets split equally; otherwise one job might get a huge wiki, 5 minuscule wikis, a wrong API that doesn't do anything, etc.
11:43
🔗
|
Nemo_bis |
On average 60 jobs were consuming only half a core. :) |
11:44 | emijrp | you are wrong, the optimal way is 1 item per list
11:44 | emijrp | so, it will take the time of the longest wiki
11:44
🔗
|
Nemo_bis |
No, because that way I have to restart tenfold the jobs. |
11:45 | Nemo_bis | Or, ten times more often.
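The scheduling trade-off argued above (one wiki per job, so total time is bounded by the slowest wiki, versus small batches that need fewer restarts) could be sketched like this; wikis.txt, the chunk size, the worker count and the dumpgenerator.py options are assumptions for illustration, not the commands actually run here:

```python
# Sketch: split a list of wiki API URLs into fixed-size chunks and run one
# dumpgenerator.py job per chunk, with a bounded number of jobs in parallel.
# Assumes wikis.txt has one api.php URL per line and that dumpgenerator.py
# takes --api, --xml and --images as in the WikiTeam README.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 10    # wikis per job; 1 would give emijrp's "1 item per list"
MAX_PARALLEL = 5   # concurrent jobs, to keep load and disk usage bounded

def dump_chunk(wikis):
    """Dump the wikis of one chunk sequentially."""
    for api in wikis:
        subprocess.call(['python', 'dumpgenerator.py',
                         '--api=' + api, '--xml', '--images'])

with open('wikis.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

chunks = [urls[i:i + CHUNK_SIZE] for i in range(0, len(urls), CHUNK_SIZE)]

with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    list(pool.map(dump_chunk, chunks))   # wait for all chunks to finish
```

With CHUNK_SIZE = 1 the wall-clock time is bounded by the single largest wiki, at the cost of many more jobs to watch and restart; with larger chunks an unlucky chunk can serialize several big wikis, which is the disagreement above.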
16:21 | Nemo_bis | I discovered what cluttered my disk: a wikicafe dump and a bunch of Commons images
20:54 | Nemo_bis | underscor, what about the wikis upload script? :D
20:54 | Nemo_bis | come on, can't be that hard for you
20:54 | Nemo_bis | I already have 423 wikis to upload
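A bulk upload step like the one being requested could look roughly like this with the internetarchive Python library; this is only a sketch, not underscor's script, and the item identifiers, file layout and metadata fields are made up for illustration:

```python
# Sketch: upload every finished .7z wiki dump in dumps/ as its own
# Internet Archive item. Identifiers and metadata are illustrative only.
import glob
import os
from internetarchive import upload

for path in glob.glob('dumps/*.7z'):
    identifier = 'wiki-' + os.path.basename(path)[:-len('.7z')]
    upload(identifier,
           files=[path],
           metadata={'mediatype': 'web',
                     'subject': 'wiki; wikiteam',
                     'title': identifier})
    print('uploaded %s as %s' % (path, identifier))
```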