[00:10] I've finished reading all the threads in [[Support]] from the beginning of time \o/
[00:10] that was about https://translatewiki.net
[00:12] Coderjoe: are you sure about it? I'm almost certain that it once worked for me, saving me some GBs of upload
[00:56] greetings from HOPE
[03:21] underscor: did you get these: http://www.demonoid.me/files/?uid=3783180&seeded=2
[03:22] ImagineFX 49-74 DVDs
[03:22] sadly I think it's the contents of the ISOs and the ISOs themselves
[03:22] but not sure
[03:35] I'm planning on getting them now
[05:36] Um, hello?
[05:37] Is there anyone who can help me out? I'm the owner of a couple of sites that are listed in the Parodius Networking project.
[05:59] channel is a little slow, especially at this time
[08:44] has anyone given planetphillip.com a try? I will try to let it pass
[16:33] uploaded: http://archive.org/details/cdrom-3d-world-118
[16:33] uploaded: http://archive.org/details/cdrom-3d-world-115
[17:40] uploaded: http://archive.org/details/cdrom-3d-world-124
[17:43] uploaded: http://archive.org/details/cdrom-3d-world-199
[17:45] very funny
[17:48] :)
[18:45] uploading this now: http://archive.org/details/cdrom-inside-mac-games-jan-feb-1995
[18:45] it's an HFS ISO image
[19:39] finally uploaded: http://archive.org/details/cdrom-inside-mac-games-jan-feb-1995
[21:07] we don't need the status updates, really
[21:08] #archiveteam-godane
[21:08] Nemo_bis: well, I think for it to be able to resume, it would need to do a HEAD request to see what's already been uploaded, and the last time I poked, the s3api did not give the correct size in the HEAD response
[21:09] (actually, I think it might have been nginx's fault, but I am not sure)
[21:09] s3 in general doesn't support resume like that, afaik
[21:09] Anyway, use multipart and send the missing pieces at the end
[21:10] https://gist.github.com/764224 this is what we use internally for a lot of stuff
[21:10] well, you have to find out SOMEHOW what the current state is in order to resume.
[21:10] particularly across multiple runs
[21:11] Well, yeah. In that case, you have to look and see which pieces are missing, and send those, and then send the multiput finished command
[21:11] it's not very clean
[21:12] anyway, if you use that python script, modify line 162 to:
[21:12] s3 = boto.connect_s3(key, secret, host='s3.us.archive.org', is_secure=False)
[21:12] and it will handle automatic retries and stuff
[21:13] Coderjoe: yes, dunno
[21:13] I am retrying planetphillip :(
[21:28] meh
[21:28] I was not aware that Amazon had to make multipart stupidly more complex
[21:36] well, I suppose this makes sense for multipart
[21:36] but for resuming, without parallel uploads, multipart is stupidly complex
[21:43] yeah
[21:43] :(
[22:42] shit, I was working on planetphillip too. Duplicate effort. I will stop until Schbirid gets back
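
For context on the 21:09-21:12 exchange above: the suggested way to resume a large upload is to ask the server which multipart pieces it already has, upload only the missing ones, and then send the "finished" command. Below is a minimal boto 2 sketch of that flow against the archive.org s3api, reusing the connection line quoted at 21:12. The credentials, item name, file name, and part size are placeholders, not values from the log, and the gist's actual script may structure this differently.

import os
import boto

# Placeholder values -- none of these come from the log above.
KEY, SECRET = 'IA_ACCESS_KEY', 'IA_SECRET_KEY'
ITEM, FILENAME = 'example-item', 'example-file.iso'
PART_SIZE = 100 * 1024 * 1024  # 100 MB per part

# The connection line quoted at 21:12: point boto at the archive.org s3api.
conn = boto.connect_s3(KEY, SECRET, host='s3.us.archive.org', is_secure=False)
bucket = conn.get_bucket(ITEM)

# Reuse an in-progress multipart upload for this key if one exists; otherwise start one.
mp = None
for upload in bucket.get_all_multipart_uploads():
    if upload.key_name == FILENAME:
        mp = upload
        break
if mp is None:
    mp = bucket.initiate_multipart_upload(FILENAME)

# Ask the server which part numbers it already has ("see which pieces are missing").
have = set(int(part.part_number) for part in mp.get_all_parts())

# Send only the missing parts, then finish the upload.
size = os.path.getsize(FILENAME)
total_parts = (size + PART_SIZE - 1) // PART_SIZE
with open(FILENAME, 'rb') as fp:
    for part_num in range(1, total_parts + 1):
        if part_num in have:
            continue
        offset = (part_num - 1) * PART_SIZE
        fp.seek(offset)
        mp.upload_part_from_file(fp, part_num, size=min(PART_SIZE, size - offset))
mp.complete_upload()  # the "multiput finished" step

As the 21:36 remarks note, this is more machinery than a plain byte-range resume would need, but it is the shape of resume the S3-style API offers.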