[19:33] hello emijrp
[19:33] see my reply on list
[19:34] if you could write a quick and dirty solution then I'm sure others would be able to improve it (I'll try as well, for metadata details and similar things)
[19:44] sorry for not seeing your reply: to reduce list-addiction I download emails from the lists account only every 20 min :)
[19:45] ah, emijrp, for the hardcoded s3 key you should ask alard, that's what his seesaw-s3 script does for mobileme
[19:51] I'm not sure whether to hardcode the key or ask for it as input
[19:51] emijrp, input is better IMHO
[19:51] how do you ask for an s3 key?
[19:51] at least for now
[19:51] emijrp, info AT archive.org
[19:52] it was quite easy to get one for me; they just told me to use s3 when I asked for a way to upload 8+ GB files
[19:54] emijrp: http://archive.org/account/s3.php
[19:54] alard, has it been available since the beginning?
[19:54] I discovered it only after a lot of headaches, so I don't know
[19:55] I don't know. I haven't been using s3 until recently.
[19:55] Same here...
[19:59] But emijrp can tell us; if he sees it, it's available to everyone without requesting, I suppose.
[20:00] I've never done anything special to get s3 keys. Just make an account, go to that page and ask for them.
[20:00] I don't run the memac uploads from my own account, even.
[20:05] what is this
[20:05] --header 'x-amz-auto-make-bucket:1' \
[20:05] --header 'x-archive-queue-derive:0' \
[20:05] --header 'x-archive-size-hint:9638436173' \
[20:06] http://archive.org/help/abouts3.txt
[20:06] auto-make-bucket: Make an item.
[20:06] queue-derive:0 Do not derive things.
[20:07] size-hint: Allows the system to place the item on a server with enough space (it doesn't know the size when you start uploading, but you do).
[20:12] Nemo_bis: there are some values that cannot be automated: license, description..
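Putting those headers together, the full upload looks roughly like this (a sketch: the `ACCESSKEY:SECRET` pair, item name and filename are placeholders, and the command is only echoed here rather than executed):

```shell
#!/bin/sh
# Sketch of an archive.org S3 upload using the headers discussed above.
# ACCESSKEY:SECRET, the item name and the filename are placeholders;
# real keys come from http://archive.org/account/s3.php
item="wiki-examplewiki"
file="examplewiki-wikidump.7z"
size=9638436173

cmd="curl --location \
  --header 'authorization: LOW ACCESSKEY:SECRET' \
  --header 'x-amz-auto-make-bucket:1' \
  --header 'x-archive-queue-derive:0' \
  --header 'x-archive-size-hint:${size}' \
  --upload-file ${file} \
  http://s3.us.archive.org/${item}/${file}"

# Only printing the command here; remove the echo to actually run it.
echo "$cmd"
```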
[20:12] I can make an uploader, but those fields must be filled by hand
[20:12] emijrp, the description is the wiki's sitename + URL + boilerplate about wikiteam
[20:12] license can be extracted from the API
[20:12] (often)
[20:12] a free electrocardiography (ECG) tutorial and textbook to which anyone can contribute, designed for medical professionals such as cardiac care nurses and physicians.
[20:13] yeah, but that's not so necessary
[20:13] one can always add it later
[20:13] emijrp, https://www.mediawiki.org/wiki/API:Meta
[20:14] like https://www.mediawiki.org/w/api.php?action=query&meta=siteinfo&siprop=rightsinfo
[20:14] output can vary a bit across versions and configs
[20:17] give me your WT.log
[20:17] a sample
[20:18] emijrp, it should be empty
[20:19] it currently is for me; it's just a way to see the progress info on the terminal
[20:19] When the upload failed, I had some HTML crap with a 500 error from archive.org, last time.
[20:22] are s3 keys sent by an automated e-mail, or manually approved?
[20:22] I have no idea!
[20:22] What do you see at s3.php?
[20:23] emijrp: They appear on the page, at least for me. You click a button once to get your initial set, then click the button again to get more.
[20:23] k
[20:40] Nemo_bis: the curl command you pasted only uploads the -wikidump file
[20:40] one curl per file?
[20:40] emijrp, I don't know if you can do two
[20:40] how do you do it?
[20:41] I do another curl with just the filename etc.
[20:41] no metadata
[20:41] ok
[20:42] plus --header 'x-archive-queue-derive:0' --header 'x-archive-size-hint:9638436173' if you want
[20:42] but the hint is less useful on the second upload AFAIK
[20:43] and make-bucket?
[20:44] no
[20:46] why do you put nofollow on the description links?
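The siprop=rightsinfo query above returns the wiki's license; a sketch of pulling the license name out of the response (the API response is an inlined sample here rather than a live curl, and, as noted above, real output varies across versions and configs):

```shell
#!/bin/sh
# In practice the response would come from something like:
#   curl 'https://wiki.example.org/w/api.php?action=query&meta=siteinfo&siprop=rightsinfo&format=json'
# Inlined sample response for illustration; the URL and license name are made up.
response='{"query":{"rightsinfo":{"url":"https://creativecommons.org/licenses/by-sa/3.0/","text":"CC BY-SA 3.0"}}}'

# Crude extraction of the "text" field; a real uploader should use a JSON parser.
license=$(printf '%s' "$response" | sed 's/.*"text":"\([^"]*\)".*/\1/')
echo "$license"
```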
[20:48] emijrp, they get added anyway
[20:48] I'm lazy; I copy them from previous items, so the nofollow comes along as well
[20:59] this curl command is cool
[20:59] we can make a script to upload all the Creative Commons videos from YouTube: download and upload, download and upload, etc.
[21:01] Yeah
[21:15] Nemo_bis: is --location always needed?
[21:15] emijrp, I think so
[21:16] but I don't remember exactly
[21:16] sometimes you can even use -C to resume failed uploads, but it doesn't work consistently
[21:17] emijrp, I mean that I read the man page some time ago and I'm always using --location, so it surely works like that, but I don't remember if you can do it another way.
[21:27] how do you check if the upload is ok? md5?
[21:30] emijrp, curl exit code?
[21:31] but md5 is an option as well, once the file is uploaded
[21:31] usually it either fails or succeeds, AFAICS
[21:32] do you add wiki- to all the items?
[21:32] emijrp, yes, we were asked to do so by the collection admins
[21:33] and the same applies to the Wiki - prefix in the title
[21:33] underscor told us
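The exit-code-then-md5 check discussed above could be sketched like this (filenames are placeholders, and the "remote" checksum is faked with the same local file; on a real upload it would be read from the item's files metadata on archive.org):

```shell
#!/bin/sh
# Verify an upload: curl's exit code catches transport failures, and an
# md5 comparison catches corrupted transfers. Filenames are placeholders.
file="examplewiki-wikidump.7z"
printf 'sample dump contents\n' > "$file"

local_md5=$(md5sum "$file" | cut -d' ' -f1)
# On a real upload, the remote checksum would come from the item's
# *_files.xml on archive.org; faked here with the same file.
remote_md5=$(md5sum "$file" | cut -d' ' -f1)

if [ "$local_md5" = "$remote_md5" ]; then
    status="upload verified"
else
    status="md5 mismatch"
fi
echo "$status"
rm -f "$file"
```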