[00:26] so archiving geek brief tv
[00:26] and uploaded most of the 2006 episodes so far
[02:14] https://www.youtube.com/watch?v=PF7EpEnglgk
[02:32] NOOB!!!
[04:27] heh
[04:27] https://www.youtube.com/watch?v=CmFV_f-snTY
[04:27] Google Children
[04:28] Beta
[07:52] heh
[07:52] https://soundcloud.com/bestdropsever/hobbits-aka-samwise-needs-to
[17:10] Backing up large YouTube channels to IA, getting error '503 Server Error: Slow Down'
[18:30] Yeah, that means: slow down
[18:30] :)
[19:39] is there more info on that? daily limits, etc?
[20:11] I think mostly it's whenever the hardware gets too busy to serve you
[20:23] I would guess it has to do with the number of outstanding tasks you have queued up https://catalogd.archive.org/catalog.php?history=1&id=XCVii007r1
[20:24] maybe
[20:24] adding many big files to a single item one at a time takes forever, because it has to keep rsyncing them all around and such
[20:25] while the derive of the earlier ones holds up the works
[20:27] right
[20:27] ohhdemgir: are you uploading with the suppress-derive flag?
[20:28] perhaps a better way to do that would be to upload each video as a single item in a collection
[20:31] exmic, I'm using https://pypi.python.org/pypi/internetarchive ... suppress-derive flag?
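The "slow down" advice is usually handled by retrying with increasing waits. A generic backoff sketch, not something the tool in the log is known to do itself:

```python
def backoff_delays(base=30.0, factor=2.0, limit=5):
    """Yield increasing wait times (seconds) to sleep after each
    '503 Slow Down' response before retrying the upload."""
    delay = base
    for _ in range(limit):
        yield delay
        delay *= factor

# usage sketch:
#   for delay in backoff_delays():
#       if try_upload():  # hypothetical upload attempt
#           break
#       time.sleep(delay)
```

The starting delay and growth factor here are arbitrary; the server doesn't document what "slow" is, so tune conservatively.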
[20:32] you want to set the header x-archive-queue-derive: 0
[20:32] not sure how that tool does it
[20:33] nothing in the documentation as far as I can see
[20:58] I'm getting the rest of the geek brief tv 2007 episodes uploaded
[21:46] yes
[21:46] each video should be a separate item, with metadata saying which channel it was from
[21:49] what the fuck
[21:50] only in Objective-C can your object's properties go from non-nil to nil without ANY action on your part
[21:51] sounds cool
[21:58] OH WAIT
[21:58] fuck
[21:58] fuck fuck fuck fuck fuck
[21:58] the enumerator method I'm using is asynchronous
[21:58] badger
[21:58] arrrrrrrrrrrrrrgh
[21:58] so it just falls through
[21:59] this is where some nerd is going to tell me that async/await in C# 5 is awesome
[22:01] ohhdemgir: interesting. looks like your derive failed https://catalogd.archive.org/log/309534251 but it didn't yield a red row?
[22:02] I can rerun that derive job if you want
[22:02] or interrupt
[22:02] I did wonder why it was hung so long; got busy doing other things though. how do I re-run it?
[22:02] do you see a rerun button at the top of https://catalogd.archive.org/catalog.php?history=1&id=XCVii007r1 ?
[22:03] if not, I do, and I can poke it for you
[22:03] nope, please do
[22:03] I'll try "Interrupt" first
[22:04] okies
[22:04] interrupt does nothing; rerun
[22:04] exmic: the host is read-only atm
[22:05] hmm
[22:05] not sure what I'm seeing actually
[22:05] notice the yellow background on the host name?
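For reference, a small Python sketch of how the derive-suppressing header could be wired up; the item identifier and filename mentioned in the comments are invented, and the actual upload call is left as a comment since it needs real IA credentials:

```python
# The raw header from the log, as a dict you could merge into the request
# headers of an HTTP client (or pass via internetarchive's headers= argument).
NO_DERIVE = {"x-archive-queue-derive": "0"}

# With the internetarchive library this is usually spelled more directly:
#   from internetarchive import upload
#   upload("geekbrief-2006", files=["gb_001.mp4"], queue_derive=False)
# ("geekbrief-2006" and the filename are hypothetical examples.)

def no_derive_headers(extra=None):
    """Merge the derive-suppressing header into any other request headers."""
    headers = dict(extra or {})
    headers.update(NO_DERIVE)
    return headers
```

Suppressing derive only delays the derive work; it still has to run eventually for the item to get player-friendly formats.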
[22:05] derive.php isn't assigned to a host right now
[22:05] the archive jobs are assigned to a read-only host, though
[22:08] probably filled up the hdd
[22:10] the size currently appears to be 608MB
[22:11] and that will balloon once derive can get to work on it
[22:11] huh
[22:12] looking at the huge files table on the item details page, there appear to be several broken file names
[22:12] (truncated before the extension)
[22:14] also, my "each video should be a separate item" isn't from any position of authority at IA or anything. I'm just fairly sure that with the number of files you're talking about, they will want them as separate items.
[22:14] I'm not sure how to handle the item id, though.
[22:19] possibly channelname-videoid, with the video title in the title metadata field, and the channel name in the author field?
[22:20] aye, I've just been going at it in the fastest, most automated way possible
[22:21] I'll likely end up sticking to FTP sites after this; 1000s of videos on some channels, each as a new item, would take a lonnnng time
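The channelname-videoid scheme suggested above can be sketched as a helper; the field choices (title, creator) mirror the log's suggestion, but whether IA prefers exactly these is an assumption:

```python
def make_item(channel, video_id, title):
    """Return (item identifier, metadata dict) for one video,
    following the channelname-videoid idea from the discussion."""
    identifier = f"{channel}-{video_id}"
    metadata = {
        "title": title,      # the video's own title
        "creator": channel,  # channel name in the author field
    }
    return identifier, metadata

# Each video could then be uploaded as its own item, e.g. with
#   internetarchive.upload(identifier, files=[...], metadata=metadata)
```

One item per video also means each derive only covers one file, so a single failed derive no longer holds up a whole channel's worth of uploads.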