#warrior 2013-11-22,Fri


00:15 🔗 Coderjoe i wonder
00:18 🔗 Coderjoe is it possible in seesaw to move the rsync/trackerfinish into a separate queue with a max depth, so that if the queue reaches the max depth, attempts to queue further items block?
00:20 🔗 Coderjoe such that if concurrent is 2, when an item gets to the rsync step, it goes into a separate batch and then immediately begins a new item
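The "blocks when the queue reaches max depth" behaviour Coderjoe describes is what a bounded queue gives for free; a minimal Python sketch (illustrative only, not seesaw-kit API):

```python
import queue

# Bounded hand-off between the download and upload stages.
# Once maxsize items are waiting to upload, put() blocks, which
# throttles downloading instead of growing an unbounded backlog.
upload_queue = queue.Queue(maxsize=3)

upload_queue.put("item-1")  # returns immediately while under maxsize
upload_queue.put("item-2")
print(upload_queue.full())  # False: one slot still free
```

A downloader thread would call `put()` when an item finishes downloading, and an uploader thread would drain the queue with `get()`; the blocking `put()` is what keeps a slow upload server from piling up work.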
01:14 🔗 Atluxity so you count an item as finished when it's finished downloading, and ready for uploading
01:14 🔗 yipdw it's not finished at that point
01:14 🔗 yipdw so counting it as done would be wrong
01:15 🔗 Atluxity the effect will be the same, no matter the definitions you choose
01:15 🔗 yipdw I mean that any change that effectively makes a downloaded-but-not-uploaded item "finished" is wrong
01:15 🔗 yipdw and it's wrong because the job isn't done until receipt is confirmed
01:17 🔗 yipdw in any case, that was to Coderjoe
01:17 🔗 Atluxity I believe Coderjoe was referring to the concurrent setting, and that's what I was referring to when I said "count". So that the setting would not affect the number of unfinished items, but the number of active downloads
01:18 🔗 yipdw I guess, though I'm wondering what the payoff is for the additional complexity of multiple queues
01:19 🔗 Atluxity but I have been awake for 19 hours, so I am not up for arguments :)
01:19 🔗 Atluxity my social antennas get even worse when sleep deprived
01:29 🔗 Coderjoe i didn't mean to say count it finished, just have it running separately from downloading.
01:29 🔗 Coderjoe mainly because while an item is being uploaded, that's one fewer downloader slot actually downloading
01:30 🔗 Coderjoe and the max queue length is so it doesn't go out of hand if there is a slow upload (or the upload server isn't available)
01:32 🔗 Coderjoe like with hyves, most of the items are pretty small and upload fairly quickly, with the occasional one being slightly larger, and the rare one being huge.
01:33 🔗 Coderjoe having a separate upload/markfinish queue would allow for those slightly larger ones to not block further downloaders
01:35 🔗 Coderjoe and the length limit would prevent the huge ones that take 10-30 minutes (or more) to upload from making too large of a backlog of uploads.
01:41 🔗 Coderjoe or perhaps not a separate queue, but another item state that changes what count the item is counted towards, from working (the current concurrent setting) to uploading (with a new setting there, which prevents the state change as described above)
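The second variant Coderjoe describes (an item state change gated by an upload limit, rather than a second queue) can be sketched with semaphores. This is a hypothetical illustration of the idea, not the seesaw-kit code; the names and limits are made up:

```python
import threading

# Hypothetical sketch: 'download_slots' caps items in the working
# state (the current "concurrent" setting); 'upload_slots' caps items
# in the new uploading state. An item leaves the working count as soon
# as it acquires an upload slot, freeing the pipeline to start a new
# download. If all upload slots are taken, the state change blocks,
# which is the backlog limit described above.
download_slots = threading.Semaphore(2)
upload_slots = threading.Semaphore(3)

def run_item(item, download, upload):
    with download_slots:          # counted as "working"
        download(item)
        upload_slots.acquire()    # blocks while the upload backlog is full
    # download slot is released here; a new item can start immediately
    try:
        upload(item)              # counted as "uploading"
    finally:
        upload_slots.release()
```

With several worker threads calling `run_item`, a slow upload only occupies an upload slot, so downloading continues until the upload backlog limit is hit.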
01:43 🔗 Coderjoe I'm not familiar enough with the seesaw code at the moment (and busy with other, paying stuff) to even make a branch to show what I'm talking about.
01:43 🔗 yipdw how much of the benefit could you get by running an upload queue of two?
01:44 🔗 yipdw or any size greater than one
01:44 🔗 yipdw I mean, yes, that will fail in the case where you get a bunch of massive items mixed in with mostly small ones
01:44 🔗 yipdw but how often does that occur
01:47 🔗 Coderjoe i don't have anything approaching hard data to answer that
01:48 🔗 Coderjoe I have seen a couple of times where I had all three items blocked waiting for one slightly larger (perhaps 10-50M. I don't remember) item to finish uploading
01:49 🔗 yipdw sure, but that could be addressed by having an upload queue of size > 1
01:49 🔗 Coderjoe are you referring to the "rsync threads" setting?
01:49 🔗 yipdw yeah, or whatever it is that manipulates the LimitConcurrent setting
01:50 🔗 yipdw in that case, two things will happen: (1) the smaller items will be unblocked, or (2) the upload destinations are already getting hard-hit
01:50 🔗 yipdw so queuing up more upload work probably isn't going to help things
01:56 🔗 Coderjoe i'm not talking about increasing the upload concurrency, but rather releasing them from the concurrent items so more downloading can happen while the uploads clear
01:56 🔗 yipdw yeah, but once you have those items, they're going to need to be uploaded at some point
01:57 🔗 yipdw so if you can't upload fast enough to clear the queue, there's really two possibilities, right
01:57 🔗 yipdw (A) upload more at a time, or (B) the upload destinations are already choked and (effectively) extending the length of the queue won't really do much good, globally speaking
01:58 🔗 yipdw though I guess there is an argument to be made that if you have everything downloaded, you can shove it off in a controlled manner when things quiet down
02:01 🔗 Coderjoe the buildup I was observing was from my own upload rate limit, though.
02:02 🔗 Coderjoe at my ISP level
02:03 🔗 Coderjoe but my concern is that there is time wasted on the downloading side with a deadline looming
02:05 🔗 yipdw oh yeah
02:05 🔗 yipdw but that's why we have multiple workers :P
02:06 🔗 yipdw I mean, sure, the output of one node could be stalled
02:06 🔗 yipdw I guess it could work; I'm just suspicious of things that involve changing multiplicity from 1 to n
02:06 🔗 yipdw it usually never goes well
02:06 🔗 yipdw but there's always exceptions
02:25 🔗 Atluxity the warrior-repo should include the get-wget-lua.sh that's in the hyves-grab repo
12:00 🔗 GLaDOS [seesaw-kit] chfoo pushed 3 new commits to development: https://github.com/ArchiveTeam/seesaw-kit/compare/2c03da043e04...058e31d61914
12:00 🔗 GLaDOS seesaw-kit/development 058e31d Christopher Foo: Bump version to 0.1.3.b1
12:00 🔗 GLaDOS seesaw-kit/development 2abd612 Christopher Foo: Updates bandwidth when not divide-by-zero (Closes ArchiveTeam/seesaw-kit#31).
12:00 🔗 GLaDOS seesaw-kit/development 322f64f Christopher Foo: Show project name as tooltip (Closes ArchiveTeam/seesaw-kit#19)....
12:26 🔗 ersi b1? ;o
12:53 🔗 chfoo trying to keep it as semantic as possible. the commit message should say 0.1.3b1 though.
13:56 🔗 ersi ah, yeah
