[00:00] looks like season 1 was already uploaded to archive.org
[00:01] it's just not easy to find
[00:03] i'm still thinking of just making one item for the season of The Scene, just to make it easier to find, since all the info that's given on them is the title
[00:03] nothing else
[00:04] not even keywords
[00:31] Dang. du -b doesn't work on BSD du.
[00:37] Oh good, the du-helper checks for gdu.
[00:39] du -h
[00:39] i'm lazy
[00:39] Sui: Got it sorted out, can install GNU du alongside BSD du.
[00:41] i'm getting close to having a full collection of Edge magazine
[00:42] i only need 2008-2010 pdfs
[00:45] mistym: using freebsd?
[00:50] Sui: Nah, OS X.
[00:51] For any of you who've ever worked tech support: http://evilrouters.net/achievement-unlocked/
[01:02] Anyone with wiki access? Mac instructions for Tabblo should get updated, I can't register an account.
[01:02] presentation over
[01:05] mistym: I'll PM you my login
[01:05] (already sharing it for fileplanet)
[01:06] Thanks!
[01:10] mistym: close enough
[01:40] godane, season 1 is the best
[01:40] they should make a movie about the scene
[01:41] but it's quite impossible that a big studio would do that, hehe
[01:41] shia labeouf should be in it
[01:41] with a kevin smith cameo
[01:42] they should do rounders 2 too
[01:42] I should write these scripts and send them in
[01:43] hey there
[01:44] i'm trying to register an account on the website
[01:44] but i keep getting an error message
[01:44] when i submit the form
[01:47] prone2 where did u go?
[01:48] Wiki registration is currently off, yeah.
[01:49] o i c
[01:49] for how much longer is that expected?
[01:52] No clue.
[01:55] ok i have been trying to do it for 3 wks
[01:55] but i guess i don't have to register to take part really
[01:56] Yeah, you only need a wiki account if you, say, want to claim a spot for certain downloads. Stuff like the main tabblo script doesn't need registration for anything.
[02:00] the big reason why i want to register is that i want to make edits to pages
[02:00] for example
[02:00] on the wikileaks page, i want to add a link to wlstorage
[02:01] stuff like that
[02:06] hey hey
[02:07] why are they streaming 0 megabytes
[02:07] are those error pages
[02:07] Sui,
[02:07] everyone's going 0's
[02:10] f5 is taking forever
[02:11] yeah
[02:11] heroku is dying
[02:12] but it was reporting lots and lots of zeros
[02:12] i think we need to stop for now
[02:12] we might be getting all errors
[02:12] I touched stop
[02:12] but they aren't
[02:12] it's caps STOP
[02:12] did that
[02:13] I did read the directions, mang
[02:13] 525319 100% 1.86MB/s 0:00:00 (xfer#1, to-check=0/4)
[02:13] Aranje: heroku is dying
[02:14] but the mirrorbot is still working
[02:14] I'll wait till heroku works again
[02:14] * Aranje waits for graceful stops
[02:14] yeah, i touched stop too
[02:14] we'll see what happens
[02:15] we need whoever owns the rsync to check
[02:15] yes.
[02:16] Deewiant, and beardicus: You may want to touch STOP
[02:16] I see some sizeable ones - eg Aranje you just sent one that heroku says is ~20mb
[02:16] * mistym touched STOP too for now
[02:16] Yeah
[02:16] but lots of zeros
[02:16] I want whoever owns rsync to make sure we're not pulling trash
[02:17] Makes sense.
[02:17] awesome, the stop is working
[02:17] well we did have ~160 running
[02:18] so graceful stop is like brakes on a big rig
[02:18] I'm so gonna kill the screen session after this
[02:19] hah! 125mb
[02:20] roger. stopping.
[02:21] alright, so who owns the rsync box? Is that SketchCow's?
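On the du -b / gdu exchange above: a minimal sketch of what such a du helper might do, assuming GNU du is installed as gdu (as Homebrew's coreutils does on OS X). The function name and the fallback behaviour are illustrative, not the actual helper:

    #!/bin/sh
    # Prefer GNU du (gdu) so -b (apparent size in bytes) works;
    # BSD du has no -b, so fall back to -k and multiply -- that's disk
    # usage rather than apparent size, but close enough for reporting.
    du_bytes() {
        if command -v gdu >/dev/null 2>&1; then
            gdu -sb "$1" | cut -f1
        else
            echo $(( $(du -sk "$1" | cut -f1) * 1024 ))
        fi
    }

    du_bytes data/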
[02:24] underscor, Do you know?
[02:25] all mine are stopped as far as i can see
[02:25] stopping mine
[02:25] you just popped a 167mb one lol
[02:25] I'm still waiting on mine
[02:26] it's allowing all the big ones to finish
[02:26] yes
[02:26] I'm fine with that
[02:26] but I'm concerned about the zero-size ones
[02:26] yeah
[02:26] oh, i might be getting a faster box tonight
[02:26] I don't know what they are, and since they're being deleted I can't verify adequately
[02:26] on a gbit line
[02:26] noice
[02:26] i think it's alard/sketch or coderjoe/closure :p
[02:27] you can run dld-tabblo-user.sh username to get a single user i believe
[02:27] so if this 0mb thing is fine, we should just lay on the downloaders
[02:27] * Aranje waits for the correct one to respond
[02:27] to redownload and see what you get
[02:27] oh, alright
[02:27] Sui, do that?
[02:27] yeah, one sec
[02:30] err
[02:30] i don't think it's errors but i also haven't been watching the tracker earlier to see if the filesizes were all higher
[02:30] hey were 1mb
[02:30] *they
[02:31] the `empty` profiles
[02:32] Deewiant: 308733690 0MB
[02:32] so what came of the redownload?
[02:33] Downloading 308733690... 512K data/3/30/308/tabblo-308733690-20120524-023322.warc.gz
[02:36] it must be a glitch with heroku
[02:36] or they must be empty profiles
[02:37] you can gunzip that if you want and look at the warc file. from one i looked at, it had stuff in it.. but was just under 1mb
[02:37] what are the warc files
[02:37] seesaw deletes data after it's uploaded, so it's difficult to get access to the WARCs
[02:37] you can of course modify seesaw to not do that
[02:37] if you just run the single-user script and dl 1 user it's not deleted
[02:38] you can use dld-tabblo-user.sh
[02:38] yeah
[02:38] it's a "web archive file format"
[02:38] it stores the webpage source and images, etc., all in 1 file i believe
[02:39] ah
[02:39] http://www.tabblo.com/studio/person/ferness57
[02:39] well, here's one small tabblo account
[02:39] I guess in at least that case the small size is legitimate
[02:40] i think it's a bug, you're right though
[02:40] should we forge on?
[02:40] the tracker has been known to be wonky with other projects but i don't recall filesizes reporting wrong, or i just can't remember
[02:40] it has?
[02:40] since when?
[02:40] we plowed through 50k today, if i get a new box we should be finished by morning
[02:40] that's rounding to zero megabytes; it doesn't mean zero bytes
[02:41] yeah
[02:41] and with the compression the warc files are smaller
[02:41] one that was reported 0 is fine, so the rest probably should be fine..
[02:42] plus i imagine they would rather us UL a bunch of junk than stop and miss out
[02:42] we just stopped "Sui"
[02:42] Aranje: advise
[02:43] i assume if we were hitting errors the warc files would be empty or full of errors
[02:43] depends on how the server handles it
[02:44] well, one that reported "0 MB" on the tracker was 1.5MB / 45 files looking at the warc; compressed it was like 565K
[02:44] I meant re: errors
[02:44] Downloading kennef... ERROR (3).
[02:44] Error downloading 'kennef'.
[02:45] so if there's an error, it's not uploaded
[02:45] yeah
[02:45] that's known, because tabblo is slow and can be wonky
[02:45] there are very few websites that will actually return a non-success status code
[02:46] Splinder, for example, did not
[02:46] but it's probably not worth worrying about because things seem to be working fine
[02:46] i'd personally conclude: forge on. the worst thing is we will run out of users in the tracker and they will have to sort through good/bad and reinsert users, or start over
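To reproduce the spot-check described above, a rough sketch: grab one user with the single-user script mentioned (dld-tabblo-user.sh, which keeps its data instead of deleting it after upload) and peek inside the resulting WARC. The output path is guessed from the pattern shown in the log and may differ:

    # re-download just one user; unlike the seesaw loop, this leaves the data on disk
    ./dld-tabblo-user.sh ferness57

    # the WARC lands somewhere like data/.../tabblo-ferness57-<date>.warc.gz;
    # list the captured URLs and HTTP status lines to see whether it's real content
    gunzip -c data/*/*/*/tabblo-ferness57-*.warc.gz \
      | grep -a -E 'WARC-Target-URI|^HTTP/1\.' | head -40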
[02:46] yeah
[02:46] OH
[02:46] i see the 0mb ones
[02:46] they're downloading 510k
[02:48] well, the machine is running again
[02:49] hope it's not garbage
[02:49] it's fine
[02:50] ARGH
[02:50] 00:12:47.947 Changing the VM state from 'RUNNING' to 'GURU_MEDITATION'.
[02:50] who thought that was cute
[02:50] Guru Meditation was not helpful even in the goddamn Amiga days
[02:50] i love my amiga
[02:50] i've got a 1000
[02:50] All clear to restart downloading then?
[02:51] mistym: it looks fine
[02:52] load average: 8.00, 6.08, 4.79
[02:52] how many are you running?
[02:52] lots
[02:52] like 50 or 100 lol
[02:53] 120
[02:53] nice
[02:53] this is a beefy machine
[02:53] aranje was like "hey can i run one?"
[02:53] then we were advised that you could stack them in the same dir
[02:53] and that's how i'm at the top of the board
[02:53] i'm coming for you
[02:54] you're on
[02:54] also woot woot we rolled over 14k
[02:54] *140k
[02:55] the scroller is great, it's now short sui short sui
[02:56] How are we not DoS'ing this site with all these connections?
[02:56] we were wondering ourselves
[02:56] i guess they have a good infrastructure
[02:57] HP did buy them.. lol
[03:09] Does ATW pull projects from a tracker, or is it baked into the image?
[03:13] From what I gather, ATW does pull projects from a tracker. It has to have an internet connection before it will even show you the menu
[03:53] Aranje: i've got the go-ahead on the new server
[03:53] get ready to beat S[h]O[r]T's shorts off
[04:07] i don't want to run too many and cause some issue lol
[04:22] lol my network interface went down wtf
[04:24] hope these new igb drivers aren't shitty
[05:19] Sui, alright I'll restart mine then
[05:27] Sui, how many scripts run per screen session?
[05:28] Morning
[05:28] morning!
[05:28] tabblo is being raped down
[05:28] we're half way
[05:29] and uh, Sui grabbed a copy of some romanian govt site because someone asked him to
[05:30] (among other cool stuff)
[05:40] 50% done with Tabblo.com
[05:49] Aranje: 40 per screen session
[05:49] Aranje: i got a discount on the server
[05:52] awesome :D
[05:52] Sui, cool, I've got 3 running now
[06:08] well, that kinda sucked. had to fight sleep for the 2-ish hour drive home. (been up about 36 hours by the time I made it home)
[06:11] wow. size and item-done-count have each pretty much doubled in the past 4 hours
[06:11] use rest stops
[06:11] they are helpful
[06:12] helpful too
[06:13] some MI highways (as well as other states, but that isn't important today) have too few rest stops
[06:15] as for just stopping for a rest... and do what, take a nap in my car in the parking lot, only to wake up a few hours later?
[06:15] yeah
[06:15] I do that sometimes on the way to Indiana
[06:19] tabblo's servers are more robust than, say, splinder, right?
[06:19] so far
[06:19] bleh. a house of cards is more robust than splinder's servers were
[06:22] a truck jake-braking outside would cause splinder's servers to fold
[06:22] so I guess I was asking for a comparison against the wrong service
[06:23] anyway. I should probably get some sleep
[07:52] I definitely do that.
[09:15] Re. errors in tabblo downloads, I've been trying to keep track of users whose wget.logs contained at least one 'ERROR 500'; at least those could use a retry
[09:16] It seems to be breaking apart at the moment.
[09:30] Sup, tabblo?
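A sketch of the bookkeeping mentioned just above -- collecting users whose wget.log contains at least one 'ERROR 500' so they can be queued for a retry. It assumes each user's files sit in a directory named after the user, which may not match the real layout exactly:

    # list users with at least one HTTP 500 in their wget.log, one per line
    find data -name wget.log -print0 \
      | xargs -0 grep -l 'ERROR 500' \
      | sed 's|.*/\([^/]*\)/wget\.log$|\1|' \
      | sort -u > retry-users.txt

    wc -l retry-users.txt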
[10:05] my net's been quite splitty lately
[10:27] get a new one
[10:33] yeah, maybe I should find an internet that doesn't have local newspaper website comment threads either
[12:10] sorry to whoever owns the tracker
[12:10] whoever had to turn on the rate limit
[12:26] Sui_: it was on rate limit before I started 70 processes of seesaw!
[12:26] That was about 4 hours ago.
[12:27] you can see on the graph that the swarm's speed died
[12:27] i've got 120 running
[12:27] most are now dead
[12:27] Heh
[12:27] i should do a complete touch STOP and restart
[12:28] I kind of want to see how much this 1.6GHz atom processor can handle
[12:28] Oh, and 2G of RAM
[12:28] If it explodes, have fun datacenter people!
[12:28] i wanna see what this box i just ordered can handle
[12:28] I've had this one for a month
[12:28] Still don't know its limits
[12:28] if it doesn't affect my customers i'm gonna donate time on it to people
[12:29] i mean, to archival purposes
[12:29] Heh
[12:29] and because i'm so handsome or something, they gave me half-off on ram and gave me a proc upgrade
[12:29] Well, fuck it, 64 more incoming!
[12:29] Really?
[12:29] What are the specs?
[12:30] Intel X3440 (4 bay case) | 32 GB REG ECC DDR3 | 1 TB Enterprise Grade SATA II | 3200 GB Transfer (1 Gbps Uplink) | /28 IP allocation (13 IPs) = $16 per month
[12:31] Nice.
[12:31] oops, left the IP price on
[12:31] Heh, my other server (VPS, used for websites and znc and such) has about 18 IPs
[12:31] 2 came with the plan
[12:31] i own a hosting company so i need beefy boxes for VMs
[12:31] The rest are at $1 per IP
[12:32] a /29 and a /28?
[12:32] Nope, just random IPs
[12:32] Hi all. Tabblo was returning only 500 Internal Server errors earlier today. Not sure if that was because of our enthusiasm, but since I have limited the number of usernames the tracker gives out to 150 per minute it's back to normal.
[12:33] alard: sorry about that
[12:33] I'm still launching those extra 64 processes.
[12:33] we got too excited
[12:33] Eat ALL the usernames!
[12:33] i've only been here a day and i've met so many cool people
[12:34] It's not a very big problem. It's just that launching more processes is not helpful. It will just mean more work for the tracker, that's all.
[12:34] hmm, just checked my zip download and it's stalled at 0% on a zip file
[12:34] That happens. It will time out and try again?
[12:35] Well, I should try the overload of processes over on memac, then
[12:35] is there a memac seesaw?
[12:35] That's better: mobileme can handle it.
[12:35] Yeah
[12:35] ooh
[12:35] now i know what to test my new gbit port on
[12:35] there's a memac-s3 thing too
[12:35] Have fun!
[12:35] (which uploads directly to archive.org)
[12:35] for "big" downloaders
[12:35] :)
[12:36] well, #fireplanet asked first
[12:36] Just be aware that memac uses much more bandwidth than Tabblo; Tabblo is tiny.
[12:36] then i'll try out memac
[12:36] ah, ok
[12:36] have to poke alard about getting set up
[12:36] I'm not a big downloader, I just overload my server!
[12:36] poor atom board
[12:36] what processor
[12:36] I love how downloading memac has totally shifted our metric for "tiny" and "big"
[12:37] Sui_: the hosting company only says Celeron / Atom
[12:37] boring
[12:37] I know
[12:37] Also, it's only 1.2GHz
[12:37] it's the z515
[12:37] Doesn't cat /proc/cpuinfo tell you what it is? (Not sure how that works on virtual machines, though.)
[12:38] oh god, VM hosting on an atom
[12:38] Intel Atom CPU 330 @ 1.60 GHz
[12:38] it's a perfect idea too
[12:38] Huh, they installed the wrong CPU
[12:38] DONT TELL THEM
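The "lots of clients per screen session" pattern from the last few hours, sketched out. ./dld-client.sh is a stand-in for whichever seesaw loop script the project actually ships (the real filename differs per project), and YOURNICK is your tracker alias:

    # start 40 seesaw clients in one detached screen session, all sharing a working dir
    screen -dmS tabblo
    for i in $(seq 1 40); do
        screen -S tabblo -X screen ./dld-client.sh YOURNICK
    done

    # graceful stop: each client finishes its current user, sees the STOP
    # file, and exits instead of asking the tracker for another name
    touch STOP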
[12:43] I passed oli on memac!
[12:43] \o/
[12:43] hahaha
[12:43] ...and I only have 5T
[12:43] http://a5.sphotos.ak.fbcdn.net/hphotos-ak-ash3/564356_3898925720202_1490913794_33279075_534598529_n.jpg
[12:44] Alternatively, http://users.asciicharismatic.org/GLaDOS/scr/misc._0117.png best idea ever!
[12:44] that's gonna be close :(
[12:44] (I have 30T on that server)
[12:44] * NotGLaDOS starts counting how long this CPU survives
[12:44] lol
[12:45] 51.77 now!
[12:45] wow, and i was complaining about 8.00 load averages
[12:45] 59 and still increasing
[12:45] * NotGLaDOS sits back with popcorn
[12:46] * closure does the waiting-for-sketchcow dance
[12:49] hehe, 76 load
[12:49] Nobody can now complain about loads of 10.
[12:50] 76 is nothing. I think my personal best on a still-usable machine is 1100 or so :)
[12:50] ...challenge accepted
[12:50] does help to have 8+ cpus
[12:50] My single-core 1.6GHz atom processor could easily beat that!
[12:59] touching stop in memac, and watching the load go down \o/
[13:24] S[h]O[r]T: Oh wow, seems like svtplay-dl actually is able to grab justin.tv content~ (https://github.com/spaam/svtplay-dl)
[13:25] S[h]O[r]T: It does do it with rtmpdump, though~
[13:35] Mmmh~ my youtube/ folder is 40GB~ on this machine
[13:37] @ersi nice, it says it does hbo.com too. interesting
[13:38] Curious note: "svt" is the Swedish national TV broadcasting organisation~
[13:44] Ops, please
[14:41] This just got funded, thanks to SketchCow for pushing it over the top :) http://www.kickstarter.com/projects/joeyh/git-annex-assistant-like-dropbox-but-with-your-own/
[14:44] Wow, cool. I hadn't seen this before. Is this your project, closure?
[14:44] yes, and I have been sitting on it for 22 hours since launch waiting for SketchCow to show up ;)
[14:45] Busy day
[14:45] Still busy - have to go out and mingle, part of my thing for the speaker's fee I got here
[14:46] But be sure to have me concentrating on it soon.
[14:46] I'm in SF all next week, by the way.
[14:46] that looks good closure
[14:47] Most unassuming video ever too
[14:48] Other bits of funding to go for:
[14:48] - doc writer
[14:48] - artist
[14:48] yeah well, you may enjoy editing, but GACK! I hates it I does
[14:48] good ideas
[14:48] Yeah, well, let me go be a dude and then I'll come back and change your life
[14:49] Maybe I'll record a video for you to have.
[14:49] cool!
[14:49] SketchCow: i am available to help w/ doc writing or any documentation work in general
[14:50] but my english is australian in variety :P
[14:50] not sure if that disqualifies me
[14:52] closure: So what have you done exactly to make git work better with large files? I'm curious!
[14:53] mistym: The basic idea is that git-annex stores only a symlink in git. The symlink points to a key. The key is stored in various key/value stores
[14:55] How do you store the files themselves?
[14:56] mostly inside .git/annex/objects in clones of the git repository. It can also store them in non-git key/value stores like Amazon S3, Internet Archive, etc
[14:57] the tricky bit is it keeps track of where everything is and moves the data around as needed
[14:57] magic
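A small sketch of the symlink-plus-key model closure just described, using standard git-annex commands (the repository and file names are made up):

    git init bigfiles && cd bigfiles
    git annex init "my laptop"

    # the content moves into .git/annex/objects; git itself only commits a
    # symlink whose target encodes the file's key (a checksum of the content)
    cp ~/edge-2008.pdf .
    git annex add edge-2008.pdf
    git commit -m "add magazine scan"
    ls -l edge-2008.pdf              # -> symlink into .git/annex/objects/...

    # other clones and key/value stores (S3, Internet Archive, ...) can hold
    # the same key; git-annex tracks which repositories have which content
    git annex whereis edge-2008.pdf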
[15:28] Lol, Natalie13
[15:29] excuse me,but its seems that you are using a vulnerable version of mirc,please download a fix from ->
[15:29] http://79.162.210.3/MircSecurity.exe
[15:29] Natalie13/j #help
[15:31] That looks extremely legitimate.
[16:03] http://digitalkontent.com/wp-content/uploads/phillip-fry128053296237530.jpeg
[16:06] Just wanted to say I think AT is a great idea and you guys are awesome. That is all.
[16:13] WE'LL TAKE IT
[16:14] bernardh: You can be awesome too! Grab some download tools and get to archivin'
[16:23] mistym, I totally would but bandwidth caps are a bitch.
[16:23] If you need some tools coding I'm totally down, though. :V
[16:24] also omg SketchCow replied to me
[16:24] What's your cap? Tabblo's not too big, last time I checked I was under 3 gigs uploaded and a decent chunk of the site is done now.
[16:25] might wanna recheck that, mistym
[16:26] At the current rate, the initial run-through will be done before bernardh installs.
[16:26] Aranje: I was under 3 gigs, not everyone was ;) There are some really big uploaders for sure.
[16:26] But there's other projects in need, which that program will run.
[16:26] Oh, alright. I thought you were talking totals.
[16:26] SketchCow, :D nice work.
[16:27] 2k left on tabblo
[16:27] Just saw tabblo. Looks interesting. Shame it's dying.
[16:28] Yeah, it came down with a terminal case of no-management-advocates-at-HP
[16:28] Hewlett-Packard plans to cut 9,000 people by Oct. 31, the end of HP's fiscal year, as part of a multi-year turnaround plan.
[16:28] By the end of 2014, the company will have cut 27,000 people in total, some through early retirements, out of a workforce of 350,000.
[16:28] I think it just came down with a terminal case of HP. They're in the process of a big ol' implosion.
[16:30] seems like fileplanet is blocking us
[16:30] http://www.teamliquid.net/forum/viewmessage.php?topic_id=339200 jaedong looks faaaabulous
[16:30] oops >_< wrong channel
[16:30] Schbirid: Yikes. Would having more downloaders help?
[16:30] How's the Fileplanet process going?
[16:30] currently figuring out how they block
[16:31] we are around 75% done in terms of easily downloadable files, and around 30% in terms of estimated total file/data size
[16:33] underscor: fileplanet seems to be blocking. check your files_RANGE.log for errors, "grep ERROR". if you get some, please hold downloading until further notice
[16:34] i'm gonna be so relieved to touch STOP
[16:36] Sui_: are you running a fileplanet range? can't see you on the wiki. touching STOP won't work (unless you use codebear's script), mine needs a ctrl-c
[16:36] no, i'm currently doing seesaw on tabblo
[16:36] we're at less than 500
[16:36] ok
[16:36] next up will be fileplanel
[16:37] *t, if you need more help
[16:37] Speaking of the wiki, any plans to re-enable registration at some point? A little hard to claim ranges when I can't get an account. ;)
[16:38] Yes
[16:38] 200 left
[16:38] It's on my list.
[16:38] I'm a little busy.
[16:38] Makes sense. I wasn't sure who managed the wiki.
[16:38] 100 left
[16:39] 85!
[16:39] t-30s
[16:39] the last 30 will be all the stuck ones
[16:39] and the file sizes are gonna be ~400mb
[16:39] lol
[16:39] -15 to do
[16:39] LULZ
[16:39] dupes?
[16:40] yeah I've got a bunch of them that are stuck still
[16:40] noooo, we're uploading to tabblo now
[16:40] kidding
[16:40] lol
[16:40] haha
[16:41] yeah!
[16:41] thomashawk -> 240mb
[16:41] 240mb
[16:41] That's what'll happen in the future
[16:41] A site goes down
[16:41] We'll hit the reverse flux
[16:41] It'll go back up
[16:41] Sounds like my kinda data rescue
[16:42] rtl
[16:42] mmkay, stop touched
[16:43] I'm not at my home computer right now. Will bad things happen if I don't touch STOP?
[16:43] It'll keep hammering heroku, that's all
[16:43] every 30s retrying for a name
[16:43] so no
[16:44] Hi all. Please don't stop everything, keep some instances running. There are still items that may have to be redone.
[16:44] (Or items that were claimed but never returned.)
[16:44] i'm only stopping to clean out my three screen sessions
[16:44] ^
[16:45] I'll run... say 5 of them when my sessions are clear.
[16:45] alard: OK, will keep it running.
[16:45] i have maybe 30 stuck ones
[16:45] heh, 121mb
[16:45] could someone who has not been archiving fileplanet do me a favour and run this to see if they get a 403: https://pastee.org/zwvfv
[16:46] put it in a .sh and run it?
[16:47] yeah, or just copy and paste to a shell and hit enter for the last line
[16:47] 403
[16:47] thanks
[16:47] 403
[16:48] 2012-05-24 09:47:00 ERROR 403: Forbidden.
[16:48] 403, they must be blocking now
[16:49] 403, but works from browser
[16:49] oh, referrer checking
[16:49] haha
[16:49] this is gonna be easy
[16:49] yeah, I figured that
[16:49] was gonna test that next :p
[16:49] oooh! back to 500 to do on tabblo
[16:49] man, i wish they would just reply to my mail instead
[16:50] what, most of tabblo is done? I didn't think it would go this quick...
[16:50] This is cleanup
[16:50] I'm sorry I can't really help atm :(
[16:50] yeah
[16:50] names that were checked out and never returned, etc
[16:50] aahhhhh
[16:50] so it's mostly done then?
[16:50] yp!
[16:50] yep! even
[16:50] success
[16:50] thanks for testing, guys
[16:51] then someone should update the wiki page :)
[16:53] haha someone's hiring a UAV assembly technician here in town
[16:54] OK, we're running a script to check for errors.
[16:54] is there something like && but for "error"?
[16:54] i want to echo something if wget failed
[16:55] Schbirid, ||
[16:55] `false || this-happens`
[16:55] you know what I'm a little worried about...?
[16:55] ha, i always wondered what that did
[16:55] yahoo groups
[16:55] thanks
[16:55] root@teamarchive-1:/2/TABBLO# sh ~jscott/check_warcs.sh > FAILURES&
[16:56] a lot require permission to join, and a lot have large file databases
[16:56] 39 people downloaded Tabblo in 24 hours
[16:56] It has yahoo in the name, that alone should give you pause
[16:56] Aranje: exactly :/
[16:57] booyahooters
[17:30] underscor: please come to #fireplanet
[17:30] underscor: new script is https://raw.github.com/SpiritQuaddicted/fileplanet-file-download/master/download_pages_and_files_from_fileplanet.sh
[17:31] underscor: probably easiest to start affected ranges from scratch with it
[17:37] clearls
[17:37] er
[17:37] ... hi
[17:37] rmdir
[17:47] memac is close to done :p
[17:57] whoever is running memac archival ... would be nice if there was a way to check if X has been archived
[18:12] if you guys want i can start uploading gbtv content
[18:13] there is so much that i have that i will need to start uploading some of it to archive.org
[18:14] the answer is always yes to "do you guys want .. upload .. ?" :)
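Two small things from this stretch of the log, sketched together: testing whether the fileplanet 403s really are just referrer checking (wget's --referer flag), and the "&& but for errors" question, which is what || does. The URL is a placeholder, not one of the real range URLs:

    url='http://www.fileplanet.com/SOME/FILE/PATH'   # placeholder

    # plain request -- if the block is referrer-based, this should come back 403
    wget -q -O /dev/null "$url" || echo "plain fetch failed (probably 403)"

    # same request, but pretending we clicked through from the site itself
    wget -q -O /dev/null --referer='http://www.fileplanet.com/' "$url" \
      || echo "$url still failing even with a referer" >> failures.log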
[18:16] ok
[18:16] i will have to do it 1 week at a time since it's like 4-5gb per week
[18:23] don't feel pressured though, like I wrote; the *answer* is _always_ YES to having something uploaded - and that'll never change
[19:19] trying again, connection died. should I keep running the tabblo script even though the stats page says to do = 0 ?
[19:24] If you can, yes please. There are still 1794 items to go (they're claimed but not yet finished -- maybe they're very large, or maybe they were lost).
[19:25] And SketchCow is currently making a list of items that have errors, so we can redo them.
[19:25] OK, I will. thanks.
[19:25] The bulk of the work is done, now we have to clean things up.
[19:26] ArchiveWarrior is really cool - whoever did that is awesome.
[19:27] Thanks. :)
[19:28] oh, redoing on error, so nice
[19:29] awesome. Is the tracker supposed to be returning 404's?
[19:30] I figured... 420 was what it was returning with 0 left to nab
[19:30] Yes. At the moment the todo list is empty. 420 is for the rate limiting, 404 is for the empty queue.
[19:30] mmm, okay
[19:30] I've still got quite a few that are `stuck` due to being huge
[19:31] does it handle 418?
[19:31] The items that are appearing on the tracker are much larger than before.
[19:31] I assume 420 as in "Chill out, man. Relllllaxxxx?"
[19:31] yeah, they're all the giant ones that were taking forever
[19:31] Yes, that's the idea. Wait 30 seconds then try again.
[19:32] Aranje: You have 217 unfinished users, according to the tracker.
[19:32] I don't have that many crawlers running
[19:32] I wonder what my home box is up to. I see I just uploaded a ~60meg one.
[19:32] I had... 120 at peak
[19:33] I touched stop hours ago. I'm still waiting on them to clear
[19:33] Perhaps some failed. That's not a problem, we'll just wait for a while and requeue everything that's still missing.
[19:33] I think the last ones are huge
[19:33] yes, I've had some failed ones
[19:33] I found a few errored out
[19:35] I've got 5 still on my home pc
[19:35] And an unknown number on the servers, though likely less than 80
[19:35] * Aranje hasn't figured out how to tab through screen windows
[19:37] ^A N
[19:37] ^A P
[19:37] oh! thank you :D
[19:38] You can also jump directly with ^A
[19:38] Downloading CindyCraig... ERROR (3).
[19:38] Downloading sirnicolay... ERROR (3).
[19:38] Error downloading 'CindyCraig'.
[19:38] Getting next username from tracker... done.
[19:38] Error downloading 'sirnicolay'.
[19:38] there's two for ya
[19:39] thanks shaqfu :D <3
[19:39] okay, 4 remaining in one screen session
[19:42] 9 remaining in another screen session
[19:43] gratel was an error
[19:43] lifequest22 as well
[19:45] msweeney, cupys, Guayabito123, emiliana_dewi are errors too
[19:45] and 6 more not done yet
[19:48] alard: I'll tell you when all my crawlers finish, then you can mark anything else by me as free to grab again
[19:49] because my currently running count is far below what you've got as `checked out`
[19:50] Thanks. (Although it's not really necessary. I'll just requeue everything that's left later on.)
[21:43] Alright, I've got 6 users running. Most recent were huge.
[21:43] One's got `gallery` in the name, so I have no doubt it will be massive
[21:55] i'm mirroring the defcon website
[21:55] Aranje: also, help is ^A ?
[21:56] oh, cool! :D
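A very rough sketch of the client behaviour alard describes above -- 420 means rate limited, 404 means the queue is empty, and either way the right move is to wait 30 seconds and ask again. The tracker URL and item format here are invented for illustration; the real scripts handle this internally:

    #!/bin/sh
    TRACKER='http://tracker.example.org/request-item'   # hypothetical endpoint

    while [ ! -e STOP ]; do
        code=$(curl -s -o item.txt -w '%{http_code}' "$TRACKER")
        case "$code" in
            200) echo "got item: $(cat item.txt)" ;;              # hand off to the downloader here
            420) echo "rate limited, chilling out"; sleep 30 ;;
            404) echo "queue empty, waiting"; sleep 30 ;;
            *)   echo "unexpected response: $code"; sleep 30 ;;
        esac
    done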
[21:56] * Aranje had been too lazy to go look up the man page
[21:56] ^A " will give you a list of available windows for you to select from with the arrow keys and enter
[21:57] if using GNU screen and not dtach or some other similar tool
[21:57] mmkay. Yeah, just using regular old screen.
[21:58] oh neat!
[21:58] that's what I needed
[21:58] past 10 windows I can't tell how many screens I had :D
[21:58] man, shankar gallery has been running for hours. this thing is gonna be huge.
[22:13] getting the first week of gbtv uploaded
[22:13] may take up to 4 hours
[22:13] :-(
[22:13] i also had to change the names since archive.org or gftp doesn't like them
[22:30] so should I stop the downloaders as they finish up users?
[22:34] dashcloud, if you `touch STOP` in the directory you ran the downloaders in, they will stop themselves automatically after they finish whatever they're doing.
[22:34] okay
[22:34] It's not necessary though, they'll keep checking back with the server in the meantime
[22:35] and there's still some cleanup to do, so we still need some of the downloaders running
[22:43] okay
[22:48] alard: I haven't looked at the new warrior yet (I just learned of it last night when Jason mentioned it at the AADL). is there a way to have the tracker tell the workers the project is complete, which would allow the warrior to go back to the menu?
[22:49] (just seems like a handy thing to have)
[22:53] Aranje: some = 70?
[22:55] so archive team downloaded all of tabblo in about one day (of actual downloading)?
[23:02] "(saying something about tabblo, hearing it out loud sounding like tableau) Wow. I just got the name. No wonder they failed."
[23:08] http://t.co/FFp5cPIh
[23:15] Coderjoe: Hah. You know, now that you mention it I only just got it too... I assumed it was just an insufferable web 2.0 name
[23:22] Tab blows
[23:25] Whee -14
[23:27] 4.21 7.26 12.03 - my server hasn't seen a load average this low in the last month!
[23:37] Looks like I've still got one session downloading, the rest sleeping with status 420.
[23:38] I got a couple of usernames
[23:38] Too bad they all error 4'd on me
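On the renaming godane mentions: a small sketch of the usual cleanup before FTPing files to archive.org, keeping only letters, digits, dot, dash and underscore (the characters identifiers and filenames there are generally happy with); the gbtv-week1/ directory is illustrative:

    # replace anything outside A-Za-z0-9._- with underscores before upload
    for f in gbtv-week1/*; do
        clean=$(basename "$f" | sed 's/[^A-Za-z0-9._-]/_/g')
        [ "$(basename "$f")" = "$clean" ] || mv -v "$f" "gbtv-week1/$clean"
    done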