[00:13] $ curl https://logplex.heroku.com/sessions/06ec4859-d9a0-4d8e-8060-12aaf4b2c155?srv=1330215270
[00:13] hurry it'll only be there for a minute
[00:14] but that's the realtime stream of what's going down :)
[00:18] (don't click in a browser)
[00:53] 60 7:54PM:alex@alex-desktop:~ 710 π curl "https://logplex.heroku.com/sessions/06ec4859-d9a0-4d8e-8060-12aaf4b2c155?srv=1330215270" -k
[00:53] Not found%
[00:53] :(
[00:53] too late now
[00:53] let me make another
[00:54] undersco2: curl https://logplex.heroku.com/sessions/94a1be48-a427-41d0-8c47-ef353bc52e88?srv=1330217735
[00:54] ooh
[00:55] Will this stay alive if I keep the curl running?
[00:55] yeah
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: % Total % Received % Xferd Average Speed Time Time Time Current
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: >
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: > Content-Length: 7227084800
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: > Expect: 100-continue
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: > authorization: LOW i0X3DBmEtLlNnLGX:
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: Dload Upload Total Spent Left Speed
[00:55] 0 6892M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* The requested URL returned error: 403
[00:55] 0 6892M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Closing connection #0
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]:
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: curl: (22) The requested URL returned error: 403
[00:55] 2012-02-26T00:56:50+00:00 app[scrape.87]: Upload error. Wait and try again.
[00:55] uh oh
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: > Expect: 100-continue
[00:56] uh oh
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: >
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: % Total % Received % Xferd Average Speed Time Time Time Current
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: Dload Upload Total Spent Left Speed
[00:56] 0 8433M 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0* The requested URL returned error: 403
[00:56] 0 8433M 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0* Closing connection #0
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]:
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: curl: (22) The requested URL returned error: 403
[00:56] 2012-02-26T00:57:09+00:00 app[scrape.99]: Upload error. Wait and try again.
[00:56] Every single one says that
[00:56] Are you trying to upload to a collection you don't have privs for?
[00:56] * kennethre kills them
[01:04] undersco2: I shouldn't be, alard, were those keys valid?
[01:23] alard: It's working.
[01:23] Wow, undersco2 - dump that somewhere else?
[02:32] user error.
[02:33] SketchCow: Is the s3 upload limited? I'm uploading, but with 250kB/s.
[02:34] From batcave it did 20MB/s.
[02:34] And rsync to fos wasn't slow either.
[02:38] Nothing's limited, period.
[02:39] But we have a lot of people doing a lot of maintenance and work
[02:39] And they don't tend to have engineer alerts, period.
[02:51] http://cl.ly/1d432z372j03463k0H45
[02:58] SketchCow: Sorry, didn't mean to be quite so noisy
[02:58] kennethre: Will you generate another logplex link? <3
[02:58] undersco2, haha one sec
[02:59] I'm addicted to watching things like that
[02:59] me too :)
[02:59] https://logplex.heroku.com/sessions/7737d2bb-1efa-4138-8813-024c9f80aa6d?srv=1330225244
[02:59] \o/
[02:59] thanks
[03:00] 4 sets of 100 instances, would that be 400x$5 for us normals?
[03:00] what time period?
[03:00] $0.05/hr * 400
[03:01] so $20 an hour
[03:01] and i think each one can get a good 10GB in an hour easily
[03:01] so
[03:01] $1 for 500GB
[03:01] not bad at all ;)
[03:02] That's actually a pretty good deal
[03:02] Oh, okay, I thought it was $5 an hour
[03:02] $0.05 is a lot better!
[03:02] haha
[03:03] well most people don't use it
[03:03] like this
[03:03] That'd be good for a quick and dirty DOS too
[03:03] which is why we get these great speeds
[03:03] * undersco2 whistles innocently
[03:03] it's almost always all inbound traffic, not outbound
[03:03] schlurp
[03:03] yes, they are perfect for it
[03:03] haha
[03:03] Is there a limit to what software you can run?
[03:03] nope
[03:03] I should probably fuck around and learn more about the platform
[03:03] only turing-compliant software is allowed.
[03:04] undersco2: http://devcenter.heroku.com/articles/buildpacks
[03:04] Unmetered bw included in that $0.05/h?
[03:04] undersco2: correct
[03:04] nice
[03:04] yeah it's nice
[03:04] It'd be good just for like running encoding farms on demand or such
[03:04] he he he
[03:04] although if this is a trend i'm sure that'd change quickly
[03:04] yeah
[03:04] potentially, yeah
[03:05] scientific computing is certainly a possibility
[03:05] we're really optimized for http stuff right now
[03:05] product-wise
[03:05] technically, you can run just about anything though
[03:06] I see
[03:06] one's at 79%
[03:06] i can't wait to see one of these uploads succeed
[03:07] So are the users not getting marked as done until they upload? Is that why the tracker isn't showing kennethre as a vertical line again?
[03:07] DoubleJ: we're marking at upload time right now
[03:07] DoubleJ: and uploading in 10GB chunks
[03:08] I saw the 10 GB part, I was just wondering why things weren't blowing up yet. Guess I'll just wait for the first one to finish :)
[03:08] 2012-02-26T03:09:42+00:00 heroku[scrape2.97]: Error R14 (Memory quota exceeded)
[03:08] 2012-02-26T03:09:42+00:00 heroku[scrape2.97]: Process running mem=512M(100.0%)
[03:08] :'(
[03:08] undersco2: yeah wget's a beast
[03:09] haha
[03:09] undersco2: it hasn't been a problem yet, just swaps a bit
[03:09] we're saving some great stuff here
[03:09] http://gallery.me.com/gunnimacman#100097/DSCF3630&bgcolor=black
[03:11] kennethre: You don't have any promo codes or anything for heroku, do you? :D
[03:11] Or do you guys have a student/educational pricing thing?
[03:11] Would be pretty cool to use this in our web programming class
[03:12] undersco2: we're extremely generous already
[03:12] oh really?
[03:12] undersco2: every app gets 750 hours free per month
[03:12] oh wow!
[03:12] undersco2: which lets you run, say, one web process constantly all month
[03:12] that's impressive
[03:12] it's pretty awesome :)
[03:13] So you could run 750 instances for an hour, or 1 instance for 750 hours?
[03:13] that's the great thing about clouds
[03:13] undersco2: both
[03:13] undersco2: and that's per app, so you could have 100 apps that are all running full time for free
[03:14] undersco2: essentially, we charge you when you scale
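
A quick sanity check on the dyno arithmetic above, as a shell sketch. The only inputs are figures quoted in the channel, and the 10 GB per dyno-hour is kennethre's rough estimate, not a measured rate. Note that at 10 GB per dyno-hour the fleet works out to about 200 GB per dollar; the "$1 for 500GB" quip would need closer to 25 GB per dyno-hour.

    #!/bin/sh
    # Figures as quoted in the channel above.
    DYNOS=400
    CENTS_PER_DYNO_HOUR=5      # $0.05 per dyno-hour
    GB_PER_DYNO_HOUR=10        # kennethre's estimate

    echo "hourly cost: \$$(( DYNOS * CENTS_PER_DYNO_HOUR / 100 ))"            # $20
    echo "hourly haul: $(( DYNOS * GB_PER_DYNO_HOUR )) GB"                    # 4000 GB
    echo "GB per dollar: $(( GB_PER_DYNO_HOUR * 100 / CENTS_PER_DYNO_HOUR ))" # 200

    # Free tier, per the exchange above: 750 dyno-hours per app per month,
    # spendable as 1 dyno for 750 hours or 750 dynos for 1 hour.
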
[03:14] I see
[03:14] So, when $webstartup is small or demoing, it's effectively free
[03:14] But once they get slashdotted, heroku gets $$$
[03:14] :D
[03:14] haha
[03:15] depends on your app
[03:15] i get 2000r/s on a free app
[03:15] depends on your performance :)
[03:15] Wow
[03:15] That's a pretty incredible platform
[03:15] (even with the downtime earlier ;P)
[03:15] most don't get nearly that because people use crappy servers
[03:15] hehe, thanks
[03:16] I work there because of how much I love it, not the other way around
[03:16] it's inspirational (for me)
[03:17] :)
[03:19] kennethre: What do you do for persistent data storage?
[03:19] undersco2: you don't
[03:19] undersco2: that doesn't scale
[03:19] undersco2: databases, s3
[03:19] I'm just saying, if I want to run a webapp with persistent storage
[03:19] Oh, okay
[03:20] that's the biggest limitation
[03:20] yeah you'd use s3
[03:20] some providers are doing like fuse mounts
[03:20] Or set up tahoe-lafs and dynamically move data when dynos disappear
[03:20] ;D
[03:20] and all the runtimes share a network share, but that's just a terrible idea
[03:20] what's that?
[03:21] It ensures data integrity
[03:21] oh nice
[03:21] On untrusted storage nodes
[03:21] Tahoe-LAFS is a Free and Open cloud storage system. It distributes your data across multiple servers. Even if some of the servers fail or are taken over by an attacker, the entire filesystem continues to function correctly, including preservation of your privacy and security.
[03:21] you should build a buildpack for it :)
[03:21] I should, 'twould be a good learning experience
[03:21] does it require kernel hacks?
[03:21] nope
[03:21] interesting
[03:21] written in python too, iirc
[03:22] DO IT
[03:22] haha
[03:22] ok
[03:23] kennethre: Is there an api to automatically scale?
[03:23] undersco2: no, but there is an api
[03:23] undersco2: every app's needs are different.
[03:23] Like, if my web app goes "hey, we have a lot more videos coming in currently, let me spin up 5 more encoding worker dynos"
[03:24] (I know the app has to do all the logic, just curious if there's an API to stop/start dynos)
[03:25] ah yes there is
[03:25] undersco2: https://github.com/heroku/heroku.py
[03:26] while everyone's still paying attention to this conversation, my 1.5tb backup drive fell off a table, boom, click of death. it was mostly backed-up tv shows, but there's enough on it that i can't easily replace, that is actually mine, that i was actually considering paying the insane recovery costs
[03:26] undersco2: heroku.apps['myapp'].processes['encoder'].scale(40)
[03:26] kennethre: that's awesome!
[03:26] Jeez, this is really cool
[03:27] the api is actually quite terrible right now
[03:27] my wrapper makes it awesome though :)
[03:28] bsmith093: woo gotta do some recovery. you need 3-4TB available somewhere. image the drive asap
[03:29] yeah but my point was, sorry for not clarifying: should i send it out for recovery, or will they bust me? do they care what's on it?
[03:29] oh
[03:30] the drive will barely spin up
[03:30] I doubt they care
[03:30] bsmith093: a place might refuse to work on it but they won't turn you in or anything, if it's just pirated stuff. if it's really illegal stuff, like tor-level illegal, then they would
[03:31] they said $1700, "oh and of course we'll bill you for a new drive, sir"
[03:31] arrith: i figured that
[03:31] dang
[03:31] could make a mean backup machine for 1.7k
[03:31] really it was my fault for leaving it on a table unattended like that
[03:31] how?
[03:32] 1.5 TB drives are $100 ea
[03:32] yes but they need a clean room for this case
[03:32] you can always attempt it yourself ;)
[03:32] yeah you could try
[03:32] see what ddrescue does
[03:32] open, remove platters, put in new drive assembly, align the damn heads, read to new drive, ship new drive back to schlub
[03:33] assuming you need that
[03:33] tried; spins stutteringly, click of death
[03:34] and i don't have another multi-TB drive to mirror it to even if it was just damaged instead of dead
[03:52] WTF? A geocities website that's still up? http://www.geocities.com/growingjoel/index.html
[03:53] yeah apparently there are a couple still, paying customers or something
[03:53] that site was too autistic to notice that everyone else had left
[03:54] or yahoo has just not managed to find the servers in their rat's nest yet
[03:55] haha
[03:56] weird
[03:57] http://www.geocities.com/xanderubi/ is the other one someone mentioned
[03:58] http://www.geocities.com/SoHo/7373/
[03:59] I like how a website about ASCII art uses a Java applet for a splash screen
[04:03] kennethre: So a buildpack is responsible for setting up the execution environment of an app
[04:03] Whether that be installing python, or building wget-warc?
[04:05] undersco2: correct
[04:05] undersco2: you typically want that stuff to be prebuilt
[04:06] undersco2: like in a tarball it fetches
[04:10] Oh, okay
[04:10] Like, static binaries?
[04:10] exactly
[04:10] you know, speed and all
[04:10] yeah, ofc
[04:10] every time you push code to the app, the buildpack runs
[04:10] so you don't want to make it take a while
[04:10] you can always build once and cache though
[04:10] we have a cache directory
[04:11] What's the difference between making the app grab the tarball and run it, and including it in the buildpack?
[04:11] I notice that's what you do with heroku-splinder
[04:12] https://github.com/kennethreitz/heroku-splinder/blob/master/Makefile
[04:12] Specifically there
[04:15] undersco2: that was me being super lazy and making it 'just work'
[04:15] undersco2: the new one is much nicer
[04:15] oh okay
[04:16] Oh, does the new one actually use a buildpack?
[04:16] yeah
[04:16] before it took like 5 minutes for each one to compile
[04:17] it was a hack
[04:17] the output of 400 wget builds is not fun :)
[04:17] (in a single stream)
[04:17] haha
[04:18] g2g
[04:28] if you're curious about what's going on with the geocities pages that are still up, this is probably the best answer: http://contemporary-home-computing.org/1tb/archives/3022
[05:30] Damn, no tahoe-lafs on heroku
[05:31] Hm, I wonder...
[05:31] nah
[05:31] No sqlite3 dev headers
[05:39] I LOVE YOU TOO, AT&T
[05:40] lol
[05:42] 10. 75.29.192.7 85.7% 259 66.3 62.6 61.3 69.0 1.6
[05:42] 11. 75.29.192.57 93.0% 258 60.9 60.2 59.9 61.0 0.3
[05:42] 12. 75.29.192.29 85.2% 258 60.8 62.2 60.4 89.8 4.9
[05:43] i love those 80%+ packet losses
[05:44] (that's from the perspective of an outside server towards my home connection)
[05:46] that's nice
[05:55] heh
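
An annotation on the dead-drive exchange above: "see what ddrescue does" refers to GNU ddrescue's usual two-pass rescue, sketched here with a placeholder device name and paths. It assumes the drive still enumerates at all, which a true click-of-death case may not, and it is why arrith says you need 3-4 TB free: the image of the 1.5 TB drive plus room to recover files out of it.

    # Pass 1: grab everything readable, skipping damaged areas (-n) and
    # recording progress in a logfile so the run can be stopped and resumed.
    ddrescue -n /dev/sdX /mnt/big/drive.img /mnt/big/rescue.log

    # Pass 2: go back for the bad areas with direct disc access (-d),
    # retrying each bad sector up to three times (-r3).
    ddrescue -d -r3 /dev/sdX /mnt/big/drive.img /mnt/big/rescue.log
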
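And on the buildpack thread: Heroku runs a buildpack's bin/compile script on every push, passing it a build directory and a cache directory, which is what makes the "prebuilt tarball, cached between pushes" approach kennethre describes work. A minimal sketch; the tarball URL is hypothetical, and a real buildpack would pin a binary built against Heroku's stack.

    #!/bin/sh
    # bin/compile <build-dir> <cache-dir>
    set -e
    BUILD_DIR="$1"
    CACHE_DIR="$2"
    TARBALL_URL="https://example.com/wget-warc-static.tar.gz"   # placeholder

    mkdir -p "$CACHE_DIR"
    if [ ! -f "$CACHE_DIR/wget-warc.tar.gz" ]; then
        # Only hit the network on the first push; later pushes reuse the cache.
        curl -sSfL "$TARBALL_URL" -o "$CACHE_DIR/wget-warc.tar.gz"
    fi
    mkdir -p "$BUILD_DIR/vendor"
    tar -xzf "$CACHE_DIR/wget-warc.tar.gz" -C "$BUILD_DIR/vendor"
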
[08:00] undersco2: getting in on the heroku party?
[08:00] loving it
[08:01] except they have no sqlite headers
[08:01] so building tahoe-lafs isn't possible, as far as I can figure out
[08:01] heh
[08:01] you do know that you will end up paying quite a bit of $/month to do what kenneth is doing, right :P
[08:02] wow, compacted a CouchDB database from 600 MB to 157.3 MB
[08:02] that's always awesome, I guess
[10:15] kennethre: Oops? http://s3.us.archive.org/ "No server is available to handle this request."
[10:21] alard: gahh
[10:23] How many instances are/were you running? Do you still have some uploading?
[10:27] they're all still running
[10:27] i can shut them down if needed
[10:29] kennethre, are you running that in the middle of heroku's almost-downtime? :p
[10:30] Nemo_bis: All Systems Go
[11:12] kennethre: Maybe we should stop, then wait for IA's s3 api to recover and then slowly start again (with a few instances).
[11:12] If indeed the mobileme uploads are causing the trouble.
[11:27] http://canv.as/ugc/original/60762f02d5fd2b1c7fdd9b806a5bd232033130b9.png
[11:54] http://www36.us.archive.org/xx/mrtg/
[11:54] http://www37.us.archive.org/xx/mrtg/
[11:54] http://www38.us.archive.org/xx/mrtg/
[11:54] http://www40.us.archive.org/xx/mrtg/
[11:54] http://www41.us.archive.org/xx/mrtg/
[12:42] breaking stuff again?
[12:50] brute force downloading that smf forum was a terrible idea. it seems to serve a page for each single post as well...
[12:51] Coderjoe: Well, it's like the local school library saying they are "always interested in new books" and kennethre sending them 400 truckloads. At once.
[12:53] Schbirid: yep. fun, isn't it? :-\
[12:53] yeah :(
[12:53] * Coderjoe disappears for 12 hours or so
[12:54] * Schbirid hopes ext3 does not have a 64k files per directory limitation or so
[13:02] oh thank god, you can use wildcards in --reject
[13:02] smf/index.php/topic,4019.msg71589.html -> --reject .msg*.html
[13:06] 7z badly needs a --quiet
[14:00] crap, --reject .msg*.html still downloads e.g. http://rome.ro/smf/index.php/topic,5000.msg112682.html
[14:01] ah
[14:01] i should strip the .html
[14:03] --reject *.msg* does not work either, raaa
[14:07] Maybe it needs a full path, like /smf/index.php/*msg* ?
[14:08] it should not: "Specify comma-separated lists of file name suffixes or patterns to accept or reject." but i will try
[14:08] nope
[14:09] Ah. You need to quote the rejlist to avoid the shell trying to expand the asterisk. --reject "*msg*"
[14:09] Why they had to put that on a separate page is beyond me.
[14:09] ooh, i COULD have thought of that
[14:10] I never do. It just seems so blatantly wrong that the shell would handle expansions instead of the program being invoked that it never dawns on me.
[14:11] oh, i previously also only checked the log, and it still lists those urls but does not seem to save them
[14:11] So I had to find it by clicking the "See Types of Files" link in the wget manual :)
[14:11] heh
[15:33] kennethre: Have you killed your Heroku instances? The queues seem to have disappeared.
[15:33] alard: i did
[15:34] Ah, okay.
[15:34] i need to figure out if everything was uploading properly
[15:34] the tracker wasn't catching my stuff
[15:34] and the archive.org stuff didn't seem to have the files in each item
[15:34] i didn't spend too much time investigating though
[15:35] Of the 396 numbers that have been given out, only 46 have been uploaded.
[15:36] You ran 400 instances?
[15:45] *cough*, maybe.
[15:45] mayhaps.
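
The --reject dead end above comes down to alard's diagnosis: unquoted, the shell may expand *msg* against the current directory before wget ever sees it; quoted, the pattern reaches wget intact and is matched against each URL's filename. A sketch (Schbirid's actual crawl flags aren't in the log; --mirror stands in for them):

    # Quote the pattern so wget, not the shell, does the matching.
    wget --mirror --reject "*msg*" http://rome.ro/smf/

    # Caveat, matching Schbirid's observation: in a recursive crawl wget still
    # fetches rejected HTML pages (it needs them to extract links) and deletes
    # them afterwards, so the log lists those urls but nothing is saved.
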
[15:45] the Jason Scott adventure is pretty darn good; I'm glad my idea got turned into something other people liked http://www.glorioustrainwrecks.com/node/2511
[15:45] kennethre: The bump is probably your work: http://www36.us.archive.org/xx/mrtg/
[15:45] hmm, so 400 is too many
[15:46] i like how the cpu load dropped dramatically
[15:46] Yes. From the graph I derive that the server is configured to handle up to 40 concurrent connections.
[15:46] hmm, i'll leave it at 50 then i guess
[15:46] From the logs I believe there are 6 servers, so 6 * 40 = 240.
[15:46] ah excellent
[15:46] i'll do 200 then
[15:46] But there are other people too, so I'd try 100.
[15:47] is s3 back up?
[15:47] The uploads probably won't be going much faster if you do 100 or 200.
[15:47] Yeah, s3.us.archive.org is responsive again. (Probably around the time the number of connections dropped.)
[15:47] starting up 100
[15:48] up
[15:48] I paused you on the tracker, so I'll unpause you now.
[15:48] alright, i'll watch carefully this time :)
[15:50] Yeah, I hope it will work better now. The uploads that did manage to get through look fine.
[17:54] SketchCow: thought you'd like to know http://en.wikipedia.org/wiki/Computer_Networks:_The_Heralds_of_Resource_Sharing
[18:19] is it appropriate to use archive.org for some open software distribution?
[18:19] I'm assuming yes
[18:33] You'd like me to know about a film my film deatcher directed in 1972?
[18:33] :)
[18:33] teacher.
[18:34] Archive.org is certainly fine for open software distribution, yes.
[18:34] I was going to set up some s3 buckets and see if heroku would sponsor
[18:34] but that would be a lot easier
[18:35] as long as it's never unavailable
[18:38] Nice demanded uptime.
[18:39] hah
[18:39] a better wording would have been
[18:40] as long as it doesn't have a history of being unavailable frequently :)
[18:42] That could depend on the archiveteam operations that are going on.
[18:43] :)
[18:49] No, it's not.
[18:49] The #1 reason archive.org goes offline is some tard uploads an anime that hit the streets 2 hours previously.
[18:49] classy
[18:50] Under an item name like dskjfhsdkjhdf so you know it's not there for the betterment of the archive.
[18:50] It's literally just to inject into the host like a parasite.
[18:50] free megaupload
[18:50] Oh, non-profit library? Yeah, let's shit on that
[18:51] gotta love people
[18:51] SketchCow: True, although I'm afraid that we might have had something to do with today's s3 api downtime. :)
[18:54] alard: hmm app[scrape1.94]: ---> 303% full.
[18:54] Is it going to upload?
[18:54] It checks the size after each user, so that's probably a 15GB user.
[18:55] hopefully
[18:55] hard to tell right now
[18:55] ah yeah it is uploading
[18:55] Do you see any upload speeds? I still find it weird that it's so slow.
[18:56] hard to tell
[18:56] is the final column upload speed?
[18:56] Yes, that's the current speed.
[18:56] ~267k
[18:56] sucks
[18:56] 100–300
[18:56] Same for my instance.
[18:56] 252k on average.
[18:57] are we saturating the pipe?
[18:58] Not sure which pipe. If I try an upload from batcave.textfiles.com it's doing 10MB/s with ease.
[18:58] gah
[18:58] sadface
[18:58] this will take all day
[19:00] ~9/hr upload
[19:00] not so bad
[19:06] s3? is that one line? i am currently uploading something unrelated with ~120kbyte/s. if you want, let's try what happens if i stop?
[19:06] highlight me if
[19:35] Schbirid: I don't think that would make any difference. The speed didn't change when more mobileme instances started uploading.
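
For reference, the failing requests in the logplex stream at the top of this log (Content-Length, Expect: 100-continue, authorization: LOW) have the shape of a curl upload to archive.org's S3-compatible API, roughly as below; the item name, file, and key pair are placeholders. A 403 there typically means the keys are wrong or lack permission on the target, which is what undersco2 was asked about at the time.

    # "LOW <access>:<secret>" is archive.org's S3-compatible auth header,
    # as seen in the scrape.87 log lines above.
    curl --fail \
         --header "authorization: LOW $IA_ACCESS_KEY:$IA_SECRET_KEY" \
         --header "x-archive-auto-make-bucket: 1" \
         --upload-file "$CHUNK" \
         "http://s3.us.archive.org/$ITEM/$(basename "$CHUNK")"
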
[19:35] It's strange.
[19:35] ok
[19:35] (But thanks for the offer. :)
[19:37] Any idea how large the "Bell System Technical Journal" archive is?
[19:39] ahaha, this MobileMe user is still going
[19:39] holy crap, you got a copy of the entire BSTJ library?
[19:39] i was looking for that for months
[19:39] back in 2008
[19:40] to get a copy of a paper doug mcilroy wrote
[19:40] eventually i emailed him at dartmouth and he scanned and sent me a copy of the relevant paper
[19:40] but there was a 'lost' digital archive of the whole thing?
[19:41] X-Scale: You mean the ones that are up at http://www.alcatel-lucent.com/bstj/ ?
[19:41] Exactly, ersi
[19:41] I'm not sure about where to get the reports, LordNlptp
[19:41] I've been looking to crawl and download those
[19:41] just haven't had time yet
[19:43] http://torrentz.me/b7ac86621e38b27529f7afa4fc318e2d0b0ca646
[19:43] 40 GB :o
[19:53] http://batcave.textfiles.com/business_cards.pdf
[19:57] SketchCow: nice!
[19:57] ordered already or would you like feedback?
[19:58] Indeed. (Isn't it a bit old-fashioned to include the http:// though?)
[20:00] nice cards
[20:00] alard: Shu'up
[20:00] how would you distinguish if it isn't available on gopher://, alard?
[20:01] http:// helps software turn the URL into a link
[20:01] IRC clients/terminals and such
[20:06] That's what I said: old-fashioned. :)
[20:08] it's 7 more characters to type into the google
[20:08] You mean Altvista, right?
[20:08] *Alta
[20:08] Ask.com? right?
[20:09] No. You're confused. Askjeeves.com is where you ask how to type http:// into Altavista
[20:10] The good old days when Jeeves actually existed!
[20:13] http://www.opengeocoder.net/
[20:14] LordNlptp: McIlroy lecture at Bell Labs: .ram audio file -> http://cm.bell-labs.com/cm/cs/doug97.html and audio transcript -> http://research.swtch.com/bell-labs
[20:15] "The 'http://' at the beginning of URLs is a command to the internet browser. It stands for 'head to this place:' followed by two laser-gun noises."
[20:15] haha
[20:17] heh... I still remember reading in a journal about a brand new "search engine" located at "www.altavista.digital.com"
[20:21] But seriously, I hate certain trends in IT, such as the aforementioned removal of the http protocol
[20:21] Why don't we do the same for the href attribute?
[20:21]
[20:22] Opera goes further and eliminates query strings
[20:22] Eww!
[20:23] Why not go even one step further and eliminate the address field entirely, and when you want to go somewhere you get a prompt asking you for an address
[20:24] yes, i really hate that in opera
[20:24] HATE
[20:24] Or an even more extreme approach: ask for content -> http://www.parc.com/content/attachments/networking-named-content-CACM.pdf
[20:24] seriously
[20:25] nitro2k01: because there's more freckin' ways to ask for content besides over HTTP
[20:26] Similar thing that YouTube did for a short while but thankfully revoked. The whole control bar at the bottom of the widget would hide after a short while. "No. You don't need to know where you are in the clip or skip. Shut up and watch it to the end."
[20:26] ersi: What was that a response to exactly?
[20:27] Um, to skipping http:// in href=""
[20:27] Right. That was sarcasm, in case you didn't notice.
[20:27] Well, I'll redirect it to alard then :-P
[20:35] Ha.
[20:36] http://www.youtube.com/watch?feature=player_embedded&v=hhO1DnNKYbo#!&t=0m46s
[20:36] ha ha feedback on business cards
[20:36] dude, they're business cards
[20:36] What feedback is there.
[20:37] I'm mostly just wondering..
[20:48] the image on the front would look better if you were on the right side of it
[20:48] to me!
[20:50] OK.
[20:50] "Here's my new girlfriend."
[20:50] engaged already or would you like feedback?
[20:50] :)
[21:24] SketchCow: if your copies of the Hacker's Dictionary are younger than 1982, here's a good old version: http://article.olduse.net/114@Ahouxs.UUCP
[21:28] interestingly, this one ends at YU-SHIANG WHOLE FISH; the one wikipedia says is from 1981 adds ZERO and XYZZY.
[21:39] ha, old school equivalent to a wiki, just ftp over it yourself
[21:40] yeah
[21:40] jstor liberator is up on mefi: http://www.metafilter.com/113250/Liberate-Knowledge
[21:44] Hmm, only "450 saved documents!"? I remember it being over a thousand
[21:45] on a side note, metafilter sure is ugleh
[21:47] Hmm, was that intended to be published or just discovered by someone?
[21:51] SketchCow? undersco2?
[21:52] jason just tweeted about it
[21:53] Ah, I see.
[21:54] (Interestingly non-related hostname.)
[21:59] kill it
[21:59] I am so angry
[21:59] I asked them to kill the story.
[21:59] we can't have something like that on an archive.org hostname.
[22:00] but kill it on our side.
[22:00] partially my fault, I should have checked up on the project.
[22:00] SketchCow: You should ask undersco2, it's in his directory.
[22:00] I am in my car about to head to an interview, I really can't concentrate on that.
[22:01] you are right. I'm going to kill the entire web services on that machine.
[22:01] or maybe just his project.
[22:02] As an alternative, we could redirect it to a non-archive.org domain and call it the JSTOR shuffler, so it'll not be 'the thing that was removed' but 'the thing that was misunderstood'.
[22:03] should move it to archiveteam.org
[22:04] I do like this comment, though: http://www.metafilter.com/113250/Liberate-Knowledge#4208877
[22:17] I am trying to get them to shut down the machine.
[22:17] I had no f****** god damn god damn idea that tracker.archive.org led to that.
[22:17] I am so angry
[22:20] I have no access to that machine, a terrible oversight.
[22:33] SketchCow: Should we try to break it? We could try to clear the queue, for instance, so that perhaps it won't give out new papers.
[22:38] no, the problem is the hostname
[22:39] not the work being done.
[23:08] It's gone. Bye!
[23:49] shit shit shit shit shit
[23:50] ?
[23:53] shaqfu: Read scrollback.
[23:54] Oh, that