[02:00] Do we need more people for memac?
[02:01] I can pull a few more in
[02:07] alard: ^
[02:24] http://www.countdown.com.au/
[02:24] is going offline end of month
[02:24] for good
[02:24] http://en.wikipedia.org/wiki/Countdown_(Australian_TV_series)
[02:24] famous tv show ran in 70s and 80s
[02:24] culuturally very significant
[02:25] hrmm.
[02:28] I can agree with oli
[02:28] Countdown was big over here
[02:28] big is probably understatement for those who lived in that era
[02:28] was hugely significant
[02:29] in pop culture
[02:29] music
[02:29] everything
[02:29] had a massive audience
[02:30] I just sent them a msg
[02:30] asking what can be done to help archive it
[02:31] go for it
[02:31] Start a project channel
[02:31] And a wiki page
[02:31] fk they have already closed their youtube account
[02:32] btw sketchcow can you do a htaccess/mod_rewrite on the AT website to force either with or without www ?
[02:33] am starting a page now
[02:39] im kinda retarded
[02:39] http://archiveteam.org/index.php?title=Countdown
[02:40] http://archiveteam.org/index.php?title=File:Countdown.jpg
[02:42] can someone help plz :P
[02:54] Looks like you're doing fine.
[03:00] pretty apt name for the project :/
[07:09] http://winchester.craigslist.org/vgm/3074184507.html
[07:09] should i?
[07:12] It's a harmless purchase and a classic
[07:12] So, what's going to happen to readmill and readibility when the attorney general sues them?
[07:12] Wait, that's -bs
[07:13] SketchCow: do you know how much one of those things weighs?
[07:13] * kennethre has a 3rd floor apartment :)
[07:13] SketchCow: what's readmill doing now?
[07:14] They're significant
[07:14] I have an account but i never really figured it out
[07:15] like why it exists
[07:51] Logos now being drawn for Archive Team
[08:16] kennethre: this site says the "dimensional weight" is 345 lbs
[08:17] it has a regular weight column, but that has just ??? in it
[08:17] "dimensional weight" is the equivalent weight for postal billing, based on the size of the thing
[08:18] SketchCow: I think we have enough people running memac. The main issue right now is that these people aren't as effective, because the current set of users is full of problem cases.
[08:18] I've got a 20G user uploading now
[08:18] That user needs to be shot
[08:18] yes
[08:19] I had one a few days ago that caused wget to OOM
[08:19] they had like 14,000 files I think
[08:19] they are certainly problem users
[08:19] but that's just based on 1 lb per 194 cubic inches
[08:19] Coderjoe_: yes
[09:58] oops. if a item at IA is deleted, i guess the identifier is not available to be re-used, eh? :\
[10:01] one hour until today's https://www.quaddicted.com/stuff/temp/ovh.html (11 UTC)
[10:28] why does this shit always happen during prime sleeping hours
[10:30] a couple of days ago there was one at ~5utc
[10:36] Eh, it's always in the evening for me.
[11:35] kennethre: grab that sucker if you can
[11:35] an original galaga machine is worth quite a bit
[11:36] and the stupid locks can be replaced
[14:15] http://www.infoworld.com/d/cloud-computing/3-reasons-we-wont-see-cloud-api-standard-196056?source=rss_
[14:15] Not a great article.
[14:15] turns out rackspace isn't even using regular openstack
[14:16] did their own auth scheme
[14:16] Rackspace CEO recently made some statements about "We need a standard!!!"
[14:16] Which really means "We need a smoother upgrade path away from AWS because they are fucking killing us!!!"
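The "dimensional weight" discussed above ([08:16]-[08:19]) is a billable weight derived from package volume rather than mass; the log cites a divisor of 1 lb per 194 cubic inches. A minimal sketch of that arithmetic, with made-up cabinet dimensions (the listing's actual size is not given in the log):

```python
def dimensional_weight(length_in, width_in, height_in, divisor=194):
    """Billable weight in pounds: package volume divided by the carrier divisor."""
    return (length_in * width_in * height_in) / divisor

# Hypothetical cabinet dimensions in inches; purely illustrative.
print(round(dimensional_weight(68, 33, 25)))  # ~289 lb at 194 in^3 per lb
```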
[14:16] Must be hard to be Avis, when you're an Avis that's the size of Enterprise
[20:18] http://en.canoe.ca/montrealmirror/
[20:19] Lovely. I heard about the paper shutting down, but not that the website was taken down already.
[20:39] speaking of newspapers, is there any good way to archive stuff from the Google News Archives? ( http://news.google.com/newspapers/ )
[20:50] i am afraid i asked this before but forgot the answer, is there a simple warc extractor yet?
[21:03] Schbirid: http://warctozip.herokuapp.com/
[21:04] Depends on what you mean by 'extracting'.
[21:04] haha, nice
[21:04] yeah, that will save me the time i was wanting to save
[21:04] thanks
[21:04] I don't know if it works for large warcs, though, there might be some Heroku limits.
[21:05] mine are tiny, it will be fine
[21:06] It might be useful if SketchCow or underscor could run it on an archive.org machine, so we could link people to it.
[21:23] zgrepping a 13G archive would probably be faster than grepping a gazillion of tiny files in many subdirectories, right? cpu is powerful
[21:26] probably, yes
[21:27] for some reason it gives me bollocks:
[21:27] $ zgrep fileplanet forums.tar.gz
[21:27] Binary file (standard input) matches
[21:28] on a smaller tar.gz it works fine
[21:28] meh
[21:28] grep --text ?
[21:29] that works, thanks
[21:33] you could extract it to tmpfs instead of disk
[21:33] tmpfs flies
[21:33] whoa, i can! that server has 16GB ram and i aint using much of it
[21:33] great idea, thanks
[21:33] :)
[21:34] tmpfs would be more context switches than tar|grep, but you get the filename out of it
[21:47] stupid server. it filled up swap instead
[21:51] meh, the file is just 1G too much or something
[22:00] heh
[23:52] i think you guys have it
[23:52] but here is full science magazine archive: http://www.demonoid.me/files/details/1791652/16021566/
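On the WARC-extraction question at [20:50]: the warctozip service converts a WARC to a zip, but records can also be read directly, which avoids any upload-size limits. A minimal sketch, assuming the warcio Python library (not mentioned in the log) and a placeholder filename:

```python
from warcio.archiveiterator import ArchiveIterator

# "example.warc.gz" is a placeholder; point it at a real WARC file.
with open("example.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "response":
            continue
        uri = record.rec_headers.get_header("WARC-Target-URI")
        body = record.content_stream().read()
        print(uri, len(body), "bytes")
```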
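On the zgrep vs. tmpfs discussion at [21:23]-[21:47]: streaming the tar.gz and searching each member one at a time sidesteps both the "Binary file matches" problem and the need to extract 13G+ to disk or RAM, while still reporting which file each hit came from. A rough sketch, using the forums.tar.gz name and "fileplanet" search string from the log:

```python
import tarfile

needle = b"fileplanet"
# 'r|gz' opens the archive as a pure stream: no random access, low memory use.
with tarfile.open("forums.tar.gz", mode="r|gz") as tar:
    for member in tar:
        if not member.isfile():
            continue
        f = tar.extractfile(member)  # only valid for the current member in stream mode
        if f is None:
            continue
        for lineno, line in enumerate(f, start=1):
            if needle in line:
                print(f"{member.name}:{lineno}: {line.decode('utf-8', 'replace').rstrip()}")
```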