[00:20] whoa- reCaptcha on archiveteam wiki- I feel a meme coming on here
[01:04] so is there a spot to put all the files that can't be read, or is someone collecting them now? I'm aware of the massive multimedia file archives, but not so much for other file types
[15:08] " Attention: This site will cease operations after August 31, 2012. Please make sure to download and save any files you wish to keep before that date. "
[15:08] hey, are there any plans to archive this? https://good.net/dl/bd/
[15:08] someone asked about it, i suggested contacting the owner, no idea what happened afterwards
[15:09] I grabbed a copy.
[15:09] I've been integrating the data into archive.org as hackercon stuff
[15:09] there is much more at good.net than that dir
[15:09] Yes
[15:09] But that's what I wanted to grab.
[15:09] It's quite a bit in there.
[15:10] gotcha, just noticed the warning and thought I'd mention it - I should've known someone was on it :)
[15:13] never assume that!
[15:17] Yeah, we get blindsided often.
[15:17] Lot of shit dyin'
[16:34] SketchCow: underscor: :D
[16:34] The con went amazingly well, even with a goddamn fire evacuation.
[16:35] Fire evacuation at a con sounds... hazardous
[16:44] It went perfectly.
[17:05] Zebranky: Did everyone canter out?
[17:05] :D
[17:06] ...
[17:13] I, um... was actually asleep for it >.>
[17:14] Woke up to the vice chair calling me. "Andrew, where the hell are you? A light just caught on fire and the venue needs to speak with you."
[17:14] slacker chair
[17:14] :D
[17:14] Was it spontaneous or due to stupidity?
[17:15] (attendee stupidity)
[17:19] Spontaneous.
[17:33] it was high up in the air
[17:33] google showed me
[17:33] woop woop woop off-topic siren
[17:38] < underscor> I can't believe people are that dumb
[17:38] I can
[17:41] yeah, the only problem with -bs is getting people to join there
[17:43] does #archiveteam have an auto-invite-on-join mode?
[17:43] er
[17:44] s/#archiveteam/efnet/
[17:44] doubt it
[18:26] nope
[19:04] Anyone able to build wget from git? Seems to have broken, think it might be gnulib.
[19:05] Not sure if it's just OS X.
[19:10] mistym: No, can't build.
[19:10] make[2]: Entering directory `/home/gijs/archive/wget-t/po'
[19:10] make[2]: *** No rule to make target `Makevars', needed by `Makefile'. Stop.
[19:10] alard: OK, exact same problem I'm seeing.
[19:11] I did see that message recently, though. There's a message on the wget mailing list.
[19:13] Yeah, saw it was another Mac user - part of what made me wonder if there was a Mac-specific bug.
[19:13] Ubuntu as well, apparently.
[19:14] Stepping back a few commits in gnulib to see if I can see where this came up - there hasn't been a commit in wget for a couple of weeks, so I'm assuming the problem comes from there.
[19:40] spam issues on the wiki :[
[19:42] Is it at least funny spam? :(
[19:42] Aha. Tracked it down. It is a gnulib bug.
[19:45] Hurrah! Library nerds showing up on my
[19:45] "Just Solve It" project.
[19:45] library_nerds++
[19:47] yay
[19:49] mistym: no, it's annoying spam
[19:49] SketchCow: O_O where?
[19:53] My ascii blog.
[19:55] Dag Ågren ... I've had some correspondence with him
[19:56] among other things, he wrote a FOSS RAR v3 extractor :)
[19:58] yeah I've been pestering him with unarchiver bugs
[19:59] DFJustin: related to which formats?
[19:59] SketchCow: is there a list yet?
[19:59] would be nice if we could start adding to it
[19:59] (list of formats)
[19:59] Got some other stuff to work on with it before we go that far.
[19:59] I can think of quite a few.
[19:59] ok
[19:59] didn't realize he was WAHa.06x36 until yesterday
[20:00] disk image formats, which I'd consider rather critical
[20:00] ohh? where does he use that name?
[20:05] his site seems to have shat a brick but http://web.archive.org/web/20101108095125/http://wakaba.c3.cx/s/web/wakaba_kareha
[20:07] re: which formats, https://code.google.com/p/theunarchiver/issues/list?can=1&q=reporter%3Adopefishjustin
[20:19] SketchCow: really enjoying your Super 64 preso
[20:20] yeah I actually just rebought mario 64 because of watching that, sold mine years ago
[20:25] hmm
[20:25] somewhere I have a document that mostly describes the WSR88D radar data file format
[20:26] (dammit, NWS, you're a government agency. why is it such a pain in the ass to locate documentation within your purview?)
[20:28] it might be had instead of have, since I don't know what I did with it, and it might be on the drive that died at home
[20:34] alard: Thanks for your help earlier - reported the bug to the gnulib and wget lists
[20:35] ...and apparently as I was doing so, someone actually submitted a patch for the very bug to the gnulib list. Go figure!
[21:53] http://techcrunch.com/2012/07/03/google-shutdowns-continue-igoogle-google-video-google-mini-others-are-killed/
[21:53] posted 2 hrs ago
[21:54] did google end up automatically migrating all Google Video stuff to Youtube? or did they just put closing Google Video on hold while continuing that line of "we'll help users migrate their videos"?
[21:55] since if it hasn't automatically been migrated, then stuff will get lost
[21:56] 1) they haven't migrated, but they've made a halfhearted effort to convince people to migrate
[21:56] 2) we got it all years ago
[21:56] so fuck em
[21:57] that was only a little over a year ago, wasn't it?
[21:58] time flies when you're archiving stuff
[21:58] and did we actually finish, or did we just back off when they said they'd migrate?
[21:58] it was the last week before easter
[21:58] we finished, iirc
[22:00] but what happened to the data?
[22:02] I thought we backed off, the full amount of data would have been huge as I recall
[22:02] that was my recollection as well
[22:05] huge by 2009 standards ;)
[22:06] yes it wasn't mobileme huge
[22:06] http://archiveteam.org/index.php?title=Google_video
[22:07] is there something I should throw some heroku boxes at now that mobileme's done?
[22:08] urlteam! heh
[22:08] chronomex: we're sure we got it *all* years ago?
[22:08] wrong, I was mistaken
[22:08] maybe run a 'verify' sweep or something
[22:08] about half, according to the wiki
[22:08] ah
[22:09] nice thing is there haven't been any new uploads since then, and I would bet no site changes
[22:09] our wiki, which I believe is on the internet which you have access to
[22:09] yeah i read 'on track to get it all', but that doesn't sound like all
[22:09] chronomex: yes i've heard of this internet, been meaning to try it
[22:10] DFJustin: yeah hmm. i'd think that would make verifying existing downloads a lot easier
[22:10] so probably a few sprinkles of fairy dust from alard and we can inhale the rest lickety-split
[22:10] on the format problem wiki page, wotsit.org is listed as broken- I just went there and the homepage loaded- so what's actually broken?
[22:11] dashcloud: the downloads of the individual documentation files
[22:11] the actual important part :|
[22:11] one I tried appeared to work
[22:11] -----------------------------
[22:11] Google Video stopped taking uploads in May 2009. Later this summer we'll be moving the remaining hosted content to YouTube. Google Video users have until August 20 to migrate, delete or download their content. We'll then move all remaining Google Video content to YouTube as private videos that users can access in the YouTube video manager. For more details, please see our post on the YouTube blog.
[22:11] -----------------------------
[22:11] perhaps it's anything that's not cached, I suppose ... their asp backend seems brokenish
[22:11] See that? We did that.
[22:12] wonder why they chose to make public videos private
[22:12] oh sweet
[22:12] Because of terms of service.
[22:12] er wait private fuck
[22:12] Unless they can get you to sign to the new one, they will make it private until you do.
[22:13] they say in that message 'we'll be moving the remaining hosted content to YouTube' but in the latest thing today i'm not sure they say they've moved stuff, but just encouraged users to migrate their videos
[22:13] I think it's a safe bet most people that haven't migrated by now are awol and we still need to archive
[22:13] ah
[22:13] archive.org's been doing it.
[22:13] I'll check with them
[22:13] ah, private. so it does get migrated
[22:13] good news everybody- wotsit.org is only HALF broken
[22:13] SketchCow: that's good to hear
[22:13] http://gigaom.com/2012/07/03/google-video-igoogle-and-others-closing-for-good/
[22:14] if you stop the page load before it tries to pop up the download window, you can use the 2nd link (the one marked if you have problems with the download, click here)
[22:15] http://archive.org/details/hackercons-notacon-2007-brickipedia
[22:15] hm, ok, the other day that page was redirecting to a 500 error
[22:16] here's a sample link: http://wotsit.org/getfile.asp?file=amff&sc=394643401 - you should get amff.zip
[22:16] ok
[22:17] getfile.asp was hosed yesterday.
[22:17] I put wotsit up there because I remembered finding it super useful when I was in high school
[22:17] but you can only do that if you stop the page before it tries to bring up the regular download link- otherwise you are hosed
[22:17] ah
[22:18] I didn't put that much effort into it
[22:18] again, the automatic download seemed to work for me earlier
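Since the wotsit workaround described above boils down to requesting getfile.asp directly instead of letting the page throw up its download window, here is a minimal sketch of doing that outside the browser, using the sample link pasted above. It assumes the link works without cookies or a Referer header (which nobody in the channel has confirmed), and since the site publishes no hashes, the only sanity checks available are the Content-Length header and the ZIP signature.

# Sketch: fetch a wotsit.org document straight from getfile.asp, skipping
# the broken download popup. Assumes no cookies or Referer are required.
import urllib.request

url = "http://wotsit.org/getfile.asp?file=amff&sc=394643401"
with urllib.request.urlopen(url, timeout=60) as resp:
    data = resp.read()
    expected = resp.headers.get("Content-Length")

# No published hashes, so settle for basic sanity checks: the byte count
# matches Content-Length and the payload starts with the ZIP magic bytes.
if expected is not None and len(data) != int(expected):
    raise RuntimeError("short read: got %d of %s bytes" % (len(data), expected))
if not data.startswith(b"PK"):
    raise RuntimeError("response does not look like a zip file")

with open("amff.zip", "wb") as f:
    f.write(data)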
[22:23] so archive.org has a new uploader thing we're banging on
[22:23] and brewster wants me to get some people to give it a whirl
[22:23] It's an html5 front end to the s3 api and is a lot more usable than our current thing
[22:23] so try out http://archive.org/upload
[22:23] neato
[22:24] Do note, though, that Firefox will silently corrupt files >4GB
[22:24] bug in firefox? :(
[22:24] so either use smaller files or chrome
[22:24] balrog: yep
[22:24] been a bug since 2003
[22:24] has it been reported?
[22:24] still not fixed
[22:24] whattt
[22:24] what's the bug id?
[22:24] 4G is huuuuuge why would anyone want to upload things that big
[22:25] [12:22:27 PDT] rajamaphone: https://bugzilla.mozilla.org/show_bug.cgi?id=215450
[22:25] here is the bug report from 2003
[22:25] er, fucking skype
[22:25] anyway
[22:25] still status:new
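For anyone who would rather skip the browser entirely (and with it the >4GB Firefox corruption just mentioned), the HTML5 uploader is a front end to archive.org's S3-like API, which can be driven with a plain HTTP PUT. The sketch below is an illustration under assumptions, not the official client: the endpoint and header names follow the ias3 conventions as generally documented, the identifier and filename are hypothetical, and the keys stand in for your own IA S3 credentials; check the current API documentation before relying on any of it.

# Rough sketch of a direct upload to archive.org's S3-like endpoint, which is
# what the HTML5 uploader talks to under the hood. Endpoint and header names
# are assumptions based on the commonly documented ias3 API; the identifier,
# filename and keys are placeholders.
import urllib.request

ACCESS_KEY = "YOUR_IA_S3_ACCESS_KEY"
SECRET_KEY = "YOUR_IA_S3_SECRET_KEY"
identifier = "my-test-item"          # hypothetical item name
filename = "example.warc.gz"         # file to upload

with open(filename, "rb") as f:
    body = f.read()                  # fine for a sketch; a real client would stream

req = urllib.request.Request(
    "http://s3.us.archive.org/%s/%s" % (identifier, filename),
    data=body,
    method="PUT",
    headers={
        "authorization": "LOW %s:%s" % (ACCESS_KEY, SECRET_KEY),
        "x-amz-auto-make-bucket": "1",       # create the item if it does not exist yet
        "x-archive-meta-mediatype": "data",  # minimal metadata
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)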
[22:29] i don't know if the download in question requires cookies or not but copying and pasting it to "wget" or "curl -O" might work
[22:30] i do that in a screen session. verifying the integrity of http downloads without hashes is always tricky though
[22:30] oh upload
[22:31] chronomex: youtube for a long time has had their own custom java upload utility thing. i think more to make resumable uploads possible, but maybe also to deal with stuff like that firefox bug
[22:31] there are so many ancient mozilla bugs in basic functionality that they give no shits about fixing, it's not even funny anymore
[22:32] html5 uploads should allow resuming as well, I think, at least if the server supports range requests or the like
[22:42] they were working on this big electrolysis effort to split parts of the existing codebase into multiple processes, but that got put on hold and i think the latest thing is this effort to write a replacement for gecko, their rendering engine. which might end up touching other areas and fixing longstanding bugs
[22:43] I mean stuff like yanking out the ethernet cable makes your downloads all succeed at 50% with no resume ability https://bugzilla.mozilla.org/show_bug.cgi?id=237623
[22:43] oh wait, "according to the claims of Mozilla's engineers, the Servo is not aimed to replace Gecko" http://browserfame.com/746/mozilla-servo-browser-engine
[22:44] DFJustin: that is an odd-sounding bug
[22:44] well, it's a good thing that I only wanted half of all my downloads to begin with
[22:45] DFJustin: wow that's old :D
[22:45] I would like to confirm that I have the same problem.
[22:45] My setup is Windows98SE, Firefox 0.9.1... P200 56MB ram...
[22:45] I have the download manager enabled (to appear).
[22:47] https://bugzilla.mozilla.org/show_bug.cgi?id=237623#c104 makes a good point, about how fx does have the information it needs. then the comment after about how it could check for FTP downloads
[22:49] loving http://www.archiveteam.org/index.php?title=Just_Solve_the_Problem_2012 btw
[22:50] needs a good catchy name though. i'm not sure how to have a good name but still keep it really universal for any kind of information storage or fixed form
[22:50] that's basically my point, it's not hairy gecko stuff, it's "get a mozilla engineer on it for an hour or two" stuff but nope, let's write a new engine
[22:51] make some noise about it now- especially if you can connect it to something Mozilla cares about currently
[22:52] they've started caring about problems people thought they had written off- look at Memshrink and Project Snappy
[22:52] that one dev keeps popping up in the bug, at one point saying "like to get this done by FF4" heh
[22:52] * Coderjoe nudges towards -bs
[22:53] it's on the lame-network whiteboard thing now which is why I remember it, but there are others
[23:49] So I tried migrating videos over from google video
[23:50] got "upload failed - sorry about that" for each one
[23:51] awesome
[23:52] one of my google videos was downloaded and put into youtube by someone else, so I can't move it over
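Circling back to the range-request and resume discussion above (the HTML5 upload comment and Firefox bug 237623): on the client side, resuming a transfer is essentially a Range header plus a check of the status code. A minimal sketch, assuming the server supports byte ranges; the URL and filename are placeholders.

# Sketch of resuming a partial HTTP download with a Range header.
# Placeholder URL and filename; assumes the server honours byte ranges.
import os
import urllib.request

url = "http://example.org/big-file.tar.gz"
out = "big-file.tar.gz"

offset = os.path.getsize(out) if os.path.exists(out) else 0
req = urllib.request.Request(url)
if offset:
    req.add_header("Range", "bytes=%d-" % offset)

# A 416 error raised here would mean the local copy is already complete.
with urllib.request.urlopen(req) as resp:
    # 206 Partial Content: the server resumed at our offset, so append.
    # 200 OK: the Range header was ignored, so start over from byte zero.
    mode = "ab" if resp.status == 206 else "wb"
    with open(out, mode) as f:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            f.write(chunk)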