[00:03] yipdw: big surprise https://wwws.whitehouse.gov/petitions/!/response/why-we-cant-comment
[00:09] underscor: yeah... and it took us over a year to learn of it. gives me the warm fuzzies. how about you?
[00:13] also, this AC claims that Ken Silva (cited in that article) DID know: http://it.slashdot.org/comments.pl?sid=2651017&cid=38904163
[17:20] SketchCow: I'm going to call Proust "done"
[17:20] since it appears to now be stable
[17:20] (for the near future)
[18:18] so, anything for the ffnet grab?
[20:32] bsmith094: nothing new from me; I've been busy with other tasks
[20:39] bsmith094/yipdw: Is ffnet going away soon?
[20:40] Or is this a no-need-to-hurry long-term project that can wait?
[20:43] alard: it is a no-need-to-hurry project
[20:43] (which is why I am not moving urgently on it :P)
[20:45] Good.
[20:45] I'm currently uploading some last bits of Proust
[20:46] and, after I archive the AMI (just in case SketchCow wants it reinstated or something), will have a machine ready to do more fetches of stuff
[20:46] alard: I gather Tabblo is the next high-priority one
[20:46] ?
[20:47] It could be, but I haven't been able to find a confirmation for that.
[20:48] ok
[20:48] The only thing I've found is this blog by a former employee (since November 2010): http://nedbatchelder.com/blog/201201/goodbye_tabblo.html
[20:48] uh, I guess I can switch this EC2 instance over to Mobileme
[20:48] haha
[20:49] ANOTHER storytelling site bites the dust
[20:50] they haven't announced it's going down, but it's on life support at this point
[20:50] Yes. And another 'social network' too, since Tabblo has all these things everyone else has too: friends, comments, etc.
[20:50] the tabblo archiver tool doesn't look too reliable
[20:50] DFJustin: The date of 15 March is floating around, not sure where that comes from.
[20:52] What's wrong with the lifeboat? (Haven't tried it.)
[20:52] "With the latest employee departures, no one at HP even knows how to shut it down, other than to simply pull the plug."
[20:52] I'd archive what I can.
[20:52] I was referring to the "it doesn't always get all the images" comment
[20:52] oh wait, HP owns it
[20:52] FUCK
[20:52] yeah that shit is going to crash pretty soon
[20:52] lol
[20:53] yipdw: He supposedly fixed that bug in 2.2
[20:53] "Sometimes, the downloaded tabblo zip file seems OK, but is actually missing some images. Tabblo Lifeboat now checks for this when the zip file is downloaded, and will retry if parts are missing. It will also check all your previously downloaded tabblos in case you had downloaded them with an earlier version."
[20:54] balrog_ph: yeah, I'm looking at the lifeboat mercurial repository now
[20:54] just to understand how the lifeboat works
[20:55] Ahh, ok
[20:55] the code is... weird
[20:55] and I don't mean structurally; it's fine in that regard
[20:55] it's just full of comments like
[20:55] # Tabblo returns short pages sometimes!?
[20:55] # Why does tabblo.com not just return 302 for redirects??
[20:55] which, from a developer on the webapp, is NOT what I expect to see
[20:55] it's like he's doing archaeology on some digital monolith
[20:56] Yes, the zip file download is strange. I've tried that. It's *very* slow, then it just stops halfway. The next time you try it, you get more data, then it stops again. Repeat until you have a valid zip file.
[20:58] it looks like Tabblo suffers from a similar problem as Splinder
[20:58] (and every other huge webapp, really)
[20:58] Which is what?
[20:58] application server timeout
[20:58] or more precisely app server overload
[20:59] there's code in the lifeboat that retries a download of a page up to ten times
[20:59] That seems a likely explanation. And they have caching, so the next time you try it things go faster.
[20:59] And eventually things are cached enough to give you the whole file within the time limit.
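The retry-until-complete behavior described above (truncated zip responses from an overloaded app server, retried up to ten times, validated on arrival) can be sketched roughly like this. This is a minimal illustration, not the lifeboat's actual code; the `fetch` callable stands in for whatever HTTP download is used.

```python
import io
import zipfile

def fetch_until_valid(fetch, max_tries=10):
    """Retry a flaky download until the result is a valid, complete zip.

    `fetch` is any callable returning the raw bytes of one download
    attempt. Tabblo's overloaded servers often return truncated
    responses, so we keep retrying (the lifeboat reportedly retries
    up to ten times); server-side caching means later attempts tend
    to get further before timing out.
    """
    for attempt in range(max_tries):
        data = fetch()
        try:
            with zipfile.ZipFile(io.BytesIO(data)) as zf:
                # testzip() returns the name of the first bad member,
                # or None if every entry's CRC and length check out.
                if zf.testzip() is None:
                    return data
        except zipfile.BadZipFile:
            # Truncated response; try again and hope the server's
            # cache is warmer this time.
            pass
    raise IOError("no complete zip after %d attempts" % max_tries)
```

A real fetcher would also sleep between attempts, to avoid stressing an already struggling app server even further.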
[20:59] yeah, assuming your request didn't get knocked out of cache
[21:00] er, response to your request
[21:00] Should we organize a rescue mission?
[21:00] hmm
[21:00] I wonder if organizing a rescue mission would make things worse :P
[21:00] Saving the tabblos seems easy and simple enough.
[21:00] A rescue mission with limited admission?
[21:00] in the sense that it'd be stressing the site and causing more download failures
[21:00] probably, yeah
[21:01] And perhaps make such a big problem that they'll just shut it down.
[21:01] right
[21:02] I guess we'd just use the lifeboat code
[21:02] It isn't warc.
[21:02] true
[21:03] but it does handle a lot of Tabblo corner cases already
[21:03] how hard would it be to add WARC generation?
[21:03] Well, basically the only thing of real interest is the download_tabblo method.
[21:04] Discovering tabblo id's is less important; we just start at 1 and continue to 180000+
[21:04] The download_tabblo method downloads the zip file, which we can do ourselves.
[21:05] I'm wondering how to handle things like truncated pages and error responses
[21:05] post-processing the WARC and wget log files?
[21:05] or can we use that wget-lua branch
[21:06] Maybe it should be a two-step process: 1. we run a wget --page-requisites on the tabblo page, which will give us a complete web page to put in a WARC.
[21:06] 2. we also download the zip file that contains the original images, but don't add that zip file to the warc.
[21:07] Then we'd have a more or less browsable (as in: WARC) copy of the site, and we'd have a copy of the original photos. The rest can be derived from that by the lifeboat, which can be done later, if necessary.
[21:08] hmm, let me see if I follow that
[21:08] (The lifeboat just downloads that one zip file per tabblo, as far as I can see.)
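The two-step process proposed at 21:06 boils down to two wget invocations per tabblo id. A minimal sketch of the command construction follows; the URL patterns are illustrative guesses, not confirmed Tabblo endpoints (the real ones would come from the lifeboat source).

```python
def tabblo_commands(tabblo_id, dest_dir="."):
    """Build the two fetch steps for one tabblo.

    Step 1: wget with --page-requisites plus WARC output, so the page
    and its images/CSS/JS end up browsable (e.g. via the Wayback
    Machine). Step 2: the original-images zip, fetched separately and
    deliberately NOT added to the WARC.

    Both URL patterns below are hypothetical placeholders.
    """
    page_url = "http://www.tabblo.com/studio/stories/view/%d/" % tabblo_id
    zip_url = "http://www.tabblo.com/studio/download_tabblo/%d/" % tabblo_id
    warc_cmd = [
        "wget",
        "--page-requisites",  # pull in the page's requisite resources
        "--warc-file", "%s/tabblo-%d" % (dest_dir, tabblo_id),
        "--tries", "10",      # same overload workaround as the lifeboat
        page_url,
    ]
    zip_cmd = [
        "wget",
        "--tries", "10",
        "-O", "%s/tabblo-%d.zip" % (dest_dir, tabblo_id),
        zip_url,
    ]
    return warc_cmd, zip_cmd
```

`--warc-file` is what the wget-warc branch mentioned above provides; the zip would still need the validate-and-retry treatment discussed earlier, since a single `wget` run can come back truncated.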
[21:08] we retrieve the page structure using wget-warc, and use the ZIP from the lifeboat to augment whatever's missing
[21:08] (or alternatively use the ZIP as the source of truth for photos)
[21:09] or do you propose that the ZIP and WARC remain separate?
[21:11] I'd think they serve different purposes: 1. the warc would give people the pages they link to now (the tabblos can be viewed via the wayback machine, for example). 2. The original content is still in the zip file. It's not directly browsable, but the data is there and can be processed by something like the lifeboat.
[21:12] (The lifeboat doesn't include comments, by the way, since those aren't in the zip.)
[22:13] hi guys
[22:13] i have word from the inside that rutnet.org.uk is going to close down in a few weeks since it lost its funding
[22:14] nothing's certain right now, but it has a lot of sub-sites for towns and villages in rutland
[22:17] and i've been told its funding will be cut and it's going to close at the end of the financial year 2012, which means early april
[22:18] if you have internal contacts why don't you ask for a backup
[22:18] sorry, .co.uk*
[22:19] Nemo_bis: the internal contact is a client who has a sub-site on rutnet and needs to get it off rutnet by april
[22:19] not internal enough then, ok
[22:19] Nemo_bis: yeah, not quite
[22:21] if we get the contract, then we will end up with some contact with the owners of the website now (in order to set up redirects from their old rutnet site to the new one)
[22:22] and we *might* (very slim might) get friendly enough to negotiate a database dump
[22:23] but that's a might on top of a might
[22:26] isn't the end of FY12 actually april 2013?
[22:27] Coderjoe: you might be right, actually, i do know it's whatever end of the FY that happens this year
[22:29] it depends on the company, doesn't it
[22:29] Nemo_bis: it's going down in april 2012
[22:29] unless something happens
[22:29] the FY I mean
[22:31] Nemo_bis: i think the FY is the same anywhere here
[22:31] ah
[22:31] in any case, it's funded by the government right now
[22:32] and you know, cutting £50 a month for a dedicated server would zero the national debt overnight
[22:32] meanwhile, the fucknuts who get to make decisions like that actually keep their jobs