[00:52] Google Latitude is shutting down, and will be deleting friends lists, badges, and perhaps other things
[09:15] WHAT FORSOOTH, PRITHEE TELL ME THE SECRET WORD
[09:22] yahoosucks
[09:22] IS THY SECRET WORD
[11:05] Here's an update: it works. A team of researchers at the University of Southampton have demonstrated a way to record and retrieve as much as 360 terabytes of digital data onto a single disk of quartz glass in a way that can withstand temperatures of up to 1000 C and should keep the data stable and readable for up to a million years.
[11:05] :O :D
[11:08] can we start a kickstarter for them?
[11:33] And the single glass disc is approximately three miles in diameter. :)
[11:42] and/or costs six billion quid
[11:47] or both. :)
[11:48] "The wildly impractical storage innovation was immediately purchased by Iomega Corporation." :)
[11:49] when the technology takes off they'll cost-reduce it to 'up to 100 C and readable for up to a million minutes', and in ten years we'll be worrying about backing it all up again
[11:49] or, in other words, what antomatic said ;)
[11:54] g4tv.com-video3281: Alan Paller Interview: https://archive.org/details/g4tv.com-video3281
[11:56] g4tv.com-video3166: DMCA Debate: https://archive.org/details/g4tv.com-video3166
[11:57] g4tv.com-video3145: Andy Jones Interview: https://archive.org/details/g4tv.com-video3145
[18:10] newsflash: Linden Labs bought Desura
[18:10] and Linden doesn't exactly have a stellar reputation of being careful stewards for user data
[18:10] so perhaps it's worth seeing if there's anything scrape-able on Desura in terms of user content
[19:19] Maybe old BUT. "After slightly more than 30 years, PCWorld — one of the most successful computer magazines of all time — is discontinuing print publication."
[20:47] http://i.imgur.com/h8qs65w.gif
[20:47] ARCHIVE TEAM SUMMONS
[20:55] me gusta
[21:26] i'm starting to hate the way the wayback machine finds pages
[21:27] it's archiving pages that i was trying to search using * with
[21:27] like: http://podcast.cbc.ca/mp3/podcasts/bonusspark*
[21:28] and that was 2 days ago
[21:28] Howdy peoples. I found a site that needs saving and SketchCow suggested I show up here and mention it so we can haz collaboration.
[21:29] BuzzData.com is closing, and all data is being deleted at the end of July.
[21:29] The 31st, to be specific.
[21:29] If you have a username and password, you can see 2300+ datasets, plus comments on them and user profiles and stuff like that. All about to go *poof*.
[21:30] Not very personal stuff, mostly dry government-created spreadsheets.
[21:32] But still, someone ought to try, right? Screenshot #1: https://www.dropbox.com/s/b891l8y177g6vt9/buzzdata_screenshot_01.png
[21:32] Screenshot #2: https://www.dropbox.com/s/7868f285lnbiref/buzzdata_screenshot_02.png
[21:35] yes, get it
[21:37] Okay. I can start a panic grab on a cloud server later tonight. I have wget 1.14, which I think is the latest, and I will follow the directions on the AT wiki for doing the WARC dump.
[21:38] "The team behind BuzzData has a new product, a new name and a new mission – we're now LookBookHQ."
[21:38] Based on the wiki, it should be this, I think?
[21:38] nothing says "wow i want to use that new product" more than that
[21:38] wget -e robots=off --mirror --page-requisites --save-headers --wait 3 --waitretry 5 --timeout 60 --tries 5 -H -Dbuzzdata.com --warc-header "operator: Archive Team" --warc-cdx --warc-file="$WARC_NAME" -U "$USER_AGENT" "$SAVE_HOST"
[21:38] Asparagir: yes
[21:39] Okay. New to the world of WARC, want to make sure I get this right.
[21:39] I will also need to figure out the code for cookies to do the initial login
[21:39] if there's stuff that only appears while logged in, you might want to look at --load-cookies as well
[21:39] Since the data requires user login first.
[21:39] Right, basically the entire site is visible *only* when logged in.
[21:40] Even the "public" stuff.
[21:40] (Gee, I wonder why it never got popular.)
[21:40] in that case, i'd suggest you concentrate on getting the data, WARCs are mostly useful when ingesting into wayback, which that won't be
[21:41] ^^ NOT OFFICIAL ARCHIVE TEAM OPINION, YOU CAN IGNORE IT
[21:42] Can we still create an account to access the data?
[21:42] I don't know.
[21:42] Accounts are (or were) free, though.
[21:42] You're all welcome to use mine. :-)
[21:46] Okay, have to step out for 30 minutes to go pick up my daughter from summer camp. Back later...
[21:46] hb
[21:47] I just created a new account
[21:48] I would recommend a few other people create accounts as well so we have multiple ones to work with in case the ban hammer comes down
[21:48] Their base URL scheme is not retarded, so that is good
[22:47] Back now.
[22:52] So, how does one organize a panic grab of something like this? Do we each just run wget on our own boxes, and hope we each get different things than the other AT peeps? Or do we seed a tracker with usernames first, or what?
[22:52] Verily, I am new at this.
[22:56] I just went through the motions of trying to get a dataset. It is a mess of javascript to make things go
[22:59] If someone pulls down the 238 pages of public dataset results I can probably whip up some javascript bullshit to get the datasets
[23:00] they do some ajax stuff to get to the dataset url
[23:16] Ugh, yeah, it looks like they're using jQuery templates to render a lot of the links to the actual data; it's not all written right into the HTML.
[23:25] Format of the data preview is like this: http://buzzdata.com/data_files/blIVT0LGqr4y37yAyCM7w3
[23:33] Aaaaand I see unique authenticity tokens posted for each dataset, to make it hard to screen-scrape.
[23:34] Curses, foiled again.
[23:38] Hold up, they have a free API.
[23:38] http://buzzdata.com/faq/api/api-methods
[23:40] that makes it easy
[23:40] hooray!
[23:41] you want to do it Asparagir ?
[23:42] I don't think my kung-fu is quite strong enough to code the whole thing.
[23:44] It looks like at a minimum you would need to know a list of usernames and/or "hive names" beforehand, in order to use the API to grab each of their datasets.
[23:44] See also https://github.com/buzzdata/api-docs
[23:46] Yeah, the minimum requirement for all the API endpoints is an already-known username, e.g. GET `https://:HIVE_NAME.buzzdata.com/api/:USERNAME` where HIVE_NAME is optional (if you leave it out, it just gets the public stuff)
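For reference, here is a rough sketch of how the pieces discussed above might fit together: log in once with wget to get a session cookie, walk a hand-built username list through the API, then optionally run the WARC mirror from the wiki with the same cookies. The login URL and the "email"/"password" form fields are guesses (the log never shows the actual form), the API path is only the endpoint pattern quoted at 23:46, and usernames.txt would have to be assembled separately, e.g. from the 238 pages of public dataset results.

```sh
#!/bin/bash
# Sketch only: session URL, form field names, and API path are assumptions.

USER_AGENT="ArchiveTeam BuzzData grab"
COOKIE_JAR="buzzdata-cookies.txt"
mkdir -p api

# Log in once and keep the session cookie for the later requests.
wget --save-cookies "$COOKIE_JAR" --keep-session-cookies \
     --post-data "email=${BUZZ_EMAIL}&password=${BUZZ_PASSWORD}" \
     -U "$USER_AGENT" -O /dev/null \
     "https://buzzdata.com/session"

# Walk a hand-built list of known usernames through the free API,
# saving one JSON response per user (their datasets hang off these).
while read -r username; do
    wget --load-cookies "$COOKIE_JAR" -U "$USER_AGENT" --wait 3 \
         -O "api/${username}.json" \
         "https://buzzdata.com/api/${username}"
done < usernames.txt

# Optional: the WARC mirror from the wiki, reusing the same cookies
# so the logged-in-only pages get captured too.
wget -e robots=off --mirror --page-requisites --save-headers \
     --wait 3 --waitretry 5 --timeout 60 --tries 5 \
     -H -Dbuzzdata.com --load-cookies "$COOKIE_JAR" \
     --warc-header "operator: Archive Team" --warc-cdx \
     --warc-file "buzzdata" -U "$USER_AGENT" \
     "https://buzzdata.com/"
```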