[02:26] Only 3 more portal levels left!
[02:26] Not bad for an hour or so
[02:26] * underscor eats SketchCow for getting him hooked
[02:29] portal 1 or 2?
[02:34] I'm assuming portal first slice
[04:32] underscor: good game to get addicted to
[06:00] I get this after exporting some MediaWiki pages: http://p.defau.lt/?T1uU_47VU1PSgvqoeIHbog
[06:01] To add the missing XML tags, sed 's/ <\/revision>/ <\/revision>\n <\/page>\n<\/mediawiki>/' would match every </revision>, how can I replace it only at the end of the file?
[06:10] http://buddhisttorrents.blogspot.com/2011_07_01_archive.html
[06:33] old, but... http://gizmodo.com/5820812/china-shut-down-13-million-websites-last-year
[07:47] http://googleblog.blogspot.com/2011/07/gco-official-url-shortcut-for-google.html
[07:47] bleh, more shorteners
[07:50] It's just an internal one, like twitter's t.co
[07:56] true
[10:19] http://www.kryoflux.com/ of possible interest.
[11:23] Maybe this will interest someone here.... http://i.imgur.com/A1UII.jpg
[11:33] did you mean to screenshot your steam and irc too? :P
[11:37] Cameron: hehe
[11:44] Cameron_D: but yeah, torrent it up or something maybe
[11:44] Ymgve, sorting that out currently
[16:52] aaaah, i think i need to use recursion
[16:59] recursion is always a good thing
[17:06] recursion is always a good thing is always a good thing
[17:21] i am also doing things like sed "s/^/${pwd//\//\/\/}\//"
[17:29] also known as awk '{print ENVIRON["PWD"],$0}'
[17:29] i bet i killed 3k brain cells in #bash today
[17:32] Spirit_: We need to go deeper
[17:33] * Spirit_ goes derper
[17:33] herpa
[17:37] i better make my daily backup NOW before unleashing derping-recursion on my ramdisk
[17:39] Wonder how much new content IA gets from liveweb.archive.org/URL
[17:43] holy shit, this was not supposed to work
[17:43] but it seems to
[17:45] * Spirit_ is teeh awesome
[17:46] i will be downloading a shitload of files from atomicgamer.com
[17:47] with 1 file per 5 minutes...
[17:50] i should have made it remove temporary files
[19:06] WHY HELLO
[19:06] Wonder why I got bounced from the planet.
[19:09] you got mad cow disease
[19:13] anyone got an idea how to make a "while" in bash that waits for some grep to have no match?
[19:13] like:
[19:13] while $(curl -s "http://www.atomicgamer.com/${queue_url}" | grep -q "Time Left"); do
[19:14] so while i get "Time Left" it will stay in the while loop but once i do not get a result anymore it should continue
[19:14] it currently stays in the while forever
[19:14] and i think i am stupid
[19:24] this is becoming the worst script i ever made
[19:45] Make it delete a random file each time
[19:50] woohoo, except for that sleeping issue it works
[19:50] i think it does that by default :P
[19:51] News from the Google Groups Files project:
[19:51] completion rate: directories: 231/hr, groups: 1727/hr
[19:51] directories: TOTAL: 443013, NEW: 159064, DONE: 270836
[19:51] groups: TOTAL: 1505795, NEW: 558656, DONE + ERROR + ADULT: 947139
[19:51] new discovery rate: dirs: 0/hr, grps: 2/hr
[19:52] Victory will be ours
[19:52] awesome
[19:52] did you hear about this http://news.ycombinator.com/item?id=2781615 ?
[19:53] just read it, sounds weird
[19:54] that's the way current science works, behind a paywall
[19:56] oh i could do this with a really ugly "while read thing;do ;done < $(curl and grep here)" i think
[19:56] no wait, that would be the same thing, just even worse to read
[19:58] ambiguous redirect :(
[19:59] SketchCow: what kind of index would make most sense for the Google Groups files? HTML? XML? Tar with directories and symlinks inside?
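
The 06:01 question about the truncated MediaWiki export has a simple answer: either append the missing closing tags once at the end of the file, or restrict the substitution to the last line with sed's "$" address. A minimal sketch, assuming GNU sed and a dump file named pages.xml (the filename is a placeholder, not from the log) whose last line is the dangling </revision>:

    # append the missing closing tags once, after the last line,
    # instead of substituting after every </revision>
    printf '  </page>\n</mediawiki>\n' >> pages.xml

    # or, with GNU sed, limit the substitution to the last line only
    # (the "$" address means "last line of input")
    sed -i '$ s|</revision>|</revision>\n  </page>\n</mediawiki>|' pages.xml

On the 19:13 loop: the $(...) wrapper around the pipeline is at best fragile (it tries to execute whatever text the pipeline prints); the usual form lets while test the pipeline's exit status directly, with a sleep so the queue page isn't hit constantly. A minimal sketch, assuming queue_url is set earlier in the script; the 300-second interval is an assumption taken from the "1 file per 5 minutes" remark:

    # keep looping for as long as the queue page still shows "Time Left";
    # grep -q exits 0 on a match, which keeps the while running
    while curl -s "http://www.atomicgamer.com/${queue_url}" | grep -q "Time Left"; do
        sleep 300   # poll every 5 minutes
    done

The 19:58 "ambiguous redirect" is what "done < $(...)" produces; the bash form that feeds a command's output into a read loop is process substitution, "while read line; do ...; done < <(curl ... | grep ...)".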
[20:00] i will abuse a for loop
[20:03] I've got 160GB of Google Groups downloaded at this time.
[20:07] Hmmm.
[20:07] .txt
[20:07] and a .html
[20:12] * ndurner nods
[20:37] SketchCow: http://www.reddit.com/r/gaming/comments/itzwt/post_from_1993_has_anyone_tried_playing_doom_yet/c26mwrt
[20:37] possibly a BBS for you
[22:51] have you ever used spinrite, and if you did, what did you think of it?
[23:27] SketchCow: anything else happening on the hackercon video collection? if you can get the 2600 people to agree, I can send you the whole 2004 VCD collection
[23:29] also, I'm planning a new archiveteam project: I've got, or have come across, many Internet books that had lists of what was popular or otherwise referenced more than a handful of sites, and it would be nice to have copies of those sites so future readers can see what the original readers saw
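
On the index question (19:59/20:07), a minimal sketch of the ".txt and a .html" idea: a sorted plain-text listing plus a bare-bones HTML page wrapping the same list. The google-groups/ directory and the output filenames are assumptions for illustration, not anything stated in the log:

    # sorted plain-text listing of every downloaded file
    find google-groups -type f | sort > index.txt

    # wrap the same listing in a minimal HTML page (no escaping, just a quick index)
    {
        echo '<html><body><ul>'
        sed 's|.*|<li><a href="&">&</a></li>|' index.txt
        echo '</ul></body></html>'
    } > index.html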