[00:31] oh yeah, if you're on iOS, update now
[00:39] heh, just called my parents to check if they had updated their iDevices
[00:42] BiggieJon: "update, why?"
[00:42] "there's a security problem"
[00:42] "but this is an Apple, I can't get viruses!"
[00:42] mom heard about it this morning, stole dad's phone and did the update
[00:43] (how this conversation goes with most people)
[00:43] ah
[00:43] more technically clueful people, then :)
[00:43] my parents met in programming class in 1972
[00:43] ... ha
[00:43] lucky
[00:43] passed notes to each other by inserting invalid punchcards in each other's program decks
[00:44] bahaha
[00:44] classy :)
[00:47] this isn't about getting viruses, lol
[00:47] I'm about to update, just have to extract the relevant files for analysis
[01:05] Found and uploaded the coolest zines that some guy did while he was in elementary school: https://archive.org/details/galactic_gazette_1
[01:06] In other news, if you're using Amazon Glacier for backups I hope you're never in a hurry to restore your files...
[01:07] I've been waiting for over 3 hours for 76MB of data to start restoring.
[01:09] isn't that kinda the point?
[01:09] waiting for the tape robot to find your tape in the 2 bazillion tape library
[01:11] it's true yeah, but my earlier mis-click has really reinforced it. ;-)
[01:13] wish Amazon would give you an estimate
[01:15] * nico is thinking about uploading his backup.tar.gpg to glacier
[01:15] yes, that would be nice... just to have some reassurance that I'll be getting my data *eventually*
[01:17] sorry, your tape is buried in a salt mine in Nevada, we will schedule an excavation crew when we receive 100 requests for that container
[01:19] BiggieJon: Aperture Science style?
[01:20] :)
[01:21] we use Glacier at work for logs past 6 months old, rarely ever need to restore anything past that
[01:24] at $dayjob we fail logging compliance hard
[01:37] how do you deal with large plaintext log files? I accidentally let one of them get to 400 MB, and had to toss it because I couldn't open it locally, and the process to prune it on the server kept timing out
[01:39] grep is able to select something in a 1GB+ file
[01:40] i still can't upload to IA
[01:40] :-(
[01:41] godane: try the legacy FTP interface
[01:48] there are text viewers like 'less' that will only load the part of the file currently being viewed
[01:49] looks like the file was uploaded anyway
[01:49] i found it here: https://archive.org/details/cdrom-riscos-riscuser
[01:50] hey, one of my uploads
[01:54] ok, it's working now: https://archive.org/details/cdrom-twilight-030
[02:00] nico: not to be all like "mine's bigger", but I zgrep'd a 20GB file just last night... I love modern unix tools.
[02:31] CNN Money Archive: Register to vote on your Xbox: https://archive.org/details/news.xboxvote.090408.cnnmoney_576x324_dl
[02:32] i didn't remember that
[02:33] Woo! The tape monkeys at Amazon finally came through for me. It took a little over four hours, but getting that album back is maybe the best 14 cents I've ever spent.
[04:25] 4 hours to load a tape from deep storage ain't bad
[04:25] robots
[04:25] it's robots all the way down
[04:25] Hi again.
[04:25] Any news?
[04:49] copying stuff off (c|dv)d-r media
[04:49] I am in Mesa, Arizona from Monday night to Tuesday morning
[11:06] [05:37:37] how do you deal with large plaintext log files? I accidentally let one of them get to 400 MB, and had to toss it because I couldn't open it locally, and the process to prune it on the server kept timing out
[11:06] less will handle a 400MB logfile just fine
[11:06] or 4000MB for that matter
[11:07] as will other line-based tools like grep, as mentioned
[11:07] or tail for that matter - if you want to get rid of everything before the last 10,000 lines for example, you'd just do tail -n 10000 large.log > snipped.log
[11:08] nano is the only terminal tool I know of that would have a problem with a 400MB file :)
[11:08] it'd work, but veeeeery slowly
[12:12] joepie91: try pico - it is worse :)
[12:15] I don't doubt that :)
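A minimal sketch of what the advice above amounts to, assuming a standard shell; the file names here are hypothetical. The point is that these tools stream the file rather than loading it whole, so a 400MB (or 4000MB) log is not a problem:

  # jump straight to the end of a huge log; less only reads the parts you look at
  less +G /var/log/app.log

  # pull out just the matching lines; grep streams, so file size barely matters
  grep 'ERROR' /var/log/app.log > errors.log

  # keep only the last 10,000 lines, as suggested above
  tail -n 10000 /var/log/app.log > snipped.log

  # search a rotated, gzip-compressed log without unpacking it first
  zgrep 'ERROR' /var/log/app.log.1.gz

  # empty a live log in place, without deleting the file a running process still has open
  : > /var/log/app.log

The last line avoids the delete-and-recreate dance that can leave a daemon writing to a file handle nobody can see anymore.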
[12:24] https://bridges.torproject.org/robots.txt
[12:24] w t f ?!?
[12:25] What's so "w t f ?!?" about it? That there's a bunch of comments?
[12:26] that somebody took the time to write this page
[12:37] won't load here, needs Tor?
[12:40] no, it doesn't need Tor
[12:41] nico: Yeah, there are many who do
[12:41] and "write"? most likely just copy-pasted a bunch of ASCII art
[14:10] take part in striking excalibur online competition: MAGEGO JUMBASTIC
[14:10] no
[15:11] so more XML data from CNET: http://cnettv.cnet.com/9773-1_53-0-2008101024.xml
[16:56] do we have a copy of Angelfire yet
[16:56] or Tripod for that matter
[16:58] not a complete one - there are bits and pieces from various jobs, and whatever IA themselves have, but nothing even close to what the Geocities one had
[16:59] probably the easiest way to do a grab would be to pick a site that's part of a webring, and just grab out from there
[17:05] dashcloud: heh - it's almost as if they -made- it for archiveteam :)
[19:52] http://www.mameworld.info/ubbthreads/showflat.php?Number=322477
[21:30] https://www.youtube.com/watch?v=WTKIgdfoHxM
[21:47] running 2 VMs is killing my laptop :(
[22:10] hey Famicoman
[22:43] batting 1.000 so far on CD- and DVD-Rs
[22:45] I can't believe I watched anime at 352x240
[22:54] oh phooey, a read error
[23:11] Walmart annual reports collection is coming: https://archive.org/details/1972-annual-report-for-walmart-stores-inc
[23:14] Yo dashcloud - 400MB logs are nothing!
[23:15] Also, 674GB of items on Viddler. MIGHT fill that drive.
[23:16] SketchCow: can I PM you a question? (it's private, otherwise I'd ask you in the chat)
[23:53] i'm going to be uploading a big zip file tonight
[23:54] it's the full az204696.vo.msecnd.net/download/ 1 to 70000 grab i did
[23:54] it has images, video, MP3s, JPEGs, and PDFs related to Walmart
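A sketch of one way an upload like that might be done, assuming the internetarchive project's `ia` command-line client; the identifier, file name, and metadata values below are made-up examples, not the actual item:

  # one-time setup: stores your archive.org API keys locally
  ia configure

  # upload the zip into a (hypothetical) new item, setting basic metadata
  ia upload msecnd-walmart-grab walmart-grab.zip \
      --metadata="mediatype:data" \
      --metadata="title:az204696.vo.msecnd.net download grab, items 1 to 70000"

  # sanity-check the item afterwards
  ia metadata msecnd-walmart-grab

The client wraps the same S3-like API the site exposes, so the metadata flags end up as x-archive-meta headers on the upload request.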