[00:48] proust position post: pull progressing; projection pending
[01:18] and I just learned that not all x86_64 CPUs from Intel do 40-bit physical addressing
[01:18] that's so weird
[01:23] Just a quick note before I forget; seems Tim Follin has floppies with his old source in "Einstein" format (Tatung Einstein?). Do we have the capability to read them?
[01:27] Speaking of those (this is the first I've heard of them), this page is pretty amazing http://www.tatungeinstein.co.uk/front/bandhcomputers.htm
[01:27] In various meanings.
[01:44] SketchCow: do you think it makes sense to archive private stories on Proust?
[01:45] we don't get much out of it
[01:45] and from what I'm seeing, they are (1) the majority of stories; (2) easily identified
[01:47] or, to put it another way, 100 * (12 / 174) = 6.89% of downloaded stories are public
[03:09] But that is a snazzy hat, SketchCow. How could people _not_ want to be interviewed by that?
[03:19] for serious
[03:38] yipdw: Which don't? It would make sense if the early P4s or some budget chips don't, for example
[03:43] nitro2k01: the Xeon E5430 does 38-bit physical addressing, or at least the one reported on my EC2 instance does
[03:43] a friend's Core i5-2500K reports 36-bit
[03:43] my E5520 reports 40-bit
[03:43] I think Intel just cripples their chips for some market segmentation dealie
[03:46] You know, I never thought about all that. How does that affect performance, exactly?
[03:47] it doesn't really
[03:47] it mostly affects how much RAM you can access
[03:48] because all those chips do 48-bit virtual addressing, and in 64-bit mode the software is juggling 64-bit pointers anyway
[03:54] So if you put more than 8GB memory in a machine with 36-bit physical addressing, it will...?
[03:55] the memory above 8 GiB won't be addressable without bank-switching tomfoolery
[03:56] So it'll degrade performance a bit.
[03:57] actually, I don't know if x86_64 can even address > 8 GiB in that situation
[03:58] I think it should be able to. PAE has been available on commodity parts for years.
[03:58] IIRC, PAE is only applicable to 32-bit processors
[03:59] or more specifically 32-bit modes
[04:25] Okay, my bad. I forgot that memory is byte-addressable. 36-bit memory address width is 64GB
[04:57] is torrent.textfiles.com still available?
[05:00] by your powers combined, i am...
[05:00] CAPTAIN ARCHIVE!
[05:00] Wilford Brimley!
[05:04] i had to google for that ref, good one
[05:07] Coderjoe, you the Coderjoe from tgg on freenode?
[05:27] ...
[05:45] alard: Are we gonna try and get the new anyhub prefixes?
[06:08] yipdw: is archiving a private story hard?
[06:08] If you can get to it from the net, it's not private.
[06:19] tumblr is the geocities of 2010: http://gutsygumshoe.tumblr.com/
[07:05] SketchCow: I haven't looked into whether or not having an account gives you further access to said private stories
[07:05] I'll check that
[07:06] hmm, nope, no further access
[07:09] private stories? Sounds provocative
[07:09] there is GET http://www.proust.com/ac/story/export/generate
[07:09] that relies on session data
[07:10] and for Proust, said session data is server-side
[07:15] well, maybe
[07:15] so what IS the status of the klol script?
[07:15] underscor: what email address did you use?
[07:15] underscor: to register with Proust
[07:15] I wonder if I can trick it into downloading other users' data
[07:15] "it" being the PDF exporter
[07:16] wouldn't that mean they had really horrible security?
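(A quick way to check the address widths compared above: on Linux, the kernel exposes CPUID's reported address sizes in /proc/cpuinfo. A minimal Ruby sketch, assuming a Linux x86 box; for the 36-bit case it prints 64 GiB, matching the [04:25] correction.)

```ruby
# Print this machine's physical/virtual address widths, as discussed above.
# Assumes Linux: /proc/cpuinfo carries an "address sizes" line per CPU.
line = File.readlines('/proc/cpuinfo').find { |l| l.include?('address sizes') }
abort 'no address sizes line (not x86 Linux?)' unless line

# e.g. "address sizes   : 40 bits physical, 48 bits virtual"
phys, virt = line.scan(/(\d+) bits/).flatten.map { |n| n.to_i }
puts "physical: #{phys}-bit (up to #{2**(phys - 30)} GiB addressable)"
puts "virtual:  #{virt}-bit"
```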
[07:16] it's not uncommon
[07:16] hi SketchCow
[07:16] might want to be careful with SoftDisk for Apple II, those are still legitimately distributed.
[07:16] also, it's the only way I see to actually get the stories that are shared only with family and friends
[07:16] I mean, I *could* also just friend everyone on Proust and hope they reciprocate
[07:16] but for now I'm just getting ones marked as public
[07:18] my upload from yesterday is done; it's in bsmith on batcave as a 7z called ffnet_dump_and_script.7z, just fyi if someone wants to continue where i left off. 112025 stories grabbed out of 3.6 million, in the folder books
[07:18] bye now, good luck with proust
[07:18] just as an FYI, none of us have read access to that
[07:19] bsmith093: Is the script up on github?
[07:19] ah well, ok then, ummm it should be, hold on
[07:20] http://code.google.com/p/fanficdownloader-fork/downloads/detail?name=fanficdownloader-fork0.0.1.7z
[07:20] no, not github, but here's the link to a repo i set up
[07:20] 7.2 megabytes?
[07:20] that has everything except the stories
[07:20] damn you're quick
[07:21] run automate.sh link to grab all the stories in sequence
[07:22] automate runs download.py using every line of link in order; it will take several months to complete and there will be new stories by then anyway, but this is a complete list as of several weeks ago
[07:22] i recommend using a vps or something you don't have to leave on yourself
[07:23] there's no way it has to take several months
[07:23] then you fix the code then, i just ran it for a week straight, and got only 112k stories
[07:23] I did fix it :P
[07:24] you and your ruby voodoo, this is why i like bash, it Just Works (TM)
[07:24] bash actually has some serious portability problems
[07:24] we've hit them quite often here
[07:24] anyway storis is the raw id list and link is the id list wrapped up into url form
[07:25] for example, du-helper.sh in splinder-grab exists solely to paper over differences between GNU and BSD du
[07:25] and it's not really perfect
[07:25] to be fair, that's not bash per se, but a dependency of a bash script
[07:25] well ruby has some serious noob coder issues, and likes to spit back cryptic error messages to me
[07:25] but even within bash-the-language there are real problems between versions
[07:26] from story_grab.rb:1
[07:26] ruby story_grab.rb 8
[07:26] story_grab.rb:1:in `require': no such file to load -- mechanize (LoadError)
[07:26] for example; i thought i fixed this last night?!?!
[07:26] that means that a file called "mechanize" can't be loaded
[07:27] make sure you're using the right Ruby installation
[07:27] rvm use 1.9.3
[07:27] using 1.9.3p0
[07:28] now what?
[07:28] ensure the mechanize gem is present
[07:29] gem install mechanize
[07:29] gem list -i mechanize
[07:29] true
[07:29] then it's installed
[07:29] run it again
[07:29] the girl_friday and connection_pool gems are also used
[07:29] stack trace
[07:30] ok
[07:30] what is it that you want to save from fanfiction.net?
[07:30] http://archiveteam.org/index.php?title=FanFiction.Net doesn't state what
[07:31] the stories, minimum; the reviews and author profiles would be really nice
[07:31] ok
[07:32] ruby story_grab.rb 8 -- maybe this is a stupid question, but i am running this right, right?
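(Given the gem confusion above, a sanity check like the following might save a round trip; the gem list is the one named in the log: mechanize, girl_friday, connection_pool.)

```ruby
# Sketch: verify the gems the grab scripts rely on are loadable from this
# Ruby installation before running anything else.
%w(mechanize girl_friday connection_pool).each do |lib|
  begin
    require lib
    puts "#{lib}: ok"
  rescue LoadError
    puts "#{lib}: missing -- try `gem install #{lib}`"
  end
end
```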
[07:32] so stories, reviews, author profiles
[07:32] yes, that's correct
[07:33] although that script will not handle stories without chapters correctly; it needs to be modified for that
[07:33] yes, that would be great
[07:33] every story has at least one chapter
[07:33] that script will not handle stories without >= 2 chapters
[07:34] ohhhh, that's what u meant?!
[07:34] ok that makes more sense, check the link i gave u, they solved that problem, the google group for fanficdownloader
[07:34] I know what the problem is
[07:34] really, what?
[07:35] see lines 24-28
[07:35] there's an assumption that the chapter box is present
[07:35] as I mentioned, that script is just a test
[07:35] if anyone, I feel like this group would appreciate this link http://www.therestartpage.com/#
[07:35] to demonstrate that it is possible to download a multi-chapter story in less than 2.5 seconds per chapter
[07:35] for actual use it needs to be expanded
[07:36] check downloader.py and all the stuff it refs
[07:36] they solved this somehow
[07:36] I know how to solve it
[07:36] ....and?
[07:36] (1) download the first page; (2) if a chapter box is present, add chapters (2..n) to the queue
[07:37] and I'm not working for you, so I haven't solved it
[07:37] ummm, ok so grep the page and see if the chapter box is there?
[07:37] yes, and if it's there then initiate further downloads
[07:37] if it isn't there, you're done
[07:39] I can expand the story finder and downloader, but I don't know when
[07:39] sorry for being so rude, i thought this was a bigger issue than it turned out to be
[07:39] it isn't
[07:39] downloading fanfiction.net is really trivial
[07:39] well, at least the reviews, stories, and user profiles
[07:39] however I am working on other things
[07:43] hmm
[07:43] that said, if I say it's really trivial, I guess I'd better go do it, right
[07:44] so, i'm looking through the mechanize docs, and this looks like some ifs and an agent.search thing
[07:45] http://mechanize.rubyforge.org/GUIDE_rdoc.html way at the bottom
[07:51] yeah, that's pretty much it
[07:54] if agents.search("chapter" ... i am horrible with syntax
[08:38] bsmith093: https://gist.github.com/1577729 is a set of scripts that will grab stories, reviews, and profile for that story
[08:38] https://s3.amazonaws.com/nw-depot/example_run.tar.gz is an example of two runs of get_one_story.rb
[08:38] thanks, seriously.
[08:39] one on story ID 8, and one on story 4089014, which I chose because it has 701 reviews and 60 chapters
[08:39] I have not yet inspected the WARCs
[08:39] but they should work
[08:39] actually, they might be slightly broken -- I'm not sure if --page-requisites is doing what I think it's doing
[08:39] time to fire up wayback
[08:39] so, yeah
[08:40] I don't think you need to grab one profile per story
[08:40] it is probably better to queue all the URLs up and just fetch once per unique URL
[08:40] but that depends on your approach
[08:41] warcs can be fixed later; to be honest, i have no idea why session data is useful to anyone, even the archivers.
[08:42] request/response headers tell you the circumstances under which a resource was retrieved, which is important for determining what state that resource is in
[08:42] because Web resources can change their content depending on headers
[08:43] they're that dynamic?
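(The two-step approach from [07:36], download the first page and queue chapters 2..n only if a chapter box is present, is small in Mechanize. A sketch; the select-box name "chapter" is a guess at fanfiction.net's markup, not verified.)

```ruby
# Sketch of the queueing strategy described at [07:36]-[07:37].
require 'mechanize'

sid   = ARGV[0] or abort 'usage: ruby chapter_queue.rb STORY_ID'
agent = Mechanize.new
queue = ["http://www.fanfiction.net/s/#{sid}/1/"]

first = agent.get(queue.first)
box   = first.search('select[name="chapter"]').first  # guessed selector

if box
  chapters = box.search('option').size
  (2..chapters).each { |n| queue << "http://www.fanfiction.net/s/#{sid}/#{n}/" }
end
# a story with a single chapter has no box, so the queue stays at one URL

queue.drop(1).each { |url| agent.get(url) }
```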
[08:43] Web resources can change based on *anything*
[08:43] i need to learn to type slower
[08:43] oy :P
[08:44] oh, fuck
[08:44] yeah, I didn't fetch the images or CSS
[08:44] that needs to be fixed
[08:45] oh, damnit
[08:45] the chapter selector doesn't work in the WARC
[08:45] because it suffixes the name of the story
[08:45] that's fairly annoying
[08:46] bsmith093: if you want to see what I'm "oh, fuck"ing about: https://s3.amazonaws.com/nw-depot/wayback1.png
[08:48] i would say that's fine, the images don't change much ever
[08:48] it's not fine, it's incomplete
[08:48] grab once and link to them
[08:48] just needs some wget tweaks though
[08:48] bsmith093: There's no emergency, so there's no reason not to do it right.
[08:48] ok, then
[08:48] also I want to find a way to get that chapter selector working
[08:49] ALL THAT SAID
[08:49] if all you want is the text, the text is there
[08:50] hmm
[08:50] I wonder how hard it'd be to set up our own Wayback Machine
[08:50] with a WARC upload UI
[08:50] that'd make checking archives pretty snazzy
[08:50] snazzy, too
[08:51] * yipdw tries
[08:51] Haha, I was just thinking a warc viewer for my phone would be neat too.
[08:51] wayback seems to already support that in some capacity, so maybe I just need to throw on some UI code
[08:51] Wyatt|Wor: I wish there was a lightweight WARC viewer out there
[08:51] Actually, are there browser plugins for warc files or something?
[08:51] I wish :P
[08:51] I didn't even think to look.
[08:51] if you find one, let me know
[08:52] Ah...will do.
[08:52] wayback is the only thing I've found that will render a WARC's content in a Web browser
[08:52] WARNING: Installing to ~/.gem since /var/lib/gems/1.8 and /var/lib/gems/1.8/bin aren't both writable. WARNING: You don't have /home/ben/.gem/ruby/1.8/bin in your PATH, gem executables will not run.
[08:52] and it's pretty heavy
[08:52] that's the output of gem install mechanize
[08:52] it worked, but i figure huge warnings are notable
[08:52] bsmith093: if you're using your system's Ruby installation, that'll happen
[08:53] * Wyatt|Wor flinches at the mention of ruby gems.
[08:53] happens every time i try to run make_story_urls
[08:53] you either need to grant your user write permission to those directories (ick) or use a Ruby distribution that your user controls
[08:53] i've got rvm in my home dir
[08:53] rvm is good for setting up the latter
[08:54] bsmith093: ...or set your $PATH.
[08:54] rvm isn't a Ruby distribution; it just manages distributions
[08:54] yeah, or that
[08:54] but mechanize's executables are not used by make_story_urls so
[08:54] where's $PATH, in the configs?
[08:54] Or use your distro's package manager to install the gem.
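(On the lightweight WARC viewer wished for at [08:51]: even without one, listing what a WARC contains is only a few lines. A sketch that assumes an *uncompressed* .warc; a .warc.gz would need each gzip member inflated first.)

```ruby
# Sketch: walk an uncompressed WARC file and print each record's type and
# target URI. WARC records are MIME-style headers, a blank line, then
# Content-Length bytes of body followed by CRLF CRLF.
File.open(ARGV[0], 'rb') do |f|
  until f.eof?
    line = f.gets
    next unless line && line.start_with?('WARC/')

    headers = {}
    while (h = f.gets) && h.strip != ''
      key, value = h.split(':', 2)
      headers[key] = value.strip if value
    end

    puts "#{headers['WARC-Type']}\t#{headers['WARC-Target-URI']}"
    f.seek(headers['Content-Length'].to_i + 4, IO::SEEK_CUR)  # body + CRLFCRLF
  end
end
```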
[08:54] PATH is an environment variable, but don't worry about it
[08:54] require': no such file to load -- mechanize (LoadError)
[08:54] from make_story_urls.rb:3
[08:54] before and after
[08:54] do this
[08:54] add require "rubygems" to the top of all Ruby source files
[08:55] I don't like to do that for various reasons but it will ensure Rubygems is loaded
[08:55] (the main reason is that Ruby programs should not have any dependency on a specific package manager)
[08:56] /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- /home/ben/1577729/url_generators (LoadError)
[08:56] from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
[08:56] ran for a sec then that
[08:56] you'll need that file from the gist, too
[08:56] oh, it's not in there
[08:56] https://gist.github.com/1577729#file_url_generators.rb
[08:56] yipdw: That's the most salient argument I think I've heard against gem from a non-Debian/Gentoo developer.
[08:57] Wyatt|Wor: heh
[08:57] Wyatt|Wor: yeah, it's largely a theoretical argument but that is a good point
[08:57] ruby programs shouldn't break just because you installed a library via Rubygems or apt-get or whatever
[08:59] yipdw: Oh, I wouldn't say it's theoretical. If our experience in buying a couple "rails hosting" brands is any indication, it's more like...a tsunami of ass-pain.
[08:59] See also: flameeyes' adventures in gem packaging.
[08:59] I feel really bad for people who have to package Ruby gems
[09:00] gems move really, really freaking fast
[09:00] ok, actual code error this time: /home/ben/.gem/ruby/1.8/gems/mechanize-2.1/lib/mechanize/http/agent.rb:303:in `fetch': 404 => Net::HTTPNotFound (Mechanize::ResponseCodeError) from /home/ben/.gem/ruby/1.8/gems/mechanize-2.1/lib/mechanize.rb:319:in `get' from make_story_urls.rb:16
[09:00] Kind of. Sometimes.
[09:00] bsmith093: yeah, that script has no graceful error handling at all
[09:00] but uh
[09:00] are you sure you passed a valid story ID as the first argument
[09:00] to get_one_story
[09:01] oh, um, i was running "ruby make_story_urls.rb" errr, whoops :D
[09:02] Wyatt|Wor: actually, for most languages I work with -- python, ruby, occasionally haskell and node -- I've begun to not use the OS' package manager
[09:02] and have been instead using easy_install, rubygems, cabal, npm
[09:02] it's way more complex and makes my package manifest incomplete, but there are so many other people who just publish libraries via those languages' own package managers
[09:03] which is quite a bit of inertia to overcome
[09:03] the only language I can think of that I work in and use the distribution's packages for is C/C++
[09:03] and that's not entirely true for things like Qt :P
[09:03] Not familiar with the latter two, but python eggs have a lot of the same issues as gems, as far as I'm aware.
[09:04] At least CPAN gets it right~
[09:04] I think I'll become rich and famous if I find a way to encapsulate a gem/egg/whatever as a deb or whatever
[09:04] short version: could not find gem custom_require locally or in a repository
[09:04] YES YOU WILL, fantastically so
[09:04] Just formalise package metadata about the gem to the extent perl does and you can.
[09:05] Wyatt|Wor: what does CPAN do? I've just used perl -MCPAN -e 'install ...'
[09:05] is there a way to do it that doesn't involve doing that
[09:05] or, more specifically, respects the OS' package management system
[09:05] yipdw: cpan itself is just software. It's all because they have a good packaging format that we can have things like g-cpan.
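(On the `require "rubygems"` advice at [08:54]: a compromise that keeps scripts from hard-depending on RubyGems is to fall back to it only when a bare require fails. A small sketch:)

```ruby
# Try the plain require first; Ruby 1.8 does not load RubyGems by default,
# so fall back to it only if the library isn't on the load path already.
begin
  require 'mechanize'
rescue LoadError
  require 'rubygems'
  require 'mechanize'
end
```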
[09:05] ahh
[09:06] actually, that reminds me
[09:06] the source code that drives rubygems.org is available
[09:07] perhaps it is feasible to add a service endpoint to it that makes it behave as an apt repo
[09:08] BTW, here's the horse's mouth on the subject: http://blog.flameeyes.eu/2008/12/14/rubygems-cpan-and-other-languages
[09:11] ahh
[09:11] yeah, I agree with all of those points
[09:11] there has been *some* success on the standardization front, though
[09:11] namely, running "rake" in an increasing number of projects runs the test suite
[09:11] regardless of test harness
[09:12] but, yes, the file format of gems is scattershot
[09:17] Ergh, yeah, if last week's tirade about mongo_mapper is any indication.
[09:17] Well, a couple weeks, I guess.
[09:18] oh, I had no idea mongo_mapper sucked that bad
[09:19] Oh, did you read his post about it?
[09:19] yeah
[09:20] I also realized that the gems I maintain do not include test files or a Rakefile in their gem form
[09:20] under the rationale that tests and build process are useful only to a developer
[09:20] I'll have to change that
[09:20] somehow it didn't click that someone might want to use the *.gem and repackage it in a package manager that does things like run tests
[09:21] Hehe, yeah. The most recent post is a semi-continuation of the mongo_mapper post, too. This happens every couple months or so, btw
[09:22] I'm surprised he's stuck with it
[09:22] (I didn't :P)
[09:22] try to get gems to play nice with the package manager, that is
[09:22] And thanks! I'm not a Ruby user, personally, but I'm always thankful when release engineering is improved.
[09:23] Yeah, he _really_ loves him some ruby
[09:23] yeah, no problem
[09:23] thanks for pointing out flameeyes' blog
[09:24] I'll follow it, as he is the first person I've seen who is still sticking with it
[09:24] most other people I know who do Ruby use rvm + bundler to just throw all of an application's dependencies into a directory
[09:24] I mean, it works, and it isolates things
[09:24] but it is very heavy
[09:25] it makes sense on systems that don't really try to define their system configuration in terms of packages
[09:25] like Windows, OS X
[09:25] RVM is kind of neat for developers.
[09:26] But it's a nightmare for our setup.
[09:26] (Speaking of dependency hell, http://blog.flameeyes.eu/files/bones-dependencies-graph.png)
[09:28] haha, what
[09:28] oh, bones
[09:28] ugh
[09:28] I do not like bones, jeweler, hoe
[09:29] they make the process of making a gem so ridiculously complex
[09:29] Apparently they make the packaging difficult, too
[09:29] actually, the gem command in bundler is very minimal and seems to do it best
[09:35] oh!
[09:35] fanfiction.net pages include the canonical URL
[09:35] badass
[09:42] They generate a lot of stuff into their pages, as I recall.
[09:42] yeah, really helps with retrieval
[09:52] "To modrenaissancewoman: Thank you for pointing that out. I thought French kissing is the one where friends give each other on their cheeks. My mistakes."
[09:52] whoops
[09:52] A common mistake.
[09:52] yeah, but the implications are funny
[09:53] lol, was joking.
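(On the canonical URL note at [09:35]: if fanfiction.net exposes it as a standard `<link rel="canonical">` tag, which is an assumption here, a retrieval script can normalize its URLs against the page itself:)

```ruby
# Sketch: ask a fetched page for its own canonical URL.
require 'mechanize'

page = Mechanize.new.get('http://www.fanfiction.net/s/4089014/1/')
link = page.at('link[rel="canonical"]')
puts(link ? link['href'] : 'no canonical link found')
```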
[09:53] ruby get_one_story.rb http://www.fanfiction.net/s/4/1/ get_one_story.rb:11: warning: already initialized constant VERSION get_one_story.rb:11: command not found: ./make_story_urls.rb http://www.fanfiction.net/s/4/1/ get_one_story.rb:26:in `initialize': No such file or directory - /home/ben/1577729/data/h/ht/htt/http://www.fanfiction.net/s/4/1//http://www.fanfiction.net/s/4/1/_urls (Errno::ENOENT) from get_one_story.rb:26:in `open
[09:53] bsmith093: it's just the ID, not the full URL
[09:53] /home/ben/1577729/wget-warc -U 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.54 Safari/535.2' -o /home/ben/1577729/data/4/4/4/4/4.log -e 'robots=off' --warc-file=/home/ben/1577729/data/4/4/4/4/4 --warc-max-size=inf --warc-header='operator: Archive Team' --warc-header='ff-download-script-version: 20120108.01' -nd -nv --no-timestamping --page-requisites -i /home/ben/1577729/d
[09:53] get_one_story.rb:11: command not found: ./make_story_urls.rb 4
[09:53] get_one_story.rb:11: warning: already initialized constant VERSION
[09:53] ruby get_one_story.rb 4
[09:53] sh: /home/ben/1577729/wget-warc: not found
[09:53] that, then
[09:53] you need wget-warc
[09:54] or some wget that does WARC
[09:54] adjust the WGET_WARC constant as required
[09:54] oy right, hold on
[09:56] what's gnutls, and do i want wget-warc compiled with it?
[09:57] get_one_story.rb:11: command not found: ./make_story_urls.rb 4
[09:57] man, fanfiction.net really does not want to be archived
[09:57] in addition to their robots.txt file there's a ROBOTS=NOARCHIVE meta tag in every generated output
[09:58] I feel bad doing this
[09:58] well, it is technically against the tos, not that i care
[09:58] yipdw: Yeah, I mentioned that a while back, I think.
[09:58] hm
[09:58] yeah
[09:58] bsmith093: Oh dear, I might get the account I don't have banned.
[09:58] I think at this point I'll just stop
[09:59] * chronomex slides in
[09:59] I mean, yes, I understand the point of archiving this, but on the other hand ignoring all of those signs is really shitty netizen behavior
[10:00] shitty netizen on one hand, but fanfic people on the other hand
[10:00] the noarchive bullshit is just ff.n trying to force the internet to depend on its continued existence
[10:01] I think that's debatable. It's not very good netizenship to put yourself in a position where millions of users' work could just disappear, either.
[10:01] right
[10:01] I know that the existence of public logs is going to cause me to regret saying so eventually, but fuck that shit.
[10:01] a moral quandary
[10:01] History lasts longer than any one website.
[10:02] story_page = agent.get(UrlGenerators::STORY_URL[sid, '']) -- this line in make_story_urls is throwing an error
[10:02] Which is about as weird a way as I could have found to express that.
[10:02] in my experience, fanfic people can be rabidly anti-archivism, and I have no idea why -- especially because all the fannish people I've met save webpages religiously
[10:03] And they don't tend to keep backups.
[10:03] Of their own stuff, at least.
[10:03] well, maybe.
[10:03] chronomex: http://ansuz.sooke.bc.ca/entry/35 is one theory
[10:03] the "it's my story, and i'll kill it if i want to" line of thought
[10:03] bsmith093: there's more to it than that
[10:03] bsmith093: yes, that. exactly.
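(Regarding "adjust the WGET_WARC constant as required" at [09:54]: a hypothetical tweak, not in the gist, that resolves the binary at startup and fails with a readable message instead of sh's bare "not found":)

```ruby
# Hypothetical: allow an environment override for the wget-warc path and
# check it up front. WGET_WARC is the constant name used by the gist scripts.
WGET_WARC = ENV.fetch('WGET_WARC', File.expand_path('~/1577729/wget-warc'))

unless File.executable?(WGET_WARC)
  abort "#{WGET_WARC} is not an executable wget-warc; build a WARC-capable " \
        "wget and point the WGET_WARC environment variable at it"
end
```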
[10:04] /home/ben/.gem/ruby/1.8/gems/mechanize-2.1/lib/mechanize/http/agent.rb:303:in `fetch': 404 => Net::HTTPNotFound (Mechanize::ResponseCodeError)
[10:04] ben@ben-laptop:~/1577729$ ruby make_story_urls.rb
[10:04] from /home/ben/.gem/ruby/1.8/gems/mechanize-2.1/lib/mechanize.rb:319:in `get'
[10:04] from make_story_urls.rb:16
[10:04] sorry, forgot to dump line breaks
[10:04] a lot of fandoms are actually very sensitive to the legal complications surrounding their fandom
[10:05] bsmith093: make_story_urls is meant to be called from get_one_story, and it requires a story ID
[10:05] oy, well that explains it
[10:05] ruby make_story_urls.rb 4
[10:06] worked perfectly
[10:08] lol wtf
[10:08] http://b.fanfiction.net/static/styles/fanfiction42.css
[10:08] I do not know how the fuck that is coming back
[10:09] if I get that with curl, I get gzipped CSS (?!)
[10:09] if I get that with Chrome, I get an HTML page that has the CSS between <pre> tags
[10:09]  yipdw: gzipped CSS!?
[10:09]  and I mean it's gzipped CSS, not merely sent with Content-Encoding: gzip and compressed by the server
[10:09]  Wyatt|Wor: yeah, try it
[10:10]  I...
[10:10]  What.
[10:10]  I am amazed that works
[10:11] no <pre> tags in opera
[10:11]  oh
[10:11]  that might just be the web inspector
[10:11]  are you viewing-source in chrome?
[10:11]  I am now
[10:11]  and yeah, that appears fine
[10:12]  but that is so weird
[10:12] quick thing: i have a list of id numbers in a file, and they work individually, but the autogeneration part of the script seems to be tripping over itself
[10:13]  could you just package the id list into the repo
[10:13]  well, wait
[10:13]  it IS sent with Content-Encoding: gzip
[10:13]  so I guess that's valid
[10:13]  Huh, interesting.
[10:13]  I expected curl to inflate the stream, though
[10:13]  to say nothing of wget
[10:14]  are they gzipping gzipped data?
[10:14]  Content-Encoding: gzip
[10:14]  right
[10:14]  I thought curl/wget would be able to handle that by inflating the stream
[10:14]  it is single-gzipped
[10:14]  (curl | gunzip) --> plaintext
[10:15]  yeah, that works
[10:17]  ohh
[10:17]  b.fanfiction.net sends that regardless of Accept-Encoding
[10:17]  that's...broken
[10:21]  I guess we just need to download and gunzip that separately
[10:21]  or something
[10:21]  tricksy
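(A sketch of the workaround floated at [10:21], using only Ruby's standard library: fetch the stylesheet and gunzip the body only when it actually begins with the gzip magic bytes, since b.fanfiction.net serves it gzipped regardless of Accept-Encoding.)

```ruby
# Fetch b.fanfiction.net's CSS and inflate it if it came back gzipped,
# as observed above with curl.
require 'net/http'
require 'uri'
require 'zlib'
require 'stringio'

uri  = URI.parse('http://b.fanfiction.net/static/styles/fanfiction42.css')
body = Net::HTTP.get(uri)

# every gzip stream starts with the magic bytes 0x1f 0x8b
if body.unpack('C2') == [0x1f, 0x8b]
  body = Zlib::GzipReader.new(StringIO.new(body)).read
end

puts body[0, 200]  # first bytes of the plain CSS
```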
[10:37] well it's 5:36am EST, so being in NY, i'm going to bed; keep the repo updated, ciao, night | morning depending on timezones
[10:43]  nite
[11:46]  And here I am!!
[11:46]  Packing the car up
[11:46]  Gah, I thought dotwizards.com would be some cool Japanese pixel art site.  Alas, corporate coaching.
[11:47]  SketchCow: Ah, have a good Magfest?
[11:47]  I had a very good magfest.
[11:47]  Awesome.  That couple with the arcade sounds like it's going to be an awesome...err, episode?
[11:49]  Just more filming stuff
[11:49]  But yeah, I like them a lot.
[11:50]  http://www.facebook.com/SavePointMD
[11:51]  How much do you think Arcade will cover in terms of pinball's role in arcades?  I mean, yeah there's Tilt! (which I need to get a copy of, come to think of it), but I'm a fanatic. ;)
[12:54]  Good question, no answer.
[14:42]  can someoneca
[14:43]  hey
[14:44]  underscor gave a rough address in here a ways back. google link. can someone tell ot to me?
[14:45]  http://g.co/maps/zge32
[14:45]  on phone, keyboard fuckery limited.
[14:45]  just give me the location, kid
[14:45]  it's grassy knoll ct woodbridge, va 22193
[14:45]  om
[14:46]  Is there a nickserv here?
[14:46]  no
[14:46]  no services on efnet
[14:46]  Besides chanfix
[14:46]  ok
[14:47]  SketchCow: does this mean you'll be here in like 45 minutes?
[14:47]  or are you just planning ahead
[14:47]  may e
[14:47]  oh
[14:47]  damn, we have church at 11
[14:48]  see soon!
[14:48]  when do you get back?
[14:49]  no rush.
[14:49]  no ticking off family.
[14:49]  1:15ish
[14:49]  hahah
[14:51]  see you around then.
[20:07]  Someone posted some data to usenet in 1982 and I made a visualization of it today.  http://olduse.net/blog/current_usenet_map/  fun collaboration :)
[20:14]  I especially like the tall doubly linked list of systems at the bottom. we don't build networks like that anymore.
[20:15]  Token Link?
[20:15]  Token Ring rather
[20:15]  Kill me!
[20:17]  could be token ring, more likely it was a dozen systems talking over 300 baud dialup
[20:18]  hmm, actually, token ring seems to be 1985 or so, not 1982
[20:18] Seems like an expensive way of connecting
[20:18]  If the middle box needs to reach out, it needs to rely on a bunch of telephone lines
[20:19]  and it probably takes it *days* to get new traffic
[20:19]  Damn (whoever) for not providing more metadata
[20:20]  yeah, I hope for a future dataset with more info
[20:20]  Also, why the double arrows everywhere?
[20:20] Seems like they don't provide additional information
[20:20]  (of course, telehack.org has a newer, much more extensive uucp map they use in their simulation)
[20:21]  bidirectional links, each system could call the other
[20:21]  Right. But this applied to ALL of the links?
[20:21]  Except the wormhole :p
[20:22]  according to Mark, it did, yes
[20:22]  Wait, look at eagle and mhux*
[20:23]  Multiple links
[20:23]  mhuxj -> eagle *2
[20:23]  mhuxj <-> mhuxm *2
[20:23]  yeah, I've been fixing a few that he doubled
[20:24]  Oh, so that's not even useful data? ._.
[20:25]  well, look at the original post :P
[20:25]  it was like a bunch of badly formatted lines from 1982
[20:25]  But that's like text and stuff
[20:25]  I can't read text
[20:25]  hahahaha
[20:26]  Like, if someone would send me a link to textfiles.com
[20:26]  I'd be lost
[20:27] this is why I thought a graphical map would be nice.. I personally prefer the hand-drawn ascii ones below it though
[20:27] In fact, if I didn't have this program that translated IRC messages to pictures of fruit, I couldn't have this conversation
[20:30]  http://www.textfiles.com/conspiracy/art-04.txt
[20:30]  Just look at the second paragraph
[20:31]  How nicely sliced it is
[20:31] A single diagonal stroke
[20:31]  Same with the first too actually
[22:20]  Hi
[22:21]  Any emergency downloads going on right now?
[22:29]  (will read the log tomorrow.. gn8)