[00:00] more baleeted-broken: https://www.sans.org/webcasts/pen-test-a-go-go-integrating-mobile-network-attacks-in-depth-pwnage-97007
[04:21] archiveteam.org down?
[05:01] Emcy: quick! archive archiveteam.or-- oh, wait
[05:01] seriously though, it's up for me :)
[05:18] SketchCow: the Wisconsin Public Radio shows should go into the audio_news collection
[05:19] i just noticed it
[05:19] since that's the section for public radio-type stuff
[06:25] do netsplits happen this often on EFnet?
[06:28] all the time
[06:52] Hmm http://p.defau.lt/?iZsX9v7OD2SZYH3XH6USkA
[08:51] That reminds me, have we archived the archiveteam.org wiki lately?
[08:51] Not sure how regularly we back up our own stuff
[08:55] No, we need to do that.
[08:55] I'll be doing a bunch of stuff once I'm out in CA (leaving for CA Thursday morning)
[09:04] I'll back it up now, too, using the usual WikiTeam stuff
[09:07] danneh_: time to cron it up ;)
[09:08] midas: aha, I'll work on that after I get the automation sorted out ;)
[09:11] alright, archiving now, will letchas know once it's done
[09:20] the dumpgenerator is doing something strange for me
[09:20] i've put it on a 30-second delay; the first request is delayed by 30 seconds, but after that only 10 seconds
[09:21] that's really interesting
[09:21] 30-second delay between grabbing each page?
[09:22] for most sites, even something like 5-10 seconds tends to fly under the radar decently well, in my experience
[09:22] but how exactly are you setting that 30-second delay?
[09:43] --delay=25
[09:43] like that
[09:47] not all delays in dumpgenerator are the same
[09:52] did i miss something in the settings?
[10:04] no
[10:04] ok :p
[10:05] If websites' status codes didn't lie, it would be a bit easier to decide what to do with requests ^^
[10:05] (^ excuses, all excuses)
[10:08] haha, archiving is a lot like running around with middle fingers in the air and screaming as hard as you can sometimes ;)
[10:08] and why am i not doing this in #wikiteam
[11:03] Alright, Archiveteam.org updated, gonna try to throw it up today sometime
[11:12] thanks
[14:30] it's working now
[14:30] was definitely down before though
[14:51] http://cs.umd.edu/~amiller/permacoin.pdf
[14:51] microsoft research is doing the best thing ever?
[14:57] gah why haven't i fixed up distcc yet D:
[14:59] urgh wrong channel, sry
[15:22] https://help.justin.tv/entries/41803380-Changes-to-Video-Archive-System :C
[15:30] #justofftv
[15:30] Sorry
[15:30] #justouttv
[17:56] http://www.engadget.com/2014/06/04/permacoin-mining-data-storage
[18:19] "You wouldn't be able to cheat and use Dropbox or Google Drive thanks to an encrypted key,"
[18:19] ...?
[18:20] sounds snakeoily
[18:20] how do they do distribution?
[20:12] wouldn't it just be easier to pay bitcoins to put/retrieve data?
[20:13] anyway, interesting, thanks for the link
[20:14] edcba: the idea is probably that it's a self-governing and self-healing network
[20:14] paying BTC to a (presumably) centralized service would not accomplish the same thing
[20:22] i.e. i don't see how the 'coin' part can work well here
[20:28] edcba: direct integration of the service with the currency (thus blockchain) removes the need for a central clearinghouse connecting the two
[20:38] also if i have permacoins, how do i retrieve my backed-up lost porn collection?
[21:09] https://i.imgur.com/ZFJsrRu.jpg
[21:13] :)
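[editor's note: regarding the wiki backup and --delay discussion above, the sketch below shows the intended behaviour of a fixed per-request delay while exporting wiki pages. It is not dumpgenerator's actual code; the wiki URL, page titles, and delay value are placeholders, and it assumes a MediaWiki wiki with Special:Export enabled.]

    import time
    import requests

    # Placeholders: wiki base URL, page titles, and delay are examples only.
    BASE = "https://wiki.archiveteam.org/index.php"
    TITLES = ["Main Page", "Deathwatch"]
    DELAY = 25  # seconds, analogous to dumpgenerator's --delay=25

    for title in TITLES:
        # Special:Export returns the page's current revision as XML.
        resp = requests.get(BASE, params={"title": "Special:Export", "pages": title}, timeout=60)
        resp.raise_for_status()
        print(title, len(resp.content), "bytes")
        time.sleep(DELAY)  # one fixed pause after every request

[as noted at 09:47, not all delays in dumpgenerator are the same, which would explain the uneven pacing reported at 09:20.]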
[21:48] https://i.imgur.com/dUFfCGJ.png lolwut?
[21:48] robots.txt?
[21:53] in that case the wayback machine shows a message with a link to the corresponding robots.txt file
[21:54] I've also seen it doing this on and off to different twitter profiles
[22:52] if those are removals not related to robots.txt, it would be nice if the requests were shared on chillingeffects
[22:54] http://truecrypt.sourceforge.net/robots.txt
[22:54] User-agent: ia_archiver
[22:54] Disallow: /
[22:54] that's voluntary :(
[22:59] http://www.reddit.com/r/conspiracy/comments/26swea/the_google_cache_and_the_waybackmachine_internet/chuqxv5
[22:59] «Yesterday I found this link that had a directory of all of the Truecrypt files archived. It now leads to an error page. It was working fine for me yesterday, I even downloaded a couple of files for testing purposes. Here is an archive.org link to the page in question (note: you cannot download any of the files from this archive.org page).»
[23:00] and now you can, because i told archivebot to get the whole directory
[23:01] yay
[23:02] who has a phonr and is in the us
[23:02] phone
[23:08] archivebot is pretty backlogged again though
[23:09] oh you did it in the past
[23:13] yes
[23:13] just in time :)
[23:24] SketchCow: I have lots of phones
[23:25] I've lost count of how many phone numbers I have
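[editor's note: the ia_archiver exclusion quoted at 22:54 can be checked with Python's standard-library robots.txt parser. A minimal sketch, assuming that robots.txt is still served at the URL from the log.]

    from urllib.robotparser import RobotFileParser

    # robots.txt quoted at 22:54; whether it is still served there is not guaranteed.
    rp = RobotFileParser("http://truecrypt.sourceforge.net/robots.txt")
    rp.read()

    # "ia_archiver" is the user agent the Wayback Machine honours for exclusions;
    # with "User-agent: ia_archiver" / "Disallow: /" this prints False.
    print(rp.can_fetch("ia_archiver", "http://truecrypt.sourceforge.net/"))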