[01:57] http://seclists.org/fulldisclosure/ Full Disclosure security mailing list is dying
[02:12] zyphlar: And they don't provide a convenient file to download?
[02:23] Nope, doesn't look like it. Did anybody make a grab while I was at school?
[02:27] archivebot is grabbing it
[02:28] or, it was
[02:28] Oh man, there's some public access television on IA. Download manager at the ready
[02:29] actually seclists is in the archivebot queue, but archivebot will get banned quickly when it starts crawling it
[02:30] the person running seclists sometimes whitelists an IP for crawling
[02:30] too bad we wouldn't even know which archivebot pipeline the job will land on
[02:33] ivan`: And since the last time I asked I never received an answer: to add my metadata to items on IA, I put it in a huge file and send it to an admin, right?
[02:33] wouldn't marc.info have a copy?
[02:34] http://marc.info/?l=full-disclosure
[02:34] closure: archivebot started crawling that and presumably got banned
[02:34] hard to tell without seeing the wpull log
[02:34] namespace: I don't know anything about uploading to IA
[02:34] godane might know
[02:35] which, seclists.org or marc.info?
[02:35] marc
[03:00] does anyone here have a link to software that can handle reading from DAT72 Quantum tapes?
[06:28] ivan`: I'm halfway through http://seclists.org/fulldisclosure/ and not banned yet...
[06:29] ivan`: oh.... haha.... just banned 10 minutes ago
[06:30] -----
[06:30] Blocked for possible web abuse
[06:30] The IP address you are coming from has requested an inordinately large number of pages in a short amount of time and has been temporarily blocked to conserve our resources. This often happens when people try to use web spidering programs to download large portions of the site. The block will be removed 24 hours after the latest period of high traffic. If you feel this IP ban was made in error, you can email fyodor@nmap.org.
[06:30] -----
[06:34] I will restart it in 24 hours.
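[editor's note: the ban-and-restart cycle above amounts to batch-throttled crawling. A minimal sketch of that idea follows; it is not archivebot's or wpull's actual logic, and `crawl_batches`, its defaults, and the injectable `fetch` parameter are illustrative assumptions.]

```python
import time
import urllib.request


def crawl_batches(urls, batch_size=20000, pause_seconds=24 * 3600, fetch=None):
    """Fetch URLs in capped batches, sleeping between batches to stay
    under a per-day request limit (numbers taken from the chat; adjust
    to whatever the target site tolerates)."""
    if fetch is None:
        # Default fetcher; a real crawler would add retries and a User-Agent.
        fetch = lambda u: urllib.request.urlopen(u).read()
    results = []
    for start in range(0, len(urls), batch_size):
        batch = urls[start:start + batch_size]
        results.extend(fetch(u) for u in batch)
        # Only sleep if there is another batch left to fetch.
        if start + batch_size < len(urls):
            time.sleep(pause_seconds)
    return results
```

Injecting `fetch` keeps the throttling logic testable without hitting the network, e.g. `crawl_batches(["a", "b", "c"], batch_size=2, pause_seconds=0, fetch=str.upper)`.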
[06:34] It looks like it downloaded 30000 URLs without getting banned, so I'll make it download a max of 20000 URLs every 24 hours.
[07:20] you guys should look at this: http://www.vintageliterature.ca
[21:51] so, is there an easy way to extract a whole label's worth of email from GMail? I've got (and I suspect many other people do as well) the Full Disclosure digest subscription going back to 2005 in my email box. If other archival methods aren't working terribly well, this could be an avenue
[22:03] dashcloud: http://archivebot.at.ninjawedding.org:4567/#/histories/http://seclists.org/fulldisclosure/ <-- that seems to have worked
[22:03] we'll have to see what wayback says
[22:07] I assume that got banned before getting everything
[22:07] dashcloud: you can use Takeout to get an export of a label
[22:07] Google's Takeout tends to crash and not give you a backup, though
[22:07] it's data export for people with just a little bit of email
[22:08] I assume they're running Takeout processes in memory/disk-restricted environments and they just crash after 2GB heap or something
[22:13] I've sent a bunch of feedback on that Takeout feedback form but it's probably ignored
[22:20] dashcloud: if you install Thunderbird and set up the gmail account it creates an mbox file for each label
[22:21] retrieved via IMAP
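[editor's note: the Thunderbird-over-IMAP route mentioned above can also be scripted directly with Python's standard `imaplib` and `mailbox` modules. The sketch below is one hedged way to do it; the host, credentials, label name, and output path are placeholders, and Gmail additionally requires IMAP to be enabled plus an app-specific password.]

```python
import email
import imaplib
import mailbox


def quote_mailbox(name):
    """Quote an IMAP mailbox name (needed when it contains spaces,
    e.g. a Gmail label like "Full Disclosure")."""
    return '"%s"' % name.replace("\\", "\\\\").replace('"', '\\"')


def export_label(host, user, password, label, out_path):
    """Download every message under a Gmail label into a local mbox file.

    Gmail exposes each label as an IMAP mailbox, so selecting the label
    and fetching RFC822 bodies reproduces what Thunderbird does.
    """
    box = mailbox.mbox(out_path)
    imap = imaplib.IMAP4_SSL(host)
    try:
        imap.login(user, password)
        imap.select(quote_mailbox(label), readonly=True)
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            box.add(email.message_from_bytes(msg_data[0][1]))
        box.flush()
    finally:
        imap.logout()


# Example (placeholder credentials):
# export_label("imap.gmail.com", "you@gmail.com", "app-password",
#              "Full Disclosure", "full-disclosure.mbox")
```

Using `readonly=True` on the `select` keeps the export side-effect-free on the server (no messages get marked as read).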