[04:09] Hello
[04:28] hi
[05:17] that cnet metadata dump has all cbs evening news full episode links
[05:17] :-D
[06:10] if anyone wants to know, you can get a 2tb toshiba external for $90 at staples this week
[08:39] Microsoft Research Video 104188: Behind the Code with Tony Williams: https://archive.org/details/Microsoft_Research_Video_104188
[09:24] SketchCow: i found a way to get manuals from national instruments
[09:24] the manuals go back to the 1980s
[09:24] maybe 1970s
[09:28] the number for the file name is the part number
[11:32] so 2 of my recent microsoft research videos are in my top downloads of the week
[14:57] anyone made a script to generate the metadata csv for ias3 upload if I have a bunch of files like samename-issue##.pdf with similar metadata? I could hack something together myself but I figured why reinvent the wheel if someone has something
[15:00] I have scripts that pull from metadata in the filename.
[15:00] But that's now almost brain-dead work.
[15:00] Are you using the internetarchive python script to do it, yet?
[15:03] not yet, I was considering using python over a perl script. I imagine I could easily generate metadata for the flag provided the json was as simple as "title:title,date:"YYYY-MM-DD",..."
[18:05] ohhdemgir: i always forget the domain of your site... halp!
[18:08] http://rip.rarchives.com/
[18:13] ripme would rock if you could feed it a list of urls or use it as a cli tool
[18:14] oh nevermind the cli request, that works already
[18:52] schbirid, it does lol
[18:56] schbirid, what are you ripping?
[19:00] ohhdemgir: last archive was a bit sluggish, just 127GB uploaded
[19:01] aye, didn't expect that to be so popular but thanks for the assist :)
[19:01] no problem
[19:44] ohhdemgir: not imagefap, something else
[19:45] wink wink, nod nod
[19:47] speaking of imgfap, this guy is doing a fairly good job at archiving casting sets - http://www.imagefap.com/profile/fiftyteddy/galleries
[21:17] And my browser history gets just a little more life-without-parole-y
[21:30] yeahhhh, staying away from his other stuff :/
[21:34] SketchCow, talking about and documenting the script that collected all the gonewild data in that 220GB upload - http://www.reddit.com/r/DataHoarder/comments/245ij1/start_your_own_rgonewild_archive_automated_data/
[21:35] fair interest in a small unrelated sub, not bad
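
A minimal sketch of the filename-to-CSV idea discussed around [14:57]-[15:03], assuming a batch of files named like samename-issue##.pdf sitting in the current directory. The column names (identifier, file, title, date) and the identifier scheme are guesses for illustration, not what the internetarchive tooling necessarily expects, so check them against the upload docs before using the output.

```python
#!/usr/bin/env python3
# Sketch: scan the current directory for files named samename-issue##.pdf
# and write a metadata.csv with one row per file, ready to be adapted for
# an IA batch upload. Column names and identifier scheme are assumptions.
import csv
import re
from pathlib import Path

PATTERN = re.compile(r'(?P<name>.+)-issue(?P<issue>\d+)\.pdf$')

rows = []
for pdf in sorted(Path('.').glob('*.pdf')):
    m = PATTERN.match(pdf.name)
    if not m:
        continue  # skip files that don't follow the naming scheme
    name, issue = m.group('name'), m.group('issue')
    rows.append({
        'identifier': f'{name}-issue{issue}',   # hypothetical identifier scheme
        'file': pdf.name,
        'title': f'{name} issue {issue}',
        'date': '',                             # fill in YYYY-MM-DD if known
    })

with open('metadata.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['identifier', 'file', 'title', 'date'])
    writer.writeheader()
    writer.writerows(rows)
```

The same per-file dict could instead be passed directly as the metadata argument of the internetarchive library's upload() call rather than going through a CSV; either way the date column still has to be filled in by hand or pulled from some other source.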