[00:07] No, you didn't.
[00:16] Can I get a slot so I can upload it to you?
[00:35] On it.
[00:37] thanks
[01:16] Blasting some Twitter up to the archive.
[02:50] Twitter Blasted. http://archive.org/details/twitterstream
[03:22] SketchCow: yeah i'm pandering to you, but as a fellow Fitbit user: http://www.fitbit.com/premium/export - $50 to free our data, eh? fuckers.
[03:23] "Your data belongs to you!" ... for an extra one-time charge!
[03:23] That is the funny.
[03:29] Their ToS is oh so helpful too. "You hereby grant to Fitbit a perpetual, irrevocable, non-exclusive, worldwide, blah blah blah ... information you submit (User-Generated Content)", but then later they explicitly disclaim ownership of the User-Generated Content. Methinks I will make this my personal mission from god to get my stats for free on a constant basis without paying $50
[03:31] whew, there's an API and folks are working on it/have done it.
[03:40] There, I punched Fitbit
[03:43] well said
[03:45] i haven't tried it, and there's nothing lovelier than moving your data from one closed silo to another, but this looks like the only complete free export via API someone's come up with: http://quantifiedself.com/2011/07/fitbit-google-spreadsheets-awesome/ ; also found a mention of libfitbit, which is about accessing the data on the device directly: https://github.com/qdot/libfitbit
[03:46] backtrack: ok, so Google Docs isn't a "closed silo", but you get my point
[06:44] i got the Star Wars music by John Williams from the BBC
[06:46] it was only aired once
[07:17] it looks like the new Engadget site uses less HTML code
[07:18] my old 2004 URLs dump of Engadget is like 3 times the size of the new dump
[07:19] 126 MB vs the new 46 MB dump
[07:24] also looks like Ars Technica image dumps really get big around 2008 or 2009
[09:07] so it looks like parts of the S2205 articles i'm grabbing are very big
[09:07] like 5 pages are 81 MB
[10:54] http://www.korea-dpr.com/e_library.html
[10:56] looks like the image host is closed
[10:56] did archive.org get it?
[11:08] i mean archiveteam
[11:08] also the FTP upload is acting very slow
[11:09] like i can usually get above 70 kbytes/s
[11:09] right now it's running at 42 kbytes/s
[11:09] ok, it looks like it's jumping back up
[11:24] It happens
[12:36] Yahoo blog in Vietnam (http://blog.yahoo.com) will close on January 17th, 2013. Should we rescue this?
[12:37] aaaugh fuck yahoo
[12:39] Maybe we should rescue after Jan 17th, because after that time you can't do anything. It will close on March 14th
[12:39] huh?
[12:40] but an index of URLs would be nice to do first
[12:40] Yes
[12:40] hmmm, I wonder if any of the Yahoo usernames we've gathered for other projects will be of use
[12:40] can't hurt
[12:42] May not. This service is only in Vietnam, so the database may not be correct
[12:44] is it different from normal Yahoo usernames?
[12:45] No
[12:46] ok
[12:50] User pages are like this: http://blog.yahoo.com/{usernames}
[12:50] that's simple enough
[12:54] If you need more info, contact me. I know Vietnamese
[12:55] great
[12:55] are you located there at the moment?
[13:01] tuankiet: cool
[13:03] @chronomex: What do you mean?
[13:03] in Vietnam
[13:03] Yes
[13:03] cool
[13:08] My nationality is Vietnamese
[13:11] greetings from Seattle, USA
[13:14] Thanks!
[13:36] Is this the message? http://blog.yahoo.com/vnteam/articles/831443
[13:53] Yes
[13:55] Anyone: if your task were saving a whole MediaWiki-based website, how would you do it?
[13:57] norbert79: https://code.google.com/p/wikiteam/wiki/NewTutorial ?
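
[Editor's note on the WikiTeam tutorial linked just above: the tool that tutorial actually describes is wikiteam's dumpgenerator.py, which also dumps images and can resume interrupted jobs, so use that for real work. The sketch below only illustrates the underlying idea: enumerate page titles through the MediaWiki api.php, then pull each page's XML via Special:Export. The wiki URL is a placeholder, and continuation keys and Special:Export history behaviour vary between MediaWiki versions.]

    # Minimal sketch of a MediaWiki dump, assuming http://wiki.example.org is the target
    import json
    import os
    import urllib.parse
    import urllib.request

    WIKI = "http://wiki.example.org"   # placeholder base URL of the target wiki
    API = WIKI + "/api.php"

    def list_all_pages():
        """Yield every page title, following API continuation."""
        params = {"action": "query", "list": "allpages",
                  "aplimit": "500", "format": "json"}
        while True:
            url = API + "?" + urllib.parse.urlencode(params)
            data = json.load(urllib.request.urlopen(url))
            for page in data["query"]["allpages"]:
                yield page["title"]
            # Continuation keys differ between MediaWiki versions; handle both styles.
            cont = data.get("continue") or data.get("query-continue", {}).get("allpages")
            if not cont:
                break
            params.update(cont)

    def export_page(title):
        """Fetch one page's XML export via Special:Export.
        Whether full history is served for a plain GET depends on the wiki's
        configuration; dumpgenerator.py handles those edge cases."""
        url = (WIKI + "/index.php?title=Special:Export&history=1&pages="
               + urllib.parse.quote(title))
        return urllib.request.urlopen(url).read()

    if __name__ == "__main__":
        os.makedirs("dump", exist_ok=True)
        for title in list_all_pages():
            safe = urllib.parse.quote(title, safe="")   # filesystem-safe file name
            with open(os.path.join("dump", safe + ".xml"), "wb") as f:
                f.write(export_page(title))

[Each Special:Export response carries its own <mediawiki> wrapper, so stitching the per-page files into one valid dump takes extra work; that is another reason to prefer dumpgenerator.py for a real rescue.]
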
[13:58] Nice
[13:58] I am actually looking for a method of making a wiki displayable within Gopher
[13:58] so basically replicating it so it works within gopherd too
[13:59] hmm, this might work
[15:09] http://contemporary-home-computing.org/1tb/archives/3647
[15:10] http://www.merz-akademie.de/lectures/where-are-the-files
[15:12] Cool
[16:11] alard: Lua runtime error: dailybooth.lua:8: attempt to index local 'f' (a nil value).
[16:12] (ID 613221: 'LucceMulrine')
[16:16] So, fundraising.
[16:16] I will be making an Archive Team Holiday Hard Drive video to push people to donate.
[16:18] SketchCow: I'd rub Jeff Atwood a second time for a pretty please
[16:20] alard: Happened again with a different user (ID 624462: 'carlit0x'); I guess it's due to getting rate-limited? A somewhat worrying message, even if it isn't fatal
[16:22] Deewiant: #dailybooth
[16:29] dailybooth.com is returning 504
[16:33] balrog_: #dailybooth
[17:05] The 'f' (a nil value) error usually indicates that there's no file to read (if there was an HTTP error, for instance). Not sure if it's a problem.
[17:46] http://urbusinessnetwork.com/urbnshows/URBN-SHOWS/BizSAM1_MYOB%20guest%20Jason%20Scott%202012-11-29.124335.mp3
[20:34] anyone have a script to grab Flickr users/sets?
[20:36] http://archiveteam.org/index.php?title=FlickrFckr
[20:53] sounds like that's for "your Flickr photos"
[20:53] I want everybody else's
[20:56] http://voicebunny.com/ is going to be brutal
[21:10] I'm going to try it for an archiveteam project.
[22:34] http://www.flickr.com/photos/textfiles/8251180702/in/photostream/
[22:36] 1 2
[22:36] fucking PuTTY
[22:37] what are you pointing at
[22:37] or just the books in general
[22:39] SketchCow: Yes, it's a library, well done :))
[22:39] Though I wonder if any library has considered storing the works digitally too
[22:40] like how archive.org does
[22:43] it would be nice to have real digital libraries like back in the days of WAIS and Lexis/Nexis
[22:45] I mean accessible to anyone
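
[Editor's note on the 20:34 question about grabbing someone else's Flickr photos: the public REST API will list another user's public photos without any login, and flickr.photosets.getList / flickr.photosets.getPhotos work the same way for sets. Below is a minimal sketch, not the FlickrFckr script; the API key and user NSID are placeholders, and the farm/server static-photo URL uses the documented "_b" (large) size suffix, which may not exist for every photo, so treat it as a starting point rather than a finished grabber.]

    # Minimal sketch: list another user's public Flickr photos via the REST API
    import json
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_API_KEY"      # placeholder: request your own key from Flickr
    USER_ID = "12345678@N00"      # placeholder NSID of the user you want to grab
    REST = "https://api.flickr.com/services/rest/"

    def call(method, **params):
        """Call one Flickr REST method and return the decoded JSON response."""
        params.update(method=method, api_key=API_KEY,
                      format="json", nojsoncallback=1)
        url = REST + "?" + urllib.parse.urlencode(params)
        return json.load(urllib.request.urlopen(url))

    def public_photos(user_id):
        """Yield (photo_id, title, url) for every public photo of a user."""
        page, pages = 1, 1
        while page <= pages:
            data = call("flickr.people.getPublicPhotos",
                        user_id=user_id, per_page=500, page=page)
            pages = int(data["photos"]["pages"])
            for p in data["photos"]["photo"]:
                url = ("https://farm%s.staticflickr.com/%s/%s_%s_b.jpg"
                       % (p["farm"], p["server"], p["id"], p["secret"]))
                yield p["id"], p["title"], url
            page += 1

    if __name__ == "__main__":
        for photo_id, title, url in public_photos(USER_ID):
            print(photo_id, title, url)

[Actually downloading each printed URL, with a polite delay between requests, is then just a matter of urllib.request.urlretrieve or wget.]
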