[00:00] it seems like you could fund yourselves by selling copies, or something
[00:00] I myself am now wondering how big a hard drive can be
[00:01] hmm, 6TB if I spend a lot
[00:01] I see 2TB drives sometimes, but 1TB is more common
[00:03] one thing you may want to think about is that when we get quantum computers, the only thing we will need is the hashes of files and their size in bytes, so a list of the files by type, size in bytes, md5 hash, sha1 hash, and crc32 hash should be enough for you to recreate everything at that point.
[00:08] Do you guys enjoy http://www.drobo.com/ type products?
[00:10] are you talking about the torrents hosted by the Internet Archive?
[00:11] those are backed by IA's monster servers; you can download those as fast as your internet connection will allow
[00:11] as a seeder it's really hard to compete
[00:13] Not sure where they are hosted; I saw the word "BitTorrent" and assumed a bunch of people's random computers
[00:15] If I hosted all of it on AWS it would cost about $320/month to store 12TB
[00:16] yep
[00:18] that's without downloading it at all
[00:18] just storing it
[00:18] almost everything we archive ends up on the Internet Archive (in addition to other places)
[00:19] https://archive.org/index.php ?
[00:19] yes
[00:19] so if being able to access something is as good as having a local copy, then a donation to IA is a pretty cost-effective way to go :)
[00:19] So every time I pull up a website that no longer exists and extract a bit of data from an old copy of the page, that's you guys?
[00:19] no
[00:20] the "Wayback Machine"?
[00:20] we're Archive Team; we're just a bunch of hobbyists
[00:20] (however that might be spelled)
[00:20] so you source for IA but are not a part of them?
[00:21] right. we focus on grabbing things that are shutting down, while IA uses the Wayback Machine to crawl everything on the net, hitting most places a few times a year
[00:21] so the sourceforge.net stuff I gave you guys earlier, is that going to be used?
[00:21] or is it not very high priority?
[00:22] we're not that cohesive or structured
[00:22] So you run "Warriors" but you are not set up as an army?
[00:23] ;)
[00:23] :)
[00:23] I see the word "warrior" used and that makes me expect a chain of command, etc
[00:23] heh
[00:23] and honestly I think SF is going to die, suddenly, with nobody told in advance
[00:23] if it goes the way freshmeat and such did
[00:24] yea, it's quite possible
[00:24] it's owned by the same groups, I think
[00:24] I'd like to see it get saved
[00:24] I'm working on Pixorial right now though; that has an actual deadline
[00:24] The "Warrior" thing, that's just your distributed computing effort, correct?
[00:24] When you say "working" do you mean you are just running a script, or manually working on the websites?
[00:24] yea, it's just a VM people can download and run, it'll automatically join in on any job we put up on the server
[00:25] ?
[00:25] I'm working on writing the script that the warriors will download and run so that we can archive Pixorial
[00:26] right now I've got the warrior scanning Pixorial's URL shortener, so that we can simultaneously archive the mapping of short URL to full URL, and get a list of things that need to be saved
[00:26] I see. So you're building the tasklet run by the "Warrior" distributed task-running system, if I understand correctly?
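The short-URL scan described at [00:26] amounts to requesting each candidate short link without following the redirect and recording where it points. A minimal sketch in Python, assuming the `requests` library and a made-up shortener domain and codes; the actual warrior task is built on Archive Team's pipeline tooling, which is not shown in this log:

    # Rough sketch of the short-URL -> full-URL mapping idea.
    # The shortener domain and the short codes are hypothetical placeholders.
    import requests

    def resolve_short_url(code, base="http://example-shortener.test/"):
        """Request a short URL without following the redirect and
        return (short_url, target_url_or_None)."""
        short_url = base + code
        resp = requests.head(short_url, allow_redirects=False, timeout=30)
        if resp.status_code in (301, 302, 303, 307, 308):
            return short_url, resp.headers.get("Location")
        return short_url, None  # unused code, or an unexpected response

    if __name__ == "__main__":
        for code in ["aaaa", "aaab", "aaac"]:  # made-up codes for illustration
            print(resolve_short_url(code))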
[00:26] (Pixorial doesn't provide a way to search or browse the content they host, unlike most video sites)
[00:26] correct
[00:26] What gets me is the large barrier to entry to run a tasklet
[00:27] it's not very large :)
[00:27] it's a VM setup, correct?
[00:27] just download a virtual machine image and run it in VirtualBox
[00:27] there's also a Docker image a few people use
[00:28] why not a webstart browser page to let people boot from a webpage?
[00:28] http://bellard.org/jslinux/
[00:29] that would be pretty funny, actually
[00:29] just make the image used your warrior
[00:29] it's not exactly the fastest way to go
[00:30] Well, fastest deployment or fastest execution?
[00:30] and the memory and storage needed to archive a single "item" varies
[00:33] if you're interested in running things inside the browser, you should check out the JSMESS project
[00:33] http://jsmess.textfiles.com/
[00:33] interesting
[00:34] So you already have a warrior tasklet for checking out files from an SVN?
[00:35] hmm. not specifically
[00:35] I would instead program the task to download and then run svnsync
[00:38] I wouldn't spider the HTML view of the repository though, that would be much more laborious
[00:38] hmm... ok. Well my wife wants me to go get burgers for the grill, bbl
[00:38] on the other hand, a historical recreation would be harder as a result
[00:39] would have to at least make a note of what version of cvsweb was in use at the time
[00:39] well I have a list of all projects on SourceForge as of earlier today
[00:39] mmm, burgers
[00:39] wouldn't be hard to get the SVNs of every one
[00:39] anyway, bbl
[00:39] honestdua: if you want to make a warrior task for that, I'd be happy to help out
[00:39] enjoy your burgers :)
[00:40] just going to the store.. wife is going to grill them
[00:40] like most Canadian women she is not at all worthless around a grill
[00:40] bbiab
[01:04] ok. back
[01:05] honestdua: the tricky thing about boot-from-webpage is that, although the warrior infrastructure has some degree of fault tolerance, we do not have any way for clients to communicate "this client gave up on this work item"
[01:05] we do have ways to requeue "failed" items, but "failure" is more or less defined as "project admin thinks some node is gone"
[01:06] that said, warrior pipelines do provide ways to explicitly fail items, and the tracker has an endpoint for reporting failures, so AFAICT the remaining bit is plumbing
[01:07] I think boot-from-webpage would be fine for something like urlteam where the items are all quite small (on the order of a kilobyte)
[01:07] yes
[01:07] but less fine for something like Google Video where an individual video could be a gigabyte
[01:08] also, for urlteam the client could just be written in straight-up JavaScript, rather than writing it in Python and then compiling the Python compiler, Linux kernel, filesystem drivers and a million other things to JavaScript
[01:09] It's an interesting idea either way; if the goal is to harvest more, faster, logic states that more workers is better.
[01:09] yea
[01:09] sure, but we've also managed to do that by being lucky and having people who run ISPs run workers :P
[01:09] I doubt jslinux could provide the performance required
[01:10] did you guys watch the 'Birth and Death of JavaScript' video?
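The svnsync approach mentioned at [00:35] mirrors a repository's full revision history instead of spidering its HTML view. A minimal sketch, assuming svnsync/svnadmin are installed and that SourceForge repositories live under http://svn.code.sf.net/p/PROJECT/code/ (an assumption); the project name is a placeholder and the warrior/tracker plumbing is omitted:

    # Sketch of mirroring a SourceForge SVN repository with svnsync.
    # A real warrior task would receive the project name from the tracker.
    import os
    import subprocess

    def mirror_svn(project, workdir="/tmp/svn-mirror"):
        os.makedirs(workdir, exist_ok=True)
        repo_path = os.path.join(workdir, project)
        dest = "file://" + repo_path
        source = f"http://svn.code.sf.net/p/{project}/code/"  # assumed URL layout
        subprocess.run(["svnadmin", "create", repo_path], check=True)
        # svnsync refuses to write revision properties unless this hook allows it
        hook = os.path.join(repo_path, "hooks", "pre-revprop-change")
        with open(hook, "w") as f:
            f.write("#!/bin/sh\nexit 0\n")
        os.chmod(hook, 0o755)
        subprocess.run(["svnsync", "init", dest, source], check=True)
        subprocess.run(["svnsync", "sync", dest], check=True)

    if __name__ == "__main__":
        mirror_svn("exampleproject")  # hypothetical project name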
[01:10] is that Gary Bernhardt's thing?
[01:10] yes
[01:10] it's probable that lots of things will end up that way
[01:11] yeah
[01:11] I also hope the part about the San Francisco Exclusion Zone is true
[01:11] at least on powerful machines, there will probably be more aggregate computing power in tiny machines though
[01:11] Exclusion zone??
[01:11] heh
[01:11] honestdua: a joke from the video
[01:11] https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript
[01:13] jslinux doesn't seem to have a network stack
[01:13] although it has wget for some reason
[01:13] the issue there is CORS
[01:14] by default browsers limit traffic to just the site hosting the page
[01:14] unless you disable it
[01:14] honestdua: yes
[01:14] on the website to tell the client it's ok
[01:14] man I can't type sorry
[01:14] it's cool
[01:14] but that's something you can disable/enable
[01:14] honestdua: anyway, if you'd like to get the warrior working on jslinux, that'd be cool
[01:14] if you host the page that loads the app
[01:14] I _think_ we could do urlteam in spite of CORS
[01:15] right now I'm looking through the data I collected earlier on my 16GB RAM box
[01:15] it'd be hilarious to have that and then exploit some inevitable Twitter client XSS exploit to have a billion warriors
[01:15] no just kidding, that'd be mean
[01:15] heh
[01:15] and extracting a list of actual projects versus user profiles, etc
[01:15] since over 3.2 million user profiles are included in the list of links, that's really only 2.5 or so million project links
[01:16] and most projects have 3-4 links in there
[01:16] each
[01:17] coding up the extractors now
[01:17] and users have up to 4 links for them as well
[01:17] so if all projects have 4 links and all users had 4 links
[01:18] interesting math
[01:18] 312k or so possible projects in that scenario
[01:19] you'd probably want to do one item per user and one item per project
[01:19] or if you're just going after repositories, then one per project
[01:24] yep
[01:24] code would be the priority
[01:24] indexes by licence
[01:24] etc
[01:27] and users on SF can have blogs and wikis
[01:27] not just an activities page and a profile
[01:28] yes, that's why I'd like to use ForgePlucker
[01:28] it knows how to grab all of that efficiently
[01:28] our standard tools would just follow the links and record what the website returned
[01:29] which is great for recreating the website, but not for exploring the data or importing it elsewhere
[01:36] hmm, they also have a third type of link, http://sourceforge.net/apps/mediawiki/nhruby.u/ to show the apps a given person is related to
[01:38] hmm.. I'm counting up to 7 possible links for just one project
[01:38] there could be only 10k or so projects, a lot less than I thought, on SF
[01:40] http://sourceforge.net/blog/sourceforge-myths/ says 325k
[01:42] hmm, that's in line with the number of links I'm finding
[01:42] but we have multiple links per project
[01:42] we shall know soon enough the exact number
[02:00] wow.. getting OOMs
[02:01] pretty much means anybody with less than my 16GB of RAM would too
[02:01] that's the serialization step, however
[02:01] hmm...
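The extractor being described boils down to streaming a list of sourceforge.net links and counting distinct projects versus user profiles. A rough sketch, assuming /projects/ (or /p/) paths mark projects and /u/ or /users/ paths mark profiles, and a made-up input file name; streaming line by line keeps memory flat, which avoids the OOM hit at the serialization step:

    # Sketch of classifying sourceforge.net sitemap links into projects vs. users.
    # The file name and URL patterns are assumptions based on the discussion above.
    import re
    from collections import defaultdict

    PROJECT_RE = re.compile(r"sourceforge\.net/(?:projects|p)/([^/]+)")
    USER_RE = re.compile(r"sourceforge\.net/(?:u|users)/([^/]+)")

    def count_links(path="sourceforge_links.txt"):
        projects, users = defaultdict(int), defaultdict(int)
        with open(path) as f:
            for line in f:
                url = line.strip()
                m = PROJECT_RE.search(url)
                if m:
                    projects[m.group(1)] += 1
                    continue
                m = USER_RE.search(url)
                if m:
                    users[m.group(1)] += 1
        return projects, users

    if __name__ == "__main__":
        projects, users = count_links()
        print(f"Found {len(projects)} projects and {len(users)} users")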
[02:01] * honestdua fades into his computer code
[02:02] Found 443487 Projects and 1451925 users
[02:03] that's the actual number
[02:03] of projects and users on SourceForge as of earlier today
[02:04] from just the big sitemap file
[02:05] awesome
[02:18] well, I can serialize out the project data into JSON, but my machine says "no" to the users file; I think it's due to me being on Windows, however
[02:22] https://dl.dropboxusercontent.com/u/18627325/sourceforge.net/projects.json
[02:23] that's every project URL, and its sub-URLs, collected in a Dictionary<…> collection
[02:23] using the /p/ pages as aliases of /project/
[02:26] 398418 of them have a wiki
[02:27] 443174 of them have a files download page
[02:28] only 27 of them have a git page
[02:28] 3 of them have a page named svn
[02:28] heh
[02:29] 3574 of them have a page named cvs
[02:29] as in, most are just file uploads
[02:29] and I bet you a lot of such projects are binary-only, or have no uploads and just a link to an external site
[02:32] 74973 of them have mailman setups
[02:33] 143518 of them have a tickets page
[02:36] so I would say that around 200k of them are actually active
[02:39] still, that's a lot of code
[02:44] and an average of over 3 users per project
[02:47] man.. this is cooler than I expected
[02:48] I wonder, if I posted this online, if anybody else besides you guys would be interested?
[02:55] hmm teeting.. just cuz
[03:49] Boop
[05:03] freecode appears to be back up
[05:04] with a "no longer updated" banner
[06:18] trs80: it's always been up for me, just stylesheets broke
[07:52] db48x: not using the tracker? if it works it works :)
[08:09] midas: I don't understand your question
[08:10] I've got the tinyarchive tracker running on http://argonath.db48x.net/
[08:13] ah ok :)
[08:13] got you
[08:45] I'm still seeding that 75G urlteam torrent.
[09:56] ouch https://gerrit.wikimedia.org/r/141386
[09:57] exmic: cute, do you have 100% of it? is it on archive.org now?
[10:01] Nemo_bis: what? :|
[10:07] https://ganglia.wikimedia.org/latest/graph.php?r=day&z=large&c=Miscellaneous+eqiad&h=sodium.wikimedia.org&jr=&js=&v=13.5&m=cpu_wio&vl=%25&ti=CPU+wio I think
[10:15] all we did was cause a little CPU load and everybody starts screaming
[10:15] 20% IO wait probably equals swapdeath :)
[10:21] it's not like we killed Wikipedia :p
[13:58] they could have contacted someone rather than just banning via useragent
[14:01] use Google's useragent, good luck!
[14:03] hon
[14:03] gah he's not here
[14:04] asking if he should post stuff online if we are interested... _post everything_ even if people aren't.
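For reference, the per-feature counts above (wiki, files, git, svn, cvs, mailman, tickets) could be reproduced from a dump like projects.json with a few lines of Python. The JSON shape here is an assumption (each project URL mapped to a list of its sub-URLs), since the dump itself isn't reproduced in the log:

    # Sketch of counting which projects expose a given sub-page,
    # assuming projects.json maps each project URL to a list of sub-URLs.
    import json
    from collections import Counter

    def count_subpages(path="projects.json"):
        with open(path, encoding="utf-8") as f:
            projects = json.load(f)  # assumed shape: {project_url: [sub_url, ...]}
        counts = Counter()
        for sub_urls in projects.values():
            # treat the last path segment of each sub-URL as the feature name
            features = {u.rstrip("/").rsplit("/", 1)[-1] for u in sub_urls}
            for name in features:
                counts[name] += 1
        return counts

    if __name__ == "__main__":
        for name, n in count_subpages().most_common(10):
            print(f"{n} projects have a '{name}' page")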