[00:20] the tracker is under GLaDOS domain though
[00:23] i would be maintaining it but year 11 happened
[00:46] i have a crazy idea. create a github organization with lots of repos for the shorteners. the scraper scripts will scrape and upload to a staging server which will then commit the urls to github. this way, it allows easy browsing urls and selective downloading of the data set.
[02:49] so a whole restructure?
[02:50] could work
[04:45] chfoo: the data is *way* too big for github
[04:50] maybe github won't notice if we have hundreds of repos :p
[05:17] we're all going to internet jail
[06:31] ono
[06:40] Don't drop the internet soap
[11:41] chfoo: GitHub allows only 1GB of data in each git repo.
[13:22] LET'S UPLOAD IT TO IA THEN!
[14:14] Well, yeah - that'd be the obvious good candidate storage place
[14:14] But if I understand correctly, the problem isn't hosting for the data set
[14:49] ..what was it?
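
(A minimal sketch of the staging-server commit step proposed at [00:46], splitting each shortener across numbered repos to respect the 1GB-per-repo limit mentioned at [11:41]. All paths, repo names, file naming, and the size margin are hypothetical illustrations, not anything the channel agreed on.)

```python
import subprocess
from pathlib import Path

REPO_SIZE_LIMIT = 900 * 1024 * 1024          # hypothetical margin under GitHub's ~1 GB cap
STAGING_DIR = Path("/srv/staging/incoming")  # hypothetical drop directory for scraper uploads
REPOS_DIR = Path("/srv/staging/repos")       # hypothetical checkouts, one per shortener shard

def repo_size(repo: Path) -> int:
    """Rough on-disk size of a checkout, used to decide when to roll to a new shard."""
    return sum(f.stat().st_size for f in repo.rglob("*") if f.is_file())

def pick_repo(shortener: str) -> Path:
    """Return the current shard repo for a shortener, rolling to a new one when full."""
    n = 0
    while True:
        repo = REPOS_DIR / f"{shortener}-{n:03d}"
        if not repo.exists():
            subprocess.run(["git", "init", str(repo)], check=True)
            return repo
        if repo_size(repo) < REPO_SIZE_LIMIT:
            return repo
        n += 1

def commit_batch(batch: Path) -> None:
    """Move one uploaded batch of scraped urls into its shard and commit it."""
    shortener = batch.name.split(".", 1)[0]  # assumes names like "bitly.00042.txt"
    repo = pick_repo(shortener)
    (repo / batch.name).write_bytes(batch.read_bytes())
    subprocess.run(["git", "-C", str(repo), "add", batch.name], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-m", f"add {batch.name}"], check=True)
    batch.unlink()

if __name__ == "__main__":
    for batch in sorted(STAGING_DIR.glob("*.txt")):
        commit_batch(batch)
```

(Sharding per shortener is what makes the "easy browsing and selective downloading" goal work: anyone can clone just the repos for the shortener they care about instead of fetching the whole data set.)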