Time | Nickname | Message
00:20 | chfoo | the tracker is under GLaDOS domain though
00:23 | GLaDOS | i would be maintaining it but year 11 happened
00:46 | chfoo | i have a crazy idea. create a github organization with lots of repos for the shorteners. the scraper scripts will scrape and upload to a staging server which will then commit the urls to github. this way, it allows easy browsing urls and selective downloading of the data set.
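chfoo's idea has three moving parts: scrapers upload URL batches to a staging server, and the staging server commits each batch into a per-shortener repo under a GitHub organization. A minimal sketch of that commit step is below; every name in it (the staging directory, the "bitly" shortener, the remote repo) is illustrative, and a local bare repo stands in for the GitHub-hosted one.

```shell
# Sketch of the staging-server commit step, under assumed names.
# A local bare repo substitutes for a repo in a GitHub organization.
set -eu
cd "$(mktemp -d)"

# Stand-in for the organization repo dedicated to one shortener
git init --bare remote-bitly.git

# Staging area where a scraper has uploaded a batch of shortened URLs
mkdir -p staging/bitly
printf 'aB3xZ|http://example.com/page\n' > staging/bitly/batch-0001.txt

# Commit step: clone the shortener's repo, add the batch, push it back
git clone remote-bitly.git work-bitly 2>/dev/null
cd work-bitly
git config user.email staging@example.com
git config user.name staging
mkdir -p urls
cp ../staging/bitly/*.txt urls/
git add urls
git commit -q -m "Add bitly batch 0001"
git push -q origin HEAD
cd ..

# The batch is now browsable in the stand-in "GitHub" repo
git --git-dir=remote-bitly.git log --oneline --all
```

One repo per shortener is what would make the selective downloading chfoo mentions possible: anyone could clone just the shortener they care about instead of the whole data set.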
02:49 | GLaDOS | so a whole restructure?
02:50 | GLaDOS | could work
04:45 | xmc | chfoo: the data is *way* too big for github
04:50 | chfoo | maybe github won't notice if we have hundreds of repos :p
05:17 | xmc | we're all going to internet jail
06:31 | GLaDOS | ono
06:40 | SketchCow | Don't drop the internet soap
11:41 | ersi | chfoo: GitHub allows only 1GB of data in each git repo.
13:22 | GLaDOS | LET'S UPLOAD IT TO IA THEN!
14:14 | ersi | Well, yeah - that'd be the obvious good candidate storage place
14:14 | ersi | But if I understand correctly, the problem isn't hosting for the data set
14:49 | GLaDOS | ..what was it?