Time  | Nickname | Message
00:44 |          | toad1 has joined #urlteam
04:52 |          | aaaaaaaaa has quit IRC (Leaving)
05:25 |          | phuzion_ has quit IRC (Read error: Operation timed out)
05:27 |          | phuzion has joined #urlteam
06:21 |          | bsmith093 has quit IRC (Read error: Operation timed out)
12:36 |          | bsmith093 has joined #urlteam
12:54 |          | soultcer has quit IRC (Remote host closed the connection)
12:56 |          | soultcer has joined #urlteam
15:55 |          | aaaaaaaaa has joined #urlteam
21:29 | arkiver  | chfoo: maybe it would be possible to do a lot more viddy short urls?
21:29 | arkiver  | the site seems to be able to handle it
21:30 | chfoo    | i would but the tracker can't right now
21:30 | chfoo    | it's really slow
21:30 | arkiver  | I see
21:31 | arkiver  | So there are 56.000.000.000 short urls
21:31 | chfoo    | i'm waiting for someone to do a code review for an improvement but if no one does, i'll just make a backup of the database and try it anyway
21:33 | arkiver  | Maybe it would be easier to create a discovery project for those 56 billion urls? and then add, like 10.000 urls per pack
21:33 | arkiver  | That would leave us with 5.6 million items, which is doable
21:34 | arkiver  | assuming the websites can hold up
22:44 |          | GitHub196 has joined #urlteam
22:44 |          | GitHub196 has left