#warrior 2019-03-31, Sun


Time Nickname Message
00:06 🔗 JAA has joined #warrior
00:07 🔗 bakJAA sets mode: +o JAA
00:10 🔗 arkiver has quit IRC (Read error: Operation timed out)
00:10 🔗 ivan has quit IRC (Read error: Operation timed out)
00:10 🔗 balrog has quit IRC (Read error: Operation timed out)
00:10 🔗 kiska1 has quit IRC (Read error: Operation timed out)
00:11 🔗 logchfoo2 starts logging #warrior at Sun Mar 31 00:11:27 2019
00:11 🔗 logchfoo2 has joined #warrior
00:11 🔗 sep332 has quit IRC (Read error: Operation timed out)
00:11 🔗 TigerbotH has quit IRC (Read error: Operation timed out)
00:11 🔗 jrayhawk has quit IRC (Read error: Operation timed out)
00:12 🔗 joepie91 has joined #warrior
00:13 🔗 rnduser_ has quit IRC (Read error: Operation timed out)
00:16 🔗 arkiver has joined #warrior
00:31 🔗 TigerbotH has joined #warrior
00:33 🔗 jrayhawk has joined #warrior
01:09 🔗 step has joined #warrior
01:09 🔗 kiska1 has joined #warrior
01:11 🔗 sep332 has joined #warrior
01:25 🔗 robbierut has quit IRC (Read error: Operation timed out)
02:03 🔗 balrog has quit IRC (Read error: Operation timed out)
02:07 🔗 balrog has joined #warrior
02:46 🔗 rnduser_ has joined #warrior
02:50 🔗 rnduser has quit IRC (Read error: Operation timed out)
03:02 🔗 robbierut has joined #warrior
03:48 🔗 robbierut has quit IRC (Read error: Operation timed out)
03:56 🔗 robbierut has joined #warrior
03:57 🔗 mtntmnky has quit IRC (Write error: Broken pipe)
03:57 🔗 balrog has quit IRC (Write error: Broken pipe)
03:57 🔗 kiska1 has quit IRC (Read error: Operation timed out)
03:57 🔗 jrayhawk has quit IRC (Read error: Operation timed out)
03:57 🔗 balrog has joined #warrior
03:58 🔗 TigerbotH has quit IRC (Read error: Operation timed out)
03:58 🔗 sep332 has quit IRC (Read error: Operation timed out)
03:59 🔗 kiska1 has joined #warrior
03:59 🔗 tmg1|eva_ has quit IRC (Read error: Operation timed out)
03:59 🔗 tmg1|eva has joined #warrior
04:02 🔗 TigerbotH has joined #warrior
04:02 🔗 mtntmnky has joined #warrior
04:03 🔗 sep332 has joined #warrior
04:04 🔗 jrayhawk has joined #warrior
05:24 🔗 robbierut has quit IRC (Read error: Operation timed out)
05:54 🔗 robbierut has joined #warrior
06:53 🔗 d5f4a3622 has quit IRC (Quit: WeeChat 2.4)
06:58 🔗 d5f4a3622 has joined #warrior
07:22 🔗 rnduser__ has joined #warrior
07:23 🔗 Somebody2 has quit IRC (Read error: Operation timed out)
07:24 🔗 Somebody2 has joined #warrior
07:26 🔗 rnduser_ has quit IRC (Read error: Operation timed out)
07:31 🔗 Smiley has quit IRC (Remote host closed the connection)
07:31 🔗 Smiley has joined #warrior
07:36 🔗 VADemon has joined #warrior
07:46 🔗 clayray_ has joined #warrior
07:54 🔗 Somebody2 has quit IRC (Ping timeout: 360 seconds)
07:55 🔗 Somebody2 has joined #warrior
07:56 🔗 clayray has quit IRC (Read error: Operation timed out)
07:56 🔗 clayray has joined #warrior
08:00 🔗 clayray_ has quit IRC (Read error: Operation timed out)
08:06 🔗 robbierut has quit IRC (Ping timeout: 1212 seconds)
08:06 🔗 robbierut has joined #warrior
11:28 🔗 blueacid Thinking from the google+ project...
11:28 🔗 blueacid Given there were lots of people who volunteered space for rsync targets but they weren't able to help
11:28 🔗 blueacid but since a lot of the machines people are using have a lot of space
11:29 🔗 blueacid Could you set a far higher limit on how many threads a warrior can run, but then add a 'maximum concurrent downloads' limit, like the rsync limit?
11:30 🔗 blueacid e.g. on my server I have 1TB of free space on the drive that's running the warriors. Could I set, say, 100 to run at once, maximum of 6 concurrent downloads, 6 concurrent rsyncs, and therefore as soon as a thread finishes downloading, another will start to download (keeping the transfers from google going!) but then I'll end up with a lot more local disk usage and a lot more threads queuing for the
11:30 🔗 blueacid rsync targets
11:34 🔗 robbierut I think Kaz has a very clear opinion on this.
11:48 🔗 blueacid Ha, based on that reply I'm guessing the opinion is "nope, no chance"?
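(An illustrative sketch of blueacid's idea above, in hypothetical Python rather than anything from the warrior or seesaw code: cap downloads and rsync uploads with separate limits so a finished download immediately frees a slot for the next one, at the cost of more items sitting on local disk.)

    # Hypothetical illustration only -- not warrior/seesaw code.
    # Many items may exist at once (using disk space), but only a fixed number
    # download or upload in parallel at any moment.
    import threading
    import time

    download_slots = threading.Semaphore(6)    # max concurrent downloads
    upload_slots = threading.Semaphore(6)      # max concurrent rsync uploads

    def download(item):
        time.sleep(0.1)                        # stand-in for the actual grab
        return "data-for-%s" % item

    def rsync_upload(data):
        time.sleep(0.1)                        # stand-in for the rsync to staging

    def process_item(item):
        with download_slots:                   # wait for a free download slot
            data = download(item)
        with upload_slots:                     # wait for a free upload slot
            rsync_upload(data)

    # e.g. 100 items in flight, but only 6 downloading and 6 uploading at a time
    threads = [threading.Thread(target=process_item, args=(i,)) for i in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()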
11:49 🔗 robbierut has quit IRC (Read error: Connection reset by peer)
11:49 🔗 robbierut has joined #warrior
15:31 🔗 marked has quit IRC (Read error: Operation timed out)
15:32 🔗 marked has joined #warrior
16:49 🔗 HunterZ has joined #warrior
17:37 🔗 t3 blueacid: I think it would be more appropriate to have a special option in the scripts rather than the Warrior, since the Warrior is only allocated a specified amount of disk space from VirtualBox/VMware.
17:37 🔗 t3 robbierut: What was Kaz's opinion?
17:38 🔗 t3 I mean, reasoning.
17:38 🔗 rnduser has joined #warrior
17:38 🔗 Kaz we don't want caching on the warriors
17:41 🔗 t3 Yes. And what about the scripts that do not use the Warriors?
17:42 🔗 rnduser__ has quit IRC (Read error: Operation timed out)
18:09 🔗 Kaz same thing applies
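(For context on t3's point: when the project scripts are run outside the warrior, seesaw's run-pipeline already takes an item-concurrency setting, roughly as below; the exact flags depend on the seesaw version and project, so treat this as a sketch.)

    # Run the grab scripts directly with several items in parallel; check
    # run-pipeline --help for the options your seesaw version actually supports.
    run-pipeline pipeline.py --concurrent 6 YOURNICKNAME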
19:13 🔗 figpucker has joined #warrior
19:41 🔗 figpucker has quit IRC (Read error: Connection reset by peer)
19:43 🔗 figpucker has joined #warrior
21:08 🔗 betamax has joined #warrior
21:11 🔗 betamax quick question: once the warrior has uploaded an item it grabbed to the staging server, does it delete that item from its VDI (talking VirtualBox one here)?
21:11 🔗 betamax cause I don't have much disk space left, and the warrior VDI is growing steadily bigger when on G+
21:12 🔗 betamax started at 1.1G, after a few hours now at 6GB
21:12 🔗 blueacid betamax: Yes it does
21:13 🔗 blueacid You might have a job which is large, and therefore will download a lot before it completes, archives and uploads
21:13 🔗 blueacid most jobs are only around 45MB compressed (so maybe 100-200MB of downloaded data before getting compressed)
21:14 🔗 blueacid But some are as big as 1-2GB. Think the record sized one is said to be 50GB
21:15 🔗 betamax ah, OK. (And wow, 50GB? I'm assuming that's from videos?)
21:16 🔗 blueacid Possibly! I just had a flick through my warriors, one is currently uploading a 2GB compressed archive, so presumably 3-4GB of downloaded data for that
21:17 🔗 JAA I believe VirtualBox won't shrink the disk image back down when the (virtual) space becomes unused though, and you have to compact it manually.
21:18 🔗 robbierut That's true
21:44 🔗 blueacid JAA: You're right
21:44 🔗 blueacid It will grow automatically up to the limit
21:44 🔗 blueacid and then stays there, which is a pain
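(For anyone hitting the same problem: compacting the VDI manually looks roughly like the command below, assuming a dynamically-allocated image and a shut-down VM; the file name is an example. Note that --compact only reclaims blocks that are zeroed inside the guest, so free space usually needs zero-filling first, e.g. with zerofree or by writing and deleting a file of zeros, which is likely why a plain compact appears to do nothing.)

    # Example only -- adjust the path to your own VDI. On VirtualBox 5.x and
    # later the subcommand is "modifymedium"; older releases use "modifyhd".
    VBoxManage modifymedium disk "archiveteam-warrior.vdi" --compact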
21:49 🔗 JAA Yeah, there are limitations to this VM setup. If you can, use the Docker container instead (which I think shouldn't have that issue although I'm not familiar with Docker) or run the scripts directly. That should also give better performance/less waste since it avoids the virtualisation layer.
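(A minimal example of the Docker route JAA mentions, under the assumption that the archiveteam/warrior-dockerfile image and its port-8001 web UI are what is meant; check the ArchiveTeam wiki for the currently recommended command.)

    # Run the warrior as a container instead of a VirtualBox VM.
    # Image name and port are assumptions -- verify against the wiki.
    docker run -d --name archiveteam-warrior -p 8001:8001 \
        --restart unless-stopped archiveteam/warrior-dockerfile
    # Then open http://localhost:8001 to choose a project, as with the VM.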
21:51 🔗 VADemon betamax: the v3 appliance just grows to 60gb unlike the old one that only grew as _really_ required. Must be some fs anti-fragmentation mechanism. So you really should expect it to become 60gb soon
21:52 🔗 JAA I think the v3 VM runs the Docker container inside a minimal Alpine VM, right? Maybe that has something to do with it.
21:52 🔗 VADemon I run the scripts manually from inside the VM... :]
21:53 🔗 JAA lol, ok, sure, why not.
22:08 🔗 betamax VADemon: good to know (tried using VBox's compress utility, but as I expected that didn't work)
22:08 🔗 betamax I guess once it gets too big I'll shut it down (safely, of course) and spin up a new image
22:10 🔗 figpucker has quit IRC (Quit: Leaving)
22:28 🔗 xsfx has quit IRC (Ping timeout: 268 seconds)
22:39 🔗 xsfx has joined #warrior
22:46 🔗 HunterZ has quit IRC (Ping timeout: 260 seconds)
23:12 🔗 blueacid has quit IRC (Quit: leaving)
23:28 🔗 marked feature request, check ban list when giving out rsync slots
23:32 🔗 JAA What ban list?
23:33 🔗 marked the usernames that are banned, perhaps shouldn't be allowed rsync assignments. though this could be in the code base already, didn't have time to check. too much going on, just didn't want to forget about it.
23:41 🔗 JAA No, not in there as far as I can see.
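(To make the request concrete, the check marked describes would amount to something like this hypothetical Python sketch; the real tracker's code and data model are different.)

    # Hypothetical sketch of the requested behaviour, not the tracker's code.
    BANNED_USERNAMES = {"example-banned-nick"}          # example data only

    def assign_rsync_target(username, targets):
        """Return an upload target for this user, or None if they are banned."""
        if username in BANNED_USERNAMES:
            return None                                 # no rsync slot for banned users
        return targets[hash(username) % len(targets)]   # naive target selection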
23:50 🔗 thelounge has joined #warrior
23:50 🔗 thelounge is now known as Evie
23:51 🔗 * Evie makes a note not to ask about upload issues
23:52 🔗 Evie However, anyone seeing rate-limit redirects with googleplus ?
23:54 🔗 Evie ahh. well then. https://twitter.com/textfiles/status/1112494767601053696
23:58 🔗 marked the common report is 503's. if it's not a 503, perhaps report that
23:58 🔗 Evie Rate-limit redirection encountered, sleeping ...
23:58 🔗 Evie 22=302 https://plus.google.com/102542898851051868654
23:58 🔗 Evie I'm seeing 302's across two different endpoints
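(The "sleeping" message suggests the script already backs off when it sees these redirects; conceptually it is doing something like the illustrative Python below, not the project's actual wget-lua logic.)

    # Illustrative only -- not the googleplus-grab code.
    import time
    import urllib.error
    import urllib.request

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None            # surface 3xx responses instead of following them

    opener = urllib.request.build_opener(NoRedirect)

    def fetch_with_backoff(url, max_tries=5):
        delay = 30
        for _ in range(max_tries):
            try:
                return opener.open(url).read()      # success
            except urllib.error.HTTPError as err:
                if err.code != 302:
                    raise                           # some other failure, e.g. a 503
                time.sleep(delay)                   # rate-limited: sleep and retry
                delay *= 2                          # back off exponentially
        raise RuntimeError("still rate-limited after %d tries" % max_tries)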
23:59 🔗 marked join #googleminus and #googleminus-ot
23:59 🔗 Evie Danke
