[00:17] *** bithippo has quit IRC (Textual IRC Client: www.textualapp.com)
[00:20] *** s4t has joined #archiveteam-ot
[00:26] *** s4t has quit IRC (Quit: s4t)
[01:01] *** Stilett0 has quit IRC ()
[01:12] *** Stiletto has joined #archiveteam-ot
[01:27] *** Stiletto has quit IRC ()
[01:35] *** ubahn has joined #archiveteam-ot
[01:35] *** Stiletto has joined #archiveteam-ot
[02:42] *** ubahn has quit IRC (Quit: ubahn)
[02:44] *** ubahn has joined #archiveteam-ot
[03:10] *** ubahn has quit IRC (ubahn)
[03:33] *** degauss has quit IRC (Quit: gtg 73)
[04:10] *** odemg has quit IRC (Ping timeout: 265 seconds)
[04:22] *** odemg has joined #archiveteam-ot
[04:56] *** Mateon1 has quit IRC (Ping timeout: 265 seconds)
[04:56] *** Mateon1 has joined #archiveteam-ot
[06:34] *** kiska1 has quit IRC (Ping timeout (120 seconds))
[06:34] *** wmvhater has quit IRC (Ping timeout (120 seconds))
[06:36] *** wmvhater has joined #archiveteam-ot
[06:36] *** kiska1 has joined #archiveteam-ot
[07:30] *** wp494 has quit IRC (Read error: Operation timed out)
[07:31] *** wp494 has joined #archiveteam-ot
[07:32] *** svchfoo3 sets mode: +o wp494
[07:51] *** surewhyno has joined #archiveteam-ot
[08:20] psi: You can also use grab-site.
[08:20] Goodnight.
[08:20] Hm?
[08:20] Did you mean to ping s4t
[08:21] psi: Oh. Whoops. I'm sorry.
[08:21] s4t: You can also use grab-site.
[08:21] Great. They left.
[08:21] Oh well.
[08:22] Goodnight.
[09:05] *** jspiros has quit IRC (Read error: Operation timed out)
[09:06] *** jspiros has joined #archiveteam-ot
[10:37] *** anarcat has quit IRC (Read error: Operation timed out)
[10:37] *** anarcat has joined #archiveteam-ot
[11:29] *** surewhyno has quit IRC (Quit: leaving)
[11:46] *** BlueMax has quit IRC (Quit: Leaving)
[12:08] *** degauss has joined #archiveteam-ot
[12:28] *** _niklas has joined #archiveteam-ot
[13:23] *** sardine has joined #archiveteam-ot
[13:28] *** sardine1 has joined #archiveteam-ot
[13:31] *** sardine has quit IRC (Read error: Operation timed out)
[14:23] *** VerifiedJ has joined #archiveteam-ot
[15:30] SpiderOak is discontinuing their "unlimited" plans because they figured out they can't actually provide unlimited storage for that price. Duh.
[16:04] backblaze still is "unlimited"
[16:04] but i suspect their fine print limits that severely
[16:11] Nope, someone on /r/DataHoarder posted an exchange with the Backblaze support a while ago where they said that a few hundred TB are not a problem for them.
[16:11] But I've heard that restoring from Backblaze is terrible. Also, there's still no Linux support.
[16:12] Here's that DH post: https://old.reddit.com/r/DataHoarder/comments/6vzu3d/backblaze_casually_told_me_backing_up_282tb/
[16:14] *** svchfoo3 has quit IRC (Read error: Operation timed out)
[16:14] *** kbtoo_ has quit IRC (Read error: Connection reset by peer)
[16:16] *** kbtoo has joined #archiveteam-ot
[16:21] *** ubahn has joined #archiveteam-ot
[16:27] *** wp494 has quit IRC (Ping timeout: 268 seconds)
[16:27] *** wp494 has joined #archiveteam-ot
[16:28] *** svchfoo1 sets mode: +o wp494
[16:34] what the heck is a "run-pipeline" and how do I get it into bash
[16:34] I'm really confused, there's absolutely nothing that explains what this means
[16:34] (https://github.com/ArchiveTeam/terroroftinytown-client-grab)
[16:35] <_niklas> run-pipeline is a command in the seesaw python package
[16:35] <_niklas> (which was written specifically for this)
[16:36] <_niklas> it's little more than a wrapper around the pipeline.py in there
[17:25] *** m007a83_ has joined #archiveteam-ot
[17:28] *** m007a83_ has quit IRC (Client Quit)
[17:29] *** m007a83_ has joined #archiveteam-ot
[17:30] *** m007a83 has quit IRC (Read error: Operation timed out)
[17:30] *** m007a83_ is now known as m007a83
[17:53] *** Dj-Wawa has joined #archiveteam-ot
[18:06] *** ubahn has quit IRC (Quit: ubahn)
[18:23] *** sardine1 has quit IRC (Quit: Leaving.)
[18:45] this is nice. i like this https://usercontent.irccloud-cdn.com/file/gUmSD9Qe/image.png
[18:48] *** Jens has quit IRC (Remote host closed the connection)
[18:49] *** Jens has joined #archiveteam-ot
[19:15] *** ubahn has joined #archiveteam-ot
[19:20] *** schbirid has quit IRC (Remote host closed the connection)
[19:26] what setup is that psi
[19:26] idk much about docker
[19:26] Portainer
[19:27] It connects to the Docker daemon over TCP with TLS and allows you to manage multiple machines at once
[19:28] Whoa. Rad.
[19:36] *** ubahn has quit IRC (ubahn)
[19:43] Also, I just found out Certbot exists. That certainly makes setting up HTTPS a lot easier
[19:48] yano: So why do you burn the Google credit?
[19:48] Why not save it?
[19:48] *** teej_ is now known as t3
[19:52] t3: it only lasts for 365 days
[19:52] and it's $300 per google account
[19:52] actually, i just found out that my one google account has a balance I owe
[19:52] but it's only $45
[19:52] i couldn't afford to run all of these for a whole month
[19:53] each of them is anywhere from $200 to $300/mo
[19:53] "You'll be charged when your balance reaches $100.00 or 30 days after your last automatic payment, whichever comes first."
[19:57] yano: So make sure you're not charged.
[19:57] t3: yeah, i'm setting up alerts
[19:57] and my other 3 google accounts still have money left
[19:57] they have $90 left, $26 left, and $45 left
[19:59] <_niklas> how time-accurate are those alerts
[20:00] <_niklas> if they update that once a day like some cloud providers do (with regular billing information)...
[20:00] _niklas: it looks like once an hour
[20:01] and that's not bad, considering each machine is $0.45/hr
[20:01] <_niklas> billing alerts may up differently than the numbers you see on the web console
[20:02] <_niklas> I believe aws's do, in your benefit
[20:05] does this work anymore? https://gist.github.com/JustAnotherArchivist/f4617c902626377532692a341794f273
[20:10] wait, got it
[20:10] It didn't work for me on macOS.
[20:11] Have you ever seen a ninja wedding?
[20:11] Or a fart website?
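A minimal sketch of what getting run-pipeline "into bash" typically looks like for the project linked at 16:34, going by _niklas's explanation that it is a command from the seesaw Python package and little more than a wrapper around the repository's pipeline.py. The pip/git steps and the YOURNICKNAME placeholder are assumptions for illustration, not commands taken from the log:

    # Assumed setup; adjust for your own Python environment.
    pip install seesaw          # provides the run-pipeline command
    git clone https://github.com/ArchiveTeam/terroroftinytown-client-grab
    cd terroroftinytown-client-grab
    run-pipeline pipeline.py YOURNICKNAME   # run-pipeline just drives pipeline.py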
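For the Portainer setup psi describes at 19:26-19:27, here is a rough sketch of the moving parts: a Docker daemon exposed over TCP with TLS client verification, and Portainer itself running as a container so multiple machines can be managed from one place. The port numbers, certificate paths, and volume name are assumptions for the example:

    # Expose the Docker daemon over TCP with TLS (certificate paths assumed).
    dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
      --tlsverify \
      --tlscacert=/etc/docker/ca.pem \
      --tlscert=/etc/docker/server-cert.pem \
      --tlskey=/etc/docker/server-key.pem

    # Run Portainer in a container; remote TLS endpoints can then be added
    # from its web UI to manage several Docker hosts at once.
    docker run -d -p 9000:9000 --name portainer \
      -v portainer_data:/data \
      portainer/portainer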
[20:16] lol:
[20:16] # ./tumblr-monitor.sh | sort -k 4,4n
[20:16] -su: fork: Cannot allocate memory
[20:17] oops
[20:48] *** tuluu has quit IRC (Ping timeout: 260 seconds)
[21:30] MrRadar2: Oh.
[21:30] So why not have many more folders?
[21:30] Well, that's the solution
[21:30] Split your files up into folders
[21:30] What's the difference?
[21:30] I thought files and directories are the same.
[21:31] <_niklas> the data structures are typically chosen to handle the typical case well
[21:31] If a folder has 100 sub-folders, and those sub-folders have 100 more sub-folders which each have 100 files, you have 1 million files
[21:31] However, each time you access a folder the FS only needs to look at 100 items, so you only need to look at 300 items to find any given file
[21:32] Oh... now it makes sense.
[21:32] Whereas if you just had 1,000,000 files in a single folder it may need to look through all of them to find the one you're looking for
[21:33] Yeah, it's the power of tree-style data structures
[21:33] MrRadar2: Why not have a file system that chunks data alphabetically, so if you're looking for a file named "myfile.txt", it will look through "m", then find the ones with "y", and so on?
[21:33] Wouldn't that be much more efficient?
[21:34] File systems do do things like that (I'm not an expert on them myself so I couldn't tell you the details) but that runs into the issue that when you're adding stuff it can become expensive to keep all the data sorted
[21:35] MrRadar2: Oh. That makes sense.
[21:36] This is complicated stuff.
[21:36] Yep. :) There's a reason why people are always writing new file systems
[22:11] The Google free credit is supposed to let you know before you start actually getting charged
[22:11] At least it does here in Canada
[22:19] *** BlueMax has joined #archiveteam-ot
[22:36] *** dataorc has joined #archiveteam-ot
[22:50] *** s4t has joined #archiveteam-ot
[22:54] *** BlueMax has quit IRC (Quit: Leaving)
[22:58] *** s4t has quit IRC (Quit: s4t)
[23:17] *** VerifiedJ has quit IRC (Quit: Leaving)
[23:28] edisded: Okay.
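A small sketch of the "split your files up into folders" idea from the 21:30-21:34 discussion above: shard a flat directory into nested subdirectories keyed on the first characters of each filename, so any lookup only has to scan a few small directories instead of one enormous one. The flat/ and sharded/ directory names are made up for the example:

    # Move every file in flat/ into sharded/<first char>/<second char>/
    mkdir -p sharded
    for f in flat/*; do
      name=$(basename "$f")
      d="sharded/${name:0:1}/${name:1:1}"   # e.g. myfile.txt -> sharded/m/y/
      mkdir -p "$d"
      mv "$f" "$d/"
    done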