| Time | Nickname | Message |
| 00:19 |  | MrRadar has joined #archiveteam-ot |
| 01:13 |  | keith20 has quit IRC (byeee) |
| 01:45 |  | chferfa has quit IRC () |
| 03:41 |  | odemg has quit IRC (Ping timeout: 260 seconds) |
| 03:53 |  | odemg has joined #archiveteam-ot |
| 04:13 |  | astrid has quit IRC (Read error: Operation timed out) |
| 04:15 |  | MrRadar has quit IRC (Read error: Operation timed out) |
| 04:18 |  | jut has quit IRC (Ping timeout: 360 seconds) |
| 04:18 |  | jut has joined #archiveteam-ot |
| 04:20 |  | schbirid has joined #archiveteam-ot |
| 04:21 |  | MrRadar has joined #archiveteam-ot |
| 04:23 |  | astrid has joined #archiveteam-ot |
| 04:24 |  | svchfoo1 sets mode: +o astrid |
| 04:27 |  | dashcloud has quit IRC (Read error: Operation timed out) |
| 05:06 |  | schbirid has quit IRC (Remote host closed the connection) |
| 05:53 |  | m007a83_ has joined #archiveteam-ot |
| 05:55 |  | m007a83 has quit IRC (Ping timeout: 252 seconds) |
| 07:31 |  | t2t2 has quit IRC (Ping timeout: 260 seconds) |
| 07:31 |  | t2t2 has joined #archiveteam-ot |
| 08:20 |  | jut has quit IRC (Quit: WeeChat 1.4) |
| 08:23 |  | m007a83 has joined #archiveteam-ot |
| 08:27 |  | m007a83_ has quit IRC (Read error: Operation timed out) |
| 09:10 |  | jut has joined #archiveteam-ot |
| 09:19 |  | wp494 has quit IRC (Ping timeout: 840 seconds) |
| 09:30 |  | chferfa has joined #archiveteam-ot |
| 10:13 |  | Ravenloft has quit IRC (Read error: Connection reset by peer) |
| 10:56 |  | BlueMax has quit IRC (Read error: Connection reset by peer) |
| 11:48 |  | wp494 has joined #archiveteam-ot |
| 12:31 |  | icedice has joined #archiveteam-ot |
| 12:33 | icedice | Any recommendations for a reliable 6TB HDD? I'll use it in my desktop to store and access files, mostly videos. |
| 12:34 | icedice | I'm thinking about getting a WD Black 6TB 256MB |
| 12:34 | icedice | And I won't RAID the HDD |
| 12:35 | ivan | best deal per GB right now is the shucked 8TB WD My Book, assuming losing the warranty + applying tape over the 3.3V lines is OK |
| 12:36 | icedice | Also, how reliable are 6TB HDDs overall? I know that 1.5TB, 3TB, and 8+TB HDDs aren't as reliable as the other capacities. |
| 12:36 | icedice | I meant internal HDD, not external |
| 12:37 | ivan | external drives contain internal drives :-) :-) |
| 12:37 | ivan | backblaze is the place that might have data but I don't think they bought many 6TB |
| 12:37 | ivan | in general the annual failure rate is around 1% now |
    
| 12:38 | icedice | Backblaze stats are a joke |
| 12:38 | ivan | there's a 6TB HGST He8 fwiw |
| 12:39 | ivan | still available new as HGST Deskstar NAS |
| 12:40 | JAA | 1.5 and 3 TB had some issues, yeah, but I haven't heard about systematic problems on 8+ TB drives. Any source on that? |
| 12:41 | icedice | JAA: A hard drive technician on Overclock.net warned about 8+ TB drives: https://www.overclock.net/forum/20-hard-drives-storage/1634071-warning-about-8tb-drives-cautious-aware-higher-rate-failure-degraded-surfaces.html |
| 12:41 | icedice | Why Backblaze's study sucks: http://www.tomshardware.co.uk/answers/id-2724690/seagate-hgst.html#r16270047 |
| 12:42 | icedice | They're using consumer hard drives in a server environment |
| 12:42 | JAA | Yeah, Backblaze's data is pretty much useless. |
| 12:42 | ivan | I have a lot of 8TB He8 drives and they're fine |
| 12:42 | icedice | Of course HGST is going to dominate that |
| 12:42 | icedice | ivan: How old are they? |
| 12:42 | ivan | varies, mostly 1-1.5 years |
| 12:43 | ivan | I test them a lot and have not seen bad blocks or whatever |
| 12:43 | icedice | Ok. I'm mostly worried about long-term reliability, so it's too early to draw conclusions from that |
| 12:44 | icedice | The only decent stats on hard drive reliability: https://www.hardware.fr/articles/962-6/disques-durs.html |
| 12:45 | ivan | the "backblaze study sucks" comment you linked is clueless |
| 12:45 | ivan | backblaze loads the drives with data and mostly leaves them alone |
| 12:45 | ivan | also, complaining about stressing the drives too hard? wouldn't you want _more_ room for error in your own environment? |
| 12:46 | icedice | The drives are in a high-vibration environment |
| 12:46 | icedice | Which they are not designed for |
| 12:46 | icedice | Of course they're going to fail |
| 12:46 | icedice | The server/NAS drives are the most reliable ones, but they don't have head parking |
| 12:47 | icedice | And they have error recovery control |
| 12:47 | ivan | backblaze is doing it wrong (an internet commenter can assume they have excessive vibration?), and yet they fail 1% of the time? |
| 12:47 | ivan | the 2015 comment is motivated by the large number of 3TB Seagate failures and people wishfully hoping it wouldn't happen to them |
| 12:48 | Meroje | warranty return stats are worse than backblaze's imho |
| 12:49 | Meroje | I returned none of my ST3000s because why would I want more of these? |
| 12:50 | icedice | A desktop environment and a server environment are completely different and can't be compared |
| 12:50 | ivan | uh-huh |
| 12:51 | Meroje | yet they get excellent numbers? |
| 12:52 | ivan | put a bunch of hard drives together into a crappy case at home and you've got vibration too |
| 12:52 | ivan | do a lot of seeks for days, are you "server" now? |
| 12:53 | ivan | anyway the modern drives seem to survive suboptimal conditions just fine |
| 12:56 | Meroje | pretty sure most home cases have way more protection than backblaze chassis |
| 12:56 | ivan | heh probably |
| 12:58 | Meroje | https://www.backblaze.com/blog/wp-content/uploads/2016/04/blog-60-drives-ooh-aah.jpg |
| 13:00 | ivan | boy are there going to be a lot of cheap hard drives available when flash finally eats everything |
| 13:01 | icedice | https://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index3.html |
| 13:03 | ivan | you're linking to an article by a guy who has basically published Seagate press releases. what do you think the odds are of Seagate feeding him this as well? |
| 13:04 | Meroje | the ST3000 incident was very real |
| 13:04 | ivan | "HDD Reliability Myth - The Real Story Covered" sounds like something a PR drone would feed |
| 13:04 | icedice | How am I supposed to know what the motivations of the writer of an article are? |
| 13:05 | icedice | I just tried to find some article about vibration in Backblaze's storage pods |
| 13:05 | icedice | Anyway, for the record: Fuck Seagate |
| 13:05 | ivan | well you can form good guesses with inference on the contents and author |
| 13:05 | ivan | "This paints them in a very unforgiving light due to obvious chassis issues, with a misleading annual failure rate of 25.4% that would surely put Seagate out of business, if it were realistic." |
| 13:05 | icedice | Never trusting that brand |
| 13:07 | JAA | "Fuck Seagate" sounds like something someone would say who owned 10 ST3000DM001s, had them fail, and then swore to never, ever buy Seagate drives again. |
| 13:07 | JAA | Modern Seagate drives are fine. |
| 13:07 | icedice | I remember reading a post on Overclock.net from someone whose brother works at Seagate, and the brother had warned him to "not buy Seagate right now", or something like that |
| 13:07 | icedice | I don't trust Seagate |
| 13:09 | icedice | 10.00% RMAs for the Seagate Desktop HDD 6TB and 6.78% RMAs for the Seagate Enterprise NAS HDD 6TB is not ok |
| 13:09 | JAA | Neither do I. I don't trust any other company either. That's why I have drives from multiple brands and backups. |
| 13:09 | JAA | Where are those numbers from? |
| 13:10 | icedice | Yeah, I'm planning on getting a Glyph BlackBox Pro 6TB eventually to use for backups |
| 13:10 | icedice | https://www.hardware.fr/articles/962-6/disques-durs.html |
| 13:11 | JAA | Sample size of 100 to 200 drives. Okay then... |
| 13:13 | JAA | In fact, all models for which they report a failure rate (ok, RMA rate) of over 3% have such a small sample size. So yeah... |
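(Editor's illustration, not from the log.) JAA's sample-size objection can be quantified: a confidence interval on a proportion observed in only 100-200 drives is very wide. A sketch using the Wilson score interval, with icedice's 10% RMA figure as the observed rate:

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a failure proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

if __name__ == "__main__":
    # 10% RMA rate observed on a sample of just 100 drives:
    lo, hi = wilson_interval(10, 100)
    print(f"95% CI: {lo:.1%} to {hi:.1%}")
```

With n = 100 the true rate could plausibly be anywhere from about 5.5% to 17.4%, so single-model RMA rates from hardware.fr-sized samples carry large error bars.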
    
| 13:22 | icedice | I'd be all for trying HGST though, but I don't think they have head parking |
| 13:23 | icedice | And I think all of their 6TB HDDs are server/NAS drives, which means that they probably have error recovery control |
| 13:27 | JAA | Yeah, I don't think there are any consumer drives from "HGST". |
| 13:28 | JAA | I put quotes around that because HGST is just a brand of WD, several WD drives are actually rebranded HGST drives (e.g. WD Reds 8+ TB), and the HGST brand is being phased out. |
| 13:38 | ivan | https://www.hgst.com/products/hard-drives mobile drives might be consumer enough |
| 13:38 |  | dashcloud has joined #archiveteam-ot |
| 13:41 | JAA | Not 6 TB though. |
| 13:48 | icedice | Yeah |
| 13:48 | icedice | If I was using RAID I'd probably use HGST or WD Gold |
| 13:48 | icedice | But since I'm not, I'll probably go for a WD Black 6TB 256MB |
| 13:49 | icedice | I was also considering the Toshiba X300 6TB, but that one is apparently loud af |
| 13:50 | icedice | https://www.youtube.com/watch?v=wlpyF8anf8A&t=1m29s |
| 13:51 | kiska | So this is mostly anecdotal, but I work with the NSW Department of Education and Training on a casual basis. We purchased 12 Seagate Exos X12 12TB helium drives from Seagate themselves for a pilot test |
| 13:51 | kiska | Within 6 months we had 7 of those 12 drives fail completely |
| 13:52 |  | ozlo__ has quit IRC (Quit: WeeChat 2.1) |
| 13:56 | kiska | btw this is the model: ST12000NM0027 |
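(Editor's illustration, not from the log.) kiska's 7 failures out of 12 drives in 6 months is wildly inconsistent with a ~1% AFR. A quick binomial tail check, assuming independent failures at 0.5% per drive per half-year (half of a 1% AFR; correlated batch defects, which is the likely real explanation, violate exactly this assumption):

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k failures in n trials."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

if __name__ == "__main__":
    # Probability of >= 7 of 12 drives failing in 6 months at p = 0.005 each:
    print(f"tail probability: {binom_tail(7, 12, 0.005):.2e}")
```

The probability is vanishingly small (well under one in ten billion), so the observation points to a systematically bad batch or model rather than ordinary wear.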
    
| 14:14 |  | keith20 has joined #archiveteam-ot |
| 14:25 |  | ivan has quit IRC (Read error: Operation timed out) |
| 14:25 |  | JAA has quit IRC (Read error: Operation timed out) |
| 14:26 |  | jspiros has quit IRC (Read error: Operation timed out) |
| 14:27 |  | ivan has joined #archiveteam-ot |
| 14:40 | bakJAA | Colosolutions pls |
| 14:51 | astrid | i run a raid5 of three different brands of 8T |
| 14:51 | astrid | lightly loaded |
| 14:51 | astrid | about a year now |
| 14:51 | astrid | ok, six months. |
| 14:52 | astrid | no errors yet reported by zfs; i've got 4.6T of data in it |
| 14:58 |  | icedice has quit IRC (Ping timeout: 260 seconds) |
| 15:25 |  | jspiros has joined #archiveteam-ot |
| 15:26 |  | JAA has joined #archiveteam-ot |
| 15:26 |  | svchfoo3 sets mode: +o JAA |
| 15:27 |  | bakJAA sets mode: +o JAA |
| 15:46 |  | eientei95 has quit IRC (Quit: ZNC 1.6.5 - http://znc.in) |
| 15:46 |  | dashcloud has quit IRC (Read error: Operation timed out) |
| 16:49 |  | schbirid has joined #archiveteam-ot |
| 17:14 |  | dashcloud has joined #archiveteam-ot |
| 17:41 |  | chferfa has quit IRC () |
| 18:01 |  | dashcloud has quit IRC (Ping timeout: 633 seconds) |
| 18:24 |  | jc86035 has joined #archiveteam-ot |
| 18:24 |  | jc86035 has left |
| 19:01 |  | schbirid has quit IRC (Remote host closed the connection) |
| 20:47 |  | caff_ has joined #archiveteam-ot |
| 20:47 |  | wp494 has quit IRC (Ping timeout: 255 seconds) |
| 20:47 |  | wp494 has joined #archiveteam-ot |
| 21:22 |  | LimpPanda has joined #archiveteam-ot |
| 21:22 |  | LimpPanda has quit IRC (Client Quit) |
| 21:35 | ivan | JAA: want to own grab-site? I'm bored of it |
| 21:36 | Flashfire | Hey ivan, can grab-site be configured to upload to FOS or directly to the IA? |
| 21:38 | ivan | you can't upload your grab-site WARCs to FOS because I believe SketchCow wants ArchiveBot WARCs |
| 21:38 | ivan | I can link you to an uploader for uploading to IA |
| 21:40 | ivan | https://gist.github.com/ivan/079530350ac94851d581b55b1d372440 you may have to make changes, good luck |
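(Editor's illustration; this is not the script ivan linked.) A minimal sketch of uploading a WARC to the Internet Archive with the `internetarchive` Python library, which is the usual tool for this. It assumes credentials have been set up via `ia configure`; the item identifier and the `collection` value below are made-up placeholders, since real web items typically need an appropriate collection assigned by IA staff.

```python
# Sketch of an IA upload using the `internetarchive` library
# (pip install internetarchive; run `ia configure` first for credentials).
# Identifier and collection below are illustrative, not real targets.

def warc_metadata(identifier: str) -> dict:
    """Item metadata for a web archive; values here are placeholders."""
    return {
        "mediatype": "web",
        "title": identifier,
        "collection": "opensource_media",  # hypothetical; pick a real collection
    }

def upload_warc(identifier: str, warc_path: str) -> None:
    """Upload one WARC file as an item; needs configured IA credentials."""
    from internetarchive import upload  # imported lazily so the sketch loads offline

    upload(identifier, files=[warc_path], metadata=warc_metadata(identifier))

if __name__ == "__main__":
    upload_warc("example-grab-2018-07-10", "example-grab-2018-07-10.warc.gz")
```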
    
| 21:42 | JAA | ivan: I would like to merge grab-site with ArchiveBot eventually; that would make maintaining it essentially trivial since it would be a by-product of maintaining ArchiveBot (which likely won't go away for a while). As for taking grab-site over, I've never used it, so not sure how much sense that makes. (I always used plain wpull.) |
| 21:43 | ivan | oh ok I didn't realize |
| 21:44 | ivan | you just seemed like the only person doing some wpull programming |
| 21:45 | JAA | Yeah, that's true. |
| 21:45 | JAA | I should pick it up again. Haven't worked on it in over half a year. |
| 21:45 | JAA | Getting it stable enough for a version 2.1.0 that is actually usable is the goal. |
| 21:46 | ivan | grab-site does a bunch of stuff to avoid thinking about all the wpull arguments one might want to archive a website. if you have different opinions, maybe it can be salvaged and adapted to your preferences. |
| 21:48 | JAA | I just found that I had to write custom code anyway in 99% of the cases where I couldn't just throw it into ArchiveBot. |
| 21:48 | ivan | it sounds like your archiving is of the sort where you actually try to archive it properly |
| 21:48 | ivan | mine is the kind where I just toss in websites and hope for the best |
| 21:49 | JAA | Heavily scripted sites, special ignores/filters that depend on the parent URL, stuff like that. |
| 21:49 | JAA | Yeah, I try to grab it as close to the real traffic from a browser as feasible, essentially. |
| 21:50 | JAA | Well, if I care enough. There are definitely cases where I just throw it into ArchiveBot and hope it works. |
| 21:51 | JAA | If I care enough and have time for it. |
| 22:42 |  | dashcloud has joined #archiveteam-ot |
| 23:05 |  | dashcloud has quit IRC (Ping timeout: 260 seconds) |