[00:19] *** MrRadar has joined #archiveteam-ot
[01:13] *** keith20 has quit IRC (byeee)
[01:45] *** chferfa has quit IRC ()
[03:41] *** odemg has quit IRC (Ping timeout: 260 seconds)
[03:53] *** odemg has joined #archiveteam-ot
[04:13] *** astrid has quit IRC (Read error: Operation timed out)
[04:15] *** MrRadar has quit IRC (Read error: Operation timed out)
[04:18] *** jut has quit IRC (Ping timeout: 360 seconds)
[04:18] *** jut has joined #archiveteam-ot
[04:20] *** schbirid has joined #archiveteam-ot
[04:21] *** MrRadar has joined #archiveteam-ot
[04:23] *** astrid has joined #archiveteam-ot
[04:24] *** svchfoo1 sets mode: +o astrid
[04:27] *** dashcloud has quit IRC (Read error: Operation timed out)
[05:06] *** schbirid has quit IRC (Remote host closed the connection)
[05:53] *** m007a83_ has joined #archiveteam-ot
[05:55] *** m007a83 has quit IRC (Ping timeout: 252 seconds)
[07:31] *** t2t2 has quit IRC (Ping timeout: 260 seconds)
[07:31] *** t2t2 has joined #archiveteam-ot
[08:20] *** jut has quit IRC (Quit: WeeChat 1.4)
[08:23] *** m007a83 has joined #archiveteam-ot
[08:27] *** m007a83_ has quit IRC (Read error: Operation timed out)
[09:10] *** jut has joined #archiveteam-ot
[09:19] *** wp494 has quit IRC (Ping timeout: 840 seconds)
[09:30] *** chferfa has joined #archiveteam-ot
[10:13] *** Ravenloft has quit IRC (Read error: Connection reset by peer)
[10:56] *** BlueMax has quit IRC (Read error: Connection reset by peer)
[11:48] *** wp494 has joined #archiveteam-ot
[12:31] *** icedice has joined #archiveteam-ot
[12:33] Any recommendations for a reliable 6TB HDD? I'll use it in my desktop to store and access files, mostly videos.
[12:34] I'm thinking about getting a WD Black 6TB 256MB
[12:34] And I won't RAID the HDD
[12:35] best deal per GB right now is a shucked 8TB WD My Book, assuming stripping the warranty + applying tape over the 3.3V pins is OK
[12:36] Also, how reliable are 6TB HDDs overall? I know that 1.5TB, 3TB, and 8+TB HDDs aren't as reliable as the other capacities.
[12:36] I meant internal HDD, not external
[12:37] external drives contain internal drives :-) :-)
[12:37] backblaze is the place that might have data but I don't think they bought many 6TB
[12:37] in general the annual failure rate is around 1% now
[12:38] Backblaze stats are a joke
[12:38] there's a 6TB HGST He8 fwiw
[12:39] still available new as HGST Deskstar NAS
[12:40] 1.5 and 3 TB had some issues, yeah, but I haven't heard about systematic problems on 8+ TB drives. Any source on that?
[12:41] JAA: A hard drive technician on Overclock.net warned about 8+ TB drives: https://www.overclock.net/forum/20-hard-drives-storage/1634071-warning-about-8tb-drives-cautious-aware-higher-rate-failure-degraded-surfaces.html
[12:41] Why Backblaze's study sucks: http://www.tomshardware.co.uk/answers/id-2724690/seagate-hgst.html#r16270047
[12:42] They're using consumer hard drives in a server environment
[12:42] Yeah, Backblaze's data is pretty much useless.
[12:42] I have a lot of 8TB He8 drives and they're fine
[12:42] Of course HGST is going to dominate that
[12:42] ivan: How old are they?
[12:42] varies, mostly 1-1.5 years
[12:43] I test them a lot and have not seen bad blocks or whatever
[12:43] Ok. I'm mostly worried about the long term reliability, so it's too early to draw conclusions from that
[12:44] The only decent stats on hard drive reliability: https://www.hardware.fr/articles/962-6/disques-durs.html
[12:45] the "backblaze study sucks" comment you linked is clueless
[12:45] backblaze loads the drives with data and mostly leaves them alone
[12:45] also, complaining about stressing the drives too hard? wouldn't you want _more_ room for error in your own environment?
[12:46] The drives are in a high vibration environment
[12:46] Which they are not designed for
[12:46] Of course they're going to fail
[12:46] The server/NAS drives are the most reliable ones, but they don't have head parking
[12:47] And they have error recovery control
[12:47] backblaze is doing it wrong (internet commenter can assume they have excessive vibration?), and yet they fail 1% of the time?
[12:47] the 2015 comment is motivated by the large number of 3TB Seagate failures and people wishfully hoping it wouldn't happen to them
[12:48] warranty return stats are worse than backblaze's imho
[12:49] I returned none of my ST3000s because why would I want more of these?
[12:50] A desktop environment and a server environment are completely different and can't be compared
[12:50] uh-huh
[12:51] yet they get excellent numbers?
[12:52] put a bunch of hard drives together into a crappy case at home and you've got vibration too
[12:52] do a lot of seeks for days, are you "server" now?
[12:53] anyway the modern drives seem to survive suboptimal conditions just fine
[12:56] pretty sure most home cases have way more protection than backblaze chassis
[12:56] heh probably
[12:58] https://www.backblaze.com/blog/wp-content/uploads/2016/04/blog-60-drives-ooh-aah.jpg
[13:00] boy are there going to be a lot of cheap hard drives available when flash finally eats everything
[13:01] https://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index3.html
[13:03] you're linking to an article by a guy who has basically published Seagate press releases. what do you think the odds are of Seagate feeding him this as well?
[13:04] the ST3000 incident was very real
[13:04] "HDD Reliability Myth - The Real Story Covered" sounds like something a PR drone would feed
[13:04] How am I supposed to know what the motivations of the writer of an article are?
[13:05] I just tried to find some article about vibration in Backblaze's storage pods
[13:05] Anyway, for the record: Fuck Seagate
[13:05] well you can form good guesses with inference on the contents and author
[13:05] "This paints them in a very unforgiving light due to obvious chassis issues, with a misleading annual failure rate of 25.4% that would surely put Seagate out of business, if it were realistic."
[13:05] Never trusting that brand
[13:07] "Fuck Seagate" sounds like something someone would say who owned 10 ST3000DM001s, had them fail, and then swore to never, ever buy Seagate drives again.
[13:07] Modern Seagate drives are fine.
[13:07] I remember reading a post on Overclock.net from someone whose brother works at Seagate, and the brother had warned him to "not buy Seagate right now", or something like that
[13:07] I don't trust Seagate
[13:09] 10.00% RMAs for Seagate Desktop HDD 6TB and 6.78% RMAs for Seagate Enterprise NAS HDD 6TB are not ok
[13:09] Neither do I. I don't trust any other company either. That's why I have drives from multiple brands and backups.
[13:09] Where are those numbers from?
[13:10] Yeah, I'm planning on getting a Glyph BlackBox Pro 6TB eventually to use for backups
[13:10] https://www.hardware.fr/articles/962-6/disques-durs.html
[13:11] Sample size of 100 to 200 drives. Okay then...
[13:13] In fact, all models for which they report a failure rate (ok, RMA rate) of over 3% have such a small sample size. So yeah...
[13:22] I'd be all for trying HGST, but I don't think they have head parking
[13:23] And I think all of their 6TB HDDs are server/NAS drives, which means that they probably have error recovery control
[13:27] Yeah, I don't think there are any consumer drives from "HGST".
[13:28] I put quotes around that because HGST is just a brand of WD, several WD drives are actually rebranded HGST drives (e.g. WD Reds 8+ TB), and the HGST brand is being phased out.
[13:38] https://www.hgst.com/products/hard-drives mobile drives might be consumer enough
[13:38] *** dashcloud has joined #archiveteam-ot
[13:41] Not 6 TB though.
[13:48] Yeah
[13:48] If I was using RAID I'd probably use HGST or WD Gold
[13:48] But since I'm not, I'll probably go for a WD Black 6TB 256MB
[13:49] I was also considering the Toshiba X300 6TB, but that one is apparently loud af
[13:50] https://www.youtube.com/watch?v=wlpyF8anf8A&t=1m29s
[13:51] So this is mostly anecdotal, but I work with the NSW Department of Education and Training on a casual basis. We purchased 12 Seagate Exos X12 12TB helium drives from Seagate themselves for a pilot test
[13:51] Within 6 months we had 7 of those 12 drives fail completely
[13:52] *** ozlo__ has quit IRC (Quit: WeeChat 2.1)
[13:56] btw this is the model: ST12000NM0027
[14:14] *** keith20 has joined #archiveteam-ot
[14:25] *** ivan has quit IRC (Read error: Operation timed out)
[14:25] *** JAA has quit IRC (Read error: Operation timed out)
[14:26] *** jspiros has quit IRC (Read error: Operation timed out)
[14:27] *** ivan has joined #archiveteam-ot
[14:40] Colosolutions pls
[14:51] i run a raid5 of three different brands of 8T
[14:51] lightly loaded
[14:51] about a year now
[14:51] ok, six months.
[14:52] no errors yet reported by zfs; i've got 4.6T of data in it
[14:58] *** icedice has quit IRC (Ping timeout: 260 seconds)
[15:25] *** jspiros has joined #archiveteam-ot
[15:26] *** JAA has joined #archiveteam-ot
[15:26] *** svchfoo3 sets mode: +o JAA
[15:27] *** bakJAA sets mode: +o JAA
[15:46] *** eientei95 has quit IRC (Quit: ZNC 1.6.5 - http://znc.in)
[15:46] *** dashcloud has quit IRC (Read error: Operation timed out)
[16:49] *** schbirid has joined #archiveteam-ot
[17:14] *** dashcloud has joined #archiveteam-ot
[17:41] *** chferfa has quit IRC ()
[18:01] *** dashcloud has quit IRC (Ping timeout: 633 seconds)
[18:24] *** jc86035 has joined #archiveteam-ot
[18:24] *** jc86035 has left
[19:01] *** schbirid has quit IRC (Remote host closed the connection)
[20:47] *** caff_ has joined #archiveteam-ot
[20:47] *** wp494 has quit IRC (Ping timeout: 255 seconds)
[20:47] *** wp494 has joined #archiveteam-ot
[21:22] *** LimpPanda has joined #archiveteam-ot
[21:22] *** LimpPanda has quit IRC (Client Quit)
[21:35] JAA: want to own grab-site? I'm bored of it
[21:36] Hey ivan, can grab-site be configured to upload to FOS or directly to the IA?
[21:38] you can't upload your grab-site WARCs to FOS because I believe SketchCow wants ArchiveBot WARCs
[21:38] I can link you to an uploader for uploading to IA
[21:40] https://gist.github.com/ivan/079530350ac94851d581b55b1d372440 you may have to make changes, good luck
[21:42] ivan: I would like to merge grab-site with ArchiveBot eventually; that would make maintaining it essentially trivial since it would be a by-product of maintaining ArchiveBot (which likely won't go away for a while). As for taking grab-site over, I've never used it, so I'm not sure how much sense that makes. (I always used plain wpull.)
[21:43] oh ok, I didn't realize
[21:44] you just seemed like the only person doing some wpull programming
[21:45] Yeah, that's true.
[21:45] I should pick it up again. Haven't worked on it in over half a year.
[21:45] Getting it stable enough for a version 2.1.0 that is actually usable is the goal.
[21:46] grab-site does a bunch of stuff to avoid thinking about all the wpull arguments one might want to archive a website. if you have different opinions, maybe it can be salvaged and adapted to your preferences.
[21:48] I just found that I had to write custom code anyway in 99% of the cases where I couldn't just throw it into ArchiveBot.
[21:48] it sounds like your archiving is of the sort where you actually try to archive it properly
[21:48] mine is the kind where I just toss in websites and hope for the best
[21:49] Heavily scripted sites, special ignores/filters that depend on the parent URL, stuff like that.
[21:49] Yeah, I try to grab it as close to the real traffic from a browser as feasible, essentially.
[21:50] Well, if I care enough. There are definitely cases where I just throw it into ArchiveBot and hope it works.
[21:51] If I care enough and have time for it.
[22:42] *** dashcloud has joined #archiveteam-ot
[23:05] *** dashcloud has quit IRC (Ping timeout: 260 seconds)
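
Editor's note on the 21:38-21:40 exchange about uploading WARCs straight to IA: ivan's actual uploader is the gist linked above, but as a minimal sketch of the general approach, the official `internetarchive` Python library (installed via pip, credentials set up with `ia configure`) can push a finished grab-site WARC into an item. The item identifier, file name, and metadata below are made-up placeholders for illustration, not values taken from the gist.

    # Hedged sketch: upload a local grab-site WARC into an Internet Archive item
    # using the internetarchive library. The identifier, path, and metadata are
    # hypothetical placeholders; credentials are read from the config written
    # by `ia configure`.
    from internetarchive import upload

    item_identifier = "example-grab-site-warcs-2018"   # hypothetical item name
    warc_path = "example.com-2018-06-13.warc.gz"       # hypothetical local file

    responses = upload(
        item_identifier,
        files=[warc_path],
        metadata={
            "title": "example.com grab-site crawl",    # hypothetical metadata
            "mediatype": "web",
        },
    )

    # upload() returns one requests.Response per uploaded file; check each one.
    for r in responses:
        print(r.status_code, r.request.url)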