[05:22] ivan: Oh. Okay. Thanks.
[05:23] Kaz: Where did you get that graph?
[07:47] https://www.presstv.com/Detail/2019/04/19/593779/Google-Youtube-presstv-hispantv-channel-close
[09:25] t3: archive team grafana https://atdash.meo.ws/, the graph showing upload speed into IA from one of Kaz's hosts
[09:25] sorry if grammar wrong just woke up
[13:20] Fusl: my guys are all traditional dedi/vps guys, no experience with openstack or ceph
[13:21] so the problem with ceph is, you dont really want to run it with anything less than 3-5 nodes as it will cause more performance bottlenecks than a standalone ZFS cluster per node does
[13:21] we're looking at 3 nodes per "cluster" of sorts?
[13:21] 3 copies of data
[13:21] but trying to figure out what kind of network backbone
[13:21] i tried out onapp storage for a bit in the past, hated it
[13:22] running quad or dual 10gbit is what i would recommend
[13:22] per node that is
[13:22] the other question is whether we need that kind of speed
[13:22] doing dual 100gbit is what i do at home and it didnt increase the performance by a lot
[13:23] the physical servers we're using are 8 bay E5 single cores
[13:23] well, you definitely do not want to go gigabit
[13:23] *single processors
[13:24] unless we do pure SSD, which is unlikely
[13:24] hard drives? multiply the number of hard drives by 1gbit and you get the required network speed to run a stable cluster
[13:24] probably won't saturate a 2x10G
[13:24] we're thinking of an ssd+hdd mix per server
[13:24] 2/6 or 4/4
[13:25] for ssd's it's more like 4gbit per ssd
[13:25] at least for sata
[13:25] more likely 4/4, high capacity ssd + high cap hdd
[13:25] so 20gbit
[13:25] most of our customers are still traditional cpanel hosting or ecommerce
[13:25] you can run dual 10gbit on that
[13:26] link bundle? or two vlans
[13:26] and just let lacp layer 2+3+4 load balancing do the trick for you
[13:26] ok that sounds good
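For the link bundle, an LACP bond on a Proxmox/Debian node would look roughly like the sketch below; interface names and addresses are placeholders, and the switch side needs a matching LACP port-channel. (There is no literal "layer 2+3+4" hash policy; the usual choices are layer2+3 or layer3+4.)

    # /etc/network/interfaces (sketch)
    auto bond0
    iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4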
[13:26] compute nodes should also have 2x10G towards storage network right?
[13:26] yes
[13:27] just as a future note if you ever end up like i did
[13:27] then + 2x10G public internet facing
[13:27] if you ever run a ceph cluster with more than around 10k nodes
[13:27] split them up into a separate vlan/network and run a secondary cluster
[13:27] the heck? 10k?
[13:28] amazon?
[13:28] nah, some private project i've been doing with a friend
[13:28] single OSD per ceph host
[13:28] ethernet/ceph drives
[13:28] so 10 drives in a server = 10 ceph nodes?
[13:28] https://ceph.com/geen-categorie/500-osd-ceph-cluster/
[13:29] 10 drives in a server = 10 OSDs in 1 host
[13:29] each drive is called an OSD
[13:30] ic
[13:30] so don't overdo the ceph nodes
[13:30] gotcha
[13:31] it's a relatively small setup
[13:31] i got plenty of E3 microclouds (2bay), E5 single or dual procs with 8 bays
[13:32] whats the GHz on those cpus?
[13:32] so plan is to convert some of these dedis into a proxmox + ceph cluster. probably just 2-3 racks worth at most
[13:33] E3-1230V3 or V5, so 3.4Ghz x4, E5 2620 V3/V4
[13:33] if you're running standalone ceph clusters segregated from the proxmox clusters, disable hyperthreading, vtx and vtd, that will give you at least 30% performance increase, at least the hyperthreading part
[13:33] yeah separated, we have some units that are only single proc so 8 cores, planning to reuse them for pure ceph storage
[13:34] yeah that sounds good
[13:34] the dual procs will be used for compute, as well as E3s for high Ghz compute
[13:34] how much memory does each node have?
[13:34] E3s are stuck at 32G or 64G max, depending on DDR3/4
[13:34] you'll see ceph eat around 2gb memory for a HDD and 4gb memory for an SSD OSD
[13:34] oh ceph, hmm
[13:35] if we did 4x 2TB SSD + 4x 10TB HDD
[13:35] you can cut that down to around 1.5GB per OSD tho
[13:35] what are we looking at?
[13:35] around 28ish gb memory usage
[13:35] so 32G should be safe
[13:35] yep
[13:36] ok cool, thanks
[13:36] and then, you'll see yourself play around with `rbd cache` on the proxmox-side ceph.conf a lot
[13:36] noob question but, how does scaling work?
[13:36] add 3 more ceph nodes when we need more space?
[13:37] yeah, adding more OSDs
[13:37] they dont even have to be the same size
[13:37] thats the good thing about ceph, it will technically eat everything that you throw at it
[13:37] what about balancing?
[13:37] it will automatically balance all objects around so they are equally distributed based on the size of the drives
[13:38] proxmox-side ceph.conf rbd stuff: http://xor.meo.ws/BgPBAf5FZztBkJPKrG5pMQ60hXRlVEYs.txt
[13:38] as for your SSD/HDD mix
[13:38] so assuming 3 hosts, 4 ssd, using ssd storage only. when we spin up an instance does it only use 1 drive per host?
[13:39] throw both into the same pool, don't run pool caching
[13:39] then go ahead and enable osd primary affinity on all ceph.conf ends
[13:39] then set HDD primary affinity to 0
[13:39] this will cause all OSD reads to happen from the SSDs rather than from the HDDs
[13:40] and it will make your SSDs the primary OSD for all your objects
[13:40] You must enable 'mon osd allow primary affinity = true' on the mons before you can adjust primary-affinity. note that older clients will no longer be able to communicate with the cluster.
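A minimal sketch of that primary-affinity trick, assuming a Luminous-era cluster where osd.4 through osd.7 happen to be the HDD OSDs (the IDs are placeholders):

    # ceph.conf on the mons, as quoted above:
    #   [mon]
    #   mon osd allow primary affinity = true

    # then zero the primary affinity of every HDD OSD so reads are served by the SSDs
    for id in 4 5 6 7; do
        ceph osd primary-affinity osd.$id 0
    done
    ceph osd tree    # sanity check which OSDs you just touched

Writes still land on the HDD replicas as usual; this only moves the read load onto the SSDs.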
[13:41] > when we spin up an instance does it only use 1 drive per host?
[13:41] can you elaborate on that question?
[13:42] does it "raid0" across all available OSDs on the host
[13:42] sorry my ceph knowledge is very minimal
[13:42] that depends on how you configure it
[13:42] so a normal, sane setup would be to set the RBD block size to 4MB and the replication size to 3
[13:42] my thinking is that it sounds like RAID1 over 3 physical nodes
[13:42] that will cause all your blocks to be written three times to three different OSDs
[13:42] but whether there's RAID0 within the host, no idea
[13:43] so that RBD block size i mentioned earlier
[13:43] is essentially the size that your RBD will be sliced up into chunks
[13:43] or "objects"
[13:44] because thats what they are in ceph
[13:44] "objects"
[13:44] so you have a 1MB object, that object lives distributed across three different OSDs, each on its own host
[13:44] if you configure it correctly, ceph will ensure that no more than one copy of the same block will live on the same host
[13:44] but it will live on a random OSD on that host
[13:44] that's what the CRUSH map is for
[13:45] ic, that makes sense
[13:45] but god if ceph's database is fucked
[13:45] the whole thing collapses
[13:45] it's a code, and ill give you an example shortly, that describes how your objects are distributed in the cluster
[13:45] there's no "database"
[13:45] its all just CRUSH
[13:45] so each host in the ceph cluster, each monitor, each manager, admin, etc.
[13:46] every client
[13:46] even the proxmox clients
[13:46] see the exact same CRUSH map
[13:46] and that CRUSH map is a hash calculation algorithm that tells the client where it has to store that data and how it distributes that across everything
[13:47] and that map is stored somewhere? or dynamically generated?
[13:47] http://xor.meo.ws/e759Hlzp31ohMQzKGfAl3Rc8Rrq9oBnv.txt example crush map on one of my clusters
[13:47] this is the CRUSH map ^
[13:47] its stored on the monitor servers
[13:47] you get that map
[13:47] you modify it
[13:47] and then you push that map into the cluster (the monitors) again
[13:47] each client always connects to the monitor servers first
[13:47] to figure out what the CRUSH map looks like
[13:47] and where the OSDs live, ip address, etc.
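That get/modify/push cycle is normally done with crushtool; a rough sketch (file names are arbitrary):

    ceph osd getcrushmap -o crushmap.bin          # pull the compiled map from the mons
    crushtool -d crushmap.bin -o crushmap.txt     # decompile it into editable text
    $EDITOR crushmap.txt                          # edit buckets/rules as needed
    crushtool -c crushmap.txt -o crushmap.new     # recompile
    ceph osd setcrushmap -i crushmap.new          # push it back to the monitors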
[13:48] and once thats done, the clients will connect to the OSDs whenever they need to
[13:48] and when that crush map changes
[13:48] for example when one of your OSDs goes offline
[13:48] or an entire host goes offline
[13:49] the managers will coordinate generating a temporary crush map that resembles a new map based on your static map but with the down OSDs removed from the calculations
[13:49] so that your clients always have a way to put the data somewhere, at least temporarily
[13:49] so the monitors are the coordinators of the entire cluster
[13:49] run them on SSDs
[13:50] but run them on raid1 SSDs
[13:50] they dont need to be large
[13:50] 16gb is all they need
[13:50] but they need to be fast
[13:50] because they will do all the magic when something breaks or when you do maintenance
[13:50] and they like to live in a consensus
[13:50] so always have an odd number of monitors
[13:50] 3, 5, 7, 9
[13:51] you are technically fine if you run the monitors on the same hosts where the OSDs live but they need to have dedicated SSDs
[13:51] anyways, i'm afk for 5 mins, ask away and ill answer them when im back
[14:02] sounds easier just to raid0 your production cluster /s
[14:02] map /dev/null, easiest
[14:02] and very good performance
[14:02] unbeatable
[14:10] ok so a bit of reading up done, CRUSH is basically like a map of the entire cluster?
[14:10] or at least where data is being stored
[14:11] if you wanna see it as that, yes
[14:11] The CRUSH algorithm determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server or broker. With an algorithmically determined method of storing and retrieving data, Ceph avoids a single point of failure, a performance bottleneck, and a physical limit to its scalability.
[14:11] CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and retrieve data in OSDs with a uniform distribution of data across the cluster. For a detailed discussion of CRUSH, see CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data
[14:11] more on that: http://docs.ceph.com/docs/master/rados/operations/crush-map/
[14:12] right. so that will point the client to the correct nodes+osd right?
[14:12] for some things, the ceph documentation is REALLY worth a good read
[14:12] yes
[14:12] thus bypassing a proxy and going direct to source
[14:12] exactly
[14:12] but then say i have a 1GB data file, and 1MB block size
[14:13] i know it's stored in 3 nodes, across 4 OSDs each
[14:13] so
[14:13] but the OSDs also store other data
[14:13] see it as that
[14:13] where's the data map stored? on the OSD itself?
[14:13] the 1gb volume
[14:14] will be sliced up into 1024 equal sized 1mb chunks. objects.
[14:14] all objects are distributed into several placement groups
[14:14] placement groups are essentially buckets that hold millions of objects
[14:15] and there are many placement groups, but there shouldn't be too many placement groups because they dont scale very well
[14:15] placement groups are stored on the OSDs
[14:16] thats how the stuff is distributed across all OSDs, by placement groups
[14:16] all the objects within the same placement group always stay within that placement group
[14:16] but the placement group is essentially what you replicate across different OSDs
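A quick way to watch the object -> placement group -> OSD chain in action (pool and object names are placeholders, and the pool has to exist already):

    rados -p rbd put test-object /etc/hostname   # store a tiny test object
    ceph osd map rbd test-object                 # shows the PG it hashes into and the acting OSD set, e.g. "... -> pg 1.31 -> acting ([3,12,25], p3)"
    ceph pg dump pgs_brief | head                # per-PG view: state plus up/acting OSDs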
[14:17] so PG is like partitioning?
[14:17] kind of, yes
[14:17] so when one OSD goes down because the disk dies, ceph does not look up what objects it needs to move based on the OSD id
[14:17] it looks up the placement groups that need to be moved
[14:17] and then it just moves all objects within that placement group at once
[14:18] self-healing?
[14:18] yes
[14:18] this assumes one of the other OSDs on the same host has available space?
[14:18] correct
[14:18] so PGs won't take up 100% of a volume
[14:18] it will automatically shrink your available disk space down to whatever is available after that OSD died
[14:19] something like, created as necessary
[14:19] lets assume you have a 4 node cluster, each having 8 OSDs, no matter what kind of OSDs
[14:20] and you have a placement group thats defined to be stored with size=3 (replicate/store the data 3 times)
[14:20] if one node of that cluster goes down, 25% of your data is essentially degraded
[14:20] it's still there because for most of the 75% of that data you still have two more copies available
[14:21] and thats where ceph will go in and automatically re-replicate all placement groups that have been lost
[14:21] right, in simpler terms, self-healing
[14:21] and it will do so by balancing all the 25% of the lost placement groups onto the remaining 3 nodes
[14:21] yep
[14:21] then when the node comes back up, auto rebalancing?
[14:21] yes
[14:21] when the node comes back
[14:22] you might have the data still stored there
[14:22] but instead of assuming the node is empty/dirty
[14:22] overwrite everything?
[14:22] it compares the version of the objects in all the placement groups
[14:22] and then moves that data back into that node
[14:22] by merging the new data onto the old one
[14:22] and deleting old objects as necessary
[14:22] oh, nice
[14:22] so less data transferred
[14:22] yep
[14:23] how is data cleaning handled? if an object is deleted, old data is zeroized?
[14:24] so up to a specific ceph version with bluestore OSDs, data is only unlinked from the disk and then later overwritten
[14:24] newer versions support TRIMming of OSDs so that your data is actually deleted on-disk
[14:25] but as far as ceph disk utilization goes, if you delete objects, you're essentially freeing up the space
[14:25] and this also means that assuming the client network is able to handle it (2x100G uplink for example), it's actually able to retrieve data from across multiple hosts when reading
[14:25] correct
[14:25] since it's technically distributed RAID0 on the storage level
[14:25] assuming balancing is all done right
[14:25] correct
[14:25] there's just one pitfall
[14:26] placement groups are replicated in a kind-of primary/secondary way
[14:26] where there is one master OSD per placement group
[14:26] and all the others are followers
[14:26] so when your client goes ahead and reads an object from a PG, it will always read from the primary OSD
[14:27] but write is x3?
[14:27] yep
[14:27] and if for some reason, the write fails to 1 of 3?
[14:27] if it fails, the OSD will be kicked out of the cluster
[14:28] that will trigger a restart of that OSD if possible or completely mark that OSD as down
[14:28] ic
[14:28] so write is expensive, network wise
[14:28] thus client needs to have sufficient bandwidth
[14:28] yup
[14:28] unless
[14:28] erasure-coded pools :P
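For reference, an erasure-coded pool is set up roughly like this; profile name, k/m values, pg counts and pool names are placeholders, and using RBD on an EC pool assumes bluestore, EC overwrites enabled, and a replicated pool for the image metadata:

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec42
    ceph osd pool set ecpool allow_ec_overwrites true
    # image metadata stays in a replicated pool ('rbd' here), data goes to the EC pool
    rbd create --size 10G --data-pool ecpool rbd/ec-test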
[14:29] going to chime in with a couple of q's: when master OSD fails, another just takes the master position? and is replication handled by the client, or the hosts?
[14:29] * Kaz is learning things
[14:30] if an OSD fails and happened to be a primary OSD for some PGs, the crush rule defines a new master by math
[14:30] so "primary" OSDs aren't really defined anywhere, they are just there by being calculated as such in the CRUSH map
[14:31] and if a PG's primary OSD goes down, the CRUSH map's calculations define in what order the secondary OSDs become master
[14:31] so this also means, it's technically not 100% raid0 because reads are not load balanced across all the OSDs, but N/3
[14:31] also
[14:32] data replication is not handled by the client itself
[14:32] its handled by the primary OSD
[14:32] as a background task or realtime?
[14:32] realtime
[14:32] ah ic
[14:32] ty
[14:32] it will only acknowledge a write back to the client once all OSDs have that write acknowledged
[14:33] so client writes to primary OSD, primary OSD writes to the other 2?
[14:33] technically it *is* 100% RAID0 for reads, just that instead of 128KB striping in an mdadm raid, its the object size that you stripe over all the different PGs and OSDs with
[14:33] Kenshin: yep
[14:33] so client network is still 1:1
[14:33] yes
[14:33] so with your example, there are 4*8=32 OSDs
[14:34] with a 3 copy setting
[14:34] meaning 1 primary OSD per 3?
[14:34] yes
[14:34] there is always one primary OSD for each PG
[14:34] so read only goes to the primary OSD for that specific read request
[14:34] but it's distributed across multiple PGs
[14:34] yes but no
[14:34] so chances are, it'll still hit the other PGs?
[14:35] rbd_balance_parent_reads
[14:35] i think i got confused, lol
[14:35] Description
[14:35] Ceph typically reads objects from the primary OSD. Since reads are immutable, you may enable this feature to balance parent reads between the primary OSD and the replicas.
[14:35] so to get like godly read speeds, we can do that?
[14:35] yes
[14:35] i recommend running at least ceph luminous for that though
[14:36] any version older than that was known to corrupt data
[14:36] :P
[14:36] does ceph deal with data integrity?
[14:36] since bluestore, yes
[14:36] since that's something ZFS is supposedly very good at
[14:36] it checksums read data and it also does a background scrub
[14:37] ic
[14:37] and if checksums are wrong, it will automatically re-replicate the data
[14:37] how would it know who has the correct data though
[14:37] to counteract bitrot
[14:37] since there are 3 copies
[14:37] it stores the checksum in a table
[14:37] ah ok
[14:37] each OSD stores its own data checksum
[14:37] and if that checksum differs from what it has stored
[14:38] it will ask the other OSDs for that checksum, if any other OSD has that checksum, it will copy that data over
[14:38] ok great. i think it all makes sense now
[14:38] back to something you mentioned earlier, why ssd + hdd in the same pool but turn off caching?
[14:38] and if no other OSD has that data, you'll have to do a manual recovery and god help you if you ever end up having to do that
[14:39] cache tiering
[14:39] only makes sense for REALLY fast, small SSDs
[14:39] like DC enterprise nvme SSDs
[14:39] but say i want to sell VMs with both HDD and SSD volumes
[14:40] heh
[14:40] this is where it gets fun
[14:40] so
[14:40] ceph defines whats called
[14:40] pools
[14:40] and each pool can have a different CRUSH rule
[14:40] so in my crush example that i pasted earlier
[14:40] http://xor.meo.ws/e759Hlzp31ohMQzKGfAl3Rc8Rrq9oBnv.txt
[14:40] see how it has several "rule" blocks at the end? thats a crush rule
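A "rule" block in a decompiled CRUSH map generally looks something like this (the name, id and device class here are illustrative, not copied from the pasted map):

    rule replicated_ssd {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
    }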
[14:41] and if you create or modify a ceph pool, you can set the crush rule that it has to use to replicate its own placement groups
[14:41] because placement groups in ceph are bound to pools
[14:41] so you can have a mix of HDDs and SSDs
[14:41] tag them differently
[14:41] with the class
[14:42] create a rule called "hdd", define that it should only use "hdd" class-tagged OSDs
[14:42] create another rule called "ssd", define that it should only use "ssd" class-tagged OSDs
[14:42] then create two different pools, one uses the hdd and the other one the ssd crush rule
[14:42] and then you just add those as two different pools in proxmox
[14:42] as two different storage backends
[14:42] that way you can select between SSD and HDD
[14:43] right. so still technically 2 pools
[14:43] but using rules to define
[14:43] and you can also have a third pool that says use ssds AND hdds
[14:43] so tiered storage?
[14:43] nope
[14:43] just by specifying a third crush rule
[14:43] that says, use all OSDs
[14:44] like
[14:44] "use whatever free space is available, i don't care"?
[14:44] dont limit what OSD class to use
[14:44] correct
[14:44] and you can still have cache tiering on top of that
[14:44] like, creating a 4th pool that says, use all SSDs
[14:44] and a 5th pool that says, use all HDDs and use the 4th pool as cache tier
[14:45] does the cache tiering really work though?
[14:45] yes
[14:45] because you define a pool overlay
[14:45] but it allocates 200% of space? 100% of each?
[14:45] and also define how the data is overlayed between the pools
[14:45] nope
[14:45] you can tell it to drop cold data after a while from the 4th to the 5th pool
[14:45] http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
[14:47] i assume you've tested it? what kind of use case would you recommend?
[14:47] i have tested it and
[14:47] corrupted my data
[14:48] lol
[14:48] ok, don't use.
[14:48] i am still not sure if i was just wayy too tired
[14:48] i can't really think of a use case based on VMs
[14:48] /topic Today's agenda: Ceph crush-course ¯\_(ツ)_/¯
[14:48] it just seems like stuff will get shuffled between HDD-SSD way too much
[14:49] and it's not like it's file based, it's block based
[14:49] which may generally make no sense at all
[14:49] things that are either read or write heavy or both would benefit from it
[14:49] but only if they keep hitting the same objects
[14:50] like, databases
[14:50] or websites
[14:50] yeah the thing with databases is that mysql would write to the same file, but when converted to blocks by the filesystem it may not be the exact same block
[14:50] that's my thinking where things would likely become really messy
[14:51] yep
[14:51] if it's file storage, maybe S3 -> CEPH then it would make a lot of sense
[14:52] rados/S3 or cephfs would be another candidate, yeah
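The class-based split described above boils down to something like this on a Luminous-or-newer cluster (rule names, pool names and pg counts are placeholders; the two pools then get added to Proxmox as separate RBD storages):

    # device classes are normally detected automatically; they can also be set by hand
    ceph osd crush set-device-class ssd osd.0
    ceph osd crush set-device-class hdd osd.4

    # one replicated rule per device class, failure domain = host
    ceph osd crush rule create-replicated rule-ssd default host ssd
    ceph osd crush rule create-replicated rule-hdd default host hdd

    # one pool per rule, 3 copies each
    ceph osd pool create vm-ssd 256 256 replicated rule-ssd
    ceph osd pool create vm-hdd 256 256 replicated rule-hdd
    ceph osd pool set vm-ssd size 3
    ceph osd pool set vm-hdd size 3
    ceph osd pool application enable vm-ssd rbd
    ceph osd pool application enable vm-hdd rbd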
[14:52] did you have the chance to test PCIE based SSDs with this?
[14:52] for caching?
[14:53] cause my nodes are all 3.5" hdd slots, putting 2.5" ssds seems like a complete waste
[14:53] for everything
[14:53] i need to buy new SSD/HDD for this project anyway
[14:53] i dont have any pcie-based ssds but from what i heard, the performance is pretty good
[14:53] the stuff i have in stock are all 256GB SSD or 1/2TB HDD due to dedi servers
[14:53] instead of wasting 4 slots for SSD, might as well slap in a nice big PCIE SSD
[14:55] bcache is also another thing
[14:55] like, have bcache use nvme for caching and hdds for the cold storage devices
[14:56] and then point ceph to use the virtual bcache devices
[14:57] hmm, that sounds like an idea
[14:57] but that means i gotta partition the nvme (assuming qty < number of hdds) then attach them?
[14:58] yep
[14:58] there's journals as well right
[14:59] oh yes
[14:59] you'll need to store them on nvme as well and then be careful about how you do that
[15:00] but bluestore journals on hard drives are essentially really good by now
[15:00] so don't really need to do it on SSD
[15:00] so you can just put the journal onto the HDDs
[15:00] less surprises
[15:01] unless you want to run filestore, for which you have absolutely no good reason to
[15:15] Fusl: if i use a PCI-E based 2TB SSD, would it be sufficient?
[15:15] or the better question, how many partitions do i need?
[15:15] for journal or caching?
[15:15] data
[15:15] customer data
[15:15] journal like you said, use the OSD itself
[15:15] yep
[15:15] caching is risky
[15:16] 2TB sounds fine
[15:16] so just pure customer data
[15:16] intel P4600 is either 2TB or 4TB
[15:16] max speed 3200MB/sec read, 1575MB/sec write
[15:16] if you wanna go with the larger one i dont see a reason why you shouldn't
[15:16] $$$
[15:16] need to buy 3 remember
[15:16] lol
[15:16] gets expensive
[15:16] yeah but
[15:16] more storage
[15:16] :P
[15:17] question is whether i should bother with 2x2TB
[15:19] i wouldnt
[15:19] size/replication 2 is not recommended
[15:19] neither is 1 obviously
[15:21] do you mean 2x2TB per host?
[15:21] so 3x2x2TB?
[15:23] 2 PCIE cards per host, each 2TB
[15:23] vs 1x 4TB
[15:24] i get more bandwidth definitely, but bandwidth isn't an issue
[15:24] since i'm stuck with 2x10G network
[23:53] 10 rotating IPs at his caravan park
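A rough sketch of the bcache arrangement mentioned above, with one NVMe partition caching one HDD and the resulting bcache device handed to ceph; device names are placeholders and this assumes bcache-tools and ceph-volume are installed:

    make-bcache -B /dev/sdb            # HDD as backing device
    make-bcache -C /dev/nvme0n1p1      # one NVMe partition as the cache set

    # attach the cache set to the backing device; CSET_UUID is the cset.uuid
    # reported by bcache-super-show on the cache partition
    bcache-super-show /dev/nvme0n1p1 | grep cset.uuid
    echo "$CSET_UUID" > /sys/block/bcache0/bcache/attach

    # then create a bluestore OSD on the bcache device
    ceph-volume lvm create --data /dev/bcache0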