Luc-SVKanybody alive?
Luc-SVKPls i have a problem with my lab vmware box ( HP ML310G8 ), vmware esxi 6.0
Luc-SVKDisk write performance is really bad, ~ 6 - 10 MB/s
Luc-SVKI have already removed the new vib and replaced it with the old recommended vib
Luc-SVKany tips what to do?
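(A minimal sketch of what swapping the hpvsa driver VIB looks like from the ESXi shell, assuming the slow writes come from the scsi-hpvsa driver shipped with the HP image; the VIB file name below is a placeholder, not a specific recommended build:)

    # list the currently installed hpvsa driver
    esxcli software vib list | grep hpvsa
    # remove it and install an older build (file name is a placeholder)
    esxcli software vib remove -n scsi-hpvsa
    esxcli software vib install -v /tmp/scsi-hpvsa-<older-version>.vib --no-sig-check
    # the driver change only takes effect after a reboot
    reboot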
twkmreplace the drives?
Luc-SVKno, that's a problem with the combination of the b120i and the new vib
linux_probeHBA's and expecting battery backed write caching performance, meh
Luc-SVKb120i isn't true hardware raid
twkmreplace it too?
linux_probesounds like 100% non write cached performance <Luc-SVK> Disk write performance is really bad, ~ 6 - 10 MB/s
twkmsoftraid anyway.
linux_probeif it worked faster then, write cache was probably enabled and disabled upon update for data security
twkmfbwc isn't available for them, i think.
Luc-SVKit is, but the b120i is still shit
linux_probegoogle "b120i esxi slow"
twkmso replace it with an lsi w/cache.
linux_probehp claims the B120i has.. 512 MB 40-bit wide DDR3-800MHz flash backed write cache (FBWC) with write back caching
linux_probeI can only imagine it's been disabled for "data security"
twkmoptional, or yeah it's at least disabled.
Luc-SVKlinux_probe: that cache is optional and you must also have a license.. as far as i know
linux_probethen why are you asking and claiming there was a speed change
twkmreplace drives, replace controller, problem solved.
Luc-SVKlinux_probe: because it shouldn't have that bad performance
Luc-SVKAnd i didn't notice when this started
linux_probethat is what uncached performance is like
Luc-SVKit was ok a long time ago
Luc-SVKi didn't use this box for a long time .. but i started using it with esxi 5.0 or 5.5
Luc-SVKI am going to install HP image 6.0 U2 and i will see
Luc-SVKlinux_probe: 6 - 10MB/s is useless .. even a standard SATA disk has better performance
linux_probe>>>> <twkm> replace drives, replace controller, problem solved.
Mr_Roboto1Yeah that sounds really wrong.
Mr_Roboto13WARE used to have a really awesome tuning guide that would bump that a lot.
linux_probeno caching is no caching, game over deal with it
Mr_Roboto1There's certain RAID cards that are worth a damn and others that aren't. If I were on a budget these days I'd look at a 3Ware 9750 or an Adaptec 9504/9508
linux_probereal raid cards with "battery backed write cache"
Mr_Roboto1I had an AAR-2810 with 8 SATA drives; it wouldn't exceed the performance of a single disk even in RAID10, and typically was lower even with a lot of tuning
linux_probeTiming cached reads: 7300 MB in 2.00 seconds = 3652.29 MB/sec
linux_probeTiming buffered disk reads: 260 MB in 3.02 seconds = 85.99 MB/sec
linux_probeguess my dinosaur drives are still working
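(Those two lines look like hdparm output; assuming a Linux box with /dev/sda as the disk under test, the same read benchmark can be reproduced with:)

    # -T times cached (buffer) reads, -t times buffered disk reads
    hdparm -tT /dev/sda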
Mr_Roboto1If you have enough cache you don't have to worry about shit disk performance :)
linux_probeirl, that's a terribad HBA with WC enabled
Mr_Roboto1agreed, that should hit the bin or the RMA stack unless you find something wrong with the drivers or you're insanely desperate
Luc-SVKbtw HP P410 is bad?
Mr_Roboto1Looking on a random forum it seems like you should be able to get more performance than that
Luc-SVKi am humble, 50MB/s read/write would be great
Mr_Roboto1Did you flash the firmware?
Luc-SVKwait, firmware for P410?
Luc-SVKi don't have p410
Mr_Roboto1What card do you have?
Mr_Roboto1doh, totally misread the conversation
Luc-SVKit's onboard raid HP B120i
Mr_Roboto1It appears that's a fancy name for the Intel integrated PCH RAID.
Mr_Roboto1Which quite frankly is likely junk.
Luc-SVKwrite throughput 9MB/s still
neoticLuc-SVK: no fbwc?
neoticalso don't measure with dd
Mr_Roboto1dd is really a best case scenario in most cases because it's sequential
neoticwell, dd is fifo
neoticuse iometer instead
neoticcan do both sequential and random
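(To illustrate the sequential-vs-random point: a rough sketch of a dd sequential write test next to a random write test, using fio here as a Linux stand-in for Iometer; the file names, sizes, libaio availability and oflag=direct support are assumptions about the test VM:)

    # sequential 1MiB writes, bypassing the page cache where oflag=direct is supported
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
    # 4KiB random writes, much closer to a typical virtualization workload
    fio --name=randwrite --filename=/tmp/fiotest --rw=randwrite --bs=4k --size=1g --direct=1 --ioengine=libaio --iodepth=32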
Mr_Roboto1fuck I need to sleep.
neoticMr_Roboto1: also, a typical virtual infrastructure workload isn't sequential
Mr_Roboto1That's why I said best case scenario
Mr_Roboto1Usually you're going to end up with an order of magnitude less out of it in reality with random IO
Mr_Roboto1It's one of the reasons why certain kinds of parity (RAID3 or 4) aren't really used except in certain very niche cases.
Mr_Roboto1But that aside the record player arm's only gonna move so fast in a spinner and even if it moves super fast you're still waiting for the platter to rotate at whatever speed it's going to rotate at. Which is slow
Mr_Roboto1One day when we go all SSD hipsters are going to want to store their music or movies on a rotational disk because they sound better or something absurd. Some guy will be offloading 1TB Hitachis to morons for stupid money next to low oxygen copper speaker cables.
Mr_Roboto1"The spinning really brings out the clarity of the digital format music"
TandyUKlol Mr_Roboto1, makes me tempted to buy a box of WD blue/green 1tb disks (surely the hipsters would pay more for being eco friendly too :P)
Eagleman_With 2-way VSAN the witness should be outside the cluster, so this ESXi host cannot be used for DRS. However, can this host still mount the VSAN using iSCSI and use it as a datastore for VMs?
puffiEagleman_: What do you mean by 2 way? Which ESXi host? The one where the witness appliance is running?
Eagleman_2-Node vSAN is the correct term I believe, so I will run the witness on machine 3. Machine 3 has extra RAM available, but it cannot be used for DRS since it cannot be part of the cluster. Can I still mount the vSAN share with iSCSI on that host?
puffiEagleman_: Have a look at the vSAN iSCSI Target Service
Eagleman_alright, and every host in the (vSAN) cluster should have disks to participate in vSAN?
GVDyou're creating a virtual SAN which replicates data across all hosts. it stands to reason that each node would need local storage. ;)
Eagleman_yeah, makes sense. Just asking a few questions so I am sure how to set up my environment.
Eagleman_I got 3 systems atm, but only 2 are connected with 10GbE and the other system does not have PCIe slots available to do vSAN or 10GbE. So it makes sense for me to leave that system out of the cluster and use it as a witness for the other 2 machines that are equipped with 10GbE. But I would still like to run VMs on the third host using iSCSI with vSAN, but then outside the cluster of course.
Eagleman_Any unforeseen problems with mounting vSAN on the third host and using it as a datastore?
GVDno clue
GVDi know that it wasn't supported on release, but that's a long time ago
puffiEagleman_: You can have nodes in a vSAN cluster that do not have local storage, e.g. compute-only nodes
GVDpuffi: without 10GbE? doubt it.
puffiGVD: Why would 10GB be a requirement for compute only?
Eagleman_puffi: but then I would need at least 4 hosts, right?
GVDbecause it needs fast access to the storage nodes to not be slow as fuck?
Eagleman_There needs to be at least 3 "votes", or 2 votes and a witness?
puffiGVD: unless FTT=0, all nodes are going across the wire anyway, so why would it be any different for a compute-only node
puffiEagleman_: We're discussing two different issues here
GVDpuffi: pretty sure the design guide says 10GbE is mandatory (although not necessarily technically required)
puffiGVD: mandatory for all flash
puffiGVD: We're getting mixed up a bit, I'm referring to compute only in general not specifically in robo/stretched
GVDpuffi: I distinctly remember a VMWorld session that said 10GbE is not enforced as a requirement, but anything below 10GbE would need to be treated as "non-production"
puffiGVD: the requirement is 1GbE for hybrid, 10GbE for all-flash
puffiit's not a compute/non-compute requirement
GVDi know
GVDbut he has a node without 10GbE, so it was relevant. but it's fine if they updated the requirements as you say.
Eagleman_Anyway, a cluster with 1 compute node (no storage) is possible but you would need 3 other nodes with vSAN storage?
Eagleman_since 3 is the minimum with FTT=1
puffiEagleman_: Going back to your original question, looks like it's still not supported as a target for esxi hosts
Eagleman_puffi: alright, too bad; what about my last question?
puffi3 nodes 1 with no storage = no witness
Eagleman_So not possible, you will need at least 3 nodes with storage and 1 compute node.
puffiYou could probably do something funky and I'm not sure if you would get official support for it
puffibut run the witness in VMware Workstation and have a 1+2+1 stretched
puffifairly ugly
puffiis this going into production ?
Eagleman_Homelab, but I wouldn't like to lose all my VMs
neoticyou can do 2-node vsan with a witness.
Eagleman_so semi production, would still like to follow the recommendations
neotichomelab is not semi production...
Eagleman_2-node with witness seems to be the best option, but I would "lose" 1 host which isn't able to do DRS
neoticwell, if you want ftt=1 you need 3 hosts contributing to storage.
puffineotic: You can do FTT=1 on 2-node robo
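(Rough arithmetic behind that, assuming the default RAID-1 mirroring policy: tolerating n failures needs 2n+1 fault domains, so FTT=1 means 2 data copies plus 1 witness component = 3 fault domains. A 2-node setup only supplies 2, which is why the external witness appliance acts as the third.)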
Eagleman_well 2-node seems to be the only option since I can't add storage to the 3rd host.
puffiEagleman_: go mad and buy a 3rd server and have 3 node with 1 compute, sorted.
neoticso 2-node vsan with a 3rd compute only node?
neoticnot sure that's even possible.
puffineotic: no, 3 node cluster and his 4th node with no storage as compute
Eagleman_no, the witness can't be on the same cluster
puffiEagleman_: if you buy another server you don't care about the witness
puffiyou'd have a 4 node non-stretched with 1 compute
neoticideally you'd want 4 nodes for vsan unless you want manual work :)
Eagleman_Of course, but do I need a 4th server ;)
puffineotic: for sure, but it's a home lab :)
theacolyteIt is possible
theacolyteI've done it
theacolyte(in my home lab)
puffitheacolyte: What? robo + compute?
neotictheacolyte: 3 node vsan with 2 nodes contributing to storage?
theacolyte2 nodes
puffitheacolyte: where was your witness?
Eagleman_I can just equip my 3rd host with a local SSD for a local datastore and run the witness on that.
puffiEagleman_: sure
Eagleman_I would go from 2 ESXi hosts to 3 anyway if using vSAN.
Eagleman_Will be converting my storage server to an esxi host.
Eagleman_All storage (VMs) will run local when using a 2-Node vSAN, right?
puffiyou can pin VM's
theacolytePretty dang sure I had the witness on one of my VSAN hosts, but I don't recall
theacolyteI got rid of the setup
Eagleman_pin VM on compute or storage?
Eagleman_theacolyte: what would happen if that 1 host went down?
puffiEagleman_: He had a witness, so in theory nothing
puffiat FTT=1 anyway
Eagleman_Yeh but the witness was running on the same host as vSAN?
theacolyteit's a lab
puffiEagleman_: There's what's technically possible
puffiand what's supported
puffiTechnically you can pretty much do whatever you want
puffiEagleman_: Anyway, if he had a witness on a host that went down he'd lose quorum
puffihost down, witness down...
puffivm's down..
Eagleman_What would happen when there is a power failure?
linux_probethe world stops and everything dies
Eagleman_makes sense, but what if it happens only on the location where my machines are running ;)
puffijust your world stops and dies
puffithe rest of us are fine
Eagleman_lol, no one thought about this when designing vsan?
puffivsan didn't need to think about it, the UPS guys thought about it
Eagleman_yeh but mine only runs for 15 mins and tries to shutdown everything ;)
puffiSave up for some batteries
puffior if you have a large family, some bikes
Luc-SVKguys, pls what disk performance should i expect on esxi 6.0 or 6.5 with standard AHCI SATA drive? ( 7200rpm, WD RE4 1TB )
Luc-SVKconnected to sata3 interface?
puffiYour question makes little sense
Luc-SVKpuffi: i have ml310e g8 v2 server, 2 x sata3 drives ( re4 1TB ), esxi 6.0 U2 HP image
Luc-SVKbut there is still a problem with disk write performance
Luc-SVKi don't know what to do. Write performance is around 10MB/s. I don't need raid, but what I do need is better performance, at least 40MB/s
Luc-SVKor something like that .. And my question was whether this performance is to be expected from a single drive in AHCI mode
jaelaeMorning or afternoon or I guess evening
theacolyteESXi doesn't do writeback
theacolyteyou need a RAID card with cache to do that
theacolyteotherwise it's going to do write through which is slow
theacolytewhat you just described sounds about right
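(Back-of-the-envelope, assuming every write has to reach the platter before it is acknowledged and the guest issues small synchronous writes at a low queue depth: a 7200rpm SATA disk manages very roughly 100-200 such writes per second, and 150 writes/s x 64KB is about 9-10 MB/s, which lands right around the numbers reported above.)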
jaelaeI'm upgrading two vCenters. One works great and is a 6.0 appliance; the other is a problematic Windows 6.0 vCenter that will have some issues resolved after a migration to 6.5. What do I upgrade first?
theacolytenot sure how we could answer that for you, but whatever you do, make sure to upgrade them on a Friday before you go on vacation for 3 months
jaelaeJust debating what to do first, but I don't see either as being problematic; I just can't wait to get off Windows.
JohnWayneJust my personal experience: I would not waste time trying to migrate from vCenter running on Windows Server to the Appliance. I know that VMWare supports it and that there is plenty of documentation out there. But, we tried it (albeit, early on) and it failed in two of our Data Centers.
JohnWayneFailed = Everything looked fine ... until it didn't.
theacolytePerfect time to make your vacation getaway
JohnWayneIf you are not relying on historical data out of vCenter, just deploy a new Appliance and join your Hosts.
theacolytehow much time does he have before it breaks?
jaelaeNo way am I deploying a new appliance, that's a lot to configure on it
theacolytelatent problems are the best
JohnWayneIt's not bad ... I have a decent amount of customization in my vCenters and I can build and deploy in less than two days at this point.
jaelaetheacolyte: I just have Heap size alerts where some services consume too much memory. It’s just funky
jaelaeNSX integration would be a problem
JohnWayneIf you have Professional Services you can engage them to help you do a Database Migration, but we have also had problems with that.
theacolyteIf you have NSX, it's time to consult The Matrix (TM)
JohnWayneAgain, my experiences are based on early on experiences when VMWare initially announced support for migration.
JohnWayneSo, things may have changed.
jaelaeBut if the migration fails you can power off the new appliance and power on the windows vm
jaelaeEasy rollback
JohnWayneThis is true.
JohnWayneUnfortunately, the issues that we ran into were not immediately apparent.
JohnWayneIt took like a week.
jaelaeI do like a clean appliance and database idea. But nsx has so much control of our clusters and rules in place
JohnWayneThe most recent attempt we had Database Corruption to the point that even VMWare Support backed away from the table.
JohnWayneYeah, NSX does change things a bit.
jaelaeAnd no worries theacolyte I went through the compatibility matrix
jaelaeWe also use vRA automation so blueprints target clusters directly for all vm provisioning
jaelaeAs I get further into the VMware ecosystem it makes rebuilds more impossible :)
jaelaenext vsphere is what? 6.7? not 7.0?
JohnWayneI want vRA Automation
JohnWayneIs Director still a thing?
JohnWayneOr was it replaced?
neoticJohnWayne: i've done a ton of migrations from windows to appliance, for large environments. never had any break as long as you follow the documentation and the pre-reqs.
neoticvcloud director is still a thing.
jaelaewe went all in with vRA automation, it's pretty great. although we still don't have end users create their VMs
jaelaebecause they don't know what they want. but when requests come in we easily provision a ton of VMs giving them the most minimal resources then scale up
jaelaeintegrated into AWS as well now
JohnWayneThat is awesome, jaelae. I am going to make note and look into that.
Luc-SVKguys, could you recommend gear for a homelab? I thought that one gen8 server with a sata drive would be enough, but it's useless because of that slow write speed
neoticif you're doing hp server you really need fbwc
Luc-SVKI need esxi because many vendors provide ovf
Luc-SVKi need something really cheap, is there such an option?
neoticif you have a g8 server, just get some fbwc for it
neoticand maybe not a sata drive, as they are slow
Luc-SVKhmm ML310e g8 v2 doesn't support FBWC for b120i
Luc-SVKor what about for example some synology as iscsi target?
neoticoh, that's a crappy controller.
neoticyou can use synology to host vms, sure.
Luc-SVKno disks in ml310 at all, just boot from usb
jaelaein my home lab i have a 14TB synology with iscsi and that's all my datastores
jaelaeworks great for lab purposes
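(A minimal sketch of pointing ESXi at a Synology iSCSI target from the ESXi shell; the adapter name vmhba65 and the address 192.168.1.10 are placeholders:)

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # add the NAS as a dynamic discovery target, then rescan for the new LUN
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.1.10:3260
    esxcli storage core adapter rescan --adapter=vmhba65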
Luc-SVKcould this setup give more than 10MB/s for disk writing ?
neoticLuc-SVK: is throughput important for you? how did you measure throughput?
Luc-SVKif storage and server will be connected to 1GE ethernet
Luc-SVKneotic: no, but 10MB/s disk writing is so slow that i need to wait for everything
Luc-SVKsome VMs have a basic check before installation and they refuse to install with this slow storage
neoticml310 gen8 v2 isn't supported for esxi 6.5
neoticthe b120i isn't listed at all on the hcl
neoticLuc-SVK: plenty on google regarding that card/driver and bad performance
JohnWaynejaelae, Are you using Read Write Cache on your Synology?
JohnWayneI understand why, but I effing hate that Read Write Cache is limited to the Volume.
JohnWayneI'd have to look, but I think that on FreeNAS, you can assign Read Write Cache to the Pool.
JohnWayneBut, it would behoove anyone to look at the size / forecasted size of their Volume before implementing Read Write Cache and selecting SSDs.
JohnWayneOn a side note: SANDisk SUCKS. :-) My Synologys were eating them every 90 days or so. I moved to Crucial and I have not had a problem for over a year.
Luc-SVKneotic: i have 6.0 and i tried everything, hp image for 5.5, 6.0, hpvsa driver downgrade, test in AHCI mode
Luc-SVKand still the same
neotictried 6.5?
Luc-SVKi am not expecting high performance, but at least, i don't know, 30MB/s for example
Luc-SVKneotic: not yet
jaelaeit's a new month, which means another month where turbonomic is showing no data
GVDjaelae: lol, what does support say?
jaelaecreated another ticket
jaelaeyou get this nice UI set of dashboards that show you each cluster and their capacity and growth for 12 months
jaelaeit's great and really helpful
jaelaeonly whenever i go into it and need it, it is blank
jaelaei'm trying to get all the dashboards i like and replicate them in vRealize Operations Manager
jaelaeGVD i found a new one
jaelaeturbonomic columns sort from 1-9
jaelaeso if you want to sort VMs by memory and show greatest at top it will consider 9 the greatest and 1 the least
jaelaetherefore 9GB is greater than 16gb
meyousame with sorting by IP
meyou172.16.199.1 comes before
meyouit's strings all the way down
GVDalphanumerical sort :)
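(That is plain lexicographic string sorting; a quick illustration with GNU sort, comparing string order against numeric and version order:)

    printf '9\n16\n' | sort        # string sort: 16 before 9, so "9GB" looks bigger than "16GB"
    printf '9\n16\n' | sort -n     # numeric sort: 9, 16
    printf '172.16.199.1\n172.16.2.1\n' | sort      # string sort: .199 sorts before .2
    printf '172.16.199.1\n172.16.2.1\n' | sort -V   # version sort compares the octets numerically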
GVDhey, what are you complaining about, it did sort them, didn't it? ;)
meyousort of
meyouget it?
GVDoh dear lord, the pun
jaelaeanyone know if vRealize Operations Manager determines how many hosts you need by the average VM?
GVDconsidering it gives suggestions per VM, i'd expect it to make a total calc out of that rather than use averages
GVDthat said, i very frequently disagree with vROPS' assessment
GVDits suggestion to take away resources is very frequently way too aggressive
jaelaeyea i mean it says we have room for -243 VMs
jaelaein a cluster with 16 hosts with 79% memory
jaelae40% cpu
GVDthat's on the high side in my experience, but seems workable
GVDhow's your co-stop etc?
GVDmy most stressed cluster is at 80% RAM & 35% CPU, and starting to run into worries with regards to redundancy (on RAM side) if hosts were to fail. at 35% CPU with peaks up to 50% it's also starting to raise co-stop values and getting visible performance degradation
GVDit's an 8 host cluster iirc.