Category Archives: Technical

azure-cli and Azure DNS!

Hi!

I have been looking at using Azure DNS for my domains for a couple of weeks. So, yesterday morning, while my SO was still sleeping, I went in for the kill and researched how I could import BIND zone files without having access to a machine with PowerShell.

Hello azure-cli! A magical little tool that runs on pretty much any platform capable of running node.js. My interest in Azure just skyrocketed after finding out MS has actually spent some time working on this brilliant little tool. A quick “npm install -g azure-cli” from my Mac OS X laptop got me going.

But back to the Azure DNS magic.

First, make sure you have a copy of your BIND zone file.
Next, switch azure-cli into ARM config mode: azure config mode arm
Then create the zone in Azure DNS: azure network dns zone create myresourcegroup myawesomedomain.com
Finally, import the BIND zone file: azure network dns zone import myresourcegroup myawesomedomain.com myawesomedomain.com.txt (where myawesomedomain.com.txt is my BIND zone file)

Now, check which NS servers you were assigned with the following command:
azure network dns record-set show myresourcegroup "myawesomedomain.com" "@" NS
Then go to your domain registrar's control panel and point your domain to the Azure DNS name servers listed by that command.
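For reference, here is the whole sequence in one place, pretty much as I ran it (myresourcegroup and myawesomedomain.com are of course just placeholder names; substitute your own):

# switch azure-cli into ARM mode
azure config mode arm
# create the zone in Azure DNS
azure network dns zone create myresourcegroup myawesomedomain.com
# import the BIND zone file into the new zone
azure network dns zone import myresourcegroup myawesomedomain.com myawesomedomain.com.txt
# list the NS record set to see which name servers you were assigned
azure network dns record-set show myresourcegroup "myawesomedomain.com" "@" NS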

You can get more info from this MS documentation site.

And….if you have a huge domain estate then you might be interested in knowing that the Men&Mice Suite actually supports managing DNS records in Azure DNS! 🙂

Bgrds,
Finnur

All Flash…and other SAN stuff

Hi all,

This year I have (among other things) been working on migrations from older storage arrays to some newer all flash arrays. Man, those babies scream!

It was awesome to see the latency drop from 5-10ms+ to under 1ms….at times less than 0.4ms. You also start to see the flaws of older filesystems (read: ext3).

In the process we also moved from 8Gbit FC to 16Gbit FC to be able to push more bandwidth as well as IOPS.

Some unforeseen problems (which everyone should also look out for!)… it can be surprisingly easy to saturate the back-end bandwidth on your arrays when everything is doing 16Gbit and you run backup jobs on top of some huge batch processes plus your normal workload 🙂

Just some words of advice – when you move on to faster equipment, you often just move problems from one place to another!

Bgrds,
Finnur

Interesting info on the IBM FlashSystem V9000 and VMware by Dusan Tekeljak

Howdy,

While trying to find up-to-date information on SCSI UNMAP and IBM Storwize products I came across this blog post by Dusan Tekeljak, which I found very interesting.

Especially the best practice part on RTC:
“To get best Real-Time Compression performance use at least 8 compressed volumes (LUNs) per V9000. Regardless what sales people tell you, it is not good thing from performance point of view to create one big volume (and not even talking from VMware point of view). There are 8 threads dedicated for RTC and one volume can be handled by 1 thread only.”

I am pretty sure everyone working with RTC on IBM Storwize/SVC products would like to have this written in bold letters on the product spec sheet so they can configure their MDISKs for best performance!

Bgrds,
Finnur

LPAR2RRD – A nice tool to gather (and watch) your historic VMware performance data

About six months ago I installed LPAR2RRD to graph a VM environment that holds a few hundred VMs. At that time I had some issues and didn’t really use it, but I kept it installed and allowed it to gather data.

However, I installed the newest version a couple of days ago and was rather impressed. Although this small tool does not look like much, it is a simple solution for gathering performance data and helps you debug performance issues in your VMware environment. I have never used the LPAR (AIX) functionality, but that part seems even more complete.

The errors I kept getting have been cleaned up (I didn’t really spend any time debugging them when I did the initial installation); they mostly showed up when looking at cluster data, and I also had some issues with the datastore graphs.

I sound like a bloody advertisement but I have no connection to the company that is working on the product except that I am a (happy) user 🙂

Bgrds,
Finnur

3PAR support in STOR2RRD

Hello!

I have been a user of STOR2RRD since ~2012, I think. It is a brilliant tool for gathering historical performance information on your IBM SAN Volume Controller based storage (Storwize, SVC, etc.).

However, I was looking for a new version a few days ago and saw that they have added support for HP 3PAR. They have also added support for some HDS storage as well as NetApp. So now, if you cannot cough up the dough for [insert your best-of-breed storage management tool here], you might have a cool alternative there 🙂

Of course, this is no replacement for IBM’s TPC products, but if you are looking for a single tool to gather performance statistics for your storage arrays and your Brocade fabrics, look no further. It is a very valuable tool for daily operations.

Check out their webpage here.

Bgrds,
Finnur

VMware HBA driver issues

Hi,

I have been migrating some tier 1 workloads to VMware over the last three months or so. While doing so, we ended up having to debug performance issues related both to the number of vCPUs available to the operating system and to storage performance.

We started by debugging the vCPU-based issues. Our application administrators spent loads and loads of time and effort on getting those fixed. Most of them turned out to be caused by settings in the application itself, and after those had been modified to reflect the actual number of vCPUs available to the VMs running those workloads, things mostly got back to normal.

However, we also had some performance issues with our storage. While we were running those applications on the old hardware, things seemed to run well enough. Nothing was great, but nothing was horrible either.

After the migration was done and the vCPU issues were sorted, we started looking at the storage-related issues. The new hardware was a lot more powerful, so it seemed obvious that it could push more throughput on the storage side. We were indeed seeing higher throughput rates (MB/s), but response times were horrible at times.

While chasing some leads I was starting to think that the storage was just becoming the bottleneck. Countless hours were spent analysing graphs from the OS, the arrays and our VMware hosts.

Finally, we noticed that the response times from the VMware hosts did not match the response times of the array. That led us to think that the issue was either on the host side or a network issue.

We opened a ticket with our hardware vendor and got the support team to go over our VMware driver+firmware setup (the same setup we had been running on countless other hosts with, as far as we knew, no issues).

Nothing obvious came out of the support case, but once our monthly “no-change” period ended after the new year, we decided to update the HBA driver and firmware.

*BAM*!

Finally we got response times down; they now matched the latency on the array. I kicked myself, since I cannot even remember how often I have yelled at other people: “UPGRADE YOUR DRIVERS AND FIRMWARE!”… live’n’learn, people! 🙂

The moral of the story: check whether you are running the latest supported firmware and driver versions on your hosts before spending countless hours analysing performance data while debugging some damned performance issue! 😉
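For what it is worth, this is roughly how I check which HBA driver and firmware versions an ESXi host is actually running these days. It is only a sketch for a reasonably recent ESXi; the exact output fields vary by release and HBA vendor, and lpfc below is just an example driver name:

# list the storage adapters and the driver module each one is using
esxcli storage core adapter list
# for FC HBAs, show model, firmware version and driver version
esxcli storage san fc list
# check which version of the driver VIB is actually installed (lpfc is just an example)
esxcli software vib list | grep -i lpfc

Compare what you see there against what your server/storage vendor and the VMware HCL list as supported before you start staring at latency graphs.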

Bgrds,
Finnur

On VMware vSphere and driver/firmware issues

Hi,

I have spent the better part of this year planning and preparing the migration of some large databases onto virtual machines running on top of VMware vSphere.

While working through specs and other stuff I read up on loads and loads of forums, white papers, guides and anything else I could find on the subject.

In my research I started to find more and more posts that mentioned issues with drivers and/or firmware on VMware hosts, and not specific to any one vendor. Of course this worried me somewhat. So I did some more research on this.

My conclusion after reading through a lot of blog posts and speaking to multiple experts on VMware ESXi was very simple. Since larger and larger mission-critical systems are being virtualized, we are pushing the hardware a lot harder than we normally have. And when we push the hardware to 70%, 80% or even 100% utilization, flaws that were hidden before become much more visible than they were in the past, when systems only used around 30-40% of the resources available to the operating system.

Just thought I should write this down….especially since I am watching one of my DB hosts pushing its CPU hard! 🙂

Bgrds,
Finnur

Using LVM to migrate between arrays (and raw device mapped LUNs to VMFS backed ones)

Hi,

Recently I have been working on a project that requires me to migrate a few multi-terabyte databases from physical to virtual machines.

Since we were lucky enough that the LUNs for those databases were hosting LVM-backed filesystems, I was able to present the LUNs as RDMs to the VMware virtual machines, create new virtual hard disks and use the magical pvmove command to migrate the data.

The total downtime for each database is around 5-15 minutes and is mostly due to the fact that we have to present the LUNs to the virtual machine, mount the file systems and then chown the database files to a new uid/gid. After that is done the databases are started.

Once the database had been verified to work as expected, we created new virtual hard disks, ran pvcreate on them and added them to the volume group we were migrating (using vgextend).

After that we just fire up a trusty screen session (or tmux or whatever!) and run the mythical command: pvmove -i 10 -v /dev/oldlun /dev/newlun.

When that command finishes, we remove the old LUN from the volume group with vgreduce, run pvremove on it, and then remove the LUN from the virtual machine (you might want to run echo 1 >/sys/block/lunname/device/delete before you do that), unmap the LUN from the ESXi hosts and we are done!
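To make that a bit more concrete, here is a rough sketch of the LVM side of one migration (myvg is a made-up example name for the volume group; /dev/oldlun and /dev/newlun are the same placeholders as above):

# initialise the new virtual hard disk as an LVM physical volume
pvcreate /dev/newlun
# add it to the volume group we are migrating
vgextend myvg /dev/newlun
# move all extents off the old RDM LUN onto the new disk, reporting progress every 10 seconds
pvmove -i 10 -v /dev/oldlun /dev/newlun
# take the old LUN out of the volume group and wipe its LVM label
vgreduce myvg /dev/oldlun
pvremove /dev/oldlun
# tell the kernel to drop the old device (oldlun here is its kernel device name, e.g. sdd) before detaching it from the VM
echo 1 > /sys/block/oldlun/device/delete

Once pvmove returns, all the data is on the new virtual disk and the old LUN can be detached from the VM and unmapped from the ESXi hosts.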

The biggest reason for us not to keep using RDMs is that the flexibility we get from native virtual disks pretty much outweighs any performance gains we might get (with emphasis on might) from RDMs (although I have yet to see any performance loss from using VMFS). And when we finally make the jump to vSphere 6.x, I can migrate those virtual disks straight to VVols.

The only sad thing in our case is that with this method we are stuck on EXT3, since the file systems are migrated over from old RHEL5 machines. I’m not sure I want to recommend that anyone run a migration from EXT3 to EXT4 on 6-16TB file systems 😀 (at least make sure you have a full backup available before testing that!).

Bgrds,
Finnur