All posts by finnzi

Migrating Solaris 10 to zones hosted on Solaris 11 – How I learned to (somewhat) like Solaris!

Hi,

NOTE: There is not much technical info here!

Anyone who has ever worked with me has probably gotten a pretty good idea of how I somewhat dislike most proprietary UNIX systems. Sure, it probably mostly has to do with the fact that I am very fond of Linux (which was the first *NIX system I ever played around with). Although I have never pretended to be a UNIX expert, I have had to learn far more than I ever thought I would about HP-UX, AIX and Solaris. Yes – I am the guy who wants to migrate everything to Linux (except when the other platform is the better tool for the job).

However, I often end up being one of the few guys with any experience with any kind of *NIX system, so if there is an old UNIX machine around it will probably end up in my hands at some point.

Now – I have an application in a dev/test/prod environment, running on a couple of Sun M4000s and a single T4-1 machine, that is one of those cases. This application is expected to live for years to come and the machines were showing their age. After spending a night last winter replacing the motherboard in the prod machine with the one from the test machine, I had a good case for a hardware replacement (not that we didn’t really have a good case before – it just moved things a little higher up the priority list). And after a couple of meetings we decided to go for a couple of T7-1 machines….and to get a contractor to help us out with the migration.

The T7-1s are probably overkill for what we had to do, but since we needed SPARC machines they were the best option – and I was pretty amazed after receiving the quote for the hardware. The pricing was far less than I thought – even with 24×7 3-year hardware and software support. And yes….they actually weigh a lot less than those damn M4000s!

We finally got to work and the contractor set up a plan for us. The machines would be installed with Solaris 11 (Oracle VM Server for SPARC really, I guess) and run three logical domains (LDOMs) – one for each instance of the application. We would then migrate each Solaris 10 install into a zone on the corresponding global zone.

This was the first time I did any real work on Solaris 11, and I had someone helping me out who was more than capable with Solaris. Long story short, with the help of our contractor we quickly had the three LDOMs ready for action. The contractor showed me some ldm magic (ahhh, hello ldm migrate!) and I have to give it to Oracle – they have done wonders with Logical Domains and migration. We played around with the vHBA stuff, but it seems it is still a bit buggy, so we mapped the disks through the ldm interface instead. The guys at Oracle might want to fix that though – vHBA makes disk management very easy. If I remember correctly, IBM has already mastered this with the VIOS (Virtual I/O Server).
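
For the curious, a live migration is pretty much a one-liner (the domain and host names here are made up, and the target machine needs compatible CPUs and access to the same virtual disk back ends):

ldm migrate-domain -n appldom root@target-t7   (dry run – just checks whether the migration would succeed)
ldm migrate-domain appldom root@target-t7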

But – I cannot praise the migration process highly enough. Solaris 10 zones on Solaris 11 are pretty darn cool. A native Solaris tool is used to create a backup (a flar archive) of the source operating system installation, which is then restored into a Solaris 10 branded zone hosted on a Solaris 11 global zone. Data is migrated separately: we used dd for the raw devices, and for the file data we either used NFS or just took a storage snapshot on our SAN and copied the data with the usual tools between the LUNs, from the old UFS filesystems over to new ZFS ones.
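
For anyone wanting the gist of it, the flow looks roughly like this – zone and archive names are made up, and the zonecfg part is trimmed down to the bare essentials:

On the Solaris 10 source:
flarcreate -n myapp-prod /net/nfshost/export/myapp-prod.flar

On the Solaris 11 global zone (inside the LDOM):
zonecfg -z myapp-prod
> create -t SYSsolaris10
> set zonepath=/zones/myapp-prod
> commit
> exit
zoneadm -z myapp-prod install -u -a /net/nfshost/export/myapp-prod.flar
zoneadm -z myapp-prod boot

(The -u flag runs sys-unconfig on the restored image; use -p instead if you want to preserve the old system identity.)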

After fixing up some symlinks and some permissions we were able to start the application in about 2 hours. Rinse and repeat for each instance. However, I must admit that we had some issues with the first attempt at migrating the test system (which was the first system we tried to migrate from an M4000 – dev was on the T4-1), so we had to give it another try.

With the migration done, we have had the application running on the new hardware for about a month now and man, those beasts fly! I have done some reading on Solaris 11 and working with it ain’t all that bad. IPS makes installing and patching packages a breeze. Man…I wish we had IPS on Solaris 10, since we still have to manually update those Solaris 10 zones!
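
For comparison, the whole install-and-patch dance on Solaris 11 boils down to this (package name is just an example):

pkg install wget    (installs a package plus its dependencies)
pkg update          (updates the whole image, spinning up a new boot environment when needed)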

The point of this post (moral of the story?): I thought I would never say this – but I have actually learned to like Solaris 11 somewhat. It has come a long way since Solaris 10, and I really think both Solaris and SPARC are going to be here for a long time to come…..at least in the enterprise.

……(and this fondness for Solaris probably has something to do with the fact that I was working with someone who actually knew what they were doing :-))

Bye!
Finnur

STOR2RRD to the rescue once again!

Hi,

A few weeks ago we decided to upgrade the firmware on a couple of storage systems. Everything seemed to go as planned, but after the upgrade we started to notice some latency issues.

I won’t go into detail about what was wrong (hint: HDD firmware can cause some serious issues!) – but this tool, STOR2RRD, has saved us loads and loads of time – and it is about as simple as a storage monitoring tool can be. It ain’t perfect, but it pointed us in the right direction and helped get this specific issue solved.

Cannot recommend this tool enough (and it is free and open source licensed under the GPL v3!).

Bgrds,
Finnur

azure-cli and Azure DNS!

Hi!

I have been looking at using Azure DNS for my domains for a couple of weeks. So, yesterday morning while my SO was still sleeping, I went in for the kill and researched how I could import BIND zone files without having access to a machine with PowerShell.

Hello azure-cli! A magical little tool that runs on pretty much any platform capable of running node.js. My interest in Azure just skyrocketed after finding out MS has actually spent some time working on this brilliant little tool. A quick “npm -g install azure-cli” from my Mac OS X laptop got me going.

But back to the Azure DNS magic.

First, make sure you have a copy of your BIND zone file.

Next, switch azure-cli to ARM config mode:
azure config mode arm

Then create the zone in Azure DNS:
azure network dns zone create myresourcegroup myawesomedomain.com

Then import the BIND zone file (where myawesomedomain.com.txt is my BIND zone file):
azure network dns zone import myresourcegroup myawesomedomain.com myawesomedomain.com.txt

Now, check which NS servers you were assigned:
azure network dns record-set show myresourcegroup "myawesomedomain.com" "@" NS

Then go to your domain registrar’s control panel and point your domain to the Azure DNS name servers listed by the above command.
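
To double-check that the delegation took once the registrar change has propagated, something like this (assuming you have dig handy) should come back with the Azure name servers:

dig +short NS myawesomedomain.com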

You can get more info from this MS documentation site.

And….if you have a huge domain estate then you might be interested in knowing that the Men&Mice Suite actually supports managing DNS records in Azure DNS! 🙂

Bgrds,
Finnur

All Flash…and other SAN stuff

Hi all,

This year I have (among other things) been working on migrations from older storage arrays to some newer all-flash arrays. Man, those babies scream!

It was awesome to see the latency drop from 5-10ms+ to under 1ms….at times less than 0.4ms. You also start to see the flaws of older filesystems (read: ext3).

In the process we also moved from 8Gbit FC to 16Gbit FC to be able to push more bandwidth as well as IOPS.

Some unforeseen problems (which everyone should look out for :))…….it can be surprisingly easy to saturate the back-end bandwidth on your arrays when everything is doing 16Gbit….and you run backup jobs on top of some huge batch processes plus your normal workload 🙂 A single 16Gbit FC port can move roughly 1.6GB/s in each direction, so it only takes a handful of busy hosts to outrun what the array can serve from its back end.

Just some words of advice – when you move onto faster equipment you often move problems from one place to another!

Bgrds,
Finnur

Interesting info on the IBM FlashSystem V9000 and VMware by Dusan Tekeljak

Howdy,

While trying to find up-to-date information on SCSI UNMAP and IBM Storwize products I stumbled upon this blog post by Dusan Tekeljak, which I found very interesting.

Especially the best practice part on RTC:
“To get best Real-Time Compression performance use at least 8 compressed volumes (LUNs) per V9000. Regardless what sales people tell you, it is not good thing from performance point of view to create one big volume (and not even talking from VMware point of view). There are 8 threads dedicated for RTC and one volume can be handled by 1 thread only.”

I am pretty sure everyone working with RTC on IBM Storwize/SVC products would like to have this written in bold letters on the product spec sheet so they can configure their volumes for best performance!

Bgrds,
Finnur

LPAR2RRD – A nice tool to gather (and watch) your historic VMware performance data

About 6 months ago I installed LPAR2RRD to graph a VMware environment that holds a few hundred VMs. At the time I had some issues and didn’t really use it, but I kept it installed and let it gather data.

However, I installed the newest version a couple of days ago and was a bit impressed. Although this small tool does not look like much, it is a simple solution for gathering performance data and helping you debug performance issues in your VMware environment. I have never used the LPAR (AIX) functionality, but that part seems even more complete.

The errors I kept getting – mostly when looking at cluster data, plus some issues with the datastore graphs – have been cleaned up (I didn’t really spend any time debugging those when I did the initial installation).

I sound like a bloody advertisement but I have no connection to the company that is working on the product except that I am a (happy) user 🙂

Bgrds,
Finnur

3PAR support in STOR2RRD

Hello!

I have been a user of STOR2RRD since ~2012, I think. It is a brilliant tool for gathering historical performance information on your IBM SAN Volume Controller based storage (Storwize, SVC, etc).

However, I was looking for a new version a few days ago and saw that they have added support for HP 3PAR. They have also added support for some HDS storage as well as NetApp. So now, if you cannot cough up the dough for [insert your best-of-breed storage management tool here] you might have a cool alternative there 🙂

Of course, this is no replacement for IBM’s TPC products, but if you are looking for a single tool to gather performance statistics for your storage arrays and your Brocade fabrics, look no further. This is a very valuable tool to use in daily operations.

Check out their webpage here.

Bgrds,
Finnur