Category Archives: Technical

Cisco UCS: vHBA bandwidth

I never really understood how Cisco UCS vHBAs are configured with regard to bandwidth (coming from an FC background).

Finally I got it spelled out for me like I was five… IT IS JUST AN ETHERNET PORT (yes, yes… I knew that. But I really thought there was some more magic involved). IT JUST SYNCS AT THE SAME SPEED AS THE FEX PORT IT IS CONNECTED TO. That means if you have a blade with a VIC, a 6332 FI and a 2304 FEX, and you do not have the port expander, it will be configured as a 20Gbit port (2x10Gbit) with a single flow maxing out at 10Gbit/s (to the FI – not taking into account breakout speeds from the FI to your Ethernet and storage networks). If you do have the port expander for the VIC, you get native 40Gbit/s with the 2304 FEX and the 6332 FI (a single flow can reach 40Gbit/s from the FEX to the FI).
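To make the rule concrete, here is a tiny Python sketch of the arithmetic (the function and its shape are purely my own illustration, not anything from Cisco):

```python
def vic_bandwidth(lanes: int, lane_speed_gbit: int, port_expander: bool):
    """Toy model of vHBA/vNIC bandwidth on a UCS blade.

    Without the port expander the VIC uplink behaves like a
    port-channel of individual lanes: the aggregate is
    lanes * lane_speed, but a single flow is pinned to one lane.
    With the port expander the link is a native single port, so
    one flow can use the full speed.
    """
    aggregate = lanes * lane_speed_gbit
    single_flow = aggregate if port_expander else lane_speed_gbit
    return aggregate, single_flow

# 2304 FEX + 6332 FI, no port expander: 2x10Gbit port-channel
print(vic_bandwidth(2, 10, False))  # (20, 10)

# With the port expander: native 40Gbit
print(vic_bandwidth(1, 40, True))   # (40, 40)
```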

I had a real duh! moment there.

Now it is out there! Hopefully this can help some poor soul out there. I googled my life away for a couple of days and did not find a real answer.


Trying out an iPad Pro 12.9″ for sketching and drawing… and it is awesome!

I recently got my hands on an iPad Pro 12.9″ which I wanted to use for drawing sketches when I am working on issues or designs, since I always seem to have the need to visualize stuff when I am working (and attach the sketches to OneNote). My desk often looks like there has been a mass murder of post-its and notebooks. Those end up in the trash, and I end up having to hammer down a drawing in Gliffy (or I don’t… which isn’t exactly a good thing, since I often would like to remember what I was sketching later on).

So – I got an iPad Pro 12.9″, an Apple Smart Keyboard and an Apple Pencil.

Now I guess I have to admit I might have laughed hysterically at those crazy people buying into the iPad Pro + Apple Pencil hype. More than once. Probably more than three times, even… I have owned an iPad in the past (I think it was the iPad 3) but I mostly used it for watching TV episodes before going to sleep. And Skype a couple of times.

So I spent last night setting everything up. Sat down in my La-Z-Boy and got down to business – getting the iPad enrolled in our MDM, installing the apps I normally use on my laptop (SSH client, RDP client, Outlook, Word, Excel, PowerPoint, VPN client etc.). Played around with the Pencil.

My SO was sitting in her chair with her Surface Pro 4, doing her nightly surfing. She has been using the SP4 for two years if I remember correctly. She loves that thing. Probably more than me. But less than her cats.

After playing with the iPad for like an hour I start to realize how cool a device this actually is. And how useful (I had my doubts it would actually be this useful). I start mumbling something about how awesome it is to be able to sketch with the Pencil (which is better than any stylus I have tried before). Keep using the iPad. Keep mumbling about how awesome it is. She suddenly looks at me and says: “I told you multiple times – having a tablet with a pencil is extremely useful”. While I have thought about getting a Surface I never acted on it – they are pretty expensive here in the land of ice and snow.

I guess I have to eat my words. The iPad Pro is actually one of the most useful devices I have used for work-related computing. And I can even use it for more than I thought I could do on an iPad. Which is a lot of researching, hammering at those pesky SSH terminals and replying to emails. I might even stop taking my laptop to meetings. And let me tell you – before last night I would never have told you I would replace my laptop for any task.

Don’t get me wrong – this device will not replace my laptop. But I will probably use the laptop less than before.

Hopefully I can pair a Bluetooth mouse with it and use it with my RDP client. Haven’t tried that yet. If it works, then I don’t think I will take my laptop with me on short trips over weekends etc.

I just have to say it. Apple might have created a market of devices that we don’t really need – but the iPad Pro is one brilliant device. While iOS is a little bit limited as a desktop OS, it has a couple of things going for it that Linux and Windows can’t keep up with. The battery life is awesome (at least on this thing, and so was the battery life on my old iPhone 6s Plus). I am still on the first charge on this bad boy, and I probably have 8 hours of screen-on time already.

My Lenovo T460s laptop chews through its battery in just 3-4 hours. And that thing is just a year old. But I can probably blame Chrome + extensions for that 🙂


UniFi Network kit – awesome stuff!

Recently I got fed up with yet another router (with integrated wireless) provided by my ISP. In the last 5-6 years or so I have gone through something like 6 of those – and had a horrible experience with each and every one of them. Well… to be fair – they all worked as expected as routers, but the wireless function was just a joke. Most of them were different versions of Thomson boxes provided by Siminn (my ISP at the time). Before all this I had always been using a Linux based router (and even OpenBSD and FreeBSD at some point!) along with a standalone access point, but due to limited time and other things going on I didn’t feel like spending time building yet another one (and my second trusty WRT54G had just died), so I just started using the router from my ISP to get things going again.

I moved to another apartment in November and got yet another router from my ISP (but this time I finally got a fiber connection!). Again – the wireless signal was horrible in some of the rooms, so I started checking out some new equipment.

My friends have been raving about the stuff from Ubiquiti – the EdgeMax routers and the UniFi APs. So I decided to try it out.

This stuff is brilliant! Luckily I have an old Linux box hooked up in a corner where I can run the UniFi controller software for the wireless access point. The configuration is very simple, and the pricing of those APs (and the routers) is a joke. I got a single UniFi AP-AC-LR and it just rocks.

Then I configured the EdgeRouter… for a kit that only costs $99 (or something like that) this thing is just awesome. I’m late to the game, but this little dude can route 1 Mpps, which you don’t normally find in such a small box (or at least couldn’t when it was released). Guess I won’t have to worry about that on my 100Mbit connection 😉

The GUI is pretty self-explanatory, and if you have ever worked with Cisco/Juniper kit you will find your way around the CLI quickly as well.
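For anyone curious what that CLI looks like, here is a minimal EdgeOS configuration sketch – the interface roles, addresses and rule number are made up for illustration, so check the EdgeOS documentation for your model before using anything like this:

```shell
configure

# WAN interface gets its address from the ISP via DHCP
set interfaces ethernet eth0 description "WAN"
set interfaces ethernet eth0 address dhcp

# LAN interface with a static address
set interfaces ethernet eth1 description "LAN"
set interfaces ethernet eth1 address

# Masquerade LAN traffic out the WAN interface
set service nat rule 5000 description "LAN to WAN"
set service nat rule 5000 outbound-interface eth0
set service nat rule 5000 type masquerade

commit
save
```

If you have touched Juniper's JunOS or Vyatta before, the `set … / commit / save` workflow will feel instantly familiar.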

So – for around $200 I finally have a home network that I don’t have to worry about!

For the next phase I am thinking about getting an NGFW… still debating whether I will go with a small Fortigate, Juniper, Palo Alto or just a UniFi Security Gateway – it would be awesome to be able to inspect SSL!



My experience with Nimble Storage in a POC I did nearly 9 months ago

About this time last year I was part of a project where we were looking at replacing the primary storage systems in our infrastructure.

While we did not end up going with Nimble Storage at the time, we did do a POC of the CS700 arrays (which have now been replaced by the more powerful CS7000).

Since we live on a small island in the North Atlantic, companies do not normally ship out a fully configured solution, so when the Nimble reps actually asked me if I wanted to do a POC – before I even asked them about the availability of such a program – I was pretty amazed. And when they were actually willing to ship a fully configured solution (two CS700 arrays) to us without any commitment, I was even more amazed. Of course, since Nimble is a small company in the storage world, this is probably the only way for them to get the larger companies to give them a chance when competing against the big dogs (EMC, HPE, NetApp).

When we were considering storage vendors we did not even think of Nimble Storage at first. I actually saw them at a VMUG meeting here in Iceland where they had a presentation about the solution. The presentation was on a Thursday evening, and I spent the days after wondering if the claims they laid out were actually true (X IOPS per array with 11 NL-SAS disks and 4 SSDs).

I sent the sales guy who was at the presentation a couple of emails over the weekend asking him questions about the solution. He got me in touch with a pre-sales technician who was able to answer my questions quickly and gave me a lot of stuff to think about.

Well, long story short – we ended up testing the CS700 arrays with ~200TB usable space and ~7TB of SSD cache. The technician came on a Tuesday morning at ~09:00, and about 2 hours after we started installing the array in the first datacenter we were actually done installing both arrays at two different sites and could start migrating data to them. The technician went through the basic stuff, but since the interface is so simple (and, well, the solution itself is just amazingly simple as a whole) he left us at ~14:00, if I am not mistaken. Nimble Storage shipped the arrays to us (and back) free of charge.

We moved loads of dev and test databases (and even some production ones later in the POC) onto the system and started having some fun. It was obvious that even though the system only uses NL-SAS drives as the backend storage, it is capable of pushing some extreme IOPS/throughput in comparison to a traditional array, thanks to the Adaptive Flash/CASL secret sauce. Performance was pretty good. However – since our workload is not only IOPS based but also throughput based, we did hit a small wall (please note that this was with the older CS generation; they have newer arrays out now that fix this somewhat). We actually have a window in our environment where we push more than ~1.6GB/sec (gigabytes) for quite a while (however, we were just maxing out the FC interfaces in the hosts/arrays – we later found out that we could push ~3GB/s on a faster array/faster network). And while the solution actually powered through things nicely, we were not happy with the max throughput available at the time.

While we were doing the POC I interacted with the support team often, especially when we were looking at the throughput issue. I had access to InfoSight during the POC and the information gathered there was very helpful. First we had a small latency issue – and the Nimble tech was able to point out that the issue was not in the Nimble array but in our VMware hosts. This ended up being a driver issue – so we got that fixed. After that, the support technicians went through a lot of data analysis (we had multiple phone meetings with support during this case) and helped us understand where the bottlenecks we were hitting came from and why the system did not perform the way we were expecting.

After finding out that the product wasn’t a fit for us at the time, they were understanding and didn’t hold any grudge against us, even though they had spent quite a lot of time trying to get things working. We packed the systems in the packaging they arrived in and got things shipped out.

The saddest part was that while the solution didn’t work out for us, I have never had as pleasant a support experience as I did during this ~30 day POC (and I have sadly had to deal with the support of multiple vendors, and most of the time it can be quite tedious). When we were debugging the host (latency) related issue they never tried to argue that it was not their problem (but of course this was a POC and maybe they were just being extra nice, I don’t really know) – I never got the feeling they were trying to push the problem over to anyone else.

Thumbs up for Nimble – I hope they will keep up the good work!


Vendor bashing in the storage space….

I’ve been spending some time reading up on different storage vendors over the last 6-12 months (yes… my personal interests are probably different from yours)… and it has been horrible to see employees of different companies fighting things out on forums.

Now – most of the time I have been searching for performance related stuff in customer stories on those forums, and without exception, whenever I find something interesting there is always an employee of a competitor in the forum thread yelling something bad about the competition. Yes… we get it, your product must be superior and completely bug-free.

What bothers me the most is that I have had sales people talk trash about other vendors directly to me. And often, those sales people have actually sold me – or a company where I am working or have been working – something that maybe was not the best solution at the time.

Vendors/resellers: can we please leave the bashing at home? I have no interest in buying a product from someone who spends his day talking crap about a competing product. It really just tells me that I should stay away from doing business with you!

If someone has actually decided to buy an IBM storage solution instead of something else because a vendor rep called NetApp “NetCrapp”, one might wonder if that decision was actually made on the right terms…


HPE OneView 2.0


Recently we upgraded from OneView 1.20 to OneView 2.0. The 2.0 version has been out for a while, but we didn’t upgrade right away. Luckily, as it turns out – because there were some serious issues with the upgrade from 1.x to 2.0 in the first release. Earlier this year they released an updated version, 2.00.07, that fixed those issues so one can safely upgrade 🙂

The biggest feature I was waiting for was the ability to move a server profile from one hardware type to another. For example: an HPE BL460c Gen9 with one mezzanine adapter is one type, and a BL460c Gen9 with two mezzanine cards is another. So a server profile created for one type of hardware could not be reused for another. Imagine how surprised I was when I was adding an HBA to a few blades and had to recreate those profiles from scratch! But now this is as easy as 1-2-3, since you just move the server profile to the new hardware type and start your server either on the same blade with the added HBA, or on another generation or type of blade. A small (and dare I say a little bit weird) issue, but the fix makes life just a little bit easier when adding new hardware to your blades (managed by HPE OneView).

Server Profile Templates are also brilliant – you simply create a VMware host template, then create server profiles based on that template. If you need to change the BIOS settings on all your VMware hosts, you can do so on the template and the changes should be pushed down to all your servers.
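Templates can also be driven through the OneView REST API. Below is a minimal Python sketch of the flow as I understand it – the appliance address, IDs and names are all made up, and you should double-check the endpoints against the REST API reference for your OneView version:

```python
import json

# Hypothetical appliance address – illustration only.
ONEVIEW = "https://oneview.example.com"

def profile_from_template(template_uri: str, name: str, hardware_uri: str):
    """Assemble the REST calls for creating a profile from a template.

    As I understand the OneView REST API, the flow is:
      1. GET  {template_uri}/new-profile  -> returns a profile skeleton
      2. set 'name' and 'serverHardwareUri' in the skeleton
      3. POST /rest/server-profiles with the result
    This function only builds the URLs and request body – no HTTP here.
    """
    skeleton_url = ONEVIEW + template_uri + "/new-profile"
    body = {
        "name": name,
        "serverHardwareUri": hardware_uri,
        "serverProfileTemplateUri": template_uri,
    }
    create_url = ONEVIEW + "/rest/server-profiles"
    return skeleton_url, create_url, json.dumps(body)

skeleton, create, payload = profile_from_template(
    "/rest/server-profile-templates/1234-abcd",
    "esx-host-01",
    "/rest/server-hardware/5678-efgh",
)
print(create)  # https://oneview.example.com/rest/server-profiles
```

The same calls are wrapped by HPE's Python/PowerShell SDKs if you would rather not talk to the appliance by hand.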

Another feature I like a lot is the addition of Smart Update Tools: the ability to upgrade drivers as well as firmware from OneView without needing to shut down the machine, change the FW baseline to a newer version, wait for OneView to start up HPE SUM and finish its business, and then go into the OS and upgrade the drivers manually (or by running SUM locally in the OS). I have not tested it yet, but I am planning to do so later on. A cool feature, at least.

So far – cannot complain! 🙂


Migrating Solaris 10 to zones hosted on Solaris 11 – How I learned to (somewhat) like Solaris!


NOTE: There is not much technical info here!

Anyone who has ever worked with me has probably gotten a pretty good idea of how I somewhat dislike most proprietary UNIX systems. Sure, it probably has most to do with the fact that I am very fond of Linux (which was the first *NIX system I ever played around with). Although I have never pretended to be a UNIX expert, I have had to learn far more than I ever thought I would about HP-UX, AIX and Solaris. Yes – I am the guy who wants to migrate everything to Linux (except when the other platform is the better tool for the job).

However, I often end up being one of the few guys with any experience with any kind of *NIX system, so if there is an old UNIX machine around it will probably end up in my hands at some point.

Now – I have an application in a dev/test/prod environment that was running on a couple of Sun M4000s and a single T4-1 machine, and it is one of those cases. This application is expected to live for years to come, and the machines were showing their age. After spending a night last winter replacing the motherboard in the prod machine with the one from the test machine, I had a good case for a hardware replacement (not that we didn’t really have a good case before – it just moved things a little bit higher on the priority list). And after a couple of meetings we decided to go for a couple of T7-1 machines… and to get a contractor to help us out with the migration.

The T7-1s are probably overkill for what we had to do, but since we needed SPARC machines it was the best option – and I was pretty amazed after receiving the quote for the hardware. The pricing was far less than I expected – even with 24×7 3-year hardware and software support. And yes… they actually weigh a lot less than those damn M4000s!

We finally got to work and the contractor set up a plan for us. The machines would be installed with Solaris 11 (or Oracle VM for SPARC, really) and run three logical domains (LDOMs) – one for each instance of the application. We would then migrate each Solaris 10 install into a zone inside its domain.

This was the first time I did any real work on Solaris 11, and I had someone helping me out who was more than capable with Solaris. Long story short, with the help of our contractor we quickly had the three LDOMs ready for action. The contractor showed me some ldm magic (ahhh, hello ldm migrate!) and I have to give it to Oracle: they have done wonders with Logical Domains and migration. We played around with the vHBA stuff, but it seems it is still a bit buggy, so we mapped the disks through the ldm interface instead. The guys at Oracle might want to fix that though – it would make disk management very easy. If I remember correctly, IBM has already mastered this with the VIOS (Virtual I/O Server).
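For reference, the ldm side of a migration is pleasantly short. A command sketch of a live migration between two control domains – the domain name and target host are made up, so treat this as a reminder of the shape, not a runbook:

```shell
# On the source control domain: live-migrate guest domain ldg1
# to the control domain on target-host (prompts for a password,
# or use -p to supply one from a file)
ldm migrate-domain ldg1 root@target-host

# Afterwards, verify the domain state on the target
ldm list
```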

But – I cannot praise the migration process highly enough. Solaris 10 zones on Solaris 11 are pretty darn cool. A native Solaris tool is used to create a flash archive (flar) of the source operating system installation, which is then restored into a Solaris 10 branded zone hosted under a Solaris 11 global zone. Data is then migrated separately (we used dd for the raw devices, and for the file data either NFS or a storage snapshot on our SAN, copying with the usual tools from the old UFS filesystems over to new ZFS ones).
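A rough command sketch of that flow, from memory and with made-up zone names and paths (check the Oracle documentation on solaris10 branded zones before relying on it):

```shell
# On the Solaris 10 source: create a flash archive of the system
flarcreate -n sol10-app /net/nfs-server/export/sol10-app.flar

# On the Solaris 11 host: define a solaris10 branded zone...
zonecfg -z app-zone
  create -t SYSsolaris10
  set zonepath=/zones/app-zone
  commit
  exit

# ...install it from the archive (-u runs sys-unconfig on the image)
zoneadm -z app-zone install -a /net/nfs-server/export/sol10-app.flar -u
zoneadm -z app-zone boot
```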

After fixing up some symlinks and some permissions we were able to start the application in about 2 hours. Rinse and repeat for each instance. I must admit that we had some issues with the first attempt at migrating the test system (which was the first system we tried to migrate from an M4000; dev was on the T4-1), so we had to give it another try.

With the migration done, we have had the application running on the new hardware for about a month now, and man, those beasts fly! I have done some reading on Solaris 11, and working with it ain’t all that bad. IPS makes installing and patching packages a breeze. Man… I wish we had IPS on Solaris 10, since we still have to update those Solaris 10 zones manually!

The point of this post (moral of the story?): I thought I would never say this – but I have actually learned to like Solaris 11 somewhat. It has come a long way since Solaris 10, and I really think both Solaris and SPARC are going to be here for a long time to come… at least in the enterprise.

…(and this fondness for Solaris probably has something to do with the fact that I was actually working with someone who knew what they were doing :-))


STOR2RRD to the rescue once again!


A few weeks ago we decided to upgrade the firmware on a couple of storage systems. Everything seemed to go as planned, but after the upgrade we started to notice some latency issues.

I won’t go into detail about what was wrong (hint: HDD firmware can cause some serious issues!) – but this tool, STOR2RRD, has saved us loads and loads of time, and is about as simple as a storage monitoring tool can be. It ain’t perfect, but it pointed us in the right direction and helped us get this specific issue solved.

Cannot recommend this tool enough (and it is free and open source, licensed under the GPL v3!).