I recently encountered an issue in vSphere 5.5 where I wasn’t able to change the interface speed on an ESXi host from auto-negotiate to anything else. After a good amount of troubleshooting, I determined that the NIC firmware and driver versions on the blade server were out of date. VMware has a good KB article on how to grab the firmware and driver versions, which I followed. The problem is that even a modest ESXi host without 10G networking will likely have a minimum of four NICs. That means enabling SSH on every host you want to check and running one command per NIC on each of them (or running a one-line script that loops through them, but will that really save you time?). It doesn’t take a very large cluster for that to become a major endeavor.
This PowerShell script will connect to a vCenter server, allow you to scan all ESXi hosts, or only hosts within a particular cluster, and output the results in object format so that you can manipulate them how you wish.
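The core of the approach looks roughly like this (a minimal sketch, not the full script — the vCenter and cluster names are placeholders, and the exact property names on the esxcli object can vary by driver):

```powershell
# Hedged sketch: gather NIC driver/firmware versions via PowerCLI instead of SSH.
# Requires VMware PowerCLI; vcenter.example.com and Prod-Cluster are placeholders.
Connect-VIServer -Server vcenter.example.com

$vmhosts = Get-Cluster -Name 'Prod-Cluster' | Get-VMHost   # or Get-VMHost for every host

$results = foreach ($vmhost in $vmhosts) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    foreach ($nic in ($vmhost | Get-VMHostNetworkAdapter -Physical)) {
        $info = $esxcli.network.nic.get.Invoke(@{nicname = $nic.Name})
        [pscustomobject]@{
            Host            = $vmhost.Name
            Nic             = $nic.Name
            Driver          = $info.DriverInfo.Driver
            DriverVersion   = $info.DriverInfo.Version
            FirmwareVersion = $info.DriverInfo.FirmwareVersion
        }
    }
}

# Because the results are plain objects, you can sort, filter, or export them
$results | Sort-Object Host, Nic | Format-Table -AutoSize
```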
I like having documentation, but I hate creating it. I’ll be the first to admit that I’m slightly lazy at times; however, my personal preferences for what I’ll call “effort allocation” are not the root of my dislike for creating documentation. The real issue is that creating it is time consuming, tedious, and usually low on the priority list.
However, sometimes it’s not you who failed to create the documentation. Consultants frequently fall into this category.
The issue I’m handling here is documenting CDP information from the perspective of ESXi hosts using PowerCLI.
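A minimal sketch of the technique: each host’s network system view exposes a `QueryNetworkHint` method that returns the CDP neighbor details for a physical NIC. The vCenter name below is a placeholder.

```powershell
# Hedged sketch: collect CDP switch/port info for every physical NIC,
# via the vSphere API exposed through PowerCLI.
Connect-VIServer -Server vcenter.example.com

$cdp = foreach ($vmhost in Get-VMHost) {
    $netSys = Get-View $vmhost.ExtensionData.ConfigManager.NetworkSystem
    foreach ($pnic in $vmhost.ExtensionData.Config.Network.Pnic) {
        $hint = $netSys.QueryNetworkHint($pnic.Device)
        [pscustomobject]@{
            Host       = $vmhost.Name
            Nic        = $pnic.Device
            SwitchId   = $hint.ConnectedSwitchPort.DevId
            SwitchPort = $hint.ConnectedSwitchPort.PortId
        }
    }
}

# Export straight into your documentation
$cdp | Export-Csv -Path .\cdp-report.csv -NoTypeInformation
```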
Over the past few months I’ve been doing a lot of work with VMware Horizon 6. I’m not going to go into details, as VMware has already done a great job of that, and I’m under an NDA. Suffice it to say, the details VMware has provided are enough if you’re just looking for information.
In my company-sponsored lab environment, I have HWS 1.8 deployed with feature/option parity to production. Deploying a second instance of HWS (perhaps a pre-release version) is challenging because of the DNS/reverse-DNS checks it performs.
Anyway, the reason you’re probably here is to find out how to programmatically modify DNS records, so here ya go!
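A minimal sketch using the DnsServer module (available on Windows Server 2012+ / RSAT); the DNS server, zone, record name, and IP are placeholders for my lab values:

```powershell
# Hedged sketch: swap an A record to point at a second HWS instance.
# dc01/internal.example.com/hws/10.0.0.50 are all placeholders.
$dnsServer = 'dc01.internal.example.com'
$zone      = 'internal.example.com'

# Remove the existing A record for the name...
Get-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone -Name 'hws' -RRType A |
    Remove-DnsServerResourceRecord -ComputerName $dnsServer -ZoneName $zone -Force

# ...then re-create it, along with the PTR record the reverse-DNS check needs
Add-DnsServerResourceRecordA -ComputerName $dnsServer -ZoneName $zone -Name 'hws' `
    -IPv4Address '10.0.0.50' -CreatePtr
```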
For anyone who’s ever been through the process of provisioning a new datastore to multiple ESX hosts, you know it can take some time. Below are the steps I use:
- Create Volume on NetApp
- Set Security Style to Unix
- Enable Storage Efficiency
- Set NFS Export permissions to allow Read/Write + Root Permissions to all applicable hosts
- Mount datastores on ESXi hosts
For a handful of hosts this is fine, but doing it on anything more than 4-5 hosts is reaaally painful in my experience. Below is a script you can use to take care of all of these steps in one pass.
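The steps above can be sketched like this with the NetApp Data ONTAP PowerShell Toolkit plus PowerCLI. Treat it as a sketch: the filer, aggregate, volume, and host names are placeholders, and some parameter names (particularly on the qtree and NFS export cmdlets) may differ slightly by toolkit version.

```powershell
# Hedged sketch of the provisioning steps; all names are placeholders.
Import-Module DataONTAP
Connect-NaController -Name filer01.example.com

# 1. Create the volume on the NetApp
New-NaVol -Name vol_nfs01 -Aggregate aggr1 -Size 500g -SpaceReserve none

# 2. Set the security style to unix (parameter names may vary by toolkit version)
Set-NaQtree -Qtree /vol/vol_nfs01 -SecurityStyle unix

# 3. Enable storage efficiency (dedupe)
Enable-NaSis -Path /vol/vol_nfs01

# 4. Export read/write + root to the applicable hosts
Add-NaNfsExport -Path /vol/vol_nfs01 -Persistent `
    -ReadWrite 'esx01.example.com','esx02.example.com' `
    -Root 'esx01.example.com','esx02.example.com'

# 5. Mount the datastore on every host in the cluster
Connect-VIServer -Server vcenter.example.com
Get-Cluster -Name 'Prod-Cluster' | Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Name 'vol_nfs01' -Nfs `
        -NfsHost 'filer01.example.com' -Path '/vol/vol_nfs01'
}
```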
Does anyone actually find NetApp OnCommand System Manager to be fast enough for normal operation? I’ll admit I still create a good number of volumes and LUNs with it, but it leaves a lot to be desired in the performance department. If you follow my blog at all, you know that I’m in the middle of a migration from a non-HA Exchange environment to a DAG. Being the sensible admin that I am, I keep multiple copies of my Exchange databases on different storage arrays, controlled by different NetApp filers. Using System Manager to monitor the space usage of the volumes hosting my mailbox databases is way too slow for my comfort.
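The gist of the faster approach, as a sketch using the Data ONTAP Toolkit (the filer name and volume naming pattern are placeholders, and the exact property names on the volume objects may differ by toolkit version):

```powershell
# Hedged sketch: pull space usage for the mailbox-database volumes from the
# CLI instead of waiting on System Manager. Names are placeholders.
Import-Module DataONTAP
Connect-NaController -Name filer01.example.com

Get-NaVol |
    Where-Object { $_.Name -like 'vol_mbx*' } |
    Select-Object Name,
        @{N = 'UsedGB';  E = { [math]::Round($_.SizeUsed  / 1GB, 1) }},
        @{N = 'TotalGB'; E = { [math]::Round($_.SizeTotal / 1GB, 1) }}
```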
I created a new version of this script here
I recently completed a project that involved migrating the Exchange 2010 Mailbox role from a standalone server to a Database Availability Group, or DAG. It was a large project that took a lot of time and planning, and had the potential to be very tedious. Fortunately, with a little know-how, you can automate many of the tedious tasks.
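As a taste of the automation, here is a hedged sketch of a controlled batch move using the Exchange 2010 Management Shell; the OU, target database, and batch names are placeholders:

```powershell
# Hedged sketch: stage a batch of mailbox moves, suspend them just before
# completion, then finish them during the maintenance window.
$batch = Get-Mailbox -OrganizationalUnit 'OU=Sales,DC=internal,DC=example' -ResultSize Unlimited

$batch | New-MoveRequest -TargetDatabase 'DAG-DB01' -BatchName 'SalesWave1' `
    -SuspendWhenReadyToComplete

# Check on progress, then let the suspended moves finish when you're ready
Get-MoveRequest -BatchName 'SalesWave1' | Get-MoveRequestStatistics
Get-MoveRequest -BatchName 'SalesWave1' | Resume-MoveRequest
```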
I wanted to be in full control of mailbox migration, so my requirements were fairly strict:
This falls into the “Come on Microsoft” category.
I have been writing a script that gathers a bunch of information from servers and returns it as an object. Part of what I’m gathering is the list of services that are set to start automatically but are currently stopped.
While you can get a list of services that meet part of those criteria like this:
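A sketch of the gap: `Get-Service` (in the PowerShell versions of that era) reports the running state but not the startup type, so it only covers half the criteria, and WMI has to fill in the rest:

```powershell
# Get-Service covers the "stopped" half of the criteria...
Get-Service | Where-Object { $_.Status -eq 'Stopped' }

# ...but WMI exposes StartMode too, so one query finds
# auto-start services that are not running:
Get-WmiObject -Class Win32_Service -Filter "StartMode = 'Auto' AND State = 'Stopped'" |
    Select-Object Name, DisplayName, StartMode, State
```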
I encountered a challenge today that was fun to fix. There’s an Organizational Unit in my AD setup that has historically been used to store disabled AD objects instead of deleting them.
When an employee leaves the organization, our standard procedure is as follows:
- Disable User Object
- Move to separate OU (e.g., AD://internal.msd/disabled/users)
- Update Description field with something like: Disabled by [username] on [date]
- Retain user object for x days, then tombstone it.
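The first three steps above can be sketched with the ActiveDirectory module; the account name is a placeholder, and the OU distinguished name is my guess at the DN form of the example path:

```powershell
# Hedged sketch of the offboarding steps; 'jdoe' and the OU path are placeholders.
Import-Module ActiveDirectory

$user = Get-ADUser -Identity 'jdoe'

# 1. Disable the user object
Disable-ADAccount -Identity $user

# 2. Stamp the description before moving it
Set-ADUser -Identity $user `
    -Description ("Disabled by {0} on {1:d}" -f $env:USERNAME, (Get-Date))

# 3. Move it to the disabled-objects OU
Move-ADObject -Identity $user -TargetPath 'OU=users,OU=disabled,DC=internal,DC=msd'
```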
Best laid plans of mice and men… yada yada…
So yesterday I talked about using a PowerShell script within SolarWinds to monitor volume sizes. Using the NetApp Data ONTAP Toolkit, we can monitor a lot of different things and track the information in SolarWinds. In this post I will show how to monitor SnapMirror relationships using SolarWinds.
I was unable to find any examples of how to do this online, so I came up with my own solution using the NetApp Data ONTAP Toolkit and a slight modification to the PowerShell profile.
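A rough sketch of the idea, assuming the SolarWinds SAM Windows PowerShell monitor convention of emitting `Statistic`/`Message` lines (the filer name is a placeholder, and the lag property name on the SnapMirror objects may differ by toolkit version):

```powershell
# Hedged sketch: report the worst SnapMirror lag in a form the SolarWinds
# SAM PowerShell component can consume. filer01 is a placeholder.
Import-Module DataONTAP
Connect-NaController -Name filer01.example.com | Out-Null

$mirrors = Get-NaSnapmirror
$worst   = $mirrors | Sort-Object LagTime -Descending | Select-Object -First 1

# SAM parses "Statistic.<name>:" and "Message.<name>:" lines from stdout
Write-Host ("Statistic.LagMinutes: {0}" -f [math]::Round($worst.LagTime / 60))
Write-Host ("Message.LagMinutes: {0} -> {1} ({2})" -f `
    $worst.Source, $worst.Destination, $worst.Status)
exit 0
```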