Friday, March 24, 2017

A case of an unkillable Hyper-V snapshot

Well... not really unkillable, just hard to kill.
After a one-time backup with Backup Exec, our Hyper-V machine got stuck with a “backup” checkpoint, which could not be deleted from the GUI (no delete button).
So the first thing we needed to do was delete the rogue snapshot. PowerShell to the rescue!
Get-VMSnapshot -VMName <machinename> | Remove-VMSnapshot
Now we need to merge. The snapshot left an .avhd disk behind, and our machine’s system disk is considered a differencing disk. That’s no good.
Hyper-V Manager -> Edit Disk (Actions pane)
Now we need to point the wizard at the .avhd file and tell it to merge into the parent disk.
After the merge finished, my VM still thought its disk was under migration, so I removed the disk in the VM settings dialog (not from disk! =)) and then added the merged VHD back as a new drive.
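The same merge and reattach can also be done from PowerShell instead of the Edit Disk wizard. A minimal sketch, assuming the VM is off and the leftover file sits next to its parent (the VM name and paths here are illustrative):

```powershell
# Merge the orphaned differencing disk into its parent.
# Merge-VHD removes the child .avhd file after a successful merge.
Merge-VHD -Path 'D:\VMs\MyVM\disk.avhd' -DestinationPath 'D:\VMs\MyVM\disk.vhd'

# Detach the stale disk entry and attach the merged parent as the system disk.
Get-VMHardDiskDrive -VMName 'MyVM' | Remove-VMHardDiskDrive
Add-VMHardDiskDrive -VMName 'MyVM' -ControllerType IDE `
    -ControllerNumber 0 -ControllerLocation 0 -Path 'D:\VMs\MyVM\disk.vhd'
```

Either way, double-check in the VM settings that the machine now boots from the merged parent disk.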
And that’s about it.
Hope it will be helpful to someone!

Tuesday, March 21, 2017

xLiveMigration

First draft of Hyper-v Live Migration resource for DSC
Eventually, I’ll try to merge it into the xHyper-V module (if the guys who wrote it accept my code, of course), but for now I’ll keep it here.
With it you can enable VM migration and configure all of its options.
DSC FTW!
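For reference, what the resource manages boils down to the built-in Hyper-V cmdlets. A hand-rolled sketch of the same settings (the values here are illustrative):

```powershell
# Enable incoming and outgoing live migrations on this host
Enable-VMMigration

# Choose authentication, transport, and concurrency options
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -VirtualMachineMigrationPerformanceOption SMB `
           -MaximumVirtualMachineMigrations 2

# Restrict migration traffic to a dedicated network
Add-VMMigrationNetwork -Subnet '10.0.1.0/24'
```

The DSC resource just makes this declarative and idempotent across hosts.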

Tuesday, March 7, 2017

Create multiple VMs from CSV and assign IP addresses

The title says it all.

Basically, I needed to deploy multiple VMs on Windows Server 2012 R2, so I couldn't use PowerShell Direct to set up the machines. Fortunately, I discovered this post - a neat way to assign an IP to a VM via WMI.

The script itself is fairly straightforward - you can find it, along with a description, on GitLab.
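The WMI trick relies on the Msvm_GuestNetworkAdapterConfiguration class and the SetGuestNetworkAdapterConfiguration method of the management service. A condensed sketch of the idea (the VM name and addresses are illustrative, and the guest needs integration services running):

```powershell
# Find the running VM in the Hyper-V WMI namespace
$vm  = Get-WmiObject -Namespace 'root\virtualization\v2' -Class Msvm_ComputerSystem `
           -Filter "ElementName='MyVM'"

# Walk from the realized settings to the NIC's guest network configuration
$vsd = $vm.GetRelated('Msvm_VirtualSystemSettingData') |
           Where-Object { $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }
$nic = $vsd.GetRelated('Msvm_SyntheticEthernetPortSettingData') | Select-Object -First 1
$cfg = $nic.GetRelated('Msvm_GuestNetworkAdapterConfiguration') | Select-Object -First 1

# Fill in the static address we want injected into the guest
$cfg.DHCPEnabled     = $false
$cfg.IPAddresses     = @('192.168.1.10')
$cfg.Subnets         = @('255.255.255.0')
$cfg.DefaultGateways = @('192.168.1.1')
$cfg.DNSServers      = @('192.168.1.2')
$cfg.ProtocolIFType  = 4096   # 4096 = IPv4

# Ask the management service to push the configuration into the guest
$svc = Get-WmiObject -Namespace 'root\virtualization\v2' `
           -Class Msvm_VirtualSystemManagementService
$svc.SetGuestNetworkAdapterConfiguration($vm.Path.Path, @($cfg.GetText(1)))
```

The full script on GitLab wraps this in a loop over the CSV rows.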

Wednesday, December 4, 2013

XenDesktop7 & VMM2012 - No hosts available

Quick note:
You can experience error like this one:

creation of Image Preparation VM Failed, message Citrix.ManagedMachineAPI.ManagedMachineException: No hosts available

while using XenDesktop 7 and SCVMM 2012 SP1. In our case, it turned out that SCVMM had Hyper-V hosts in the 'update needed' state. After the update had been applied, MCS started working again.
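If you'd rather check for this from PowerShell than click through the VMM console, something along these lines should surface hosts that aren't fully healthy (a sketch, assuming the VMM cmdlets are available; the server name is illustrative):

```powershell
# Connect to the VMM server and list hosts whose overall state isn't OK
Get-SCVMMServer -ComputerName 'vmm01' | Out-Null
Get-SCVMHost |
    Where-Object { $_.OverallState -ne 'OK' } |
    Select-Object Name, OverallState
```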

Tuesday, November 5, 2013

Case of ClusterStorage.000

Recently I worked on an issue: after a reboot of one of the cluster nodes, virtual machines couldn’t migrate back to this node anymore. The cluster events log contained errors like these:

Cluster resource 'SCVMM pxe Configuration' of type 'Virtual Machine Configuration' in clustered role 'pxe' failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

The Cluster service failed to bring clustered service or application 'pxe' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
 
I checked the ClusterStorage folder and it turned out there were three of them: C:\ClusterStorage itself plus stale copies with the .000 and .001 suffixes.

It all looked like a good reason to dig into the cluster log:

00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] Cluster Shared Volume Root is C:\ClusterStorage
00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] UpdateClusDiskMembership(enter): nodeSet (1 2 3)
00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] CsvFs Listener already started...
00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] CsvFlt Listener already started...
00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] NFlt Listener already started...
00000cfc.000017c8::2013/11/01-20:00:36.081 INFO  [DCM] DeleteCsvShare: remove csv blockstream C:\ClusterStorage:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000cfc.000017c8::2013/11/01-20:00:36.081 WARN  [DCM] Failed to delete csv share CSV$ status 2310
00000cfc.000017c8::2013/11/01-20:00:36.097 WARN  [DCM] rename attempt C:\ClusterStorage => C:\ClusterStorage.000, status 183
00000cfc.000017c8::2013/11/01-20:00:36.113 WARN  [DCM] Renamed existing C:\ClusterStorage to C:\ClusterStorage.001
00000cfc.000017c8::2013/11/01-20:00:36.128 INFO  [DCM] CreateRootDirectory: keeping open handle HDL( bb4 ) to CSV root
00000cfc.000017c8::2013/11/01-20:00:36.128 INFO  [DCM] create CSV stream file C:\ClusterStorage:{db19d832-b034-46ed-a6c5-61e0ebe370d1}

Then I checked EMC PowerPath – it contained dead paths to our old SAN array. I deleted them, stopped the cluster service on the node, and deleted the ClusterStorage.000 and .001 folders. Then I started the cluster service again. Issue resolved!

Another, quite similar, issue once happened with our file cluster - again, the culprit was an old CSV record that had not been deleted correctly.

So, if you face similar issues, all you need to do is delete the unnecessary ClusterStorage folders while the cluster service is stopped, and remove the obsolete links to the old array in your multipath software so that the stale folders won't be accidentally recreated.
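That cleanup sequence on the affected node can be sketched as follows (run elevated; the folder names are whatever stale copies you actually find):

```powershell
# Stop the cluster service so nothing holds the CSV root open
Stop-Service -Name ClusSvc

# Remove the stale mount-point folders left behind by the failed rename
Remove-Item -Path 'C:\ClusterStorage.000','C:\ClusterStorage.001' -Recurse -Force

# Bring the node back into the cluster
Start-Service -Name ClusSvc
```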

Hope this will be helpful to you.