vSphere ESXi 5.0 Update 2 and Nexus 1000v
A few weeks ago I upgraded some clusters at a customer site from 5.0 U1 to 5.0 U2. All of the clusters used the Nexus 1000v (N1kv) for network connectivity. It was the first time I had upgraded an N1kv environment, because in Austria the N1kv is not widely used. Before I started, I read the N1kv Upgrade Guide and used the "Cisco Nexus 1000V and VMware ESX/ESXi Upgrade Utility", which can be found here. I upgraded the first cluster (a lab environment) including the N1kv, and it ran smoothly and without problems. But after the first ESXi host in one of the production clusters, some error/info messages regarding VLANs and channels came up on the VSM:
VSM01 %VEM_MGR-SLOT9-2-VEM_SYSLOG_CRIT: VLAN_MISCONFIG : Eth9/3 and Eth9/5 are carrying vlans [VLAN IDs]. Ensure that channel is configured on profile carrying multiple ports on same VEM and no more than one channel on a VEM is carrying the same vlan. Please ignore this message if any of these ports is configured as a local span destination.
I talked to the network admins; they knew the problem and fixed it for me, but in the meantime they told me that the VEM module of the upgraded ESXi host was now showing up under a different module ID. I checked this as well but couldn't find any clue why it had happened.
Because I had had no problems updating the lab environment, I connected to the lab VSM and checked the modules, with the following outcome.
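The modules were checked with the show module command on the VSM. As a sketch of what that output looks like (module numbers, models, and states here are illustrative, not taken from the actual environment):

```
VSM01# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok
```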
I saw that the last module was not licensed, but I didn't know why. I used another command to check the license info/status on the VSM:
show license usage NEXUS1000V_LAN_SERVICES_PKG
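For the named package, this command lists which VEM sockets are consuming licenses. A sketch of the output (the VEM numbers are hypothetical, and the exact layout may differ between N1kv releases):

```
VSM01# show license usage NEXUS1000V_LAN_SERVICES_PKG
Application
-----------
VEM 3 - Socket 1
VEM 3 - Socket 2
VEM 4 - Socket 1
-----------
```

An orphaned VEM slot that still holds a license keeps showing up in this list even though the module itself is no longer online.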
In the command output you can see the problem: the VEM uses not only a new module slot but also a new license. Because all 20 licenses were already in use by the "old" VEMs, the new VEMs consumed the overdraft licenses, of which there are only 16, so one VEM ended up unlicensed. I did a little research and found out that VEMs are bound to the ESXi host's server UUID, which is created at installation time and should never change. I checked the server UUID before and after the ESXi update and saw that it had changed, and I wondered why. In the release notes of ESXi 5.0 Update 2 I found the answer:
SMBIOS UUID reported by ESXi 5.0 hosts might be different from the actual SMBIOS UUID
If the SMBIOS version of the ESXi 5.0 system is of version 2.6 or later, the SMBIOS UUID reported by the ESXi 5.0 host might be different from the actual SMBIOS UUID. The byte order of the first 3 fields of the UUID is not correct.
This issue is resolved in this release.
So previous releases of ESXi 5.0 contained a bug that calculated a wrong server UUID. After applying Update 2, the host recalculated the server UUID the right way, and so all VEMs were bound to a new UUID. Thankfully there is an easy way to reallocate the licenses used by the orphaned VEMs to the new VEMs:
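To compare the UUID the host reports with the one the VEM is bound to, you can query both the ESXi side and the VEM side. Both commands exist on ESXi 5.x with the N1kv VEM installed; the output lines are only a sketch:

```
~ # esxcli hardware platform get
Platform Information
   UUID: 0x.. 0x.. 0x.. ...
   ...
~ # vemcmd show card | grep -i uuid
```

If the UUID shown by the host after the update no longer matches the one the VSM has on record, the VEM will register as a new module.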
- Connect to the VSM that manages the affected VEMs
- Enter the command configure to enter configuration mode
- Use svs license transfer src-vem [VEM-MODULE-#] license_pool to transfer the license from the given VEM to the license pool
- Transfer all orphaned licenses to the pool before transferring them to the new VEMs
- Use svs license transfer license_pool dst-vem [VEM-MODULE-#] to transfer a license from the license pool to the given VEM
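Put together, a transfer session on the VSM could look like this (module numbers 3 and 9 are placeholders for an orphaned and a new VEM, respectively):

```
VSM01# configure
VSM01(config)# svs license transfer src-vem 3 license_pool
VSM01(config)# svs license transfer license_pool dst-vem 9
VSM01(config)# exit
VSM01# show license usage NEXUS1000V_LAN_SERVICES_PKG
```

Running show license usage again at the end lets you verify that the licenses now sit on the new VEM modules.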
After this procedure all VEMs should be correctly licensed again, and the "overdraft licenses in use" count should be back at 0.