This is a static archive of JasonPearce.com, as it looked from September 2012 to May 2014.

Posted on May 21, 2014

Microsoft Xbox One is ignoring customer requests regarding Media Center Extender support

In June 2014, Microsoft will release a system update which, according to the company, means “two of your most-requested features will be here soon: external storage support and real names for identifying your friends.”

While both of these changes add value, neither addresses one of the most frequently requested features: bringing back Media Center Extender functionality.

Microsoft’s Windows Media Center was introduced in 2005 as a means of using your PC as a DVR. The Xbox 360 was the best Windows Media Center Extender. It offered an excellent way of streaming media stored in a central location on your home network.

Enthusiasts like myself adopted these solutions as an affordable and practical way to centralize photos, music, and video on a single storage repository (desktop, server, NAS) that would be easily accessible from multiple Xbox 360s. Years of created, recorded, and purchased content have been organized and retained.

The Xbox One, however, doesn’t offer an easy means of accessing any of this content on home networks. Instead, it focuses on streaming content from the internet, leaving those of us with a private cloud no way to consume our content from devices on our home network.

The Xbox Video team maintains an xboxvideo.uservoice.com forum that makes it easy for customers to make feature requests and vote for features suggested by others.

Care to guess what is the most-requested feature? It’s “Media Center Extender Support” with more than 8,800 user votes. The second highest request has only 1,600 votes.

Microsoft, please listen to your users. Those of us who use the Xbox One for media consumption desire Media Center Extender Support most of all. And while direct-attached external storage is helpful, more users want external storage support over the network, so that we can access the large media libraries of photos, music, and video that we have created or purchased over the years from a central storage repository on our home network.

Posted on May 21, 2014

Pure Storage and Round Robin

Yesterday, Pure Storage support and I performed a non-disruptive upgrade of the operating system (called Purity) that runs the controllers for our solid state SAN (storage area network). Prior to the upgrade, Pure Storage support wanted to confirm that all of our LUNs were configured to use Round Robin.

Specifically, their “VMware ESXi 5.1 Best Practice Guide for Pure Storage Arrays” article says the ESXi parameter called “Path Selection Policy” should be configured to use “Round Robin” instead of “Most Recently Used (MRU).” Both RR and MRU support failover. The difference is RR balances the load while MRU uses just one path at a time.

My favorite command line interface to our VMware vSphere environment is PowerCLI, which is a PowerShell interface. My previous “Using PowerCLI to change all LUNs to Round Robin” article offers a great introduction to the topic.

Getting to the point, I ran this command in PowerCLI to quickly check that all of my LUNs were properly balanced and configured to use Round Robin.

Get-VMHost | Get-VMHostHba -Type "FibreChannel" | Get-ScsiLun -LunType "disk" | Where {$_.MultipathPolicy -ne "RoundRobin"}

Sure enough, there was one newly-created LUN that was still using the default Most Recently Used multipath policy. All I had to do was run this command to properly configure it for all six hosts in that cluster.

Get-VMHost | Get-VMHostHba -Type "FibreChannel" | Get-ScsiLun -LunType "disk" | Where {$_.MultipathPolicy -ne "RoundRobin"} | Where {$_.CapacityGB -ge 100} | Set-ScsiLun -MultipathPolicy RoundRobin

Here’s what it does.

  • Get-VMHost: This cmdlet retrieves the hosts on a vCenter Server system
  • Get-VMHostHba -Type “FibreChannel”: This cmdlet retrieves information about the available HBAs (Host Bus Adapters) and selects only those that use Fibre Channel
  • Get-ScsiLun -LunType “disk”: This cmdlet retrieves the SCSI devices available on the vCenter Server system and selects only those that are disks, because I don’t want to configure Round Robin for things like CD-ROM drives
  • Where {$_.MultipathPolicy -ne “RoundRobin”}: A filter to find only those LUNs whose MultipathPolicy is not equal to RoundRobin (e.g. Fixed, MostRecentlyUsed, or Unknown)
  • Where {$_.CapacityGB -ge 100}: A filter to find only LUNs that are 100 GB or larger (which is just an extra safety-net filter I use to ensure I’m not including CD-ROM drives or direct attached storage)
  • Set-ScsiLun -MultipathPolicy RoundRobin: This is what actually makes the change. Any LUNs that matched the previous filters are set to use the RoundRobin MultipathPolicy.
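
For a quick sanity check before (or after) making changes, you can also group every Fibre Channel disk LUN by its multipath policy; once everything is correct, RoundRobin should be the only value reported. This is just a convenience sketch using standard PowerShell grouping, not something from the Pure Storage guide:

Get-VMHost | Get-VMHostHba -Type "FibreChannel" | Get-ScsiLun -LunType "disk" | Group-Object MultipathPolicy | Select Name, Count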

A few minutes later, I confirmed with Pure Storage that all LUNs from all vSphere hosts were using Round Robin. We then performed a successful and non-disruptive upgrade of our two storage controllers. I love my storage.

Posted on Apr 24, 2014

NetworkWorld Fave Rave on Pure Storage

NetworkWorld magazine featured a brief blurb on me regarding my appreciation for Pure Storage’s SSD SAN.

NetworkWorld Pure Storage fave rave

Posted on Apr 14, 2014

Comparing Bank Password Requirements

The recent Heartbleed OpenSSL vulnerability prompted another healthy round of resetting my passwords all over the web.

I already have a good understanding of, and a high regard for, password security thanks to Steve Gibson’s Security Now podcast and Password Haystacks service.

This time around, I have noticed more sites permitting longer passwords, which is great. There is one category, however, that still lags behind: banking.

Several of my financial institutions permit a maximum of only 12 to 15 characters. I continue to ask them to accept longer and more complex passwords, to no avail.

Perhaps it’s time for me to choose financial institutions that have a greater emphasis on security than low interest rates or cash-back rewards.

I have started a spreadsheet that compares the minimum and maximum password requirements for some of the larger banking institutions in the US. It’s a public Google Docs Spreadsheet, and I invite anyone to help contribute or edit it:

Comparing Bank Password Requirements

It is frustrating that the password that I use to protect my Netflix account is many times more secure than the passwords I’m permitted to use to protect my financial assets. I can use a 32-character password to protect content that isn’t even mine, but can’t do the same for my own money.

Please help me find a bank that cares about security. Any contributions to the Comparing Bank Password Requirements spreadsheet would be appreciated.

Posted on Mar 6, 2014

How to move automated persistent desktops to another cluster

My objective was to upgrade my vSphere hosts from 1 gigabit Ethernet to 10 gigabit Ethernet connections.

All of my VMware Horizon View 5.2 linked-clone desktops resided in a single cluster of 10 hosts that I’ll call Cluster A. I had an even mix of persistent and non-persistent desktops.

Cluster B was needed

Through trial and error, I determined that I would need to create and use a second cluster in my process of upgrading my hosts to 10 Gig Ethernet.

I was using Distributed Virtual Switches, and since no two port groups can have the same name, the linked-clone desktops (built from a parent virtual machine with specific port group settings) would run correctly on the 1 Gig hosts in the cluster but fail on the 10 Gig hosts, which used a different Distributed Virtual Switch and port group.

Also, VMware View Composer would build new linked-clone desktops on any host in the cluster without first checking if the desktop was configured to use a port group that existed on that host.

So began my task of figuring out how to move persistent and non-persistent linked-clone desktops from Cluster A to Cluster B. My plan was to remove a host from Cluster A, physically convert it to support 10 Gig, and add it to Cluster B. Then I would gradually shift desktop pools from Cluster A to Cluster B.

Here’s how I did it.

How to move non-persistent linked-clone desktops to a different cluster

This step was easy. In vSphere, I cloned my parent virtual machine (PVM) from Cluster A to Cluster B and changed its vNetwork settings to use the new 10 Gig port group. I then used vSphere to take a snapshot of the PVM on Cluster B.
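
For anyone who prefers the command line, the same three steps (clone, change the port group, take a snapshot) could be scripted in PowerCLI. This is only a rough sketch; the parent VM, datastore, and snapshot names below are hypothetical placeholders, not the names from my environment:

# Clone the parent VM to a host in Cluster B (VM, datastore, and snapshot names are placeholders)
$pvm = New-VM -VM "Win7-PVM" -Name "Win7-PVM-10Gig" -VMHost "host-in-cluster-b.example.local" -Datastore "Datastore-B"

# Point its network adapter at the 10 Gig port group
Get-NetworkAdapter -VM $pvm -Name "Network adapter 1" | Set-NetworkAdapter -Portgroup "dvPG-10G-VLAN-44" -Confirm:$false

# Take the snapshot that will be used as the pool's default image
New-Snapshot -VM $pvm -Name "Base-10Gig"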

Switching over to View Administrator, I edited the non-persistent desktop pool to Delete desktops on log off. On the vCenter Settings tab, I changed the default image (PVM) to the new one that resided on Cluster B and used all of the same datastores as before.

Non-persistent linked-clone desktops were automatically deleted from Cluster A and recreated on Cluster B as users logged off at the end of the day. A day or two later, my non-persistent pools were running on 10 Gig hosts.

How to move persistent linked-clone desktops to a different cluster

This was more difficult. As before, I used vSphere to clone my parent virtual machine from Cluster A to Cluster B, configured its vNetwork to use the new 10 Gig port group, and took a snapshot of the PVM.

In View Administrator, I edited the persistent desktop pool to use the new parent virtual machine that resided on Cluster B and again selected all of the same datastores as before.

To test, I deleted one of the Available spare desktops to confirm that it was rebuilt on Cluster B with the 10 Gig port group.

Next, I used View Composer > Recompose to recompose all persistent linked-clone desktops to the new PVM that resided on Cluster B. The strangest thing will happen…

All of your persistent linked-clone desktops will recompose using the PVM from Cluster B, but they will remain on Cluster A and retain the 1 Gig port group vNetwork configuration from Cluster A (not the 10 Gig port group configuration on Cluster B or within the PVM settings).

I had hoped the desktops would recompose onto Cluster B. At least they still worked on Cluster A while I looked for an alternative way to move them to Cluster B.

Using PowerCLI to manually move persistent linked-clone desktops to a different cluster

Breaking the rules, I decided that I had to manage and modify my linked-clone desktops via vSphere/vCenter instead of via VMware View. And PowerShell/PowerCLI was the tool I decided to use.

I wanted this to be as non-disruptive as possible for the end users and gradual for me, since I was manually converting one host at a time from Cluster A to Cluster B based on capacity.

My persistent desktop pools are configured to power off desktops when idle. I decided I would just run the following three one-liner PowerCLI scripts on occasion to move the powered-off desktops to Cluster B and configure them there.

Move Powered Off VMs from Cluster-A to a host in Cluster-B

Get-Cluster -Name "Cluster-A" | Get-VM -Name "VDI-*" | Where{$_.PowerState -eq "PoweredOff"} | Move-VM -Destination "host-in-cluster-b.example.local"

Move Powered Off VMs in Cluster-B to a Resource Pool in Cluster-B

Get-Cluster -Name "Cluster-B" | Get-VM -Name "VDI-*" | Where{$_.PowerState -eq "PoweredOff"} | Move-VM -Destination "Resource-Pool-In-B"

Change PortGroup of Powered Off VMs to the 10 Gig Portgroup

Get-Cluster -Name "Cluster-B" | Get-VM -Name "VDI-*" | Where{$_.PowerState -eq "PoweredOff"} | Get-NetworkAdapter -Name "Network adapter 1" | Set-NetworkAdapter -Portgroup "dvPG-10G-VLAN-44" –Confirm:$false

View seems happy

VMware View seems to accommodate my manual changes. These desktops will remain powered off until a user attempts to log into VMware View. When they do, View correctly powers on the persistent linked-clone desktop in Cluster B and everything works as normal.

I also performed a new View Composer > Recompose task to a desktop pool after I had finished moving all desktops from Cluster A to Cluster B, which also worked as normal.

If you have a better/easier way to move persistent linked-clone desktops from Cluster A to Cluster B, please share in the comments.