During the keynote at VMworld in Barcelona on Tuesday morning, 18 October 2016, VMware demoed how a VMware Cloud infrastructure is stood up in AWS and, following that, how a virtual machine is migrated with vMotion into the AWS-hosted VMware Cloud. Impressive stuff. However, something has been bothering me, and I went to the VMware booth for an answer but came up short.
The question I have is around processor architecture. If I'm running Intel in my local vSphere environment and AWS/VMware decide to run AMD in VMware Cloud on AWS, how would that vMotion migration work? It can't, right?
Is there an option to select the processor vendor for the newly deployed VMware Cloud on AWS?
Answers on a postcard or in the comment section below! Go!
And we have an answer!! Thank you Alex Jauch (@ajauch)!
Container technology has been around for quite a while now. Most people will have heard of Docker by now, and a lot of people are using it. But what about VMware Photon? What's that? Again, I'd say it's been around for a while; however, while people have been raving about Docker and the container revolution, VMware has been working on its own implementation of container technologies, as well as on products that utilise and integrate with existing container technologies such as Docker. At VMworld Europe 2016, VMware announced vSphere 6.5, and one feature that has caught my attention in this release (apart from the long-overdue vSphere HTML5 Client) is vSphere Integrated Containers, or simply VIC. At the moment I'm trying to make sense of all these technologies, how (and if) they fit together, and where you would want to use each one.
In the last six months, I've done quite a bit of work with vRA 6, vRA 7 and vRO. During this time, I've had to learn a lot about both products, how they interact with each other, and how they interact with other REST-based APIs such as ServiceNow. Having been set in my ways in vRA 6, using workflow stubs to break out to vRO in order to extend vRA functionality, I was conscious of the fact that VMware will be removing the .NET workflow stubs in future releases of vRA 7, and that the preferred method of extending out to vRO in vRA 7 is the Event Broker Service. Also, vRA 7 uses converged blueprints, which from an extensibility point of view means that we have to do things slightly differently in code than what we got used to in vRA/vRO 6.
In VMware vRealize Automation 7 (vRA), blueprints are converged, rather than the single vs. multi machine blueprints that we were used to in vRA6. This presents an interesting challenge when requesting new catalog items from vRO.
In vRA6, if you wanted to request a new catalog item from vRO, you would run the “Request a catalog item” workflow and simply pass any property values along with your request; those property values would be applied to the resulting item in vRA. For instance, to request a new VM with 2 vCPUs, you could specify the following custom property as part of the request from vRA6:
provider-VirtualMachine.CPU.Count = 2;
In vRA7, you could still use the “Request a catalog item” workflow; however, you’ll find that the “provider-<propertyName>“ properties passed with the request are not honoured and have no effect on the resulting virtual machine. The reason is the converged blueprint: you now need to specify the VM for which the property value is meant to be set. It’s no longer assumed that you have only one virtual machine in your blueprint.
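To make the contrast concrete, here is a minimal sketch of the two payload shapes. The component name (“vSphere_Machine_1”) and the property key (“cpu”) are illustrative assumptions, not fixed names — use the machine component name from your own converged blueprint:

```python
# Sketch: vRA6-style vs vRA7-style catalog request properties.
# "vSphere_Machine_1" and "cpu" are illustrative assumptions -- substitute
# the component and property names defined in your own blueprint.

# vRA6: a flat "provider-" property; the single machine is implied.
vra6_properties = {
    "provider-VirtualMachine.CPU.Count": "2",
}

# vRA7 converged blueprint: properties are scoped under the named machine
# component inside the request's data map, so vRA knows which VM they target.
vra7_request_data = {
    "vSphere_Machine_1": {
        "data": {
            "cpu": 2,
        }
    }
}

# The key difference: in vRA7 you address the specific machine component.
target = vra7_request_data["vSphere_Machine_1"]["data"]
print(target["cpu"])
```

The important point is structural: vRA6 assumed one machine per blueprint, so a flat property sufficed; vRA7 needs the property nested under the component it applies to.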
So, you've done all the hard work to change your Hyperic Server certificate (or not). Now you browse to your Hyperic server's management page via HTTPS on port 7443 and you're presented with this uninspiring message from your browser:
I've been working intensively with the VMware vRealize product suite over the past four months, including Hyperic. One of the things we have to do on our current project is to replace the Hyperic server certificate whenever a new Hyperic instance is introduced into the environment. This is a relatively straightforward task, but one that consists of quite a few steps. In this blog post, I've documented exactly how to go about replacing Hyperic server certificates.