This test had been in my sights for some time, more so after Graham Barker posted about it last year. I was pretty flat out at work when he published his post about passing the exam, so I decided to knock it out within the first week of the new year. More importantly, I'll be dealing with VMConAWS this year at my new job, so it made even more sense to skill up.
So I took the test this morning and passed. Not to brag, but it only took me about 3 hours of study, most of which was around the following (and this builds upon Graham's post):
- what was where within the SDDC console in VMConAWS
- billing and consolidation in VMConAWS
- interoperability and management of on-premises vSphere installation(s) with that in VMConAWS
- migration to and from VMConAWS
- TCO and sizing of a solution in VMConAWS
I don't think passing this test gives you a certification; it's a certified skill, whatever that means. I had the following digital badge show up:
Material used and study tips:
- An account set up in VMConAWS. I had trouble using my credit card details in the billing area, which severely limited my ability to drive around in the SDDC console. I believe this was because of the message that your country must use USD currency in order for the billing address to be accepted. This is weird because the SDDC is available in the APAC region. Anyway, to get around this constraint, I Googled around for pictures of the SDDC console to see what things looked like and where they were located, matching them against items in the blueprint.
- The VMConAWS Getting Started guide located here. Pay attention to the billing and TCO bits.
- The VMConAWS Operations Guide located here. Pay particular attention to the various migration options, how to do those migrations to and fro, and what’s required to get vMotion happening between on-premises vSphere and VMConAWS.
- The Managing VMConAWS Guide located here. Pay attention to how to consume the three APIs.
- VMConAWS Networking and Security Guide located here.
- Using EFS for VM storage in VMConAWS blog post located here.
- The Site Recovery Guide for VMConAWS located here. Make sure you flip through the five guides linked on the last page of this one.
- Stretched Clusters in VMConAWS blog post located here.
- Understanding VPCs for a vSphere NSX guy, located here.
- Connectivity options for hooking up your on-premises installation with the SDDC in VMConAWS located here.
- Lastly, a comparison between Enhanced Linked Mode and Hybrid Linked Mode located in the blog post here. Make sure you pay attention to the SSO bits in this post.
My study notes:
- The default vSAN policy in the SDDC is RAID 1.
- In Hybrid Linked Mode, both vCenter Server instances MUST be in the same authentication domain, meaning they use the same authentication source.
- Resource Pools – the Compute-ResourcePool is the default location for all VMs and allows for child RPs. The Mgmt-ResourcePool contains management machines such as NSX Manager and the NSX Controllers.
- With networking – there are two types of networks, routed and extended. Routed is the default and allows connectivity to other networks in the same SDDC and to services such as the SDDC firewall and NAT. An extended network needs an L2VPN, which allows on-premises to VMConAWS connectivity. There's always a single network on the compute gateway, called sddc-cgw-network-1, to begin with; more can be created as required. The mgmt network (NSX Mgmt Gateway) has a CIDR address range associated with it, and there can be no overlap between it and the ranges used in the workload networks (see the CIDR overlap sketch after these notes).
- With storage – there's a vsanDatastore that's managed by VMware and a WorkloadDatastore that's managed by the Cloud Administrator.
- Roles – only two roles exist: CloudAdmin and CloudGlobalAdmin. Custom roles cannot be created. The CloudAdmin role allows for the creation and management of VMs; hosts, clusters and mgmt machines are managed by VMware. The CloudGlobalAdmin role allows things like content libraries to be managed.
- VMs – use the Content Onboarding Assistant or subscribe to the on-premises content library. A content library can also be created in the SDDC, to which ISOs and OVAs can be uploaded and from which machines can be deployed.
- Management – vCenter can be connected to using the following tools: the vSphere Client, the API Explorer and PowerCLI (a vSphere API connection sketch appears after these notes).
- Site Recovery is an add-on service, not available by default. It uses host-based replication. Steps:
- set up the SDDC
- activate the service
- create firewall rules between the on-prem DC and the Mgmt gateway
- the on-prem environment must be ready (SRM database and vSphere Replication installation)
- Stretched clusters are possible, but this decision must be made before starting the deployment. Once a cluster is deployed standalone, it won't turn into a stretched cluster. A stretched cluster has hosts sitting in different AZs. This sort of cluster is limited to 28 hosts (and is probably more expensive to run too). Min 6 hosts, which is also the default number. More hosts can be added, but this must be done in pairs across AZs, which makes sense, so the numbers go 6 – 8 – 10 – 12 – 14, up to 16 (see the host-count sketch after these notes). The stretched cluster's witness resides in a third AZ. Writes are synchronous across the two AZs.
- Three ways to consume APIs – the Cloud Services Platform API, the VMConAWS API and the vSphere API (this last one is listed in the Connection Info tab in the SDDC). The Developer Center has the API Explorer, code samples and SDKs for devs to get going with developing solutions (see the API sketch after these notes).
- The on-premises vCenter must be running at least 6.0u3. Only ONE on-prem SSO domain can be linked. This domain can contain multiple vCenter Servers, which will all appear in the cloud SDDC provided they are in Enhanced Linked Mode among themselves. All the resulting linked vCenter Servers (on-prem and off-prem) will only be visible in the VMConAWS console, not the on-premises vCenter console.
- Migration Options
- HCX for bulk migrations; there is downtime, though only a small one. Host-based replication is used. The machine stays up during replication, but is switched off and quickly booted up once in the cloud.
- Cold migration, longest downtime.
- If an on-premises machine was on a vSS before migration, it won't be possible to move it back to the on-premises DC after migration unless EVC mode is turned on or the Broadwell chipset is supported.
- Each SDDC environment sits within a VPC and has a vCenter Server, NSX-V or NSX-T, vSAN and one or more ESXi hosts.
- Billing is handled through VMware Cloud Services. Billing starts on the day the first service was set up in the cloud.
- Clusters are added in the same AZ as the one in which the SDDC was first deployed. Spreading hosts across multiple SDDCs isn't yet supported.
- Compute, storage and networks are automatically available across all clusters.
- Permissions are also the same across all clusters automatically.
- Elastic DRS allows for the adding or removal of hosts based on demand. A policy can be created per cluster, which can be tuned for cost or performance when applying the recommendations. On scale-in, the least-utilised host is removed from the cluster. This mechanism is either scale in or scale out, not up. Scale out and scale in stay within the minimum and maximum number of hosts defined for the cluster. Thresholds for CPU, RAM, storage and network performance cannot be altered; essentially, all a user can do is choose the min and max number of hosts. The algorithm runs every 5 mins, with a 30 min delay between scale-out events and a 3 hour delay between scale-in events (see the Elastic DRS sketch after these notes).
- Hybrid Linked Mode. Required for migration using the vSphere client; not required for migration using the APIs or PowerShell. When a cloud vCenter Server is linked to an on-premises SSO domain, all the vCenter Servers in that domain are linked to the cloud SDDC. Active Directory must be the SSO source.
- Tags and categories are shared across on-premises and VMConAWS. This mode can be achieved either directly or via the Cloud Gateway Appliance. Roles aren't replicated. For a single pane of management, you must log on to the cloud SDDC vCenter.
- If the Cloud Gateway appliance is used, then Hybrid Linked Mode is available to the on-premises vCenter Server too.
- VMC on AWS connects to VPCs using an ENI. VM workloads get access to public AWS endpoints.
- Single-tenant hosts are provided by AWS in a single account. Cluster size is 3–32 hosts.
- Single vCenter Server per account. Nested virtualization is not used.
- Least privilege access model used.
- The version of vCenter Server in AWS is generally the latest.
- Storage is vSAN backed by EBS (and possibly flash disks formatted for vSAN). Storage per host is 15 TB to 35 TB, in increments of 5 TB. EFS volumes can be used as datastores to store vSphere VMs.
- Two ESGs sit per account to provide northbound traffic access, with 25 Gbps of supported throughput.
- All VMs in an SDDC will survive the loss of one AZ without data loss.
- Single-host environments are limited to 30 days and will be automatically deleted after that; all data will be lost unless the environment is scaled out to 3+ hosts. Scaling down isn't possible.
- The VMConAWS service is charged by VMware. AWS only charges for consumption of AWS offerings such as S3, and bills for that separately.
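First, the CIDR overlap sketch. Python's standard ipaddress module makes the no-overlap rule between the management CIDR and the workload networks easy to sanity-check; the ranges below are made-up examples, not SDDC defaults:

```python
import ipaddress

# Hypothetical ranges - substitute the CIDRs you actually plan to use.
mgmt_cidr = ipaddress.ip_network("10.2.0.0/16")    # SDDC management network
workload_cidrs = [
    ipaddress.ip_network("192.168.1.0/24"),        # e.g. sddc-cgw-network-1
    ipaddress.ip_network("10.2.32.0/24"),          # deliberately overlapping example
]

for net in workload_cidrs:
    if net.overlaps(mgmt_cidr):
        print(f"CONFLICT: {net} overlaps the management CIDR {mgmt_cidr}")
    else:
        print(f"OK: {net} is clear of {mgmt_cidr}")
```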
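Next, the vSphere API connection sketch. Since the cloud vCenter exposes the same vSphere API as on-premises, something like pyVmomi can talk to it; the hostname and credentials below are placeholders, so take the real values from the SDDC's Connection Info tab:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder values - the real vCenter FQDN and cloudadmin password
# are shown on the Connection Info tab of the SDDC.
ctx = ssl.create_default_context()
si = SmartConnect(host="vcenter.sddc-x-x-x-x.vmwarevmc.com",
                  user="cloudadmin@vmc.local",
                  pwd="your-password",
                  sslContext=ctx)
print("Connected to", si.content.about.fullName)
Disconnect(si)
```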
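The host-count sketch for stretched clusters is trivial, but it makes the grow-in-pairs rule concrete. The minimum and maximum below are taken from my notes above, so treat them as assumptions rather than gospel:

```python
# Stretched clusters start at 6 hosts (the default) and grow in pairs so
# each AZ stays balanced. Limits are assumptions lifted from my notes.
MIN_HOSTS, MAX_HOSTS = 6, 16

def valid_stretched_count(hosts: int) -> bool:
    return MIN_HOSTS <= hosts <= MAX_HOSTS and hosts % 2 == 0

print([h for h in range(1, 20) if valid_stretched_count(h)])
# -> [6, 8, 10, 12, 14, 16], i.e. 3 to 8 hosts per AZ
```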
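The API sketch below strings the Cloud Services Platform API and the VMConAWS API together: exchange a CSP refresh token for an access token, then list the SDDCs in an org. The endpoint paths and field names are as I recall them from the docs, so verify them in the API Explorer before leaning on this:

```python
import requests

CSP_AUTH_URL = ("https://console.cloud.vmware.com"
                "/csp/gateway/am/api/auth/api-tokens/authorize")
VMC_API_BASE = "https://vmc.vmware.com/vmc/api"

REFRESH_TOKEN = "your-csp-api-token"  # generated in the Cloud Services console
ORG_ID = "your-org-id"                # placeholder

# Step 1: trade the long-lived refresh token for a short-lived access token.
resp = requests.post(CSP_AUTH_URL, params={"refresh_token": REFRESH_TOKEN})
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Step 2: call the VMConAWS API with that token to enumerate SDDCs.
headers = {"csp-auth-token": access_token}
sddcs = requests.get(f"{VMC_API_BASE}/orgs/{ORG_ID}/sddcs", headers=headers)
sddcs.raise_for_status()
for sddc in sddcs.json():
    print(sddc["name"], sddc["sddc_state"])
```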
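Finally, the Elastic DRS sketch. This is purely an illustrative model of the timings and knobs in my notes, not VMware's actual algorithm; the utilisation thresholds are invented stand-ins for the non-tunable real ones:

```python
from dataclasses import dataclass

CHECK_INTERVAL_MIN = 5       # the algorithm runs every 5 minutes
SCALE_OUT_COOLDOWN_MIN = 30  # 30 min between scale-out events
SCALE_IN_COOLDOWN_MIN = 180  # 3 hours between scale-in events

@dataclass
class Cluster:
    hosts: int
    min_hosts: int       # the only real knobs a user has, besides
    max_hosts: int       # choosing the cost/performance policy
    utilisation: float   # aggregate demand, 0.0 - 1.0

def edrs_decision(c: Cluster, high: float = 0.8, low: float = 0.4) -> str:
    """Return the action eDRS would recommend on a 5-minute tick."""
    if c.utilisation > high and c.hosts < c.max_hosts:
        return "scale out: add a host, then wait 30 min"
    if c.utilisation < low and c.hosts > c.min_hosts:
        return "scale in: remove the least-utilised host, then wait 3 h"
    return "no action"

print(edrs_decision(Cluster(hosts=4, min_hosts=3, max_hosts=8, utilisation=0.9)))
print(edrs_decision(Cluster(hosts=4, min_hosts=3, max_hosts=8, utilisation=0.2)))
```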
My notes are pretty comprehensive and should go a long way towards helping you prepare for the test. Don't rely on them as your only study source though; as always, lab up the blueprint in its entirety.
Next up? The TOGAF Part 1 test is booked for 10 days' time. Laters.