
2019-05-27

(Not) Real Scientific Proof that AMI has #3syllables

AWS has 26 (yes, I counted them) different products with names of exactly 3 letters (or derivatives thereof) - let's go through them one at a time.


  • A-C-M AWS Certificate Manager - Is not pronounced ac-em (also not hack-em) 
  • D-M-S Database Migration Service - Is not pronounced dems nor dee-miss (and also not dimms)
  • E-B-S Elastic Block Store - Is not pronounced ebbs (and we are not being washed back out to sea), nor ee-bzz (people might be allergic to bees) 
  • E-C-2 (Well it should actually be E-C-C - but EC2 sounds so much sexier) Elastic Compute Cloud - Is not pronounced ek-2 (or even eck - otherwise people might get confused with "what the heck2")
  • E-C-R - Elastic Container Registry - Is not pronounced Ecker-R (sounds too much like pecker) 
  • E-C-S - Elastic Container Service - Is not pronounced eh-ckes, neither ee-cees nor Ex (people would be wary of using a product named Amazon X - they might think that AWS is taking after Google with their Alphabet) 
  • E-F-S - Elastic File System - Is not pronounced ef-s neither ee-fees nor eefs
  • E-K-S - Elastic Container Service for Kubernetes - pronouncing this x-kay (ECS-K) would sound too much like Xray (another AWS product). Also see above about E-C-S 
  • E-M-R Elastic MapReduce - We don't call it ee-mer - nor emmer (otherwise all the Dutch people might think that this is an S3 look-alike) 
  • F-S-X - I can't find what this stands for - except for FSx :) - not ef-sex (that is not politically correct..) 
  • I-A-M - Identity and Access Management - no-one uses I-AM (Dr. Seuss would be happy with I-AM-SAM - SAM-I-AM)
  • I-O-T - Internet Of Things - Not eye-ot (people might think there are more than 7 dwarfs in the service - eye-o, eye-o it's off to work we go..) 
  • K-M-S Key Management Service - Is not pronounced kems - nor kee-mes (keemes - the new AWS meme-as-a-service product is probably not a good idea either) 
  • L-E-X - this is actually the product name - Amazon Lex - even though the French might have enjoyed it if it was actually Le'X (but then again people don't like having their Ex in the spotlight) 
  • M-S-K - Managed Streaming for Kafka - Is not pronounced musk (Elon might not like it), em-sek (could be too fast for us to use). And of course AWS had to name a product after me.
  • P-H-D - Personal Health Dashboard - Is not pronounced pee-hud, and phud would get them in trouble with spreading Fear, Uncertainty and Doubt
  • R-A-M - Resource Access Manager - Not (a battering) ram (nor the ancient Indian king Raam)
  • R-D-S - Relational Database Service - Is not pronounced ar-dis, nor ar-dees (and definitely not the new time machine service - tardis) 
  • S3 - Simple Storage Service - This is a 3 letter product - S-S-S (S3 is so much sexier) - Not sss (people might think there are snakes) - here I conceded - ess-ess-ess brings up really bad vibes 
  • S-E-S - Simple Email Service - Is not pronounced Sess nor sees (otherwise we customers might think this is a new tax in eu-west-1 or ap-south-1) 
  • S-N-S - Simple Notification Service - Is not pronounced S-ness, neither sneeze nor Sans (and not nessie either - she is still somewhere in the Loch) 
  • S-Q-S - Simple Queue Service - Is not pronounced see-ques - nor squeeze 
  • S-S-O - Single Sign On - Is not pronounced sa-so neither ses-o nor se-so (just because I say so) 
  • S-W-F - Simple Workflow Service - Is not pronounced see-wiff - nor Swiff 
  • V-P-C - Virtual Private Cloud - Is not pronounced vee-pic, neither ve-peec nor veep-see 
  • W-A-F - Web Application Firewall - I concede - this one is #1syllable - there I said it! BUT IT IS NOT #2syllables !!

With three exceptions (S3, Lex and WAF) - all the three-letter products in AWS are pronounced with three syllables!!!!

Just like A-M-I - which has #3syllables 

I rest my case. 

2019-03-06

My awesome-podcasts List

I have a decent commute every day back and forth to work and I have come to enjoy listening to a number of podcasts throughout the week.

I will try and keep the list up to date - here

As of today - this is my current list of podcasts


Grumpy Old Geeks

Two old farts (like me) that bitch about tech, and how ridiculous we have all become - Link

AWS Podcast

A weekly show about what is happening in the world of AWS - Link

The Cloud Pod

A podcast about what is going on in the cloud - Link

Screaming in the Cloud

Conversations with people about the cloud - Link

PodCTL

Podcast about Kubernetes - with a RedHat focus - Link

The Cloudcast

Podcast about all things Cloud - Link

Cloudtalk (Hebrew)

Hebrew Podcast about the world of cloud - Link

The Tony Robbins Podcast

Inspirational talk with Tony Robbins - Link

Datanauts (Packet Pushers)

Podcast about tech, cloud and all things nice - Link

Rural Emergency Medicine Podcast

A Podcast about emergency medicine - Link

Speaking in Tech

Podcast about things happening in the tech world - Link

The Secure Developer

Security focused Podcast - Link

The Full Stack Journey

Interviews with people that have made a change in their technical career - Link

To Be Continuous

DevOps focused podcast - Link

The Microsoft Cloud Show

A Microsoft focused cloud podcast - Link

Emergency Medicine Cases

A podcast about emergency medicine - Link

Techtalk

A podcast in Hebrew about the cloud and tech - Link

2019-02-25

Goodbye Docker and Thanks for all the Fish

Back in July 2018, I started to write a blog post about the upcoming death of Docker as a company (and also perhaps as a technology) but I never got round to completing and publishing the post. It is time to actually get that post out.





So here you go....


Of course Docker is still here, and of course everyone is still using Docker and will continue to do so for the near and foreseeable future (how far that foreseeable future extends is yet to be determined). The reason I chose this title for the blog post is that, in my humble opinion, the days of Docker as a company are numbered - and maybe as a technology as well. If you would indulge me with a few minutes of your time, I will share with you the basis for my thoughts.

A number of years ago, Docker was the company that changed the world - and we can safely say it is still changing the world today. Containers and the technology behind them have been around for many years, long before the word Docker was even thought of, let alone turned into a verb (“Dockerize all the things”), but Docker was the company that enabled the masses to consume container technology in an easy and simple fashion. Most technology companies (or at least companies that consider themselves modern tech companies) use Docker or containers as part of their product or their pipeline - because it makes so much sense and brings so much benefit to the whole process.

Over the past 12-24 months, people have come to the realization that Docker has run its course, that as a technology it is not going to provide additional value beyond what they have today - and they have started to look elsewhere for that extra edge.

Kubernetes has won the container orchestration war - I don't think anyone can deny that fact. Docker itself has adopted Kubernetes. There will always be niche players with specific use cases for Docker Swarm, Mesos, Marathon or Nomad - but the de-facto standard is Kubernetes. All three big cloud providers now have a managed Kubernetes solution to offer their customers (and as a result will eventually sunset the home-made solutions they built over the years - because there can be only one). Everyone is building more services and providing more solutions to bring in more customers and increase their revenue.

Story is done. Nothing to see here. Next shiny thing please..

At the moment, Kubernetes uses Docker as the underlying container engine. I think the Kubernetes community understood that Docker as a container runtime (and I use this term specifically) was the ultimate solution for getting a product out of the gate as soon as possible. They also (wisely) understood quite early on that they needed the option of switching out that container runtime - allowing the consumers of Kubernetes to make a choice.

The Open Container Initiative brought with it the Runtime Spec, which opened the door for all of us to use something else besides Docker as the runtime. And the alternatives are growing - steadily. Docker is no longer the only runtime being used. There is a growing community that is slowly sharing the knowledge of how to use something else besides Docker. Kelsey Hightower has updated his Kubernetes the Hard Way (amazing work - honestly) over the years from CRI-O to containerd to gVisor. All the cool kids on the block are no longer using Docker as the underlying runtime. There are many other options out there today - Clear Containers, Kata Containers - and the list is continuously growing.

Most people (including myself) do not have enough knowledge and expertise to swap out the runtime for whatever they would like, and usually just go with the default out of the box. When people understand that they can easily choose to swap out the container runtime, and that knowledge is readily available, I do not think there will be any reason for us to use Docker any more - and therefore Docker as a technology, and as a company, will slowly vanish. The other container runtimes that are coming out will be faster, more secure, smarter and more feature-rich (some of them already are) compared to what Docker has to offer. If you have a better, smarter, more secure product - why would people continue to use technology that no longer suits their ever-increasing needs?
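If you are curious which runtime your own cluster is actually using today, here is a minimal sketch using the official Kubernetes Python client (my illustration, not from the original post - it assumes the "kubernetes" pip package and a working kubeconfig):

# Minimal sketch: print the container runtime each node reports.
# Assumes the official Kubernetes Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    # Reported as e.g. "docker://18.9.0" or "containerd://1.2.5"
    print(f"{node.metadata.name}: {info.container_runtime_version}")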

For Docker to avert this outcome, I would advise investing as much energy as possible into creating the best-of-breed runtime for any workload - so that Docker remains the de-facto standard that everyone uses. The problem with this advice is that there is no money in a container runtime. Docker never made money on their runtime; they looked for their revenue in the enterprise features above and on top of the container runtime. How they are going to solve this problem is beyond me, and beyond the scope of this post.

The Docker community has been steadily declining, the popularity of the events has been declining, and the number of new features and announcements has been on the decline for the past year or two.

Someone told me a while back that speaking badly about things or giving bad news is usually very easy. We can easily say that this is wrong, this is not useful, this should change. But without providing a positive twist on something, you become the “doom and gloom”, the “grim reaper”. Don't be that person.

I would like to heed their advice, and add something about what this means for you today. You should start investing in understanding how these other runtimes can help you and where they fit; increase your knowledge and expertise, so that you can prepare and not be surprised when everyone else stops using Docker and you find yourself having to rush into adapting all your infrastructure. I think it is inevitable.

That was the post I wanted to write 8 months ago...

What triggered me to finish this post today was a post from Scott McCarty about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools - and my tweet that followed.

Lo and behold - no more docker package available in RHEL 8.
If you’re a container veteran, you may have developed a habit of tailoring your systems by installing the “docker” package. On your brand new RHEL 8 Beta system, the first thing you’ll likely do is go to your old friend yum. You’ll try to install the docker package, but to no avail. If you are crafty, next, you’ll search and find this package:
podman-docker.noarch : "package to Emulate Docker CLI using podman."
What is this Podman we speak of? The docker package is replaced by the Container Tools module, which consists of Podman, Buildah, Skopeo and several other tidbits. There are a lot of new names packed into that sentence so, let’s break them down.









(Source: Tutorial by Doug Tidwell - https://youtu.be/bJDI_QuXeCE)

I think a picture is worth more than a thousand words..

Please feel free to share this post and share your feedback with me on Twitter (@maishsk)

2019-01-30

Empires are not built in a day (but they also do not last forever)

I am currently on vacation in Rome (my first time) and during this trip I came to a number of realizations that I would like to share with you.

I went to the Colosseum today - and I have to say I was in awe. The structure is magnificent (even if the remains are only part of the original structure in all its glory). As I progressed throughout the day - I came to the following realizations.

(.. and of course how they tie into our tech world today)

Acquire vs. Train


Throughout ancient history, all the great empires (or what were once considered as such) were barbarians. They left legacies that remain to this day - but none of them were earned honestly. Most of the great wonders of the world - from the pyramids to the Colosseum to the Great Wall of China - were built with slave labor. The Romans conquered the world, enslaved almost every country they touched, and used them to build an empire. I think it is safe to say this is how the world used to work. Today, this would not be acceptable. Slavery and taking advantage of others is wrong.

The knowledge was there, the brains were there, but they needed working hands to get the shit done. 
That is why people outsource development resources to places where labor is cheap (India, for example) but leave the brains at home and only let the 'workers' churn out the hard stuff. 

There are several problems with this - and we are seeing them today in many walks of life. Some companies understand that even though the labor is cheaper, the quality and speed of the work they wished to complete is not what they expect. In the olden days you could terrorize your slaves into working to their deaths to provide what you wanted. This happened in ancient Egypt, in ancient Rome, pretty much everywhere. But that does not and cannot happen today. So we do one of two things. Instead of working people to their deaths, we provide incentives to produce more, be it higher salaries, better conditions or bonuses - hoping that this will encourage (or should I rather say force) people to work harder. The other option is that we compromise on quality or on delivery times - which either pisses off our customers because we are late, or pisses them off because the product is not as good as we promised.

It is obvious though - that the easiest way for us to produce - is not by training the talent from the ground up - but rather let someone else invest that time and effort - and when we have the opportunity, swoop in (in the olden days conquer) and reap the benefits of someone else's work. 

In today's world we see this with most big companies acquiring smaller ones. Growth by acquisition. Cisco has built its empire over the years this way. Can't build an amazing wireless product? Buy one. VMware does the same. Can't build a great Kubernetes offering? Buy one.

This is the way business works. Sometimes these mergers work and make the company better and sometimes they fail - dismally. Sometimes the talent gets incorporated but that is not always the case. 
It will all depend on how much you want to invest in the knowledge you acquired, and how much you become one with those people that bring that knowledge to the table.

True belief stays eternal


Religion is a funny thing. I think I can say there is really only one religion that has stayed with us from the beginning, and that is Judaism. Christianity became a well-known religion somewhere around the 4th century; Islam somewhere in the 7th century. All the ancient kingdoms, rulers and empires, no matter how great they were, how much of the world they conquered (or tried to) - they no longer exist. The only thing people will truly cling to is an idea, a belief. Something that is emotional.

The Persians built an empire - it is no more.
The Egyptians, the Greeks, the Romans, the Ottoman empire - the list goes on and on and on - all gone. 

In our technological world today, it is hard to call anything eternal. Computers have only been around for less than 100 years. But even at this young age, there are already religions forming around technology and its use..
  • vim vs emacs
  • Windows vs Mac
  • Windows vs Linux
  • Closed source vs open source
It is very hard to convert someone from one religion to another - sometimes it works with more severe, and sometimes with less severe, persuasion - but there are cases where people will change their minds.

I am of the conviction that if what you believe in - is something that is connected to a deep emotion, something that is personal, it is something that will stay with you forever.

Technology is still in its infancy - we might not realize it - and the rate at which things change is growing faster and faster as we go along.

I think I got a bit lost in the journey and lost sight of the end goal here - so let me get to the point.

Emotion, making it personal, and connecting with what you do - is something that will always stay with you. The technology you invest in, your day-to-day job, the tools you use - they will evolve and change - they are not eternal.

You are not a Java guy. You are not a Kubernetes girl. You are not an X.

You are a person that learns, a person that adapts. Connect to your goal with emotion and this will allow you to succeed.

That is who you should be!

(Also published on Linkedin)

2019-01-11

The Year 2018 in review

I don't always do these kinds of posts, but 2018 was a substantial year for me that warrants a short summary.

I released the AWS Powershell Container - gauging by the number of pulls, I guess it was not that useful.. :)

I completed my 5th AWS Certification. The post was also translated into Hebrew as well.

I presented a session at the DevOps Israel conference



I left Cisco (NDS) after 13 years and started a new position at CyberArk.

I became a lot more involved in the Israel Cloud community (for example Encounters in the Cloud - Interview).

I went to re:Invent again this year - and my posts Keeping Kosher at re:Invent 2018 and How I Get the Most Out of #AWS re:Invent 2018 (Hebrew version) were very useful not only to me but, from what I heard, to others as well.

I was a guest on the Datanauts podcast - Datanauts 143: Getting To Day 2 Cloud. I found out that this episode was the most popular episode of 2018 on the show. Respect!


I presented an Ignite (in Hebrew) at DevOpsDaysTLV



I also presented a session at the AWS Community Tel Aviv 2018



And last but not least - I released the AWS Visio Stencils

All in all - it was a good year.

One thing that I neglected (badly!!) was writing the rest of The Cloud Walkabout - something I will make every effort to rectify this year.

Looking forward to 2019... Upward and onward!!


2019-01-04

I was not expecting this at re:Invent

There was a lot to absorb during the jam-packed week in Las Vegas, but there were a number of things that truly surprised me during the conference..

It was clear that AWS is going after the Enterprise market and is accommodating the on-prem / legacy / old-school way of thinking. This is the first re:Invent where you could really feel the change.

Here are a few of them:

AWS Outposts

AWS Well Architected
Lake Formation

Security Hub

Control Tower

FSx


Next was containers - or actually the lack of containers. There were no significant container announcements. ECS and EKS were not mentioned once during the keynote. No new functionality, no new features. For the product that was probably the most demanded release at last year's re:Invent - this year it was crickets all the way down. I was thinking that AWS was saving some glory and glitter for the KubeCon conference the week after - but all that really came out of there was the Containers Roadmap (which is actually amazing - because AWS never disclose what their roadmap is, at least not publicly. I suppose it is expected of them as they keep up the image of open source contribution and championship).

And the last shocker was the fact that inbound traffic to S3 is now going to cost you money.. 

Wait, what? You are now charged for uploads to S3????
Well, that is not entirely true. Traditionally you do not pay for incoming traffic into S3 - it says so in black and white.

s3 Pricing



So no, you are not charged for direct uploads to S3. But if you upload through another service that acts as a proxy to S3 - then that's different.

Storage Gateway was one such service.

Storage Gateway

Here you are allowed 100GB for free each month, and charges are capped at a maximum of $125/month. For a company that transfers hundreds or thousands of TB a month, the $125 is chump change - which essentially makes it pretty much free.

And then came AWS Transfer for SFTP and the change that no-one really noticed.

SFTP Pricing
Whoa!! Not only are you being charged 4x the rate of any other service, you are not capped at a maximum monthly spend, and you get no free monthly uploads either.

You use it - you pay (and pay for it you will).

Next up was DataSync

Datasync Pricing







Again - the same new price of $0.04/GB for traffic transferred into S3.

Pricing example

Take their pricing example - and do the exact same thing with a regular S3 upload instead:
If you perform a one-time migration of 50 TB of 16 MB files into Amazon S3 in US East (Ohio), it costs you the following to use the S3 CLI:
(50 TB copied into S3 * 1024 GB * $0.00 / GB) + (1 S3 LIST request * $0.005 / 1000) + (50 TB / 16 MB S3 PUT requests * $0.005 / 1000)
= $0 + $0 + $16.38
= $16.38
That is one heck of a difference. Now, I have not tested the difference in speed or throughput you can get from DataSync - I am sure there is a difference in data transfer speeds.
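To make the comparison concrete, here is the same back-of-the-envelope arithmetic as a small Python sketch (my illustration - the prices are the ones quoted above, and may well have changed since):

# Back-of-the-envelope comparison: plain S3 PUTs vs. DataSync at $0.04/GB.
# Prices are the ones quoted above (US East) and may have changed since.
data_gb = 50 * 1024               # 50 TB expressed in GB
file_mb = 16                      # migrating 16 MB files
put_price = 0.005 / 1000          # $ per S3 PUT request
datasync_price_gb = 0.04          # $ per GB transferred into S3

put_requests = (data_gb * 1024) / file_mb     # number of 16 MB objects
s3_cli_cost = put_requests * put_price        # the ingress itself is free
datasync_cost = data_gb * datasync_price_gb

print(f"S3 CLI:   ${s3_cli_cost:,.2f}")       # ~$16.38
print(f"DataSync: ${datasync_cost:,.2f}")     # ~$2,048.00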

But for me this is troubling. The whole bloody world uses S3 (granted, most of the traffic is going from S3 out of AWS). Are AWS planning a change in their pricing model? Even at $0.04/GB, this would be a huge channel of additional revenue for them. Something to ponder on.

The pricing model that is now attached to S3 uploads seems strange to me - especially when you can get the exact same thing through another route for free. If it had been priced as network traffic through the service, I would have easily been able to accept it.
And last but not least, Werner Vogels finished his keynote on time this year. Well done and thank you for assisting in the effort of improving our experience at re:Invent this year.

Thoughts? Comments? 
Feel free to reach out to me on Twitter (@maishsk)

2018-12-19

AWS Client VPN

So after leaks (or not really leaks) from some of the sessions at re:Invent, it seems that AWS have finally released the Client VPN.

AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
So instead of having to provision an EC2 instance on your own and configure your own OpenVPN server - you can use this service.

But pricing is outrageous...

$0.05 per AWS Client VPN connection hour
$0.10 per AWS Client VPN endpoint association hour

Assume I would like to handle 5 VPN connections, I leave the service provisioned 24/7 for a month, and users connect for approximately 8 hours a day, 5 days a week. Leaving this service provisioned for the entire month would cost:

0.10 * 750 (hours in a month) = $75
0.05 * 5 (people) * 8 (hours) * 5 (days) * 4 (weeks) = $40

Total cost for one month - $115

If I were to roll my own on EC2

Using a t3.small instance (2 vCPU/2GB RAM) should be more than sufficient.

0.02 * 750 (hours in a month) = $15
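Here is that arithmetic as a small Python sketch (my illustration - the prices and the usage pattern are the assumptions stated above):

# Managed Client VPN vs. rolling your own OpenVPN box on EC2.
# Prices and the usage pattern are the assumptions stated above.
hours_in_month = 750

endpoint_association = 0.10 * hours_in_month             # $75.00
connection_hours = 0.05 * 5 * 8 * 5 * 4                  # 5 users, 8h/day, 5d/wk: $40.00
client_vpn_total = endpoint_association + connection_hours   # $115.00

t3_small_total = 0.02 * hours_in_month                   # ~$15.00 for a DIY OpenVPN server

print(f"AWS Client VPN:   ${client_vpn_total:.2f}")
print(f"t3.small OpenVPN: ${t3_small_total:.2f}")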


OK - it is not comparing apples to apples - not by a long shot

Client VPN offers the following features:

Secure — It provides a secure TLS connection from any location using the OpenVPN client.
Managed service — It is an AWS managed service, so it removes the operational burden of deploying and managing a third-party remote access VPN solution.
Highly available and elastic — It automatically scales to the number of users connecting to your AWS resources and on-premises resources.
Authentication — It supports client authentication using Active Directory and certificate-based authentication.
Granular control — It enables you to implement custom security controls by defining network-based access rules. These rules can be configured at the granularity of Active Directory groups. You can also implement access control using security groups.
Ease of use — It enables you to access your AWS resources and on-premises resources using a single VPN tunnel.
Manageability — It enables you to view connection logs, which provide details on client connection attempts. You can also manage active client connections, with the ability to terminate active client connections.
Deep integration — It integrates with existing AWS services, including AWS Directory Service and Amazon VPC.
Are all these extra features worth paying so much more for this managed service?
You are the only one that can answer this.

I am throwing the gauntlet out there - for someone to write the code that will enable provisioning a Client VPN endpoint on demand, based on usage - which would make this service more cost effective.
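To get whoever picks up that gauntlet started, here is a minimal boto3 sketch of the two calls such a tool would wrap (my illustration - the CIDR and certificate ARNs are placeholders, and the scheduling logic, target network association and authorization rules are left as the actual exercise):

# Sketch: create and tear down a Client VPN endpoint on demand with boto3.
# The CIDR and ARNs below are placeholders, not working values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_endpoint():
    resp = ec2.create_client_vpn_endpoint(
        ClientCidrBlock="10.100.0.0/16",
        ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
        AuthenticationOptions=[{
            "Type": "certificate-authentication",
            "MutualAuthentication": {
                "ClientRootCertificateChainArn":
                    "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
            },
        }],
        ConnectionLogOptions={"Enabled": False},
    )
    return resp["ClientVpnEndpointId"]

def teardown_endpoint(endpoint_id):
    # Deleting the endpoint (after disassociating its target networks)
    # stops the hourly association charge outside working hours.
    ec2.delete_client_vpn_endpoint(ClientVpnEndpointId=endpoint_id)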

2018-11-12

How I Get the Most Out of #AWS re:Invent 2018

I am not an expert, and I only went to re:Invent for the first time last year, but I have been to quite a number of conferences over the years.

So here come my thoughts about making the most of the crazy week in Vegas.

re:Invent


The (regular) sessions


Contrary to what you might think, going to sessions where you have a speaker (or speakers) up on stage going through a slide deck, or a panel of speakers talking about a subject, is not a good use of your time.

There are currently 2358 sessions and activities listed on the portal (a good portion of them are repeats - but hell, that is a lot of content).

Almost all of the sessions (I will get back to this in a few minutes) are recorded and therefore can be consumed after the event - in the car, on the bus or train - or even in the air during your travels.

Here is a podcast feed (http://aws-reinvent-audio.s3-website.us-east-2.amazonaws.com/2017/2017.html) for all 2017 sessions for your listening pleasure.

That is why you can spend your time better elsewhere.


The Builder / Chalk Talk / Workshop sessions


Here is where I would spend my time. The cost of re:Invent (if you paid the full price) is $1,800 for 4.5 days (Friday is a short day). These are the sessions that will not be recorded and where I will probably get the most benefit
(and here are some of my interests). The value I receive comes from doing things that I learn from - not by being a passive listener, but by actively participating in a discussion or an activity.


Chalk talks


This is similar to getting a design session and time with an AWS expert in their field, diving deep into a specific subject. Most of the sessions are level 300/400 - which means they are advanced and highly technical. The rooms are small - usually no more than 50-100 people - and the participants are usually people looking for very specific answers about the journey they have embarked on, or are about to.

ARC210-R - SaaS Jumpstart: A Primer for Launching Your SaaS Journey
ARC213-R - Architecting for the Cloud
ARC216 - SaaS Operations: The Foundation of SaaS Agility
ARC301 - Cost Optimization Tooling
ARC306 - Breaking up the Monolith
ARC310-R - From One to Many: Diving Deeper into Evolving VPC Design
ARC317-R - Reliability of the Cloud: How AWS Achieves High Availability
ARC325 - SaaS Analytics and Metrics: Capturing and Surfacing the Data That's Fundamental to Your Success
ARC326-R1 - Migrating Single-Tenant Applications to Multi-Tenant SaaS
ARC408 - Under the Hood of Amazon Route 53

Builder Sessions


Looking for some personal time with an SA on a specific topic? Even better - you get to build the solution at hand with guidance from the expert on hand. A pure learning experience.

ANT402-R - Securing Your Amazon Elasticsearch Service Domain
ARC415-R - Building Multi-Region Persistence with MySQL

Workshops


Again - a hands-on learning experience - 2-3 hours of sitting down on a specific topic getting my hands dirty...

ARC404 - Designing for Operability: Getting the Last Nines in Five-Nines Availability
ARC315-R1 - Hands-On: Building a Multi-Region Active-Active Solution
ARC327-R1 - Hands-on SaaS: Constructing a Multi-Tenant Solution on AWS
ARC403 - Resiliency Testing: Verify That Your System Is as Reliable as You Think
CMP403-R1 - Running Amazon EKS Workloads on Amazon EC2 Spot Instances
DEV303-R2 - Instrumenting Kubernetes for Observability Using AWS X-Ray and Amazon CloudWatch
DEV306-R - Monitoring for Operational Outcomes and Application Insights: Best Practices
GPSWS402 - Continuous Compliance for Modern Application Pipelines
GPSWS407 - Automated Solution for Deploying AWS Landing Zone
NET410 - Workshop: Deep Dive on Container Networking at Scale on Amazon EKS, Amazon ECS, & Amazon EC2
SEC331-R1 - Find All the Threats: AWS Threat Detection and Remediation
SEC337-R - Build a Vulnerability Management Program Using AWS for AWS

Hackathons


Want to geek out and build something, play a game or solve a whodunnit quest? This is where I will get my game on.  Some are for fun, some are for fame, and others just for plain doing some good.

Giving back


Being at re:Invent is something that is fun, and usually something that can involve consumption of many things: food, alcohol, entertainment and even your hard-earned cash. Me being me, I prioritize giving back to others as part of my daily life. Spending a week at a conference only receiving is not something I am comfortable with.
So as a result I will be spending some of my time here: https://reinvent.awsevents.com/play/giving-back
The BackPack for Kids program provides bags of nutritious, single-serving, ready-to-eat food items each Friday to children who might otherwise go without during weekends and long breaks from school. Come by the Venetian Sands Foyer to get involved and help put together a backpack or two! Learn more about Three Square here.

Keynotes


Even though the keynotes can be consumed from a live stream, there is something about sitting in a room (or a huge hall) with a boatload of people where Andy Jassy goes up on stage and bombards you with all the new features that are coming (some of which will only be available sometime in the future). It is quite mesmerizing, and if you have not been to one of these keynotes - I would suggest you go. It is quite an experience.

The Certification Lounge


As Corey Quinn just wrote a few days ago
it's a $100 lounge pass with a very odd entrance questionnaire
If you have an AWS certification - go to the lounge. It is a place to get away from the 49,000 others in the hallways and the constant buzz around you.

The Expo


Do not under any circumstances miss the Expo floor. To really make proper use of it, I would say you will need a good 6-8 hours of your schedule (don't do it in one shot though). Go to the vendors, especially the smaller ones that don't have the huge booths. Look at your competition, speak to people, make yourself known. Yes, you will be bombarded after the show with sales calls - but all it takes is a simple "Sorry, not interested anymore" and most vendors will leave you be.


Social media


I don't think I could get by without following what is going on on Twitter.
I have had a search column dedicated to re:Invent for the past month already.


I will also be checking the og-aws Slack channel to coordinate snark about the announcements and goings-on at the event, and also for some face-to-face meetings with people I have so far only met through their avatars.

(And as always the great set of posts at the Guide of Guides is invaluable.)

See you all in 2 weeks!

2018-11-08

Events as a Service (EaaS)

Most vendors that perceive themselves as a market leader will have a major annual event (some will even have multiple events in different geographical locations).

Here are a few of these major events that come to mind:


And every year we come around to the registration and scheduling of sessions for these events, and they almost always suck... 

(I am going to use re:Invent as the victim here - but I am sure that the experience is probably the same with most conferences) 

There are more than enough things that one could find wrong with the way things go at a conference - and I am not diminishing the problems one little bit.

I would like us all to view it in a different perspective.

The companies that hold these events are tech companies. They are great at selling technology, great at creating some amazing technology. And of course they also have people that are in charge of events and marketing - but it is not their core business. 

I do not underestimate the impact a good event can have on your product - or how a bad event can damage a company's brand - that is why companies like these spend many millions of dollars on events like this. But again, that is not what they are trying to sell; they are not trying to sell an event. They are not event planners - this is something we seem to forget from time to time, especially when things are not optimal (another polite way of saying that they suck).

They outsource the events to an external company.

The signs, the transport, the advertising, the venue, the website, the on-site services, the scanners, the food - and yes, even the mobile app. None of these belong to any one of these companies; they are all provided as part of the service that another company sells to these market leaders.

It does not make sense for any of the large vendors to bring up an event all by themselves. For an event that is sometimes no more than 5 days in a year - they will not maintain all the dedicated resources (physical, human and virtual) for just one event. 

So it makes sense to outsource it all. And they do.

There are a few vendors out there that are capable of bringing up events on this scale - such as Cvent or Lanyon - and if you ask me, they do a pretty good job.

There are always things that can be improved. The app could be better (this year there are significant improvements in the re:Invent app experience 😃), the registration could be better, the directing of human traffic at the conference could be better - the list could go on and on.

It IS the job of the tech vendor marketing teams to demand that these event companies improve from one event to another and get better from year to year. To make sure the food is better, to improve registration, to make sure that the (also human) traffic flows. 

If I look at this from a technology perspective, it is a classic case of consuming something aaS (As a Service). AWS provides us with infrastructure, and they maintain software, but they do not employ all the people that put the chips on the motherboards of every server in their datacenters. They do have people that provide input into the design of the servers - in order for them to operate more efficiently, and in turn provide a better service to their customers (you and me). 

I would not expect them to have chip designers or assembly plants on the payroll to allow them to run their business. They outsource / contract that work from a 3rd party. 

They contract / outsource their event management. All the big companies do - it makes perfect financial sense. 

Does that mean we should stop bitching about the food, the lines, the app? Hell no! By providing constructive criticism (or complaining) we make things better, because that is what we the customer demand. And these event management companies - will hopefully improve.

Some food (pun intended) for thought - for when you are at your next conference. 

2018-08-19

A Triangle is Not a Circle & Some Things Don’t Fit in the Cloud

Baby Blocks

We all started off as babies, and I am sure that not many of you remember that one of the first toys you played with (and if you do not remember - then I am sure those of you with kids have probably done the same with your children) was a plastic container with different shapes on the lid and blocks that were made of different shapes.

A triangle would only go into the triangle, a circle in the circle, a block in the block and so on.

This is a basic skill that teaches us that no matter how hard we try, there are some things that just do not work. Things can only work in a certain way (and of course it also teaches coordination, patience and a whole lot of other educational things).

It is a skill that we acquire; it takes time and patience, but everyone gets there in the end.

And why am I blogging about this – you may ask?

This analogy came up a few days ago in a discussion about how to provide a highly available database in the cloud.

And it got me thinking….

There are certain things that are not meant to be deployed in a cloud environment, because they were never meant to be there in the first place. In this case, the application needed an Oracle database, and it was supposed to be deployed in a cloud environment.

What is the default way to deploy Oracle in a highly available configuration? Oracle RAC. There are a number of basic (simplified) requirements for Oracle RAC.

  1. Shared disk between the nodes.
    That will not work in a cloud environment.
    So we can try using dNFS – as the shared storage for the nodes – that might work..
    But then you have to make an NFS mount available to the nodes – in the cloud.
    So let’s deploy an NFS node as part of the solution.
    But then we have to make that NFS node highly available.
  2. Multicast between the nodes - that also does not work well in the cloud.
    So maybe create a networking environment in the cloud that will support multicast?
    Deploy a router appliance in the cloud.
    Now connect all the instances in the cloud into the router.
    But the router poses as a single point of failure.
    Make the router highly available.

And if not Oracle RAC – then how about Data Guard – which does not require shared storage?

But it has a steep licensing fee.
And you have to find a way of managing the virtual IP address - which you will not necessarily have control over.
But that can be overcome by deploying a VRRP solution with IP addresses that are manually managed.

ENOUGH!!!

Trying to fit a triangle into a square – yes if you push hard enough (it will break the lid and fit).
If you cry hard enough – Mom/Dad will come over and put it in for you.

Or you come up with a half-assed, half-baked solution like the one below…

blocks

Some things will not fit. Trying to make them fit creates even more (and sometimes even bigger) problems.

In this case the solution should have been - change the code to use a NoSQL database that can be deployed easily and reliably in a cloud environment.

As always your thoughts and comments are welcome.

2018-07-19

The #AWS World Shook and Nobody Noticed

A few days ago at the AWS Summit in New York there was an announcement which, in my honest opinion, went very noticeably under the radar, and I don't think many people understand exactly what it means.
The announcement I'm talking about is this one: EC2 Compute Instances for Snowball Edge.
Let's dig into the announcement. There is a new instance family, sbe1, which can run on the AWS Snowball Edge device - essentially a computer with a lot of disks inside.

The Snowball is a service that AWS provides to enable you to upload large amounts of data from your datacenter up to S3. Since its inception it has been a very interesting concept, and to me it has always been a one-off way of enticing you to bring more of your workloads and your data to AWS in a much easier way.

I also posted this on Twitter

Since its inception, AWS has always beaten the drum and pushed the message that everything will run in the cloud - and only there. That was the premise they built a large part of their business model upon. You don't need to run anything on-premises, because everything that you would ever want or ever need is available in the cloud, consumed as a service, through an API.

During the course of my career, the question came up a number of times: "Does AWS deploy on-prem?" Of course the answer was always, "No, never gonna happen."

Most environments out there are datacenter snowflakes - built differently, none of them looking the same or having the same capabilities, features or functionality. They are unique, and integrating a system into different datacenters is not easy. Adapting to so many different snowflakes is a really hard job, and something we have been trying to solve for many years - trying to build layers of abstraction, automation and standards across the industry. In some ways we as an industry have succeeded, and in others we have failed dismally.

In June 2017, AWS announced general availability of Greengrass - a service that allows you to run Lambda functions on connected devices wherever they are in the world (and more importantly - they are not part of the AWS cloud).

This is the first foot in the door - to allow AWS into your datacenter. The first step of the transformation.

Back to the announcement.

It seems that each Snowball is a server with approximately 16 vCPUs and 32GB of RAM (I assume a bit more, to handle the overhead of the background processes). So essentially a small hypervisor - most of us have servers that are much beefier than this little box as our home labs, or even as our laptops. It is not a strong machine - not by any means.

But now you have the option to run pre-provisioned EC2 instances on this box. Of course it is locked down, and you have a limited set of functionality available to you (the same way that you have a set of pre-defined options available in AWS itself - yes, there are literally tens of thousands of operations you can perform, but it is not a free-for-all).

Here is what stopped me in my tracks

EC2_endpoint
Connecting and Configuring the Device
After I create the job, I wait until my Snowball Edge device arrives. I connect it to my network, power it on, and then unlock it using my manifest and device code, as detailed in Unlock the Snowball Edge. Then I configure my EC2 CLI to use the EC2 endpoint on the device and launch an instance. Since I configured my AMI for SSH access, I can connect to it as if it were an EC2 instance in the cloud.

Did you notice what Jeff wrote ?
"Then I configure my EC2 CLI to use the EC2 endpoint on the device and launch an instance"

Also this little tidbit..

S3_Endpoint
"S3 Access – Each Snowball Edge device includes an S3-compatible endpoint that you can access from your on-device code. You can also make use of existing S3 tools and applications"
That means AWS just brought the functionality of the public cloud - right into your datacenter.
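In practice, pointing your everyday tooling at the device is just a matter of overriding the endpoint URL. A minimal boto3 sketch (my illustration - the IP address, ports and region name are placeholders for a device on your LAN; the real values come from the Snowball client when you unlock the device):

# Sketch: talking to a Snowball Edge as if it were a tiny private region.
# Endpoint addresses and region name below are placeholders for a device on your LAN;
# real usage would also pass the device's certificate bundle via verify=...
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://192.168.1.100:8243",  # the device's EC2-compatible endpoint
    region_name="snow",                          # placeholder region for the device
)
s3 = boto3.client(
    "s3",
    endpoint_url="https://192.168.1.100:8443",  # the device's S3-compatible endpoint
)

print(ec2.describe_instances())  # the same API calls you would make against the cloud
print(s3.list_buckets())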

Is it all the bells and whistles? Infinitely scalable, able to run complex MapReduce jobs? Hell no - that is not what this is for. (Honestly - I cannot actually think of any use case where I personally would want to run an EC2 instance on a Snowball - at least not yet.)

Now if you ask me - this is a trial balloon that they are putting out there to see if the solution is viable - and something that their customers are interested in using.

If this works - for me it is obvious what the next step is. Snowmobile 

SnowMobile

Imagine being able to run significantly more workloads on prem - same AWS experience, same API - and seamlessly connected to the public cloud.

Ladies and gentlemen. AWS has just brought the public cloud smack bang right into your datacenter.

They are no longer a public-cloud-only company - they provide hybrid cloud solutions as well.


If you have any ideas for a use case to run workloads on Snowball - or if you have any thoughts or comments - please feel free to leave below.

2018-07-03

Encounters in the Cloud - Interview

This is a translation of an interview I gave to IsraelClouds (a meet the architect session).

Hello, my name is Maish Saidel-Keesing. I am a Cloud and DevOps architect at CyberArk in Petach Tikva. I have over 19 years of experience in the compute industry. In the past I was a system administrator, managing Active Directory, Exchange and Windows servers. I have a lot of past experience with VMware systems - I wrote the first version of VMware vSphere Design - and I have extensive knowledge of OpenStack (where I also participated in the OpenStack Architecture Design Guide). In recent years I have been working in the public cloud area (AWS and Azure), and I am also in the process of writing another book called “The Cloud Walkabout”, based on my experience with AWS.

What was the catalyst that made you interested in cloud computing?

My interest in technology has been ingrained in me since I was a child. I am always interested in trying new things, and the cloud was for me a tool that enabled me, as an IT infrastructure professional, to push the organization to run faster and bring value to the entire company. 

The pace at which the company wanted to run with the standard ("old fashioned") tools was not fast enough and we headed toward the cloud (private and public) to help us meet our goals.

What difficulties did you encounter when you wanted to learn about cloud computing?

First of all, organizational buy-in. At first I encountered difficulties when I tried to explain to upper management why the cloud is important to the organization; it was not obvious, and it required a lot of persuasion and data to back up the statements.

Second, the level of local education (courses, lecturers) was not very high at the time, which required many hours of self-study and practical experience to learn each topic. I have never taken a frontal course here in Israel - only self-study at my own pace, including 5 AWS certifications and additional certifications.

What do you predict for the future in cloud computing vs. on-prem?

I predict that the day is near when the number of workloads running on-prem will be minimal, and the vast majority of our software will run in the public cloud. There will always be some applications that are not financially viable to move to the cloud, or that cannot live in the cloud because of security restrictions, so we will have to live in a hybrid world for many years. The world has become a cloud world, but we are still a long way from being able to seamlessly move our applications from one cloud to another.

Did the solutions you were looking for have alternatives among the various public cloud providers you worked with? If so, what were the considerations in choosing the supplier? What support did you receive from the cloud provider?

Similar to the situation today, the market leader was AWS; however, Google Cloud and Microsoft Azure have narrowed a huge gap in recent years. When I started the journey to the cloud, I worked only with AWS - they helped us with both personal and technical advice: about the existing ways to move the applications to the cloud, optimization and improvement, and in addition the business aspect of streamlining and reducing costs. 

What are the conclusions after your transition to the cloud compared to on-premises?

It is clear to me that it is impossible to compete with a public cloud. The many possibilities offered by each of the cloud providers are heaven and earth compared to the capabilities in your own datacenter. Building services that can be consumed from the cloud by simply calling a rich API can take thousands of hours, and even then, as private organizations, we cannot keep up with the tremendous pace of development in the industry.

In the public cloud we have no "limitations" on resources. (Of course there are certain limitations - but no organization has a chance of matching the scale that the cloud providers work at.)

How does an organization know that it is ready for the cloud? What are the important points (time, staff) that will indicate readiness?

When you start to see that business processes are moving too slowly, that competitors are faster than you, and that it takes too much time to respond to business requests. If the services you are asked to provide are not within your grasp, or meeting the load and the demands would take you months or years - in these situations you need to recognize that you have reached your limit, and that it is now time to ask for the "help of the cloud" and move on to the next stage in the growth of the company.

It is important to switch to the approach of becoming an enabler, a partner of the organization, and to accompany the business on the journey. Work together and think about how to advance the organizational agenda, and do not be the obstacle. If you insist on being the gatekeeper and the "no, you cannot" person, you will find yourself irrelevant to your organization and customers.

Finally, is there something important that you would like to see happening (meetings, studies or anything else) in the cloud computing community and / or the cloud computing portal?

Today, cloud vendors are trying to sell a story of success in every customer transition to the cloud. It is clear to me that this is not always a reflection of reality. For every success story I am sure there is at least the same number of failures - and probably more.

I would like all of us to learn from these failures - they are an invaluable resource and can serve us in our own attempts to go through the same process. I wish we shared many more of these stories. It's true that it's embarrassing, and true that it does not bring the glamor and glory of a success story - but it is very important that we learn from each other's mistakes.

Thank you for coming to Israel's cloud computing portal, we were very happy to host you.

2018-06-04

Microsoft to acquire Github??

Microsoft is currently in negotiations to acquire GitHub. GitHub.com. GitHub - the place where we all store our code, all our open source code.

I was actually quite shocked. There is this article. The first thing that surprised me was that Microsoft has been in negotiations with GitHub for quite some time. If they do buy GitHub, it could possibly change the world of open source. Almost everybody I know stores their code on GitHub. There are a few other places where you can store your code (Bitbucket, for example), but the main code repository in the world is definitely GitHub.

If this acquisition actually goes through, what would it actually mean? Microsoft would now have access to every single line of code - which, if you come to think of it, is actually quite a frightening thought. Bloody scary!! All the insights into the code, everything - the options are pretty much endless. Yes, of course there will be terms stating what exactly they can do with all this data, what data they will have access to and what they will keep private. We are wary of big brother and our privacy - but entrusting all our code to a potential competitor?

Microsoft has traditionally been perceived as the arch-villain of open source. But that has changed. Microsoft has become one of the biggest open source contributors in the world, largely because of Visual Studio Code, but they also contribute to a good number of other open source projects. There is a culture change within Microsoft, where the direction has become open source first - and if you don't do open source, you have to justify why this is not the case. I was personally exposed to this transformation during the few days I spent at the Microsoft mothership a couple of weeks ago. I participated in a number of briefings from several leading architects, project managers and product managers within the company, and was actually pleasantly surprised that they are becoming an open source company themselves.

So the consequences of such an acquisition are not yet clear to me. To the GitHub people I have to say: good for you, a huge exit - enjoy the fame and the glory that comes with being bought out by Microsoft. Whatever the number may be (two to five billion dollars), it is not a small sum. For the rest of the people in the world who are using GitHub, this might be a difficult situation. There are not many neutral places like Switzerland left in the world, and definitely not many neutral places like GitHub left in the software world any more.

Everybody has an angle. They might not say that they have ulterior motives, but it is all about providing revenue for your company. Not to mention the edge this will give Microsoft as a cloud provider that now has access to the biggest code repository in the world, and to a huge developer base which can now tie in conveniently to Azure. The conspiracy theories and reactions on social media are really amusing...

Something to think about..

Let me ask you, readers of my blog: if Microsoft were to acquire GitHub, would you continue storing your code in a Microsoft-owned repository? Yes or no?

Feel free to leave your comments and thoughts below.

2018-05-29

My commentary on Gartner’s Cloud MQ - 2018

As a true technologist, I am not in favor of analyst reports, and in some circles Gartner is a dirty word - but since most of the industry swears by Gartner, I went over the report.

Here are my highlights… (emphasis is mine – not from the source).

Most customers have a multicloud strategy. Most customers choose a primary strategic cloud IaaS provider, and some will choose a secondary strategic provider as well. They may also use other providers on a tactical basis for narrow use cases. While it is relatively straightforward to move VM images from one cloud to another, cloud IaaS is not a commodity. <- No shit Sherlock! Each and every cloud wants you using their services and ditching the competition which is why there will never be a standard across clouds.

Customers choose to adopt multiple providers in order to have a broader array of solutions to choose from. Relatively few customers use multicloud architectures (where a single application or workload runs on multiple cloud providers), as these architectures are complex and difficult to implement. <- Damn straight! 

Managing multiple cloud IaaS providers is challenging. <- Really????? 
Many organizations are facing the challenge of creating standardized policies and procedures, repeatable processes, governance, and cost optimization across multiple cloud providers. "Single pane of glass" management, seamless movement across infrastructure platforms and "cloudbursting" are unlikely to become reality, even between providers using the same underlying CIF or with use of portable application container technology. <- This is an interesting one … Everyone in the Kubernetes community will probably not agree – because that is exactly what so many organizations are hoping that Kubernetes will give them, their holy grail of Cloud Agnostic..

Note that the claim that an ecosystem is "open" has nothing to do with actual portability. Due to the high degree of differentiation between providers, the organizations that use cloud IaaS most effectively will embrace cloud-native management, rather than allow the legacy enterprise environment to dictate their choices.

"Lift and shift" migrations rarely achieve the desired business outcomes. Most customers who simply treat cloud IaaS like "rented virtualization" do not achieve significant cost savings, increased operational efficiency or greater agility. It is possible to achieve these outcomes with a "lift and optimize" approach — cloud-enabled virtual automation — in which the applications do not change, but the IT operations management approach changes to be more automated and cloud-optimized. Customers who execute a lift-and-shift migration often recognize, after a year, that optimization is needed. Gartner believes it is more efficient to optimize during the migration rather than afterward, and that customers typically achieve the best outcomes by adopting the full range of relevant capabilities from a hyperscale integrated IaaS+PaaS provider. <- This is exactly what all the cloud vendors are selling, Start with lift and shift – and then go all in

What Key Market Aspects Should Buyers Be Aware Of?
The global market remains consolidated around two clear leaders. The market consolidated dramatically over the course of 2015. Since 2016, just two providers — AWS and Microsoft Azure — have accounted for the overwhelming majority of the IaaS-related infrastructure consumption in the market, and their dominance is even more thorough if their PaaS-related infrastructure consumption is included as well. Furthermore, AWS is many times the size of Microsoft Azure, further skewing the market structure. Most customers will choose one of these leaders as their strategic cloud IaaS provider.

Chinese cloud providers have gone global, but still have limited success outside of the domestic Chinese market. The sheer potential size of the market in mainland China has motivated multiple Chinese cloud providers to build a broad range of capabilities; such providers are often trying to imitate the global leaders feature-for-feature. While this is a major technological accomplishment, these providers are primarily succeeding in their domestic market, rather than becoming global leaders. Their customers are currently China-based companies, international companies that are doing business in China and some Asia/Pacific entities that are strongly influenced by China. <- The Chinese market is just that – a Chinese market

AWS

Provider maturity: Tier 1. AWS has been the market pioneer and leader in cloud IaaS for over 10 years.

Recommended mode: AWS strongly appeals to Mode 2 buyers, but is also frequently chosen for Mode 1 needs. AWS is the provider most commonly chosen for strategic, organization wide adoption. Transformation efforts are best undertaken in conjunction with an SI. <- This does not mean that you can’t do it on your own – you definitely can – but you will sweat blood and tears getting there.

Recommended uses: All use cases that run well in a virtualized environment.

Strengths

AWS has been the dominant market leader and an IT thought leader for more than 10 years, not only in IaaS, but also in integrated IaaS+PaaS, with an end-of-2017 revenue run rate of more than $20 billion. It continues to aggressively expand into new IT markets via new services as well as acquisitions, adding to an already rich portfolio of services. It also continues to enhance existing services with new capabilities, with a particular emphasis on management and integration.

AWS is the provider most commonly chosen for strategic adoption; many enterprise customers now spend over $5 million annually, and some spend over $100 million. <- No-one said that cloud is cheap – on the contrary.
While not the ideal fit for every need, it has become the "safe choice" in this market, appealing to customers that desire the broadest range of capabilities and long-term market leadership.

AWS is the most mature, enterprise-ready provider, with the strongest track record of customer success and the most useful partner ecosystem. Thus, it is the provider not only chosen by customers that value innovation and are implementing digital business projects, but also preferred by customers that are migrating traditional data centers to cloud IaaS. It can readily support mission-critical production applications, as well as the implementation of highly secure and compliant solutions. Implementation, migration and management are significantly eased by AWS's ecosystem of more than 2,000 consulting partners that offer managed and professional services. AWS has the broadest cloud IaaS provider ecosystem of ISVs, which ensures that customers are able to obtain support and licenses for most commercial software, as well as obtain software and SaaS solutions that are preintegrated with AWS. <- which is exactly why they have been, are and will probably stay the market leader

Google

Provider maturity: Tier 1. GCP benefits, to some extent, from Google's massive investments in infrastructure for Google as a whole.

Recommended mode: GCP primarily appeals to Mode 2 buyers. <- Google is for the new stuff, don't want no stinking old legacy

Recommended uses: Big data and other analytics applications, machine learning projects, cloud-native applications, or other applications optimized for cloud-native operations.

Strengths

Google's strategy for GCP centers on commercializing the internal innovative technology capabilities that Google has developed to run its consumer business at scale, and making them available as services that other companies can purchase. Google's roadmap of capabilities increasingly targets customers with traditional workloads and IT processes, as well as with cloud-native applications. Google has positioned itself as an "open" provider, with a portability emphasis that is centered on open-source ecosystems. <- hell yeah – they are the ones that gave the world Kubernetes
Like its competitors, though, Google delivers value through operations automation at scale, and it does not open-source these proprietary advantages.

GCP has a well-implemented, reliable and performant core of fundamental IaaS and PaaS capabilities — including an increasing number of unique and innovative capabilities — even though its scope of services is not as broad as that of the other market leaders. Google has been most differentiated on the forward edge of IT, with deep investments in analytics and ML, and many customers who choose Google for strategic adoption have applications that are anchored by BigQuery.

Google can potentially assist customers with the process of operations transformation via its Customer Reliability Engineering program (currently offered directly to a limited number of customers, as well as in conjunction with Pivotal and Rackspace). The program uses a shared-operations approach to teach customers to run operations the way that Google's site reliability engineers do.

Microsoft

Provider maturity: Tier 1. Microsoft's strong commitment to cloud services has been rewarded with significant market success.

Recommended mode: Microsoft Azure appeals to both Mode 1 and Mode 2 customers, but for different reasons. Mode 1 customers tend to value the ability to use Azure to extend their infrastructure-oriented Microsoft relationship and investment in Microsoft technologies. Mode 2 customers tend to value Azure's ability to integrate with Microsoft's application development tools and technologies, or are interested in integrated specialized PaaS capabilities, such as the Azure Data Lake, Azure Machine Learning or the Azure IoT Suite.

Recommended uses: All use cases that run well in a virtualized environment, particularly for Microsoft-centric organizations.

Strengths

Microsoft Azure's core strength is its Microsoft heritage — its integrations (both current and future) with other Microsoft products and services, its leverage of the existing Microsoft ISV ecosystem, and its overall strategic importance to Microsoft's future. Azure has a very broad range of services, and Microsoft has steadily executed on an ambitious roadmap. Customers that are strategically committed to Microsoft technology generally choose Azure as their primary cloud provider. <- This! This is the primary reason why Microsoft has come up in the Cloud in the past few years and why they will continue to push hard on Amazon’s heels. They are the one and only Cloud provider with a complete and true hybrid story

Microsoft Azure's capabilities have become increasingly innovative and open, with improved support for Linux and open-source application stacks. Furthermore, many customers that are pursuing a multicloud strategy will use Azure for some of their workloads, and Microsoft's on-premises Azure Stack software may potentially attract customers seeking hybrid solutions. <- Having spent a few days on the Microsoft campus a week or two ago – this is unbelievably true. They are a completely different company – and for the better.


So there were significant changes in the number of participants from last year – many vendors were left out. Here are the changes in the Inclusion and Exclusion criteria – which probably caused the shift.

2018

Market traction and momentum. They must be among the top global providers for the relevant segments (public and industrialized private cloud IaaS, excluding small deployments of two or fewer VMs). They must have ISO 27001-audited (or equivalent) data centers on at least three continents. They must have at least one public cloud IaaS offering that meets the following criteria:

- If the offering has been generally available for more than three years: A minimum of $250 million in 2017 revenue, excluding all managed and professional services; or more than 1,000 customers with at least 100 VMs.
- If the offering has been generally available for less than three years: A minimum of $10 million in 2017 revenue, excluding all managed and professional services, as well as a growth rate of at least 50% exiting 2017.

2017

Market traction and momentum. They must be among the top 15 global providers for the relevant segments (public and industrialized private cloud IaaS, excluding small deployments of one or two VMs), based on Gartner-estimated market share and mind share.

2018

Technical capabilities relevant to Gartner clients. They must have a public cloud IaaS service that is suitable for supporting mission-critical, large-scale production workloads, whether enterprise or cloud-native. Specific generally available service features must include:

- Software-defined compute, storage and networking, with access to a web services API for these capabilities.
- Cloud software infrastructure services facilitating automated management, including, at minimum, monitoring, autoscaling services and database services.
- A distributed, continuously available control plane supporting a hyperscale architecture.
- Real-time provisioning for compute instances (small Linux VM in five minutes, 1,000 Linux VMs in one hour) and a container service that can provision Docker containers in seconds.
- An allowable VM size of at least 16 vCPUs and 128GB of RAM.
- An SLA for compute, with a minimum of 99.9% availability. <- see the downtime arithmetic below
- The ability to securely extend the customer's data center network into the cloud environment.
- The ability to support multiple users and API keys, with role-based access control.
The 2018 inclusion criteria were chosen to reflect the key traits that Gartner clients are seeking for strategic cloud IaaS providers, and thus reflect minimum requirements across a range of bimodal use cases.
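
Speaking of that 99.9% compute SLA: it sounds impressive until you do the arithmetic on how much downtime it actually permits. A quick back-of-the-envelope sketch (Python, assuming a 30-day month and a 365-day year):

```python
# Downtime budget allowed by an availability SLA.
# Assumes a 30-day month and a 365-day year for simplicity.
def downtime_budget(availability):
    """Return (minutes of downtime per month, hours per year) the SLA allows."""
    minutes_per_month = (1 - availability) * 30 * 24 * 60
    hours_per_year = (1 - availability) * 365 * 24
    return minutes_per_month, hours_per_year

for sla in (0.999, 0.9999):
    per_month, per_year = downtime_budget(sla)
    print(f"{sla:.2%} -> ~{per_month:.0f} min/month, ~{per_year:.1f} h/year")

# Output:
# 99.90% -> ~43 min/month, ~8.8 h/year
# 99.99% -> ~4 min/month, ~0.9 h/year
```

In other words, a provider can meet this bar while still being down for the better part of an hour every month – which is why the SLA floor, and not just the feature list, matters when you read these criteria.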

2017

Technical capabilities relevant to Gartner clients. The public cloud IaaS service must be suitable for supporting production workloads, whether enterprise or cloud-native. Specific service features must include:

- Data centers in at least two metropolitan areas, separated by a minimum of 250 miles, on separate power grids, with SSAE 16, ISO 27001 or equivalent audits
- Real-time provisioning (small Linux VM in five minutes)
- The ability to scale an application beyond the capacity of a single physical server
- An allowable VM size of at least eight vCPUs and 64GB of RAM
- An SLA for compute, with a minimum of 99.9% availability
- The ability to securely extend the customer's data center network into the cloud environment
- The ability to support multiple users and API keys, with role-based access control
- Access to a web services API

So I think it is really sneaky that Gartner changed their criteria – and probably what pulled most of the players out of the game was the part about containers and the rest of the managed services. Some were also probably pushed out because of the revenue requirements. Those vendors are very much still alive – they are very much still making money – but it is clear who is the boss, who is hot on their heels, and who is going to be picking up the scraps.

Here are the quadrants – year after year

[Figure: Gartner IaaS Magic Quadrant – June 2018]

[Figure: Gartner IaaS Magic Quadrant – June 2017]

[Figure: Gartner Magic Quadrant for Cloud IaaS – 2016]

[Figure: Gartner IaaS Magic Quadrant – 2015]