Setting up Kubernetes on AWS EC2 with kube-aws

Following the Kubernetes on AWS guide by the CoreOS team gives you a Kubernetes cluster with one or two public subnets. While that may be fine in some cases, I went further and added the ability to place the worker and master nodes in private subnets, reached through an OpenVPN gateway node.

So, run the steps from that guide up to (but not including) kube-aws up - that command pushes the CloudFormation template to AWS and creates the infrastructure.

Create the VPC with subnets, route tables and so on using your own CloudFormation template.

Open cluster.yaml and set vpcId, routeTableId, vpcCIDR, the availability zones and instanceCIDRs, serviceCIDR, podCIDR and dnsServiceIP:

    vpcId: "vpc-c28215a6"

    # ID of existing route table in existing VPC to attach subnet to. Leave blank to use the VPC's main route table.
    routeTableId: "rtb-1ed0ab7a"

    # CIDR for Kubernetes VPC. If vpcId is specified, must match the CIDR of the existing VPC.
    vpcCIDR: ""

    # CIDR for Kubernetes subnet when placing nodes in a single availability zone (not highly-available). Leave commented out for a multi availability zone setting and use the below `subnets` section instead.
    # instanceCIDR: ""

    # Kubernetes subnets with their CIDRs and availability zones. Differentiating the availability zone for 2 or more subnets results in high availability (failures of a single availability zone won't result in immediate downtimes).
    subnets:
      - availabilityZone: us-west-2a
        instanceCIDR: ""
      - availabilityZone: us-west-2b
        instanceCIDR: ""

    # IP address for the controller in the Kubernetes subnet. When there are 2 or more subnets, the controller is placed in the first subnet and controllerIP must be included in the instanceCIDR of the first subnet. This convention will change once we have H/A controllers.
    controllerIP: ""

    # CIDR for all service IP addresses
    serviceCIDR: ""

    # CIDR for all pod IP addresses
    podCIDR: ""

    # IP address of the Kubernetes DNS service (must be contained by serviceCIDR)
    dnsServiceIP: ""
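These CIDRs have a few containment and non-overlap constraints that are easy to get wrong. A minimal sanity-check sketch using Python's ipaddress module - all addresses below are placeholder examples, not values from this cluster:

```python
import ipaddress

# Placeholder values for illustration; substitute your own cluster.yaml settings.
service_cidr = ipaddress.ip_network("10.3.0.0/24")
pod_cidr = ipaddress.ip_network("10.2.0.0/16")
dns_service_ip = ipaddress.ip_address("10.3.0.10")
subnet_cidrs = [ipaddress.ip_network("10.0.1.0/24"),
                ipaddress.ip_network("10.0.2.0/24")]
controller_ip = ipaddress.ip_address("10.0.1.50")

# dnsServiceIP must be contained by serviceCIDR.
assert dns_service_ip in service_cidr

# serviceCIDR and podCIDR must not overlap.
assert not service_cidr.overlaps(pod_cidr)

# controllerIP must fall inside the first subnet's instanceCIDR.
assert controller_ip in subnet_cidrs[0]

# The subnets themselves must not overlap each other.
assert not subnet_cidrs[0].overlaps(subnet_cidrs[1])

print("cluster.yaml CIDR layout looks consistent")
```

Running this before kube-aws touches AWS catches layout mistakes that would otherwise only surface during stack creation.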

Run the export command to generate the CloudFormation template; it will be used later to create the Kubernetes stack:

    kube-aws up --export

Open CLUSTER_NAME.stack-template.json and substitute the RouteTableId property in Subnet0RouteTableAssociation and Subnet1RouteTableAssociation with the relevant route tables from the new VPC. The availability zone of each subnet and of its route table's NAT gateway should match, to avoid sending traffic through a NAT gateway in another availability zone.
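The substitution can also be scripted instead of done by hand. A sketch using Python's json-style dict manipulation - the association resource names come from the exported template, while the rtb-... IDs are placeholders for the per-AZ route tables of your own VPC:

```python
# Map each subnet's route-table association to the private route table
# in the same availability zone (placeholder IDs; use your own).
ROUTE_TABLES = {
    "Subnet0RouteTableAssociation": "rtb-0aaaaaaa",  # us-west-2a
    "Subnet1RouteTableAssociation": "rtb-0bbbbbbb",  # us-west-2b
}

def patch_route_tables(template: dict) -> dict:
    """Point each subnet association at its own AZ's route table."""
    for assoc, rtb in ROUTE_TABLES.items():
        template["Resources"][assoc]["Properties"]["RouteTableId"] = rtb
    return template

# Example against a stripped-down template fragment.
template = {"Resources": {
    "Subnet0RouteTableAssociation": {
        "Properties": {"RouteTableId": {"Ref": "RouteTable"}}},
    "Subnet1RouteTableAssociation": {
        "Properties": {"RouteTableId": {"Ref": "RouteTable"}}},
}}
patched = patch_route_tables(template)
```

In a real run you would json.load the exported template, apply the patch and json.dump it back out.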

Find and remove the AWS::EC2::EIP resource associated with InstanceController, since the Kubernetes controller does not need a public IP attached:

    "EIPController": {
      "Properties": {
        "Domain": "vpc",
        "InstanceId": {
          "Ref": "InstanceController"
        }
      },
      "Type": "AWS::EC2::EIP"
    },
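Dropping the resource programmatically is a one-liner on the parsed template. A sketch, again against a stripped-down fragment; note that if any other resources or outputs in your exported template reference EIPController, those references must be removed as well:

```python
def remove_controller_eip(template: dict) -> dict:
    """Delete the Elastic IP resource so the controller stays private."""
    template["Resources"].pop("EIPController", None)
    return template

# Stripped-down example fragment of the exported template.
template = {"Resources": {
    "EIPController": {
        "Type": "AWS::EC2::EIP",
        "Properties": {"Domain": "vpc",
                       "InstanceId": {"Ref": "InstanceController"}}},
    "InstanceController": {"Type": "AWS::EC2::Instance", "Properties": {}},
}}
remove_controller_eip(template)
```

The controller instance itself is left untouched; only the public-IP resource goes away.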

Set MapPublicIpOnLaunch to false for both subnets:

    "Subnet0": {
      "Properties": {
        "AvailabilityZone": "us-west-2a",
        "CidrBlock": "",
        "MapPublicIpOnLaunch": false,
        "Tags": [
          {
            "Key": "KubernetesCluster",
            "Value": "DEV-KUBE"
          }
        ],
        "VpcId": "vpc-c28215a6"
      },
      "Type": "AWS::EC2::Subnet"
    },
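Rather than editing each subnet by hand, the flag can be flipped for every subnet resource in one pass. A sketch over a stripped-down template fragment:

```python
def disable_public_ips(template: dict) -> dict:
    """Set MapPublicIpOnLaunch to false on every subnet in the template."""
    for resource in template["Resources"].values():
        if resource.get("Type") == "AWS::EC2::Subnet":
            resource["Properties"]["MapPublicIpOnLaunch"] = False
    return template

# Stripped-down example fragment of the exported template.
template = {"Resources": {
    "Subnet0": {"Type": "AWS::EC2::Subnet",
                "Properties": {"MapPublicIpOnLaunch": True}},
    "Subnet1": {"Type": "AWS::EC2::Subnet",
                "Properties": {"MapPublicIpOnLaunch": True}},
    "InstanceController": {"Type": "AWS::EC2::Instance", "Properties": {}},
}}
disable_public_ips(template)
```

Iterating by resource type means the patch keeps working even if more subnets are added later.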

Then run the edited CLUSTER_NAME.stack-template.json with CloudFormation - it will create the private subnets, the master node and an auto-scaling group of workers.

Finally, SSH to the gateway instance created by the first stack and configure OpenVPN so you can interact directly with the Kubernetes API.