Kubernetes Ingress Controller for AWS
This is an ingress controller for Kubernetes — the open-source container deployment, scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.
This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to find additional details about the cluster that Kubernetes provisioned on top of AWS. This information is used to manage AWS resources for each ingress object in the cluster.
Features
- Uses CloudFormation to guarantee consistent state
- Automatic discovery of SSL certificates
- Automatic forwarding of requests to all Worker Nodes, even with auto scaling
- Automatic cleanup of unnecessary managed resources
- Support for both [Application Load Balancers][alb] and [Network Load Balancers][nlb].
- Support for internet-facing and internal load balancers
- Support for ignoring cluster-internal ingresses that only have `--cluster-local-domain=cluster.local` domains
- Support for denying traffic for internal domains
- Support for multiple Auto Scaling Groups
- Support for instances that are not part of Auto Scaling Group
- Support for SSLPolicy, set default and per ingress
- Support for CloudWatch Alarm configuration
- Can be used in clusters created by Kops, see our deployment guide for Kops
- Support for Multiple TLS Certificates per ALB (SNI).
- Support for AWS WAF and WAFv2
- Support for AWS CNI pod direct access
- Support for Kubernetes CRD RouteGroup
- Support for zone aware traffic (defaults to cross zone traffic and no zone affinity)
  - enable and disable cross zone traffic: `--nlb-cross-zone=false`
  - set zone affinity to resolve DNS to the same zone: `--nlb-zone-affinity=availability_zone_affinity`, see also NLB attributes and NLB zonal DNS affinity
- Support for explicitly enabling certificates by using certificate tags: `--cert-filter-tag=key=value`
Upgrade
<v0.15.0 to >=v0.15.0
Version `v0.15.0` removes support for the deprecated Ingress versions `extensions/v1beta1` and `networking.k8s.io/v1beta1`.
<v0.14.0 to >=v0.14.0
Version `v0.14.0` makes the `target-access-mode` flag required to make upgrading users aware of the issue. New deployments of the controller should use `--target-access-mode=HostPort` or `--target-access-mode=AWSCNI`.

To upgrade from `<v0.12.17` use `--target-access-mode=Legacy` - it is the same as `HostPort` but does not set the target type and relies on CloudFormation to use `instance` as a default value. Note that changing later from `--target-access-mode=Legacy` will change the target type in CloudFormation and trigger target group recreation and downtime.

To upgrade from `>=v0.12.17` when `--target-access-mode` is not set, use an explicit `--target-access-mode=HostPort`.
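As an illustration, a minimal sketch of how the flag could be passed to the controller container in a Deployment manifest; the container name and image below are placeholders, not taken from this documentation:

```yaml
# Hypothetical Deployment excerpt: only the --target-access-mode flag comes
# from this section; name and image are placeholders.
spec:
  template:
    spec:
      containers:
        - name: kube-ingress-aws-controller                      # placeholder
          image: example.org/kube-ingress-aws-controller:v0.14.0 # placeholder
          args:
            - --target-access-mode=HostPort  # or AWSCNI when using AWS CNI pod direct access
```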
<v0.13.0 to >=0.13.0
Version `v0.13.0` uses Ingress version `v1` as the default. You can downgrade the ingress version to earlier versions via a flag. You will also need to allow the access via RBAC, see more information in `<v0.11.0 to >=0.11.0` below.
<v0.12.17 to <v0.14.0
Please see the release notes and the related issue: this update can cause 30s of downtime if you don't use AWS CNI mode. Please upgrade to `>=v0.14.0`.
<v0.12.0 to <=0.12.16
Version `v0.12.0` changes Network Load Balancer type handling if the Application Load Balancer type feature is requested. See the Load Balancers types notes for details.
<v0.11.0 to >=0.11.0
Version `v0.11.0` changes the default `apiVersion` used for fetching/updating ingresses from `extensions/v1beta1` to `networking.k8s.io/v1beta1`. For this to work the controller needs permissions to `list` `ingresses` and to `update` and `patch` `ingresses/status` from the `networking.k8s.io` `apiGroup`. See the deployment example. To fall back to the old behavior you can set the apiVersion via the `--ingress-api-version` flag. The value must be `extensions/v1beta1`, `networking.k8s.io/v1beta1` (default) or `networking.k8s.io/v1`.
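A minimal sketch of the additional RBAC rules described above, assuming a ClusterRole is used; the object name is a placeholder, and a real deployment needs further permissions (see the deployment example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller  # placeholder name
rules:
  # list ingresses from the networking.k8s.io apiGroup
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["list"]
  # update and patch ingresses/status
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update", "patch"]
```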
<v0.9.0 to >=v0.9.0
Version `v0.9.0` changes the internal flag parsing library to kingpin. This means flags are now defined with `--` (two dashes) instead of a single dash. You need to change all the flags like this: `-stack-termination-protection` -> `--stack-termination-protection` before running `v0.9.0` of the controller.
<v0.8.0 to >=v0.8.0
Version `v0.8.0` added a certificate verification check to automatically ignore self-signed certificates and certificates from internal CAs. The IAM role used by the controller now needs the `acm:GetCertificate` permission. The `acm:DescribeCertificate` permission is no longer needed and can be removed from the role.
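For illustration, the corresponding IAM statement could look like the following CloudFormation-style YAML sketch; all other permissions the controller requires are omitted here:

```yaml
# Illustrative fragment: grants only the acm:GetCertificate permission
# mentioned above; the rest of the controller's IAM policy is omitted.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - acm:GetCertificate
      Resource: "*"
```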
<v0.7.0 to >=v0.7.0
Version `v0.7.0` deletes the annotation `zalando.org/aws-load-balancer-ssl-cert-domain`, which we no longer consider a feature since we have SNI-enabled ALBs.
<v0.6.0 to >=v0.6.0
Version `v0.6.0` introduced support for Multiple TLS Certificates per ALB (SNI). When upgrading, your ALBs will automatically be aggregated to a single ALB with multiple certificates configured.

It also adds support for attaching single EC2 instances and multiple AutoScalingGroups to the ALBs, therefore you must ensure you have the correct instance filter defined before upgrading. The default filter is `tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node`, see How it works for more information on how to configure this.
<v0.5.0 to >=v0.5.0
Version `v0.5.0` introduced support for both `internet-facing` and `internal` load balancers. For this change we had to change the naming of the CloudFormation stacks created by the controller. To upgrade from v0.4.* to v0.5.0 no changes are needed, but because of the naming change of the stacks, migrating back down to a v0.4.* version will be disruptive, as the older version will be unable to manage the stacks with the new naming scheme. Deleting the stacks manually will allow for a working downgrade.
<v0.4.0 to >=v0.4.0
In versions before v0.4.0 we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.

In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

- `kubernetes:application=kube-ingress-aws-controller`
- `kubernetes.io/cluster/<cluster-id>=owned`

Additionally you must ensure that the instance where the ingress controller is running has the cluster ID tag `kubernetes.io/cluster/<cluster-id>=owned` set (this was `ClusterID=<cluster-id>` before v0.4.0).
Ingress annotations
Overview of configuration which can be set via Ingress annotations.
Annotations

Name | Value | Default |
---|---|---|
alb.ingress.kubernetes.io/ip-address-type | ipv4 \| dualstack | ipv4 |
zalando.org/aws-load-balancer-ssl-cert | string | N/A |
zalando.org/aws-load-balancer-scheme | internal \| internet-facing | internet-facing |
zalando.org/aws-load-balancer-shared | true \| false | true |
zalando.org/aws-load-balancer-security-group | string | N/A |
zalando.org/aws-load-balancer-ssl-policy | string | ELBSecurityPolicy-2016-08 |
zalando.org/aws-load-balancer-type | nlb \| alb | alb |
zalando.org/aws-load-balancer-http2 | true \| false | true |
zalando.org/aws-waf-web-acl-id | string | N/A |
kubernetes.io/ingress.class | string | N/A |
The defaults can also be configured globally via a flag on the controller.
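For illustration, an Ingress using a few of the annotations above; the resource name, host, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                     # placeholder
  annotations:
    zalando.org/aws-load-balancer-type: nlb        # request a Network Load Balancer
    zalando.org/aws-load-balancer-scheme: internal # internal instead of internet-facing
    zalando.org/aws-load-balancer-shared: "false"  # do not share the load balancer with other ingresses
spec:
  rules:
    - host: app.example.org                        # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                       # placeholder
                port:
                  number: 80
```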
Load Balancers types
The controller supports both [Application Load Balancers][alb] and [Network Load Balancers][nlb]. Below is an overview of which features can be used with the individual Load Balancer types.
Feature | Application Load Balancer | Network Load Balancer |
---|---|---|
HTTPS | :heavy_check_mark: | :heavy_check_mark: |
HTTP | :heavy_check_mark: | :heavy_check_mark: --nlb-http-enabled |
HTTP -> HTTPS redirect | :heavy_check_mark: --redirect-http-to-https | :heavy_multiplication_x: |
Cross Zone Load Balancing | :heavy_check_mark: (only option) | :heavy_check_mark: --nlb-cross-zone |
Zone Affinity | :heavy_multiplication_x: | :heavy_check_mark: --nlb-zone-affinity |
Dualstack support | :heavy_check_mark: --ip-addr-type=dualstack | :heavy_multiplication_x: |
Idle Timeout | :heavy_check_mark: --idle-connection-timeout | :heavy_multiplication_x: |
Custom Security Group | :heavy_check_mark: | :heavy_multiplication_x: |
Web Application Firewall (WAF) | :heavy_check_mark: | :heavy_multiplication_x: |
HTTP/2 Support | :white_check_mark: | (not relevant) |
To facilitate the switch of the default load balancer type from Application to Network: when the default load balancer type is Network (`--load-balancer-type="network"`) and the Custom Security Group (`zalando.org/aws-load-balancer-security-group`) or Web Application Firewall (`zalando.org/aws-waf-web-acl-id`) annotation is present, the controller configures an Application Load Balancer instead.
If the `zalando.org/aws-load-balancer-type: nlb` annotation is also present, the controller ignores the configuration and logs an error.
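A sketch of this behavior, assuming the controller runs with `--load-balancer-type="network"`; the resource name and security group ID are placeholders (metadata excerpt only):

```yaml
# Because a custom Security Group is requested, the controller provisions an
# Application Load Balancer even though the default type is Network. Adding
# zalando.org/aws-load-balancer-type: nlb as well would be ignored and logged
# as an error.
metadata:
  name: my-app                                                          # placeholder
  annotations:
    zalando.org/aws-load-balancer-security-group: sg-0123456789abcdef0  # placeholder
```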
AWS Tags
SecurityGroup auto detection needs the following AWS Tags on the SecurityGroup:
- `kubernetes.io/cluster/<cluster-id>=owned`
- `kubernetes:application=<controller-id>`, where controller-id defaults to `kube-ingress-aws-controller` and can be set by the flag `--controller-id=<my-ctrl-id>`.
AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.
In case you want to attach/detach single EC2 instances to the ALB TargetGroup, you have to have the same `<cluster-id>` set as on the running kube-ingress-aws-controller. Normally this would be `kubernetes.io/cluster/<cluster-id>=owned`.
Development Status
This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone running Kubernetes. Jump down to the Quickstart to try it out, and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.
As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.
We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.
Why We Created This Ingress Controller
The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.
We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper is also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.
How It Works
This controller continuously polls the API server to check for ingress resources. It runs an infinite loop: in each cycle it creates load balancers for new ingress resources and deletes the load balancers for obsolete/removed ingress resources.
This is achieved using AWS CloudFormation. For more details check our CloudFormation Documentation.
The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.
During the startup phase, EC2 filters are constructed as follows:
- If the `CUSTOM_FILTERS` environment variable is set, it is used to generate filters that are later used to fetch instances from EC2.
- If the `CUSTOM_FILTERS` environment variable is not set or could not be parsed, then the default filters are `tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node`, where `<cluster-id>` is determined from the EC2 tags of the instance on which the Ingress Controller pod is started.
`CUSTOM_FILTERS` is a list of filters separated by spaces. Each filter has the form `name=value`, where name can be a `tag:` or `tag-key:` prefixed expression, as would be recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.
For example, `tag-key=test` will filter instances that have a tag named `test`, ignoring the value.
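A sketch of setting `CUSTOM_FILTERS` on the controller container (Deployment excerpt; the container name and cluster id are placeholders):

```yaml
spec:
  template:
    spec:
      containers:
        - name: kube-ingress-aws-controller  # placeholder
          env:
            - name: CUSTOM_FILTERS
              # two filters separated by a space: a tag filter and a tag-key filter
              value: "tag:kubernetes.io/cluster/my-cluster=owned tag-key=k8s.io/role/node"
```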