Contour Ingress Controller
Latest revision as of 23:37, 19 July 2020

Contour is an open source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. Contour supports dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile.

Contour was recently (July 2020) added as a CNCF incubator project.

Installation

Installation was literally as simple as applying a single manifest directly from the Contour site:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

The supplied installation guide was very helpful, and provided some 'toy' examples to help understand the basic concepts. One of the more useful documents on their website, however, was the guide to connecting Contour with cert-manager to provide automatic Let's Encrypt certificate management for ingresses. Cert-manager is a package of its own with multiple ways to install it. I'm inclined to avoid helm packages at this time because of my bad experiences with them on the MediaWiki installation (a story for another day), but a simple kubernetes deployment manifest was available and worked well.

The installation of cert-manager was also very straightforward, but the Contour guide was a few minor versions off and caused some consternation until I went directly to the cert-manager GitHub site and read their directions. In the end, all I really had to do was use a newer manifest:

kubectl create namespace cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml

The fact that they require you to turn off YAML validation bothers me ... but it did seem to work properly.

At this point you have the core functionality present and ready for use ... but each package requires some CRDs to be created in order to use the functionality.

cert-manager ClusterIssuer

The CRD required for cert-manager is the definition of where and how to get the certificates. Using the ClusterIssuer CRD enables a single source for certificates across all namespaces; if there is a need for a special certificate process in individual namespaces, the Issuer CRD can be used instead.

Let's Encrypt has placed rate limits on issuing certificates for a given CN to prevent abuse; but they also recognize that you need to test your installation, which can end up requesting lots of certificates. They have set up a staging server with much more relaxed rate limits, which also doesn't issue trusted certificates -- so you can get your process down, then switch over to their production server when you're ready to go live. The guide has you create both a staging and a production ClusterIssuer for this purpose, saved as 'letsencrypt-staging.yaml' and 'letsencrypt-prod.yaml' in the repository. The only thing you need to modify in these manifests is the email address, as that is the key the Let's Encrypt servers use to identify certificate ownership. These CRDs are installed by applying their manifests:

kubectl apply -f letsencrypt-staging.yaml -f letsencrypt-prod.yaml
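For reference, here is a sketch of what the staging ClusterIssuer manifest contains -- assuming the cert-manager.io/v1alpha2 API in use around cert-manager v0.15, with a placeholder email address:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint; the production issuer swaps in the
    # production directory URL and a 'letsencrypt-prod' name.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder -- use your own address
    privateKeySecretRef:
      name: letsencrypt-staging     # secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: contour            # answer HTTP01 challenges via Contour
```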

Creating Ingresses to publish access to Services

The whole point of a kubernetes ingress (and the newer HTTPProxy that is part of the Contour package) is to enable access to kubernetes services from outside the cluster. The Service can be of any type.

All that needs to be done is to create an Ingress in the same namespace as the Service, referencing the service by name and matching the port as specified in the Service. A simple example Service:

apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: httpbin

... and the corresponding Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
spec:
  rules:
  - host: httpbin.davecheney.com
    http:
      paths:
      - backend:
          serviceName: httpbin
          servicePort: 8080

Note that the serviceName and servicePort match those values in the Service specification. The 'spec.rules.host' parameter is the hostname that will trigger this ingress when an HTTP/HTTPS request arrives. This definition of an ingress will support only unencrypted HTTP.

Creating Certificates

To create and use a certificate for an Ingress, we have to make two changes to the Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.class: contour
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: httpbin
    hosts:
    - httpbin.davecheney.com
  rules:
  - host: httpbin.davecheney.com
    http:
      paths:
      - backend:
          serviceName: httpbin
          servicePort: 8080

The two changes are in the 'annotations' and in the 'spec.tls' sections. The 'annotations' give Contour and cert-manager the instructions to:

  • get the certificate from Let's Encrypt (staging or prod)
  • force all connections to this ingress to be HTTPS (redirecting HTTP as needed)
  • identify Contour as the ingress controller
  • use the 'acme' protocol for getting Let's Encrypt certificates

The spec.tls parameters specify how and where the certificate is created and stored:

  • kubernetes secrets in the current namespace (not the cert-manager namespace) are used to store certificates ... the name of the secret can be whatever is needed to identify the certificate
  • the hostnames provided are used to create the certificate

The staging ClusterIssuer can be used initially, but make sure the production ClusterIssuer is specified in the final manifests when deploying. Create the Ingress by applying the manifest:

kubectl apply -f <ingress manifest>.yaml

As soon as the ingress is created, cert-manager will see the request and obtain the certificate as specified in the spec.tls section. It creates a new Certificate object to hold all the details needed to support and renew the certificate. To verify the certificate was created successfully, look at the Certificate object (which is named the same as the secret):

kubectl describe certificate <secret name> | tail -n 12

You will see the messages from the exchange with the certificate servers -- and within a few seconds, should see that the certificate was issued. At that point, accesses to the service should be forced to HTTPS and show a valid certificate issued by Let's Encrypt.

HTTPProxy as a replacement for Ingress

From the Contour documentation for HTTPProxy:

The Ingress object was added to Kubernetes in version 1.1 to describe properties of a cluster-wide reverse HTTP proxy. Since that time, the Ingress object has not progressed beyond the beta stage, and its stagnation inspired an explosion of annotations to express missing properties of HTTP routing.

The goal of the HTTPProxy (previously IngressRoute) Custom Resource Definition (CRD) is to expand upon the functionality of the Ingress API to allow for a richer user experience as well as addressing the limitations of the latter’s use in multi-tenant environments.

From what I have observed, the use of HTTPProxy vs. Ingress results in a simpler manifest with less duplicate information and more flexibility. There is one challenge, however:

cert-manager currently does not have a way to interact with HTTPProxy objects in order to respond to the HTTP01 challenge correctly. (See https://github.com/projectcontour/contour/issues/950 and https://github.com/projectcontour/contour/issues/951 for details.) ... This means that cert-manager can’t be directly used for generating certificates for HTTPProxy configuration.

Using HTTPProxy

The workaround for this problem is to create a dummy Ingress with enough information to pass YAML validation and trigger cert-manager to create a certificate ... but not enough to actually create any routes. Here are the resulting manifests, using the same examples as above:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbinproxy
spec:
  virtualhost:
    fqdn: httpbinproxy.davecheney.com
    tls:
      secretName: httpbin
  routes:
  - services:
    - name: httpbin
      port: 8080

This object will initially be marked as Invalid by Contour, since the TLS secret doesn’t exist yet. To get cert-manager to create that secret, create the dummy Ingress object:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
  name: httpbinproxy
  namespace: default
spec:
  rules:
  - host: httpbinproxy.davecheney.com
    http:
      paths:
      - backend:
          serviceName: httpbin
          servicePort: 8080
  tls:
  - hosts:
    - httpbinproxy.davecheney.com
    secretName: httpbin

Set the hostname as appropriate, and make sure the secret names match. This seems excessive, as this does nothing more than the example in the Ingress section above ... but it does come in handy when the added capability of an HTTPProxy is needed.

Encrypted Upstream Communication

While many (most?) upstream services are happy to just handle unencrypted communications, some require the use of TLS (HTTPS). This is a problem since the Ingress terminates the SSL/TLS session and the resulting communication with the upstream service is unencrypted.

This is a simple example of the power of the HTTPProxy, and the flexibility it provides over an Ingress. The HTTPProxy allows the specification of what protocol (specifically, which encryption protocol) is used for communicating with the upstream service. Here is the above example HTTPProxy communicating with the upstream service on port 443 using HTTPS (TLS):

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbinproxy
spec:
  virtualhost:
    fqdn: httpbinproxy.davecheney.com
    tls:
      secretName: httpbin
  routes:
  - services:
    - name: httpbin
      port: 443
      protocol: tls

This will instruct Contour to terminate the incoming SSL/TLS session with the specified hostname's certificate, and then initiate a new TLS connection to the upstream process behind the specified service.

External Services

The power of the Ingress is that it can serve as a reverse proxy, allowing one entry point to be directed wherever is needed to serve the request based on the hostname being accessed ... but if the complete suite of services includes capabilities outside of the kubernetes cluster, special actions must be taken to include them in the cluster reverse proxy. The key to this capability is that Contour establishes routes to a kubernetes Service -- it doesn't care what the service is or how it is defined. That abstraction reduces the problem to creating a Service that references the external service.

Kubernetes includes a special Service type called 'ExternalName' that effectively creates a CNAME in the cluster-internal DNS pointing to an FQDN provided in the Service specification:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com

The port number of the external service is specified in the Ingress (or HTTPProxy) definition, and the connection is simply redirected to the external service. If encrypted communication is needed, the HTTPProxy can be used as described above.
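Putting the pieces together, here is a sketch of an HTTPProxy fronting that ExternalName Service over TLS. The fqdn, secret name, and proxy name are hypothetical, and this assumes the Service also declares a matching port entry and that the TLS secret already exists:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-service-proxy
  namespace: prod
spec:
  virtualhost:
    fqdn: db.example.com            # hypothetical external-facing hostname
    tls:
      secretName: my-service-cert   # assumes this TLS secret already exists
  routes:
  - services:
    - name: my-service              # the ExternalName Service defined above
      port: 443                     # port on the external service
      protocol: tls                 # re-encrypt traffic to the upstream
```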