5 Reasons to Try the Kubernetes Gateway API

Since its early days, Kubernetes has included an API – the built-in Ingress resource – for configuring request routing of external HTTP traffic to Services. While it has been widely adopted by users and supported by many implementations (e.g., Ingress controllers), the Ingress resource limits its users in three major ways:

  • Insufficient features – Reduces the number of supported use cases.
  • Poor extensibility model – Limits access to advanced features available in many data planes like NGINX.
  • Lack of different user roles – Inhibits the safe sharing of data plane infrastructure among multiple teams within a cluster.

In response to these limitations, the Kubernetes community designed the Gateway API, a new project that provides a better alternative to the Ingress resource. In this blog post, we cover five reasons to try the Gateway API and discuss how it compares with the Ingress resource. We also introduce NGINX Gateway Fabric, our open source project that enables you to start using the Gateway API in your Kubernetes cluster while leveraging NGINX as a data plane.

Note: The Gateway API supports multiple use cases related to Service networking, including the experimental service mesh. That said, this blog post focuses on the Gateway API’s primary use case of routing external traffic to Services in a cluster. Additionally, while the API supports multiple protocols, we limit our discussion to the most common protocol, HTTP.

Gateway API Overview

The Gateway API is a collection of custom resources that provisions and configures a data plane to route traffic to Services in your cluster.

These are the primary Gateway API resources (a minimal combined example follows the list):

  • GatewayClass – Defines a template for any data planes yet to be provisioned.
  • Gateway – Provisions a data plane from a template (GatewayClass) and configures the entry points (ports) on it for accepting external traffic.
  • HTTPRoute – Configures HTTP request routing rules of external traffic to Services in the cluster and attaches those rules to the entry points defined by a Gateway.
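
To make these resources concrete, below is a minimal sketch of how they fit together. The class name, controller name, hostname, and Service name are illustrative, and the controllerName value would come from whichever Gateway API implementation you install:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller  # set by the implementation you install
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class  # provision a data plane from the GatewayClass template
  listeners:                       # entry points for external traffic
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway          # attach the routing rules to the Gateway's entry points
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: example-svc            # route matching requests to this Service
      port: 80

The GatewayClass is typically created by the implementation or the cluster operator, while the Gateway and HTTPRoute map to the user roles described later in this post.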

Another critical part of the Gateway API is a Gateway implementation, which is a Kubernetes controller that actually provisions and configures your data plane according to the Gateway API resources.

To learn more about the API, visit the Gateway API project website or watch a video introduction.

What Are the Reasons to Try the Gateway API?

These are five key reasons to try the new Gateway API:

  1. Number of supported features
  2. Powerful extensibility model
  3. Role separation
  4. Portability
  5. Community

Let’s look at each reason in detail.

Reason 1: Number of Supported Features

The Gateway API offers a multitude of features that unlock numerous new use cases, some of which are not fully supported by the Ingress resource.

These use cases include:

  • Canary releases
  • A/B testing
  • Request and response manipulation
  • Request redirects
  • Traffic mirroring
  • Cross-namespace traffic routing

For example, below is a request routing rule from an HTTPRoute that splits the traffic between two Kubernetes Services using weights. This enables the canary releases use case.

- matches: 
  - path: 
      type: PathPrefix 
      value: / 
  backendRefs: 
  - name: my-app-old 
    port: 80 
    weight: 95 
  - name: my-app-new 
    port: 80 
    weight: 5 

As a result, the data plane will route 95% of the requests to the Service my-app-old and the remaining 5% to my-app-new.

Next is an example featuring two rules. One of these rules leverages the Gateway API's advanced routing capabilities, matching on headers and query parameters.

- matches: # rule 1 
  - path: 
      type: PathPrefix 
      value: /coffee 
  backendRefs: 
  - name: coffee-v1-svc 
    port: 80 
- matches: # rule 2 
  - path: 
      type: PathPrefix 
      value: /coffee 
    headers: 
    - name: version 
      value: v2 
  - path: 
      type: PathPrefix 
      value: /coffee 
    queryParams: 
    - name: TEST 
      value: v2 
  backendRefs: 
  - name: coffee-v2-svc 
    port: 80 

As a result, the data plane routes requests whose URI begins with /coffee to the Service coffee-v2-svc under either of two conditions: the version header equals v2, or the TEST query parameter equals v2 (as in /coffee?TEST=v2), both defined in rule 2. All other requests for /coffee go to coffee-v1-svc, per rule 1.
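
The other use cases in the list above map to core API features in a similar way. For instance, request redirects are configured with the built-in RequestRedirect filter. A rule like the following (the /tea and /coffee paths are illustrative) redirects a request whose URI begins with /tea to the same path under /coffee with a 302 status code:

rules:
- matches:
  - path:
      type: PathPrefix
      value: /tea
  filters:
  - type: RequestRedirect
    requestRedirect:
      path:
        type: ReplacePrefixMatch   # replace the matched /tea prefix ...
        replacePrefixMatch: /coffee # ... with /coffee (e.g., /tea/latte -> /coffee/latte)
      statusCode: 302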

You can read the HTTPRoute documentation to learn about all of the supported features.

Reason 2: Powerful Extensibility Model

The Gateway API comes with a powerful extensibility model that allows an implementation to expose advanced data plane features that are not natively supported by the API itself. While Ingress controllers work around some of the Ingress resource's limitations by supporting custom extensions through annotations, the Gateway API's extensibility model is superior to that annotation-based approach.

For example, below is an Ingress resource extended with NGINX Ingress Controller annotations to enable several advanced NGINX features (each feature is explained in a comment next to its annotation):

apiVersion: networking.k8s.io/v1 
kind: Ingress 
metadata: 
  name: webapp  
  annotations: 
    nginx.org/lb-method: "ip_hash" # choose the ip_hash load-balancing method 
    nginx.org/ssl-services: "webapp" # enable TLS to the backend 
    nginx.org/proxy-connect-timeout: "10s" # configure timeouts to the backend 
    nginx.org/proxy-read-timeout: "10s" 
    nginx.org/proxy-send-timeout: "10s" 
    nginx.org/rewrites: "serviceName=webapp rewrite=/v1" # rewrite request URI 
    nginx.com/jwt-key: "webapp-jwk" # enable JWT authentication of requests 
    nginx.com/jwt-realm: "Web App" 
    nginx.com/jwt-token: "$cookie_auth_token" 
    nginx.com/jwt-login-url: "https://login.example.com" 
spec: 
  rules: 
  - host: webapp.example.com 
  . . . 

Annotations were never meant for expressing such a large amount of configuration – they are simple key-value string pairs that lack structure, validation, and granularity. (By lack of granularity, we mean annotations apply to a whole resource, not to individual parts of it, such as a single routing rule in an Ingress resource.)

The Gateway API includes a powerful, annotation-free extensibility model with several extension points, such as custom filters, policy attachments, and custom backends (destinations). This model enables Gateway API implementations to offer advanced data plane features via custom resources that provide structure and validation. Additionally, users can apply an extension to a specific part of a resource, such as an individual routing rule, which adds further granularity.

For example, this is how a custom filter from an imaginary Gateway API implementation enhances an HTTPRoute by applying a rate limit to just the /coffee rule:

rules: 
- matches: 
  - path: 
      type: PathPrefix 
      value: /coffee 
  filters: 
  - type: ExtensionRef 
    extensionRef: 
      group: someproxy.example.com 
      kind: RateLimit 
      name: coffee-limit 
  backendRefs: 
  - name: coffee 
    port: 80 

The Gateway API implementation then applies the filter configured in the coffee-limit custom resource to the /coffee rule. The rate limit specification can look like this:

rateLimit: 
  rate: 10r/s 
  key: ${binary_remote_addr} 
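
Tying the two snippets together, the hypothetical coffee-limit custom resource referenced by the ExtensionRef filter might be defined in full like this (the group, version, and fields are imaginary, mirroring the example above):

apiVersion: someproxy.example.com/v1alpha1  # imaginary API group from the ExtensionRef
kind: RateLimit
metadata:
  name: coffee-limit               # the name referenced by the filter in the HTTPRoute
spec:
  rateLimit:
    rate: 10r/s                    # allow 10 requests per second
    key: ${binary_remote_addr}     # count requests per client address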

Note: We showed you a possible extension rather than a concrete one because the NGINX Gateway Fabric project hasn’t yet taken advantage of the Gateway API extensibility model. However, this will change in the future, as the project will support many extensions to enable users to access advanced NGINX features that are not available through the Gateway API.

Reason 3: Role Separation

The Ingress resource supports only a single user role (application developer), which has full control over how traffic reaches an application in a Kubernetes cluster. That level of control is often unnecessary, and it can even inhibit the safe sharing of data plane infrastructure among multiple developer teams.

The Gateway API divides responsibility over provisioning and configuring infrastructure among three roles: infrastructure provider, cluster operator, and application developer. These roles are summarized in the table below.

Role                    | Owner of the Gateway API Resources | Responsibilities
Infrastructure provider | GatewayClass                       | Manage cluster-related infrastructure**
Cluster operator        | Gateway, GatewayClass*             | Manage a cluster for application developers
Application developer   | HTTPRoute                          | Manage applications

*If the cluster operator installs and manages a Gateway API implementation rather than using the one from the infrastructure provider, they will own the GatewayClass resource.

**Similar to a cloud provider offering managed Kubernetes clusters.

The above roles enable RBAC-enforced separation of responsibilities. This split works well for the common situation in which a platform team (the cluster operator) owns the data plane infrastructure and wants to share it safely among multiple developer teams (application developers) in a cluster.
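
As a sketch of what this sharing can look like in practice (the namespace, label, and class name are illustrative), the cluster operator can own a Gateway in an infrastructure namespace and admit HTTPRoutes only from namespaces they have labeled:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra                  # owned by the cluster operator
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector              # only namespaces matching the selector may attach routes
        selector:
          matchLabels:
            shared-gateway-access: "true"

Application developers in those labeled namespaces then attach their HTTPRoutes to shared-gateway via parentRefs, while RBAC rules can restrict them to managing HTTPRoutes only.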

Reason 4: Portability

Two aspects of the Gateway API make it extremely portable:

  • Features – As mentioned in Reason 1, a large number of features reduces the need to rely on Gateway API implementation-specific extension APIs, which means users will be less tied to those APIs. In contrast, Ingress users rely heavily on extensions specific to their Ingress controller.
  • Conformance tests – The Gateway API comes with tests to ensure consistency in how the API features are supported by implementations. For an implementation to be conformant, it needs to pass the conformance tests. As an example, see the test results of NGINX Gateway Fabric.

Because of this portability, users can switch from one Gateway API implementation to another with significantly less effort than it takes to switch an Ingress controller.

Reason 5: Community

The Gateway API is a community-driven project. It is also a young project that’s far from being finished. If you’d like to participate in its evolution – whether through proposing a new feature or sharing your feedback – check out the project’s contributing page.

How to Try the Gateway API

Two steps are needed to try the Gateway API:

  1. Install the Gateway API into a Kubernetes cluster.
  2. Install a Gateway API implementation.

NGINX has created a Gateway API implementation – NGINX Gateway Fabric. This implementation uses NGINX as a data plane. To try it out, follow the installation instructions of its latest release. You can also check out our contributing guide if you’d like to ask questions or contribute to the project.

Our documentation includes multiple guides and examples that showcase different use cases enabled by the Gateway API. For additional support, check out the guides on the Kubernetes Gateway API project page.

Note: NGINX Gateway Fabric is a new project that has not yet reached the maturity of our NGINX Ingress Controller project. Additionally, while NGINX Gateway Fabric supports all core features of the Gateway API (see Reason 1), it doesn’t yet offer Gateway API extensions for popular NGINX features (see Reason 2). In the meantime, those features are available in NGINX Ingress Controller.

Summary

The Kubernetes Gateway API is a new community project that addresses the limitations of the Ingress resource. We discussed the top five reasons to try this new API and briefly introduced NGINX Gateway Fabric, an NGINX-based Gateway API implementation.

With NGINX Gateway Fabric, we are focused on a native NGINX implementation of the Gateway API. We encourage you to join us in shaping the future of Kubernetes app connectivity!

Ways you can get involved in NGINX Gateway Fabric include:

  • Join the project as a contributor
  • Try the implementation in your lab
  • Test and provide feedback

To join the project, visit NGINX Gateway Fabric on GitHub.