Networking

OpenShift Dedicated 4

Configuring OpenShift Dedicated networking

Red Hat OpenShift Documentation Team

Abstract

This document provides information about networking for OpenShift Dedicated clusters.

Chapter 1. About networking

Red Hat OpenShift Networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, and inter- and intra-cluster traffic management. The Red Hat OpenShift Networking ecosystem also provides role-based observability tooling to reduce its natural complexities.

The following are some of the most commonly used Red Hat OpenShift Networking features available on your cluster:

  • Primary cluster network provided by either of the following Container Network Interface (CNI) plugins: the OVN-Kubernetes network plugin (the default plugin) or the OpenShift SDN network plugin (deprecated for clusters as of OpenShift 4.14)
  • Cluster Network Operator for network plugin management

OpenShift Dedicated clusters created with OpenShift 4.11 and above use the OVN-Kubernetes network plugin by default. OpenShift Dedicated clusters created before OpenShift version 4.11 use the OpenShift SDN plugin after they are upgraded to OpenShift version 4.11 and above.
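For example, one way to check which network plugin a cluster uses is to query the cluster network configuration; the output is OVNKubernetes or OpenShiftSDN:

$ oc get network.config/cluster -o jsonpath='{.status.networkType}'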

Important

OpenShift Dedicated follows the life cycle of OpenShift SDN according to the OpenShift core platform:

  • OpenShift SDN is deprecated for clusters as of OpenShift version 4.14.
  • Clusters that already use the OpenShift SDN plugin continue to use the plugin after they are upgraded to OpenShift versions 4.11 and above. These clusters can be upgraded up to OpenShift version 4.16.
  • Clusters using OpenShift version 4.16 cannot upgrade if the clusters are using the SDN plugin. The SDN plugin will be discontinued in OpenShift version 4.17.
  • You will soon be able to migrate from OpenShift SDN to OVN for clusters running on OpenShift version 4.15 and later. This migration tool is not currently available.

For more information about the OpenShift SDN deprecation and the OVN migration, see the KCS article about OpenShift SDN CNI removal in OCP 4.17 .

Chapter 2. DNS Operator in OpenShift Dedicated

In OpenShift Dedicated, the DNS Operator deploys and manages a CoreDNS instance to provide a name resolution service to pods inside the cluster, enables DNS-based Kubernetes Service discovery, and resolves internal cluster.local names.
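As an illustration of this name resolution service, the following hypothetical lookup resolves a Service name from inside a running pod; the namespace and pod name are placeholders, and the container image must provide a lookup tool such as nslookup:

$ oc exec -n <namespace> <pod_name> -- nslookup kubernetes.default.svc.cluster.local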

2.1. Checking the status of the DNS Operator

The DNS Operator implements the dns API from the operator.openshift.io API group. The Operator deploys CoreDNS using a daemon set, creates a service for the daemon set, and configures the kubelet to instruct pods to use the CoreDNS service IP address for name resolution.

Procedure

The DNS Operator is deployed during installation with a Deployment object. Use the oc get command to view the deployment status:

$ oc get -n openshift-dns-operator deployment/dns-operator

Example output

NAME           READY     UP-TO-DATE   AVAILABLE   AGE
dns-operator   1/1       1            1           23h

Use the oc get command to view the state of the DNS Operator:

$ oc get clusteroperator/dns

Example output

NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
dns       4.1.15-0.11  True        False         False      92m

AVAILABLE , PROGRESSING , and DEGRADED provide information about the status of the Operator. AVAILABLE is True when at least 1 pod from the CoreDNS daemon set reports an Available status condition, and the DNS service has a cluster IP address.
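You can also list the CoreDNS daemon set and service that the Operator manages; both reside in the openshift-dns namespace:

$ oc get daemonset,service -n openshift-dns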

2.2. View the default DNS

Every new OpenShift Dedicated installation has a dns.operator named default .

Procedure

  1. Use the oc describe command to view the default dns :

    $ oc describe dns.operator/default

    Example output

    Name:         default
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  operator.openshift.io/v1
    Kind:         DNS
    Status:
      Cluster Domain:  cluster.local 1
      Cluster IP:      172.30.0.10 2
    1
    The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names.
    2
    The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range.

2.3. Using DNS forwarding

You can use DNS forwarding to override the default forwarding configuration in the /etc/resolv.conf file in the following ways:

  • Specify name servers ( spec.servers ) for every zone. If the forwarded zone is the ingress domain managed by OpenShift Dedicated, then the upstream name server must be authorized for the domain.
  • Provide a list of upstream DNS servers ( spec.upstreamResolvers ).
  • Change the default forwarding policy.

Important

You must specify at least one zone. Otherwise, your cluster can lose functionality.

A DNS forwarding configuration for the default domain can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers.

Procedure

  1. Modify the DNS Operator object named default :

    $ oc edit dns.operator/default

    After you issue the previous command, the Operator creates and updates the config map named dns-default with additional server configuration blocks based on spec.servers .

    Important

    When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality. If none of the servers have a zone that matches the query, then name resolution falls back to the upstream DNS servers.

    Configuring DNS forwarding

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      cache:
        negativeTTL: 0s
        positiveTTL: 0s
      logLevel: Normal
      nodePlacement: {}
      operatorLogLevel: Normal
      servers:
      - name: example-server 1
        zones:
        - example.com 2
        forwardPlugin:
          policy: Random 3
          upstreams: 4
          - 1.1.1.1
          - 2.2.2.2:5353
      upstreamResolvers: 5
        policy: Random 6
        protocolStrategy: ""  7
        transportConfig: {}  8
        upstreams:
        - type: SystemResolvConf 9
        - type: Network
          address: 1.2.3.4 10
          port: 53 11
    status:
      clusterDomain: cluster.local
      clusterIP: x.y.z.10
      conditions:
    1
    Must comply with the rfc6335 service name syntax.
    2
    Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field.
    3
    Defines the policy to select upstream resolvers listed in the forwardPlugin . The default value is Random . You can also use the values RoundRobin and Sequential .
    4
    A maximum of 15 upstreams is allowed per forwardPlugin .
    5
    You can use upstreamResolvers to override the default forwarding policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers declared in /etc/resolv.conf .
    6
    Determines the order in which upstream servers listed in upstreams are selected for querying. You can specify one of these values: Random , RoundRobin , or Sequential . The default value is Sequential .
    7
    When omitted, the platform chooses a default, normally the protocol of the original client request. Set to TCP to specify that the platform should use TCP for all upstream DNS requests, even if the client request uses UDP.
    8
    Used to configure the transport type, server name, and optional custom CA or CA bundle to use when forwarding DNS requests to an upstream resolver.
    9
    You can specify two types of upstreams: SystemResolvConf or Network . SystemResolvConf configures the upstream to use /etc/resolv.conf and Network defines a Network resolver. You can specify one or both.
    10
    If the specified type is Network , you must provide an IP address. The address field must be a valid IPv4 or IPv6 address.
    11
    If the specified type is Network , you can optionally provide a port. The port field must have a value between 1 and 65535. If you do not specify a port for the upstream, the default port is 853.
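    To confirm that the Operator rendered the expected server blocks, you can view the generated config map, as is done in the verification steps later in this chapter:

    $ oc get configmap/dns-default -n openshift-dns -o yaml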
    							

Additional resources

2.4. Checking DNS Operator status

You can inspect the status and view the details of the DNS Operator using the oc describe command.

Procedure

  • View the status of the DNS Operator:

    $ oc describe clusteroperators/dns

    Though the messages and spelling might vary in a specific release, the expected status output looks like:

    Status:
      Conditions:
        Last Transition Time:  <date>
        Message:               DNS "default" is available.
        Reason:                AsExpected
        Status:                True
        Type:                  Available
        Last Transition Time:  <date>
        Message:               Desired and current number of DNSes are equal
        Reason:                AsExpected
        Status:                False
        Type:                  Progressing
        Last Transition Time:  <date>
        Reason:                DNSNotDegraded
        Status:                False
        Type:                  Degraded
        Last Transition Time:  <date>
        Message:               DNS default is upgradeable: DNS Operator can be upgraded
        Reason:                DNSUpgradeable
        Status:                True
        Type:                  Upgradeable

2.5. Viewing DNS Operator logs

You can view DNS Operator logs by using the oc logs command.

Procedure

  • View the logs of the DNS Operator:

    $ oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator

2.6. Setting the CoreDNS log level

Log levels for CoreDNS and the CoreDNS Operator are set by using different methods. You can configure the CoreDNS log level to determine the amount of detail in logged error messages. The valid values for CoreDNS log level are Normal , Debug , and Trace . The default logLevel is Normal . The CoreDNS error log level is always enabled. The following log level settings report different error responses:

  • logLevel : Normal enables the "errors" class: log . { class error } .
  • logLevel : Debug enables the "denial" class: log . { class denial error } .
  • logLevel : Trace enables the "all" class: log . { class all } .

Procedure

  • To set logLevel to Debug , enter the following command:

    $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Debug"}}' --type=merge
  • To set logLevel to Trace , enter the following command:

    $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"logLevel":"Trace"}}' --type=merge

Verification

  • To ensure the desired log level was set, check the config map:

    $ oc get configmap/dns-default -n openshift-dns -o yaml

    For example, after setting the logLevel to Trace , you should see this stanza in each server block:

    errors
    log . {
        class all
    }

2.7. Setting the CoreDNS Operator log level

Log levels for CoreDNS and the CoreDNS Operator are set by using different methods. Cluster administrators can configure the Operator log level to more quickly track down OpenShift DNS issues. The valid values for operatorLogLevel are Normal , Debug , and Trace . Trace has the most detailed information. The default operatorLogLevel is Normal . There are seven logging levels for Operator issues: Trace, Debug, Info, Warning, Error, Fatal, and Panic. After the logging level is set, log entries with that severity or anything above it will be logged.

  • operatorLogLevel: "Normal" sets logrus.SetLogLevel("Info") .
  • operatorLogLevel: "Debug" sets logrus.SetLogLevel("Debug") .
  • operatorLogLevel: "Trace" sets logrus.SetLogLevel("Trace") .

Procedure

  • To set operatorLogLevel to Debug , enter the following command:

    $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Debug"}}' --type=merge
  • To set operatorLogLevel to Trace , enter the following command:

    $ oc patch dnses.operator.openshift.io/default -p '{"spec":{"operatorLogLevel":"Trace"}}' --type=merge

Verification

  1. To review the resulting change, enter the following command:

    $ oc get dnses.operator -A -oyaml

    You should see two log level entries. The operatorLogLevel applies to OpenShift DNS Operator issues, and the logLevel applies to the daemonset of CoreDNS pods:

     logLevel: Trace
     operatorLogLevel: Debug
  2. To review the logs for the daemonset, enter the following command:

    $ oc logs -n openshift-dns ds/dns-default

2.8. Tuning the CoreDNS cache

For CoreDNS, you can configure the maximum duration of both successful or unsuccessful caching, also known respectively as positive or negative caching. Tuning the cache duration of DNS query responses can reduce the load for any upstream DNS resolvers.
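Before changing the TTL values, you can review the current cache settings. The following jsonpath query is one way to do this:

$ oc get dns.operator/default -o jsonpath='{.spec.cache}'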

Warning

Setting TTL fields to low values could lead to an increased load on the cluster, any upstream resolvers, or both.

Procedure

  1. Edit the DNS Operator object named default by running the following command:

    $ oc edit dns.operator.openshift.io/default
  2. Modify the time-to-live (TTL) caching values:

    Configuring DNS caching

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      cache:
        positiveTTL: 1h 1
        negativeTTL: 0.5h10m 2
    1
    The string value 1h is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 900s as a fallback.
    2
    The string value can be a combination of units such as 0.5h10m and is converted to its respective number of seconds by CoreDNS. If this field is omitted, the value is assumed to be 0s and the cluster uses the internal default value of 30s as a fallback.

Verification

  1. To review the change, look at the config map again by running the following command:

    $ oc get configmap/dns-default -n openshift-dns -o yaml
  2. Verify that you see entries that look like the following example:

            cache 3600 {
                denial 9984 2400
            }

Additional resources

For more information on caching, see CoreDNS cache .

2.9. Advanced tasks

2.9.1. Changing the DNS Operator managementState

The DNS Operator manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. The managementState of the DNS Operator is set to Managed by default, which means that the DNS Operator is actively managing its resources. You can change it to Unmanaged , which means the DNS Operator is not managing its resources. The following are use cases for changing the DNS Operator managementState :

  • You are a developer and want to test a configuration change to see if it fixes an issue in CoreDNS. You can stop the DNS Operator from overwriting the configuration change by setting the managementState to Unmanaged .
  • You are a cluster administrator and have reported an issue with CoreDNS, but need to apply a workaround until the issue is fixed. You can set the managementState field of the DNS Operator to Unmanaged to apply the workaround.

Procedure

  1. Change managementState to Unmanaged in the DNS Operator:

    $ oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}'
  2. Review managementState of the DNS Operator using the jsonpath command line JSON parser:

    $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}'

    Example output

    "Unmanaged"

    You cannot upgrade while the managementState is set to Unmanaged .
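    When your testing or workaround is complete, you can set the field back to Managed so that the Operator resumes managing its resources, for example:

    $ oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Managed"}}'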

2.9.2. Controlling DNS pod placement

The DNS Operator has two daemon sets: one for CoreDNS called dns-default and one for managing the /etc/hosts file called node-resolver .

You might need to control which nodes have CoreDNS pods assigned and running, although this is not a common operation. For example, if the cluster administrator has configured security policies that prohibit communication between pairs of nodes, it becomes necessary to restrict the set of nodes on which the daemon set for CoreDNS runs. If DNS pods are running on some nodes in the cluster, and the nodes where DNS pods are not running have network connectivity to nodes where DNS pods are running, the DNS service is available to all pods.

The node-resolver daemon set must run on every node host because it adds an entry for the cluster image registry to support pulling images. The node-resolver pods have only one job: to look up the image-registry.openshift-image-registry.svc service’s cluster IP address and add it to /etc/hosts on the node host so that the container runtime can resolve the service name.

As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes.
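A minimal sketch of such a node selector, assuming the target nodes carry the standard worker role label, looks like the following; adjust the label to match your environment:

 spec:
   nodePlacement:
     nodeSelector:
       node-role.kubernetes.io/worker: ""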

Prerequisites

  • You installed the oc CLI.
  • You are logged in to the cluster as a user with cluster-admin privileges.
  • Your DNS Operator managementState is set to Managed .

Procedure

  • To allow the daemon set for CoreDNS to run on certain nodes, configure a taint and toleration. Modify the DNS Operator object named default :

    $ oc edit dns.operator/default
  • Specify a taint key and a toleration for the taint:

     spec:
       nodePlacement:
         tolerations:
         - effect: NoExecute
           key: "dns-only"
           operator: Equal
           value: abc
           tolerationSeconds: 3600 1
    1
    If the taint is dns-only , it can be tolerated indefinitely. You can omit tolerationSeconds .
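    For the toleration shown above to take effect, the matching taint must be present on the nodes that should run CoreDNS. A hypothetical example with a placeholder node name:

    $ oc adm taint nodes <node_name> dns-only=abc:NoExecute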

2.9.3. Configuring DNS forwarding with TLS

When working in a highly regulated environment, you might need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that you can ensure additional DNS traffic security and data privacy. Be aware that CoreDNS caches forwarded connections for 10 seconds. CoreDNS will hold a TCP connection open for those 10 seconds if no request is issued. With large clusters, ensure that your DNS server is aware that it might get many new connections to hold open because you can initiate a connection per node. Set up your DNS hierarchy accordingly to avoid performance issues.

Important

When specifying values for the zones parameter, ensure that you only forward to specific zones, such as your intranet. You must specify at least one zone. Otherwise, your cluster can lose functionality.

Procedure

  1. Modify the DNS Operator object named default :

    $ oc edit dns.operator/default

    Cluster administrators can configure transport layer security (TLS) for forwarded DNS queries.

    Configuring DNS forwarding with TLS

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      name: default
    spec:
      servers:
      - name: example-server 1
        zones:
        - example.com 2
        forwardPlugin:
          transportConfig:
            transport: TLS 3
            caBundle:
              name: mycacert
            serverName: dnstls.example.com 4
          policy: Random 5
          upstreams: 6
          - 1.1.1.1
          - 2.2.2.2:5353
      upstreamResolvers: 7
        transportConfig:
          transport: TLS
          caBundle:
            name: mycacert
          serverName: dnstls.example.com
        upstreams:
        - type: Network 8
          address: 1.2.3.4 9
          port: 53 10
    1
    Must comply with the rfc6335 service name syntax.
    2
    Must conform to the definition of a subdomain in the rfc1123 service name syntax. The cluster domain, cluster.local , is an invalid subdomain for the zones field.
    3
    When configuring TLS for forwarded DNS queries, set the transport field to have the value TLS .
    4
    When configuring TLS for forwarded DNS queries, this is a mandatory server name used as part of the server name indication (SNI) to validate the upstream TLS server certificate.
    5
    Defines the policy to select upstream resolvers. The default value is Random . You can also use the values RoundRobin and Sequential .
    6
    Required. Use it to provide upstream resolvers. A maximum of 15 upstreams entries are allowed per forwardPlugin entry.
    7
    Optional. You can use it to override the default policy and forward DNS resolution to the specified DNS resolvers (upstream resolvers) for the default domain. If you do not provide any upstream resolvers, the DNS name queries go to the servers in /etc/resolv.conf .
    8
    Only the Network type is allowed when using TLS, and you must provide an IP address. The Network type indicates that this upstream resolver should handle forwarded requests separately from the upstream resolvers listed in /etc/resolv.conf .
    9
    The address field must be a valid IPv4 or IPv6 address.
    10
    You can optionally provide a port. The port must have a value between 1 and 65535 . If you do not specify a port for the upstream, the default port is 853.

    If servers is undefined or invalid, the config map only contains the default server.

Verification

  1. View the config map:

    $ oc get configmap/dns-default -n openshift-dns -o yaml

    Sample DNS ConfigMap based on TLS forwarding example

    apiVersion: v1
    data:
      Corefile: |
        example.com:5353 {
            forward . 1.1.1.1 2.2.2.2:5353
        }
        bar.com:5353 example.com:5353 {
            forward . 3.3.3.3 4.4.4.4:5454 1
        }
        .:5353 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                upstream
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf 1.2.3.4:53 {
                policy Random
            }
            cache 30
            reload
        }
    kind: ConfigMap
    metadata:
      labels:
        dns.operator.openshift.io/owning-dns: default
      name: dns-default
      namespace: openshift-dns
    1
    Changes to the forwardPlugin trigger a rolling update of the CoreDNS daemon set.

Additional resources

Chapter 3. Ingress Operator in OpenShift Dedicated

3.1. OpenShift Dedicated Ingress Operator

When you create your OpenShift Dedicated cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Dedicated cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. Red Hat Site Reliability Engineers (SRE) manage the Ingress Operator for OpenShift Dedicated clusters. While you cannot alter the settings for the Ingress Operator, you may view the default Ingress Controller configurations, status, and logs as well as the Ingress Operator status.

3.2. The Ingress configuration asset

The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml .

YAML Definition of the Ingress resource

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.openshiftdemos.com
The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:

  • The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
  • The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host.
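For example, a Route that omits spec.host receives a host generated from this domain. The names in the following sketch are illustrative only:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: demo
spec:
  to:
    kind: Service
    name: frontend

The generated host follows the pattern <route_name>-<namespace>.<domain>, for example frontend-demo.apps.openshiftdemos.com.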

3.3. Ingress Controller configuration parameters

The Infrastructure custom resource (CR) includes optional configuration parameters that you can configure to meet specific needs for your organization.

Parameter Description

domain
domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features:
  • For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy .
  • When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate .
  • The value is published to individual Route statuses so that users know where to target external DNS records.
The domain value must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain .

replicas
replicas is the number of Ingress Controller replicas. If not set, the default value is 2 .

endpointPublishingStrategy
endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. For cloud environments, use the loadBalancer field to configure the endpoint publishing strategy for your Ingress Controller. You can configure the following endpointPublishingStrategy fields:
  • loadBalancer.scope
  • loadBalancer.allowedSourceRanges
If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform :
  • Amazon Web Services (AWS): LoadBalancerService (with External scope)
  • Google Cloud Platform (GCP): LoadBalancerService (with External scope)
For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields:
  • loadBalancer.scope
  • loadbalancer.providerParameters.gcp.clientAccess
If you need to update the endpointPublishingStrategy value after your cluster is deployed, you can configure the following endpointPublishingStrategy fields:
  • hostNetwork.protocol
  • nodePort.protocol
  • private.protocol

defaultCertificate
The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data:
  • tls.crt : certificate file contents
  • tls.key : key file contents
If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains , and the generated certificate’s CA is automatically integrated with the cluster’s trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with the OpenShift Dedicated built-in OAuth server.

namespaceSelector
namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards.

routeSelector
routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards.

nodePlacement
nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the default values are used. The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example:

nodePlacement:
 nodeSelector:
   matchLabels:
     kubernetes.io/os: linux
 tolerations:
 - effect: NoSchedule
   operator: Exists
tlsSecurityProfile
tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout. The minimum TLS version for Ingress Controllers is 1.1 , and the maximum TLS version is 1.3 . Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status.

Important

The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1 .

clientTLS
clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled. clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA . The ClientCertificatePolicy subfield accepts one of the two values: Required or Optional . The ClientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle. The AllowedSubjectPatterns is an optional value that specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. At least one pattern must match a client certificate’s distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name.

routeAdmission
routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict .
  • Strict : does not allow routes to claim the same hostname across namespaces.
  • InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces.
wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller.
  • WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller.
  • WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting.

IngressControllerLogging
logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled.
  • access describes how client requests are logged. If this field is empty, access logging is disabled.
  • destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance.
  • container describes parameters for the Container logging destination type. Currently there are no parameters for container logging, so this field must be empty.
  • syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. maxLength is the maximum length of the syslog message. It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes. facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 , local4 , local5 , local6 , or local7 .
  • httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation’s default HTTP log format. For HAProxy’s default HTTP log format, see the HAProxy documentation .

httpHeaders
httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress Controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append .
  • Append specifies that the Ingress Controller appends the headers, preserving any existing headers.
  • Replace specifies that the Ingress Controller sets the headers, removing any existing headers.
  • IfNone specifies that the Ingress Controller sets the headers if they are not already set.
  • Never specifies that the Ingress Controller never sets the headers, preserving any existing headers.
By setting headerNameCaseAdjustments , you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted.
actions specifies options for performing certain actions on headers. Headers cannot be set or deleted for TLS passthrough connections. The actions field has additional subfields spec.httpHeader.actions.response and spec.httpHeader.actions.request :
  • The response subfield specifies a list of HTTP response headers to set or delete.
  • The request subfield specifies a list of HTTP request headers to set or delete.

httpCompression
httpCompression defines the policy for HTTP traffic compression. mimeTypes defines a list of MIME types to which compression should be applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern type/subtype; [;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- . To see the full notation for MIME types and subtypes, see RFC1341 .

httpErrorCodePages
httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image.

httpCaptureCookies
httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies. For any cookie that you want to capture, the following parameters must be in your IngressController configuration:
  • name specifies the name of the cookie.
  • maxLength specifies the maximum length of the cookie.
  • matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters.
For example:

  httpCaptureCookies:
  - matchType: Exact
    maxLength: 128
    name: MYCOOKIE
httpCaptureHeaders
httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers. httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response . In both lists, the name field must specify the header name and the maxLength field must specify the maximum length of the header. For example:
  httpCaptureHeaders:
    request:
    - maxLength: 256
      name: Connection
    - maxLength: 128
      name: User-Agent
    response:
    - maxLength: 256
      name: Content-Type
    - maxLength: 256
      name: Content-Length
tuningOptions
tuningOptions specifies options for tuning the performance of Ingress Controller pods.
  • clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s .
  • clientTimeout specifies how long a connection is held open while waiting for a client response. The default timeout is 30s .
  • headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.
  • headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary.
  • healthCheckInterval specifies how long the router waits between health checks. The default is 5s .
  • serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s .
  • serverTimeout specifies how long a connection is held open while waiting for a server response. The default timeout is 30s .
  • threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, and prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the Ingress Controller to perform poorly.
  • tlsInspectDelay specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, re-encrypted, or passthrough routes, even when using a better-matched certificate. The default inspect delay is 5s .
  • tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h .
  • maxConnections specifies the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each Ingress Controller pod to handle more connections at the cost of additional system resources. Permitted values are 0 , -1 , any value within the range 2000 and 2000000 , or the field can be left empty. If this field is left empty or has the value 0 , the Ingress Controller will use the default value of 50000 . This value is subject to change in future releases. If the field has the value of -1 , then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. This process results in a large computed value that will incur significant memory usage compared to the current default value of 50000 . If the field has a value that is greater than the current operating system limit, the HAProxy process will not start. If you choose a discrete value and the router pod is migrated to a new node, it is possible the new node does not have an identical ulimit configured. In such cases, the pod fails to start. If you have nodes with different ulimits configured, and you choose a discrete value, it is recommended to use the value of -1 for this field so that the maximum number of connections is calculated at runtime.

logEmptyRequests
logEmptyRequests specifies connections for which no request is received and logged. These empty requests come from load balancer health probes or web browser speculative connections (preconnect), and logging these requests can be undesirable. However, these requests can be caused by network errors, in which case logging empty requests can be useful for diagnosing the errors. These requests can be caused by port scans, and logging empty requests can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore . The default value is Log . The LoggingPolicy type accepts either one of two values:
  • Log : Setting this value to Log indicates that an event should be logged.
  • Ignore : Setting this value to Ignore sets the dontlognull option in the HAProxy configuration.

HTTPEmptyRequestsPolicy
HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The HTTPEmptyRequestsPolicy type accepts either one of two values:
  • Respond : If the field is set to Respond , the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics.
  • Ignore : Setting this option to Ignore adds the http-ignore-probes parameter in the HAProxy configuration. If the field is set to Ignore , the Ingress Controller closes the connection without sending a response, then logs the connection, or increments metrics.
These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems. These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts.
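The following sketch combines several of the parameters described above into one IngressController spec. It is illustrative only; the values are assumptions to replace with settings appropriate for your environment:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  routeAdmission:
    namespaceOwnership: Strict
    wildcardPolicy: WildcardsDisallowed
  httpHeaders:
    forwardedHeaderPolicy: Append
  logging:
    access:
      destination:
        type: Container
  tuningOptions:
    threadCount: 4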

3.3.1. Ingress Controller TLS security profiles

TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server.

3.3.1.1. Understanding TLS security profiles

You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Dedicated components. The OpenShift Dedicated TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component:

Table 3.1. TLS security profiles
Profile Description

Old
This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1.

Intermediate
This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2.

Modern
This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3.

Custom
This profile allows you to define the TLS version and ciphers to use.

Warning

Use caution when using a Custom profile, because invalid configurations can cause problems. When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout.

3.3.1.2. Configuring the TLS security profile for the Ingress Controller

To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server.

Sample IngressController CR that configures the Old TLS security profile

apiVersion: operator.openshift.io/v1
kind: IngressController
spec:
  tlsSecurityProfile:
    old: {}
    type: Old
The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers.

You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters.

Note

The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 .

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile:

    $ oc edit IngressController default -n openshift-ingress-operator
  2. Add the spec.tlsSecurityProfile field:

    Sample IngressController CR for a Custom profile

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    spec:
      tlsSecurityProfile:
        type: Custom 1
        custom: 2
          ciphers: 3
          - ECDHE-ECDSA-CHACHA20-POLY1305
          - ECDHE-RSA-CHACHA20-POLY1305
          - ECDHE-RSA-AES128-GCM-SHA256
          - ECDHE-ECDSA-AES128-GCM-SHA256
          minTLSVersion: VersionTLS11
    1
    Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate .
    2
    Specify the appropriate field for the selected type:
      • old: {}
      • intermediate: {}
      • custom:
    3
    For the custom type, specify a list of TLS ciphers and minimum accepted TLS version.
  3. Save the file to apply the changes.

Verification

  • Verify that the profile is set in the IngressController CR:

    $ oc describe IngressController default -n openshift-ingress-operator

    Example output

    Name:         default
    Namespace:    openshift-ingress-operator
    Labels:       <none>
    Annotations:  <none>
    API Version:  operator.openshift.io/v1
    Kind:         IngressController
    Spec:
      Tls Security Profile:
        Custom:
          Ciphers:
            ECDHE-ECDSA-CHACHA20-POLY1305
            ECDHE-RSA-CHACHA20-POLY1305
            ECDHE-RSA-AES128-GCM-SHA256
            ECDHE-ECDSA-AES128-GCM-SHA256
          Min TLS Version:  VersionTLS11
        Type:               Custom
    								

3.3.1.3. Configuring mutual TLS authentication

You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client’s certificate. Optionally, you can also configure a list of certificate subject filters. If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have a PEM-encoded CA certificate bundle.
  • If your CA bundle references a CRL distribution point, you must have also included the end-entity or leaf certificate to the client CA bundle. This certificate must have included an HTTP URI under CRL Distribution Points , as described in RFC 5280. For example:

     Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1
     Subject: SOME SIGNED CERT
     X509v3 CRL Distribution Points:
         Full Name:
           URI:http://crl.example.com/example.crl
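     One way to confirm that a certificate in your bundle carries such a distribution point is to inspect it with openssl; the file name is a placeholder:

     $ openssl x509 -in client-ca.crt -noout -text | grep -A 4 'CRL Distribution Points'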

Procedure

  1. In the openshift-config namespace, create a config map from your CA bundle:

    $ oc create configmap \
       router-ca-certs-default \
       --from-file=ca-bundle.pem=client-ca.crt \1
       -n openshift-config
    1
    The config map data key must be ca-bundle.pem , and the data value must be a CA certificate in PEM format.
  2. Edit the IngressController resource in the openshift-ingress-operator project:

    $ oc edit IngressController default -n openshift-ingress-operator
  3. Add the spec.clientTLS field and subfields to configure mutual TLS:

    Sample IngressController CR for a clientTLS profile that specifies filtering patterns

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        clientTLS:
          clientCertificatePolicy: Required
          clientCA:
            name: router-ca-certs-default
          allowedSubjectPatterns:
          - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift$"
    Optionally, you can get the Distinguished Name (DN) for allowedSubjectPatterns by entering the following command:

    $ openssl x509 -in custom-cert.pem -noout -subject

    Example output

    subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift

3.4. View the default Ingress Controller

The Ingress Operator is a core feature of OpenShift Dedicated and is enabled out of the box. Every new OpenShift Dedicated installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute.

Procedure

  • View the default Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/default

3.5. View Ingress Operator status

You can view and inspect the status of your Ingress Operator.

Procedure

  • View your Ingress Operator status:

    $ oc describe clusteroperators/ingress

3.6. View Ingress Controller logs

You can view your Ingress Controller logs.

Procedure

  • View your Ingress Controller logs:

    $ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>

3.7. View Ingress Controller status

You can view the status of a particular Ingress Controller.

Procedure

  • View the status of an Ingress Controller:

    $ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>

3.8. Creating a custom Ingress Controller

As a cluster administrator, you can create a new custom Ingress Controller. Because the default Ingress Controller might change during OpenShift Dedicated updates, creating a custom Ingress Controller can be helpful when maintaining a configuration manually that persists across cluster updates. This example provides a minimal spec for a custom Ingress Controller. To further customize your custom Ingress Controller, see "Configuring the Ingress Controller".

Prerequisites

  • Install the OpenShift CLI ( oc ).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file that defines the custom IngressController object:

    Example custom-ingress-controller.yaml file

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
        name: <custom_name> 1
        namespace: openshift-ingress-operator
    spec:
        defaultCertificate:
            name: <custom-ingress-custom-certs> 2
        replicas: 1 3
        domain: <custom_domain> 4
    1
    Specify a custom name for the IngressController object.
    2
    Specify the name of the secret with the custom wildcard certificate.
    3
    The minimum number of replicas is 1.
    4
    Specify your domain name. The domain specified on the IngressController object and the domain used for the certificate must match. For example, if the domain value is "custom_domain.mycompany.com", then the certificate must have SAN *.custom_domain.mycompany.com (with the *. added to the domain).
  2. Create the object by running the following command:

    $ oc create -f custom-ingress-controller.yaml

3.9. Configuring the Ingress Controller

3.9.1. Setting a custom default certificate

As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR).

Prerequisites

  • You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.
  • Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com .
  • You must have an IngressController CR. You may use the default one:

    $ oc --namespace openshift-ingress-operator get ingresscontrollers

    Example output

    NAME      AGE
    default   10m
    If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).
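    If you do have intermediate certificates, one way to build the combined file is to concatenate the server certificate first and the intermediate certificate(s) after it; the file names here are placeholders:

    $ cat server.crt intermediate.crt > tls.crt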

Procedure

The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory. Substitute the actual path names for tls.crt and tls.key . You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. This action causes the Ingress Controller to be redeployed, using a rolling deployment strategy.

  • Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files:

    $ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
  • Update the IngressController CR to reference the new certificate secret:

    $ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
      --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
  • Verify the update was effective:

    $ echo Q |\
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>
    Specifies the base domain name for your cluster.

    Example output

    subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com
    issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com
    notAfter=May 10 08:32:45 2022 GMT

    You can alternatively apply the following YAML to set a custom default certificate:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      defaultCertificate:
        name: custom-certs-default

    The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.

3.9.2. Removing a custom default certificate

As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI ( oc ).
  • You previously configured a custom default certificate for the Ingress Controller.

Procedure

  • To remove the custom certificate and restore the certificate that ships with OpenShift Dedicated, enter the following command:

    $ oc patch -n openshift-ingress-operator ingresscontrollers/default \
      --type json -p $'- op: remove\n  path: /spec/defaultCertificate'

    There can be a delay while the cluster reconciles the new certificate configuration.

Verification

  • To confirm that the original cluster certificate is restored, enter the following command:

    $ echo Q | \
      openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \
      openssl x509 -noout -subject -issuer -enddate

    where:

    <domain>
    Specifies the base domain name for your cluster.

    Example output

    subject=CN = *.apps.<domain>
    issuer=CN = ingress-operator@1620633373
    notAfter=May 10 10:44:36 2023 GMT

3.9.3. Autoscaling an Ingress Controller

Automatically scale an Ingress Controller to dynamically meet routing performance or availability requirements such as the requirement to increase throughput. The following procedure provides an example for scaling up the default IngressController .

Prerequisites

  • You have the OpenShift CLI ( oc ) installed.
  • You have access to an OpenShift Dedicated cluster as a user with the cluster-admin role.
  • You have the Custom Metrics Autoscaler Operator installed.
  • You are in the openshift-ingress-operator project namespace.

Procedure

  1. Create a service account to authenticate with Thanos by running the following command:

    $ oc create serviceaccount thanos && oc describe serviceaccount thanos

    Example output

    Name:                thanos
    Namespace:           openshift-ingress-operator
    Labels:              <none>
    Annotations:         <none>
    Image pull secrets:  thanos-dockercfg-b4l9s
    Mountable secrets:   thanos-dockercfg-b4l9s
    Tokens:              thanos-token-c422q
    Events:              <none>
    Manually create the service account secret token with the following command:

    $ oc apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: thanos-token
      annotations:
        kubernetes.io/service-account.name: thanos
    type: kubernetes.io/service-account-token
    EOF

    Define a TriggerAuthentication object within the openshift-ingress-operator namespace by using the service account’s token. Define the variable secret that contains the secret by running the following command:

    $ secret=$(oc get secret | grep thanos-token | head -n 1 | awk '{ print $1 }')
  2. Create the TriggerAuthentication object and pass the value of the secret variable to the TOKEN parameter:

    $ oc process TOKEN="$secret" -f - <<EOF | oc apply -f -
    apiVersion: template.openshift.io/v1
    kind: Template
    parameters:
    - name: TOKEN
    objects:
    - apiVersion: keda.sh/v1alpha1
      kind: TriggerAuthentication
      metadata:
        name: keda-trigger-auth-prometheus
      spec:
        secretTargetRef:
        - parameter: bearerToken
          name: \${TOKEN}
          key: token
        - parameter: ca
          name: \${TOKEN}
          key: ca.crt
    EOF

    Create and apply a role for reading metrics from Thanos. Create a new role, thanos-metrics-reader.yaml, that reads metrics from pods and nodes:

    thanos-metrics-reader.yaml

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: thanos-metrics-reader
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      verbs:
      - get
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - namespaces
      verbs:
      - get
    Apply the new role by running the following command:

    $ oc apply -f thanos-metrics-reader.yaml
  • Add the new role to the service account by entering the following commands:

    $ oc adm policy add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator
    $ oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos
    Note

    The argument add-cluster-role-to-user is only required if you use cross-namespace queries. The following step uses a query from the kube-metrics namespace, which requires this argument.

  • Create a new ScaledObject YAML file, ingress-autoscaler.yaml, that targets the default Ingress Controller deployment:

    Example ScaledObject definition

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: ingress-scaler
    spec:
      scaleTargetRef: 1
        apiVersion: operator.openshift.io/v1
        kind: IngressController
        name: default
        envSourceContainerName: ingress-operator
      minReplicaCount: 1
      maxReplicaCount: 20 2
      cooldownPeriod: 1
      pollingInterval: 1
      triggers:
      - type: prometheus
        metricType: AverageValue
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3
          namespace: openshift-ingress-operator 4
          metricName: 'kube-node-role'
          threshold: '1'
          query: 'sum(kube_node_role{role="worker",service="kube-state-metrics"})' 5
          authModes: "bearer"
        authenticationRef:
          name: keda-trigger-auth-prometheus
    1 The custom resource that you are targeting. In this case, the Ingress Controller.
    2 Optional: The maximum number of replicas. If you omit this field, the default maximum is set to 100 replicas.
    3 The Thanos service endpoint in the openshift-monitoring namespace.
    4 The Ingress Operator namespace.
    5 This expression evaluates to however many worker nodes are present in the deployed cluster.
    Important

    If you are using cross-namespace queries, you must target port 9091 and not port 9092 in the serverAddress field. You must also have elevated privileges to read metrics from this port.

  • Apply the custom resource definition by running the following command:

    $ oc apply -f ingress-autoscaler.yaml
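    Optionally, you can check that the ScaledObject was created and that the Custom Metrics Autoscaler Operator generated a corresponding horizontal pod autoscaler. This is a hedged sketch; the exact name of the generated HorizontalPodAutoscaler object depends on the installed KEDA version:

    $ oc get scaledobject ingress-scaler -n openshift-ingress-operator
    $ oc get hpa -n openshift-ingress-operator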
    Verification

    • Verify that the default Ingress Controller is scaled out to match the value returned by the kube-state-metrics query by running the following commands. Use the grep command to search the Ingress Controller YAML file for replicas:

      $ oc get ingresscontroller/default -o yaml | grep replicas:

      Example output

      replicas: 3
      Get the pods in the openshift-ingress project:

      $ oc get pods -n openshift-ingress

      Example output

      NAME                             READY   STATUS    RESTARTS   AGE
      router-default-7b5df44ff-l9pmm   2/2     Running   0          17h
      router-default-7b5df44ff-s5sl5   2/2     Running   0          3d22h
      router-default-7b5df44ff-wwsth   2/2     Running   0          66s

    3.9.4. Scaling an Ingress Controller

    Manually scale an Ingress Controller to meet routing performance or availability requirements, such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Scaling is not an immediate action, as it takes time to create the desired number of replicas.

    Procedure

    1. View the current number of available replicas for the default IngressController :

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

      The command returns the current number of available replicas.

      Scale the default IngressController to the desired number of replicas by using the oc patch command. The following example scales the default IngressController to 3 replicas:

      $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge

      Example output

      ingresscontroller.operator.openshift.io/default patched
      Verify that the default IngressController scaled to the number of replicas that you specified:

      $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'

      The command returns the updated number of available replicas.

      You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        replicas: 3               1
      1 If you need a different number of replicas, change the replicas value.

    3.9.5. Configuring Ingress access logging

    You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Dedicated, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack’s capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap.

    Prerequisites

    • Log in as a user with cluster-admin privileges.

    Procedure

    Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination by using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container for spec.logging.access.destination.type . The following example is an Ingress Controller definition that logs to a Container destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Container
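    If you prefer not to edit the full YAML, the same spec.logging.access.destination fields named above can be set with a merge patch. The following is a sketch of an equivalent command:

    $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
      -p '{"spec":{"logging":{"access":{"destination":{"type":"Container"}}}}}'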
  • When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod:

    $ oc -n openshift-ingress logs deployment.apps/router-default -c logs

    Example output

    2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"
    Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.address and you can specify a facility using spec.logging.access.destination.syslog.facility . The following example is an Ingress Controller definition that logs to a Syslog destination:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514
    Note

    The syslog destination port must be a UDP port. The syslog destination address must be an IP address; it does not support DNS hostnames.

    Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              port: 10514
          httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'
  • Disable Ingress access logging. To disable Ingress access logging, leave spec.logging or spec.logging.access empty:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access: null

    Allow the Ingress Controller to modify the HAProxy log length when using a sidecar. Use spec.logging.access.destination.syslog.maxLength if you are using spec.logging.access.destination.type: Syslog .

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Syslog
            syslog:
              address: 1.2.3.4
              maxLength: 4096
              port: 10514
  • Use spec.logging.access.destination.container.maxLength if you are using spec.logging.access.destination.type: Container .

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: 2
      logging:
        access:
          destination:
            type: Container
            container:
              maxLength: 8192
    3.9.6. Setting Ingress Controller thread count

    A cluster administrator can set the thread count to increase the number of incoming connections that a cluster can handle. You can patch an existing Ingress Controller to increase the number of threads.

    Prerequisites

    • The following assumes that you already created an Ingress Controller.

    Procedure

    • Update the Ingress Controller to increase the number of threads:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}'
      Note

      If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value.
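      Optionally, you can confirm the configured value by reading it back from the IngressController spec, for example:

      $ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.tuningOptions.threadCount}'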

    3.9.7. Configuring an Ingress Controller to use an internal load balancer

    When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.

    Important

    If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

    Figure 3.1. Diagram of LoadBalancer

    OpenShift Dedicated Ingress LoadBalancerService endpoint publishing strategy

    The preceding graphic shows the following concepts pertaining to OpenShift Dedicated Ingress LoadBalancerService endpoint publishing strategy:

    • You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer.
    • You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200, as shown on the cluster depicted in the graphic.
    • Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. See the Kubernetes Services documentation for implementation details.

    Prerequisites

    • Install the OpenShift CLI ( oc ).
    • Log in as a user with cluster-admin privileges.

    Procedure

    1. Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        namespace: openshift-ingress-operator
        name: <name> 1
      spec:
        domain: <domain> 2
        endpointPublishingStrategy:
          type: LoadBalancerService
          loadBalancer:
            scope: Internal 3
      1 Replace <name> with a name for the IngressController object.
      2 Specify the domain for the application published by the controller.
      3 Specify a value of Internal to use an internal load balancer.

    2. Create the Ingress Controller defined in the previous step by running the following command:

      $ oc create -f <name>-ingress-controller.yaml 1
      1 Replace <name> with the name of the IngressController object.

    3. Optional: Confirm that the Ingress Controller was created by running the following command:

      $ oc --all-namespaces=true get ingresscontrollers

    3.9.8. Setting the Ingress Controller health check interval

    A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds.

    Prerequisites

    • The following assumes that you already created an Ingress Controller.

    Procedure

    • Update the Ingress Controller to change the interval between back end health checks:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}'
      Note

      To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval
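      For example, to override the interval for a single route by using the annotation named above, you could run a command like the following. The 5s value and the route and namespace names are only placeholders for illustration:

      $ oc -n <namespace> annotate route <route_name> --overwrite router.openshift.io/haproxy.health.check.interval=5s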

    3.9.9. Configuring the default Ingress Controller for your cluster to be internal

    You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

    Important

    If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.

    Prerequisites

    • Install the OpenShift CLI ( oc ).
    • Log in as a user with cluster-admin privileges.

    Procedure

    1. Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it.

      $ oc replace --force --wait --filename - <<EOF
      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        namespace: openshift-ingress-operator
        name: default
      spec:
        endpointPublishingStrategy:
          type: LoadBalancerService
          loadBalancer:
            scope: Internal
      EOF

    3.9.10. Configuring the route admission policy

    Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname.

    Warning

    Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces.

    Prerequisites

    • Cluster administrator privileges.

    Procedure

    • Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge

      Sample Ingress Controller configuration

      spec:
        routeAdmission:
          namespaceOwnership: InterNamespaceAllowed
      You can alternatively apply the following YAML to configure the route admission policy:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        routeAdmission:
          namespaceOwnership: InterNamespaceAllowed
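      To return to the behavior that disallows hostname claims across namespaces, you can set the policy back. The following sketch assumes that Strict is the default namespaceOwnership value:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"Strict"}}}'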

    3.9.11. Using wildcard routes

    The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources.

    Procedure

    1. Configure the wildcard policy. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed :

      spec:
        routeAdmission:
          wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed
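      After wildcard routes are allowed at the Ingress Controller level, an individual route opts in by setting its own wildcard policy. The following is a minimal, hypothetical sketch of a Route with a Subdomain wildcard policy; the hostname, service name, and namespace are placeholders:

      $ oc create -f - <<EOF
      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: wildcard-route
        namespace: <project_name>
      spec:
        host: wildcard.apps.example.com
        wildcardPolicy: Subdomain
        to:
          kind: Service
          name: <service_name>
      EOF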

    3.9.12. HTTP header configuration

    OpenShift Dedicated provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together. You can only set or delete headers within an IngressController or Route CR; you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions .

    3.9.12.1. Order of precedence

    When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header. For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence. For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence. For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration:

    Example IngressController spec

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    # ...
    spec:
      httpHeaders:
        actions:
          response:
          - name: X-Frame-Options
            action:
              type: Set
              set:
                value: DENY
    A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration:

    Example Route spec

    apiVersion: route.openshift.io/v1
    kind: Route
    # ...
    spec:
      httpHeaders:
        actions:
          response:
          - name: X-Frame-Options
            action:
              type: Set
              set:
                value: SAMEORIGIN
    When both the IngressController spec and Route spec are configuring the X-Frame-Options response header, then the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. For a request header, the Route spec value overrides the IngressController spec value. This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end:

    frontend public
      http-response set-header X-Frame-Options 'DENY'
    frontend fe_sni
      http-response set-header X-Frame-Options 'DENY'
    frontend fe_no_sni
      http-response set-header X-Frame-Options 'DENY'
    backend be_secure:openshift-monitoring:alertmanager-main
      http-response set-header X-Frame-Options 'SAMEORIGIN'

    Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations.

    3.9.12.2. Special case headers

    The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances:

    Table 3.2. Special case header configuration options
    proxy
        Reason for disallowment: The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration.

    host
        Reason for disallowment: When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route.

    strict-transport-security
        Reason for disallowment: The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation.
        Configurable using another method: Yes, the haproxy.router.openshift.io/hsts_header route annotation.

    cookie and set-cookie
        Reason for disallowment: The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy’s session affinity and restrict HAProxy’s ownership of a cookie.
        Configurable using another method: Yes, the haproxy.router.openshift.io/disable_cookie route annotation and the haproxy.router.openshift.io/cookie_name route annotation.

    3.9.13. Setting or deleting HTTP request and response headers in an Ingress Controller

    You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes. For example, you might want to migrate an application running on your cluster to use mutual TLS, which requires that your application checks for an X-Forwarded-Client-Cert request header, but the OpenShift Dedicated default Ingress Controller provides an X-SSL-Client-Der request header. The following procedure modifies the Ingress Controller to set the X-Forwarded-Client-Cert request header, and delete the X-SSL-Client-Der request header.

    Prerequisites

    • You have installed the OpenShift CLI ( oc ).
    • You have access to an OpenShift Dedicated cluster as a user with the cluster-admin role.

    Procedure

    1. Edit the Ingress Controller resource:

      $ oc -n openshift-ingress-operator edit ingresscontroller/default
    2. Replace the X-SSL-Client-Der HTTP request header with the X-Forwarded-Client-Cert HTTP request header:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          actions: 1
            request: 2
            - name: X-Forwarded-Client-Cert 3
              action:
                type: Set 4
                set:
                  value: "%{+Q}[ssl_c_der,base64]" 5
            - name: X-SSL-Client-Der
              action:
                type: Delete
      1 The list of actions you want to perform on the HTTP headers.
      2 The type of header you want to change. In this case, a request header.
      3 The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration .
      4 The type of action being taken on the header. This field can have the value Set or Delete .
      5 When setting HTTP headers, you must provide a value . The value can be a string from a list of available directives for that header, for example DENY , or it can be a dynamic value that will be interpreted using HAProxy’s dynamic value syntax. In this case, a dynamic value is added. For setting dynamic header values for HTTP responses, allowed sample fetchers are res.hdr and ssl_c_der . For setting dynamic header values for HTTP requests, allowed sample fetchers are req.hdr and ssl_c_der . Both request and response dynamic values can use the lower and base64 converters.

    3. Save the file to apply the changes.
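      If you prefer not to open an editor, the same actions can be applied with a merge patch. This is a sketch that mirrors the YAML above and assumes the spec.httpHeaders.actions shape shown there:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
        -p '{"spec":{"httpHeaders":{"actions":{"request":[{"name":"X-Forwarded-Client-Cert","action":{"type":"Set","set":{"value":"%{+Q}[ssl_c_der,base64]"}}},{"name":"X-SSL-Client-Der","action":{"type":"Delete"}}]}}}}'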

    3.9.14. Using X-Forwarded headers

    You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller.

    Procedure

    1. Configure the HTTPHeaders field for the Ingress Controller. Use the following command to edit the IngressController resource:

      $ oc edit IngressController
    2. Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never :

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          forwardedHeaderPolicy: Append
    Example use cases

    As a cluster administrator, you can:

    • Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides.
    • Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header.

    As an application developer, you can:

    • Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application’s Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller.
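    For example, to set the per-route annotation described above, you could run a command like the following; the route name and namespace are placeholders:

    $ oc -n <application_namespace> annotate route <application_route_name> haproxy.router.openshift.io/set-forwarded-headers=if-none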

    3.9.15. Enabling HTTP/2 Ingress connectivity

    You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.

    Important

    For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.

    Procedure

    Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command:

    $ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

    Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate.

    Enable HTTP/2 on the entire cluster. To enable HTTP/2 for the entire cluster, enter the oc annotate command:

    $ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
    Tip

    You can alternatively apply the following YAML to add the annotation:

    apiVersion: config.openshift.io/v1
    kind: Ingress
    metadata:
      name: cluster
      annotations:
        ingress.operator.openshift.io/default-enable-http2: "true"
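    Optionally, you can check whether HTTP/2 is negotiated for a route that meets the requirements described above, for example by using curl against a route that serves a custom certificate. This is a sketch; the hostname is a placeholder:

    $ curl -I --http2 https://<route_hostname_with_custom_certificate>

    If HTTP/2 was negotiated for the client connection, the response status line begins with HTTP/2.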

    3.9.16. Configuring the PROXY protocol for an Ingress Controller

    A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork , NodePortService , or Private endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer.

    Warning

    The default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress Virtual IP (VIP) does not support the PROXY protocol.

    Important

    For a passthrough route configuration, servers in OpenShift Dedicated clusters cannot observe the original client source IP address. If you need to know the original client source IP address, configure Ingress access logging for your Ingress Controller so that you can view the client source IP addresses. For re-encrypt and edge routes, the OpenShift Dedicated router sets the Forwarded and X-Forwarded-For headers so that application workloads check the client source IP address. For more information about Ingress access logging, see "Configuring Ingress access logging". Configuring the PROXY protocol for an Ingress Controller is not supported when using the LoadBalancerService endpoint publishing strategy type. This restriction is because when OpenShift Dedicated runs in a cloud platform, and an Ingress Controller specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses.

    Important

    You must configure both OpenShift Dedicated and the external load balancer to use either the PROXY protocol or Transmission Control Protocol (TCP). This feature is not supported in cloud deployments.

    Prerequisites

    • You created an Ingress Controller.

    Procedure

    1. Edit the Ingress Controller resource by entering the following command in your CLI:

      $ oc -n openshift-ingress-operator edit ingresscontroller/default
    2. Set the PROXY configuration: If your Ingress Controller uses the HostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY :

      Sample hostNetwork configuration to PROXY

      # ...
        spec:
          endpointPublishingStrategy:
            hostNetwork:
              protocol: PROXY
            type: HostNetwork
      # ...
      If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY :

      Sample nodePort configuration to PROXY

      # ...
        spec:
          endpointPublishingStrategy:
            nodePort:
              protocol: PROXY
            type: NodePortService
      # ...
      If your Ingress Controller uses the Private endpoint publishing strategy type, set the spec.endpointPublishingStrategy.private.protocol subfield to PROXY :

      Sample private configuration to PROXY

      # ...
        spec:
          endpointPublishingStrategy:
            private:
              protocol: PROXY
            type: Private
      # ...
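      Optionally, you can confirm the configured protocol by reading back the endpoint publishing strategy, for example:

      $ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy}'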

    3.9.17. Specifying an alternative cluster domain using the appsDomain option

    As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Dedicated to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster.

    Prerequisites

    • You deployed an OpenShift Dedicated cluster.
    • You installed the oc command line interface.

    Procedure

    1. Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource:

      $ oc edit ingresses.config/cluster -o yaml
    2. Edit the YAML file:

      Sample appsDomain configuration to test.example.com

      apiVersion: config.openshift.io/v1
      kind: Ingress
      metadata:
        name: cluster
      spec:
        domain: apps.example.com            1
        appsDomain: <test.example.com>      2
      1 Specifies the default domain. You cannot modify the default domain after installation.
      2 Optional: Domain for OpenShift Dedicated infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test .

    3. Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change. Wait for the openshift-apiserver to finish rolling updates before exposing the route. Expose the route:

      $ oc expose service hello-openshift
      route.route.openshift.io/hello-openshift exposed

      Example output

      $ oc get routes
       NAME              HOST/PORT                                        PATH   SERVICES          PORT       TERMINATION   WILDCARD
       hello-openshift   hello_openshift-<my_project>.test.example.com           hello-openshift   8080-tcp                 None

    3.9.18. Converting HTTP header case

    HAProxy lowercases HTTP header names by default; for example, changing Host: xyz.com to host: xyz.com . If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed.

    Important

    OpenShift Dedicated includes HAProxy 2.8. If you want to update to this version of the web-based load balancer, ensure that you add the spec.httpHeaders.headerNameCaseAdjustments section to your cluster’s configuration file. As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file.

    Prerequisites

    • You have installed the OpenShift CLI ( oc ).
    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    • Capitalize an HTTP header by using the oc patch command. Change the HTTP header from host to Host by running the following command:

      $ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}'
    • Create a Route resource YAML file so that the annotation can be applied to the application.

      Example of a route named my-application

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/h1-adjust-case: true 1
        name: <application_name>
        namespace: <application_name>
      # ...
      1 Set haproxy.router.openshift.io/h1-adjust-case so that the Ingress Controller can adjust the host request header as specified.

    • Specify adjustments by configuring the HeaderNameCaseAdjustments field in the Ingress Controller YAML configuration file. The following example Ingress Controller YAML file adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes:

      Example Ingress Controller YAML

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          headerNameCaseAdjustments:
          - Host
      The following example route enables HTTP response header name case adjustments by using the haproxy.router.openshift.io/h1-adjust-case annotation:

      Example route YAML

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/h1-adjust-case: true 1
        name: my-application
        namespace: my-application
      spec:
        to:
          kind: Service
          name: my-application
      1 Set haproxy.router.openshift.io/h1-adjust-case to true.

    3.9.19. Using router compression

    You configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341 . Memory allocated for compression can affect the max connections. Additionally, compression of large buffers can cause latency, as with heavy regex or long lists of regex. Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression.

    Procedure

    1. Configure the httpCompression field for the Ingress Controller. Use the following command to edit the IngressController resource:

      $ oc edit -n openshift-ingress-operator ingresscontrollers/default
    2. Under spec , set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpCompression:
          mimeTypes:
          - "text/html"
          - "text/css; charset=utf-8"
          - "application/json"
         ...
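      Optionally, you can check whether a response for one of the configured MIME types is returned compressed, for example with curl. This is a sketch; compression is applied only if the response matches a configured MIME type and is not already compressed by the application:

      $ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://<route_hostname>/ | grep -i content-encoding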

    3.9.20. Exposing router metrics

    You can expose the HAProxy router metrics by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems, such as Prometheus, can access the HAProxy router metrics. You can view the HAProxy router metrics in a browser in the HTML and comma separated values (CSV) formats.

    Prerequisites

    • You configured your firewall to access the default stats port, 1936.

    Procedure

    1. Get the router pod name by running the following command:

      $ oc get pods -n openshift-ingress

      Example output

      NAME                              READY   STATUS    RESTARTS   AGE
      router-default-76bfffb66c-46qwp   1/1     Running   0          11h
      Get the router’s username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files.

      Get the username by running the following command:

      $ oc rsh <router_pod_name> cat metrics-auth/statsUsername
    2. Get the password by running the following command:

      $ oc rsh <router_pod_name> cat metrics-auth/statsPassword
  • Get the router IP and metrics certificates by running the following command:

    $ oc describe pod <router_pod>
  • Get the raw statistics in Prometheus format by running the following command:

    $ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics
  • Access the metrics securely by running the following command:

    $ curl -u <user>:<password> https://<router_IP>:<stats_port>/metrics -k
  • Access the default stats port, 1936, by running the following command:

    $ curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics

    Example 3.1. Example output

    ...
    # HELP haproxy_backend_connections_total Total number of connections.
    # TYPE haproxy_backend_connections_total gauge
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0
    # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value.
    # TYPE haproxy_exporter_server_threshold gauge
    haproxy_exporter_server_threshold{type="current"} 11
    haproxy_exporter_server_threshold{type="limit"} 500
    # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_frontend_bytes_in_total gauge
    haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="public"} 119070
    # HELP haproxy_server_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_server_bytes_in_total gauge
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0
    haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0
    Launch the stats window by entering the following URL in a browser:

    http://<user>:<password>@<router_IP>:<stats_port>
  • Optional: Get the stats in CSV format by entering the following URL in a browser:

    http://<user>:<password>@<router_ip>:1936/metrics;csv
    3.9.21. Customizing HAProxy error code response pages

    As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route.

    Custom error code response pages are specified in a config map then patched to the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http . Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines . Here is an example of the default OpenShift Dedicated HAProxy router http 503 error code response page . You can use the default content as a template for creating your own custom page.

    By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OpenShift Dedicated 4.8 and earlier. If a config map for the customization of an HTTP error code response is not provided, and you are using a custom HTTP error code response page, the router serves a default 404 or 503 error code response page.

    If you use the OpenShift Dedicated default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings.
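    The error page files are complete raw HTTP responses. The following is a minimal, hypothetical sketch of what an error-page-503.http file can look like; check the HAProxy HTTP error page configuration guidelines and the default page for the exact headers and CRLF line endings that your router version expects:

    HTTP/1.0 503 Service Unavailable
    Pragma: no-cache
    Cache-Control: private, max-age=0, no-cache, no-store
    Connection: close
    Content-Type: text/html

    <html>
      <body>
        <h1>Application temporarily unavailable</h1>
        <p>The application is not responding. Try again later.</p>
      </body>
    </html>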

    Procedure

    1. Create a config map named my-custom-error-code-pages in the openshift-config namespace:

      $ oc -n openshift-config create configmap my-custom-error-code-pages \
      --from-file=error-page-503.http \
      --from-file=error-page-404.http
      Important

      If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information.

    2. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name:

      $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge

      The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern <your_ingresscontroller_name>-errorpages in the openshift-ingress namespace.

    3. Display the copy:

      $ oc get cm default-errorpages -n openshift-ingress

      Example output

      NAME                       DATA   AGE
      default-errorpages         2      25s  1
      1 The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched.

    4. Confirm that the config map containing the custom error response page mounts on the router volume, where the config map key is the filename that has the custom HTTP error code response. For the 503 custom HTTP error code response:

      $ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http
    5. For the 404 custom HTTP error code response:

      $ oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http

    Verification

    • Verify your custom error code HTTP response. Create a test project and application:

      $ oc new-project test-ingress
      $ oc new-app django-psql-example
  • For 503 custom http error code response: Stop all the pods for the application. Run the following curl command or visit the route hostname in the browser:

    $ curl -vk <route_hostname>
  • For 404 custom http error code response: Visit a non-existent route or an incorrect route. Run the following curl command or visit the route hostname in the browser:

    $ curl -vk <route_hostname>
  • Check if the errorfile attribute is properly set in the haproxy.config file:

    $ oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile
    3.9.22. Setting the Ingress Controller maximum connections

    A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections.

    Prerequisites

    • The following assumes that you already created an Ingress Controller.

    Procedure

    • Update the Ingress Controller to change the maximum number of connections for HAProxy:

      $ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}'
      Warning

      If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the "Ingress Controller configuration parameters" section for more information about this parameter.
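      You can optionally read the configured value back from the IngressController spec to confirm the change, for example:

      $ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.tuningOptions.maxConnections}'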

    3.10. OpenShift Dedicated Ingress Operator configurations

    The following table details the components of the Ingress Operator and whether Red Hat Site Reliability Engineering (SRE) maintains each component on OpenShift Dedicated clusters.

    Table 3.3. Ingress Operator Responsibility Chart
    Ingress component Managed by Default configuration?

    • Scaling Ingress Controller
    • Ingress Operator thread count
    • Ingress Controller access logging
    • Ingress Controller sharding
    • Ingress Controller route admission policy
    • Ingress Controller wildcard routes
    • Ingress Controller X-Forwarded headers
    • Ingress Controller route compression

    Chapter 4. OpenShift SDN default CNI network provider

    4.1. Enabling multicast for a project

    Note

    OpenShift SDN CNI is deprecated as of OpenShift Dedicated 4.14. As of OpenShift Dedicated 4.15, the network plugin is not an option for new installations. In a future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN-Kubernetes CNI instead.

    4.1.1. About multicast

    With IP multicast, data is broadcast to many IP addresses simultaneously.

    Important
    • At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution.

    By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications of the exemption of multicast from network policies before enabling it.

    Multicast traffic between OpenShift Dedicated pods is disabled by default. If you are using the OpenShift SDN network plugin, you can enable multicast on a per-project basis.

    When using the OpenShift SDN network plugin in networkpolicy isolation mode:

    • Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast.
    • Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects.

    When using the OpenShift SDN network plugin in multitenant isolation mode:

    • Multicast packets sent by a pod will be delivered to all other pods in the project.
    • Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project.

    4.1.2. Enabling multicast between pods

    You can enable multicast between pods for your project.

    Prerequisites

    • Install the OpenShift CLI ( oc ).
    • You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role.

    Procedure

    • Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for.

      $ oc annotate netnamespace <namespace> \
          netnamespace.network.openshift.io/multicast-enabled=true
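
      You can optionally confirm that the annotation was applied before continuing. This is a minimal check; <namespace> is the project you annotated. Look for the netnamespace.network.openshift.io/multicast-enabled: "true" annotation in the output:

      $ oc get netnamespace <namespace> -o yaml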

    Verification

    To verify that multicast is enabled for a project, complete the following procedure:

  • Change your current project to the project that you enabled multicast for. Replace <project> with the project name.

    $ oc project <project>
  • Create a pod to act as a multicast receiver:

    $ cat <<EOF| oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: mlistener
      labels:
        app: multicast-verify
    spec:
      containers:
        - name: mlistener
          image: registry.access.redhat.com/ubi9
          command: ["/bin/sh", "-c"]
          args:
            ["dnf -y install socat hostname && sleep inf"]
          ports:
            - containerPort: 30102
              name: mlistener
              protocol: UDP
    EOF
  • Create a pod to act as a multicast sender:

    $ cat <<EOF| oc create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: msender
      labels:
        app: multicast-verify
    spec:
      containers:
        - name: msender
          image: registry.access.redhat.com/ubi9
          command: ["/bin/sh", "-c"]
          args:
            ["dnf -y install socat && sleep inf"]
    EOF
  • In a new terminal window or tab, start the multicast listener.
  • Get the IP address for the Pod:

    $ POD_IP=$(oc get pods mlistener -o jsonpath='{.status.podIP}')
  • Start the multicast listener by entering the following command:

    $ oc exec mlistener -i -t -- \
        socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:$POD_IP,fork EXEC:hostname
  • Start the multicast transmitter. Get the pod network IP address range:

    $ CIDR=$(oc get Network.config.openshift.io cluster \
        -o jsonpath='{.status.clusterNetwork[0].cidr}')
  • To send a multicast message, enter the following command:

    $ oc exec msender -i -t -- \
        /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=$CIDR,ip-multicast-ttl=64"

    If multicast is working, the previous command returns the following output:

    mlistener
    Chapter 5. Network verification for OpenShift Dedicated clusters

    Network verification checks run automatically when you deploy an OpenShift Dedicated cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment. You can also run the network verification checks manually to validate the configuration for an existing cluster.

    5.1. Understanding network verification for OpenShift Dedicated clusters

    When you deploy an OpenShift Dedicated cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster, network verification runs automatically. This helps you identify and resolve configuration issues prior to deployment. When you prepare to install your cluster by using Red Hat OpenShift Cluster Manager, the automatic checks run after you input a subnet into a subnet ID field on the Virtual Private Cloud (VPC) subnet settings page. When you add a machine pool with a subnet that is new to your cluster, the automatic network verification checks the subnet to ensure that network connectivity is available before the machine pool is provisioned. After automatic network verification completes, a record is sent to the service log. The record provides the results of the verification check, including any network configuration errors. You can resolve the identified issues before a deployment and the deployment has a greater chance of success. You can also run the network verification manually for an existing cluster. This enables you to verify the network configuration for your cluster after making configuration changes. For steps to run the network verification checks manually, see Running the network verification manually .

    5.2. Scope of the network verification checks

    The network verification includes checks for each of the following requirements:

    • The parent Virtual Private Cloud (VPC) exists.
    • All specified subnets belong to the VPC.
    • The VPC has enableDnsSupport enabled.
    • The VPC has enableDnsHostnames enabled.
    • Egress is available to the required domain and port combinations that are specified in the AWS firewall prerequisites section.
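
    For example, on AWS you can check the VPC DNS attributes yourself with the AWS CLI before the verification runs. This is a minimal sketch; it assumes the AWS CLI is installed and configured, and <vpc_id> is a placeholder for your VPC ID:

    $ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
    $ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames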

    5.3. Automatic network verification bypassing

    You can bypass the automatic network verification if you want to deploy an OpenShift Dedicated cluster with known network configuration issues into an existing Virtual Private Cloud (VPC). If you bypass the network verification when you create a cluster, the cluster has a limited support status. After installation, you can resolve the issues and then manually run the network verification. The limited support status is removed after the verification succeeds. When you install a cluster into an existing VPC by using Red Hat OpenShift Cluster Manager, you can bypass the automatic verification by selecting Bypass network verification on the Virtual Private Cloud (VPC) subnet settings page.

    5.4. Running the network verification manually

    You can manually run the network verification checks for an existing OpenShift Dedicated cluster by using Red Hat OpenShift Cluster Manager.

    Prerequisites

    • You have an existing OpenShift Dedicated cluster.
    • You are the cluster owner or you have the cluster editor role.

    Procedure

    1. Navigate to OpenShift Cluster Manager and select your cluster.
    2. Select Verify networking from the Actions drop-down menu.

    Chapter 6. Configuring a cluster-wide proxy

    If you are using an existing Virtual Private Cloud (VPC), you can configure a cluster-wide proxy during an OpenShift Dedicated cluster installation or after the cluster is installed. When you enable a proxy, the core cluster components are denied direct access to the internet, but the proxy does not affect user workloads. Only cluster system egress traffic is proxied, including calls to the cloud provider API. You can enable a proxy only for OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model. If you use a cluster-wide proxy, you are responsible for maintaining the availability of the proxy to the cluster. If the proxy becomes unavailable, then it might impact the health and supportability of the cluster.

    6.1. Prerequisites for configuring a cluster-wide proxy

    To configure a cluster-wide proxy, you must meet the following requirements. These requirements are valid when you configure a proxy during installation or postinstallation.

    General requirements
    • You are the cluster owner.
    • Your account has sufficient privileges.
    • You have an existing Virtual Private Cloud (VPC) for your cluster.
    • You are using the Customer Cloud Subscription (CCS) model for your cluster.
    • The proxy can access the VPC for the cluster and the private subnets of the VPC. The proxy is also accessible from the VPC for the cluster and from the private subnets of the VPC.
    • You have added the following endpoints to your VPC endpoint:

      ec2.<aws_region>.amazonaws.com
      elasticloadbalancing.<aws_region>.amazonaws.com
      s3.<aws_region>.amazonaws.com

      These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works at the container level and not at the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not enough.

      Important

      When using a cluster-wide proxy, you must configure the s3.<aws_region>.amazonaws.com endpoint as type Gateway .
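
      For example, on AWS you could create the S3 Gateway endpoint with the AWS CLI. This is a minimal sketch rather than a complete procedure; <vpc_id>, <aws_region>, and <route_table_id> are placeholders for your own values:

      $ aws ec2 create-vpc-endpoint \
          --vpc-id <vpc_id> \
          --vpc-endpoint-type Gateway \
          --service-name com.amazonaws.<aws_region>.s3 \
          --route-table-ids <route_table_id>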

    Network requirements
    • If your proxy re-encrypts egress traffic, you must create exclusions to the domain and port combinations. The following tables offer guidance for these exceptions. An example proxy exclusion configuration is shown after the tables.

      Your proxy must exclude re-encrypting the following OpenShift URLs:

      Address: observatorium-mst.api.openshift.com
      Protocol/Port: https/443
      Function: Required. Used for Managed OpenShift-specific telemetry.

      Address: sso.redhat.com
      Protocol/Port: https/443
      Function: The https://cloud.redhat.com/openshift site uses authentication from sso.redhat.com to download the cluster pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, and chargeback reporting.

      Your proxy must exclude re-encrypting the following site reliability engineering (SRE) and management URLs:

      Address: *.osdsecuritylogs.splunkcloud.com, inputs1.osdsecuritylogs.splunkcloud.com, inputs2.osdsecuritylogs.splunkcloud.com, inputs4.osdsecuritylogs.splunkcloud.com, inputs5.osdsecuritylogs.splunkcloud.com, inputs6.osdsecuritylogs.splunkcloud.com, inputs7.osdsecuritylogs.splunkcloud.com, inputs8.osdsecuritylogs.splunkcloud.com, inputs9.osdsecuritylogs.splunkcloud.com, inputs10.osdsecuritylogs.splunkcloud.com, inputs11.osdsecuritylogs.splunkcloud.com, inputs12.osdsecuritylogs.splunkcloud.com, inputs13.osdsecuritylogs.splunkcloud.com, inputs14.osdsecuritylogs.splunkcloud.com, inputs15.osdsecuritylogs.splunkcloud.com
      Protocol/Port: tcp/9997
      Function: Used by the splunk-forwarder-operator as a log forwarding endpoint to be used by Red Hat SRE for log-based alerting.

      Address: http-inputs-osdsecuritylogs.splunkcloud.com
      Protocol/Port: https/443
      Function: Used by the splunk-forwarder-operator as a log forwarding endpoint to be used by Red Hat SRE for log-based alerting.
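
      The following is a minimal sketch of what such exclusions could look like if your TLS-inspecting proxy happens to be Squid with ssl_bump enabled; the exact mechanism and syntax depend entirely on your proxy product, and the ACL name no_inspect is an arbitrary choice:

      # Do not re-encrypt (bump) the Red Hat telemetry, SSO, and SRE log forwarding endpoints.
      acl no_inspect ssl::server_name observatorium-mst.api.openshift.com
      acl no_inspect ssl::server_name sso.redhat.com
      acl no_inspect ssl::server_name .osdsecuritylogs.splunkcloud.com
      acl no_inspect ssl::server_name http-inputs-osdsecuritylogs.splunkcloud.com

      acl step1 at_step SslBump1
      ssl_bump peek step1
      ssl_bump splice no_inspect
      ssl_bump bump all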


      6.2. Responsibilities for additional trust bundles

      If you supply an additional trust bundle, you are responsible for the following requirements:

      • Ensuring that the contents of the additional trust bundle are valid
      • Ensuring that the certificates, including intermediary certificates, contained in the additional trust bundle have not expired
      • Tracking the expiry and performing any necessary renewals for certificates contained in the additional trust bundle
      • Updating the cluster configuration with the updated additional trust bundle

      An example command for checking certificate expiry follows this list.
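
      For example, you can check the subject and expiry date of a certificate with openssl before adding it to the bundle; for a bundle that contains multiple certificates, repeat the check for each certificate. The file name additional-ca.pem is a placeholder:

      $ openssl x509 -in additional-ca.pem -noout -subject -enddate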

      6.3. Configuring a proxy during installation

      You can configure an HTTP or HTTPS proxy when you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy during installation by using Red Hat OpenShift Cluster Manager.

      6.4. Configuring a proxy during installation using OpenShift Cluster Manager

      If you are installing an OpenShift Dedicated cluster into an existing Virtual Private Cloud (VPC), you can use Red Hat OpenShift Cluster Manager to enable a cluster-wide HTTP or HTTPS proxy during installation. You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model. Prior to the installation, you must verify that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC. For detailed steps to configure a cluster-wide proxy during installation by using OpenShift Cluster Manager, see Creating a cluster on AWS with CCS or Creating a cluster on GCP with CCS .


      6.5. Configuring a proxy after installation

      You can configure an HTTP or HTTPS proxy after you install an OpenShift Dedicated with Customer Cloud Subscription (CCS) cluster into an existing Virtual Private Cloud (VPC). You can configure the proxy after installation by using Red Hat OpenShift Cluster Manager.

      6.6. Configuring a proxy after installation using OpenShift Cluster Manager

      You can use Red Hat OpenShift Cluster Manager to add a cluster-wide proxy configuration to an existing OpenShift Dedicated cluster in a Virtual Private Cloud (VPC). You can enable a proxy only for clusters that use the Customer Cloud Subscription (CCS) model. You can also use OpenShift Cluster Manager to update an existing cluster-wide proxy configuration. For example, you might need to update the network address for the proxy or replace the additional trust bundle if any of the certificate authorities for the proxy expire.

      Important

      The cluster applies the proxy configuration to the control plane and compute nodes. While applying the configuration, each cluster node is temporarily placed in an unschedulable state and drained of its workloads. Each node is restarted as part of the process.

      Prerequisites

      • You have an OpenShift Dedicated cluster that uses the Customer Cloud Subscription (CCS) model .
      • Your cluster is deployed in a VPC.

      Procedure

      1. Navigate to OpenShift Cluster Manager and select your cluster.
      2. Under the Virtual Private Cloud (VPC) section on the Networking page, click Edit cluster-wide proxy .
      3. On the Edit cluster-wide proxy page, provide your proxy configuration details:

         • Enter a value in at least one of the following fields:
           • Specify a valid HTTP proxy URL .
           • Specify a valid HTTPS proxy URL .
         • In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. If you are replacing an existing trust bundle file, select Replace file to view the field. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.

      4. Click Confirm .

      Verification

      • Under the Virtual Private Cloud (VPC) section on the Networking page, verify that the proxy configuration for your cluster is as expected.
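
      You can also inspect the resulting cluster proxy configuration from the command line. This is an optional check; the values returned reflect whatever you entered in OpenShift Cluster Manager:

      $ oc get proxy/cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}{.spec.trustedCA.name}{"\n"}'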

      Chapter 7. CIDR range definitions

      You must specify non-overlapping ranges for the following CIDR ranges. Machine CIDR ranges cannot be changed after creating your cluster. When specifying subnet CIDR ranges, ensure that the subnet CIDR range is within the defined Machine CIDR. You must verify that the subnet CIDR ranges allow for enough IP addresses for all intended workloads, depending on the platform on which the cluster is hosted.

      Important

      OVN-Kubernetes, the default network provider in OpenShift Dedicated 4.14 and later versions, uses the following IP address ranges internally: 100.64.0.0/16 , 169.254.169.0/29 , 100.88.0.0/16 , fd98::/64 , fd69::/125 , and fd97::/64 . If your cluster uses OVN-Kubernetes, do not include any of these IP address ranges in any other CIDR definitions in your cluster or infrastructure.

      7.1. Machine CIDR

      In the Machine classless inter-domain routing (CIDR) field, you must specify the IP address range for machines or cluster nodes. This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum IP address range of 128 addresses, using the subnet prefix /25 , is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix /24 , is supported for deployments that use multiple availability zones. The default is 10.0.0.0/16 . This range must not conflict with any connected networks.

      7.2. Service CIDR

      In the Service CIDR field, you must specify the IP address range for services. It is recommended, but not required, that the address block is the same between clusters; reusing the same block across clusters does not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 172.30.0.0/16 .

      7.3. Pod CIDR

      In the pod CIDR field, you must specify the IP address range for pods. It is recommended, but not required, that the address block is the same between clusters; reusing the same block across clusters does not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 10.128.0.0/14 .

      7.4. Host Prefix

      In the Host Prefix field, you must specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine. For example, if the host prefix is set to /23 , each machine is assigned a /23 subnet from the pod CIDR address range. The default is /23 , allowing 512 cluster nodes and 512 pods per node, both of which are beyond the supported maximums.
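
      For example, with the default pod CIDR of 10.128.0.0/14 and the default host prefix of /23 , each machine receives a /23 subnet containing 2^(32-23) = 512 pod IP addresses, and the /14 pod CIDR can be divided into 2^(23-14) = 512 such machine subnets, which is where the 512-node and 512-pods-per-node figures come from.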

      Chapter 8. Network security

      8.1. Understanding network policy APIs

      Kubernetes offers two features that users can use to enforce network security. One feature that allows users to enforce network policy is the NetworkPolicy API that is designed mainly for application developers and namespace tenants to protect their namespaces by creating namespace-scoped policies. The second feature is AdminNetworkPolicy , which consists of two APIs: the AdminNetworkPolicy (ANP) API and the BaselineAdminNetworkPolicy (BANP) API.

      ANP and BANP are designed for cluster and network administrators to protect their entire cluster by creating cluster-scoped policies. Cluster administrators can use ANPs to enforce non-overridable policies that take precedence over NetworkPolicy objects. Administrators can use BANP to set up and enforce optional cluster-scoped network policy rules that are overridable by users using NetworkPolicy objects when necessary. When used together, ANP, BANP, and network policy can achieve full multi-tenant isolation that administrators can use to secure their cluster.

      OVN-Kubernetes CNI in OpenShift Dedicated implements these network policies using Access Control List (ACL) tiers to evaluate and apply them. ACLs are evaluated in descending order from tier 1 to tier 3:

      • Tier 1 evaluates AdminNetworkPolicy (ANP) objects.
      • Tier 2 evaluates NetworkPolicy objects.
      • Tier 3 evaluates BaselineAdminNetworkPolicy (BANP) objects.

      ANPs are evaluated first. When the match is an ANP allow or deny rule, any existing NetworkPolicy and BaselineAdminNetworkPolicy (BANP) objects in the cluster are skipped from evaluation. When the match is an ANP pass rule, evaluation moves from tier 1 of the ACL to tier 2, where the NetworkPolicy policy is evaluated. If no NetworkPolicy matches the traffic, evaluation moves from tier 2 ACLs to tier 3 ACLs, where BANP is evaluated.

      8.1.1. Key differences between AdminNetworkPolicy and NetworkPolicy custom resources

      The following table explains key differences between the cluster scoped AdminNetworkPolicy API and the namespace scoped NetworkPolicy API.

      For each policy element, the list below compares the cluster-scoped AdminNetworkPolicy API with the namespace-scoped NetworkPolicy API.

      • Applicable user. AdminNetworkPolicy: Cluster administrator or equivalent. NetworkPolicy: Namespace owners.
      • Scope. AdminNetworkPolicy: Cluster. NetworkPolicy: Namespaced.
      • Drop traffic. AdminNetworkPolicy: Supported with an explicit Deny action set as a rule. NetworkPolicy: Supported via implicit Deny isolation at policy creation time.
      • Delegate traffic. AdminNetworkPolicy: Supported with a Pass action set as a rule. NetworkPolicy: Not applicable.
      • Allow traffic. AdminNetworkPolicy: Supported with an explicit Allow action set as a rule. NetworkPolicy: The default action for all rules is to allow.
      • Rule precedence within the policy. AdminNetworkPolicy: Depends on the order in which they appear within an ANP. The higher the rule's position, the higher the precedence. NetworkPolicy: Rules are additive.
      • Policy precedence. AdminNetworkPolicy: Among ANPs, the priority field sets the order for evaluation. The lower the priority number, the higher the policy precedence. NetworkPolicy: There is no policy ordering between policies.
      • Feature precedence. AdminNetworkPolicy: Evaluated first via tier 1 ACL; BANP is evaluated last via tier 3 ACL. NetworkPolicy: Enforced after ANP and before BANP; evaluated in tier 2 of the ACL.
      • Matching pod selection. AdminNetworkPolicy: Can apply different rules across namespaces. NetworkPolicy: Can apply different rules across pods in a single namespace.
      • Cluster egress traffic. AdminNetworkPolicy: Supported via nodes and networks peers. NetworkPolicy: Supported through the ipBlock field along with accepted CIDR syntax.
      • Cluster ingress traffic. AdminNetworkPolicy: Not supported. NetworkPolicy: Not supported.
      • Fully qualified domain names (FQDN) peer support. AdminNetworkPolicy: Not supported. NetworkPolicy: Not supported.
      • Namespace selectors. AdminNetworkPolicy: Supports advanced selection of namespaces with the use of the namespaces.matchLabels field. NetworkPolicy: Supports label based namespace selection with the use of the namespaceSelector field.

      8.2. Network policy

      8.2.1. About network policy

      As a developer, you can define network policies that restrict traffic to pods in your cluster.

      8.2.1.1. About network policy

      In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Dedicated 4, OpenShift SDN supports using network policy in its default network isolation mode.

      Warning

      Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes.

      By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

      If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.

      A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected.

      The following example NetworkPolicy objects demonstrate supporting different scenarios:

    • Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: deny-by-default
      spec:
        podSelector: {}
        ingress: []
    • Only allow connections from the OpenShift Dedicated Ingress Controller: To make a project allow only connections from the OpenShift Dedicated Ingress Controller, add the following NetworkPolicy object.

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: ingress
        podSelector: {}
        policyTypes:
        - Ingress
    • Only accept connections from pods within a project:

      Important

      To allow ingress connections from hostNetwork pods in the same namespace, you need to apply the allow-from-hostnetwork policy together with the allow-same-namespace policy. To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
    • Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in following example), add a NetworkPolicy object similar to the following:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-http-and-https
      spec:
        podSelector:
          matchLabels:
            role: frontend
        ingress:
        - ports:
          - protocol: TCP
            port: 80
          - protocol: TCP
            port: 443
    • Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-pod-and-namespace-both
      spec:
        podSelector:
          matchLabels:
            name: test-pods
        ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  project: project_name
              podSelector:
                matchLabels:
                  name: test-pods
    • NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in previous samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project, thus allowing the pods with the label role=frontend to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.

      8.2.1.1.1. Using the allow-from-router network policy

      Use the following NetworkPolicy to allow external traffic regardless of the router configuration:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-router
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
              policy-group.network.openshift.io/ingress: "" 1
        podSelector: {}
        policyTypes:
        - Ingress
      1
      The policy-group.network.openshift.io/ingress: "" label supports both OpenShift SDN and OVN-Kubernetes.
      8.2.1.1.2. Using the allow-from-hostnetwork network policy

      Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods.

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-hostnetwork
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                policy-group.network.openshift.io/host-network: ""
        podSelector: {}
        policyTypes:
        - Ingress
      8.2.1.2. Optimizations for network policy with OpenShift SDN

      Use a network policy to isolate pods that are differentiated from one another by labels within a namespace.

      It is inefficient to apply NetworkPolicy objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a podSelector . For example, if the spec podSelector and the ingress podSelector within a NetworkPolicy object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node.

      When designing your network policy, refer to the following guidelines:

      • Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. NetworkPolicy objects that select a whole namespace, by using the namespaceSelector or an empty podSelector , generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace.
      • Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces.
      • Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods.
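
      For example, a policy that selects a whole peer namespace with a single namespaceSelector , rather than enumerating individual pods, keeps the generated flow count low. The following is a minimal sketch; the monitoring policy-group label is only one example of a label that such a namespace-wide selector could match:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-from-monitoring-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: monitoring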

      8.2.1.3. Optimizations for network policy with OVN-Kubernetes network plugin

      When designing your network policy, refer to the following guidelines:

      • For network policies with the same spec.podSelector spec, it is more efficient to use one network policy with multiple ingress or egress rules, than multiple network policies with subsets of ingress or egress rules.
      • Every ingress or egress rule based on the podSelector or namespaceSelector spec generates the number of OVS flows proportional to number of pods selected by network policy + number of pods selected by ingress or egress rule . Therefore, it is preferable to use the podSelector or namespaceSelector spec that can select as many pods as you need in one rule, instead of creating individual rules for every pod.

      For example, the following policy contains two rules:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: test-network-policy
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector:
              matchLabels:
                role: frontend
        - from:
          - podSelector:
              matchLabels:
                role: backend

      The following policy expresses those same two rules as one:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: test-network-policy
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector:
              matchExpressions:
              - {key: role, operator: In, values: [frontend, backend]}

      The same guideline applies to the spec.podSelector spec. If you have the same ingress or egress rules for different network policies, it might be more efficient to create one network policy with a common spec.podSelector spec. For example, the following two policies have different rules:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: policy1
      spec:
        podSelector:
          matchLabels:
            role: db
        ingress:
        - from:
          - podSelector:
              matchLabels:
                role: frontend
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: policy2
      spec:
        podSelector:
          matchLabels:
            role: client
        ingress:
        - from:
          - podSelector:
              matchLabels:
                role: frontend

      The following network policy expresses those same two rules as one:

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: policy3
      spec:
        podSelector:
          matchExpressions:
          - {key: role, operator: In, values: [db, client]}
        ingress:
        - from:
          - podSelector:
              matchLabels:
                role: frontend

      You can apply this optimization only when multiple selectors can be expressed as one. In cases where selectors are based on different labels, it might not be possible to apply this optimization. In those cases, consider applying some new labels for network policy optimization specifically.

      8.2.1.4. Next steps

      8.2.2. Creating a network policy

      As a user with the admin role, you can create a network policy for a namespace.

      8.2.2.1. Example NetworkPolicy object

      The following annotates an example NetworkPolicy object:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-27107 1
      spec:
        podSelector: 2
          matchLabels:
            app: mongodb
        ingress:
        - from:
          - podSelector: 3
              matchLabels:
                app: app
          ports: 4
          - protocol: TCP
            port: 27017
      1 The name of the NetworkPolicy object.
      2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
      3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
      4 A list of one or more destination ports on which to accept traffic.
      8.2.2.2. Creating a network policy using the CLI

      To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace that the network policy applies to.

      Procedure

      1. Create a policy rule: Create a <policy_name>.yaml file:

        $ touch <policy_name>.yaml

        where:

        <policy_name>
        Specifies the network policy file name.

      2. Define a network policy in the file that you just created, such as in the following examples:

        Deny ingress from all pods in all namespaces

        This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: deny-by-default
        spec:
          podSelector: {}
          policyTypes:
          - Ingress
          ingress: []

        Allow ingress from all pods in the same namespace

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: allow-same-namespace
        spec:
          podSelector: {}
          ingress:
          - from:
            - podSelector: {}

        Allow ingress traffic to one pod from a particular namespace

        This policy allows traffic to pods labelled pod-a from pods running in namespace-y .

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: allow-traffic-pod
        spec:
          podSelector:
            matchLabels:
              pod: pod-a
          policyTypes:
          - Ingress
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                   kubernetes.io/metadata.name: namespace-y
    • To create the network policy object, enter the following command:

      $ oc apply -f <policy_name>.yaml -n <namespace>

      where:

      <policy_name>
      Specifies the network policy file name.
      <namespace>
      Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

      Example output

      networkpolicy.networking.k8s.io/deny-by-default created
      If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

      8.2.2.3. Creating a default deny all network policy

      This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace that the network policy applies to.

      Procedure

      1. Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file:

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: deny-by-default
          namespace: default 1
        spec:
          podSelector: {} 2
          ingress: [] 3
        1 namespace: default deploys this policy to the default namespace.
        2 podSelector: is empty; this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace.
        3 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods.

      2. Apply the policy by entering the following command:

        $ oc apply -f deny-by-default.yaml

        Example output

        networkpolicy.networking.k8s.io/deny-by-default created

      8.2.2.4. Creating a network policy to allow traffic from external clients

      With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web .

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace that the network policy applies to.

      Procedure

      1. Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file:

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: web-allow-external
          namespace: default
        spec:
          policyTypes:
          - Ingress
          podSelector:
            matchLabels:
              app: web
          ingress:
          - {}

      2. Apply the policy by entering the following command:

        $ oc apply -f web-allow-external.yaml

        Example output

        networkpolicy.networking.k8s.io/web-allow-external created
        This policy allows traffic from all resources, including external traffic as illustrated in the following diagram:

      Allow traffic from external clients
      8.2.2.5. Creating a network policy allowing traffic to an application from all namespaces
      Note

      If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace that the network policy applies to.

      Procedure

      1. Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file:

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: web-allow-all-namespaces
          namespace: default
        spec:
          podSelector:
            matchLabels:
              app: web 1
          policyTypes:
          - Ingress
          ingress:
          - from:
            - namespaceSelector: {} 2
        1 Applies the policy only to app:web pods in default namespace.
        2 Selects all pods in all namespaces. By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to.

      2. Apply the policy by entering the following command:

        $ oc apply -f web-allow-all-namespaces.yaml

        Example output

        networkpolicy.networking.k8s.io/web-allow-all-namespaces created

      Verification

      1. Start a web service in the default namespace by entering the following command:

        $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
      2. Run the following command to deploy an alpine image in the secondary namespace and to start a shell:

        $ oc run test-$RANDOM --namespace=secondary --rm -i -t --image=alpine -- sh
      3. Run the following command in the shell and observe that the request is allowed:

        # wget -qO- --timeout=2 http://web.default

        Expected output

        <!DOCTYPE html>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>
        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>
        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>

      8.2.2.6. Creating a network policy allowing traffic to an application from a namespace
      Note

      If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace that the network policy applies to.

      Procedure

      1. Create a policy that allows traffic from all pods in namespaces that have the label purpose=production . Save the YAML in the web-allow-prod.yaml file:

        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: web-allow-prod
          namespace: default
        spec:
          podSelector:
            matchLabels:
              app: web 1
          policyTypes:
          - Ingress
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  purpose: production 2
        1 Applies the policy only to app:web pods in the default namespace.
        2 Restricts traffic to only pods in namespaces that have the label purpose=production .

      2. Apply the policy by entering the following command:

        $ oc apply -f web-allow-prod.yaml

        Example output

        networkpolicy.networking.k8s.io/web-allow-prod created

      Verification

      1. Start a web service in the default namespace by entering the following command:

        $ oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
      2. Run the following command to create the prod namespace:

        $ oc create namespace prod
      3. Run the following command to label the prod namespace:

        $ oc label namespace/prod purpose=production
      4. Run the following command to create the dev namespace:

        $ oc create namespace dev
      5. Run the following command to label the dev namespace:

        $ oc label namespace/dev purpose=testing
      6. Run the following command to deploy an alpine image in the dev namespace and to start a shell:

        $ oc run test-$RANDOM --namespace=dev --rm -i -t --image=alpine -- sh
      7. Run the following command in the shell and observe that the request is blocked:

        # wget -qO- --timeout=2 http://web.default

        Expected output

        wget: download timed out
      8. Run the following command to deploy an alpine image in the prod namespace and start a shell:

        $ oc run test-$RANDOM --namespace=prod --rm -i -t --image=alpine -- sh
      9. Run the following command in the shell and observe that the request is allowed:

        # wget -qO- --timeout=2 http://web.default

        Expected output

        <!DOCTYPE html>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>
        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>
        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>

      8.2.2.7. Creating a network policy using OpenShift Cluster Manager

      To define granular rules describing the ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.

      Prerequisites

      • You logged in to OpenShift Cluster Manager .
      • You created an OpenShift Dedicated cluster.
      • You configured an identity provider for your cluster.
      • You added your user account to the configured identity provider.
      • You created a project within your OpenShift Dedicated cluster.

      Procedure

      1. From OpenShift Cluster Manager , click on the cluster you want to access.
      2. Click Open console to navigate to the OpenShift web console.
      3. Click on your identity provider and provide your credentials to log in to the cluster.
      4. From the administrator perspective, under Networking , click NetworkPolicies .
      5. Click Create NetworkPolicy .
      6. Provide a name for the policy in the Policy name field.
      7. Optional: You can provide the label and selector for a specific pod if this policy applies only to one or more specific pods. If you do not select a specific pod, then this policy will be applicable to all pods on the cluster.
      8. Optional: You can block all ingress and egress traffic by using the Deny all ingress traffic or Deny all egress traffic checkboxes. You can also add any combination of ingress and egress rules, allowing you to specify the port, namespace, or IP blocks you want to approve.
      9. Add ingress rules to your policy:

         • Select Add ingress rule to configure a new rule. This action creates a new Ingress rule row with an Add allowed source drop-down menu that enables you to specify how you want to limit inbound traffic. The drop-down menu offers three options to limit your ingress traffic:
           • Allow pods from the same namespace limits traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace.
           • Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow inbound traffic. Leaving this option blank allows inbound traffic from all namespaces and pods within this cluster.
           • Allow peers by IP block limits traffic from a specified Classless Inter-Domain Routing (CIDR) IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all inbound traffic from all external sources.
         • You can restrict all of your inbound traffic to a port. If you do not add any ports then all ports are accessible to traffic.

      10. Add egress rules to your network policy:

         • Select Add egress rule to configure a new rule. This action creates a new Egress rule row with an Add allowed destination drop-down menu that enables you to specify how you want to limit outbound traffic. The drop-down menu offers three options to limit your egress traffic:
           • Allow pods from the same namespace limits outbound traffic to pods within the same namespace. You can specify the pods in a namespace, but leaving this option blank allows all of the traffic from pods in the namespace.
           • Allow pods from inside the cluster limits traffic to pods within the same cluster as the policy. You can specify namespaces and pods from which you want to allow outbound traffic. Leaving this option blank allows outbound traffic from all namespaces and pods within this cluster.
           • Allow peers by IP block limits traffic from a specified CIDR IP block. You can block certain IPs with the exceptions option. Leaving the CIDR field blank allows all outbound traffic from all external sources.
         • You can restrict all of your outbound traffic to a port. If you do not add any ports then all ports are accessible to traffic.

      A rough YAML sketch of the kind of policy that these options can produce follows this procedure.
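
      As a rough illustration only, a policy that combines an ingress rule restricted by IP block and port with an egress rule restricted by namespace and port, similar to what the form can produce, might look like the following sketch. All names, labels, CIDR ranges, and ports shown here are hypothetical placeholders:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: example-ingress-egress
      spec:
        podSelector:
          matchLabels:
            app: example
        policyTypes:
        - Ingress
        - Egress
        ingress:
        - from:
          - ipBlock:
              cidr: 192.0.2.0/24
              except:
              - 192.0.2.10/32
          ports:
          - protocol: TCP
            port: 8443
        egress:
        - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: example-namespace
          ports:
          - protocol: TCP
            port: 8080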

      8.2.3. Viewing a network policy

      As a user with the admin role, you can view a network policy for a namespace.

      8.2.3.1. Example NetworkPolicy object

      The following annotates an example NetworkPolicy object:

      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-27107 1
      spec:
        podSelector: 2
          matchLabels:
            app: mongodb
        ingress:
        - from:
          - podSelector: 3
              matchLabels:
                app: app
          ports: 4
          - protocol: TCP
            port: 27017
      1 The name of the NetworkPolicy object.
      2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
      3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
      4 A list of one or more destination ports on which to accept traffic.
      8.2.3.2. Viewing network policies using the CLI

      You can examine the network policies in a namespace. If you log in with a user with the cluster-admin role, then you can view any network policy in the cluster.

      Prerequisites

      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace where the network policy exists.

      Procedure

      • List network policies in a namespace: To view network policy objects defined in a namespace, enter the following command:

        $ oc get networkpolicy
      • Optional: To examine a specific network policy, enter the following command:

        $ oc describe networkpolicy <policy_name> -n <namespace>

        where:

        <policy_name>
        Specifies the name of the network policy to inspect.
        <namespace>
        Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. For example:

        $ oc describe networkpolicy allow-same-namespace

        Output for oc describe command

        Name:         allow-same-namespace
        Namespace:    ns1
        Created on:   2021-05-24 22:28:56 -0400 EDT
        Labels:       <none>
        Annotations:  <none>
        Spec:
          PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
          Allowing ingress traffic:
            To Port: <any> (traffic allowed to all ports)
            From:
              PodSelector: <none>
          Not affecting egress traffic
          Policy Types: Ingress
        If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

      8.2.3.3. Viewing network policies using OpenShift Cluster Manager

      You can view the configuration details of your network policy in Red Hat OpenShift Cluster Manager.

      Prerequisites

      • You logged in to OpenShift Cluster Manager .
      • You created an OpenShift Dedicated cluster.
      • You configured an identity provider for your cluster.
      • You added your user account to the configured identity provider.
      • You created a network policy.

      Procedure

      1. From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking , click NetworkPolicies .
      2. Select the desired network policy to view.
      3. In the Network Policy details page, you can view all of the associated ingress and egress rules.
      4. Select YAML on the network policy details to view the policy configuration in YAML format.

      You can only view the details of these policies. You cannot edit these policies.

      8.2.4. Deleting a network policy

      As a user with the admin role, you can delete a network policy from a namespace.

      8.2.4.1. Deleting a network policy using the CLI

      You can delete a network policy in a namespace. If you log in with a user with the cluster-admin role, then you can delete any network policy in the cluster.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.
      • You are working in the namespace where the network policy exists.

      Procedure

      • To delete a network policy object, enter the following command:

        $ oc delete networkpolicy <policy_name> -n <namespace>

        where:

        <policy_name>
        Specifies the name of the network policy.
        <namespace>
        Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

        Example output

        networkpolicy.networking.k8s.io/default-deny deleted
        If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu.

      8.2.4.2. Deleting a network policy using OpenShift Cluster Manager

      You can delete a network policy in a namespace.

      Prerequisites

      • You logged in to OpenShift Cluster Manager .
      • You created an OpenShift Dedicated cluster.
      • You configured an identity provider for your cluster.
      • You added your user account to the configured identity provider.

      Procedure

      1. From the Administrator perspective in the OpenShift Cluster Manager web console, under Networking , click NetworkPolicies .
      2. Use one of the following methods for deleting your network policy:

         • Delete the policy from the Network Policies table: From the Network Policies table, select the stack menu on the row of the network policy you want to delete and then, click Delete NetworkPolicy .
         • Delete the policy using the Actions drop-down menu from the individual network policy details: Click on the Actions drop-down menu for your network policy. Select Delete NetworkPolicy from the menu.

      8.2.5. Configuring multitenant isolation with network policy

      As a cluster administrator, you can configure your network policies to provide multitenant network isolation. If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set.

      8.2.5.1. Configuring multitenant isolation by using network policy

      You can configure your project to isolate it from pods and services in other project namespaces.

      Prerequisites

      • Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
      • You installed the OpenShift CLI ( oc ).
      • You are logged in to the cluster with a user with admin privileges.

      Procedure

      1. Create the following NetworkPolicy objects:

        A policy named allow-from-openshift-ingress :

        $ cat << EOF| oc create -f -
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: allow-from-openshift-ingress
        spec:
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  policy-group.network.openshift.io/ingress: ""
          podSelector: {}
          policyTypes:
          - Ingress
        EOF
        Note

        policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label.

        A policy named allow-from-openshift-monitoring :

        $ cat << EOF| oc create -f -
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: allow-from-openshift-monitoring
        spec:
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: monitoring
          podSelector: {}
          policyTypes:
          - Ingress
        EOF

        A policy named allow-same-namespace :

        $ cat << EOF| oc create -f -
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: allow-same-namespace
        spec:
          podSelector:
          ingress:
          - from:
            - podSelector: {}
        EOF

        A policy named allow-from-kube-apiserver-operator :

        $ cat << EOF| oc create -f -
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: allow-from-kube-apiserver-operator
        spec:
          ingress:
          - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: openshift-kube-apiserver-operator
              podSelector:
                matchLabels:
                  app: kube-apiserver-operator
          policyTypes:
          - Ingress
        EOF

        For more details, see New kube-apiserver-operator webhook controller validating health of webhook.

      2. Optional: To confirm that the network policies exist in your current project, enter the following command:

        $ oc describe networkpolicy

        Example output

        Name:         allow-from-openshift-ingress
        Namespace:    example1
        Created on:   2020-06-09 00:28:17 -0400 EDT
        Labels:       <none>
        Annotations:  <none>
        Spec:
          PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
          Allowing ingress traffic:
            To Port: <any> (traffic allowed to all ports)
            From:
              NamespaceSelector: network.openshift.io/policy-group: ingress
          Not affecting egress traffic
          Policy Types: Ingress
        Name:         allow-from-openshift-monitoring
        Namespace:    example1
        Created on:   2020-06-09 00:29:57 -0400 EDT
        Labels:       <none>
        Annotations:  <none>
        Spec:
          PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
          Allowing ingress traffic:
            To Port: <any> (traffic allowed to all ports)
            From:
              NamespaceSelector: network.openshift.io/policy-group: monitoring
          Not affecting egress traffic
          Policy Types: Ingress

      Chapter 9. OVN-Kubernetes network plugin

      9.1. About the OVN-Kubernetes network plugin

      The OpenShift Dedicated cluster uses a virtualized network for pod and service networks. Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OpenShift Dedicated. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.

      OVN-Kubernetes is the default networking solution for OpenShift Dedicated and single-node OpenShift deployments. OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website .

      OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers. It provides a means for remotely controlling the flow of network traffic on a network device, allowing network administrators to configure, manage, and monitor the flow of network traffic.

      OVN-Kubernetes provides more of the advanced functionality that is not available with OpenFlow. OVN supports distributed virtual routing, distributed logical switches, access control, DHCP, and DNS. OVN implements distributed virtual routing within logic flows, which equate to OpenFlow rules. For example, if a pod sends out a DHCP request on the network, a logic flow rule matches that broadcast packet and responds with a gateway, a DNS server, an IP address, and so on.

      OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the network provider features: egress IPs, firewalls, routers, hybrid networking, IPsec encryption, IPv6, network policy, network policy logs, hardware offloading, and multicast.
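      A quick way to see these components on a running cluster is to list the daemon sets and pods in the openshift-ovn-kubernetes namespace. The commands below are illustrative only; the exact resource names in the output vary by OpenShift version.

      $ oc get daemonsets -n openshift-ovn-kubernetes

      $ oc get pods -n openshift-ovn-kubernetes -o wide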

      9.1.1. OVN-Kubernetes purpose

      The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin does the following:

      • Uses OVN (Open Virtual Network) to manage network traffic flows.
      • Implements Kubernetes network policy support, including ingress and egress rules.
      • Uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.

      The OVN-Kubernetes network plugin provides the following advantages over OpenShift SDN:

      • Full support for IPv6 single-stack and IPv4/IPv6 dual-stack networking on supported platforms
      • Support for hybrid clusters with both Linux and Microsoft Windows workloads
      • Optional IPsec encryption of intra-cluster communications
      • Offload of network data processing from host CPU to compatible network cards and data processing units (DPUs)

      9.1.2. Supported network plugin feature matrix

      Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins:

      Table 9.1. Default CNI network plugin feature comparison
      Feature | OVN-Kubernetes | OpenShift SDN

      Egress IPs | Supported | Supported
      Egress firewall [1] | Supported | Supported
      Egress router | Supported [2] | Supported
      Hybrid networking | Supported | Not supported
      IPsec encryption for intra-cluster communication | Supported | Not supported
      IPv6 | Supported [3] [4] | Not supported
      Kubernetes network policy | Supported | Supported
      Kubernetes network policy logs | Supported | Not supported
      Hardware offloading | Supported | Not supported
      Multicast | Supported | Supported

      [1] Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress.
      [2] Egress router for OVN-Kubernetes supports only redirect mode.
      [3] IPv6 is supported only on bare metal, vSphere, IBM Power®, IBM Z®, and Red Hat OpenStack clusters.
      [4] IPv6 single stack is not supported on IBM Power®, IBM Z®, and Red Hat OpenStack clusters.

      9.1.3. OVN-Kubernetes IPv6 and dual-stack limitations

      The OVN-Kubernetes network plugin has the following limitations: For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output:

      I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
      I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
      F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

      The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway. For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway. If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state. If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml , the status field contains more than one message about the default gateway, as shown in the following output:

      I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
      F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface

      The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.

      9.1.4. Session affinity

      Session affinity is a feature that applies to Kubernetes Service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity .

      Stickiness timeout for session affinity

      The OVN-Kubernetes network plugin for OpenShift Dedicated calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet not the first. As a result, if the client is continuously contacting the service, then the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
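      As an illustration of the timeoutSeconds parameter described above, the following Service manifest is a minimal sketch; the service name, selector, and ports are placeholders rather than values taken from this document:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service
      spec:
        selector:
          app: example
        ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
        sessionAffinity: ClientIP        # send each client to the same backing pod
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 10800        # stickiness timeout; the timer resets on every packet received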

      Chapter 10. Configuring Routes

      10.1. Route configuration

      10.1.1. Creating an HTTP-based route

      A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example.

      Prerequisites

      • You installed the OpenShift CLI ( oc ).
      • You are logged in as an administrator.
      • You have a web application that exposes a port and a TCP endpoint listening for traffic on the port.

      Procedure

      1. Create a project called hello-openshift by running the following command:

        $ oc new-project hello-openshift
      2. Create a pod in the project by running the following command:

        $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json
      3. Create a service called hello-openshift by running the following command:

        $ oc expose pod/hello-openshift
      4. Create an unsecured route to the hello-openshift application by running the following command:

        $ oc expose svc hello-openshift

      Verification

      • To verify that the route resource was created, run the following command:

        $ oc get routes -o yaml <name of resource> 1
        1
        In this example, the route is named hello-openshift .

      Sample YAML definition of the created unsecured route

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: hello-openshift
      spec:
        host: hello-openshift-hello-openshift.<Ingress_Domain> 1
        port:
          targetPort: 8080 2
        to:
          kind: Service
          name: hello-openshift
      1
      <Ingress_Domain> is the default ingress domain name. The ingresses.config/cluster object is created during the installation and cannot be changed. If you want to specify a different domain, you can specify an alternative cluster domain using the appsDomain option.
      2
      targetPort is the target port on pods that is selected by the service that this route points to.

      To display your default ingress domain, run the following command:

      $ oc get ingresses.config/cluster -o jsonpath={.spec.domain}
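      Optionally, you can confirm that the new route serves traffic by sending a request to the route host. This check assumes the hello-openshift example above; substitute your own ingress domain:

      $ curl http://hello-openshift-hello-openshift.<Ingress_Domain>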

      10.1.2. Configuring route timeouts

      You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end.

      Prerequisites

      • You need a deployed Ingress Controller on a running cluster.

      Procedure

      1. Using the oc annotate command, add the timeout to the route:

        $ oc annotate route <route_name> \
            --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1
        1
        Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute :

        $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s

      10.1.3. HTTP Strict Transport Security

      HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling that HTTPS transport is required, without using HTTP redirects. HSTS is useful for speeding up interactions with websites.

      When HSTS policy is enforced, HSTS adds a Strict Transport Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect.

      Cluster administrators can configure HSTS to do the following:

      • Enable HSTS per-route
      • Disable HSTS per-route
      • Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains
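      For example, HSTS is often combined with HTTP-to-HTTPS redirection on an edge-terminated route. The following snippet is a minimal sketch; the service name frontend is a placeholder:

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: frontend
        annotations:
          haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
      spec:
        to:
          kind: Service
          name: frontend
        tls:
          termination: edge
          insecureEdgeTerminationPolicy: Redirect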

      Important

      HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes.

      10.1.3.1. Enabling HTTP Strict Transport Security per-route

      HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation.

      Prerequisites

      • You are logged in to the cluster with a user with administrator privileges for the project.
      • You installed the OpenShift CLI ( oc ).

      Procedure

      • To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command:

        $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1
        includeSubDomains;preload"
        1
        In this example, the maximum age is set to 31536000 seconds, which is one year. Note that in this example the equal sign ( = ) is in quotes; this is required to properly run the annotate command.

        Example route configured with an annotation

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          annotations:
            haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3
        spec:
          host: def.abc.com
          tls:
            termination: "reencrypt"
          wildcardPolicy: "Subdomain"
        Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header.
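        To confirm that the header is returned, you can send a request to the route host. The following check assumes the def.abc.com route from the example above and skips certificate verification for brevity:

        $ curl -skI https://def.abc.com | grep -i strict-transport-security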
      10.1.3.2. Disabling HTTP Strict Transport Security per-route

      To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 .

      Prerequisites

      • You are logged in to the cluster with a user with administrator privileges for the project.
      • You installed the OpenShift CLI ( oc ).

      Procedure

      • To disable HSTS, set the max-age value in the route annotation to 0 , by entering the following command:

        $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"
        Tip

        You can alternatively apply the following YAML directly to the route to disable HSTS:

        Example of disabling HSTS per-route

        metadata:
          annotations:
            haproxy.router.openshift.io/hsts_header: max-age=0
        To disable HSTS for every route in a namespace, enter the following command:

        $ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0"

      Verification

      1. To query the annotation for all routes, enter the following command:

        $ oc get route  --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}'

        Example output

        Name: routename HSTS: max-age=0

      10.1.4. Using cookies to keep route statefulness

      OpenShift Dedicated provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear.

      OpenShift Dedicated can use cookies to configure session persistence. The ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the next request in the session. The cookie tells the ingress controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod.

      Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides the source IP, the same number is set for all connections and traffic is sent to the same pod.
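      If you need to control the name of the cookie that the Ingress Controller sets for a route, you can use the cookie name annotation listed in the route annotations table later in this chapter. The route name my-route and the cookie name my_cookie below are placeholders:

      $ oc annotate route my-route router.openshift.io/cookie_name="my_cookie"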

      10.1.5. Path-based routes

      Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. The following table shows example routes and their accessibility:

      Table 10.1. Route availability
      Route | When Compared to | Accessible

      www.example.com/test | www.example.com/test | Yes
      www.example.com/test | www.example.com | No
      www.example.com/test and www.example.com | www.example.com/test | Yes
      www.example.com/test and www.example.com | www.example.com | Yes
      www.example.com | www.example.com/text | Yes (Matched by the host, not the route)
      www.example.com | www.example.com | Yes

      An unsecured route with a path

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: route-unsecured
      spec:
        host: www.example.com
        path: "/test" 1
        to:
          kind: Service
          name: service-name
      The path is the only added attribute for a path-based route. Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request.
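      You can also create a path-based route directly from an existing service with oc expose. The following command is a sketch that assumes the service-name service and the hostname from the example above:

      $ oc expose service service-name --hostname=www.example.com --path="/test" --name=route-unsecured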

      10.1.6. HTTP header configuration

      OpenShift Dedicated provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together. You can only set or delete headers within an IngressController or Route CR, you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions .
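      For example, to control how the router handles the X-Forwarded-For header for all routes served by an Ingress Controller, you can set the forwardedHeaderPolicy field instead of defining header actions. The following snippet is a sketch that assumes the default Ingress Controller and shows only the relevant fields:

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      metadata:
        name: default
        namespace: openshift-ingress-operator
      spec:
        httpHeaders:
          forwardedHeaderPolicy: Append    # other accepted values include Replace, IfNone, and Never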

      10.1.6.1. Order of precedence

      When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header. For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence. For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence. For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration:

      Example IngressController spec

      apiVersion: operator.openshift.io/v1
      kind: IngressController
      # ...
      spec:
        httpHeaders:
          actions:
            response:
            - name: X-Frame-Options
              action:
                type: Set
                value: DENY
      A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration:

      Example Route spec

      apiVersion: route.openshift.io/v1
      kind: Route
      # ...
      spec:
        httpHeaders:
          actions:
            response:
            - name: X-Frame-Options
              action:
                type: Set
                value: SAMEORIGIN
      When both the IngressController spec and Route spec are configuring the X-Frame-Options response header, then the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. For a request header, the Route spec value overrides the IngressController spec value. This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end:

      frontend public
        http-response set-header X-Frame-Options 'DENY'
      frontend fe_sni
        http-response set-header X-Frame-Options 'DENY'
      frontend fe_no_sni
        http-response set-header X-Frame-Options 'DENY'
      backend be_secure:openshift-monitoring:alertmanager-main
        http-response set-header X-Frame-Options 'SAMEORIGIN'

      Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations.

      10.1.6.2. Special case headers

      The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances:

      Table 10.2. Special case header configuration options
      Header name | Configurable using IngressController spec | Configurable using Route spec | Reason for disallowment | Configurable using another method

      proxy | No | No | The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration. | No
      host | No | Yes | When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. | No
      strict-transport-security | No | No | The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. | Yes: the haproxy.router.openshift.io/hsts_header route annotation
      cookie and set-cookie | No | No | The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy’s session affinity and restrict HAProxy’s ownership of a cookie. | Yes: the haproxy.router.openshift.io/disable_cookie route annotation and the haproxy.router.openshift.io/cookie_name route annotation

      10.1.7. Setting or deleting HTTP request and response headers in a route

      You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes. For example, you might want to enable a web application to serve content in alternate locations for specific routes if that content is written in multiple languages, even if there is a default global location specified by the Ingress Controller serving the routes. The following procedure creates a route that sets the Content-Location HTTP request header so that the URL associated with the application, https://app.example.com , directs to the location https://app.example.com/lang/en-us . Directing application traffic to this location means that anyone using that specific route is accessing web content written in American English.

      Prerequisites

      • You have installed the OpenShift CLI ( oc ).
      • You are logged into an OpenShift Dedicated cluster as a project administrator.
      • You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port.

      Procedure

      1. Create a route definition and save it in a file called app-example-route.yaml :

        YAML definition of the created route with HTTP header directives

        apiVersion: route.openshift.io/v1
        kind: Route
        # ...
        spec:
          host: app.example.com
          tls:
            termination: edge
          to:
            kind: Service
            name: app-example
          httpHeaders:
            actions: 1
              response: 2
              - name: Content-Location 3
                action:
                  type: Set 4
                  value: /lang/en-us 5
        1
        The list of actions you want to perform on the HTTP headers.
        2
        The type of header you want to change. In this case, a response header.
        3
        The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration .
        4
        The type of action being taken on the header. This field can have the value Set or Delete .
        5
        When setting HTTP headers, you must provide a value . The value can be a string from a list of available directives for that header, for example DENY , or it can be a dynamic value that will be interpreted using HAProxy’s dynamic value syntax. In this case, the value is set to the relative location of the content.

      2. Create a route to your existing web application using the newly created route definition:

        $ oc -n app-example create -f app-example-route.yaml

      For HTTP request headers, the actions specified in the route definitions are executed after any actions performed on HTTP request headers in the Ingress Controller. This means that any values set for those request headers in a route will take precedence over the ones set in the Ingress Controller. For more information on the processing order of HTTP headers, see HTTP header configuration .

      10.1.8. Route-specific annotations

      The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route.

      Important

      To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message.

      Table 10.3. Route annotations
      Variable | Description | Environment variable used as default

      haproxy.router.openshift.io/balance | Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . | ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM .
      haproxy.router.openshift.io/disable_cookies | Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. |
      router.openshift.io/cookie_name | Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. |
      haproxy.router.openshift.io/pod-concurrent-connections | Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. |
      haproxy.router.openshift.io/rate-limit-connections | Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. |
      haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp | Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. |
      haproxy.router.openshift.io/rate-limit-connections.rate-http | Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. |
      haproxy.router.openshift.io/rate-limit-connections.rate-tcp | Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. |
      haproxy.router.openshift.io/timeout | Sets a server-side timeout for the route. (TimeUnits) | ROUTER_DEFAULT_SERVER_TIMEOUT
      haproxy.router.openshift.io/timeout-tunnel | This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. | ROUTER_DEFAULT_TUNNEL_TIMEOUT
      ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after | You can set either an IngressController or the ingress config. This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. | ROUTER_HARD_STOP_AFTER
      router.openshift.io/haproxy.health.check.interval | Sets the interval for the back-end health checks. (TimeUnits) | ROUTER_BACKEND_CHECK_INTERVAL
      haproxy.router.openshift.io/ip_whitelist | Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [1] |
      haproxy.router.openshift.io/hsts_header | Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. |
      haproxy.router.openshift.io/rewrite-target | Sets the rewrite path of the request on the backend. |
      router.openshift.io/cookie-same-site | Sets a value to restrict cookies. The values are: Lax : the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified. Strict : the browser sends cookies only for same-site requests. None : the browser sends cookies for both cross-site and same-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . |
      haproxy.router.openshift.io/set-forwarded-headers | Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. | ROUTER_SET_FORWARDED_HEADERS

      [1] If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/whitelists folder. To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist.

      Environment variables cannot be edited.

      Router timeout variables

      TimeUnits are represented by a number followed by the unit: us *(microseconds), ms (milliseconds, default), s (seconds), m (minutes), h *(hours), d (days). The regular expression is: [1-9][0-9]*( us \| ms \| s \| m \| h \| d ).

      Variable | Default | Description

      ROUTER_BACKEND_CHECK_INTERVAL | 5000ms | Length of time between subsequent liveness checks on back ends.
      ROUTER_CLIENT_FIN_TIMEOUT | | Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router.
      ROUTER_DEFAULT_CLIENT_TIMEOUT | | Length of time that a client has to acknowledge or send data.
      ROUTER_DEFAULT_CONNECT_TIMEOUT | | The maximum connection time.
      ROUTER_DEFAULT_SERVER_FIN_TIMEOUT | | Controls the TCP FIN timeout from the router to the pod backing the route.
      ROUTER_DEFAULT_SERVER_TIMEOUT | | Length of time that a server has to acknowledge or send data.
      ROUTER_DEFAULT_TUNNEL_TIMEOUT | | Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads.
      ROUTER_SLOWLORIS_HTTP_KEEPALIVE | 300s | Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s .
      ROUTER_SLOWLORIS_TIMEOUT | | Length of time the transmission of an HTTP request can take.
      RELOAD_INTERVAL | | Allows the minimum frequency for the router to reload and accept new changes.
      ROUTER_METRICS_HAPROXY_TIMEOUT | | Timeout for the gathering of HAProxy metrics.

      A route setting custom timeout

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/timeout: 5500ms 1
      1
      Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default.

      Note

      Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.

      A route that allows only one specific IP address

      metadata:
        annotations:
          haproxy.router.openshift.io/ip_whitelist: 192.168.1.10

      A route that allows several IP addresses

      metadata:
        annotations:
          haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12

      A route that allows an IP address CIDR network

      metadata:
        annotations:
          haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24

      A route that allows both an IP address and IP address CIDR networks

      metadata:
        annotations:
          haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8
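
      The other annotations in Table 10.3 follow the same pattern. For example, the following sketch restricts cookies to same-site requests on a re-encrypt or edge route; it is illustrative rather than an example taken from this document:

      metadata:
        annotations:
          router.openshift.io/cookie-same-site: Strict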

      A route specifying a rewrite target

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        annotations:
          haproxy.router.openshift.io/rewrite-target: / 1
      1
      Sets / as rewrite path of the request on the backend.

      Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation.

      The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target.

      Table 10.4. rewrite-target examples
      Route.spec.path | Request path | Rewrite target | Forwarded request path

      /foo/ /foo/bar /foo/bar/ /bar/ /foo/ /bar/ /foo/bar /baz/bar /foo/bar/ /baz/bar/ /foo/ N/A (request path does not match route path) /foo/ /foo/ /foo/ /foo/bar

      Certain special characters in haproxy.router.openshift.io/rewrite-target require special handling because they must be escaped properly. Refer to the following table to understand how these characters are handled.

      Table 10.5. Special character handling
      For character | Use characters | Notes

      # | | Avoid # because it terminates the rewrite expression
      % | % or %% | Avoid odd sequences such as %%%
      ‘ | | Avoid ‘ because it is ignored

      All other valid URL characters can be used without escaping.

      10.1.9. Creating a route using the default certificate through an Ingress object

      If you create an Ingress object without specifying any TLS configuration, OpenShift Dedicated generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows.

      Prerequisites

      • You have a service that you want to expose.
      • You have access to the OpenShift CLI ( oc ).

      Procedure

      1. Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml:

        YAML definition of an Ingress object

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: frontend
        spec:
          rules:
          ...
          tls:
          - {} 1
        1
        Use this exact syntax to specify TLS without specifying a custom certificate.
      2. Create the Ingress object by running the following command:

        $ oc create -f example-ingress.yaml

      Verification

      • Verify that OpenShift Dedicated has created the expected route for the Ingress object by running the following command:

        $ oc get routes -o yaml

        Example output

        apiVersion: v1
        items:
        - apiVersion: route.openshift.io/v1
          kind: Route
          metadata:
            name: frontend-j9sdd 1
          spec:
            tls: 2
              insecureEdgeTerminationPolicy: Redirect
              termination: edge 3
        1
        The name of the route includes the name of the Ingress object followed by a random suffix.
        2
        In order to use the default certificate, the route should not specify spec.certificate .
        3
        The route should specify the edge termination policy.

      10.1.10. Creating a route using the destination CA certificate in the Ingress annotation

      The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate.

      Prerequisites

      • You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
      • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
      • You must have a separate destination CA certificate in a PEM-encoded file (a secret creation sketch follows this list).
      • You must have a service that you want to expose.
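      If you still need to create the destination CA secret referenced in the procedure below, the following command is a sketch only. It assumes the destination CA is stored in a local file named destca.crt, uses the secret name secret-ca-cert from the example annotation, and assumes the router reads the CA from the tls.crt key of a generic secret; confirm the expected key name for your cluster version:

        $ oc create secret generic secret-ca-cert --from-file=tls.crt=destca.crt -n <namespace>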

      Procedure

      1. Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations:

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: frontend
          annotations:
            route.openshift.io/termination: "reencrypt"
            route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1
        ...
        1
        The annotation references a Kubernetes secret. The destination CA certificate from the secret referenced in this annotation is inserted into the generated route, as shown in the following example output.

        Example output

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: frontend
          annotations:
            route.openshift.io/termination: reencrypt
            route.openshift.io/destination-ca-certificate-secret: secret-ca-cert
        spec:
          tls:
            insecureEdgeTerminationPolicy: Redirect
            termination: reencrypt
            destinationCACertificate: |
              -----BEGIN CERTIFICATE-----
              [...]
              -----END CERTIFICATE-----

      10.2. Secured routes

      Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates.

      Important

      If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

      10.2.1. Creating a re-encrypt route with a custom certificate

      You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command.

      Prerequisites

      • You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
      • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
      • You must have a separate destination CA certificate in a PEM-encoded file.
      • You must have a service that you want to expose.
      • Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

        $ openssl rsa -in password_protected_tls.key -out tls.key

      Procedure

      This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service’s certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , destca.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com .

      • Create a secure Route resource using reencrypt TLS termination and a custom certificate:

      $ oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com

      If you examine the resulting Route resource, it should look similar to the following:

      YAML Definition of the Secure Route

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: frontend
      spec:
        host: www.example.com
        to:
          kind: Service
          name: frontend
        tls:
          termination: reencrypt
          key: |-
            -----BEGIN PRIVATE KEY-----
            [...]
            -----END PRIVATE KEY-----
          certificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          caCertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          destinationCACertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
      See oc create route reencrypt --help for more options.

      10.2.2. Creating an edge route with a custom certificate

      You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route.

      Prerequisites

      • You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host.
      • You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain.
      • You must have a service that you want to expose.
      • Password protected key files are not supported. To remove a passphrase from a key file, use the following command:

        $ openssl rsa -in password_protected_tls.key -out tls.key

      Procedure

      This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com .

      • Create a secure Route resource using edge TLS termination and a custom certificate:

      $ oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com

      If you examine the resulting Route resource, it should look similar to the following:

      YAML Definition of the Secure Route

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: frontend
      spec:
        host: www.example.com
        to:
          kind: Service
          name: frontend
        tls:
          termination: edge
          key: |-
            -----BEGIN PRIVATE KEY-----
            [...]
            -----END PRIVATE KEY-----
          certificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
          caCertificate: |-
            -----BEGIN CERTIFICATE-----
            [...]
            -----END CERTIFICATE-----
      See oc create route edge --help for more options.

      10.2.3. Creating a passthrough route

      You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route.

      Prerequisites

      • You must have a service that you want to expose.

      Procedure

      • Create a Route resource:

        $ oc create route passthrough route-passthrough-secured --service=frontend --port=8080

        If you examine the resulting Route resource, it should look similar to the following:

        A Secured Route Using Passthrough Termination

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: route-passthrough-secured 1
        spec:
          host: www.example.com
          port:
            targetPort: 8080
          tls:
            termination: passthrough 2
            insecureEdgeTerminationPolicy: None 3
          to:
            kind: Service
            name: frontend
        1
        The name of the object, which is limited to 63 characters.
        2
        The termination field is set to passthrough . This is the only required tls field.
        3
        Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled.

        The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication.

      10.2.4. Creating a route with externally managed certificate

      Important

      Securing routes with external certificates in TLS secrets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .

      You can configure OpenShift Dedicated routes with third-party certificate management solutions by using the .spec.tls.externalCertificate field of the route API. You can reference externally managed TLS certificates via secrets, eliminating the need for manual certificate management. Using an externally managed certificate reduces errors, ensures a smoother rollout of certificate updates, and enables the OpenShift router to serve renewed certificates promptly. This feature applies to both edge routes and re-encrypt routes.

      Prerequisites

      • You must enable the RouteExternalCertificate feature gate.
      • You must have the create and update permissions on routes/custom-host .
      • You must have a secret containing a valid certificate/key pair in PEM-encoded format of type kubernetes.io/tls , which includes both tls.key and tls.crt keys (a creation example follows this list).
      • You must place the referenced secret in the same namespace as the route you want to secure.
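      If you do not already have a secret of type kubernetes.io/tls, you can create one from your PEM-encoded certificate and key files. The secret name, file names, and namespace below are placeholders:

        $ oc create secret tls <secret-name> --cert=tls.crt --key=tls.key -n <current-namespace>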

      Procedure

      1. Create a role in the same namespace as the secret to allow the router service account read access by running the following command:

        $ oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \ 1
        --namespace=<current-namespace> 2
        1
        Specify the actual name of your secret.
        2
        Specify the namespace where both your secret and route reside.
      2. Create a rolebinding in the same namespace as the secret and bind the router service account to the newly created role by running the following command:

        $ oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1
        1
        Specify the namespace where both your secret and route reside.
      3. Create a YAML file that defines the route and specifies the secret containing your certificate, using the following example.

        YAML definition of the secure route

        apiVersion: route.openshift.io/v1
        kind: Route
        metadata:
          name: myedge
          namespace: test
        spec:
          host: myedge-test.apps.example.com
            externalCertificate:
              name: <secret-name> 1
            termination: edge
            [...]
        [...]
        Specify the actual name of your secret.
      4. Create a route resource by running the following command:

        $ oc apply -f <route.yaml> 1
        1
        Specify the generated YAML filename.

        If the secret exists and has a certificate/key pair, the router will serve the generated certificate if all prerequisites are met. If .spec.tls.externalCertificate is not provided, the router will use default generated certificates. You cannot provide the .spec.tls.certificate field or the .spec.tls.key field when using the .spec.tls.externalCertificate field.

        Additional resources

        • To troubleshoot routes with externally managed certificates, check the OpenShift Dedicated router pod logs for errors. For more information, see Investigating pod issues .

      Legal Notice

      The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.