  • Customizing master and node configuration after installation
  • Installation dependencies
  • Configuring masters and nodes
  • Making configuration changes using Ansible
  • Using the htpasswd command
  • Making manual configuration changes
  • Master Configuration Files
  • Admission Control Configuration
  • Asset Configuration
  • Authentication and Authorization Configuration
  • Controller Configuration
  • etcd Configuration
  • Grant Configuration
  • Image Configuration
  • Image Policy Configuration
  • Kubernetes Master Configuration
  • Network Configuration
  • OAuth Authentication Configuration
  • Project Configuration
  • Scheduler Configuration
  • Security Allocator Configuration
  • Service Account Configuration
  • Serving Information Configuration
  • Volume Configuration
  • Basic Audit
  • Advanced Audit
  • Specifying TLS ciphers for etcd
  • Node Configuration Files
  • Pod and Node Configuration
  • Docker Configuration
  • Local Storage Configuration
  • Setting Node Queries per Second (QPS) Limits and Burst Values
  • Parallel Image Pulls with Docker 1.9+
  • Passwords and Other Sensitive Data
  • Creating New Configuration Files
  • Launching Servers Using Configuration Files
  • Viewing Master and Node Logs
  • Configuring Logging Levels
  • Restarting master and node services
  • The openshift start command (for master servers) and hyperkube command (for node servers) take a limited set of arguments that are sufficient for launching servers in a development or experimental environment. However, these arguments are insufficient to describe and control the full set of configuration and security options that are necessary in a production environment.

    You must provide these options in the master configuration file, at /etc/origin/master/master-config.yaml, and the node configuration maps. These files define options including overriding the default plug-ins, connecting to etcd, automatically creating service accounts, building image names, customizing project requests, configuring volume plug-ins, and much more.

    This topic covers the available options for customizing your OpenShift Container Platform master and node hosts, and shows you how to make changes to the configuration after installation.

    These files are fully specified with no default values. Therefore, an empty value indicates that you want to start up with an empty value for that parameter. This makes it easy to reason about exactly what your configuration is, but it also makes it difficult to remember all of the options to specify. To make this easier, the configuration files can be created with the --write-config option and then used with the --config option.

    Production environments should be installed using the standard cluster installation process. In production environments, it is a good idea to use multiple masters for the purposes of high availability (HA). A cluster architecture of three masters is recommended, and HAProxy is the recommended solution for this.

    If etcd is installed on the master hosts, you must configure your cluster to use at least three masters, because with only two masters etcd cannot decide which one is authoritative. The only way to successfully run only two masters is if you install etcd on hosts other than the masters.

    The method you use to configure your master and node configuration files must match the method that was used to install your OpenShift Container Platform cluster. If you followed the standard cluster installation process, then make your configuration changes in the Ansible inventory file.

    Only a portion of the available host configuration options are exposed to Ansible. After an OpenShift Container Platform installation, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your OpenShift Container Platform cluster.

    While OpenShift Container Platform supports using Ansible for cluster installation with an Ansible playbook and inventory file, you can also use other management tools, such as Puppet, Chef, or Salt.

    Use Case: Configuring the cluster to use HTPasswd authentication

    This use case assumes you have already set up SSH keys to all the nodes referenced in the playbook.

    The htpasswd utility is in the httpd-tools package:

    # yum install httpd-tools

    The following Ansible inventory file excerpt enables the HTPasswd identity provider:

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    #openshift_master_htpasswd_users={'<name>': '<hashed-password>', '<name>': '<hashed-password>'}
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd

    For HTPasswd authentication, the openshift_master_identity_providers variable enables the authentication type. You can configure three different authentication options that use HTPasswd:

    Specify only openshift_master_identity_providers if /etc/origin/master/htpasswd is already configured and present on the host.

    Specify both openshift_master_identity_providers and openshift_master_htpasswd_file to copy a local htpasswd file to the host.

    Specify both openshift_master_identity_providers and openshift_master_htpasswd_users to generate a new htpasswd file on the host.

    Because OpenShift Container Platform requires a hashed password to configure HTPasswd authentication, you can use the htpasswd command, as shown in the following section, to generate the hashed password(s) for your user(s) or to create the flat file with the users and associated hashed passwords.

    The following example changes the authentication method from the default deny all setting to htpasswd and uses the specified file to generate user IDs and passwords for the jsmith and bloblaw users.

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'bloblaw': '7IRJ$2ODmeLoxf4I6sUEKfiA$2aDJqLJe'}
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd

    You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which master and node configuration options are exposed to Ansible and customize your own Ansible inventory.

    Using the htpasswd command

    To configure the OpenShift Container Platform cluster to use HTPasswd authentication, you need at least one user with a hashed password to include in the inventory file.

    You can either generate the user name and hashed password with the htpasswd command or create a flat file that contains the user names and their hashed passwords.
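
    For example, a minimal sketch of both approaches; the user name and password are placeholders, and the flags are standard httpd-tools options:

    # Print a user name and hashed password to standard output; add the output
    # to openshift_master_htpasswd_users in the inventory file.
    $ htpasswd -nb jsmith MyPassword

    # Or create (-c) a flat htpasswd file that the inventory file can reference
    # through openshift_master_htpasswd_file.
    $ htpasswd -c -b /etc/origin/master/htpasswd jsmith MyPassword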

    Making manual configuration changes

    Open the configuration file that you want to modify, which in this case is the /etc/origin/master/master-config.yaml file:

    Add the following new variables to the identityProviders stanza of the file:

    oauthConfig:
      identityProviders:
      - name: my_htpasswd_provider
        challenge: true
        login: true
        mappingMethod: claim
        provider:
          apiVersion: v1
          kind: HTPasswdPasswordIdentityProvider
          file: /etc/origin/master/htpasswd

    You have now manually modified the master and node configuration files, but this is just a simple use case. From here you can see all the master and node configuration options, and further customize your own cluster by making further modifications.

    Contains the admission control plug-in configuration. OpenShift Container Platform has a configurable list of admission controller plug-ins that are triggered whenever API objects are created or modified. This option allows you to override the default list of plug-ins; for example, disabling some plug-ins, adding others, changing the ordering, and specifying configuration. Both the list of plug-ins and their configuration can be controlled from Ansible.

    Key-value pairs that will be passed directly to the Kube API server that match the API server's command line arguments. These are not migrated, but if you reference a value that does not exist, the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. Use APIServerArguments with the event-ttl value to store events in etcd. The default is 2h, but it can be set to less to prevent memory growth:

    apiServerArguments:
      event-ttl:
      - "15m"

    Key-value pairs that will be passed directly to the Kube controller manager that match the controller manager’s command line arguments. These are not migrated, but if you reference a value that does not exist, the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.
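
    As an illustration only, a sketch of such an override; node-monitor-grace-period is an upstream kube-controller-manager flag, and the value shown is arbitrary:

    controllerArguments:
      node-monitor-grace-period:
      - "50s"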

    Used to enable or disable various admission plug-ins. When this type is present as the configuration object under pluginConfig and if the admission plug-in supports it, this will cause an off-by-default admission plug-in to be enabled.

    Key-value pairs that will be passed directly to the Kube scheduler that match the scheduler’s command line arguments. These are not migrated, but if you reference a value that does not exist, the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.

    logoutURL: ""
    masterPublicURL: https://master.ose32.example.com:8443
    publicURL: https://master.ose32.example.com:8443/console/
    servingInfo:
      bindAddress: 0.0.0.0:8443
      bindNetwork: tcp4
      certFile: master.server.crt
      clientCA: ""
      keyFile: master.server.key
      maxRequestsInFlight: 0
      requestTimeoutSeconds: 0

    To access the API server from a web application using a different host name, you must whitelist that host name by specifying corsAllowedOrigins in the configuration field or by specifying the --cors-allowed-origins option on openshift start. No pinning or escaping is done to the value. See Console for example usage.
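
    A hedged sketch of the field; the console host name is a placeholder, and the exact entry syntax (plain host name or anchored regular expression) should be matched to the existing entries in your master-config.yaml:

    corsAllowedOrigins:
    - 127.0.0.1
    - localhost
    - console.example.com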

    A list of features that should not be started. You will likely want to set this to null. It is very unlikely that anyone will want to manually disable features, and doing so is not encouraged.

    When set to true , tells the asset server to reload extension scripts and stylesheets for every request rather than only at startup. It lets you develop extensions without having to restart the server for every change.

    Indicates how long an authorization result should be cached. It takes a valid time duration string (e.g. "5m"). If empty, you get the default timeout. If zero (e.g. "0m"), caching is disabled.

    List of the controllers that should be started. If set to none, no controllers will start automatically. The default value is *, which will start all controllers. When using *, you may exclude controllers by prepending a - in front of their name. No other values are recognized at this time.
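
    For illustration, a sketch of the field with the default value and with one exclusion; the excluded controller name is hypothetical:

    # Start all controllers (the default).
    controllers: "*"

    # Start all controllers except one; the name after the - is hypothetical.
    # controllers: "*,-hypothetical-controller"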

    Enables controller election, instructing the master to attempt to acquire a lease before controllers start and renewing it within a number of seconds defined by this value. Setting this value non-negative forces pauseControllers=true. This value defaults off (0, or omitted) and controller election can be disabled with -1.

    etcdConfig:
      address: master.ose32.example.com:4001
      peerAddress: master.ose32.example.com:7001
      peerServingInfo:
        bindAddress: 0.0.0.0:7001
        certFile: etcd.server.crt
        clientCA: ca.crt
        keyFile: etcd.server.key
      servingInfo:
        bindAddress: 0.0.0.0:4001
        certFile: etcd.server.crt
        clientCA: ca.crt
        keyFile: etcd.server.key
      storageDirectory: /var/lib/origin/openshift.local.etcd

    The path within etcd that Kubernetes resources will be rooted under. If this value is changed, existing objects in etcd will no longer be located. The default value is kubernetes.io.

    The API version that Kubernetes resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.

    The path within etcd that OpenShift Container Platform resources will be rooted under. If this value is changed, existing objects in etcd will no longer be located. The default value is openshift.io.

    The API version that OpenShift Container Platform resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.
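
    A sketch of the corresponding etcdStorageConfig stanza with its usual default values (shown as an assumption; verify against your own master-config.yaml):

    etcdStorageConfig:
      kubernetesStoragePrefix: kubernetes.io
      kubernetesStorageVersion: v1
      openShiftStoragePrefix: openshift.io
      openShiftStorageVersion: v1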

    Determines the default strategy to use when an OAuth client requests a grant. This method is used only if the specific OAuth client does not provide a strategy of its own. Valid grant handling methods are:

    auto: always approves grant requests, useful for trusted clients

    prompt: prompts the end user for approval of grant requests, useful for third-party clients

    deny: always denies grant requests, useful for black-listed clients

    Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number defaults to 5 to prevent users from importing large numbers of images accidentally. Set -1 for no limit.

    The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is 15 minutes.

    Limits the docker registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions.

    Specifies a file path to a PEM-encoded file listing additional certificate authorities that should be trusted during imagestream import. This file needs to be accessible to the API server process. Depending on how your cluster is installed, this may require mounting the file into the API server pod.

    Sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, users can still use OPENSHIFT_DEFAULT_REGISTRY environment variable but this setting overrides the environment variable. When this is set, the internal registry must have its hostname set as well. See setting the registry hostname for more details.

    ExternalRegistryHostname sets the hostname for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The value is used in publicDockerImageRepository field in ImageStreams. The value must be in hostname[:port] format.
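
    A minimal sketch of these settings in the imagePolicyConfig stanza; both host names are placeholders:

    imagePolicyConfig:
      internalRegistryHostname: docker-registry.default.svc:5000
      externalRegistryHostname: registry.apps.example.com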

    The number of expected masters that should be running. This value defaults to 1 and may be set to a positive integer, or if set to -1, indicates this is part of a cluster.

    Network Configuration

    Choose the CIDRs in the following parameters carefully, because the IPv4 address space is shared by all users of the nodes. OpenShift Container Platform reserves some CIDRs from the IPv4 address space for its own use, and reserves others for addresses that are shared between the external user and the cluster.

    Table 10. Network Configuration Parameters

    Controls what values are acceptable for the service external IP field. If empty, no externalIP may be set. It may contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, IPs in that CIDR will be rejected. Rejections will be applied first, then the IP is checked against one of the allowed CIDRs. You must ensure this range does not overlap with your nodes, pods, or service CIDRs for security reasons.

    Controls the range to assign ingress IPs from for services of type LoadBalancer on bare metal. It may contain a single CIDR that it will be allocated from. By default 172.46.0.0/16 is configured. For security reasons, you should ensure that this range does not overlap with the CIDRs reserved for external IPs, nodes, pods, or services.

    externalIPNetworkCIDRs (string array): Controls which values are acceptable for the service external IP field. If empty, no external IP may be set. It can contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, then IPs in that CIDR are rejected. Rejections are applied first, then the IP is checked against one of the allowed CIDRs. For security purposes, you should ensure this range does not overlap with your nodes, pods, or service CIDRs.
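
    For illustration, a sketch of how these fields can appear in the networkConfig stanza; all CIDR values are placeholders that you must adapt to your environment:

    networkConfig:
      externalIPNetworkCIDRs:
      - 192.168.120.0/24      # allow external IPs from this range
      - "!192.168.120.16/28"  # but reject this sub-range
      ingressIPNetworkCIDR: 172.46.0.0/16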

    The following excerpt shows sample networkConfig values:

    hostSubnetLength: 8
    networkPluginName: example/openshift-ovs-subnet
    # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
    serviceNetworkCIDR: 179.29.0.0/16

    The following excerpt shows OAuth-related settings from the oauthConfig stanza (identity provider file, master URLs, and session and token configuration):

    file: /etc/origin/openshift-passwd
    masterCA: ca.crt
    masterPublicURL: https://master.ose32.example.com:8443
    masterURL: https://master.ose32.example.com:8443
    sessionConfig:
      sessionMaxAgeSeconds: 3600
      sessionName: ssn
      sessionSecretsFile: /etc/origin/master/session-secrets.yaml
    tokenConfig:
      accessTokenMaxAgeSeconds: 86400
      authorizeTokenMaxAgeSeconds: 500

    ProjectRequestMessage (string): The string presented to a user if they are unable to request a project via the projectrequest API endpoint.

    ProjectRequestTemplate (string): The template to use for creating projects in response to projectrequest. It is in the format <namespace>/<template>. It is optional, and if it is not specified, a default template is used.

    SecurityAllocator : Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled:

    mcsAllocatorRange (string): Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0:/2 and will allocate from c0 to c1023, which means a total of about 524k labels are available (1024 choose 2). If this value is changed after startup, new projects may receive labels that are already allocated to other projects. The prefix may be any valid SELinux set of terms (including user, role, and type). However, leaving the prefix at its default allows the server to set them automatically. For example, s0:/2 would allocate labels from s0:c0,c0 to s0:c511,c511 whereas s0:/2,512 would allocate labels from s0:c0,c0,c0 to s0:c511,c511,511.

    mcsLabelsPerProject (integer): Defines the number of labels to reserve per project. The default is 5 to match the default UID and MCS ranges.

    uidAllocatorRange (string): Defines the total set of Unix user IDs (UIDs) automatically allocated to projects, and the size of the block that each namespace gets. For example, 1000-1999/10 would allocate ten UIDs per namespace, and would be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks, which is the expected size of ranges for container images when user namespaces are started.

    The template to use for creating projects in response to a projectrequest . It is in the format namespace/template and it is optional. If it is not specified, a default template is used.

    Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0:/2 and will allocate from c0 to c1023, which means a total of about 524k labels are available (1024 choose 2 ≈ 524k). If this value is changed after startup, new projects may receive labels that are already allocated to other projects. Prefix may be any valid SELinux set of terms (including user, role, and type), although leaving them as the default will allow the server to set them automatically.

    Defines the total set of Unix user IDs (UIDs) that will be allocated to projects automatically, and the size of the block that each namespace gets. For example, 1000-1999/10 will allocate ten UIDs per namespace, and will be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks (which is the expected size of the ranges container images will use once user namespaces are started).
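
    A sketch of the corresponding projectConfig stanza using the defaults described above (the exact nesting is an assumption to verify against your master-config.yaml):

    projectConfig:
      securityAllocator:
        mcsAllocatorRange: "s0:/2"
        mcsLabelsPerProject: 5
        uidAllocatorRange: "1000000000-1999999999/10000"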

    A list of service account names that will be auto-created in every namespace. If no names are specified, the ServiceAccountsController will not be started.

    The CA for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so they can verify connections to the master.

    A file containing a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, the service account TokensController will not be started.

    A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, the public portion of the key is used. The list of public keys is used to verify presented service account tokens. Each key is tried in order until the list is exhausted or verification succeeds. If no keys are specified, no service account authentication will be available.

    LimitSecretReferences (boolean): Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them.

    ManagedNames (string): A list of service account names that will be auto-created in every namespace. If no names are specified, then the ServiceAccountsController will not be started.

    MasterCA (string): The certificate authority for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so that they can verify connections to the master.

    PrivateKeyFile (string): Contains a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, then the service account TokensController will not be started.

    PublicKeyFiles (string): A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, then OpenShift Container Platform uses the public portion of the key. The list of public keys is used to verify service account tokens; each key is tried in order until either the list is exhausted or verification succeeds. If no keys are specified, then service account authentication will not be available.
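
    A hedged sketch of a typical serviceAccountConfig stanza; the managed names and file names shown are common installation defaults and are assumptions to verify against your own master-config.yaml:

    serviceAccountConfig:
      limitSecretReferences: false
      managedNames:
      - default
      - builder
      - deployer
      masterCA: ca-bundle.crt
      privateKeyFile: serviceaccounts.private.key
      publicKeyFiles:
      - serviceaccounts.public.key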

    Allows the DNS server on the master to answer queries recursively. Note that open resolvers can be used for DNS amplification attacks and the master DNS should not be made accessible to public networks.

    Provides overrides to the client connection used to connect to the master. This parameter is not supported. To set QPS and burst values, see Setting Node QPS and Burst Values .

    Enables local storage quotas on each node for each FSGroup. At present this is only implemented for emptyDir volumes, and if the underlying volumeDirectory is on an XFS filesystem.

    Basic Audit

    Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.

    Audit works at the API server level, logging all requests coming to the server. Each audited request produces two log entries, one for the request and one for the response:

    AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" ip="127.0.0.1" method="GET" user="admin" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" response="200"

    Enable Basic Auditing

    The following procedure enables basic auditing post installation.

    Edit the /etc/origin/master/master-config.yaml file on all master nodes as shown in the following example:

    auditConfig:
      auditFilePath: "/var/log/origin/audit-ocp.log"
      enabled: true
      maximumFileRetentionDays: 14
      maximumFileSizeMegabytes: 500
      maximumRetainedFiles: 15

    The advanced audit feature provides several improvements over the basic audit functionality , including fine-grained events filtering and multiple output back ends.

    To enable the advanced audit feature, you create an audit policy file and specify the following values in the openshift_master_audit_config and openshift_master_audit_policyfile parameters:

    openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/log/origin/audit-ocp.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5, "policyFile": "/etc/origin/master/adv-audit.yaml", "logFormat":"json"}
    openshift_master_audit_policyfile="/<path>/adv-audit.yaml"

    Specifies the strategy for sending audit events. Allowed values are block (blocks processing another event until the previous has fully processed) and batch (buffers events and delivers in batches).

    # Do not log watch requests by the "system:kube-proxy" on endpoints or services
    - level: None (1)
      users: ["system:kube-proxy"] (2)
      verbs: ["watch"] (3)
      resources: (4)
      - group: ""
        resources: ["endpoints", "services"]

    # Do not log authenticated requests to certain non-resource URL paths.
    - level: None
      userGroups: ["system:authenticated"] (5)
      nonResourceURLs: (6)
      - "/api*" # Wildcard matching.
      - "/version"

    # Log the request body of configmap changes in kube-system.
    - level: Request
      resources:
      - group: "" # core API group
        resources: ["configmaps"]
      # This rule only applies to resources in the "kube-system" namespace.
      # The empty string "" can be used to select non-namespaced resources.
      namespaces: ["kube-system"] (7)

    # Log configmap and secret changes in all other namespaces at the metadata level.
    - level: Metadata
      resources:
      - group: "" # core API group
        resources: ["secrets", "configmaps"]

    # Log all other resources in core and extensions at the request level.
    - level: Request
      resources:
      - group: "" # core API group
      - group: "extensions" # Version of group should NOT be included.

    # A catch-all rule to log all other requests at the Metadata level.
    - level: Metadata (1)

    # Log login failures from the web console or CLI. Review the logs and refine your policies.
    - level: Metadata
      nonResourceURLs:
      - /login* (8)
      - /oauth* (9)

    Metadata - Log request metadata (requesting user, time stamp, resource, verb, etc.), but not request or response body. This is the same level as the one used in basic audit.

    Request - Log event metadata and request body, but not response body.

    RequestResponse - Log event metadata, request, and response bodies.

    (2) A list of users this rule applies to. An empty list implies every user.

    (3) A list of verbs this rule applies to. An empty list implies every verb. This is the Kubernetes verb associated with API requests (including get, list, watch, create, update, patch, delete, deletecollection, and proxy).

    (4) A list of resources the rule applies to. An empty list implies every resource. Each resource is specified as the group it is assigned to (for example, an empty string for the Kubernetes core API, batch, build.openshift.io, and so on), and a resource list from that group.

    (5) A list of groups the rule applies to. An empty list implies every group.

    (6) A list of non-resource URLs the rule applies to.

    (7) A list of namespaces the rule applies to. An empty list implies every namespace.

    (8) Endpoint used by the web console.

    (9) Endpoint used by the CLI.

    On each master host, specify the ciphers to enable in the /etc/origin/master/master-config.yaml file:

    servingInfo:
      minTLSVersion: VersionTLS12
      cipherSuites:
      - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      - TLS_RSA_WITH_AES_256_CBC_SHA
      - TLS_RSA_WITH_AES_128_CBC_SHA
    

    Confirm that the cipher is applied. For example, for TLSv1.2 cipher ECDHE-RSA-AES128-GCM-SHA256, run the following command:

    # openssl s_client -connect etcd1.example.com:2379 (1)
    CONNECTED(00000003)
    depth=0 CN = etcd1.example.com
    verify error:num=20:unable to get local issuer certificate
    verify return:1
    depth=0 CN = etcd1.example.com
    verify error:num=21:unable to verify the first certificate
    verify return:1
    139905367488400:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
    139905367488400:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
    Certificate chain
     0 s:/CN=etcd1.example.com
       i:/CN=etcd-signer@1529635004
    Server certificate
    -----BEGIN CERTIFICATE-----
    MIIEkjCCAnqgAwIBAgIBATANBgkqhkiG9w0BAQsFADAhMR8wHQYDVQQDDBZldGNk
    ........
    eif87qttt0Sl1vS8DG1KQO1oOBlNkg==
    -----END CERTIFICATE-----
    subject=/CN=etcd1.example.com
    issuer=/CN=etcd-signer@1529635004
    Acceptable client certificate CA names
    /CN=etcd-signer@1529635004
    Client Certificate Types: RSA sign, ECDSA sign
    Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Peer signing digest: SHA384
    Server Temp Key: ECDH, P-256, 256 bits
    SSL handshake has read 1666 bytes and written 138 bytes
    New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
    No ALPN negotiated
    SSL-Session:
        Protocol  : TLSv1.2
        Cipher    : ECDHE-RSA-AES128-GCM-SHA256
        Session-ID:
        Session-ID-ctx:
        Master-Key: 1EFA00A91EE5FC5EDDCFC67C8ECD060D44FD3EB23D834EDED929E4B74536F273C0F9299935E5504B562CD56E76ED208D
        Key-Arg   : None
        Krb5 Principal: None
        PSK identity: None
        PSK identity hint: None
        Start Time: 1529651744
        Timeout   : 300 (sec)
        Verify return code: 21 (unable to verify the first certificate)

    To make configuration changes to an existing node, edit the appropriate configuration map. A sync pod on each node watches for changes in the configuration maps. During installation, the sync pods are created by using a sync DaemonSet, and a /etc/origin/node/node-config.yaml file, where the node configuration parameters reside, is added to each node. When a sync pod detects a configuration map change, it updates the node-config.yaml on all nodes in that node group and restarts the atomic-openshift-node.service on the appropriate nodes.

    $ oc get cm -n openshift-node
    Example Output
    NAME                       DATA      AGE
    node-config-all-in-one     1         1d
    node-config-compute        1         1d
    node-config-infra          1         1d
    node-config-master         1         1d
    node-config-master-infra   1         1d
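
    To change settings for a node group, edit that group's configuration map; for example, a sketch for the compute group (substitute the map that matches your node group):

    $ oc edit configmap node-config-compute -n openshift-node
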
    Sample configuration map for the node-config-compute group
    apiVersion: v1
    authConfig:      (1)
      authenticationCacheSize: 1000
      authenticationCacheTTL: 5m
      authorizationCacheSize: 1000
      authorizationCacheTTL: 5m
    dnsBindAddress: 127.0.0.1:53
    dnsDomain: cluster.local
    dnsIP: 0.0.0.0               (2)
    dnsNameservers: null
    dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
    dockerConfig:
      dockerShimRootDirectory: /var/lib/dockershim
      dockerShimSocket: /var/run/dockershim.sock
      execHandlerName: native
    enableUnidling: true
    imageConfig:
      format: registry.reg-aws.openshift.com/openshift3/ose-${component}:${version}
      latest: false
    iptablesSyncPeriod: 30s
    kind: NodeConfig
    kubeletArguments: (3)
      bootstrap-kubeconfig:
      - /etc/origin/node/bootstrap.kubeconfig
      cert-dir:
      - /etc/origin/node/certificates
      cloud-config:
      - /etc/origin/cloudprovider/aws.conf
      cloud-provider:
      - aws
      enable-controller-attach-detach:
      - 'true'
      feature-gates:
      - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
      node-labels:
      - node-role.kubernetes.io/compute=true
      pod-manifest-path:
      - /etc/origin/node/pods  (4)
      rotate-certificates:
      - 'true'
    masterClientConnectionOverrides:
      acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
      burst: 40
      contentType: application/vnd.kubernetes.protobuf
      qps: 20
    masterKubeConfig: node.kubeconfig
    networkConfig:   (5)
      mtu: 8951
      networkPluginName: redhat/openshift-ovs-subnet  (6)
    servingInfo:                   (7)
      bindAddress: 0.0.0.0:10250
      bindNetwork: tcp4
      clientCA: client-ca.crt (8)
    volumeConfig:
      localQuota:
        perFSGroup: null
    volumeDirectory: /var/lib/origin/openshift.local.volumes
    (3) Key value pairs that are passed directly to the Kubelet that match the Kubelet’s command line arguments.

    (4) The path to the pod manifest file or directory. A directory must contain one or more manifest files. OpenShift Container Platform uses the manifest files to create pods on the node.

    (5) The pod network settings on the node.

    (6) Software defined network (SDN) plug-in. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in; redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in; or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in.

    (7) Certificate information for the node.

    (8) Optional: PEM-encoded certificate bundle. If set, a valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names.

    The node configuration file determines the resources of a node. See the Allocating node resources section in the Cluster Administrator guide for more information.

    Pod and Node Configuration

    Table 20. Pod and Node Configuration Parameters

    The value used to identify this particular node in the cluster. If possible, this should be your fully qualified hostname. If you are describing a set of static nodes to the master, this value must match one of the values in the list.

    You can use the XFS quota subsystem to limit the size of emptyDir volumes and volumes based on an emptyDir volume, such as secrets and configuration maps, on each node.

    To limit the size of emptyDir volumes in an XFS filesystem, configure local volume quota for each unique FSGroup using the node-config-compute configuration map in the openshift-node project.

    apiVersion: kubelet.config.openshift.io/v1
    kind: VolumeConfig
    localQuota: (1)
      perFSGroup: 1Gi (2)

    Set this value to a resource quantity representing the desired quota per FSGroup, per node, such as 1Gi, 512Mi, and so forth. Requires the volumeDirectory to be on an XFS filesystem mounted with the grpquota option. The matching security context constraint fsGroup type must be set to MustRunAs.

    If no FSGroup is specified, indicating the request matched an SCC with RunAsAny, the quota application is skipped.

    Do not edit the /etc/origin/node/volume-config.yaml file directly. The file is created from the node-config-compute configuration map. Use the node-config-compute configuration map to create or edit the parameters in the volume-config.yaml file.

    Setting Node Queries per Second (QPS) Limits and Burst Values

    The rate at which the kubelet talks to the API server depends on the qps and burst values. The default values are good enough if there are limited pods running on each node. Provided there are enough CPU and memory resources on the node, the qps and burst values can be tweaked in the /etc/origin/node/node-config.yaml file:

    kubeletArguments:
      kube-api-qps:
      - "20"
      kube-api-burst:
      - "40"

    Parallel Image Pulls with Docker 1.9+

    If you are using Docker 1.9+, you may want to consider enabling parallel image pulling, as the default is to pull images one at a time.

    There is a potential issue with data corruption prior to Docker 1.9. However, starting with 1.9, the corruption issue is resolved and it is safe to switch to parallel pulls.
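
    A hedged sketch of the node-config.yaml change that enables parallel pulls; serialize-image-pulls is the upstream kubelet flag that controls this behavior, and setting it to false allows images to be pulled in parallel:

    kubeletArguments:
      serialize-image-pulls:
      - "false"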

    For some authentication configurations, an LDAP bindPassword or OAuth clientSecret value is required. Instead of specifying these values directly in the master configuration file, these values may be provided as environment variables, external files, or in encrypted files.

    Environment Variable Example
      bindPassword:
        env: BIND_PASSWORD_ENV_VAR_NAME
    External File Example
      bindPassword:
        file: bindPassword.txt
    Encrypted External File Example
      bindPassword:
        file: bindPassword.encrypted
        keyFile: bindPassword.key

    To create the encrypted file and key file for the above example:

    $ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
    > Data to encrypt: B1ndPass0rd!

    Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.

    When defining an OpenShift Container Platform configuration from scratch, start by creating new configuration files.

    For master host configuration files, use the openshift start command with the --write-config option to write the configuration files. For node hosts, use the oc adm create-node-config command to write the configuration files.

    The following commands write the relevant launch configuration file(s), certificate files, and any other necessary files to the specified --write-config or --node-dir directory.

    Generated certificate files are valid for two years, while the certification authority (CA) certificate is valid for five years. This can be altered with the --expire-days and --signer-expire-days options, but for security reasons, it is recommended to not make them greater than these values.

    To create configuration files for an all-in-one server (a master and a node on the same host) in the specified directory:

    $ openshift start --write-config=/openshift.local.config

    To create a master configuration file and other required files in the specified directory:

    $ openshift start master --write-config=/openshift.local.config/master

    To create a node configuration file and other related files in the specified directory:

    $ oc adm create-node-config \
        --node-dir=/openshift.local.config/node-<node_hostname> \
        --node=<node_hostname> \
        --hostnames=<node_hostname>,<ip_address> \
        --certificate-authority="/path/to/ca.crt" \
        --signer-cert="/path/to/ca.crt" \
        --signer-key="/path/to/ca.key" \
        --signer-serial="/path/to/ca.serial.txt" \
        --node-client-certificate-authority="/path/to/ca.crt"

    When creating node configuration files, the --hostnames option accepts a comma-delimited list of every host name or IP address you want server certificates to be valid for.

    After you have modified the master and node configuration files to your specifications, you can use them when launching servers by specifying them as an argument. If you specify a configuration file, none of the other command line options you pass are respected.

    Start the network proxy and SDN plug-ins using a node configuration file and a node.kubeconfig file:

    $ openshift start network \
        --config=/openshift.local.config/node-<node_hostname>/node-config.yaml \
        --kubeconfig=/openshift.local.config/node-<node_hostname>/node.kubeconfig
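
    Similarly, to start a master using a previously written configuration file, a sketch reusing the path from the earlier --write-config example:

    $ openshift start master \
        --config=/openshift.local.config/master/master-config.yaml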

    OpenShift Container Platform collects log messages for debugging, using the systemd-journald.service for nodes and a script, called master-logs, for masters.

    The logging uses five log message severities based on Kubernetes logging conventions, as follows:

    Table 23. Log Level Options

    Configuring Logging Levels

    You can control which INFO messages are logged by setting the DEBUG_LOGLEVEL option in the /etc/origin/master/master.env file for the master or /etc/sysconfig/atomic-openshift-node file for the nodes. Configuring the logs to collect all messages can lead to large logs that are difficult to interpret and can take up excessive space. Only collect all messages when you need to debug your cluster.

    Edit the /etc/origin/master/master.env file for the master or /etc/sysconfig/atomic-openshift-node file for the nodes.

    Enter a value from the Log Level Options table in the DEBUG_LOGLEVEL field.

    For example:

    DEBUG_LOGLEVEL=4

    The default log level can be set using the standard cluster installation process. For more information, see Cluster Variables.

    The following examples are excerpts of redirected master log files at various log levels. System information has been removed from these examples.

    Excerpt of master-logs api api 2> file output at loglevel=2
    W1022 15:08:09.787705       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
    I1022 15:08:09.787894       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
    I1022 15:08:09.787913       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
    I1022 15:08:09.889022       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
    I1022 15:08:09.893156       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
    I1022 15:08:09.893500       1 master.go:431] Starting OAuth2 API at /oauth
    I1022 15:08:09.914759       1 master.go:431] Starting OAuth2 API at /oauth
    I1022 15:08:09.942349       1 master.go:431] Starting OAuth2 API at /oauth
    W1022 15:08:09.977088       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:08:09.977176       1 swagger.go:38] No API exists for predefined swagger description /api/v1
    [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
    [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
    I1022 15:08:10.231405       1 master.go:431] Starting OAuth2 API at /oauth
    W1022 15:08:10.259523       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:08:10.259555       1 swagger.go:38] No API exists for predefined swagger description /api/v1
    I1022 15:08:23.895493       1 logs.go:49] http: TLS handshake error from 10.10.94.10:46322: EOF
    I1022 15:08:24.449577       1 crdregistration_controller.go:110] Starting crd-autoregister controller
    I1022 15:08:24.449916       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
    I1022 15:08:24.496147       1 logs.go:49] http: TLS handshake error from 127.0.0.1:39140: EOF
    I1022 15:08:24.821198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
    I1022 15:08:24.833022       1 cache.go:39] Caches are synced for AvailableConditionController controller
    I1022 15:08:24.865087       1 controller.go:537] quota admission added evaluator for: { events}
    I1022 15:08:24.865393       1 logs.go:49] http: TLS handshake error from 127.0.0.1:39162: read tcp4 127.0.0.1:443->127.0.0.1:39162: read: connection reset by peer
    I1022 15:08:24.966917       1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
    I1022 15:08:24.967961       1 autoregister_controller.go:136] Starting autoregister controller
    I1022 15:08:24.967977       1 cache.go:32] Waiting for caches to sync for autoregister controller
    I1022 15:08:25.015924       1 controller.go:537] quota admission added evaluator for: { serviceaccounts}
    I1022 15:08:25.077984       1 cache.go:39] Caches are synced for autoregister controller
    W1022 15:08:25.304265       1 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.10.94.10]
    E1022 15:08:25.472536       1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
    E1022 15:08:25.550888       1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
    I1022 15:08:29.480691       1 healthz.go:72] /healthz/log check
    I1022 15:08:30.981999       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.servicecatalog.k8s.io
    E1022 15:08:30.990914       1 controller.go:111] loading OpenAPI spec for "v1beta1.servicecatalog.k8s.io" failed with: OpenAPI spec does not exists
    I1022 15:08:30.990965       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.servicecatalog.k8s.io: Rate Limited Requeue.
    I1022 15:08:31.530473       1 trace.go:76] Trace[1253590531]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-controller" (started: 2018-10-22 15:08:30.868387562 +0000 UTC m=+24.277041043) (total time: 661.981642ms):
    Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
    I1022 15:08:31.531366       1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):
    Excerpt of master-logs api api 2> file output at loglevel=4
    I1022 15:08:09.746980       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysDeny.
    I1022 15:08:09.747597       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ResourceQuota.
    I1022 15:08:09.748038       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/ClusterResourceQuota.
    I1022 15:08:09.786771       1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
    I1022 15:08:09.786798       1 start_master.go:459] Public master address is https://openshift.com:443
    I1022 15:08:09.786844       1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
    W1022 15:08:09.787046       1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
    W1022 15:08:09.787705       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
    I1022 15:08:09.787894       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
    I1022 15:08:09.787913       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
    I1022 15:08:09.889022       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
    I1022 15:08:09.893156       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
    I1022 15:08:09.893500       1 master.go:431] Starting OAuth2 API at /oauth
    I1022 15:08:09.914759       1 master.go:431] Starting OAuth2 API at /oauth
    I1022 15:08:09.942349       1 master.go:431] Starting OAuth2 API at /oauth
    W1022 15:08:09.977088       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:08:09.977176       1 swagger.go:38] No API exists for predefined swagger description /api/v1
    [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
    [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
    I1022 15:08:10.231405       1 master.go:431] Starting OAuth2 API at /oauth
    W1022 15:08:10.259523       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:08:10.259555       1 swagger.go:38] No API exists for predefined swagger description /api/v1
    [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
    [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
    I1022 15:08:10.444303       1 master.go:431] Starting OAuth2 API at /oauth
    W1022 15:08:10.492409       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:08:10.492507       1 swagger.go:38] No API exists for predefined swagger description /api/v1
    [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
    [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
    I1022 15:08:10.774824       1 master.go:431] Starting OAuth2 API at /oauth
    I1022 15:08:23.808685       1 logs.go:49] http: TLS handshake error from 10.128.0.11:39206: EOF
    I1022 15:08:23.815311       1 logs.go:49] http: TLS handshake error from 10.128.0.14:53054: EOF
    I1022 15:08:23.822286       1 customresource_discovery_controller.go:174] Starting DiscoveryController
    I1022 15:08:23.822349       1 naming_controller.go:276] Starting NamingConditionController
    I1022 15:08:23.822705       1 logs.go:49] http: TLS handshake error from 10.128.0.14:53056: EOF
    I1022 15:08:31.530473       1 trace.go:76] Trace[1253590531]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-controller" (started: 2018-10-22 15:08:30.868387562 +0000 UTC m=+24.277041043) (total time: 661.981642ms):
    Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
    I1022 15:08:31.531366       1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):
    Trace[83808472]: [700.049245ms] [700.04027ms] END
    I1022 15:08:31.531695       1 trace.go:76] Trace[1916801734]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:31.031163449 +0000 UTC m=+24.439816907) (total time: 500.514208ms):
    Trace[1916801734]: [500.514208ms] [500.505008ms] END
    I1022 15:08:44.675371       1 healthz.go:72] /healthz/log check
    I1022 15:08:46.589759       1 controller.go:537] quota admission added evaluator for: { endpoints}
    I1022 15:08:46.621270       1 controller.go:537] quota admission added evaluator for: { endpoints}
    I1022 15:08:57.159494       1 healthz.go:72] /healthz/log check
    I1022 15:09:07.161315       1 healthz.go:72] /healthz/log check
    I1022 15:09:16.297982       1 trace.go:76] Trace[2001108522]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.139820419 +0000 UTC m=+68.548473981) (total time: 1.158128974s):
    Trace[2001108522]: [1.158012755s] [1.156496534s] Transaction committed
    I1022 15:09:16.298165       1 trace.go:76] Trace[1124283912]: "Patch /api/v1/nodes/master-0.com/status" (started: 2018-10-22 15:09:15.139695483 +0000 UTC m=+68.548348970) (total time: 1.158434318s):
    Trace[1124283912]: [1.158328853s] [1.15713683s] Object stored in database
    I1022 15:09:16.298761       1 trace.go:76] Trace[24963576]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.13159057 +0000 UTC m=+68.540244112) (total time: 1.167151224s):
    Trace[24963576]: [1.167106144s] [1.165570379s] Transaction committed
    I1022 15:09:16.298882       1 trace.go:76] Trace[222129183]: "Patch /api/v1/nodes/node-0.com/status" (started: 2018-10-22 15:09:15.131269234 +0000 UTC m=+68.539922722) (total time: 1.167595526s):
    Trace[222129183]: [1.167517296s] [1.166135605s] Object stored in database
    Excerpt of master-logs api api 2> file output at loglevel=8
    I1022 15:11:58.829357       1 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
    I1022 15:11:58.839967       1 plugins.go:84] Registered admission plugin "Initializers"
    I1022 15:11:58.839994       1 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
    I1022 15:11:58.840012       1 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
    I1022 15:11:58.840025       1 plugins.go:84] Registered admission plugin "AlwaysAdmit"
    I1022 15:11:58.840082       1 plugins.go:84] Registered admission plugin "AlwaysPullImages"
    I1022 15:11:58.840105       1 plugins.go:84] Registered admission plugin "LimitPodHardAntiAffinityTopology"
    I1022 15:11:58.840126       1 plugins.go:84] Registered admission plugin "DefaultTolerationSeconds"
    I1022 15:11:58.840146       1 plugins.go:84] Registered admission plugin "AlwaysDeny"
    I1022 15:11:58.840176       1 plugins.go:84] Registered admission plugin "EventRateLimit"
    I1022 15:11:59.850825       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
    I1022 15:11:59.859108       1 register.go:154] Admission plugin AlwaysAdmit is not enabled.  It will not be started.
    I1022 15:11:59.859284       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysAdmit.
    I1022 15:11:59.859809       1 register.go:154] Admission plugin NamespaceAutoProvision is not enabled.  It will not be started.
    I1022 15:11:59.859939       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceAutoProvision.
    I1022 15:11:59.860594       1 register.go:154] Admission plugin NamespaceExists is not enabled.  It will not be started.
    I1022 15:11:59.860778       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceExists.
    I1022 15:11:59.863999       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceLifecycle.
    I1022 15:11:59.864626       1 register.go:154] Admission plugin EventRateLimit is not enabled.  It will not be started.
    I1022 15:11:59.864768       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: EventRateLimit.
    I1022 15:11:59.865259       1 register.go:154] Admission plugin ProjectRequestLimit is not enabled.  It will not be started.
    I1022 15:11:59.865376       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ProjectRequestLimit.
    I1022 15:11:59.866126       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: OriginNamespaceLifecycle.
    I1022 15:11:59.866709       1 register.go:154] Admission plugin openshift.io/RestrictSubjectBindings is not enabled.  It will not be started.
    I1022 15:11:59.866761       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/RestrictSubjectBindings.
    I1022 15:11:59.867304       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/JenkinsBootstrapper.
    I1022 15:11:59.867823       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/BuildConfigSecretInjector.
    I1022 15:12:00.015273       1 master_config.go:476] Initializing cache sizes based on 0MB limit
    I1022 15:12:00.015896       1 master_config.go:539] Using the lease endpoint reconciler with TTL=15s and interval=10s
    I1022 15:12:00.018396       1 storage_factory.go:285] storing { apiServerIPInfo} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
    I1022 15:12:00.037710       1 storage_factory.go:285] storing { endpoints} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
    I1022 15:12:00.054112       1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
    I1022 15:12:00.054678       1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
    I1022 15:12:00.054755       1 start_master.go:459] Public master address is https://openshift.com:443
    I1022 15:12:00.054837       1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
    W1022 15:12:00.056957       1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
    W1022 15:12:00.065497       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
    I1022 15:12:00.066061       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
    I1022 15:12:00.066265       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
    I1022 15:12:00.158725       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
    I1022 15:12:00.167910       1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
    I1022 15:12:00.168182       1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
    I1022 15:12:00.231233       1 storage_factory.go:285] storing {apps.openshift.io deploymentconfigs} in apps.openshift.io/v1, reading as apps.openshift.io/__internal from storagebackend.Config{Type:"etcd3", Prefix:"openshift.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
    I1022 15:12:00.248136       1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
    I1022 15:12:00.248697       1 store.go:1391] Monitoring deploymentconfigs.apps.openshift.io count at <storage-prefix>//deploymentconfigs
    W1022 15:12:00.256861       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
    W1022 15:12:00.258106       1 swagger.go:38] No API exists for predefined swagger description /api/v1