Version v1.16 of the documentation is no longer actively maintained. The site that you are currently viewing is an archived snapshot. For up-to-date documentation, see the latest version.
If your Operator creates or manages NetworkPolicy configurations, ensure that your solution:
- applies fine-grained network policies only to the extent required for your managed application to function properly
- applies fine-grained network policies that enable the internal components of your managed application to communicate with each other
- allows users to configure your Operator so that it does not create or manage NetworkPolicy instances
- does not create "allow traffic from everywhere in the cluster"-type policies
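For reference, the kind of policy to avoid looks like the following sketch: it selects every pod in the namespace and admits ingress traffic from any source in the cluster (the policy name is hypothetical).

```yaml
# Anti-pattern: the empty podSelector matches every pod in the namespace,
# and the empty ingress rule allows traffic from all sources.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - {}
```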
NetworkPolicies are popular in multi-tenant clusters as an extra layer of
segregation among tenants within SDN solutions. Users typically customize them extensively, with the goal of disallowing network traffic among unrelated
tenants. Operators that deploy an
"accept all traffic from anywhere in the cluster"-style policy create an obstacle to
this goal, especially when such policies cannot be disabled. In security-conscious environments, policies like these are not allowed
in production. In such cases your Operator should, at a minimum, offer the option to prevent NetworkPolicy objects from being
created, leaving this responsibility to the user. In more advanced cases, your Operator should create NetworkPolicy
configurations that follow the least-privilege principle, i.e. denying access to everything and from everywhere by default and only allowing access to specific authorized resources from specific authorized components.
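As a sketch, a least-privilege setup could pair a default-deny policy with narrow allow rules; the policy names, app labels, and port below are hypothetical:

```yaml
# Default-deny: selects all pods in the namespace and allows no ingress traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow only a specific authorized component (here a hypothetical "frontend")
# to reach the managed application's "backend" pods, and only on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```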
The goal is to split or isolate ingress traffic from certain environments, e.g. production and development environments, so that it ends up on different routers and is thereby managed by different Ingress Controllers. This is a popular configuration for heavily populated multi-tenant clusters with several IngressControllers deployed.
If your Operator creates Ingress resources, the recommendation is to allow users to customize them through your CRD. The requested IngressClass then needs to be propagated to the Ingress resources your Operator creates so that they get picked up by the desired Ingress Controller. Note that the kubernetes.io/ingress.class annotation is deprecated in favour of the ingressClassName field.
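For illustration, an Ingress created by the Operator might propagate a user-chosen class like this; the class name, host, and service names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: tenant-a   # propagated from the user's custom resource, not hard-coded
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```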
To run on the OpenShift distribution of Kubernetes you will probably use
the Route API. When sharding is in use, IngressControllers may be configured with a route label selector.
Based on this selector, each controller amends its configuration when a route carrying the matching label
is created, and ignores routes that do not have the label. The label is applied at the
route level, and there is no pre-defined convention here, so users set these custom labels in
accordance with how they configured their IngressController (operator.openshift.io/v1) instances.
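A sharding setup along these lines might look as follows; the shard label key, domain, and resource names are hypothetical:

```yaml
# IngressController that only admits routes carrying the matching label.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: production
  namespace: openshift-ingress-operator
spec:
  domain: apps-prod.example.com
  routeSelector:
    matchLabels:
      environment/shard: production
---
# A Route labelled so that it is picked up by the "production" shard above.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  labels:
    environment/shard: production
spec:
  host: my-app.apps-prod.example.com
  to:
    kind: Service
    name: my-app
```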