Installing in AWS Outposts

Installing advanced event mesh in AWS Outposts is a Controlled-Availability (CA) feature. Contact SAP to see if this feature is suitable for your use case and deployment requirements.

SAP supports deploying event broker services in Amazon Elastic Kubernetes Service (EKS) in AWS Outposts as a controlled availability feature. AWS Outposts provide the ability to deploy your event broker service to EKS clusters on Amazon hardware located on-premises. For more information, see the AWS Outposts Family documentation.

There are a number of environment-specific steps that you must perform to install advanced event mesh for SAP Integration Suite.

Before you perform the environment-specific steps described below, ensure that you review and fulfill the general requirements listed in Common Kubernetes Prerequisites.

SAP does not support event broker service integration with service meshes, such as Istio, Cilium, Linkerd, and Consul. If you deploy to a cluster with a service mesh, you must:

  • exclude the target-namespace used by advanced event mesh services from the service mesh (see the example below).
  • set up connectivity to event broker services in the cluster using LoadBalancer or NodePort. See Exposing Event Broker Services to External Traffic for more information.
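
For example, in a cluster that runs Istio with automatic sidecar injection, one way to exclude the namespace is to disable injection on it. The namespace name below is a placeholder; other service meshes have equivalent mechanisms, so consult the documentation for your mesh.

kubectl label namespace <target-namespace> istio-injection=disabled --overwrite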

For customer-owned deployments, you are responsible for setting up the Kubernetes cluster and for its ongoing maintenance and operation. The following information can help you understand the requirements for the Kubernetes cluster that you create:

SAP provides sample scripts and Terraform modules that you can use as a reference example to understand what is required in your Kubernetes cluster. The example is provided as-is. You (the customer) can modify the files as required to create your Kubernetes cluster. If you choose to do so, you are responsible for maintaining and modifying the files for your deployment. For more information, contact SAP.

Considerations for Deploying advanced event mesh Using AWS Outposts

Be aware of the following considerations when choosing to deploy advanced event mesh to AWS Outposts:

  • SAP only supports deployment of advanced event mesh to Amazon Elastic Kubernetes Service (EKS) for AWS Outposts.

  • SAP only supports the use of AWS Outposts Rack for deploying advanced event mesh to EKS on AWS Outposts. See AWS Outposts Rack in the AWS Outposts documentation for more information.

  • You must configure AWS Outposts to use direct VPC routing. Direct VPC routing is the default configuration option for local gateway route tables when deploying to AWS Outposts. See Direct VPC Routing in the AWS Outposts documentation for more information.

  • You must create a partition placement group with a partition count of 2. See Placement Groups.

  • You must configure your EKS cluster's storage class (storage_class) to use GP2 storage when deploying advanced event mesh to EKS on AWS Outposts. For more information, see Storage Class.

  • You must size your CIDR correctly to accommodate both system usage and the number of event broker services your cluster will host. For more information, see IP Range and CIDR Sizing.

  • If you intend to expose your event broker services to external networks, you must use SAP's custom MetalLB load balancer. For more information, see Using the Custom MetalLB Load Balancer.

Prerequisites

Permissions
You require an AWS account with the permissions listed below. These permissions are required only by the individual who performs the deployment using the Terraform module:
  • All the permissions that are required to create and manage the EKS cluster (eksClusterRole).
  • Permission to create IAM roles and IAM policies for the EKS cluster. The Terraform module requires these permissions. The example module creates the IAM roles and policies used by the EKS cluster, together with the permissions needed to create and manage the following resources in the EKS cluster:
    IAM Role
    Gives permissions to the following:
    • EKS cluster control plane
    • EKS cluster worker nodes
    • EKS Load balancer controller
    • EKS auto-scaler
    IAM Policy
    Creates a set of permissions that is required by the following in the deployment:
    All EC2 resources
    The Kubernetes and Terraform modules require this permission to access tags and the metadata of resources. The autoscaler and load balancer provisioning inspect the security groups attached to each instance and modify them to add rules for the load balancer services.
    VPC
    The Terraform module requires this permission to create the VPC. The Kubernetes module also requires this permission to scan the VPC and subnets to retrieve networking parameters.
    EBS
    Permission is required for Kubernetes dynamic PVCs, which require access to EBS to dynamically create volumes and attach them to the correct host. The host also requires EC2 access as described above.
    Security Groups
    The Kubernetes module requires this permission to create security groups. The load balancer services also require rules to be added to these security groups.
    Routing tables
    The Terraform module requires this permission to create routing tables.
    Internet Gateways
    The Terraform module requires this permission to create an Internet gateway.
    Elastic IPs
    The Terraform module requires this permission to attach Elastic IP addresses (EIPs) to the NAT gateway.
    NAT Gateways
    The Terraform module requires this permission to create the NAT gateway and attach EIPs to the NAT gateway.
    OIDC Provider
    The Terraform module requires permission to create an OIDC provider that is used to authenticate Kubernetes modules against IAM roles. The autoscaler and AWS Load Balancer Controller are two of the Kubernetes modules that use OIDC to authenticate against IAM roles.
    Elastic Load Balancers
    The worker node group requires this permission to install the EKS cluster.
Networking
If you plan to use AWS NAT gateways for outgoing traffic, you must create an Elastic IP (EIP) address for each NAT Gateway that you intend to use with the following considerations:
  • The EIPs for the NAT gateways must be created upfront.
  • SAP recommends that you create two EIPs. A minimum of one EIP allocation ID is required.
EIPs are not required if you plan to route traffic over your on-premises network.

EKS Cluster Specifications

Before you (the customer) install the Mission Control Agent, you must configure the EKS cluster on your AWS Outpost with the technical specifications listed in the following sections.

For more detailed information about using Amazon EKS, see the User Guide on the Amazon EKS documentation site.

Placement Groups

AWS Outposts don't have traditional availability zones, because an AWS Outpost is a physical object residing in one of your datacenters. This has implications for how you configure your cluster to maintain the High Availability (HA) status of your event broker services.

To maintain the HA status of your event broker service, you must create a placement group on the AWS Outposts, configured as a partition with a count of 2.

The partition placement group ensures the placement of the individual nodes of the HA event broker service in different partitions. The different partitions do not share underlying hardware. If the hardware in one partition fails, the backup node of the HA event broker service takes over.
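
As an illustration, a partition placement group with two partitions can be created with the AWS CLI; the group name below is a placeholder:

aws ec2 create-placement-group \
  --group-name aem-outpost-pg \
  --strategy partition \
  --partition-count 2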

Instance Type Requirements

Because of the additional resources required to run Kubernetes, the instance types required for some of the scaling tiers are larger than their counterparts in non-Kubernetes (machine-based) deployments. The following are the instance type requirements for an EKS cluster. For details about the core and RAM requirements for each scaling tier, see General Resource Requirements for Kubernetes and Default Port Configuration.

Scaling Tier Instance Type Required
Monitor M5.large
Standard R5.large
Broker 250 R5.large
Broker 1K R5.large
Broker 5K R5.xlarge
Broker 10K R5.xlarge
Broker 50K R5.2xlarge
Broker 100K R5.2xlarge
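
After your node groups are provisioned, you can verify that the worker nodes use the expected instance types by listing the standard instance-type node label (this assumes you have kubectl access to the cluster):

kubectl get nodes -L node.kubernetes.io/instance-type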

Storage Class

The EKS storage class (type) must be GP2.

It's important to remember that the disk size (the size of the EBS volume) must be larger than the message spool size.
  • For event broker services 10.6.1 and earlier, the disk requirement is twice the Message Spool size specified when you create an event broker service. For example, if you configure an event broker service to use a Message Spool of 500 GiB, you require a 1 TB disk size.
  • For event broker services version 10.7.1 and later, the disk size requirement is 30% greater than the message spool size for the event broker service class. For example, the Enterprise 250 class has a message spool size of 50 GiB, requiring a 65 GiB disk size.
You must consider the disk space overhead when planning your Kubernetes cluster. See Volume Size for High-Availability Event Broker Services for a list of disk size requirements for all event broker service versions.

To deploy advanced event mesh, you must configure the StorageClass in EKS to use the WaitForFirstConsumer volume binding mode (volumeBindingMode). To support scale-up, the StorageClass must include the allowVolumeExpansion property set to true. You should always use XFS as the filesystem type (fsType).

The properties of your StorageClass YAML should be similar to the following:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: gp2
  encrypted: "true"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
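
As a usage sketch, assuming you save the manifest above to a file named gp2-storageclass.yaml (the file name is arbitrary), you can apply it and confirm the binding mode and volume expansion settings with kubectl. If your cluster already provides a StorageClass named gp2, review and update that one instead of creating a new one.

kubectl apply -f gp2-storageclass.yaml
kubectl get storageclass gp2 -o yaml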

NAT Gateway

The following network configurations are not required if you route outgoing traffic through your on-premises network.

You require one Elastic IP (EIP) for each NAT gateway for your cluster. SAP recommends that you have two Elastic IPs (and two NAT gateways) for a production system.

You can have up to three EIPs and NAT gateways, which provides multi-AZ NAT redundancy; this requires three EIPs. These NAT EIPs must be created upfront. If you use a Terraform module, ensure that you use these EIPs.
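
If you allocate the EIPs ahead of time with the AWS CLI, record the allocation IDs from the output so that you can pass them to the Terraform module; the following example allocates a single VPC-scoped address (run it once per NAT gateway):

aws ec2 allocate-address --domain vpc --query AllocationId --output text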

IP Range and CIDR Sizing

You must consider the CIDR requirement for your worker nodes when deploying advanced event mesh to EKS on AWS Outposts. Note that if you are using SAP's custom MetalLB load balancer, it requires one CIDR for its IP pool definition.

The calculations below are based on custom settings for WARM_IP_TARGET and WARM_ENI_TARGET:

kubectl set env ds aws-node -n kube-system WARM_IP_TARGET=1     
kubectl set env ds aws-node -n kube-system WARM_ENI_TARGET=0

Details about these settings are available in the Amazon Kubernetes VPC CNI documentation on GitHub.

You can calculate your CIDR requirement for the EKS worker node subnet with the following equation (a worked example follows the breakdown below):

5 + 10 + HA*10 + (SA+1) * 4

The values in the equation are explained below:

  • The first number (5) represents the number of IPs reserved for the AWS subnet. This includes the first four IPs and the last IP (for example, in a /24 CIDR, IP .255 would be the last IP reserved).

  • The second number (10) represents the IPs required for system usage, including:

    • Two for the autoscaler

    • Two for the Core DNS

    • Two for the CSI controller

    • Two for the MetalLB controller

    • Two for the worker node IPs.

  • The third section of the equation (HA*10) represents the total IPs required by high availability (HA) event broker services in your cluster. Each HA event broker service requires 10 IPs, including:

    • Three for the pods

    • One for the loadbalancer

    • Six for worker nodes

  • The fourth section of the equation ((SA+1) * 4) represents the total IPs required by standard-class event broker services in your cluster. Each standard event broker service requires four IPs, including:

    • One for pods

    • One for the loadbalancer

    • Two for worker nodes.

    • The plus one accounts for the IP required by the extra worker node used during upgrades. Only one additional node is required regardless of how many standard event broker services you deploy.
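
As a worked example, a hypothetical cluster that hosts two HA event broker services and three standard event broker services requires 5 + 10 + 2*10 + (3+1)*4 = 51 IPs, which fits in a /26 subnet (64 addresses). You can check the arithmetic in a shell:

HA=2; SA=3
echo $((5 + 10 + HA*10 + (SA+1)*4))   # 51 -> a /26 subnet (64 addresses) is sufficient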

Using the Custom MetalLB Load Balancer

If you choose to use a load balancer to connect to the event broker services you deploy to your EKS cluster in AWS Outposts, you must use the custom MetalLB load balancer provided by SAP. You must deploy the MetalLB load balancer before you deploy the Mission Control Agent. You must also update the cluster's kube-proxy-config in the kube-system namespace, which you can do after creating the cluster.

  1. To update the kube-proxy-config, perform a PATCH to the kube-proxy-config in the kube-system namespace with the following payload:

    localDetectMode: InterfaceNamePrefix
    detectLocal:
      interfaceNamePrefix: eni

    The PATCH adds the fields in the payload to the kube-proxy-config; they are not present by default. A sketch of one way to make this change with kubectl follows these steps.

  2. Generate the metallb.yaml Helm values file as an output of the worker_group module with the following script:

    terraform -chdir=worker_group/ output -raw metallb_values > metallb.yaml
    
    helm upgrade --install metallb \
      "https://test-charts-solace.s3.eu-central-1.amazonaws.com/metallb-0.0.0.tgz" \
      --namespace kube-system \
      --values metallb.yaml
  3. You must reserve a subset of the worker group's IPs for the MetalLB load balancer. This requires that you decide on an IP range and define the reservation_type as explicit. You can do this with the following Terraform resource:
    resource "aws_ec2_subnet_cidr_reservation" "metal_lb_reservation" {
      cidr_block       = var.metal_lb_cidr_block
      reservation_type = "explicit"
      subnet_id        = var.subnet_id
    }
  4. After the MetalLB controller pod is running, you can apply the MetalLB custom resources to the cluster.

    1. Apply the following manifest to create the IP address pool that you defined with the Terraform resource in step 3, where <First IP of MetalLB CIDR>-<Last IP of MetalLB CIDR> must match the CIDR passed by the metal_lb_cidr_block variable when deploying the worker_group. This represents the pool of IP addresses that are reserved for MetalLB to assign to services.

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: first-pool
        namespace: kube-system
      spec:
        addresses:
          - <First IP of MetalLB CIDR>-<Last IP of MetalLB CIDR>
    2. Create an AWSAdvertisement object. This configures MetalLB to use an AWS advertisement strategy for the IP addresses that it assigns to services.

      apiVersion: metallb.io/v1beta1
      kind: AWSAdvertisement
      metadata:
        name: aws
        namespace: kube-system
      spec:
        ipAddressPools:
        - first-pool
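
As a minimal sketch of step 1, assuming the kube-proxy configuration is stored in a ConfigMap named kube-proxy-config in the kube-system namespace (as is typical for EKS), you can open it for editing and add the fields shown in the payload; adapt this to whatever tooling you use to manage cluster add-ons:

kubectl -n kube-system edit configmap kube-proxy-config
# In the embedded kube-proxy configuration, add:
#   localDetectMode: InterfaceNamePrefix
#   detectLocal:
#     interfaceNamePrefix: eni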

Autoscaling

Your cluster requires autoscaling to provide the appropriate level of available resources for your event broker services as their demands change. SAP recommends using the Kubernetes Cluster Autoscaler, which you can find in the Kubernetes GitHub repository at: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler.

See the Autoscaling documentation on the Amazon EKS documentation site for information about implementing a Cluster Autoscaler.
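
As one illustrative option (a sketch, not the only supported approach), the Cluster Autoscaler can be installed from its Helm chart; the release name, cluster name, and region below are placeholders, and additional chart values (for example, the IAM role annotation for the service account) depend on how you configured OIDC and IAM in the earlier steps:

helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=<your-cluster-name> \
  --set awsRegion=<your-region>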