(DK) VMWARE NSX-T And K8S Integration


The following walks through integrating Kubernetes with NSX-T, based on NSX-T 2.5.

 

Compatibility details are in the NCP 2.5.1 release notes:

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/rn/NSX-Container-Plugin-251-Release-Notes.html

 

 

Download the NCP package (the nsx-ncp-rhel container image tar and ncp-rhel.yaml) from the VMware download page.

 

 

1. Docker Install (on all nodes)


sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

 

sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

 

sudo yum install -y docker-ce*

 

systemctl enable docker && systemctl start docker
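
A quick sanity check on each node (optional):

docker --version
systemctl is-active docker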

 

 

2. K8S Install (on all nodes)

 

 

sudo swapoff --all
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
# Enable br_netfilter Kernel Module
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
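
To confirm the kernel parameters took effect (both should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward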

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubelet-1.16* kubeadm-1.16* kubectl-1.16* 
systemctl enable kubelet && systemctl start kubelet
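
Optionally confirm the pinned 1.16 versions were installed:

kubeadm version -o short
kubelet --version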



3. K8S Master Initialization
kubeadm init --apiserver-advertise-address 10.253.4.10 --ignore-preflight-errors=all

4. Join the Worker Nodes to the Cluster
## Run the following on the master node only (sets up kubectl access)



mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
sudo chown $(id -u):$(id -g) $HOME/.kube/config
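
kubectl should now reach the API server; note that nodes will typically show NotReady until the NCP CNI components are applied in step 12:

kubectl get nodes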

## Run the following on the worker nodes only (the full join command,
## including the discovery token CA cert hash, is printed by kubeadm init)
kubeadm join 10.253.4.10:6443 --token 2dd5af.1482be6721cdab2a \
    --discovery-token-ca-cert-hash sha256:<hash>
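
If the bootstrap token above has expired, a fresh join command (including the CA cert hash) can be generated on the master:

kubeadm token create --print-join-command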

5. Install Python (on all nodes)
yum install python -y

6. Create the nsx-system namespace
kubectl create ns nsx-system
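
Verify the namespace was created:

kubectl get ns nsx-system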

7. Create a certificate and register it with NSX-T

 

7.1 Create the certificate file

 

## Create the script with vi

 

vi create_certificate.sh 

 

#!/bin/bash
# create_certificate.sh

NSX_MANAGER="nsx1.xxx.local"
NSX_USER="admin"
PI_NAME="k8s-superuser"
NSX_SUPERUSER_CERT_FILE="k8s-superuser.crt"
NSX_SUPERUSER_KEY_FILE="k8s-superuser.key"

stty -echo
printf "Password: "
read NSX_PASSWORD
stty echo

cp /etc/pki/tls/openssl.cnf /etc/ssl/openssl.cnf

openssl req \
  -newkey rsa:2048 \
  -x509 \
  -nodes \
  -keyout "$NSX_SUPERUSER_KEY_FILE" \
  -new \
  -out "$NSX_SUPERUSER_CERT_FILE" \
  -subj /CN=k8s.oblab.local \
  -extensions client_server_ssl \
  -config <(cat /etc/ssl/openssl.cnf \
    <(printf '[client_server_ssl]\nextendedKeyUsage = clientAuth\n')) \
  -sha256 \
  -days 730

cert_request=$(cat <<END
{
  "display_name": "$PI_NAME",
  "pem_encoded": "$(awk '{printf "%s\\n", $0}' $NSX_SUPERUSER_CERT_FILE)"
}
END
)

curl -k -X POST \
    "https://${NSX_MANAGER}/api/v1/trust-management/certificates?action=import" \
    -u "$NSX_USER:$NSX_PASSWORD" \
    -H 'content-type: application/json' \
    -d "$cert_request"
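
A usage sketch: make the script executable and run it. The JSON response from the import call should include the new certificate's ID, which is needed as CERTIFICATE_ID in create_pi.sh below (it can also be read from the NSX Manager UI).

chmod +x create_certificate.sh
./create_certificate.sh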

 

7.2 Register the principal identity on NSX-T

## Create the script with vi

 

vi create_pi.sh

#!/bin/bash
# create_pi.sh

NSX_MANAGER="nsx1.xxx.local"
NSX_USER="admin"
CERTIFICATE_ID='9a043597-9500-46a8-ad9b-57d6ee2d6646'
PI_NAME="k8s-superuser"
NSX_SUPERUSER_CERT_FILE="k8s-superuser.crt"
NSX_SUPERUSER_KEY_FILE="k8s-superuser.key"
NODE_ID=$(cat /proc/sys/kernel/random/uuid)

stty -echo
printf "Password: "
read NSX_PASSWORD
stty echo

pi_request=$(cat <<END
{
  "display_name": "$PI_NAME",
  "name": "$PI_NAME",
  "permission_group": "superusers",
  "certificate_id": "$CERTIFICATE_ID",
  "node_id": "$NODE_ID"
}
END
)

curl -k -X POST \
    "https://${NSX_MANAGER}/api/v1/trust-management/principal-identities" \
    -u "$NSX_USER:$NSX_PASSWORD" \
    -H 'content-type: application/json' \
    -d "$pi_request"

curl -k -X GET \
    "https://${NSX_MANAGER}/api/v1/trust-management/principal-identities" \
    --cert $(pwd)/"$NSX_SUPERUSER_CERT_FILE" \
    --key $(pwd)/"$NSX_SUPERUSER_KEY_FILE"
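
Run it the same way; the final GET should list the k8s-superuser principal identity if certificate-based authentication is working:

chmod +x create_pi.sh
./create_pi.sh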

 

8. Load the NCP Docker image (on all nodes)

 

docker load -i /var/tmp/nsx-ncp-rhel-2.5.1.15287458.tar
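
Confirm the image loaded; the repository/tag shown is what the image field in ncp-rhel.yaml (step 10) must reference:

docker images | grep nsx-ncp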

 

9. Base64-encode the certificate and key

cat k8s-superuser.crt | base64 -w 0

cat k8s-superuser.key | base64 -w 0

 

 

10. Edit ncp-rhel.yaml


## Uncomment the Secret block under "# Client certificate and key used for NSX authentication" and fill in the base64 values from step 9

kind: Secret

metadata:

  name: nsx-secret

  namespace: nsx-system

type: kubernetes.io/tls

apiVersion: v1

data:

  # Fill in the client cert and key if using cert based auth with NSX

  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN6akNDQWJhZ0F3SUJBZ0lKQU4rM2UrYUVqZ2YzTUEwR0NTcUdTSWIzRFFFQkN3VUFNQm94R0RBV0JnTlYKQkFNTUQyczRjeTV2WW14aFlpNXNiMk5oYkRBZUZ3MHlNREF5TURreE16QTBNamxhRncweU1qQXlNRGd4TXpBMApNamxhTUJveEdEQVdCZ05WQkFNTUQyczRjeTV2WW14aFlpNXNiMk5oYkRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQVBkcmNjZzlNYWRZd3Q5bnl1ZGRIeFd0MktVZVIxYTd0TTJLcG52emtFN2kKVmZVRHhCb2FZZmdUSkh5MTMzbWtoM0RPZUMzdk1KR093ditPalduK0Y3RndjU2tUNGJBcmZKVzZMaTdadFVmTApIaGRMSk5xbHdRaEhHTUx6S1EvWEJTRkxTTXh0TWNSLzhveDZZU0FIYWNBd0E0WGNHMFdtSlovMFBaTTdnbTd0ClYwemN3eGZKZmI4eU1PYS9DTW1yUkRWSXNJbmVBN1ZMcVlaa0d6Z0NUc1puOEpRVVU2cmVWaG5TaWk2RWhNUmwKaDFxTUR1UFZLNSt4bE1PektVelRrZDRxZnVWcEM3VzI4SVlsck9nVGJSSHJNeGFLL2pvVEhlMkt2TVVuMXFXUgoxZ0pRdGw4MVBYV2syM3d3L1VnUnU2eUFwYVhiczJKQzRQUzB4WlNIMjUwQ0F3RUFBYU1YTUJVd0V3WURWUjBsCkJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNxZ0N5Z3puRnFZRVRId2ZGOXcKVmU4UG9mQ21JSmtOaXR4K1d0REpSK2Y1VWN0L2o2ZzNQVjh4OTg1ZVdtMlRZTlBjZFF5ODdFemlWK2R2SCs5ZgpKUm5maG5yaVR4djdhLzB0ZEtFbzltMTVHbHdaQTR1cSt4eUVJN3lweDZZVEcrbm1zRDkyTU9WU0pPU3hhTHlvCk01d1U5N0p5QVMveU5vM3puWjl0WlF0bVNWa0Y2bTdvRGxMeXU5SURHcGZwa3N3QnFBaXkyUTRyd29TRUJMUVIKem5QM29OdVdkL240cnpVVHArZjdwRC81aEN4UUxlOGZEc1o3dW1rZVRIVE5kZnltK1RKWWcwbnFUQjRXcEVWTwo5ZVBNZEpuaDNTeUlXWnZZQklaTkJ4ZDlaZkZaQXNlMDZKZGVCKzJzTEVMUDJ4YTdxRjU5RXNNNmJTamd3WHBwCkFTMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV3QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktvd2dnU21BZ0VBQW9JQkFRRDNhM0hJUFRHbldNTGYKWjhyblhSOFZyZGlsSGtkV3U3VE5pcVo3ODVCTzRsWDFBOFFhR21INEV5Ujh0ZDk1cElkd3puZ3Q3ekNSanNMLwpqbzFwL2hleGNIRXBFK0d3SzN5VnVpNHUyYlZIeXg0WFN5VGFwY0VJUnhqQzh5a1Axd1VoUzBqTWJUSEVmL0tNCmVtRWdCMm5BTUFPRjNCdEZwaVdmOUQyVE80SnU3VmRNM01NWHlYMi9NakRtdndqSnEwUTFTTENKM2dPMVM2bUcKWkJzNEFrN0daL0NVRkZPcTNsWVowb291aElURVpZZGFqQTdqMVN1ZnNaVERzeWxNMDVIZUtuN2xhUXUxdHZDRwpKYXpvRTIwUjZ6TVdpdjQ2RXgzdGlyekZKOWFsa2RZQ1VMWmZOVDExcE50OE1QMUlFYnVzZ0tXbDI3TmlRdUQwCnRNV1VoOXVkQWdNQkFBRUNnZ0VCQUk4QjAycGh0R2w5ckdWa29Lcko4RVdmampFaUp5bkNwT2FJMVdHZmpqR0sKTVJURmNIdUY5RXBuQmhmdko1UXZ2UGsrM2NmdkVLdVJVTFJWdVBjaW5wODQxcTlmaG50RkoyV0RqMTRWQityUApSRDFDZWJMSFM2TjFOS0s1MldvR1pqaXdRd3Vsc2JaYUhkK0VmTTcrZWVsVDF6VnYyM09PYlFEZk14Zit0WTFYCkRzVzM0NWtiSXBkck9wM0tMMjJ0Y1AwNU0xSkZJQ01SMi9WSzlOQzJ1UXZNRm9oV0twSVJySWJqbDR6VUVIQ0wKeEhqMzdXVGQzUDFVYXlzZDVRcWp2TlhaTjBVRGdOcks0OGEvUDcwbEVlT1V4WGw0VEp0akhqbUtsc1BaS2ZBcQowVGozUUEwTnlIeU5vandCY0NkRXJwa3pkTHhGWEd5d1FOa0JoZnhrSnlFQ2dZRUEvVFFIRVFQQUlhWWpjZzlYCnMrUllSUjdpZ3ovVXBnUGNvb1FnTVp6bWRXeVdYelZvWjlZTkFjSSs1cDZmbTVXK2pwcVkrM296bkxSSkVFK2QKcXIrclNna2RTWWVrSFQ4ZzJTOVlzVEN2bi8zNnkzOFhteXdpYlYxdnYvSCtuVXBOSW9RYlhBczhCZDJ1L3NHbgpFREJrUm5TNjhoSE1NVVA3eEVJd2laWEd2MnNDZ1lFQStpY1FJYW1rNzJpMTlNcTRTVk9qa0NDTWM1cmNoNG5ZCkl5RHRnR3g2L2d0VnZmOThEYnFSTjZFb3RUMFlkQ2RGV1g2M2dicFVMWlNEK3JLaGFpcEVMcEtzSUZqR3lMeXYKRkQ4THA4YnRudGoyYys0djBXNWxGTzVXQm1ldDZtbDBtQUNPbTZvQm5TcTA4Qkt4MmpnSHRTTTJxSDh0V1FQQgpDOS9jaDdOdE94Y0NnWUVBa0x3cXhla1U3S2NoWDlPeFdGMVFyOElsek15eDYyd050TUE5L3Q0blJqd2FBTFp3Cnhkb3ZlUy9sOE1IL2psb2NvVHR4ODE0NUhueFh2NEVqS1RXQzNrRXpncEtNbDBNOHJhbEkwNUIyODhla2txcEYKZmlmT1RpRzQvVW1CTjd2L041bTRZZmJ5Q3BCYnRiaFFuUXBzWjNIV1l3VVZhWnZvMEpqZFVlaFJ3WjBDZ1lFQQpseDhTTjhQc3lGVlIxMWpBakV2aS9DY3Rzb2xUd080ZGpOdFBuODNwWDZBcFpHYjc0cTliRzJoWTEyVFphUkp3CmF1aUtvK3lVL2hSQ3h5a3pLcGZ1S05TaTk4ZXFENHN0bWVXY2ZQZEloalk4YlR6djFtNEMwdXBKUGdWVW85Q2gKaDFLTzFLdVgzZ0wyM0RIdkVBM1pXaXl6MElkRU5ncDJqVjNvTkhMSkFuRUNnWUVBM01TQk5pNm02VGkrd2JxLwpPUmxoWHBUUmV0blAzL1FzWlI2Z21wcmdCZlJ3d0JpRW55TkxDbkF0bHk5K24vd0J4emlhWmxyR2JsWFlYdmdkClZXWUV5N1hlMVY3eE5nRnkzZTh3TWhjNjNzVmpJOGEzZVo2TG5yT09rZ0RHMDhNNHZ0RTIrOGt0enQ3VElXUlEKZHByRUQ3a09DLzlVYW1jY1d5RkRqam9nRGw4PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==

---

 

 

    [nsx_v3]

 

    # Set NSX API adaptor to NSX Policy API adaptor. If unset, NSX adaptor will

    # be set to the NSX Manager based adaptor.

    #policy_nsxapi = False

 

 

 

    # Path to NSX client certificate file. If specified, the nsx_api_user and

    # nsx_api_password options will be ignored. Must be specified along with

    # nsx_api_private_key_file option

    nsx_api_cert_file = /etc/nsx-ujo/nsx-cert/tls.crt

 

    # Path to NSX client private key file. If specified, the nsx_api_user and

    # nsx_api_password options will be ignored. Must be specified along with

    # nsx_api_cert_file option

    nsx_api_private_key_file = /etc/nsx-ujo/nsx-cert/tls.key

 

    # IP address of one or more NSX managers separated by commas. The IP

    # address should be of the form:

    # [<scheme>://]<ip_address>[:<port>]

    # If scheme is not provided https is used. If port is not provided port 80

    # is used for http and port 443 for https.

    nsx_api_managers = nsx1.xxx.local

    #nsx_api_user = []

    #nsx_api_password = []

    # If true, skip fatal errors when no endpoint in the NSX management cluster

    # is available to serve a request, and retry the request instead

    #cluster_unavailable_retry = False

 

    # Maximum number of times to retry API requests upon stale revision errors.

    #retries = 10

 

    # Specify one or a list of CA bundle files to use in verifying the NSX

    # Manager server certificate. This option is ignored if "insecure" is set

    # to True. If "insecure" is set to False and ca_file is unset, the system

    # root CAs will be used to verify the server certificate.

    #ca_file = /var/tmp/cert/nsx.cer

    

    # If true, the NSX Manager server certificate is not verified. If false the

    # CA bundle specified via "ca_file" will be used or if unset the default

    # system root CAs will be used.

    #insecure = False

    insecure = True

 

    # Name or UUID of the container ip blocks that will be used for creating

    # subnets. If name, it must be unique. If policy_nsxapi is enabled, it also

    # supports automatically creating the IP blocks. The definition is a comma

    # separated list: CIDR,CIDR,... Mixing different formats (e.g. UUID,CIDR)

    # is not supported.

    container_ip_blocks = 92f40f5d-9a5b-47b8-bf2c-a4d410279cdb

 

 

    # Name or UUID of the top-tier router for the container cluster network,

    # which could be either tier0 or tier1. If policy_nsxapi is enabled, should

    # be ID of a tier0/tier1 gateway.

    top_tier_router = 5c598ff8-085a-44ec-af8b-3beaed9204d2

 

    # Option to use a single-tier router for the container cluster network.

    # Set to True when a shared tier-1 is used.

    #single_tier_topology = False

    single_tier_topology = True

    # Name or UUID of the external ip pools that will be used only for

    # allocating IP addresses for Ingress controller and LB service. If

    # policy_nsxapi is enabled, it also supports automatically creating the ip

    # pools. The definition is a comma separated list: CIDR,IP_1-IP_2,...

    # Mixing different formats (e.g. UUID, CIDR&IP_Range) is not supported.

    #external_ip_pools_lb = []

 

    # Name or UUID of the NSX overlay transport zone that will be used for

    # creating logical switches for container networking. It must refer to an

    # already existing resource on NSX and every transport node where VMs

    # hosting containers are deployed must be enabled on this transport zone

    overlay_tz = de0af621-11e9-4f81-a6a7-fb6093f5acb0

 

    # Edge cluster ID needed when creating Tier1 router for loadbalancer

    # service. Information could be retrieved from Tier0 router

    edge_cluster = e186c942-5b0d-4260-9de8-3a319e03607d

 

    [coe]

 

    # Container orchestrator adaptor to plug in.

    #adaptor = kubernetes

 

    # Specify cluster for adaptor.

    cluster = k8scluster

 

    [k8s]

 

    # Kubernetes API server IP address.

    apiserver_host_ip = 10.253.4.10

 

    # Kubernetes API server port.

    apiserver_host_port = 6443

 

 

#### Find the image field and set it to the NCP image loaded in step 8

 

      serviceAccountName: ncp-svc-account

      containers:

        - name: nsx-ncp

          # Docker image for NCP

          image: registry.local/2.5.1.15287458/nsx-ncp-rhel:latest

 

 

### Open vSwitch config

    [nsx_node_agent]

 

    # Prefix of node /proc path to mount on nsx_node_agent DaemonSet

    #proc_mount_path_prefix = /host

 

 

 

 

    # The log level of NSX RPC library

    # Choices: NOTSET DEBUG INFO WARNING ERROR CRITICAL

    #nsxrpc_loglevel = ERROR

 

    # OVS bridge name

    ovs_bridge = br-int

 

    # The time in seconds for nsx_node_agent to wait for CIF config from

    # HyperBus before returning to CNI

    #config_retry_timeout = 300

 

    # The time in seconds for nsx_node_agent to backoff before re-using an

    # existing cached CIF to serve CNI request. Must be less than

    # config_retry_timeout.

    #config_reuse_backoff_time = 15

 

 

    # The OVS uplink OpenFlow port where to apply the NAT rules to.

    ovs_uplink_port = ens256
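
Note: nsx-node-agent expects the br-int bridge and the uplink port above to already exist on every node. A minimal manual sketch, assuming Open vSwitch is installed and ens256 is the NIC dedicated to container traffic (interface names are environment-specific):

ovs-vsctl add-br br-int
ovs-vsctl add-port br-int ens256 -- set Interface ens256 ofport_request=1
ip link set br-int up
ip link set ens256 up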

 

11. NSX-T Config

11.1 Logical Switch tag configuration

tag: k8scluster, scope: ncp/cluster

tag: k8s-master.oblab.local, scope: ncp/node_name

 

## The ncp/node_name tag must match the hostname of each node; check this on every server
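
For example, on the master:

hostname
## should print k8s-master.oblab.local, matching the ncp/node_name tag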

 

 

 

12. Apply ncp-rhel.yaml

kubectl apply -f ncp-rhel.yaml -n nsx-system

 

13. Apply complete
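
As a final check (assuming the default workloads from ncp-rhel.yaml, an nsx-ncp Deployment plus the nsx-node-agent DaemonSet), all pods should eventually reach Running:

kubectl get pods -n nsx-system -o wide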

 

