Minikube

To run, first check the version; these notes assume v0.28.2 :

 minikube version

On Linux :

 minikube start --kubernetes-version v1.10.0 --logtostderr --bootstrapper localkube

Dashboard

Deploy the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Both dashboard URLs below assume kubectl proxy is running on port 8001. The recommended URL uses https and may not work :

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

On Minikube this needs to be the non-https form :

http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/

Profiles

Profiles allow multiple Minikube installations to be set up side by side, for example dev, stage, qa and production profiles.

Note that port forwarding must be set up separately for each profile!

minikube profile stage

Show current config including profile :

minikube config view ->
- profile: stage
- WantReportError: true

Then check kubectl current context

kubectl config current-context ->
stage

To set up docker to use the Minikube Docker daemon, use :

 eval $(minikube docker-env)
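A possible build cycle against that daemon (the image name is an example; -u prints the matching unset commands) :

```shell
# Point the docker CLI at minikube's Docker daemon for this shell
eval $(minikube docker-env)

# Build straight into the cluster's image cache, so no push is needed;
# the deployment should then use imagePullPolicy IfNotPresent or Never
docker build -t my-db-image .

# Restore the shell's original docker environment
eval $(minikube docker-env -u)
```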

If there is a problem upgrading Minikube, delete the current cluster and, if necessary, the old Minikube configuration; this forces a re-download of the images.
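A possible cleanup sequence (the cache path is the Minikube default; adjust if yours differs) :

```shell
# Remove the current cluster entirely
minikube delete

# Drop the cached images so the next start re-downloads them
rm -rf ~/.minikube/cache
```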

If the deployment has a spec like :

"spec": {
    "volumes": [
      {
        "name": "pgdata",
        "hostPath": {
          "path": "/tmp/my-db",
          "type": "Directory"
        }
      },
      .....
      "containers": [
      {
        "name": "my-db",
        "image": "my-db-image",
        "ports": [
          {
            "name": "my-db",
            "containerPort": 5432,
            "protocol": "TCP"
          }
        ],
        "env": [
          {
            "name": "PGDATA",
            "value": "/var/lib/postgresql/data/pgdata"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "pgdata",
            "mountPath": "/var/lib/postgresql/data/pgdata"
          },

Create a volume :

minikube ssh
sudo mkdir /tmp/my-db

Kubernetes And Minikube

Monitoring

A repeated watch saves having to run the dashboard :

watch -n 3 kubectl get pods -n sa

Creating The Config Maps

kubectl create configmap --namespace=sa config-name --from-env-file=config-maps/name.properties
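For reference, a minimal env-file the command above works against (the keys and values here are made up) :

```shell
# Write an example properties file: one KEY=value per line
mkdir -p config-maps
cat > config-maps/name.properties <<'EOF'
DB_HOST=my-db
DB_PORT=5432
EOF

# Create the config map from it (a no-op without a running cluster)
kubectl create configmap --namespace=sa config-name \
  --from-env-file=config-maps/name.properties || true
```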

Installing knative On Minikube

See https://github.com/knative/docs/blob/master/install/Knative-with-Minikube.md

Start minikube :

minikube start --memory=8192 --cpus=4 \
  --kubernetes-version=v1.11.3 \
  --vm-driver=kvm2 \
  --bootstrapper=kubeadm \
  --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"

See Also

Istio https://istio.io/docs/concepts/what-is-istio/

Kubernetes

Pods

Get a pod list then a terminal on a pod

kubectl get pods --namespace=kube-system

kubectl exec --namespace=kube-system -it kube-dns-3092422022-sk465 -- /bin/bash

GCloud

Public IP Address

GCloud only permits a single public IP address. If the chart is redeployed, the public IP may no longer be associated with the proxy service.

To reset the public IP: get the service, edit the load balancer IP, then reapply.

kubectl get service proxy-public --namespace=int -o yaml > proxy-public.yaml

then edit the YAML so that it reads :

sessionAffinity: None
type: LoadBalancer
loadBalancerIP: public-ip

then update the service

kubectl apply -f proxy-public.yaml
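To confirm the address was picked up, the service status can be queried like this (namespace and name as above) :

```shell
# Read the load balancer's external IP from the service status
IP=$(kubectl get service proxy-public --namespace=int \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)

# Empty until GCloud finishes associating the address
echo "${IP:-pending}"
```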

Disks

Create a disk like :

gcloud compute disks create --size=50GB --zone=europe-west1-b disk-name
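The disk can then be referenced from a pod spec in the same JSON style as the deployment above; a sketch (pdName must match the disk name, and ext4 is an assumption) :

```
"volumes": [
  {
    "name": "pgdata",
    "gcePersistentDisk": {
      "pdName": "disk-name",
      "fsType": "ext4"
    }
  }
]
```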

Kubernetes Dashboard Against GCloud

To run the dashboard on the cloud system :

kubectl proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*'

This seems to work when a plain kubectl proxy fails.

kubectl

Get the default cluster role for discovery

kubectl get clusterroles system:discovery -o yaml

Create a new service account

kubectl -n sa create sa kube-meta

After this, check that the secret has been created :

kubectl get secrets
NAME                    TYPE                                  DATA      AGE
default-token-k9hjv     kubernetes.io/service-account-token   3         6d
istio.default           istio.io/key-and-cert                 3         6d
istio.kube-meta         istio.io/key-and-cert                 3         2h
kube-meta-token-t8v25   kubernetes.io/service-account-token   3         2h

kubectl describe secrets/kube-meta-token-t8v25 -n sa
Name:         kube-meta-token-t8v25
Namespace:    sa
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kube-meta
              kubernetes.io/service-account.uid=9c0c01d4-d5e1-11e8-83ee-080027c7cf2d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  2 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlLW1ldGEtdG9rZW4tdDh2MjUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZS1tZXRhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWMwYzAxZDQtZDVlMS0xMWU4LTgzZWUtMDgwMDI3YzdjZjJkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OnNhOmt1YmUtbWV0YSJ9.czUZtSdZNVr-WUfrmWi5qv5bgBU_55IVIigkgzjd3b8DsAGmxm2XP2-hRMeoipSaqTAxjQYcq4rXh9yxSs8e4rLMIbz2Yqqtui18eVepfAjnvCFWe7vMbZinloD8e9utErV6VRBX7WQNHkWzJ9le9FUDxwxk8fPasVPAn_j0vL8GerV1uYy1JK-9eWtxc7DX5IwmjF_YTrN9O2ir62cPdnzhPyJ3kawHjlhq8zTZ1IsV5GOkWL1B_0HxvW9x1TzybhjGNbsoRhnaUkV7tQ7KVpFxcFJnb735XFsSqnX9NZEJlzW7xjKuLD9UBsPlNpLrt1g7mQa0bUw3IYyZW1JrzQ
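One way to use that token is to call the API server directly (the server address here is a made-up example; the token value comes from the describe output above) :

```shell
TOKEN="<token from the secret above>"   # paste the real value here
SERVER="https://192.168.99.100:8443"    # example minikube API endpoint

# List pods in the sa namespace, authenticating with the token
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$SERVER/api/v1/namespaces/sa/pods" || true   # fails harmlessly with no cluster
```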

List the Istio authentication policies :

kubectl get policies.authentication.istio.io --all-namespaces

There can only be one mesh-scoped policy, and it applies to the whole mesh :

kubectl get meshpolicies.authentication.istio.io

Check for destination rules

kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local