Kubernetes at a glance
WARN! Kubernetes is not for you!
External Links
- About Desired State management   @[https://www.youtube.com/watch?v=PH-2FfFD2PU]

- Online training:

- @[https://kubernetes.io/]
- @[https://kubectl.docs.kubernetes.io/]   ← Book
- @[https://kubernetes.io/docs/reference/] (Generated from source)
- @[https://kubernetes.io/docs/reference/glossary/]
- @[https://kubernetes.io/docs/reference/tools/]

- Most voted questions k8s on serverfault:
- Most voted questions k8s on stackoverflow:
- Real app Examples:

ºk8s Issues:º
- @[https://kubernetes.io/docs/reference/issues-security/issues/]
- @[https://kubernetes.io/docs/reference/issues-security/security/]

ºk8s Enhancements/Backlogº
- @[https://github.com/kubernetes/enhancements]

- k8s orchestrates pools of CPUs, storage and networks

Kubernetes Cluster Components:
│ºetcdº:                      │    ºMasterº:                         │ºnodeº (Pool):
│- High availability key/value│    (Or cluster of masters)           │- (VM) computer serving as 
│DB used to store the cluster │     - manages the cluster.           │  a worker machine where pods
│metadata, service registrat. │     - keeps tracks of:               │  are scheduled for execution.   takes a set of PodSpecs and ensures
│and discovery                │       - desired state.               │- Contains k8s services:         that the described containers are
└─────────────────────────────┘       ─ application scaling          │  ─ Kubelet agent        ←------ running and healthy.
                                      - rolling updates              │  - Container runtime Iface
                                    - Raft consensus used in         │    (Docker or rkt)
                                      multimaster mode               │- kube-proxy:(Gº"SDN enabler"º)
                                      (requires 1,3,5,... masters    │  simple or round-robin TCP/UDP 
                                     @[https://raft.github.io/])     │  stream forwarding across set 
                                     └────────┬─────────────────┘    │  of back-ends.
                                              │                      └─────────────────────────────
ºkube─apiserver:º       │ºkube─controller─managerº             │ºkube-schedulerº               ºcloud─controller─managerº
 ─ (REST) Wrapper around│ ─Oº:EMBEDS K8S CORE CONTROL LOOPS!!!º│ ─ Assigns workloads to nodes  ─ k8s 1.6Rºalpha featureº
   k8s objects          │ ─ºhandles a number of controllers:º  │ ─ tracks host-resource usage  ─ runs controllers 
 - Listen for management│   ─ regulating the state of  cluster │ ─ tracks total resources        interacting with underlying 
   tools (kubectl,      │   ─ perform routine tasks            │   available per server and      cloud provider
   dashboard,...)       │     ─ ensures number of replicas for │   resources allocated to      ─ affected controllers
                        │       a service,                     │   existing workloads assigned   ─ Node Controller
                        │     ─ ...                            │   to each server                ─ Route Controller
                        │                                      │ ─ Manages availability,         ─ Service Controller
                         │                                      │   performance and capacity      ─ Volume Controller
                        │                                      │
                ºfederation-apiserveRº                   ºfederation-controller-managerº
                - API server for federated               - embeds the core control loops shipped
                  clusters                                 with Kubernetes federation.

CLUSTER  1 ←───→ N Namespace  1 ←─┬─→  N Deployment       1 ←───→ N Pods
───────           "Virtual        │      "Application"           · Minimum Unit of Exec/Sched.
─ etcd             cluster"       │                              · At creation time resource
─ Master          Can be assigned │      ─ Load balancing          limits for CPU/memory are
─ Node Pool       max Memory/CPU  │        with(out) session       defined (min and max-opt)
                  Quotas          │        affinity              · Usually Pod 1←→1 Container
                                  │      ─ Persistent storage      (rarely 2 when such 
                                  │      ─ Scheduling of pods into  containers are closely
                                  │        node pool                related to make Pod work)
                                  │      ─ Rolling deployments   ·RºPods are ephemeral.º
                                  │                                 Deploy Controller+
                                  └──  N "Other" Controllers        Pods is highly available.
                                         (ReplicaSet, ...)
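The hierarchy above can be sketched as a minimal Deployment manifest. This is an illustrative sketch (names, labels and resource values are invented for the example), not a production manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
  namespace: my-namespace # "Virtual cluster" the Deployment lives in
spec:
  replicas: 3             # Deployment keeps 3 Pods running
  selector:
    matchLabels:
      app: my-app
  template:               # Pod template (usually 1 Pod ←→ 1 container)
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        resources:
          requests:       # resource floor, set at creation time
            cpu: 100m
            memory: 64Mi
          limits:         # resource ceiling (optional)
            cpu: 500m
            memory: 128Mi
```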

REF: @[https://kubernetes.io/docs/Architecture/architecture_map.html?query=service+registration+discovery]
   ─ Describes the resources available ─┐       │Node Status│
     ─ CPU                              │       ├───────────┤
     ─ memory                           │       │Addresses  ←─(HostName, ExternalIP, InternalIP)
      ─ max. number of pods supported    └───────→Capacity   │                        ^^^^^^^^^^
    ┌───────────────────────────────────────────→Condition  │                        Typically visible
    status ofºallºRunning nodes.                │Info       ← General info. like     within cluster
    Node               Description              └───────────┘ kernel/k8s/docker ver,
    -----------------  -------------------------------------  OS name,...
    OutOfDisk          True → insufficient free space on the
                              node for adding new pods
    Ready              True → node is healthy and ready to accept pods
                        if not True for longer than ºpod─eviction─timeoutº
                        (an argument to kube-controller-manager,
                        5 minutes by default), the node controller
                        schedules all Pods on the node for deletion
    MemoryPressure     True → pressure exists on the node memory
    PIDPressure        True → pressure exists on the processes
    DiskPressure       True → pressure exists on the disk size
    NetworkUnavailable True → node network not correctly configured


  o Cloud connector              _                 _  | o Cloud connector                   _                 _
                             ___| | ___  _   _  __| | |                                 ___| | ___  _   _  __| |
kube─controller─managero──→ / __| |/ _ \| | | |/ _` | |   cloud-controller─managero──→ / __| |/ _ \| | | |/ _` |
                    ^      | (__| | (_) | |_| | (_| | |                         ^     | (__| | (_) | |_| | (_| |
                    │       \___|_|\___/ \__,_|\__,_| | kube─controller─manager │      \___|_|\___/ \__,_|\__,_|
                    │                  ^              |       ^                 │
                    │                  │              |       │                 │
                    │                  │              |       └─────────────────┤
                    v                  │  node        |                         │
  etcd  ←───→ kube─apiserver ←──┬─┐  ┌─o────────┐     |                         v                      node
                                │ └──→kubelet   │     |   etcd  ←───────→ kube─apiserver ←──┬─┐ ┌──────────┐
                    ^           │    │          │     |                                     │ └─→kubelet   │
                    │           │    │          │     |                         ^           │   │          │
                    v           │    │          │     |                         │           │   │          │
              kube─scheduler    └────→kube─proxy│     |                         v           │   │          │
                                     └──────────┘     |                   kube─scheduler    └───→kube─proxy│
                                                      |                                         └──────────┘
Object Management
- @[https://kubernetes.io/docs/tasks/tools/install-kubectl/]
- @[https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/]
- @[https://kubernetes.io/docs/reference/kubectl/cheatsheet/]

º┌──────────┐º                                ┌───────────┐
º│k8s object│º                             ┌─→│metadata   │
º├──────────┤º                             │  ├───────────┤
º│kind      │º one of the GºResource typesº│  │name       │← maps to /api/v1/pods/name
º│metadata  │º←────────────────────────────┘  │UID        │← Distinguish between historical
º│spec      │← desired stateº                 │namespace  │  occurrences of similar object
º│status    │← present stateº                 │labels     │
º└──────────┘º                                │annotations│
                                         metadata is organized 
                                         around the concept of an     
                                         application. k8s does NOT    
                                         enforce a formal notion of   
                                         application. Apps are        
                                          described through metadata   
                                         in a loose definition.             
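The four top-level fields map onto every manifest. A minimal sketch (illustrative Pod; all names invented), where ºstatusº is populated by the API server, never written by the user:

```yaml
apiVersion: v1
kind: Pod               # one of the Resource types
metadata:
  name: mypod           # maps to /api/v1/namespaces/default/pods/mypod
  namespace: default
  labels:
    app: demo           # loose, app-oriented organization
spec:                   # desired state, written by the user
  containers:
  - name: main
    image: nginx
# status (present state) is filled in by the cluster itself
```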

etcd                 kubernetes                (/openapi/v2)       cli management:
(serialized  ←────── objects    ←────────────────→ API    ←───→   ºkubectlº'action'      Gº'resource'º
 API resource    - Represent API resources        Server *1                 ^^^^^^
 states)           (persistent entities                          ºget     º: list resources 
                    in a cluster)                                ºdescribeº: show details (and events for pods)
                 - They can describe:                            ºlogs    º: print container
                   - apps running on nodes                                   logs
                   - resources available to                      ºexec    º: exec command on
                     a given app                                             container
                   - policies around apps                        ºapply   º: creates and updates resources
                     (restart, upgrades, ...)                     ...
                                                                  common kubectl flags:
                                                                  -o wide

*1: @[https://kubernetes.io/docs/concepts/overview/kubernetes-api/]

    clusters            │podtemplates               │statefulsets
(cs)componentstatuses   │(rs)replicasets            │(pvc)persistentvolumeclaims
(cm)configmaps          │(rc)replicationcontrollers │(pv) persistentvolumes
(ds)daemonsets          │(quota)resourcequotas      │(po) pods
(deploy)deployments     │cronjob                    │(psp)podsecuritypolicies
(ep)endpoints           │jobs                       │secrets
(ev)event               │(limits)limitranges        │(sa)serviceaccount
(hpa)horizon...oscalers │(ns)namespaces             │(svc)services
(ing)ingresses          │networkpolicies            │storageclasses
                        │(no)nodes                  │thirdpartyresources

GºRESOURCE TYPES EXTENDEDº @[https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go]
001	Volume                               021	FlexPersistentVolumeSource           041	DownwardAPIProjection
002	VolumeSource                         022	FlexVolumeSource                     042	AzureFileVolumeSource
003	PersistentVolumeSource               023	AWSElasticBlockStoreVolumeSource     043	AzureFilePersistentVolumeSource
004	PersistentVolumeClaimVolumeSource    024	GitRepoVolumeSource                  044	VsphereVirtualDiskVolumeSource
005	PersistentVolume                     025	SecretVolumeSource                   045	PhotonPersistentDiskVolumeSource
006	PersistentVolumeSpec                 026	SecretProjection                     046	PortworxVolumeSource
007	VolumeNodeAffinity                   027	NFSVolumeSource                      047	AzureDiskVolumeSource
008	PersistentVolumeStatus               028	QuobyteVolumeSource                  048	ScaleIOVolumeSource
009	PersistentVolumeList                 029	GlusterfsVolumeSource                049	ScaleIOPersistentVolumeSource
010	PersistentVolumeClaim                030	GlusterfsPersistentVolumeSource      050	StorageOSVolumeSource
011	PersistentVolumeClaimList            031	RBDVolumeSource                      051	StorageOSPersistentVolumeSource
012	PersistentVolumeClaimSpec            032	RBDPersistentVolumeSource            052	ConfigMapVolumeSource
013	PersistentVolumeClaimCondition       033	CinderVolumeSource                   053	ConfigMapProjection
014	PersistentVolumeClaimStatus          034	CinderPersistentVolumeSource         054	ServiceAccountTokenProjection
015	HostPathVolumeSource                 035	CephFSVolumeSource                   055	ProjectedVolumeSource
016	EmptyDirVolumeSource                 036	SecretReference                      056	VolumeProjection
017	GCEPersistentDiskVolumeSource        037	CephFSPersistentVolumeSource         057	KeyToPath
018	ISCSIVolumeSource                    038	FlockerVolumeSource                  058	LocalVolumeSource
019	ISCSIPersistentVolumeSource          039	DownwardAPIVolumeSource              059	CSIPersistentVolumeSource
020	FCVolumeSource                       040	DownwardAPIVolumeFile                060	CSIVolumeSource

061	ContainerPort         081	Handler                           101	PreferredSchedulingTerm
062	VolumeMount           082	Lifecycle                         102	Taint
063	VolumeDevice          083	ContainerStateWaiting             103	Toleration
064	EnvVar                084	ContainerStateRunning             104	PodReadinessGate
065	EnvVarSource          085	ContainerStateTerminated          105	PodSpec
066	ObjectFieldSelector   086	ContainerState                    106	HostAlias
067	ResourceFieldSelector 087	ContainerStatus                   107	Sysctl
068	ConfigMapKeySelector  088	PodCondition                      108	PodSecurityContext
069	SecretKeySelector     089	PodList                           109	PodDNSConfig
070	EnvFromSource         090	NodeSelector                      110	PodDNSConfigOption
071	ConfigMapEnvSource    091	NodeSelectorTerm                  111	PodStatus
072	SecretEnvSource       092	NodeSelectorRequirement           112	PodStatusResult
073	HTTPHeader            093	TopologySelectorTerm              113	Pod
074	HTTPGetAction         094	TopologySelectorLabelRequirement  114	PodTemplateSpec
075	TCPSocketAction       095	Affinity                          115	PodTemplate
076	ExecAction            096	PodAffinity                       116	PodTemplateList
077	Probe                 097	PodAntiAffinity                   117	ReplicationControllerSpec
078	Capabilities          098	WeightedPodAffinityTerm           118	ReplicationControllerStatus
079	ResourceRequirements  099	PodAffinityTerm                   119	ReplicationControllerCondition
080	Container             100	NodeAffinity                      120	ReplicationController

121	ReplicationControllerList 141	DaemonEndpoint       161	Preconditions             181	ResourceQuotaSpec
122	ServiceList               142	NodeDaemonEndpoints  162	PodLogOptions             182	ScopeSelector
123	SessionAffinityConfig     143	NodeSystemInfo       163	PodAttachOptions          183	ScopedResourceSelerRequirement
124	ClientIPConfig            144	NodeConfigStatus     164	PodExecOptions            184	ResourceQuotaStatus
125	ServiceStatus             145	NodeStatus           165	PodPortForwardOptions     185	ResourceQuota
126	LoadBalancerStatus        146	AttachedVolume       166	PodProxyOptions           186	ResourceQuotaList
127	LoadBalancerIngress       147	AvoidPods            167	NodeProxyOptions          187	Secret
128	ServiceSpec               148	PreferAvoidPodsEntry 168	ServiceProxyOptions       188	SecretList
129	ServicePort               149	PodSignature         169	ObjectReference           189	ConfigMap
130	Service                   150	ContainerImage       170	LocalObjectReference      190	ConfigMapList
131	ServiceAccount            151	NodeCondition        171	TypedLocalObjectReference 191	ComponentCondition
132	ServiceAccountList        152	NodeAddress          172	SerializedReference       192	ComponentStatus
133	Endpoints                 153	NodeResources        173	EventSource               193	ComponentStatusList
134	EndpointSubset            154	Node                 174	Event                     194	SecurityContext
135	EndpointAddress           155	NodeList             175	EventSeries               195	SELinuxOptions
136	EndpointPort              156	NamespaceSpec        176	EventList                 196	WindowsSecurityContextOptions
137	EndpointsList             157	NamespaceStatus      177	LimitRangeItem            197	RangeAllocation
138	NodeSpec                  158	Namespace            178	LimitRangeSpec
139	NodeConfigSource          159	NamespaceList        179	LimitRange
140	ConfigMapNodeConfigSource 160	Binding              180	LimitRangeList
- @[https://kubernetes.io/docs/reference/kubectl/jsonpath/]

$ source <(kubectl completion bash)  # kubectl autocompletion
$ source <(kubectl completion zsh)
# use multiple kubeconfig files
$ KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 \
kubectl config view # Show Merged kubeconfig settings.

$ kubectl config current-context
$ kubectl config use-context my-cluster-name

$ kubectl run nginx --image=nginx
$ kubectl explain pods,svc

$ kubectl edit svc/my-service-1  # ← Edit resource

ºkubectl Field Selectorsº
- filter k8s objects based on the value of one or more object fields.
- the possible fields depend on the object type/kind, but
  all resource types support 'metadata.name' and 'metadata.namespace'
  $ kubectl get pods --field-selector       metadata.name==my-service,spec.restartPolicy=Always
  $ kubectl get pods --field-selector  metadata.namespace!=default,status.phase==Running
Cluster Access
Object kustomization
Building Blocks
ºNamespacesº
- Provide a scope for names
- Namespaces are intended for use in environments with many users spread
  across multiple teams, or projects. For clusters with a few to tens of users,
  you should not need to create or think about namespaces at all. Start
  using namespaces when you need the features they provide.
- use labels, not namespaces, to distinguish resources within the same namespace
- Services are created with a DNS entry (scoped to their namespace)
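As a sketch of the per-namespace DNS convention (assuming the default `cluster.local` cluster domain; service and namespace names are illustrative), a Service name expands to:

```shell
# Service "my-service" in namespace "my-namespace"
svc=my-service
ns=my-namespace
fqdn="${svc}.${ns}.svc.cluster.local"
echo "${fqdn}"   # pods in OTHER namespaces must use this full form
```

Pods in the same namespace can reach the Service by its short name (`my-service`).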

$ kubectlºget namespacesº
NAME          STATUS    AGE
default       Active    1d
kube-system   Active    1d

$ kubectlºcreate namespaceºmy-namespace  # ← create namespace

$ kubectlº--namespace=my-namespaceº \ ← set NS for request 
    run nginx --image=nginx
$ kubectlºconfig set-contextº\           ← permanently save the NS
    $(kubectl ºconfig current-contextº) \ ← shell syntax sugar: Exec command and take output
    --namespace=MyFavouriteNS                                   as input for next command
$ kubectl config view | \
    grep namespace: # Validate

Manage Mem./CPU/API Resrc.
Configure Default Memory Requests and Limits for a Namespace
Configure Default CPU Requests and Limits for a Namespace
Configure Minimum and Maximum Memory Constraints for a Namespace
Configure Minimum and Maximum CPU Constraints for a Namespace
Configure Memory and CPU Quotas for a Namespace
Configure a Pod Quota for a Namespace
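The quota tasks above all hang off the same object kind; a minimal ResourceQuota sketch (illustrative name/namespace/values) combining memory/CPU and Pod quotas:

```yaml
# limits the total resources all Pods in my-namespace may claim
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-namespace
spec:
  hard:
    pods: "10"            # Pod quota for the namespace
    requests.cpu: "2"     # sum of all CPU requests
    requests.memory: 2Gi  # sum of all memory requests
    limits.cpu: "4"       # sum of all CPU limits
    limits.memory: 4Gi    # sum of all memory limits
```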

Namespaces Walkthrough
Share a Cluster with Namespaces
ºSecretsº
- (Handling passwords, OAuth tokens, ssh keys, ... in k8s)

- Users (and system) create secrets
- At runtime Pods reference the secrets in two ways:
  - as files in a mounted volume
  - as ENVIRONMENT variables.
- Also used by kubelet when pulling images for the pod

ºBuilt-in Secretsº
- Service Accounts automatically create and attach secrets with API Credentials
- Kubernetes automatically creates secrets which contain credentials for 
  accessing the API and it automatically modifies your pods to use this type 
  of secret.

ºCreating User Secretsº
│       ALTERNATIVE 1                           │ ALTERNATIVE 2                                    │ALTERNATIVE 3 
│ ºSTEP 1º: Create un-encrypted secrets locally:│ºSTEP 1:ºCreate Secret with data                  │ºSTEP 1º: Create Secret with stringData
│   $ echo -n 'admin'        > ./username.txt   │   cat << EOF > secret.yaml                       │apiVersion: v1
│   $ echo -n '1f2d1e2e67df' > ./password.txt   │   apiVersion: v1                                 │kind:ºSecretº
│              ^^^^^^^^^^^^                     │   kind:ºSecretº                                  │metadata:
│              special chars. must be           │   metadata:                                      │  name: mysecret
│              '\' escaped.                     │     name: mysecret                               │type: Opaque
│                                               │   type: Opaque                                   │stringData:
│                                               │   data:                                          │  config.yaml: |-
│                                               │     username: $(echo -n 'admin'        | base64) │    apiUrl: "https://my.api.com/api/v1"
│                                               │     password: $(echo -n '1f2d1e2e67df' | base64) │    username: admin
│                                               │   EOF                                            │    password: 1f2d1e2e67df
│                                               │                                                  │ 
│ ºSTEP 2:º Package into a Secret k8s object    │ºSTEP 2:º Apply                                   │ºSTEP 2:ºApply                   
│   $ kubectlºcreate secretº\                   │  $ kubectl apply -f ./secret.yaml                │  $ kubectl apply -f ./secret.yaml
│     genericGºdb-user-passº \                  │                                                  │                        
│     --from-file=./username.txt \              │                                                  │  
│     --from-file=./password.txt                │                                                  │                     

│ºSTEP 3:º Check that Secret object has been ┌────────────────────────────────────────────
│  properly created.                         │ ºUsing the Secrets in a Podº
│  $ kubectl get secrets                     │                                   Control secret path with items:         Consume as ENV.VARS
│  → NAME           TYPE    DATA  AGE        │ apiVersion: v1                  │ apiVersion: v1                        | apiVersion: v1
│  → db-user-pass   Opaque  2     51s        │ kind:ºPodº                      │ kind:ºPodº                            | kind: Pod
│                                            │ metadata:                       │ metadata:                             | metadata:
│  $ kubectl describe secrets/db-user-pass   │   name: mypod                   │   name: mypod                         |   name: secret-env-pod
│  → Name:          Gºdb-user-passº          │ spec:                           │ spec:                                 | spec:
│  → Namespace:       default                │   containers:                   │   containers:                         |  containers:
│  → Labels:          <none>           │   ─ name: mypod                 │   - name: mypod                       |  - name: mycontainer
│  → Annotations:     <none>           │     image: redis                │     image: redis                      |   image: redis
│  →                                         │     volumeMounts:               │     volumeMounts:                     |   env:
│  → Type:            Opaque                 │     ─ name:ºfooº                │     - name:ºfooº                      |    - name: SECRET_USERNAME
│  →                                         │       mountPath: "/etc/foo"     │       mountPath: "/etc/foo"           |     valueFrom:
│  → Data                                    │       readOnly:ºtrueº           │       readOnly: true                  |      secretKeyRef:
│  → ====                                    │   volumes:                      │   volumes:                            |       name: Gºdb─user─passº
│  →ºpassword.txt:º   12 bytes               │   ─ name:ºfooº                  │   - name:ºfooº                        |    - name: SECRET_PASSWORD
│  →ºusername.txt:º   5 bytes                │     secret:                     │     secret:                           |    - name: SECRET_PASSWORD
                                             │       secretName:Gºdb─user─passº│       secretName:Gºdb─user─passº      |     valueFrom:
                                             │       defaultMode: 256          │       items:                          |      secretKeyRef:
                                             │                     ^           │       - key: username                 |       name: Gºdb─user─passº
                                             │                     |           │         path: my-group/my-username    |       key: password
                                                                   |                           ^^^^^^^^^^^^^^^^^^^^
                                                            JSON does NOT support     · username will be seen in container as:
                                                            octal notation.             /etc/foo/my-group/my-username
                                                                     256 = 0400       · password secret is not projected
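Note that the 'data' values of a Secret (as built in ALTERNATIVE 2) are only base64-encoded, not encrypted; the encoding can be reproduced and reversed locally:

```shell
# base64-encode a secret value the way ALTERNATIVE 2 does
printf '%s' 'admin' | base64               # → YWRtaW4=
# ...and decode it back: this is encoding, NOT encryption
printf '%s' 'YWRtaW4=' | base64 --decode   # → admin
```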

Ex 2:
SECRET CREATION                                      | SECRET USAGE
$ kubectl create secret generic \                    | apiVersion: v1
Gºssh-key-secretº \                                  | kind: Pod
  --from-file=ssh-privatekey=/path/to/.ssh/id_rsa \  | metadata:
  --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub |   name: secret-test-pod
                                                     |   labels:
                                                     |     name: secret-test
                                                     | spec:
                                                     |   volumes:
                                                     |   - name: secret-volume
                                                     |     secret:
                                                     |       secretName:Gºssh-key-secretº
                                                     |   containers:
                                                     |   - name: ssh-test-container
                                                     |     image: mySshImage
                                                     |     volumeMounts:
                                                     |     - name: secret-volume
                                                     |       readOnly: true
                                                     |       mountPath: "/etc/secret-volume"
                                                     |                  ^^^^^^^^^^^^^^^^^^^^
                                                     |          secret files available visible like:
                                                     |          /etc/secret-volume/ssh-publickey
                                                     |          /etc/secret-volume/ssh-privatekey

FEATURE STATE: Kubernetes v1.13 beta
You can enable encryption at rest for secret data, so that secrets are not stored in etcd in the clear.

ConfigMaps: decouple configuration artifacts from image content

ºCreate a ConfigMapº

$ kubectl create configmap \
   'map-name' \
   'data-source'   ← data-source: directories, files or literal values
                                   each translates to a key-value pair in the ConfigMap, where
                                  key   = file_name or key               provided on the cli
                                  value = file_contents or literal_value provided on the cli
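The file→key/value mapping can be checked locally before creating the ConfigMap (the `game.properties` file below is a made-up sample):

```shell
# create a sample data-source file
printf 'enemies=aliens\nlives=3\n' > game.properties
key=$(basename game.properties)   # ConfigMap key   = file name
value=$(cat game.properties)      # ConfigMap value = whole file contents
echo "key=${key}"                 # → key=game.properties
```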

You can use kubectl describe or kubectl get to retrieve information about a ConfigMap.

ConfigMapºfrom config-fileº:
┌───────────────────────────── ┌──────────────────────────────────────
│ºSTEP 0:º Input to config map │ºSTEP 1:º create ConfigMap object
│ ...configure─pod─container/  │ $ kubectl create configmap \             
│    └─ configmap/             │   game-config                            
│       ├─ºgame.propertiesº    │   --from-file=configmap/                 
│       └─º  ui.propertiesº    │               ^^^^^^^^^^                 
│                              │               combines the contents of   
│                              │               all files in the directory 
│ºSTEP 2:º check STEP 1
│$ kubectlºget configmapsºgame-config -o yaml                     │$ kubectlºdescribe configmapsºgame-config
│→ apiVersion: v1                                                 │→ 
│→ kind: ConfigMap                                                │→ Name:           game-config
│→ metadata:                                                      │→ Namespace:      default
│→   creationTimestamp: 2016-02-18T18:52:05Z                      │→ Labels:         <none>
│→   name: game-config                                            │→ Annotations:    <none>
│→   namespace: default                                           │→ 
│→   resourceVersion: "516"                                       │→ Data
│→   selfLink: /api/v1/namespaces/default/configmaps/game-config  │→ ====
│→   uid: b4952dc3-d670-11e5-8cd0-68f728db1985                    │→ºgame.properties:º 158 bytes
│→ data:                                                          │→ºui.properties:  º  83 bytes
│→  ºgame.properties:º|
│→     enemies=aliens
│→     lives=3                                  ┌───────────────────────────────────
│→     enemies.cheat=true                       │ºSTEP 3:ºUse in Pod container*
│→     enemies.cheat.level=noGoodRotten         │ apiVersion: v1
│→     secret.code.passphrase=UUDDLRLRBABAS     │ kind: Pod
│→     secret.code.allowed=true                 │ metadata:
│→     secret.code.lives=30                     │   name: dapi-test-pod
│→  ºui.properties:º|                           │ spec:
│→     color.good=purple                        │  containers:
│→     color.bad=yellow                         │   - name: test-container
│→     allow.textmode=true                      │     ...
│→     how.nice.to.look=fairlyNice              │     env:
                                                │     º- name: ENEMIES_CHEAT     º
                                                │     º  valueFrom:              º
                                                │     º    configMapKeyRef:      º
                                                │     º      name: game-config   º
                                                │     º      key: enemies.cheat  º
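Instead of referencing keys one by one with valueFrom, every key of the ConfigMap can be imported at once with `envFrom` (a sketch: each key becomes an environment variable of the same name):

```yaml
    # alternative to per-key valueFrom:
    # imports every key of game-config as an env. var
    envFrom:
    - configMapRef:
        name: game-config
```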

ConfigMapºfrom env-fileº:
│ºSTEP 0:º Input to config map       │ºSTEP 1:º create ConfigMap object
│ ...configure─pod─container/        │ $ kubectl create configmap \             
│    └─ configmap/                   │   game-config-env-file \
│       └─ºgame-env-file.propertiesº │   --from-env-file=game-env-file.properties
│          ^^^^^^^^^^^^^^^^^^^^^^^^
│          game-env-file.properties
│          enemies=aliens            │ ºSTEP 2:º Check STEP 1
│          lives=3                   │ $ kubectl get configmap game-config-env-file -o yaml
│          allowed="true"            │ → apiVersion: v1
                                     │ → kind: ConfigMap
                                     │ → metadata:
                                     │ →   creationTimestamp: 2017-12-27T18:36:28Z
                                     │ →   name: game-config-env-file
                                     │ →   namespace: default
                                     │ →   resourceVersion: "809965"
                                     │ →   selfLink: /api/v1/namespaces/default/configmaps/game-config-env-file
                                     │ →   uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8
                                     │ → data:
                                     │ →   allowed: '"true"'
                                     │ →   enemies: aliens
                                     │ →   lives: "3"

NOTE: kubectl create configmap ... --from-file=Bº'my-key-name'º='path-to-file'
      will create the data under:
      → ....
      → data:
      → Bºmy-key-name:º |
      →     key1=value1     ← file contents stored verbatim as a single string value
      →     ...

ºConfigMaps from literal valuesº
º STEP 1º                                     | º STEP 2º
$ kubectl create configmap special-config \   | → ...
   --from-literal=special.how=very \          | → data:
   --from-literal=special.type=charm          | →   special.how: very
                                              | →   special.type: charm
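Besides referencing individual keys with configMapKeyRef (STEP 3 above), all keys of a ConfigMap can be injected at once with envFrom. A minimal sketch reusing the special-config map from STEP 1 (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo                      # hypothetical pod name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]     # print the injected variables
    envFrom:
    - configMapRef:
        name: special-config              # every key becomes an env var
  restartPolicy: Never
```

Each ConfigMap key (special.how, special.type) appears as an environment variable of the same name inside the container.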


Container: (lightweight and portable) executable image that contains software and
           all of its dependencies.
NOTE: 1 Pod = one or moreºrunningºcontainers.

ºLifecycle Hooksº
 - PostStart Hook
 - PreStop   Hook
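A hedged sketch of both hooks attached to a container (pod name, image and handler commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo                    # hypothetical name
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:                          # runs right after the container is created
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:                            # runs just before the container is terminated
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit"]
```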

ºtype Container struct {º
    Name string
    Image string
    Command []string        
    Args []string           
    WorkingDir string       
    Ports []ContainerPort   
    EnvFrom []EnvFromSource 
    Env []EnvVar            
    Resources ResourceRequirements
    VolumeMounts []VolumeMount
    VolumeDevices []VolumeDevice
    LivenessProbe *Probe         ← kubelet exec the probes
    ReadinessProbe *Probe          (ExecAction, TCPSocketAction, HTTPGetAction)
    Lifecycle *Lifecycle
    TerminationMessagePath string
    TerminationMessagePolicy TerminationMessagePolicy
    ImagePullPolicy PullPolicy
    SecurityContext *SecurityContext
    Stdin bool
    ...
}
Init Containers

- one or more specialized Containers that run before app Containers 
-ºcan contain utilities or setup scripts not present in an app imageº
- exactly like regular Containers, except:
  - They always run to completion.
    k8s will restart it repeatedly until it succeeds
    (unless restartPolicy == Never)
  - Each one must complete successfully before the next one is started.
  - status is returned in .status.initContainerStatuses
                       vs .status.containerStatuses 
  - readiness probes do not apply

Note: Init Containers can also be given access to Secrets that app Containers 
      are not able to access.

- Ex: Wait for 'myService' and then for 'mydb'.
  │ apiVersion: v1                  │apiVersion: v1        │apiVersion: v1
  │ kind:ºPodº                      │kind:ºServiceº        │kind:ºServiceº
  │ metadata:                       │metadata:             │metadata:
  │   name: myapp─pod               │  name:Gºmyserviceº   │  name:Bºmydbº
  │   labels:                       │spec:                 │spec:
  │     app: myapp                  │  ports:              │  ports:
  │ spec:                           │  ─ protocol: TCP     │  ─ protocol: TCP
  │   containers:                   │    port: 80          │    port: 80
  │   ─ name: myapp─container       │    targetPort: 9376  │    targetPort: 9377
  │     image: busybox:1.28         └────────────────────  └─────────────────────
  │     command: ['sh', '─c', 'echo The app is running! && sleep 3600']
  │  ºinitContainers:º
  │   ─ name:Oºinit─myserviceº
  │     image: busybox:1.28
  │     command: ['sh', '─c', 'until nslookup Gºmyserviceº; do sleep 2; done;']
  │   ─ name:Oºinit─mydbº
  │     image: busybox:1.28
  │     command: ['sh', '─c', 'until nslookup Bºmydbº     ; do sleep 2; done;']

  Inspect init Containers like:
  $ kubectl logs myapp-pod -c Oºinit-myserviceº
  $ kubectl logs myapp-pod -c Oºinit-mydb     º

- Use activeDeadlineSeconds on the Pod and livenessProbe on the Container
  to prevent Init Containers from failing forever. 
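A sketch of the Pod-level deadline (pod name, image and value are illustrative; the deadline applies to the whole Pod, init containers included):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo                     # hypothetical name
spec:
  activeDeadlineSeconds: 300              # fail the Pod if not finished within 5 minutes
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 600"]
```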
Debug Init Containers
Containers Resources
- TODO: difference between requests and limits, see Resource QoS.

A Container:
-   ºcan exceedº its resourceºrequestºif the Node has memory available, 
- isºnot allowedºto use more than its resource ºlimitº.

- spec.containers[].resources.limits.cpu              ← can NOT exceed resource limit: a container
- spec.containers[].resources.limits.memory             exceeding its memory limit becomes a
                                                        candidate for termination
- spec.containers[].resources.requests.cpu            ← CAN exceed resource request (if the Node
- spec.containers[].resources.requests.memory           has spare resources)
- spec.containers[].resources.limits.ephemeral-storage     k8s 1.8+
- spec.containers[].resources.requests.ephemeral-storage   k8s 1.8+
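The fields above combined in a minimal container spec (pod name, image and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resources-demo                    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"                       # guaranteed share, used for scheduling
        memory: "64Mi"
      limits:
        cpu: "500m"                       # hard caps enforced at runtime
        memory: "128Mi"
```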

Note: resource quota feature can be configured to limit the total amount of resources
      that can be consumed. In conjunction with namespaces, it can prevent one team from
      hogging all the resources.

Container ºComputeº Resource types:
- CPU    : units of cores
           - total amount of CPU time that a container can use every 100ms. 
             (minimum resolution can be set to 1ms)
           - 1 cpu is equivalent to: 1 (AWS vCPU|GCP Core|Azure vCore,IBM vCPU,Hyperthread bare-metal )
           - 0.1 cpu == 100m
- memory : units of bytes:
           - Ex: 128974848, 129e6, 129M, 123Mi
- Local ephemeral storage(k8s 1.8+):
           - No QoS can be applied.
           - Ex: 128974848, 129e6, 129M, 123Mi

When using Docker:
 - the CPU request (in cores) is multiplied by 1024, and the greater of that number or 2
   is used as the value of the --cpu-shares flag in the docker run command.
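The conversion rule above can be sketched as a tiny illustrative helper (hypothetical, not part of kubectl or Docker; input is the CPU request in cores):

```shell
# CPU request in cores × 1024, floored at 2 → docker's --cpu-shares value
cpu_shares() {
  awk -v c="$1" 'BEGIN { s = int(c * 1024); print (s < 2 ? 2 : s) }'
}

cpu_shares 0.5     # → 512
cpu_shares 2       # → 2048
cpu_shares 0.001   # → 2  (floor of 2 applies)
```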

ºExtended resourcesº
- fully-qualified resource names outside the kubernetes.io domain.

- STEP 1: cluster operator must advertise an Extended Resource.
- STEP 2: users must request the Extended Resource in Pods.
(See official doc for more info)
It is planned to add new resource types, including a node disk space resource, and a framework for adding custom resource types.
Get a Shell to a Running Container
basic execution
unit of k8s apps
Pod: Minimalºschedulable-for-execution unitºin a k8s application (or k8s service)
 ┌───────┐            ┌─────┐                 ┌───────────┐
 │k8s app│1←───→(1..N)│pod/s│ 1 ←───→ (1,2,..)│container/s│
 └───────┘            └─────┘                 └───────────┘
           ^^^^^                ^^^^^ 
  ex: - 1 DDBB pod              Best practice:
      - 2 FrontEnd pods         only a single container or
      - 1 ESB pod               a few tightly-coupled ones.
      - ...

- one or more application containers (such as Docker or rkt) plus
  shared resources for those containers:
  - Volumes   : (Shared storage)
  - Networking: (Shared unique cluster IP)
  - Metadata  : (Container image version, ports to use, ...)

- Represent a k8s app. "logical host"

- Each Pod is tied to the Node where it is scheduled, and remains there 
  until termination (according to restart policy) or deletion.

- In case of a Node failure, identical Pods are scheduled on other 
  available Nodes in the cluster.

- The containers in a Pod are automatically co-located and co-scheduled
  on the same physical or virtual machine in the cluster.             

- Podsºdo not, by themselves, self-healº
  A k8s Controller is used for Pod scaling and healing (restarts).

ºPod Presetº
- objects used to inject information into pods at creation time
  like secrets, volumes, volume mounts, and environment variables.

- label selectors are used to specify Pods affected.

ºDisable Pod Preset for a Specific Podº
Add next annotation to the Pod Spec:
-  podpreset.admission.kubernetes.io/exclude: "true".
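A sketch of the exclusion annotation in place (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-preset-pod                     # hypothetical name
  annotations:
    podpreset.admission.kubernetes.io/exclude: "true"   # opt this pod out of all Pod Presets
spec:
  containers:
  - name: app
    image: busybox
```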

Pod Life Cycle
 DEFINITION                                               END
  defined   → scheduled in node → run → alt1: run until their container(s) exit
                                        alt2: pod removed (for any other reason)
                                                  depending on policy and exit code,
                                                  may be removed after exiting, or
                                                  may be retained in order to enable
                                                  access to container logs.

Pod.PodStatus.phase := Pending|Running|Succeeded|Failed
Pod QoS

Pod.spec.qosClass := 
Guaranteed   Assigned to pod when:
             - All Pod-Containers have a defined cpu/mem. limit.
             - All Pod-Containers have same cpu/mem.limit and same cpu/mem.request

Burstable    Assigned to pod when: 
             - criteria for QoS class Guaranteed isºNOTºmet.
             - 1+ Pod-Containers have memory or CPU request.

BestEffort   Assigned to pod when:
             - Pod-Containers do NOT have any memory/CPU request/limit
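A minimal sketch of a pod that should be assigned the Guaranteed class (name, image and values are illustrative): limits equal requests for every container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:                             # limits == requests for every container
        cpu: "500m"                       # → qosClass: Guaranteed
        memory: "128Mi"
      requests:
        cpu: "500m"
        memory: "128Mi"
```

The assigned class can be checked with: $ kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'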
Ex. Pod YAML Definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache      ← where to mount
      name:Bºcache-volumeº    ← volume (name) to mount
    - mountPath: /test-pd
      name:Gºtest-volumeº
  volumes:
  - name:Bºcache-volumeº
    emptyDir: {}
  - name:Gºtest-volumeº
    hostPath:
      path: /data            # directory location on host
      type: Directory        # this field is optional
Mng. running pod:
  $ kubectl logs my-pod (-c my-container) (-f)
                                           ^"tail -f"
  $ kubectl run -i --tty busybox --image=busybox -- sh
  $ kubectl attach my-pod -i
  $ kubectl port-forward my-pod 5000:6000  # Forward port 6000 of Pod → 5000 local machine
  $ kubectl exec my-pod (-c my-container) -- ls / # Run command in existing pod
  $ kubectl top pod POD_NAME --containers
Debug Pods and ReplicationControllers
Determine the Reason for Pod Failure
Pod.spec.securityContext Ex:
apiVersion: v1                            Exec a terminal into the running container and execute:
kind: Pod                                 $ id
metadata:                                ºuid=1000 gid=3000 groups=2000º
  name: security-context-demo                 ^^^^     ^^^^
spec:                                         uid/gid 0/0 if not specified
 ºsecurityContext:  º
 º  runAsUser: 1000 º
 º  runAsGroup: 3000º
 º  fsGroup: 2000º      ← owned/writable by this GID when supported by the volume type
  volumes:
  - name: sec-ctx-vol   ← volumes will be relabeled with the provided seLinuxOptions values
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
   º  allowPrivilegeEscalation: falseº
   º  capabilities:º
   º    add: ["NET_ADMIN", "SYS_TIME"]º ← provides a subset of 'root' capabilities
    º seLinuxOptions:º
    º   level: "s0:c123,c456"º

SecurityContext holds security configuration that will be applied to a container.
Some fields are present in both SecurityContext and PodSecurityContext.  When both
are set, the values in SecurityContext take precedence.
typeºSecurityContextºstruct {
    // The capabilities to add/drop when running containers.
    // Defaults to the default set of capabilities granted by the container runtime.
    // +optional
    Capabilities *Capabilities
    // Run container in privileged mode.
    // Processes in privileged containers are essentially equivalent to root on the host.
    // Defaults to false.
    // +optional
    Privileged *bool
    // The SELinux context to be applied to the container.
    // If unspecified, the container runtime will allocate a random SELinux context for each
    // container.  May also be set in PodSecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence.
    // +optional
    SELinuxOptions *SELinuxOptions
    // Windows security options.
    // +optional
    WindowsOptions *WindowsSecurityContextOptions
    // The UID to run the entrypoint of the container process.
    // Defaults to user specified in image metadata if unspecified.
    // May also be set in PodSecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence.
    // +optional
    RunAsUser *int64
    // The GID to run the entrypoint of the container process.
    // Uses runtime default if unset.
    // May also be set in PodSecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence.
    // +optional
    RunAsGroup *int64
    // Indicates that the container must run as a non-root user.
    // If true, the Kubelet will validate the image at runtime to ensure that it
    // does not run as UID 0 (root) and fail to start the container if it does.
    // If unset or false, no such validation will be performed.
    // May also be set in PodSecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence.
    // +optional
    RunAsNonRoot *bool
    // The read-only root filesystem allows you to restrict the locations that an application can write
    // files to, ensuring the persistent data can only be written to mounts.
    // +optional
    ReadOnlyRootFilesystem *bool
    // AllowPrivilegeEscalation controls whether a process can gain more
    // privileges than its parent process. This bool directly controls if
    // the no_new_privs flag will be set on the container process.
    // +optional
    AllowPrivilegeEscalation *bool
    // ProcMount denotes the type of proc mount to use for the containers.
    // The default is DefaultProcMount which uses the container runtime defaults for
    // readonly paths and masked paths.
    // +optional
    ProcMount *ProcMountType
}
// PodSecurityContext holds pod-level security attributes and common container settings
// Some fields are also present in container.securityContext.  Field values of
// container.securityContext take precedence over field values of PodSecurityContext.
typeºPodSecurityContextºstruct {
    // Use the host's network namespace.  If this option is set, the ports that will be
    // used must be specified.
    // Optional: Default to false
    // +k8s:conversion-gen=false
    // +optional
    HostNetwork bool
    // Use the host's pid namespace.
    // Optional: Default to false.
    // +k8s:conversion-gen=false
    // +optional
    HostPID bool
    // Use the host's ipc namespace.
    // Optional: Default to false.
    // +k8s:conversion-gen=false
    // +optional
    HostIPC bool
    // Share a single process namespace between all of the containers in a pod.
    // When this is set containers will be able to view and signal processes from other conts
    // in the same pod, and the first process in each container will not be assigned PID 1.
    // HostPID and ShareProcessNamespace cannot both be set.
    // Optional: Default to false.
    // This field is beta-level and may be disabled with the PodShareProcessNamespace feature.
    // +k8s:conversion-gen=false
    // +optional
    ShareProcessNamespace *bool
    // The SELinux context to be applied to all containers.
    // If unspecified, the container runtime will allocate a random SELinux context for each
    // container.  May also be set in SecurityContext.  If set in
    // both SecurityContext and PodSecurityContext, the value specified in SecurityContext
    // takes precedence for that container.
    // +optional
    SELinuxOptions *SELinuxOptions
    // Windows security options.
    // +optional
    WindowsOptions *WindowsSecurityContextOptions
    // The UID to run the entrypoint of the container process.
    // Defaults to user specified in image metadata if unspecified.
    // May also be set in SecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence
    // for that container.
    // +optional
    RunAsUser *int64
    // The GID to run the entrypoint of the container process.
    // Uses runtime default if unset.
    // May also be set in SecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence
    // for that container.
    // +optional
    RunAsGroup *int64
    // Indicates that the container must run as a non-root user.
    // If true, the Kubelet will validate the image at runtime to ensure that it
    // does not run as UID 0 (root) and fail to start the container if it does.
    // If unset or false, no such validation will be performed.
    // May also be set in SecurityContext.  If set in both SecurityContext and
    // PodSecurityContext, the value specified in SecurityContext takes precedence
    // for that container.
    // +optional
    RunAsNonRoot *bool
    // A list of groups applied to the first process run in each container, in addition
    // to the container's primary GID.  If unspecified, no groups will be added to
    // any container.
    // +optional
    SupplementalGroups []int64
    // A special supplemental group that applies to all containers in a pod.
    // Some volume types allow the Kubelet to change the ownership of that volume
    // to be owned by the pod:
    // 1. The owning GID will be the FSGroup
    // 2. The setgid bit is set (new files created in the volume will be owned by FSGroup)
    // 3. The permission bits are OR'd with rw-rw----
    // If unset, the Kubelet will not modify the ownership and permissions of any volume.
    // +optional
    FSGroup *int64
    // Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported
    // sysctls (by the container runtime) might fail to launch.
    // +optional
    Sysctls []Sysctl
}

ºService Accountsº
- Processes in containers inside pods can also contact the apiserver.
  When they do, they are authenticated as a particular Service Account 
- provides an identity for processes that run in a Pod.
- to access the API from inside a pod a mounted service account with
  a 1-hour-expiration-token is provided.
  (can be disabled with automountServiceAccountToken)
- User accounts are for humans. Service accounts are for processes running on pods.
- Service accounts are namespaced.
- cluster users can create service accounts for specific tasks (i.e. principle of least privilege).
- Allows to split auditing between humans and services.

- Three separate components cooperate to implement the automation around service accounts:
  - AºService account admission controllerº, part of the apiserver, that synchronously modifies
    pods as they are created or updated:
    - sets the ServiceAccount to "default" when not specified by the pod.
    - aborts if the ServiceAccount in the pod spec does NOT exist.
    - ImagePullSecrets of the ServiceAccount are added to the pod if none are specified by the pod.
    - Adds a volume to the pod which contains a token for API access.
      (service account token expires after 1 hour or on pod deletion)
    - Adds a volumeSource to each container of the pod mounted at
  - AºToken controllerº, running as part of controller-manager. It acts asynchronously and:
    - observes serviceAccount creation and creates a corresponding Secret to allow API access.
    - observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets.
    - observes secret addition, and ensures the referenced ServiceAccount exists, and adds a
      token to the secret if needed.
    - observes secret deletion and removes a reference from the corresponding ServiceAccount if needed.
      controller-manageRº'--service-account-private-key-file'º indicates the priv.key signing tokens
     - kube-apiserver   º'--service-account-key-file'º         indicates the matching pub.key verifying signatures
    - A controller loop ensures a secret with an API token exists for each service account.

    To create additional API tokens for a service account:
    - create a new secret of type ServiceAccountToken like: 
      (controller will update it with a generated token)
        "apiVersion": "v1",
        "metadata": {
          "name": "mysecretname",
          "annotations": {
            "kubernetes.io/service-account.name" :
              "myserviceaccount"  ← reference to service account
  - AºService account controller:º
    - manages ServiceAccount inside namespaces,
    - ensures a ServiceAccount named “default” exists in every active namespace.
      $ kubectl get serviceAccounts
      NAME      SECRETS    AGE
      default   1          1d   ← Exists for each namespace
    - Additional ServiceAccounts can be created like:
      $ kubectl apply -f - <<EOF
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: build-robot
      EOF
Liveness  Probes
Readiness Probes

- liveness probes  : used by kubelet to know when to restart a Container.
- readiness probes : used by kubelet to know whether a Container can accept new requests.
                     - A Pod is considered ready when all of its Containers are ready. 
                     - The Pod is not restarted in this case, but no new requests are routed to it.
                     - Readiness probes continue to execute for the lifetime of the Pod
                       (not only at startup)

ºliveness COMMANDº            │ ºliveness HTTP requestº         │ ºTCP liveness probeº
                              │                                 │ 
apiVersion: v1                │ apiVersion: v1                  │ apiVersion: v1
kind:ºPodº                    │ kind:ºPodº                      │ kind:ºPodº
metadata:                     │ metadata:                       │ metadata:
  ...                         │   ...                           │   ...
spec:                         │ spec:                           │ spec:
 ºcontainers:º                │  ºcontainers:º                  │  ºcontainers:º
  - name: ...                 │   - name: ...                   │   - name: ...
    ...                       │     ...                         │    ºreadinessProbe:          º
   ºlivenessProbe:          º │    ºlivenessProbe:           º  │    º  tcpSocket:             º
   º  exec:                 º │    º  httpGet:               º  │    º    port: 8080           º
   º    command:            º │    º    path: /healthz       º  │    º  initialDelaySeconds: 5 º
   º    - cat               º │    º    port: 8080           º  │    º  periodSeconds: 10      º
   º    - /tmp/healthy      º │    º    httpHeaders:         º  │    ºlivenessProbe:           º
   º  initialDelaySeconds: 5º │    º    - name: Custom-HeadeRº  │    º  tcpSocket:             º
   º  periodSeconds: 5      º │    º      value: Awesome     º  │    º    port: 8080           º
                              │    º  initialDelaySeconds: 3 º  │    º  initialDelaySeconds: 15º
                              │    º  periodSeconds: 3       º  │    º  periodSeconds: 20      º
                                 HTTP proxy ENV.VARS are ignored
                                in liveness probes (k8s v1.13+) 

                                      ↑                                ↑
                                      │                                │
ºnamedºContainerPort can also   ──────┴────────────────────────────────┘ 
be used for HTTP and TCP probes
 │ ports:
 │ - name: liveness-port
 │   containerPort: 8080
 │   hostPort: 8080
 │ livenessProbe:
 │   httpGet:
 │     path: /healthz
 │     port: liveness-port

ºMonitor liveness test resultº
$ kubectl describe pod liveness-exec

Other optional parameters:
- timeoutSeconds  : secs after which the probe times out. (1 sec by default)
- successThreshold: Minimum consecutive successes for the probe to be
                    considered successful after having failed. (1 by default)
- failureThreshold: number of failures before "giving up": Pod will be marked Unready.(Def 3)
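A livenessProbe block combining these knobs (values illustrative; note successThreshold must be 1 for liveness probes):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
  timeoutSeconds: 2                       # probe request timeout
  successThreshold: 1                     # must be 1 for liveness probes
  failureThreshold: 5                     # restart only after 5 consecutive failures
```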

HTTP optional parameters: 
- host  : hostname to connect to (defaults to the pod IP)
- port  : name or number (1-65535)
- scheme: HTTP or HTTPS (cert. validation is skipped for HTTPS)
- path  : path to access on the HTTP server
- httpHeaders: Custom headers 
Config Pods/Cont.
Assign Pods to Nodes
Config.Pod I12n
Attach Cont.Lifecycle Handlers
Share Process Namespace between Containers in a Pod
Translate Docker-Compose to k8s Resources

Pod Priority and Preemption
Assigning Pods to Nodes
Pod Security Policies
Resource Quotas
  ^^^ contains much more info about kube-proxy + iptables + ... advanced config settings

- Service:ºDefines a logical set of Pods (Label Selected)
  and a network policy by which to access themº

- Kubernetes-native applications: K8s offers a simple Endpoints API
- non-native        applications: K8s offers a virtual-IP-based bridge to Services

- use-case: internal-only applications that support other workloads within the cluster.

kind:ºServiceº              ºServiceTypes:º
apiVersion: v1               ┌─────────────┬────────────────────────────┬───────────────────┐
metadata:                    │Type         │ Description                │ External Access   │
  name: my-service           ├─────────────┼────────────────────────────┼───────────────────┤
spec:                        │OºClusterIPº │ Exposes the service        │ (None)            │
  selector:                  │ virtual IP  │ on a cluster─internal IP.  │                   │
    app: MyApp               │(default)    │ Use-case:internal-only apps│                   │
  ports:                     ├─────────────┼────────────────────────────┼───────────────────┤
  - protocol: TCP            │NodePort     │ Exposes svc on NodeIP:port │ NodeIP:NodePort   │
    port: 80                 ├─────────────┼────────────────────────────┼───────────────────┤
    targetPort: 9376         │LoadBalancer │ uses a cloud provider      │ ClusterIP:NodePort│
  - name: https              │             │ load balancer              │ ^spec.clusterIP   │
    protocol: TCP            │             │ (round robin by default)   │                   │
    port: 443                ├─────────────┼────────────────────────────┼───────────────────┤
    targetPort: 9377         │ExternalName │ Maps app to DNS entry      │ foo.example.com   │
  sessionAffinity: "ClientIP"└─────────────┴────────────────────────────┴───────────────────┘
  (opt)sessionAffinityConfig.clientIP.timeoutSeconds: 100 (defaults to 10800)
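A NodePort variant of the Service above as a sketch (service name and nodePort value are illustrative; nodePort defaults to an auto-assigned port in the 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service               # hypothetical name
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80                              # ClusterIP port
    targetPort: 9376                      # Pod port
    nodePort: 31000                       # static port opened on every Node
```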

│                     ┌───┐ │ :3100┌────┐       :80┌───┐ │              ┌────┐:80───┐
│               ┌─→:80│Pod│ │ ┌───→│node├───┐  ┌──→│Pod│ │           ┌─→│node├─→│Pod│
│               │     └───┘ │ │    └────┘   │  │   └───┘ │           │  └────┘  └───┘
│         :80┌───────────┐  │ Incomming  ┌──v───────┐    │     :80 ┌──────────┐     
│ Internal──→│ºClusterIPº│  │ traffic    │ºNodePortº│    │     ┌─→:│  ºLoadº  │ ┌───┐
│ Traffic    └───────────┘  │ │          └──^───────┘    │     │   │ºBalancerº│ │Pod│
│               │     ┌───┐ │ │    ┌────┐   │  │   ┌───┐ │     │   └──────────┘ └───┘
│               └─→:80│Pod│ │ └───→│node├───┘  └──→│Pod│ │ Incoming  │   ┌────┐  ↑:80
│                     └───┘ │ :3100└────┘       :80└───┘ │   traffic └──→│node├──┘
│                           │ ^                 ^        │               └────┘  
│                           │ Creates mapping between    │  configures an external   
│                           │ node port(3100) and        │  IP address in balancer.  
│                           │ Pod port (80)              │  Then connects requested  
│                           │                            │  pods to balancer         
                                              ºLoad Balancerºworks at ºlayer 4º.           
                                               unaware of the actual apps.
 (Layer 7)                  ┌───┐    ┌───┐ ☜ ☞ Use OºIngress controllersº(Oºlayer 7º)
┌─────────────────────┐     │Pod│    │Pod│     for advanced rules based on inbound
│OºIngress controllerº│     └─^─┘    └─^─┘     URL, ...
│ ┌─────────────────┐ │     ┌─┴────────┴───┐ - Another common feature of Ingress
│ │ myapp.com/blog  │ │───→ │Blog service  │   isºSSL/TLS terminationºremoving
│ └─────────────────┘ │     └──────────────┘   TLS cert complexity from Apps
│ ┌─────────────────┐ │     ┌──────────────┐
│ │ myapp.com/store │ │────→│Store service │
│ └─────────────────┘ │     └─┬────────┬───┘
└───^─────────────────┘     ┌─v─┐    ┌─v─┐
    |                       │Pod│    │Pod│
  Incoming                  └───┘    └───┘
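The path-routing in the diagram can be sketched as an Ingress manifest (names are illustrative; the networking.k8s.io/v1 API is used here — older clusters used extensions/v1beta1):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress                     # hypothetical name
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog-service            # hypothetical backend service
            port:
              number: 80
      - path: /store
        pathType: Prefix
        backend:
          service:
            name: store-service           # hypothetical backend service
            port:
              number: 80
```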

-ºService.spec.clusterIP can be used to force a given IPº

│   USER SPACE proxy─mode                     │   IPTABLES proxy─mode 
│                                             │   (or ipvs)
│     NODE                                    │     NODE
│   ┌─────────────────────────┐               │   ┌─────────────────────────┐
│   │ ┌──────┐                │               │   │ ┌──────┐  ┌──────────┐  │
│   │ │Client│                │               │   │ │Client│  │kube─proxy│←─── API─Server
│   │ └──────┘                │               │   │ └┬─────┘  └────────┬─┘  │
│   │    ↓                    │               │   │  │                 │    │   
│   │ OºClusterIPº            │               │   │  │                 │    │   
│   │ (iptables)              │               │   │  │   ┌───────────┐ │    │
│   │  └──→│OºclusterIPº│←┘   │
│   │    └─────→│kube─proxy│←──API─Server     │   │      │(iptables) │      │
│   │           └────┬─────┘  │               │   │      └────┬──────┘      │
│   └────────────────│────────┘               │   └───────────│─────────────┘
│      ┌─────────────┴─────────┐              │      ┌────────┴──────────────┐
│      │                       │              │      │                       │
│┌─────↓───────────┐  ┌────────↓────────┐     │┌─────↓───────────┐  ┌────────↓────────┐
││Backend Pod 1    │  │Backend Pod 2    │     ││Backend Pod 1    │  │Backend Pod 2    │
││labels: app=MyApp│  │labels: app=MyApp│     ││labels: app=MyApp│  │labels: app=MyApp│
││IP:    │  │IP:    │     ││IP:    │  │IP:    │
││port:9113   ↑    │  │port:9113   ↑    │     ││port:9113        │  │port:9113        │
│└────────────│────┘  └────────────│────┘     │└─────────────────┘  └─────────────────┘
              │                    │
   Note that the Pod IP dies when the pod dies,
   while the ClusterIP persists for the 
   application (Deployment) life.

│  ALT.1, using ENV.VARS                          │  ALT.2, using the k8s DNS add─on
│                                                 │  (strongly recommended).
│Ex: given service "redis─master" with            │  Updates dynamically the set of DNS records for services.
│  OºClusterIPº =                  │  The entry will be similar to:  "my─service.my─namespace"
│  next ENV.VARs are created:                     │
│  REDIS_MASTER_SERVICE_PORT=6379                 │
│  REDIS_MASTER_PORT=tcp://         │
│  REDIS_MASTER_PORT_6379_TCP=tcp://│
│  REDIS_MASTER_PORT_6379_TCP_PROTO=tcp           │
│  REDIS_MASTER_PORT_6379_TCP_PORT=6379           │
ºQuick App(Deployment) service creationº
REF: @[https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/]
$ kubectl expose deployment/my-nginx

is equivalent to:

$ kubectl apply -f service.yaml
                   apiVersion: v1
                   kind: Service
                   metadata:
                     name: my-nginx
                     labels:
                       run: my-nginx
                   spec:
                     selector:
                       run: my-nginx
                     ports:
                     - port: 80
                       protocol: TCP


Debug Services
- Labels = key/value pairs attached to objects.
- labelsºdo not provide uniquenessº 
  - we expect many objects to carry the same label(s)
- used to specify ºidentifying attributes of objectsº
 ºmeaningful and relevant to usersº, (vs k8s core system)
- Normally used to organize and to select subsets of objects.
- Label format:
  name : [a-z0-9A-Z\-_.]{1,63}
  prefix : must be a DNS subdomain no longer than 253 chars

- Example labels:
  "release"    : "stable" # "canary" ...
  "environment": "dev" # "qa", "pre",  "production"
  "tier"       : "frontend" # "backend" "cache"
  "partition"  : "customerA", "partition" : "customerB"
  "track"      : "daily" # "weekly" "monthly" ...

ºRecommended Labelsº
Key                           Description                      Example             Type
app.kubernetes.io/name        The name of the application      mysql               string
app.kubernetes.io/instance    A unique name identifying        wordpress-abcxzy    string
                              the instance of an application
app.kubernetes.io/version     current version of the app       5.7.21              string
app.kubernetes.io/component   component within the arch        database            string
app.kubernetes.io/part-of     name of higher level app         wordpress           string
app.kubernetes.io/managed-by  tool used to manage the          helm                string
                              operation of an application

Ex.1 use in an StatefulSet object:

apiVersion: apps/v1                           |apiVersion: apps/v1             |apiVersion: v1
kind: StatefulSet                             |kind: Deployment                |kind: Service
metadata:                                     |metadata:                       |metadata:
 labels:                                      | labels:                        | labels:
  app.kubernetes.io/name      : mysql         |  .../name      : wordpress     |  .../name      : wordpress
  app.kubernetes.io/instance  : wordpress-abc |  .../instance  : wordpress-abc |  .../instance  : wordpress-abcxzy
  app.kubernetes.io/version   : "5.7.21"      |  .../version   : "4.9.4"       |  .../version   : "4.9.4"
  app.kubernetes.io/component : database      |  .../component : server        |  .../component : server
 ºapp.kubernetes.io/part-of   :ºwordpressº    |  .../part-of   :ºwordpressº    |  .../part-of   : wordpressº
  app.kubernetes.io/managed-by: helm          |  .../managed-by: helm          |  .../managed-by: helm
                                              |...                             |...

-ºWell-Known Labels, Annotations and Taintsº
    beta.kubernetes.io/arch (deprecated)
    beta.kubernetes.io/os (deprecated)
Label selectors
- core (object) grouping primitive.
- two types of selectors: 
  - equality-based   
    Ex. environment=production,tier!=frontend
                            "AND" represented with ','

  - set-based
    - filter keys according to a set of values.
      (in, notin and exists ) 
    Ex.'ºkeyºequal to environment andºvalueºequal to production or qa'

    Ex.'environment in (production, qa)'

    Ex.'tier notin (frontend, backend)'
        all resources with 
            (key == "tier" AND
             values != frontend or backend )  AND
        all resources with 
            (key != "tier")

    Ex:'partition'                    !partition
        ^^^^^^^^^                     ^^^^^^^^^
        all resources including       all resources without
        a label with key 'partition'  a label with key 'partition'
- LIST and WATCH operations may specify label selectors to filter the sets
  of objects returned using a query parameter. 
  $ kubectl get pods -l environment=production,tier=frontend             # equality-based
  $ kubectl get pods -l 'environment in (production),tier in (frontend)' # set-based
  $ kubectl get pods -l 'environment in (production, qa)'                # "OR" only set-based
  $ kubectl get pods -l 'environment,environment notin (frontend)'       # "NOTIN"
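The two selector flavours above can be modelled as simple predicates over a pod's labels dict. This is an illustrative Python sketch, not the real API-server matching code:

```python
def matches_equality(labels, selector):
    """Equality-based selector, e.g. 'environment=production,tier!=frontend'.
    ',' means AND."""
    for term in selector.split(","):
        if "!=" in term:
            k, v = term.split("!=")
            if labels.get(k) == v:
                return False
        else:
            k, v = term.split("=")
            if labels.get(k) != v:
                return False
    return True

def matches_set(labels, key, op, values=()):
    """Set-based selector: op is 'in', 'notin' or 'exists'."""
    if op == "in":
        return labels.get(key) in values
    if op == "notin":
        # per the docs: resources WITHOUT the key also match 'notin'
        return labels.get(key) not in values
    if op == "exists":
        return key in labels
    raise ValueError(op)

pod = {"environment": "production", "tier": "frontend"}
print(matches_equality(pod, "environment=production,tier!=frontend"))  # False
print(matches_set(pod, "environment", "in", ("production", "qa")))     # True
print(matches_set(pod, "tier", "notin", ("frontend", "backend")))      # False
```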


    REF: https://www.youtube.com/watch?v=OulmwTYTauI
                             once underlying         "hardware" resource applying
                             storage has been        to the whole cluster 
                             assigned to a pod       (ºlifespanº independent of pods)
  ┌──────────────────┐       PV is bound to PVC      ┌─↓───────────┐
  │ Pod              │       one-to-one relation┌···→│ Persistent  ·····plugin to
  │                  │     ┌───────────────────┐·    │ Volume(ºPVº)│    one of
  │┌───────────────┐ │ ┌··→│ Persistent *1     │·    └─────────────┘        ·       Underlying                
  ││ºVolume Mountsº│ │ ·   │ Volume            │·                           ·          Storage
  ││ /foo          │ │ ·   │ Claim (ºPVCº)     │·                       ┌───↓────────────────┐
  │└───────────────┘ │ ·   │                   │·    ┌────────────┐     │-Local HD           │
  │    ┌──────────────┐·   │- 100Gi            │· ┌·→│ Storage    │     │-NFS: path,serverDNS│
  │    │ Volumes:     │·   │- Selector ·········┘ ·  │ Class(ºSCº)│     │-Amazon:EBS,EFS,... │
  │    │ -PVC ·········┘   │- StorageClassName····┘  └────────────┘     │-Azure:...          │
  │    │ ─claimName   ←┐   └───────────────────┘                        │-...                │
  │    └──────────────┘└─ Volumeºlifespanºis
  └──────────────────┘    that of the Pod.

├────────Created by users (Dev)───────────────────┤ ├──────Created by Admin (Ops)────────────┤

- many types supported, and a Pod can use any number of them simultaneously.
- a pod specifies what volumes to provide for the pod (spec.volumes) and where 
  to mount those into each container(spec.containers.volumeMounts).

- Sometimes, it is useful to share one volume for multiple uses in a single 
  pod. The volumeMounts.subPath property can be used to specify a sub-path inside 
  the referenced volume instead of its root.

*1:Similar to how Pods can request specific levels of resources
   (CPU and Memory), Claims can request specific size and access modes
   (e.g., can be mounted once read/write or many times read-only)
☞ BºPV declares its capacity (max size), PVC requests a minimum sizeº
    a claim can only bind to a PV whose capacity is ≥ the PVC requested size
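The size rule above can be sketched in plain Python (illustrative only — real binding is done by the PersistentVolume controller and also checks access modes, not just class and capacity):

```python
def bind(pvc_request_gi, pvc_class, pvs):
    """Return the smallest PV matching the claim (capacity >= request,
    same storage class), or None if no PV fits."""
    candidates = [pv for pv in pvs
                  if pv["class"] == pvc_class
                  and pv["capacity_gi"] >= pvc_request_gi]
    return min(candidates, key=lambda pv: pv["capacity_gi"], default=None)

pvs = [{"name": "pv-small", "capacity_gi": 50,  "class": "my-storage-class"},
       {"name": "pv-big",   "capacity_gi": 200, "class": "my-storage-class"}]
print(bind(100, "my-storage-class", pvs)["name"])  # pv-big
print(bind(300, "my-storage-class", pvs))          # None (no PV large enough)
```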

Dynamic Volume Provisioning @[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/]
Node-specific Volume Limits @[https://kubernetes.io/docs/concepts/storage/storage-limits/]
- ensures that "N" pod replicas are running simultaneously.
- ReplicaSets can be used independently, but normally used
  indirectly by Deployments to orchestrate pod creation/deletion/updates.
 BºNOTE:º Job       is preferred for pods that terminate on their own.
 BºNOTE:º DaemonSet is preferred for pods that provide a machine-level function.
                    (monitoring, logging, pods that need to be running before
                     other pods start).
                  OºDaemonSet pods  lifetime == machine lifetimeº
|apiVersion: apps/v1
|kind: ReplicaSet
|metadata:
|  name: frontend
|  labels:
|    app: guestbook
|    tier: frontend
|spec:
|  # modify replicas according to your case
|  replicas: 3   ← Defaults to 1
|  selector:
|    matchLabels:
|      tier: frontend
|    matchExpressions:
|      - {key: tier, operator: In, values: [frontend]}
|  template:  ← Pod template (nested schema pod,
|    metadata:                no apiVersion/kind-)
|      labels:             ← Needed in pod-template (vs isolated pod)
|        app: guestbook     .spec.template.metadata.labels must match
|        tier: frontend     .spec.selector
|    spec:
|      restartPolicy: Always ← (default/only allowed value)
|      containers:
|      - name: php-redis
|        image: gcr.io/google_samples/gb-frontend:v3
|        resources:
|          requests:
|            cpu: 100m
|            memory: 100Mi
|        env:
|        - name: GET_HOSTS_FROM
|          value: dns
|          # If your cluster config does not include a dns service, then to
|          # instead access environment variables to find service host
|          # info, comment out the 'value: dns' line above, and uncomment the
|          # line below.
|          # value: env
|        ports:
|        - containerPort: 80

$ kubectl create -f http://.../frontend.yaml
→ replicaset.apps/frontend created

$ kubectl describe rs/frontend
→ Name:       frontend
→ Namespace:  default
→ Selector:   tier=frontend,tier in (frontend)
→ Labels:     app=guestbook
→         tier=frontend
→ Annotations:    ˂none˃
→ Replicas:   3 current / 3 desired
→ Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
→ Pod Template:
→   Labels:       app=guestbook
→                 tier=frontend
→   Containers:
→    php-redis:
→     Image:      gcr.io/google_samples/gb-frontend:v3
→     Port:       80/TCP
→     Requests:
→       cpu:      100m
→       memory:   100Mi
→     Environment:
→       GET_HOSTS_FROM:   dns
→     Mounts:             ˂none˃
→   Volumes:              ˂none˃
→ Events:
→   FirstSeen    LastSeen    Count    From                SubobjectPath    Type        Reason            Message
→   ---------    --------    -----    ----                -------------    --------    ------            -------
→   1m           1m          1        {replicaset-controller }             Normal      SuccessfulCreate  Created pod: frontend-qhloh
→   1m           1m          1        {replicaset-controller }             Normal      SuccessfulCreate  Created pod: frontend-dnjpy
→   1m           1m          1        {replicaset-controller }             Normal      SuccessfulCreate  Created pod: frontend-9si5l

$ kubectl get pods
→ NAME             READY     STATUS    RESTARTS   AGE
→ frontend-9si5l   1/1       Running   0          1m
→ frontend-dnjpy   1/1       Running   0          1m
→ frontend-qhloh   1/1       Running   0          1m
- Declarative updates for Pods and ReplicaSets
- "one or more identical pods", managed by the Deployment Controller.
  A deployment defines the number of replicas (pods) to 
  create, and the Kubernetes Scheduler ensures that if pods or nodes encounter 
  problems, additional pods are scheduled on healthy nodes.     
  Ex. Deployment:
# for versions before 1.7.0 use apps/v1beta1
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
# namespace: production
spec:
  replicas: 3                  ← 3 replicated Pods
  strategy:
    type: Recreate             ← Recreate | RollingUpdate*
                               # Alt. strategy example
                               # strategy:
                               #   rollingUpdate:
                               #     maxSurge: 2
                               #     maxUnavailable: 0
                               #   type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:                    ← pod template
    metadata:
      labels:
        app: nginx
    spec:                      ← template pod spec:
      containers:              ← change triggers new rollout
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        livenessProbe:         # (could equally be a readinessProbe)
          httpGet:
            path: /heartbeat
            port: 80
            scheme: HTTP

$ kubectl create -f nginx-deployment.yaml

$ kubectl get deployments
NAME             DESIRED CURRENT  ...
nginx-deployment 3       0

$ kubectl rollout status deployment/nginx-deployment
  Waiting for rollout to finish: 2 out of 3 new replicas
  have been updated...
  deployment "nginx-deployment" successfully rolled out

$ kubectl get deployments
NAME             DESIRED CURRENT ...
nginx-deployment 3       3

ºTo see the ReplicaSet (rs) created by the deployment:º
$ kubectl get rs
NAME                     DESIRED ...
nginx-deployment-...4211 3

*1:format [deployment-name]-[pod-template-hash-value]

ºTo see the labels automatically generated for each podº
$ kubectl get pods --show-labels
NAME          ... LABELS
nginx-..7ci7o ... app=nginx,...,
nginx-..kzszj ... app=nginx,...,
nginx-..qqcnn ... app=nginx,...,

*Ex: Update nginx Pods from nginx:1.7.9 to nginx:1.9.1:
$ kubectl set image deployment/nginx-deployment \
    nginx=nginx:1.9.1

ºCheck the revisions of deployment:º
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION  CHANGE-CAUSE
1         kubectl create -f nginx-deployment.yaml --record
2         kubectl set image deployment/nginx-deployment ...
3         kubectl set image deployment/nginx-deployment ...

$ kubectl rollout undo deployment/nginx-deployment    # add --to-revision=N for a given revision

ºScale Deployment:º
$ kubectl scale deployment \
  nginx-deployment --replicas=10

$ kubectl autoscale deployment nginx-deployment \
  --min=10 --max=15 --cpu-percent=80
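The autoscaler created above follows the documented HPA rule: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the --min/--max bounds. A minimal Python sketch of that arithmetic (illustrative, not the controller's code):

```python
import math

def desired_replicas(current, current_metric, target_metric, lo, hi):
    """HPA scaling rule: scale proportionally to metric/target, clamped."""
    want = math.ceil(current * current_metric / target_metric)
    return max(lo, min(hi, want))

# 10 replicas at 120% CPU against an 80% target → wants 15
print(desired_replicas(10, 120, 80, lo=10, hi=15))  # 15
# at 40% CPU it would want 5, but --min=10 keeps it at 10
print(desired_replicas(10, 40, 80, lo=10, hi=15))   # 10
```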
Inject Data Into Apps
Define a Command and Arguments for a Container
Define Environment Variables for a Container
Expose Pod Information to Containers Through Environment Variables
Expose Pod Information to Containers Through Files
Distribute Credentials Securely Using Secrets
Inject Information into Pods Using a PodPreset
Run Applications
Run a Stateless Application Using a Deployment
Run a Single-Instance Stateful Application
Run a Replicated Stateful Application
Update API Objects in Place Using kubectl patch
Scale a StatefulSet
Delete a StatefulSet
Force Delete StatefulSet Pods
Perform Rolling Update Using a Replication Controller
Horizontal Pod Autoscaler
Horizontal Pod Autoscaler Walkthrough
Troubleshoot Applications
App Introspection and Debugging
Developing and debugging services locally
Access Applications in a Cluster
Web UI (Dashboard)
Accessing Clusters
Configure Access to Multiple Clusters
Use Port Forwarding to Access Applications in a Cluster
Provide Load-Balanced Access to an Application in a Cluster
Use a Service to Access an Application in a Cluster
Connect a Front End to a Back End Using a Service
Create an External Load Balancer
Configure Your Cloud Provider's Firewalls
List All Container Images Running in a Cluster
Communicate Between Containers in the Same Pod Using a Shared Volume
Configure DNS for a Cluster
StatefulSet (v:1.9+)

- naming convention, network names, and storage persist as replicas are (re)scheduled.
- underlying persistent storage remains even when the StatefulSet is deleted.
- Pods in StatefulSet are scheduled and run across any available node in an 
  AKS cluster (vs DaemonSet pods, attached to a given node).

- Manages stateful apps:
  - Useful for apps requiring one+ of:
    - Stable, unique network identifiers.
    - persistent storage across Pod (re)scheduling
    - Ordered, graceful deployment and scaling.
    - Ordered, graceful deletion and termination.
    - Ordered, automated rolling updates.

  - Manages the deploy+scaling of Pods providing
      guarantees about ordering and uniqueness
  - Unlike Deployments, a StatefulSet maintains a sticky identity for each
    of their Pods. These pods are created from the same spec, but are not
    interchangeable: each has a persistent identifier that it maintains
    across any rescheduling
  - Pod Identity: StatefulSet Pods have a unique identity that is comprised
    of [ordinal, stable network identity, stable storage] that sticks even
    if Pods is rescheduled on another node.
    - Ordinal: Each Pod will be assigned a unique integer ordinal,
      from 0 up through N-1, where N = number of replicas.
    - Stable Network Identity:
      Pod host-name = $(statefulset name)-$(ordinal)
      Ex. full DNS using headless service (web == StatefulSet.name):
        Oº← pod-host →º Bº←  service ns   →º Qº←clusterDoma→º
        Oºweb-{0..N-1}º.Bºnginx.default.svcº.Qºcluster.localº
        Oºweb-{0..N-1}º.Bºnginx.foo    .svcº.Qºcluster.localº
        Oºweb-{0..N-1}º.Bºnginx.foo    .svcº.Qºkube.local   º
      *1: Cluster Domain defaults to cluster.local
    - Pod Name Label: When the controller creates a Pod, it adds a label
      statefulset.kubernetes.io/pod-name set to the name of the pod,
      allowing to attach a Service to a unique Pod.

ºLimitationsº
- The storage for a given Pod must either be provisioned by a
  PersistentVolume Provisioner based on the requested storage class,
  or pre-provisioned by an admin.
- Deleting and/or scaling a StatefulSet down will not delete the volumes
  associated with the StatefulSet, in order to ensure data safety, which
  is generally more valuable than an automatic purge of all related
  StatefulSet resources.
- StatefulSets currently require a Headless Service to be responsible for
  the network identity of the Pods. You are responsible for creating this
  Service.

ºExample StatefulSetº
- The Bºheadless Service (named nginx)º is used to control the network domain.
- The StatefulSet (named web) has a Spec indicating that 3 replicas of the
  nginx container will be launched in unique Pods.
- The GºvolumeClaimTemplatesº will provide stable storage using
  PersistentVolumes provisioned by a PersistentVolume Provisioner.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None          # headless service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 3              # by default is 1
  selector:
    matchLabels:
      Qºapp: nginx         # has to match .spec.template.metadata.labelsº
  serviceName: "nginx"
  template:
    metadata:
      labels:
        app: nginx         # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  ºvolumeClaimTemplatesº:  # k8s creates one PersistentVolume per VolumeClaimTemplate
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]  # each Pod will receive a single PersistentVolume
      # When a Pod is (re)scheduled onto a node, its volumeMounts mount
      # the PersistentVolumes associated with its PersistentVolumeClaims.
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
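The stable network identity this StatefulSet produces (web-0 … web-2, each resolvable through the headless nginx Service) can be sketched with a few lines of Python. Illustrative only; it assumes the default namespace and the default cluster domain:

```python
def pod_dns_names(sts, service, replicas, namespace="default",
                  cluster_domain="cluster.local"):
    """Pod host-name is $(statefulset)-$(ordinal); the headless Service
    gives each pod a DNS entry under $(service).$(ns).svc.$(domain)."""
    return [f"{sts}-{i}.{service}.{namespace}.svc.{cluster_domain}"
            for i in range(replicas)]

print(pod_dns_names("web", "nginx", 3))
# ['web-0.nginx.default.svc.cluster.local',
#  'web-1.nginx.default.svc.cluster.local',
#  'web-2.nginx.default.svc.cluster.local']
```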
Deployment and Scaling Guarantees
  - Pods are deployed sequentially in order from {0..N-1}.
  - When Pods are deleted they are terminated in reverse order, from {N-1..0}
  - Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready
  - Before a Pod is terminated, all of its successors must be completely shutdown
  -ºOrdering Policies guarantees can be relaxed via  .spec.podManagementPolicy(K8s 1.7+)º
    - OrderedReady: default; implements the behavior described above.
    - Parallel: launch/terminate all Pods in parallel, not waiting for
      Pods to become Running and Ready or completely terminated.
-º.spec.updateStrategyº allows to configure and disable automated rolling updates for containers, labels, resource request/limits, and annotations for the Pods in a StatefulSet.
    - "OnDelete" implements the legacy (1.6 and prior) behavior. StatefulSet controller will not automatically update the Pods in a StatefulSet. Users must manually delete Pods to cause the controller to create new Pods that reflect modifications made to a StatefulSet’s .spec.template. - "RollingUpdate" (default 1.7+) implements automated, rolling update for Pods. The StatefulSet controller will delete and recreate each Pod proceeding in the same order as Pod termination (largest to smallest ordinal), updating each Pod one at a time waiting until an updated Pod is Running and Ready prior to updating its predecessor. Partitions { - RollingUpdate strategy can be partitioned, by specifying a .spec.updateStrategy.rollingUpdate.partition - If specified all Pods with an ordinal greater than or equal to the partition will be updated when the StatefulSet’s .spec.template is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be recreated at the previous version. - If it is greater than its .spec.replicas, updates to its .spec.template will not be propagated to its Pods. - In most cases you will not need to use a partition, but they are useful if you want to stage an update, roll out a canary, or perform a phased roll out. }
  - The StatefulSet should NOT specify a pod.Spec.TerminationGracePeriodSeconds of 0.
      Unsafe and ºstrongly discouragedº
Debug a StatefulSet
- ensures "N" Nodes run a Pod instance
- typical uses: cluster storage, log collection or monitoring
  - As nodes are added to the cluster, Pods are added to them. As nodes are 
    removed from the cluster, those Pods are garbage collected. Deleting a 
    DaemonSet will clean up the Pods it created.
  - Ex (simple case): one DaemonSet, covering all nodes, would be used
    for each type of daemon. A more complex setup might use multiple DaemonSets
    for a single type of daemon, but with different flags and/or different memory
    and cpu requests for different hardware types.

Ex.  DaemonSet for Bºfluentd-elasticsearchº:
$ cat daemonset.yaml
apiVersion: apps/v1
kind: ºDaemonSetº
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template: # Pod template
    # Pod Template must have RestartPolicy equal to Always (default if un-specified)
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: Bºk8s.gcr.io/fluentd-elasticsearch:1.20º
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Garbage Collection
Jobs - Run to Completion
Run Jobs
Running Automated Tasks with a CronJob
Parallel Processing using Expansions
Coarse Parallel Processing Using a Work Queue
Fine Parallel Processing Using a Work Queue
- reliably run one+ Pod to "N" completions
  - creates one+ pods and ensures that a specified number of them successfully terminate
  - Jobs are complementary to Deployment Controllers. A Deployment Controller
    manages pods which are not expected to terminate (e.g. web servers), and
    a Job manages pods that are expected to terminate (e.g. batch jobs).
  - As pods successfully complete, the job tracks the successful completions. When a specified number of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the pods it created.
  - Pod Backoff failure policy: if you want to fail a Job after N retries set
      .spec.backoffLimit (defaults to 6).
  - Pods are not deleted on completion, to allow viewing logs/output/errors
    of completed pods. They will show up with kubectl get pods º-aº.
    The Job object itself is also kept, to allow viewing its status.
  - Another way to terminate a Job is by setting an active deadline
    in .spec.activeDeadlineSeconds or .spec.template.spec.activeDeadlineSeconds
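The backoffLimit interacts with the documented exponential back-off for failed Pods (10s, 20s, 40s, …, capped at six minutes). A small Python sketch of the delay sequence, for illustration only:

```python
def retry_delays(backoff_limit, base=10, cap=360):
    """Delay in seconds before each retry: base doubles each failure,
    capped at 6 minutes; after backoff_limit failures the Job is failed."""
    return [min(base * 2 ** i, cap) for i in range(backoff_limit)]

print(retry_delays(6))  # [10, 20, 40, 80, 160, 320]  (default backoffLimit)
print(retry_delays(8))  # [10, 20, 40, 80, 160, 320, 360, 360]
```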

  example. Compute  2000 digits of "pi"
$ cat job.yaml

apiVersion: batch/v1
kind: ºJobº
metadata:
  name: pi
spec:
  template: # Required (== Pod template - apiVersion - kind)
    spec:
      containers:
      - name: pi
        ºimage: perlº
        ºcommand: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]º
      OºrestartPolicy: Neverº # Only Never/OnFailure allowed
  backoffLimit: 4

# ºRun job using:º
# $ kubectl create -f ./job.yaml

# ºCheck job current status like:º
# $ kubectl describe jobs/pi
# output will be similar to:
# Name:             pi
# Namespace:        default
# Selector:         controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
# Labels:           controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
#                   job-name=pi
# Annotations:      ˂none˃
# Parallelism:      1
# Completions:      1
# Start Time:       Tue, 07 Jun 2016 10:56:16 +0200
# Pods Statuses:    0 Running / 1 Succeeded / 0 Failed
# Pod Template:
#   Labels:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
#                 job-name=pi
#   Containers:
#    pi:
#     Image:      perl
#     Port:
#     Command:
#       perl
#       -Mbignum=bpi
#       -wle
#       print bpi(2000)
#     Environment:        ˂none˃
#     Mounts:             ˂none˃
#   Volumes:              ˂none˃
# Events:
#   FirstSeen LastSeen  Count From            SubobjectPath  Type    Reason            Message
#   --------- --------  ----- ----            -------------  ------- ------            -------
#   1m        1m        1     {job-controller}               Normal  SuccessfulCreate  Created pod: pi-dtn4q
# ºTo view completed pods of a job, use º
# $ kubectl get pods

# ºTo list all pods belonging to job in machine-readable-formº:
# $ pods=$(kubectl get pods --selector=ºjob-name=piº --output=ºjsonpath={.items..metadata.name}º)
# $ echo $pods

# ºView the standard output of one of the pods:º
# $ kubectl logs $pods
# 3.1415926535897....9
Parallel Jobs
  - Parallel Jobs with a fixed completion count 
    (.spec.completions greater than zero). 
    the job is complete when there is one successful pod for
    each value in the range 1 to .spec.completions.

  - Parallel Jobs with a work queue: do not specify .spec.completions:
     pods must coordinate with themselves or external service to determine
     what each should work on.
     each pod is independently capable of determining whether or not all its peers
     are done, thus the entire Job is done.

  - For Non-parallel job, leave both .spec.completions and
      .spec.parallelism unset.
  - Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons
  - (read official K8s docs for Job Patterns ussages)
  Cron Jobs (1.8+)
  - written in Cron format (question mark (?) has the same meaning as an asterisk *)
  - Concurrency Policy
    - Allow (default): allows concurrently running jobs
    - Forbid: forbids concurrent runs, skipping next
      if previous still running
    - Replace: cancels currently running job and replaces with new one
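The three policies amount to a small decision table, applied when the schedule fires while a previous Job is still running. An illustrative Python sketch (not the CronJob controller's code):

```python
def on_schedule_fire(policy, running_jobs):
    """Decide what to do when the cron schedule fires."""
    if policy == "Allow" or not running_jobs:
        return {"start_new": True, "cancel": []}
    if policy == "Forbid":
        return {"start_new": False, "cancel": []}          # skip this run
    if policy == "Replace":
        return {"start_new": True, "cancel": list(running_jobs)}
    raise ValueError(policy)

print(on_schedule_fire("Forbid", ["hello-4111706356"]))
# {'start_new': False, 'cancel': []}
print(on_schedule_fire("Replace", ["hello-4111706356"]))
# {'start_new': True, 'cancel': ['hello-4111706356']}
```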
Ex. cronjob:
$ cat cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: Oº"*/1 * * * *"º
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          Oº- name: hello    º
          Oº  image: busybox º
          Oº  args:          º
          Oº  - /bin/sh      º
          Oº  - -c           º
          Oº  - date; echo 'Hi from K8s'º
          restartPolicy: OnFailure

# Alternatively:
$ kubectl run hello \
    --schedule="*/1 * * * *"  \
    --restart=OnFailure \
    --image=busybox \
    -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"

# get status:

$ kubectl get cronjob hello
hello     */1 * * * *   False     0

# Watch for the job to be created:
$ kubectl get jobs --watch
hello-4111706356   1         1         2s
Node Monitoring
Node Health
Debugging Kubernetes nodes with crictl
Monitoring the cluster
Monitor, Log, and Debug
Core metrics pipeline
Events in Stackdriver
Metrics API+Pipeline

Extracted from @[https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/]
"""If you are running Minikube, run the following command to enable the metrics-server:
   $ minikube addons enable metrics-server

... to see whether the metrics-server is running, or another provider of the resource metrics
API (metrics.k8s.io), run the following command:

   $ kubectl get apiservices

   output must include a reference to metrics.k8s.io.
   → ...
   → v1beta1.metrics.k8s.io


Logging Using Elasticsearch and Kibana
Logging Using Stackdriver
Tools for Monitoring Resources
Debugging the Cluster
Unordered notes
Get External
IPs of all nodes

$ kubectl get nodes -o jsonpath=\
'{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
Run stateless app. deployments
GPU and Kubernetes
The easiest way
to get a production
grade k8s cluster
up and running
  - What is kops?  """We like to think of it as kubectl for clusters."""
  -  kops helps you create, destroy, upgrade and maintain production-grade,
    highly available, Kubernetes clusters from the command line.
    AWS is currently officially supported, with GCE in beta support,
    and VMware vSphere in alpha, and other platforms planned.
Helm Charts
- Application Package stored locally or in remote repo
  (Ex: Azure Container Registry Helm chart repo).
- packaged version of app code + YAML deployment manifests.

 - Install Tiller in k8s cluster.
 - Install Helm client locally (or cloud shell)

 - chart → Tiller → Exec. Install

  Special Interest Group for deploying and operating apps in Kubernetes.
  - They meet each week to demo and discuss tools and projects.
  • Covers deploying and operating applications in Kubernetes. We focus on the developer and devops experience of running applications in Kubernetes. We discuss how to define and run apps in Kubernetes, demo relevant tools and projects, and discuss areas of friction that can lead to suggesting improvements or feature requests
  • Skaffold
      Tool to facilitate Continuous Development with Kubernetes
    6 Tips for Running
    Scalable Workloads
    - tool to build container images inside an unprivileged container or
      Kubernetes cluster.
    - Although kaniko builds the image from a supplied Dockerfile, it does
      not depend on a Docker daemon, and instead executes each command completely
      in userspace and snapshots the resulting filesystem changes.
    - The majority of Dockerfile commands can be executed with kaniko, with
      the current exception of SHELL, HEALTHCHECK, STOPSIGNAL, and ARG.
      Multi-Stage Dockerfiles are also unsupported currently. The kaniko team
      have stated that work is underway on both of these current limitations.
    Q&A with K8s...
    Distributed Systems programming is not for the faint of heart, and despite
    the evolution of platforms and tools from COM, CORBA, RMI, Java EE, Web
    Services, Services Oriented Architecture (SOA) and so on, it's more of an art
    than a science.
    Brendan Burns outlined many of the patterns that enables distributed systems
    programming in the blog he wrote in 2015. He and David Oppenheimer, both
    original contributors for Kubernetes, presented a paper at Usenix based
    around design patterns and containers shortly after.
    InfoQ caught up with Burns, who recently authored an ebook titled Designing
    Distributed Systems, Patterns and Paradigms for Scaleable Microservices. He
    talks about distributed systems patterns and how containers enable it.
    Yaml Tips
    Container Network Iface (CNI)
      - specification and libraries for writing plugins to configure network interfaces
    in Linux containers, along with a number of supported plugins.
      - CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
      - CNI Spec
    K8s Networking Explained
    Best Practices
    (Rolling, Blue/Green,
     Canary, BigBan,...)
    k8s as code
    @[https://www.youtube.com/watch?v=zpgp3yCmXok] Writing less yaml
    A tool for managing Kubernetes resources as code.
    kubecfg allows you to express the patterns across your infrastructure and reuse
    these powerful "templates" across many services, and then manage those templates
    as files in version control. The more complex your infrastructure is, the more
    you will gain from using kubecfg.
    The idea is to describe as much as possible about your configuration as files
    in version control (eg: git).
    Atlassian escalator
    In Kubernetes, scaling can mean different things to different users. We
    distinguish between two cases:
    - Cluster scaling, sometimes called infrastructure-level scaling, refers to
      the (automated) process of adding or removing worker nodes based on cluster utilization.
    - Application-level scaling, sometimes called pod scaling, refers to the (automated) process
      of manipulating pod characteristics based on a variety of metrics, from low-level signals
      such as CPU utilization to higher-level ones, such as HTTP requests served per
      second, for a given pod. Two kinds of pod-level scalers exist:
    - Horizontal Pod Autoscalers (HPAs), which increase or decrease the number
      of pod replicas depending on certain metrics.
    - Vertical Pod Autoscalers (VPAs), which increase or decrease the resource
      requirements of containers running in a pod.
    Atlassian released
     their in-house tool Escalator as an open source
     project. It provides configuration-driven preemptive scale-up and faster scale-down
    for Kubernetes nodes.

    Atlassian adopted containers and built their own autoscaler, Escalator.

    Kubernetes has two autoscalers:
    - The horizontal pod autoscaler scales pods (an abstraction over a
      container or a set of related containers) up and down, and thus
      depends upon the availability of underlying compute (usually VM)
      resources. Pods can scale down very quickly.
    - The cluster autoscaler scales the compute infrastructure itself.
      Understandably, it takes a longer time to scale up and down due to
      the higher provisioning time of virtual machines. Any delays in the
      cluster autoscaler translate to delays in the pod autoscaler.

    Atlassian's problem was very specific to batch workloads, with a low
    tolerance for delay in scaling up and down. They decided to write
    their own autoscaling functionality on top of Kubernetes to solve
    these problems.

    Escalator has configurable thresholds for upper and lower capacity of
    the compute VMs. Some of the configuration properties work by
    modifying a Kubernetes feature called 'taint'. A VM node can be
    'tainted' (marked) with a certain value so that pods with a related
    marker are not scheduled onto it. Unused nodes are brought down faster
    by the standard Kubernetes cluster autoscaler when they are marked.

    The scale-up configuration parameter is a threshold expressed as a
    percentage of utilization, usually less than 100 so that there is a
    buffer. Escalator autoscales the compute VMs when utilization reaches
    the threshold, thus making room for containers that might come up
    later, and allowing them to boot up fast.

    Job patterns
    One example of this pattern would be a Job which starts a Pod which runs a script
    that in turn starts a Spark master controller (see spark example), runs a spark
    driver, and then cleans up.
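    A minimal Job manifest for the pattern above might look like this
    (the image and the script path are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-controller-job
spec:
  backoffLimit: 2            # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
      - name: runner
        image: example/spark-runner:latest          # hypothetical image
        command: ["/opt/run-spark-and-cleanup.sh"]  # starts master, runs driver, cleans up
```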
    A PodDisruptionBudget object (PDB) can be defined for each deployment(application),
    limiting the number of pods of a replicated application that 
    are down simultaneously from ºvoluntaryº disruptions.
    Ex: A Deployment has:
      - .spec.replicas: 5  (5 pods desired at any given time)
      - The PDB is defined as:
        apiVersion: policy/v1beta1
        kind: PodDisruptionBudget
        metadata:
          name: zookeeper-pdb
        spec:
          maxUnavailable: 1  ←--   Eviction API will allow voluntary disruption of one,
          selector:                but not two pods, at a time, trying to keep 4 running
            matchLabels:           at all times
              app: zookeeper
    Eviction-API-compatible tools/commands
    (like 'kubectl drain') must be used for the PDB to be honored
    (vs directly deleting pods/deployments)
    Install a new cluster
    WKSctl GitOps install
    - Tool for Kubernetes Cluster Management Using GitOps 
     WKSctl is an open-source project to install, bootstrap, and manage Kubernetes 
     clusters, including add-ons, through SSH. WKS is a provider of the Cluster API 
     (CAPI) using the GitOps approach. Kubernetes cluster configuration is defined 
     in YAML, and WKSctl applies the updates after every Git push, allowing users 
     to have repeatable clusters on-demand.
    Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
    Application definitions, configurations, and environments should be declarative 
    and version controlled. Application deployment and lifecycle management should 
    be automated, auditable, and easy to understand.
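    Argo CD models each deployed app as an Application custom resource; a
    minimal sketch (repo URL and paths are from Argo CD's public example apps):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd           # namespace where Argo CD itself runs
spec:
  project: default
  source:                     # what to deploy (Git is the source of truth)
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:                # where to deploy it
    server: https://kubernetes.default.svc
    namespace: guestbook
```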
    Overview of kubeadm
    kubeadm init
    kubeadm join
    kubeadm upgrade
    kubeadm config
    kubeadm reset
    kubeadm token
    kubeadm version
    kubeadm alpha
    Implementation details
    Upgrading kubeadm HA clusters from 1.9.x to 1.9.y
    Upgrading kubeadm clusters from 1.7 to 1.8
    Upgrading kubeadm clusters from v1.10 to v1.11
    Upgrading/downgrading kubeadm clusters between v1.8 to v1.9
    (default) TCP Ports
    REF @[https://kubernetes.io/docs/tasks/tools/install-kubeadm/]
    │Port Range   │ Purpose                │Master │Worker │
    │6443*        │ Kubernetes API server  │   X   │       │
    │2379─2380    │ etcd server client API │   X   │       │
    │10250        │ Kubelet API            │   X   │  X    │
    │10251        │ kube─scheduler         │   X   │       │
    │10252        │ kube─controller─manager│   X   │       │
    │10255        │ Read─only Kubelet API  │   X   │  X    │
    │ 30000─32767 │ NodePort Services      │       │  X    │
    kubelet TLS bootstrap
    private (image) registries
    - method to pass a secret that contains a Docker image registry password 
     to the Kubelet so it can pull a private image on behalf of your Pod.
    Pull Image from a Private Registry
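    A sketch of the mechanism (registry host, image, and secret name are
    placeholders):

```yaml
# The secret is created beforehand with, e.g.:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com --docker-username=... --docker-password=...
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical private image
  imagePullSecrets:
  - name: regcred                         # the kubelet uses this to authenticate the pull
```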
    Certificate Rotation
    Manage TLS Certificates in a Cluster
    Manage Cluster Daemons
    Perform a Rollback on a DaemonSet
    Perform a Rolling Update on a DaemonSet
    Install Service Catalog
    Install Service Catalog using Helm
    Install Service Catalog using SC
    - There is no hard latency limit between nodes in a Kubernetes cluster;
      node-health detection is controlled by configurable parameters:
    - For kubelet on worker node it is:
       --node-status-update-frequency duration    Specifies how often kubelet posts node status to master. 
                                                  Note: be cautious when changing the constant, it must work
                                                        with nodeMonitorGracePeriod in nodecontroller. (default 10s)
    - For controller-manager on master node they are:
      --node-monitor-grace-period duration    Amount of time which we allow running Node to be unresponsive
                                              before marking it unhealthy. Must be N times more than kubelet's
                                              nodeStatusUpdateFrequency, where N means number of retries allowed 
                                              for kubelet to post node status. (default 40s)
      --node-monitor-period duration          The period for syncing NodeStatus in NodeController. (default 5s)
      --node-startup-grace-period duration    Amount of time which we allow starting Node to be unresponsive before
                                              marking it unhealthy. (default 1m0s)
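    On kubelets configured through a config file, the same kubelet setting
    can be expressed there instead of as a flag; a minimal sketch (the value
    shown is the default):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: "10s"   # how often the kubelet posts node status to the master
```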
    Cluster Admin
    Admin Tasks
    Config Ref()
    Extending Kubernetes
    Extending k8s Cluster
    Extending the Kubernetes API
    Extending the Kubernetes API with the aggregation layer
    Custom Resources
    Config. files
    (documented in the Reference section of the online documentation, under each binary:)
    Compute, Storage, and Networking Extensions
    Network Plugins
    Device Plugins
    Service Catalog
    Cluster Federation
    Run an App on 
    Multiple Clusters
    ºFederation APIº:
     - @[https://kubernetes.io/docs/reference/federation/extensions/v1beta1/definitions/]
     - @[https://kubernetes.io/docs/reference/federation/extensions/v1beta1/operations/]
     - @[https://kubernetes.io/docs/reference/federation/v1/definitions/]
     - @[https://kubernetes.io/docs/reference/federation/v1/operations/]
    ºExternal references:º
    - @[https://kubernetes.io/docs/reference/command-line-tools-reference/federation-apiserver/]
    - @[https://kubernetes.io/docs/reference/command-line-tools-reference/federation-controller-manager/]
    - Cross-cluster Service Discovery using Federated Services
    - Set up Cluster Federation with Kubefed
    - Set up CoreDNS as DNS provider for Cluster Federation
    - Set up placement policies in Federation
    - Federated Cluster
    - Federated ConfigMap
    - Federated DaemonSet
    - Federated Deployment
    - Federated Events
    - Federated Horizontal Pod Autoscalers (HPA)
    - Federated Ingress
    - Federated Jobs
    - Federated Namespaces
    - Federated ReplicaSets
    - Federated Secrets
    - Extend kubectl with plugins
    - Manage HugePages
    Controls cluster
    kubefed options
    kubefed init
    kubefed join
    kubefed unjoin
    kubefed version
    Federated Services
    Using the k8s API
    Kubernetes API Overview
    Accessing the API
    Controlling Access to the Kubernetes API
    Authenticating with Bootstrap Tokens
    Using Admission Controllers
    Dynamic Admission Control
    Managing Service Accounts
    Authorization Overview
    Using RBAC Authorization
    Using ABAC Authorization
    Using Node Authorization
    Webhook Mode
    Extend Kubernetes
    Use Custom Resources
    Extend the Kubernetes API with CustomResourceDefinitions
    Versions of CustomResourceDefinitions
    Migrate a ThirdPartyResource to CustomResourceDefinition
    Configure the Aggregation Layer
    Setup an Extension API Server
    Use an HTTP Proxy to Access the Kubernetes API
    Understanding K8S Code
    K8s Implementation Summary
    - Julia Evans "A few things I've learned about Kubernetes"
    - """... you can run the kubelet by itself! And if you have a kubelet, you
    can add the API server and just run those two things by themselves! Okay,
    awesome, now let’s add the scheduler!"""
    - the “kubelet” is in charge of running containers on nodes
    - If you tell the API server to run a container on a node, it will tell the kubelet to get it done (indirectly)
    - The scheduler translates "run a container" to "run a container on node X"
    ºetcd is Kubernetes’ brainº
    - Every component in Kubernetes (API server, scheduler, kubelets, controller manager, ...) is stateless.
       All of the state is stored in the (key-value store) etcd database.
    - Communication between components (often) happens via etcd.
    -Oºbasically everything in Kubernetes works by watching etcd for stuff it has to do,º
     Oºdoing it, and then writing the new state back into etcd º
      Ex 1: Run a container on Machine "X":
       Wrong way: ask kubelet@Machine"X" to run the container.
       Right way: kubectl*1 →(API Server)→ etcd: "This pod should run on Machine X"
                  kubelet@Machine"X"     → etcd: check work to do
                  kubelet@Machine"X"     ← etcd: "This pod should run on Machine X"
               kubelet@Machine"X"     → kubelet@Machine"X": Run pod
      Ex 2: Run a container anywhere on the k8s cluster
        kubectl*1 → (API Server) → etcd: "This pod should run somewhere"
        scheduler                → etcd: Something to run?
        scheduler                ← etcd: "This pod should run somewhere"
        scheduler                → kubelet@Machine"Y":  Run pod
     *1 kubectl is used from the command line.
        In the sequence diagrams it can be replaced by any
        of the existing controllers (ReplicaSet, Deployment, DaemonSet, Job, ...)
    ºAPI server roles in cluster:º
    API Server is responsible for:
    1.- putting stuff into etcd
        kubectl    → API Server : put "stuff" in etcd
        API Server → API Server : check "stuff"
        alt 1:
           kubectl ← API Server : error: "stuff" is wrong
        alt 2:
           API Server → etcd    : set/update "stuff"
    2.- Managing authentication:
        ("who is allowed to put what stuff into etcd")
        The normal way is through X509 client certs.
    ºcontroller manager does a bunch of stuffº
    Responsible for:
    - Inspect etcd for pending to schedule pods.
    - daemon-set-controllers will inspect etcd for
      pending daemonsets and will call the scheduler
      to run them on every machine with the given
      pod configuration.
    - The "replica set controller" will inspect etcd for
      pending replicasets and will create the requested
      number of pods, which the scheduler will then schedule.
    - "deployment controller" ...
    Something isn't working? Figure out which controller is
    responsible and look at its logs.
    ºCore K8s components run inside of k8sº
    - Only 5 things need to be running before k8s starts up:
      - the scheduler
      - the API server
      - etcd
      - kubelets on every node (to actually execute containers)
      - the controller manager (because to set up daemonsets you
                                need the controller manager)
      Any other core system (DNS, overlay network,... ) can
      be scheduled by k8s inside k8s
    API Conventions
    Source Code Layout
    Note: Main k8s packages are placed in kubernetes/pkg/
          (API, kubectl, kubelet, controller, ...)
    - REF: A Tour of the Kubernetes Source Code Part One: From kubectl to API Server
    ºExamining kubectl sourceº
    Locating the implementation of kubectl commands in the Kubernetes source code
    - kubectl entry point for all commands
      - each kubectl command maps to a Go source file in a directory
        named after the command, e.g. 'kubectl create' → create/create.go
    ºK8s loves the Cobra Command Frameworkº
    - k8s commands are implemented using the Cobra command framework.
    - Cobra provides many features for building command-line interfaces;
      among them, Cobra puts the command usage message and command
      descriptions adjacent to the code that runs the command.
      | // NewCmdCreate returns new initialized instance of create sub command
      | func NewCmdCreate(f cmdutil.Factory, ioStreams genericclioptions.IOStreams) *cobra.Command {
      |     o := NewCreateOptions(ioStreams)
      |     cmd := &cobra.Command{
      |         Use:                   "create -f FILENAME",
      |         DisableFlagsInUseLine: true,
      |         Short:                 i18n.T("Create a resource from a file or from stdin."),
      |         Long:                  createLong,
      |         Example:               createExample,
      |         Run: func(cmd *cobra.Command, args []string) {
      |             if cmdutil.IsFilenameSliceEmpty(o.FilenameOptions.Filenames) {
      |                 defaultRunFunc := cmdutil.DefaultSubCommandRun(ioStreams.ErrOut)
      |                 defaultRunFunc(cmd, args)
      |                 return
      |             }
      |             cmdutil.CheckErr(o.Complete(f, cmd))
      |             cmdutil.CheckErr(o.ValidateArgs(cmd, args))
      |             cmdutil.CheckErr(o.RunCreate(f, cmd))
      |         },
      |     }
      |     // bind flag structs
      |     o.RecordFlags.AddFlags(cmd)
      |     usage := "to use to create the resource"
      |     cmdutil.AddFilenameOptionFlags(cmd, &o.FilenameOptions, usage)
      |     ...
      |     o.PrintFlags.AddFlags(cmd)
      |     // create subcommands
      |     cmd.AddCommand(NewCmdCreateNamespace(f, ioStreams))
      |     ...
      |     return cmd
      | }
    ºBuilders and Visitors Abound in Kubernetesº
      Ex. code:
      | r := f.NewBuilder().
      |     Unstructured().
      |     Schema(schema).
      |     ContinueOnError().
      |     NamespaceParam(cmdNamespace).DefaultNamespace().
      |     FilenameParam(enforceNamespace, &o.FilenameOptions).
      |     LabelSelectorParam(o.Selector).
      |     Flatten().
      |     Do()
      The functions Unstructured, Schema, ContinueOnError,...  Flatten
      all take in a pointer to a Builder struct, perform some form of
      modification on the Builder struct, and then return the pointer to
      the Builder struct for the next method in the chain to use when it
      performs its modifications defined at:
      | ...
      | func (b *Builder) Schema(schema validation.Schema) *Builder {
      |     b.schema = schema
      |     return b
      | }
      | ...
      | func (b *Builder) ContinueOnError() *Builder {
      |     b.continueOnError = true
      |     return b
      | }
     The Do function finally returns a Result object that will be used to drive
     the creation of our resource. It also creates a Visitor object that can be
     used to traverse the list of resources that were associated with this
     invocation of resource.NewBuilder.
     A new DecoratedVisitor is created and stored as part of the Result object
     that is returned by the Builder Do function. The DecoratedVisitor has a Visit
     function that will call the VisitorFunc that is passed into it:
      |// Visit implements Visitor
      |func (v DecoratedVisitor) Visit(fn VisitorFunc) error {
      |    return v.visitor.Visit(func(info *Info, err error) error {
      |        if err != nil {
      |            return err
      |        }
      |        for i := range v.decorators {
      |            if err := v.decorators[i](info, nil); err != nil {
      |                return err
      |            }
      |        }
      |        return fn(info, nil)
      |    })
      |}
      Create will eventually call the anonymous function containing the
      createAndRefresh function, which leads to the code making a REST call
      to the API server.
      The createAndRefresh function invokes the Resource NewHelper
      function found in ...helper.go returning a new Helper object:
      | func NewHelper(client RESTClient, mapping *meta.RESTMapping) *Helper {
      |     return &Helper{
      |         Resource:        mapping.Resource,
      |         RESTClient:      client,
      |         Versioner:       mapping.MetadataAccessor,
      |         NamespaceScoped: mapping.Scope.Name() == meta.RESTScopeNameNamespace,
      |     }
      | }
      Finally, the Helper Create function invokes a createResource function.
      The Helper createResource function performs the actual REST call to
      the API server to create the resource we defined in our YAML file.
    ºCompiling and Running Kubernetesº
    - Use make's WHAT option to tell the Kubernetes build process to compile only selected components:
    $ make WHAT='cmd/kubectl'  # ← compile only kubectl
    Test it like:
    On terminal 1 boot up local test hack cluster:
    $ PATH=$PATH KUBERNETES_PROVIDER=local hack/local-up-cluster.sh
    On terminal 2 execute the compiled kubectl:
    $ cluster/kubectl.sh create -f nginx_replica_pod.yaml
    ºCode Learning Toolsº
    Tools and techniques that can really help accelerate your ability to learn the k8s src:
    - Chrome Sourcegraph Plugin:
      provides several advanced IDE features that make it dramatically
      easier to understand Kubernetes Go code when browsing GitHub repositories.
      - start by looking at an absolutely depressing snippet of code,
        with a ton of function calls.
      - Hover over each code function with Chrome browser + Sourcegraph extension
        It will popup a description of the function, what is passed into it
        and what it returns.
      - It also provides advanced view with the ability to peek into the function
        being invoked.
    - Properly formatted print statements:
      fmt.Printf("\n createAndRefresh Info = %#v\n", info)
    - Use of a go panic to get desperately needed stack traces:
      | func createAndRefresh(info *resource.Info) error {
      |     fmt.Printf("\n createAndRefresh Info = %#v\n", info)
      |     ºpanic("Want Stack Trace")º
      |     obj, err := resource.NewHelper(info.Client, info.Mapping).Create(info.Namespace, true, info.Object)
      |     if err != nil {
      |         return err
      |     }
      |     info.Refresh(obj, true)
      |     return nil
      | }
    - GitHub Blame to travel back in time:
      "What was the person thinking when they committed those lines of code?"
      - GitHub browser interface has a blame option available as a button on the user interface:
          It returns a view of the code that has the commits responsible for each line of code
        in the source file. This allows you to go back in time and look at the commit that added
        a particular line of code and determine what the developer was trying to accomplish when
        that line of code was added.
    Getting started
    - SDNs (software-defined networks): allow admin teams to control
      network traffic in complex networking topologies through a
      centralized panel, rather than handling each network device,
      such as routers and switches, manually ("hierarchical" topology)
    ºContainer Network Interface (CNI):º
    - a library definition and a set of tools to configure network
      interfaces in Linux containers through many supported plugins.
    - multiple plugins can run at the same time in a container
      that participates in networks driven by different plugins.
    - networks are described by JSON configuration files and are
      instantiated as new namespaces when the CNI plugin is invoked.
    - common CNI plugins include:
      -ºCiliumº:
        - high scalability.
        - provides network connectivity and load balancing
          between application workloads, such as application
          containers and processes, and ensures transparent security.
      -ºContivº:
        - integrates containers, virtualization, and physical servers
          based on the container network using a single networking fabric.
      -ºContrailº:
        - provides overlay networking for multi-cloud and
          hybrid cloud through network policy enforcement.
      -ºCalicoº:
        - makes it easier for developers to configure a layer 3
          network fabric for Kubernetes.
      -ºMultusº:
        - supports multiple network interfaces in a single pod on
          Kubernetes for SRIOV, SRIOV-DPDK, OVS-DPDK, and VPP workloads.
      -ºOpen vSwitch (OVS)º:
        - production-grade CNI platform with a standard management
          interface on OpenShift and OpenStack.
        - enables virtual networks for multiple containers on different
          hosts using an overlay function.
        - makes cloud network functions less expensive to build,
          easier to operate, and better performing than traditional
          cloud networks.
    - in addition to network namespaces, an SDN should increase security by
      offering isolation between multiple namespaces with the multi-tenant plugin:
      packets from one namespace, by default, will not be visible to
      other namespaces, so containers from different namespaces cannot
      send packets to or receive packets from pods and services of a
      different namespace.
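    The namespace isolation described above can be approximated with a
    NetworkPolicy; this well-known recipe (the namespace name is illustrative)
    allows ingress only from pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: my-ns             # hypothetical namespace to isolate
spec:
  podSelector: {}              # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}          # only pods in this same namespace may connect
```

    Note that this only takes effect with a CNI plugin that enforces
    NetworkPolicy (Cilium, Calico, ...).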
    Network Policy Providers
    Use Cilium for NetworkPolicy
    Use Kube-router for NetworkPolicy
    Romana for NetworkPolicy
    Weave Net for NetworkPolicy
    Access Clusters Using the Kubernetes API
    Access Services Running on Clusters
    Advertise Extended Resources for a Node
    Autoscale the DNS Service in a Cluster
    Change the Reclaim Policy of a PersistentVolume
    Change the Default StorageClass
    Cluster Management
    Configure Multiple Schedulers
    Configure Out of Resource Handling
    Configure Quotas for API Objects
    Control CPU Management Policies on the Node
    Customizing DNS Service
    Debugging DNS Resolution
    Declare Network Policy
    Developing Cloud Controller Manager
    Encrypting Secret Data at Rest
    Guaranteed Scheduling for Critical Add-On Pods
    IP Masquerade Agent User Guide
    Kubernetes Cloud Controller Manager
    Limit Storage Consumption
    Operating etcd Clusters for Kubernetes
    Reconfigure a Node's Kubelet in a Live Cluster
    Reserve Compute Resources for System Daemons
    Safely Drain a Node while Respecting Application SLOs
    Securing a Cluster
    Set Kubelet Parameters via a Config File
    Set up High-Availability Kubernetes Masters
    Static Pods
    Storage Object in Use Protection
    Using CoreDNS for Service Discovery
    Using a KMS Provider for Data Encryption
    Using sysctls in a Kubernetes Cluster
    Service Mesh