metadata
metadata contains important information about Kubernetes objects.
Many attributes can be specified as metadata, but the following are the most commonly used:
- name
- namespace
- labels
- annotations
metadata.name
metadata.name is the only required string attribute when creating or modifying a Kubernetes object such as a Pod, Deployment, Service, ConfigMap, or Volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
......
........
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
......
........
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
.......
..........
In the examples above, we named our Deployment, Service, and ConfigMap objects app-frontend, frontend-service, and nginx-config respectively.
We can use kubectl to query for the objects by using their names.
$ kubectl get deployments app-frontend
$ kubectl describe deployments app-frontend
$ kubectl get service frontend-service
$ kubectl describe service frontend-service
$ kubectl get configmap nginx-config
$ kubectl describe configmap nginx-config
metadata.namespace
Most Kubernetes objects are scoped to a namespace. The metadata.namespace attribute specifies which namespace the object belongs to.
Kubernetes objects are uniquely identified within a namespace by their name. As a result, multiple objects in the same namespace cannot use the same name.
You can simply specify the metadata.namespace string attribute.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  namespace: development
......
........
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: development
......
........
If the namespace attribute is omitted from your specification, the namespace “default” is used.
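Note that a non-default namespace must exist before objects can be created in it. A namespace is itself a Kubernetes object; a minimal sketch, using the development name from the examples above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
```

Applying this manifest (or running kubectl create namespace development) makes the namespace available for the Deployment and Service above.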
metadata.labels
Labels are key/value pairs that are attached to Kubernetes objects. Labels are typically used to specify identifying attributes of objects, or to select an object as a member of some logical grouping of objects.
Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined, and each key must be unique for a given object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  namespace: development
  labels:
    tier: frontend
    env: development
    release: stable
    version: v1.8
.......
.........
metadata.annotations
Annotations are used to attach arbitrary non-identifying metadata to objects. Like labels, annotations are key/value pairs, but they are meant to be consumed by external tools and libraries.
One common use case is annotating an Ingress object to force a redirect to HTTPS (ssl-redirect) and to request a certificate from the Let's Encrypt issuer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: 'letsencrypt-prod'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
Label selectors
Labels do not provide uniqueness. In general, we expect many objects to carry the same label(s). Via a label selector, the client/user can identify a set of objects.
The label selector is the core grouping primitive in Kubernetes.
The API currently supports two types of selectors:
- Equality-based
- Set-based
Equality-based
Equality-based requirements allow filtering by label key and value. Matching objects must satisfy all the specified requirements. The supported operators are =, == (both meaning equality) and !=.
Set-based
Set-based labels allow filtering keys according to a set of values. The supported operators are in, notin and exists.
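To make the two selector flavors concrete, here is a small Python sketch (not part of Kubernetes; the function names and label dictionaries are illustrative) that mimics how equality-based and set-based requirements are evaluated against an object's label map:

```python
# Illustrative sketch of Kubernetes label-selector semantics.

def matches_equality(labels, key, op, value):
    """Evaluate an equality-based requirement (=, ==, !=) against a label map."""
    if op in ("=", "=="):
        return labels.get(key) == value
    if op == "!=":
        return labels.get(key) != value
    raise ValueError("unsupported operator: " + op)

def matches_set(labels, key, op, values=()):
    """Evaluate a set-based requirement (in, notin, exists) against a label map."""
    if op == "in":
        return labels.get(key) in values
    if op == "notin":
        return labels.get(key) not in values
    if op == "exists":
        return key in labels
    raise ValueError("unsupported operator: " + op)

pod = {"app": "webserver", "tier": "frontend"}

# tier=frontend,app!=website -- comma-separated requirements are ANDed together
print(matches_equality(pod, "tier", "=", "frontend")
      and matches_equality(pod, "app", "!=", "website"))   # True

# tier in (frontend), app notin (website)
print(matches_set(pod, "tier", "in", ("frontend",))
      and matches_set(pod, "app", "notin", ("website",)))  # True
```

The same AND semantics apply when you pass multiple requirements to kubectl with -l, as the examples below show.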
Create a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: website
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
      tier: frontend
  template:
    metadata:
      labels:
        app: website
        tier: frontend
    spec:
      containers:
      - name: frontend-website
        image: learninghub/website:1.0
        ports:
        - containerPort: 80
$ kubectl apply -f app-deployment.yaml
deployment.apps "app-frontend" created
Create a Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: webserver
    tier: frontend
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
$ kubectl apply -f nginx-demo.yaml
pod "nginx" created
$ kubectl get pods --show-labels
NAME                 READY   STATUS    RESTARTS   AGE   LABELS
app-frontend-fx95c   1/1     Running   0          10m   app=website,pod-template-hash=1215812976,tier=frontend
app-frontend-gh9rs   1/1     Running   0          10m   app=website,pod-template-hash=1215812976,tier=frontend
app-frontend-w8kdl   1/1     Running   0          10m   app=website,pod-template-hash=1215812976,tier=frontend
nginx                1/1     Running   0          47s   app=webserver,tier=frontend
Equality-based
Get all the web server pods
$ kubectl get pods -l app=webserver
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          4m
Get all the frontend pods except the web server
$ kubectl get pods -l tier=frontend,app!=webserver
NAME                 READY   STATUS    RESTARTS   AGE
app-frontend-fx95c   1/1     Running   0          18m
app-frontend-gh9rs   1/1     Running   0          18m
app-frontend-w8kdl   1/1     Running   0          18m
Here the comma (,) acts as an AND operator.
Set-based
Get all the frontend pods that are not part of the website.
$ kubectl get pods -l 'tier in (frontend),app notin (website)'
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          11m
Selection via fields (field selectors)
$ kubectl get pod --field-selector metadata.name=nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          12m
$ kubectl get pod --field-selector metadata.namespace=default
NAME                            READY   STATUS    RESTARTS   AGE
app-frontend-5659d56fcb-fx95c   1/1     Running   0          23m
app-frontend-5659d56fcb-gh9rs   1/1     Running   0          23m
app-frontend-5659d56fcb-w8kdl   1/1     Running   0          23m
nginx                           1/1     Running   0          13m
Specifying a selector in a Service
Example deployment metadata:
metadata:
  labels:
    app: website
    tier: frontend
    stage: production
Service manifest:
kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  selector:
    app: website
    stage: production
  ports:
  - port: 80
    protocol: TCP
This creates a Service that selects pods based only on the app and stage labels; the containers running in the selected pods listen on port 80. If the selector is omitted, the Service will not select any pods.
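A Service selector uses simple AND-of-equality (subset) semantics: a pod is selected when every key/value pair in the selector appears in the pod's labels, and any extra pod labels are ignored. A minimal Python illustration (the function and data are hypothetical, not a Kubernetes API):

```python
def selected(selector, pod_labels):
    """A pod is selected when every selector pair is present in its labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "website", "stage": "production"}

# Matches: carries both selector labels (the extra tier label is ignored).
print(selected(selector, {"app": "website", "tier": "frontend", "stage": "production"}))  # True
# Does not match: the stage label differs.
print(selected(selector, {"app": "website", "stage": "staging"}))  # False
```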
Node labels
You can constrain a pod so that it can only run on particular nodes. The recommended approach is to use label selectors to make the selection.
We can attach labels to nodes using “kubectl label nodes” command.
Get all nodes
$ kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
gke-cluster-1-75a9c0b4-83dn   Ready    <none>   3h    v1.11.6-gke.2
gke-cluster-1-75a9c0b4-cgzw   Ready    <none>   3h    v1.11.6-gke.2
gke-cluster-1-75a9c0b4-rd54   Ready    <none>   3h    v1.11.6-gke.2
Add a label to a node
$ kubectl label nodes gke-cluster-1-75a9c0b4-83dn enviroment=dev
node "gke-cluster-1-75a9c0b4-83dn" labeled
You can verify that it worked by running
$ kubectl get nodes --show-labels
You can also use kubectl describe:
$ kubectl describe node gke-cluster-1-75a9c0b4-83dn
Name:    gke-cluster-1-75a9c0b4-83dn
Roles:
Labels:  beta.kubernetes.io/arch=amd64
         beta.kubernetes.io/fluentd-ds-ready=true
         beta.kubernetes.io/instance-type=n1-standard-1
         beta.kubernetes.io/os=linux
         cloud.google.com/gke-nodepool=default-pool
         cloud.google.com/gke-os-distribution=cos
         enviroment=dev
         failure-domain.beta.kubernetes.io/region=us-central1
............
...............
Built-in node labels
In addition to labels you attach, nodes come pre-populated with a standard set of labels.
- kubernetes.io/hostname
- failure-domain.beta.kubernetes.io/zone
- failure-domain.beta.kubernetes.io/region
- beta.kubernetes.io/instance-type
- beta.kubernetes.io/os
- beta.kubernetes.io/arch
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint.
nodeSelector is a field of PodSpec. It specifies a map of key-value pairs.
Pod manifest :
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    enviroment: dev
If you are using a workload controller for your application, you have to specify nodeSelector in the pod template (spec.template.spec.nodeSelector).
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
      tier: frontend
  template:
    metadata:
      labels:
        app: website
        tier: frontend
    spec:
      containers:
      - name: frontend-website
        image: learninghub/website:1.0
        ports:
        - containerPort: 80
      nodeSelector:
        enviroment: dev