support
--io 4: Initiates 4 I/O stressors, each generating continuous I/O operations to stress the system's disk and filesystem.
--vm 2: Starts 2 virtual memory stressors, each allocating and deallocating memory repeatedly to test the system's memory management.
--vm-bytes 128M: Specifies that each virtual memory stressor should allocate 128 megabytes of memory.
--timeout 60s: Sets the duration of the stress test to 60 seconds, after which all stressors will terminate.
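Putting these flags together, a full load-generation command (run inside the pod, assuming the stress tool is installed in the container image) would look like:
-- stress --io 4 --vm 2 --vm-bytes 128M --timeout 60s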
Open another terminal to watch the pods live:
-- kubectl get po --watch
On main server
-- kubectl top pods
-- kubectl get hpa
-- kubectl get pods
-- kubectl describe hpa mb-deployment [ This will show scaling activities ]
-- kubectl get events [ This will also show the same scaling events ]
-- kubectl logs mb-deployment-8585b755c5-p2bzv [ To see the logs of the pod ]
After a few minutes, scale-in happens as there is no more load.
-- kubectl delete -f auto.yml
Example using Manifestfile
--------------------------
First we need to have a deployment, and then we can autoscale on that deployment (by deployment name).
vi auto.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ib-deployment
  labels:
    app: bank
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bank
  template:
    metadata:
      labels:
        app: bank
    spec:
      containers:
      - name: cont1
        image: reyadocker/internetbankingrepo:latest
-- kubectl apply -f auto.yml
--------------------------------
vi hpa.yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ib-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ib-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20
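To create the HPA from this manifest and verify it:
-- kubectl apply -f hpa.yml
-- kubectl get hpa

As a sketch of the alternative, the same HPA (CPU target 20%, 2-10 replicas) can also be created imperatively, without a manifest:
-- kubectl autoscale deployment ib-deployment --cpu-percent=20 --min=2 --max=10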