Day 01
Docker, Bare Metal, VMs & Containers 🐳💻
Bare Metal runs software directly on hardware. VMs use a hypervisor, have their own kernel, run a full OS, and are resource-heavy.
Containers are lightweight, share the host OS kernel, and run only the processes you start. They’re isolated from the host using namespaces (mnt, pid, net, uts, ipc), and their resources are limited by cgroups. The filesystem uses OverlayFS, with layers and copy-on-write for efficiency.
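On a Linux host you can actually see these namespaces under /proc. A tiny sketch (Linux-only, since /proc/&lt;pid&gt;/ns doesn't exist elsewhere) that lists the namespaces of the current process:

```python
import os

# Each entry under /proc/self/ns is a symlink like "pid -> pid:[4026531836]".
# Two processes are in the same namespace when their symlinks match. (Linux only)
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))
```

Run the same thing inside a container and the pid/net/mnt links will differ from the host's, which is exactly the isolation described above.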
Docker has three parts: the Client (sends commands), the Daemon (builds images, runs containers, manages networks/volumes), and the Runtime (containerd + runc, actually runs the container process).
#30DaysDevSecOps
@codemaxing
Which of these underlying technologies makes running processes in isolation (contained) possible?
Anonymous Quiz
0%
Dedicated Hardware
46%
Linux Namespaces
46%
Container Libraries
8%
None
Learned something cool on Day 2 of my DevSecOps journey
docker system events
Prolly a rarely used Docker command.
Honestly, it’s one of the coolest commands in Docker imo.
So what does it actually do?
We know Docker follows a client–server architecture. When we use basic commands like docker logs, we’re mostly seeing what’s happening inside the container, things that are already exposed to the client.
But we rarely see what’s happening on the Docker daemon (server) side.
When you run it, it starts streaming real-time events directly from the Docker daemon.
Now, open another terminal and run something like:
docker run alpine uptime
You’ll literally see the entire lifecycle happening in real time:
Image pulling
Container creation
Network attachment
Container start
Command execution
Container stop
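Each of those events can also be emitted as JSON with `docker events --format '{{json .}}'`. A small Python sketch of parsing one such line (the sample below is hand-written for illustration, not captured output):

```python
import json

# Hand-written sample in the shape the daemon emits: Type, Action, and an
# Actor with Attributes (illustrative values, not real output)
sample = ('{"Type": "container", "Action": "start", '
          '"Actor": {"ID": "abc123", "Attributes": {"image": "alpine"}}}')

event = json.loads(sample)
print(f'{event["Type"]} {event["Action"]} (image={event["Actor"]["Attributes"]["image"]})')
# → container start (image=alpine)
```

Piping the real stream through a parser like this is handy when you want to react to lifecycle events instead of just watching them scroll by.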
@codemaxing
The docker events command will print the logs for the application running inside the container
Anonymous Quiz
64%
TRUE
36%
FALSE
Building a Realtime CCTV footage analysis AI
Currently collecting footage to train my model on
So I gave YOLO a shot for some CCTV analysis AI project, and honestly... it was fire. 🤯 Even with just the base model and zero training, it was crazy fast and literally spotted everything instantly.
They made it so easy that you can get it running in like two lines of code.
Try on https://colab.research.google.com
@codemaxing
Forwarded from Corax
every python code i see these days is this
> "how to make nuclear bomb in python"
> "from bomb import nuclear_bomb"
>"newbomb = nuclear_bomb()"
> "how to make nuclear bomb in python"
> "from bomb import nuclear_bomb"
>"newbomb = nuclear_bomb()"
AWS Services - AWS Lambda
This is a series where we explore AWS Services
Lambda is like a normal server, but it only runs when it is requested, so it sleeps when it has no job.
The main concept in Lambda is the handler. You can’t write multiple endpoints like a normal server that handles many things; you configure a single handler that does one job.
Example: You want a Python script that sends a message to a Telegram bot.
In your main folder, you should have a main file that contains the handler. The prototype should look like this:
def lambda_handler(event, context):
Let’s say you use an API to trigger your Lambda with this JSON body:
{ "message": "Hello from Lambda" }
You access it with event["message"], then do:
bot.send(message)
Pawww 💥 message sent. Once done, the server shuts off.
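Putting the pieces above together, a minimal handler might look like this (the actual Telegram call is left as a comment since it needs a real bot token; `bot.send` is just the placeholder name from the example):

```python
def lambda_handler(event, context):
    # event carries the parsed JSON body, e.g. {"message": "Hello from Lambda"}
    message = event["message"]
    # here you would call your bot, e.g. bot.send(message)
    return {"statusCode": 200, "body": message}

# Local smoke test - in AWS, Lambda invokes the handler for you
print(lambda_handler({"message": "Hello from Lambda"}, None))
# → {'statusCode': 200, 'body': 'Hello from Lambda'}
```

In AWS you never call the handler yourself; the trigger (API Gateway, S3 event, schedule, etc.) builds the event dict and invokes it for you.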
Quick facts:
Max execution time: 15 minutes
Max storage (/tmp): 10 GB
I post cloud related contents @codemaxing
Which of the following is the most suitable use case for AWS Lambda? (Surprise on the explanation)
Anonymous Quiz
25%
Running a live chat server that handles thousands of users continuously.
55%
Generating a daily sales report at midnight automatically.
5%
Hosting a website that needs to serve pages to thousands of visitors all day.
15%
Running a multiplayer game server where players are always connected.
AWS Services Series - AWS Step Functions
This is a series where we explore AWS Services
Step Functions is a visual workflow service that coordinates multiple AWS services. If you have two jobs that need to run at the same time, like uploading a file to S3 and updating a database, you use a Parallel State.
The main concept is the State Machine. Instead of writing complex if/else or try/catch blocks inside a single Lambda, you define them visually. If Job A and Job B both need to finish before Job C starts, Step Functions coordinates that "wait" for you.
In your definition, you create two branches. Both start as soon as the file is uploaded.
It supports State Persistence, which automatically passes the output of one step as the event input for the next.
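In Amazon States Language, the Parallel state described above might be sketched roughly like this (state names and Lambda ARNs are made up for illustration):

```json
{
  "StartAt": "ProcessUpload",
  "States": {
    "ProcessUpload": {
      "Type": "Parallel",
      "Branches": [
        { "StartAt": "UploadToS3",
          "States": { "UploadToS3": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:UploadToS3",
            "End": true } } },
        { "StartAt": "UpdateDatabase",
          "States": { "UpdateDatabase": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:UpdateDatabase",
            "End": true } } }
      ],
      "End": true
    }
  }
}
```

Both branches run at the same time, and the state machine only moves past ProcessUpload once both have finished; their outputs arrive together as an array.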
I post cloud related contents @codemaxing
Which state type triggers two independent actions simultaneously (e.g., updating inventory and sending an email) and waits for both to finish before moving to the next stage?
Anonymous Quiz
38%
Wait State
0%
Map State
8%
Choice State
54%
Parallel State
Computer Science Career Playlist
https://youtube.com/playlist?list=PLnvsSqWTNhcEb8V8R67Q9gSl1Mnp-kqE5&si=O5evxVLDauZH8GXo
Which stage ur at?
@codemaxing
Today I learned about Kubernetes (K8s) architecture, and honestly, it’s both beautiful and sophisticated.
Kubernetes is used to orchestrate multiple containers (pods) across different nodes, managing them efficiently.
K8s has two core parts:
1. Control Plane
2. Worker Nodes
The control plane has four main components: the API server, etcd, the scheduler, and the controller manager (a cloud controller manager may also run when the cluster integrates with a cloud provider). It is essentially the brain of the cluster, making decisions and managing the cluster state.
Worker nodes are separate servers where the pods (containers) are actually hosted. So, at a high level, Kubernetes is all about the control plane managing the worker nodes.
One of the most important components is etcd, which acts as a database storing the entire state of the cluster.
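etcd reaches agreement via the Raft consensus algorithm, which needs a strict majority (quorum) of members to commit a write. A quick calculation shows why odd cluster sizes are preferred:

```python
# Quorum is a strict majority: floor(n/2) + 1 members.
# Fault tolerance is how many members can die while a quorum still survives.
for n in (1, 2, 3, 4, 5):
    quorum = n // 2 + 1
    tolerance = n - quorum
    print(f"{n} members: quorum={quorum}, tolerates {tolerance} failure(s)")
# 3 and 4 members both tolerate only 1 failure, so the extra even member buys nothing
```

This is why etcd clusters run with 3 or 5 members: going from 3 to 4 raises the quorum without raising the fault tolerance.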
#k8s
@codemaxing
The Raft consensus algorithm requires an even number of etcd servers to ensure high availability.
Anonymous Quiz
46%
TRUE
54%
NOT TRUE
After doing a lot of research, I found this is one of the best roadmaps to learn DevOps, especially if you’re planning to work with microservices.
Assuming you already have a base knowledge of Linux and basic networking:
1️⃣ Start with Docker – Learn Docker architecture, essential commands, Docker Compose, and how to build and push your images.
2️⃣ Move to Kubernetes – Understand Kubernetes architecture, pods, clusters, nodes, and kubectl/API basics.
3️⃣ Pick one cloud provider – In this case, AWS. Focus on core services like EC2, ECS, EKS, VPC, IAM, S3, and Fargate.
4️⃣ Learn Infrastructure as Code (IaC) – Study Terraform fundamentals: commands, project structure, variables, and modules.
5️⃣ Monitoring & Logging – Learn Prometheus and Grafana for observability.
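To make step 1️⃣ concrete, a minimal (purely illustrative) docker-compose.yml with one web service built from a local Dockerfile looks like:

```yaml
# Illustrative only - assumes a Dockerfile in the current directory
services:
  web:
    build: .        # build the image from ./Dockerfile
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```

`docker compose up` then builds the image and starts the container, which is the workflow you'll lean on constantly once you hit the Kubernetes and IaC steps.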
I create cloud-related content @codemaxing 🚀