The post Home Lab Chronicles: Building a Robust Network Infrastructure appeared first on GK.
I recently put together a home lab designed to be both affordable and powerful, with a strong focus on flexibility and scalability.
In this blog post, I’ll share the details of my setup and why I chose the components I did, including the TP-Link Omada ecosystem and various open-source solutions.
My setup revolves around TP-Link’s Omada ecosystem, a choice I made for its balance of functionality and budget-friendliness. Here’s a quick look at the core pieces of hardware:

One of the most important aspects of my home lab is the virtualized environment I’ve built using Proxmox.
Proxmox is an open-source type 1 hypervisor platform, and it runs on my old Dell computer, which is equipped with a quad-core Intel Core i7-3770 processor and 32GB of RAM.
Not the most powerful PC, but it’s more than enough to run a variety of projects.
This setup allows me to run a range of virtual machines (VMs) for different use cases:
This environment gives me the flexibility to test different systems without needing multiple physical machines, making it cost-effective and space-efficient.
A key part of my network setup is the use of VLANs (Virtual Local Area Networks). I’ve created multiple VLANs to segregate traffic for specific purposes:

Segmenting traffic like this not only improves security but also ensures better performance and easier troubleshooting.

Backing up my virtual environments and data is crucial, and for that, I use two 4TB external drives.
One is dedicated to backing up my Proxmox server, while the other backs up my Synology NAS.
This backup strategy ensures that even if something goes wrong with my setup, I can restore my data quickly and efficiently.
To keep everything organized and avoid the typical clutter of cables, I’ve rack-mounted my equipment in a 6U rack.
This holds my 16-port switches, a 24-port patch panel, and a surge-protected power strip.
The clean, efficient setup not only saves space but also makes managing the hardware much easier.
To crimp my Cat6 cables, I stripped about 1.5 inches of the jacket, arranged the wires in the T568B configuration, and inserted them into RJ-45 connectors.

After crimping the connectors, I made 3- to 5-inch patch cables for better cable management in my rack.
Finally, I used a cable tester to verify proper alignment and ensure each connection was secure and functioning correctly.

Proper wire labeling is essential in my home lab for organization and clarity. I used self-adhesive labels and a label maker to create clear, durable tags, ensuring consistent formatting.

Each cable is labeled on both ends with key details like cable type (e.g., Cat6), destination (e.g., “Living Room AP” for the access point), and any relevant VLAN information.
This systematic approach simplifies troubleshooting and makes future modifications more efficient.
Instead of purchasing outdated Cisco hardware, I decided to use Packet Tracer for my networking simulations.
This software provides me with all the tools I need to experiment with Cisco configurations and hone my networking skills without the expense or space requirements of actual hardware.
One of the most interesting aspects of my home lab is running Wazuh, an open-source security platform, on an Ubuntu Linux VM.
With Wazuh, I can test endpoint protection, intrusion detection, and other cybersecurity measures, allowing me to practice handling real-world security incidents in a safe environment.

One of the main reasons I decided to go with TP-Link’s suite of products was the Omada controller function.
Having a single, centralized interface to manage all of my devices makes network administration much more efficient.
The TP-Link Omada ecosystem is also incredibly budget-friendly, which was a big factor in my decision.
Building this home lab has been an incredibly rewarding experience.
It’s not just about having a space to test and experiment—it’s about creating a real-world learning environment that mimics the challenges faced in professional IT and network administration.
Whether you’re new to networking or looking to deepen your skills, a home lab can provide endless opportunities to grow.
If you’re looking to build your own lab, my advice is simple: start with a clear goal, find budget-friendly equipment that meets your needs, and take advantage of open-source software wherever possible.
The post How We Recovered from a Ransomware Attack appeared first on GK.
While many have heard about the infamous Qlocker ransomware, which encrypted files using the 7-zip utility, we had the misfortune of dealing with eCh0raix, a different strain that exploited vulnerabilities in QNAP’s firmware.
The attack wasn’t the result of poor network security or a careless mistake on our part—hackers took advantage of an unpatched security flaw in the NAS firmware, making our defenses futile against the attack.
It started as a regular day—until we noticed that many of our company files were suddenly inaccessible.
Digging deeper, we discovered the files had been encrypted, and a ransom note demanding 0.01 Bitcoin (approximately $550 at the time) was left in place of the original files.
Hackers provided a Tor link, requiring us to pay the ransom in exchange for the decryption key.
The attackers were swift and precise, locking down our incremental backups, leaving us with a decision: either pay the ransom or restore from a backup, knowing we’d lose about 24 hours’ worth of data. We made the choice not to pay the ransom.
Luckily, we had been diligent about our backups. We were able to restore our critical data from an earlier backup, effectively losing only a day’s worth of work.
Despite the relatively minor data loss, we decommissioned the QNAP NAS as soon as we could.
We replaced it with a Synology NAS, which has a more robust reputation for security.
In hindsight, this decision was crucial—just a year later, QNAP devices were hit with yet another ransomware attack on September 3, 2022.
At that point, it became clear that hackers had made QNAP a persistent target.
This experience brought to light a few crucial lessons. One of the most important? Always change the default ports on your NAS.
Many NAS devices, including QNAP, use default ports (like 8080 and 443) that are easy targets for hackers scanning for vulnerable systems.
By changing these ports, you can add an extra layer of protection against automated attacks.
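To see why this matters, here’s a small Python sketch of the kind of check an automated scanner performs against default ports. The host and ports are placeholders, and this should only ever be run against devices you own:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# Example: probe the common NAS web ports on a device you administer.
for port in (443, 8080):
    print(port, "open" if port_open("192.168.1.100", port) else "closed")
```

Attackers run exactly this kind of loop across the whole internet, which is why moving services off the well-known defaults cuts down the automated noise.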
Additionally, always make sure your NAS firmware is up to date.
QNAP released patches after these vulnerabilities were exploited, but many users had already been affected by the time those updates were made available.
Regular patching is key to staying ahead of potential attacks.
This experience taught us several key lessons. First, no matter how secure your network infrastructure is, vulnerabilities in your storage devices can still be exploited.
The eCh0raix ransomware proved that even strong network defenses are useless if firmware is left unpatched. The second lesson was the importance of a reliable backup strategy.
Because we regularly performed full and incremental backups, we avoided the devastating consequences that many businesses face when ransomware strikes.
Our ability to quickly restore from a backup made all the difference in resolving the issue without paying the ransom.
During the attack, I scoured the Reddit forums for a possible fix, where many other QNAP users shared their frustration with the eCh0raix and Qlocker attacks.
Some had opted to pay the ransom, while others were able to recover files through backups or decryption tools that were being developed by security researchers.
But for many, there was no easy fix, and they were left weighing the cost of the ransom against the potential loss of critical business data.
The decision to switch to a Synology NAS was a proactive measure to avoid future ransomware attacks.
While no system is invulnerable, Synology has a solid track record in terms of security patches and rapid response to vulnerabilities.
Additionally, we implemented even more stringent backup policies, ensuring multiple layers of redundancy, including offsite storage solutions.
Ultimately, this experience served as a harsh reminder that technology can be both an asset and a liability.
Vulnerabilities will always exist, and bad actors are constantly looking for ways to exploit them.
If you’re using a QNAP or any NAS device, make sure it’s regularly updated, and more importantly, maintain a reliable and well-tested backup system.
Avoiding ransomware attacks may not always be possible, but being prepared for recovery is within your control.
And as for QNAP? They seem to have angered someone in the hacker world because their vulnerabilities keep being targeted—proving once again that cybersecurity is a cat-and-mouse game.
The post Encapsulation vs. Decapsulation: Data Transmission in Networking appeared first on GK.
Two of the most fundamental concepts in networking are encapsulation and decapsulation.
These terms refer to how data is packaged and unpackaged as it travels through different layers of the OSI (Open Systems Interconnection) model.
Encapsulation is the process of wrapping data with the necessary protocol information before it is transmitted over a network.
Think of it like packing a letter into an envelope and then placing it into a series of larger boxes, each box representing a different layer of the OSI model.

At each layer of the OSI model, additional information is added to the original data (payload). This information, known as headers (and sometimes footers), contains details necessary for proper routing, error checking, and more.
Here’s a quick breakdown of the encapsulation process:
At each step, the original data is wrapped in additional layers of information, ensuring it can be correctly routed, received, and understood at its destination.
Imagine sending a message, “Hello World,” from one computer to another. At the application layer, “Hello World” is the raw data.
As it passes down the OSI model, each layer adds its own protocol header, such as TCP/IP at the transport and network layers, and MAC addresses at the data link layer.
By the time it reaches the physical layer, the data has been encapsulated with all the information it needs to reach the other device.
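The layering can be sketched in a few lines of Python. The header strings below are simplified stand-ins for real TCP/IP and Ethernet headers, just to show how each layer wraps the one above it:

```python
# Toy encapsulation: each layer prepends its own (simplified) header.
def encapsulate(payload: bytes) -> bytes:
    segment = b"TCP|" + payload         # Transport layer adds a TCP header
    packet = b"IP|" + segment           # Network layer adds an IP header
    frame = b"ETH|" + packet + b"|FCS"  # Data link layer adds a MAC header and trailer
    return frame                        # The physical layer then transmits the frame as bits

print(encapsulate(b"Hello World"))  # b'ETH|IP|TCP|Hello World|FCS'
```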
Decapsulation is the reverse of encapsulation. It refers to the process of removing the protocol headers from the data as it moves up the OSI model layers on the receiving side.
If encapsulation is packing a series of boxes, decapsulation is the process of opening each one until the original data is extracted.
Once the encapsulated data reaches its destination, the process begins to reverse. At each layer, the appropriate protocol information is removed, and the data moves up the OSI model until it reaches the application layer in its original form.
Here’s how the decapsulation process unfolds:
Continuing with the “Hello World” message, once the receiving computer gets the data, it starts the decapsulation process.
It strips off the MAC addresses, then the IP address, then the TCP header, until finally, the application layer receives the raw “Hello World” message as it was originally sent.
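Decapsulation can be sketched the same way, peeling one simplified stand-in header off per layer until only the payload remains:

```python
# Toy decapsulation: each layer strips the header its peer added.
def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH|").removesuffix(b"|FCS")  # Data link strips the MAC header/trailer
    segment = packet.removeprefix(b"IP|")                       # Network strips the IP header
    payload = segment.removeprefix(b"TCP|")                     # Transport strips the TCP header
    return payload                                              # The application receives the original data

print(decapsulate(b"ETH|IP|TCP|Hello World|FCS"))  # b'Hello World'
```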
These two processes are crucial to network communication. Without encapsulation, data would lack the necessary instructions to be routed, error-checked, or even delivered.
Similarly, without decapsulation, the data would be unreadable once it reached its destination.
In protocols like TCP/IP, encapsulation allows different layers to handle their specific tasks (like routing at the network layer or error-checking at the transport layer), while decapsulation ensures that the raw data is eventually received in the same form it was sent.
Understanding encapsulation and decapsulation is essential to grasping how data is sent and received over a network.
Encapsulation packages the data with the necessary instructions for successful transmission, while decapsulation extracts that data at the other end.
Together, these processes ensure reliable and efficient communication across complex networks, forming the backbone of how modern digital communication works.
The post My Experience Installing Docker: A Journey into Containerization appeared first on GK.
Recently, I took the plunge into the world of containerization by installing Docker on my dusty old Dell computer, equipped with a quad-core 3.4GHz Intel Core i7-3770 and maxed out at 32GB of RAM. Several years ago I kept hearing about containerization. I was familiar with virtualization but didn’t know much about running apps in containers, so I did some Googling to learn more.
Eager to explore how Docker can simplify application development and deployment, I set out to familiarize myself with this powerful tool using Proxmox as my virtualization platform.
I decided to install Docker as a container within Proxmox, leveraging its capabilities to create a streamlined environment. The installation process was relatively straightforward, thanks to the wealth of online resources available.

After getting Docker up and running, I quickly loaded Portainer to make deploying and managing containers easier. Running my first container with the hello-world image was a thrilling moment. Seeing the success message confirmed everything was working as expected, and I felt a sense of accomplishment. I allocated 14GB of RAM, 4 CPU cores, and 90GB of hard drive space to the Docker container I created, optimizing performance on my old hardware.
With Docker installed and Portainer set up, I began to explore its capabilities further. I experimented with pulling and running various container images, including setting up a simple web server using Nginx. The convenience of quickly launching containers and managing services through Portainer was a game-changer for my workflow. Using Docker Compose also simplified the orchestration of multi-container applications, making it even more intuitive to manage.
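To give a flavor of what Compose looks like, here’s a minimal, hypothetical docker-compose.yml for a single Nginx web server like the one mentioned above (the service name and port mapping are illustrative):

```yaml
# Minimal Compose file for one Nginx container (illustrative).
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"   # publish container port 80 on host port 8080
    restart: unless-stopped
```

Running `docker compose up -d` in the same directory starts the container in the background.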
Despite the smooth installation, I faced a few challenges along the way. Navigating Docker’s networking configurations initially proved tricky, and I spent some time learning how to connect containers effectively. Additionally, getting accustomed to Docker’s command-line interface took some practice, but as I continued to use it, I became more comfortable with the commands.
Overall, installing Docker on my old Dell machine was a rewarding experience that opened up new avenues in application development. While there were some challenges, each one provided valuable lessons and insights. As I continue to explore Docker’s capabilities, I’m excited about the potential it holds for streamlining my future projects and enhancing my skills in the ever-evolving tech landscape. The combination of Proxmox, Docker, and Portainer has truly transformed my approach to virtualization and containerization.
The post Deploying Wazuh in My Home Lab: A Personal Experience appeared first on GK.
My goal was to create a robust monitoring solution for three devices:
a physical Windows desktop, a virtualized Linux machine running Ubuntu, and another virtualized Windows machine.
By leveraging Proxmox for virtualization, I aimed to gain practical experience that would help me navigate the complexities of these compliance frameworks and improve my skills in securing networks.
However, the journey was filled with excitement and a few challenges along the way.
I’ve been running Proxmox for several months now and it was relatively straightforward to create an environment for Wazuh to run on.
I created virtual machines for both the Ubuntu and Windows systems.
However, I quickly realized that configuring the networking correctly took some trial and error.
I had to double-check IP addresses and ensure that all devices were communicating properly, which led to a few frustrating moments.
I needed to allocate more resources (RAM) so that everything would run smoothly. Here’s a snapshot of the resource allocation for my Wazuh deployment:
Once I got the virtual machines up and running, I dove into installing Wazuh. At first, I was optimistic, but I faced some challenges. The installation process was not as smooth as I had hoped.
For instance, I had some trouble getting the Wazuh agent on my Windows desktop to connect to the Wazuh manager. I spent quite a bit of time tweaking configurations and checking logs to figure out what was wrong.
It turned out that I had to adjust some firewall settings to allow communication between the devices.
To access the Wazuh dashboard remotely, I decided to set up NGINX as a reverse proxy. This part of the process was a bit daunting for me as a newcomer.
I followed various guides but ran into issues with the DNS setup. There were moments of confusion when the site wouldn’t load, though usually once I added a new A record, the site appeared right away. After a bit of refreshing, the site popped up with a valid SSL certificate.
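For context, a reverse proxy like this usually comes down to an NGINX server block along these lines. This is a simplified sketch, not my exact configuration; the hostname, certificate paths, and backend address are placeholders:

```nginx
# Hypothetical NGINX reverse proxy for the Wazuh dashboard.
server {
    listen 443 ssl;
    server_name wazuh.example.com;                    # placeholder hostname

    ssl_certificate     /etc/ssl/certs/example.crt;   # placeholder certificate
    ssl_certificate_key /etc/ssl/private/example.key; # placeholder key

    location / {
        proxy_pass https://192.168.1.50;              # Wazuh dashboard backend (placeholder)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```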
Once Wazuh was up and running, I was excited to start monitoring the activity on my devices.
I was amazed at how Wazuh gathered data from the Windows desktop, tracking login attempts and system changes.
However, I soon realized that I needed to spend time familiarizing myself with the dashboard.
At first, it was overwhelming to interpret the alerts and logs. I found myself sifting through notifications, trying to determine what was normal and what might be a genuine threat.
I also had to spend time learning how to customize rules for monitoring, especially for the Ubuntu VM. The initial settings didn’t quite match my needs, so I had to dig into the documentation to figure out how to tailor the alerts for my setup.
Deploying Wazuh in my home lab has been a journey of discovery filled with its share of challenges.
While I faced issues with networking, installation, and configuration, each hurdle taught me something new about cybersecurity and system monitoring.
As I continue to refine my setup and expand my knowledge, I’m excited to see how Wazuh can help me stay vigilant against potential threats in a network.
The post Understanding IPv4 Address Classes: A Beginner’s Guide appeared first on GK.
IPv4 is a 32-bit addressing scheme, which means each address consists of four octets (8-bit sections), separated by periods. An IPv4 address looks like this: 192.168.1.1. Each octet can have a value between 0 and 255, leading to a total of about 4.3 billion possible addresses (2^32).
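A few lines of Python make the 32-bit structure visible:

```python
# Convert each octet of a dotted-quad address to its 8-bit binary form.
octets = "192.168.1.1".split(".")
bits = "".join(f"{int(octet):08b}" for octet in octets)

print(bits)       # 11000000101010000000000100000001
print(len(bits))  # 32 bits in total
print(2 ** 32)    # 4294967296, about 4.3 billion addresses
```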
Given the size of the internet and the variety of users, IPv4 addresses are divided into five classes, labeled A through E. Each class serves a specific purpose, with different default network sizes.
Here’s a breakdown of the five classes:

| Class | Address Range | Default Subnet Mask | Example |
|---|---|---|---|
| A | 0.0.0.0 to 127.255.255.255 | 255.0.0.0 | 10.0.0.1 |
| B | 128.0.0.0 to 191.255.255.255 | 255.255.0.0 | 172.16.0.1 |
| C | 192.0.0.0 to 223.255.255.255 | 255.255.255.0 | 192.168.1.1 |
| D | 224.0.0.0 to 239.255.255.255 | N/A (multicast) | 239.0.0.1 |
| E | 240.0.0.0 to 255.255.255.255 | N/A (experimental) | 255.255.255.254 |

IPv4 also includes certain special addresses that don’t fall into the standard classes:
127.0.0.1 is a loopback address used to test network interfaces within a host. It is commonly used for diagnostics and network testing. There are also three private address ranges:

- 10.0.0.0 to 10.255.255.255
- 172.16.0.0 to 172.31.255.255
- 192.168.0.0 to 192.168.255.255

These addresses are not routable on the internet and are used internally within homes, offices, or private networks.
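Python’s standard ipaddress module recognizes these special ranges, which makes for a quick sanity check:

```python
import ipaddress

# Loopback and private (non-routable) ranges are built into the module.
print(ipaddress.ip_address("127.0.0.1").is_loopback)   # True
print(ipaddress.ip_address("192.168.1.1").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)      # False: a public address
```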
In addition to dividing IPv4 into different classes, network administrators often use a process called subnetting to divide a single class into smaller sub-networks. Subnetting allows better management of IP addresses and reduces the wastage of IP resources by allocating only the necessary number of addresses to a network. Subnetting is especially useful as the demand for unique IP addresses continues to rise.
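Python’s standard ipaddress module is also a convenient way to experiment with subnetting. As a quick sketch, here a /24 network is carved into four /26 subnets:

```python
import ipaddress

# Split a /24 (256 addresses) into four /26 subnets of 64 addresses each.
network = ipaddress.ip_network("192.168.1.0/24")
subnets = list(network.subnets(new_prefix=26))

for subnet in subnets:
    print(subnet, "-", subnet.num_addresses, "addresses")
# 192.168.1.0/26 - 64 addresses
# 192.168.1.64/26 - 64 addresses
# 192.168.1.128/26 - 64 addresses
# 192.168.1.192/26 - 64 addresses
```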
Due to the limited number of IPv4 addresses, there has been a transition to IPv6, which uses a 128-bit addressing scheme and offers a significantly larger pool of IP addresses. However, IPv4 remains widely used across the internet.
Understanding IPv4 address classes is a crucial step in learning how network addressing works. From large enterprise networks using Class A addresses to home networks with Class C addresses, this system has supported the internet for decades. While IPv6 may eventually take over, knowledge of IPv4 is still foundational for anyone studying computer networking or pursuing IT certifications.
As the digital world continues to grow, concepts like IP classes, subnetting, and private address ranges remain essential for creating efficient, scalable networks.
The post The OSI Model: Breaking Down How Data Travels Across Networks appeared first on GK.
In this guide, we’ll explain each layer of the OSI model in a simple and easy-to-understand way.
The OSI model is a 7-layer framework used to explain how data travels from one device (like your computer) to another (like a server or printer). Each layer has a specific job in making sure the data gets to where it needs to go. Think of the layers like steps in a process, where each step has a role in preparing and sending the data along its journey.
To remember the layers, you can use this phrase: “Please Do Not Throw Sausage Pizza Away” — each word stands for a layer, starting from the bottom: Physical, Data Link, Network, Transport, Session, Presentation, Application.
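As a quick sanity check, a few lines of Python confirm that the mnemonic’s initials line up with the layers, bottom to top:

```python
# "Please Do Not Throw Sausage Pizza Away" maps to Layers 1 through 7.
mnemonic = "Please Do Not Throw Sausage Pizza Away".split()
layers = ["Physical", "Data Link", "Network", "Transport",
          "Session", "Presentation", "Application"]

for number, (word, layer) in enumerate(zip(mnemonic, layers), start=1):
    # Each mnemonic word shares its first letter with its layer.
    print(f"Layer {number}: {layer} ({word})")
```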
Now, let’s break down each layer:
This is the first and most basic layer. The Physical Layer deals with the hardware — the actual physical parts of the network.
Once the data gets to the next device, the Data Link Layer steps in to help organize the data so it can move smoothly between devices on the same network.
When data needs to travel across different networks (like from your home network to a website), the Network Layer takes over. This layer is all about finding the best path for the data.
The Transport Layer makes sure that the data arrives safely and in the right order.
The Session Layer manages the connection between two devices.
The Presentation Layer makes sure that the data is in the right format so the receiving device can understand it.
Finally, the Application Layer is what the end user interacts with. This is where network services happen — like sending an email or browsing the web.
Let’s say you’re sending an email. Here’s how the OSI model works in simple terms:
When the email reaches its destination, the process works in reverse, and the email is pieced back together and delivered to the recipient.
The OSI model is a helpful tool for anyone learning about networking because it breaks down complex processes into simple steps. Each layer has its own role, making it easier to troubleshoot issues or design networks. It also helps ensure that different devices and networks can communicate with each other, even if they’re using different technologies.
Understanding the OSI model is like having a roadmap for how data travels across a network. Whether you’re just starting out or moving into more advanced networking, knowing the OSI model will give you a strong foundation to build on.
The post VLANs vs. Subnetting: What’s the Difference and How Are They Used? appeared first on GK.
A VLAN (Virtual Local Area Network) allows network administrators to logically segment a physical network into smaller, isolated sections. These sections can function as independent networks while sharing the same physical infrastructure, such as switches and cabling. VLANs operate at Layer 2 (Data Link Layer) of the OSI model, which is the layer responsible for MAC address-based communication between devices.
VLANs are primarily used to:
VLANs are created on network switches. Each port on the switch can be assigned to a specific VLAN, ensuring that devices connected to those ports are part of the same virtual network. Devices within the same VLAN can communicate with each other as if they were on the same physical network, but devices on different VLANs cannot communicate without the help of a router or Layer 3 switch to route the traffic between VLANs.
For example, in a large office, the Sales department could be assigned to VLAN 10, while the IT department might be assigned to VLAN 20. Even though all devices are connected to the same physical switch, Sales cannot directly communicate with IT unless routing is set up between the two VLANs.
While VLANs segment a network at the Layer 2 level, subnetting operates at Layer 3 (Network Layer) of the OSI model. Subnetting is the process of dividing a larger IP network into smaller, more manageable subnetworks, or subnets. Each subnet has its own range of IP addresses and typically represents a group of devices that share a common geographic location, function, or security level.
Subnetting serves several key functions:
Subnetting is done by manipulating the subnet mask associated with an IP address. The subnet mask determines which portion of the IP address represents the network and which part represents the host devices. By changing the subnet mask, you can carve out smaller subnets from a larger IP address range.
For example, a company with the network 192.168.1.0/24 could divide this into two subnets:
Each subnet has its own set of IP addresses and devices within the same subnet can communicate directly. However, if HR and IT want to communicate with each other, the traffic must pass through a router.
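A split like this is easy to verify with Python’s ipaddress module. Assuming the /24 is divided into two /25 halves (the HR/IT labels are just for illustration):

```python
import ipaddress

# Divide 192.168.1.0/24 into two /25 subnets.
hr_subnet, it_subnet = ipaddress.ip_network("192.168.1.0/24").subnets(new_prefix=25)
print(hr_subnet, it_subnet)  # 192.168.1.0/25 192.168.1.128/25

# Hosts in the same subnet can talk directly; across subnets, a router is needed.
host_a = ipaddress.ip_address("192.168.1.10")   # in the first subnet
host_b = ipaddress.ip_address("192.168.1.200")  # in the second subnet
print(host_a in hr_subnet, host_b in it_subnet)  # True True
print(host_b in hr_subnet)                       # False
```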
While both VLANs and subnetting are used to divide networks into smaller, more manageable parts, they operate differently and serve distinct purposes:
| Aspect | VLAN | Subnetting |
|---|---|---|
| OSI Layer | Layer 2 (Data Link) | Layer 3 (Network) |
| Device Type | Switches | Routers |
| Communication | Devices in the same VLAN can communicate directly; different VLANs need routing. | Devices in different subnets need a router to communicate. |
| Purpose | Logical segmentation of a network within a switch for security, broadcast control. | Dividing IP address ranges into smaller, more manageable segments. |
| Management | Based on switch port configurations. | Based on IP addresses and subnet masks. |
In many networks, VLANs and subnetting are used in tandem to maximize network efficiency, security, and performance. A common practice is to assign each VLAN its own subnet. This setup allows for the logical grouping of devices (through VLANs) while also controlling IP traffic and broadcast domains through subnetting.
For instance, you could have:
A router or Layer 3 switch would then manage communication between the VLANs and subnets.
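The VLAN-per-subnet pattern can be sketched as a simple lookup in Python. The VLAN IDs and subnets below are hypothetical, chosen only to illustrate the mapping:

```python
import ipaddress

# Hypothetical plan: each VLAN gets its own /24 subnet.
vlan_subnets = {
    10: ipaddress.ip_network("192.168.10.0/24"),  # VLAN 10 - Sales
    20: ipaddress.ip_network("192.168.20.0/24"),  # VLAN 20 - IT
    30: ipaddress.ip_network("192.168.30.0/24"),  # VLAN 30 - Guest
}

def vlan_for(ip):
    """Return the VLAN ID whose subnet contains this address, or None."""
    addr = ipaddress.ip_address(ip)
    for vlan_id, subnet in vlan_subnets.items():
        if addr in subnet:
            return vlan_id
    return None

print(vlan_for("192.168.20.42"))  # 20
print(vlan_for("8.8.8.8"))        # None
```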
In summary, both VLANs and subnetting are vital tools in network design and management. VLANs segment networks at Layer 2, allowing for the logical separation of devices on the same physical switch, while subnetting operates at Layer 3 to manage IP address ranges and control traffic flow. When used together, they offer a powerful way to enhance network performance, security, and scalability.
Understanding the differences and synergy between these two concepts is crucial for any network administrator or IT professional. Whether you’re building a small office network or managing a complex enterprise system, mastering VLANs and subnetting will help ensure your network runs efficiently and securely.