Homelabbing experience

October 9, 2025

Shopaholic Journey: From Pixel to Homelab

📱 My Pixel Detour

When my Oppo broke, I went for a Google Pixel 6a ($600). Back then, I could’ve gotten a Xiaomi with the same specs for less, but paranoia about Chinese brands (cybersecurity news, spyware fears, etc.) swayed me. Ironically, I’m much more relaxed about it now.

Overall? Solid experience. Smooth browser, no lock-in, Apple-like feel. The only downside is the social-sharing gap (no AirDrop equivalent).

🖱️ Of Mice, Keyboards & Poly Life

💻 Laptop Choices

🎮 Dad’s Legion 5

Dad bought a Legion 5 for design work. Lucky for me, I could finally game. Steam ran smoothly, with none of the "negative FPS" my IdeaPad gave me. Racing in F1 with a GameSir controller made me get the hype.

🖥️ PC & Homelab Rabbit Hole

Head

Day 1 – $0.50

Before jumping into another device, I had to ask myself: what’s the real purpose of this remote setup?

I already own three laptops, and my Pixel phone alone covers most daily computing needs — solid RAM, camera, storage, and even AI features built right in. Do I really need a fourth machine?

The answer lies in dependability and future-proofing.

And what’s the first requirement for such a setup? Internet. Preferably a wired connection via RJ45 for stability. So I decided to start small — picking up a bag of RJ45 connectors on Taobao for just $0.50.

A modest beginning, but every setup starts with the basics.

Sockets

Day 2 – $30.50

The hardware shop near my house was closing down, and there were decent deals available.

Day 3 – $600.50

This project is starting to burn the wallet a bit, but here’s the plan. The network flow goes like this: ISP → house modem → repeater → wall → network switch. To make that happen, I picked up:

(I’m not counting mouse, keyboard, or monitor costs here.)

Crimper

Day 4 – $700.50

With networking gear sorted, it’s time to get hands-on. I grabbed a crimping tool + testing kit for about $100. This will let me terminate my own Cat6 cables with RJ45 connectors and do some simple network runs.

The target? To have my setup consistently running at 2.5 Gbps.

⚡ Why 2.5 Gbps Became the Sweet Spot

  1. Backwards Compatibility with Cat5e

    • Works with old wiring without re-cabling.
    • Delivers ~2.5× faster speeds with no infrastructure changes.
  2. Wi-Fi 6/6E Backhaul

    • Modern access points push beyond 1 Gbps.
    • 2.5G uplink is the affordable, practical middle ground.
  3. Lower Power & Heat

    • Easier to cool, better for silent consumer gear and homelabs.
  4. Cost Efficiency

    • Much cheaper than 5G/10G hardware, but a big jump over 1G.
  5. Industry Adoption

    • Pushed by the NBASE-T Alliance; widely compatible today.
  6. Practical Speeds

    • Perfect for SSDs, NAS, and 2 Gbps fiber plans.
    • Few home users need more unless doing enterprise work.

In short: 2.5 Gbps is the practical bridge between legacy 1 Gbps and high-end 10 Gbps — affordable, plug-and-play, and more than enough for most homelabs.
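To make those numbers concrete, here's a back-of-the-envelope sketch; the ~94% usable-throughput figure after protocol overhead is a loose assumption:

```python
# Rough transfer-time math for the link speeds above.
# Assumes ~94% usable throughput after Ethernet/TCP overhead (a loose figure).

def transfer_minutes(file_gb: float, link_gbps: float, efficiency: float = 0.94) -> float:
    """Minutes to move file_gb gigabytes over a link of link_gbps gigabits/s."""
    return file_gb * 8 / (link_gbps * efficiency) / 60

for speed in (1, 2.5, 10):
    print(f"{speed:>4} Gbps: 100 GB in ~{transfer_minutes(100, speed):.1f} min "
          f"(~{speed * 0.94 * 1000 / 8:.0f} MB/s)")
```

At roughly 294 MB/s, 2.5 GbE finally keeps pace with spinning disks and gets close to SATA SSD speeds, which is exactly the bridge argument above.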

Screwset

Day 5 – $705

With the hardware store about to close, I made one last quick run:

That brings me to $705 so far… and still nothing is actually usable. Time to start planning the first real device soon.

Day 6 – $750

I bought some HDMI and Thunderbolt cables, which are always a good investment, and did some research. Something I have always wanted is more storage; not counting them into the total cost, I have four 1 TB Seagate drives lying around. That's when it hit me: I should start with a file server, something lightweight, affordable, and excellent for serving files. A full server is overkill, and so is a desktop or workstation. And am I really geeked out enough to run a Raspberry Pi server? I also don't have enough spare phones to repurpose for this, so I'll have to look into a mini-PC for the task. There's a brand whose motto I have always liked, and it seemed perfect for this job. A small Intel PC with two gigabit Ethernet ports, HDMI, and four USB ports is what I'm looking for.

So I asked myself: what's the story of Trigkey, and what's their motto?

Company Background

Trigkey is a Chinese mini-PC and computer hardware brand headquartered in Shenzhen, Guangdong. It was founded fairly recently, in 2021, as a sibling brand to Beelink under the same parent company.

They entered the consumer electronics market with a clear focus: crafting compact desktops that enhance office environments while conserving space.

Their Motto

While Trigkey doesn’t prominently use a stylized tagline like some brands, their guiding principle is clearly stated as:

"Exploring technology, green life" (sometimes rendered as “discovery technology green life”) This captures their mission to integrate the latest tech into daily life while delivering energy-efficient, high-performance mini PCs.

Vision & Approach

Summary

Nas

Day 7 – $855

Today marks the start of my actual homelab build with a small NAS system, powered by a Trigkey N100 mini-PC I picked up from Amazon for around $150. It runs on an Intel N100 (4 cores / 4 threads) — very power efficient, but not exactly built for heavy multitasking.

For storage, I’ve set it up with:

This gives me two mount points:

✅ Pros of the Trigkey N100

⚠️ Cons of the Trigkey N100

🛠️ Setup Journey

  1. Installed Ubuntu Server over Ethernet, then moved it to WiFi to keep it as an isolated island.
  2. Connected it back via Tailscale as an edge node.
  3. Deployed AdGuard to run DHCP and route Tailscale traffic through it.
  4. Set up Active Directory (via Samba) and enabled an SMB share on the 500 GB disk for multi-device access.
  5. Added Filebrowser, mapped to the RAID 10 array, with a daemon to survive reboots, proxied via ngrok.
  6. Installed Cockpit for quick phone-based monitoring.
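None of those services help if they die silently, so I keep a tiny reachability check around. A minimal sketch using only the Python standard library; the nas.local hostname and the ports are my assumptions (Cockpit defaults to 9090, Filebrowser commonly uses 8080, SMB 445, AdGuard's UI 3000):

```python
#!/usr/bin/env python3
"""Tiny TCP health check for the services above.

Hosts/ports are assumptions, not gospel: Cockpit defaults to 9090,
Filebrowser is commonly on 8080, SMB on 445, AdGuard's UI on 3000.
"""
import socket

SERVICES = {
    "cockpit": ("nas.local", 9090),
    "filebrowser": ("nas.local", 8080),
    "smb": ("nas.local", 445),
    "adguard": ("nas.local", 3000),
}

for name, (host, port) in SERVICES.items():
    try:
        # A successful TCP connect is a cheap "the daemon is alive" signal.
        with socket.create_connection((host, port), timeout=2):
            print(f"[ OK ] {name} on {host}:{port}")
    except OSError as exc:
        print(f"[FAIL] {name} on {host}:{port} ({exc})")
```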

🌡️ Long-Term Reliability Considerations

To make the NAS last, I focused on heat and electricity:

Security-wise, I sandboxed it: endpoint antivirus + IDS/IPS. Worst case, only my $150 NAS gets hit.

With this, my first real homelab node is alive 🎉. It’s not a powerhouse, but it’s a dependable starting point. From here, I can expand and layer more services over time.

Day 7 – $860

So, to give you guys some context, here is what RAID means in "My World":

RAID 0 – Striping

RAID 1 – Mirroring

RAID 5 – Striping + Parity

RAID 6 – Double Parity

RAID 10 – Striping + Mirroring

Other RAID Levels

🆚 RAID vs Other Storage Systems

🔹 Unraid

🔹 TrueNAS (FreeNAS) + ZFS

🔹 ZFS RAID-Z (RAID 5/6 Equivalent)

Quick Comparison

| System | Expandable | Performance | Fault Tolerance | Best For |
| --- | --- | --- | --- | --- |
| RAID 0 | No | 🚀🚀🚀 | ❌ None | Speed only |
| RAID 1 | No | 🚀🚀 | ✅ 1 drive fail | Small but critical |
| RAID 5 | No | 🚀🚀 | ✅ 1 drive fail | Balanced home NAS |
| RAID 6 | No | 🚀 | ✅ 2 drives fail | Large arrays |
| RAID 10 | No | 🚀🚀🚀 | ✅ 1 per mirror | Fast + safe, but costly |
| Unraid | ✅ Yes | 🚀 (with SSD cache) | ✅ 1–2 drive fail | Flexible mixed disks |
| ZFS/TrueNAS | Limited | 🚀🚀 | ✅ up to 3 drives | Enterprise-grade |
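To sanity-check the capacity side of that table, here's a quick sketch of the standard usable-capacity formulas, run against four equal 1 TB disks like the Seagates I have lying around:

```python
# Usable capacity for n equal disks under the classic RAID levels above,
# applied to four 1 TB disks (the drives I had lying around).

def usable_tb(level: str, n: int, size_tb: float) -> float:
    if level == "RAID0":
        return n * size_tb          # striping: all capacity, no redundancy
    if level == "RAID1":
        return size_tb              # mirroring: one disk's worth
    if level == "RAID5":
        return (n - 1) * size_tb    # one disk lost to parity
    if level == "RAID6":
        return (n - 2) * size_tb    # two disks lost to parity
    if level == "RAID10":
        return n // 2 * size_tb     # striped mirrors: half the raw space
    raise ValueError(f"unknown level: {level}")

for level in ("RAID0", "RAID1", "RAID5", "RAID6", "RAID10"):
    print(f"{level:>6}: {usable_tb(level, 4, 1.0):.1f} TB usable of 4.0 TB raw")
```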
✅ TL;DR

I mounted the Seagate to /mnt and set it up in fstab as a service, then forgot about it, added five more disks, and tried a RAID 10. Of course, it wouldn't work. I thought it was a Tailscale issue, so I went and zeroed the superblocks, which of course changed the disks' UUIDs. I then tried to format it as NTFS, which failed, though I didn't know why at the time (a conflicting process was holding the disks). Then I tried sudo reboot because of Tailscale, and the stale fstab entry crashed the whole PC on boot. Remember, my Tailscale traffic routes through the NAS as DHCP, so the whole LAN went down with it. Cue panic: no LAN, no outbound. Only then did I understand why. I wasted the whole day, but I'm on RAID 5 now while waiting for the 9.9 sales. I also shifted the homelab out of the living room, because I have Singtel WiFi there for a reason, and transitioned the array to a ZFS file system. Spent $5 on a blower brush for dusting things off in the future.

Day 8 – $880

⚡ Power, Resilience, and Why a UPS Matters for Your Homelab

When building a homelab or even just a reliable home PC setup, one of the most overlooked components isn’t a CPU or GPU — it’s power protection. Let’s break down why.

🔌 What Is a UPS?

A UPS (Uninterruptible Power Supply) is more than just a fancy power strip. It’s essentially a battery backup combined with power conditioning:

Think of it as a shock absorber for your electronics — smoothing the ride between your wall socket and your PC/NAS.

⚠️ Why Wall Power Is “Dirty”

The electricity that comes out of your wall isn’t as perfect as we imagine. It’s considered “dirty” power because it fluctuates due to:

Your devices might survive these hiccups day-to-day, but over time they degrade power supplies, shorten component lifespan, and corrupt data during outages.

🛡️ What Is Resilience in Computing?

Resilience means your system can continue operating — or at least recover gracefully — even when something goes wrong.

For homelabs, resilience =

Without resilience, a single power flicker can lead to RAID rebuilds, file system corruption, or even fried components.

💾 How Different Memory/Storage Types React to Power Outages

Not all data lives in the same kind of “memory.” Here’s how each behaves when the lights go out:

1. RAM (Random Access Memory)

2. HDD (Hard Disk Drive)

3. SSD (SATA SSD)

4. NVMe SSD

5. VRAM (GPU Memory)

📝 Putting It Together

✅ TL;DR

A UPS isn’t a luxury — it’s a resilience enabler.

If you value your homelab, NAS, or even just your main PC, a UPS is one of the smartest investments you can make.

Eaton is considered one of the better UPS brands because it blends enterprise-grade reliability, pure sine wave output, better efficiency, and longer warranties, often at competitive pricing. APC remains the popular household brand, but Eaton is the go-to when you want something that feels closer to datacenter quality for your homelab or critical devices.

After yesterday's scare, I decided to buy a UPS as well, future-proofing my devices and shortening downtimes. $20

Day 9 – $1,350

IronWolf Pro 18 TB – $470

Why I Chose Seagate (IronWolf Pro) for My NAS — and Why 18TB Is the Sweet Spot

I’ve been slowly shaping my homelab into something I can actually rely on day-to-day. The heart of it is a NAS I first stood up on a Trigkey N100: low power, quiet, and perfect for learning. I started with mixed HDDs (RAID10 + a separate 500 GB volume), then set my sights on proper NAS-grade drives as I scale capacity.

Before the choice, I had to clarify the tech:

CMR vs SMR vs HAMR (super short)

For a NAS that will rebuild, scrub, and serve mixed workloads, CMR is the safe, boring, and correct choice.

So… why Seagate, specifically?

I went with Seagate IronWolf Pro because it hits the NAS checklist without drama:

Bottom line: Seagate’s IronWolf Pro gives me predictable CMR performance, NAS-specific features, and better guardrails if something goes sideways—exactly what I want when my array is rebuilding at 3 AM.

Why 18 TB is my current sweet spot

I picked up 18 TB at ~S$26/TB, and here's why that capacity makes sense:

  1. Best value curve right now: Below 16 TB, $/TB is usually worse; above 18–20 TB, you start paying early-adopter tax.
  2. CMR + sanity: Many 18 TB SKUs are CMR, so I get NAS-friendly behaviour without stepping into newer HAMR pricing.
  3. Rebuild risk vs time: Bigger drives mean longer rebuilds (and higher second-failure risk). 18 TB is still manageable compared to 20–22 TB, especially on modest CPUs like my N100 (see the quick estimate after this list).
  4. Planning headroom: With 18 TB units, even a small 3-bay gives real, future-proof capacity (and I can still move to ZFS RAIDZ2 later if I expand).
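To put a number on point 3, here's a quick best-case rebuild estimate; the ~250 MB/s sustained throughput is an assumption typical of large 7200 rpm NAS drives, and real rebuilds on a live array run slower:

```python
# Best-case rebuild time: the array must read/write the whole drive, so the
# floor is capacity / sustained throughput. ~250 MB/s sustained is an assumed
# figure for large 7200 rpm NAS drives; live arrays rebuild slower.

def rebuild_hours(capacity_tb: float, mb_per_s: float = 250.0) -> float:
    return capacity_tb * 1_000_000 / mb_per_s / 3600

for tb in (8, 18, 22):
    print(f"{tb:>2} TB: ~{rebuild_hours(tb):.0f} h best-case rebuild")
```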

My setup story (the short version)

If you’re building similar: pick CMR for NAS, lean into IronWolf Pro for the NAS-specific perks and recovery safety net, and don’t sleep on 18 TB while the price curve is friendly. Your future self (and your rebuild times) will thank you.

Day 10 – $1,360

Picked up a Maiwo HDD enclosure for $10. This was honestly the hardest decision so far — there are just so many brands, so many form factors, and endless “future-proof” options. But at the end of the day, I decided to stay practical: who really needs more than 30 TB of local data in a home setup, especially when I still use cloud services?

🗂️ Different Ways to Store & Use HDDs

When it comes to spinning disks, there isn’t a single “right” way — it all depends on your budget, use case, and how much resilience you want:

🔹 NAS (Network Attached Storage)

🔹 Dedicated Mini PC as a NAS

🔹 Docking Stations

🔹 Open Drive Arrays (JBOD / DIY Racks)

🔹 Direct Wire Connection

🔹 External Enclosures

🌀 S2D vs. Ceph vs. vSAN – A Quick Guide to Distributed Storage

When you scale storage beyond a single box, you enter the world of distributed storage systems. Three big names often come up: Microsoft S2D (Storage Spaces Direct), Ceph, and VMware vSAN. They all serve the same purpose — pooling disks across multiple servers into one resilient storage fabric — but they do it in different ways.

🔹 Storage Spaces Direct (S2D)

🔹 Ceph

🔹 VMware vSAN

⚖️ Key Differences

| Feature | S2D (Microsoft) | Ceph (Open-Source) | vSAN (VMware) |
| --- | --- | --- | --- |
| Ecosystem | Windows only | Linux/open-source | VMware ESXi |
| Scalability | Moderate (dozens of nodes) | Massive (1000s of nodes) | High (hundreds of nodes) |
| Cost | Windows licensing | Free (hardware + ops cost) | VMware licensing (expensive) |
| Flexibility | Limited (Windows stack) | Very high (block, file, object) | Limited (VMware only) |
| Best Use Case | Windows shops | Cloud-style storage | VMware shops |

🔑 Why Consensus Is Needed

Distributed storage relies on multiple servers (nodes) working together. To maintain consistency and reliability, the system needs consensus — an agreement between nodes about the current state of the data.

Without consensus, you risk:

How consensus is achieved
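The usual answer is majority quorum, the counting rule underneath Raft, Paxos, and Ceph's monitors: a write only commits once a strict majority of nodes acknowledges it. A toy sketch of just that rule (not any real system's API):

```python
# Toy illustration of majority quorum, the counting rule at the heart of
# Raft/Paxos-style consensus: a write commits only once a strict majority
# of nodes has acknowledged it.

def is_committed(acks: int, cluster_size: int) -> bool:
    return acks >= cluster_size // 2 + 1

# A 5-node cluster tolerates 2 failures: 3 acks still form a majority.
for acks in range(6):
    print(f"{acks}/5 acks -> committed: {is_committed(acks, 5)}")
```

Real systems layer leader election and replicated logs on top, but this majority rule is why clusters are usually sized with odd node counts.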

File systems are one of those things we all use daily but rarely stop to compare. Here's a comparison of the common ones:

📂 File Systems Explained: Object, NTFS, ext, ext4, and More

When data sits on disk, it isn’t just “raw bits” — it’s organized by a file system. Different file systems are designed with different trade-offs in mind: performance, reliability, compatibility, and features.

🔹 Object Storage

🔹 FAT / FAT32 (File Allocation Table)

🔹 NTFS (New Technology File System)

🔹 ext (Extended File System Family)

🔹 XFS

🔹 Btrfs (B-tree FS)

🔹 ZFS

📊 Quick Comparison

| File System | OS Ecosystem | Journaling | Max File Size | Best Use Case |
| --- | --- | --- | --- | --- |
| Object Storage | Cloud-native | N/A | Virtually unlimited | Scalable backups, cloud apps |
| FAT32 | Universal | ❌ | 4 GB | USBs, cross-device |
| NTFS | Windows | ✅ | 16 TB | Windows PCs/servers |
| ext2/3 | Linux | ext2 ❌, ext3 ✅ | 2–16 TB | Legacy Linux, boot |
| ext4 | Linux | ✅ | 16 TB | Default Linux FS |
| XFS | Linux | ✅ | 8 EB | Large files, media |
| Btrfs | Linux | ✅ (CoW) | 16 EB | Snapshots, homelabs |
| ZFS | Cross-platform | ✅ (checksums) | 16 EB | NAS, enterprise |
---

🗂️ File Systems and How They Interact with RAID, ZFS, and Unraid

When you put multiple drives together, you need both a storage strategy (RAID, Unraid, ZFS) and a file system (ext4, NTFS, etc.). They work at different layers:

🔹 Classic RAID (0/1/5/6/10) + File Systems

Example: four disks in an mdadm RAID 5 array exposed as a single /dev/md0 device, formatted with ext4 and mounted at /mnt/data.

Good for: Simple, predictable setups. ⚠️ Limitations: RAID doesn’t protect against silent corruption; it only handles drive failure.

🔹 ZFS (Integrated FS + RAID)

Good for: Data integrity, enterprise NAS, archival. ⚠️ Limitations: High RAM requirement (rule of thumb: 1 GB RAM per 1 TB storage).

Why people love ZFS: If you write a file, ZFS can verify later that it hasn’t silently corrupted — something RAID + ext4/NTFS can’t guarantee.

🔹 Unraid (Flexible Array + File System Choice)

Good for: Homelabs, mixed drives, expandable storage. ⚠️ Limitations: Slower writes unless you use an SSD cache; not as fast as RAID10 or ZFS.

Why file system choice matters in Unraid:

🔹 How ext4, NTFS, XFS, Btrfs fit in

📊 At a Glance

| Approach | Handles RAID | File System(s) Used | Strengths | Weaknesses |
| --- | --- | --- | --- | --- |
| Classic RAID | Separate (HW/SW) | ext4, NTFS, XFS | Simple, familiar | No corruption checks |
| ZFS | Integrated | ZFS (only) | Data integrity, snapshots | RAM-hungry |
| Unraid | Flexible, per-disk | XFS, Btrfs | Mix sizes, easy recovery | Slower writes |
| Btrfs RAID | Integrated | Btrfs | Snapshots, CoW | Not as proven as ZFS |

🏡 File Systems, RAID, and My Homelab Choices

When I started my homelab on the Trigkey N100, I knew storage would be the heart of it. I had five HDDs on hand, so I went with a RAID 10 setup for balance — decent performance and redundancy, without overloading the N100’s modest CPU.

🔹 Why ext4 Made Sense (for Now)

I formatted my RAID 10 array with ext4, the Linux default. It’s:

For a beginner-friendly NAS build, ext4 just works.

⚠️ Limitation: ext4 doesn’t detect silent corruption. If a bit flips on disk, RAID won’t catch it, and ext4 won’t notice until it’s too late.

🔹 When I Might Want ZFS

Down the road, I see myself moving to ZFS when I add bigger disks (like the 18TB IronWolf Pro I’m eyeing). Why?

⚠️ Trade-off: ZFS eats RAM (rule of thumb: 1 GB RAM per 1 TB of storage). My N100 can’t handle it gracefully, so I’d need to run ZFS on a beefier box (like my Evo X1 or a future node).

🔹 Why Not Unraid (Yet)

Unraid is tempting, especially since I’ve collected a mix of drive sizes. Its flexibility — letting each disk keep its own file system (usually XFS) with parity on top — is attractive.

⚠️ Limitation: Unraid isn’t as fast for writes unless you use SSD cache. Since I’m running on budget gear and want consistent speeds, I stuck with RAID10 for now.

🔹 Where Btrfs Fits In

Some NAS distros default to Btrfs for snapshots and CoW (copy-on-write). But it’s still not as mature as ZFS in heavy RAID use. For my setup, ext4 is simpler and safer.

That said, I might experiment with Btrfs on smaller pools or SSDs later, especially for snapshotting configs.

✅ My Homelab Storage Journey (So Far)

👉 In short: ext4 fits my current lightweight NAS, but as my homelab grows, ZFS will protect my bigger investments, and Unraid may be my playground for flexibility.

Day 11 – $1,960

Today was a big spend day: I picked up 2 × Lexar 870 SSDs for $600. It felt like a leap, but it’s an investment in reliability and efficiency for where my homelab is headed.

🔹 Why Lexar Over Samsung 990?

Most people default to Samsung NVMe drives like the 980/990 series because of speed benchmarks. But for a homelab — especially one that might evolve into an AI-focused system — raw speed isn’t everything.

🔹 Storage Philosophy – What Belongs Where

Not all data deserves the same type of storage. Here’s how I’m planning mine:

🔹 Why Lexar Works Well with Unraid

Unraid thrives on mixing different drives, and Lexar SSDs slot in perfectly:

🔹 The Values Behind Lexar

Lexar has always branded itself around reliability, accessibility, and endurance. Unlike consumer gaming SSDs that market maximum read/write speeds, Lexar’s pitch is:

For me, those values align with what I want: a setup that runs quiet, cool, and stable while I experiment with NAS, AI inference, and Unraid services.

🔹 Why It Matters for AI

AI workloads thrive on predictability — the ability to stream datasets in and out of storage without bottlenecks or overheating. Lexar SSDs:

Day 12 – $2,980 Evo X1 Joins the Homelab

Today was a milestone: I invested $1,020 (roughly US$800, with a future $50 rebate from a cashback program) into upgrading my setup with the GMKtec Evo X1 mini PC.

What Makes the Evo X1 Shine:

Why GMKtec? The Backstory & Why Evo X1 is a Smart Pick for Homelabs

GMKtec—based in Shenzhen—is quietly delivering mini PCs with serious compute power, especially for AI development. The Evo X1 stands out in their lineup as a capable, compact workhorse. It’s designed for enthusiasts and creators who need robust performance in a small form factor.

Priced under $1,000 before shipping, it's a notable bargain compared to bigger desktops or upgraded laptops. Its high price-to-performance ranking in my comparison spreadsheet validates that it punches well above its weight.

How It Fits into My Homelab

I’m using the Evo X1 as the new compute backbone of the homelab:

Basically, it’s like moving from a reliable but modest car (my N100) into a sleek performance EV (Evo X1)—more power when I need it, but still efficient and easy to place.

Quick Recap Table

| Attribute | Details |
| --- | --- |
| Price Paid | $1,020 |
| CPU | AMD Ryzen AI 9 HX 370 (12c/24t) |
| RAM | 64 GB LPDDR5X |
| Storage | 1 TB PCIe 4.0 NVMe + dual M.2 expandability |
| Connectivity | 2.5 GbE ×2, Wi-Fi 6, USB4, Oculink, HDMI, DP |
| Modes | Quiet (35 W) / Balanced (54 W) / Performance (65 W) |
| Why Bought | Excellent price-to-performance value, AI-capable, compact |
Ultimately, the Evo X1 pushes my homelab forward—powerful enough for serious workloads, yet efficient and space-savvy enough to keep my setup clean and flexible.

Day 13 – $2,980 Freebie

$0 Gold-plated Ethernet Patch Cable

Scrolling through Shopee today, I stumbled upon one of those quirky deals that make online bargain-hunting fun: a gold-plated Cat 8 SFTP Ethernet cable, rated 40 Gbps at 2000 MHz, for just $5, effectively free after applying a $5-off voucher. Not bad for a piece of kit that ensures a rock-solid connection when you're moving big datasets or streaming at low latency. I also poked around some voucher-earning games for extra discounts, though the real temptation of the day wasn't a cable; it was the thought of finally pulling the trigger on a CUDA-based GPU.

Why NVIDIA CUDA Still Reigns Supreme

When it comes to AI workloads, not all GPUs are created equal. Yes, there are competitors like Cambricon (China’s AI accelerators), AMD ROCm, and Intel GPUs, but NVIDIA CUDA GPUs remain the gold standard for developers, researchers, and even hobbyists.

Here’s why:

  1. CUDA Ecosystem & Libraries

    • CUDA (Compute Unified Device Architecture) is more than hardware—it’s a mature software stack.
    • Frameworks like TensorFlow, PyTorch, MXNet have deep CUDA support.
    • NVIDIA’s cuDNN, TensorRT, and RAPIDS provide ready-to-use optimizations. Competitors simply can’t match the breadth of libraries and community support.
  2. Developer Community & Stability

    • Tens of thousands of tutorials, GitHub repos, and Stack Overflow threads exist for CUDA.
    • AMD’s ROCm is improving, but driver stability and ecosystem maturity remain weak points.
    • Cambricon is niche (mainly in Chinese data centers), and Intel GPUs are still finding their place.
  3. Hardware Features

    • CUDA GPUs integrate Tensor Cores (for AI matrix math), RT Cores (for rendering), and highly optimized memory pipelines.
    • NVIDIA has long experience in balancing FP32, FP16, INT8, and now FP8 precision, making training and inference faster and more power-efficient.
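To see why that ecosystem argument matters in practice, here's a minimal sanity check, assuming a CUDA-enabled PyTorch install; one device flag is all it takes to land a half-precision matmul on the GPU's Tensor Cores:

```python
# Minimal CUDA sanity check, assuming a CUDA-enabled PyTorch install.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    c = a @ b                    # FP16 matmul, mapped to Tensor Cores
    torch.cuda.synchronize()     # kernels are async; wait before reporting
    print("Matmul OK:", c.shape)
```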

What Specs to Look Out For in an AI GPU

When shopping for a CUDA-capable GPU, raw teraflops aren’t the only metric. Consider these:

Enter Blackwell: NVIDIA’s Next Big Leap

If Hopper (H100) was NVIDIA’s answer to AI’s explosive growth, then Blackwell (B100/B200) is the evolution for the next era:

In short, Blackwell isn’t just about more speed—it’s about redefining scalability and efficiency in AI compute.

Final Thoughts

Today I scored a practically free Ethernet cable, but the real takeaway is this: while cables and vouchers are fun, the GPU choice is what shapes the future of any AI project. NVIDIA CUDA GPUs continue to dominate because of their software ecosystem, community support, and forward-looking architectures like Blackwell.

Whether it’s tinkering with older cards like the GT 640 or GT 730 I have lying around, or dreaming of the power of a future 5070 Ti or even a B200 Blackwell, one truth stands: in AI, the right GPU isn’t just hardware—it’s the gateway into an entire ecosystem.

Day 14 – $2,980 2nd Freebie

Today’s “freebie hunt” on Shopee landed me yet another useful cable — a $0 HDMI cable after vouchers. Between yesterday’s Cat 8 Ethernet and today’s HDMI, I think my collection of wires is reaching “mission complete.” But that got me thinking: I’ve got all the cables for displays and networking… what about something more exotic, like hooking up an Oculink cable to a GPU?

Oculink (SFF-8612/8611) is a compact cable standard developed by PCI-SIG. It’s designed to carry PCIe lanes directly, without the overhead of additional protocols. Think of it as a slimmed-down PCIe extension cable that’s perfect for compact PCs, servers, and eGPU builds.

In short: Thunderbolt is great for plug-and-play universality, but Oculink is better for raw GPU performance and cost-effectiveness.
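The rough numbers behind that claim, as a sketch; it assumes Oculink wired as PCIe 4.0 x4, and that only about 32 of Thunderbolt's 40 Gbps are usable for PCIe tunnelling:

```python
# Rough numbers behind that claim. Assumptions: Oculink wired as PCIe 4.0 x4
# (16 GT/s per lane, 128b/130b encoding) vs Thunderbolt 3/4, where only about
# 32 Gbps of the 40 Gbps link is usable for PCIe tunnelling.

pcie4_lane_gbps = 16 * 128 / 130      # ~15.75 Gbps usable per lane
oculink_gbps = 4 * pcie4_lane_gbps    # x4 link
thunderbolt_gbps = 32                 # approximate usable PCIe payload

print(f"Oculink (PCIe 4.0 x4): ~{oculink_gbps:.0f} Gbps (~{oculink_gbps / 8:.1f} GB/s)")
print(f"Thunderbolt 3/4:       ~{thunderbolt_gbps} Gbps (~{thunderbolt_gbps / 8:.1f} GB/s)")
```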

There are several ways to connect Oculink to a desktop GPU:

  1. Oculink-to-PCIe Riser Board (Budget Option)

    • Cable plugs into riser/dock → PCIe x16 slot.
    • Needs 24-pin ATX + GPU PCIe power.
    • Most flexible, cheapest.
  2. Oculink-to-M.2 Adapter

    • Converts an NVMe slot into Oculink.
    • Great for laptops and mini-PCs.
  3. Thunderbolt-to-Oculink Hybrids

    • Convert Thunderbolt into Oculink.
    • Useful if your host only supports Thunderbolt, but pricier.
  4. Direct Oculink GPU Boxes

    • Purpose-built GPU enclosures.
    • Sleek but often expensive.

Other eGPU Models & Docks

Why Budget Adapters Are Usually Enough

Unless you need polished hot-plug support, a riser + good PSU is all you really need.

Why People Choose Thermalright PSUs

Pairing an Oculink riser with a reliable PSU is the best way to stabilize an eGPU build. Thermalright PSUs stand out because:

Final Thoughts

A free HDMI cable today reminded me that while I’ve got the basics covered, the real leap comes from connecting GPUs externally. Oculink offers a clean, direct PCIe link that outperforms Thunderbolt in speed, latency, and price. While premium eGPU boxes look sleek, the budget Oculink riser + Thermalright PSU combo remains the smartest, most reliable path for anyone experimenting with external CUDA power.

Day 15 – $3,000 Last Freebie and a Disappointment

Today’s “haul” ended up being a mixed bag. I scored a $0 dust blower ball (those squeeze bulbs that puff air to clean electronics), but my real buy — a $60 Eaton UPS 5A (1200VA/650W) — never showed up. Lost in transit, refunded, but still a disappointment. Backup power is critical, so I’ll grab another later.

Thinking about resilience got me reflecting on something similar in the network world: how we expose and secure our services. That led me to port forwarding, tunnels, and Tailscale.

Cloudflare Tunnels, ngrok, and Tailscale

If you’ve ever tried to run an app from home, you’ve probably looked at Cloudflare Tunnels or ngrok. Here’s how they stack up against Tailscale Funnel:

Cloudflare Tunnels

ngrok

Tailscale Funnel

👉 In practice:

That’s why hobbyists and self-hosters lean heavily toward Tailscale — it replaces clunky port forwarding and adds VPN functionality for free.

Using Tailscale Traffic Routing for the Whole Network

One of Tailscale’s most underrated features is exit nodes and subnet routing. This lets a single machine act like a gateway for your entire LAN.

For example, your Intel N100 box could serve as:

Feasibility

In other words: your N100 could effectively become the router + domain controller + VPN concentrator for your entire network. All while staying simple to manage.
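For the subnet routing piece, the actual commands are short. A sketch, wrapped in Python for repeatability; the 192.168.1.0/24 range is my assumption, so substitute your LAN's:

```python
# Sketch: turning the N100 into a subnet router + exit node. The subnet is an
# assumption; substitute your LAN's actual range. These are real `tailscale up`
# flags, but advertised routes still need approval in the admin console, and
# Linux must have IP forwarding enabled (net.ipv4.ip_forward=1).
import subprocess

subprocess.run(
    [
        "tailscale", "up",
        "--advertise-routes=192.168.1.0/24",  # expose the whole LAN
        "--advertise-exit-node",              # offer to carry internet traffic
    ],
    check=True,
)
```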

Final Thoughts

Today I lost a UPS, but I gained perspective: just like hardware redundancy, network resilience is about architecture, not just devices.

Instead of opening ports and hardening firewalls, you simply extend your LAN securely with Tailscale. That’s why so many homelabbers choose it: it feels like networking reimagined for the personal era of cloud-native.

Day 16 – $3,150 I Am a Big Fan of Yours

Today’s build upgrades were all about power, cooling, and connectivity. I picked up a $2 PWM hub to manage fans, a $100 Thermalright KG750 PSU for stable power, and a $48 Oculink dock with cable to slot in another GPU. The theme here? Keeping things tidy, cool, and efficient — because if there’s one thing I’ve learned, it’s that messy wires and poor cooling kill performance, especially when you start stacking drives.

Cooling: More Than Just Fans

When people think cooling, they usually think of CPU or GPU temps. But drives — especially HDDs and enterprise SSDs — also generate heat that can shorten lifespan if ignored.

This is where a PWM hub shines. Instead of plugging fans randomly into the motherboard, the hub centralizes control:

In short: proper fan control isn’t just about silence — it’s about giving drives the airflow they need, only when they need it.
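To show the idea, here's a sketch of a software fan curve against Linux's hwmon interface; the sysfs paths are assumptions that vary by board, and a real PWM hub often handles this in hardware:

```python
# Sketch of a temperature-driven fan curve via Linux hwmon. The sysfs paths
# are assumptions (they vary per board); PWM duty is 0-255, and pwm1_enable
# usually needs to be set to manual mode (1) first.
from pathlib import Path

TEMP = Path("/sys/class/hwmon/hwmon0/temp1_input")  # millidegrees C (assumed path)
PWM = Path("/sys/class/hwmon/hwmon0/pwm1")          # 0-255 duty (assumed path)

def curve(temp_c: float) -> int:
    """Quiet floor below 35C, full blast at 55C, linear ramp in between."""
    if temp_c <= 35:
        return 80
    if temp_c >= 55:
        return 255
    return int(80 + (temp_c - 35) / 20 * 175)

temp_c = int(TEMP.read_text()) / 1000
PWM.write_text(str(curve(temp_c)))
print(f"{temp_c:.1f} C -> PWM {curve(temp_c)}")
```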

Wires: The Hidden Cooling Factor

Messy cables do more than look bad — they block airflow. Optimizing wires is almost free performance:

A clean wiring job doesn’t just make you proud when you peek inside — it directly improves thermals by giving fans unobstructed paths.

Drives: Getting the Most from Cooling + Wires

With multiple drives, heat builds fast. Here’s how to optimize:

  1. Stacking HDDs

    • Always leave a small gap if possible.
    • Position a low-RPM 120mm/140mm fan to blow directly across them.
  2. NVMe SSDs

    • Use heatsinks or thermal pads if they’re bare.
    • Avoid cramming multiple drives under a single heatsink without airflow.
  3. External GPU/Drive setups via Oculink

    • Oculink docks add flexibility, but don’t forget they still need cooling.
    • Keep GPU/drive risers in open airflow paths, not buried in cable jungles.

Final Thoughts

Today’s upgrades — a PWM hub, Thermalright PSU, and Oculink dock — might not sound flashy, but they’re the foundation of a system that runs cool and clean. Optimizing cooling and wires isn’t just about aesthetics; it directly affects drive longevity, performance stability, and serviceability.

In the end, it’s simple: drives want consistent airflow, GPUs want clean power, and your sanity wants tidy cables. Nail those three, and you’ll get a rig that’s quiet, efficient, and ready to scale without cooking your data.

Day 17 – $3,500 I Am Poor

Well, it happened. I blew the budget and ended this whole run with an RTX 5060 Ti. Wallet says “I am poor,” but the mind says, “time to train and serve some models.” Naturally, my first stop will be LM Studio — running large language models locally. The challenge: getting the most out of CPU, NPU, and GPU together without melting hardware or drowning in memory errors.

CPU, NPU, GPU: Who Does What in LM Studio?

LM Studio can flexibly assign workloads, making it possible to use each processor where it excels.

Reducing Model Size

Big models choke hardware unless optimized. Techniques include:

Why VRAM Matters in AI Serving

VRAM is the GPU’s working memory:

With the RTX 5060 Ti’s 12 GB VRAM, quantized 13B models are still possible, but you’ll need offloading to system RAM or SSD. In practice:

Once VRAM is exceeded, the GPU constantly swaps data to system RAM — introducing slowdowns.
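Here's the back-of-the-envelope I use before downloading anything, a sketch assuming weights dominate plus roughly 20% overhead for KV cache and activations:

```python
# Rough VRAM estimator: weights = params x bytes-per-weight, plus ~20% for
# KV cache and activations (the overhead factor is a loose assumption).

def vram_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    return params_billions * bits_per_weight / 8 * overhead

for name, params in (("7B", 7), ("13B", 13)):
    for label, bits in (("FP16", 16), ("Q8", 8), ("Q4", 4)):
        print(f"{name} @ {label}: ~{vram_gb(params, bits):.1f} GB")
```

By that math a 13B model at Q4 lands near 8 GB, inside a 12 GB card with room for context, while the same model at FP16 wants over 30 GB and has to spill.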

Storage: Why an Efficient SSD Like the Lexar NM790 Helps

This is where a fast, efficient SSD comes in. The Lexar NM790 4TB is a great example:

For LM Studio, this means:

An efficient SSD becomes the bridge between system RAM and GPU VRAM — minimizing bottlenecks when models don’t fit fully in VRAM.

Blackwell: The Next Frontier

NVIDIA’s upcoming Blackwell GPUs are designed with this exact ecosystem in mind:

Together with fast NVMe storage and smarter software frameworks, Blackwell will push local AI even further.

1. RTX 5060 Ti vs 5070 Ti (AI workloads)

| Spec | RTX 5060 Ti | RTX 5070 Ti | AI Impact |
| --- | --- | --- | --- |
| VRAM | 12 GB GDDR6 | 16 GB GDDR6X | More VRAM = run bigger models without offloading. 16 GB fits quantized 13B models fully on GPU; 12 GB often needs SSD/RAM offload. |
| Memory Bus | 192-bit | 256-bit | Wider bus = higher memory bandwidth; helps in graphics and some AI, but not as critical as VRAM size for LLM inference. |
| Bandwidth | ~432 GB/s | ~672 GB/s | Better for games/rendering; in AI, bandwidth helps but isn't the bottleneck unless running at FP16/full precision. |
| CUDA Cores | Fewer (mid-tier) | More (upper-mid) | More parallelism = faster throughput. |
| Typical Wattage | ~200 W | ~250 W | 5070 Ti runs hotter and draws more power. |
| Price (est.) | Cheaper, more accessible | Higher upfront cost | 5070 Ti gives more headroom for future-proofing. |
👉 Bottom line: For LLMs, VRAM size is the gating factor. If a model doesn’t fit in VRAM, performance collapses due to SSD/RAM offloading. The 5070 Ti’s extra 4 GB VRAM often unlocks whole model tiers (like 13B quantized) without slowdowns.

2. Why VRAM matters more than Bus Size (for value in AI)

✅ In AI inference: VRAM = capacity ceiling. Bus width = performance polish.

Final Thoughts

$3,500 later, I’m broke but grinning with an RTX 5060 Ti humming in my setup. The lesson here: running AI locally is about more than just the GPU. It’s about balancing CPU/NPU/GPU roles, reducing model size smartly, having enough VRAM to breathe, and pairing it all with a reliable SSD like the Lexar NM790 4TB to keep data flowing.

Blackwell may change the game again, but with the right mix of quantization, VRAM, and SSD speed, local AI is already within reach for homelabbers like us.

Day 18 – $4,700 An Old Friend

$1,200, Old Laptop

Five years ago, I picked up an IdeaPad 514LT7—an Intel i7 laptop with 16GB of RAM and a 1TB SSD. At the time, it was a beast for a student, and I definitely didn’t hold back. I pushed it hard: dozens of Chrome tabs, spinning up virtual machines for Kali Linux and RHEL, and juggling multiple school projects at once. Naturally, the wear and tear came quickly.

Eventually, Windows became unbearable to run on it. I swapped it for Ubuntu as my daily driver, which worked fine for about five months before it too started struggling. During my final-year projects, I riced it with Arch Linux for the fun of it, but one by one, the hardware began to give way: first the Wi-Fi and Bluetooth modules, and then came the performance dips.

Instead of letting it collect dust, I decided to put the laptop back to work.

Repurposing Into a Server

I flashed Ubuntu Server onto it, installed Tailscale for easy networking, and slotted it into my setup. Since my UPS and switch ports were already fully occupied, I connected it via USB 3.0 to Ethernet. Surprisingly, it ran decently and found a second life—not as a daily driver laptop, but as a Coolify server for hosting my personal projects.

What is Coolify?

For those unfamiliar, Coolify is an open-source, self-hostable alternative to platforms like Heroku or Netlify. Think of it as a PaaS (Platform-as-a-Service) that you own. Instead of relying on third-party cloud providers to deploy your apps, Coolify gives you a web-based dashboard to:

All without needing to manually mess with docker-compose files every single time.

Why Host on Coolify?

While I could have just run my apps with Docker manually, Coolify makes self-hosting far more manageable. It reduces the friction of deployment, especially for side projects and experiments. A few reasons it’s perfect for my repurposed laptop:

What used to be a laptop gasping for life under Windows is now quietly humming as a lightweight PaaS server.

Closing Thoughts

There’s a certain satisfaction in breathing new life into old hardware. Instead of throwing away a five-year-old laptop, I turned it into something genuinely useful for my workflow. With Coolify, I get the best of both worlds: the fun of self-hosting, and the convenience of modern deployment.

Sometimes, an old friend just needs a new role.

Day 19 – $4,830 A Failed Hackathon

$130, Raspberry Pi 5 (8GB)

As usual, I was browsing Reddit and drooling over all the cool cluster projects people were building with Raspberry Pis. The idea of creating my own little cluster was too tempting to resist—I definitely needed to get my hands on one. At first, I was sold on the LattePanda series. The specs looked exactly like what I wanted, and I was ready to make it my next purchase.

And then my phone rang.

A new hackathon had just opened for registration: SUTD What the Hack. It was a hardware-focused competition, sponsored by ESP32. Sounded like fun… until we actually started building.

Working with the ESP32 was frustrating. We bounced from trying an NVIDIA Jetson Xavier, back down to an ESP32, then over to an RPi. The constant back-and-forth left us exhausted, and in the end, our project collapsed. We failed the hackathon—horribly.

But not everything was wasted. That experience made me appreciate something I had overlooked: just how much easier it is to work with a Raspberry Pi 5 compared to most other hardware.

Why Raspberry Pi 5 Shines for AI Edge Computing

The Raspberry Pi 5 is more than just a cheap single-board computer—it’s a sweet spot for AI edge projects. Here’s why:

1. Plug-and-Play Ecosystem

Unlike the ESP32, which requires custom toolchains and lots of low-level setup, the RPi 5 feels like a regular computer. Flash an OS, plug in peripherals, and you’re up and running. For hackathons or rapid prototyping, that’s priceless.

2. Solid Performance at Low Power

With its quad-core ARM Cortex-A76 CPU running up to 2.4GHz and 8GB of RAM, the Pi 5 can handle surprisingly heavy workloads. It’s not a Jetson Xavier powerhouse, but for lightweight AI inference (think image recognition, voice models, or anomaly detection), it’s more than enough—without burning through watts or budget.

3. GPU + AI Acceleration Options

Out of the box, the Pi 5 has a capable VideoCore VII GPU. But where it really shines is compatibility: you can add accelerators like the Google Coral USB TPU or Intel’s Neural Compute Stick 2. This means you can offload neural network inference for edge AI tasks without needing expensive boards.
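To show how low the barrier is, here's a minimal TensorFlow Lite inference sketch; the tflite-runtime package and the model.tflite filename are assumptions, and the Coral TPU variant only differs by a delegate:

```python
# Minimal TensorFlow Lite inference sketch for the Pi. Assumptions: the
# `tflite-runtime` package is installed and `model.tflite` is a placeholder
# for whatever quantized model you deploy.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
# For a Coral USB TPU you would add a delegate instead:
#   from tflite_runtime.interpreter import load_delegate
#   Interpreter(model_path=..., experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy frame matching the model's expected shape and dtype.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("Output shape:", interpreter.get_tensor(out["index"]).shape)
```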

4. Massive Community Support

AI and ML tutorials for the Pi are everywhere. Whether it’s TensorFlow Lite, PyTorch Mobile, or OpenCV, there’s likely a community-tested guide for running it on a Pi. That’s something Jetson and LattePanda just can’t match in terms of beginner-friendliness.

5. Affordable & Scalable

At ~$130 for the 8GB model, it’s accessible enough to buy multiple units and scale into clusters. This makes it perfect not just for one-off edge AI projects but for experimenting with distributed computing.

Comparison: RPi 5 vs ESP32 vs Jetson Xavier vs LattePanda

| Feature | Raspberry Pi 5 (8GB) | ESP32 | Jetson Xavier NX | LattePanda Alpha/Delta |
| --- | --- | --- | --- | --- |
| CPU | Quad-core ARM Cortex-A76, 2.4 GHz | Dual-core Xtensa LX6, ~240 MHz | 6-core Carmel ARMv8.2, 1.9 GHz | Intel Core m3 / Celeron |
| RAM | 8 GB LPDDR4X | ~520 KB SRAM | 8–16 GB LPDDR4 | 8 GB DDR4 |
| GPU / AI Accel. | VideoCore VII GPU, supports Coral TPU / Intel NCS2 | None (basic I/O MCU) | 384-core Volta GPU + 48 Tensor Cores | Intel UHD GPU, supports external AI accelerators |
| Power Draw | ~10–15 W under load | Less than 1 W | 10–15 W idle, up to 30 W+ | 6–15 W depending on model |
| OS / Ecosystem | Full Linux distros (Raspberry Pi OS, Ubuntu, Arch, etc.) | Arduino / ESP-IDF SDK | NVIDIA JetPack (Ubuntu-based) | Windows 10, Linux |
| Ease of Use | ⭐⭐⭐⭐☆ (plug-and-play, huge community) | ⭐⭐☆☆☆ (low-level coding, steep curve) | ⭐⭐⭐☆☆ (powerful but complex) | ⭐⭐⭐☆☆ (PC-like, Windows support) |
| AI Suitability | Great for edge inference with accelerators (e.g., Coral TPU, TFLite) | Very limited (good for sensors/IoT, not AI) | Excellent for heavy AI models (CV, robotics) | Decent for ML frameworks, but higher cost |
| Cost | ~$130 (8GB) | ~$5–$10 | ~$400–$600 | ~$400–$800 |
| Best Use Case | AI at the edge, small clusters, prototyping | IoT, sensors, simple MCU projects | Robotics, computer vision, high-performance AI | Windows/Linux dev, mid-range AI workloads |

Takeaway

Closing Thoughts

We may have failed miserably at the hackathon, but the lessons stuck. The ESP32 taught me about constraints. The Jetson Xavier showed me raw performance. But the Raspberry Pi 5 hit the sweet spot—easy to set up, powerful enough for AI at the edge, and backed by a huge ecosystem.

Sometimes failure teaches you where the real wins are hiding.

Day 20 – $4,960 Edging the AI

$130 Hailo-8 (13 TOPS), An Old USB Stick, A Free Screen

Looks like I’m firmly planted in the Raspberry Pi ecosystem for now. With so many libraries and community projects around it, it makes sense—especially when hackathons are increasingly hardware-based and almost always come with an AI angle.

I was really tempted by the newer Hailo-8 26 TOPS accelerator (~$300), but in the end, I went with the 13 TOPS version for $130—much friendlier on my budget.

At first, I imagined just spamming the GPIO pins with random projects. But the more I thought about it, the more I realized: every project has different hardware quirks, sensors, and requirements. What remains consistent is the need for an AI edge node—a device dedicated to running inference quickly, locally, and efficiently. That’s where the Hailo makes sense.

What is AI Edge Computing?

AI edge computing is all about running machine learning models close to where the data is generated. Instead of sending images, sensor data, or video streams up to the cloud for processing, you run inference locally—on a Raspberry Pi, Jetson, or a USB accelerator like the Hailo-8.

Benefits of edge AI:

In short: edge computing lets AI happen here and now, instead of “out there” in some distant data center.

How Others Use Hailo in Their Homelabs

I’m not alone in experimenting with the Hailo-8. In the homelab and maker communities, people have been getting creative with these little NPUs:

The theme is always the same: speed + privacy + efficiency. Instead of burning GPU cycles or relying on cloud APIs, the Hailo-8 accelerates inference locally, at a fraction of the power draw.

How This Differs From Self-Hosting an AI Model Like Qwen

Of course, there’s a huge difference between running a specialized NPU like Hailo and spinning up a self-hosted model like Qwen on your server.

In short:

For a balanced homelab, many enthusiasts actually use both:

Together, they cover very different but complementary domains of AI.

The Hailo Series

Hailo is an Israeli company specializing in AI accelerators—compact chips that deliver massive inference performance at very low power. Their USB and M.2 modules are designed for edge devices like Raspberry Pi, industrial PCs, and embedded systems.

What makes Hailo special is its balance: low cost, low power draw, and optimized software libraries, making it easy to integrate into edge devices.

NPU, TPU, CUDA, and VRAM – How It All Fits

There’s a whole alphabet soup of terms when it comes to AI hardware. Let’s break them down:

Bringing It Together

When you put this all together:

In my setup, the Hailo-8 is the edge enabler. It gives me the ability to run AI models faster than the Pi 5 could alone, while staying compact, efficient, and hackathon-ready.

Closing Thoughts

Sometimes, the best investment isn’t in raw power, but in modularity and balance. By adding a Hailo-8 to my Raspberry Pi ecosystem, I now have an AI edge node that’s portable, efficient, and flexible enough for whatever the next hackathon throws at me.

It’s not just about having more TOPS—it’s about having the right tools at the right edge.

Day 21 – $5,000 Rpi 5 Kit

Decided to upgrade my RPi a bit:

- 10 rechargeable batteries - $5
- RPi 3.5-inch touchscreen - $15
- SanDisk 2 TB microSD - $5
- GPIO header kit - $2.50
- Mini-HDMI to HDMI adapter - $2.50
- Breadboard kit - $2
- GPIO to USB adapter - $2
- Lenovo 2 TB microSD - $6

Day 22 – $5,220 Rpi 5 Tools

- Cooling hat + GPS + UPS - $100
- IMX519 16 MP autofocus camera - $32
- Passive FM receiver - $16
- SIM card emulator - $10
- AWUS036ACM passive WiFi receiver - $62

Day 23 – $5,250 Another Switch Up

4-port 2.5 GbE + 2-port 10 GbE SFP+ switch - $30

Day 24 – $5,500 3D printer

Bambu A1 Mini + White PLA - $250

Day 25 – $5,500 What works and what doesn't, RPi

Day 26 – $8,000 2.5k PC build for My NS friend

- Windows 11 Pro - $10
- 2 TB Lexar NM620 - $250
- Corsair SF1000L PSU - $230
- 27" APEX monitor - $110
- Lian Li O11 Dynamic - $130
- Thermalright Peerless Assassin 120 SE - $40
- RTX 5060 Ti 16 GB - $540
- 2 × 32 GB Crucial CT32G48C40U5 - $500
- ASRock B650M Pro RS WiFi + Ryzen 5 7600 - $500

Day 27 – $8,000 Some troubleshooting

Day 28 – $8,100 Small add ons

- VIHALS bedside table - $60
- Portable monitor - $40

Day 29 – $8,600 Sound System

Vinnfier Hyperbar 800 BTR - $500

Day 30 – $9,300

- G.Skill Trident 4 × 8 GB DDR4 - $278
- Gigabyte Z270-PHOENIX GAMING motherboard - $120
- MBOX Nemesis 550 W PSU - $20
- i7-7700T - $100
- Seagate 1 TB - $30
- 3 × Dell Wyse (mirror) - $40
- Mic + camera - $40
- ASUS AC3000 tri-band router - $32 (AiMesh offload)
