blog.gadfly.ai

Reader

Read the latest posts from blog.gadfly.ai.

from Vaishali Rawat

Kubernetes is the backbone of modern container orchestration, powering everything from small projects to enterprise-scale applications.

Whether you're a beginner or a seasoned engineer, understanding how to set up a Kubernetes cluster is a fundamental skill. In this tutorial, we'll take a hands-on approach using Kubeadm, one of the most popular tools for bootstrapping a Kubernetes cluster.

⚠️ WARNING: Single Control Plane = Single Point of Failure! 🚨 A cluster with only one control plane is not highly available. If the control plane node goes offline, the entire cluster becomes unmanageable—no scheduling, no updates, no kubectl commands. This setup is fine for learning and testing but not recommended for production.

In our next tutorial, we will add more control planes to achieve high availability and a production-ready cluster.

This tutorial is an extension of Drew's Playlist. Check it out for more background on Kubernetes setup and best practices.


Introduction

Kubernetes is a platform that helps you run and manage containers (like Docker) at scale across multiple machines. It handles scheduling, networking, scaling, and fault tolerance for applications.

Kubeadm is a powerful tool that simplifies Kubernetes cluster setup. It provides best-practice defaults while ensuring a secure and production-ready environment. In this guide, we will walk through setting up a Kubernetes cluster using Kubeadm, discuss networking options, security considerations, common pitfalls, and next steps for deploying workloads.

There are multiple ways to set up a Kubernetes cluster. While Minikube and kind are great for local development and testing, Kubeadm is a more stable and production-ready option for setting up real multi-node clusters. If you're looking for alternative approaches, you can check out tutorials on Minikube and kind.


SCENE 1: Pre-requisites

  • Note: Since we're using VirtualBox VMs for these tutorials, there's no need to set anything up on physical servers. Unless you're using managed servers, this is all you need to get going!

1. Setting up the VM

We're going to start with one control plane node and three worker nodes, all set up using this Ubuntu tutorial. The only difference between this setup and the tutorial is that we are not using RAID and do not have a swap partition. Instead, each VM consists of just a root partition and a boot partition.

🔴 Ubuntu Version: This guide uses Ubuntu 22.04 LTS, ensuring compatibility with the latest Kubernetes versions. Using different versions may lead to unexpected issues.

🔴 Recommended VM Resources: For a smooth Kubernetes setup, each node should have at least: – 2 CPUs – 2 GB RAM (4 GB recommended for better performance) – 10 GB of disk space

[Image: VM setup for the Kubernetes nodes]

Before proceeding, check the official Kubernetes Docs for the latest prerequisites.

2. Time to Play with the Terminal 🖥️

We’re working with four nodes in this setup. (Since the author of this blog has ADHD, multitasking is a must!) To manage all four nodes efficiently, we’ll use TMUX to synchronize our terminals, allowing us to run the same commands across multiple machines simultaneously.

After following this tutorial, your terminal should look something like this:

[Image: TMUX screen with four synchronized panes]

All the panes are synchronized, making it easier to execute commands across all four machines at once.

🔴 Follow this TMUX Tutorial: Link to TMUX tutorial

🔹 Alternative Tools: TMUX is powerful, but it has a learning curve. If you’re new to terminal multiplexing, you might prefer:

GNU Screen (Simpler, built-in on many systems)

Byobu (User-friendly wrapper around TMUX & Screen)

🧭 TMUX Navigation Tips:

Split panes vertically → Ctrl + B, then %
Split panes horizontally → Ctrl + B, then “
Detach from a session → Ctrl + B, then D
Reattach to a session → Run tmux attach
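Pane synchronization itself is controlled by a single tmux option. A minimal ~/.tmux.conf fragment (the S keybinding here is my own choice, not something the tutorial defines):

```tmux
# Press Ctrl + B, then S to toggle mirroring keystrokes to every pane
bind S set-window-option synchronize-panes
```

You can also toggle it ad hoc from the tmux command prompt: press Ctrl + B, then :, and type setw synchronize-panes.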

Now that we’re set up, let's move on to configuring Kubeadm!

3. Looking at the Kubeadm Docs

Before proceeding, make sure your system meets all the necessary requirements by checking out the official Kubeadm installation guide:

🔴 Kubeadm Installation Docs

[Image: Kubeadm installation documentation]

Going through these steps will help us efficiently install all required dependencies. Let's get started!


SCENE 2: Installing a Container Runtime (ContainerD)

One thing that's gonna be common in every tutorial and installation is prerequisites, which will appear again and again, especially when we're setting up a Kubernetes cluster. So bear with me, because I'm dropping another pre-req here, and it's an important one.

Step 1: Install and Configure Prerequisites for containerD

Before we install Kubernetes, we need to configure our system properly.

Enable IPv4 Packet Forwarding

Kubernetes requires packet forwarding to allow network traffic between pods across different nodes. Without it, pods on one node won't be able to communicate with pods on another node.

Run the following command to enable IPv4 forwarding:

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

🔴 Paste it into your terminal.

Now, apply the changes without rebooting:

# Apply sysctl params without reboot
sudo sysctl --system

To verify if IPv4 forwarding is enabled, run:

sysctl net.ipv4.ip_forward

It should return 1, indicating that forwarding is active.

Disable Swap (Why It's Necessary)

Kubernetes requires swap to be disabled because the Kubernetes scheduler relies on precise memory allocation. If swap is enabled, the system may overcommit memory, causing pods to crash unexpectedly.

To disable swap, open the fstab file:

sudo vim /etc/fstab

Find the line containing /swap.img and comment it out by adding # at the beginning:

(in /etc/fstab)
#/swap.img

Then, run:

sudo swapoff -a

To confirm that swap is disabled, check the memory usage:

free -h

You should see 0B across the Swap row.

Now, your system is ready for Kubeadm! 🍭 Skipping these steps may cause Kubeadm setup failures, so make sure they are properly configured.
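If you'd rather not edit /etc/fstab by hand, the same change can be scripted with sed. A sketch, demonstrated here on a stand-in file so nothing on your system is touched; on a real node you'd point the sed at /etc/fstab (with sudo) and then run swapoff -a as above:

```shell
# Demo fstab with one active swap entry (a stand-in for /etc/fstab)
FSTAB=./fstab.demo
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > "$FSTAB"

# Comment out any uncommented line that mentions swap
sed -i '/swap/ s/^[^#]/#&/' "$FSTAB"

cat "$FSTAB"   # the /swap.img line is now commented out
```

The root partition line is left untouched; only lines matching "swap" get a leading #.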


Step 2: Install ContainerD

I am bad at pointing out locations in real life—LOL. But I’ll try my best to guide you through this documentation step-by-step. Kindly bear with me, and don’t come for me in the comments! 😂

Where to Go

  1. Head over to: ContainerD Documentation
  2. Check out: Getting Started with ContainerD (GitHub Repo)
  3. Find the Latest Release: ContainerD Releases
  4. Scroll down to find the Assets section.
  5. Choose the right binary for your machine.
  6. DO NOT double-click! Instead, copy the link address of the release—we’ll use it in the terminal.

Phew! 😮‍💨 Now, let's get into it.

Download & Extract ContainerD

Using environment variables makes future updates easier.

1. Set the Version and Architecture Variables

Run the following commands:

export CONTAINERD_VERSION=$(curl -s https://api.github.com/repos/containerd/containerd/releases/latest | grep -oP '"tag_name": "v\K[^"]+')
export ARCH=$(uname -m)
# Map the kernel's architecture name to the one used in release filenames
if [ "$ARCH" == "x86_64" ]; then ARCH="amd64"; fi
if [ "$ARCH" == "aarch64" ]; then ARCH="arm64"; fi
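To sanity-check the grep pattern without hitting the network, you can run it against a canned API response (the version string below is invented for the demo):

```shell
# Stand-in for the GitHub releases API payload
sample='{"tag_name": "v2.0.4", "name": "containerd 2.0.4"}'

# \K drops everything matched so far, leaving just the version number
echo "$sample" | grep -oP '"tag_name": "v\K[^"]+'
# prints: 2.0.4
```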

2. Download ContainerD

wget https://github.com/containerd/containerd/releases/download/v$CONTAINERD_VERSION/containerd-$CONTAINERD_VERSION-linux-$ARCH.tar.gz

3. Extract to /usr/local

sudo su  # Switch to root user (optional)
tar Cxzvf /usr/local containerd-$CONTAINERD_VERSION-linux-$ARCH.tar.gz
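The leading C in tar Cxzvf is the old-style spelling of --directory: tar changes into the next argument (/usr/local) before extracting. A harmless demo of the same flag order, using throwaway directories and a made-up filename:

```shell
# Build a tiny tarball to stand in for the containerd release archive
mkdir -p demo_src demo_dst
echo 'hello' > demo_src/containerd.demo
tar czf demo.tar.gz -C demo_src containerd.demo

# Same shape as the command above: C consumes demo_dst, f consumes the tarball
tar Cxzf demo_dst demo.tar.gz

cat demo_dst/containerd.demo
# prints: hello
```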


Enable ContainerD as a Systemd Service

If you plan to start ContainerD via systemd, download the service unit file and enable it:

  1. Download the service file:

    wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /usr/lib/systemd/system/containerd.service
    
  2. Reload systemd & enable the service:

    systemctl daemon-reload
    systemctl enable --now containerd
    
  3. Check ContainerD status:

    systemctl status containerd
    

If everything is running fine, you’re all set! 🚀

Using version variables & architecture detection ensures users download the correct binary and keeps this guide easy to update. Hope this helps! 💡


Step 3: Install runc

What is runc and Why is it Needed?

runc is a lightweight command-line tool used to spawn and run containers on Linux systems according to the Open Container Initiative (OCI) specification. It is the underlying runtime that most container engines, like Docker and containerd, rely on to execute containers.

Why is runc Needed?

  • Low-level container runtime: runc sets up the container’s environment, including namespaces and cgroups, and starts the process inside the container.
  • Standardized and OCI-Compliant: It follows the OCI runtime spec, making it compatible across different container orchestration systems.
  • Used by Higher-Level Container Runtimes: Tools like containerd and CRI-O use runc to actually start and manage container processes.
  • Security & Isolation: It ensures proper container isolation using Linux security features.

How containerd Uses runc

  • containerd does not run containers directly; instead, it delegates container execution to runc.
  • runc handles the low-level execution, while containerd focuses on container lifecycle management.

If containerd is the brain, runc is the hands that actually do the work of running containers. 🚀


Install runc

  1. Download the right version (for my machine, it's runc.amd64):

    • Copy the link address and run:
      wget https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.amd64
  2. Install runc:

    install -m 755 runc.amd64 /usr/local/sbin/runc
    
  3. Verify Installation:

    • Run runc --version to confirm it's installed:
      runc --version

Step 4: Install CNI Plugin

Why Do We Need CNI Plugins?

The Container Network Interface (CNI) plugin is required to enable networking for containers. It ensures that containers can communicate with each other and with external networks.

  1. Create a directory for CNI plugins:

    mkdir -p /opt/cni/bin
    
  2. Download the CNI plugin:

    • Head over to Getting Started with containerd > Step 3 Install CNI Plugin
    • Go to Releases and choose the right plugin.
    • For Linux (amd64), run:
      wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
  3. Extract and Install CNI Plugins:

    tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz
    

Before running these commands, check the CNI plugins releases page and substitute the latest version number.


Step 5: Exploring config.toml in ContainerD

What is config.toml?

The config.toml file is ContainerD’s main configuration file. It controls networking, container isolation, storage, logging, and security settings.

Generate & Edit config.toml

🍭 Create the directory (if not exists):

mkdir -p /etc/containerd

🍭 Generate the default config:

containerd config default > /etc/containerd/config.toml

🍭 Open the file to edit:

vim /etc/containerd/config.toml

If you see TOML configurations, you’re in the right place! Exit the file (:q!) and proceed to Step 6. 🚀

Step 6: Configuring the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, head over to the config.toml file.

  1. Open the config file:

    vim /etc/containerd/config.toml
    
  2. Locate the following section:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    
  3. Inside this section, find:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    
  4. Modify the SystemdCgroup setting:
    Change:

    SystemdCgroup = false
    

    to:

    SystemdCgroup = true
    

💡 Why is this important?
– The kubelet and containerd must use the same cgroup driver for stability.
– systemd is the default on most Linux distros and integrates better with cgroup v2.
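If you prefer not to hunt through the file in vim, the flip can be done with a one-line sed. Sketched here against a stand-in file so nothing is modified; on a real node you'd run the same sed (with sudo) against /etc/containerd/config.toml and then restart containerd:

```shell
# Stand-in for /etc/containerd/config.toml
CONFIG=./config.toml.demo
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = false' > "$CONFIG"

# Flip the cgroup driver setting in place
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONFIG"

grep SystemdCgroup "$CONFIG"   # now shows SystemdCgroup = true
```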


Check for cgroup v2 Support

Run:

stat -fc %T /sys/fs/cgroup

If the output is cgroup2fs, then cgroup v2 is enabled.

💡 Why prefer cgroup v2?
– Simplifies resource management.
– Avoids inconsistencies seen in cgroup v1.
– Recommended by Kubernetes for better stability.


Restart & Verify ContainerD

  1. Restart containerd:

    systemctl restart containerd
    
  2. Check the status:

    systemctl status containerd
    

    If everything is active ✅, congrats! 🎉 Your setup is good to go!


SCENE 3: Install Kubeadm, Kubelet, and Kubectl

We've warmed up—now it's time for the real workout! But don't worry, it's pretty straightforward. (At least, I hope so! 😆)


Step 1: Update apt and Install Dependencies

First, let's update the package list to ensure we're working with the latest versions:

apt-get update

Now, install the necessary dependencies:

# apt-transport-https may be a dummy package; if so, you can skip that package
apt-get install -y apt-transport-https ca-certificates curl gpg

📝 Why these packages?
apt-transport-https → Allows APT to fetch packages over HTTPS (secure connections).
ca-certificates → Ensures system trusts SSL certificates.
curl → Fetches files from the internet.
gpg → Verifies package signatures for security.


Step 2: Download the Kubernetes Package Repository Signing Key

Run the following command to add the public signing key for the Kubernetes package repositories:

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Step 3: Add the Kubernetes APT Repository

Run the following command to add the Kubernetes repository:

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Along with following this tutorial, I would highly suggest keeping the official documentation open alongside, so you always pick up the right version.

📝 A Note on Pinning Versions

🔹 Why is pinning important?
If you don’t pin versions, an upgrade to a newer Kubernetes version might happen during a system update, which can break compatibility in a multi-node setup. AND THIS IS BAD AND SAD AND AWFUL FOR CYBERSECURITY—DO NOT DO IT OR I WILL BE DISAPPOINTED. 😤 But fear not, this is what Dependabot was made for! Look into it—even though it’s not totally secure either, but at least it’s a start.

🔹 Quick Definitions:
kubelet → The agent that manages containers on each node.
kubeadm → Initializes the cluster.
kubectl → The CLI you use to interact with the cluster.


Step 4: Update the apt Package Index, Install Kubernetes Components, and Pin Their Versions

Run the following commands to install kubelet, kubeadm, and kubectl:

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

📝 Component Responsibilities:
Kubelet → Ensures that containers are running in a node.
Kubeadm → Responsible for initializing a Kubernetes cluster.
Kubectl → CLI tool for interacting with the Kubernetes API.

Next up: Verifying installation and configuring your cluster! 🚀


SCENE 4: Creating a Cluster

Initializing Your Control Plane Node

We want to set up a Highly Available (HA) control plane and add worker nodes. To achieve this, we need to define a --control-plane-endpoint. If we were in the cloud, we could use a load balancer's IP. But we are setting up on our machines, so we need an alternative.

Enter KubeVIP! ✨

KubeVIP allows us to use a virtual IP address (VIP) for the control plane without requiring an external cloud-based load balancer.

Step 1: Preparing for HA Control Plane with KubeVIP

🔹 Why HA Matters? – The control plane schedules workloads and manages cluster state. – In an HA setup, multiple control planes prevent downtime in case one fails.

Follow the KubeVIP documentation to get started.

📝 If you're NOT worried about HA (which you should be), you can skip this. But be warned: without HA, you'll need to configure external load balancers later!

Step 2: Generating a KubeVIP Manifest

First, find a free IP in your network:

ip a

Set a VIP address (ensure it doesn't conflict with existing IPs):

export VIP=xxx.xxx.x.xxx

Set the interface name where KubeVIP will announce the VIP:

export INTERFACE=interface_name

Install jq to parse the latest KubeVIP release:

apt install jq -y

Fetch the latest version:

KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")

Note: KubeVIP setup only applies to the control plane node, not worker nodes.
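The jq filter above grabs the name field of the first (newest) entry in the releases array. You can check the filter logic against a canned payload (the version strings here are invented for the demo):

```shell
# Stand-in for the kube-vip releases API response
printf '[{"name":"v0.8.9"},{"name":"v0.8.8"}]' | jq -r '.[0].name'
# prints: v0.8.9
```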

Step 3: Creating the KubeVIP Manifest

For containerd, run:

alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

Ensure the manifest directory exists:

mkdir -p /etc/kubernetes/manifests

Generate the manifest:

kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml

Step 4: Initializing the Control Plane

Run:

kubeadm init --control-plane-endpoint $VIP

🔹 Breaking Down kubeadm init Flags:
– --control-plane-endpoint → Defines the shared endpoint for all control planes.
– --pod-network-cidr → Defines the pod network range (must match CNI plugin requirements).

Step 5: Setting Up kubeconfig

🔹 What is kubeconfig? This file stores credentials and cluster info so kubectl can securely communicate with the cluster.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

🔴 Next: Adding Worker Nodes and Deploying a Pod Network!


Kubernetes Cluster Setup with Calico Networking

Step 1: Choose a Pod Network Add-on

A Pod Network Add-on enables communication between pods in your cluster. We are using Calico, a widely adopted choice due to its scalability and support for network policies.

Some network providers require you to pass specific flags to kubeadm init. For Calico, you must set --pod-network-cidr appropriately.

Download the Calico Networking Manifest

Run the following command to download the Calico manifest:

curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml

Step 2: Understanding Pod CIDR

CIDR (Classless Inter-Domain Routing) assigns IP addresses in a structured manner. Each pod in your cluster requires a unique IP from the CIDR block. This ensures that pods can communicate across nodes within the cluster.
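As a back-of-the-envelope check, a CIDR block with prefix length p contains 2^(32 − p) addresses. A quick shell sketch for Calico's default /16:

```shell
prefix=16                          # Calico's default pod CIDR is 192.168.0.0/16
total=$(( 1 << (32 - prefix) ))    # 2^(32 - prefix) addresses in the block
echo "A /$prefix pod CIDR provides $total addresses"
# prints: A /16 pod CIDR provides 65536 addresses
```

That is far more pod IPs than a small lab cluster needs, but it leaves room to grow.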


Step 3: Initialize the Kubernetes Cluster

To initialize your cluster, use the kubeadm init command. Set the --pod-network-cidr to match Calico's expected value:

kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=192.168.0.200 --apiserver-advertise-address=192.168.0.201

Note: The control-plane-endpoint should ideally be a DNS name or a static IP that won’t change. If the IP changes later, the cluster may become inaccessible due to certificate mismatches.

Once the initialization is complete, configure your kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf

Step 4: Set Up a DNS Record

To ensure smooth communication within the cluster, add a DNS record in /etc/hosts:

vim /etc/hosts

Add the following entry:

192.168.0.200 kube-api-server

Test the connection:

apt install iputils-ping
ping kube-api-server

Step 5: Deploy Calico Networking

Apply the Calico manifest to configure networking:

kubectl apply -f calico.yaml

Verify installation:

kubectl get -f calico.yaml
kubectl get po -n kube-system

If all pods are running successfully, your network setup is complete.


Step 6: Add Worker Nodes to the Cluster

To join worker nodes, first retrieve the join command from the control plane:

kubeadm token create --print-join-command

Run the command outputted on each worker node:

kubeadm join 192.168.0.200:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-ca-cert-hash>

Important Notes:

  • kubeadm tokens expire after 24 hours. If the token expires, use kubeadm token create --print-join-command to generate a new one.
  • The ca-cert-hash and token values are unique to your cluster. Do not reuse sample values from tutorials.

Once the nodes have joined, verify the cluster status:

kubectl get nodes -o wide

Troubleshooting

Common Issues:

  1. Node Not Ready:

    kubectl get nodes
    journalctl -xeu kubelet
    

    Check the logs for errors and ensure that kubelet is running properly.

  2. Pod Stuck in Pending State:

    kubectl describe pod <pod-name> -n kube-system
    kubectl logs <pod-name> -n kube-system
    

    Look for networking issues or missing node components.

  3. Control Plane Inaccessible:

    • Ensure the control-plane-endpoint is resolvable.
    • Check that the API server is running: kubectl cluster-info.

Security Considerations

  • Do not expose the Kubernetes API server to the internet without proper authentication and firewall rules.
  • Use RBAC (Role-Based Access Control) to limit access to cluster resources.
  • Implement TLS encryption for secure communication between components.
  • Regularly update your cluster and monitor for security vulnerabilities.

Next Steps

Now that your cluster is up and running, here are some recommended next steps:

  1. Deploy a sample application:
     kubectl create deployment nginx --image=nginx
     kubectl expose deployment nginx --type=NodePort --port=80
  2. Install Kubernetes Dashboard to monitor your cluster visually.
  3. Explore Kubernetes Concepts like Deployments, Services, and RBAC.

🚀 Congratulations! Your Kubernetes cluster with Calico networking is ready for use!

 

from Vaishali Rawat

Introduction

Kubernetes, often abbreviated as K8s, is an open-source platform for automating the deployment, scaling, and management of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes has become the industry standard for managing applications in cloud environments.

If you're new to Kubernetes, don’t worry! This guide will break it down in a simple way so that even beginners can understand what it is, why it's important, and how to get started.

Why Do We Need Kubernetes?

Before Kubernetes, managing applications across multiple servers was a complicated task. Organizations used to run applications on physical servers or virtual machines, leading to inefficiencies such as:

  • Over-provisioning (allocating more resources than necessary to handle peak loads)

  • Under-utilization (wasting resources when demand is low)

  • Scaling challenges (difficulty in managing application traffic efficiently)

Kubernetes solves these problems by providing an intelligent orchestration system that efficiently manages workloads, ensuring applications run reliably across different environments.

Key Features of Kubernetes

1. Container Orchestration

Kubernetes automates the deployment and scaling of containers, ensuring applications run efficiently without manual intervention.

2. Automatic Scaling

It automatically increases or decreases application instances based on demand, optimizing resource usage.

3. Self-Healing

If a container crashes, Kubernetes automatically restarts or replaces it to maintain application availability.

4. Service Discovery & Load Balancing

Kubernetes efficiently distributes network traffic between application instances to ensure smooth performance.

5. Rolling Updates & Rollbacks

Kubernetes enables seamless application updates without downtime and allows reverting to previous versions if needed.

Core Components of Kubernetes

To understand how Kubernetes works, let's break down its key components:

  1. Cluster

A Kubernetes cluster consists of multiple machines (nodes) that work together to run applications.

  2. Nodes
  • Master Node: Controls the cluster and manages workload scheduling.

  • Worker Nodes: Run the application workloads.

  3. Pods

The smallest deployable unit in Kubernetes. A pod contains one or more containers that share resources.

  4. Deployments

Manage application rollouts, ensuring reliable and automated updates.

  5. Services

Provide a stable networking endpoint, allowing different parts of an application to communicate with each other.

  6. ConfigMaps & Secrets

Help manage configuration data and sensitive information securely.

Getting Started with Kubernetes

Step 1: Install Kubernetes Locally

For local development, you can use tools like:

  • Minikube: A lightweight Kubernetes cluster for local testing.

  • Kind: Runs Kubernetes clusters inside Docker containers.

Step 2: Deploy Your First Application

Run a simple Nginx web server using Kubernetes with these steps:

  1. Create a deployment:
kubectl create deployment nginx --image=nginx
  2. Expose the deployment as a service:
kubectl expose deployment nginx --type=LoadBalancer --port=80
  3. Get the service details:

kubectl get services

Access your application using the provided URL.

Step 3: Learn Kubernetes Concepts in Depth

To deepen your understanding, check out online courses, YouTube playlists, and hands-on labs. The Kubernetes tutorial playlist is a great place to start.

Conclusion

Kubernetes is a powerful tool that simplifies container management, making it easier to deploy and scale applications efficiently. Whether you’re a beginner or an experienced developer, learning Kubernetes will open doors to cloud-native development and DevOps practices.

Start small, practice regularly, and soon you'll be managing Kubernetes like a pro!

 

from sen

I just set up some fun nonsense today to handle images, so please enjoy this fun public domain picture of a waterfall:

public domain picture of Chōshi_Falls

Ok, so what is going on here?

The blog engine we use here is called writefreely; it's federated and generates the blogposts themselves from a markdown text editor in the browser. Images can be added via link syntax, so literally:

![public domain picture of Chōshi_Falls](https://cdn.gadfly.ai/Chōshi_Falls.jpg)

Now we uh, just have to host the pictures somehow.

enter the CDN

The folks behind writefreely maintain a hosted version called write.as and if you are a Write.as Pro member you can use their snap.as service which is, you guessed it, a hosted version of their picture hosting project called snapfreely, so we just head on over to the repo and...

A screenshot of the snapfreely repository on github. It's EMPTY.

oh.

Well the good news is that we already have a perfectly good webserver (several actually, most of the backdoor backend services used by The Gadfly Horde™ here at HQ use nginx as a reverse proxy, the others ARE just nginx) that we use to serve our main site gadfly.ai.

Now we just need to put the files there somehow.

TBQH I am a big fan of scp but I also live in the terminal, guzzle litres of pourover and read man pages voluntarily(ok that last one was a lie, but you get my point)

but I don't always have a console at hand, or it's not always convenient, and neither is giving everyone shell access to a machine, so it would actually be nice if I could upload files the same way I am writing this blogpost: with several million lines of C++, a.k.a. a web browser.

The Rabbithole

this actually was one of the shallower ones I went down but still I learned a few things.

I had 3-ish main goals, I wanted to:

  1. upload files thru the browser (duh)
  2. use nginx
  3. have authentication (for upload)

More specifically I wanted the files in a bare dir so I could write an nginx config pointing to it as the document root, easy peasy.

This proved difficult.

lots of photohosty and other cdn things do cool image re-encoding stuff, but they also store things in databases and I already have enoughofthosethankyouverymuch.

awesome-selfhosted was a good resource to look thru all this stuff, and after I sort of learned that what i wanted was a web file manager I took to the high seas.⛵

and returned to port immediately.

I found DAMS or “Digital Asset Management Software” which reeks of Enterprise™ so I avoided it, and most of the big FTP clients were desktop based, so bit of a no-go.

back to the awesome-selfhosted list again there were some cool small things but they either used their own server, database or both.

The outliers though,

sigh

were written in PHP.

here we go

I actually pivoted at the last minute (hour, day, whatever).

originally i was going to use IFM but I finally chickened out because the authentication infra was uh... barebones.

I then somehow stumbled across filebrowser (inspired name, I know), and actually managed to wrestle an nginx config into shape for it.

Wound up having to install a bunch more packages (go figure) and then mess with permissions and ownership around unix sockets and php_fpm (did you know nginx runs as nginx:nginx on debian things now? what a time to be alive...) but I got it up and running behind a subdomain.

We still have a problem tho.

The cdn (no really)

For my own sake (and others) I wanted to make it really easy to find and copy a link to the bare image file for use in the blog, and uh, maybe I'm just not up on my nginx-fu when it comes to url rewriting (ok face it, I'm not) and also not willing to deeply mess with the internals of the file browser's routes so there aren't any collisions, but this means I probably can't (easily) host the filemanager interface on the same subdomain as the files themselves.

This means I need 2 subdomains (which is fine) and to somehow have the file browser UI give me a link to that same file, but from the other subdomain. That should be straightforward at least.

Well, the joke's on me bc I downloaded the release tarball of “file manager” and so I was met with a blob of post-masticated JS. This caused me to retreat into the sources, where I learned...

that “file browser” was just a fork of FileGator

sigh

I had talked myself into using “file browser” even though it hadn't been updated in 2 years bc it had the best auth infrastructure of anything I could find. and here's filegator that just got a new release last week

cue the fury of me ripping up /var/www/filebrowser and installing /var/www/filegator. The good news is that all the work I put into the nginx configs was still valid; I just had to do a find-replace. It of course didn't start up, because I had once again installed a release tarball. Then I did some more stuff that didn't work, until I pulled down the main git branch and followed the install instructions. Then it worked. Hooray!!!

Ok now uh where was I...

Oh yeah, cdn link.

Now I have the source so I can dig through and look for pieces of user interface to modify.

HERE!

<b-dropdown-item
  v-if="props.row.type == 'file' && can('download')"
  v-clipboard:copy="getDownloadLink(props.row.path)"
  aria-role="listitem"
>
  <b-icon icon="clipboard" size="is-small" />
  {{ lang('Copy link') }}
</b-dropdown-item>

It's the bit of code responsible for the “copy link” button in the per-file dropdown menu of the filegator ux.

The "Copy Link" button from the per file dropdown menu on the filegator ux

queue some trepidation about plumbing the depths of vue.js docs and second-guessing myself about JS syntax, oh and several npm run build errors before I got something workable.

<b-dropdown-item
  v-if="props.row.type == 'file' && can('download')"
  v-clipboard:copy="'https://cdn.gadfly.ai/' + props.row.name"
  aria-role="listitem"
>
  <b-icon icon="clipboard" size="is-small" />
  {{ lang('Copy cdn link') }}
</b-dropdown-item>

taa-daa!

Now I have another button in the drop down, that says “Copy cdn Link” which uh, does that. There are certainly more elegant, more configurable ways of doing this that I would implement if i wanted to get this code upstreamed, but it works for now. Also, you still have to write the markdown around the link when using it on the blog. Who knows maybe I will add a “Copy markdown Link” button in the future.

until next time... EOL.

 

from sen

lots of fun and exciting things have been happening over here at gadfly ai HQ, we have added a bunch of services, both public and private facing.

 