<![CDATA[CyberHost]]>https://cyberhost.uk/ | Ghost 5.79 | Tue, 20 Feb 2024 13:02:30 GMT<![CDATA[Offsite Proxmox Backup Server with S3 Glacier]]>In this post, we'll be deploying Proxmox Backup Server (PBS) on a Debian 12 VPS and storing the backups on Scaleway Glacier S3 compatible storage.

Warning
If deploying on a cloud VPS, it's probably best to not directly expose this to the internet. Here are some

]]>
https://cyberhost.uk/deploying-proxmox/65d0a1c3df4ffc0001483c84Tue, 20 Feb 2024 13:00:00 GMT

In this post, we'll be deploying Proxmox Backup Server (PBS) on a Debian 12 VPS and storing the backups on Scaleway Glacier S3 compatible storage.

Warning
If deploying on a cloud VPS, it's probably best to not directly expose this to the internet. Here are some safer ways to connect:
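For example, one option is to only allow the PBS web UI (TCP 8007) in over a WireGuard tunnel. A minimal sketch using UFW, where the wg0 interface and the 10.8.0.0/24 subnet are assumptions for your own setup:

```shell
# Assumed: WireGuard is already up on wg0 serving the 10.8.0.0/24 subnet
sudo ufw allow in on wg0 from 10.8.0.0/24 to any port 8007 proto tcp  # UI/API via the tunnel only
sudo ufw deny 8007/tcp                                                # block direct internet access
sudo ufw enable
```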

Things to note:

  • Initialisation of the Datastore took me 19 hours
  • Backing up with encryption is recommended
  • 8 fairly minimal Debian VMs only used 14GB of storage with ZSTD compression.

Installing Proxmox Backup Server on Debian 12

Follow the official docs or use our simplified version below.

Add Proxmox Repositories

sudo wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

Validate SHA512 Hash
sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

Expected output:
7da6fe34168adc6e479327ba517796d4702fa2f8b4f0a9833f5ea6e6b48f6507a6da403a274fe201595edc86a84463d50383d07f64bdde2e3658108db7d6dc87

If your output is not the above, validate with the official docs.

Add to apt sources
sudo nano /etc/apt/sources.list
Paste the following:

deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription

Install

sudo apt update
sudo apt install proxmox-backup-server

Set root password:

sudo proxmox-backup-manager user update root@pam --password SecurePassword

Reboot
sudo reboot

Login
https://IP:8007

Configuring S3 Backend

PBS does not natively support S3 storage for a Datastore; however, we can make it work using s3fs.

Install s3fs
sudo apt install s3fs

Add Credentials
sudo nano /etc/passwd-s3fs

Add the following line to the file, using your credentials:
ACCESS_KEY_ID:SECRET_ACCESS_KEY

Set Permissions
sudo chmod 600 /etc/passwd-s3fs

Create mount point location
sudo mkdir /mnt/pbs-s3

Connect to S3
With debug on:
sudo s3fs YOUR-BUCKET /mnt/pbs-s3 -o allow_other -o passwd_file=/etc/passwd-s3fs -o url=https://s3.nl-ams.scw.cloud -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o dbglevel=info -f -o curldbg

Make sure it all works OK: you should see 200 responses in the debug output, and be able to touch text.txt in /mnt/pbs-s3 without issues.
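Once you're happy it works, the same command minus the foreground/debug flags is what you'll want for day-to-day use and for the boot script later:

```shell
sudo s3fs YOUR-BUCKET /mnt/pbs-s3 -o allow_other -o passwd_file=/etc/passwd-s3fs -o url=https://s3.nl-ams.scw.cloud -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi
```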

Connect on boot
Ideally you'd use fstab for this, however I kept running into issues setting it up, so I resorted to a cron @reboot script to mount it on boot.

Create Boot up script
nano s3-mount.sh

Paste in your connect command above

Add execute permissions
sudo chmod u+x s3-mount.sh

Sudo crontab
sudo crontab -e

Enter the following:
@reboot /bin/sh /home/YOUR-USER/s3-mount.sh
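If you'd rather not use cron, a systemd unit is another way to mount on boot. A sketch, assuming the same s3fs command as above (s3fs daemonizes by default, hence Type=forking):

```ini
# /etc/systemd/system/pbs-s3.service
[Unit]
Description=Mount S3 bucket for PBS datastore
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/bin/s3fs YOUR-BUCKET /mnt/pbs-s3 -o allow_other -o passwd_file=/etc/passwd-s3fs -o url=https://s3.nl-ams.scw.cloud -o use_path_request_style -o endpoint=nl-ams
ExecStop=/bin/umount /mnt/pbs-s3
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now pbs-s3.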

Proxmox Configuration

In Proxmox Backup Server, click "Add Datastore" (bottom of the left panel). Fill in the details:

Offsite Proxmox Backup Server with S3 Glacier

After clicking Add, it can take some time to create the Datastore, and I mean a crazy long time: just over 19 hours for me, after writing 65,539 empty directories to disk.

Offsite Proxmox Backup Server with S3 Glacier
Scaleway S3 Bucket Stats
Offsite Proxmox Backup Server with S3 Glacier

PBS Setup in Proxmox Hypervisor

On your Proxmox Hypervisor head to Datacenter > Storage > Add > Proxmox Backup Server

Offsite Proxmox Backup Server with S3 Glacier

Fill in the details:
As you are storing your VMs in the cloud, you'll probably want to enable Encryption for your backups to keep them secure. Just make sure you keep the key safe!

Offsite Proxmox Backup Server with S3 Glacier

Back in Proxmox Backup Server, click "Show Fingerprint" and paste the value into the Fingerprint field on the Proxmox hypervisor.

Offsite Proxmox Backup Server with S3 Glacier

Then create your backups under the Backup tab within the Proxmox hypervisor. These are going to take a fair bit longer than local storage due to the upload and overhead of S3.

Moving backups to Glacier

Create a Lifecycle rule in S3; this can be achieved with most S3 providers that sell Glacier/long-term storage.

Deletion should be handled by Proxmox, but it may be worth checking up on this. You may need to utilise a Lifecycle delete rule instead.

Offsite Proxmox Backup Server with S3 Glacier

That's it! It's worth periodically checking in to ensure everything is being backed up as expected.

Follow the Blog on Mastodon:

CyberHost (@cyberhost@infosec.exchange)
Offsite Proxmox Backup Server with S3 Glacier: https://cyberhost.uk/deploying-proxmox/
]]>
<![CDATA[Octopus Energy Home Mini Review - A Network Teardown]]>After waiting a few months, my Octopus Home Mini has finally arrived.

As this is quite an interesting product and one of the first products where your energy supplier uses your Wi-Fi, I thought I'd do a full teardown and see what it gets up to. Hint: I&

]]>
https://cyberhost.uk/octopus-home-mini-teardown-review/65cceeabdf4ffc0001483b7dWed, 14 Feb 2024 18:46:49 GMT

After waiting a few months, my Octopus Home Mini has finally arrived.

As this is quite an interesting product and one of the first products where your energy supplier uses your Wi-Fi, I thought I'd do a full teardown and see what it gets up to. Hint: I'm quite impressed

Not on Octopus Energy yet? Get £50 credit and help the blog: https://share.octopus.energy/ebon-snail-338

What's a Home Mini?

"The Octopus Home Mini is a small, palm-sized device that beams live readings from your smart meter to our cloud-based platform Kraken, so we can show you up-to-the-minute smart insights via your Octopus Energy app." - https://octopus.energy/blog/octopus-home-mini/

What's in the box

  • Instructions
  • Home Mini
  • Micro USB Cable
  • UK 5W USB Adapter
Octopus Energy Home Mini Review - A Network Teardown

Setup Process

This is pretty straightforward: just scan the QR code in the instructions, which launches the Octopus app.

  1. Enable Location Services and Bluetooth permissions to the app
  2. Connect to the Home Mini
  3. Enter your Wi-Fi Password
  4. All sorted!

I would suggest placing this on a separate IoT network just out of principle. The SSID sent to the Home Mini is whichever one your phone is on, so this will mean connecting your phone to the IoT network temporarily.

After setup you can remove Location and Bluetooth permissions from the App without causing any issues.

Network Analysis

I took a PCAP of its network traffic on my router and looked at its DNS requests to understand what it is up to.

What does the home mini talk to?

Only two domains:

  • pool.ntp.org
  • aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com

pool.ntp.org
NTP Pool is an open pool of NTP servers contributed by volunteers. It's nice to see this project being used, however technically they should be using a vendor zone instead of the main domain, as it is baked into the firmware.

aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
AWS IoT serverless platform, located in the Ireland AWS datacenter.
Can't complain at this; it looks like they understand how to properly design cloud systems!

It's nice to see that the Home Mini doesn't ping off to any other random servers.

Octopus Energy Home Mini Review - A Network Teardown
DNS Lookups performed by the Home Mini (AdGuard)

Encryption?

The Home Mini talks to the Amazon endpoint using TLS 1.2. Although this is not the latest version (1.3), the cipher suite in use, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, is still considered secure.

Other than the NTP lookup being in plaintext (standard), all traffic was encrypted.

How often does it send data?

Every 10 seconds a packet of around 250 bytes is sent.

After having it plugged in for just over 2 Hours it had sent 315KB and received 242KB, so no need to worry about it slowing down your network.
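For scale, the payload alone works out to about 2 MB per day; the observed totals are a little higher because of TLS/TCP/IP overhead:

```shell
# 250-byte payload every 10 seconds, so 8640 packets per day
echo $(( 250 * (86400 / 10) ))   # 2160000 bytes, roughly 2.1 MB per day
```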

Octopus Energy Home Mini Review - A Network Teardown
Wireshark Screenshot

Is it going to hack my network???

No, all it does is the following, in time order:

  1. DHCP (To get a local IP address)
  2. DNS Lookup of pool.ntp.org
  3. Connect to 1 NTP Pool server to get the time
  4. DNS Lookup of aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
  5. Connect to aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
  6. Send a 250 byte packet every 10 seconds

The app

Enough about packets, what does the app look like?

It's not packed with features, just your live usage and 5 or 30 min usage graphs.

Octopus Energy Home Mini Review - A Network Teardown

I originally thought that data after 30 minutes was lost, however this data is moved to the "Day" tab along with gas usage, which is pretty neat.

The good and bad

Good | Bad
---- | ----
No telemetry or bad network activity | Only 2.4GHz Wi-Fi
All traffic under TLS 1.2 | USB-C would have been nice
Minimal network traffic | Basic app experience
Made from recycled ocean plastic |
Completely free! |

Summary

This is quite impressive; you can tell it's been properly developed, from the cloud setup to the minimal and correctly configured network traffic. It would have been nice if it had 5GHz Wi-Fi and USB-C for power, but for a free gadget you can understand the cost cutting here.

Follow cyberhost on Mastodon, we'll be linking it to Home Assistant in the next post. @cyberhost@infosec.exchange

]]>
<![CDATA[macOS External Monitor Scaling Fix]]>Do you want your external display to have these options:

Then read on...

I spent way too long trying to figure out why my new Dell 4K monitors wouldn't allow for scaling by adjusting the text. The screens were unusable, being super small at 4K or blurry at

]]>
https://cyberhost.uk/macos-monitor/65c4d9bd782a81000172d02cThu, 08 Feb 2024 17:40:31 GMT

Do you want your external display to have these options:

macOS External Monitor Scaling Fix

Then read on...

I spent way too long trying to figure out why my new Dell 4K monitors wouldn't allow for scaling by adjusting the text. The screens were unusable, being super small at 4K or blurry at 1080p/1440p. As this fix doesn't seem to be documented anywhere, here it is:

You may notice that when using an external monitor with your MacBook you're unable to adjust the text size. At 1080p or 1440p it's not normally an issue, but with a 4K monitor it's incredibly hard to use without scaling options.

The solution: Active HDMI Adapter - Skip to buying one

Why is that?

Most HDMI adapters on the market are passive, and this causes us issues. To carry video, USB-C uses DisplayPort Alternate Mode (Alt Mode), which requires our HDMI adapters to convert that DisplayPort signal into HDMI.

The chips used within active adapters come with many benefits:

Better signal conversion

Support for Advanced Features
Such as higher resolutions, refresh rates, and color depths. They can handle the necessary signal processing to ensure compatibility with the capabilities of the connected devices.

Resolution Scaling
Active adapters can scale the resolution to match the display's capabilities. This ensures optimal image quality without stretching or distortion.

Enhanced Compatibility
Active adapters are generally more versatile and compatible with a wider range of devices compared to passive adapters. They can handle a variety of signal formats and specifications, making them suitable for diverse connectivity needs.

If I'm honest, I don't know which feature means that we can correctly scale on macOS, but we can, and that's what matters!

Buying an Active Adapter

This is harder than you'd imagine; for some reason manufacturers don't want to tell us whether an adapter is passive or active.

As active adapters can handle the increased throughput needed for high resolutions and frame rates, this is what I used to find an adapter. Searching for "4K 120Hz" or "8K" worked for me.

I currently use a couple of Cable Matters adapters and they work perfectly. I've dropped their affiliate links below, this helps out my blog at no cost to you 😄

Cable Matters Multi Port: Amazon UK - £54 - Amazon US - $55

Cable Matters just HDMI: Amazon UK - £20 - Amazon US $20

Have another solution or verified adapter? Let us know

]]>
<![CDATA[In depth review of Kamatera Cloud - $4 VPS]]>There are a lot of reviews online for Kamatera Cloud, most of these look like sponsored posts just reading off the marketing rubbish. In this review I’ll be completing a full deep dive into Kamatera Cloud, without any filtering or marketing BS.

To be clear this isn’

]]>
https://cyberhost.uk/a-proper-review-of-kamatera-cloud-4-vps/6591478f69be83000162d610Mon, 01 Jan 2024 14:59:20 GMT

There are a lot of reviews online for Kamatera Cloud, most of these look like sponsored posts just reading off the marketing rubbish. In this review I’ll be completing a full deep dive into Kamatera Cloud, without any filtering or marketing BS.

To be clear this isn’t a sponsored post. If you think Kamatera cloud is right for you please support this ad-free site by using the affiliate link below, this is at no cost to you. As always this will not influence the opinions in this review.

Start a Kamatera 30 day trial

Pricing
CPU
Networking
Disk
Hot Add Demo
Stats
Setup Process
Support
YABS Benchmark

The Good

  • Reasonably priced with 30 day trial
  • Crazy fast networking
  • Crazy fast disk speeds
  • Fast Support

The Bad

  • No IPv6
  • Network blocked due to running a single speed test
  • No custom images
  • Timezone was set to EST as default

Pricing

Pricing starts at $4 (£3.15) per month, including 5TB of transfer at 10Gbit. These plans can be finely adjusted to suit your needs.

Start a Kamatera 30 day trial

In depth review of Kamatera Cloud - $4 VPS

Hourly billing is also an option; this works out to around $3.65 for a month but does not include any data transfer.

Data transfer is billed at $0.01 per GB. This is for both in and outgoing traffic.
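As a quick sanity check on what overage costs look like (the 500 GB figure is just an example):

```shell
# $0.01 per GB in each direction; e.g. 500 GB of extra transfer:
awk 'BEGIN { printf "$%.2f\n", 500 * 0.01 }'   # $5.00
```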

In depth review of Kamatera Cloud - $4 VPS
Bandwidth Bill

CPU

Kamatera offer 4 types of CPU; for most people, Type A will be sufficient. Pricing changes when selecting a different type.

One nice thing is that these are hot swappable, so you don’t need to reboot to change CPU type.

Type A – Availability

Server CPUs are assigned to a non-dedicated physical CPU thread with no resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
Single Core | 847
Multi Core | 649
Full Test | https://browser.geekbench.com/v6/cpu/4148053

Type B – General Purpose

Server CPUs are assigned to a dedicated physical CPU Thread with reserved resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
|
Single Core | 886
Multi Core | 701
Full Test | https://browser.geekbench.com/v6/cpu/4148754

Type T – Burstable

Server CPUs are assigned to a dedicated physical CPU thread with reserved resources guaranteed.

Type D – Dedicated

Server CPUs are assigned to a dedicated physical CPU Core (2 threads) with reserved resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
|
Single Core | 886
Multi Core | 692
Full Test | https://browser.geekbench.com/v6/cpu/4149244

More can be found here: https://www.kamatera.com/faq/answer/which-cpu-types-are-offered-by-kamatera/

Networking

5TB of included traffic at 10Gbit is very good when comparing against the likes of AWS (100GB), DigitalOcean (1TB) and Vultr (3TB).

The IPs I was allocated were very clean, with 0 hits on MXToolbox.
e.g. https://mxtoolbox.com/SuperTool.aspx?action=blacklist%3A185.127.19.95

No IPv6 Support

This was confirmed when asking their support desk:


Question asked: "Are there any plans to support IPv6 in the future?"

Support Response: "As of today, we do not provide and allow IPv6 to be used on our infrastructure. You can use only IPv4."

Speedtest

When running speed tests, their automated security blocked the IP; this block was in place for a couple of hours before automatically being removed.

I spoke to their technical support to understand why the network was blocked; they told me this was likely down to the high server-side traffic caused by the speed tests. Although I understand why (they don't want their infrastructure being used to launch powerful DDoS attacks), the immediate blocking seems a little over the top to me. I can't fault the support, who initially responded in 25 mins with a follow-up reply in 9 mins.

Disk

Disk Benchmarks

Test 1

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 202.51 MB/s (50.6k) | 2.51 GB/s (39.3k)
Write | 203.04 MB/s (50.7k) | 2.53 GB/s (39.5k)
Total | 405.55 MB/s (101.3k) | 5.04 GB/s (78.8k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.61 GB/s (7.0k) | 3.23 GB/s (3.1k)
Write | 3.80 GB/s (7.4k) | 3.45 GB/s (3.3k)
Total | 7.42 GB/s (14.4k) | 6.68 GB/s (6.5k)

Test 2

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 197.21 MB/s (49.3k) | 2.50 GB/s (39.1k)
Write | 197.73 MB/s (49.4k) | 2.52 GB/s (39.3k)
Total | 394.94 MB/s (98.7k) | 5.02 GB/s (78.5k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.46 GB/s (6.7k) | 3.17 GB/s (3.0k)
Write | 3.65 GB/s (7.1k) | 3.38 GB/s (3.3k)
Total | 7.12 GB/s (13.9k) | 6.55 GB/s (6.3k)

Test 3

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 185.06 MB/s (46.2k) | 2.65 GB/s (41.5k)
Write | 185.55 MB/s (46.3k) | 2.67 GB/s (41.7k)
Total | 370.61 MB/s (92.6k) | 5.32 GB/s (83.2k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.55 GB/s (6.9k) | 3.46 GB/s (3.3k)
Write | 3.74 GB/s (7.3k) | 3.69 GB/s (3.6k)
Total | 7.30 GB/s (14.2k) | 7.15 GB/s (6.9k)

Hot Add

Unlike other cloud providers I've used, Kamatera allows you to adjust the CPU type, CPU cores and memory without a reboot. A reboot is often required to downgrade, however.


Hot Add Demo

Stats Page

Like most providers Kamatera provide a system monitoring tool for CPU, Memory, Network and disk.

In depth review of Kamatera Cloud - $4 VPS

Setup Process

I started with a Debian VPS (1 CPU, 1GB RAM, 20GB SSD); this took 1 minute 21 seconds to deploy, with quite a fancy initialisation log showing what's happening behind the scenes, which is nice to see. At the start you're given the allocated IPv4 address, which is handy as you can go off setting up DNS while the VPS is provisioning.

In depth review of Kamatera Cloud - $4 VPS

One thing to note is the Debian image wasn't up to date; it was running 12.2, which was released on the 7th October 2023. Not a massive issue, but it's nice to see cloud providers shipping up-to-date, fully patched images.

The image used is also super minimal; I had to install sudo and curl and create a new user.

It would be better if Kamatera set up the user account for you and installed your SSH key on that user, to promote the better security practice of not using the root account.
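Until then, you can do it yourself after first login. A sketch of the usual steps, run as root, where the username and key path are examples:

```shell
# create a non-root user with sudo access and copy over root's SSH key
adduser --disabled-password --gecos "" deploy
usermod -aG sudo deploy
install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
install -m 600 -o deploy -g deploy /root/.ssh/authorized_keys /home/deploy/.ssh/authorized_keys
```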

Support

As previously mentioned, I got in contact with their support department. They guarantee to get back within 24 hours, however in practice they were much faster: I sent them 4 emails and the response times were 25 mins, 9 mins, 1 min and 10 mins.

I was put through to the cloud services department for this and all the staff were fairly technical, answering my questions well.

YABS Full Benchmark

A full benchmark of the $4 a month VPS plan was completed using YABS

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2023-11-30                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Sun Dec 24 12:07:42 PM EST 2023

Basic System Information:
---------------------------------
Uptime     : 0 days, 18 hours, 46 minutes
Processor  : Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
CPU cores  : 1 @ 2593.905 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ❌ Disabled
RAM        : 925.8 MiB
Swap       : 1024.0 MiB
Disk       : 19.6 GiB
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.1.0-13-amd64
VM Type    : VMWARE
IPv4/IPv6  : ✔ Online / ❌ Offline

IPv4 Network Information:
---------------------------------
ISP        : Kamatera Inc
ASN        : AS210329 Kamatera Inc
Host       : Cloudwebmanage EU LO
Location   : Poplar, England (ENG)
Country    : United Kingdom

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/sda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 178.16 MB/s  (44.5k) | 2.59 GB/s    (40.5k)
Write      | 178.64 MB/s  (44.6k) | 2.61 GB/s    (40.8k)
Total      | 356.80 MB/s  (89.2k) | 5.21 GB/s    (81.4k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 5.17 GB/s    (10.1k) | 5.02 GB/s     (4.9k)
Write      | 5.44 GB/s    (10.6k) | 5.35 GB/s     (5.2k)
Total      | 10.61 GB/s   (20.7k) | 10.38 GB/s   (10.1k)

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 849                           
Multi Core      | 636                           
Full Test       | https://browser.geekbench.com/v6/cpu/4123094

Start a Kamatera 30 day trial

]]>
<![CDATA[Off-site UniFi Protect Backup]]>I'm a massive fan of the UniFi Protect eco-system, yes it can get a little pricey compared to real consumer gear but it simply just works and it works well, all while on-site away from prying eyes. The issue with all of this is that a savvy burglar

]]>
https://cyberhost.uk/offsite-unifi-protect-backup/657c73d15b92190001664681Sat, 16 Dec 2023 00:14:17 GMT

I'm a massive fan of the UniFi Protect ecosystem. Yes, it can get a little pricey compared to regular consumer gear, but it simply works, and works well, all while staying on-site away from prying eyes. The issue with all of this is that a savvy burglar is going to look for your NVR system and take all the footage with them. Unfortunately, I've seen a few threads on Reddit of this happening.

Physical Security

The UniFi Cloud Key Gen2 Plus has a security slot so that a Kensington lock can be used to secure the system. Looking at the other UniFi products suggests that none of the other systems that can run the Protect application have this, which is a little disappointing.

Off-site UniFi Protect Backup

Off-Site Backup

I would have hoped this was a feature UniFi would build into the Protect application. I did hear rumours many years ago that this was something they were working on; given the number of years that have passed, I'm guessing they have given up on it.

There are a few ways to back up your footage off-site:

  • rsync the video directories on the Cloud Key to a separate machine
  • Enable RTSP to send the video streams off-site
  • Endless janky scripts

I've been using a Docker container from Sebastian Goscik (https://github.com/ep1cman/unifi-protect-backup) for close to 2 years now, and it has effortlessly shipped off all my motion events to S3 as they happen.

Setting up Unifi-Protect-Backup with Docker

We'll be deploying this in Docker. I personally run a Debian VM just for this, but a Raspberry Pi will also do the job. You'll want this to be local, and likely on the same network as the UniFi Cloud Key.

Unifi-Protect-Backup uses rclone for the backend, meaning a load of cloud providers and protocols are supported out of the box, such as:

  • S3
  • Dropbox
  • FTP
  • SFTP
  • SMB
  • WebDAV

Look through their docs and create an rclone.conf config for your cloud provider. I'll be using Scaleway S3-compatible storage; the template for this:

[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint =
acl = private
server_side_encryption =
storage_class =

Once your rclone config is sorted, we'll create a UniFi user account for the backups; this only needs basic view-only access.

Off-site UniFi Protect Backup

Docker Compose:

version: '3.1'
services:
  protect-backup-scaleway:
    image: ghcr.io/ep1cman/unifi-protect-backup
    container_name: unifi-protect-backup
    restart: unless-stopped
    environment:
      UFP_USERNAME: <username>
      UFP_PASSWORD: <password>
      UFP_ADDRESS: <Unifi-IP>
      UFP_SSL_VERIFY: 'false'
      RCLONE_DESTINATION: scaleway:<bucket-id/directory>
    volumes:
      - ./data:/data
      - /home/cyberhost/rclone.conf:/root/.config/rclone/rclone.conf:ro

You'll want to set the credentials, Unifi IP and the rclone path in the config.

Start it up: sudo docker compose up -d

You can check the logs with sudo docker compose logs

To prevent your bills going sky high, you'll want to set up some lifecycle rules. I have mine move the videos to Glacier storage after 7 days and delete them after 180 days, which reduces my monthly costs to well below £1.
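Expressed as an S3 lifecycle configuration (this is the generic S3 API shape; Scaleway's console builds the equivalent rule for you, and the rule ID is just an example):

```json
{
  "Rules": [
    {
      "ID": "video-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 180 }
    }
  ]
}
```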

Off-site UniFi Protect Backup
Scaleway Lifecycle Rules

That's it, you'll likely want to check periodically to ensure it is still backing up 😄

]]>
<![CDATA[Ionos £1 VPS Review]]>Jump to YABS

What's the deal?

UK Cost: £1.20 (Including VAT) - 12 Month Contract

US Cost: $2 - 1 Month Contract

Get Ionos VPS - This supports the hosting of this blog at no cost to you

VPS Linux XS Specs:

  • 1 vCPU
  • 1GB RAM
]]>
https://cyberhost.uk/ionos-1-vps-review/657c4ada94293600016089d3Fri, 15 Dec 2023 15:01:39 GMT

Jump to YABS

What's the deal?

UK Cost: £1.20 (Including VAT) - 12 Month Contract

US Cost: $2 - 1 Month Contract

Get Ionos VPS - This supports the hosting of this blog at no cost to you

VPS Linux XS Specs:

  • 1 vCPU
  • 1GB RAM
  • 10GB NVME
  • IPv4 and IPv6
  • 1Gbit Port
  • Unlimited Transfer

Other benefits:

  • Firewall
  • DDoS Protection

Locations: UK, US, Germany, Spain, France

These are great little boxes if you don't need much horsepower. I personally have a couple of these; one is currently hosting this blog, granted I've converted it to be static as 1GB/1CPU isn't powerful enough to run Ghost CMS. The other I use for the public IPv4, which I port forward to my router over WireGuard; as I'm stuck behind a CG-NAT it's either this or pay £5/month to my ISP.
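For anyone wanting to replicate the CG-NAT workaround, the VPS side boils down to a couple of iptables rules. A sketch, where eth0, wg0, the port and the peer address 10.0.0.2 are assumptions for your own tunnel:

```shell
# forward inbound TCP 443 on the VPS's public IP to the home router over WireGuard
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```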

I've not had any issues with uptime while running these boxes. I did have some issues with packet loss in the early days, when the network port was capped at 400 Mbit. It looks like Ionos are now deploying VPSs in a different datacenter with 1Gbit ports, and I'm happy to say I've not had any packet loss issues on the new VPSs.

I've run a YABS benchmark, which can be found at the end of this post. The NVMe drives seem slightly on the slow side when testing with 4k blocks; all the other block sizes are fairly good. I guess for £1 you can't really complain!

CPU

I scored 700 with Geekbench 6 on the Intel Xeon CPU. My other VPS has an AMD EPYC CPU. These are both meant to be in the same datacenter, so it will be pot luck which CPU you end up with.

Networking

One thing to note is that IPv6 is not enabled by default, but it can be enabled for free via the control panel without rebooting the VPS.

The IP of my recently purchased VPS seems to geolocate to Germany in quite a few GeoIP databases; I'm guessing this will be updated in the coming weeks.

These come with 1Gbit ports and unlimited transfer; I wasn't even able to find the fair usage small print. They do say "We won't throttle or restrict traffic at any time, so you never have to worry about limits or extra costs." The iperf network tests constantly hit over 1Gbit; the highest I've observed was 3.65Gbit download and 1.9Gbit upload, very impressive!

Ionos £1 VPS Review

Network Speed:

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.87 Gbits/sec  | 3.48 Gbits/sec  | 1.46 ms        
Scaleway        | Paris, FR (10G)           | 1.86 Gbits/sec  | 3.65 Gbits/sec  | 7.93 ms        
NovoServe       | North Holland, NL (40G)   | 1.90 Gbits/sec  | 3.26 Gbits/sec  | 14.1 ms        
Uztelecom       | Tashkent, UZ (10G)        | 454 Mbits/sec   | 1.02 Gbits/sec  | 191 ms         
Clouvider       | NYC, NY, US (10G)         | 1.74 Gbits/sec  | 374 Mbits/sec   | 74.3 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.68 Gbits/sec  | 1.71 Gbits/sec  | 107 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1.04 Gbits/sec  | 1.25 Gbits/sec  | 133 ms         

Firewall Setting

Although you can always use a tool such as [UFW](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) to manage a firewall on the VPS, I always prefer to have my firewall separate from the VPS. The main reason for this is Docker and how it doesn't play nicely with iptables; previously I was under the assumption there was a firewall in place, but Docker was just bypassing it, not very helpful...
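One way to sidestep the Docker/iptables problem is to publish container ports on loopback only, so nothing is exposed regardless of what Docker does to the firewall. A docker-compose sketch, where the service and image are examples:

```yaml
services:
  web:
    image: nginx
    ports:
      - "127.0.0.1:8080:80"   # reachable from the VPS itself, not from the internet
```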

Ionos has a basic interface for managing this, with changes being deployed almost instantly.

Ionos £1 VPS Review

YABS

Yet-Another-Bench-Script

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2023-11-30                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Fri Dec 15 15:03:20 UTC 2023

Basic System Information:
---------------------------------
Uptime     : 1 days, 21 hours, 29 minutes
Processor  : Intel Xeon Processor (Skylake, IBRS)
CPU cores  : 1 @ 2100.000 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 942.4 MiB
Swap       : 1024.0 MiB
Disk       : 9.8 GiB
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.1.0-15-cloud-amd64
VM Type    : MICROSOFT
IPv4/IPv6  : ✔ Online / ✔ Online

IPv4 Network Information:
---------------------------------
ISP        : IONOS SE
ASN        : AS8560 IONOS SE
Host       : Ionos
Location   : City of Westminster, England (ENG)
Country    : United Kingdom

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 40.02 MB/s   (10.0k) | 498.01 MB/s   (7.7k)
Write      | 40.12 MB/s   (10.0k) | 500.63 MB/s   (7.8k)
Total      | 80.15 MB/s   (20.0k) | 998.64 MB/s  (15.6k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 474.75 MB/s    (927) | 469.26 MB/s    (458)
Write      | 499.98 MB/s    (976) | 500.51 MB/s    (488)
Total      | 974.73 MB/s   (1.9k) | 969.78 MB/s    (946)

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.87 Gbits/sec  | 3.48 Gbits/sec  | 1.46 ms        
Scaleway        | Paris, FR (10G)           | 1.86 Gbits/sec  | 3.65 Gbits/sec  | 7.93 ms        
NovoServe       | North Holland, NL (40G)   | 1.90 Gbits/sec  | 3.26 Gbits/sec  | 14.1 ms        
Uztelecom       | Tashkent, UZ (10G)        | 454 Mbits/sec   | 1.02 Gbits/sec  | 191 ms         
Clouvider       | NYC, NY, US (10G)         | 1.74 Gbits/sec  | 374 Mbits/sec   | 74.3 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.68 Gbits/sec  | 1.71 Gbits/sec  | 107 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1.04 Gbits/sec  | 1.25 Gbits/sec  | 133 ms         

iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.81 Gbits/sec  | 2.92 Gbits/sec  | 2.21 ms        
Scaleway        | Paris, FR (10G)           | busy            | busy            | 9.88 ms        
NovoServe       | North Holland, NL (40G)   | 1.82 Gbits/sec  | 1.73 Gbits/sec  | 9.39 ms        
Uztelecom       | Tashkent, UZ (10G)        | busy            | busy            | --             
Clouvider       | NYC, NY, US (10G)         | 1.62 Gbits/sec  | 1.12 Gbits/sec  | 74.4 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.27 Gbits/sec  | 1.02 Gbits/sec  | 111 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1000 Mbits/sec  | 1.06 Gbits/sec  | 135 ms

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 700                           
Multi Core      | 502                           
Full Test       | https://browser.geekbench.com/v6/cpu/3999447

Visit Ionos - This supports the hosting of this blog at no cost to you

]]>
<![CDATA[Deploying a RIPE Atlas Software Node]]>The RIPE Atlas project is a global network of probes used to measure internet connectivity. At the time of writing there are 12868 probes online, sitting on 3792 ASNs (Networks) in 177 countries. This provides a great understanding of the state of the internet in real time, and the best

]]>
https://cyberhost.uk/deploying-a-ripe-atlas-software-node-in-a-vm/657238ef3bc0c20001b6e543Wed, 13 Dec 2023 21:45:00 GMT

The RIPE Atlas project is a global network of probes used to measure internet connectivity. At the time of writing there are 12868 probes online, sitting on 3792 ASNs (Networks) in 177 countries. This provides a great understanding of the state of the internet in real time, and the best thing is that this is a public project, with the data produced being freely available.

If you're unsure who RIPE are, they are the non-profit that assigns IPv4 and IPv6 addresses and AS numbers for the Europe region.

View measurements: https://atlas.ripe.net/measurements

Map of online probes - 7th December 2023

Running your own probe used to require custom hardware from RIPE, there have been a few different versions over the years. The most common was a repurposed TP-Link router.


You can request a hardware node here: https://atlas.ripe.net/apply/, however these are in limited supply. Running a software probe is an easier alternative especially if you're not sure about running one long term.

This is a dead easy setup using docker thanks to James Swineson. His docker compose is on GitHub: https://raw.githubusercontent.com/Jamesits/docker-ripe-atlas/master/docker-compose.yaml

This is my slightly modified version:

version: '3.1'
services:
  ripe-atlas:
    image: jamesits/ripe-atlas:latest
    restart: always
    environment:
      RXTXRPT: "yes"
    volumes:
      - ./atlas-probe/etc:/var/atlas-probe/etc
      - ./atlas-probe/status:/var/atlas-probe/status
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - DAC_OVERRIDE
      - NET_RAW
    mem_limit: "64000000000"
    mem_reservation: 64m

Save it in a docker-compose.yml file and then run with sudo docker compose up -d.

This will create a set of keys within the etc directory. Copy the contents of probe_key.pub and use it within the apply form: https://atlas.ripe.net/apply/swprobe/

If you're unsure of your AS number, you can run curl https://ipinfo.io/org

That's it! Once the form is submitted you should get an email with your probe ID. Hosting a probe earns credits, around 21,600 every 24 hours. I had to wait a full 24 hours before receiving any.
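As a rough back-of-envelope, you can estimate how many measurements a day's hosting earnings fund. The per-result cost below is purely a placeholder figure, not an official price; check the RIPE Atlas credit documentation for real costs:

```python
# Rough credit budgeting for a RIPE Atlas probe host.
# A connected probe earns roughly 21,600 credits per 24 hours (as noted above).
EARN_PER_DAY = 21_600

# Cost per measurement result varies by type; 10 credits is used here purely
# as an illustrative placeholder - not an official RIPE Atlas price.
COST_PER_RESULT = 10

def results_per_day(earned: int = EARN_PER_DAY, cost: int = COST_PER_RESULT) -> int:
    """How many measurement results a day's earnings could fund."""
    return earned // cost

print(results_per_day())  # 2160 with the placeholder cost above
```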

After your credits come in you'll then be able to run measurements via the measurements form:

Create Measurement Form

Now it's time to sit back and grab a coffee as the credits roll in and the internet is made better!

]]>
<![CDATA[Diving into a hidden macOS tool - networkQuality]]>Getting Started with networkQuality

The networkQuality tool is a built-in tool released in macOS Monterey that can help diagnose network issues and measure network performance. In this post, we'll go over how to use the networkQuality tool and some of its key features.

Running the Default Tests

To

]]>
https://cyberhost.uk/the-hidden-macos-speedtest-tool-networkquality/65722b923bc0c20001b6e41eSun, 14 May 2023 00:22:59 GMTGetting Started with networkQuality

The networkQuality tool is a built-in tool released in macOS Monterey that can help diagnose network issues and measure network performance. In this post, we'll go over how to use the networkQuality tool and some of its key features.

Running the Default Tests

To access the Network Quality tool, open the Terminal app on your Mac and enter the following command:

networkQuality -v

This command starts the tool and performs the default set of tests, displaying the results in the Terminal window.

Example output:

==== SUMMARY ====
Uplink capacity: 44.448 Mbps (Accuracy: High)
Downlink capacity: 162.135 Mbps (Accuracy: High)
Responsiveness: Low (73 RPM) (Accuracy: High)
Idle Latency: 50.125 milliseconds (Accuracy: High)
Interface: en0
Uplink bytes transferred: 69.921 MB
Downlink bytes transferred: 278.340 MB
Uplink Flow count: 16
Downlink Flow count: 12
Start: 13/05/2023, 15:04:13
End: 13/05/2023, 15:04:27
OS Version: Version 13.3.1 (a) (Build 22E772610a)
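Responsiveness is reported in RPM, round trips per minute under working conditions (see the IPPM responsiveness draft linked at the end of this post). A quick conversion shows what a score like the 73 RPM above means in latency terms; this helper is just an illustration, not part of the tool:

```python
def rpm_to_rtt_ms(rpm: float) -> float:
    """Convert a Responsiveness score (round trips per minute) to the
    average round-trip time under working conditions, in milliseconds."""
    return 60_000.0 / rpm

# The summary above reported 73 RPM:
print(round(rpm_to_rtt_ms(73), 1))  # 821.9 ms per round trip under load
```

Compare that with the 50 ms idle latency above: the connection's latency balloons by more than an order of magnitude once it is saturated, which is exactly the bufferbloat effect the metric is designed to expose.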

Usage:

USAGE: networkQuality [-C <configuration_url>] [-c] [-h] [-I <network interface name>] [-k] [-p] [-r host] [-s] [-v]
    -C: override Configuration URL or path (with scheme file://)
    -c: Produce computer-readable output
    -h: Show help (this message)
    -I: Bind test to interface (e.g., en0, pdp_ip0,...)
    -k: Disable certificate validation
    -p: Use Private Relay
    -r: Connect to host or IP, overriding DNS for initial config request
    -s: Run tests sequentially instead of parallel upload/download
    -v: Verbose output

Using Private Relay

The Network Quality tool also supports Apple's Private Relay feature, which encrypts and routes all network traffic through two separate servers for added privacy and security. To use Private Relay with the tool, you can add the "-p" flag to the command:

networkQuality -v -p

Customizing the Configuration

You can customize the configuration used by the Network Quality tool. By default, the tool requests a configuration file from Apple every time via https://mensura.cdn-apple.com/api/v1/gm/config. However, you can specify a different Configuration URL using the -C flag.

Apple's default config:

{ "version": 1,
  "test_endpoint": "uklon6-edge-bx-031.aaplimg.com",
  "urls": {
      "small_https_download_url": "https://mensura.cdn-apple.com/api/v1/gm/small",
      "large_https_download_url": "https://mensura.cdn-apple.com/api/v1/gm/large",
      "https_upload_url": "https://mensura.cdn-apple.com/api/v1/gm/slurp",
      "small_download_url": "https://mensura.cdn-apple.com/api/v1/gm/small",
      "large_download_url": "https://mensura.cdn-apple.com/api/v1/gm/large",
      "upload_url": "https://mensura.cdn-apple.com/api/v1/gm/slurp"
   }
}

Apple's test_endpoint changes on each request, selecting a different nearby server to reduce latency and distribute server load.

For example, if you have a custom configuration file located at https://networkquality.example.com/config, you can use the following command to run the Network Quality tool with your custom configuration:

networkQuality -v -C https://networkquality.example.com/config

This command starts the tool and performs the tests using the configuration specified in the custom configuration file.

Creating Your Own Server

If you want to set up your own server for the Network Quality tool, you can find documentation on how to do so on the project's GitHub page at https://github.com/network-quality/server. This allows you to customize the tool to your specific needs and run it on your own infrastructure.

Go Server: https://github.com/network-quality/goserver

Conclusion

The networkQuality tool in macOS is a powerful way to measure the performance of your network connection. The fact that you can run the test via Apple Private Relay without iCloud+ is a neat feature. My next step will be to self-host my own server, which will be a good way to test network speeds locally.

networkQuality Wiki/Community: https://github.com/network-quality/community/wiki

IPPM Responsiveness Draft RFC: https://www.ietf.org/archive/id/draft-cpaasch-ippm-responsiveness-00.html

]]>
<![CDATA[Self-Host Mastodon with Docker Compose]]>https://cyberhost.uk/mastodon-docker-compose/65722b923bc0c20001b6e41aFri, 12 May 2023 19:19:00 GMTRequirements Self-Host Mastodon with Docker Compose

A small instance to support up to 10 users requires around:

  • 2 CPUs
  • 2GB RAM
  • 50GB Storage

Need a Server?

Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us (ref) - Expires after 30 days

Setup

Install docker

Follow the docs for your distro: https://docs.docker.com/engine/install/

Create docker network

sudo docker network create --driver=bridge --subnet=10.10.10.0/24 --gateway=10.10.10.1 mastodonnet
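The compose file below pins each container to a static address in this subnet. As a quick sanity check, assuming the 10.10.10.0/24 network created above, Python's ipaddress module can confirm the pinned addresses fall inside it:

```python
import ipaddress

# The mastodonnet bridge created above spans 10.10.10.0/24 with
# gateway 10.10.10.1.
net = ipaddress.ip_network("10.10.10.0/24")

# Static addresses assigned in the compose file below.
static_ips = {
    "db": "10.10.10.2",
    "redis": "10.10.10.3",
    "mastodon": "10.10.10.4",
}

for name, ip in static_ips.items():
    assert ipaddress.ip_address(ip) in net, f"{name} is outside {net}"

# A /24 has 256 addresses; minus network and broadcast leaves 254 usable.
print(f"{net.num_addresses - 2} usable addresses")  # 254 usable addresses
```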

Docker compose:
version: "2.1"
services:
  mastodon:
    image: lscr.io/linuxserver/mastodon:latest
    container_name: mastodon
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - LOCAL_DOMAIN=<domain>    #All other hostnames are blocked by default
      - REDIS_HOST=10.10.10.3
      - REDIS_PORT=6379
      - DB_HOST=10.10.10.2
      - DB_USER=mastodon
      - DB_NAME=mastodon
      - DB_PASS=<PASSWORD>
      - DB_PORT=5432
      - ES_ENABLED=false
      - SECRET_KEY_BASE=
      - OTP_SECRET=
      - VAPID_PRIVATE_KEY=
      - VAPID_PUBLIC_KEY=
      - SMTP_SERVER=mail.example.com
      - SMTP_PORT=25
      - SMTP_LOGIN=
      - SMTP_PASSWORD=
      - SMTP_FROM_ADDRESS=notifications@example.com
      - S3_ENABLED=false
#      - WEB_DOMAIN=mastodon.example.com #optional
#      - ES_HOST=es #optional
#      - ES_PORT=9200 #optional
#      - ES_USER=elastic #optional
#      - ES_PASS=elastic #optional
#      - S3_BUCKET= #optional
#      - AWS_ACCESS_KEY_ID= #optional
#      - AWS_SECRET_ACCESS_KEY= #optional
#      - S3_ALIAS_HOST= #optional
    volumes:
      - ./config:/config
    networks:
      default:
        ipv4_address: 10.10.10.4
    expose:
      - "80"
      - "443"
    restart: unless-stopped

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      default:
        ipv4_address: 10.10.10.2
    environment:
      POSTGRES_USER: mastodon
      POSTGRES_PASSWORD: <PASSWORD>
      POSTGRES_DB: mastodon
      POSTGRES_HOST_AUTH_METHOD: trust
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      default:
        ipv4_address: 10.10.10.3
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

networks:
  default:
    external:
      name: mastodonnet

SMTP is optional; leave it unset if you don't have an email provider

S3 Media Storage

It is recommended that you enable and configure Mastodon to use an S3 bucket for media. Media storage can easily get to over 100GB.

I'd recommend using Scaleway Object Storage as you get 75GB of Storage and Bandwidth for free.

We'll configure the removal of old media later so you're not storing loads of junk!

Create secrets

Run sudo docker run --rm -it -w /app/www --entrypoint rake lscr.io/linuxserver/mastodon secret

Run the above command twice and store the values under the variables SECRET_KEY_BASE and OTP_SECRET within the docker-compose.yml file.

Create Vapid Keys

Run sudo docker run --rm -it -w /app/www --entrypoint rake lscr.io/linuxserver/mastodon mastodon:webpush:generate_vapid_key

Copy and paste the output into the environmental variables in the docker-compose.yml file.

Example config:

- SECRET_KEY_BASE=20ef02cab8dcdf7a787aa2f1aa29d4414b02c18d97af78ba4860c78bd456edaae4c30a97b846f368e8c58c8625b79d3a9220ea247a4e5e62be1cb1bbbb041705
- OTP_SECRET=20a8013e5fd8d0f2d2d87017e6e6c00fc87fe3d354622b463ff2b40ea42c952459362cb02a7e5fa51c90906546dcd1ecf2de48a2e236944df9070fbee8d190d1
- VAPID_PRIVATE_KEY=M5PjTAf4GbE4zlqAh1lY8NGy1P_JTbCL2i_zly7XOG0=
- VAPID_PUBLIC_KEY=BFRu73k9lDkQUw0pysubc7cpoByCYMm__km6APeLtwrtPPSlWWn064fFjFYAdZ-AiXBRWgWAanB9lm0BP83hBkg=
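For context, SECRET_KEY_BASE and OTP_SECRET in the example above are 128-character hex strings, i.e. 64 random bytes. The rake task is the supported way to generate them; the snippet below only mimics the format to show what's expected (do not substitute it for the rake task):

```python
import secrets

# Generate 64 random bytes and render them as lowercase hex,
# matching the shape of the SECRET_KEY_BASE/OTP_SECRET values above.
value = secrets.token_hex(64)

print(len(value))  # 128
```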
Start it up!

Run: sudo docker compose up -d

Create your Owner/Admin account:
sudo docker exec -it -w /app/www mastodon bin/tootctl accounts create \
  <YOUR-USERNAME> \
  --email <YOUR-EMAIL> \
  --confirmed \
  --role Owner

A password will be displayed after a few seconds

Reverse Proxy Setup

In order to add TLS (HTTPS) to your Mastodon instance we'll be placing a reverse proxy in front.

We recommend using Caddy however Nginx is a popular alternative.

If you'd like to place your instance behind Cloudflare, follow the Cloudflare Tunnel section.

Caddy

Need to setup Caddy? Follow this guide

mastodon.example.com {
      reverse_proxy https://10.10.10.4  {
                transport http {
                        tls_insecure_skip_verify
                }
        }
}
Cloudflare Tunnel

Need to setup Cloudflare tunnel? Follow this guide
Add the following to your config:

  - hostname: mastodon.example.com
    service: https://10.10.10.4:443
    originRequest:
       noTLSVerify: true
Auto removal of old media

Mastodon caches media from other servers, which can potentially use a significant amount of storage.

We'll be using a built-in tool to automatically delete cached media to reduce storage usage.

Open crontab: sudo crontab -e

Add the following:

15 * * * * docker exec -w /app/www mastodon bin/tootctl media remove --days 7 > /dev/null 2>&1

This will run at quarter past every hour, removing media that is over 7 days old.

Replace /dev/null if you'd like to store the output to a file.

You'll probably want to adjust the number of days you cache media for.

Enjoy using Mastodon!

Say hello👋
@cyberhost@infosec.exchange

]]>
<![CDATA[Cloudflare Argo Tunnel Setup - Self-Host with a CG-NAT]]>https://cyberhost.uk/cloudflare-argo-tunnel/65722b923bc0c20001b6e412Sat, 16 Oct 2021 21:11:00 GMTContents: Cloudflare Argo Tunnel Setup - Self-Host with a CG-NAT

What is a Cloudflare Argo Tunnel
How it works
CG-NAT
Install
Setting up Cloudflare Repositories
Temporary Argo Tunnel
Permanent Argo Tunnel
Run in the background and on boot
Adding more services

What is a Cloudflare Argo Tunnel?

Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. With Tunnel, you do not send traffic to an external IP — instead, a lightweight daemon in your infrastructure (cloudflared) creates outbound-only connections to Cloudflare’s edge. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare. This way, your origins can serve traffic through Cloudflare without being vulnerable to attacks that bypass Cloudflare. - Cloudflare

How it works

Cloudflared establishes outbound connections (tunnels) between your resources and the Cloudflare edge. Tunnels are persistent objects that route traffic to DNS records. Within the same tunnel, you can run as many cloudflared processes (connectors) as needed. These processes will establish connections to the Cloudflare edge and send traffic to the nearest Cloudflare data center. - Cloudflare

Cloudflare Argo Tunnel Diagram

CG-NATs

As the IPv4 address space has been exhausted, many ISPs have reduced their address usage by implementing CG-NAT, where multiple customers share the same IPv4 address. This makes port forwarding practically impossible.


Using a Cloudflare Argo Tunnel removes the need to port forward, allowing users to self-host behind a CG-NAT, strict firewall or any ISP limitation.

Cloudflare Setup Docs

Install

Setup Cloudflare Repositories

You can download the cloudflared binary from Cloudflare, but setting up the Cloudflare repositories is a better solution, as cloudflared can then be managed and updated via your package manager.

Follow the Official Setup Docs for your distribution.

Example setup for Debian 12 (Bookworm):

# Add cloudflare gpg key
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null

# Add this repo to your apt repositories
echo 'deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared bookworm main' | sudo tee /etc/apt/sources.list.d/cloudflared.list

# install cloudflared
sudo apt-get update && sudo apt-get install cloudflared

Temporary Argo Tunnel (Cloudflare account not required!)

Run: cloudflared tunnel --url localhost:<PORT>
Example: cloudflared tunnel --url localhost:80

Output:

+--------------------------------------------------------------------------------------------+
2021-10-14T21:01:42Z INF |  Your quick Tunnel has been created! Visit it at (it may take some time to be reachable):  |
2021-10-14T21:01:42Z INF |  https://bloomberg-car-giant-removed.trycloudflare.com                               |
2021-10-14T21:01:42Z INF +--------------------------------------------------------------------------------------------+

Just head to the URL outputted: https://bloomberg-car-giant-removed.trycloudflare.com.

It's that simple!

Permanent Argo Tunnel

  1. Login: cloudflared tunnel login

  2. Copy the URL and open in your browser

  3. Create a new tunnel: cloudflared tunnel create cyberhost

This can be viewed by running cloudflared tunnel list

ID                                   NAME      CREATED              CONNECTIONS 
28c78ae-9ba2-40cc-c187-1892be52da8b cyberhost 2021-10-14T12:10:05Z

  4. Navigate to .cloudflared; you may find this in your home directory: cd ~/.cloudflared

  5. Create a configuration file within the .cloudflared directory:
    nano config.yml

  6. Use the following config:

tunnel: <Tunnel-UUID>
credentials-file: <PATH>/.cloudflared/<Tunnel-UUID>.json

ingress:
  - hostname: demo.example.com
    service: http://localhost:80
  - service: http_status:404

Replace <Tunnel-UUID>, <PATH> and demo.example.com. To find <PATH>, run pwd in the .cloudflared directory.

  7. Connect the Argo tunnel with a hostname
    eg: cloudflared tunnel route dns <UUID or NAME> demo.example.com

  8. Now run the tunnel: cloudflared tunnel run <UUID or NAME>

Run in the background and on boot

  1. Create a system service: sudo cloudflared --config ~/.cloudflared/config.yml service install

  2. Start and enable service at boot: sudo systemctl start cloudflared && sudo systemctl enable cloudflared

Adding more services

  1. Pair another hostname: cloudflared tunnel route dns <UUID or NAME> demo2.example.com

  2. Add another ingress point to the config:

ingress:
  - hostname: demo.example.com
    service: http://localhost:80
  - hostname: demo2.example.com
    service: http://localhost:8080
  - service: http_status:404

  3. Remove the existing service config: sudo rm /etc/cloudflared/config.yml
  4. Install the new config: sudo cloudflared --config ~/.cloudflared/config.yml service install
  5. Restart the service: sudo systemctl restart cloudflared
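For reference, cloudflared evaluates ingress rules top to bottom: the first matching hostname wins, and the final rule with no hostname acts as the catch-all (cloudflared refuses to run without one). A minimal sketch of that matching logic, ignoring path rules and wildcard hostnames which cloudflared also supports:

```python
# Simplified model of cloudflared's ingress rule matching: first
# hostname match wins; a rule without a hostname matches everything.
def route(ingress, hostname):
    for rule in ingress:
        if "hostname" not in rule or rule["hostname"] == hostname:
            return rule["service"]

# The expanded ingress config from the section above:
ingress = [
    {"hostname": "demo.example.com", "service": "http://localhost:80"},
    {"hostname": "demo2.example.com", "service": "http://localhost:8080"},
    {"service": "http_status:404"},  # catch-all, must be last
]

print(route(ingress, "demo.example.com"))     # http://localhost:80
print(route(ingress, "unknown.example.com"))  # http_status:404
```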
]]>
<![CDATA[How to Self-Host Matrix and Element (Docker Compose)]]>Version 2

This is a complete guide on setting up Matrix (Synapse) and Element on a fresh Ubuntu 20.04, Debian 11, CentOS or Fedora server.

Need a Server?
Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us

]]>
https://cyberhost.uk/element-matrix-setup/65722b923bc0c20001b6e3f0Sun, 10 Oct 2021 21:43:00 GMT

Version 2

This is a complete guide on setting up Matrix (Synapse) and Element on a fresh Ubuntu 20.04, Debian 11, CentOS or Fedora server.

Need a Server?
Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us (ref) - Expires after 30 days

Contents
What is Matrix?
Server Setup
Install UFW
Setup Sudo User
Install Docker
Install Matrix and Element
Create New Users
Reverse Proxy
Login

If your server is already setup feel free to skip.

What is Matrix?

Matrix is an open standard and communication protocol for real-time communication. It aims to make real-time communication work seamlessly between different service providers, just like standard Simple Mail Transfer Protocol email does now for store-and-forward email service, by allowing users with accounts at one communications service provider to communicate with users of a different service provider via online chat, voice over IP, and videotelephony. Such protocols have been around before such as XMPP but Matrix is not based on that or another communication protocol. From a technical perspective, it is an application layer communication protocol for federated real-time communication. It provides HTTP APIs and open source reference implementations for securely distributing and persisting messages in JSON format over an open federation of servers. It can integrate with standard web services via WebRTC, facilitating browser-to-browser applications. Wikipedia

Server Setup

  1. Update
    Ubuntu/Debian: sudo apt update && sudo apt upgrade
    CentOS/Fedora: sudo dnf upgrade

  2. Install automatic updates
    Ubuntu/Debian: sudo apt install unattended-upgrades
    CentOS/Fedora: sudo dnf install -y dnf-automatic

  3. Change SSH Port: sudo nano /etc/ssh/sshd_config

Remove the # in front of Port 22 and then change it (30000-50000 is ideal).

This is security through obscurity, which is not ideal, but port 22 just gets abused by bots.

  4. Setup SSH Keys

  5. Restart SSH: sudo systemctl restart sshd

  6. Install fail2ban
    Ubuntu/Debian: sudo apt install fail2ban
    CentOS/Fedora: sudo dnf install fail2ban

Install UFW Firewall

  1. Install
    Ubuntu/Debian: sudo apt install ufw
    CentOS/Fedora: sudo dnf install ufw

  2. Replace SSH-PORT to your SSH port: sudo ufw allow <SSH-PORT>/tcp

  3. Allow HTTP/s traffic:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
  4. Enable Firewall: sudo ufw enable

Setup a sudo user

  1. adduser <USERNAME>
  2. Add user to sudoers sudo adduser <USERNAME> sudo
  3. Login as the new user su - <USERNAME>

Install Docker

Official Docs:
Ubuntu
Debian
CentOS
Fedora

Install Matrix and Element

  1. Create docker network, this is so Matrix and Element can be on their own isolated network:
    sudo docker network create --driver=bridge --subnet=10.10.10.0/24 --gateway=10.10.10.1 matrix_net

  2. Create Matrix directory: sudo mkdir matrix && cd matrix

  3. Use the following template:
    sudo nano docker-compose.yaml

version: '2.3'
services:
  postgres:
    image: postgres:14
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.2
    volumes:
     - ./postgresdata:/var/lib/postgresql/data

    # These will be used in homeserver.yaml later on
    environment:
     - POSTGRES_DB=synapse
     - POSTGRES_USER=synapse
     - POSTGRES_PASSWORD=STRONGPASSWORD
     
  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json
    networks:
      default:
        ipv4_address: 10.10.10.3
        
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.4
    volumes:
     - ./synapse:/data

networks:
  default:
    external:
      name: matrix_net
  4. Create Element Config: sudo nano element-config.json
    Copy and paste the example contents into your file.

  5. Remove "default_server_name": "matrix.org" (top line) from element-config.json as this is deprecated

  6. Add our custom homeserver to the top of element-config.json:

    "default_server_config": {
        "m.homeserver": {
            "base_url": "https://matrix.example.com",
            "server_name": "matrix.example.com"
        },
        "m.identity_server": {
            "base_url": "https://vector.im"
        }
    },
  7. Generate Synapse Config:
sudo docker run -it --rm \
    -v "$HOME/matrix/synapse:/data" \
    -e SYNAPSE_SERVER_NAME=matrix.example.com \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate
  8. Comment out the sqlite database (as we have setup postgres to replace this):
    sudo nano synapse/homeserver.yaml:
#database:
#  name: sqlite3
#  args:
#    database: /data/homeserver.db
  9. Add the Postgres config:
    sudo nano synapse/homeserver.yaml
database:
  name: psycopg2
  args:
    user: synapse
    password: STRONGPASSWORD
    database: synapse
    host: postgres
    cp_min: 5
    cp_max: 10
  10. Deploy: sudo docker compose up -d

Create New Users

  1. Access docker shell: sudo docker exec -it matrix_synapse_1 bash
  2. register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008
  3. Follow the on screen prompts
  4. Enter exit to leave the container's shell

To allow anyone to register an account, set 'enable_registration' to true in the homeserver.yaml. This is NOT recommended.

Install Reverse Proxy (Caddy)

Caddy will be used for the reverse proxy. This will handle incoming HTTPS connections and forward them to the correct docker containers. It's a simple setup process, and Caddy will automatically fetch and renew Let's Encrypt certificates for us!

  1. Follow this setup guide: Caddy Server v2 Reverse Proxy Setup Guide

  2. Head to your user directory: cd
  3. Create Caddy file: sudo nano Caddyfile

Recommended Caddy Template:

  • Limited Matrix paths (based on docs)
  • Security Headers
  • No search engine indexing
matrix.example.com {
  reverse_proxy /_matrix/* 10.10.10.4:8008
  reverse_proxy /_synapse/client/* 10.10.10.4:8008
  
  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

element.example.com {
  encode zstd gzip
  reverse_proxy 10.10.10.3:80

  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

Matrix Reverse Proxy Docs

  4. Enable the config: caddy reload

Login

  1. Head to your element domain and login!

Don't forget to update frequently

Pull the new docker images and then restart the containers:
sudo docker compose pull && sudo docker compose up -d

]]>
<![CDATA[Manjaro Mirror Live!]]>

We now operate an Official Manjaro Linux Mirror!

Hostworld.uk kindly donated us a high spec VPS to provide a fast and reliable Linux mirror located in the UK.

Here are a few stats:

  • Syncs every 15 minutes
  • 1Gbps
  • IPv4 & IPv6
  • HTTPS only - TLS 1.1/2
  • Currently
]]>
https://cyberhost.uk/manjaro-mirror-live/65722b923bc0c20001b6e410Tue, 21 Sep 2021 20:26:39 GMTManjaro Mirror Live!

We now operate an Official Manjaro Linux Mirror!

Hostworld.uk kindly donated us a high spec VPS to provide a fast and reliable Linux mirror located in the UK.

Here are a few stats:

  • Syncs every 15 minutes
  • 1Gbps
  • IPv4 & IPv6
  • HTTPS only - TLS 1.1/2
  • Currently 99.995% uptime
  • 3 DNS providers - High Availability
  • UK Based Server

View more info at https://mirror.cyberhost.uk

If you would like to run a public Manjaro/Linux mirror, we'd be more than happy to give you a hand. Contact Page


]]>
<![CDATA[Domain Security 101]]>

Contents:

DNSSEC
CAA Record
Domain Transfer Lock
2FA
Auto-Renew
Email Security 101

DNSSEC

DNSSEC adds a layer of trust on top of the DNS system by adding cryptographic signatures to existing DNS records. This ensures the record wasn't modified, protecting against DNS injection and man-in-the-middle (MITM) attacks.

]]>
https://cyberhost.uk/domain-security/65722b923bc0c20001b6e40fTue, 07 Sep 2021 15:51:00 GMT

Contents:


DNSSEC
CAA Record
Domain Transfer Lock
2FA
Auto-Renew
Email Security 101

DNSSEC

DNSSEC adds a layer of trust on top of the DNS system by adding cryptographic signatures to existing DNS records. This ensures the record wasn't modified, protecting against DNS injection and man-in-the-middle (MITM) attacks.

Cloudflare have a detailed post on DNSSEC here: https://www.cloudflare.com/dns/dnssec/how-dnssec-works/

CAA Record

CAA is a type of DNS record that allows site owners to specify which Certificate Authorities (CAs) are allowed to issue certificates containing their domain names. - Let's Encrypt Docs

Example:
dig CAA cyberhost.uk

;; ANSWER SECTION:
cyberhost.uk.		600	IN	CAA	0 issue "letsencrypt.org"
cyberhost.uk.		600	IN	CAA	0 issue "sectigo.com"
cyberhost.uk.		600	IN	CAA	0 iodef "mailto:hidden@email.com" 

The above records allow only Let's Encrypt and Sectigo* to issue certificates for cyberhost.uk. If a Certificate Authority that is not listed tries to issue a certificate, it will fail and an email should be sent to hidden@email.com.

*Let's Encrypt is our main CA; however, this will fail over to ZeroSSL (which uses Sectigo), thanks to Caddy :)
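To make the issuance check concrete, here is a simplified sketch of the decision a CA makes against a domain's CAA issue records. The real algorithm in RFC 8659 also climbs the DNS tree to find the closest record set and handles issuewild for wildcard certificates; this sketch covers only the basic issue-tag case:

```python
# Simplified CAA check: may `ca_domain` issue for a domain with these records?
# Records are (tag, value) pairs, e.g. ("issue", "letsencrypt.org").
def ca_may_issue(caa_records, ca_domain):
    issuers = [value for tag, value in caa_records if tag == "issue"]
    if not issuers:
        # No issue records present: any CA may issue.
        return True
    return ca_domain in issuers

# The cyberhost.uk records shown above:
records = [
    ("issue", "letsencrypt.org"),
    ("issue", "sectigo.com"),
    ("iodef", "mailto:hidden@email.com"),
]

print(ca_may_issue(records, "letsencrypt.org"))  # True
print(ca_may_issue(records, "digicert.com"))     # False
```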

Domain Transfer Lock

It's good practice to enable a Domain Transfer Lock. This prevents domain hijacking, where the domain is transferred to another registrar under someone else's control.

This is typically a simple toggle in your domain registrar's control panel.

Use 2FA

To prevent your domain from being hijacked, ensure that all accounts used to manage the domain have some form of 2FA.

Accounts:

  • Domain Registrar
  • DNS Management

Auto-Renew

It's simple: don't let your domain expire and be purchased by someone else!

Email Domain Security 101

Alex Blackie has a simple guide on email domain security:
https://www.alexblackie.com/articles/email-authenticity-dkim-spf-dmarc/


]]>
<![CDATA[Caddy setup with Cloudflare]]>https://cyberhost.uk/caddy-cloudflare/65722b923bc0c20001b6e408Tue, 31 Aug 2021 18:46:03 GMT

What we want to achieve


In this post, we'll setup a Caddy reverse proxy situated behind the protections of Cloudflare. We'll setup authenticated TLS pulls, so only connections from Cloudflare servers are allowed and use a Cloudflare certificate to encrypt the data from Caddy to Cloudflare.

  1. Head to SSL/TLS > Origin Server
  2. Click Create Certificate
  3. Select RSA (2048); you'll probably want a validity period of over 3 years for convenience.
  4. Create a cert folder:
cd
mkdir cert
cd cert
  5. Using the PEM key format, copy and paste the Origin Certificate into example.com.pem and the Private Key into example.com.key.
  6. Add the following line to your Caddyfile:
    tls <Path to PEM> <Path to KEY>

Example:

cyberhost.uk {
  tls /file_path/cyberhost.uk.pem /file_path/cyberhost.uk.key
  reverse_proxy 127.0.0.1
}

Reload config: caddy reload
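Before reloading, it's worth checking that the certificate and key files actually belong together. One way (a sketch, assuming openssl is installed, using the example file paths from above) is to compare the public-key moduli:

```shell
# A certificate and its RSA private key share the same modulus;
# identical digests mean the pair belongs together.
cert=/file_path/cyberhost.uk.pem
key=/file_path/cyberhost.uk.key
m1=$(openssl x509 -noout -modulus -in "$cert" | openssl md5)
m2=$(openssl rsa  -noout -modulus -in "$key"  | openssl md5)
[ "$m1" = "$m2" ] && echo "certificate and key match"
```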

Setup Cloudflare TLS Authenticated Pull

The issue

Cloudflare is used for their industry-leading DDoS protection and security benefits, so we don't want anyone being able to bypass this protection!

Cloudflare can be bypassed by sending a request with the correct Host header directly to the origin IP.

Example:
curl --resolve '<DOMAIN>:<PORT>:<Origin-IP>' https://<DOMAIN> -k
curl --resolve 'example.com:443:93.184.216.34' https://example.com -k

The fix

One solution is to use an allowlist/whitelist of Cloudflare's IP ranges; however, these can change over time. The best solution is to cryptographically authenticate all clients trying to load your website.
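For completeness, the allowlist approach would look something like this in a Caddyfile (a sketch: only two of Cloudflare's published ranges are shown, and the full, current list is published at cloudflare.com/ips — which is exactly why keeping it up to date is a chore):

```text
example.com {
  # Reject any request whose client IP is outside Cloudflare's ranges
  @not_cloudflare not remote_ip 173.245.48.0/20 103.21.244.0/22
  abort @not_cloudflare
  reverse_proxy 127.0.0.1
}
```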

  1. Enable 'Authenticated Origin Pulls' in Cloudflare:
    SSL/TLS -> Origin Server
  2. Get a copy of the Cloudflare Certificate here.

Also available here:

-----BEGIN CERTIFICATE-----
MIIGCjCCA/KgAwIBAgIIV5G6lVbCLmEwDQYJKoZIhvcNAQENBQAwgZAxCzAJBgNV
BAYTAlVTMRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMRQwEgYDVQQLEwtPcmln
aW4gUHVsbDEWMBQGA1UEBxMNU2FuIEZyYW5jaXNjbzETMBEGA1UECBMKQ2FsaWZv
cm5pYTEjMCEGA1UEAxMab3JpZ2luLXB1bGwuY2xvdWRmbGFyZS5uZXQwHhcNMTkx
MDEwMTg0NTAwWhcNMjkxMTAxMTcwMDAwWjCBkDELMAkGA1UEBhMCVVMxGTAXBgNV
BAoTEENsb3VkRmxhcmUsIEluYy4xFDASBgNVBAsTC09yaWdpbiBQdWxsMRYwFAYD
VQQHEw1TYW4gRnJhbmNpc2NvMRMwEQYDVQQIEwpDYWxpZm9ybmlhMSMwIQYDVQQD
ExpvcmlnaW4tcHVsbC5jbG91ZGZsYXJlLm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD
ggIPADCCAgoCggIBAN2y2zojYfl0bKfhp0AJBFeV+jQqbCw3sHmvEPwLmqDLqynI
42tZXR5y914ZB9ZrwbL/K5O46exd/LujJnV2b3dzcx5rtiQzso0xzljqbnbQT20e
ihx/WrF4OkZKydZzsdaJsWAPuplDH5P7J82q3re88jQdgE5hqjqFZ3clCG7lxoBw
hLaazm3NJJlUfzdk97ouRvnFGAuXd5cQVx8jYOOeU60sWqmMe4QHdOvpqB91bJoY
QSKVFjUgHeTpN8tNpKJfb9LIn3pun3bC9NKNHtRKMNX3Kl/sAPq7q/AlndvA2Kw3
Dkum2mHQUGdzVHqcOgea9BGjLK2h7SuX93zTWL02u799dr6Xkrad/WShHchfjjRn
aL35niJUDr02YJtPgxWObsrfOU63B8juLUphW/4BOjjJyAG5l9j1//aUGEi/sEe5
lqVv0P78QrxoxR+MMXiJwQab5FB8TG/ac6mRHgF9CmkX90uaRh+OC07XjTdfSKGR
PpM9hB2ZhLol/nf8qmoLdoD5HvODZuKu2+muKeVHXgw2/A6wM7OwrinxZiyBk5Hh
CvaADH7PZpU6z/zv5NU5HSvXiKtCzFuDu4/Zfi34RfHXeCUfHAb4KfNRXJwMsxUa
+4ZpSAX2G6RnGU5meuXpU5/V+DQJp/e69XyyY6RXDoMywaEFlIlXBqjRRA2pAgMB
AAGjZjBkMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH/AgECMB0GA1Ud
DgQWBBRDWUsraYuA4REzalfNVzjann3F6zAfBgNVHSMEGDAWgBRDWUsraYuA4REz
alfNVzjann3F6zANBgkqhkiG9w0BAQ0FAAOCAgEAkQ+T9nqcSlAuW/90DeYmQOW1
QhqOor5psBEGvxbNGV2hdLJY8h6QUq48BCevcMChg/L1CkznBNI40i3/6heDn3IS
zVEwXKf34pPFCACWVMZxbQjkNRTiH8iRur9EsaNQ5oXCPJkhwg2+IFyoPAAYURoX
VcI9SCDUa45clmYHJ/XYwV1icGVI8/9b2JUqklnOTa5tugwIUi5sTfipNcJXHhgz
6BKYDl0/UP0lLKbsUETXeTGDiDpxZYIgbcFrRDDkHC6BSvdWVEiH5b9mH2BON60z
0O0j8EEKTwi9jnafVtZQXP/D8yoVowdFDjXcKkOPF/1gIh9qrFR6GdoPVgB3SkLc
5ulBqZaCHm563jsvWb/kXJnlFxW+1bsO9BDD6DweBcGdNurgmH625wBXksSdD7y/
fakk8DagjbjKShYlPEFOAqEcliwjF45eabL0t27MJV61O/jHzHL3dknXeE4BDa2j
bA+JbyJeUMtU7KMsxvx82RmhqBEJJDBCJ3scVptvhDMRrtqDBW5JShxoAOcpFQGm
iYWicn46nPDjgTU0bX1ZPpTpryXbvciVL5RkVBuyX2ntcOLDPlZWgxZCBp96x07F
AnOzKgZk4RzZPNAxCXERVxajn/FLcOhglVAKo5H0ac+AitlQ0ip55D2/mf8o72tM
fVQ6VpyjEXdiIXWUq/o=
-----END CERTIFICATE-----

Paste the certificate into a file, e.g. /file_path/origin-pull-ca.pem

  3. Now extend the tls directive in your Caddyfile to add client auth:
cyberhost.uk {
  tls /file_path/cyberhost.uk.pem /file_path/cyberhost.uk.key {
    client_auth {
      mode require_and_verify
      trusted_ca_cert_file /file_path/origin-pull-ca.pem
    }
  }
  reverse_proxy 127.0.0.1
}
  4. Now when we try to bypass Cloudflare, we are denied access!

curl --resolve 'example.com:443:93.184.216.34' https://example.com -k

Error:
curl: (56) OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0

]]>
<![CDATA[Best Free DNS Hosting Providers]]>
]]>
https://cyberhost.uk/free-dns-providers/65722b923bc0c20001b6e407Wed, 28 Jul 2021 17:00:00 GMT

Last Updated: 13th September 2021

Jump to DNS provider list

Anycast DNS

An anycast IP address is one IP announced from more than one server. Users are routed to the server closest to them, reducing response times. Anycast IPs are utilised within DNS networks so that they can provide low response times globally. If your audience is worldwide, you will want to consider an anycast DNS provider to improve loading times.

DNSSEC

DNSSEC adds an additional layer of trust on top of the DNS system by adding cryptographic signatures to existing DNS records. This ensures the record wasn't modified, protecting against DNS injection and man-in-the-middle (MITM) attacks.

Cloudflare have a detailed post on DNSSEC here: https://www.cloudflare.com/dns/dnssec/how-dnssec-works/

Secondary DNS

When running a website where availability is important, a secondary DNS provider can be a good idea. This provides two benefits: if one provider goes down, the other is still there to answer your DNS queries, and it can also speed up response times, as the secondary provider may be the quickest for some users.

The two DNS servers typically run in tandem: one is the main (master) and the other is the secondary (slave). The webmaster configures the records on the main server, and these are then synchronised to the secondary server via an AXFR (Authoritative Zone Transfer).
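On the secondary side, this is a small piece of name-server configuration. A sketch for BIND 9 (named.conf syntax; the primary's address 192.0.2.1 and the zone name are placeholders, and the "secondary"/"primaries" keywords are the modern spellings of "slave"/"masters"):

```text
zone "example.com" {
    type secondary;            // pull this zone from the main server
    primaries { 192.0.2.1; };  // the main (master) server's IP
    file "db.example.com";     // local copy, refreshed via AXFR
};
```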

Query Limits

Some DNS providers offer a free plan, but these often come with a cap on the number of DNS queries, typically between 300k and 3M. I decided not to include these, as they are freemium plans and you don't want to be stressing about the number of queries.

Provider Comparison

Provider Uses Cloudflare Network DNSSEC Secondary Support
1984.is 🇮🇸/🇳🇱/🇬🇧
Cloudflare 🌎
DigitalOcean 🌎
HE.net 🌎
Hetzner 🇩🇪
Linode 🌎
Vultr 🌎

🌎 = Global Anycast Network

Check response times with dnsperf.com

DNS Marketshare

Provider Percentage
GoDaddy DNS 50%
Cloudflare DNS 16%
Amazon Route 53 4%
Google Cloud DNS 3%
Google Domains 3%
Data from Datanyze on 20th July 2021

]]>