<![CDATA[CyberHost]]>
https://cyberhost.uk/
https://cyberhost.uk/favicon.png
Ghost 5.79
Sun, 24 Mar 2024 13:23:50 GMT
<![CDATA[S3 Compatible Object Storage Comparison]]>I spent too long hunting around for an alternative to AWS S3 storage to find the cheapest price for my needs.

Provider Per GB Storage Per GB Transfer Per Mil GET/SELECT requests Free Storage Free Transfer
AWS $0.024 $0.07 $0.35 5GB 100GB
Google Cloud $0.023
]]>
https://cyberhost.uk/s3-object-storage-comparison/65722b923bc0c20001b6e41bThu, 21 Mar 2024 17:00:00 GMT

I spent too long hunting around for an alternative to AWS S3 storage to find the cheapest price for my needs.

Provider | Per GB Storage | Per GB Transfer | Per Mil GET/SELECT Requests | Free Storage | Free Transfer
AWS | $0.024 | $0.07 | $0.35 | 5GB | 100GB
Google Cloud | $0.023 | $0.12 | $1.00 | 5GB | 100GB
Microsoft Azure | $0.019 | $0.087 | $0.60 | 5GB | 100GB
Scaleway | €0.012 | €0.01 | Free | - | 75GB
Cloudflare | $0.015 | Free | £0.30 | 10GB | Unlimited
Backblaze | $0.006 | $0.01 | $0.40 | 10GB | 3x stored
Wasabi* | $0.0068 | N/A | Free | - | 1x stored
Ionos | $0.018 | $0.036 | Free | - | -
Vultr* | $0.006 | $0.01 | Free | - | 3000GB
DigitalOcean* | $0.02 | $0.01 | Free | - | 1000GB
Linode* | $0.02 | $0.005 | Free | - | 1000GB
Contabo* | $2.99/250GB | Free | Free | - | Unlimited

* Requires minimum monthly payment
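To see how the per-GB prices interact for your own workload, it can help to script a quick estimate. A rough sketch below, using a handful of prices from the table; free allowances and minimum spends are deliberately ignored, and prices may have changed since writing:

```shell
# Rough monthly cost estimate: storage + egress only, ignoring free
# allowances, request charges and minimum spends.
estimate() {  # usage: estimate <stored_gb> <egress_gb> <$/gb_storage> <$/gb_egress>
    awk -v s="$1" -v e="$2" -v ps="$3" -v pe="$4" 'BEGIN { printf "%.2f", s*ps + e*pe }'
}

# Example workload: 500 GB stored, 200 GB egress (prices from the table above)
echo "AWS:        \$$(estimate 500 200 0.024 0.07)"
echo "Backblaze:  \$$(estimate 500 200 0.006 0.01)"
echo "Cloudflare: \$$(estimate 500 200 0.015 0)"
```

Note that for this particular workload Backblaze's free egress allowance (up to 3x stored) would actually zero out its transfer charge.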

What's the best?

This is just my personal opinion based on experience and feedback from others:

Fast: Scaleway
High Transfer: Cloudflare - Followed by Vultr and Contabo
High Storage: Backblaze or Vultr - However Glacier storage could be a better option

Have other thoughts? Drop a comment below

Google Cloud

Free Tier:

  • Limited to buckets in us-east1, us-west1, and us-central1 regions only
  • 5 GB of regional storage (US regions only) per month
  • 5,000 Class A Operations per month
  • 50,000 Class B Operations per month
  • 100 GB of outbound data transfer from North America to all region destinations (excluding China and Australia) per month

Cloudflare

R2 currently does not support all S3 API Endpoints

To use a custom domain you must use Cloudflare NameServers

Backblaze

I've heard some people complaining about slow access speeds and the fact it does not fully support all S3 API endpoints.

Wasabi

I've personally not used Wasabi, but have seen a few people complaining about them. This was down to their slow speeds and lack of transparency around 'Zero Egress Fees'.

Egress is free, but you break their fair usage policy if you transfer out more than you have stored. So if you pay for 1TB of storage you can only transfer out 1TB.

Wasabi have a minimum spend of $6 per month, getting you 1TB of storage.

Ionos

Although the egress fees at $0.036 are on the high end, internal transfer to a VM within the same region is not charged. So you could utilise a £1/$1 Ionos VPS as a proxy to skip these egress fees.

We made a post on reviewing the £1 Ionos VPS here: https://cyberhost.uk/ionos-1-vps-review/

Vultr

Requires initial $5 bundle: 250GB Storage + 1TB Transfer

All Vultr accounts start with 2TB of bandwidth without any services. Adding the 1TB that comes with the object storage plan gives you 3TB of transfer.

DigitalOcean

Requires initial $5 bundle: 250GB Storage + 1TB Transfer

Linode

Requires initial $5 bundle: 250GB Storage + 1TB Transfer

Contabo

Requires storage to be purchased in 250GB blocks, starting at $2.99 / £3.35 per block

250 GB - $2.99

500 GB - $5.98

1 TB - $11.96

]]>
<![CDATA[Malware Domain DNS Blocklist]]>This post marks the release of the CyberHost Malware and Phishing blocklist.

This is being put together by collecting domains published in public threat intelligence reporting, Infosec Twitter/Mastodon groups and your contributions below.

Please use the following URL: https://lists.cyberhost.uk/malware.txt

The aim here is not

]]>
https://cyberhost.uk/malware-blocklist/65e70bd14a5069000107ddd3Wed, 06 Mar 2024 12:45:00 GMT

This post marks the release of the CyberHost Malware and Phishing blocklist.

This is being put together by collecting domains published in public threat intelligence reporting, Infosec Twitter/Mastodon groups and your contributions below.

Please use the following URL: https://lists.cyberhost.uk/malware.txt

The aim here is not to create a massive list but to put together a concise collection of malicious domains that have been verified to be utilised in malicious activity.

For transparency, each domain listed is accompanied by the original source of the intelligence and the date it was added. Therefore, please only utilise the lines that don't start with a #.

We will try our best to verify the domains added to prevent any false positives, however no guarantee can be made. Cybercriminals are known to compromise legitimate sites, so false positives are inevitable. If you do come across one, please use the form below.

The blocklist has been tested with Adguard Home, Pi-hole and pfBlockerNG (pfSense).
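If a consumer doesn't tolerate comment lines, the # metadata can be stripped first. A sketch of the idea below, run against a made-up two-entry sample in the list's format (the example domains and sources are invented for illustration):

```shell
# Build a small stand-in sample in the blocklist's format: '#' lines carry
# the source/date metadata, other lines are the domains themselves.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# Source: example threat report | Added: 2024-03-01
evil-domain-example.com
# Source: community submission | Added: 2024-03-02
phish-example.net
EOF

# Keep only the non-comment lines -- these are the domains to block.
domains=$(grep -v '^#' "$sample")
echo "$domains"
```

Against the real list you'd fetch it first (e.g. with curl) and pipe it through the same grep.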

License: CC BY-SA 4.0

Blocklist Details (Live)

Blocklist Contact Form




]]>
<![CDATA[Bulk Restore from S3 Glacier]]>I'll be using S3 compatible storage with Scaleway however this is the same process for Amazon AWS and other object storage providers.

Ideally this would be nice if it was baked into the Web UI however with Scaleway this is limited to one object at a time and

]]>
https://cyberhost.uk/bulk-restore-from-s3-glacier/65d9da3a9e99a900011ef65bTue, 27 Feb 2024 18:30:00 GMT

I'll be using S3 compatible storage with Scaleway however this is the same process for Amazon AWS and other object storage providers.

Ideally this would be baked into the Web UI, however with Scaleway restores are limited to one object at a time, which isn't viable if you have thousands of objects like me.

We'll be using the s3cmd tool to quickly restore all our objects. It's open source and the code can be found on GitHub: https://github.com/s3tools/s3cmd.

Please note you will likely incur restore costs.

Install s3cmd
sudo apt install s3cmd

Create config file:

cd $HOME
touch .s3cfg

Edit the file and fill in the template:

[default]
host_base = s3.nl-ams.scw.cloud
host_bucket = BUCKET-HERE.s3.nl-ams.scw.cloud
bucket_location = nl-ams
use_https = True
# Login credentials
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>

Calculate the number of files in Glacier that are to be restored:
s3cmd ls -l --recursive s3://BUCKET-HERE | grep GLACIER | wc -l

Restore for 3 days
s3cmd restore -v --recursive s3://<bucket-name> --restore-days=3

Permanent Restore
Due to the way s3cmd and the S3 API work, there is no way to indefinitely restore the objects. The easy way would be to set the restore days to the maximum of 10,000 days (roughly 27 years).

However to do it properly, after temporarily restoring the files we can move them which will remove the restore duration tied to them.

s3cmd mv s3://your-bucket-name/path/to/object s3://your-bucket-name/path/to/destination --recursive
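After the move it's worth re-running the earlier count to confirm nothing is left in Glacier. The parsing is sketched below against a mocked-up listing (the column layout is approximated; in real use, pipe the actual `s3cmd ls -l --recursive` output instead):

```shell
# Mocked-up 's3cmd ls -l --recursive' output, for illustration only.
listing=$(mktemp)
cat > "$listing" <<'EOF'
2024-02-01 10:00   1048576   STANDARD  s3://BUCKET-HERE/photos/a.jpg
2024-02-01 10:01   2097152   GLACIER   s3://BUCKET-HERE/photos/b.jpg
2024-02-01 10:02   4194304   STANDARD  s3://BUCKET-HERE/videos/c.mp4
EOF

# Count objects still in the GLACIER storage class; 0 means the
# restore/move is complete.
remaining=$(grep -c GLACIER "$listing")
echo "$remaining object(s) still in GLACIER"
```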

]]>
<![CDATA[Adguard Home - Docker Compose Setup]]>Spice up your day by setting up a local DNS Server for network-wide ad, tracking and malware blocking.

Pi-hole used to be the old favourite for this type of setup however they've fallen behind in recent years and most of the fans have moved onto AdGuard Home.

AdGuard

]]>
https://cyberhost.uk/adguard-setup/65722b923bc0c20001b6e3feSat, 24 Feb 2024 11:00:00 GMT

Spice up your day by setting up a local DNS Server for network-wide ad, tracking and malware blocking.

Pi-hole used to be the old favourite for this type of setup however they've fallen behind in recent years and most of the fans have moved onto AdGuard Home.

AdGuard Home is another open-source DNS server with blocking capabilities. If you aren't already aware, you set up your network to use the local AdGuard/Pi-hole server for DNS. When you try to access example.com, your computer sends a lookup for its IP address; this query goes to your local server, which forwards it to an upstream such as Cloudflare (1.1.1.1), and the response is then returned to your device. The benefit of this setup is the blocking and caching it provides.

You can load in blocklists containing advertising or malware domains, so when your browser tries to access annoyingpopupads.com the request will simply get blocked (technically the IP of 0.0.0.0 is returned).

The caching a local DNS server provides can also be handy; if you request example.com and you already requested it 30 seconds ago it will serve that same IP without going off to fetch it. This can save a significant amount of time when browsing the web, usually around 3-200ms per lookup.

Pi-hole vs AdGuard Home

Feature | Pi-hole | AdGuard Home
DNS Blocking | |
DHCP Server | |
Docker Installation | |
Local DNS Entries (rewrites) | |
DoH/DoT Upstream | |
Answer queries via DoH/DoT | |
Upload HTTPS Certificate | |
Block Services (eg Discord/TikTok) | |
Blocklist Update frequency | Once Per Week | 1 Hour-1 Week

We have a post on setting up Pi-hole here.

Install

  1. Ensure you have Docker installed.
  2. Head to your home (or docker) directory: cd
  3. Create and enter an AdGuard directory: sudo mkdir adguard && cd adguard
  4. Use the following template: sudo nano docker-compose.yaml:
version: "3"
services:
  adguardhome:
    image: adguard/adguardhome
    container_name: adguardhome
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 784:784/udp
      - 853:853/tcp
      - 3000:3000/tcp
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - ./workdir:/opt/adguardhome/work
      - ./confdir:/opt/adguardhome/conf
    restart: unless-stopped

You may want to add - 67:67/udp, - 68:68/tcp and - 68:68/udp to the ports list to use AdGuard as a DHCP server. You'll want this if you can't set DNS settings on your router. Ensure you turn off the router's DHCP service first (only one DHCP server can run on a network).

  5. Spin it up: sudo docker-compose up -d
  6. Set up AdGuard Home via the WebUI at http://[IP-Here]:3000

Now just head to your router settings, configure the DHCP settings and set the DNS server to your AdGuard IP address. If you are unable to configure this, you can turn off router DHCP and let AdGuard handle DHCP (see the section above).

DNS Upstreams

These are configured under Settings > DNS Settings

You'll most likely want to be using DoH (DNS over HTTPS) or DoT (DNS over TLS) for your upstream. These will encrypt outgoing requests by using TLS. This stops your ISP from monitoring the lookups and protects you from DNS Cache Poisoning and DNS Hijacking.

Some DoH Options:

Provider | Endpoint | Notes
Quad9 | https://dns.quad9.net/dns-query | Malware domains blocked
Cloudflare | https://cloudflare-dns.com/dns-query | DoH for 1.1.1.1
Cloudflare | https://security.cloudflare-dns.com/dns-query | 1.1.1.1 with malware blocking
NextDNS | Account required (nextdns.io) | 300k free monthly queries

Want more options? Visit https://dnscrypt.info/public-servers

]]>
<![CDATA[Offsite Proxmox Backup Server with S3 Storage]]>In this post, we'll be deploying Proxmox Backup Server (PBS) on a Debian 12 VPS and storing the backups on Scaleway S3 compatible storage.

Warning
If deploying on a cloud VPS, it's probably best to not directly expose this to the internet. Here are some safer

]]>
https://cyberhost.uk/deploying-proxmox/65d0a1c3df4ffc0001483c84Tue, 20 Feb 2024 13:00:00 GMT

In this post, we'll be deploying Proxmox Backup Server (PBS) on a Debian 12 VPS and storing the backups on Scaleway S3 compatible storage.

Warning
If deploying on a cloud VPS, it's probably best to not directly expose this to the internet. Here are some safer ways to connect:

Things to note:

  • Initialisation of the Datastore took me 19 Hours
  • Backing up with Encryption is recommended
  • 8 fairly minimal Debian VMs only used 14GB of storage with ZSTD compression.

Installing Proxmox Backup Server on Debian 12

Follow the official docs or use our simplified version below.

Add Proxmox Repositories

sudo wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

Validate SHA512 Hash
sha512sum /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

Expected output:
7da6fe34168adc6e479327ba517796d4702fa2f8b4f0a9833f5ea6e6b48f6507a6da403a274fe201595edc86a84463d50383d07f64bdde2e3658108db7d6dc87

If your output is not the above, validate with the official docs.
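The comparison can also be scripted so a tampered download is caught automatically. A generic sketch of the pattern using sha512sum -c is below, demonstrated on a throwaway file; substitute the real keyring file and the published hash:

```shell
# Pattern: verify a downloaded file against a known-good hash.
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'stand-in for the real keyring file\n' > keyring.gpg

# Normally this file holds the published "<hash>  <filename>" pair;
# here it is computed on the spot purely to demonstrate the mechanism.
sha512sum keyring.gpg > keyring.gpg.sha512

if sha512sum -c keyring.gpg.sha512 >/dev/null 2>&1; then
    result="hash OK"
else
    result="hash MISMATCH - do not use this file"
fi
echo "$result"
```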

Add to apt sources
sudo nano /etc/apt/sources.list
Paste the following:

deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription

Install

sudo apt update
sudo apt install proxmox-backup-server

Set root password:

sudo proxmox-backup-manager user update root@pam --password SecurePassword

Reboot
sudo reboot

Login
https://IP:8007

Configuring S3 Backend

PBS does not support S3 storage for a Datastore, however we can make it work by using s3fs.

Install s3fs
sudo apt install s3fs

Add Credentials
sudo nano /etc/passwd-s3fs

Add the following line to the file, using your credentials:
ACCESS_KEY_ID:SECRET_ACCESS_KEY

Set Permissions
sudo chmod 600 /etc/passwd-s3fs

Create mount point location
sudo mkdir /mnt/pbs-s3

Connect to S3
With debug on:
sudo s3fs YOUR-BUCKET /mnt/pbs-s3 -o allow_other -o passwd_file=/etc/passwd-s3fs -o url=https://s3.nl-ams.scw.cloud -o use_path_request_style -o endpoint=nl-ams -o parallel_count=15 -o multipart_size=128 -o nocopyapi -o dbglevel=info -f -o curldbg

Make sure it all works OK: you should see 200 responses in the debug output, and you should be able to create a test file (e.g. touch /mnt/pbs-s3/test.txt) without issues.

Connect on boot
Ideally you'd use fstab for this, however I kept running into issues setting it up, so I resorted to a cron @reboot script to mount it on boot.

Create Boot up script
nano s3-mount.sh

Paste in your connect command from above, removing the debug options (-f, -o dbglevel=info, -o curldbg) so it doesn't stay in the foreground

Add execute permissions
sudo chmod u+x s3-mount.sh

Sudo crontab
sudo crontab -e

Enter the following:
@reboot /bin/sh /home/YOUR-USER/s3-mount.sh
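A slightly more defensive version of the boot script checks whether the path is already mounted before trying again, which makes the cron job safe to re-run by hand. A sketch below, with the actual s3fs invocation replaced by an echo so it reads as a dry run (swap in the real command from above, minus the debug flags):

```shell
#!/bin/sh
# Idempotent mount guard for the @reboot cron job. The bucket name and
# mount point are placeholders -- use your own values.
MOUNT=/tmp/pbs-s3-demo
mkdir -p "$MOUNT"

if grep -qs " $MOUNT " /proc/mounts; then
    status="already mounted"
else
    # Dry run: replace this echo with the real s3fs command, e.g.
    #   s3fs YOUR-BUCKET "$MOUNT" -o allow_other -o passwd_file=/etc/passwd-s3fs ...
    echo "mounting YOUR-BUCKET on $MOUNT"
    status="mount attempted"
fi
echo "$status"
```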

Proxmox Configuration

In Proxmox Backup Server, click "Add Datastore" (bottom of the left panel). Fill in the details:

Offsite Proxmox Backup Server with S3 Storage

After clicking Add it can take some time to create the Datastore, and I mean a crazy long time: it took just over 19 hours for me, after writing 65,539 empty directories to disk.

Offsite Proxmox Backup Server with S3 Storage
Scaleway S3 Bucket Stats
Offsite Proxmox Backup Server with S3 Storage

PBS Setup in Proxmox Hypervisor

On your Proxmox Hypervisor head to Datacenter > Storage > Add > Proxmox Backup Server

Offsite Proxmox Backup Server with S3 Storage

Fill in the details:
As you are storing your VM's on the cloud you'll probably want to enable Encryption for your backups to keep them secure. Just make sure you keep the key safe!

Offsite Proxmox Backup Server with S3 Storage

Back in Proxmox Backup Server click "Show Fingerprint"
Paste this in the Fingerprint field on Proxmox Hypervisor

Offsite Proxmox Backup Server with S3 Storage

Then create your backups under the Backup tab within the Proxmox Hypervisor. These are going to take a fair bit longer than local storage due to the upload and overhead of S3.

That's it! It's worth periodically checking in to ensure everything is being backed up as expected.

Follow the Blog on Mastodon: @cyberhost@infosec.exchange

]]>
<![CDATA[Octopus Energy Home Mini Review - A Network Teardown]]>After waiting a few months, my Octopus Home Mini has finally arrived.

As this is quite an interesting product and one of the first products where your energy supplier uses your Wi-Fi, I thought I'd do a full teardown and see what it gets up to. Hint: I&

]]>
https://cyberhost.uk/octopus-home-mini-teardown-review/65cceeabdf4ffc0001483b7dWed, 14 Feb 2024 18:46:49 GMT

After waiting a few months, my Octopus Home Mini has finally arrived.

As this is quite an interesting product, and one of the first where your energy supplier uses your Wi-Fi, I thought I'd do a full teardown and see what it gets up to. Hint: I'm quite impressed.

Not on Octopus Energy yet? Get £50 credit and help the blog: https://share.octopus.energy/ebon-snail-338

What's a Home Mini?

"The Octopus Home Mini is a small, palm-sized device that beams live readings from your smart meter to our cloud-based platform Kraken, so we can show you up-to-the-minute smart insights via your Octopus Energy app." - https://octopus.energy/blog/octopus-home-mini/

What's in the box

  • Instructions
  • Home Mini
  • Micro USB Cable
  • UK 5W USB Adapter
Octopus Energy Home Mini Review - A Network Teardown

Setup Process

This is pretty straightforward: just scan the QR code in the instructions, which launches the Octopus app.

  1. Enable Location Services and Bluetooth permissions for the app
  2. Connect to the Home Mini
  3. Enter your Wi-Fi Password
  4. All sorted!

I would suggest placing this on a separate IoT network just out of principle. The SSID sent to the Home Mini is the one your phone is connected to, so this will mean temporarily connecting your phone to the IoT network.

After setup you can remove Location and Bluetooth permissions from the App without causing any issues.

Network Analysis

I took a PCAP of its network traffic on my router and looked at its DNS requests to understand what it is up to.

What does the home mini talk to?

Only two domains:

  • pool.ntp.org
  • aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com

pool.ntp.org
NTP Pool is an open pool of NTP servers contributed by volunteers. It's nice to see this project being used however technically they should be using a vendor zone instead of the main domain due to it being baked into the firmware.

aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
AWS IOT Serverless Platform. Located in the Ireland AWS Datacenter.
Can't complain at this, looks like they understand how to properly design cloud systems!

It's nice to see that the Home Mini doesn't ping off to any other random servers.

Octopus Energy Home Mini Review - A Network Teardown
DNS lookups performed by the Home Mini (AdGuard)

Encryption?

The Home Mini talks to the Amazon endpoint using TLS 1.2. Although this is not the latest version (1.3), it was using the cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, which is still considered secure.

Other than the NTP lookup being in plaintext (standard for NTP), all traffic was encrypted.

How often does it send data?

Every 10 seconds a packet of around 250 bytes is sent.

After having it plugged in for just over 2 Hours it had sent 315KB and received 242KB, so no need to worry about it slowing down your network.
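Those figures are easy to sanity-check with some quick arithmetic (integer maths below, so results are rounded down; real usage is slightly higher due to TCP/TLS overhead):

```shell
# One ~250-byte payload every 10 seconds:
packets_per_day=$(( 6 * 60 * 24 ))                      # 8640 packets/day
per_day_kb=$(( 250 * packets_per_day / 1024 ))          # payload per day
per_month_mb=$(( 250 * packets_per_day * 30 / 1024 / 1024 ))
echo "~${per_day_kb} KB/day, ~${per_month_mb} MB/month of payload"
```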

Octopus Energy Home Mini Review - A Network Teardown
Wireshark Screenshot

Is it going to hack my network???

No, all it does is the following, in time order:

  1. DHCP (To get a local IP address)
  2. DNS Lookup of pool.ntp.org
  3. Connect to 1 NTP Pool server to get the time
  4. DNS Lookup of aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
  5. Connect to aw1e0kzydzq4m-ats.iot.eu-west-1.amazonaws.com
  6. Send a 250 byte packet every 10 seconds

The app

Enough about packets, what does the app look like?

It's not packed with features, just your live usage and 5 or 30 min usage graphs.

Octopus Energy Home Mini Review - A Network Teardown

I originally thought that data after 30 mins was lost, however this data is moved to the "Day" tab along with gas usage, which is pretty neat.

The good and bad

The good:

  • No telemetry or bad network activity
  • All traffic under TLS 1.2
  • Minimal network traffic
  • Made from recycled ocean plastic
  • Completely free!

The bad:

  • Only 2.4GHz Wi-Fi
  • USB-C would have been nice
  • Basic app experience

Summary

This is quite impressive; you can tell it's been properly developed, from the cloud setup to the minimal and correctly configured network traffic. It would have been nice if it had 5GHz Wi-Fi and USB-C for power, but for a free gadget you can understand the cost cutting.

Follow cyberhost on Mastodon, we'll be linking it to Home Assistant in the next post. @cyberhost@infosec.exchange

]]>
<![CDATA[macOS External 4K Monitor Scaling Fix]]>Do you want your external display to have these options:

Then read on...

I spent way too long trying to figure out why my new Dell 4k monitors wouldn't allow for scaling by adjusting the text. The screens were unusable being super small at 4k or blurry at

]]>
https://cyberhost.uk/macos-monitor/65c4d9bd782a81000172d02cThu, 08 Feb 2024 17:40:31 GMT

Do you want your external display to have these options:

macOS External 4K Monitor Scaling Fix

Then read on...

I spent way too long trying to figure out why my new Dell 4K monitors wouldn't allow for scaling by adjusting the text size. The screens were unusable: super small at 4K or blurry at 1080/1440p. As this fix doesn't seem to be documented anywhere, here it is:

You may notice that when using an external monitor with your MacBook you're unable to adjust the text size. At 1080p or 1440p it's not normally an issue, but with a 4K monitor it's incredibly hard to use without scaling options.

The solution: Active HDMI Adapter - Skip to buying one

Why is that?

Most HDMI adapters on the market are passive, and this causes us issues. To carry video, USB-C uses DisplayPort Alternate Mode (Alt Mode), which requires our HDMI adapters to convert that DisplayPort signal into HDMI.

The chips used within active adapters come with many benefits:

Better signal conversion

Support for Advanced Features
Such as higher resolutions, refresh rates, and color depths. They can handle the necessary signal processing to ensure compatibility with the capabilities of the connected devices.

Resolution Scaling
Active adapters can scale the resolution to match the display's capabilities. This ensures optimal image quality without stretching or distortion.

Enhanced Compatibility
Active adapters are generally more versatile and compatible with a wider range of devices compared to passive adapters. They can handle a variety of signal formats and specifications, making them suitable for diverse connectivity needs.

If I'm honest, I don't know which feature means that we can correctly scale on macOS, but we can, and that's what matters!

Buying an Active Adapter

This is harder than you'd imagine; for some reason manufacturers don't want to tell us whether an adapter is passive or active.

As active adapters can handle the increased throughput needed for high resolutions and frame rates, this is what I used to find one: searching for 4K 120Hz or 8K worked for me.

I currently use a couple of Cable Matters adapters and they work perfectly. I've dropped their affiliate links below, this helps out my blog at no cost to you 😄

Cable Matters Multi Port: Amazon UK - £54 - Amazon US - $55

Cable Matters just HDMI: Amazon UK - £20 - Amazon US $20

Have another solution or verified adapter? Let us know

]]>
<![CDATA[In depth review of Kamatera Cloud - $4 VPS]]>There are a lot of reviews online for Kamatera Cloud, most of which look like sponsored posts just reading off the marketing rubbish. In this review I'll be completing a full deep dive into Kamatera Cloud, without any filtering or marketing BS.

To be clear this isn’

]]>
https://cyberhost.uk/a-proper-review-of-kamatera-cloud-4-vps/6591478f69be83000162d610Mon, 01 Jan 2024 14:59:20 GMT

There are a lot of reviews online for Kamatera Cloud, most of which look like sponsored posts just reading off the marketing rubbish. In this review I'll be completing a full deep dive into Kamatera Cloud, without any filtering or marketing BS.

To be clear this isn’t a sponsored post. If you think Kamatera cloud is right for you please support this ad-free site by using the affiliate link below, this is at no cost to you. As always this will not influence the opinions in this review.

Start a Kamatera 30 day trial

Pricing
CPU
Networking
Disk
Hot Add Demo
Stats
Setup Process
Support
YABS Benchmark

The Good

  • Reasonably priced with 30 day trial
  • Crazy fast networking
  • Crazy fast disk speeds
  • Fast Support

The Bad

  • No IPv6
  • Network blocked due to running a single speed test
  • No custom images
  • Timezone was set to EST as default

Pricing

Pricing starts at $4 (£3.15) per month, including 5TB of transfer at 10Gbit. These can be finely adjusted to suit your needs.

Start a Kamatera 30 day trial

In depth review of Kamatera Cloud - $4 VPS

Hourly billing is also an option; this works out to around $3.65 for a month, but does not include any data transfer.

Data transfer is billed at $0.01 per GB. This is for both in and outgoing traffic.
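As a worked example of the hourly option (using the figures above, which may have changed), a server left running for a full month that moves 200 GB of combined traffic would come to roughly:

```shell
# Hourly plan: ~$3.65/month for the server, plus $0.01/GB for all traffic
# in and out. Assumed usage: 200 GB combined transfer.
total=$(awk 'BEGIN { printf "%.2f", 3.65 + 200 * 0.01 }')
echo "estimated monthly bill: \$$total"
```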

In depth review of Kamatera Cloud - $4 VPS
Bandwidth Bill

CPU

Kamatera offer 4 types of CPU, for most people type A will be sufficient. Pricing does change when selecting a different type.

One nice thing is that these are hot swappable, so you don’t need to reboot to change CPU type.

Type A – Availability

Server CPUs are assigned to a non-dedicated physical CPU thread with no resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
Single Core | 847
Multi Core | 649
Full Test | https://browser.geekbench.com/v6/cpu/4148053

Type B – General Purpose

Server CPUs are assigned to a dedicated physical CPU Thread with reserved resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
|
Single Core | 886
Multi Core | 701
Full Test | https://browser.geekbench.com/v6/cpu/4148754

Type T – Burstable

Server CPUs are assigned to a dedicated physical CPU thread with reserved resources guaranteed.

Type D – Dedicated

Server CPUs are assigned to a dedicated physical CPU Core (2 threads) with reserved resources guaranteed.

Geekbench 6 Benchmark Test:

Test | Value
|
Single Core | 886
Multi Core | 692
Full Test | https://browser.geekbench.com/v6/cpu/4149244

More can be found here: https://www.kamatera.com/faq/answer/which-cpu-types-are-offered-by-kamatera/

Networking

5TB of included traffic at 10Gbit is very good when comparing against the likes of AWS (100GB), DigitalOcean (1TB) and Vultr (3TB).

The IPs I was allocated were very clean, with 0 hits on mxtoolbox.
e.g. https://mxtoolbox.com/SuperTool.aspx?action=blacklist%3A185.127.19.95

No IPv6 Support

This was confirmed when asking their support desk:


Question asked: "Are there any plans to support IPv6 in the future?"

Support Response: "As of today, we do not provide and allow IPv6 to be used on our infrastructure. You can use only IPv4."

Speedtest

When running speed tests it did trigger their automated security to block the IP; this block was in place for a couple of hours before automatically being removed.

I spoke to their technical support to understand why the network was blocked, they told me this was likely down to the high traffic on the server side caused by the speed tests. Although I understand why, as they don’t want their infrastructure being used to launch powerful DDoS attacks, the immediate blocking seems a little over the top for me. I can’t fault the support who initially responded in 25 mins and the follow up reply in 9 mins.

Disk

Disk Benchmarks

Test 1

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 202.51 MB/s (50.6k) | 2.51 GB/s (39.3k)
Write | 203.04 MB/s (50.7k) | 2.53 GB/s (39.5k)
Total | 405.55 MB/s (101.3k) | 5.04 GB/s (78.8k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.61 GB/s (7.0k) | 3.23 GB/s (3.1k)
Write | 3.80 GB/s (7.4k) | 3.45 GB/s (3.3k)
Total | 7.42 GB/s (14.4k) | 6.68 GB/s (6.5k)

Test 2

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 197.21 MB/s (49.3k) | 2.50 GB/s (39.1k)
Write | 197.73 MB/s (49.4k) | 2.52 GB/s (39.3k)
Total | 394.94 MB/s (98.7k) | 5.02 GB/s (78.5k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.46 GB/s (6.7k) | 3.17 GB/s (3.0k)
Write | 3.65 GB/s (7.1k) | 3.38 GB/s (3.3k)
Total | 7.12 GB/s (13.9k) | 6.55 GB/s (6.3k)

Test 3

Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 185.06 MB/s (46.2k) | 2.65 GB/s (41.5k)
Write | 185.55 MB/s (46.3k) | 2.67 GB/s (41.7k)
Total | 370.61 MB/s (92.6k) | 5.32 GB/s (83.2k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 3.55 GB/s (6.9k) | 3.46 GB/s (3.3k)
Write | 3.74 GB/s (7.3k) | 3.69 GB/s (3.6k)
Total | 7.30 GB/s (14.2k) | 7.15 GB/s (6.9k)

Hot Add

Unlike other cloud providers I've used, Kamatera allows you to adjust the CPU type, CPU cores and memory without a reboot. A reboot is often required to downgrade.

Hot Add Demo

Stats Page

Like most providers Kamatera provide a system monitoring tool for CPU, Memory, Network and disk.

In depth review of Kamatera Cloud - $4 VPS

Setup Process

I started with a Debian VPS (1 CPU, 1GB RAM, 20GB SSD); this took 1 minute 21 seconds to deploy and had quite a fancy initialisation log showing what's happening behind the scenes, which is nice to see. At the start you're given the allocated IPv4 address, which is handy as you can go off and set up DNS while the VPS is provisioning.

In depth review of Kamatera Cloud - $4 VPS

One thing to note is the Debian image wasn't up to date and was running 12.2, which was released on the 7th October 2023. Not a massive issue, but it's nice to see cloud providers shipping up-to-date, fully patched images.

The image used is also super minimal; I had to install sudo and curl and create a new user.

It would be better if Kamatera set up a user account for you and installed your SSH key on that user, to promote the better security practice of not using the root account.

Support

As previously mentioned, I got in contact with their support department. They guarantee to get back to you within 24 hours, however they were much faster than this in practice. I sent them 4 emails; the response times were 25 mins, 9 mins, 1 min and 10 mins.

I was put through to the cloud services department for this and all the staff were fairly technical, answering my questions well.

YABS Full Benchmark

A full benchmark of the $4 a month VPS plan was completed using YABS

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2023-11-30                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Sun Dec 24 12:07:42 PM EST 2023

Basic System Information:
---------------------------------
Uptime     : 0 days, 18 hours, 46 minutes
Processor  : Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
CPU cores  : 1 @ 2593.905 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ❌ Disabled
RAM        : 925.8 MiB
Swap       : 1024.0 MiB
Disk       : 19.6 GiB
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.1.0-13-amd64
VM Type    : VMWARE
IPv4/IPv6  : ✔ Online / ❌ Offline

IPv4 Network Information:
---------------------------------
ISP        : Kamatera Inc
ASN        : AS210329 Kamatera Inc
Host       : Cloudwebmanage EU LO
Location   : Poplar, England (ENG)
Country    : United Kingdom

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/sda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 178.16 MB/s  (44.5k) | 2.59 GB/s    (40.5k)
Write      | 178.64 MB/s  (44.6k) | 2.61 GB/s    (40.8k)
Total      | 356.80 MB/s  (89.2k) | 5.21 GB/s    (81.4k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 5.17 GB/s    (10.1k) | 5.02 GB/s     (4.9k)
Write      | 5.44 GB/s    (10.6k) | 5.35 GB/s     (5.2k)
Total      | 10.61 GB/s   (20.7k) | 10.38 GB/s   (10.1k)

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 849                           
Multi Core      | 636                           
Full Test       | https://browser.geekbench.com/v6/cpu/4123094

Start a Kamatera 30 day trial

]]>
<![CDATA[Off-site UniFi Protect Backup]]>I'm a massive fan of the UniFi Protect eco-system, yes it can get a little pricey compared to real consumer gear but it simply just works and it works well, all while on-site away from prying eyes. The issue with all of this is that a savvy burglar

]]>
https://cyberhost.uk/offsite-unifi-protect-backup/657c73d15b92190001664681Sat, 16 Dec 2023 00:14:17 GMT

I'm a massive fan of the UniFi Protect eco-system. Yes, it can get a little pricey compared to regular consumer gear, but it simply works, and it works well, all while keeping footage on-site and away from prying eyes. The issue with all of this is that a savvy burglar is going to look for your NVR and take all the footage with them. Unfortunately, I've seen a few threads on Reddit of exactly this happening.

Physical Security

The UniFi Cloud Key Gen2 Plus has a security slot so that a Kensington lock can be used to secure the system. Looking at the other UniFi products, none of the other systems that can run the Protect application appear to have this, which is a little disappointing.


Off-Site Backup

I would have hoped this is a feature that UniFi would build into the Protect application. I did hear rumors many years ago that this was something they were working on, but given the number of years that have passed, I'm guessing they have given up on it.

There are a few ways to back up your footage off-site:

  • rsync the video directories on the Cloud Key to a separate machine
  • Enable the RTSP to send the video streams off-site
  • Endless janky scripts

I've been using a Docker container from Sebastian Goscik - https://github.com/ep1cman/unifi-protect-backup - for close to 2 years now, and it has effortlessly shipped off all my motion events to S3 as they happen.

Setting up Unifi-Protect-Backup with Docker

We'll be deploying this in Docker. I personally run a Debian VM just for this, but a Raspberry Pi will also do the job. You'll want this to be local, and likely on the same network as the UniFi Cloud Key.

Unifi-Protect-Backup uses rclone for the backend, meaning a load of cloud providers and protocols are supported out of the box, such as:

  • S3
  • Dropbox
  • FTP
  • SFTP
  • SMB
  • WebDAV

Look through their docs and create an rclone.conf config for your cloud provider. I'll be using Scaleway's S3-compatible storage. The template for this:

[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint =
acl = private
server_side_encryption =
storage_class =

Once your rclone config is sorted (a quick rclone lsd scaleway: against it makes a good sanity check), we'll create a UniFi user account for the backups; basic view-only access is all it needs.


Docker Compose:

version: '3.1'
services:
  protect-backup-scaleway:
    image: ghcr.io/ep1cman/unifi-protect-backup
    container_name: unifi-protect-backup
    restart: unless-stopped
    environment:
      UFP_USERNAME: <username>
      UFP_PASSWORD: <password>
      UFP_ADDRESS: <Unifi-IP>
      UFP_SSL_VERIFY: 'false'
      RCLONE_DESTINATION: scaleway:<bucket-id/directory>
    volumes:
      - ./data:/data
      - /home/cyberhost/rclone.conf:/root/.config/rclone/rclone.conf:ro

You'll want to set the credentials, the UniFi IP, and the rclone destination in the config.

Start it up: sudo docker compose up -d

You can check the logs with sudo docker compose logs

To prevent your bills going sky-high, you'll want to set up some lifecycle rules. I have mine set to move videos to Glacier storage after 7 days and then delete them after 180 days, which keeps my monthly costs well below £1.
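For reference, the same lifecycle rules can be applied from the command line with any S3-compatible client rather than through the web console. A sketch of the rule document below matches the 7-day/180-day policy described above - the rule ID is illustrative, and I'm assuming Scaleway's Glacier tier is addressed via the standard GLACIER storage class:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 180 }
    }
  ]
}
```

Saved as lifecycle.json, this can be pushed with something like aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --endpoint-url https://s3.nl-ams.scw.cloud --lifecycle-configuration file://lifecycle.json.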

Scaleway Lifecycle Rules

That's it, you'll likely want to check periodically to ensure it is still backing up 😄

]]>
<![CDATA[Ionos £1 VPS Review]]>Jump to YABS

What's the deal?

UK Cost: £1.20 (Including VAT) - 12 Month Contract

US Cost: $2 - 1 Month Contract

Get Ionos VPS - This supports the hosting of this blog at no cost to you

VPS Linux XS Specs:

  • 1 vCPU
  • 1GB RAM
]]>
https://cyberhost.uk/ionos-1-vps-review/657c4ada94293600016089d3Fri, 15 Dec 2023 15:01:39 GMT

Jump to YABS

What's the deal?

UK Cost: £1.20 (Including VAT) - 12 Month Contract

US Cost: $2 - 1 Month Contract

Get Ionos VPS - This supports the hosting of this blog at no cost to you

VPS Linux XS Specs:

  • 1 vCPU
  • 1GB RAM
  • 10GB NVME
  • IPv4 and IPv6
  • 1Gbit Port
  • Unlimited Transfer

Other benefits:

  • Firewall
  • DDoS Protection

Locations: UK, US, Germany, Spain, France

These are great little boxes if you don't need much horsepower. I personally have a couple of these; one is currently hosting this blog, although I've converted it to be static, as 1GB/1CPU isn't powerful enough to run Ghost CMS. The other I use for its public IPv4, which I port forward to my router over WireGuard. As I'm stuck behind a CG-NAT, it's either this or pay £5/month to my ISP.
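For anyone curious about that WireGuard port-forwarding trick, a rough sketch of the VPS side looks like the config below. Everything here is illustrative - the interface names, tunnel subnet, keys and forwarded port will differ on your setup, and you'll also need net.ipv4.ip_forward=1 set on the VPS:

```ini
# /etc/wireguard/wg0.conf on the VPS (illustrative values)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <VPS-PRIVATE-KEY>
# Forward inbound TCP 443 hitting the VPS's public IPv4 down the tunnel
PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

[Peer]
# The home router sitting behind the CG-NAT
PublicKey = <ROUTER-PUBLIC-KEY>
AllowedIPs = 10.0.0.2/32
```

The router keeps a persistent-keepalive connection open to the VPS, so inbound traffic on the VPS's public address can flow back through the tunnel despite the CG-NAT.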

I've not had any issues with uptime while running these boxes. I did have some issues with packet loss in the early days, back when the network port was capped at 400 Mbit/s. It looks like Ionos are now deploying VPSs in a different datacenter and with 1Gbit ports, and I'm happy to say I've not had any packet loss issues on the new VPSs.

I've run a YABS benchmark, which can be found at the end of this post. The NVMe drives seem slightly on the slow side when testing with 4k blocks; all the other block sizes are fairly good. I guess for £1 you can't really complain!

CPU

I scored 700 with Geekbench 6 on the Intel Xeon CPU. My other VPS has an AMD EPYC CPU. These are both meant to be in the same datacenter, so it's pot luck which CPU you end up with.

Networking

One thing to note is that IPv6 is not enabled by default, but it can be enabled for free via the control panel without rebooting the VPS.

The IP of my recently purchased VPS seems to geolocate to Germany in quite a few GeoIP databases; I'm guessing this will be updated in the coming weeks.

These come with 1Gbit ports and unlimited transfer - I wasn't even able to find any fair-usage small print. They do say "We won't throttle or restrict traffic at any time, so you never have to worry about limits or extra costs." The iperf network tests consistently hit over 1Gbit; the highest I observed was 3.65Gbit down and 1.9Gbit up - very impressive!


Network Speed:

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.87 Gbits/sec  | 3.48 Gbits/sec  | 1.46 ms        
Scaleway        | Paris, FR (10G)           | 1.86 Gbits/sec  | 3.65 Gbits/sec  | 7.93 ms        
NovoServe       | North Holland, NL (40G)   | 1.90 Gbits/sec  | 3.26 Gbits/sec  | 14.1 ms        
Uztelecom       | Tashkent, UZ (10G)        | 454 Mbits/sec   | 1.02 Gbits/sec  | 191 ms         
Clouvider       | NYC, NY, US (10G)         | 1.74 Gbits/sec  | 374 Mbits/sec   | 74.3 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.68 Gbits/sec  | 1.71 Gbits/sec  | 107 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1.04 Gbits/sec  | 1.25 Gbits/sec  | 133 ms         

Firewall Setting

Although you can always use a tool such as [UFW](https://en.wikipedia.org/wiki/Uncomplicated_Firewall) to manage a firewall on the VPS, I always prefer to have my firewall separate from the VPS. The main reason for this is Docker and how it doesn't play nicely with iptables: published container ports skip past UFW's rules entirely. Previously I was under the assumption there was a firewall in place when Docker was in fact bypassing it - not very helpful...

Ionos has a basic interface for managing this, with changes being deployed almost instantly.
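If you'd rather keep the host firewall authoritative instead of relying on an external one, a documented (if heavy-handed) daemon setting stops Docker from programming iptables at all. This is a sketch of /etc/docker/daemon.json; note that with this set, containers lose Docker-managed NAT and port publishing, so you have to write those rules yourself - adding rules to the DOCKER-USER chain is the gentler officially-documented route:

```json
{
  "iptables": false
}
```

Restart the Docker daemon after changing this, and test container networking carefully before relying on it.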


YABS

Yet-Another-Bench-Script

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2023-11-30                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Fri Dec 15 15:03:20 UTC 2023

Basic System Information:
---------------------------------
Uptime     : 1 days, 21 hours, 29 minutes
Processor  : Intel Xeon Processor (Skylake, IBRS)
CPU cores  : 1 @ 2100.000 MHz
AES-NI     : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM        : 942.4 MiB
Swap       : 1024.0 MiB
Disk       : 9.8 GiB
Distro     : Debian GNU/Linux 12 (bookworm)
Kernel     : 6.1.0-15-cloud-amd64
VM Type    : MICROSOFT
IPv4/IPv6  : ✔ Online / ✔ Online

IPv4 Network Information:
---------------------------------
ISP        : IONOS SE
ASN        : AS8560 IONOS SE
Host       : Ionos
Location   : City of Westminster, England (ENG)
Country    : United Kingdom

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 40.02 MB/s   (10.0k) | 498.01 MB/s   (7.7k)
Write      | 40.12 MB/s   (10.0k) | 500.63 MB/s   (7.8k)
Total      | 80.15 MB/s   (20.0k) | 998.64 MB/s  (15.6k)
           |                      |                     
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ---- 
Read       | 474.75 MB/s    (927) | 469.26 MB/s    (458)
Write      | 499.98 MB/s    (976) | 500.51 MB/s    (488)
Total      | 974.73 MB/s   (1.9k) | 969.78 MB/s    (946)

iperf3 Network Speed Tests (IPv4):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.87 Gbits/sec  | 3.48 Gbits/sec  | 1.46 ms        
Scaleway        | Paris, FR (10G)           | 1.86 Gbits/sec  | 3.65 Gbits/sec  | 7.93 ms        
NovoServe       | North Holland, NL (40G)   | 1.90 Gbits/sec  | 3.26 Gbits/sec  | 14.1 ms        
Uztelecom       | Tashkent, UZ (10G)        | 454 Mbits/sec   | 1.02 Gbits/sec  | 191 ms         
Clouvider       | NYC, NY, US (10G)         | 1.74 Gbits/sec  | 374 Mbits/sec   | 74.3 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.68 Gbits/sec  | 1.71 Gbits/sec  | 107 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1.04 Gbits/sec  | 1.25 Gbits/sec  | 133 ms         

iperf3 Network Speed Tests (IPv6):
---------------------------------
Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
-----           | -----                     | ----            | ----            | ----           
Clouvider       | London, UK (10G)          | 1.81 Gbits/sec  | 2.92 Gbits/sec  | 2.21 ms        
Scaleway        | Paris, FR (10G)           | busy            | busy            | 9.88 ms        
NovoServe       | North Holland, NL (40G)   | 1.82 Gbits/sec  | 1.73 Gbits/sec  | 9.39 ms        
Uztelecom       | Tashkent, UZ (10G)        | busy            | busy            | --             
Clouvider       | NYC, NY, US (10G)         | 1.62 Gbits/sec  | 1.12 Gbits/sec  | 74.4 ms        
Clouvider       | Dallas, TX, US (10G)      | 1.27 Gbits/sec  | 1.02 Gbits/sec  | 111 ms         
Clouvider       | Los Angeles, CA, US (10G) | 1000 Mbits/sec  | 1.06 Gbits/sec  | 135 ms

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value                         
                |                               
Single Core     | 700                           
Multi Core      | 502                           
Full Test       | https://browser.geekbench.com/v6/cpu/3999447

Visit Ionos - This supports the hosting of this blog at no cost to you

]]>
<![CDATA[Deploying a RIPE Atlas Software Node]]>The RIPE Atlas project is a global network of probes used to measure internet connectivity. At the time of writing there are 12868 probes online, sitting on 3792 ASNs (Networks) in 177 countries. This provides a great understanding of the state of the internet in real time, and the best

]]>
https://cyberhost.uk/deploying-a-ripe-atlas-software-node-in-a-vm/657238ef3bc0c20001b6e543Wed, 13 Dec 2023 21:45:00 GMT

The RIPE Atlas project is a global network of probes used to measure internet connectivity. At the time of writing there are 12,868 probes online, sitting on 3,792 ASNs (networks) in 177 countries. This provides a great understanding of the state of the internet in real time, and the best thing is that this is a public project, with the data it produces being freely available.

If you're unsure who RIPE are, they're the non-profit that assigns IPv4 and IPv6 address space and AS numbers for the Europe region.

View measurements: https://atlas.ripe.net/measurements

Map of online probes - 7th December 2023

Running your own probe used to require custom hardware from RIPE; there have been a few different versions over the years, the most common being a repurposed TP-Link router.


You can request a hardware node here: https://atlas.ripe.net/apply/, however these are in limited supply. Running a software probe is an easier alternative, especially if you're not sure about running one long term.

This is a dead-easy setup using Docker, thanks to James Swineson. His docker compose file is on GitHub: https://raw.githubusercontent.com/Jamesits/docker-ripe-atlas/master/docker-compose.yaml

This is my slightly modified version:

version: '3.1'
services:
  ripe-atlas:
    image: jamesits/ripe-atlas:latest
    restart: always
    environment:
      RXTXRPT: "yes"
    volumes:
      - ./atlas-probe/etc:/var/atlas-probe/etc
      - ./atlas-probe/status:/var/atlas-probe/status
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - DAC_OVERRIDE
      - NET_RAW
    mem_limit: "64000000000"
    mem_reservation: 64m

Save it in a docker-compose.yml file and then run with sudo docker compose up -d.

This will create a set of keys within the etc directory. Copy the contents of probe_key.pub and use it within the apply form: https://atlas.ripe.net/apply/swprobe/

If you're unsure of your ASN you can run curl https://ipinfo.io/org

That's it - once the form is submitted you should get an email with your probe ID. You should then earn around 21,600 credits every 24 hours, although I had to wait a full 24 hours before receiving any.
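That daily figure lines up with RIPE's credit scheme of roughly 15 credits for every minute a probe stays connected:

```shell
# ~15 credits per connected minute, 60 minutes, 24 hours
echo $((15 * 60 * 24))   # 21600 credits per day
```

So a probe that drops offline for part of the day will earn proportionally less.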

After your credits come in you'll then be able to run measurements via the measurements form:

Create Measurement Form

Now it's time to sit back and grab a coffee as the credits roll in and the internet is made better!

]]>
<![CDATA[Diving into a hidden macOS tool - networkQuality]]>Getting Started with networkQuality

The networkQuality tool is a built-in tool released in macOS Monterey that can help diagnose network issues and measure network performance. In this post, we'll go over how to use the networkQuality tool and some of its key features.

Running the Default Tests

To

]]>
https://cyberhost.uk/the-hidden-macos-speedtest-tool-networkquality/65722b923bc0c20001b6e41eSun, 14 May 2023 00:22:59 GMT

Getting Started with networkQuality

The networkQuality tool is a command-line utility built into macOS since Monterey that can help diagnose network issues and measure network performance. In this post, we'll go over how to use it and some of its key features.

Running the Default Tests

To access the Network Quality tool, open the Terminal app on your Mac and enter the following command:

networkQuality -v

This command starts the tool and performs the default set of tests, displaying the results in the Terminal window.

Example output:

==== SUMMARY ====
Uplink capacity: 44.448 Mbps (Accuracy: High)
Downlink capacity: 162.135 Mbps (Accuracy: High)
Responsiveness: Low (73 RPM) (Accuracy: High)
Idle Latency: 50.125 milliseconds (Accuracy: High)
Interface: en0
Uplink bytes transferred: 69.921 MB
Downlink bytes transferred: 278.340 MB
Uplink Flow count: 16
Downlink Flow count: 12
Start: 13/05/2023, 15:04:13
End: 13/05/2023, 15:04:27
OS Version: Version 13.3.1 (a) (Build 22E772610a)

Usage:

USAGE: networkQuality [-C <configuration_url>] [-c] [-h] [-I <network interface name>] [-k] [-p] [-r host] [-s] [-v]
    -C: override Configuration URL or path (with scheme file://)
    -c: Produce computer-readable output
    -h: Show help (this message)
    -I: Bind test to interface (e.g., en0, pdp_ip0,...)
    -k: Disable certificate validation
    -p: Use Private Relay
    -r: Connect to host or IP, overriding DNS for initial config request
    -s: Run tests sequentially instead of parallel upload/download
    -v: Verbose output

Using Private Relay

The Network Quality tool also supports Apple's Private Relay feature, which encrypts and routes all network traffic through two separate servers for added privacy and security. To use Private Relay with the tool, you can add the "-p" flag to the command:

networkQuality -v -p

Customizing the Configuration

You can customize the configuration used by the Network Quality tool. By default, the tool requests a configuration file from Apple every time via https://mensura.cdn-apple.com/api/v1/gm/config. However, you can specify a different configuration URL using the -C flag.

Apple's default config:

{ "version": 1,
  "test_endpoint": "uklon6-edge-bx-031.aaplimg.com",
  "urls": {
      "small_https_download_url": "https://mensura.cdn-apple.com/api/v1/gm/small",
      "large_https_download_url": "https://mensura.cdn-apple.com/api/v1/gm/large",
      "https_upload_url": "https://mensura.cdn-apple.com/api/v1/gm/slurp",
      "small_download_url": "https://mensura.cdn-apple.com/api/v1/gm/small",
      "large_download_url": "https://mensura.cdn-apple.com/api/v1/gm/large",
      "upload_url": "https://mensura.cdn-apple.com/api/v1/gm/slurp"
   }
}

Apple's test_endpoint changes on each request, selecting a different nearby server to reduce latency and distributing their server load.

For example, if you have a custom configuration file located at https://networkquality.example.com/config, you can use the following command to run the Network Quality tool with your custom configuration:

networkQuality -v -C https://networkquality.example.com/config

This command starts the tool and performs the tests using the configuration specified in the custom configuration file.

Creating Your Own Server

If you want to set up your own server for the Network Quality tool, you can find documentation on how to do so on the project's GitHub page at https://github.com/network-quality/server. This allows you to customize the tool to your specific needs and run it on your own infrastructure.

Go Server: https://github.com/network-quality/goserver

Conclusion

The networkQuality tool in macOS is a powerful way to measure the performance of your network connection. The fact that you can run the test via Apple Private Relay without iCloud+ is a neat touch. My next step will be to self-host my own server, which will be a good way to test network speeds locally.

networkQuality Wiki/Community: https://github.com/network-quality/community/wiki

IPPM Responsiveness Draft RFC: https://www.ietf.org/archive/id/draft-cpaasch-ippm-responsiveness-00.html

]]>
<![CDATA[Self-Host Mastodon with Docker Compose]]>https://cyberhost.uk/mastodon-docker-compose/65722b923bc0c20001b6e41aFri, 12 May 2023 19:19:00 GMT

Requirements

A small instance to support up to 10 users requires around:

  • 2 CPUs
  • 2GB RAM
  • 50GB Storage

Need a Server?

Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us (ref) - Expires after 30 days

Setup

Install docker

Follow the docs for your distro: https://docs.docker.com/engine/install/

Create docker network

sudo docker network create --driver=bridge --subnet=10.10.10.0/24 --gateway=10.10.10.1 mastodonnet

Docker compose:
version: "2.1"
services:
  mastodon:
    image: lscr.io/linuxserver/mastodon:latest
    container_name: mastodon
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - LOCAL_DOMAIN=<domain>    #All other hostnames are blocked by default
      - REDIS_HOST=10.10.10.3
      - REDIS_PORT=6379
      - DB_HOST=10.10.10.2
      - DB_USER=mastodon
      - DB_NAME=mastodon
      - DB_PASS=<PASSWORD>
      - DB_PORT=5432
      - ES_ENABLED=false
      - SECRET_KEY_BASE=
      - OTP_SECRET=
      - VAPID_PRIVATE_KEY=
      - VAPID_PUBLIC_KEY=
      - SMTP_SERVER=mail.example.com
      - SMTP_PORT=25
      - SMTP_LOGIN=
      - SMTP_PASSWORD=
      - SMTP_FROM_ADDRESS=notifications@example.com
      - S3_ENABLED=false
#      - WEB_DOMAIN=mastodon.example.com #optional
#      - ES_HOST=es #optional
#      - ES_PORT=9200 #optional
#      - ES_USER=elastic #optional
#      - ES_PASS=elastic #optional
#      - S3_BUCKET= #optional
#      - AWS_ACCESS_KEY_ID= #optional
#      - AWS_SECRET_ACCESS_KEY= #optional
#      - S3_ALIAS_HOST= #optional
    volumes:
      - ./config:/config
    networks:
      default:
        ipv4_address: 10.10.10.4
    expose:
      - "80"
      - "443"
    restart: unless-stopped

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      default:
        ipv4_address: 10.10.10.2
    environment:
      POSTGRES_USER: mastodon
      POSTGRES_PASSWORD: <PASSWORD>
      POSTGRES_DB: mastodon
      POSTGRES_HOST_AUTH_METHOD: trust
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      default:
        ipv4_address: 10.10.10.3
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

networks:
  default:
    external:
      name: mastodonnet

SMTP is optional if you don't have an email provider

S3 Media Storage

It is recommended that you enable and configure Mastodon to use an S3 bucket for media. Media storage can easily get to over 100GB.

I'd recommend using Scaleway Object Storage as you get 75GB of Storage and Bandwidth for free.

We'll configure the removal of old media later so you're not storing loads of junk!
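If you do go the S3 route, the commented S3 variables in the compose file above get filled in along these lines. The bucket name, region and alias host below are placeholders for a Scaleway setup, and I'm assuming the image passes Mastodon's standard S3_ENDPOINT/S3_REGION variables through - check the image's documentation for the exact set it supports:

```yaml
# Illustrative S3 settings for Scaleway (replace placeholders with your values)
      - S3_ENABLED=true
      - S3_BUCKET=<your-bucket>
      - S3_ENDPOINT=https://s3.nl-ams.scw.cloud
      - S3_REGION=nl-ams
      - AWS_ACCESS_KEY_ID=<access-key>
      - AWS_SECRET_ACCESS_KEY=<secret-key>
      - S3_ALIAS_HOST=media.example.com   # optional custom domain/CDN in front of the bucket
```

Make the bucket publicly readable (or front it with S3_ALIAS_HOST), since clients fetch media directly from it.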

Create secrets

Run sudo docker run --rm -it -w /app/www --entrypoint rake lscr.io/linuxserver/mastodon secret

Run the above command twice and store the values under the variables SECRET_KEY_BASE and OTP_SECRET within the docker-compose.yml file.

Create Vapid Keys

Run sudo docker run --rm -it -w /app/www --entrypoint rake lscr.io/linuxserver/mastodon mastodon:webpush:generate_vapid_key

Copy and paste the output into the environmental variables in the docker-compose.yml file.

Example config:

- SECRET_KEY_BASE=20ef02cab8dcdf7a787aa2f1aa29d4414b02c18d97af78ba4860c78bd456edaae4c30a97b846f368e8c58c8625b79d3a9220ea247a4e5e62be1cb1bbbb041705
- OTP_SECRET=20a8013e5fd8d0f2d2d87017e6e6c00fc87fe3d354622b463ff2b40ea42c952459362cb02a7e5fa51c90906546dcd1ecf2de48a2e236944df9070fbee8d190d1
- VAPID_PRIVATE_KEY=M5PjTAf4GbE4zlqAh1lY8NGy1P_JTbCL2i_zly7XOG0=
- VAPID_PUBLIC_KEY=BFRu73k9lDkQUw0pysubc7cpoByCYMm__km6APeLtwrtPPSlWWn064fFjFYAdZ-AiXBRWgWAanB9lm0BP83hBkg=
Start it up!

Run: sudo docker compose up -d

Create your Owner/Admin account:
sudo docker exec -it -w /app/www mastodon bin/tootctl accounts create \
  <YOUR-USERNAME> \
  --email <YOUR-EMAIL> \
  --confirmed \
  --role Owner

A password will be displayed after a few seconds

Reverse Proxy Setup

In order to add TLS (HTTPS) to your Mastodon instance we'll be placing a reverse proxy in front.

We recommend using Caddy however Nginx is a popular alternative.

If you'd like to place your instance behind Cloudflare, follow the Cloudflare Tunnel section.

Caddy

Need to setup Caddy? Follow this guide

mastodon.example.com {
      reverse_proxy https://10.10.10.4  {
                transport http {
                        tls_insecure_skip_verify
                }
        }
}
Cloudflare Tunnel

Need to setup Cloudflare tunnel? Follow this guide
Add the following to your config:

  - hostname: mastodon.example.com
    service: https://10.10.10.4:443
    originRequest:
       noTLSVerify: true
Auto removal of old media

Mastodon caches media from other servers, which can potentially use a significant amount of storage.

We'll be using a built-in tool to automatically delete cached media to reduce storage usage.

Open crontab: sudo crontab -e

Add the following:

15 * * * * docker exec -w /app/www mastodon bin/tootctl media remove --days 7 > /dev/null 2>&1

This will run at quarter past every hour, removing media that is over 7 days old.

Replace /dev/null if you'd like to store the output to a file.

You'll probably want to adjust the number of days you cache media for.

Enjoy using Mastodon!

Say hello👋
@cyberhost@infosec.exchange

]]>
<![CDATA[Cloudflare Argo Tunnel Setup - Self-Host with a CG-NAT]]>https://cyberhost.uk/cloudflare-argo-tunnel/65722b923bc0c20001b6e412Sat, 16 Oct 2021 21:11:00 GMT

Contents:

What is a Cloudflare Argo Tunnel
How it works
CG-NAT
Install
Setting up Cloudflare Repositories
Temporary Argo Tunnel
Permanent Argo Tunnel
Run in the background and on boot
Adding more services

What is a Cloudflare Argo Tunnel?

Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. With Tunnel, you do not send traffic to an external IP — instead, a lightweight daemon in your infrastructure (cloudflared) creates outbound-only connections to Cloudflare’s edge. Cloudflare Tunnel can connect HTTP web servers, SSH servers, remote desktops, and other protocols safely to Cloudflare. This way, your origins can serve traffic through Cloudflare without being vulnerable to attacks that bypass Cloudflare. - Cloudflare

How it works

Cloudflared establishes outbound connections (tunnels) between your resources and the Cloudflare edge. Tunnels are persistent objects that route traffic to DNS records. Within the same tunnel, you can run as many cloudflared processes (connectors) as needed. These processes will establish connections to the Cloudflare edge and send traffic to the nearest Cloudflare data center. - Cloudflare

Cloudflare Argo Tunnel Diagram

CG-NATs

As the IPv4 address space has been exhausted, many ISPs have reduced their usage by implementing CG-NAT, where multiple customers share the same IPv4 address. This makes port forwarding essentially impossible.
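You can get a rough idea of whether you're behind a CG-NAT by checking whether the address your router receives on its WAN side falls inside the shared address space reserved for carrier NAT, 100.64.0.0/10 (RFC 6598). A small POSIX-shell check - the sample address is just an example:

```shell
# Return success if the given IPv4 address is in 100.64.0.0/10 (RFC 6598 shared space)
is_cgnat() {
  case "$1" in
    100.*)
      second=${1#100.}        # strip the leading "100."
      second=${second%%.*}    # keep only the second octet
      [ "$second" -ge 64 ] && [ "$second" -le 127 ]
      ;;
    *) return 1 ;;
  esac
}

is_cgnat "100.72.1.5" && echo "behind a CG-NAT"   # prints "behind a CG-NAT"
```

If your router's WAN address is in that range (or differs from the address a "what is my IP" service reports), you're almost certainly behind a CG-NAT and a tunnel like this is the way to go.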


Using a Cloudflare Argo Tunnel removes the need to port forward, allowing users to self-host behind a CG-NAT, strict firewall or any ISP limitation.

Cloudflare Setup Docs

Install

Setup Cloudflare Repositories

You can download the cloudflared binary from Cloudflare. I feel that setting up Cloudflare Repositories is a better solution as it can then be managed and updated via your package manager.

Follow the Official Setup Docs for your distribution.

Example setup for Debian 12 (Bookworm):

# Add cloudflare gpg key
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null

# Add this repo to your apt repositories
echo 'deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared bookworm main' | sudo tee /etc/apt/sources.list.d/cloudflared.list

# install cloudflared
sudo apt-get update && sudo apt-get install cloudflared

Temporary Argo Tunnel (Cloudflare account not required!)

Run: cloudflared tunnel --url localhost:<PORT>
Example: cloudflared tunnel --url localhost:80

Output:

+--------------------------------------------------------------------------------------------+
2021-10-14T21:01:42Z INF |  Your quick Tunnel has been created! Visit it at (it may take some time to be reachable):  |
2021-10-14T21:01:42Z INF |  https://bloomberg-car-giant-removed.trycloudflare.com                               |
2021-10-14T21:01:42Z INF +--------------------------------------------------------------------------------------------+

Just head to the URL outputted: https://bloomberg-car-giant-removed.trycloudflare.com.

It's that simple!

It's worth noting there is some undocumented rate limiting on this temporary setup, although you're unlikely to notice on the dev projects it's designed for.

Permanent Argo Tunnel

  1. Login: cloudflared tunnel login

  2. Copy the URL and open in your browser

  3. Create a new tunnel: cloudflared tunnel create cyberhost

This can be viewed by running cloudflared tunnel list

ID                                   NAME      CREATED              CONNECTIONS 
28c78ae-9ba2-40cc-c187-1892be52da8b cyberhost 2021-10-14T12:10:05Z
  4. Navigate to .cloudflared - you may find this in your home directory: cd ~/.cloudflared.

  5. Create a configuration file within the .cloudflared directory:
    nano config.yml

  6. Use the following config:

tunnel: <Tunnel-UUID>
credentials-file: <PATH>/.cloudflared/<Tunnel-UUID>.json

ingress:
  - hostname: demo.example.com
    service: http://localhost:80
  - service: http_status:404

Replace <Tunnel-UUID>, <PATH>, demo.example.com and the service port if needed. To find <PATH>, run pwd in the .cloudflared directory.

  7. Connect the Argo tunnel with a hostname
    eg: cloudflared tunnel route dns <UUID or NAME> demo.example.com

  8. Now run the tunnel: cloudflared tunnel run <UUID or NAME>

Run in the background and on boot

  1. Create a system service: sudo cloudflared --config ~/.cloudflared/config.yml service install

  2. Start and enable service at boot: sudo systemctl start cloudflared && sudo systemctl enable cloudflared

Adding more services

  1. Pair another hostname: cloudflared tunnel route dns <UUID or NAME> demo2.example.com

  2. Add another ingress point to the config:

ingress:
  - hostname: demo.example.com
    service: http://localhost:80
  - hostname: demo2.example.com
    service: http://localhost:8080
  - service: http_status:404
  3. Remove the existing service config: sudo rm /etc/cloudflared/config.yml
  4. Install the new config: sudo cloudflared --config ~/.cloudflared/config.yml service install
  5. Restart the service: sudo systemctl restart cloudflared
]]>
<![CDATA[How to Self-Host Matrix and Element (Docker Compose)]]>Version 2

This is a complete guide on setting up Matrix (Synapse) and Element on a fresh Ubuntu 20.04, Debian 11, CentOS or Fedora server.

Need a Server?
Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us

]]>
https://cyberhost.uk/element-matrix-setup/65722b923bc0c20001b6e3f0Sun, 10 Oct 2021 21:43:00 GMT

Version 2

This is a complete guide on setting up Matrix (Synapse) and Element on a fresh Ubuntu 20.04, Debian 11, CentOS or Fedora server.

Need a Server?
Hetzner - €20 credit on us (ref) - Free Arm instance for 4 months!
Vultr - $100 credit on us (ref) - Expires after 30 days

Contents
What is Matrix?
Server Setup
Install UFW
Setup Sudo User
Install Docker
Install Matrix and Element
Create New Users
Reverse Proxy
Login

If your server is already setup feel free to skip.

What is Matrix?

Matrix is an open standard and communication protocol for real-time communication. It aims to make real-time communication work seamlessly between different service providers, just like standard Simple Mail Transfer Protocol email does now for store-and-forward email service, by allowing users with accounts at one communications service provider to communicate with users of a different service provider via online chat, voice over IP, and videotelephony. Such protocols have been around before such as XMPP but Matrix is not based on that or another communication protocol. From a technical perspective, it is an application layer communication protocol for federated real-time communication. It provides HTTP APIs and open source reference implementations for securely distributing and persisting messages in JSON format over an open federation of servers. It can integrate with standard web services via WebRTC, facilitating browser-to-browser applications. Wikipedia

Server Setup

  1. Update
    Ubuntu/Debian: sudo apt update && sudo apt upgrade
    CentOS/Fedora: sudo dnf upgrade

  2. Install automatic updates
    Ubuntu/Debian: sudo apt install unattended-upgrades
    CentOS/Fedora: sudo dnf install -y dnf-automatic

  3. Change SSH Port: sudo nano /etc/ssh/sshd_config

Remove the # in front of Port 22 and then change the port (something in the 30000-50000 range is ideal).

This is security through obscurity, which is no substitute for real hardening, but port 22 just gets hammered by bots constantly.
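As a sketch, the edited lines in /etc/ssh/sshd_config might end up looking like this (41234 is just an example port):

```
# /etc/ssh/sshd_config
Port 41234
# Once SSH keys are working (next step), you can also disable passwords:
# PasswordAuthentication no
```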

  4. Set up SSH keys

  5. Restart SSH: sudo systemctl restart sshd

  6. Install fail2ban
    Ubuntu/Debian: sudo apt install fail2ban
    CentOS/Fedora: sudo dnf install fail2ban
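Note that fail2ban's default sshd jail watches port 22; if you moved SSH, tell it about the new port. A minimal /etc/fail2ban/jail.local sketch (the port number is an example):

```
[sshd]
enabled = true
port    = 41234
```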

Install UFW Firewall

  1. Install
    Ubuntu/Debian: sudo apt install ufw
    CentOS/Fedora: sudo dnf install ufw

  2. Allow SSH, replacing <SSH-PORT> with the port you chose: sudo ufw allow <SSH-PORT>/tcp

  3. Allow HTTP/s traffic:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
  4. Enable the firewall: sudo ufw enable

Set up a sudo user

  1. Create the user: adduser <USERNAME>
  2. Add the user to sudoers: sudo adduser <USERNAME> sudo
  3. Log in as the new user: su - <USERNAME>

Install Docker

Official Docs:
Ubuntu
Debian
CentOS
Fedora

Install Matrix and Element

  1. Create a Docker network so that Matrix and Element sit on their own isolated network:
    sudo docker network create --driver=bridge --subnet=10.10.10.0/24 --gateway=10.10.10.1 matrix_net

  2. Create Matrix directory: sudo mkdir matrix && cd matrix

  3. Use the following template:
    sudo nano docker-compose.yaml

version: '2.3'
services:
  postgres:
    image: postgres:14
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.2
    volumes:
     - ./postgresdata:/var/lib/postgresql/data

    # These will be used in homeserver.yaml later on
    environment:
     - POSTGRES_DB=synapse
     - POSTGRES_USER=synapse
     - POSTGRES_PASSWORD=STRONGPASSWORD
     
  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json
    networks:
      default:
        ipv4_address: 10.10.10.3
        
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.4
    volumes:
     - ./synapse:/data

networks:
  default:
    external:
      name: matrix_net
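Replace the STRONGPASSWORD placeholder above with a properly random value; one easy way to generate one (openssl is usually preinstalled):

```shell
# Generate a 32-byte random password, base64-encoded (44 characters)
openssl rand -base64 32
```

Use the same value in both the compose file and homeserver.yaml later on.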
  4. Create the Element config: sudo nano element-config.json
    Copy and paste the example contents into your file.

  5. Remove "default_server_name": "matrix.org" (top line) from element-config.json as this is deprecated

  6. Add our custom homeserver to the top of element-config.json:

    "default_server_config": {
        "m.homeserver": {
            "base_url": "https://matrix.example.com",
            "server_name": "matrix.example.com"
        },
        "m.identity_server": {
            "base_url": "https://vector.im"
        }
    },
  7. Generate the Synapse config:
sudo docker run -it --rm \
    -v "$HOME/matrix/synapse:/data" \
    -e SYNAPSE_SERVER_NAME=matrix.example.com \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate
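If generation succeeded, the ./synapse directory should now contain roughly the following (filenames are derived from your SYNAPSE_SERVER_NAME):

```
synapse/
  homeserver.yaml                     # main Synapse config
  matrix.example.com.log.config       # logging config
  matrix.example.com.signing.key      # federation signing key (back this up)
```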
  8. Comment out the SQLite database section (Postgres replaces it):
    sudo nano synapse/homeserver.yaml
#database:
#  name: sqlite3
#  args:
#    database: /data/homeserver.db
  9. Add the Postgres config:
    sudo nano synapse/homeserver.yaml
database:
  name: psycopg2
  args:
    user: synapse
    password: STRONGPASSWORD
    database: synapse
    host: postgres
    cp_min: 5
    cp_max: 10
  10. Deploy: sudo docker compose up -d

Create New Users

  1. Access docker shell: sudo docker exec -it matrix_synapse_1 bash
  2. register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008
  3. Follow the on screen prompts
  4. Enter exit to leave the container's shell
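register_new_matrix_user also accepts flags if you prefer to pre-fill the prompts (the username here is an example); -a makes the account a server admin:

```
register_new_matrix_user -c /data/homeserver.yaml -u alice -a http://localhost:8008
```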

To allow anyone to register an account, set enable_registration to true in homeserver.yaml. This is NOT recommended.
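A safer middle ground is to keep public registration off and rely on the registration shared secret, which is what register_new_matrix_user authenticates with; the relevant homeserver.yaml keys look roughly like this (the generated config should already contain a secret):

```
enable_registration: false
registration_shared_secret: "LONG-RANDOM-SECRET"
```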

Install Reverse Proxy (Caddy)

Caddy will be used as the reverse proxy. It will handle incoming HTTPS connections and forward them to the correct Docker containers. It is a simple setup process, and Caddy will automatically fetch and renew Let's Encrypt certificates for us!

  1. Follow this setup guide:
Caddy Server v2 Reverse Proxy Setup Guide
  2. Head to your user directory: cd
  3. Create the Caddyfile: sudo nano Caddyfile

Recommended Caddy Template:

  • Limited Matrix paths (based on docs)
  • Security Headers
  • No search engine indexing
matrix.example.com {
  reverse_proxy /_matrix/* 10.10.10.4:8008
  reverse_proxy /_synapse/client/* 10.10.10.4:8008
  
  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

element.example.com {
  encode zstd gzip
  reverse_proxy 10.10.10.3:80

  header {
    X-Content-Type-Options nosniff
    Referrer-Policy  strict-origin-when-cross-origin
    Strict-Transport-Security "max-age=63072000; includeSubDomains;"
    Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
    X-Frame-Options SAMEORIGIN
    X-XSS-Protection 1
    X-Robots-Tag none
    -server
  }
}

Matrix Reverse Proxy Docs

  4. Apply the config: caddy reload
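Once the config is live, you can sanity-check both hosts from any machine; /_matrix/client/versions is part of the standard Matrix client-server API and should return JSON:

```
# Should return a JSON list of supported Matrix spec versions
curl https://matrix.example.com/_matrix/client/versions

# Element should serve its web app
curl -I https://element.example.com
```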

Login

  1. Head to your element domain and login!

Don't forget to update frequently

Pull the new docker images and then restart the containers:
sudo docker compose pull && sudo docker compose up -d
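Those two commands can be wrapped in a small helper script if you like (the name and location are arbitrary); docker image prune then clears out superseded images:

```
#!/bin/sh
# update-matrix.sh - pull newer images, recreate containers, drop old images
cd "$HOME/matrix" || exit 1
sudo docker compose pull
sudo docker compose up -d
sudo docker image prune -f
```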

]]>