Kevin Neely's Security Notes

Nextcloud

I finally decided to move my #NextCloud instance from one that I had been operating on the #Vultr hosting service to my #HomeLab.

A note on Vultr: I am impressed with this service. I have used them for multiple projects over about 10 years, paying by various means from credit card to #cryptocurrency, and I cannot remember a downtime that impacted me. (In fact, I think there was only one real downtime, which was planned, well-communicated, and didn't affect me because my setup was fairly resilient.) With a growing volume of data, and sufficient spare hardware that wasn't doing anything, I decided to bring it in-house.

This is not going to be a full guide, as there are plenty of those, but I did run into some hurdles that may be common, especially if you started from a pre-built Nextcloud instance. So this is meant to provide some color and augment the official and popular documentation.

Getting started

Plan out the migration

Migration Overview

Essentially, there are three high-level steps to this process:

  1. Build a new Nextcloud server in the homelab
  2. Copy the configuration (1 file), database (1 backup file), apps (install apps), and data (all user files) over to the new system
  3. Restore all the copied data to the new instance

Preparing to Migrate

  1. Start with the Nextcloud official documentation for migrating to a different server, as well as:
    1. Backing up Nextcloud
    2. the doc on restoring a server
  2. Check out Nicholas Henkey’s migrate Nextcloud to a new server blog post. This is very thorough and has some great detail if you’re not super familiar with Nextcloud (because you used a pre-built instance)
  3. For the new build:
    1. A full set of installation instructions, placing [Nextcloud behind an Nginx proxy](https://github.com/jameskimmel/Nextcloud_Ubuntu/blob/main/nextcloud_behind_NGINX_proxy.md).
    2. An older install document for Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache

Migration

While the official documentation describes the basics, the following are the steps I recommend. This is at a medium level of detail, covering the what and the why but (mostly) not the specific command-line arguments; a hedged sketch of the core commands follows the list.

  1. Build the new server
    1. Use your favorite flavor of Linux (I used Debian, and these notes will reflect that)
      1. install all updates,
      2. install fail2ban or similar security if you’re exposing this to the Internet.
      3. name the new system the same as the outgoing server
    2. Download the Nextcloud install from the nextcloud download site and choose either:
      1. update the current system to the latest point release of whatever major version you're running, and then download latest-XX.tar.bz2, where 'XX' is your major version
      2. identify your exact version and download it from nextcloud
    3. Install the dependencies (mariaDB, redis, php, apache, etc. etc.)
      1. note: if the source server is running nginx, I recommend sticking with that for simplicity, keeping in mind that only Apache is officially supported
    4. Unpack Nextcloud
    5. Validate that it’s working
    6. Place it into maintenance mode
  2. Backup the data

    1. If using multi-factor authentication, find your recovery codes or create new ones
    2. Place the server into maintenance mode
    3. Backup the database
    4. copy the database backup to a temporary location on the new server
  3. Restore the data

    1. Restore the database
    2. copy /path/to/nextcloud/config/config.php over the existing config.php
    3. rsync the data/ directory to the new server
      1. you can remove old logs in the data directory
      2. you may need to use an intermediary step, like a USB drive. It’s best if this is ext4 formatted so you can retain attributes
      3. the rsync options should include -Aax (-a already implies recursion); you may want -v and/or --progress to get a better feel for what's going on
      4. if rsync-ing over ssh, the switch is -e ssh
    4. If you have installed any additional apps for your Nextcloud environment, rsync the apps/ directory in the same way as the data dir above
    5. Validate the permissions in your nextcloud, data, and apps directories. Fix as necessary; see Nicholas Henkey's post (linked above) for the commands
    6. Redirect your A or CNAME record to the new system
    7. Configure SSL on the new system
    8. Turn off maintenance mode
    9. Log in and test! :fingers-crossed:
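
To make that concrete, here is a hedged sketch of the core commands, assuming MariaDB, a database named nextcloud, an install path of /var/www/nextcloud, and a reachable hostname newserver (all placeholders; adjust to your environment):

# On the OLD server: enter maintenance mode and dump the database
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
mysqldump --single-transaction -u nextcloud -p nextcloud > /tmp/nextcloud-db.bak

# Copy the dump, config, data, and any extra apps to the new server over ssh
rsync -Aax -e ssh /tmp/nextcloud-db.bak newserver:/tmp/
rsync -Aax -e ssh /var/www/nextcloud/config/config.php newserver:/var/www/nextcloud/config/
rsync -Aax --progress -e ssh /var/www/nextcloud/data/ newserver:/var/www/nextcloud/data/
rsync -Aax -e ssh /var/www/nextcloud/apps/ newserver:/var/www/nextcloud/apps/

# On the NEW server: restore the database (assumes the empty database and
# user already exist), fix ownership, and leave maintenance mode
mysql -u nextcloud -p nextcloud < /tmp/nextcloud-db.bak
chown -R www-data:www-data /var/www/nextcloud
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off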

Troubleshooting

Hopefully everything is working. Make sure to check the logs if something is broken.

Log locations:

  • nextcloud.log in the data/ directory
  • the Apache logs in /var/log/apache2
  • the Redis logs in /var/log/redis
  • the system logs, accessible with journalctl
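
If you want to watch several of these at once while testing, something like the following works (paths are assumptions; nextcloud.log is one JSON object per line, so jq is handy but optional):

# Follow the Nextcloud application log, pulling out the time and message fields
sudo tail -f /var/www/nextcloud/data/nextcloud.log | jq -r '.time + " " + .message'
# Follow the web server and Redis units via the journal
journalctl -u apache2 -u redis-server -f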

Reiterating: Remember or check for these items

These are the specific notes I took as I ran into problems that I had to work around or solve. These are incorporated in the above, so this is basically a restatement of the gotchas I ran into:

  • upgrade the current one to the latest version of the current release (i.e. the latest of the major version you are on, so if you were running 29.0.3, get to 29.0.9)
    • this makes it easier when you download latest-XX.tar.bz2
    • If you’d prefer to skip that, use the nextcloud download site with all available versions. Make sure to grab the same one and compare the specific version as listed in config.php. Example: 'version' => '29.0.9.2',
  • use the same name on the new server
  • use the same web server. Apache is officially supported, but if you’re using nginx, it will be easier to stay on that.
  • Most multi-factor authentication, like WebAuthn and FIDO hardware keys, will not work over HTTP in the clear.
    • IOW: make sure you have recovery codes
  • If the apps aren't copied over, the new server sees them as installed rather than installable. I suppose one could remove them in the admin GUI and then reinstall, but otherwise there is no button to force a reinstall.
  • Files and data you need to copy over after creating the install. Do each of these separately, rather than in one big copy:
    • if you have any additional apps, copy the apps/ directory over
    • copy config.php
    • copy the data/ directory
  • Is your current install using Redis-based transactional file locking?
    • If the previous system was using Redis and it is still in the configuration, the new system will not be able to obtain file locks, and essentially all users will be read-only, unable to modify or create new files.
    • In config.php, you will see settings such as 'redis' and 'memcache.locking' => '\\OC\\Memcache\\Redis',
    • make sure Redis is installed on the new system and running on the same port (or change the port in config.php)
    • Install the necessary software: apt install redis-server php-redis php-apcu
    • Ensure that the Redis and APCu settings in config.php are according to the documented single-server settings

The Memcache settings should look something like the following configuration snippet. Alternatively, you could enable and use the Redis Unix socket.


'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
     'host' => 'localhost',
     'port' => 6379,
],
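
If you would rather not hand-edit config.php, the same settings can be applied with occ; this is a sketch assuming the defaults above, run from the Nextcloud directory:

sudo -u www-data php occ config:system:set memcache.local --value '\OC\Memcache\APCu'
sudo -u www-data php occ config:system:set memcache.distributed --value '\OC\Memcache\Redis'
sudo -u www-data php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
sudo -u www-data php occ config:system:set redis host --value localhost
sudo -u www-data php occ config:system:set redis port --value 6379 --type integer
# Quick sanity check that Redis is answering on the default port
redis-cli ping    # expect: PONG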

Nextcloud administration notes

These instructions and administrative notes were written for the pre-built Nextcloud provided by hosting provider Vultr. As a way to de-#Google my life and take back a bit of #privacy, I have been using a Vultr-hosted instance for a couple of years now, and it has run quite well. These notes are really aimed at a small instance for personal use. Please don't use my notes if you're responsible for an enterprise server!

Upgrading Nextcloud

#Nextcloud, with all its PHP-based functionality, can become temperamental if not upgraded appropriately. These are my notes to remind me how to not completely break things. When upgrading, the first pass will usually bring you to the most up-to-date version of Nextcloud in your major release, e.g. an instance running 27.1.4 would be brought up to 27.1.11. Running the updater again would bring the instance to 28.0.x.

To update a Nextcloud server running on the #Vultr service to the latest version, follow the steps below (a condensed command sketch follows the list):

  1. Backup your Nextcloud data: Before starting any update process, it's always a good idea to create a backup of your Nextcloud data. This will ensure that you can restore your data in case of any unexpected issues during the update process.
    1. Shutdown the OS with shutdown -h now
    2. Power down the instance in Vultr
    3. Create a snapshot
    4. Wait
    5. Wait some more – depending on how much data is hosted on the system
    6. Power it back up
  2. SSH into the Vultr server: To update the Nextcloud server, you need to access the server using SSH. You can use an SSH client such as PuTTY to connect to the Vultr server.
  3. Switch to the Nextcloud user: Once you are logged in, switch to the Nextcloud user using the following command: sudo su -s /bin/bash www-data.
  4. Navigate to the Nextcloud directory: Navigate to the Nextcloud directory using the following command: cd /var/www/html (could be /var/www/nextcloud or other; check what's in use)
  5. Enable maintenance mode: To avoid any conflicts during the update process, put Nextcloud into maintenance mode using the following command (as www-data): php occ maintenance:mode --on
  6. Update the Nextcloud server: To update Nextcloud itself, run the following command (as www-data): php updater/updater.phar. This will start the update process and download the latest version of Nextcloud.
  7. Update the OS, as needed, with apt upgrade
  8. Turn off maintenance mode: Once the update is complete and verified, bring Nextcloud back online using the following command: sudo -u www-data php occ maintenance:mode --off.
  9. Verify the update: After the update process is complete, you can verify the update by accessing the Nextcloud login page. You should see the latest version of Nextcloud listed on the login page.
  10. Assuming all is running smoothly, the snapshot that was created in step 1 can be safely deleted. Otherwise, snapshots accrue charges on the order of pennies per gigabyte per day.
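
Condensed, the command-line portion of the steps above looks roughly like this (paths assume the Vultr pre-built image with Nextcloud in /var/www/html):

# Steps 3-6: become www-data, enable maintenance mode, run the updater
sudo su -s /bin/bash www-data
cd /var/www/html
php occ maintenance:mode --on
php updater/updater.phar    # a second run moves to the next major release
php occ maintenance:mode --off
exit
# Step 7: update the OS as needed
sudo apt update && sudo apt upgrade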

Some other notes

Remove files in the trash

When a user deletes files, it can take a long time for them to actually disappear from the server.

root@cloud:/var/www/html# sudo -u www-data php -f /var/www/html/cron.php
root@cloud:/var/www/html# sudo -u www-data php occ config:app:delete files_trashbin background_job_expire_trash

Set files to expire

root@cloud:/var/www/html# sudo -u www-data php occ config:app:set --value=yes files_trashbin background_job_expire_trash
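
To confirm the setting took, and to force an immediate cleanup, something like the following should work (hedged: trashbin:cleanup is provided by the files_trashbin app on recent releases):

sudo -u www-data php occ config:app:get files_trashbin background_job_expire_trash
sudo -u www-data php occ trashbin:cleanup --all-users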

This is a log of experiences and experimentation in moving from more traditional home computing (ATX cases, components, water cooling, and continual upgrades) to something a bit more modular in terms of GPU computing power. This guide probably isn't for most people. It's a collection of notes I took during the process, strung together in case they might help someone else looking to pack multiple power-use-cases into as small a format as possible.

[Note:] A later evolution should involve a similar down-sizing of a home storage appliance.

Objectives

An external GPU requires more setup and, let's face it, more fiddling than a gaming laptop or a full PC case that can handle multi-PCIe-slot GPUs. So why do it? A couple of objectives had been bouncing around in my head that led me to this:

  • I need a system that can run compute-intensive and GPU-intensive tasks for long periods of time, e.g. machine learning and training large language models
  • I need a light laptop for travel (i.e. I don't want to carry around a 5+ lb. / 2.5 kg gaming laptop)
  • I want to be able to play recent games, but don't need to be on the cutting edge of gaming
  • I want to reduce the overall space footprint of my computing devices

In summary, I want my systems to handle the more intensive tasks I plan to throw at them: the Windows laptop for gaming and travel, and the stay-at-home system for long-running tasks such as AI model training, password cracking, and daily cron jobs.

Things I don't care about:

  • being able to play games while traveling
  • document data diverging across multiple systems: I use a personal #NextCloud instance to keep my documents in sync

Current State

I have a number of personal computing devices in my home lab for testing things and running different tasks, but they're all aging a bit, so it is time to upgrade:

  • my Razer Blade 13 laptop is from 2016
  • my main tower/gaming PC is from 2015, with an Nvidia GTX 1060
  • an i5 NUC from 2020 (unused)
  • an i3 NUC from 2013 (unused)
  • a 6TB NAS with 4 aging 2TB drives from 2014
  • Raspberry Pis and some other non-relevant computing devices

Configurations

With the objectives in mind, and realizing that my workload system would almost certainly run Linux, the two configurations for experimentation were:

  • Intel NUC with an eGPU
  • lightweight laptop (e.g. Dell XPS 13) with an eGPU

[Note:] The computing systems must support at least Thunderbolt3, though version 4 would be best for future-proofing.

Image: Original GTX 1060 GPU slotted in the Razer Core X Chroma eGPU enclosure

Background Research

Before starting on this endeavor, I did a lot of research to see how likely I'd be able to succeed. The best source I found was the eGPU.io site, with many reviews and descriptions of how well specific configurations worked (or didn't). They also have nice "best laptop for eGPU" and "Best eGPU Enclosures" matrices.

Nvidia drivers and Ubuntu

Installing Nvidia drivers under #Ubuntu is pretty straightforward these days, with a one-click install option built into the operating system itself. The user can choose between versions; my research showed that most applications required either version 525 or 530, so I installed 530.
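
For a headless system, the command-line equivalent of that one-click installer is roughly the following (a sketch; the exact package name depends on your Ubuntu release):

ubuntu-drivers devices                 # list the detected GPU and the recommended driver
sudo apt install nvidia-driver-530     # or nvidia-driver-525, per the versions mentioned above
sudo reboot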

eGPU information

The two best sources I found for information on configuring and using eGPUs were:

  • r/eGPU on reddit, and their "so you're thinking about an eGPU" guide
  • egpu.io

Proof-of-concept

Having read a fair amount about the flakiness of certain #eGPU setups, I approached this project with a bit of caution. My older tower had a respectable, if aging, GTX 1060 6GB in it, and I already had a recent Core i5 Intel NUC running Ubuntu and some test machine learning applications, so all I needed to fully test this was the enclosure. Researching the various enclosure options, I chose the Razer Core X Chroma because:

  • the Razer Core X series appears to have some of the best out-of-the-box compatibility
  • I've been impressed with my aging Razer laptop, so I know they build quality components
  • the Chroma version adds what is basically a USB hub to the back of the plain Core X: 4 USB 3.x ports and an ethernet jack

My thinking was that this system could not only provide GPU, but also act as an easy dock-hub for my primary computers. This didn't work out quite as I planned (more in the next post).

The included Thunderbolt cable is connected from the NUC to the eGPU. Theoretically, the standard peripherals (keyboard, mouse, etc.) should be connected to the eGPU hub and everything will "just work". However, in my testing, things worked best with my peripheral hub plugged into the NUC and only the #Thunderbolt cable plugged into the enclosure. In the spirit of IT troubleshooters everywhere: start by making the smallest change and iterate from there.

Image: Just the enclosure with an Intel NUC on top.

Experience

The NUC was on Ubuntu 20.04. The drivers installed just fine, but the system just wouldn't see the GPU. Doing some research, it looked like people were having better results with more recent versions of Ubuntu, so I did a quick release upgrade (sudo do-release-upgrade; a plain apt dist-upgrade stays within the current release) and brought the system to 22.XX. The GPU worked! However, the advice I'd been given was to upgrade to 23.04, so I did that, and the system still worked fine.
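
For anyone debugging a similar setup, these checks confirm each layer is working (boltctl comes from Ubuntu's bolt package):

boltctl list               # is the Thunderbolt enclosure enumerated and authorized?
lspci | grep -i nvidia     # does the GPU show up on the PCI bus?
nvidia-smi                 # is the driver actually talking to the card?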

Migrating PasswordSafe to KeepassXC

I've been a longtime user of #PasswordSafe (or "PWsafe"), going back to when Bruce Schneier was managing authorship and maintenance. With all the issues experienced by online providers like LastPass and 1Password (but especially LastPass, by miles), I think a local password database synced to a personal #NextCloud instance is the way to go. I'm happy with PWsafe; it has worked well over the years, but I need to share a few passwords and would like some expanded functionality, such as managing SSH keys. So I looked to #KeePassXC, which appears to be the most up-to-date and maintained branch of the KeePass and KeePassX family. KeePassXC is desirable because it is natively multi-platform, whereas the original KeePass is written for Windows, and emulation layers are required to use it on operating systems like Linux.

Importing passwords

There is no direct import from the PasswordSafe format into a KeePass database using KeePassXC, like there is from LastPass. A tab-delimited file can be exported from PWsafe, and KeePassXC can import a comma-delimited ("CSV") file; however, I make heavy use of nested groups, and the work to prepare the CSV file looked like a major pain. Luckily, the original version of KeePass supports direct import from PWsafe.

Armed with that knowledge, this was my path to import my passwords:

  1. Open PasswordSafe and export the database in the XML format (be careful with this file and delete it when done!)
  2. Download the latest KeePass 2.x from https://keepass.info/
  3. Open KeePass, create a new KeePass version 2 database, and import the XML file
  4. Export the file in the KeePass version 1.x database format
  5. Close KeePass 2.x
  6. Open KeePassXC and create a new database in a temporary location (doesn't matter, we won't use it)
  7. Import the KeePass 1.x database with the passwords
  8. When prompted, choose the location and name where you want the database
  9. Done!

Image: KeePass import dialogue box

Finishing Up

Make sure to explore the settings, such as adding a Yubikey and/or keyfile. When everything is as you want it and working, delete the interim files (the XML export and the KeePass 1.x and 2.x databases), and make a plan to retire the old PasswordSafe data.
