
Textpattern CMS support forum


#61 2025-11-03 12:09:57

gaekwad
Server grease monkey
From: People's Republic of Cornwall
Registered: 2005-11-19
Posts: 4,611
Bitbucket GitHub

Re: Adventures in Linux Land

This was the proposed command:

curl -fsSL https://repo.mysql.com/RPM-GPG-KEY-mysql-2022 | gpg --dearmor | sudo tee /usr/share/keyrings/mysql.gpg > /dev/null

…let’s chop it up a bit. The first task is to look at the | characters as essentially ‘glue’ between three commands that are concatenated into one. The three commands are:

curl -fsSL https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
gpg --dearmor
sudo tee /usr/share/keyrings/mysql.gpg > /dev/null

The pipe (|) character essentially takes the output of the previous command and chucks it into the following one to process. In the case of the proposed command at the top, there are three commands with two bits of pipe ‘glue’, so the result of command 1 of 3 gets chucked into command 2 of 3, and the result of command 2 of 3 gets chucked into command 3 of 3.
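A harmless way to see a pipe in action, using plain standard tools (nothing to do with MySQL here):

```shell
# The text from echo is piped into tr, which upper-cases it
echo "hello" | tr 'a-z' 'A-Z'   # prints: HELLO
```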

Command 1 of 3

curl -fsSL https://repo.mysql.com/RPM-GPG-KEY-mysql-2022 fetches the file at that URL, and the -fsSL parameter chunk are the options for curl to run in a certain way. Briefly (and man curl is your friend here for bedtime reading):

  • -f is short for --fail which means if the web server at the other end throws a result that’s not the file you’re expecting (e.g. file not found HTML page), it won’t download that file as the thing you’re looking for…it just won’t download anything.
  • -s is short for --silent: don’t output anything to the screen.
  • -S (note: capital) is short for --show-error: shows the error if there is one. This overrides the -s part if there’s an error, so you will see there’s an error.
  • -L is short for --location: bounces the curl request to the new location if it’s moved, for example.

Command 1 of 3 should – in theory, at least – show nothing on the screen at all if there are no errors: it’s downloading a URL, but not saving it, and not displaying it (i.e. -s is preventing it from being shown). Here’s where the fun stuff happens: the | glue essentially sends (pipes) the curl output into the second command.

Command 2 of 3

Command 2 of 3 literally runs the de-armoring part of gpg on the stuff it’s been given. In this case, the original URL contents are:

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGG4urcBEACrbsRa7tSSyxSfFkB+KXSbNM9rxYqoB78u107skReefq4/+Y72
TpDvlDZLmdv/lK0IpLa3bnvsM9IE1trNLrfi+JES62kaQ6hePPgn2RqxyIirt2se
Si3Z3n3jlEg+mSdhAvW+b+hFnqxo+TY0U+RBwDi4oO0YzHefkYPSmNPdlxRPQBMv
4GPTNfxERx6XvVSPcL1+jQ4R2cQFBryNhidBFIkoCOszjWhm+WnbURsLheBp757l
qEyrpCufz77zlq2gEi+wtPHItfqsx3rzxSRqatztMGYZpNUHNBJkr13npZtGW+kd
N/xu980QLZxN+bZ88pNoOuzD6dKcpMJ0LkdUmTx5z9ewiFiFbUDzZ7PECOm2g3ve
Jrwr79CXDLE1+39Hr8rDM2kDhSr9tAlPTnHVDcaYIGgSNIBcYfLmt91133klHQHB
IdWCNVtWJjq5YcLQJ9TxG9GQzgABPrm6NDd1t9j7w1L7uwBvMB1wgpirRTPVfnUS
Cd+025PEF+wTcBhfnzLtFj5xD7mNsmDmeHkF/sDfNOfAzTE1v2wq0ndYU60xbL6/
yl/Nipyr7WiQjCG0m3WfkjjVDTfs7/DXUqHFDOu4WMF9v+oqwpJXmAeGhQTWZC/Q
hWtrjrNJAgwKpp263gDSdW70ekhRzsok1HJwX1SfxHJYCMFs2aH6ppzNsQARAQAB
tDZNeVNRTCBSZWxlYXNlIEVuZ2luZWVyaW5nIDxteXNxbC1idWlsZEBvc3Mub3Jh
Y2xlLmNvbT6JAlQEEwEIAD4WIQSFm+jXxYb1OEMLGcJGe5QtOnm9KQUCYbi6twIb
AwUJA8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRBGe5QtOnm9KUewD/99
2sS31WLGoUQ6NoL7qOB4CErkqXtMzpJAKKg2jtBGG3rKE1/0VAg1D8AwEK4LcCO4
07wohnH0hNiUbeDck5x20pgS5SplQpuXX1K9vPzHeL/WNTb98S3H2Mzj4o9obED6
Ey52tTupttMF8pC9TJ93LxbJlCHIKKwCA1cXud3GycRN72eqSqZfJGdsaeWLmFmH
f6oee27d8XLoNjbyAxna/4jdWoTqmp8oT3bgv/TBco23NzqUSVPi+7ljS1hHvcJu
oJYqaztGrAEf/lWIGdfl/kLEh8IYx8OBNUojh9mzCDlwbs83CBqoUdlzLNDdwmzu
34Aw7xK14RAVinGFCpo/7EWoX6weyB/zqevUIIE89UABTeFoGih/hx2jdQV/NQNt
hWTW0jH0hmPnajBVAJPYwAuO82rx2pnZCxDATMn0elOkTue3PCmzHBF/GT6c65aQ
C4aojj0+Veh787QllQ9FrWbwnTz+4fNzU/MBZtyLZ4JnsiWUs9eJ2V1g/A+RiIKu
357Qgy1ytLqlgYiWfzHFlYjdtbPYKjDaScnvtY8VO2Rktm7XiV4zKFKiaWp+vuVY
pR0/7Adgnlj5Jt9lQQGOr+Z2VYx8SvBcC+by3XAtYkRHtX5u4MLlVS3gcoWfDiWw
CpvqdK21EsXjQJxRr3dbSn0HaVj4FJZX0QQ7WZm6WLkCDQRhuLq3ARAA6RYjqfC0
YcLGKvHhoBnsX29vy9Wn1y2JYpEnPUIB8X0VOyz5/ALv4Hqtl4THkH+mmMuhtndo
q2BkCCk508jWBvKS1S+Bd2esB45BDDmIhuX3ozu9Xza4i1FsPnLkQ0uMZJv30ls2
pXFmskhYyzmo6aOmH2536LdtPSlXtywfNV1HEr69V/AHbrEzfoQkJ/qvPzELBOjf
jwtDPDePiVgW9LhktzVzn/BjO7XlJxw4PGcxJG6VApsXmM3t2fPN9eIHDUq8ocbH
dJ4en8/bJDXZd9ebQoILUuCg46hE3p6nTXfnPwSRnIRnsgCzeAz4rxDR4/Gv1Xpz
v5wqpL21XQi3nvZKlcv7J1IRVdphK66De9GpVQVTqC102gqJUErdjGmxmyCA1OOO
RqEPfKTrXz5YUGsWwpH+4xCuNQP0qmreRw3ghrH8potIr0iOVXFic5vJfBTgtcuE
B6E6ulAN+3jqBGTaBML0jxgj3Z5VC5HKVbpg2DbB/wMrLwFHNAbzV5hj2Os5Zmva
0ySP1YHB26pAW8dwB38GBaQvfZq3ezM4cRAo/iJ/GsVE98dZEBO+Ml+0KYj+ZG+v
yxzo20sweun7ZKT+9qZM90f6cQ3zqX6IfXZHHmQJBNv73mcZWNhDQOHs4wBoq+FG
QWNqLU9xaZxdXw80r1viDAwOy13EUtcVbTkAEQEAAYkCPAQYAQgAJhYhBIWb6NfF
hvU4QwsZwkZ7lC06eb0pBQJhuLq3AhsMBQkDwmcAAAoJEEZ7lC06eb0pSi8P/iy+
dNnxrtiENn9vkkA7AmZ8RsvPXYVeDCDSsL7UfhbS77r2L1qTa2aB3gAZUDIOXln5
1lSxMeeLtOequLMEV2Xi5km70rdtnja5SmWfc9fyExunXnsOhg6UG872At5CGEZU
0c2Nt/hlGtOR3xbt3O/Uwl+dErQPA4BUbW5K1T7OC6oPvtlKfF4bGZFloHgt2yE9
YSNWZsTPe6XJSapemHZLPOxJLnhs3VBirWE31QS0bRl5AzlO/fg7ia65vQGMOCOT
LpgChTbcZHtozeFqva4IeEgE4xN+6r8WtgSYeGGDRmeMEVjPM9dzQObf+SvGd58u
2z9f2agPK1H32c69RLoA0mHRe7Wkv4izeJUc5tumUY0e8OjdenZZjT3hjLh6tM+m
rp2oWnQIoed4LxUw1dhMOj0rYXv6laLGJ1FsW5eSke7ohBLcfBBTKnMCBohROHy2
E63Wggfsdn3UYzfqZ8cfbXetkXuLS/OM3MXbiNjg+ElYzjgWrkayu7yLakZx+mx6
sHPIJYm2hzkniMG29d5mGl7ZT9emP9b+CfqGUxoXJkjs0gnDl44bwGJ0dmIBu3aj
VAaHODXyY/zdDMGjskfEYbNXCAY2FRZSE58tgTvPKD++Kd2KGplMU2EIFT7JYfKh
HAB5DGMkx92HUMidsTSKHe+QnnnoFmu4gnmDU31i
=Xqbo
-----END PGP PUBLIC KEY BLOCK-----

That’s an encoded (not encrypted) text block that gpg needs to de-armor (decode) before it can be used. The contents are a (presumably valid) key for MySQL updates in a format that the system understands. I won’t post the de-armor’d results here as it’s gibberish on the human-readable front.
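If you want a feel for what ‘encoded, not encrypted’ means without touching gpg, base64 is a reasonable stand-in: like ASCII armor, it wraps arbitrary bytes in printable text that anyone can decode – no secrets involved. (This is an analogy for illustration, not part of the MySQL command.)

```shell
# Encode: arbitrary input becomes safe, printable text
printf 'hello' | base64               # prints: aGVsbG8=

# Decode: the exact original comes back out — no key needed,
# which is why armoring is encoding, not encryption
printf 'hello' | base64 | base64 -d   # prints: hello
```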

Story so far: curl (command 1 of 3) has acquired (but not saved) content from a URL, and fed it into gpg (command 2 of 3) which has de-armor’d it, and now command 3 of 3 has the net result of those two commands.

Command 3 of 3

tee reads from its input and writes to both a file and the screen at the same time. It’s handy when you have a need for it, and it works really well at what it sets out to do. In this case, tee takes the output of command 2 of 3, which is piped (|) into it, and saves that output to the file /usr/share/keyrings/mysql.gpg. Since it’s doing this as root (sudo) it should go straight through and save to that file without any permissions complaints. The clever part here is the > /dev/null bit on the end: tee does show its output on the screen by default, and that redirection sends the screen copy to /dev/null, which is a UNIX / Linux device that acts as a black hole…whatever goes in can’t be seen or retrieved. In other words, there’s nothing to see here.
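You can watch tee do its thing with something harmless – /tmp/tee-demo.txt here is just a scratch file for the example:

```shell
# tee writes its stdin to the file AND to stdout; the > /dev/null
# throws the stdout copy away, so nothing appears on screen
echo "saved quietly" | tee /tmp/tee-demo.txt > /dev/null

# The file still received the content:
cat /tmp/tee-demo.txt   # prints: saved quietly
```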

  • Command 1 of 3 has no output to screen (assuming no HTTP errors);
  • Command 2 of 3 has no output to screen since it’s piping the result to command 3 of 3; and…
  • Command 3 of 3 has no output to screen since it’s outputting (>) to /dev/null.

…so if you run that original command and all goes to plan, you’ll see nothing on the screen and be returned to a prompt. Kinda scary, but expected.

That hopefully explains the | stuff to an extent that you can wrap your head around the basics. Now take a look at my command a few posts later:

gaekwad wrote #341053:
curl -Lo \
"$HOME"/mysql-apt-config.deb \
https://dev.mysql.com/get/mysql-apt-config_0.8.35-1_all.deb \
&& sudo dpkg -i \
"$HOME"/mysql-apt-config.deb

There are two things I’d like to explain to you that might help you out: && and \.

The && part is another ‘glue’ connector between commands, but in this case it translates to “if that command completes successfully, run the next command”. In this instance, if my curl command completes with exit status 0 (no errors), then the dpkg command runs; if curl fails, dpkg never runs at all.
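A quick way to see the conditional behaviour in a terminal – nothing here touches the MySQL commands, it’s just true/false and a doomed mkdir:

```shell
# && runs the right-hand command only if the left-hand one
# exits successfully (status 0):
true && echo "first command succeeded, so this prints"

# When the first command fails, the && branch is skipped;
# the || branch catches the failure instead:
mkdir /nonexistent/nope 2>/dev/null \
  && echo "never printed" \
  || echo "mkdir failed, so the && branch was skipped"
```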

The \ part means the command runs onto the following line, and is mainly for legibility and clarity in my own docs. The example above is two commands split across 5 lines. Removing the \ and && boils it down to two lines:

curl -Lo "$HOME"/mysql-apt-config.deb https://dev.mysql.com/get/mysql-apt-config_0.8.35-1_all.deb
sudo dpkg -i "$HOME"/mysql-apt-config.deb

Essentially: download the URL to a local file, then run dpkg against that downloaded file. I then clean up the downloaded file so there’s nothing left over.

The \ is really helpful for me with docs since I can do a mass find & replace should I decide to change a directory structure or filename format, and if I’m doing emergency stuff early or late in the day, I don’t have to have peak brainmeat to decipher some of the stuff I’ve written. It also makes for improved copy & pasting for text blocks if I need to repeat stuff in a more complex install, like compiling something from source.

Example (don’t be scared) – let’s compile libgd from source:

libgd_version="2.3.3" \
&& sudo mkdir -p \
/opt/libgd/ \
&& rm -fr \
"$HOME"/libgd-source \
"$HOME"/libgd-source.tar.gz \
&& mkdir -p \
"$HOME"/libgd-source \
&& curl -Lo \
"$HOME"/libgd-source.tar.gz \
https://github.com/libgd/libgd/releases/download/gd-"$libgd_version"/libgd-"$libgd_version".tar.gz \
&& tar xzvf \
"$HOME"/libgd-source.tar.gz \
-C "$HOME"/libgd-source \
&& cd \
"$HOME"/libgd-source/libgd-"$libgd_version" \
&& ./configure \
--prefix=/opt/libgd \
&& make \
-j"$(nproc)" \
&& sudo make \
-j"$(nproc)" \
install \
&& sudo ldconfig \
/usr/local/lib \
&& sudo make \
-j"$(nproc)" clean \
&& cd \
"$HOME" \
&& rm -fr \
"$HOME"/libgd-source \
"$HOME"/libgd-source.tar.gz \
&& echo -e '*********************' \
&& echo -e '* `libgd` compiled. *' \
&& echo -e '*********************' \
&& echo -e "$(date --iso-8601=seconds)"' libgd '$libgd_version >> /var/log/build/core-helpers-libraries.log

Essentially:

  • Make a directory in /opt for stuff to live in.
  • Remove any existing directory and tarball relating to libgd.
  • Download the libgd source.
  • Unpack the libgd source.
  • Configure the libgd source.
  • Build the libgd source.
  • Install the built libgd.
  • Remove any existing directory and tarball relating to libgd.

That’s about 30 or so lines of run-on commands to compile a library, but in reality it’s only about 15-20 commands. It’s easier for me to read and wonder what the hell I was thinking when I come back to it after some time. When a new libgd is released, I update the version number on line 1 and recompile. Easy peasy, and takes a few minutes tops.
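That version-on-line-1 trick works because the first link in the chain is a plain shell variable assignment, and every later "$libgd_version" expands to whatever was set there. A stripped-down illustration:

```shell
# Set the version once at the top…
libgd_version="2.3.3"

# …and every later expansion picks it up, so filenames and URLs
# all track a single line of the script
echo "libgd-$libgd_version.tar.gz"   # prints: libgd-2.3.3.tar.gz
```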

My compile script for Nginx is about 200 lines, PHP is about 140, and Percona Server for MySQL is about 30. The more finicky finagling needed, the more lines in the script…and having && ‘glue’ is handy.

Edits: Textile hates me.

Last edited by gaekwad (2025-11-03 12:12:55)


#62 2025-11-03 15:44:21

Algaris
Member
From: England
Registered: 2006-01-27
Posts: 601

Re: Adventures in Linux Land

😮 Wow, thank you so much, Pete. This is amazing and incredibly well explained. It makes a lot more sense now. I understood the | piping glue and got that && was another type of glue, but I didn’t realise it was conditional. I’ll need to reread your post a few times for it to completely sink in.

Whenever we meet up next, I definitely owe you a beer or two for all the assistance you’ve given me.


#63 2025-11-21 16:08:06

Algaris
Member
From: England
Registered: 2006-01-27
Posts: 601

Re: Adventures in Linux Land

I’m about to dive into a rabbit hole and could use some advice.

As I tighten security, I’m considering moving away from Samba shares and using SSH instead. I’d like to set up a dedicated remote access user with access to the /etc directory for editing config files and uploading certificates to the /etc/ssl/certs directory. They’ll also have access to /srv/www for uploading web-related files.

I’ve been looking into using chroot to make /srv/www their root directory but I’m unsure how to reconcile this with the need to access /etc. For security, the remote user should have sudo restricted to specific commands.

The plan is to generate public and private SSH keys, keeping one on my Mac and the other on a Debian server. All remote access will be done via Cyberduck or Mountain Duck (or possibly Nova depending on the situation), which will also be used to upload files to /etc/ssl/certs, and /srv/www. Config files will be edited in Nova and saved back to the server using Nova.

Does anyone have any best practices advice for this setup?


#64 2025-11-21 16:18:16

skewray
Member
From: Sunny Southern California
Registered: 2013-04-25
Posts: 273
Website Mastodon

Re: Adventures in Linux Land

I use ssh for everything, even site backups. An alternative is rsync; you keep a local copy and then run rsync to make your version and the server’s identical.

If the ssh user is chrooted, it might still work, if ssh’s reading of /etc takes place before “login”. That’s what I would expect, although I’ve never tried it.
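For what it’s worth, chrooted SFTP-only access is usually set up in sshd_config with a Match block – this is only a sketch, and the group name sftpusers is a made-up example:

```
# /etc/ssh/sshd_config (excerpt) — "sftpusers" is a hypothetical group
Match Group sftpusers
    # The chroot directory must be owned by root and not group-writable
    ChrootDirectory /srv/www
    # Restrict the session to the in-process SFTP server (no shell)
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that a chrooted user confined to /srv/www can’t reach /etc at all, which is the tension in the original question.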


#65 2025-11-21 17:04:45

gaekwad
Server grease monkey
From: People's Republic of Cornwall
Registered: 2005-11-19
Posts: 4,611
Bitbucket GitHub

Re: Adventures in Linux Land

I use ssh with Terminal & Transmit with SFTP, which is file transfer over the ssh port. I use rsync sparingly because of past trauma (fat fingers at $EMPLOYER back in the day, plus a fraught few hours restoring shaky backups).

If you’re hosting multiple web apps, they might have their own ownership and permissions requirements, which is where you might find snags. For typical PHP apps, they can be owned / operated by the web server and / or PHP so they operate as expected. Often you’ll find a user www-data that owns >1 site.

There are some considerations with this route: if the user with ownership gets popped, or PHP misbehaves, then any site that’s owned by the web server / PHP process can be affected. I’ve seen this a lot with out-of-date WordPress plugins being compromised and clobbering the other hosted sites on the same VPS. Yuck.

One route to consider is the one I’m trialling at the moment: a dedicated system user per site. That is, if you have a site example.com, the system user is examplecom and it only has access to /srv/www/example.com/ – no other rights.

With that in mind, I tend to upload files to the www side with Transmit, and then run a script from Terminal to assign permissions & ownership. There may be quirks with some CMSes that need directories to be set to one set of permissions while files need another…and if you’re factoring in Composer you may need a user that has specific rights to do certain things. Magento is a good example of this, and so is Kimai2:

www.kimai.org/documentation/installation.html
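As an illustration of the permissions-reset idea – the path here is a stand-in, not the real script – using the common 755-for-directories / 644-for-files split:

```shell
# Stand-in for a real docroot such as /srv/www/example.com/live
site="/tmp/perm-demo/live"
mkdir -p "$site/files"
touch "$site/index.php"

# Directories and files typically need different modes:
find "$site" -type d -exec chmod 755 {} +   # rwxr-xr-x for directories
find "$site" -type f -exec chmod 644 {} +   # rw-r--r-- for files
```

A real script would also chown everything to the site’s user and group, which needs root – hence running it from Terminal with sudo after uploading.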

To that end, you could consider compartmentalising things inside your /srv/www so you can choose who has access to what with fine-grained controls. For example: let’s say I run www.example.org on my server; I would follow my current namespace and set up the following directory:

/var/www/servers/example.org/www/

…and within there, I’d have some subdirectories: one for the live site, another for scripts, another for backup configs, another for logs etc. In the case of release-demo.textpattern.co, it looks like this:

$ ls -al
total 52
drwxr-xr-x 13 root     root     4096 Nov 21 15:00 .
drwxr-xr-x  7 root     root     4096 Jul  2  2024 ..
-rwxrwxr-x  1 root     sudo        0 Feb 21  2024 1f173340
drwxrwxr-x  3 www-data www-data 4096 Feb 21  2024 _well-known
drwxrwxr-x  2 root     sudo     4096 Feb 21  2024 backups
drwxrwxr-x  2 root     sudo     4096 Feb 21  2024 conf
drwxrwxr-x  2 www-data www-data 4096 Feb 21  2024 holding
drwxrwxr-x  8 www-data www-data 4096 Nov 21 15:00 live
drwxrwxr-x  2 root     sudo     4096 Feb 21  2024 logs
drwxrwxr-x  2 www-data www-data 4096 Feb 21  2024 maintenance
drwxrwxr-x  2 www-data www-data 4096 Feb 21  2024 private
drwxrwxr-x  2 root     sudo     4096 Feb 21  2024 scripts
drwxrwxr-x  3 www-data www-data 4096 Nov 21 14:50 staging
drwxrwxr-x  2 www-data www-data 4096 Feb 21  2024 temp

I can connect with Transmit (SFTP over ssh) as my non-root user that is part of the sudo and www-data groups, get my web files into live, and then run the relevant script from Terminal to set those files to how they should be. There’s a script for Textpattern, one for WordPress, and so on. If I upload any new contents to the live directory, I just re-run the script and it sorts itself out.

Last edited by gaekwad (2025-11-21 17:06:19)



Powered by FluxBB