Migrated articles not directly related to ISPmail

This commit is contained in:
Christoph Haas 2024-12-22 01:29:27 +01:00
parent e27ead3926
commit 24d91bbbbf
15 changed files with 853 additions and 4 deletions


@ -0,0 +1,152 @@
---
title: Backups with rsnapshot to external USB drives
slug: rsnapshot-and-usb-drives/
---
How long has it been since you last backed up your Linux system? Let me guess: you tried various backup systems and hate all of them? Let me show you how to use [rsnapshot](http://www.rsnapshot.org/) and an inexpensive external USB drive to back up precious data easily.
## Why?
I'm a sysadmin in my day job. How could I not care about half-decent backups at home? For years I have been running Bacula which has served me half well. An old AIT drive, a couple of tapes, my trusted Adaptec 2940 card and a PostgreSQL-driven Bacula installation worked moderately well but became increasingly cumbersome and fragile. The server (a retired desktop computer) crashed randomly during backups (some ancient SCSI component started to die). Or I forgot to change one of the three needed tapes (as I lacked a changer) in time so that the backup job timeout killed the running backup. Then I had to declare the tapes as free again because cancelling a backup doesn't make Bacula free the tapes. Or I played with PostgreSQL and inadvertently killed the director process. So maybe one backup every two weeks really ran through. And restoring files took minutes until the database finally got me the list of files. Finally one of my tapes got stuck in the drive and the drive refused to eject it. Of course the emergency ejection screw did nothing. Enough was enough. So I thought I could use an external USB drive instead of tapes but Bacula did not actually support that. An ancient shell script (vchanger) was supposed to emulate a tape changer with USB disk drives. That was too far from [KISS](http://en.wikipedia.org/wiki/KISS_principle) for me. What in theory sounded like decent hardware and software failed me.
## How?
I decided to spend 50€ (the price of one AIT tape) on a 500 GB external USB disk drive and learn about rsnapshot. And in no time I had a simple backup running where I didn't have to worry about a huge index database and could instantly access any backed-up files. What I did:
### Format, label and get the UUID
After plugging in the disk for the first time I ran “dmesg” to find out which device the disk was occupying:
```
[219991.641225] scsi 12:0:0:0: Direct-Access     Seagate  Portable         0130 PQ: 0 ANSI: 4
[219991.641765] sd 12:0:0:0: Attached scsi generic sg4 type 0
[219991.642462] sd 12:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[219991.643080] sd 12:0:0:0: [sdc] Write Protect is off
[219991.643083] sd 12:0:0:0: [sdc] Mode Sense: 2f 08 00 00
[219991.643085] sd 12:0:0:0: [sdc] Assuming drive cache: write through
[219991.644964] sd 12:0:0:0: [sdc] Assuming drive cache: write through
[219991.646599]  sdc: sdc1
[219991.694834] sd 12:0:0:0: [sdc] Assuming drive cache: write through
[219991.695212] sd 12:0:0:0: [sdc] Attached SCSI disk
```
So the disk was at /dev/sdc1. I formatted the disk using
```
mkfs.ext4 /dev/sdc1
```
and read the UUID (a unique identifier assigned to the filesystem when formatting) using
```
tune2fs -l /dev/sdc1 | grep UUID
```
which gave me
```
Filesystem UUID:          44449456-2b13-47df-bfcf-9c5eedf3b287
```
### Set up autofs
You will want your USB drive mounted automatically when you plug it in and use it. On a server there is no such plug-and-play by default. But the “[autofs](http://wiki.debian.org/AutoFs)” software does that well. Install it:
```
apt-get install autofs
```
Edit the /etc/auto.master file and add this line:
```
/var/autofs/removable /etc/auto.usbdrive timeout=2,sync,nodev,nosuid
```
Also create the /etc/auto.usbdrive file (that you just pointed to) and add this line to it:
```
usbdrive -fstype=auto    UUID=44449456-2b13-47df-bfcf-9c5eedf3b287
```
And finally restart the autofs process:
```
/etc/init.d/autofs restart
```
This does not yet mount the disk though. But if you change into the /var/autofs/removable/usbdrive directory then autofs will look for a disk with the given UUID and mount it there on the fly. Try it:
```
cd /var/autofs/removable/usbdrive
```
You may notice a short delay while autofs mounts the disk. Then you should find yourself on the mounted USB drive. Type “df .” to see the filesystem. It should look like:
```
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdc1             459G  198M  435G   1% /var/autofs/removable/usbdrive
```
### Install and configure rsnapshot
Install the rsnapshot package:
```
apt-get install rsnapshot
```
The default configuration file is located at /etc/rsnapshot.conf. Edit it. But beware that all elements have to be separated by actual tabs. I'm using Vim and my default settings had “expandtab” enabled which automatically turned my tabs into spaces. You don't want that.
In that file set “snapshot\_root” to point to your autofs directory:
```
snapshot_root   /var/autofs/removable/usbdrive
```
Unless you are happy with the default backup times you will want to change the “interval” section. Make sure that you edit /etc/cron.d/rsnapshot, too, or else rsnapshot won't run automatically at all. I found the intervals a bit tricky but the “man rsnapshot” manpage helped me understand them. You can use different names for the different frequencies of backups you run. But names like “hourly” or “daily” do not mean anything by themselves. rsnapshot doesn't associate “hourly” with 60 minutes, for example.
My configuration reads:
```
retain  daily   7
retain  weekly  4
```
This is much less magical than you might imagine. It just means that if you run “rsnapshot daily” then it will create backups called daily.0 to daily.6 and rotate the numbers on every rsnapshot run. You won't have more than 7 “daily” directories though, which is what you specify in the “retain” line. And you need to make sure that you call “rsnapshot daily” through a crontab. As you can imagine I'm running 7 daily backups (up to one week) and 4 weekly backups (up to one month). So my /etc/cron.d/rsnapshot file has these lines:
```
30 2      * * *        root    /usr/bin/rsnapshot daily
0  4      * * 1        root    /usr/bin/rsnapshot weekly
```
Are you unfamiliar with crontab entries? It's quite easy. You specify the times at which you want a certain command run. The columns stand for minute, hour, day of month, month and day of the week. So my daily job runs at 2:30 at night every day. And the weekly job runs at 4:00 at night every Monday. See “man 5 crontab” for a reference.
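For orientation, here are the same two entries again with a comment line naming each column (comment lines in /etc/cron.d files start with #):
```
# min  hour  day-of-month  month  day-of-week  user  command
30     2     *             *      *            root  /usr/bin/rsnapshot daily
0      4     *             *      1            root  /usr/bin/rsnapshot weekly
```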
Back to your /etc/rsnapshot.conf. Define which directories you want to back up and which you want to have excluded. This is what I use:
```
backup  /var/       myserver/
backup  /home/      myserver/
backup  /etc/       myserver/
exclude /home/*/tmp/
exclude /home/*/.local/share/Trash/
exclude /home/*/.cache/
exclude /var/lib/mysql/
exclude /var/lib/postgresql/
exclude /var/tmp/
exclude /var/log/
exclude /var/cache/apt/archives/
```
Of course you can decide to back up your entire server and just exclude evil mount points like /mnt, /dev, /sys, /media and /proc. But in case of a total emergency I'd rather reinstall Debian, install the packages and restore the files. I'm excluding the database directories for MySQL and PostgreSQL here because I cannot just copy the files but need to run a proper backup.
I also back up a list of installed Debian packages in case I need to reinstall:
```
backup_script   /usr/bin/dpkg --get-selections > packages.txt   installed-packages/
```
And I back up the databases:
```
backup_script   /usr/bin/mysqldump --opt --databases mailserver mysql | gzip > mysqldump    mysql/
```
I have the MySQL root password stored in /root/.my.cnf so I don't need to mention it here.
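If you want to do the same, a minimal /root/.my.cnf looks roughly like this (the password is obviously just a placeholder):
```
[client]
user     = root
password = put-your-mysql-root-password-here
```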
### Test the rsnapshot configuration
To make sure your configuration is correct run
```
rsnapshot configtest
```
Fix any errors until rsnapshot is happy and shows “Syntax OK”.
You can simulate a daily backup by running:
```
rsnapshot -t daily
```
It will print out the commands that rsnapshot would run.
### Restoring files
If you want to access the files that rsnapshot backed up, this is as simple as can be. In /var/autofs/removable/usbdrive/… you will find directories for hourly, daily and weekly backups. Since rsnapshot cleverly uses hardlinks, unchanged files barely take up any space. You can just browse around in the respective subdirectories and access your files.
That way you can even buy a second external USB disk drive and keep the first disk off-site in case your house burns down, you get burgled or your cat pees on the first disk.
### Off-site backup
Of course if you lost the one external disk then all your backups would be ruined. So I suggest you get a second external disk and swap them once a month. Depending on your paranoia you can lock them in your bank's deposit box or give one to your mother-in-law. As opposed to other backup solutions you can just use the second disk without much configuration. Make sure that autofs knows about its UUID and plug it in.
## Thanks
Kudos to Jochen R. who recommended rsnapshot to me.


@ -0,0 +1,55 @@
---
title: Debian packages are so old
slug: debian-packages-are-so-old/
---
Debian comes with tens of thousands of software packages that you can easily install on your system. But Debian only publishes a new “stable” release every 2-3 years. That creates the impression that Debian packages must always be up to 3 years old. And who wants to work with a three-year-old piece of software? Are the package maintainers lazy? Should I download my software from its own project website instead?
I feel obliged to briefly discuss this topic because it is a common source of trouble and surprises. And it may make you say…
_“What's the deal? Shouldn't we just use Debian packages where we find it comfortable and install any other software in a newer version? Just look at the shiny new Roundcube version. Oh, and rspamd has some new features to filter out spam mails. Why don't we use that? Even their developers urge me to take their updated packages.”_
So… why are the packages in the “stable” version of Debian so old? Actually that question is pretty funny when you think about it. “Stable” means “stay on the same versions if possible”. You might say that it means “old by design”. The idea is that your system will not surprise you with an unexpected update that breaks a service for many users on a Monday morning.
## Are stable packages more robust?
No. “stable” relates to the version. It does not necessarily mean “more robust”. The Debian developers do not pick a certain version that they think is especially good or bug-free. The version that made it into a new stable release of Debian was just there when it was about time to create a new release.
“_And what about security issues? Software developers usually fix issues in a newer version. How does Debian deal with that?_“
That is true. Developers hardly ever fix issues in old software. That's just not fun. Developers like to go forward. While they implement new features they also fix issues on their way. I totally understand that motivation. But that way they force users to accept their changes and new features even if the users just want the security issues fixed. And new versions come with new bugs.
Debian however always tries to _backport_ bug fixes. A Debian package maintainer will try to apply the fix to the stable version in Debian. So you get the benefit of staying on a certain version but at the same time get security updates. That makes it the best possible way for server administrators to deal with security issues. They can rely on their systems without having to fear breaking changes. Only when a new stable version arrives are they forced to consider upgrading.
See also the [Debian FAQ on security](https://www.debian.org/security/faq.en.html#oldversion).
## Give me newer software
Aside from _stable_ packages there are also _unstable_ packages. Let's take a look at [packages.debian.org/vlc](https://packages.debian.org/vlc) to see which package versions of the famous VLC software are available. This is just a screenshot so you will get different versions:
![Screenshot of packages.debian.org](images/debian-packages-too-old-pdo.png)
As you can see there are the different releases like “jessie” (very old), “stretch” (old), “buster” (the stable version when this was written), “bullseye” (the upcoming stable release) and “sid”. By default your system will install the current stable version. If you are on Debian “buster” then it will be 3.0.12-0.
If you wanted newer software you could just replace “buster” with “sid” in your /etc/apt/sources.list and upgrade your system. Done. That way every “apt upgrade” will give you the newest packages that Debian has to offer. And in many cases that is very close to the newest version of the actual (upstream) software. Just be aware that sid/unstable is a moving target. It is impossible to coordinate all updated packages. So it may happen that a new package cannot be properly installed, does not work or even breaks something else on your system. I know people who run sid/unstable on their laptops. But those are the kind of people who know a lot about Debian and do not despair when something breaks.
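As a sketch, assuming your sources.list uses the standard deb.debian.org mirror, the change is a one-liner followed by an upgrade:
```
# /etc/apt/sources.list - before
deb http://deb.debian.org/debian buster main

# /etc/apt/sources.list - after
deb http://deb.debian.org/debian sid main
```
Then run “apt update && apt full-upgrade” and your system moves to sid/unstable.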
As a middle ground you might use the “testing” variant. Only packages without any serious bugs will be copied to “testing” so you rule out the worst problems. Still this is not failsafe. There are also _backported_ packages that try to make newer software work on an otherwise “stable” system. But this may give you security issues that are not fixed as quickly as needed. Every choice comes with its own compromises.
So as you see, Debian's packages are not old at all. It's just that a fresh installation usually puts you on a “stable” release. And that's for a reason. On a server nobody cares if a piece of software is 2 years old. The focus there is on reliability. And “stable” gives you that.
## What about third-party APT repositories?
“_A lot of software is available directly from its developers through their own APT repositories. Can't I just use them?_“
Of course. I also pick specific pieces of software from third-party APT repositories. But those are very rare exceptions and I keep track of what software I get from there. Common problems are:
- unexpected updates that break your system
- broken packages because building proper Debian packages is not trivial
- uninstallable packages the first weeks after a new Debian release because the software was not properly tested with it
- conflicts with other packages in your system because a package did not properly specify that
- missing integration with other software (e.g. Apache configuration, logrotate or systemd)
- no security support like in Debian. To stay safe you are required to run every update. That in turn means adopting any breaking changes.
- files of third-party software should go to /opt but quite often they are scattered in the wrong places
- expired APT keys because the developers didn't care about expiry dates
However not all is bad. Some third-party packages are very good. Your mileage may vary though.



@ -0,0 +1,121 @@
---
title: Pipes and redirection
slug: linuxtip/pipes
---
Many system administrators seem to have problems with the concepts of pipes and redirection in a shell. A coworker recently asked me how to deal with log files and how to find the information he was looking for. This article tries to shed some light on it.
# Input / Output of shell commands
Many of the basic Linux/UNIX shell commands work in a similar way. Every command that you start from the shell gets three _channels_ assigned:
- STDIN (channel 0):
Where your command draws its input from. If you don't specify anything special this will be your keyboard input.
- STDOUT (channel 1):
Where your command's output is sent to. If you don't specify anything special the output is displayed in your shell.
- STDERR (channel 2):
If anything goes wrong the command will send error messages here. By default this output is also displayed in your shell.
Try it yourself. The most basic command that just passes everything through from STDIN to STDOUT is the cat command. Just open a shell, type cat and press Enter. Nothing seems to happen. But actually cat is waiting for input. Type something like “hello world”. Every time you press Enter after a line, cat will output your input. So you will get an echo of everything you type. To let cat know that you are done with the input, send it an end-of-file (EOF) signal by pressing Ctrl-D on an empty line.
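A sample session might look like this; every second line is cat echoing your input, and Ctrl-D on the empty line ends it:
```
$ cat
hello world
hello world
this is fun
this is fun
$
```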
# The pipe(line)
A more interesting application of the STDIN/STDOUT is to chain commands together. The output of the first command becomes the input of the second command. Imagine the following chain:
![Diagram of STDIN, STDOUT and STDERR](images/stdinouterr.png)
The contents of the file /var/log/syslog are sent (as input) to the grep command. grep will filter the stream for lines containing the word postfix and output those. Now the next grep picks up what was filtered and filters it further for the word removed. So now we have only lines containing both postfix and removed. And finally these lines are sent to wc -l which is a shell command counting the lines of its input. In my case it found 27 such lines and printed that number to my shell. In shell syntax this reads:
```
cat /var/log/syslog | grep 'postfix' | grep 'removed' | wc -l
```
The | character is called a _pipe_. A sequence of such commands joined together with pipes is called a _pipeline_.
# Useless use of cat
Actually cat is supposed to be used for con**cat**enating files. Like “cat file1 file2”. But some administrators abuse the command to put something into a pipeline. That's bad style and the reason why Randal L. Schwartz (a seasoned programmer) used to hand out virtual [“Useless use of cat” awards](http://partmaps.org/era/unix/award.html). Shell commands can usually take a filename as their last argument as input. So this would be right:
```
grep something /var/log/syslog | wc -l
```
This works but is considered bad style:
```
cat /var/log/syslog | grep something | wc
```
And since grep even has a “-c” option to count matching lines, the whole task can be done with grep alone:
```
grep -c something /var/log/syslog
```
# Using files as input and output
## Output (STDOUT)
Instead of using the keyboard for input and the screen for output you can use files. While
```
date
```
shows you the current date on the console you can use
```
date >currentdatefile
```
to redirect the _output_ of the command (STDOUT) to the file named currentdatefile.
## Input (STDIN)
This also works as _input_. The command
```
grep something
```
will search for the word something in what you type on your keyboard. But if you want to look for something in a file called somefile you could run
```
grep something <somefile
```
## Input and output
You can also redirect both _input and output_ in the same command. A politically incorrect way to copy a file would be
```
cat <oldfile >newfile
```
Of course you would use `cp` for that purpose in real life.
## Errors (STDERR)
So far this covers STDIN (`<`) and STDOUT (`>`) but you can also redirect the STDERR channel using `2>`. An example would be
```
grep something <somefile >resultfile 2>errorfile
```
## 2>&1 magic
Many admins stumble when it comes to redirecting one channel to another. Say you want to redirect both STDOUT and STDERR to the same file. Then you cannot do
```
grep something >resultfile 2>resultfile
```
Both redirections would open the resultfile independently and overwrite each other's output. Instead you need to do
```
grep something >resultfile 2>&1
```
This redirects STDOUT (1) to the resultfile and tells STDERR (2) to send the output to what STDOUT is set to (also resultfile).
What does _not_ work is this order:
```
grep something 2>&1 >resultfile
```
It may look right to us humans but in fact does not redirect STDERR to the resultfile. The explanation: the shell interprets this line from left to right. So first “2>&1” is evaluated which means “send STDERR to wherever STDOUT is _currently_ pointing”. As STDOUT is usually just printed to the shell, STDERR will also go to the shell. Next the shell finds “>resultfile” which sends STDOUT to the resultfile but does _not_ touch the previous destination of STDERR. So STDERR output will still end up in the shell.
# Interesting commands
- grep
Filters out lines with certain search words. “grep -v” searches for all lines that do _not_ contain the search word.
- sort
Sorts the output alphabetically (needs to wait for EOF before doing its work). “sort -n” sorts numerically. “sort -u” filters out duplicate lines.
- wc
Word count. Counts the bytes, words and lines. “wc -l” just outputs how many lines were counted.
- [awk](http://linux.die.net/man/1/awk)
A sophisticated language (similar to Perl) that can be used to do something with every line. `awk '{print $3}'` outputs the third column of every line.
- [sed](http://linux.die.net/man/1/sed) (stream editor)
A search/replace tool to change something in every line.
- less
Useful at the end of a pipe. Allows you to browse through the output one page at a time. (“less” refers to a similar but less capable tool called “more” that allowed you to see the first page and then press Space to view more.)
- head
Shows the first ten lines only. “head -50” shows the first 50 lines.
- tail
Shows the last ten lines only. “tail -50” shows the last 50 lines. “tail -f” follows a certain file.
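To tie a few of these together, here are some sketches based on the syslog example from above (the awk column number is just an assumption; it depends on your log format):
```
# count how many lines in the syslog mention postfix at all
grep -c 'postfix' /var/log/syslog

# list the program names of those lines, sorted and without duplicates
grep 'postfix' /var/log/syslog | awk '{print $5}' | sort -u

# browse the last 100 matching lines page by page
grep 'postfix' /var/log/syslog | tail -100 | less
```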


@ -0,0 +1,24 @@
---
title: Renaming multiple files
slug: linux/renaming-multiple-files/
---
If you need to rename a larger number of files following a certain pattern then you will long for an automated solution. The rename command helps you here; it is (at least on my Debian installation) part of the [Perl](http://www.perl.org/) installation. All you need to know are the basics of [regular expressions](http://www.regular-expressions.info/) to define how the renaming should happen.
Say you want to add a .old to every file in your current directory. At the end of each filename ($ matches the end) a “.old” will be appended:
```
rename 's/$/.old/' *
```
Or you want to make the filenames lowercase:
```
rename 'tr/A-Z/a-z/' *
```
Or you want to squeeze repeated characters into one:
```
rename 'tr/a-zA-Z//s' *
```
Or you have many JPEG files that look like “img0000154.jpg” but you want the leading zeros removed as you don't need them:
```
rename 's/img00000/img/' *.jpg
```
In fact you can use any Perl operator as an argument. The actual documentation for the s and y/tr operators is found in the perlop manpage.
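If you are unsure what a pattern will do, Debian's Perl rename also understands “-n” (show what would be renamed without actually doing it) and “-v” (verbose). For example:
```
rename -n 's/img00000/img/' *.jpg
```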


@ -0,0 +1,151 @@
---
title: How Squid ACLs work
slug: squid-acls
---
For less experienced Squid administrators the concept of ACLs can be confusing at first. But they offer a great way of controlling who is allowed to access which web pages when.
## ACLs
First you need to define certain criteria like _accesses from the marketing department_ or _accesses to google.com_ or _need to authenticate_. There are certain types of ACLs for that purpose. The complete list of ACLs can be found at [http://www.visolve.com/squid/squid24s1/access\_controls.php](http://www.visolve.com/squid/squid24s1/access_controls.php)
The syntax of an acl is:
```
acl name type definition1 definition2 definition3 ...
```
Examples:
```
acl accesses_to_google dstdomain .google.com
acl accesses_to_search_engines dstdomain .yahoo.com .google.com .vivisimo.com
acl accesses_from_marketing_department src 10.52.0.0/16
acl need_to_authenticate proxy_auth
```
You can also use lists of definitions that are stored in files on your hard disk. Let's assume you have a list of search engine URLs that you want to allow:
```
/etc/squid/search-engines-urls.txt:
.google.com
.yahoo.com
.altavista.com
.vivisimo.com
```
Then the ACL for that file would look like:
```
acl accessess_to_search_engines dstdomain "/etc/squid/search-engines-urls.txt"
```
The quotes are important here to tell Squid it needs to look up definitions in that file.
## Using the ACLs: http\_access
Defining the ACLs alone does not actually block anything; it's just a definition. ACLs can be used in various places of your squid.conf. The most useful feature is the http\_access statement. It works similarly to the way a firewall would handle rules. For each request that Squid receives it will look through all the http\_access statements in order until it finds a line that matches. It then either _allows_ or _denies_ the request depending on your setting. The remaining rules are ignored.
The general syntax of an http\_access line is:
```
http_access (allow|deny) acl1 acl2 acl3 ...
```
Example:
```
http_access allow accesses_from_admins
http_access deny accesses_to_porn_urls
http_access allow accesses_during_lunchtime
http_access deny all
```
This would allow accesses from the admins (whatever that ACL looks like; probably a **src** ACL pointing to the subnet that the admin workstations are in). For everyone else it denies accesses to porn URLs. Then it allows accesses from everyone to every web site during lunch time. And finally all other accesses are denied.
## Combining ACLs (AND/OR)
Often you need to combine ACLs. Let's say you want to allow access to google.com only for the back office. This combines two ACLs with an **AND**. It would look like this:
```
http_access allow accesses_to_google.com accesses_from_back_office
```
If you wanted to use an **OR** and say either accesses from the back office or accesses to google.com are allowed then the lines would look like this:
```
http_access allow accesses_to_google.com
http_access allow accesses_from_back_office
```
To summarize: **AND** means putting the conditions on one line. **OR** means using separate lines.
## Custom error pages (deny\_info)
By default when you deny access the user gets the error page that is stored in the _ERR\_ACCESS\_DENIED_ file. But luckily you can define your own custom error pages and display them when you deny certain accesses. A simple example:
```
acl google dstdomain google.com
deny_info error-google google
http_access deny google
```
Put an error page into the directory where the HTML files are stored (look for _error\_directory_ in your squid.conf) and name it _error-google_. If the user tries to access `www.google.com` the access is denied and your error page is shown.
Careful when you combine ACLs on a _http\_access_ line. Example:
```
acl google dstdomain google.com
acl admin src 10.0.5.16
deny_info error-google google
http_access deny admin google
```
This will deny access only for the user from the IP address 10.0.5.16 when `www.google.com` is accessed. As you can see I have combined the ACLs _admin_ and _google_. In such a combination the _last_ ACL on the line is taken into account for lookups of _deny\_info_. So it's important that you define a _deny\_info_ for the _google_ ACL.
## Re-Authentication control
Usually once a user is authenticated at the proxy they cannot “log out” and re-authenticate. The user has to close and re-open the browser windows to be able to re-login at the proxy. A simple configuration will probably look like this:
```
acl my_auth proxy_auth REQUIRED
http_access allow my_auth
http_access deny all
```
Now there is a tricky change that was introduced in Squid 2.5.10. It allows you to control when the user is prompted to authenticate. It is now possible to force the user to re-authenticate although the username and password are still correct. Example configuration:
```
acl my_auth proxy_auth REQUIRED
acl google dstdomain .google.com
http_access allow my_auth
http_access deny google my_auth
http_access deny all
```
In this case if the user requests `www.google.com` then the second _http\_access_ line matches and triggers re-authentication. Remember: it's always the last ACL on an _http\_access_ line that “matches”. If the matching ACL has to do with authentication, a re-authentication is triggered. If you didn't want that you would need to switch the order of ACLs so that you get `http_access deny my_auth google`.
You might also run into an **authentication loop** if you are not careful. Assume that you use LDAP group lookups and want to deny access based on an LDAP group (e.g. only members of a certain LDAP group are allowed to reach certain web sites). In this case you may trigger re-authentication although you don't intend to. This config is likely wrong for you:
```
acl ldap-auth proxy_auth REQUIRED
acl ldapgroup-allowed external LDAP_group PROXY_ALLOWED
http_access deny !ldap-auth
http_access deny !ldapgroup-allowed
http_access allow all
```
The second _http\_access_ line would force the user to re-authenticate time and again if he/she is not a member of the PROXY\_ALLOWED group. This is probably not what you want. You would rather deny access to non-members. So you need to rewrite this _http\_access_ line so that an ACL matches that has nothing to do with authentication. This is the correct example:
```
acl ldap-auth proxy_auth REQUIRED
acl ldapgroup-allowed external LDAP_group PROXY_ALLOWED
acl dummy src 0.0.0.0/0.0.0.0
http_access deny !ldap-auth
http_access deny !ldapgroup-allowed dummy
http_access allow all
```
This way the second _http\_access_ line still matches. But it's the _dummy_ ACL which is now last on the line. Since _dummy_ is a static ACL (that always matches) and has nothing to do with authentication you will find that the access is simply denied.


@ -0,0 +1,189 @@
---
title: Understanding the Logical Volume Manager (LVM)
slug: understanding-lvm/
---
## What is LVM and what does it offer?
LVM is a neat feature that some system administrators still shy away from. But it's really not that hard to learn. And these are some awesome features you get:
- Create a larger (virtual) disk from smaller disks (similar to RAID-0)
- Extend partitions without any downtime
- Add space by adding disks without any downtime
- Remove unused partitions and get back the space without fragmentation
- Take snapshots of partitions. You can try out things and just roll back. Or you can create consistent database backups without keeping the database down for long.
- Replace disks without losing data.
LVM is just a thin layer of software between the disks on your system and the partitions. On a Debian system you just “apt install lvm2” and you are ready to go.
## The essential concepts
Three terms are commonly used:
- PV (physical volume). A disk. (Or a partition.) Simple as that. An SSD. A hard drive. An SD card.
- VG (volume group). A group of disks. Take three 2 TiB disks and you get a 6 TiB volume group. (Think of it as a RAID-0 or a JBOD, _just a bunch of disks_.)
- LV (logical volume). A fraction of such a group. Just take 200 GiB of the volume group and put a file system on it.
A diagram is worth a thousand words so let's use an illustration:
![Diagram explaining components of the LVM](images/lvm-diagram.png)
### PVs the physical volumes
On the left you see your three hard disks. Your computer has found them and made them accessible as /dev/sda, /dev/sdb and /dev/sdc. Usually you would create partitions on them (e.g. using _cfdisk_), put a file system on the partitions (_mkfs_) and mount them into your file system (_mount /dev/sda1 /home_).
But this time we create a volume group from it. So first we turn the disks into PVs so that LVM recognizes them:
```
pvcreate /dev/sda
pvcreate /dev/sdb
pvcreate /dev/sdc
```
All this does is write a little meta-data onto each disk.
You can use the “pvs” command to list the PVs you have just created.
When you take a close look at a PV (for example with the “pvdisplay” command) you will notice terms like “PE size”, “Free PE” or “Allocated PE”. PE is short for _physical extent_. Such an extent is the smallest data size that LVM handles. By default it's set to 4 MiB. That means you can grow or shrink a logical volume only in steps of 4 MiB. Using “lvextend” you can specify the number of extents using “-l …” (lowercase L) instead of the size “-L …” (uppercase L). Further down on this page you will find a tip on replacing a small hard disk with a larger one. That essentially moves the _extents_ from one disk to another.
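As a small illustration (using a hypothetical LV name), with the default 4 MiB extents these two commands grow a logical volume by the same amount:
```
lvextend -L +400M /dev/vg1/somelv   # grow by 400 MiB
lvextend -l +100  /dev/vg1/somelv   # grow by 100 extents of 4 MiB each = 400 MiB
```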
### VG the volume group
Next we create a new volume group (VG) from these three disks:
```
vgcreate vg1 /dev/sda /dev/sdb /dev/sdc
```
Now you have a VG called “vg1” consisting of the three disks. The “vgs” command shows you an overview:
```
VG #PV #LV #SN Attr VSize VFree
vg1 3 0 0 wz--n- <6t <6t
```
So you see that there is one VG called “vg1” which consists of 3 PVs (disks). And so far no LVs are using it. We will get to that in a moment. Its size is roughly 6 TiB and all of that is free to use.
Using the “vgdisplay” command shows you even more information about it.
### LVs the logical volumes
The final step is to bite chunks out of the VG. Check out the diagram above. We want a partition for “/home” with a size of 100 GiB. So the command to create your LV is:
```
lvcreate -n lvhome -L 100G vg1
```
Pretty simple. The “-n” parameter sets the name of the new LV. “-L” is the size you want to use. And “vg1” is the name of the VG you want to cut a piece out of.
The “lvs” command will show you an overview of your LVs.
```
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lvhome vg1 -wi-ao---- 100,00g
```
There is also an “lvdisplay” command showing more verbose information about the LV.
## Put a file system on the LV
Finally we have something to put a file system on. You have probably used partitions on devices like /dev/sda1 before. But now you are using LVM. And the device for your “lvhome” is “/dev/vg1/lvhome”. Right, it's “/dev” + VG + LV. You could also use “/dev/mapper/vg1-lvhome”.
Put an EXT4 file system onto it:
```
mkfs.ext4 /dev/vg1/lvhome
```
And mount that file system:
```
mount /dev/vg1/lvhome /home
```
## To sum it up
There are PVs (disks), VGs (groups of disk) and LVs (fractions of a VG).
To use LVM first turn disks into PVs (pvcreate), then join them to a VG (vgcreate), then take a fraction of that (lvcreate) and finally create a file system on that (/dev/vgfoo/lvbar).
Every part has a _list_ and a _display_ command. These are:
- PV -> pvs, pvdisplay
- VG -> vgs, vgdisplay
- LV -> lvs, lvdisplay
## Cool everyday tricks with LVM
You may not be impressed yet. LVM just made your life more complicated. Of course there is a reason for that, because now the fun part begins. These are some common features:
### Extend a partition without any downtime
Oh, no. Your /home partition is 99% full? With LVM this is easy to solve. If you have free space on your VG (check with “vgs”) you can just extend the LV. No need to unmount anything. No downtime. Let's give the partition 20 GiB more space:
```
lvextend -L +20G -r /dev/vg1/lvhome
```
The “-r” parameter not only extends the LV but also the file system that lives on top. That allows you to enlarge a partition without taking it offline. This is the neatest feature that LVM delivers in my opinion.
If your _volume group_ is also out of space then you could add another disk (_physical volume_) and use “pvcreate” and “vgextend” to enlarge it.
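Assuming the new disk shows up as /dev/sdd, that would be:
```
pvcreate /dev/sdd
vgextend vg1 /dev/sdd
```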
### Replace a disk with a larger disk
No problem either. Let's assume that one of your disks (_physical volumes_), /dev/sda, was 2 TB and you just bought a shiny new 10 TB disk (found at /dev/sdg). Now you want to move the data over to the new disk. As usual you need to turn /dev/sdg into a PV:
```
pvcreate /dev/sdg
```
And now you can just move all blocks (aka _physical extents_, see above) to the new disk:
```
pvmove /dev/sda /dev/sdg
```
And finally you can remove the PV from your VG:
```
vgreduce vg1 /dev/sda
```
By the way: once a disk is a PV it doesn't matter whether your system finds it at /dev/sdb, /dev/sdc or any other device. As long as all the necessary PVs are found somewhere the VG will work. However, if your boot sector was written to /dev/sda you may need to re-install it when you replace that disk.
### Creating snapshots for backups and profit
A snapshot is like taking a photo with your camera. You get an image of a situation at a certain point in time. Reality will continue to alter the world but your photo will always show that specific moment. You can still take a pen and draw something on the photo, so it's not read-only. (It used to be in LVM 1.x.) I commonly use this technique to get consistent database snapshots of large MySQL/MariaDB databases.
Let's just say that you have a huge 1 TiB LV called “lvmysql” that is mounted to /var/lib/mysql. Running a backup of those files takes an hour. And while you back up one file after another, the SQL database keeps accessing the various files and making arbitrary changes. Some files in your backup would be from minute 5 while others might be from minute 30. Such a backup is unusable garbage.
Now let's use snapshots instead. Briefly stop the database and take a snapshot:
```
lvcreate -n mysnap -L 20G -s /dev/vg1/lvmysql
```
Note that we use “lvcreate” to take the /dev/vg1/lvmysql LV and create a new /dev/vg1/mysnap LV. Just that the latter is a snapshot.
You can start your database again. With a bit of luck this has just taken a few seconds. And now you have a perfectly consistent copy of the MySQL data directory. You can mount this snapshot anywhere in your file system:
```
mount /dev/vg1/mysnap /mnt/mysnap
```
Now you can take your time and just make a backup of /mnt/mysnap. It wont change.
However the magic comes at a price. Have you noticed the “-L 20G” parameter? That does not mean that the snapshot has a size of 20 GiB. After all we started with a 1 TiB LV. So why did we specify a size at all?
The answer lies in the way that snapshots work. Once you start MySQL again the data directory changes. LVM needs to provide you with your snapshot but at the same time allow MySQL to continue doing its work. That works through a mechanism called _copy-on-write_. If the original LV never changed, it would stay identical to the snapshot. If however the files on the LV are changed then LVM needs to keep a copy of the snapshotted state. The more changes you make, the more space for those copies you will need. And that's what the “-L 20G” means. It gives your snapshot a 20 GiB storage area to track the changes.
The size depends on how much change you expect while you want to use the snapshot. If the backup takes an hour and the database typically changes 100 GiB during that period then you should give the snapshot at least that much space. The “lvs” command shows you how much of that space has been used already. So you should keep the snapshot no longer than needed for a backup. Should you hit the 100% mark then your snapshot becomes unusable and all you can do is remove it. That won't affect the original LV fortunately. So you won't break your database.
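So once your backup has finished, unmount and remove the snapshot again, roughly like this:
```
umount /mnt/mysnap
lvremove /dev/vg1/mysnap
```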
Another use case of snapshots would be to try out things on the snapshot. And if you like what you did then merge the changes back into the original LV. That can be done using “lvconvert --merge /dev/vg1/mysnap”. But I suggest you consult the man page of “lvconvert” before you do that.
### Booting from an LV
Using LVs for all partitions used to be a problem in the past. Debian created an ext2 partition for /boot to make sure the system boots. This has been obsolete for quite a while. You can use LVs everywhere and Debian will happily boot the system.
### RAID
By default LVM uses RAID-0. That is the [RAID](https://de.wikipedia.org/wiki/RAID) level that makes you lose everything if a single disk fails. LVM supports RAID levels 1 and 5 though. Besides the LVM man pages I mainly found [this web page](https://blog.programster.org/create-raid-with-lvm) describing it.


@ -0,0 +1,26 @@
---
title: Updating the BIOS on Lenovo laptops from Linux using a USB flash stick
slug: article/updating-the-bios-on-lenovo-laptops-from-linux-using-a-usb-flash-stick/
---
Aren't hardware manufacturers funny? They either require an old-fashioned operating system (Windows) or museum hardware (floppy drives) to update a BIOS. Apparently they never learn and are instead busy adding features like DRM and UEFI to make our lives even more miserable.
However updating the BIOS on my Lenovo X230 laptop was surprisingly easy once I learned how to do that (kudos to a G+ post I stumbled upon).
1. Go to [support.lenovo.com](http://support.lenovo.com/) (or better use a search engine because the Lenovo website is beautiful but technically pretty broken and slow) and search for the BIOS upgrade of your laptop model.
2. Download the most recent ISO file. Look for “BIOS bootable update CD”.
3. Convert the ISO image using the geteltorito utility (if you don't have it: apt-get install genisoimage).
Example:
`geteltorito -o bios.img g2uj18us.iso`
4. Insert any USB stick into your laptop that you have lying around. The image file is just 50 MB in size so even USB sticks with low capacity will work. Keep in mind that the stick will be completely overwritten.
5. If you are in a graphical environment then unmount the USB stick again.
6. Find out the device name of the stick. Open a terminal window and enter “dmesg | tail”. You are looking for something like: `[ 2101.614860] sd 6:0:0:0: [sdb] Attached SCSI disk`
The “sdb” tells you that your USB stick is available at /dev/sdb. Don't just assume it's sdb. If it's actually another device on your laptop then you will destroy your data.
7. Copy the image to the USB stick:
`dd if=bios.img of=/dev/sdb bs=1M`
8. Reboot your laptop.
9. After the Lenovo logo appears press ENTER.
10. Press F12 to make your laptop boot from something other than your hard disk.
11. Select the USB stick.
12. Make sure your laptop has its power supply plugged in. (It will refuse to update otherwise.)
13. Follow the instructions.
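
To verify that the update actually worked you can compare the BIOS version before and after flashing, for example with `dmidecode -s bios-version` in a terminal.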