Jan 08, 2023

Pushing git-notes to one specific remote via pre-push hook

I've recently started using git notes as a good way to track metadata associated with the code that's likely of no interest to anyone else, and would only litter git-log if it was committed and tracked in the repo as some .txt file.

But that doesn't mean that they shouldn't be backed-up, shared and merged between different places where you yourself work on and use that code from.

Since I have a git mirror on my own host (as you do with distributed scm), and always clone from there first, adding other "free internet service" remotes like github, codeberg, etc later, it seems like a natural place to push such notes to, as you'd always pull them from there with the repo.

That is not straightforward to configure git to do on a basic "git push" however, because the "push" operation there works with an "[<repository> [<refspec>...]]" destination concept. I.e. you give it a single remote for where to push, and any number of specific things to update as "<src>[:<dst>]" refspecs.

So when "git push" is configured with "origin" having multiple "url =" lines under it in .git/config file (like home-url + github + codeberg), you don't get to specify "push main+notes to url-A, but only main to url-B" - all repo URLs get same refs, as they are under same remote.

Obvious fix conceptually is to run different "git push" commands to different remotes, but that's a hassle, and even if stored as an alias, it'd clash with muscle memory that'll keep typing "git push" out of habit.

Alternative is to maybe override git-push command itself with some alias, but git explicitly does not allow that, probably for good reasons, so that's out as well.

git-push does run hooks however, and those can do the extra pushes depending on the URL, so that's the easy solution I found for this:

#!/bin/dash
set -e

notes_remote=home
notes_url=$(git remote get-url "$notes_remote")
notes_ref=$(git notes get-ref)

push_remote=$1 push_url=$2
[ "$push_url" = "$notes_url" ] || exit 0

master_push= master_oid=$(git rev-parse master)
while read local_ref local_oid remote_ref remote_oid; do
  [ "$local_oid" = "$master_oid" ] && master_push=t && break || continue
done
[ -n "$master_push" ] || exit 0

echo "--- notes-push [$notes_remote]: start -> $notes_ref ---"
git push --no-verify "$notes_remote" "$notes_ref"
echo "--- notes-push [$notes_remote]: success ---"

That's a "pre-push" hook, which pushes notes-branch only to "home" remote, when running a normal "git push" command to a "master" branch (to be replaced with "main" in some repos).

Idea is to only augment "normal" git-push, and not bother running this on any weirder updates or tweaks, keeping git-notes generally in sync between different places where you can use them, with no cognitive overhead in day-to-day usage.
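
Installing it is the usual one-off per clone (path to the saved script here is hypothetical):

install -m755 ~/bin/git-notes-pre-push .git/hooks/pre-push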

As a side-note - while these notes are normally attached to commits, for something more global like "my todo-list for this project" not tied to specific ref that way, it's easy to attach it to some descriptive tag like "todo", and use with e.g. git notes edit todo, and track in the repo as well.
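
E.g. something like this (tag name is arbitrary, "home" remote as in the hook above):

git tag todo                 # one-time: any object to hang the note onto
git notes edit todo          # opens $EDITOR on a note attached to it
git notes show todo          # print it
git push home "$(git notes get-ref)"   # or let the pre-push hook do this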

Jan 04, 2023

FIDO2 hardware password/secret management

Passwords are bad, and they leak, but services are slow to adopt other auth methods - even TOTP is better, and even for 1-factor-auth (e.g. using oathtool).

But even without passwords, there are plenty of other easy-to-forget secrets to store in a big pile somewhere, like same TOTP seed values, API keys, government ID numbers, card PINs and financial info, security answers, encryption passphrases, and many other things.

  • Easiest thing is to dump all these into a big .txt file somewhere.

    Problem: any malware, accidental or deliberate copy ("evil maid"), or even a screen-scrape taken at an unfortunate time exposes everything!

    And these all seem to be reasonably common threats/issues.

  • Next best thing - store that file in some encrypted form.

    Even short-lived compromise can get the whole list along with the key from memory, and otherwise it's still reasonably easy to leak both key/passphrase and ciphertext over time separately, esp. with long-lived keys.

    It's also all on-screen when opened, can be exposed/scraped from there, but still an improvement over pure plaintext, at the expense of some added key-management hassle.

  • Encrypt whole file, but also have every actual secret in there encrypted separately, with unique key for each one:

    banks:
    
      Apex Finance:
        url: online.apex-finance.com
        login: jmastodon
        pw: fhd.eop0.aE6H/VZc36ZPM5w+jMmI
        email: reg.apexfinance.vwuk@jmastodon.me
    
        current visa card:
          name: JOHN MASTODON
          no: 4012888888881881
          cvv2: fhd.KCaP.QHai
          expires: 01/28
          pin: fhd.y6En.tVMHWW+C
    
      Team Overdraft: ...
    
    google account:
      -- note: FIDO2 2FA required --
      login/email: j.x.mastodon789@gmail.com
      pw: fhd.PNgg.HdKpOLE2b3DejycUGQO35RrtiA==
      recovery email: reg.google.ce21@jmastodon.me
      API private key: fhd.pxdw.QOQrvLsCcLR1X275/Pn6LBWl72uwbXoo/YiY
    
    ...
    

    In this case, even relatively long-lived malware/compromise can only sniff secrets that were used during that time, and it's mostly fine if this ends up being opened and scrolled-through on a public video stream or some app screencaps it by accident (or not) - all important secrets are in encrypted "fhd.XXX.YYY" form.

    Downside of course is even more key management burden here, since simply storing all these unique keys in a config file or a list won't do, as it'll end up being equivalent to "encrypted file + key" case against leaks or machine compromise.

  • Storing encryption keys defeats the purpose of the whole thing, typing them is insecure vs various keyloggers, and there's also way too many to remember!

    FIDO2 USB token on a keychain

    Solution: get some cheap FIDO2 hardware key to do all key-management for you, and then just keep it physically secure, i.e. put it on the keychain.

    This does not require remembering anything (except maybe a single PIN, if you set one, and can remember it reliably within 8 attempts), is reasonably safe against all common digital threats, and pretty much as secure against physical ones as anything can be (assuming rubber-hose cryptanalysis works uniformly well), if not more secure (e.g. permanent PIN attempts lockout).


Given the recent push for FIDO2 WebAuthn-compatible passkeys by major megacorps (Google/Apple/MS), and that you'd probably want to have such FIDO2 token for SSH keys and simple+secure full disk encryption anyway, there seems to be no good reason not to use it for securing passwords as well, in a much better way than with any memorized or stored-in-a-file schemes for secrets/keys, as outlined above.

There's no go-to way to do this yet (afaik), but all tools to implement it exist.

Filippo Valsorda described one way to do it via plugin for a common "age" encryption tool in "My age+YubiKeys Password Management Solution" blog post, using Yubikey-specific PIV-smartcard capability (present in some of Yubico tokens), and a shell script to create separate per-service encrypted files.

I did it a bit differently, with secrets stored alongside non-secret notes and other info/metadata, and with a common FIDO2-standard hmac-secret extension (supported by pretty much all such devices, I think?), used in the following way:

  • Store ciphertext as a "fhd.y6En.tVMHWW+C" string, which is:

    "fhd." || base64(salt) || "." || base64(wrapped-secret)
    

    And keep those in the common list of various important info (also encrypted), to view/edit with the usual emacs.

  • When specific secret or password is needed, point to it and press "copy decrypted" hotkey (as implemented by fhd-crypt in my emacs).

  • Parsing that "fhd. ..." string gets "y6En" salt value, and it is sent to USB/NFC token in the assertion operation (same as fido2-assert cli tool runs).

  • Hardware token user-presence/verification requires you to physically touch button on the device (or drop it onto NFC pad), and maybe also enter a PIN or pass whatever biometric check, depending on device and its configuration (see fido2-token tool for that).

  • Token/device returns "hmac-sha256(salt, key=secret-generated-on-device)", unique and unguessable for that salt value, which is then used to decrypt "tVMHWW+C" part of the fhd-string into original "secret" string (via simple XOR).

  • Resulting "secret" value is copied into clipboard, to use wherever it was needed.

This ensures that every single secret string in such password-list is only decryptable separately, also demanding a separate physical verification procedure, very visible and difficult to do unintentionally, same as with WebAuthn.

Only actual secret key in this case resides on a FIDO2 device, and is infeasible to extract from there, for any common threat model at least.

Encryption/wrapping of secret-string to fhd-string above works in roughly same way - generate salt value, send to token, get back HMAC and XOR it with the secret, cutting result down to that secret-string length.

Last part introduces a small info-leak - secret length - but I don't think that should be an issue in practice (always use long random passwords), while producing nicer short ciphertexts.
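
As a rough illustration of that wrap/unwrap logic in python - with an hmac computed in software standing in for the token round-trip, and made-up key/salt sizes - the whole scheme is something like:

import os, hmac, hashlib, base64

b64e = lambda v: base64.b64encode(v).decode().rstrip('=')
b64d = lambda v: base64.b64decode(v + '='*(-len(v) % 4))

def token_hmac(salt, key=b'secret-generated-on-device'):
  # Stand-in for the fido2 assertion op - only the token knows the key.
  # Note: hmac-secret salts are 32B, so a short salt like this one is
  #  presumably hashed/padded into one by the actual tool.
  return hmac.new(key, salt, hashlib.sha256).digest()

def wrap(secret):
  salt = os.urandom(3)
  pad = token_hmac(salt)
  # XOR-pad gets cut down to secret length (which also caps it at 32B here)
  ct = bytes(a ^ b for a, b in zip(secret.encode(), pad))
  return f'fhd.{b64e(salt)}.{b64e(ct)}'

def unwrap(fhd_str):
  _, salt, ct = fhd_str.split('.')
  pad = token_hmac(b64d(salt))
  return bytes(a ^ b for a, b in zip(b64d(ct), pad)).decode()

s = wrap('hunter2')
print(s, '->', unwrap(s)) # fhd.XXXX.YYYYYYYYYY -> hunter2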

There are also still some issues with using these physical dongles in a compromised environment, which can lie about what is being authorized by the device, as they usually have no way to display that, but it's still a big improvement, and can be somewhat mitigated by using multiple tokens for different purposes.


I've wrapped all these crypto bits into a simple C fido2-hmac-desalinate tool here:

https://github.com/mk-fg/fgtk#hdr-fido2-hmac-desalinate.c

Which needs "Relying Party ID" value to compile - basically an unique hostname that ideally won't be used for anything else with that authenticator (e.g. "token1.fhd.jmastodon.me" for some owned domain name), which is itself not a secret of any kind.

FIDO2 "credential" can be generated and stored on device first, using cli tools that come with libfido2, for example:

% fido2-token -L
% fido2-cred -M -rh -i cred.req.txt -o cred.info.txt /dev/hidraw5 eddsa

Such credential would work well on different machines with authenticators that support FIDO2 Discoverable Credentials (aka Resident Keys), with the HMAC key stored on the same portable authenticator. But for simpler tokens that don't support those and have no storage, a static credential-id value (returned by the fido2-cred tool without the "-r" option) also needs to be built-in via the -DFHD_CID= compile-time parameter (and is also not a secret).

(technically that last "credential-id value" has device-master-key-wrapped HMAC-key in it, but it's only possible to extract from there by the device itself, and it's never passed or exposed anywhere in plaintext at any point)
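
So the build boils down to something like this (macro names here are a guess, with only -DFHD_CID mentioned above - check the README link for the authoritative invocation):

cc -O2 -lfido2 -o fhd fido2-hmac-desalinate.c \
  -DFHD_RPID='"token1.fhd.jmastodon.me"' \
  -DFHD_CID='"<base64-credential-id-from-fido2-cred>"'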

On the User Interface side, I use Emacs text editor to open/edit password-list (also transparently-encrypted/decrypted using ghg tool), and get encrypted stuff from it just by pointing at the needed secret and pushing the hotkey to copy its decrypted value, implemented by fhd-crypt routine here:

https://github.com/mk-fg/emacs-setup/blob/21479cc/core/fg_sec.el#L178-L281

(also, with universal-arg, fhd-crypt encrypts/decrypts and replaces pointed-at or region-selected thing in-place, instead of copying into clipboard)

Separate binary built against common libfido2 ensures that it's easy to use such secret strings in any other way too, or fall back to manually decoding them via cli, if necessary.

At least until the push for passkeys makes no-password WebAuthn ubiquitous enough, this seems to be the most convenient and secure way of password management for me, but auth passwords aren't the only secrets, so it likely will be useful way beyond that point as well.


One thing not mentioned above is (important!) backups for that secret-file. I.e. what if the FIDO2 token in question gets broken or lost? And how to keep such backup up-to-date?

My initial simple fix is having a shell script that does basically this:

#!/bin/bash
set -eo pipefail
echo "### Paste new entry, ^D after last line to end, ^C to cancel"
echo "### Make sure to include some context for it - headers at least"
chunk=$(ghg -eo -r some-public-key | base64 -w80)
echo -e "--- entry [ $(date -Is) ]\n${chunk}\n--- end\n" >>backup.log

Then on any updates, run this script and paste the updated plaintext secret-block into it, before encrypting all secrets in that block for good.

It does one-way public-key encryption (using ghg tool, but common age or GnuPG will work just as well), to store those encrypted updates, which can then be safely backed-up alongside the main (also encrypted) list of secrets, and everything can be restored from these using corresponding secure private key (ideally not exposed or used anywhere for anything outside of such fallback-recovery purposes).
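
Same script with age instead of ghg would differ by just that one pipeline line, e.g. (recipient string is a placeholder; -a armors the output, so the extra base64 step isn't needed):

chunk=$(age -a -r age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p)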

Update 2024-02-21: secret-token-backup wrapper/tool is a more modern replacement for that, which backs stuff up automatically, and can also be used for safely getting specific secret out of there using other PIV yubikeys (e.g. YK Nano stuck in a laptop's USB slot).


And one more aside - since plugging devices into USB rarely aligns correctly on the first try (USB curse), is somewhat tedious, and can potentially wear-out contacts or snap-off the device, I've grabbed a cheap PC/SC-compatible ACR122U NFC reader from aliexpress, and have been using it instead of a USB interface, as modern FIDO2 tokens tend to support NFC for use with smartphones.

It works great for this password-management purpose - placing the key on the NFC pad works instead of the touch presence-check with USB (at least with cheap Yubico Security Key devices), with some short (<1 minute) timeout, after which a token left on the pad stops responding (with ERR_PIN), to avoid misuse if one forgets to remove it.

libfido2 supports the PC/SC interface, and the PCSC lite project providing it on typical linux distros seems to support pretty much all NFC readers in existence.

libfido2 is in turn used by systemd, OpenSSH, pam-u2f, its fido2-token/cred/assert cli, my fido2-hmac-desalinate password-management hack above, and many other tools. So through it, all these projects automatically have easy and ubiquitous NFC support too.

(libfido2 also supports linux kernel AF_NFC interface in addition to PC/SC one, which works for much narrower selection of card-readers implemented by in-kernel drivers, so PC/SC might be easier to use, but kernel interface doesn't need an extra pcscd dependency, if works for your specific reader)

Notable things that don't use that lib and have issues with NFC seem to be browsers - both Firefox and Chromium on desktop (and their forks, see e.g. mozbug-1669870) - which is a shame, but hopefully will be fixed there eventually.

Nov 30, 2022

How to reliably set MTU on a weird (batman-adv) interface

I like and use B.A.T.M.A.N. (batman-adv) mesh-networking protocol on the LAN, to not worry about how to connect local linuxy things over NICs and WiFi links into one shared network, and been using it for quite a few years now.

Everything sensitive should run over ssh/wg links anyway (or ipsec before wg was a thing), so it's not a problem to have any-to-any access in a sane environment.

But due to extra frame headers, batman-adv benefits from either lower MTU on the overlay interface or higher MTU on all interfaces which it runs over, to avoid fragmentation. Instead of remembering to tweak all other interfaces, I think it's easier to only bother with one batman-adv iface on each machine, but somehow that proved to be a surprising challenge.

MTU on an iface like "bat0" jumps on its own when slave interfaces in it change state, so obvious places to set it, like networkd .network/.netdev files or random oneshot boot scripts, don't work - it can/will randomly change later (usually immediately after these things set it on boot), and you'll only notice when ssh or other tcp conns start to hang mid-session.

One somewhat-reliable and sticky workaround for such issues is to mangle TCP MSS in the firewall (e.g. nftables), so that MTU changes are not an issue for almost all connections, but that still leaves room for issues and fragmentation in a few non-TCP things, and is obviously a hack - the wrong MTU value is still there.
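
E.g. the usual clamp-MSS-to-PMTU rule in nftables (assuming an inet-family "filter" table with a "forward" chain already exists):

nft add rule inet filter forward tcp flags syn tcp option maxseg size set rt mtu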

After experimenting with various "try to set mtu couple times after delay", "wait for iface state and routes then set mtu" and such half-measures - none of which worked reliably for that odd interface - here's what I ended up with:

[Unit]
Wants=network.target
After=network.target
Before=network-online.target

StartLimitBurst=4
StartLimitIntervalSec=3min

[Service]
Type=exec
Environment=IF=bat0 MTU=1440
ExecStartPre=/usr/lib/systemd/systemd-networkd-wait-online -qi ${IF}:off --timeout 30
ExecStart=bash -c 'rl=0 rl_win=100 rl_max=20 rx=" mtu [0-9]+ "; \
  while read ev; do [[ "$ev" =~ $rx ]] || continue; \
    printf -v ts "%%(%%s)T" -1; ((ts-=ts%%rl_win)); ((rld=++rl-ts)); \
    [[ $rld -gt $rl_max ]] && exit 59 || [[ $rld -lt 0 ]] && rl=ts; \
    ip link set dev $IF mtu $MTU || break; \
  done < <(ip -o link show dev $IF; exec stdbuf -oL ip -o monitor link dev $IF)'

Restart=on-success
RestartSec=8

[Install]
WantedBy=multi-user.target

It's a "F this sh*t" approach of "anytime you see mtu changing, change it back immediately", which seem to be the only thing that works reliably so far.

Couple weird things in there on top of "ip monitor" loop are:

  • systemd-networkd-wait-online -qi ${IF}:off --timeout 30

    Waits for interface to appear for some time before either restarting the .service, or failing when StartLimitBurst= is reached.

    The :off networkd "operational status" (see networkctl(1)) is the earliest one, and enough for "ip monitor" to latch onto interface, so good enough here.

  • rl=0 rl_win=100 rl_max=20 and couple lines with exit 59 on it.

    This is rate-limiting in case something else decides to manage the interface's MTU in a similar "persistent" way (at last!), to avoid pulling the thing back-and-forth endlessly in a loop, or (over-)reacting to interface state flapping weirdly.

    I.e. stop service with failure on >20 relevant events within 100s.

  • Restart=on-success to only restart on "break" when "ip link set" fails if interface goes away, limited by StartLimit*= options to also fail eventually if it does not (re-)appear, or if that operation fails consistently for some other reason.

With various overlay tunnels becoming commonplace lately, MTU seems to be set incorrectly by default about 80% of the time, and I almost feel like I'm done fighting various tools with their way of setting it guessed/hidden somewhere (if implemented at all), and should just extend this loop into a more generic system-wide "mtud.service" that'd match interfaces by wildcard and enforce some admin-configured MTU values, regardless of what whatever creates them (wrongly) thinks might be the right value.

As seems to be common with networking stuff - you either centralize configuration like that on a system, or deal with a constant never-ending stream of app failures. Another good example here is in-app ACLs, connection settings and security measures vs system firewalls and wg tunnels, with only the latter actually working, and the former proven to be an utter disaster for decades now.

Nov 25, 2022

Information as a disaggregated buffet instead of firehose or a trough

Following information sources on the internet has long been compared to "drinking from a firehose", and trying to "keep up" with that is how I think people end up with hundreds of tabs in some kind of backlog, overflowing inboxes, feeds, podcast queues, and feeling overwhelmed in general.

Main problem for me with that (I think) was aggregation - I've used web-based feed reader in the past, aggregated feeds from slashdot/reddit/hn/lobsters (at different times), and followed "influencers" on twitter to get personally-curated feeds from there in one bundle - and none of it really worked well.

For example, even when following podcast feeds, you end up with a backlog (of things to listen to) that is hard to catch-up with - naturally - but it also adds increasingly more tracking and balancing issues, as simply picking things in order will prioritize high-volume stuff, and sometimes you'll want to skip the queue and now track "exceptions", while "no-brainer" picks pull increasingly-old and irrelevant items first.

Same thing tends to happen with any bundle of feeds that you try to "follow", which always ends up being overcrowded in my experience, and while you can "declare bankruptcy" by resetting the follow-cursor to "now", skipping the backlog, that doesn't solve the fundamental issue with this "following" model - you'll just fall behind again, likely almost immediately - the approach itself is wrong/broken/misguided.

Obvious "fix" might be to curate feeds better so that you can catch-up, but if your interests are broad enough and changing, that's rarely an option, as sources tend to have their own "take it or leave it" flow rate, and narrowing scope to only a selection that you can fully follow is silly and unrepresentative for those interests, esp. if some of them are inconsistent or drown out others, even while being generally low-noise.

Easy workable approach, that seems to avoid all issues that I know of, and has worked for me so far, goes something like this:

  • Bookmark all sources you find interesting individually.
  • When you want some podcast to listen-to, catch-up on news in some area, or just an interesting thing to read idly - remember it - as in "oh, would be nice to read/know-about/listen-to this right now".
  • Then pull-out a bookmark, and pick whatever is interesting and most relevant there, not necessarily latest or "next" item in any way.

This removes the mental burden of tracking and curating these sources, balancing high-traffic with more rarefied ones, re-weighting stuff according to your current interests, etc - and you don't lose out on anything either!

I.e. with something relevant to my current interests I'll remember to go back to it for every update, but stuff that is getting noisy or falling off from that sphere, or just no longer entertaining or memorable, will naturally slip my mind more and more often, and eventually bookmark itself can be dropped as unused.

Things that I was kinda afraid-of with such model before - and what various RSS apps or twitter follows "help" to track:

  • I'll forget where/how to find important info sources.
  • Forget to return to them.
  • Miss out on some stuff there.
  • Work/overhead of re-checking for updates is significant.

None of these seem to be an issue in practice, as most interesting and relevant stuff will naturally be the first thing that will pop-up in memory to check/grab/read, you always "miss out" on something when time is more limited than the amount of interesting goodies (i.e. it's a problem of what to miss-out on, not whether you do or don't), and time spent checking a couple bookmarks is a rounding error compared to processing the actual information (there's always a lot of new stuff, and for something you check obsessively, you'd know the rate well).

This kind of "disaggregated buffet" way of zero-effort "controlling" information intake is surprisingly simple, pretty much automatic (happens on its own), very liberating (no backlog anywhere), and can be easily applied to different content types:

  • Don't get podcast rss-tracking app, bookmark individual sites/feeds instead.

    I store RSS/Atom feed URLs under one bookmarks-dir in Waterfox, and when wanting something to listen to on a walk or while doing something monotonous, remember and pull out a URL via quickbar (bookmarks can be searched via * <query> there iirc, I just have completion/suggestions enabled for bookmarks only), run the rss-get script on the link, pick specific items/ranges/links to download via its aria2c or yt-dlp (with its built-in SponsorBlock), and play that.

  • Don't "follow" people/feeds/projects on twitter or fediverse/mastodon and then read through composite timeline, just bookmark all individual feeds (on whatever home instances, or via nitter) instead.

    This has a great added advantage of maintaining context in these platforms which are notoriously bad for that, i.e. you read through things as they're posted in order, not interleaved with all other stuff, or split over time.

    Also this doesn't require an account, running a fediverse instance, giving away your list of follows (aka social graph), or having your [current] interests being tracked anywhere (even if "only" for bamboozling you with ads on the subject to death).

    With many accounts to follow and during some temporary twitter/fediverse duplication, I've also found it useful (so far) to have a simple ff-cli script to "open N bookmarks matching /@", when really bored, and quickly catch-up on something random, yet interesting enough to end up being bookmarked.

  • Don't get locked into subscribing or following "news" media that is kinda shit.

    Simply not having that crap bundled with other things in same reader/timeline/stream/etc will quickly make brain filter-out and "forget" sources that become full of ads, <emotion>bait, political propaganda and various other garbage, and brain will do such filtering all in the background on its own, without wasting any time or conscious cycles.

    There's usually nothing of importance to miss with such sources really, as it's more like taking a read on the current weather, only occasionally interesting/useful, and only for current/recent stuff.

It's obviously not a guide to something "objectively best", and maybe only works well for me this way, but as I've kinda-explained it (poorly) in chats before, thought to write it down here too - hopefully somewhat more coherently - and maybe just link to later from somewhere.

Nov 18, 2022

AWK script to convert long integers to human-readable number format and back

Haven't found anything good for this on the internet before, and it's often useful in various shell scripts where metrics or tunable numbers can get rather long:

3400000000 -> 3_400M
500000 -> 500K
8123455009 -> 8_123_455_009

That is, to replace long stretches of zeroes with short Kilo/Mega/Giga/etc suffixes and separate the usual 3-digit groups by something (underscore being programming-language-friendly and hard to mistake for field separator), and also convert those back to pure integers from cli arguments and such.

There's numfmt in GNU coreutils, but that'd be missing on Alpine, typical network devices, other busybox Linux distros, *BSD, MacOS, etc, and it doesn't have "match/convert all numbers you want" mode anyway.

So the alternative is using something that is available everywhere, like generic AWK, with reasonably short scripts to implement that number-mangling logic.

  • Human-format all space-separated long numbers in stdin, like in example above:

    awk '{n=0; while (n++ < NF) { m=0
      while (match($n,/^[0-9]+[0-9]{3}$/) && m < 5) {
        k=length($n)-3
        if (substr($n,k+1)=="000") { $n=substr($n,1,k); m++ }
        else while (match($n,/[0-9]{4}(_|$)/))
          $n = substr($n,1,RSTART) "_" substr($n,RSTART+1) }
      $n = $n substr("KMGTP", m, m ? 1 : 0) }; print}'
    
  • Find all human-formatted numbers in stdin and convert them back to long integers:

    awk '{n=0; while (n++ < NF) {if (match($n,/^([0-9]+_)*[0-9]+[KMGTP]?$/)) {
      gsub("_","",$n); if (m=index("KMGTP", substr($n,length($n),1))) {
        $n=substr($n,1,length($n)-1); while (m-- > 0) $n=$n "000" } }}; print}'
    

    I.e. reverse of the operation above.

Code is somewhat compressed for brevity within scripts where it's not the point. It should work with any existing AWK implementation afaik (gawk, nawk, busybox awk, etc), and not touch any fields that don't need such conversion (as filtered by the first regexp there).
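
E.g. with the two scripts above saved as num-human and num-long wrappers (hypothetical names), the round-trip works like this:

% echo 3400000000 500000 8123455009 | num-human
3_400M 500K 8_123_455_009
% echo 3_400M | num-long
3400000000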

Line-match pattern can be added at the start to limit conversion to lines with specific fields (e.g. match($1,/(Count|Threshold|Limit|Total):/) {...}), "n" bounds and $n regexps adjusted to filter-out some inline values.

Numbers here will use SI prefixes, not 2^10 binary-ish increments, like in IEC units (kibi-, mebi-, gibi-, etc), as is more useful in general, but substr() and "000"-extension can be replaced by /1024 (and extra -iB unit suffix) when working with byte values - AWK can do basic arithmetic just fine too.

Took me couple times misreading and mistyping overly long integers from/into scripts to realize that this is important enough and should be done better than counting zeroes with arrow keys like some tech-barbarian, even in simple bash scripts, and hopefully this hack might eventually pop-up in search for someone else coming to that conclusion as well.

Oct 21, 2022

Useful git hook - prepare-commit-msg with repo path, branch and last commits

I've been using this as a prepare-commit-msg hook everywhere for couple years now:

#!/bin/bash
msg_file=$1 commit_src=$2 hash=$3
[[ -z "$msg_file" || "$GIT_EDITOR" = : ]] && exit 0
[[ -z "$commit_src" ]] || exit 0 # cherry-picks, merges, etc

echo >>"$msg_file" '#'
echo >>"$msg_file" "# Commit dir: ${GIT_PREFIX%/}"
echo >>"$msg_file" "#   Repo dir: $(realpath "$PWD")"
echo >>"$msg_file" '#'
git log -10 --format='# %s' >>"$msg_file"

Which saves a lot of effort in coming up with commit-messages, helps in monorepos/collections, and helps to avoid a whole bunch of mistakes.

Idea is that instead of just "what is to be committed", comment below commit-msg in $EDITOR will now include something like this:

# On branch master
# Changes to be committed:
# modified:   PKGBUILD
# new file:   9005.do-some-other-thing.patch
#
# Commit dir: waterfox
#   Repo dir: /home/myuser/archlinux-pkgbuilds
#
# waterfox: +9004.rebind_screenshot_key_to_ctrl_alt_s.patch
# waterfox: fix more build/install issues
# waterfox: +fix_build_with_newer_cbindgen.patch
# waterfox: update to G5.0.1
# +re2g-git
# waterfox: bump to G4.1.4
# +mount-idmapped-git
# telegram-tdlib-purple-*: makedepends=telegram-tdlib - static linking
# waterfox: update for G4.1.1
# +b2tag-git

It helps as a great sanity-check and reminder of the following things:

  • Which subdir within the repo you are working in, e.g. "waterfox" pkg above, so that it's easy to identify and/or use that as a part of commit message.

    With a lot of repositories I work with, there are multiple subdirs and components in there, not to mention collection-repos, and it's useful to have that in the commit msg - I always try to add them as a prefix, unless repo uses entirely different commit message style (and has one).

  • What is the repository directory that you're running "git commit" in.

    There can be a local dev repo, a submodule of it in a larger project, sshfs-mounted clone of it somewhere, more local clones for diff branches or different git-worktree dirs, etc.

    This easily tells that you're in the right place (or where you think you are) - usually hard to miss, as the repo would normally be under a simple local dev dir, and not some weird submodule path or mountpoint in there.

  • List of last commits on this branch - incredibly useful for a ton of reasons.

    For one, it easily keeps commit-msgs consistent - you don't use different language, mood and capitalization in there by forgetting what is the style used in this particular repo, see any relevant issue/PR numbers, prefixes/suffixes.

    But also it immediately shows if you're on the wrong branch, making a duplicate commit by mistake, forgot to make commit/merge for something else important before this, undo some reset or other recent shenanigans - all at a glance.

    It was a usual practice for me to check git-log before every commit, and this completely eliminated the need for it.

Now when I don't see this info in the commit-msg comment, the first thing to do is copy the hook script to whatever repo/host/place I'm working with, as committing without it is almost as bad as not seeing which files you commit in there. Can highly recommend this tiny time-saver when working with any git repos from the command line.

Implementation has couple caveats, which I've added there over time:

[[ -z "$msg_file" || "$GIT_EDITOR" = : ]] && exit 0
[[ -z "$commit_src" ]] || exit 0 # cherry-picks, merges, etc

These lines are to skip running this hook for various non-interactive git operations, where anything you put into commit-msg will get appended to it verbatim, without skipping comment-lines, as it is done with interactive "git commit" ops.

Canonical version of the hook is in the usual mk-fg/fgtk dumping ground:

https://github.com/mk-fg/fgtk#git-prepare-commit-msg-hook

Which might get more caveats like above fixed in the future, should I bump into any, so might be better than current version in this post.

Oct 19, 2022

Make cursor stand-out more in Emacs by having it blink through different colors

When playing some game (Starsector, probably) on the primary display recently, with an aux emacs frame (an extra X11 window) on the second one showing some ERC chat (all chats translate to irc), I've had a minor bug somewhere, and noticed that the cursor in that frame/window wasn't of the usual translucent-green-foreground-text color (on a dark bg), but rather stayed simply #fff-white (likely because of some screwup in my theme-application func within that frame).

And an interesting observation is that it's actually really good when cursor is of a different color from both foreground-text and the background colors, because then it easily stands out against both, and doesn't get lost in a wall of text either - neat!

Quickly googling around for a hack to make this fluke permanent, I stumbled upon this reply on Stack Overflow, which, in addition to the easy (set-cursor-color ...) answer, went on to give an example of changing color on every cursor blink.

Which seems like a really simple way to make the thing stand out not just against bg/fg colors, but also against any syntax-highlighting color anywhere, and draw even more attention to itself, which is even better.

With only a couple lines replacing a timer-func that normally turns cursor on-and-off (aka blinking), now it blinks with a glorious rainbow of colors:

https://github.com/mk-fg/emacs-setup/blob/afc1477/core/fg_lookz.el#L485-L505
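
A minimal version of such tweak - assuming a hardcoded color list here, and dropping the stop-blinking-when-idle counter that stock emacs and the linked code handle - can be an override like this:

(defvar my-cursor-colors ; e.g. from "i want hue" + color-b64sort below
  '("#e6194b" "#3cb44b" "#ffe119" "#4363d8" "#f58231" "#911eb4"))
(defvar my-cursor-color-n 0)

(defun my-blink-cursor-timer-function ()
  (internal-show-cursor nil (not (internal-show-cursor-p)))
  ;; pick next color while cursor is hidden, so it reappears in a new one
  (unless (internal-show-cursor-p)
    (setq my-cursor-color-n
      (% (1+ my-cursor-color-n) (length my-cursor-colors)))
    (set-cursor-color (nth my-cursor-color-n my-cursor-colors))))

(advice-add 'blink-cursor-timer-function
  :override #'my-blink-cursor-timer-function)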

And at least I can confirm that it totally works for even greater/more-immediate visibility, especially in multiple emacs windows (e.g. usual vertical split showing two code buffers), where all these cursors now change colors and are impossible to miss at a glance, so you always know which point in code you're jumping to upon switching there - can recommend to at least try it out.

Bit weird that most code editor windows seem to have narrow-line cursor, even if of a separate color, given how important that detail is within that window, compared to e.g. any other arbitrary glyph in there, which would be notably larger and often more distinct/visible, aside from blinking.


One interesting part with a set of colors, as usual, is to generate or pick these somehow, which can be done on a color wheel arbitrarily, with results often blending-in with other colors and each other, more-or-less.

But it's also not hard to remember about L*a*b* colorspace and pick colors that will be almost certainly distinct according to ranges there, as you'd do with a palette for dynamic lines on a graph or something more serious like that.

These days I'm using i want hue generator-page to get a long-ish list of colors within specified ranges for Lightness (L component, to stand-out against light/dark bg) and Chroma parameters, and then pass css list of these into a color-b64sort script, with -b/--bg-color and -c/--avoid-color settings/thresholds.

Output is a filtered list with only colors that are far enough from the specified ones, to stand-out against them in the window, but is also sorted to have every next color picked to be most visually-distinct against preceding ones (with some decay-weight coefficient to make more-recent diff[s] most relevant), so that color doesn't blink through similar hues in a row, and you don't have to pick/filter/order and test these out manually.

Which is the same idea as with the usual palette-picks for a line on a chart or other multi-value visualizations (have high contrast against previous/other values), that seem to pop-up like this in surprisingly many places, which is why at some point I just had to write that dedicated color-b64sort thingie to do it.

Tweaking parameters to only get as many farthest-apart colors as needed for blinks until cursor goes static (to avoid wasting CPU cycles blinking in an idle window), ended up with a kind of rainbow-cursor, counting bright non-repeating color hues in a fixed order, which is really nice for this purpose and also just kinda cool and fun to look at.

Oct 18, 2022

Revisiting POSIX ACLs and Capabilities in python some 15 years later

A while ago I've discovered for myself the use of xattrs, ACLs and capabilities for various system tasks, and have been using those in python2-based tools via C wrappers for libacl and libcap (in the old mk-fg/fgc repo) pretty much everywhere since then.

Tools worked without any issues for many years now, but as these are some of the last scripts still left in python2, time has come to update those, and revisit how to best access same things in python3.

Somewhat surprisingly, despite being supported on linux since forever, and imo very useful, support for neither ACLs nor capabilities has made it into python 3.10's stdlib, but there is now at least built-in support for reading/writing extended attributes (without ctypes, that is), and both of these are simple structs stored in them.

So, disregarding any really old legacy formats, parsing ACL from a file in a modern python can be packed into something like this:

import os, io, enum, struct, pwd, grp

class ACLTag(enum.IntEnum):
  uo = 0x01; u = 0x02; go = 0x04; g = 0x08
  mask = 0x10; other = 0x20
  str = property(lambda s: s._name_)

class ACLPerm(enum.IntFlag):
  r = 4; w = 2; x = 1
  str = property(lambda s: ''.join(
    (v._name_ if v in s else '-') for v in s.__class__ ))

def parse_acl(acl, prefix=''):
  acl, lines = io.BytesIO(acl), list()
  if (v := acl.read(4)) != b'\2\0\0\0':
    raise ValueError(f'ACL version mismatch [ {v} ]')
  while True:
    if not (entry := acl.read(8)): break
    elif len(entry) != 8: raise ValueError('ACL length mismatch')
    tag, perm, n = struct.unpack('HHI', entry)
    tag, perm = ACLTag(tag), ACLPerm(perm)
    match tag:
      case ACLTag.uo | ACLTag.go:
        lines.append(f'{tag.str[0]}::{perm.str}')
      case ACLTag.u | ACLTag.g:
        try:
          name = ( pwd.getpwuid(n).pw_name
            if tag is tag.u else grp.getgrgid(n).gr_name )
        except KeyError: name = str(n)
        lines.append(f'{tag.str}:{name}:{perm.str}')
      case ACLTag.other: lines.append(f'o::{perm.str}')
      case ACLTag.mask: lines.append(f'm::{perm.str}')
      case _: raise ValueError(tag)
  lines.sort(key=lambda s: ('ugmo'.index(s[0]), s))
  return '\n'.join(f'{prefix}{s}' for s in lines)

p = 'myfile.bin'
xattrs = dict((k, os.getxattr(p, k)) for k in os.listxattr(p))
if acl := xattrs.get('system.posix_acl_access'):
  print('Access ACLs:\n' + parse_acl(acl, '  '))
if acl := xattrs.pop('system.posix_acl_default', ''):
  print('Default ACLs:\n' + parse_acl(acl, '  d:'))

Where it's just a bunch of 8B entries with uids/gids and permission bits in them, and capabilities are even simpler, except for ever-growing enum of them:

import os, io, enum, struct, dataclasses as dcs

CapSet = enum.IntFlag('CapSet', dict((cap, 1 << n) for n, cap in enumerate((
  ' chown dac_override dac_read_search fowner fsetid kill setgid setuid setpcap'
  ' linux_immutable net_bind_service net_broadcast net_admin net_raw ipc_lock'
  ' ipc_owner sys_module sys_rawio sys_chroot sys_ptrace sys_pacct sys_admin'
  ' sys_boot sys_nice sys_resource sys_time sys_tty_config mknod lease'
  ' audit_write audit_control setfcap mac_override mac_admin syslog wake_alarm'
  ' block_suspend audit_read perfmon bpf checkpoint_restore' ).split())))

@dcs.dataclass
class Caps: effective:bool; permitted:CapSet; inheritable:CapSet

def parse_caps(cap):
  cap = io.BytesIO(cap)
  ver, eff = ((v := struct.unpack('I', cap.read(4))[0]) >> 3*8) & 0xff, v & 1
  if ver not in [2, 3]: raise ValueError(f'Unsupported capability v{ver}')
  perm1, inh1, perm2, inh2 = struct.unpack('IIII', cap.read(16))
  if (n := len(cap.read())) != (n_tail := {2:0, 3:4}[ver]):
    raise ValueError(f'Cap length mismatch [ {n} != {n_tail} ]')
  perm_bits, inh_bits = perm2 << 32 | perm1, inh2 << 32 | inh1
  perm, inh = CapSet(0), CapSet(0)
  for c in CapSet:
    if perm_bits & c.value: perm |= c; perm_bits -= c.value
    if inh_bits & c.value: inh |= c; inh_bits -= c.value
  if perm_bits or inh_bits:
    raise ValueError(f'Unrecognized cap-bits: P={perm_bits:x} I={inh_bits:x}')
  return Caps(eff, perm, inh)

p = 'myfile.bin'
try: print(parse_caps(os.getxattr(p, 'security.capability')))
except OSError: pass
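
To get a file with both of these xattrs for testing the parsers above (setcap part needs root):

% touch myfile.bin
% setfacl -m u:nobody:rw myfile.bin
# setcap cap_net_raw,cap_net_admin=p myfile.bin
% getfattr -d -m - myfile.bin   # dumps system.posix_acl_access + security.capability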

Bit weird that wrappers along these lines can't be found in today's python 3.10, but maybe most people sadly still stick to suid and more crude hacks where more complex access permissions are needed.

One interesting thing I found here is how silly my old py2 stracl.c and strcaps.c look in comparison - it's screenfuls of more complicated C code, tied into python's c-api, which has to be compiled wherever these tools are used, with extra python wrappers on top - all for parsing a couple of trivial structs, which, under linux ABI compatibility promises, can be relied upon to be stable enough anyway.

Somehow it was the obvious solution back then, to have the compiler check all headers and link these libs as compatibility wrappers, but I'd never bother these days - it'll be either a ctypes wrapper, or parsing simple stuff in python, to avoid having extra jank and hassle of dependencies where possible.

Makes me wonder if that's also the dynamic behind relatively new js/rust devs dragging in a bunch of crap (like the infamous left-pad) into their apps, still thinking that it'd make life simpler or due to some "good practice" dogmas.

May 30, 2022

LESSOUTPUT filter workaround for broken unicode en-dash characters in manpages

Using man <something> in the terminal as usual, I've been noticing more and more manpages being broken by tools that produce them over the years in this one specific way - long command-line options with double-dash are being mangled into having unicode en-dash "–" prefix instead of "--".

Most recent example that irked me was yt-dlp(1) manpage, which looks like this atm (excerpt as of 2022-05-30):

-h, –help
  Print this help text and exit
–version
  Print program version and exit
-i, –ignore-errors
  Ignore download and postprocessing errors. The download
  will be considered successful even if the postprocessing fails
–no-abort-on-error
  Continue with next video on download errors;
  e.g. to skip unavailable videos in a playlist (default)
–abort-on-error
  Abort downloading of further videos if an error occurs
  (Alias: –no-ignore-errors)

Update 2023-09-10: current feh(1) manpage has another issue - unicode hyphens instead of the usual ascii hyphen/minus signs (which look similar, but aren't the same thing - even more evil!):

OPTIONS
       ‐A, ‐‐action [flag][[title]]action

If you habitually copy-paste any of the long opts there into a script (or yt-dlp config, as it happens), to avoid retyping these things, it won't work, because e.g. the –help cli option should of course actually be --help, i.e. have two ascii hyphens and not any kind of unicode dash characters.

From a brief look, this seems to happen because of conversion from markdown, and probably not enough folks complaining about it, which is a pattern that I too chose to follow, making a workaround instead of reporting a proper bug :)

(tbf, youtube-dl forks have like 1000s of these in tracker, and I'd rather not add to that, unless I have a patch for some .md tooling it uses, which I'm too lazy to look into)

"man" command (from man-db on linux) uses a pager tool to display its stuff in a terminal cli (controlled by PAGER= or MANPAGER= env vars), which is typically set to use less tool on desktop linuxes (with the exception of minimal distros like Alpine, where it comes from busybox).

"less" somewhat-infamously supports filtering of its output (which is occasionally abused in infosec contexts to make a file that installs rootkit if you run "less" on it), which can be used here for selective filtering and fixes when manpage is being displayed through it.

Relevant variable to set in ~/.zshrc or ~/.bashrc env for running a filter-script is:

LESSOPEN='|-man-less-filter %s'

With |- magic before man-less-filter %s command template indicating that command should be also used for pipes and when less is displaying stdin.

"man-less-filter" helper script should be in PATH, and can look something like this to fix issues in yt-dlp manpage excerpt above:

#!/bin/sh
## Script to fix/replace bogus en-dash unicode chars in broken manpages
## Intended to be used with LESSOPEN='|-man-less-filter %s'

[ -n "$MAN_PN" ] || exit 0 # no output = use original input

# \x08 is backspace-overprint syntax that "man" uses for bold chars
# Bold chars are used in option-headers, while opts in text aren't bold
seds='s/‐/-/g;'`
  `'s/–\x08–\(.\x08\)/-\x08--\x08-\1/g;'`
  `' s/\([ [:punct:]]\)–\([a-z0-9]\)/\1--\2/'
[ "$1" != - ] || exec sed "$seds"
exec sed "$seds" "$1"

It looks really funky for a reason - simply using s/–/--/ doesn't work, as manpages use old typewriter/teletype backspace-overtype convention for highlighting options in manpages.

So, for example, -h, –help line in manpage above is actually this sequence of utf-8 - -\x08-h\x08h,\x08, –\x08–h\x08he\x08el\x08lp\x08p\n - with en-dash still in there, but \x08 backspaces being used to erase and retype each character twice, which makes "less" display them in bold font in the terminal (using its own different set of code-bytes for that from ncurses/terminfo).

Simply replacing all dashes with double-hyphens will break that overtyping convention, as each backspace erases a single char before it, and double-dash is two of those.

Which is why the idea in the script above is to "exec sed" with two substitution regexps, first one replacing all overtyped en-dash chars with correctly-overtyped hyphens, and second one replacing all remaining dashes in the rest of the text which look like options (i.e. immediately followed by letter/number instead of space), like "Alias: –no-ignore-errors" in manpage example above, where text isn't in bold.

MAN_PN env-var check is to skip all non-manpage files, where "less" understands empty script output as "use original text without filtering". /bin/dash can be used instead of /bin/sh on some distros (e.g. Arch, where sh is usually symlinked to bash) to speed up this tiny script startup/runs.

Not sure whether "sed" might have to be a GNU sed to work with unicode char like that, but any other implementation can probably use \xe2\x80\x93 escape-soup instead of in regexps, which will sadly make them even less readable than usual.

Such manpage bugs should almost certainly be reported to projects/distros and fixed, instead of using this hack, but thought to post it here anyway, since google didn't help me find an existing workaround, and fixing stuff properly is more work.


Update 2023-09-10: Current man(1) from man-db has yet another issue to work around - it sanitizes environment before running "less", dropping LESSOPEN from there entirely.

Unfortunately, at least current "less" doesn't seem to support config file, to avoid relying entirely on apps passing env-vars around (maybe for a good reason, given how they like to tinker with its configuration), so easy fix is a tiny less.c wrapper for it:

#include <unistd.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
  if (getenv("MAN_PN")) setenv("LESSOPEN", "|-man-less-filter %s", 1);
  execv("/usr/bin/less", argv); return 1; }

gcc -O2 less.c -o less && strip less, copy it to ~/bin or such, and make sure env uses PAGER=less or a wrapper path, instead of original /usr/bin/less.

May 05, 2022

Alpine Linux on ODROID-C2 or any other ARM boards

Alpine Linux is a tiny distro (like ~50 MiB for the whole system), which is usually very good for a relatively single-purpose setup, dedicated to running only a couple things at once.

It doesn't really sacrifice much in the process, since busybox and OpenRC there provide pretty complete toolkit for setting things up to start and debugging everything, its repos have pretty much anything, and what they don't have is trivial to build via APKBUILD files very similar to Arch's PKGBUILDs.

Really nice thing seems to have happened to it with the rise of docker/podman and such single-app containers, when it got a huge boost in popularity, being almost always picked as an OS base in these, and so these days it seems to be well-supported across all linux architectures, and likely will be for quite a while.

So it's a really good choice for appliance-type ARM boards to deploy and update for years, but it doesn't really bother specializing itself for each of those or supporting board-specific quirks, which is why some steps to set it up for a specific board have to be done manually.

Since I've done it for a couple different boards relatively recently, and been tinkering with these for a while, thought to write down a list of steps on how to set alpine up for a new board here.

In this case, it's a relatively-old ODROID-C2 that I have from 2016, which is well-supported in mainline linux kernel by now, as popular boards tend to be, after a couple years since release, and roughly same setup steps should apply to any of them.

Current Alpine is 3.15.4 and Linux is 5.18 (or 5.15 LTS) atm of writing this.

  • Get a USB-UART dongle (any dirt-cheap $5 one will do) and a couple jumper wires, and look up where the main UART pins are on the board (the ones used by firmware/u-boot).

    Official documentation or wiki is usually the best source of info on that. There should be RX/TX and GND pins, only those need to be connected.

    screen /dev/ttyUSB0 115200 command is my go-to way to interact with those, but there're other tools for that too.

    If there's another ARM board laying around, they usually have same UART pins, so just twist TX/RX and connect to the serial console from there.

  • Check which System-on-Chip (SoC) family/vendor the device uses, look it up wrt mainline linux support, and find where such effort might be coordinated.

    For example, for Amlogic SoC in ODC2, linux-meson.com has all the info.
    Allwinner SoCs tend to be documented at linux-sunxi.org wiki.
    Beaglebones' TI SoC at elinux.org, RPi's site/community, ... etc etc.

    There're usually IRC or whatever chat channels linked on those too, and tons of helpful info all around, though sometimes outdated and distro-specific.

  • Get U-Boot for the board, ideally configured for some sane filesystem layout.

    "Generic ARM" images on Alpine Downloads page tend to have these, but architecture is limited to armv7/aarch64 boards, but a board vendor's u-boot will work just as well, and it might come with pre-uboot stages or signing anyway, which definitely have to be downloaded from their site and docs.

    With ODC2, there're docs and download links for bootloader specifically, which include an archive with a signing tool and a script to sign/combine all blobs and dd them into right location on microSD card, where firmware will try to run it from.

    Bootloader can also be extracted from some vendor-supplied distro .img (be it ubuntu or android), if there's no better option - dd that on the card and rm -rf all typical LFS dirs on the main partition there, as well as kernel/initramfs in /boot - what's left should be a bootloader (incl. parts of it dd'ed before partitions) and its configs.

    Often all magic there is "which block to dd u-boot.img to", and if Alpine Generic ARM has u-boot blob for specific board (has one for ODROID-C2), then that one should be good to use, along with blobs in /efi and its main config file in /extlinux - dd/cp u-boot to where firmware expects it to be and copy those extra stage/config dirs.
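
    For many Amlogic boards that boils down to something like the line below, but don't take the exact offset/filename from here - check the board's docs (and note that the first sectors also hold the partition table):

    dd if=u-boot.img.signed of=/dev/mmcblk0 bs=512 seek=1 conv=fsync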

    Alpine wiki might have plenty of board-specific advice too, e.g. check out Odroid-C2 page there.

  • Boot the board into u-boot with UART console connected, get that working.

    There should be at least output from board's firmware, even without any microSD card, then u-boot will print a lot of info about itself and what it's doing, and eventually it (or later stages) will try to load linux kernel, initramfs and dtb files, specified in whatever copied configs.

    If "Generic ARM" files or whatever vendor distro was not cleaned-up from card partitions, it might even start linux and some kind of proper OS, though kernel in that generic alpine image seem to be lacking ODC2 board support and has no dtb for it.

    Next part is to drop some working linux kernel/dtb (+initramfs maybe), rootfs, and that should boot into something interactive. Conversely, no point moving ahead with the rest of the setup until bootloader and console connected there all work as expected.

  • Get or build a working Linux kernel.

    It's not a big deal (imo at least) to configure and build these manually, as there aren't really that many options when configuring those for ARM - you basically pick the SoC and all options with MESON or SUNXI in the name for this specific model, plus whatever generic features/filesystems/etc you will need in there. Idea is to enable with =y (compile-in) all drivers that are needed for SoC to boot, and the rest can be =m (loadable modules).

    Ideally after being built somewhere, kernel should be installed via proper Alpine package, and for my custom ODC2 kernel, I just reused linux-lts APKBUILD file from "aports" git repository, running "make nconfig" in src/build-... dir to make new .config and then copy it back for abuild to use.

    Running abuild -K build rootpkg is generally how kernel package can be rebuilt after any .config modifications in the build directory, i.e. after fixing whatever forgotten things needed for boot. (good doc/example of the process can be found at strfry's blog here)

    These builds can be run in same-arch alpine chroot, in a VM or on some other board. I've used an old RPi3 board to do aarch64 build with an RPi Alpine image there - there're special pre-configured ones for those boards, likely due to their massive popularity (so it was just the easiest thing to do).

    I'd also suggest renaming the package from linux-lts to something unique for the specific build/config, like "linux-odc2-5-15-36", the idea there being to be able to install multiple such linux-* packages alongside each other easily.

    This can help a lot if one of them might have issues in the future - not booting, crashes/panics, any incompatibilities, etc - as you can then simply flip bootloader config to load an earlier version, and easily test new versions without committing to them via kexec (especially cool for remote headless setups - see below).

    For C2 board kconfig, I've initially grabbed one from superna9999/meta-meson repo, built/booted that (on 1G-RAM RPi3, that large build needed zram enabled for final vmlinuz linking), to test if it works at all, and then configured much more minimal kernel from scratch.

    One quirk needed there for ODC2 was enabling CONFIG_REGULATOR_* options for "X voltage supplied by Y" messages in dmesg that eventually enable MMC reader, but generally kernel's console output is pretty descriptive wrt what might be missing, especially if it can be easily compared with an output from a working build like that.

    For the purposes of initial boot, linux kernel is just one "vmlinuz" file inside resulting .apk (tar can unpack those), but dtbs-* directory must be copied from package alongside it too for ARM boards like these odroids, as they use blobs there to describe what is connected where and how to kernel.

    Specific dtb can also be concatenated into kernel image file, to have it all-in-one, but there should be no need to do that, as all such board bootloaders expect to have dtbs and have configuration lines to load them already.

    So, to recap:

    • Linux configuration: enable stuff needed to boot and for accessing MMC slot, or whatever rootfs/squashfs storage.
    • Build scripts: use APKBUILD from Alpine aports, just tweak config there and maybe rename it, esp. if you want multiple/fallback kernels.
    • Build machine: use other same-arch board or VM with alpine or its chroot.
    • Install: "tar -xf" apk, drop vmlinuz file and dtbs dir into /boot.
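
    E.g. something like this for the package above, with the card's boot partition mounted at a hypothetical /mnt/sd (tar might warn about apk's signature segments, but unpacks these fine):

    tar -xf linux-c51536r01-5.15.36-r1.apk boot/
    cp boot/vmlinuz-c51536r01 /mnt/sd/boot/
    cp -r boot/dtbs-c51536r01 /mnt/sd/boot/
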
  • Update extlinux.conf or whatever bootloader config for new kernel files, boot that.

    E.g. after dropping vmlinuz and dtbs from my "linux-c51536r01" package, for the u-boot.bin from Alpine's "Generic ARM" build (which runs an efi blob with extlinux.conf support):

    TIMEOUT 20
    PROMPT 1
    DEFAULT odc2
    
    LABEL odc2
    MENU LABEL Linux odc2
    KERNEL /boot/vmlinuz-c51536r01
    # INITRD /boot/initramfs-c51536r01
    FDTDIR /boot/dtbs-c51536r01
    APPEND root=/dev/mmcblk0p1
    

    The idea with this boot config is to simply get the kernel to work and mount some rootfs without issues, so e.g. for the custom non-generic/modular one built specifically for the ODROID-C2 board in the prev step, there's no need for that commented-out INITRD line.

    Once this boots and mounts rootfs (and then presumably panics as it can't find /sbin/init there), the remaining part is to bootstrap/grab a basic alpine rootfs for it to run userspace OS parts from.

    This test can also be skipped for a more generic and modular kernel config, as it's not hard to test that with a proper rootfs and initramfs later either.

  • Setup bootable Alpine rootfs.

    It can be grabbed from the same Alpine downloads URL for any arch, but if there's already a same-arch alpine build setup for the kernel package, it might be easier to bootstrap it with all necessary packages there instead:

    # apk='apk -p odc2-root -X https://dl.alpinelinux.org/alpine/v3.15/main/ -U --allow-untrusted'
    # $apk --arch aarch64 --initdb add alpine-base
    # $apk add linux-firmware-none mkinitfs
    # $apk add e2fsprogs # or ones for f2fs, btrfs or whatever
    # $apk add linux-c51536r01-5.15.36-r1.apk
    

    To avoid needing --allow-untrusted for anything but that local linux-c51536r01 package, apk keys can be pre-copied from /etc/apk to odc2-root/etc/apk, just like Alpine's setup-disk script does it.
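
    I.e. just a straight copy of the trusted keys into the new root before installing packages:

    mkdir -p odc2-root/etc/apk/keys
    cp /etc/apk/keys/* odc2-root/etc/apk/keys/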

    That rootfs will have everything needed for the ODC2 board to boot, including the custom kernel with an initramfs generated for it, containing any necessary modules. Linux files in its /boot should overwrite the manually-unpacked/copied ones from the earlier test, and the INITRD or similar bootloader line can be enabled/corrected now. After the cp/rsync, the rootfs still needs a couple of additional tweaks, again similar to what Alpine's setup-disk script does (see the shell sketch after this list):

    • etc/fstab: add e.g. /dev/mmcblk0p1 / ext4 rw,noatime 0 1 to fsck/remount rootfs.

      I'd add tmpfs /tmp tmpfs size=30%,nodev,nosuid,mode=1777 0 0 there as well, since Alpine's OpenRC is not systemd and doesn't have default mountpoints all over the place.

    • etc/inittab: comment-out all default tty's and enable the UART one.

      E.g. ttyAML0::respawn:/sbin/getty -L ttyAML0 115200 vt100 for ODROID-C2 board.

      This is important to get a login prompt on the right console, so make sure to check the kernel output to find the right one, e.g. with the ODC2 board:

      c81004c0.serial: ttyAML0 at MMIO 0xc81004c0 (irq = 23, base_baud = 1500000) is a meson_uart
      

      But it can also be ttyS0 or ttyAMA0 or something else on other ARM platforms.

    • etc/securetty: add ttyAML0 line for console there too, same as with inittab.

      Otherwise, even though the login prompt will be there, getty will refuse to ask for a password on that console, and immediately respond with "user is disabled" or something like that.
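
    Applied to the copied rootfs, the tweaks above can look something like this (console/device names are for the ODROID-C2, and the sed line assumes a default Alpine etc/inittab):

    cd odc2-root
    printf '%s\n' >>etc/fstab \
      '/dev/mmcblk0p1 / ext4 rw,noatime 0 1' \
      'tmpfs /tmp tmpfs size=30%,nodev,nosuid,mode=1777 0 0'
    sed -i 's/^tty/# tty/' etc/inittab   # comment-out default tty's
    echo 'ttyAML0::respawn:/sbin/getty -L ttyAML0 115200 vt100' >>etc/inittab
    echo ttyAML0 >>etc/securetty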

    This should be enough to boot and login into working OS on the same UART console.

  • Setup basic stuff on booted rootfs.

    With the initial rootfs booting and login-able, all that's left is to get this to a more typical OS with working network and apk package management.

    "vi" is default editor in busybox, and it can be easy to use once you know to press "i" at the start (switching it to a notepad-like "insert" mode), and press "esc" followed by ":wq" + "enter" at the end to save edits (or ":q!" to discard).

    • /etc/apk/repositories should have a couple lines like https://dl.alpinelinux.org/alpine/v3.15/main/, appropriate for this Alpine release and the repos used (just main + community, probably).
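
      For the v3.15 release used in examples here, that'd be:

      https://dl.alpinelinux.org/alpine/v3.15/main/
      https://dl.alpinelinux.org/alpine/v3.15/community/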

    • echo nameserver 1.1.1.1 > /etc/resolv.conf + /etc/network/interfaces:

      printf '%s\n' >/etc/network/interfaces \
        'auto lo' 'iface lo inet loopback' 'auto eth0' 'iface eth0 inet dhcp'
      

      Or something like that.

      Also, to dhcp-configure network right now:

      ip link set eth0 up && udhcpc
      ntpd -dd -n -q -p pool.ntp.org
      
    • rc-update add anything used/useful from /etc/init.d.

      There aren't many scripts in there by default, and all should be pretty self-explanatory wrt what they are for.

      Something like this can be a good default:

      for s in bootmisc hostname modules sysctl syslog ; do rc-update add $s boot; done
      for s in devfs dmesg hwdrivers mdev ; do rc-update add $s sysinit; done
      for s in killprocs mount-ro savecache ; do rc-update add $s shutdown; done
      for s in networking ntpd sshd haveged ; do rc-update add $s; done
      

      Run rc-update show -v to get a full picture of what is set up to run when with openrc - it should be much simpler and more comprehensible than systemd in general.

      sshd and haveged there can probably be installed after the network works, or in the earlier rootfs-setup step (and haveged is likely unnecessary with more recent kernels, which fixed the blocking /dev/random).

    • ln -s /usr/share/zoneinfo/... /etc/localtime maybe?

    That should be it - supported and upgradable Alpine with a custom kernel apk and bootloader for this specific board.

  • Extra: kexec for updating kernels safely, without breaking the boot.

    Once some initial linux kernel boots and works, and the board is potentially tucked away somewhere hard to reach for pulling the microSD card out, it can be scary to update the kernel to something that might not boot, might start crashing, etc.

    An easy solution for that is kexec - a syscall/mechanism to start a new kernel from another working kernel/OS.

    Might need to build/install the kexec-tools apk to use it - it's missing on some architectures, but the APKBUILD from aports and the tool itself should work just fine without changes. Also don't forget to enable kexec support (CONFIG_KEXEC) in the kernel.

    Using multiple kernel packages alongside each other like suggested above, something like this should work for starting new linux from an old one immediately:

    # apk add --allow-untrusted linux-c51705r08-5.17.5-r0.apk
    # kexec -sf /boot/vmlinuz-c51705r08 --initrd /boot/initramfs-c51705r08 --reuse-cmdline
    

    It works just like the userspace exec() syscall, but gets the current kernel to exec a new one instead, which generally looks like a reboot, except without involving firmware and bootloader at all.

    This way it's easy to run and test anything in new kernel or in its cmdline options safely, with simple reboot or power-cycling reverting it back to an old known-good linux/bootloader setup.

    Once everything is confirmed working with the new kernel, the bootloader config can be updated to use it instead, and the old linux-something package can be replaced with a newer one as the known-good fallback on the next update.

    This process is not entirely foolproof however, as sometimes linux drivers or maybe the hardware can have some kind of init-state mismatch. That actually happens with the C2 board, where its built-in network card fails to work after kexec, unless its linux module is unloaded before that.

    Something like this inside "screen" session can be used to fix that particular issue:

    # rmmod dwmac_meson8b stmmac_platform stmmac && kexec ... ; reboot
    

    "reboot" at the end should never run if "kexec" works, but if any of this fails, C2 board won't end up misconfigured/disconnected, and just reboot back into old kernel instead.

    So far I've only found such a hack to be needed with ODROID-C2 - other boards seem to work fine after kexec - so it's likely just a problem with this specific driver expecting the NIC to be in one specific state on init.

This post is kinda long because of how writing works, but really it all boils down to "dd" for board-specific bootloader and using board-specific kernel package with a generic Alpine rootfs, so not too difficult, especially given how simple and obvious everything is in Alpine Linux itself.

Supporting many different boards, each with its unique kernel, bootloader and quirks, seems to be a pain for most distros, which can barely get by with x86 support. But if the task is simplified to just providing a rootfs and a pre-built package repository, a reasonably well-supported distro like Alpine has a good chance to work well in the long run, I think.

ArchLinuxARM, Armbian and various vendor distros like Raspbian provide a nicer experience out-of-the-box, but eventually have to drop support for old hardware, while these old ARM boards don't really go anywhere in my experience, and keep working fine for their purpose. Hence a long-term bet on something like alpine seems more reasonable than distro-hopping or needless hardware replacement every couple of years, and alpine in particular is just a great fit for such smaller systems.
