Feb 13, 2017
Got to reading short stories in Column Reader from the laptop screen before
sleep recently, and for extra lazy points, don't want to drag my hand to the
keyboard to flip pages (or columns, as the case might be).
Easy fix - get any input device and bind stuff there to keys you'd normally use.
As it happens, had Xbox 360 controller around for that.
Hard part is figuring out how to properly do it all in Xorg - need to build
xf86-input-joystick first (somehow not in Arch core), then figure out how to
make it act like a dumb event source, not some mouse emulator, and then stuff
like xev and xbindkeys will probably help.
This is way more complicated than it needs to be, and gets even more so when you
factor-in all the Xorg driver quirks, xev's somewhat cryptic nature (modifier
maps, keysyms, etc), fact that xbindkeys can't actually do "press key" actions
(have to use stuff like xdotool for that), etc.
All the while reading these events from linux itself is as trivial as evtest
/dev/input/event11 (or for event in dev.read_loop(): ...) and sending them
back is just ui.write(e.EV_KEY, e.BTN_RIGHT, 1) via uinput device.
Hence the whole binding thing can be done by a tiny python loop that'd read
events from whatever specified evdev device and write corresponding (desired)
keys to uinput.
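Such a loop might look like this - a sketch, not the actual evdev-to-xev code; the device path and button-to-key mapping below are made-up examples, and the run() part assumes the third-party python-evdev module:

```python
# Sketch of an evdev -> uinput mapping loop, not the actual script.
# Event/key code constants are from linux/input-event-codes.h.
EV_KEY = 0x01
BTN_TL, BTN_TR = 0x136, 0x137  # bumper buttons on the gamepad
KEY_LEFT, KEY_RIGHT = 105, 106

button_map = {BTN_TL: KEY_LEFT, BTN_TR: KEY_RIGHT}  # example mapping

def map_event(ev_type, ev_code, mapping=button_map):
    'Pure mapping step - uinput key code for a mapped EV_KEY event, else None.'
    if ev_type == EV_KEY and ev_code in mapping: return mapping[ev_code]
    return None

def run(dev_path='/dev/input/event11'):
    'Blocking loop - needs python-evdev and rw access to evdev node + /dev/uinput.'
    import evdev
    from evdev import UInput, ecodes as e
    dev, ui = evdev.InputDevice(dev_path), UInput()
    for ev in dev.read_loop():
        key = map_event(ev.type, ev.code)
        if key is not None:
            ui.write(e.EV_KEY, key, ev.value)  # ev.value: 1=press, 0=release
            ui.syn()
```

Actual script also tracks axis thresholds and key hold/repeat timings on top of that, but the skeleton is the same.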
So instead of +1 pre-naptime story, hacked together a script to do just that -
evdev-to-xev (python3/asyncio) - which reads mappings from simple YAML and runs
the loop.
For example, to bind the right stick's extreme positions (on the same Xbox 360
controller) to cursor keys, plus triggers, d-pad and bumper buttons there:
map:

  ## Right stick
  # Extreme positions are ~32_768
  ABS_RX <-30_000: left
  ABS_RX >30_000: right
  ABS_RY <-30_000: up
  ABS_RY >30_000: down

  ## Triggers
  # 0 (idle) to 255 (fully pressed)
  ABS_Z >200: left
  ABS_RZ >200: right

  ## D-pad
  ABS_HAT0Y -1: leftctrl leftshift equal
  ABS_HAT0Y 1: leftctrl minus
  ABS_HAT0X -1: pageup
  ABS_HAT0X 1: pagedown

  ## Bumpers
  BTN_TL 1: [h,e,l,l,o,space,w,o,r,l,d,enter]
  BTN_TR 1: right

timings:
  hold: 0.02
  delay: 0.02
  repeat: 0.5
Run with e.g.: evdev-to-xev -c xbox-scroller.yaml /dev/input/event11
(see also less /proc/bus/input/devices and evtest /dev/input/event11).
Running the thing with no config will print an example one, with comments/descriptions.
Given how all iterations of X had to work with whatever input they had at the
time, and not just on linux, even when evdev was around, it's hard to blame it
for having a bit of complexity on top of the way simpler input layer underneath.
In linux, aforementioned Xbox 360 gamepad is supported by "xpad" module (so that
you'd get evdev node for it), and /dev/uinput for simulating arbitrary evdev
stuff is "uinput" module.
No need for any extra Xorg drivers beyond standard evdev.
The most similar tool to this script seems to be actkbd, though afaict one'd
still need to run xdotool from it to simulate input.
Github link: evdev-to-xev script (in the usual mk-fg/fgtk scrap-heap)
Feb 06, 2017
Honestly didn't think NAT'ing traffic from the "lo" interface was even possible,
as traffic to the host's own IP doesn't go through *ROUTING chains with iptables,
and I'd never used "-j DNAT" in OUTPUT, where it apparently works as well.
And then also, according to e.g. Netfilter-packet-flow.svg, unlike nat-prerouting,
nat-output runs after the routing decision is made, so no point mangling IPs
there, right?
Wrong, totally possible to redirect "OUT=lo" stuff to go out of e.g. "eth0" with
the usual dnat/snat, with something like this:
table ip nat {
  chain in { type nat hook input priority -160; }
  chain out { type nat hook output priority -160; }
  chain pre { type nat hook prerouting priority -90; }
  chain post { type nat hook postrouting priority 110; }
}

add rule ip nat out oifname lo \
  ip saddr $own-ip ip daddr $own-ip \
  tcp dport {80, 443} dnat $somehost
add rule ip nat post oifname eth0 \
  ip saddr $own-ip ip daddr $somehost \
  tcp dport {80, 443} masquerade
Note the bizarre oifname lo ip saddr $own-ip ip daddr $own-ip thing.
One weird quirk - if the "in" chain (arbitrary name, nat+input hook is the
important bit) isn't defined, dnat will only work one-way, not rewriting IPs in
response packets.
One explanation wrt the routing decision here might be the arbitrary priorities
that nftables allows setting for hooks (-160 being before iptables mangle stuff).
So, from-loopback-and-back forwarding, huh.
To think of all the redundant socats and haproxies I've seen and used for this purpose earlier...
Jan 29, 2017
Recently bumped into apparently not well-supported scenario of accessing
gitolite instance transparently on a host that is only accessible through
some other gateway (often called "bastion" in ssh context) host.
Something like this:
+---------------+
|               |   git@myhost.net:myrepo
|  dev-machine  ---------------------------+
|               |                          |
+---------------+                          |
                             +-------------v-----+
  git@gitolite:myrepo        |                   |
  +--------------------------- myhost.net (gw)   |
  |                          |                   |
+-v-------------------+      +-------------------+
|                     |
|    gitolite (gl)    |
|  host/container/vm  |
|                     |
+---------------------+
Here gitolite instance might be running on a separate machine, or on the same
"myhost.net", but inside a container or vm with separate sshd daemon.
From any dev-machine you want to simply use git@myhost.net:myrepo to access
repositories, but naturally that won't work because in normal configuration
you'd hit sshd on gw host (myhost.net) and not on gl host.
There are quite a few common options to work around this:

- Use separate public host/IP for gitolite, e.g. git.myhost.net (!= myhost.net).

- TCP port forwarding or similar tricks.

  E.g. simply forward ssh port connections in a "gw:22 -> gl:22" fashion,
  and have gw-specific sshd listen on some other port, if necessary.

  This can be fairly easy to use with something like this for odd-port sshd
  in ~/.ssh/config:

    Host myhost.net
      Port 1234
    Host git.myhost.net
      Port 1235

  Can also be configured in git via remote urls like
  ssh://git@myhost.net:1235/myrepo.

- Use ssh port forwarding to essentially do same thing as above, but with
  resulting git port accessible on localhost.

- Configure ssh to use ProxyCommand, which will login to gw host and setup
  forwarding through it.
All of these, while suitable for some scenarios, are still nowhere near what
I'd call "transparent", and require some additional configuration for each git
client beyond just git remote add origin git@myhost.net:myrepo.
One advantage of such lower-level forwarding is that ssh authentication to
gitolite is only handled on gitolite host, gw host has no clue about that.
If dropping this is not a big deal (e.g. because gw host has root access to
everything in gl container anyway), there is a rather easy way to forward only
git@myhost.net connections from gw to gl host, authenticating them only on gw
instead, described below.
Gitolite works by building ~/.ssh/authorized_keys file with essentially
command="gitolite-shell gl-key-id" <gl-key> for each public key pushed to
gitolite-admin repository.
Hence to proxy connections from gw, similar key-list should be available there,
with key-commands ssh'ing into gitolite user/host and running above commands there
(with original git commands also passed through SSH_ORIGINAL_COMMAND env-var).
To keep such list up-to-date, post-update trigger/hook for gitolite-admin repo
is needed, which can use same git@gw login (with special "gl auth admin"
key) to update key-list on gw host.
Steps to implement/deploy the whole thing:

- useradd -m git on gw and run ssh-keygen -t ed25519 on both gw and gl
  hosts for git/gitolite user.

- Setup all connections for git@gw to be processed via a single "gitolite
  proxy" command, disallowing anything else, exactly like gitolite does for
  its users on gl host.

  gitolite-proxy.py script (python3) that I came up with for this purpose can
  be found here: https://github.com/mk-fg/gitolite-ssh-user-proxy/
  It's rather simple and does two things:

  - When run with --auth-update argument, receives gitolite authorized_keys
    list on stdin, and builds local ~/.ssh/authorized_keys from it and an
    authorized_keys.base file.

  - Similar to gitolite-shell, when run as gitolite-proxy key-id, ssh'es
    into gl host, passing key-id and git command to it.

    This is done in a straightforward os.execlp('ssh', 'ssh', '-qT', ...)
    manner, no extra processing or any error-prone stuff like that.
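For illustration, that relay step can be sketched as (a rough approximation, not the actual gitolite-proxy code - gl_host_login value here is a placeholder):

```python
# Rough sketch of the proxying step only - see gitolite-proxy.py in the
# linked repo for the real thing (incl. the --auth-update part).
import os, sys

gl_host_login = 'git@gl'  # placeholder - ssh login to the gitolite host

def build_ssh_argv(key_id, orig_cmd):
    'Argv for the ssh hop that relays key-id + git command to the gl host.'
    return ['ssh', '-qT', gl_host_login, key_id, orig_cmd]

def proxy(key_id):
    # Original git command (e.g. "git-upload-pack 'myrepo'"), as set by
    # sshd on the gw host from the forced-command authorized_keys entry
    orig_cmd = os.environ.get('SSH_ORIGINAL_COMMAND', '')
    argv = build_ssh_argv(key_id, orig_cmd)
    os.execvp(argv[0], argv)  # replace this process with the ssh hop

# proxy(sys.argv[1])  # entry point when run as ssh forced-command
```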
  When installing it (to e.g. /usr/local/bin/gitolite-proxy as used below),
  be sure to set/update the "gl_host_login = ..." line at the top there.

  For --auth-update, ~/.ssh/authorized_keys.base (note .base) file on gw
  should have this single line (split over two lines for readability, must
  be all on one line for ssh!):

    command="/usr/local/bin/gitolite-proxy --auth-update",no-port-forwarding
    ,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAA...4u3FI git@gl

  Here ssh-ed25519 AAA...4u3FI git@gl is the key from ~git/.ssh/id_ed25519.pub
  on the gitolite host.

  Also run:

    # install -m0600 -o git -g git ~git/.ssh/authorized_keys{.base,}
    # install -m0600 -o git -g git ~git/.ssh/authorized_keys{.base,.old}

  This is to have an initial auth-file, not yet populated with
  gitolite-specific keys/commands.

  Note that only these two files need to be writable for git user on gw host.
- From gitolite (gl) host and user, run: ssh -qT git@gw < ~/.ssh/authorized_keys

  This is to test the gitolite-proxy setup above - it should populate
  ~git/.ssh/authorized_keys on gw host and print back gw host key and proxy
  script to run as command="..." for it (ignore these, they will be installed
  by the trigger).
- Add a trigger that'd run after gitolite-admin repository updates on gl host.

  On gl host, put this to ~git/.gitolite.rc right before the ENABLE line:

    LOCAL_CODE => "$rc{GL_ADMIN_BASE}/local",
    POST_COMPILE => ['push-authkeys'],

  Commit/push the push-authkeys trigger script (also from
  gitolite-ssh-user-proxy repo) to gitolite-admin repo as
  local/triggers/push-authkeys, updating the gw_proxy_login line in there.
gitolite docs on adding triggers: http://gitolite.com/gitolite/gitolite.html#triggers
Once proxy-command is in place on gw and gitolite-admin hook runs at least once
(to setup gw->gl access and proxy-command), git@gw (git@myhost.net) ssh
login spec can be used in exactly the same way as git@gl.
That is, fully transparent access to gitolite on a different host through that
one user, while otherwise allowing normal sshd use on the gw host, without any
forwarding tricks necessary for git clients.
Whole project, with maybe a bit more refined process description and/or whatever fixes
can be found on github here: https://github.com/mk-fg/gitolite-ssh-user-proxy/
Huge thanks to sitaramc (gitolite author) for suggesting how to best setup gitolite triggers
for this purpose on the ML.
Oct 16, 2016
My problem was this: how do you do essentially a split-horizon DNS for different
apps in the same desktop session.
E.g. have claws-mail mail client go to localhost for someserver.com (because it
has port forwarded thru "ssh -L"), while the rest of them (e.g. browser and
such) keep using normal public IP.
Usually one'd use /etc/hosts for something like that, but it applies to all apps
on the machine, of course.
Next obvious option (mostly because it's been around forever) is to LD_PRELOAD
something that'd either override getaddrinfo() or open() for /etc/hosts, but
that sounds like work and not included in util-linux (yet?).
Easiest and most hassle-free way to do that these days is to run the thing in
its own "mount namespace" (CLONE_NEWNS, which actually dates all the way back
to linux-2.4.19), which sounds weird until you combine that with the fact that
you can bind-mount individual files (like that /etc/hosts one).
So, the magic line is:
# unshare -m sh -c \
  'mount -o bind /etc/hosts.forwarding /etc/hosts &&
    exec sudo -EHin -u myuser -- exec claws-mail'
Needs /etc/hosts.forwarding replacement-file for this app, which it will see as
a proper /etc/hosts, along with root privileges (or CAP_SYS_ADMIN) for CLONE_NEWNS.
Crazy "sudo -EHin" shebang is to tell sudo not to drop much env, but still
behave kinda as if on login, run zshrc and all that.
"su - myuser" or "machinectl shell myuser@ -- ..." can also be used there.
Replacing files like /etc/nsswitch.conf or /etc/{passwd,group} that way, one can
also essentially do any kind of per-app id-mapping - cool stuff.
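Same trick can also be done from python via raw syscalls through ctypes - a sketch assuming glibc and root privileges (the function name is illustrative, the flag constants are the real ones from kernel headers):

```python
# Sketch of the unshare + bind-mount trick via ctypes - needs root
# (CAP_SYS_ADMIN), same as the unshare(1) one-liner above.
import ctypes, os

CLONE_NEWNS = 0x00020000  # new mount namespace, from sched.h
MS_BIND, MS_REC, MS_PRIVATE = 4096, 16384, 1 << 18  # from sys/mount.h

def run_with_own_hosts(hosts_file, argv):
    libc = ctypes.CDLL('libc.so.6', use_errno=True)
    if libc.unshare(CLONE_NEWNS):
        raise OSError(ctypes.get_errno(), 'unshare failed')
    # Make mounts private first, so the bind won't propagate back to the
    # parent ns (unshare(1) does that by default, raw unshare(2) does not)
    libc.mount(b'none', b'/', None, MS_REC | MS_PRIVATE, None)
    libc.mount(hosts_file.encode(), b'/etc/hosts', None, MS_BIND, None)
    os.execvp(argv[0], argv)  # only this app sees the replacement /etc/hosts

# run_with_own_hosts('/etc/hosts.forwarding', ['claws-mail'])
```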
Of course, these days sufficiently paranoid or advanced people might as well run
every app in its own set of namespaces anyway, and have pretty much everything
per-app that way, why the hell not.
Sep 25, 2016
As of linux-4.8, something like xt_policy is still - unfortunately - on the
nftables TODO list, so to match traffic pre-authenticated via IPSec, some
workaround is needed.
Obvious one is to keep using iptables/ip6tables to mark IPSec packets with the
old xt_policy module, as these rules interoperate with nftables just fine - the
only important bit is the ordering of iptables hooks vs nft chain priorities,
which is rather easy to find in "netfilter_ipv{4,6}.h" files, e.g.:
enum nf_ip_hook_priorities {
  NF_IP_PRI_FIRST = INT_MIN,
  NF_IP_PRI_CONNTRACK_DEFRAG = -400,
  NF_IP_PRI_RAW = -300,
  NF_IP_PRI_SELINUX_FIRST = -225,
  NF_IP_PRI_CONNTRACK = -200,
  NF_IP_PRI_MANGLE = -150,
  NF_IP_PRI_NAT_DST = -100,
  NF_IP_PRI_FILTER = 0,
  NF_IP_PRI_SECURITY = 50,
  NF_IP_PRI_NAT_SRC = 100,
  NF_IP_PRI_SELINUX_LAST = 225,
  NF_IP_PRI_CONNTRACK_HELPER = 300,
  NF_IP_PRI_CONNTRACK_CONFIRM = INT_MAX,
  NF_IP_PRI_LAST = INT_MAX,
};
(see also Netfilter-packet-flow.svg by Jan Engelhardt for general overview of
the iptables hook positions, nftables allows to define any number of chains
before/after these)
So marks from iptables/ip6tables rules like these:
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -m policy --dir in --pol ipsec --mode transport -j MARK --or-mark 0x101
-A OUTPUT -m policy --dir out --pol ipsec --mode transport -j MARK --or-mark 0x101
COMMIT
Will be easy to match in priority=0 input/output hooks (as NF_IP_PRI_RAW=-300)
of nft ip/ip6/inet tables (e.g. mark and 0x101 == 0x101 accept)
But that'd split firewall configuration between iptables/nftables, adding more
hassle to keep whole "iptables" thing initialized just for one or two rules.
xfrm transformations (like ipsec esp decryption in this case) seem to preserve
all information about the packet intact, including packet marks (but not
conntrack states, which track the esp connection), which - as suggested by
Florian Westphal in #netfilter - can be utilized to match post-xfrm packets in
nftables by this preserved mark field.
E.g. having this (strictly before ct state {established, related} accept for
stateful firewalls, as each packet has to be marked):
define cm.ipsec = 0x101
add rule inet filter input ip protocol esp mark set mark or $cm.ipsec
add rule inet filter input ip6 nexthdr esp mark set mark or $cm.ipsec
add rule inet filter input mark and $cm.ipsec == $cm.ipsec accept
Will mark and accept both still-encrypted esp packets (IPv4/IPv6) and their
decrypted payload.
Note that this assumes that all IPSec connections are properly authenticated and
trusted, so be sure not to use anything like that if e.g. opportunistic
encryption is enabled.
Much simpler nft-only solution, though still not a full substitute for what
xt_policy does, of course.
Aug 31, 2016
Lack of some basic tool to "wait for connection" in linux toolkit always annoyed
me to no end.
root@alarm~:~# reboot
Shared connection to 10.0.1.75 closed.
% ssh root@10.0.1.75
...time passes, ssh doesn't do anything...
ssh: connect to host 10.0.1.75 port 22: No route to host
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
...[mashing Up/Enter] start it up already!...
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75
root@alarm~:~#
...finally!
Working a lot with ARM boards, I can have this thing repeating a few dozen times a day.
Same happens on every power-up, after fiddling with sd cards, etc.
And usually I know for a fact that I'll want to reconnect to the thing in
question asap and continue what I was doing there, but trying my luck a few
times with unresponsive or insta-failing ssh is rather counter-productive and
just annoying.
Instead:
% tping 10.0.1.75 && ssh root@10.0.1.75
root@alarm~:~#
That's it, no ssh timing-out or not retrying fast enough, no "Connection
refused" nonsense.
tping (code link, name is ping + fping + tcp ping) is a trivial ad-hoc
script that opens new TCP connection to specified host/port every second
(default for -r/--retry-delay) and polls connections for success/error/timeout
(configurable) in-between, exiting as soon as first connection succeeds, which
in example above means that sshd is now ready for sure.
Doesn't need extra privileges like icmp pingers do, simple no-deps python3 script.
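The core idea fits in a few lines - a simplified blocking sketch of it (not the actual tping code, which polls connections asynchronously):

```python
# Simplified tping-like wait-for-tcp-port loop, blocking variant.
import socket, time

def tcp_wait(host, port=22, timeout=3.0, retry_delay=1.0):
    'Block until a TCP connection to host:port succeeds.'
    while True:
        try:
            # Success means something is accept()ing there, e.g. sshd is up
            with socket.create_connection((host, port), timeout=timeout):
                return
        except OSError:
            time.sleep(retry_delay)  # refused / timeout / no route yet
```

After that returns, the ssh that follows should connect on the first try.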
Used fping as fping -qr20 10.0.1.75 && ssh root@10.0.1.75 before finally
taking time to write that thing, but it does what it says on the tin - icmp
ping, and usually results in "Connection refused" error from ssh, as there's gap
between network and sshd starting.
One of these "why the hell it's not in coreutils or util-linux" tools for me now.
Aug 05, 2016
More D3 tomfoolery!
It's been a while since I touched the thing, but recently been asked to make a
simple replacement for processing common-case time-series from temperature +
relative-humidity (calling these "t" and "rh" here) sensors (DHT22, sht1x, or what
have you), that's been painstakingly done in MS Excel (from tsv data) until now.
So here's the plot:
Misc feats of the thing, in no particular order:
- Single-html d3.v4.js + ES6 webapp (assembled by html-embed script) that
  can be opened from localhost or any static httpd on the net.
- Drag-and-drop or multi-file browse/pick box, for uploading any number of tsv
  files (in whatever order, possibly with gaps in data) instantly to JS on the
  page.
- Line chart with two Y axes (one for t, one for rh).
- Smaller "overview" chart below that, where one can "brush" needed timespan
  (i.e. subset of uploaded data) for all the other charts and readouts.
- Mouseover "vertical line" display snapping to specific datapoints.
- List of basic stats for picked range - min/max, timespan, value count.
- Histograms for value distribution, to easily see typical values for picked
  timespan, one each for t and rh.
Kinda love this sort of interactive vis stuff, and it only takes a bunch of
hours to put it all together with d3, as opposed to something like rrdtool,
its dead images and quirky mini-language.
Also, surprisingly common use-case for this particular chart, as having such
sensors connected to some RPi is pretty much first thing people usually want to
do (or maybe close second after LEDs and switches).
Will probably look a bit further to make it into an offline-first Service
Worker app, just for the heck of it, see how well this stuff works these days.
No point to this post, other than forgetting to write stuff for months is bad ;)
May 15, 2016
My current Razer E-Blue mouse has had this issue since I got it - Mouse-2 /
BTN_MIDDLE / middle-click (useful mostly as "open new tab" in browsers and
"paste" in X) sometimes produces two click events in rapid succession instead
of one.
It was more rare before, but lately it feels like it's harder to make it click
once than twice.
Seem to be either hardware problem with debouncing circuitry or logic in the
controller, or maybe a button itself not mashing switch contacts against each
other hard enough... or soft enough (i.e. non-elastic), actually, given that
they shouldn't "bounce" against each other.
Since there's no need to double-click that wheel-button ever, it looks rather
easy to debounce the click on Xorg input level, by ignoring repeated button
up/down events after producing the first full "click".
Easiest solution of that kind that I've found was to use guile (scheme) script
with xbindkeys tool to keep that click-state data and perform clicks
selectively, using xdotool:
(define razer-delay-min 0.2)
(define razer-wait-max 0.5)
(define razer-ts-start #f)
(define razer-ts-done #f)
(define razer-debug #f)

(define (mono-time)
  "Return monotonic timestamp in seconds as real."
  (+ 0.0 (/ (get-internal-real-time) internal-time-units-per-second)))

(xbindkey-function '("b:8") (lambda ()
  (let ((ts (mono-time)))
    (when
      ;; Enforce min ts diff between "done" and "start" of the next one
      (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
      (set! razer-ts-start ts)))))

(xbindkey-function '(Release "b:8") (lambda ()
  (let ((ts (mono-time)))
    (when razer-debug
      (format #t "razer: ~a/~a delay=~a[~a] wait=~a[~a]\n"
        razer-ts-start razer-ts-done
        (and razer-ts-done (- ts razer-ts-done)) razer-delay-min
        (and razer-ts-start (- ts razer-ts-start)) razer-wait-max))
    (when
      (and
        ;; Enforce min ts diff between previous "done" and this one
        (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
        ;; Enforce max "click" wait time
        (and razer-ts-start (<= (- ts razer-ts-start) razer-wait-max)))
      (set! razer-ts-done ts)
      (when razer-debug (format #t "razer: --- click!\n"))
      (run-command "xdotool click 2")))))
Note that xbindkeys actually grabs "b:8" here, which is "mouse button 8" - if
it were "b:2", the "xdotool click 2" command would recurse into the same
handler - so the wheel-clicker should be rebound to button 8 in X for this to
work.
Rebinding buttons in X is trivial to do on-the-fly, using standard "xinput" tool
- e.g. xinput set-button-map "My Mouse" 1 8 3 (xinitrc.input script can
be used as an extended example).
Running "xdotool" to do the actual clicks at the end seems a bit wasteful, as
xbindkeys already hooks into similar functionality, but unfortunately there are
no "send input event" calls exported to guile scripts (as of 1.8.6, at least).
Still, works well enough as it is, fixing that rather annoying issue.
[xbindkeysrc.scm on github]
Mar 03, 2016
I've been really conservative with the whole py2 -> py3 migration (shiny new
langs don't seem to be my thing), but one feature that finally makes it worth
the effort is the well-integrated - by now (Python-3.5 with its "async" and
"await" statements) - asyncio eventloop framework.
Basically, it's a twisted core, including eventloop hooked into standard
socket/stream ops, sane futures implementation, all the
Transports/Protocols/Tasks base classes and such concepts, standardized right
there in Python's stdlib.
On one hand, baking this stuff into the language core seems somewhat backwards,
but I think it's actually a really smart thing to do - not only does it
eliminate the whole "tech zoo" problem the nodejs ecosystem has, but it also
gets rid of the "require huge twisted blob or write my own half-assed eventloop
base" dilemma that pops up in every second script, even the most trivial ones.
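E.g. a complete TCP echo server with nothing but the stdlib - a sketch using the current higher-level API (3.5 itself needed a bit more loop.run_until_complete() boilerplate around this):

```python
# Complete TCP echo server on stdlib asyncio alone - no twisted, no
# hand-rolled select() loop. port=0 means "pick any free port".
import asyncio

async def echo(reader, writer):
    data = await reader.read(1024)
    while data:  # empty read = client closed the connection
        writer.write(data)
        await writer.drain()
        data = await reader.read(1024)
    writer.close()

async def serve(host='127.0.0.1', port=0):
    server = await asyncio.start_server(echo, host, port)
    return server  # bound port is in server.sockets[0].getsockname()
```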
Makes it worth starting any py script with py3 shebang for me, at last \o/
Dec 29, 2015
There's the multitail thing to tail multiple logs, potentially interleaved, in
one curses window, which is painful-to-impossible to browse through as you
would with simple "less".
There's lnav for parsing and normalizing a bunch of logs, and continuously
monitoring these, also interactive.
There's rainbow to color specific lines based on regexp, which can't really do
any interleaving.
And this has been bugging me for a while - there seems to be no easy way to get this:
This is an interleaved output from several timestamped log files, for events
happening at nearly the same time (which can be used to establish the sequence
between these and correlate output of multiple tools/instances/etc), browsable
via the usual "less" (or whatever other $PAGER) in an xterm window.
In this case, logfiles are from "btmon" (bluetooth sniffer tool), "bluetoothd"
(bluez) debug output and an output from gdb attached to that bluetoothd pid
(showing stuff described in previous entry about gdb).
Output for neither of these tools have timestamps by default, but this is easy
to fix by piping it through any tool which would add them into every line,
svlogd for example.
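The timestamping part itself is trivial - e.g. a minimal stand-in for that pipeline step might look like this (a sketch only, svlogd obviously does much more - rotation, filtering, etc):

```python
# Minimal line-timestamper to put into a pipeline, svlogd-style.
import sys, time

def stamp_line(line, now=None):
    'Prefix a single line with a UTC timestamp (now=None -> current time).'
    ts = time.strftime('%Y-%m-%dT%H:%M:%S', time.gmtime(now))
    return '{} {}'.format(ts, line)

def main():
    # usage: some-tool 2>&1 | python3 stamp.py >> some.log
    for line in sys.stdin:
        sys.stdout.write(stamp_line(line))
        sys.stdout.flush()  # force line-buffering, same concern as with stdbuf
```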
To be concrete (and to show one important thing about such log-from-output
approach), here's how I got these particular logs:
# mkdir -p debug_logs/{gdb,bluetoothd,btmon}

# gdb -ex 'source gdb_device_c_ftrace.txt' -ex q --args \
    /usr/lib/bluetooth/bluetoothd --nodetach --debug \
    1> >(svlogd -r _ -ttt debug_logs/gdb) \
    2> >(svlogd -r _ -ttt debug_logs/bluetoothd)

# stdbuf -oL btmon |
    svlogd -r _ -ttt debug_logs/btmon
Note that "btmon" runs via coreutils' stdbuf tool, which can be critical for
anything that writes to stdout via libc's fwrite() - with the default block
buffering on a non-tty stdout, output appears delayed and in batches, not as it
would in a terminal (where line buffering is used), resulting in incorrect
timestamps unless stdbuf or some other option disables such buffering.
With three separate logs from above snippet, natural thing you'd want is to see
these all at the same time, so for each logical "event", there'd be output from
btmon (network packet), bluetoothd (debug logging output) and gdb's function
call traces.
It's easy to concatenate all three logs and sort them to get these interleaved,
but then it can be visually hard to tell which line belongs to which file,
especially if they are from several instances of the same app (not really the
case here though).
A simple fix is to add a per-file distinct color to each line of each log, but
then you can't sort them anymore, as color escape sequences get in the way;
it's non-trivial to do even that, and it all adds up to a script anyway.
Seems hard to find any existing tools for the job, so I wrote a script to do
it - liac (in the usual mk-fg/fgtk github repo), which was used to produce the
output in the image above - that is, interleave lines (using any field for
sorting, btw), add tags for distinct ANSI colors to ones belonging to different
files, plus optional prefixes.
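A bare-bones version of the interleave-and-color idea (not the liac script itself, which handles files, field selection and prefixes on top) can look like:

```python
# Merge per-file lists of timestamp-prefixed lines in timestamp order,
# wrapping lines of each source into its own ANSI color.
import heapq

COLORS = ['\033[31m', '\033[32m', '\033[33m', '\033[34m']  # red/green/yellow/blue
RESET = '\033[0m'

def interleave(*sources):
    'Each source must itself be sorted, e.g. by a fixed-format ts prefix.'
    tagged = (
        ((line, n) for line in lines)  # carry source index along with each line
        for n, lines in enumerate(sources) )
    # heapq.merge compares plain (line, n) tuples, so colors get added only
    # afterwards - escape code prefixes would otherwise break the sort order
    return [
        COLORS[n % len(COLORS)] + line + RESET
        for line, n in heapq.merge(*tagged) ]
```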
Thought it might be useful to leave a note for anyone looking for something
similar.
[script source link]