My blog_title_herehttp://blog.fraggod.net/2024-01-17T22:48:00+05:00Basic markdown syntax/links checks after rst -> md migration2024-01-17T22:48:00+05:002024-01-17T22:48:00+05:00Mike Kazantsevtag:blog.fraggod.net,2024-01-17:/2024/01/17/basic-markdown-syntaxlinks-checks-after-rst-md-migration.html<p>Some days ago, I've randomly noticed that github stopped rendering long
<a class="reference external" href="https://docutils.sourceforge.io/rst.html">rst (as in reStructuredText)</a> README files on its repository pages.
Happened in a couple repos, with no warning or anything, it just said "document
takes too long to preview" below the list of files, with a link to view raw .rst file.</p>
<p>Sadly that's not the only issue with rst rendering, as <a class="reference external" href="https://codeberg.org">codeberg</a> (and pretty
sure its <a class="reference external" href="https://about.gitea.com/">Gitea</a> / <a class="reference external" href="https://forgejo.org/">Forgejo</a> base apps) had issues with some syntax there as well -
didn't make any repo links correctly, didn't render table of contents, missed
indented references for links, etc.</p>
<p>So thought to fix all that by converting these few long .rst READMEs to .md (<a class="reference external" href="https://spec.commonmark.org/current/">markdown</a>),
which does indeed fix all issues above, as it's a much more popular format nowadays,
and apparently well-tested and working fine in at least those git-forges.</p>
<p>One nice thing about rst however, is that it has one specification and a
reference implementation of tools to parse/generate its syntax -
<a class="reference external" href="https://docutils.sourceforge.io/">python docutils</a> - which can be used to go over .rst file in a strict
manner and point out all syntax errors in it (<tt class="docutils literal">rst2html</tt> does it nicely).</p>
<p>A good example of such errors that always gets me, is using links in the text with
reference-style URLs for those below (instead of inlining them), to avoid
making plaintext really ugly, unreadable and hard to edit due to giant
mostly-useless URLs in middle of it.</p>
<p>You have to remember to put all those references in, ideally not leave any
unused ones around, and then keep them matched to tags in the text precisely,
down to every single letter, which of course doesn't really work with typing
stuff out by hand without some kind of machine-checks.</p>
<p>And then also, for git repo documentation specifically, all these links should
point to files in the repo properly, and those get renamed, moved and removed
often enough to be a constant problem as well.</p>
<p>Proper static-HTML doc-site generation tools like <a class="reference external" href="https://www.mkdocs.org/">mkdocs</a> (or its
popular <a class="reference external" href="https://squidfunk.github.io/mkdocs-material/">mkdocs-material</a> fork) do some checking for issues like that
(though confusingly not nearly enough), but require a bit of setup,
with configuration and whole venv for them, which doesn't seem very
practical for a quick README.md syntax check in every random repo.</p>
<p>MD linters apparently go the other way and check various garbage metrics like
whether plaintext conforms to some style, while also (confusingly!) often not
checking basic crap like whether it actually works as a markdown format.</p>
<p>Task itself seems ridiculously trivial - find all <tt class="docutils literal">... [some link] ...</tt> and
<tt class="docutils literal">[some link]: ...</tt> bits in the file and report any mismatches between the two.</p>
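<p>The core of that check can be sketched in a few lines of Python - a hypothetical simplified version with naive regexps, not the actual markdown-checks implementation:</p>

```python
import re

def check_link_refs(md_text):
    # Reference-style usages like "[title]" - but not inline "[title](url)"
    # links and not the definition lines themselves (much simplified
    # vs a real markdown parser).
    uses = set(m.group(1).lower() for m in re.finditer(
        r'\[([^\]]+)\](?!\(|:)', md_text))
    # Definition lines like "[title]: URL"
    defs = set(m.group(1).lower() for m in re.finditer(
        r'^\[([^\]]+)\]:\s+\S+', md_text, re.M))
    return sorted(uses - defs), sorted(defs - uses)  # (undefined, unused)

md = '''Check [this link] and [that link].

[this link]: https://example.com
[stale link]: https://example.com/old
'''
undefined, unused = check_link_refs(md)
print('undefined refs:', undefined)  # refs used in text with no URL line
print('unused refs:', unused)        # URL lines nothing in the text refers to
```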
<p>But looking at md linters a few times now, couldn't find any usable one that does it nicely,
so ended up writing my own - <a class="reference external" href="https://github.com/mk-fg/fgtk#hdr-markdown-checks">markdown-checks tool</a> - to
detect all of the above problems with links in .md files, and some related quirks:</p>
<ul class="simple">
<li>link-refs - Non-inline links like "[mylink]" have exactly one "[mylink]: URL" line for each.</li>
<li>link-refs-unneeded - Inline URLs like "[mylink](URL)" when "[mylink]: URL" is also in the md.</li>
<li>link-anchor - Not all headers have "&lt;a name=hdr-...&gt;" line. See also -a/--add-anchors option.</li>
<li>link-anchor-match - Mismatch between header-anchors and hashtag-links pointing to them.</li>
<li>link-files - Relative links point to an existing file (relative to them).</li>
<li>link-files-weird - Relative links that start with non-letter/digit/hashmark.</li>
<li>link-files-git - If .md file is in a git repo, warn if linked files are not under git control.</li>
<li>link-dups - Multiple same-title links with URLs.</li>
<li>rx-in-code - Command-line-specified regexp (if any) detected inside code block(s).</li>
<li>tabs - Make sure md file contains no tab characters.</li>
<li>syntax - Any kind of incorrect syntax, e.g. blocks opened and not closed and such.</li>
<li>... - and probably a couple more by now</li>
</ul>
<p>ReST also has a nice <tt class="docutils literal">.. contents::</tt> feature that <a class="reference external" href="https://docutils.sourceforge.io/docs/ref/rst/directives.html#table-of-contents">automatically renders Table of Contents</a>
from all document headers, quite like mkdocs does for its sidebars, but afaik basic
<a class="reference external" href="https://spec.commonmark.org/current/">markdown</a> does not have that, and maintaining that thing with all-working links manually,
without any kind of validation, is pretty much impossible,
yet absolutely required for large enough documents with a non-autogenerated ToC.</p>
<p>So one interesting extra thing that I found needing to implement there was for the script
to automatically (with <tt class="docutils literal"><span class="pre">-a/--add-anchors</span></tt> option) insert/update
<tt class="docutils literal"><span class="pre">&lt;a name=hdr-some-section-header&gt;&lt;/a&gt;</span></tt> anchor-tags before every header,
because otherwise internal links within document are impossible to maintain either -
github makes hashtag-links from headers according to its own inscrutable logic,
gitlab/codeberg do their own thing, and there's no standard for any of that
(which is a historical problem with .md in general - poor ad-hoc standards on
various features, while .rst has internal links in its spec).</p>
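<p>For illustration, generating and inserting such stable anchors can be as simple as slugifying each header line - a hypothetical sketch, the actual tool's naming scheme and edge-case handling may differ:</p>

```python
import re

def hdr_anchor(header_line):
    # Turn '## Some Section Header!' into a stable anchor name
    # like 'hdr-some_section_header' (hypothetical naming scheme).
    text = header_line.lstrip('#').strip()
    slug = re.sub(r'[^\w\s-]', '', text).lower()
    slug = re.sub(r'[\s-]+', '_', slug)
    return f'hdr-{slug}'

def add_anchors(md_lines):
    # Insert or refresh an '<a name=...></a>' line before every '#'-style header.
    out = []
    for line in md_lines:
        if line.startswith('#'):
            anchor = f'<a name={hdr_anchor(line)}></a>'
            if out and out[-1].startswith('<a name='):
                out[-1] = anchor  # update an existing (possibly stale) anchor
            else:
                out.append(anchor)
        out.append(line)
    return out

print(add_anchors(['# Main Title', 'some text', '## Sub-Section Two']))
```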
<p>Thus making/maintaining table-of-contents kinda requires stable internal links and
validating that they're all still there, and ideally that all headers have such
internal link as well, i.e. new stuff isn't missing in the ToC section at the top.</p>
<p>Script addresses both parts by adding/updating those anchor-tags, and having
them in the .md file itself indeed makes all internal hashtag-links "stable"
and renderer-independent - you point to a name= set within the file, not guess
at what name github or whatever platform generates in its html at the moment
(which inevitably won't match, so kinda useless that way too).
And those are easily validated as well, since both anchor and link pointing to
it are in the file, so any mismatches are detected and reported.</p>
<p>I was also thinking about generating the table-of-contents section itself,
same as it's done in rst, for which surely many tools exist already,
but as long as it stays correct and checked for not missing anything,
there's not much reason to bother - editing it manually allows for much greater
flexibility, and it's not long enough for that to be any significant amount
of work, either to make initially or add/remove a link there occasionally.</p>
<p>With all these checks for wobbly syntax bits in place, markdown READMEs
seem to be as tidy, strict and manageable as rst ones. Both formats have rough
feature parity for such simple purposes, but .md is definitely the only one with
good-enough support on public code-forge sites, so a better option for public docs atm.</p>
(Ab-)Using fanotify as a container event/message bus2024-01-09T15:43:00+05:002024-01-09T15:43:00+05:00Mike Kazantsevtag:blog.fraggod.net,2024-01-09:/2024/01/09/ab-using-fanotify-as-a-container-eventmessage-bus.html<p>Earlier, as I was setting-up <a class="reference external" href="https://blog.fraggod.net/2023/12/28/trimming-down-list-of-trusted-tls-ca-certificates-system-wide-using-a-whitelist-approach.html">filtering for ca-certificates</a> on a host running
a bunch of <a class="reference external" href="https://wiki.archlinux.org/title/systemd-nspawn">systemd-nspawn containers</a> (similar to <a class="reference external" href="https://linuxcontainers.org/">LXC</a>), simplest way to handle
configuration across all those consistently seemed to be just rsyncing filtered
p11-kit bundle into them, and running (distro-specific) <tt class="docutils literal"><span class="pre">update-ca-trust</span></tt> there,
to easily have same expected CA roots across them all.</p>
<p>But since these are mutable full-rootfs multi-app containers with init (systemd)
in them, they update their filesystems separately, and routine package updates
will overwrite cert bundles in /usr/share/, so they'd have to be rsynced again
after that happens.</p>
<p>Good mechanism to handle this in linux is <a class="reference external" href="https://man.archlinux.org/man/fanotify.7">fanotify API</a>, which in practice is
used something like this:</p>
<div class="highlight"><pre><span></span><span class="gp"># </span>fatrace<span class="w"> </span>-tf<span class="w"> </span><span class="s1">'WD+&lt;&gt;'</span>
<span class="go">15:58:09.076427 rsyslogd(1228): W /var/log/debug.log</span>
<span class="go">15:58:10.574325 emacs(2318): W /home/user/blog/content/2024-01-09.abusing-fanotify.rst</span>
<span class="go">15:58:10.574391 emacs(2318): W /home/user/blog/content/2024-01-09.abusing-fanotify.rst</span>
<span class="go">15:58:10.575100 emacs(2318): CW /home/user/blog/content/2024-01-09.abusing-fanotify.rst</span>
<span class="go">15:58:10.576851 git(202726): W /var/cache/atop.d/atop.acct</span>
<span class="go">15:58:10.893904 rsyslogd(1228): W /var/log/syslog/debug.log</span>
<span class="go">15:58:26.139099 waterfox(85689): W /home/user/.waterfox/general/places.sqlite-wal</span>
<span class="go">15:58:26.139347 waterfox(85689): W /home/user/.waterfox/general/places.sqlite-wal</span>
<span class="go">...</span>
</pre></div>
<p>Where <a class="reference external" href="https://github.com/martinpitt/fatrace">fatrace</a> in this case is used to report all write, delete, create and
rename-in/out events for files and directories (that weird "-f WD+&lt;&gt;" mask),
which it promptly does.
It's useful to see which apps might abuse SSD/NVMe writes, or more generally to
understand what's going on with the filesystem under some load, which app
is to blame for that and where it happens, or as a debugging/monitoring tool.</p>
<p>But also if you want to rsync/update files after they get changed under some
dirs recursively, it's an awesome tool for that as well.
With container updates above, can monitor /var/lib/machines fs, and it'll report
when anything in &lt;some-container&gt;/usr/share/ca-certificates/trust-source/ gets
changed under it, which is when the aforementioned rsync hook should run again for
that container/path.</p>
<p>To have something more robust and simpler than a hacky bash script around
fatrace, I've made <a class="reference external" href="https://github.com/mk-fg/fgtk#run_cmd_pipenim">run_cmd_pipe.nim</a> tool, which reads an ini config file like this,
with a list of input lines to match:</p>
<div class="highlight"><pre><span></span><span class="na">delay</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">1_000</span><span class="w"> </span><span class="c1"># 1s delay for any changes to settle</span>
<span class="na">cooldown</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">5_000</span><span class="w"> </span><span class="c1"># min 5s interval between running same rule+run-group command</span>
<span class="k">[ca-certs-sync]</span>
<span class="na">regexp</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">: \S*[WD+<>]\S* +/var/lib/machines/(\w+)/usr/share/ca-certificates/trust-source(/.*)?$</span>
<span class="na">regexp-env-group</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">1</span>
<span class="na">regexp-run-group</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">1</span>
<span class="na">run</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">./_scripts/ca-certs-sync</span>
</pre></div>
<p>And runs commands depending on regexp (<a class="reference external" href="https://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions">PCRE</a>) matches on whatever input gets
piped into it, passing regexp match-groups through via env, with sane debouncing delays,
deduplication, config reloads, tiny mem footprint and other proper-daemon stuff.
Can also setup its pipe without shell, for an easy <tt class="docutils literal">ExecStart=run_cmd_pipe rcp.conf
<span class="pre">--</span> fatrace <span class="pre">-cf</span> <span class="pre">WD+&lt;&gt;</span></tt> systemd.service configuration.</p>
<p>Having this running for a bit now, and bumping into other container-related
tasks, realized how useful it is for a lot of things even more generally,
especially when multiple containers need to send some changes to host.</p>
<p>For example, if a bunch of containers should have custom network interfaces
bridged between them (in a root netns), which e.g. <a class="reference external" href="https://man.archlinux.org/man/systemd.nspawn.5#[NETWORK]_SECTION_OPTIONS">systemd.nspawn Zone=</a>
doesn't adequately handle - just add whatever custom
<tt class="docutils literal"><span class="pre">VirtualEthernetExtra=vx-br-containerA:vx-br</span></tt> into container, have a script
that sets-up those interfaces in those containers "touch" or create a file when it's done,
and then run host-script for that event, to handle bridging on the other side:</p>
<div class="highlight"><pre><span></span><span class="k">[vx-bridges]</span>
<span class="na">regexp</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">: \S*W\S* +/var/lib/machines/(\w+)/var/tmp/vx\.\S+\.ready$</span>
<span class="na">regexp-env-group</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">1</span>
<span class="na">run</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">./_scripts/vx-bridges</span>
</pre></div>
<p>This seems to be incredibly simple (touch/create files to pick-up as events),
very robust (as filesystems tend to be), and doesn't need to run anything more
than ~600K of fatrace + run_cmd_pipe, with a very no-brainer configuration
(which file[s] to handle by which script[s]).</p>
<p>Can be streamlined for any types and paths of containers themselves
(incl. <a class="reference external" href="https://linuxcontainers.org/">LXC</a> and <a class="reference external" href="https://en.wikipedia.org/wiki/Open_Container_Initiative">OCI app-containers</a> like <a class="reference external" href="https://www.docker.com/">docker</a>/<a class="reference external" href="https://podman.io/">podman</a>) by bind-mounting
dedicated filesystem/volume into those to pass such event-files around there,
kinda like it's done in <a class="reference external" href="https://systemd.io/PASSWORD_AGENTS/">systemd with its agent plug-ins, e.g. for handling
password inputs</a>, so not really a novel idea either.
<a class="reference external" href="https://man.archlinux.org/man/systemd.path.5">systemd.path</a> units can also handle simpler non-recursive "this one file changed" events.</p>
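<p>For a single known file like that, the fanotify-less equivalent could look roughly like this - unit and file names below are hypothetical:</p>

```ini
# vx-ready.path - hypothetical example; non-recursive,
# fires only when this exact path gets written and closed
[Path]
PathChanged=/var/lib/machines/containerA/var/tmp/vx.br.ready
Unit=vx-bridges.service

[Install]
WantedBy=multi-user.target
```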
<p>An alternative with such a shared filesystem can be to use any other IPC mechanism,
like append/tail file, fcntl locks, fifos or unix sockets, and tbf <a class="reference external" href="https://github.com/mk-fg/fgtk#run_cmd_pipenim">run_cmd_pipe.nim</a>
can handle all those too, by running e.g. <tt class="docutils literal">tail <span class="pre">-F</span> shared.log</tt> instead of fatrace,
but the latter is way more convenient on the host side, and can act on incidental or
out-of-control events (like pkg-mangler doing its thing in the initial ca-certs use-case).</p>
<p>Won't work for containers distributed beyond single machine or more self-contained VMs -
that's where you'd probably want more complicated stuff like <a class="reference external" href="https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol">AMQP</a>, <a class="reference external" href="https://en.wikipedia.org/wiki/MQTT">MQTT</a>, <a class="reference external" href="https://en.wikipedia.org/wiki/Kubernetes">K8s</a> and such -
but for managing one host's own service containers, regardless of whatever they run and
how they're configured, this seems to be a really neat way to do it.</p>
Trimming-down list of trusted TLS ca-certificates system-wide using a whitelist approach2023-12-28T15:19:00+05:002023-12-28T15:19:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-12-28:/2023/12/28/trimming-down-list-of-trusted-tls-ca-certificates-system-wide-using-a-whitelist-approach.html<p>It's no secret that Web PKI was always a terrible mess.</p>
<p>Idk of anything that can explain it better than <a class="reference external" href="https://www.youtube.com/watch?v=UawS3_iuHoA">Moxie Marlinspike's old
"SSL And The Future Of Authenticity" talk</a>, which still pretty much holds up
(and is kinda hilarious), as Web PKI for TLS still includes >150 root certs,
a couple of which get kicked out after malicious misuse or gross malpractice
every now and then, and it's actually more worrying when they don't.</p>
<p>And as of 2023, <a class="reference external" href="https://www.theregister.com/2023/11/08/europe_eidas_browser/">EU eIDAS proposal</a> stands to make this PKI much worse in the
near-future, adding a whole bunch of random national authorities to everyone's
list of trusted CAs, which of course have no rational business being there
on any level.</p>
<p>(with all people/orgs on the internet seemingly in agreement on that - see e.g.
<a class="reference external" href="https://www.eff.org/deeplinks/2022/12/eidas-20-sets-dangerous-precedent-web-security">EFF</a>, <a class="reference external" href="https://blog.mozilla.org/netpolicy/files/2023/11/eIDAS-Industry-Letter.pdf">Mozilla</a>, <a class="reference external" href="https://docs.google.com/document/d/1sGzaE9QTs-qorr4BTqKAe0AaGKjt5GagyEevDoavWU0/edit#heading=h.ipo800ypudh3">Ryan Hurst's excellent writeup</a>, etc - but it'll probably pass
anyway, for whatever political reasons)</p>
<p>So in the spirit of at least putting some bandaid on that, I had a long-standing
idea to write a logger for all CAs that my browser uses over time, then inspect
it after a while and kick the &lt;1% CAs out of the browser at least.
This is totally doable, and not that hard - e.g. <a class="reference external" href="https://github.com/JamesTheAwesomeDude/cerdicator/">cerdicator extension</a> can be
tweaked to log to a file instead of displaying CA info - but never got around to
doing it myself.</p>
<p><strong>Update 2024-01-03:</strong> there is now also <a class="reference external" href="https://github.com/RaymiiOrg/CertInfo/">CertInfo app</a> to scrape local history and
probe all sites there for certs, building a list of root and intermediate CAs to inspect.</p>
<p>But recently, scrolling through <a class="reference external" href="https://docs.google.com/document/d/1sGzaE9QTs-qorr4BTqKAe0AaGKjt5GagyEevDoavWU0/edit#heading=h.ipo800ypudh3">Ryan Hurst's "eIDAS 2.0 Provisional Agreement
Implications for Web Browsers and Digital Certificate Trust" open letter</a>,
the pie chart on page 3 there jumped out at me, as it showed that 99% of certs use
only 6-7 CAs - so why even bother logging those, there's a simple list of them,
which should mostly work for me too.</p>
<p>I remember browsers and different apps using their own CA lists being a problem
in the past, having to tweak mozilla nss database via its own tools, etc,
but by now, as it turns out, this problem seems to have been long-solved on a
typical linux, via distro-specific "ca-certificates" package/scripts and <a class="reference external" href="https://p11-glue.github.io/p11-glue/">p11-kit</a>
(or at least it appears to be solved like that on my systems).</p>
<p>Gist is that <tt class="docutils literal"><span class="pre">/usr/share/ca-certificates/trust-source/</span></tt> and its /etc
counterpart have *.p11-kit CA bundles installed there by some package like
<a class="reference external" href="https://archlinux.org/packages/core/x86_64/ca-certificates-mozilla/">ca-certificates-mozilla</a>, and then package-manager runs <tt class="docutils literal"><span class="pre">update-ca-trust</span></tt>,
which exports that to <tt class="docutils literal">/etc/ssl/cert.pem</tt> and such places, where all other
tools can pick up and use the same CAs.
Firefox (or at least my Waterfox build) even uses installed p11-kit bundle(s)
directly and immediately.
So those p11-kit bundles are all that needs to be altered or restricted somehow
to affect everything on the system, with <tt class="docutils literal"><span class="pre">update-ca-trust</span></tt> run at most - neat!</p>
<p>One problem I bumped into however, is that p11-kit tools only support masking
specific individual CAs from the bundle via blacklist, and that will not be
future-proof wrt upstream changes to that bundle, if the goal is to "only use
these couple CAs and nothing else".</p>
<p>So ended up writing a simple script to go through .p11-kit bundle files and remove
everything unnecessary from them on a whitelist basis - <a class="reference external" href="https://github.com/mk-fg/ca-certificates-whitelist-filter">ca-certificates-whitelist-filter</a> -
which uses a simple one-per-line format with wildcards to match multiple certs:</p>
<pre class="literal-block">
Baltimore CyberTrust Root # CloudFlare
ISRG Root X* # Let's Encrypt
GlobalSign * # Google
DigiCert *
Sectigo *
Go Daddy *
Microsoft *
USERTrust *
</pre>
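<p>Matching logic for such a list is trivial to express via shell-style wildcards - an illustrative Python sketch, not the actual script:</p>

```python
from fnmatch import fnmatchcase

def load_whitelist(text):
    # One pattern per line, '#' starts a comment, blank lines are skipped.
    patterns = []
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()
        if line:
            patterns.append(line)
    return patterns

def cert_allowed(label, patterns):
    # Case-sensitive shell-style wildcard match against a CA cert label.
    return any(fnmatchcase(label, p) for p in patterns)

wl = load_whitelist('''
ISRG Root X* # Let's Encrypt
DigiCert *
''')
print(cert_allowed('ISRG Root X1', wl))      # True
print(cert_allowed('Some National CA', wl))  # False
```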
<p>Picking whitelisted CAs from Ryan's list, found that GlobalSign should be added,
and that it already signs Google's GTS CAs (so the latter are unnecessary), while
"Baltimore CyberTrust Root" seems to be a strange omission, as it signs CloudFlare's
CA cert, which should've been a major thing on the pie chart in that eIDAS open letter.</p>
<p>But otherwise, that's pretty much it, leaving a couple of CAs instead of a hundred,
and a couple days into it so far, everything seems to be working fine with just those.
Occasional "missing root" error can be resolved easily by adding that root to the list,
or ignoring it for whatever irrelevant one-off pages, though this really doesn't seem
to be an issue at all.</p>
<p>This is definitely not a solution to Web PKI being a big pile of dung, made as
an afterthought and then abused relentlessly and intentionally, but I think it's a
good low-effort bandaid against clumsy mass-MitM by whatever random crooks on
the network, in ISPs and idiot governments.</p>
<p>One long-term issue with this approach though, is that if used at any scale, it
further shifts control over CA trust from e.g. Mozilla's p11-kit bundle to those
dozen giant CAs above, who will then realistically have to share their CAs with
other orgs and groups (as they're the only ones on the CA list), ossifies their
control over Web PKI in the future, and makes "trusting" them a meaningless
non-decision (as you can't avoid that, even as/if/when they have to sign sub-CAs
for whatever shady bad actors in secret).</p>
<p>To be fair, there are proposals and movements to remedy this situation, like
Certificate Transparency and various cert and TLS policies/parameters' pinning,
but I'm not hugely optimistic, and just hope that a quick fix like this might be
enough to be on the right side of "you don't need to outrun the bear, just the
other guy" metaphor.</p>
<p>Link: <a class="reference external" href="https://github.com/mk-fg/ca-certificates-whitelist-filter">ca-certificates-whitelist-filter script on github</a> (<a class="reference external" href="https://codeberg.org/mk-fg/ca-certificates-whitelist-filter">codeberg</a>, <a class="reference external" href="https://fraggod.net/code/git/ca-certificates-whitelist-filter">local git</a>)</p>
USB hub per-port power switching done right with a couple wires2023-11-17T15:54:00+05:002023-11-17T15:54:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-11-17:/2023/11/17/usb-hub-per-port-power-switching-done-right-with-a-couple-wires.html<p>Like probably most folks who are surrounded by tech, I have too many USB
devices plugged into the usual desktop, to the point that it kinda bothers me.</p>
<p>For one thing, some of those doohickeys always draw current and noticeably
heat up in the process, which can't be good on either side of the port.
Good examples of this are WiFi dongles (with iface left in UP state), a
cheap NFC reader I have (draws 300mA idling on the table 99.99% of the time),
or anything with "battery" or "charging" in the description.</p>
<p>Other issue is that I don't want some devices to always be connected.
Dual-booting into gaming Windows for instance, there's nothing good that
comes from it poking at and spinning-up USB-HDDs, Yubikeys or various
connectivity dongles' firmware, as well as jerking power on-and-off on those
for reboots and whenever random apps/games probe those (yeah, not sure why either).</p>
<p>Unplugging stuff by hand is work, and leads to replacing usb cables/ports/devices
eventually (more work), so toggling power on/off at USB hubs seems like an easy fix.</p>
<p>USB Hubs sometimes support that in one of two ways - either physical switches
next to ports, or using USB Per-Port-Power-Switching (PPPS) protocol.</p>
<p>Problem with physical switches is that relying on yourself not to forget to do
some on/off sequence manually for devices each time doesn't work well,
and it's kinda silly when it can be automated - i.e. if you want to run an ad-hoc AP,
let the script running <a class="reference external" href="https://w1.fi/hostapd/">hostapd</a> turn the power on-and-off around it as well.</p>
<p>But sadly, at least in my experience with it, USB Hub PPPS is also a bad solution,
broken by two major issues, which are likely unfixable:</p>
<ul>
<li><p class="first">USB Hubs supporting per-port power toggling are impossible to find or identify.</p>
<p>Vendors don't seem to care about and don't advertise this feature anywhere,
its presence/support changes between hardware revisions (probably as a
consequence of "don't care"), and is often half-implemented and dodgy.</p>
<p><a class="reference external" href="https://github.com/mvp/uhubctl">uhubctl project</a> has a <a class="reference external" href="https://github.com/mvp/uhubctl/#compatible-usb-hubs">list of Compatible USB hubs</a> for example, and note
how hubs there have remarks like "DUB-H7 rev D,E (black). Rev B,C,F,G not
supported" - shops and even product boxes mostly don't specify these revisions
anywhere, or even list the wrong one.</p>
<p>So good luck finding the right revision of one model even when you know it
works, within a brief window while it's still in stock.
And knowing which one works is pretty much only possible through testing -
same list above is full of old devices that are not on the market, and that
market seems to be too large and dynamic to track models/revisions accurately.</p>
<p>On top of that, sometimes hubs toggle data lines and not power (VBUS),
making the feature marginally less useful for cases above, but further confusing
the matter when reading specifications or even relying on reports from users.</p>
<p>Pretty sure that hubs with support for this are usually higher-end
vendors/models too, so it's expensive to buy a bunch of them to see what
works, and kinda silly to overpay for even one of them anyway.</p>
</li>
<li><p class="first">PPPS in USB Hubs has no memory and defaults to ON state.</p>
<p>This is almost certainly by design - when someone plugs in a hub without obvious
buttons, they might not care about power switching on ports, and just want it
to work, so ports have to be turned-on by default.</p>
<p>But that's also the opposite of what I want for all cases mentioned above -
turning on all power-hungry devices on reboot (incl. USB-HDDs that can draw
like 1A on spin-up!), all at once, in the "I'm starting up" max-power mode, is
like the worst thing such hub can do!</p>
<p>I.e. you disable these ports for a reason, maybe a power-related reason, which
"per-port <strong>power</strong> switching" name might even hint at, and yet here you go,
on every reboot or driver/hw/cable hiccup, this use-case gets thrown out of the
window completely, in the dumbest and most destructive way possible.</p>
<p>It also negates the other use-cases for the feature of course - when you
simply don't want devices to be exposed, aside from power concerns - hub does
the opposite of that and gives them all up whenever it bloody wants to.</p>
</li>
</ul>
<p>In summary - even if controlling hub port power via PPPS USB control requests
worked, and was easy to find (which it very much is not), it's pretty much
useless anyway.</p>
<p>My simple solution, which I can emphatically recommend:</p>
<ul>
<li><p class="first">Grab a robust USB Hub with switches next to ports, e.g. 4-port USB3 ones,
which seem to be under $10 these days.</p>
</li>
<li><p class="first">Get a couple of &lt;$1 direct-current solid-state relays or mosfets, one per port.</p>
<p>I use locally-made <a class="reference external" href="https://optron.proton-orel.ru/upload/library/information/optoelectronsReleMedium/k293kp12ap.pdf">К293КП12АП</a> ones, rated for toggling 0-60V 2A DC via
1.1-1.5V optocoupler input, just sandwiched together at the end - they don't
heat up at all and are easy to solder wires to.</p>
</li>
<li><p class="first">Some $3-5 microcontroller with the usual USB-TTY, like any Arduino or RP2040
(e.g. <a class="reference external" href="https://www.waveshare.com/rp2040-zero.htm">Waveshare RP2040-Zero</a> from aliexpress).</p>
</li>
<li><p class="first">A couple of copper wires pulled from an ethernet cable for power, and M/F jumper
pin wires to easily plug into MCU board headers.</p>
</li>
<li><p class="first">An hour or few with a soldering iron, multimeter and a nice podcast.</p>
</li>
</ul>
<p>Open up the USB Hub - a cheap one probably doesn't even have any screws - probe which
contacts the switches connect in there, solder short thick-ish copper ethernet wires
from their legs to mosfets/relays, and jumper wires from input pins of the latter
to plug into a tiny rp2040/arduino control board on the other end.</p>
<blockquote>
I like SSRs instead of mosfets here to not worry about controller and hub
being plugged into the same power supply that way, and they're cheap and foolproof -
pretty much can't connect them disastrously wrong, as they have diodes on both
circuits. Optocoupler LED in such relays needs one 360R resistor on shared GND
of control pins to drop 5V -> 1.3V input voltage there.</blockquote>
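<p>That 360R value can be sanity-checked with the usual series-resistor formula,
assuming a typical ~10mA optocoupler LED current (an assumption here - check the
exact figure against the relay datasheet):</p>

```python
# Series resistor for dropping supply voltage down to the optocoupler LED.
# Assumed values: 5V supply, ~1.3V LED forward drop, ~10mA target current.
v_supply, v_led, i_led = 5.0, 1.3, 0.010

r = (v_supply - v_led) / i_led  # Ohm's law on the leftover voltage
print(round(r))  # close to the standard 360R value used above
```

<p>With the 360R part actually installed, the LED current comes out to
(5 - 1.3) / 360 &#8776; 10.3mA - well within the usual range for such inputs.</p>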
<p>This approach solves both issues above - components are easy to find,
dirt-common and dirt-cheap, and are wired into default-OFF state, to only be
toggled into ON via whatever code conditions you put into that controller.</p>
<p>Simplest way, with an RP2040 running the usual <a class="reference external" href="https://micropython.org/">micropython</a> firmware,
would be to upload a <tt class="docutils literal">main.py</tt> file of literally this:</p>
<div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">sys</span><span class="o">,</span> <span class="nn">machine</span>
<span class="n">pins</span> <span class="o">=</span> <span class="nb">dict</span><span class="p">(</span>
<span class="p">(</span><span class="nb">str</span><span class="p">(</span><span class="n">n</span><span class="p">),</span> <span class="n">machine</span><span class="o">.</span><span class="n">Pin</span><span class="p">(</span><span class="n">n</span><span class="p">,</span> <span class="n">machine</span><span class="o">.</span><span class="n">Pin</span><span class="o">.</span><span class="n">OUT</span><span class="p">,</span> <span class="n">value</span><span class="o">=</span><span class="mi">0</span><span class="p">))</span>
<span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">)</span> <span class="p">)</span>
<span class="k">while</span> <span class="kc">True</span><span class="p">:</span>
<span class="k">try</span><span class="p">:</span> <span class="n">port</span><span class="p">,</span> <span class="n">state</span> <span class="o">=</span> <span class="n">sys</span><span class="o">.</span><span class="n">stdin</span><span class="o">.</span><span class="n">readline</span><span class="p">()</span><span class="o">.</span><span class="n">strip</span><span class="p">()</span>
<span class="k">except</span> <span class="ne">ValueError</span><span class="p">:</span> <span class="k">continue</span> <span class="c1"># not a 2-character line</span>
<span class="k">if</span> <span class="n">port_pin</span> <span class="o">:=</span> <span class="n">pins</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="n">port</span><span class="p">):</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'Setting port </span><span class="si">{</span><span class="n">port</span><span class="si">}</span><span class="s1"> state = </span><span class="si">{</span><span class="n">state</span><span class="si">}</span><span class="s1">'</span><span class="p">)</span>
<span class="k">if</span> <span class="n">state</span> <span class="o">==</span> <span class="s1">'0'</span><span class="p">:</span> <span class="n">port_pin</span><span class="o">.</span><span class="n">off</span><span class="p">()</span>
<span class="k">elif</span> <span class="n">state</span> <span class="o">==</span> <span class="s1">'1'</span><span class="p">:</span> <span class="n">port_pin</span><span class="o">.</span><span class="n">on</span><span class="p">()</span>
<span class="k">else</span><span class="p">:</span> <span class="nb">print</span><span class="p">(</span><span class="s1">'ERROR: Port state value must be "0" or "1"'</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'ERROR: Port </span><span class="si">{</span><span class="n">port</span><span class="si">}</span><span class="s1"> is out of range'</span><span class="p">)</span>
</pre></div>
<p>And now sending trivial "&lt;port&gt;&lt;0-or-1&gt;" lines to /dev/ttyACM0 will
toggle the corresponding pins 0-3 on the board to 0 (off) or 1 (on) state,
along with USB hub ports connected to those, while otherwise leaving ports
default-disabled.</p>
<p>From a linux machine, this serial terminal is easy to talk to by running <a class="reference external" href="https://docs.micropython.org/en/latest/reference/mpremote.html">mpremote</a>
that's used with micropython fw (note - "mpremote run ..." won't connect stdin to tty),
<tt class="docutils literal">screen /dev/ttyACM0</tt> or <a class="reference external" href="https://wiki.archlinux.org/title/Working_with_the_serial_console#Making_Connections">many other tools</a>, incl. just "echo" from shell scripts:</p>
<div class="highlight"><pre><span></span>stty<span class="w"> </span>-F<span class="w"> </span>/dev/ttyACM0<span class="w"> </span>raw<span class="w"> </span>speed<span class="w"> </span><span class="m">115200</span><span class="w"> </span><span class="c1"># only needed once for device</span>
<span class="nb">echo</span><span class="w"> </span><span class="m">01</span><span class="w"> </span>>/dev/ttyACM0<span class="w"> </span><span class="c1"># pin/port-0 enabled</span>
<span class="nb">echo</span><span class="w"> </span><span class="m">30</span><span class="w"> </span>>/dev/ttyACM0<span class="w"> </span><span class="c1"># pin/port-3 disabled</span>
<span class="nb">echo</span><span class="w"> </span><span class="m">21</span><span class="w"> </span>>/dev/ttyACM0<span class="w"> </span><span class="c1"># pin/port-2 enabled</span>
...
</pre></div>
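<p>From longer scripts, it might be cleaner to wrap those two-character commands
in a tiny helper - a sketch below assumes the firmware above, with the function
names and default device path being arbitrary examples:</p>

```python
def hub_port_cmd(port: int, enable: bool) -> bytes:
    # Build a "<port><0-or-1>" line that the MCU firmware above expects
    if not 0 <= port <= 3:
        raise ValueError(f'Port {port} is out of range')
    return f'{port}{1 if enable else 0}\n'.encode()

def hub_port_set(port: int, enable: bool, tty='/dev/ttyACM0'):
    # Send the command over the controller's usual USB-TTY device
    with open(tty, 'wb', buffering=0) as dst:
        dst.write(hub_port_cmd(port, enable))
```

<p>e.g. <tt class="docutils literal">hub_port_set(0, True)</tt> would do the same thing
as "echo 01" in the shell snippet above.</p>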
<p>I started out with finding a D-Link PPPS hub, quickly bumped into the
limitations above, and have been using this kind of solution instead for about
a year now, migrating from an old arduino uno to rp2040 mcu and hooking up
a second 4-port hub recently, as this kind of control over USB peripherals
from bash scripts that actually use those devices turns out to be very convenient.</p>
<p>So I can highly recommend not even bothering with PPPS hubs from the start,
and wiring up your own solution with whatever simple logic for controlling these
ports that you need, instead of the silly braindead way in which USB PPPS works.</p>
<p>An example of a bit more complicated control firmware that I use, with watchdog
timeout/pings logic on a controller (to keep device up only while script using
it is alive) and some other tricks can be found in <a class="reference external" href="https://github.com/mk-fg/hwctl">mk-fg/hwctl</a> repository
(<a class="reference external" href="https://github.com/mk-fg/hwctl">github</a>/<a class="reference external" href="https://codeberg.org/mk-fg/hwctl">codeberg</a> or a <a class="reference external" href="https://fraggod.net/code/git/hwctl">local mirror</a>).</p>
Auto-generated hash-petnames for things2023-09-05T04:34:00+05:002023-09-05T04:34:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-09-05:/2023/09/05/auto-generated-hash-petnames-for-things.html<p>Usually auto-generated names aim for being meaningful instead of distinct,
e.g. LEAFAL01A-P281, LEAFAN02A-P281, LEAFAL01B-P282, LEAFEL01A-P281,
LEAFEN01A-P281, etc, where single-letter diffs are common and decode to
something like different location or purpose.</p>
<p>Sometimes they aren't even that, and are assigned sequentially or by hash,
like in case of contents hashes …</p><p>Usually auto-generated names aim for being meaningful instead of distinct,
e.g. LEAFAL01A-P281, LEAFAN02A-P281, LEAFAL01B-P282, LEAFEL01A-P281,
LEAFEN01A-P281, etc, where single-letter diffs are common and decode to
something like different location or purpose.</p>
<p>Sometimes they aren't even that, and are assigned sequentially or by hash,
like in case of contents hashes, or interfaces/vlans/addresses in a network
infrastructure.</p>
<p>You always have to squint and spend time mentally decoding such identifiers,
as one letter/digit there can change the whole meaning of the message, so working
with them is unnecessarily tiring, especially if a system often presents many of
those without any extra context.</p>
<p>Usual fix is naming things, i.e. assigning hostnames to separate hardware
platforms/VMs, DNS names to addresses, and such, but that doesn't work well
with modern devops approaches where components are typically generated with
"reasonable" but less readable naming schemes as described above.</p>
<p>Manually naming such stuff up-front doesn't work, and even assigning <a class="reference external" href="https://en.wikipedia.org/wiki/Petname">petnames</a>
or descriptions by hand gets silly quickly (esp. with some churn in the system),
while it's not always possible to store/share that extra metadata properly
(e.g. on rebuilds in entirely different places).</p>
<p>A useful solution I found is hashing to automatically generated petnames,
which seem to be kinda overlooked and underused - i.e. hashing the name
into easily-distinct, readable and often memorable-enough strings:</p>
<ul class="simple">
<li>LEAFAL01A-P281 [ Energetic Amethyst Zebra ]</li>
<li>LEAFAN02A-P281 [ Furry Linen Eagle ]</li>
<li>LEAFAL01B-P282 [ Suave Mulberry Woodpecker ]</li>
<li>LEAFEL01A-P281 [ Acidic Black Flamingo ]</li>
<li>LEAFEN01A-P281 [ Prehistoric Raspberry Pike ]</li>
</ul>
<p>Even just different length of these names makes them visually stand apart from
each other already, and usually you don't really need to memorize them in any way,
it's enough to be able to tell them apart at a glance in some output.</p>
<p>I've bumped into only one de-facto standard scheme for generating those -
"Angry Purple Tiger", with a long list of compatible implementations
(e.g. <a class="reference external" href="https://github.com/search?type=repositories&q=Angry+Purple+Tiger">https://github.com/search?type=repositories&q=Angry+Purple+Tiger</a> ):</p>
<pre class="code console literal-block">
<span class="gp">% </span>angry_purple_tiger<span class="w"> </span>LEAFEL01A-P281<span class="w">
</span><span class="go">acidic-black-flamingo
</span><span class="gp">% </span>angry_purple_tiger<span class="w"> </span>LEAFEN01A-P281<span class="w">
</span><span class="go">prehistoric-raspberry-pike</span>
</pre>
<p>(default output is good for identifiers, but can use proper spaces and
capitalization to be more easily-readable, without changing the words)</p>
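<p>The underlying trick is trivial to replicate in a few lines of code - e.g. by
mapping hash digest bytes onto word lists, as in the sketch below (with tiny
made-up lists, not compatible with the actual Angry Purple Tiger dictionaries):</p>

```python
import hashlib

# Hypothetical short word lists - real implementations use much longer ones
ADJECTIVES = ['acidic', 'energetic', 'furry', 'prehistoric', 'suave']
COLORS = ['amethyst', 'black', 'linen', 'mulberry', 'raspberry']
ANIMALS = ['eagle', 'flamingo', 'pike', 'woodpecker', 'zebra']

def petname(name: str) -> str:
    # Stable hash of the input picks one word from each list,
    # so the same identifier always maps to the same petname
    d = hashlib.sha256(name.encode()).digest()
    return '-'.join( words[b % len(words)]
        for b, words in zip(d, [ADJECTIVES, COLORS, ANIMALS]) )
```

<p>Since the mapping is deterministic, such names can be (re-)generated in any
wrapper or output formatter, without storing them anywhere.</p>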
<p>It's not as high-entropy as "human hash" tools that use completely random words
or babble (see <a class="reference external" href="https://github.com/volution/z-tokens/">z-tokens</a> for that), but imo wins by orders of magnitude in readability
and ease of memorization instead, and on the scale of names, it matters.</p>
<p>Since those names don't need to be stored anywhere, and can be generated
anytime, it is often easier to add them in some wrapper around tools and APIs,
without the need for the underlying system to know or care that they exist,
while making a world of difference in usability.</p>
<p>Honorable mention here to occasional tools like <a class="reference external" href="https://en.wikipedia.org/wiki/Docker_(software)">docker</a> that have those already,
but imo it's more useful to remember about this trick for your own scripts
and wrappers, as that tends to be the place where you get to pick how to print
stuff, and can easily add an extra hash for that kind of accessibility.</p>
More FIDO2 hardware auth/key uses on a linux machine and their quirks2023-01-26T21:21:00+05:002023-01-26T21:21:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-01-26:/2023/01/26/more-fido2-hardware-authkey-uses-on-a-linux-machine-and-their-quirks.html<p>As I kinda went on to replace a lot of silly long and insecure passwords with
FIDO2 USB devices - aka "yubikeys" - in various ways (e.g. <a class="reference external" href="/2023/01/04/fido2-hardware-passwordsecret-management.html">earlier post about
password/secret management</a>), support for my use-cases was mostly good:</p>
<ul>
<li><p class="first"><a class="reference external" href="https://webauthn.guide/">Webauthn</a> - works ok, and been working well for me with U2F …</p></li></ul><p>As I kinda went on to replace a lot of silly long and insecure passwords with
FIDO2 USB devices - aka "yubikeys" - in various ways (e.g. <a class="reference external" href="/2023/01/04/fido2-hardware-passwordsecret-management.html">earlier post about
password/secret management</a>), support for my use-cases was mostly good:</p>
<ul>
<li><p class="first"><a class="reference external" href="https://webauthn.guide/">Webauthn</a> - works ok, and been working well for me with U2F/FIDO2 on various
important sites/services for quite a few years by now.</p>
<p>Wish it <a class="reference external" href="https://bugzilla.mozilla.org/show_bug.cgi?id=1669870">worked with NFC reader in Firefox on Linux Desktop</a> too, but oh
well, maybe someday, if Mozilla doesn't implode before that.</p>
<p><strong>Update 2024-02-21:</strong> <a class="reference external" href="https://github.com/BryanJacobs/fido2-hid-bridge">fido2-hid-bridge</a> seems to be an ok workaround for
this shortcoming, and for other apps not using libfido2 with its pcscd support.</p>
</li>
<li><p class="first"><a class="reference external" href="https://developers.yubico.com/pam-u2f/">pam-u2f</a> to login with the token using a much simpler
and hw-rate-limited PIN (with pw fallback).</p>
<p>The module itself worked effortlessly, but had to be added to various pam services
properly, so that password fallback is available as well, e.g. system-local-login:</p>
<pre class="literal-block">
#%PAM-1.0
# system-login
auth required pam_shells.so
auth requisite pam_nologin.so
# system-auth + pam_u2f
auth required pam_faillock.so preauth
# auth_err=ignore will try same string as password for pam_unix
-auth [success=2 authinfo_unavail=ignore auth_err=ignore] pam_u2f.so \
origin=pam://my.host.net authfile=/etc/secure/pam-fido2.auth \
userpresence=1 pinverification=1 cue
auth [success=1 default=bad] pam_unix.so try_first_pass nullok
auth [default=die] pam_faillock.so authfail
auth optional pam_permit.so
auth required pam_env.so
auth required pam_faillock.so authsucc
# auth include system-login
account include system-login
password include system-login
session include system-login
</pre>
<p>The "auth" section is an exact copy of system-login and system-auth lines from
current Arch Linux, with the pam_u2f.so line added in the middle, jumping over
pam_unix.so on success, or ignoring its failure to allow the entered string
to be tried as a password there.</p>
<p>Using the <a class="reference external" href="https://www.enlightenment.org/">Enlightenment Desktop Environment</a> here, I also needed to make a trivial
"include system-local-login" file for its lock screen, which uses the
"enlightenment" PAM service by default, falling back to basic system-auth or
something like that, instead of system-local-login.</p>
</li>
<li><p class="first">sk-ssh-ed25519 keys work out of the box with <a class="reference external" href="https://www.openssh.com/">OpenSSH</a>.</p>
<p>The part that gets loaded into ssh-agent is much less sensitive than the usual
private key - here it's just a cred-id blob that is useless without the FIDO2 token,
and even that can be stored on-device with Discoverable/Resident Creds,
for some extra security or portability.</p>
<p>SSH connections can easily be cached using ControlMaster / ControlPath /
ControlPersist opts in the client config, so there's no need to repeat touch
presence-check too often.</p>
<p>One somewhat-annoying thing was signing git commits - that can't be cached
like ssh connections, and doing a physical ack on every git commit/amend
is too burdensome, but the fix is easy too - add a separate ssh key just for signing.
Such a key would naturally be less secure, but it's not as important as an access key anyway.</p>
<p><a class="reference external" href="https://github.com/">Github</a> supports adding "signing" ssh keys that don't allow access,
but <a class="reference external" href="https://codeberg.org/">Codeberg</a> (and its underlying <a class="reference external" href="https://gitea.io/">Gitea</a>) currently does not - access keys
can be marked as "Verified", but can't be used for signing-only on the account,
which will probably be fixed, eventually, not a huge deal.</p>
</li>
<li><p class="first">Early-boot <a class="reference external" href="https://gitlab.com/cryptsetup/cryptsetup/-/blob/main/FAQ.md">LUKS / dm-crypt disk encryption</a> unlock with offline key and a
simpler + properly rate-limited "pin", instead of a long and hard-to-type passphrase.</p>
<p><a class="reference external" href="https://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-fido2-pkcs11-security-hardware-on-systemd-248.html">systemd-cryptenroll</a> can work for that, if you have typical "Full Disk Encryption"
(FDE) setup, with one LUKS-encrypted SSD, but that's not the case for me.</p>
<p>I have more flexible LUKS-on-LVM setup instead, where some LVs are encrypted
and needed on boot, some aren't, some might have <a class="reference external" href="https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html">fscrypt</a>, <a class="reference external" href="https://nuetzlich.net/gocryptfs/">gocryptfs</a>, some
other distro or separate post-boot unlock, etc etc.</p>
<p>systemd-cryptenroll does not support such a use-case well, as it generates and
stores different credentials for each LUKS volume, and then prompts for a
separate FIDO2 user verification/presence check for each of them, while I need
something like 5 unlocks on boot - no way I'm doing the same thing 5 times, but
it is unavoidable with such an implementation.</p>
<p>So had to make my own key-derivation <a class="reference external" href="https://github.com/mk-fg/fgtk#hdr-fido2_hmac_boot.nim">fido2-hmac-boot tool</a> for this,
described in more detail separately below.</p>
</li>
<li><p class="first">Management of legacy passwords, passphrases, pins, other secrets and similar
sensitive strings of information - described in a lot more detail in an
earlier <a class="reference external" href="/2023/01/04/fido2-hardware-passwordsecret-management.html">"FIDO2 hardware password/secret management" post</a>.</p>
<p>This works great, required a (simple) extra binary and integrating it into
emacs for my purposes, but it's also easy to setup in various other ways, and a lot
better than all alternatives (memory + reuse, plaintext somewhere, crappy
third-party services, paper, etc).</p>
</li>
<li><p class="first">One notable problem with FIDO2 devices is that they don't really show what it
is you are confirming, so as a user, I can think that it wants to authorize
one thing, while whatever compromised code secretly requests something else
from the token.</p>
<p>But that's reasonably easy to mitigate by splitting usage by security level
and rarity, then using multiple separate U2F/FIDO2 tokens for those,
given how tiny and affordable they are these days - I ended up having three of
them (so far!).</p>
<p>So when using a token with an "ssh-git" label, you have a good idea what it'd authorize.</p>
</li>
</ul>
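<p>The separate signing-only ssh key mentioned above can be wired into git with
a config fragment along these lines (key filename here is just an example):</p>

```ini
[gpg]
	format = ssh
[user]
	signingkey = ~/.ssh/id_git_sign.pub
[commit]
	gpgsign = true
```

<p>After that, every commit gets signed with the dedicated key, while the more
important access key stays presence-checked on each use.</p>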
<p>Aside from reasonably-minor quirks mentioned above, it all was pretty common
sense and straightforward for me, so can easily recommend migrating to workflows
built around cheap FIDO2 smartcards on modern linux as a basic InfoSec hygiene -
it doesn't add much inconvenience, and should be vastly superior to outdated
(but still common) practices/rituals involving passwords or keys-in-files.</p>
<hr class="docutils" />
<p>Given how all modern PC hardware has <a class="reference external" href="https://en.wikipedia.org/wiki/TPM2">TPM2</a> chips in motherboards, and these can
be used <a class="reference external" href="https://github.com/tpm2-software/tpm2-pkcs11">as a regular smartcard via PKCS#11 wrapper</a>, they might also be a
somewhat nice malware/tamper-proof cryptographic backend for various use-cases above.</p>
<p>From my perspective, they seem to be strictly inferior to using portable FIDO2
devices however:</p>
<ul>
<li><p class="first">Soldered on the motherboard, so can't be easily used in multiple places.</p>
</li>
<li><p class="first">Will live/die, and have to be replaced with the motherboard.</p>
</li>
<li><p class="first">Non-removable and always-accessible, holding persistent keys in there.</p>
<p>Booting a random OS with access to this thing seems to be a really bad idea,
as ideally such keys shouldn't even be physically connected most of the time,
especially to some random likely-untrustworthy software.</p>
</li>
<li><p class="first">There is no physical access confirmation mechanism, so no way to actually
limit it - anything getting ahold of the PIN is really bad, as secret keys can
then be used freely, without any further visibility, rate-limiting or confirmation.</p>
</li>
<li><p class="first">Motherboard vendor firmware security has a bad track record, and I'd rather
avoid trusting crappy code there with anything extra. In fact, part of the
point with having separate FIDO2 device is to trust local machine a bit less,
if possible, not more.</p>
</li>
</ul>
<p>So given that grabbing FIDO2 device(s) is an easy option, I don't think TPM2 is
even worth considering as an alternative to those, for all the reasons above,
and probably a bunch more that I'm forgetting at the moment.</p>
<p>Might be best to think of TPM2 to be in the domain and managed by the OS vendor,
e.g. leave it to Windows 11 and <a class="reference external" href="https://en.wikipedia.org/wiki/Windows_10#System_security">Microsoft SSO system</a> to do <a class="reference external" href="https://0pointer.net/blog/brave-new-trusted-boot-world.html">trusted/measured
boot</a> and store whatever OS-managed secrets, being entirely uninteresting and
invisible to the end-user.</p>
<hr class="docutils" />
<p>As also mentioned above, least well-supported FIDO2-backed thing for me was
early-boot dm-crypt / LUKS volume init - <a class="reference external" href="https://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-fido2-pkcs11-security-hardware-on-systemd-248.html">systemd-cryptenroll</a> requires
unlocking each encrypted LUKS blkdev separately, re-entering PIN and re-doing
the touch thing multiple times in a row, with a somewhat-uncommon LUKS-on-LVM
setup like mine.</p>
<p>But of course that's easily fixable, with the following steps in a typical
<a class="reference external" href="https://systemd.io/">systemd</a> init process:</p>
<ul>
<li><p class="first">Starting early on boot or in initramfs, Before=cryptsetup-pre.target, run
service to ask for FIDO2 token PIN via <a class="reference external" href="https://www.freedesktop.org/software/systemd/man/systemd-ask-password.html">systemd-ask-password</a>, then use that
with FIDO2 token and its hmac-secret extension to produce secure high-entropy
volume unlock key.</p>
<p>If the PIN or FIDO2 interaction won't work, print an error and repeat the query,
or exit if the prompt is cancelled, to fall back to default systemd passphrase
unlocking.</p>
</li>
<li><p class="first">Drop that key into <tt class="docutils literal"><span class="pre">/run/cryptsetup-keys.d/</span></tt> dir for each volume that it
needs to open, with whatever extra per-volume alterations/hashing.</p>
</li>
<li><p class="first">Let systemd pass cryptsetup.target, where <a class="reference external" href="https://www.freedesktop.org/software/systemd/man/systemd-cryptsetup@.service.html">systemd-cryptsetup</a> will
automatically lookup volume keys in that dir and use them to unlock devices.</p>
<p>If any keys won't work or are missing, systemd will do the usual passphrase-prompting
and caching, so there's always a well-supported first-class fallback unlock-path.</p>
</li>
<li><p class="first">Run early-boot service to cleanup after cryptsetup.target,
Before=sysinit.target, to remove <tt class="docutils literal"><span class="pre">/run/cryptsetup-keys.d/</span></tt> directory,
as everything should be unlocked by now and these keys are no longer needed.</p>
</li>
</ul>
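<p>The per-volume "alterations/hashing" in the second step can be as simple as
keyed hashing of the single FIDO2 hmac-secret output - a sketch of the idea
below (illustrative only, not necessarily what the actual tool does internally,
with an example key-dir path matching what systemd-cryptsetup checks):</p>

```python
import hashlib, hmac, pathlib

def derive_volume_key(master_secret: bytes, volume: str) -> bytes:
    # Keyed hash ties each key to the volume name, so no two volumes
    # share an unlock key despite one shared FIDO2-derived secret
    return hmac.new(master_secret, volume.encode(), hashlib.sha256).digest()

def write_volume_keys(master_secret, volumes, dst='/run/cryptsetup-keys.d'):
    # Drop per-volume key-files where systemd-cryptsetup looks them up
    dst = pathlib.Path(dst)
    dst.mkdir(mode=0o700, parents=True, exist_ok=True)
    for vol in volumes:
        (dst / f'{vol}.key').write_bytes(derive_volume_key(master_secret, vol))
```

<p>This way one PIN prompt and one token touch produce distinct keys for all
"auto" crypttab volumes at once.</p>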
<p>I'm using common <a class="reference external" href="https://dracut.wiki.kernel.org/index.php/Main_Page">dracut initramfs generator</a> with systemd here, where it's
easy to add a custom module that'd do all necessary early steps outlined above.</p>
<p><a class="reference external" href="https://github.com/mk-fg/fgtk#fido2_hmac_bootnim">fido2_hmac_boot.nim</a> implements all actual asking and FIDO2 operations, and can
be easily run from an initramfs systemd unit file like this (fhb.service):</p>
<div class="highlight"><pre><span></span><span class="k">[Unit]</span>
<span class="na">DefaultDependencies</span><span class="o">=</span><span class="s">no</span>
<span class="na">Wants</span><span class="o">=</span><span class="s">cryptsetup-pre.target</span>
<span class="c1"># Should be ordered same as stock systemd-pcrphase-initrd.service</span>
<span class="na">Conflicts</span><span class="o">=</span><span class="s">shutdown.target initrd-switch-root.target</span>
<span class="na">Before</span><span class="o">=</span><span class="s">sysinit.target cryptsetup-pre.target cryptsetup.target</span>
<span class="na">Before</span><span class="o">=</span><span class="s">shutdown.target initrd-switch-root.target systemd-sysext.service</span>
<span class="k">[Service]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">oneshot</span>
<span class="na">RemainAfterExit</span><span class="o">=</span><span class="s">yes</span>
<span class="na">StandardError</span><span class="o">=</span><span class="s">journal+console</span>
<span class="na">UMask</span><span class="o">=</span><span class="s">0077</span>
<span class="na">ExecStart</span><span class="o">=</span><span class="s">/sbin/fhb /run/initramfs/fhb.key</span>
<span class="na">ExecStart</span><span class="o">=</span><span class="s">/bin/sh -c '</span>\
<span class="w"> </span><span class="s">key=/run/initramfs/fhb.key; [ -e "$key" ] || exit 0; </span>\
<span class="w"> </span><span class="s">mkdir -p /run/cryptsetup-keys.d; while read dev line; </span>\
<span class="w"> </span><span class="s">do cat "$key" >/run/cryptsetup-keys.d/"$dev".key; </span>\
<span class="w"> </span><span class="s">done < /etc/fhb.devices; rm -f "$key"'</span>
</pre></div>
<p>With that <tt class="docutils literal">fhb.service</tt> file and compiled binary itself installed via
<tt class="docutils literal"><span class="pre">module-setup.sh</span></tt> in the module dir:</p>
<div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
check<span class="o">()</span><span class="w"> </span><span class="o">{</span>
<span class="w"> </span>require_binaries<span class="w"> </span>/root/fhb<span class="w"> </span><span class="o">||</span><span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="m">1</span>
<span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="m">255</span><span class="w"> </span><span class="c1"># only include if asked for</span>
<span class="o">}</span>
depends<span class="o">()</span><span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s1">'systemd crypt fido2'</span>
<span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="m">0</span>
<span class="o">}</span>
install<span class="o">()</span><span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="c1"># fhb.service starts binary before cryptsetup-pre.target to create key-file</span>
<span class="w"> </span>inst_binary<span class="w"> </span>/root/fhb<span class="w"> </span>/sbin/fhb
<span class="w"> </span>inst_multiple<span class="w"> </span>mkdir<span class="w"> </span>cat<span class="w"> </span>rm
<span class="w"> </span>inst_simple<span class="w"> </span><span class="s2">"</span><span class="nv">$moddir</span><span class="s2">"</span>/fhb.service<span class="w"> </span><span class="s2">"</span><span class="nv">$systemdsystemunitdir</span><span class="s2">"</span>/fhb.service
<span class="w"> </span><span class="nv">$SYSTEMCTL</span><span class="w"> </span>-q<span class="w"> </span>--root<span class="w"> </span><span class="s2">"</span><span class="nv">$initdir</span><span class="s2">"</span><span class="w"> </span>add-wants<span class="w"> </span>initrd.target<span class="w"> </span>fhb.service
<span class="w"> </span><span class="c1"># Some custom rules might be relevant for making consistent /dev symlinks</span>
<span class="w"> </span><span class="k">while</span><span class="w"> </span><span class="nb">read</span><span class="w"> </span>p
<span class="w"> </span><span class="k">do</span><span class="w"> </span>grep<span class="w"> </span>-qiP<span class="w"> </span><span class="s1">'\b(u2f|fido2)\b'</span><span class="w"> </span><span class="s2">"</span><span class="nv">$p</span><span class="s2">"</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span>inst_rules<span class="w"> </span><span class="s2">"</span><span class="nv">$p</span><span class="s2">"</span>
<span class="w"> </span><span class="k">done</span><span class="w"> </span><<span class="w"> </span><<span class="o">(</span>find<span class="w"> </span>/etc/udev/rules.d<span class="w"> </span>-maxdepth<span class="w"> </span><span class="m">1</span><span class="w"> </span>-type<span class="w"> </span>f<span class="o">)</span>
<span class="w"> </span><span class="c1"># List of devices that fhb.service will create key for in cryptsetup-keys.d</span>
<span class="w"> </span><span class="c1"># Should be safe to have all "auto" crypttab devices there, just in case</span>
<span class="w"> </span><span class="k">while</span><span class="w"> </span><span class="nb">read</span><span class="w"> </span>luks<span class="w"> </span>dev<span class="w"> </span>key<span class="w"> </span>opts<span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="o">[[</span><span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">opts</span><span class="p">//,/ </span><span class="si">}</span><span class="s2">"</span><span class="w"> </span><span class="o">=</span>~<span class="w"> </span><span class="o">(</span>^<span class="p">|</span><span class="w"> </span><span class="o">)</span>noauto<span class="o">(</span><span class="w"> </span><span class="p">|</span>$<span class="o">)</span><span class="w"> </span><span class="o">]]</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="k">continue</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"</span><span class="nv">$luks</span><span class="s2">"</span>
<span class="w"> </span><span class="k">done</span><span class="w"> </span><<span class="s2">"</span><span class="nv">$dracutsysrootdir</span><span class="s2">"</span>/etc/crypttab<span class="w"> </span>><span class="s2">"</span><span class="nv">$initdir</span><span class="s2">"</span>/etc/fhb.devices
<span class="w"> </span>mark_hostonly<span class="w"> </span>/etc/fhb.devices
<span class="o">}</span>
</pre></div>
<p>The module would need to be enabled via e.g. <tt class="docutils literal"><span class="pre">add_dracutmodules+="</span> fhb "</tt>
in dracut.conf.d, and will include the "fhb" binary, the service file to run it,
the list of devices to generate unlock-keys for in <tt class="docutils literal">/etc/fhb.devices</tt> there,
and any udev rules mentioning u2f/fido2 from <tt class="docutils literal">/etc/udev/rules.d</tt>, in case
these are relevant for a consistent device path or other basic
device-related setup.</p>
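<p>For reference, enabling it could look like this drop-in file
(the filename is arbitrary, only the option itself matters):</p>

```
# /etc/dracut.conf.d/99-fhb.conf
add_dracutmodules+=" fhb "
```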
<p><a class="reference external" href="https://github.com/mk-fg/fgtk#fido2_hmac_bootnim">fido2_hmac_boot.nim</a> "fhb" binary can be built (using C-like <a class="reference external" href="https://nim-lang.org/">Nim</a> compiler) with
all parameters needed for its operation hardcoded via e.g. <tt class="docutils literal"><span class="pre">-d:FHB_CID=...</span></tt>
compile-time options, to avoid needing to bother with any of those in systemd
unit file or when running it anytime on its own later.</p>
<p>It runs the same operation as <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-assert.html">fido2-assert</a> tool, producing an HMAC secret for
the specified Credential ID and Salt values.
Credential ID should be created/secured prior to that using the related <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-token.html">fido2-token</a>
and <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-cred.html">fido2-cred</a> binaries. All these tools come bundled with <a class="reference external" href="https://developers.yubico.com/libfido2/">libfido2</a>.</p>
<p>Since systemd doesn't nuke <tt class="docutils literal"><span class="pre">/run/cryptsetup-keys.d</span></tt> by default
(<tt class="docutils literal"><span class="pre">keyfile-erase</span></tt> option in <a class="reference external" href="https://www.freedesktop.org/software/systemd/man/crypttab.html">crypttab</a> can help, but has to be used consistently
for each volume), a custom unit file to do that can be added/enabled in the main
systemd as well:</p>
<div class="highlight"><pre><span></span><span class="k">[Unit]</span>
<span class="na">DefaultDependencies</span><span class="o">=</span><span class="s">no</span>
<span class="na">Conflicts</span><span class="o">=</span><span class="s">shutdown.target</span>
<span class="na">After</span><span class="o">=</span><span class="s">cryptsetup.target</span>
<span class="k">[Service]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">oneshot</span>
<span class="na">ExecStart</span><span class="o">=</span><span class="s">rm -rf /run/cryptsetup-keys.d</span>
<span class="k">[Install]</span>
<span class="na">WantedBy</span><span class="o">=</span><span class="s">sysinit.target</span>
</pre></div>
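<p>For comparison, the per-volume keyfile-erase alternative would mean tagging
each keyed volume in crypttab like below (volume name and UUID here are placeholders):</p>

```
# /etc/crypttab
cryptdata  UUID=...  /run/cryptsetup-keys.d/cryptdata.key  keyfile-erase
```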
<p>And that should do it for implementing the above early-boot unlocking sequence.</p>
<p>To enroll the key produced by the "fhb" binary into LUKS headers, simply run it,
same as early-boot systemd would, and luksAddKey its output.</p>
<p>Couple additional notes on all this stuff:</p>
<ul>
<li><p class="first">HMAC key produced by "fhb" tool is a high-entropy uniformly-random 256-bit
(32B) value, so unlike passwords, does not actually need any kind of KDF
applied to it - it is the key, and bruteforcing it should be about as infeasible
as bruteforcing a 128/256-bit master symmetric cipher key (and likely even harder).</p>
<p>Afaik cryptsetup doesn't support disabling KDF for a key-slot entirely,
but <tt class="docutils literal"><span class="pre">--pbkdf</span> pbkdf2 <span class="pre">--pbkdf-force-iterations</span> 1000</tt> options can be used to
set fastest parameters and get something close to disabling it.</p>
</li>
<li><p class="first"><tt class="docutils literal">cryptsetup config <span class="pre">--key-slot</span> N <span class="pre">--priority</span> prefer</tt> can be used to make
systemd-cryptsetup try unlocking the volume with this no-KDF keyslot first,
before trying other slots with memory/cpu-heavy argon2id and such proper PBKDF.
Doing it in this order is almost always a good idea, as a 1K-rounds PBKDF2
slot takes almost no time to try.</p>
</li>
<li><p class="first">Ideally each volume should have its own sub-key derived from the one that fhb
outputs, e.g. via a simple HMAC-SHA256(volume-uuid, key=fhb.key) operation,
which is omitted here for simplicity.</p>
<p>The fhb binary includes a --hmac option for that, to use instead of "cat" above:</p>
<pre class="literal-block">
fhb --hmac "$key" "$dev" /run/cryptsetup-keys.d/"$dev".key
</pre>
<p>This can be added so that any LUKS key/keyslot being leaked or broken (for
some weird reason) has no effect on other keys - reversing such HMAC back
to fhb.key to use it for other volumes would still be cryptographically infeasible.</p>
</li>
</ul>
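<p>To illustrate that last sub-key point, here's a sketch of the same
HMAC-SHA256 derivation in bash, with an openssl digest standing in for the
token-provided fhb output (the key seed and volume UUIDs below are made up
purely for the demo):</p>

```shell
#!/bin/bash
set -e

# stand-in for the fhb-produced 256-bit key - in reality it comes from the FIDO2 token
fhb_key=$(printf %s demo-fhb-output | openssl dgst -sha256 -binary | xxd -p -c256)

# per-volume sub-key = HMAC-SHA256(volume-uuid, key=fhb.key), as suggested above
# (openssl -hmac treats the key as a plain string here, which is fine for a sketch)
subkey() { printf %s "$1" | openssl dgst -sha256 -hmac "$fhb_key" -binary | xxd -p -c256; }

k_data=$(subkey 11111111-2222-3333-4444-555555555555)   # hypothetical volume UUIDs
k_home=$(subkey aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee)
echo "$k_data"
```

<p>Each volume gets its own deterministic 32-byte key, and leaking one of them
reveals nothing about the others or about fhb.key itself - the resulting
key-file would then be enrolled with the low-iteration pbkdf2 options from above.</p>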
<p>Custom <a class="reference external" href="https://github.com/mk-fg/fgtk#fido2_hmac_bootnim">fido2_hmac_boot.nim</a> binary/code used here is somewhat similar to an
earlier <a class="reference external" href="https://github.com/mk-fg/fgtk#hdr-fido2-hmac-desalinate.c">fido2-hmac-desalinate.c</a> that I use for password management (see above),
but a bit more complex, so is written in an easier and much nicer/safer language
(<a class="reference external" href="https://nim-lang.org/">Nim</a>), while still being compiled through C to pretty much same result.</p>
Pushing git-notes to one specific remote via pre-push hook2023-01-08T07:58:00+05:002023-01-08T07:58:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-01-08:/2023/01/08/pushing-git-notes-to-one-specific-remote-via-pre-push-hook.html<p>I've recently started using <a class="reference external" href="https://git-scm.com/docs/git-notes">git notes</a> as a good way to track metadata
associated with the code that's likely of no interest to anyone else,
and would only litter git-log if it was committed and tracked in the repo
as some .txt file.</p>
<p>But that doesn't mean that they shouldn't be …</p><p>I've recently started using <a class="reference external" href="https://git-scm.com/docs/git-notes">git notes</a> as a good way to track metadata
associated with the code that's likely of no interest to anyone else,
and would only litter git-log if it was committed and tracked in the repo
as some .txt file.</p>
<p>But that doesn't mean that they shouldn't be backed-up, shared and merged
between different places where you yourself work on and use that code from.</p>
<p>Since I have a <a class="reference external" href="https://fraggod.net/code/git">git mirror on my own host</a> (as you do with distributed scm),
and always clone from there first, adding other "free internet service" remotes
like <a class="reference external" href="https://github.com/mk-fg/">github</a>, <a class="reference external" href="https://codeberg.org/mk-fg/">codeberg</a>, etc later, it seems like a natural place to push such
notes to, as you'd always pull them from there with the repo.</p>
<p>That is not straightforward to configure git to do on a basic "git push"
however, because the "push" operation there works with a "[<repository> [<refspec>...]]"
destination concept.
I.e. you give it a single remote for where to push, and any number of specific
things to update as "<src>[:<dst>]" refspecs.</p>
<p>So when "git push" is configured with "origin" having multiple "url =" lines
under it in .git/config file (like home-url + github + codeberg), you don't get
to specify "push main+notes to url-A, but only main to url-B" - all repo URLs
get the same refs, as they are under the same remote.</p>
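<p>Such a multi-URL remote looks like this in .git/config (URLs here are examples):</p>

```
[remote "origin"]
	url = https://fraggod.net/code/git/some-repo.git
	url = git@github.com:mk-fg/some-repo.git
	url = git@codeberg.org:mk-fg/some-repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
```

<p>"git push origin" sends the same refspecs to every url line in there,
while fetches only use the first one.</p>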
<p>The obvious conceptual fix is to run different "git push" commands to different
remotes, but that's a hassle, and even if stored as an alias, it'd clash with
muscle memory that'll keep typing "git push" out of habit.</p>
<p>An alternative is to override the git-push command itself with some alias, but git
explicitly does not allow that, probably for good reasons, so that's out as well.</p>
<p>git-push does run hooks however, and those can do the extra pushes depending on
the URL, so that's an easy solution I found for this:</p>
<div class="highlight"><pre><span></span><span class="ch">#!/bin/dash</span>
<span class="nb">set</span><span class="w"> </span>-e
<span class="nv">notes_remote</span><span class="o">=</span>home
<span class="nv">notes_url</span><span class="o">=</span><span class="k">$(</span>git<span class="w"> </span>remote<span class="w"> </span>get-url<span class="w"> </span><span class="s2">"</span><span class="nv">$notes_remote</span><span class="s2">"</span><span class="k">)</span>
<span class="nv">notes_ref</span><span class="o">=</span><span class="k">$(</span>git<span class="w"> </span>notes<span class="w"> </span>get-ref<span class="k">)</span>
<span class="nv">push_remote</span><span class="o">=</span><span class="nv">$1</span><span class="w"> </span><span class="nv">push_url</span><span class="o">=</span><span class="nv">$2</span>
<span class="o">[</span><span class="w"> </span><span class="s2">"</span><span class="nv">$push_url</span><span class="s2">"</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="nv">$notes_url</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="o">||</span><span class="w"> </span><span class="nb">exit</span><span class="w"> </span><span class="m">0</span>
<span class="nv">master_push</span><span class="o">=</span><span class="w"> </span><span class="nv">master_oid</span><span class="o">=</span><span class="k">$(</span>git<span class="w"> </span>rev-parse<span class="w"> </span>master<span class="k">)</span>
<span class="k">while</span><span class="w"> </span><span class="nb">read</span><span class="w"> </span>local_ref<span class="w"> </span>local_oid<span class="w"> </span>remote_ref<span class="w"> </span>remote_oid<span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="s2">"</span><span class="nv">$local_oid</span><span class="s2">"</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="nv">$master_oid</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="nv">master_push</span><span class="o">=</span>t<span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="k">break</span><span class="w"> </span><span class="o">||</span><span class="w"> </span><span class="k">continue</span>
<span class="k">done</span>
<span class="o">[</span><span class="w"> </span>-n<span class="w"> </span><span class="s2">"</span><span class="nv">$master_push</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="w"> </span><span class="o">||</span><span class="w"> </span><span class="nb">exit</span><span class="w"> </span><span class="m">0</span>
<span class="nb">echo</span><span class="w"> </span><span class="s2">"--- notes-push [</span><span class="nv">$notes_remote</span><span class="s2">]: start -> </span><span class="nv">$notes_ref</span><span class="s2"> ---"</span>
git<span class="w"> </span>push<span class="w"> </span>--no-verify<span class="w"> </span><span class="s2">"</span><span class="nv">$notes_remote</span><span class="s2">"</span><span class="w"> </span><span class="s2">"</span><span class="nv">$notes_ref</span><span class="s2">"</span>
<span class="nb">echo</span><span class="w"> </span><span class="s2">"--- notes-push [</span><span class="nv">$notes_remote</span><span class="s2">]: success ---"</span>
</pre></div>
<p>That's a "pre-push" hook, which pushes notes-branch only to "home" remote,
when running a normal "git push" command to a "master" branch (to be replaced
with "main" in some repos).</p>
<p>The idea is to only augment a "normal" git-push, and not bother running this on
weirder updates or tweaks, keeping git-notes generally in sync between different
places where you can use them, with no cognitive overhead in day-to-day usage.</p>
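<p>What the hook effectively does for the notes-remote can be reproduced by hand
in a throwaway repo (all names and the note text here are just for the demo):</p>

```shell
#!/bin/bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare home.git                  # stand-in for the "home" remote
git init -q work && cd work
git config user.email you@example.com && git config user.name tester
git remote add home ../home.git
git commit -q --allow-empty -m init
git notes add -m 'some metadata for HEAD'    # stored under refs/notes/commits
notes_ref=$(git notes get-ref)
git push -q home HEAD "$notes_ref"           # same push as the hook runs
git ls-remote ../home.git 'refs/notes/*'     # notes ref is now on that remote
```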
<p>As a side-note - while these notes are normally attached to commits, for
something more global like "my todo-list for this project", not tied to a specific
ref that way, it's easy to attach one to some descriptive tag like "todo",
use it with e.g. <tt class="docutils literal">git notes edit todo</tt>, and track it in the repo as well.</p>
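<p>For example, in a throwaway repo (demo note text):</p>

```shell
#!/bin/bash
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name tester
git commit -q --allow-empty -m init
git tag todo                                # descriptive tag to hang the note on
git notes add -m 'TODO: write more docs' todo
git notes show todo                         # prints the note back
```

<p>The note is attached to the object the "todo" tag resolves to,
so it can be edited/shown by that name from then on.</p>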
FIDO2 hardware password/secret management2023-01-04T05:06:00+05:002023-01-04T05:06:00+05:00Mike Kazantsevtag:blog.fraggod.net,2023-01-04:/2023/01/04/fido2-hardware-passwordsecret-management.html<p>Passwords are bad, and they leak, but services are slow to adopt other auth
methods - even <a class="reference external" href="https://en.wikipedia.org/wiki/Time-based_one-time_password">TOTP</a> is better, and even for 1-factor-auth (e.g. using <a class="reference external" href="https://www.nongnu.org/oath-toolkit/oathtool.1.html">oathtool</a>).</p>
<p>But even without passwords, there are plenty of other easy-to-forget secrets to
store in a big pile somewhere, like same TOTP seed values …</p><p>Passwords are bad, and they leak, but services are slow to adopt other auth
methods - even <a class="reference external" href="https://en.wikipedia.org/wiki/Time-based_one-time_password">TOTP</a> is better, and even for 1-factor-auth (e.g. using <a class="reference external" href="https://www.nongnu.org/oath-toolkit/oathtool.1.html">oathtool</a>).</p>
<p>But even without passwords, there are plenty of other easy-to-forget secrets to
store in a big pile somewhere, like same TOTP seed values, API keys,
government ID numbers, card PINs and financial info, security answers,
encryption passphrases, and much else.</p>
<ul>
<li><p class="first">Easiest thing is to dump all these into a big .txt file somewhere.</p>
<p><strong>Problem</strong>: any malware, accidental or deliberate copy ("evil maid"),
or even a screen-scrape taken at an unfortunate time exposes everything!</p>
<p>And these all seem to be reasonably common threats/issues.</p>
</li>
<li><p class="first">Next best thing - store that file in some encrypted form.</p>
<p><strong>Problem</strong>: even a short-lived compromise can get the whole list along with the key from
memory, and otherwise it's still reasonably easy to leak both key/passphrase
and ciphertext over time separately, esp. with long-lived keys.</p>
<p>It's also all on-screen when opened, can be exposed/scraped from there,
but still an improvement over pure plaintext, at the expense of some added
key-management hassle.</p>
</li>
<li><p class="first">Encrypt whole file, but also have every actual secret in there encrypted
separately, with unique key for each one:</p>
<pre class="literal-block">
banks:
Apex Finance:
url: online.apex-finance.com
login: jmastodon
pw: fhd.eop0.aE6H/VZc36ZPM5w+jMmI
email: reg.apexfinance.vwuk@jmastodon.me
current visa card:
name: JOHN MASTODON
no: 4012888888881881
cvv2: fhd.KCaP.QHai
expires: 01/28
pin: fhd.y6En.tVMHWW+C
Team Overdraft: ...
google account:
-- note: FIDO2 2FA required --
login/email: j.x.mastodon789@gmail.com
pw: fhd.PNgg.HdKpOLE2b3DejycUGQO35RrtiA==
recovery email: reg.google.ce21@jmastodon.me
API private key: fhd.pxdw.QOQrvLsCcLR1X275/Pn6LBWl72uwbXoo/YiY
...
</pre>
<p>In this case, even relatively long-lived malware/compromise can only sniff
secrets that were used during that time, and it's mostly fine if this ends up
being opened and scrolled-through on a public video stream or some app screencaps
it by accident (or not) - all important secrets are in encrypted "fhd.XXX.YYY" form.</p>
<p>Downside of course is even more key management burden here, since simply storing
all these unique keys in a config file or a list won't do, as it'll end up being
equivalent to "encrypted file + key" case against leaks or machine compromise.</p>
</li>
<li><p class="first">Storing encryption keys defeats the purpose of the whole thing, typing them
is insecure vs various keyloggers, and there are also way too many to remember!</p>
<img style="width: 10rem; float: right;" src="http://blog.fraggod.net/images/fido2-nfc-keychain.png" title="FIDO2 USB token on a keychain" alt="FIDO2 USB token on a keychain"><p><strong>Solution</strong>: get some cheap FIDO2 hardware key to do all key-management
for you, and then just keep it physically secure, i.e. put it on the keychain.</p>
<p>This does not require remembering anything (except maybe a single PIN, if you
set one, and can remember it reliably within 8 attempts), is reasonably safe
against all common digital threats, and pretty much as secure against physical
ones as anything can be (assuming <a class="reference external" href="https://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis">rubber-hose cryptanalysis</a> works uniformly
well), if not more secure (e.g. permanent PIN attempts lockout).</p>
</li>
</ul>
<hr class="docutils" />
<p>Given the recent push for FIDO2 WebAuthn-compatible <a class="reference external" href="https://www.passkeys.io/">passkeys</a> by major megacorps
(Google/Apple/MS), and that you'd probably want to have such FIDO2 token for
<a class="reference external" href="https://github.blog/2021-05-10-security-keys-supported-ssh-git-operations/">SSH keys</a> and <a class="reference external" href="https://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-fido2-pkcs11-security-hardware-on-systemd-248.html">simple+secure full disk encryption</a> anyway, there seems to be
no good reason not to use it for securing passwords as well, in a much better way
than with any memorized or stored-in-a-file schemes for secrets/keys, as outlined above.</p>
<p>There's no go-to way to do this yet (afaik), but all tools to implement it exist.</p>
<p>Filippo Valsorda described one way to do it via plugin for a common "<a class="reference external" href="https://github.com/FiloSottile/age">age</a>"
encryption tool in <a class="reference external" href="https://words.filippo.io/dispatches/passage/">"My age+YubiKeys Password Management Solution" blog post</a>,
using Yubikey-specific PIV-smartcard capability (present in some of Yubico tokens),
and a shell script to create separate per-service encrypted files.</p>
<p>I did it a bit differently, with secrets stored alongside non-secret notes and
other info/metadata, and with a common FIDO2-standard <a class="reference external" href="https://fidoalliance.org/specs/fido2/fido-client-to-authenticator-protocol-v2.1-rd-20191217.html#sctn-hmac-secret-extension">hmac-secret extension</a>
(supported by pretty much all such devices, I think?), used in the following way:</p>
<ul>
<li><p class="first">Store ciphertext as a "fhd.y6En.tVMHWW+C" string, which is:</p>
<pre class="literal-block">
"fhd." || base64(salt) || "." || base64(wrapped-secret)
</pre>
<p>And keep those in the common list of various important info (also encrypted),
to view/edit with the usual emacs.</p>
</li>
<li><p class="first">When specific secret or password is needed, point to it and press "copy
decrypted" hotkey (as implemented by <a class="reference external" href="https://github.com/mk-fg/emacs-setup/blob/21479cc/core/fg_sec.el#L178-L240">fhd-crypt in my emacs</a>).</p>
</li>
<li><p class="first">Parsing that "fhd. ..." string gets "y6En" salt value, and it is sent to USB/NFC
token in the assertion operation (same as <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-assert.html">fido2-assert cli tool</a> runs).</p>
</li>
<li><p class="first">Hardware token user-presence/verification requires you to physically touch
button on the device (or drop it onto NFC pad), and maybe also enter a PIN
or pass whatever biometric check, depending on device and its configuration
(see <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-token.html">fido2-token tool</a> for that).</p>
</li>
<li><p class="first">Token/device returns "hmac-sha256(salt, key=secret-generated-on-device)",
unique and unguessable for that salt value, which is then used to decrypt
"tVMHWW+C" part of the fhd-string into original "secret" string (via simple XOR).</p>
</li>
<li><p class="first">Resulting "secret" value is copied into clipboard, to use wherever it was needed.</p>
</li>
</ul>
<p>This ensures that every single secret string in such password-list is only
decryptable separately, also demanding a separate physical verification procedure,
very visible and difficult to do unintentionally, same as with <a class="reference external" href="https://webauthn.guide/">WebAuthn</a>.</p>
<p>The only actual secret key in this case resides on a FIDO2 device, and is infeasible
to extract from there, for any common threat model at least.</p>
<p>Encryption/wrapping of a secret-string to an fhd-string above works in roughly the
same way - generate a salt value, send it to the token, get back HMAC and XOR it with
the secret, cutting the result down to that secret-string length.</p>
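<p>The whole wrap/unwrap cycle can be sketched in a few lines of bash, with an
openssl HMAC standing in for the on-token hmac-secret operation (on real
hardware the device key never leaves the authenticator; "hunter2" and all key
values below are demo-only):</p>

```shell
#!/bin/bash
set -e

# stand-in for the token: hmac-sha256(salt, key=secret-generated-on-device)
token_hmac() { printf %s "$1" | openssl dgst -sha256 -hmac demo-device-key -binary | xxd -p -c256; }
# XOR two equal-length hex strings
xor_hex() {
  local out= i
  for ((i=0; i<${#1}; i+=2)); do printf -v out '%s%02x' "$out" $(( 0x${1:i:2} ^ 0x${2:i:2} )); done
  printf %s "$out"; }

secret=hunter2
salt=$(head -c3 /dev/urandom | base64)                   # short random salt, e.g. "y6En"
sec_hex=$(printf %s "$secret" | xxd -p -c256)
mask=$(token_hmac "$salt"); mask=${mask:0:${#sec_hex}}   # cut HMAC mask to secret length
fhd="fhd.$salt.$(xor_hex "$sec_hex" "$mask" | xxd -r -p | base64)"
echo "$fhd"

# decryption: parse the salt back out, re-derive the same HMAC mask, XOR again
IFS=. read -r _ s w <<<"$fhd"
w_hex=$(printf %s "$w" | base64 -d | xxd -p -c256)
mask=$(token_hmac "$s"); mask=${mask:0:${#w_hex}}
decoded=$(xor_hex "$w_hex" "$mask" | xxd -r -p)
echo "$decoded"   # hunter2
```

<p>The real tools get the mask from the authenticator instead of a local HMAC
call, but the "fhd." string format and XOR logic follow the same idea as above.</p>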
<p>The last part introduces a small info-leak - secret length - but I don't think
that should be an issue in practice (always use long random passwords),
while producing nicer short ciphertexts.</p>
<p>There are also still some issues with using these physical dongles in a compromised
environment, which can lie about what it is being authorized by a device,
as they usually have no way to display that, but it's still a big improvement,
and can be somewhat mitigated by using multiple tokens for different purposes.</p>
<hr class="docutils" />
<p>I've wrapped all these crypto bits into a simple C fido2-hmac-desalinate tool here:</p>
<blockquote>
<a class="reference external" href="https://github.com/mk-fg/fgtk#hdr-fido2-hmac-desalinate.c">https://github.com/mk-fg/fgtk#hdr-fido2-hmac-desalinate.c</a></blockquote>
<p>Which needs a "Relying Party ID" value to compile - basically a unique hostname
that ideally won't be used for anything else with that authenticator
(e.g. "token1.fhd.jmastodon.me" for some owned domain name), which is itself
not a secret of any kind.</p>
<p>FIDO2 "credential" can be generated and stored on device first, using cli tools
that come with libfido2, for example:</p>
<pre class="literal-block">
% fido2-token -L
% fido2-cred -M -rh -i cred.req.txt -o cred.info.txt /dev/hidraw5 eddsa
</pre>
<p>Such a credential would work well on different machines with authenticators that
support FIDO2 Discoverable Credentials (aka Resident Keys), with HMAC key stored
on the same portable authenticator, but for simpler tokens that don't support
that and have no storage, static credential-id value (returned by <a class="reference external" href="https://developers.yubico.com/libfido2/Manuals/fido2-cred.html">fido2-cred tool</a>
without "-r" option) also needs to be built-in via -DFHD_CID= compile-time parameter
(and is also not a secret).</p>
<blockquote>
(technically that last "credential-id value" has device-master-key-wrapped
HMAC-key in it, but it's only possible to extract from there by the device
itself, and it's never passed or exposed anywhere in plaintext at any point)</blockquote>
<p>On the User Interface side, I use <a class="reference external" href="https://www.gnu.org/software/emacs/">Emacs</a> text editor to open/edit password-list
(also <a class="reference external" href="/2015/12/09/transparent-bufferfile-processing-in-emacs-on-loadsavewhatever-io-ops.html">transparently-encrypted/decrypted</a> using <a class="reference external" href="https://github.com/mk-fg/ghg">ghg tool</a>), and get encrypted
stuff from it just by pointing at the needed secret and pushing the hotkey to
copy its decrypted value, implemented by fhd-crypt routine here:</p>
<blockquote>
<a class="reference external" href="https://github.com/mk-fg/emacs-setup/blob/21479cc/core/fg_sec.el#L178-L281">https://github.com/mk-fg/emacs-setup/blob/21479cc/core/fg_sec.el#L178-L281</a></blockquote>
<p>(also, with universal-arg, fhd-crypt encrypts/decrypts and replaces pointed-at
or region-selected thing in-place, instead of copying into clipboard)</p>
<p>A separate binary built against the common <a class="reference external" href="https://github.com/Yubico/libfido2">libfido2</a> ensures that it's easy to use
such secret strings in any other way too, or fallback to manually decoding them
via cli, if necessary.</p>
<p>At least until push for passkeys makes no-password WebAuthn ubiquitous enough,
this seems to be the most convenient and secure way of password management for me,
but auth passwords aren't the only secrets, so it likely will be useful way
beyond that point as well.</p>
<hr class="docutils" />
<p>One thing not mentioned above is (important!) backups for that secret-file.
I.e. what if FIDO2 token in question gets broken or lost?
And how to keep such backup up-to-date?</p>
<p>My initial simple fix is having a shell script that does basically this:</p>
<div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="nb">set</span><span class="w"> </span>-eo<span class="w"> </span>pipefail
<span class="nb">echo</span><span class="w"> </span><span class="s2">"### Paste new entry, ^D after last line to end, ^C to cancel"</span>
<span class="nb">echo</span><span class="w"> </span><span class="s2">"### Make sure to include some context for it - headers at least"</span>
<span class="nv">chunk</span><span class="o">=</span><span class="k">$(</span>ghg<span class="w"> </span>-eo<span class="w"> </span>-r<span class="w"> </span>some-public-key<span class="w"> </span><span class="p">|</span><span class="w"> </span>base64<span class="w"> </span>-w80<span class="k">)</span>
<span class="nb">echo</span><span class="w"> </span>-e<span class="w"> </span><span class="s2">"--- entry [ </span><span class="k">$(</span>date<span class="w"> </span>-Is<span class="k">)</span><span class="s2"> ]\n</span><span class="si">${</span><span class="nv">chunk</span><span class="si">}</span><span class="s2">\n--- end\n"</span><span class="w"> </span>>>backup.log
</pre></div>
<p>Then, on any updates, I run this script and paste the updated plaintext
secret-block into it, before encrypting all secrets in that block for good.</p>
<p>It does one-way public-key encryption (using <a class="reference external" href="https://github.com/mk-fg/ghg">ghg</a> tool, but common <a class="reference external" href="https://github.com/FiloSottile/age">age</a> or
<a class="reference external" href="https://gnupg.org/">GnuPG</a> will work just as well), to store those encrypted updates, which can then
be safely backed-up alongside the main (also encrypted) list of secrets,
and everything can be restored from these using corresponding secure private key
(ideally not exposed or used anywhere for anything outside of such
fallback-recovery purposes).</p>
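<p>The same one-way backup/restore flow can be demonstrated with a bare openssl
RSA keypair in place of the ghg/age/GnuPG recovery key (file names and the
entry text below are made up for the demo):</p>

```shell
#!/bin/bash
set -e
dir=$(mktemp -d) && cd "$dir"

# throwaway keypair standing in for the fallback-recovery key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out backup.key 2>/dev/null
openssl pkey -in backup.key -pubout -out backup.pub

# encrypt an updated plaintext block with the public key, append it to the log
printf 'login: jdoe\npw: hunter2\n' >entry.txt
chunk=$(openssl pkeyutl -encrypt -pubin -inkey backup.pub -in entry.txt | base64 -w80)
printf -- '--- entry [ %s ]\n%s\n--- end\n\n' "$(date -Is)" "$chunk" >>backup.log

# recovery: pull the chunk back out of the log, decrypt with the private key
restored=$(sed -n '/^--- entry/,/^--- end/{/^---/d;p}' backup.log | base64 -d |
  openssl pkeyutl -decrypt -inkey backup.key)
echo "$restored"
```

<p>Only the public key is ever needed for the append side, so it can sit on the
same machine as the secret-file, while the private key stays offline.</p>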
<p><strong>Update 2024-02-21:</strong> <a class="reference external" href="https://github.com/mk-fg/fgtk#hdr-secret-token-backup">secret-token-backup</a> wrapper/tool is a more modern
replacement for that, which backs stuff up automatically, and can also be used
for safely getting specific secret out of there using other PIV yubikeys
(e.g. YK Nano stuck in a laptop's USB slot).</p>
<hr class="docutils" />
<p>And one more aside - since plugging devices into USB rarely aligns correctly
on the first try (USB curse), is somewhat tedious, and can potentially wear-out
contacts or snap-off the device, I've grabbed a cheap PC/SC-compatible ACR122U
NFC reader from aliexpress, and have been using it instead of a USB interface,
as modern FIDO2 tokens tend to support NFC for use with smartphones.</p>
<p>It works great for this password-management purpose - placing the key on the NFC
pad works instead of the touch presence-check with USB (at least with cheap
Yubico Security Key devices), with some short (<1 minute) timeout on the pad,
after which the token stops responding (returns ERR_PIN), to avoid misuse if one
forgets to remove it.</p>
<p><a class="reference external" href="https://github.com/Yubico/libfido2">libfido2</a> supports the PC/SC interface, and the <a class="reference external" href="https://pcsclite.apdu.fr/">PCSC lite project</a> providing it on
typical linux distros seems to support pretty much all NFC readers in existence.</p>
<p>libfido2 is in turn used by <a class="reference external" href="https://systemd.io/">systemd</a>, <a class="reference external" href="https://www.openssh.com/">OpenSSH</a>, <a class="reference external" href="https://developers.yubico.com/pam-u2f/">pam-u2f</a>, its fido2-token/cred/assert
cli, my fido2-hmac-desalinate password-management hack above, and many other tools.
So through it, all these projects automatically have easy and ubiquitous NFC support too.</p>
<blockquote>
(libfido2 also supports linux kernel AF_NFC interface in addition to PC/SC
one, which works for much narrower selection of card-readers implemented by
in-kernel drivers, so PC/SC might be easier to use, but kernel interface
doesn't need an extra pcscd dependency, if works for your specific reader)</blockquote>
<p>Notable things that don't use that lib and have issues with NFC seem to be
browsers - both Firefox and Chromium on desktop (and their forks, see e.g.
<a class="reference external" href="https://bugzilla.mozilla.org/show_bug.cgi?id=1669870">mozbug-1669870</a>) - which is a shame, but hopefully will be fixed there eventually.</p>
How to reliably set MTU on a weird (batman-adv) interface2022-11-30T11:32:00+05:002022-11-30T11:32:00+05:00Mike Kazantsevtag:blog.fraggod.net,2022-11-30:/2022/11/30/how-to-reliably-set-mtu-on-a-weird-batman-adv-interface.html<p>I like and use <a class="reference external" href="https://www.open-mesh.org/projects/batman-adv/wiki">B.A.T.M.A.N. (batman-adv)</a> mesh-networking protocol on the LAN,
to not worry about how to connect local linuxy things over NICs and WiFi links
into one shared network, and have been using it for quite a few years now.</p>
<p>Everything sensitive should run over …</p><p>I like and use <a class="reference external" href="https://www.open-mesh.org/projects/batman-adv/wiki">B.A.T.M.A.N. (batman-adv)</a> mesh-networking protocol on the LAN,
to not worry about how to connect local linuxy things over NICs and WiFi links
into one shared network, and have been using it for quite a few years now.</p>
<p>Everything sensitive should run over ssh/wg links anyway (or ipsec before wg was
a thing), so it's not a problem to have any-to-any access in a sane environment.</p>
<p>But due to extra frame headers, batman-adv benefits from either lower MTU on the
overlay interface or higher MTU on all interfaces which it runs over, to avoid
fragmentation.
Instead of remembering to tweak all other interfaces, I think it's easier to
only bother with one batman-adv iface on each machine, but somehow that proved
to be a surprising challenge.</p>
<p>MTU on an iface like "bat0" jumps on its own when slave interfaces in it change
state, so obvious places to set it, like networkd .network/.netdev files or
random oneshot boot scripts don't work - it can/will randomly change later
(usually immediately after these things set it on boot) and you'll only notice
when ssh or other tcp conns start to hang mid-session.</p>
<p>One somewhat-reliable and sticky workaround for these issues is to mangle TCP
MSS via the firewall (e.g. nftables), so that MTU changes are not an issue for
almost all connections, but that still leaves room for issues and fragmentation
in a few non-TCP things, and is obviously a hack - the wrong MTU value is still there.</p>
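<p>For reference, that MSS-mangling hack is the usual clamp-MSS-to-path-MTU rule,
something like this in nftables (forward-hook chain shown; the host's own
connections would need an output-hook chain as well):</p>

```
table inet mangle {
  chain forward {
    type filter hook forward priority mangle; policy accept;
    tcp flags syn tcp option maxseg size set rt mtu
  }
}
```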
<p>After experimenting with various "try to set mtu couple times after delay",
"wait for iface state and routes then set mtu" and such half-measures - none of
which worked reliably for that odd interface - here's what I ended up with:</p>
<div class="highlight"><pre><span></span><span class="k">[Unit]</span>
<span class="na">Wants</span><span class="o">=</span><span class="s">network.target</span>
<span class="na">After</span><span class="o">=</span><span class="s">network.target</span>
<span class="na">Before</span><span class="o">=</span><span class="s">network-online.target</span>
<span class="na">StartLimitBurst</span><span class="o">=</span><span class="s">4</span>
<span class="na">StartLimitIntervalSec</span><span class="o">=</span><span class="s">3min</span>
<span class="k">[Service]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">exec</span>
<span class="na">Environment</span><span class="o">=</span><span class="s">IF=bat0 MTU=1440</span>
<span class="na">ExecStartPre</span><span class="o">=</span><span class="s">/usr/lib/systemd/systemd-networkd-wait-online -qi ${IF}:off --timeout 30</span>
<span class="na">ExecStart</span><span class="o">=</span><span class="s">bash -c 'rl=0 rl_win=100 rl_max=20 rx=" mtu [0-9]+ "</span><span class="c1">; \</span>
<span class="w"> </span><span class="na">while read ev; do [[ "$ev"</span><span class="w"> </span><span class="o">=</span><span class="s">~ $rx ]] || continue</span><span class="c1">; \</span>
<span class="w"> </span><span class="na">printf -v ts "%%(%%s)T" -1; ((ts-</span><span class="o">=</span><span class="s">ts%%rl_win))</span><span class="c1">; ((rld=++rl-ts)); \</span>
<span class="w"> </span><span class="na">[[ $rld -gt $rl_max ]] && exit 59 || [[ $rld -lt 0 ]] && rl</span><span class="o">=</span><span class="s">ts</span><span class="c1">; \</span>
<span class="w"> </span><span class="na">ip link set dev $IF mtu $MTU || break; \</span>
<span class="w"> </span><span class="na">done < <(ip -o link show dev $IF; exec stdbuf -oL ip -o monitor link dev $IF)'</span>
<span class="na">Restart</span><span class="o">=</span><span class="s">on-success</span>
<span class="na">RestartSec</span><span class="o">=</span><span class="s">8</span>
<span class="k">[Install]</span>
<span class="na">WantedBy</span><span class="o">=</span><span class="s">multi-user.target</span>
</pre></div>
<p>It's a "F this sh*t" approach of "anytime you see mtu changing, change it back
immediately", which seems to be the only thing that works reliably so far.</p>
<p>A couple of weird things in there, on top of the "ip monitor" loop:</p>
<ul>
<li><p class="first"><tt class="docutils literal"><span class="pre">systemd-networkd-wait-online</span> <span class="pre">-qi</span> <span class="pre">${IF}:off</span> <span class="pre">--timeout</span> 30</tt></p>
<p>Waits for the interface to appear for some time, before either restarting
the .service, or failing when StartLimitBurst= is reached.</p>
<p>The :off networkd "operational status" (see <a class="reference external" href="https://www.freedesktop.org/software/systemd/man/networkctl.html">networkctl(1)</a>) is the earliest
one, and enough for "ip monitor" to latch onto the interface, so good enough here.</p>
</li>
<li><p class="first"><tt class="docutils literal">rl=0 rl_win=100 rl_max=20</tt> and couple lines with <tt class="docutils literal">exit 59</tt> on it.</p>
<p>This is rate-limiting in case something else decides to manage the interface's
MTU in a similar "persistent" way (at last!), to avoid pulling the thing
back-and-forth endlessly in a loop, or (over-)reacting to interface state
flapping weirdly.</p>
<p>I.e. stop service with failure on >20 relevant events within 100s.</p>
</li>
<li><p class="first"><tt class="docutils literal"><span class="pre">Restart=on-success</span></tt> to only restart on the "break", i.e. when "ip link set"
fails because the interface went away, limited by StartLimit*= options to also fail
eventually if it does not (re-)appear, or if that operation fails consistently
for some other reason.</p>
</li>
</ul>
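<p>That rate-limiting arithmetic is compact enough to be confusing, so here's the same idea restated as a standalone bash sketch (simplified re-statement, not the exact code from the unit above):</p>

```shell
#!/bin/bash
# Same rate-limit idea as in the unit above: allow at most rl_max
# events per rl_win-second window, signal failure beyond that.
# "rl" stores window-start timestamp + events counted in that window.
rl=0 rl_win=100 rl_max=20

rl_check() {
  local ts
  printf -v ts '%(%s)T' -1   # current unix time via bash-builtin strftime
  (( ts -= ts % rl_win ))    # align timestamp to start of current window
  (( rld = ++rl - ts ))      # events counted within this window so far
  if (( rld > rl_max )); then return 1   # over the limit - bail out
  elif (( rld < 0 )); then rl=$ts; fi    # new window - reset the counter
  return 0
}
```

In the actual service, rl_check failing corresponds to the "exit 59" branch, which stops the unit with an error instead of fighting some other MTU-setting daemon forever.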
<p>With various overlay tunnels becoming commonplace lately, MTU seems to be set
incorrectly by default about 80% of the time, and I almost feel like I'm done
fighting <a class="reference external" href="https://github.com/tonarino/innernet/issues/102">various tools with their way of setting it</a> guessed/hidden somewhere
(if implemented at all), and should just extend this loop into a more generic
system-wide "mtud.service" that'd match interfaces by wildcard and enforce some
admin-configured MTU values, regardless of what whatever creates them (wrongly)
thinks the right value might be.</p>
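<p>For the record, a rough bash sketch of what such a wildcard-matching "mtud" loop might look like - the pattern/MTU map and the MTUD_RUN knob below are made-up examples, and this is a quick illustration rather than a tested daemon:</p>

```shell
#!/bin/bash
# Hypothetical generic "mtud": pin admin-configured MTUs on any
# interface matching a glob pattern, whenever link events disagree.
# The pattern -> MTU map below is a made-up example configuration.
declare -A mtu_map=( ['bat*']=1440 ['wg*']=1420 )

fix_mtu() { # args: iface current-mtu
  local iface=$1 mtu=$2 pat
  for pat in "${!mtu_map[@]}"; do
    [[ $iface == $pat ]] || continue         # glob match, RHS intentionally unquoted
    [[ $mtu -ne ${mtu_map[$pat]} ]] || continue   # MTU already correct - skip
    ip link set dev "$iface" mtu "${mtu_map[$pat]}"
  done
}

# Lines from "ip -o link" look like: "7: bat0: <...> mtu 1500 ..."
# Only start the watch-loop when explicitly asked to, e.g. MTUD_RUN=1.
if [[ ${MTUD_RUN-} ]]; then
  rx='^[0-9]+: ([^:@ ]+).* mtu ([0-9]+) '
  while read -r line; do
    [[ $line =~ $rx ]] || continue
    fix_mtu "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
  done < <(ip -o link show; exec stdbuf -oL ip -o monitor link)
fi
```

Same caveats as the single-interface unit would apply - it'd want the rate-limiting logic and some Restart=/StartLimit*= wrapping around it too.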
<p>As seems to be common with networking stuff - you either centralize
configuration like that on a system, or deal with a constant never-ending stream
of app failures. Another good example here is in-app ACLs, connection settings
and security measures vs system firewalls and wg tunnels, with only the latter
actually working, and the former proven to be an utter disaster for decades now.</p>
Information as a disaggregated buffet instead of firehose or a trough2022-11-25T22:37:00+05:002022-11-25T22:37:00+05:00Mike Kazantsevtag:blog.fraggod.net,2022-11-25:/2022/11/25/information-as-a-disaggregated-buffet-instead-of-firehose-or-a-trough.html<p>Following information sources on the internet has long been compared to
"drinking from a firehose", and trying to "keep up" with that is
how I think people end up with hundreds of tabs in some kind of backlog,
overflowing inboxes, feeds, podcast queues, and feeling overwhelmed in general.</p>
<p>Main problem for me with that (I think) was aggregation -
I've used <a class="reference external" href="https://github.com/mk-fg/feedjack">web-based feed reader</a> in the past, aggregated feeds
from slashdot/reddit/hn/lobsters (at different times), and followed
"influencers" on twitter to get personally-curated feeds from there
in one bundle - and none of it really worked well.</p>
<p>For example, even when following podcast feeds, you end up with a backlog
(of things to listen to) that is hard to catch up with - naturally -
but it also adds increasingly more tracking and balancing issues,
as simply picking things in order will prioritize high-volume stuff,
and sometimes you'll want to skip the queue and now track "exceptions",
while a "no-brainer" pick pulls increasingly-old and irrelevant items first.</p>
<p>Same thing tends to happen with any bundle of feeds that you try to "follow",
which always ends up being overcrowded in my experience, and while you can
"declare bankruptcy" resetting the follow-cursor to "now", skipping backlog,
that doesn't solve the fundamental issue with this "following" model -
you'll just fall behind again, likely almost immediately - the approach itself
is wrong/broken/misguided.</p>
<p>An obvious "fix" might be to curate feeds better so that you can catch up,
but if your interests are broad enough and changing, that's rarely an option,
as sources tend to have their own "take it or leave it" flow rate,
and narrowing scope to only a selection that you can fully follow is silly and
unrepresentative for those interests, esp. if some of them are inconsistent or
drown out others, even while being generally low-noise.</p>
<p>Easy workable approach, that seem to avoid all issues that I know of, and worked
for me so far, goes something like this:</p>
<ul class="simple">
<li>Bookmark all sources you find interesting individually.</li>
<li>When you want some podcast to listen to, to catch up on news in some area,
or just an interesting thing to read idly - remember it - as in
"oh, would be nice to read/know-about/listen-to this right now".</li>
<li>Then pull-out a bookmark, and pick whatever is interesting and
most relevant there, not necessarily latest or "next" item in any way.</li>
</ul>
<p>This removes the mental burden of tracking and curating these sources,
balancing high-traffic with more rarefied ones, re-weighting stuff according
to your current interests, etc - and you don't lose out on anything either!</p>
<p>I.e. with something relevant to my current interests I'll remember to go back to
it for every update, but stuff that is getting noisy or falling off from that
sphere, or just no longer entertaining or memorable, will naturally slip my mind
more and more often, and eventually bookmark itself can be dropped as unused.</p>
<p>Things that I was kinda afraid-of with such model before -
and what various RSS apps or twitter follows "help" to track:</p>
<ul class="simple">
<li>I'll forget where/how to find important info sources.</li>
<li>Forget to return to them.</li>
<li>Miss out on some stuff there.</li>
<li>Work/overhead of re-checking for updates is significant.</li>
</ul>
<p>None of these seem to be an issue in practice, as most interesting and relevant
stuff will naturally be the first thing to pop up in memory to check/grab/read,
you always "miss out" on something when time is more limited than amount of
interesting goodies (i.e. it's a problem of what to miss out on, not whether you
do or don't), and time spent checking couple bookmarks is a rounding error
compared to processing the actual information (there's always a lot of new
stuff, and for something you check obsessively, you'd know the rate well).</p>
<p>This kind of "disaggregated buffet" way of zero-effort "controlling" information
intake is surprisingly simple, pretty much automatic (happens on its own),
very liberating (no backlog anywhere), and can be easily applied to different
content types:</p>
<ul>
<li><p class="first">Don't get podcast rss-tracking app, bookmark individual sites/feeds instead.</p>
<p>I store RSS/Atom feed URLs under one bookmarks-dir in Waterfox, and when
wanting something to listen to on a walk or while doing something monotonous,
remember and pull out a URL via quickbar (bookmarks can be searched via <tt class="docutils literal">*
&lt;query&gt;</tt> there iirc, I just have completion/suggestions enabled for bookmarks
only), run <a class="reference external" href="https://github.com/mk-fg/fgtk#rss-get">rss-get</a> script on the link, pick specific items/ranges/links to
download via its <a class="reference external" href="https://aria2.github.io/">aria2c</a> or <a class="reference external" href="https://github.com/yt-dlp/yt-dlp">yt-dlp</a> (with its built-in <a class="reference external" href="https://sponsor.ajay.app/">SponsorBlock</a>), play that.</p>
</li>
<li><p class="first">Don't "follow" people/feeds/projects on twitter or fediverse/mastodon
and then read through composite timeline, just bookmark all individual feeds
(on whatever home instances, or via <a class="reference external" href="https://github.com/zedeus/nitter/wiki/Instances">nitter</a>) instead.</p>
<p>This has a great added advantage of maintaining context in these platforms
which are notoriously bad for that, i.e. you read through things as they're
posted in order, not interleaved with all other stuff, or split over time.</p>
<p>Also this doesn't require an account, running a fediverse instance,
giving away your list of follows (aka social graph), or having your [current]
interests being tracked anywhere (even if "only" for bamboozling you with ads
on the subject to death).</p>
<p>With many accounts to follow and during some temporary twitter/fediverse
duplication, I've also found it useful (so far) to have a simple <a class="reference external" href="https://github.com/mk-fg/fgtk#ff-cli">ff-cli script</a>
to "open N bookmarks matching /@", when really bored, and quickly catch-up on
something random, yet interesting enough to end up being bookmarked.</p>
</li>
<li><p class="first">Don't get locked into subscribing or following "news" media that is kinda shit.</p>
<p>Simply not having that crap bundled with other things in same
reader/timeline/stream/etc will quickly make brain filter-out and "forget"
sources that become full of ads, &lt;emotion&gt;bait, political propaganda
and various other garbage, and brain will do such filtering all in the
background on its own, without wasting any time or conscious cycles.</p>
<p>There's usually nothing of importance to miss with such sources really,
as it's more like taking a read on the current weather,
only occasionally interesting/useful, and only for current/recent stuff.</p>
</li>
</ul>
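<p>Btw, the "pull out a feed and pick specific items" step from the podcast bullet above doesn't strictly need dedicated tooling - even a trivial shell pipeline can list what's in a feed to pick from (a generic toy sketch, nothing like the actual rss-get script):</p>

```shell
#!/bin/bash
# List <title> elements from an RSS/Atom feed on stdin, to eyeball
# and pick items from - a toy stand-in for a proper feed parser
# (naive: assumes each title fits on one line with no markup inside).
feed_titles() { grep -o '<title>[^<]*</title>' | sed 's/<[^>]*>//g'; }

# Usage sketch: curl -s "$feed_url" | feed_titles
feed_titles <<'EOF'
<feed><title>Example Feed</title>
<entry><title>Post one</title></entry>
<entry><title>Post two</title></entry></feed>
EOF
```

The point being that on-demand picking only needs a quick listing of items, not any persistent subscription/tracking state.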
<p>It's obviously not a guide to something "objectively best", and maybe only
works well for me this way, but as I've kinda-explained it (poorly) in chats
before, thought to write it down here too - hopefully somewhat more coherently -
and maybe just link to it later from somewhere.</p>