Aug 14, 2018

rst-based org-mode-like calendar generator for conky

Basically converting some free-form .rst (as in ReST or reStructuredText) like this:

Minor chores
------------

Stuff no one ever remembers doing.

- Pick up groceries

  :ts: 2018-08-27 12:00

  Running low on salt, don't forget to grab some.

- Do the laundry

  :ts: every 2w interval

  That pile in the corner across the room.


Total Annihilation
------------------

:ts-start: 2018-09-04 21:00
:ts-end: 2018-09-20 21:00

For behold, the LORD will come in fire And His chariots like the whirlwind,
To render His anger with fury, And His rebuke with flames of fire. ... blah blah

Into this:

[image: conky calendar generated from rst]

Though usually with a bit less theatrical evil and more useful events that you don't want to waste precious head-space on always keeping track of.

Emacs org-mode is the most praised and common solution for this (well, in the non-web/gui/mobile world, that is), but I never bothered to learn its special syntax, and don't really want such a basic thing to be emacs/elisp-specific (like, running emacs just to parse/generate stuff!).

rst, on the other hand, is an old friend (ever since I found markdown too limiting and not blocky enough), supports embedding structured data (really like how github highlights it btw - check out the cal.rst example here), and is super-easy to work with in anything, not just emacs, as it can be parsed by a whole bunch of tools.

Project on github: mk-fg/rst-icalendar-event-tracker

Should be zero-setup, light on dependencies and very easy to use.

Has simple iCalendar export for anything else, but with conky in particular, its generated snippet can be included via the ${catp /path/to/output} directive, and configured wrt columns/offsets/colors/etc both on script-level via the -o/--conky-params option and per-event via :conky: <opts> rst tags.

A calendar like that is not only useful if you wear suits - it also helps to check on all these cool once-per-X podcasts, mark future releases of some interesting games or movies, and track all the monthly bills and chores that are easy to forget about.

And it feels great to not be afraid to forget or miss all this stuff anymore.
In fact, having an excuse to structure and write it down helps a ton already.

But beyond that, I feel like transient, passing reminders can be just as good for tracking updates on some RSS feeds or URLs - you notice an update and check it out if you want to, maybe even mark it as a todo-entry somewhere, but it won't hang over your head by default (and forever) in an inbox, feed reader or a large un-*something* number in bold someplace else.

So the plan is to add url-grepping and rss-checking as well, having updates on arbitrary pages or feeds create calendar events there, which should at least be useful for the aforementioned media periodics.

In fact, an rst-configurable feed-checker (not "reader"!) might be a much better idea even for text-based content than some crappy web-based django monstrosity like the one I used before.

Aug 09, 2018

Lean Raspberry Pi UIs with Python and OpenVG

Implementing a rather trivial text + chart UI on RPi recently, I was surprised that it's somehow still not very straightforward in 2018.

Basic idea is to implement something like this:

[image: RPi ADC UI mockup]

Which is basically three text labels and three rectangles, updated roughly once per second, so nothing that'd require a 30fps 3D rendering loop or the performance of C or Go - the most basic 2D API and python should work fine for it.

There are plenty of such libs/toolkits on top of X/Wayland and similar stuff, but that's a ton of extra layers of junk and jank instead of a few trivial OpenVG calls, with the only non-trivial quirk being rendering scaled text labels.

There didn't seem to be anything in python that looks suitable for the task, notably:

  • ajstarks/openvg - great simple OpenVG wrapper lib, but C/Go only, has a somewhat unorthodox unprefixed API, and seems to be abandoned these days.
  • povg - rather raw OpenVG ctypes wrapper, looks ok, but rendering fonts there would be a hassle.
  • libovg - includes a py wrapper, but seems to be long-abandoned and has broken text rendering.
  • pi3d - for 3D graphics, so quite a different beast, rather excessive and really hard to use for static 2D UIs.
  • Qt5 and cairo/pango - both support OpenVG, but have excessive dependencies, with cathedral-like ecosystems built around them (qt/gtk respectively).
  • mgthomas99/easy-vg - updated fork of ajstarks/openvg, with proper C-style namespacing, some fixes, and repackaged as a shared lib.

So with no good python alternative, the last option of just wrapping the dozen-or-so easy-vg .so calls via ctypes seemed to be a good-enough solution, with a ~100-line wrapper covering all calls there (evg.py in mk-fg/easy-vg).
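
For reference, the wrapping pattern itself is about as simple as ctypes gets - a sketch with symbol names from ajstarks' original unprefixed API (easy-vg renames these, so check evg.py for the actual list):

import ctypes as ct

lib = ct.CDLL('./libshapes.so') # easy-vg build artifact
vg_float = ct.c_float

def _wrap(name, argtypes, restype=None):
  fn = getattr(lib, name)
  fn.argtypes, fn.restype = argtypes, restype
  return fn

# hypothetical Fill(r, g, b, alpha) / Rect(x, y, w, h) style calls
fill = _wrap('Fill', [ct.c_uint]*3 + [vg_float])
rect = _wrap('Rect', [vg_float]*4)

fill(255, 255, 255, 1.0)
rect(10.0, 10.0, 100.0, 50.0)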

With that, rendering code for all above UI ends up being as trivial as:

evg.begin()
evg.background(*c.bg.rgb)
evg.scale(x(1), y(1))

# Bars
evg.fill(*c.baseline.rgba)
evg.rect(*pos.baseline, *sz.baseline)
evg.fill(*c[meter_state].rgba)
evg.rect(*pos.meter, sz.meter.x, meter_height)

## Chart
evg.fill(*c.chart_bg.rgba)
evg.rect(*pos.chart, *sz.chart)
if len(chart_points) > 1:
  ...
  arr_sig = evg.vg_float * len(cp)
  evg.polyline(*(arr_sig(*map(
    op.attrgetter(k), chart_points )) for k in 'xy'), len(cp))

## Text
evg.scale(1/x(1), 1/y(1))
text_size = dxy_scaled(sz.bar_text)

evg.fill(*(c.sku.rgba if not text.sku_found else c.sku_found.rgba))
evg.text( x(pos.sku.x), y(pos.sku.y),
  text.sku.encode(), None, dxy_scaled(sz.sku) )
...
(note: "*stuff" is not a C pointer, but python's syntax for unpacking a list of values into arguments)
See also: hello.py example for evg.py API, similar to easy-vg's hello.c.

That code can start straight-up after local-fs.target, with the only dependency being easy-vg's libshapes.so wrapping OpenVG calls to RPi's /dev/vc*, and - being python - it can use all the nice stuff like gpiozero, staying succinct and a no-brainer to work with.

Few additional notes on such over-the-console UIs:

  • RPi's VC4/DispmanX has a great "layers" feature, where multiple apps can display different stuff at the same time.

    This makes it easy to implement e.g. a splash screen (via some simple/quick 10-liner binary) always running underneath the UI, hiding any noise and providing more graceful start/stop/crash transitions.

    It can even be used to play some dynamic video splash or logos/animations (via OpenMAX API and omxplayer) while the main app/UI is initializing/running underneath it.

    (wrote a bit more about this in an earlier Raspberry Pi early boot splash / logo screen post here)

  • If keyboard/mouse/whatever input has to be handled, python-evdev + pyudev are a great and simple way to do it (also mentioned in an earlier post, and see the sketch after this list), and generally easier to use than the a11y layers that some GUI toolkits provide.

  • systemctl disable getty@tty1 to not have it spammed with whatever input is intended for the UI, as otherwise it'll still be running under the graphics.

    Should UI app ever need to drop user back to console (e.g. via something like ExecStart=/sbin/agetty --autologin root tty1), it might be necessary to scrub all the input from there first, which can be done by using StandardInput=tty in the app and something like the following snippet:

    import sys, signal, termios, atexit

    if sys.stdin.isatty():
      signal.signal(signal.SIGHUP, signal.SIG_IGN)
      atexit.register(termios.tcflush, sys.stdin.fileno(), termios.TCIOFLUSH)
    

    It'd be outright dangerous to run a shell with some random input otherwise.

  • While it's a neat single quick-to-start pid on top of bare init, it's probably not suitable for more complex text/data layouts, as positioning and drawing all the "nice" UI boxes for that can be a lot of work - which is what widget toolkits are there for.
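
As a minimal python-evdev sketch for the input-handling note above (naive device-picking here, and pyudev would be needed to handle hotplug properly):

import evdev
from evdev import ecodes

# pick the first device that can emit key events, print key-down events
devs = (evdev.InputDevice(p) for p in evdev.list_devices())
kbd = next(d for d in devs if ecodes.EV_KEY in d.capabilities())
for ev in kbd.read_loop():
  if ev.type == ecodes.EV_KEY and ev.value == 1: print(evdev.categorize(ev))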

Kinda expected that RPi would have some python "bare UI" toolkit by now, but oh well, it's not that difficult to make one by putting stuff linked above together.

In the future, mgthomas99/easy-vg seems to be moving away from the simple API it currently has - its "develop" branch is already based more around paths, like raw OpenVG or e.g. cairo - but my mk-fg/easy-vg fork should retain the old API as demonstrated here.

Aug 05, 2018

rsync backups over reverse ssh tunnels

Last reviewed backups here about 10 years ago (p2, p3), and surprisingly not much has changed since then - still the same ssh and rsync for secure sync over the network.

Stable-enough btrfs makes --link-dest somewhat anachronistic, and rsync filters came a long way since then, but everything else is the same, so why not use this opportunity to make things simpler and smoother...

In particular, it was always annoying to me that backups either had to be pulled from some open and accessible port, or pushed to the same thing on the backup server. That isn't hard to fix with "ssh -R" tunnels - backup server then has at most a locked-down and reasonably secure ssh port open, grants no remote push access (i.e. no overwriting whatever remote wants to, like a simple rsyncd setup would allow), and everything goes through just a single outgoing ssh connection.

That is, run "rsync --daemon" on localhost, make a reverse-tunnel to it when connecting to the backup host, and let that host pull from it.
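
A minimal sketch of that flow (ports, rsyncd config and the remote pull command are hypothetical here - the actual scripts below handle these details):

import subprocess

# read-only rsyncd on localhost, e.g. 127.0.0.1:8873 with "read only = yes"
rsyncd = subprocess.Popen(
  ['rsync', '--daemon', '--no-detach', '--config=rsyncd.conf'] )
try:
  # -R exposes that port on the backup host, which then pulls, e.g. via
  #  rsync -a rsync://localhost:8873/root/ /srv/backups/myhost/
  subprocess.run( ['ssh', '-R', '8873:localhost:8873',
    'backup-host', 'pull-backup', 'myhost'], check=True )
finally: rsyncd.terminate()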

On top of being a no-brainer to implement and use - as simple as ssh from behind however many NATs - it avoids the following (partly mentioned above) problematic things:

  • Pushing stuff to backup-host, which can be exploited to delete stuff.
  • Using insecure network channels and/or rsync auth - ssh only.
  • Having any kind of insecure auth or port open on backup-host (e.g. rsyncd) - ssh only.
  • Requiring backed-up machine to be accessible on the net for backup-pulls - can be behind any amount of NAT layers, and only needs one outgoing ssh connection.
  • Specifying/handling backup parameters (beyond --filter lists), rotation and cleanup on the backed-up machine - backup-host will handle all that in a known-good and uniform manner.
  • Running rsyncd or such with unrestricted fs access "for backups" - only runs it on localhost port with one-time auth for ssh connection lifetime, restricted to specified read-only path, with local filter rules on top.
  • Needing anything beyond basic ssh/rsync/python on either side.

The actual implementation I've ended up with is the ssh-r-sync + ssh-r-sync-recv scripts in the fgtk repo, both being rather simple py3 wrappers for ssh/rsync stuff.

Both can be used by regular uids, and can use an rsync binary with capabilities or a sudo wrapper to get a 1-to-1 backup with all permissions instead of --fake-super (though note that giving root-rsync access to a uid is pretty much the same as "NOPASSWD: ALL" sudo).

One relatively recent realization (coming from acme-cert-tool), compared to scripts I wrote earlier, is that using a bunch of script hooks all over the place is way easier than hardcoding a dozen ad-hoc options.

I.e. have option group like this (-h/--help output from argparse):

Hook options:
  -x hook:path, --hook hook:path
    Hook-script to run at the specified point.
    Specified path must be executable (chmod +x ...),
      will be run synchronously, and must exit with 0
      for tool to continue operation, non-0 to abort.
    Hooks are run with same uid/gid
      and env as the main script, can use PATH-lookup.
    See --hook-list output to get full list of
      all supported hook-points and arguments passed to them.
    Example spec: -x rsync.pre:~/hook.pre-sync.sh
  --hook-timeout seconds
    Timeout for waiting for hook-script to finish running,
      before aborting the operation (treated as hook error).
    Zero or negative value will disable timeout. Default: no-limit
  --hook-list
    Print the list of all supported
      hooks with descriptions/parameters and exit.

And --hook-list provides the full attached info, like:

Available hook points:

  script.start:
    Before starting handshake with authenticated remote.
    args: backup root dir.

  ...

  rsync.pre:
    Right before backup-rsync is started, if it will be run.
    args: backup root dir, backup dir, remote name, privileged sync (0 or 1).
    stdout: any additional \0-separated args to pass to rsync.
      These must be terminated by \0, if passed,
        and can start with \0 to avoid passing any default options.

  rsync.done:
    Right after backup-rsync is finished, e.g. to check/process its output.
    args: backup root dir, backup dir, remote name, rsync exit code.
    stdin: interleaved stdout/stderr from rsync.
    stdout: optional replacement for rsync return code, int if non-empty.

Hooks are run synchronously,
  waiting for subprocess to exit before continuing.
All hooks must exit with status 0 to continue operation.
Some hooks get passed arguments, as mentioned in hook descriptions.
Setting --hook-timeout (defaults to no limit)
  can be used to abort when hook-scripts hang.

Very trivial to implement, and then allows hooking-up much simpler single-purpose bash scripts to handle specific stuff like passing extra options on a per-host basis, handling backup rotation/cleanup and --link-dest, creating "backup-done-successfully" mark and manifest files, or whatever else, without needing to add all these corner-cases into the main script.
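
E.g. a hypothetical rsync.pre hook to pass a couple of extra options on a per-host basis, following the stdout contract above:

#!/usr/bin/env python3
import sys

# args, per --hook-list: backup root dir, backup dir, remote name, privileged sync
bak_root, bak_dir, remote, priv = sys.argv[1:5]

# \0-separated and \0-terminated extra args for rsync
opts = ['--one-file-system', f'--link-dest={bak_root}/{remote}.last']
sys.stdout.write('\0'.join(opts) + '\0')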

One boilerplate thing that does look useful to hardcode though is a "nice ionice ..." wrapper, which is pretty much inevitable for background backup scripts (though cgroup limits can also be a good idea), and fairly easy to do in python, with a minor caveat of a hardcoded ioprio_set syscall number - but these pretty much never change on linux.
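
E.g. a drop-in sketch of that wrapper bit (251 being the x86_64 ioprio_set syscall number - arch-specific, which is the caveat in question):

import os, ctypes as ct

def ioprio_set_idle():
  # ioprio_set(IOPRIO_WHO_PROCESS=1, 0=self, IOPRIO_CLASS_IDLE=3 << 13)
  libc = ct.CDLL('libc.so.6', use_errno=True)
  if libc.syscall(251, 1, 0, 3 << 13) < 0:
    err = ct.get_errno()
    raise OSError(err, os.strerror(err))

os.nice(19) # the "nice" part
ioprio_set_idle() # the "ionice -c3" part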

As a side-note, can recommend btrbk as a very nice tool for managing backups stored on btrfs, even if just for rotating/removing snapshots in an easy and sane "keep A daily ones, B weekly, C monthly, ..." manner.


Apr 16, 2018

Emacs EMMS backend for long-running mpv processes

EMMS is the best music player out there (at least if you use emacs), as it allows the full power and convenience of a proper $EDITOR for music playlists and such.

All mpv backends for it that I'm aware of were restarting the player binary for every track though, which is simple and good compatibility-wise, but also suboptimal in many ways.

For one thing, stuff like audio visualization is a pita in a constantly created/destroyed transient window, it adds significant gaps between played tracks (gapless+crossfade? forget it!), and feedback/control over the player are very limited - clearly no good APIs get used when a wrapper relies on process exit as the "playback ended" event.

The rewritten emms-player-mpv.el (also in the "mpv-json-ipc" branch of emms git atm) fixes all that.

What's curious is that I didn't foresee almost any of these interesting use-cases that using the tool in a sane way allows for, and only wrote the new wrapper to get nice "playback position" feedback, and out of petty pedantry over how lazy the simple implementation seemed to be.

Having a separate persistent player window allows OSD config or lua scripts to display any kind of metadata or infographics (with the full power of lua + mpv + ffmpeg) about current tracks or playlist stuff there (esp. for online streams), enables subs/lyrics display, and getting the stream of metadata update events from mpv allows updating any "now playing" info in emacs/emms too.

What seemed like a petty and almost pointless project to have fun with lisp turned out to be actually useful, which seems to often be the case once you take a deep-dive into things, and not just blindly assume stuff about them (fire hot, water wet, etc).

Hopefully it might get merged upstream after EMMS 5.0, and grow a few more features and interesting uses like that along the way.

(though I'd suggest not waiting and just adding anything that comes to mind in ~/.emacs via emms-mpv-event-connect-hook, emms-mpv-event-functions and emms-mpv-ipc-req-send - should be really easy now)

Apr 12, 2018

mpv audio visualization

Didn't know mpv could do that until dropping into raw mpv for music playback yesterday, while adding its json api support into emms (emacs music player).

One option in mpv that I found essential over time - especially as playback from network sources via youtube-dl (youtube, twitch and such) became more common - is --force-window=immediate (via config), so that you can just run "mpv URL" in whatever console and don't have to wait until video buffers enough for mpv window to pop-up.

This saves a few-to-dozen seconds of annoyance, as otherwise you can't do anything until the ytdl init and buffering phase is done - that's when the window will pop-up randomly and interrupt whatever you're doing, plus maybe get affected by stuff being typed at the moment (and close, skip, seek or get all messed-up otherwise).

It's easy to disable this unnecessary window for audio-only files via lua, but another option that came to mind when looking at that black square was to send it to an aux display with some nice visualization running.

Which is not really an mpv feature, but one of the many things that ffmpeg can render with its filters, enabled via --lavfi-complex audio/video filtering option.

E.g. mpv --lavfi-complex="[aid1]asplit[ao][a]; [a]showcqt[vo]" file.mp3 will process a copy of --aid=1 audio stream (one copy goes straight to "ao" - audio output) via ffmpeg showcqt filter and send resulting visualization to "vo" (video output).

As ffmpeg is designed to allow many complex multi-layered processing pipelines, extending on that simple example can produce really fancy stuff, like any blend of images, text and procedurally-generated video streams.

Some nice examples of those can be found at ffmpeg wiki FancyFilteringExamples page.

It's much easier to build, control and tweak that stuff from lua though, e.g. to only enable such vis if there is a blank forced window without a video stream, and to split those long pipelines into more sensible chunks of parameters, for example:

local filter_bg = lavfi_filter_string{
  'firequalizer', {
    gain = "'20/log(10)*log(1.4884e8"
      .."* f/(f*f + 424.36)"
      .."* f/(f*f + 1.4884e8)"
      .."* f/sqrt(f*f + 25122.25) )'",
    accuracy = 1000,
    zero_phase = 'on' },
  'showcqt', {
    fps = 30,
    size = '960x768',
    count = 2,
    bar_g = 2,
    sono_g = 4,
    bar_v = 9,
    sono_v = 17,
    font = "'Luxi Sans,Liberation Sans,Sans|bold'",
    fontcolor = "'st(0, (midi(f)-53.5)/12);"
      .."st(1, 0.5 - 0.5 * cos(PI*ld(0))); r(1-ld(1)) + b(ld(1))'",
    tc = '0.33',
    tlength = "'st(0,0.17);"
      .."384*tc/(384/ld(0)+tc*f/(1-ld(0)))"
      .." + 384*tc/(tc*f/ld(0)+384/(1-ld(0)))'" } }
local filter_fg = lavfi_filter_string{ 'avectorscope',
  { mode='lissajous_xy', size='960x200',
    rate=30, scale='cbrt', draw='dot', zoom=1.5 } }

local overlay = lavfi_filter_string{'overlay', {format='yuv420'}}
local lavfi =
  '[aid1] asplit=3 [ao][a1][a2];'
  ..'[a1]'..filter_bg..'[v1];'
  ..'[a2]'..filter_fg..'[v2];'
  ..'[v1][v2]'..overlay..'[vo]'

mp.set_property('options/lavfi-complex', lavfi)

Much easier than writing something like this down into one line.

("lavfi_filter_string" there concatenates all passed options with comma/colon separators, as per ffmpeg syntax)

Complete lua script that I ended-up writing for this: fg.lavfi-audio-vis.lua

With some grand space-ambient electronic score, the showcqt waterfall can move in super-trippy ways, very much representative of the glacial underlying audio rhythms:

[image: mpv ffmpeg visualization snapshot]

(track in question is "Primordial Star Clouds" from the EVE Online soundtrack)

Script won't kick-in with --vo=null, without --force-window enabled, or if "vo-configured" doesn't get set by mpv for whatever other reason (e.g. some video output error) - otherwise it'll be there with more pretty colors to brighten your day :)

Apr 10, 2018

Linux X desktop "clipboard" keys via exclip tool

It's been a mystery to me for a while how X terminal emulators (from xterm to Terminology) manage to copy long bits of strings spanning multiple lines without actually splitting them with \n chars, given that there's always something like "screen" or "tmux" or even "mosh" running in there.

All these use ncurses, which shouldn't output "long lines of text", but rather knows the width/height of a terminal and flips specific characters there when the last output differs from its internal "how it should be" state/buffer.

Regardless of how this works, terminals definitely get confused sometimes, making copy-paste of long paths and commands from them into a minefield, where you never know if it'll insert the full path/command, or run random parts of it instead by emitting newlines here and there.

Easy fix: bind a key combo in a WM to always "copy stuff as a single line".
Bonus points - also strip spaces from start/end, demanding no select-precision.
Even better - have it expose result as both primary and clipboard, to paste anywhere.

For a while I used a trivial bash script for that, which did "xclip -out" to grab the primary selection, some string-mangling in bash, and two "xclip -in" runs to put the result back into primary and clipboard.

It's a surprisingly difficult and suboptimal task for bash though, as - to my knowledge - you can't even replace \n chars in it without running something like "tr" or "sed".

And running xclip itself a few times is bad enough, but with a few extra binaries and under CPU load, such "clipboard keys" become unreliable due to lag from that script.

Hence finally got fed up with it and rewrote the whole thing in C as a small and fast 300-liner exclip tool, largely based on xclip code.

Build like this: gcc -O2 -lX11 -lXmu exclip.c -o exclip && strip exclip

Found having something like it bound to a key (e.g. Win+V for verbatim copy, and variants like Win+Shift+V for stripping spaces/newlines) to be super-useful when using terminals and text-based apps, apps that mix-up primary/clipboard selections, etc - all without needing to touch the mouse.

The tool is still non-trivial due to how selections and interaction with X work - code has to be event-based, negotiate the content type it wants to get, can have large buffers sent to it in incremental events, and then has to hold these (in a forked subprocess) and negotiate sending them to other apps - i.e. it can't just stuff N bytes from a buffer somewhere server-side and exit ("cut buffers" can work like that in X, but are limited and never used).

Looking at all these was a really fun dive into how such deceptively-simple (but ancient and not in fact simple at all) things like "clipboard" work.

E.g. consider how one'd hold/send/expose stuff from huge GIMP image selection and paste it into an entirely different app (xclip -out -t TARGETS can give a hint), especially with X11 and its network transparency.

Though then again, maybe humble string manipulation in C is just as fascinating, given all the pointer juggling and tricks one'd usually have to do for that.

Nov 27, 2017

Checking/waiting-on linux network parameters from the scripts

This is one of those things that are non-trivial to get right with simple scripts.

I.e. how to check an address on an interface? How to wait for it to be assigned? How to check the gateway address? Etc.

A few common places to look for these things:

/sys/class/net/

Has an easily-accessible list of interfaces, and a MAC for each in e.g. /sys/class/net/enp1s0/address (useful as a machine id sometimes).
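
E.g. dumping interfaces with their MACs from python:

from pathlib import Path

for iface in sorted(Path('/sys/class/net').iterdir()):
  addr = iface / 'address'
  if addr.is_file(): print(iface.name, addr.read_text().strip())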

Pros: easy/reliable to access from any scripts.
Cons: very little useful info there.

/usr/lib/systemd/systemd-networkd-wait-online

As systemd-networkd is a common go-to network management tool these days, this one complements it very nicely.

Allows waiting until some specific interface(s) (or all of them) get fully setup, and has a built-in timeout option too.

E.g. just run systemd-networkd-wait-online -i enp1s0 from any script or even ExecStartPre= of a unit file and you have it waiting for net to be available reliably, no need to check for specific IP or other details.

Pros: super-easy to use from anywhere, even ExecStartPre= of unit files.
Cons: for one very specific (but very common) all-or-nothing use-case.

Doesn't always work for interfaces that need complicated setup by an extra daemon, e.g. wifi, batman-adv or some tunnels.

Also, an undocumented caveat: use ExecStartPre=-/.../systemd-networkd-wait-online ... ("=-" being the important bit) for anything that should start regardless of network issues, as the thing can exit non-0 sometimes when there's no network for a while (which does look like a bug, and might be fixed in future versions).

Update 2019-01-12: systemd-networkd 236+ also has RequiredForOnline=, which allows to configure which ifaces global network.target should wait for right in the .network files, if tracking only one state is enough.

iproute2 json output

If iproute2 is recent enough (4.13.0-4.14.0 and above), then GOOD NEWS!
ip-address and ip-link there have -json output.
(as well as "tc", stat-commands, and probably many other things in later releases)

Parsing ip -json addr or ip -json link output is trivial anywhere except in sh scripts, so it's a very good option if the required parts of "ip" are jsonified already.
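
A minimal python sketch ("addr_info" being the key in the current iproute2 JSON schema - might differ in the earliest -json versions):

import json, subprocess

out = subprocess.run( ['ip', '-json', 'addr', 'show', 'dev', 'enp1s0'],
  check=True, stdout=subprocess.PIPE ).stdout
for iface in json.loads(out):
  for a in iface.get('addr_info', []): print(a['family'], a['local'])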

Pros: easy to parse, will be available everywhere in the near future.
Cons: still very recent, so quite patchy and not ubiquitous yet, not for sh scripts.

Scraping iproute2 (e.g. "ip" and "ss" tools) non-json outputs

Explicitly discouraged by iproute2 docs and a really hacky solution.

Such outputs are quirky, don't line-up nicely, and are clearly not made for this purpose, plus can break at any point, as the docs suggest repeatedly.

But for terrible sh hacks, sometimes it works, and "ip monitor" even allows reacting to netlink events as soon as they appear, e.g.:

[Unit]
After=network-pre.target
Before=network.target systemd-networkd.service

[Service]
ExecStart=/usr/bin/bash -c "\
  dev=host0;\
  exec 2>/dev/null; trap 'pkill -g 0' EXIT; trap exit TERM;\
  awk '/^([0-9]+:\s+'$$dev')?\s+inet\s/ {chk=1; exit} END {exit !chk}'\
  < <( ip addr show dev $$dev;\
    stdbuf -oL ip monitor address dev $$dev &\
    sleep 1 ; ip addr show dev $$dev ; wait )\
  && ip link set $$dev mtu 1280"

[Install]
WantedBy=multi-user.target

That's an example of how ugly it can get with events though - the two extra checks around ip-monitor are there for a reason (easy to miss an event otherwise).

(this specific hack is a workaround for systemd-networkd failing to set mtu in some cases where it has to be done only after other things)

"ip -brief" output is somewhat of an exception, more suitable for sh scripts, but only works for ip-address and ip-link parts and still mixes-up columns occasionally (e.g. ip -br link for tun interfaces).

Pros: allows to access all net parameters and monitor events, easier for sh than json.
Cons: gets ugly fast, hard to get right, brittle and explicitly discouraged.

APIs of running network manager daemons

E.g. NetworkManager has a nice-ish DBus API (see the two polkit rules/pkla snippets here for how to enable it for regular users), same for wpa_supplicant/hostapd (see wifi-client-match or wpa-systemd-wrapper scripts), and dhcpcd has hooks.

systemd-networkd will probably get a DBus API too at some point in the near future, beyond the simple up/down one that systemd-networkd-wait-online already uses.

Pros: best place to get such info from, can allow some configuration.
Cons: not always there, can be somewhat limited or hard to access.

Bunch of extra modules/tools

Especially for python and such, there's plenty of tools like pyroute2 and netifaces, with occasional things making it into stdlib - e.g. socket.if_* calls (py 3.3+) or the ipaddress module (also py 3.3+).
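
E.g. what those stdlib bits already cover:

import socket, ipaddress

print(socket.if_nameindex()) # [(1, 'lo'), (2, 'enp1s0'), ...]
print(socket.if_nametoindex('lo')) # 1
print(ipaddress.ip_address('fe80::1').is_link_local) # True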

Can make things easier in larger projects, where dragging along a bunch of extra third-party modules isn't too much of a hassle.

Not a great option for drop-in self-contained scripts though, regardless of how good python packaging gets.

Pros: there's a module/lib for everything.
Cons: extra dependencies, with all the api/packaging/breakage/security hassle.

libc and getifaddrs() - low-level way

Same python has ctypes, so why bother with all the heavy/fragile deps and crap, when it can use libc API directly?

Drop-in snippet to grab all the IPv4/IPv6/MAC addresses (py2/py3):

import os, socket, ctypes as ct

class sockaddr_in(ct.Structure):
  _fields_ = [('sin_family', ct.c_short), ('sin_port', ct.c_ushort), ('sin_addr', ct.c_byte*4)]

class sockaddr_in6(ct.Structure):
  _fields_ = [ ('sin6_family', ct.c_short), ('sin6_port', ct.c_ushort),
    ('sin6_flowinfo', ct.c_uint32), ('sin6_addr', ct.c_byte * 16) ]

class sockaddr_ll(ct.Structure):
  _fields_ = [ ('sll_family', ct.c_ushort), ('sll_protocol', ct.c_ushort),
    ('sll_ifindex', ct.c_int), ('sll_hatype', ct.c_ushort), ('sll_pkttype', ct.c_uint8),
    ('sll_halen', ct.c_uint8), ('sll_addr', ct.c_uint8 * 8) ]

class sockaddr(ct.Structure):
  _fields_ = [('sa_family', ct.c_ushort)]

class ifaddrs(ct.Structure): pass
ifaddrs._fields_ = [ # recursive
  ('ifa_next', ct.POINTER(ifaddrs)), ('ifa_name', ct.c_char_p),
  ('ifa_flags', ct.c_uint), ('ifa_addr', ct.POINTER(sockaddr)) ]

def get_iface_addrs(ipv4=False, ipv6=False, mac=False, ifindex=False):
  if not (ipv4 or ipv6 or mac or ifindex): ipv4 = ipv6 = True
  libc = ct.CDLL('libc.so.6', use_errno=True)
  libc.getifaddrs.restype = ct.c_int
  ifaddr_p = head = ct.pointer(ifaddrs())
  ifaces, err = dict(), libc.getifaddrs(ct.pointer(ifaddr_p))
  if err != 0:
    err = ct.get_errno()
    raise OSError(err, os.strerror(err), 'getifaddrs()')
  while ifaddr_p:
    addrs = ifaces.setdefault(ifaddr_p.contents.ifa_name.decode(), list())
    addr = ifaddr_p.contents.ifa_addr
    if addr:
      af = addr.contents.sa_family
      if ipv4 and af == socket.AF_INET:
        ac = ct.cast(addr, ct.POINTER(sockaddr_in)).contents
        addrs.append(socket.inet_ntop(af, ac.sin_addr))
      elif ipv6 and af == socket.AF_INET6:
        ac = ct.cast(addr, ct.POINTER(sockaddr_in6)).contents
        addrs.append(socket.inet_ntop(af, ac.sin6_addr))
      elif (mac or ifindex) and af == socket.AF_PACKET:
        ac = ct.cast(addr, ct.POINTER(sockaddr_ll)).contents
        if mac:
          addrs.append('mac-' + ':'.join(
            map('{:02x}'.format, ac.sll_addr[:ac.sll_halen]) ))
        if ifindex: addrs.append(ac.sll_ifindex)
    ifaddr_p = ifaddr_p.contents.ifa_next
  libc.freeifaddrs(head)
  return ifaces

print(get_iface_addrs())

Result is a dict of iface-addrs (presented as yaml here):

enp1s0:
  - 192.168.65.19
  - fc65::19
  - fe80::c646:19ff:fe64:632f
enp2s7:
  - 10.0.1.1
  - fe80::1ebd:b9ff:fe86:f439
lo:
  - 127.0.0.1
  - ::1
ve: []
wg:
  - 10.74.0.1

Or to get IPv6+MAC+ifindex only - get_iface_addrs(ipv6=True, mac=True, ifindex=True):

enp1s0:
  - mac-c4:46:19:64:63:2f
  - 2
  - fc65::19
  - fe80::c646:19ff:fe64:632f
enp2s7:
  - mac-1c:bd:b9:86:f4:39
  - 3
  - fe80::1ebd:b9ff:fe86:f439
lo:
  - mac-00:00:00:00:00:00
  - 1
  - ::1
ve:
  - mac-36:65:67:f7:99:dc
  - 5
wg: []

Tend to use this as a drop-in boilerplate/snippet in python scripts that need IP address info, instead of adding extra deps - the libc API should be way more stable/reliable than those anyway.

The same can be done in any other full-featured scripting language, of course, not just python - but bash scripts are sorely out of luck.

Pros: first-hand address info, stable/reliable/efficient, no extra deps.
Cons: not for 10-liner sh scripts, not much info, bunch of boilerplate code.

libmnl - same way as iproute2 does it

libc.getifaddrs() doesn't provide much info beyond the very basic ip/mac addrs and iface indexes - the rest has to be fetched from kernel via netlink sockets.

libmnl wraps those, and is used by iproute2 itself, so it comes out of the box on any modern linux, and its API can be used the same way as libc above from full-featured scripts like python:

import os, socket, resource, struct, time, ctypes as ct

class nlmsghdr(ct.Structure):
  _fields_ = [
    ('len', ct.c_uint32),
    ('type', ct.c_uint16), ('flags', ct.c_uint16),
    ('seq', ct.c_uint32), ('pid', ct.c_uint32) ]

class nlattr(ct.Structure):
  _fields_ = [('len', ct.c_uint16), ('type', ct.c_uint16)]

class rtmsg(ct.Structure):
  _fields_ = ( list( (k, ct.c_uint8) for k in
      'family dst_len src_len tos table protocol scope type'.split() )
    + [('flags', ct.c_int)] )

class mnl_socket(ct.Structure):
  _fields_ = [('fd', ct.c_int), ('sockaddr_nl', ct.c_int)]

def get_route_gw(addr='8.8.8.8'):
  libmnl = ct.CDLL('libmnl.so.0.2.0', use_errno=True)
  def _check(chk=lambda v: bool(v)):
    def _check(res, func=None, args=None):
      if not chk(res):
        errno_ = ct.get_errno()
        raise OSError(errno_, os.strerror(errno_))
      return res
    return _check
  libmnl.mnl_nlmsg_put_header.restype = ct.POINTER(nlmsghdr)
  libmnl.mnl_nlmsg_put_extra_header.restype = ct.POINTER(rtmsg)
  libmnl.mnl_attr_put_u32.argtypes = [ct.POINTER(nlmsghdr), ct.c_uint16, ct.c_uint32]
  libmnl.mnl_socket_open.restype = mnl_socket
  libmnl.mnl_socket_open.errcheck = _check()
  libmnl.mnl_socket_bind.argtypes = [mnl_socket, ct.c_uint, ct.c_int32]
  libmnl.mnl_socket_bind.errcheck = _check(lambda v: v >= 0)
  libmnl.mnl_socket_get_portid.restype = ct.c_uint
  libmnl.mnl_socket_get_portid.argtypes = [mnl_socket]
  libmnl.mnl_socket_sendto.restype = ct.c_ssize_t
  libmnl.mnl_socket_sendto.argtypes = [mnl_socket, ct.POINTER(nlmsghdr), ct.c_size_t]
  libmnl.mnl_socket_sendto.errcheck = _check(lambda v: v >= 0)
  libmnl.mnl_socket_recvfrom.restype = ct.c_ssize_t
  libmnl.mnl_nlmsg_get_payload.restype = ct.POINTER(rtmsg)
  libmnl.mnl_attr_validate.errcheck = _check(lambda v: v >= 0)
  libmnl.mnl_attr_get_payload.restype = ct.POINTER(ct.c_uint32)

  if '/' in addr: addr, cidr = addr.rsplit('/', 1)
  else: cidr = 32

  buf = ct.create_string_buffer(min(resource.getpagesize(), 8192))
  nlh = libmnl.mnl_nlmsg_put_header(buf)
  nlh.contents.type = 26 # RTM_GETROUTE
  nlh.contents.flags = 1 # NLM_F_REQUEST
  # nlh.contents.flags = 1 | (0x100|0x200) # NLM_F_REQUEST | NLM_F_DUMP
  nlh.contents.seq = seq = int(time.time())
  rtm = libmnl.mnl_nlmsg_put_extra_header(nlh, ct.sizeof(rtmsg))
  rtm.contents.family = socket.AF_INET

  addr, = struct.unpack('=I', socket.inet_aton(addr))
  libmnl.mnl_attr_put_u32(nlh, 1, addr) # 1=RTA_DST
  rtm.contents.dst_len = int(cidr)

  nl = libmnl.mnl_socket_open(0) # NETLINK_ROUTE
  libmnl.mnl_socket_bind(nl, 0, 0) # nl, 0, 0=MNL_SOCKET_AUTOPID
  port_id = libmnl.mnl_socket_get_portid(nl)
  libmnl.mnl_socket_sendto(nl, nlh, nlh.contents.len)

  addr_gw = None

  @ct.CFUNCTYPE(ct.c_int, ct.POINTER(nlattr), ct.c_void_p)
  def data_ipv4_attr_cb(attr, data):
    nonlocal addr_gw
    if attr.contents.type == 5: # RTA_GATEWAY
      libmnl.mnl_attr_validate(attr, 3) # MNL_TYPE_U32
      addr = libmnl.mnl_attr_get_payload(attr)
      addr_gw = socket.inet_ntoa(struct.pack('=I', addr[0]))
    return 1 # MNL_CB_OK

  @ct.CFUNCTYPE(ct.c_int, ct.POINTER(nlmsghdr), ct.c_void_p)
  def data_cb(nlh, data):
    rtm = libmnl.mnl_nlmsg_get_payload(nlh).contents
    if rtm.family == socket.AF_INET and rtm.type == 1: # RTN_UNICAST
      libmnl.mnl_attr_parse(nlh, ct.sizeof(rtm), data_ipv4_attr_cb, None)
    return 1 # MNL_CB_OK

  while True:
    ret = libmnl.mnl_socket_recvfrom(nl, buf, ct.sizeof(buf))
    if ret <= 0: break
    ret = libmnl.mnl_cb_run(buf, ret, seq, port_id, data_cb, None)
    if ret <= 0: break # 0=MNL_CB_STOP
    break # MNL_CB_OK for NLM_F_REQUEST, don't use with NLM_F_DUMP!!!
  if ret == -1: raise OSError(ct.get_errno(), os.strerror(ct.get_errno()))
  libmnl.mnl_socket_close(nl)

  return addr_gw

print(get_route_gw())

This specific boilerplate will fetch the gateway IP address on the route to 8.8.8.8 (i.e. to the internet) - used it in a systemd-watchdog script recently.

It might look a bit too complex for such an apparently simple task at this point, but this way allows doing absolutely anything network-related - everything "ip" (iproute2) does, including configuration (addresses, routes), creating/setting-up new interfaces ("ip link add ..."), all the querying (ip-route, ip-neighbor, ss/netstat, etc), waiting and async monitoring (ip-monitor, conntrack), etc etc...

Pros: can do absolutely anything, directly, stable/reliable/efficient, no extra deps.
Cons: definitely not for 10-liner sh scripts, boilerplate code.

Conclusion

iproute2 with -json output flag should be good enough for most cases where systemd-networkd-wait-online is not sufficient, esp. if more commands there (like ip-route and ip-monitor) will support it in the future (thanks to Julien Fortin and all other people working on this!).

For more advanced needs, it's usually best to query/control whatever network management daemon or go to libc/libmnl directly.

Oct 11, 2017

Force-enable HDMI to specific mode in linux framebuffer console

Bumped into this issue when running the latest mainline kernel (4.13) on ODROID-C2 - the default fb console for HDMI output has to be configured differently there (and also needs a dtb patch to hook it up).

Old vendor kernels for that have/use a bunch of cmdline options for HDMI - hdmimode, hdmitx (cecconfig), vout (hdmi/dvi/auto), overscan_*, etc - all custom and non-mainline.

With mainline DRM (as in Direct Rendering Manager) and framebuffer modules, the video= option seems to be the way to set/force a specific output and resolution instead.

When a display is connected on boot, it can work without that if the stars align correctly, but as it turns out that's not always the case - only 1 out of 3 worked that way here.

But even if the display works on boot, plugging HDMI in after boot never works anyway, and that's the most (only) useful thing for it (debug issues, see logs or a kernel panic backtrace there, etc)!

DRM module (meson_dw_hdmi in case of C2) has its HDMI output info in /sys/class/drm/card0-HDMI-A-1/, which is where one can check on connected display, dump its EDID blob (info, supported modes), etc.

The cmdline option to force this output to be used with a specific (1080p60) mode:

video=HDMI-A-1:1920x1080@60e

More info on this spec is in Documentation/fb/modedb.txt, but the gist is "<output>:<WxH>@<rate><flags>", where the "e" flag at the end means "force the display to be enabled", to avoid all that hotplug jank.

This should set the mode for the console (see e.g. fbset --info), but at least with meson_dw_hdmi it is insufficient, and the driver is happy to tell why when loaded with the extra drm.debug=0xf cmdline option - it doesn't have any supported modes, and returns MODE_BAD for all non-CEA-861 modes that are the defaults in fb-modedb.

Such modelines are usually supplied in EDID blobs by the display, but if there isn't one connected, a blob should be loaded from somewhere else (and iirc there are ways to define these via cmdline too).

Luckily, the kernel has built-in standard EDID blobs, so there's no need to put anything into /lib/firmware, initramfs or whatever:

drm_kms_helper.edid_firmware=edid/1920x1080.bin video=HDMI-A-1:1920x1080@60e

And that finally works.

Not very straightforward, and doesn't seem to be documented in one place anywhere with examples (ArchWiki page on KMS probably comes closest).

Jun 09, 2017

acme-cert-tool for easy end-to-end https cert management

Tend to mention random trivial tools I write here, but somehow forgot about this one - acme-cert-tool.

Implemented it a few months back when setting up TLS, and wasn't satisfied by any of the existing things for ACME / Let's Encrypt cert management.

Wanted to find some simple python3 script that's a bit less hacky than acme-tiny, isn't a bloated framework with dozens of useless deps like certbot, and has ECC certs covered, but came up empty.

acme-cert-tool has all that in a single script with just one dep on a standard py crypto toolbox (cryptography.io), and does everything through a single command, e.g. something like:

% ./acme-cert-tool.py --debug -gk le-staging.acc.key cert-issue \
  -d /srv/www/.well-known/acme-challenge le-staging.cert.pem mydomain.com

...to get a signed cert for mydomain.com, doing all the generation, registration and authorization stuff as necessary, and caching that stuff in "le-staging.acc.key" too, not doing any extra work there either.

Add && systemctl reload nginx to that, put into crontab or .timer and done.

There are a bunch of other commands, mostly to play with accounts and such, plus options for all kinds of cert and account settings - e.g. ... -e myemail@mydomain.com -c rsa-2048 -c ec-384 to also have a cert with an rsa key generated for random outdated clients, and to add an email for notifications (if not added already).

README on the acme-cert-tool github page and -h/--help output should have more details.

Jun 02, 2017

Upgrading ssh to mosh with UDP hole punching to connect to a host behind NAT

There are way more tools that happily forward TCP ports than ones for UDP.

Case in point - it's usually easy to forward ssh port through a bunch of hosts and NATs, with direct and reverse ssh tunnels, ProxyCommand stuff, tools like pwnat, etc, but for mosh UDP connection it's not that trivial.

Which sucks, because its performance and input prediction stuff is exactly what's lacking in super-laggy multi-hop ssh connections forwarded back-and-forth between continents through such tunnels.

There are quite a few long-standing discussions on how to solve it properly in mosh, which unfortunately didn't get any traction so far.

One obvious way to make it work, is to make some tunnel (like OpenVPN or wireguard) from destination host (server) to a client, and use mosh over that.

But that's some extra tools and configuration to keep around on both sides, and there is a much easier way that works perfectly for most cases - knowing both server and client IPs, pre-pick ports for mosh-server and mosh-client, then punch a hole in the NAT for these before starting both.

How it works:

  • Pick some UDP ports that server and client will be using, e.g. 34700 for server and 34701 for client.
  • Send UDP packet from server:34700 to client:34701.
  • Start mosh-server, listening on server:34700.
  • Connect to that with mosh-client, using client:34701 as a UDP source port.
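
The server-side hole punch (step 2) is literally a few lines of python - a sketch using the example client IP and ports from below:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 34700)) # same port mosh-server will then use
sock.sendto(b'hole-punch', ('74.59.38.152', 34701)) # to client ip/port
sock.close() # frees the port for mosh-server to bind right after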

NAT on the router(s) in-between the two will see this exchange as a server establishing "udp connection" to a client, and will allow packets in both directions to flow through between these two ports.

Once mosh-client establishes the connection, keepalive packets will bounce there all the time, keeping it up indefinitely.

mosh is generally well-suited for running manually from an existing console, so all that's needed to connect in a simple case is:

server% mosh-server new
MOSH CONNECT 60001 NN07GbGqQya1bqM+ZNY+eA

client% MOSH_KEY=NN07GbGqQya1bqM+ZNY+eA mosh-client <server-ip> 60001

With hole-punching, two additional wrappers are required with the current mosh version (1.3.0):

  • One for mosh-server, to send a UDP packet to the client IP using the same port on which the server will then be started: mosh-nat
  • And a wrapper for mosh-client, to force its socket to bind to the specified local UDP port, which was used as the dst by the mosh-server wrapper above: mosh-nat-bind.c

Making connection using these two is as easy as with stock mosh above:

server% ./mosh-nat 74.59.38.152
mosh-client command:
  MNB_PORT=34730 LD_PRELOAD=./mnb.so
    MOSH_KEY=rYt2QFJapgKN5GUqKJH2NQ mosh-client <server-addr> 34730

client% MNB_PORT=34730 LD_PRELOAD=./mnb.so \
  MOSH_KEY=rYt2QFJapgKN5GUqKJH2NQ mosh-client 84.217.173.225 34730

(with server at 84.217.173.225, client at 74.59.38.152 and using port 34730 on both ends in this example)

Extra notes:

  • "mnb.so" used with LD_PRELOAD is that mosh-nat-bind.c wrapper, which can be compiled using: gcc -nostartfiles -fpic -shared -ldl -D_GNU_SOURCE mosh-nat-bind.c -o mnb.so
  • Both mnb.so and mosh-nat only work with IPv4, IPv6 shouldn't use NAT anyway.
  • 34730 is the default port for -c/--client-port and -s/--server-port opts in mosh-nat script.
  • Started mosh-server waits for 60s (default) for mosh-client to connect.
  • Continuous operation relies on mosh keepalive packets without interruption, as mentioned, and should break on (long enough) net hiccups, unlike direct mosh connections established to a server that has no NAT in front of it (or has dedicated port forwarding).
  • No roaming of any kind is possible here, again, unlike with original mosh - if src IP/port changes, connection will break.
  • A new MOSH_KEY is generated by mosh-server on every run, and is only good for one connection, as the server rotates it after a connection gets established, so it's pretty safe/easy to use.
  • If client is behind NAT as well, its visible IP should be used, not internal one.
  • Should only work when NAT on either side doesn't rewrite source ports.

The last point can be a bummer with some "carrier-grade" NATs, which do rewrite src ports out of necessity, but it can still be worked around - if it's only on the server side - by checking the src port of the hole-punching packet in tcpdump and using that instead of whatever it was supposed to be originally.

Requires just python to run the wrapper script on the server, and no additional configuration of any kind.

Both wrappers are linked above.