May 14, 2017

ssh reverse tunnel ("ssh -R") caveats and tricks

"ssh -R" a is kinda obvious way to setup reverse access tunnel from something remote that one'd need to access, e.g. raspberry pi booted from supplied img file somewhere behind the router on the other side of the world.

Being part of OpenSSH, it's available on any base linux system, and trivial to automate on startup via loop in a shell script, crontab or a systemd unit, e.g.:

[Unit]
Wants=network.target
After=network.target

[Service]
Type=simple
User=ssh-reverse-access-tunnel
Restart=always
RestartSec=10
ExecStart=/usr/bin/ssh -oControlPath=none -oControlMaster=no \
  -oServerAliveInterval=6 -oServerAliveCountMax=10 -oConnectTimeout=180 \
  -oPasswordAuthentication=no -oNumberOfPasswordPrompts=0 \
  -NnT -R "1234:localhost:22" tun-user@tun-host

[Install]
WantedBy=multi-user.target

On the other side, ideally in a dedicated container or VM, there'll be sshd "tun-user" with access restricted in authorized_keys like this (as a single line):

command="echo >&2 'No shell access!'; exit 1",no-port-forwarding,
  no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 ...

Or have sshd_config section with same restrictions and only keys in authorized_keys, e.g.:

Match User tun-*
 # GatewayPorts yes
 AllowTcpForwarding no
 X11Forwarding no
 AllowAgentForwarding no
 PermitTTY no
 ForceCommand echo 'no shell access!'; exit 1

And that's it, right?

No additional stuff needed, "ssh -R" will connect reliably on boot, keep restarting and reconnecting in case of any errors, even with keepalives to detect dead connections and restart asap.

Nope!

There's a bunch of common pitfalls listed below.

  • Problem 1:

    When device with a tunnel suddenly dies for whatever reason - power or network issues, kernel panic, stray "kill -9" or what have you - connection on sshd machine will hang around for a while, as keepalive options are only used by the client.

    Along with (dead) connection, listening port will stay open as well, and "ssh -R" from e.g. power-cycled device will not be able to bind it, and that client won't treat it as a fatal error either!

    Result: reverse-tunnels don't survive any kind of non-clean reconnects.

    Fix:

    • TCPKeepAlive in sshd_config - to detect dead connections there faster, though probably still not fast enough for e.g. emergency reboot.
    • Detect and kill sshd pids without listening socket, forcing "ssh -R" to reconnect until it can actually bind one.
    • If TCPKeepAlive is not good or reliable enough, kill all sshd pids associated with listening sockets that don't produce the usual "SSH-2.0-OpenSSH_7.4" greeting line.
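
    For example, a minimal sketch of that second fix (assuming root access, ss/pgrep tools, and that per-connection sshd processes run as the "tun-*" users and hold the forwarded-port sockets):

      # Kill tunnel-user sshd pids that have no listening socket associated with them
      ss -tlnp | grep -o 'pid=[0-9]*' | cut -d= -f2 | sort -u >/tmp/sshd.listen
      for pid in $(pgrep -x sshd); do
        case "$(ps -o user= -p "$pid")" in
          tun-*) grep -qx "$pid" /tmp/sshd.listen || kill "$pid" ;;
        esac
      done
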
  • Problem 2:

    Running sshd on any reasonably modern linux, you get a systemd (logind) session for each connection, and killing sshd pids as suggested above will leave stale logind sessions behind, potentially creating hundreds or thousands of them over time.

    Solution:

    • "UsePAM no" to disable pam_systemd.so along with the rest of the PAM.
    • Dedicated PAM setup for ssh tunnel logins on this dedicated system, not using pam_systemd.
    • Occasional cleanup via loginctl list-sessions/terminate-session for ones that are in "closing"/"abandoned" state.
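
    A minimal sketch of that last cleanup option (session states and loginctl flags might vary between systemd versions):

      # Terminate all logind sessions that are stuck in "closing" state
      loginctl list-sessions --no-legend | awk '{print $1}' |
        while read -r s; do
          [ "$(loginctl show-session "$s" -p State --value)" = closing ] \
            && loginctl terminate-session "$s"
        done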

    Killing sshd pids might be hard to avoid on fast non-clean reconnect, since reconnected "ssh -R" will hang around without a listening port forever, as mentioned.

  • Problem 3:

    If these tunnels are not configured on per-system basis, but shipped in some img file to use with multiple devices, they'll all try to bind same listening port for reverse-tunnels, so only one of these will work.

    Fixes:

    • More complex script to generate listening port for "ssh -R" based on machine id - e.g. serial, MAC, local IP address, etc (see sketch right after this list).

    • Get free port to bind to out-of-band from the server somehow.

      Can be through same ssh connection, by checking ss/netstat output or /proc/net/tcp on the server, if running commands there is allowed (probably a bad idea for random remote devices).
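
    E.g. a trivial sketch of the first option above, deriving a stable port from /etc/machine-id hash (collisions between devices are still possible with any such scheme, of course):

      # Stable per-machine port in 32768-65535 range, then the usual reverse-tunnel
      port=$(( 32768 + 0x$(md5sum /etc/machine-id | cut -c1-4) % 32768 ))
      exec ssh -NnT -R "${port}:localhost:22" tun-user@tun-host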

  • Problem 4:

    Device identification in the same "multiple devices" scenario.

    I.e. when someone sets up 5 RPi boards on the other end, how to tell which tunnel leads to each specific board?

    Can usually be solved by:

    • Knowing/checking quirks specific to each board, like dhcp hostname, IP address, connected hardware, stored files, power-on/off timing, etc.
    • Blinking LEDs via /sys/class/leds, ethtool --identify or GPIO pins.
    • Output on connected display - just "wall" some unique number (e.g. reverse-tunnel port) or put it to whatever graphical desktop.
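
    E.g. for the LED option (assuming the NIC driver supports --identify, and that there's an on-board LED with "heartbeat" trigger compiled into the kernel):

      # Blink ethernet port LED on one specific board for 30 seconds
      ethtool --identify eth0 30
      # Or make some on-board LED flash until its trigger is reset back
      echo heartbeat >/sys/class/leds/led0/trigger
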
  • Problem 5:

    Round-trip through some third-party VPS can add significant console lag, making it rather hard to use.

    More general problem than with just "ssh -R", but when doing e.g. "EU -> US -> RU" trip and back, console becomes quite unusable without something like mosh - and mosh can't be used over that forwarded tcp port anyway, as it needs UDP!

    Kinda defeats the purpose of the whole thing, though laggy console (with an option to upgrade it, once connected) is still better than nothing.

Not an exhaustive or universally applicable list, of course, but hopefully it shows that having something robust here is actually more hassle than "just run ssh -R on boot".

So choice of ubiquitous / out-of-the-box "ssh -R" over installing some dedicated tunneling thing like OpenVPN is not as clear-cut in favor of the former as it would seem, taking all such quirks (handled well by dedicated tunneling apps) into consideration.

Bumped into all of these by now, and addressed them with the following:

  • ssh-tunnels-cleanup script to (optionally) do three things, in order:

    • Find/kill sshd pids without associated listening socket ("ssh -R" that re-connected quickly and couldn't bind one).
    • Probe all sshd listening sockets with nmap and make sure there's an "SSH-2.0-..." banner there, otherwise kill.
    • Cleanup all dead loginctl sessions, if any.

    Only affects/checks sshd pids for specific user prefix (e.g. "tun-"), to avoid touching anything but dedicated tunnels.

  • ssh-reverse-mux-server / ssh-reverse-mux-client scripts.

    For listening port negotiation with ssh server, using bunch of (authenticated) UDP packets.

    Essentially a wrapper for "ssh -R" on the client, to also pass all the required options, replacing ExecStart= line in above systemd example with e.g.:

    ExecStart=/usr/local/bin/ssh-reverse-mux-client \
      --mux-port=2200 --ident-rpi -s pkt-mac-key.aGPwhpya tun-user@tun-host
    

    ssh-reverse-mux-server on the other side will keep .db file of --ident strings (--ident-rpi uses hash of RPi board serial from /proc/cpuinfo) and allocate persistent port number (from specified range) to each one, which client will use with actual ssh command.

    Simple symmetric key (not very secret) is used to put MAC into packets and ignore any noise traffic on either side that way.

    https://github.com/mk-fg/fgtk#ssh-reverse-mux

  • Hook in ssh-reverse-mux-client above to blink bits of allocated port on some available LED.

    Sample led-blink-arg script does that morse-code-like bit-blinking, hooked-up via an additional option for the command above:

    ... -c 'sudo -n led-blink-arg -f -l led0 -n 2/4-2200'
    

    (the "2/4-2200" argument there is an arg-number / bit-count - decrement spec, to blink only the last 4 bits of the second passed argument, which is the listening port, e.g. "1011" for "2213")

Given how much OpenSSH does already, having all this functionality there (or even some hooks for that) would probably be too much to ask.

...at least until it gets rewritten as some systemd-accessd component :P

Mar 21, 2017

Running glusterfs in a user namespace (uid-mapped container)

Traditionally glusterd (glusterfs storage node) runs as root without any kind of namespacing, and that's suboptimal for two main reasons:

  • Grossly-elevated privileges (it's root) for just using net and storing files.
  • Inconvenient to manage in the root fs/namespace.

Apart from being historical thing, glusterd uses privileges for three things that I know of:

  • Set appropriate uid/gid on stored files.
  • setxattr() with "trusted" namespace for all kinds of glusterfs xattrs.
  • Maybe running nfsd? Not sure about this one, didn't use its nfs access.

For my purposes, only first two are useful, and both can be easily satisfied in non-uid-mapped container, e.g. systemd-nspawn without -U.

With user_namespaces(7), first requirement is also satisfied, as chown works for pseudo-root user inside namespace, but second one will never work without some kind of namespace-private fs or xattr-mapping namespace.

"user" xattr namespace works fine there though, so rather obvious fix is to make glusterd use those instead, and it has no obvious downsides, at least if backing fs is used only by glusterd.

xattr names are unfortunately used quite liberally in the gluster codebase, and don't have any macro for the prefix, but finding all "trusted" uses outside of tests/docs with grep is rather easy, and there seem to be no caveats there either.

Would be cool to see something like that upstream eventually.

It won't work unless all nodes are using patched glusterfs version though, as non-patched nodes will be sending SETXATTR/XATTROP for trusted.* xattrs.

Two extra scripts that can be useful with this patch and existing setups:

First one is to copy trusted.* xattrs to user.*, and second one to set upper 16 bits of uid/gid to systemd-nspawn container id value.

Both allow to pass fs from old root glusterd to a user-xattr-patched glusterd inside uid-mapped container (i.e. bind-mount it there), without losing anything. Both operations are also reversible - can just nuke user.* stuff or upper part of uid/gid values to revert everything back.
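
First script essentially boils down to a per-file loop like this (simplified sketch, to be run as real root, which can see trusted.* xattrs; "0s" prefix is setfattr's base64 value encoding):

  f=/srv/brick/some-file   # repeat for every file/dir on the backing fs
  getfattr --absolute-names -h -m '^trusted\.' "$f" | grep '^trusted\.' |
    while read -r xattr; do
      v=$(getfattr --absolute-names -h --only-values -n "$xattr" "$f" | base64 -w0)
      setfattr -h -n "user.${xattr#trusted.}" -v "0s$v" "$f"
    done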

One more random bit of ad-hoc trivia - use getfattr -Rd -m '.*' /srv/glusterfs-stuff
(getfattr without -m '.*' hack hides trusted.* xattrs)

Note that I didn't test this trick extensively (yet?), and only use simple distribute-replicate configuration here anyway, so probably a bad idea to run something like this blindly in an important and complicated production setup.

Also wow, it's been 7 years since I've written here about glusterfs last, time (is made of) flies :)

Jan 29, 2017

Proxying ssh user connections to gitolite host transparently

Recently bumped into apparently not-well-supported scenario of accessing gitolite instance transparently on a host that is only accessible through some other gateway host (often called "bastion" in ssh context).

Something like this:

+---------------+
|               |   git@myhost.net:myrepo
|  dev-machine  ---------------------------+
|               |                          |
+---------------+                          |
                              +------------v------+
      git@gitolite:myrepo     |                   |
  +----------------------------  myhost.net (gw)  |
  |                           |                   |
+-v-------------------+       +-------------------+
|                     |
|    gitolite (gl)    |
|  host/container/vm  |
|                     |
+---------------------+

Here gitolite instance might be running on a separate machine, or on the same "myhost.net", but inside a container or vm with separate sshd daemon.

From any dev-machine you want to simply use git@myhost.net:myrepo to access repositories, but naturally that won't work because in normal configuration you'd hit sshd on gw host (myhost.net) and not on gl host.

There are quite a few common options to work around this:

  • Use separate public host/IP for gitolite, e.g. git.myhost.net (!= myhost.net).

  • TCP port forwarding or similar tricks.

    E.g. simply forward ssh port connections in a "gw:22 -> gl:22" fashion, and have gw-specific sshd listen on some other port, if necessary.

    This can be fairly easy to use with something like the following for odd-port sshd in ~/.ssh/config:

    Host myhost.net
      Port 1234
    Host git.myhost.net
      Port 1235
    

    Can also be configured in git via remote urls like ssh://git@myhost.net:1235/myrepo.

  • Use ssh port forwarding to essentially do same thing as above, but with resulting git port accessible on localhost.

  • Configure ssh to use ProxyCommand, which will login to gw host and setup forwarding through it.

All of these, while suitable for some scenarios, are still nowhere near what I'd call "transparent", and require some additional configuration for each git client beyond just git remote add origin git@myhost.net:myrepo.

One advantage of such lower-level forwarding is that ssh authentication to gitolite is only handled on gitolite host, gw host has no clue about that.

If dropping this is not a big deal (e.g. because gw host has root access to everything in gl container anyway), there is a rather easy way to forward only git@myhost.net connections from gw to gl host, authenticating them only on gw instead, described below.


Gitolite works by building ~/.ssh/authorized_keys file with essentially command="gitolite-shell gl-key-id" <gl-key> for each public key pushed to gitolite-admin repository.

Hence to proxy connections from gw, similar key-list should be available there, with key-commands ssh'ing into gitolite user/host and running above commands there (with original git commands also passed through SSH_ORIGINAL_COMMAND env-var).

To keep such list up-to-date, post-update trigger/hook for gitolite-admin repo is needed, which can use same git@gw login (with special "gl auth admin" key) to update key-list on gw host.

Steps to implement/deploy whole thing:

  • useradd -m git on gw and run ssh-keygen -t ed25519 on both gw and gl hosts for git/gitolite user.

  • Setup all connections for git@gw to be processed via single "gitolite proxy" command, disallowing anything else, exactly like gitolite does for its users on gl host.

    gitolite-proxy.py script (python3) that I came up with for this purpose can be found here: https://github.com/mk-fg/gitolite-ssh-user-proxy/

    It's rather simple and does two things:

    • When run with --auth-update argument, receives gitolite authorized_keys list, and builds local ~/.ssh/authorized_keys from it and authorized_keys.base file.

    • Similar to gitolite-shell, when run as gitolite-proxy key-id, ssh'es into gl host, passing key-id and git command to it.

      This is done in a straightforward os.execlp('ssh', 'ssh', '-qT', ...) manner, no extra processing or any error-prone stuff like that.
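
      I.e. roughly the following, in shell terms (a very simplified sketch with a made-up host - actual script linked above also handles quoting and sanity-checks):

        # key-id and client's git command get relayed to gl host, where forced
        #  command for this key picks them up from SSH_ORIGINAL_COMMAND there
        exec ssh -qT git@gl.host.local "$key_id" "$SSH_ORIGINAL_COMMAND"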

    When installing it (to e.g. /usr/local/bin/gitolite-proxy as used below), be sure to set/update "gl_host_login = ..." line at the top there.

    For --auth-update, ~/.ssh/authorized_keys.base (note .base) file on gw should have this single line (split over two lines for readability, must be all on one line for ssh!):

    command="/usr/local/bin/gitolite-proxy --auth-update",no-port-forwarding
      ,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAA...4u3FI git@gl
    

    Here ssh-ed25519 AAA...4u3FI git@gl is the key from ~git/.ssh/id_ed25519.pub on gitolite host.

    Also run:

    # install -m0600 -o git -g git ~git/.ssh/authorized_keys{.base,}
    # install -m0600 -o git -g git ~git/.ssh/authorized_keys{.base,.old}
    
    To have initial auth-file, not yet populated with gitolite-specific keys/commands.

    Note that only these two files need to be writable for git user on gw host.

  • From gitolite (gl) host and user, run: ssh -qT git@gw < ~/.ssh/authorized_keys

    This is to test gitolite-proxy setup above - should populate ~git/.ssh/authorized_keys on gw host and print back gw host key and proxy script to run as command="..." for it (ignore them, will be installed by trigger).

  • Add trigger that'd run after gitolite-admin repository updates on gl host.

    • On gl host, put this to ~git/.gitolite.rc right before ENABLE line:

      LOCAL_CODE => "$rc{GL_ADMIN_BASE}/local",
      POST_COMPILE => ['push-authkeys'],
      
    • Commit/push push-authkeys trigger script (also from gitolite-ssh-user-proxy repo) to gitolite-admin repo as local/triggers/push-authkeys, updating gw_proxy_login line in there.

    gitolite docs on adding triggers: http://gitolite.com/gitolite/gitolite.html#triggers

Once proxy-command is in place on gw and gitolite-admin hook runs at least once (to setup gw->gl access and proxy-command), git@gw (git@myhost.net) ssh login spec can be used in exactly same way as git@gl.

That is, fully transparent access to gitolite on a different host through that one user, while otherwise allowing to use sshd on a gw host, without any forwarding tricks necessary for git clients.

Whole project, with maybe a bit more refined process description and/or whatever fixes can be found on github here: https://github.com/mk-fg/gitolite-ssh-user-proxy/

Huge thanks to sitaramc (gitolite author) for suggesting how to best setup gitolite triggers for this purpose on the ML.

Oct 22, 2015

Arch Linux chroot and sshd from boot on rooted Android with SuperSU

Got myself Android device only recently, and first thing I wanted to do, of course, was to ssh into it.

But quick look at the current F-Droid and Play apps shows that they're either based on a quite limited dropbear (though perfectly fine for one-off shell access) or "based on openssh code", where infamous Debian OpenSSL code patch comes to mind, not to mention that most look like ad-ridden proprietary piece of crap.

Plus I'd want a proper package manager, shell (zsh), tools and other stuff in there, not just some baseline bash and busybox, and chroot with regular linux distro is a way to get all that plus standard OpenSSH daemon.

Under the hood, modern Android (5.1.1 in my case, with CM 12) phone is not much more than Java VM running on top of SELinux-enabled (which I opted to keep for Android stuff) Linux kernel, on top of some multi-core ARMv7 CPU (quadcore in my case, rather identical to one in RPi2).

Steps I took to have sshd running on boot with such device:

  • Flush stock firmware (usually loaded with adware and crapware) and install some pre-rooted firmware image from xda-developers.com or 4pda.ru.

    My phone is a second-hand Samsung Galaxy S3 Neo Duos (GT-I9300I), and I picked resurrection remix CM-based ROM for it, built for this phone from t3123799, with Open GApps arm 5.1 Pico (minimal set of google stuff).

    That ROM (like most of them, it appears) comes with bash, busybox and - most importantly - SuperSU, which runs its unrestricted init, and is crucial to getting chroot-script to start on boot. All three of these are needed.

    Under Windows, there's Odin suite for flashing zip with all the goodies to USB-connected phone booted into "download mode".

    On Linux, there's Heimdall (don't forget to install adb with that, e.g. via pacman -S android-tools android-udev on Arch), which can dd img files for diff phone partitions, but doesn't seem to support "one zip with everything" format that most firmwares come in.

    Instead of figuring out which stuff from zip to upload where with Heimdall, I'd suggest grabbing a must-have TWRP recovery.img (small os that boots in "recovery mode", grabbed one for my phone from t2906840), flashing it with e.g. heimdall flash --RECOVERY recovery.twrp-2.8.5.0.img and then booting into it to install whatever main OS and apps from zip's on microSD card.

    TWRP (or similar CWM one) is really useful for a lot of OS-management stuff like app-packs installation (e.g. Open GApps) or removal, updates, backups, etc, so I'd definitely suggest installing one of these as recovery.img as the first thing on any Android device.

  • Get or make ARM chroot tarball.

    I did this by bootstrapping a stripped-down ARMv7 Arch Linux ARM chroot on a Raspberry Pi 2 that I have around:

    # pacman -Sg base
    
     ### Look through the list of packages,
     ###  drop all the stuff that won't ever be useful in chroot on the phone,
     ###  e.g. pciutils, usbutils, mdadm, lvm2, reiserfsprogs, xfsprogs, jfsutils, etc
    
     ### pacstrap chroot with whatever is left
    
    # mkdir droid-chroot
    # pacstrap -i -d droid-chroot bash bzip2 coreutils diffutils file filesystem \
        findutils gawk gcc-libs gettext glibc grep gzip iproute2 iputils less \
        licenses logrotate man-db man-pages nano pacman perl procps-ng psmisc \
        sed shadow sysfsutils tar texinfo util-linux which
    
     ### Install whatever was forgotten in pacstrap
    
    # pacman -r droid-chroot -S --needed atop busybox colordiff dash fping git \
        ipset iptables lz4 openssh patch pv rsync screen xz zsh fcron \
        python2 python2-pip python2-setuptools python2-virtualenv
    
     ### Make a tar.gz out of it all
    
    # rm -f droid-chroot/var/cache/pacman/pkg/*
    # tar -czf droid-chroot.tar.gz droid-chroot
    

    Same can be obviously done with debootstrap or whatever other distro-of-choice bootstrapping tool, but likely has to be done on a compatible architecture - something that can run ARMv7 binaries, like RPi2 in my case (though VM should do too) - to run whatever package hooks upon installs.

    Easier way (that won't require having spare ARM box or vm) would be to take pre-made image for any ARMv7 platform, from http://os.archlinuxarm.org/os/ list or such, but unless there's something very generic (which e.g. "ArchLinuxARM-armv7-latest.tar.gz" seems to be), there'll likely be some platform-specific cruft like kernel, modules, firmware blobs, SoC-specific tools and such... nothing that can't be removed at any later point with package manager or simple "rm", of course.

    While architecture in my case is ARMv7, which is quite common nowadays (2015), other devices can have Intel SoCs with x86 or newer 64-bit ARMv8 CPUs. For x86, bootstrapping can obviously be done on pretty much any desktop/laptop machine, if needed.

  • Get root shell on device and unpack chroot tarball there.

    adb shell (pacman -S android-tools android-udev or other distro equivalent, if missing) should get "system" shell on a USB-connected phone.

    (btw, "adb" access have to be enabled on the phone via some common "tap 7 times on OS version in Settings-About then go to Developer-options" dance, if not already)

    With SuperSU (or similar "su" package) installed, next step would be running "su" there to get unrestricted root, which should work.

    /system and /data should be on ext4 (check mount | grep ext4 for proper list), f2fs or such "proper" filesystem, which is important to have for all the unix permission bits and uid/gid info which e.g. FAT can't handle (loopback img with ext4 can be created in that case, but shouldn't be necessary in case of internal flash storage, which should have proper fs'es already).
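
    Such loopback image, if it were ever needed, can be created roughly like this (a sketch, assuming mkfs.ext4 is available from somewhere, e.g. the chroot tarball itself):

    # dd if=/dev/zero of=/mnt/sdcard/chroot.img bs=1M count=2048
    # mkfs.ext4 /mnt/sdcard/chroot.img
    # mount -o loop /mnt/sdcard/chroot.img /data/chroots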

    /system is a bad place for anything custom, as it will be completely flushed on most main-OS changes (when updating ROM from zip with TWRP, for instance), and is mounted with "ro" on boot anyway.

    Any subdir in /data seems to work fine, though one obvious pre-existing place - /data/local - is probably a bad idea, as it is used by some Android dev tools already.

    With busybox and proper bash on the phone, unpacking tarball from e.g. microSD card should be easy:

    # mkdir -m700 /data/chroots
    # cd /data/chroots
    # tar -xpf /mnt/sdcard/droid-chroot.tar.gz
    

    It should already work, too, so...

    # cd droid-chroot
    # mount -o bind /dev dev \
        && mount -o bind /dev/pts dev/pts \
        && mount -t proc proc proc \
        && mount -t sysfs sysfs sys
    # env -i TERM=$TERM SHELL=/bin/zsh HOME=/root $(which chroot) . /bin/zsh
    

    ...should produce a proper shell in a proper OS, yay! \o/

    Furthermore, to be able to connect there directly, without adb or USB cable, env -i $(which chroot) . /bin/sshd should work too.

    For sshd in particular, one useful thing to do here is:

    # $(which chroot) . /bin/ssh-keygen -A
    

    ...to populate /etc/ssh with keys, which are required to start sshd.

  • Setup init script to run sshd or whatever init-stuff from that chroot on boot.

    Main trick here is to run it with unrestricted SELinux context (unless SELinux is disabled entirely, I guess).

    This makes /system/etc/init.d using "sysinit_exec" and /data/local/userinit.sh with "userinit_exec" unsuitable for the task, only something like "init" ("u:r:init:s0") will work.

    SELinux on Android is documented in Android docs, and everything about SELinux in general applies there, of course, but some su-related roles like above "userinit_exec" actually come with CyanogenMod or whatever similar hacks on top of the base Android OS.

    Most relevant info on this stuff comes with SuperSU though (or rather libsuperuser) - http://su.chainfire.eu/

    That doc has info on how to patch policies, to e.g. transition to unrestricted role for chroot init, setup sub-roles for stuff in there (to also use SELinux in a chroot), which contexts are used where, and - most useful in this case - which custom "init" dirs are used at which stages of the boot process.

    Among other useful stuff, it specifies/describes /system/su.d init-dir, from which SuperSU runs scripts/binaries with unrestricted "init" context, and very early in the process too, hence it is most suitable for starting chroot from.

    So, again, from root (after "su") shell:

    # mount -o remount,rw /system
    # mkdir -m700 /system/su.d
    
    # cat >/system/su.d/chroots.sh <<EOF
    #!/system/bin/sh
    exec /data/local/chroots.bash
    EOF
    
    # chmod 700 /system/su.d/chroots.sh
    
    # cat >/data/local/chroots.bash <<'EOF'
    #!/system/xbin/bash
    export PATH=/sbin:/vendor/bin:/system/sbin:/system/bin:/system/xbin
    
    log=/data/local/chroots.log
    [[ $(du -m "$log" | awk '{print $1}') -gt 20 ]] && mv "$log"{,.old}
    exec >>"$log" 2>&1
    echo " --- Started $(TZ=UTC date) --- "
    
    log -p i -t chroots "Starting chroot: droid-chroot"
    /data/chroots/droid-chroot.sh &
    disown
    
    log -p i -t chroots "Finished chroots init"
    
    echo " --- Finished $(TZ=UTC date) --- "
    EOF
    
    # chmod 700 /data/local/chroots.bash
    
    # cd /data/chroots
    # mkdir -p droid-chroot/mnt/storage
    # ln -s droid-chroot/init.sh droid-chroot.sh
    
    # cat >droid-chroot/init.sh <<'EOF'
    #!/system/xbin/bash
    set -e -o pipefail
    
    usage() {
      bin=$(basename $0)
      echo >&2 "Usage: $bin [ stop | chroot ]"
      exit ${1:-0}
    }
    [[ "$#" -gt 1 || "$1" = -h || "$1" = --help ]] && usage
    
    cd /data/chroots/droid-chroot
    
    sshd_pid=$(cat run/sshd.pid 2>/dev/null ||:)
    
    mountpoint -q dev || mount -o bind /dev dev
    mountpoint -q dev/pts || mount -o bind /dev/pts dev/pts
    mountpoint -q proc || mount -t proc proc proc
    mountpoint -q sys || mount -t sysfs sysfs sys
    mountpoint -q tmp || mount -o nosuid,nodev,size=20%,mode=1777 -t tmpfs tmpfs tmp
    mountpoint -q run || mount -o nosuid,nodev,size=20% -t tmpfs tmpfs run
    mountpoint -q mnt/storage || mount -o bind /data/media/0 mnt/storage
    
    case "$1" in
      stop)
        [[ -z "$sshd_pid" ]] || kill "$sshd_pid"
        exit 0 ;;
      chroot)
        exec env -i\
          TERM="$TERM" SHELL=/bin/zsh HOME=/root\
          /system/xbin/chroot . /bin/zsh ;;
      *) [[ -z "$1" ]] || usage 1 ;;
    esac
    
    [[ -n "$sshd_pid" ]]\
      && kill -0 "$sshd_pid" 2>/dev/null\
      || exec env -i /system/xbin/chroot . /bin/sshd
    EOF
    
    # chmod 700 droid-chroot/init.sh
    

    To unpack all that wall-of-shell a bit:

    • Very simple /system/su.d/chroots.sh is created, so that it can easily be replaced if/when /system gets flushed by some update, and also so that it won't need to be edited (needing rw remount) ever.

    • /data/local/chroots.bash is an actual init script for whatever chroots, with Android logging stuff (accessible via e.g. adb logcat, useful to check if script was ever started) and a simpler, more reliable (and rotated) log in /data/local/chroots.log.

    • /data/chroots/droid-chroot.sh is a symlink to init script in /data/chroots/droid-chroot/init.sh, so that this script can be easily edited from inside of the chroot itself.

    • /data/chroots/droid-chroot/init.sh is the script that mounts all the stuff needed for the chroot and starts sshd there.

      Can also be run from adb root shell to do the same thing, with "stop" arg to kill that sshd, or with "chroot" arg to do all the mounts and chroot into the thing from whatever current sh.

      Basically everything to do with that chroot from now on can/should be done through that script.

    "cat" commands can obviously be replaced with "nano" and copy-paste there, or copying same (or similar) scripts from card or whatever other paths (to avoid pasting them into shell, which might be less convenient than Ctrl+S in $EDITOR).

  • Reboot, test sshd, should work.

Anything other than sshd can also be added to that init script, to make some full-featured dns + web + mail + torrents server setup start in chroot.

With more than a few daemons, it'd probably be a good idea to start just one "daemon babysitter" app from there, such as runit, daemontools or whatever. Maybe even systemd will work, though unlikely, given how it needs udev, lots of kernel features and apis initialized in its own way, and such.

Obvious caveat for running a full-fledged linux separately from main OS is that it should probably be managed through local webui's or from some local terminal app, and won't care much about power management and playing nice with Android stuff.

Android shouldn't play nice with such parasite OS either, cutting network or suspending device when it feels convenient, without any regard for conventional apps running there, though can be easily configured not to.

As I'm unlikely to want this device as a phone ever (who needs these, anyway?), turning it into something more like wireless RPi2 with a connected management terminal (represented by Android userspace) sounds like the only good use for it so far.

Update 2016-05-16: Added note on ssh-keygen and rm for pacman package cache after pacstrap.

Sep 04, 2015

Parsing OpenSSH Ed25519 keys for fun and profit

While adding key derivation from OpenSSH keys to git-nerps, needed to get the actual "secret" from these, or something deterministically (and in an obvious, stable way) derived from it (to then feed into some pbkdf2 and get the symmetric key).

Idea is for lightweight ad-hoc vms/containers to have a single "master secret", from which all others (e.g. one for git-nerps' encryption) can be easily derived or decrypted, and omnipresent, secure, useful and easy-to-generate ssh key in ~/.ssh/id_ed25519 seems to be the best candidate.

Unfortunately, standard set of ssh tools from openssh doesn't seem to have anything that can get key material or its hash - next best thing is to get "fingerprint" or such, but these are derived from public keys, so not what I wanted at all (as anyone can derive that, having public key, which isn't secret).

And I didn't want to hash full openssh key blob, because stuff there isn't guaranteed to stay the same when/if you encrypt/decrypt it or do whatever ssh-keygen does.

What definitely stays the same is the values that openssh plugs into crypto algos, so wrote a full parser for the key format (as specified in PROTOCOL.key file in openssh sources) to get that.

While doing so, stumbled upon fairly obvious and interesting application for such parser - to get a really short and easy to backup, read or transcribe string which is the actual secret for Ed25519.

I.e. that's what OpenSSH private key looks like:

-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yAAAAJi1Bt0atQbd
GgAAAAtzc2gtZWQyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yA
AAAEAc5IRaYYm2Ss4E65MYY4VewwiwyqWdBNYAZxEhZe9GpNopTJz/d2cMv4VLj/fYkWwX
zyhChhvaVTRBi0uA7H7IAAAAE2ZyYWdnb2RAbWFsZWRpY3Rpb24BAg==
-----END OPENSSH PRIVATE KEY-----

And here's the only useful info in there, enough to restore whole blob above from, in the same base64 encoding:

HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=

Latter, of course, being way more suitable for tried-and-true "write on a sticker and glue at the desk" approach. Or one can just have a file with one host key per line - also cool.

That's the 32-byte "seed" value, which can be used to derive "ed25519_sk" field ("seed || pubkey") in that openssh blob, and all other fields are either "none", "ssh-ed25519", "magic numbers" baked into format or just padding.
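
E.g. a quick shell check that it is indeed just 32 bytes:

% echo HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ= | base64 -d | wc -c
32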

So rolled the parser from git-nerps into its own tool - ssh-keyparse, which one can run and get that string above for key in ~/.ssh/id_ed25519, or do some simple crypto (as implemented by djb in ed25519.py, not me) to recover full key from the seed.

Of the output serialization formats that tool supports, I especially liked the idea of Douglas Crockford's Base32 - a human-readable one, where all confusing l-and-1 or O-and-0 chars are interchangeable, and there's an optional checksum (one letter) at the end:

% ssh-keyparse test-key --base32
3KJ8-8PK1-H6V4-NKG4-XE9H-GRW5-BV1G-HC6A-MPEG-9NG0-CW8J-2SFF-8TJ0-e

% ssh-keyparse test-key --base32-nodashes
3KJ88PK1H6V4NKG4XE9HGRW5BV1GHC6AMPEG9NG0CW8J2SFF8TJ0e

base64 (default) is still probably most efficient for non-binary (there's --raw otherwise) backup though.

[ssh-keyparse code link]

Sep 01, 2015

Transparent and easy encryption for files in git repositories

Have been installing things to OS containers (/var/lib/machines) lately, and looking for proper configuration management in these.

Large-scale container setups use some hard-to-integrate things like etcd, where you have to template configuration from values in these, which is not very convenient and has a very poor effort-to-results ratio (maintenance of that system itself) for "10 service containers on 3 hosts" case.

Besides, such centralized value store is a bit backwards for one-container-per-service case, where most values in such "central db" are specific to one container, and it's much easier to edit end-result configs than db values, then templates, then check how it all gets rendered on every trivial tweak.

Usual solution I have for these setups is simply putting all confs under git control, but leaving all the secrets (e.g. keys, passwords, auth data) out of the repo, in case it might be pulled from on other hosts, by different people and for purposes which don't need these sensitive bits and might leak them (e.g. giving access to contracted app devs).

For more transient container setups, something should definitely keep track of these "secrets" however, as "rm -rf /var/lib/machines/..." is much more realistic possibility and has its uses.


So my (non-original) idea here was to have one "master key" per host - just one short string - with which to encrypt all secrets for that host, which can then be shared between hosts and specific people (making these public might still be a bad idea), if necessary.

This key should then be simply stored in whatever key-management repo, written on a sticker and glued to a display, or something.

Git can be (ab)used for such encryption, with its "filter" facilities, which are generally used for opposite thing (normalization to one style), but are easy to adapt for this case too.

Git filters work by running "clean" operation on selected paths (can be wildcard patterns like "*.c") every time git itself uses these and "smudge" when showing them to user and checking them out to a local copy (where they are edited).

In case of encryption, "clean" would not be normalizing CR/LF in line endings, but rather wrapping contents (or parts of them) into a binary blob, and "smudge" should do the opposite, with gitattributes patterns matching files to be encrypted.
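
Wiring for all that looks roughly like this (a generic sketch with hypothetical "encrypt-tool" command - git-nerps, described below, sets this stuff up by itself):

# Filter commands get file contents on stdin and produce result on stdout
git config filter.secrets.clean 'encrypt-tool encrypt'
git config filter.secrets.smudge 'encrypt-tool decrypt'
echo 'myapp/auth.conf filter=secrets' >>.gitattributes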


Looking for projects that already do that, found quite a few, but still decided to write my own tool, because none seem to have all the things I wanted:

  • Use sane encryption.

    It's AES-CTR in the absolutely best case, and AES-ECB (wtf!?) in some, sometimes openssl is called with "password" on the command line (trivial to snoop via /proc).

    OpenSSL itself is a red flag - hard to believe that someone who knows how bad its API and primitives are still uses it willingly, for non-TLS, at least.

    Expected to find at least one project using AEAD through NaCl or something, but no such luck.

  • Have tool manage gitattributes.

    You don't add file to git repo by typing /path/to/myfile managed=version-control some-other-flags to some config, why should you do it here?

  • Be easy to deploy.

    Ideally it'd be a script, not some c++/autotools project to install build tools for or package to every setup.

    Though bash script is maybe taking it a bit too far, given how messy it is for anything non-trivial, secure and reliable in diff environments.

  • Have "configuration repository" as intended use-case.

So wrote git-nerps python script to address all these.

Crypto there is trivial yet solid PyNaCl stuff, marking files for encryption is as easy as git-nerps taint /what/ever/path and bootstrapping the thing requires nothing more than python, git, PyNaCl (which are norm in any of my setups) and git-nerps key-gen in the repo.

README for the project has info on every aspect of how the thing works and more on the ideas behind it.

I expect it'll have a few more use-case-specific features and convenience-wrapper commands once I get to use it in more realistic cases than it has now (initially).


[project link]

Jan 12, 2015

Starting systemd service instance for device from udev

Needed to implement a thing that would react to USB Flash Drive being inserted (into autonomous BBB device) - get device name, mount fs there, rsync stuff to it, unmount.

To avoid whatever concurrency issues (i.e. multiple things screwing with device in parallel), and to get proper error logging and other startup things, most obvious approach is to wrap the script in a systemd oneshot service.

Only non-immediately-obvious problem for me here was how to pass device to such service properly.

With a bit of digging through google results (and even finding one post here somehow among them), eventually found "Pairing udev's SYSTEMD_WANTS and systemd's templated units" resolved thread, where what seems to be the current-best approach is specified.

Adapting it for my case and pairing with generic patterns for device-instantiated services, resulted in the following configuration.

99-sync-sensor-logs.rules:

SUBSYSTEM=="block", ACTION=="add", ENV{ID_TYPE}="disk", ENV{DEVTYPE}=="partition",\
  PROGRAM="/usr/bin/systemd-escape -p --template=sync-sensor-logs@.service $env{DEVNAME}",\
  ENV{SYSTEMD_WANTS}+="%c"

sync-sensor-logs@.service:

[Unit]
BindsTo=%i.device
After=%i.device

[Service]
Type=oneshot
TimeoutStartSec=300
ExecStart=/usr/local/sbin/sync-sensor-logs /%I

This makes the script stop if it runs for too long or if device vanishes (due to BindsTo=), and properly delays its start until device is ready.

"sync-sensor-logs" script at the end gets passed original unescaped device name as an argument.

Easy to apply all the systemd.exec(5) and systemd.service(5) parameters on top of this.
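
For instance, here's what that PROGRAM escaping produces, with %I in the unit then unescaping "dev-sdb1" back into "dev/sdb1" for the script:

# systemd-escape -p --template=sync-sensor-logs@.service /dev/sdb1
sync-sensor-logs@dev-sdb1.service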

Does not need things like systemctl invocation or manual systemd escaping re-implementation either, though running "systemd-escape" still seems to be a necessary evil there.

systemd-less alternative seems to be having a script that does per-device flock, timeout logic and a lot more checks for whether device is ready and/or still there, so this approach looks way saner and clearer, with a caveat that one should probably be familiar with all these systemd features.

Oct 05, 2014

Simple aufs setup for Arch Linux ARM and boards like RPi, BBB or Cubie

Experimenting with all kinds of arm boards lately (nyms above stand for Raspberry Pi, Beaglebone Black and Cubieboard), I can't help but feel a bit sorry for microsd cards in each one of them.

These are even worse for non-bulk writes than SSDs, having fewer erase cycles plus larger blocks, and yet when used for all fs needs of the board, even typing "ls" into shell will usually emit a write (unless shell doesn't keep history, which sucks).

Great explanation of how they work can be found on LWN (as usual).

Easy and relatively hassle-free way to fix the issue is to use aufs, but as doing it for the whole rootfs requires initramfs (which is not needed here otherwise), it's a lot easier to only use it for commonly-writable parts - i.e. /var and /home in most cases.

Home for "root" user is usually /root, so to make it aufs material as well, it's better to move that to /home (which probably shouldn't be a separate fs on these devices), leaving /root as a symlink to that.

It seems to be impossible to do when logged-in as root (mv will error with EBUSY), but trivial from any other machine:

# mount /dev/sdb2 /mnt # mount microsd
# cd /mnt
# mv root home/
# ln -s home/root
# cd
# umount /mnt

As aufs2 is already built into Arch Linux ARM kernel, only thing that's left is to add early-boot systemd unit for mounting it, e.g. /etc/systemd/system/aufs.service:

[Unit]
DefaultDependencies=false

[Install]
WantedBy=local-fs-pre.target

[Service]
Type=oneshot
RemainAfterExit=true

# Remount /home and /var as aufs
ExecStart=/bin/mount -t tmpfs tmpfs /aufs/rw
ExecStart=/bin/mkdir -p -m0755 /aufs/rw/var /aufs/rw/home
ExecStart=/bin/mount -t aufs -o br:/aufs/rw/var=rw:/var=ro none /var
ExecStart=/bin/mount -t aufs -o br:/aufs/rw/home=rw:/home=ro none /home

# Mount "pure" root to /aufs/ro for syncing changes
ExecStart=/bin/mount --bind / /aufs/ro
ExecStart=/bin/mount --make-private /aufs/ro

And then create the dirs used there and enable unit:

# mkdir -p /aufs/{rw,ro}
# systemctl enable aufs

Now, upon rebooting the board, you'll get aufs mounts for /home and /var, making all the writes there go to respective /aufs/rw dirs on tmpfs while allowing to read all the contents from underlying rootfs.

To make sure systemd doesn't waste extra tmpfs space thinking it can sync logs to /var/log/journal, I'd also suggest to do this (before rebooting with aufs mounts):

# rm -rf /var/log/journal
# ln -s /dev/null /var/log/journal

Can also be done via journald.conf with Storage=volatile.
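
I.e. this bit in /etc/systemd/journald.conf:

[Journal]
Storage=volatile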

One obvious caveat with aufs is, of course, how to deal with things that do expect to have permanent storage in /var - examples can be pacman (Arch package manager) on system updates, postfix or any db.

For stock Arch Linux ARM though, it's only pacman on manual updates.

And depending on the app and how "ok" loss of this data might be, app dir in /var (e.g. /var/lib/pacman) can be either moved + symlinked to /srv or synced before shutdown or after it's done with writing (for manual oneshot apps like pacman).

For moving stuff back to permanent fs, aubrsync from aufs2-util.git can be used like this:

# aubrsync move /var/ /aufs/rw/var/ /aufs/ro/var/

As even pulling that from shell history can be a bit tedious, I've made a simpler ad-hoc wrapper - aufs_sync - that can be used (with mountpoints similar to presented above) like this:

# aufs_sync
Usage: aufs_sync { copy | move | check } [module]
Example (flushes /var): aufs_sync move var

# aufs_sync check
/aufs/rw
/aufs/rw/home
/aufs/rw/home/root
/aufs/rw/home/root/.histfile
/aufs/rw/home/.wh..wh.orph
/aufs/rw/home/.wh..wh.plnk
/aufs/rw/home/.wh..wh.aufs
/aufs/rw/var
/aufs/rw/var/.wh..wh.orph
/aufs/rw/var/.wh..wh.plnk
/aufs/rw/var/.wh..wh.aufs
--- ... just does "find /aufs/rw"

# aufs_sync move
--- does "aubrsync move" for all dirs in /aufs/rw

Just be sure to check if any new apps might write something important there (right after installing these) and do symlinks (to something like /srv) for their dirs, as even having "aufs_sync copy" on shutdown definitely won't prevent data loss for these on e.g. sudden power blackout or any crashes.

Sep 23, 2014

tmux rate-limiting magic against terminal spam/flood lock-ups

Update 2015-11-08: No longer necessary (or even supported in 2.1) - tmux' new "backoff" rate-limiting approach works like a charm with defaults \o/

Had the issue of spammy binary locking-up terminal for a long time, but never bothered to do something about it... until now.

Happens with any terminal I've seen - just run something like this in the shell there:

# for n in {1..500000}; do echo "$spam $n"; done

And for at least several seconds, terminal is totally unresponsive, no matter how many screen's / tmux'es are running there.

It's usually faster to kill term window via WM and re-attach to whatever was inside from a new one.

xterm seems to be one of the most resistant *terms to this, e.g. terminology - much less so, which I guess just means that it's more fancy and hence slower to draw millions of output lines.

Anyhow, tmuxrc magic:

set -g c0-change-trigger 150
set -g c0-change-interval 100

"man tmux" says that 250/100 are defaults, but it doesn't seem to be true, as just setting these "defaults" explicitly here fixes the issue, which exists with the default configuration.

Fix just limits rate of tmux output to basically 150 newlines (which is like twice my terminal height anyway) per 100 ms, so xterm won't get flooded with "rendering megs of text" backlog, remaining apparently-unresponsive (to any other output) for a while.

Since I always run tmux as a top-level multiplexer in xterm, this totally solved the annoyance for me. Just wish I'd done that much sooner - would've saved me a lot of time and probably some rage-burned neurons.

Jul 16, 2014

(yet another) Dynamic DNS thing for tinydns (djbdns)

Tried to find any simple script to update tinydns (part of djbdns) zones that'd be better than ssh dns_update@remote_host update.sh, but failed - they all seem to be hacky php scripts, doomed to run behind httpds, send passwords in url, query random "myip" hosts or something like that.

What I want instead is something that won't be making http, tls or ssh connections (and stirring all the crap behind these), but would rather just send udp or even icmp pings to remotes, which should be enough for update, given source IPs of these packets and some authentication payload.

So yep, wrote my own scripts for that - tinydns-dynamic-dns-updater project.

Tool sends UDP packets with 100 bytes of "( key_id || timestamp ) || Ed25519_sig" from clients, authenticating and distinguishing these server-side by their signing keys ("key_id" there is to avoid iterating over them all, checking which matches signature).

Server zone files can have "# dynamic: ts key1 key2 ..." comments before records (separated from static records after these by comments or empty lines), which says that any source IPs of packets with correct signatures (and more recent timestamps) will be recorded in A/AAAA records (depending on source AF) that follow instead of what's already there, leaving anything else in the file intact.
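
E.g. a zone-file chunk for one such dynamic host can look like this (tinydns-data format, with made-up name/key/timestamp):

# dynamic: 1405639000 home-client-key
+home.example.net:203.0.113.15:300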

Zone file only gets replaced if something is actually updated. It's also possible to use dynamic IP for the server itself, via dynamic hostname on the client (which is resolved for each delayed packet).

Lossy nature of UDP can be easily mitigated by passing e.g. "-n5" to the client script, so it'd send 5 packets (with exponential delays by default, configurable via --send-delay), plus just having the thing on fairly regular intervals in crontab.

Putting server script into socket-activated systemd service file also makes all daemon-specific pains like using privileged ports (and most other security/access things), startup/daemonization, restarts, auto-suspend timeout and logging woes just go away, so there's --systemd flag for that too.

Given how easy it is to run djbdns/tinydns instance, there really doesn't seem to be any compelling reason not to use your own dynamic dns stuff for every single machine or device that can run simple python scripts.

Github link: tinydns-dynamic-dns-updater
