Jun 09, 2017

acme-cert-tool for easy end-to-end https cert management

Tend to mention random trivial tools I write here, but somehow forgot about this one - acme-cert-tool.

Implemented it a few months back when setting up TLS, and wasn't satisfied by any of the existing tools for ACME / Let's Encrypt cert management.

Wanted to find some simple python3 script that's a bit less hacky than acme-tiny, isn't a bloated framework with dozens of useless deps like certbot, and has ECC certs covered - but came up empty.

acme-cert-tool has all that in a single script with just one dep on a standard py crypto toolbox (cryptography.io), and does everything through a single command, e.g. something like:

% ./acme-cert-tool.py --debug -gk le-staging.acc.key cert-issue \
  -d /srv/www/.well-known/acme-challenge le-staging.cert.pem mydomain.com

...to get a signed cert for mydomain.com, doing all the generation, registration and authorization steps as necessary, caching the results in "le-staging.acc.key" too, so no extra work is repeated on subsequent runs.

Add && systemctl reload nginx to that, put it into a crontab or a systemd .timer, and done.
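
For the .timer route, a sketch of such a unit pair might look like this (unit names, paths and the weekly schedule here are made up):

# cert-renew.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/acme-cert-tool.py -gk /etc/acme/le.acc.key cert-issue \
  -d /srv/www/.well-known/acme-challenge /etc/acme/le.cert.pem mydomain.com
ExecStartPost=/usr/bin/systemctl reload nginx

# cert-renew.timer
[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target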

There are a bunch of other commands, mostly to play with accounts and such, plus options for all kinds of cert and account settings, e.g. ... -e myemail@mydomain.com -c rsa-2048 -c ec-384 to also have a cert with an rsa key generated for random outdated clients, and to add an email for notifications (if not added already).

README on the acme-cert-tool github page and -h/--help output should have more details.

Sep 25, 2016

nftables re-injected IPSec matching without xt_policy

As of linux-4.8, something like xt_policy is still - unfortunately - on the nftables TODO list, so to match traffic pre-authenticated via IPSec, some workaround is needed.

Obvious one is to keep using iptables/ip6tables to mark IPSec packets with the old xt_policy module, as these rules interoperate with nftables just fine, with the only important bit being the ordering of iptables hooks vs nft chain priorities, which is rather easy to find in the "netfilter_ipv{4,6}.h" kernel headers, e.g.:

enum nf_ip_hook_priorities {
  NF_IP_PRI_FIRST = INT_MIN,
  NF_IP_PRI_CONNTRACK_DEFRAG = -400,
  NF_IP_PRI_RAW = -300,
  NF_IP_PRI_SELINUX_FIRST = -225,
  NF_IP_PRI_CONNTRACK = -200,
  NF_IP_PRI_MANGLE = -150,
  NF_IP_PRI_NAT_DST = -100,
  NF_IP_PRI_FILTER = 0,
  NF_IP_PRI_SECURITY = 50,
  NF_IP_PRI_NAT_SRC = 100,
  NF_IP_PRI_SELINUX_LAST = 225,
  NF_IP_PRI_CONNTRACK_HELPER = 300,
  NF_IP_PRI_CONNTRACK_CONFIRM = INT_MAX,
  NF_IP_PRI_LAST = INT_MAX,
};

(see also Netfilter-packet-flow.svg by Jan Engelhardt for a general overview of the iptables hook positions; nftables allows defining any number of chains before/after these)

So marks from iptables/ip6tables rules like these:

*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -m policy --dir in --pol ipsec --mode transport -j MARK --or-mark 0x101
-A OUTPUT -m policy --dir out --pol ipsec --mode transport -j MARK --or-mark 0x101
COMMIT

Will be easy to match in priority=0 input/output hooks (as NF_IP_PRI_RAW=-300) of nft ip/ip6/inet tables (e.g. mark and 0x101 == 0x101 accept).

But that'd split firewall configuration between iptables/nftables, adding the hassle of keeping the whole "iptables" thing initialized just for one or two rules.

xfrm transformations (like ipsec esp decryption in this case) seem to preserve all information about the packet intact, including packet marks (but not conntrack states, which track the esp connection), which - as suggested by Florian Westphal in #netfilter - can be used to match post-xfrm packets in nftables by this preserved mark field.

E.g. having this (strictly before ct state {established, related} accept for stateful firewalls, as each packet has to be marked):

define cm.ipsec = 0x101
add rule inet filter input ip protocol esp mark set mark or $cm.ipsec
add rule inet filter input ip6 nexthdr esp mark set mark or $cm.ipsec
add rule inet filter input mark and $cm.ipsec == $cm.ipsec accept

Will mark and accept both still-encrypted esp packets (IPv4/IPv6) and their decrypted payload.

Note that this assumes that all IPSec connections are properly authenticated and trusted, so be sure not to use anything like that if e.g. opportunistic encryption is enabled.

A much simpler nft-only solution, though still not a full substitute for what xt_policy does, of course.

Dec 09, 2015

Transparent buffer/file processing in emacs on load/save/whatever-io ops

Following up on my gpg replacement endeavor, also needed to add transparent decryption for buffers loaded from *.ghg files, and encryption when writing stuff back to these.

git filters (defined via the gitattributes file) do the same thing when interacting with the repo.

Such a thing is already done by a few existing elisp modules, such as jka-compr.el for auto-compression-mode (opening/saving .gz and similar files as if they were plaintext), and epa.el for transparent gpg encryption.

While these modules do this The Right Way by adding a "file-name-handler-alist" entry, googling for small ad-hoc boilerplate turns up quite a few examples that do it via hooks instead, which seem rather unreliable, with especially bad failure modes wrt transparent encryption.

So, in the interest of providing right-er boilerplate for the task (and because I tend to like elisp) - here's the fg_sec.el example (from mk-fg/emacs-setup) of how it can be implemented cleaner, in a similar fashion to epa and jka-compr.

Code calls ghg -do when visiting/reading files (with contents piped to stdin) and ghg -eo (with stdin/stdout buffers) when writing stuff back.

Entry-point/hook there is "file-name-handler-alist", where a regexp to match *.ghg gets added to call "ghg-io-handler" for every i/o operation (including path ops like "expand-file-name" or "file-exists-p" btw), with only "insert-file-contents" (read) and "write-region" (write) being overridden.
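
A minimal sketch of that wiring (not the actual fg_sec.el code - my-ghg-read/my-ghg-write are hypothetical stand-ins for funcs piping contents through the ghg subprocess):

(defun ghg-io-handler (operation &rest args)
  (cond
    ((eq operation 'insert-file-contents) (apply 'my-ghg-read args))
    ((eq operation 'write-region) (apply 'my-ghg-write args))
    ;; pass all other ops (expand-file-name, file-exists-p, ...) on
    ;; to whatever handler would run if this one didn't exist
    (t (let ((inhibit-file-name-handlers
               (cons 'ghg-io-handler
                 (and (eq inhibit-file-name-operation operation)
                   inhibit-file-name-handlers)))
             (inhibit-file-name-operation operation))
         (apply operation args)))))

(add-to-list 'file-name-handler-alist '("\\.ghg\\'" . ghg-io-handler))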

Unlike jka-compr though, no temporary files are used in this implementation, only temp buffers, and "insert-file-contents" doesn't put unauthenticated data into the target buffer as it arrives, patiently waiting for the subprocess to exit with a success code first.

Fairly sure that this bit of elisp can be used for any kind of processing, by replacing the "ghg" binary with anything else that can work as a pipe (stdin -> processing -> stdout), which opens quite a lot of possibilities.

For example, all JSON files can be edited as a pretty YAML version, without the strict syntax and all the brackets of JSON, or the need to process/convert them purely in elisp's json-mode or something - just plug python -m pyaml and python -m json commands for these two i/o ops and it should work.
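
If those exact commands don't fit as-is, a trivial hand-rolled filter script works for both ops too - a hypothetical sketch using PyYAML:

#!/usr/bin/env python
# jy.py - stdin -> stdout filter pair for the handler above:
#   "jy.py read" turns json into yaml (for insert-file-contents),
#   "jy.py write" turns yaml back into json (for write-region)
import sys, json, yaml

if sys.argv[1] == 'read':
  yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False)
else:
  json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=2, sort_keys=True)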

Suspect there's gotta be something that'd make such filters easier in MELPA already, but haven't been able to spot anything right away, maybe should put up a package there myself.

[fg_sec.el code link]

Dec 08, 2015

GHG - simpler GnuPG (gpg) replacement for file encryption

Have been using gpg for many years now, many times a day, as I keep a lot of stuff in .gpg files, but still can't seem to get used to its quirky interface and practices.

Most notably, its "trust" thing, keyrings and arcane key editing, expiration dates, gpg-agent interaction and encrypted keys are all sources of dread and stress for me.

The last drop, following the tradition of many disastrous interactions with the tool, was me losing my master signing key password, despite it being written down on paper and working before. #fail ;(

Certainly my fault, but as I'll be replacing the damn key anyway, why not throw out the rest of that incomprehensible tangle of pointless and counter-productive practices and features I never use?

Took ~6 hours to write a replacement ghg tool - same thing as gpg, except with simple and sane key management (which doesn't expect you to enter anything, ever!!!), none of that web-of-trust or signing crap, good (and non-swappable) djb crypto, and only for file encryption.

Does everything I've used gpg for from the command-line, and has one flat file for all the keys, so no more hassle with --edit-key nonsense.

Highly suggest to anyone who ever had trouble and frustration with gpg to check the ghg project out or write their own (trivial!) tool, and ditch the old thing - life's too short to deal with that constant headache.

Sep 04, 2015

Parsing OpenSSH Ed25519 keys for fun and profit

While adding key derivation from OpenSSH keys to git-nerps, needed to get the actual "secret", or something deterministically (and in an obvious, stable way) derived from it, to then feed into some pbkdf2 and get the symmetric key.

Idea is for lightweight ad-hoc vms/containers to have a single "master secret", from which all others (e.g. one for git-nerps' encryption) can be easily derived or decrypted, and the omnipresent, secure, useful and easy-to-generate ssh key in ~/.ssh/id_ed25519 seems to be the best candidate.
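
The derivation step itself is the easy part - e.g. a sketch with stdlib pbkdf2 (the salt/purpose scheme and iteration count here are made up, not what git-nerps actually does):

import hashlib

def derive_key(seed, purpose, key_len=32):
  # "purpose" acts as salt here, so e.g. b'git-nerps' and b'backup'
  # yield unrelated keys from the same 32-byte master secret
  return hashlib.pbkdf2_hmac('sha256', seed, purpose, 100000, dklen=key_len)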

Unfortunately, the standard set of ssh tools from openssh doesn't seem to have anything that can get the key material or its hash - the next best thing is to get a "fingerprint" or such, but those are derived from public keys, so not what I wanted at all (as anyone with the public key - which isn't secret - can derive them).

And I didn't want to hash the full openssh key blob, because stuff there isn't guaranteed to stay the same when/if you encrypt/decrypt it or do whatever ssh-keygen does.

What definitely stays the same are the values that openssh plugs into crypto algos, so wrote a full parser for the key format (as specified in the PROTOCOL.key file in openssh sources) to get those.

While doing so, stumbled upon a fairly obvious and interesting application for such a parser - getting a really short and easy to backup, read or transcribe string which is the actual secret for Ed25519.

I.e. this is what an OpenSSH private key looks like:

-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yAAAAJi1Bt0atQbd
GgAAAAtzc2gtZWQyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yA
AAAEAc5IRaYYm2Ss4E65MYY4VewwiwyqWdBNYAZxEhZe9GpNopTJz/d2cMv4VLj/fYkWwX
zyhChhvaVTRBi0uA7H7IAAAAE2ZyYWdnb2RAbWFsZWRpY3Rpb24BAg==
-----END OPENSSH PRIVATE KEY-----

And here's the only useful info in there, enough to restore the whole blob above from, in the same base64 encoding:

HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=

The latter, of course, is way more suitable for the tried-and-true "write on a sticker and glue to the desk" approach. Or one can just have a file with one host key per line - also cool.

That's the 32-byte "seed" value, which can be used to derive the "ed25519_sk" field ("seed || pubkey") in that openssh blob; all other fields are either "none", "ssh-ed25519", magic numbers baked into the format, or just padding.
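
To illustrate the point, here's how the full key material can be rebuilt from that seed - sketched with PyNaCl here, instead of the ed25519.py that the tool uses (mentioned below):

import base64
from nacl.signing import SigningKey

seed = base64.b64decode('HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=')
sk = SigningKey(seed)                     # ed25519 keys are deterministic wrt seed
ed25519_sk = seed + bytes(sk.verify_key)  # the "seed || pubkey" blob field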

So rolled the parser from git-nerps into its own tool - ssh-keyparse, which one can run to get that string above for the key in ~/.ssh/id_ed25519, or do some simple crypto (as implemented by djb in ed25519.py, not me) to recover the full key from the seed.

Of the output serialization formats that the tool supports, especially liked the idea of Douglas Crockford's Base32 - a human-readable one, where confusing chars like l-and-1 or O-and-0 are interchangeable, and there's an optional checksum (one letter) at the end:

% ssh-keyparse test-key --base32
3KJ8-8PK1-H6V4-NKG4-XE9H-GRW5-BV1G-HC6A-MPEG-9NG0-CW8J-2SFF-8TJ0-e

% ssh-keyparse test-key --base32-nodashes
3KJ88PK1H6V4NKG4XE9HGRW5BV1GHC6AMPEG9NG0CW8J2SFF8TJ0e

base64 (the default) is still probably the most efficient for non-binary backup (there's --raw otherwise) though.

[ssh-keyparse code link]

Sep 01, 2015

Transparent and easy encryption for files in git repositories

Have been installing things into OS containers (/var/lib/machines) lately, and looking for proper configuration management for these.

Large-scale container setups use some hard-to-integrate things like etcd, where you have to template configuration from values stored there, which is not very convenient and has a very low effort-to-results ratio (given the maintenance of that system itself) for the "10 service containers on 3 hosts" case.

Besides, such a centralized value store is a bit backwards for the one-container-per-service case, where most values in such a "central db" are specific to one container, and it's much easier to edit end-result configs than db values, then templates, then checking how it all gets rendered on every trivial tweak.

The usual solution I have for these setups is simply putting all confs under git control, but leaving all the secrets (e.g. keys, passwords, auth data) out of the repo, in case it might be pulled from other hosts, by different people and for purposes which don't need these sensitive bits and might leak them (e.g. giving access to contracted app devs).

For more transient container setups, something should definitely keep track of these "secrets" however, as "rm -rf /var/lib/machines/..." is a much more realistic possibility and has its uses.


So my (non-original) idea here was to have one "master key" per host - just one short string - with which to encrypt all secrets for that host, which can then be shared between hosts and specific people (making these public might still be a bad idea), if necessary.

This key should then be simply stored in whatever key-management repo, written on a sticker and glued to a display, or something.

Git can be (ab)used for such encryption, with its "filter" facilities, which are generally used for the opposite thing (normalization to one style), but are easy to adapt for this case too.

Git filters work by running a "clean" operation on selected paths (can be wildcard patterns like "*.c") every time git itself uses these, and "smudge" when showing them to the user and checking them out to a local copy (where they are edited).

In the case of encryption, "clean" would not be normalizing CR/LF line endings, but rather wrapping contents (or parts of them) into an encrypted binary blob, "smudge" would do the opposite, and gitattributes patterns would match the files to be encrypted.
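
In git's own terms, that wiring looks like the following (the "crypt" filter name and encrypt-tool command are placeholders here - the actual tool below manages all of this by itself):

# .gitattributes - which paths go through the filter
secrets/* filter=crypt

# .git/config - "clean" runs when git stores a file, "smudge" on checkout,
# both as stdin -> stdout pipes
[filter "crypt"]
  clean = encrypt-tool -e
  smudge = encrypt-tool -d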


Looking for projects that already do that, found quite a few, but still decided to write my own tool, because none seem to have all the things I wanted:

  • Use sane encryption.

It's AES-CTR in the absolute best case, and AES-ECB (wtf!?) in some, and sometimes openssl is called with the "password" on the command line (trivial to snoop via /proc).

    OpenSSL itself is a red flag - hard to believe that someone who knows how bad its API and primitives are still uses it willingly, for non-TLS, at least.

    Expected to find at least one project using AEAD through NaCl or something, but no such luck.

  • Have tool manage gitattributes.

You don't add a file to a git repo by typing /path/to/myfile managed=version-control some-other-flags into some config, so why should you do it here?

  • Be easy to deploy.

Ideally it'd be a script, not some c++/autotools project that needs build tools installed or packaging for every setup.

Though a bash script is maybe taking it a bit too far, given how messy bash gets for anything non-trivial that should be secure and reliable in different environments.

  • Have "configuration repository" as intended use-case.

So wrote the git-nerps python script to address all of these.

Crypto there is trivial yet solid PyNaCl stuff, marking files for encryption is as easy as git-nerps taint /what/ever/path, and bootstrapping the thing requires nothing more than python, git, PyNaCl (which are the norm in any of my setups) and git-nerps key-gen in the repo.

README for the project has info on every aspect of how the thing works and more on the ideas behind it.

I expect it'll have a few more use-case-specific features and convenience-wrapper commands once I get to use it in more realistic cases than it sees now (initially).


[project link]

Jul 16, 2014

(yet another) Dynamic DNS thing for tinydns (djbdns)

Tried to find any simple script to update tinydns (part of djbdns) zones that'd be better than ssh dns_update@remote_host update.sh, but failed - they all seem to be hacky php scripts, doomed to run behind httpds, send passwords in urls, query random "myip" hosts or something like that.

What I want instead is something that won't be making http, tls or ssh connections (and stirring all the crap behind these), but would rather just send udp or even icmp pings to remotes, which should be enough for an update, given the source IPs of these packets and some authentication payload.

So yep, wrote my own scripts for that - tinydns-dynamic-dns-updater project.

The tool sends UDP packets with 100 bytes of "( key_id || timestamp ) || Ed25519_sig" from clients, authenticating and distinguishing these server-side by their signing keys ("key_id" there is to avoid iterating over them all, checking which one matches the signature).
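
Roughly this, protocol-wise - an illustrative sketch using PyNaCl, with field sizes and the key_id scheme made up here, not the tool's actual wire format:

# client: sign (key_id || timestamp) and append the 64-byte signature
import struct, time
from nacl.signing import SigningKey

sk = SigningKey.generate()
key_id = sk.verify_key.encode()[:8]    # something short to look the key up by
msg = key_id + struct.pack('>Q', int(time.time()))
packet = msg + sk.sign(msg).signature

# server: pick verify_key by key_id, check ts freshness, then
sk.verify_key.verify(packet[:-64], packet[-64:])  # raises BadSignatureError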

Server zone files can have "# dynamic: ts key1 key2 ..." comments before records (separated from the static records after these by comments or empty lines), which say that source IPs of packets with correct signatures (and more recent timestamps) will be recorded in the A/AAAA records (depending on source AF) that follow, replacing what's already there and leaving everything else in the file intact.
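
A hypothetical zone-file fragment in that format (tinydns "+" lines being plain A records; names/IPs/keys made up here):

# dynamic: 1405468800 key1 key2
+myhost.example.com:203.0.113.5:300

+static.example.com:203.0.113.80:300

The IP in the first record gets rewritten by valid update packets, while the static one below the empty line stays as-is.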

The zone file only gets replaced if something actually changed, and it's possible to use a dynamic IP for the server as well, using a dynamic hostname on the client (which is resolved for each delayed packet).

The lossy nature of UDP can be easily mitigated by passing e.g. "-n5" to the client script, so it'd send 5 packets (with exponential delays by default, configurable via --send-delay), plus just running the thing at fairly regular intervals from crontab.

Putting the server script into a socket-activated systemd service file also makes all the daemon-specific pains - privileged ports (and most other security/access things), startup/daemonization, restarts, auto-suspend timeout and logging woes - just go away, so there's a --systemd flag for that too.
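
E.g. a hypothetical socket-activated unit pair for it (unit names, port and path are made up here):

# tddu.socket
[Socket]
ListenDatagram=5533

[Install]
WantedBy=sockets.target

# tddu.service
[Service]
ExecStart=/usr/local/bin/tddu-server --systemd
User=nobody

systemd then owns the socket (so the script itself needs no port-binding privileges) and only starts the service when the first packet arrives.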

Given how easy it is to run a djbdns/tinydns instance, there really doesn't seem to be any compelling reason not to use your own dynamic dns stuff for every single machine or device that can run simple python scripts.

Github link: tinydns-dynamic-dns-updater

May 18, 2014

The moment of epic fail hilarity with hashes

Just had an epic moment wrt how to fail at kinda-basic math, which seems to be quite representative of how people fail wrt homebrew crypto code (and what everyone and their mom warns against).

So, anyhow, on a d3 vis, I wanted to get pseudorandom colors for text blobs, but with roughly the same luminosity on the HSL scale (Hue - Saturation - Luminosity/Lightness/Level), so that changing opacity on a light/dark bg can be seen clearly as a change of L in the resulting color.

There are text items like (literally, in this example) "thing1", "thing2", "thing3" - ideally, these should all have distinct and constant colors.

So how do you pick H and S components in HSL from a text tag? Just use a hash, right?

As JS doesn't have any hashes yet (WebCryptoAPI is in the works) and I don't really need "crypto" here, just some str-to-num shuffler for color, decided that I might as well just roll out a simple one-liner non-crypto hash func implementation.
There are plenty of those around, e.g. this list.

Didn't want much bias wrt which ranges of colors get picked, and there are these test results - link1, link2 - on how these functions behave, e.g. performance and distribution of output values over the uint32 range.

Picked random "ok" one - Ly hash, with fairly even output distribution, implemented as this:

hashLy_max = 4294967296 # uint32
hashLy = (str, seed=0) ->
  for n in [0..(str.length-1)]
    c = str.charCodeAt(n)
    while c > 0
      seed = ((seed * 1664525) + (c & 0xff) + 1013904223) % hashLy_max
      c >>= 8
  seed

The c >>= 8 line and the internal loop are there because JS has unicode strings, so this is a trivial (non-standard) way to encode chars into bytes.

But given any "thing1" string, I need two 0-255 values: H and S, not one 0-(2^32-1). So let's map output to a 0-255 range and just call it twice:

hashLy_chain = (str, count=2, max=255) ->
  [hash, hashes] = [0, []]
  scale = d3.scale.linear()
    .range([0, max]).domain([0, hashLy_max])
  for n in [1..count]
    hash = hashLy(str, hash)
    hashes.push(scale(hash))
  hashes

Note how, to produce the second hash output, "hashLy" just gets called with a "seed" value equal to the first hash - essentially hash(hash(value) || value).
People do that with md5, sha*, and their ilk all the time, right?

Getting the values from this func, noticed that they don't look random at all, which is not what I came to expect from hash functions, being quite used to dealing with crypto hashes, which are really easy to get in any lang but JS.

So, sure, given that I'm playing around with d3 anyway, let's just plot the outputs:

Ly hash outputs

"Wat?... Oh, right, makes sense."

Especially with sequential items, it's hilarious how non-random, and even constant, the output there is.
And it totally makes sense, of course - it's just a "k1*x + c + k2" function of the previous seed x and the char code c.

It's meant for hash tables, where seq-in/seq-out is fine, and the results of the "chain(3)[0]" and "chain(3)[1]" calls are so close on the 0-255 range that they map to the same int value.

Plus, of course, the results are anything but "random-looking", even for non-sequential strings of d3.scale.category20() range.

Lesson learned - know what you're dealing with, be super-careful rolling your own math from primitives you don't really understand, stop and think about wth you're doing for a second - don't just rely on "intuition" (associated with e.g. the word "hash").

Now I totally get how people start with AES and SHA1 funcs, mix them into their own crypto protocol and somehow get something analogous to ROT13 (or even double-ROT13, for extra hilarity) as a result.

Aug 08, 2013

Encrypted root on a remote vds

Most advice wrt encryption on remote hosts (VPS, VDS) doesn't seem to involve full-disk encryption as such, but is rather limited to encrypting /var and /home, so that the machine will boot from a non-crypted / and you'll be able to ssh to it, decrypt these parts manually, then start the services that use data there.

That seems to be in contrast with what's generally used on local machines - make a LUKS container right on top of the physical disk device, except for /boot (if it's not on a USB key), and don't let that encryption layer bother you anymore.

The two policies seem to differ in that the former one is opt-in - you have to actively think about which data to put onto the encrypted part (e.g. /etc/ssl has private keys? move to /var, shred from /etc), while the latter is opt-out - everything is encrypted, period.

So, in the spirit of that opt-out way, I thought it'd be a drag to double-think about which data should be stored where, and that it'd be better to just go ahead and put everything possible into an encrypted container for a remote host as well, leaving only /boot with the kernel and initramfs in the clear.

Naturally, to enter the encryption password without having it stored alongside the LUKS header, some remote login from the network is in order, and sshd seems to be the secure and easy way to go about it.
The initramfs in question should then also be able to set up the network, which luckily dracut can do. OpenSSH's sshd is a bit too heavy for it though, but there are much lighter sshds, like dropbear.

Searching around for someone to tie these two things up, found a few incomplete and non-packaged solutions, like this RH enhancement proposal and a set of hacky scripts and instructions in the dracut-crypt-wait repo on bitbucket.

The approach outlined in RH bugzilla is to have the dracut "crypt" module operate normally and let cryptsetup query for the password on the linux console, but also start sshd in the background, where one can log in and use a simple tool to echo the password to that console (without having it echoed).
dracut-crypt-wait does a clever hack of removing the "crypt" module hook instead, basically creating a "rescue" console on sshd, where the user has to manually do all the decryption necessary and then signal initramfs to proceed with the boot.

I thought the first way was rather more elegant and clever - letting dracut figure out which device to decrypt and start cryptsetup with all the necessary, configured and documented parameters, while still allowing to type the passphrase into the console - best of both worlds - so went along with that one, creating the dracut-crypt-sshd project.

As the README there explains, using it is as easy as adding it into dracut.conf (or passing it to dracut on the command line) and adding networking to grub.cfg, e.g.:

menuentry "My Linux" {
        linux /vmlinuz ro root=LABEL=root
                rd.luks.uuid=7a476ea0 rd.lvm.vg=lvmcrypt rd.neednet=1
                ip=88.195.61.177::88.195.61.161:255.255.255.224:myhost:enp0s9:off
        initrd /dracut.xz
}

("ip=dhcp" might be simplier way to go, but doesn't yield default route in my case)

And there you'll have sshd on that IP, port 2222 (configurable), with pre-generated (during dracut build) keys, which it might be a good idea to put into "known_hosts" for that ip/port somewhere. "authorized_keys" is taken from /root/.ssh by default, but is also configurable via dracut.conf, if necessary.
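
E.g. to log in there while keeping these build-specific keys in a separate known_hosts file (IP from the grub.cfg example above, file path made up):

% ssh -p2222 -o UserKnownHostsFile=~/.ssh/known_hosts.initramfs root@88.195.61.177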

Apart from sshd, that module includes two tools for interaction with the console - console_peek and console_auth (derived from auth.c in the bugzilla link above).

Logging in to that sshd then yields a sequence like this:

[214] Aug 08 13:29:54 lastlog_perform_login: Couldn't stat /var/log/lastlog: No such file or directory
[214] Aug 08 13:29:54 lastlog_openseek: /var/log/lastlog is not a file or directory!

# console_peek
[    1.711778] Write protecting the kernel text: 4208k
[    1.711875] Write protecting the kernel read-only data: 1116k
[    1.735488] dracut: dracut-031
[    1.756132] systemd-udevd[137]: starting version 206
[    1.760022] tsc: Refined TSC clocksource calibration: 2199.749 MHz
[    1.760109] Switching to clocksource tsc
[    1.809905] systemd-udevd[145]: renamed network interface eth0 to enp0s9
[    1.974202] 8139too 0000:00:09.0 enp0s9: link up, 100Mbps, full-duplex, lpa 0x45E1
[    1.983151] dracut: sshd port: 2222
[    1.983254] dracut: sshd key fingerprint: 2048 0e:14:...:36:f9  root@congo (RSA)
[    1.983392] dracut: sshd key bubblebabble: 2048 xikak-...-poxix  root@congo (RSA)
[185] Aug 08 13:29:29 Failed reading '-', disabling DSS
[186] Aug 08 13:29:29 Running in background
[    2.093869] dracut: luksOpen /dev/sda3 luks-...
Enter passphrase for /dev/sda3:
[213] Aug 08 13:29:50 Child connection from 188.226.62.174:46309
[213] Aug 08 13:29:54 Pubkey auth succeeded for 'root' with key md5 0b:97:bb:...

# console_auth
Passphrase:

#
First command - "console_peek" - allows to see which password is requested (if any) and second one allows to login.
Note that fingerprints of host keys are also echoed to console on sshd start, in case one has access to console but still needs sshd later.
I quickly found out that such initramfs with sshd is also a great and robust rescue tool, especially if "debug" and/or "rescue" dracut modules are enabled.
And as it includes fairly comprehensive network-setup options, might be a good way to boot multiple different OS'es with same (machine-specific) network parameters,

The probably-obligatory disclaimer for such a post should mention that the crypto above won't save you from a malicious hoster or whatever three-letter agency that might coerce it into cooperation, should it take interest in your poor machine - they'll just extract keys from a RAM image (especially easy if it's a virtualized VPS) or backdoor the kernel/initramfs and force a reboot.

The threat model here is more trivial - being able to turn off and decommission a host without fear of disks/images then falling into some other party's hands, which might also happen if the hoster eventually goes bust or sells/scraps disks due to age or bad blocks.

Also, even a minor inconvenience like forcing the extraction of keys as outlined above might be helpful against the quite well-known "we came fishing to a datacenter, shut everything down, give us all the hardware in these racks" tactic employed by some agencies.

Absolute security is a myth, but these measures are trivial and practical enough to be employed casually, cutting off at least some basic threats.

So, yay for dracut, the amazingly cool and hackable initramfs project, which made it that easy.

Code link: https://github.com/mk-fg/dracut-crypt-sshd

Apr 29, 2013

Recent fixes to great tools - 0bin and Convergence

I've tried both of these in the past, but didn't have the attention budget to make them really work for me - which I finally found now, so wanted to also give crawlers a few more keywords on these nice things.

0bin - leak-proof pastebin

I pastebin a lot of stuff all the time - basically everything multiline - because all my IM happens in ERC over IRC (with bitlbee linking xmpp and all the proprietary crap like icq, skype and twitter), and IRC doesn't handle multiline messages at all.

All sorts of important stuff ends up there - some internal credentials, contacts, non-public code, bugs, private chat logs, etc - so I always winced a bit when pasting something, in fear that google might index/data-mine it and preserve it forever, and figured it'll bite me eventually, somewhat like this:

massive security fail

An easy and acceptable solution is to use simple client-side crypto, with the link having the decryption key after the hashmark, which never gets sent to the pastebin server and doesn't provide crawlers with any useful data. ZeroBin does that.

But the original ZeroBin is php, which I don't really want to touch, and has its share of problems - from the lack of a command-line client (for e.g. grep stuff log | zerobinpaste) to overly-long urls and a flaky, overloaded interface.

Luckily, there's a more hackable python version of it - 0bin - for which I hacked together a simple zerobinpaste tool, then simplified the interface to the bare minimum, updated it to use shorter urls (#41, #42) and put it on my host - the result is paste.fraggod.net, my own nice robot-proof pastebin.

URLs there aren't any longer than with regular pastebins:

http://paste.fraggod.net/paste/pLmEb0BI#Verfn+7o

Plus the links there expire reliably, and it's easy to force this expiration, having control over the app backend.

The local fork should have all the not-yet-merged stuff, as well as the non-upstreamable, simpler white-bootstrap theme.

Convergence - better PKI for TLS keys

Can't really recommend this video highly enough to anyone with even the slightest bit of interest in security, web or SSL/TLS protocols.

There are lots of issues beyond just key distribution and authentication, but I'd dare anyone to watch that rundown of issues - just as of 2011 - and remain convinced that the PKI there is fine or even good enough.
Even the fairly simple Convergence tool implementation is a vast improvement, giving a lot of control to make informed decisions about who to trust on the net.

I've used the plugin in the past, but eventually it broke and I just disabled it until the better times when it'd be fixed - but Moxie seems to have moved on to other tasks, and the project never got the developer attention it deserved.

So finally got around to fixing the fairly massive list of issues around it myself.

Bugs around the newer firefox plugin were the priority - one was a compatibility thing from PR #170, another was endless hanging on all requests to notaries (PR #173), plus more minor issues with adding notaries, interfaces, and just plain bugs that were always there.

Then there was one shortcoming of the existing perspective-only verification mechanism that bugged me - it didn't utilize the existing (flawed) CA lists at all, to decide whether a random site's cert is signed by at least some crappy CA or is completely homegrown (and thus doesn't belong on e.g. "paypal.com").
Not the deciding factor by any means, but it allows making a much more informed decision than just perspectives for e.g. a phishing site with a typo in the URL.

So was able to utilize (and extend a bit) the best part of Convergence - the agility of its trust decision-making - by hacking together a verifier (which can easily be run on desktop localhost) that queries the existing CA lists.

Enabling Convergence with that doesn't even force giving up the old model - it just adds perspective checks on top, giving a clear picture of which checks have failed on any inconsistencies.

Other server-side fixes include a nice argparse interface, configuration file support, loading of verifiers from setuptools/distribute entry points (so they can be installed separately with any python package), hackish TLS SNI support (Moxie actually filed twisted-5374 about a more proper fix), sane logging, ...

Filed only a few PRs for the show-stopper client bugs, but it looks like the upstream repo is simply dead - pity ;(

But all this stuff should be available in my fork in the meantime.
Top-level README there should provide a more complete list of links and changes.
Hopefully, upstream development will be picked up at some point, or maybe shift to some next incarnation of the idea - CrossBear seems to potentially be one.
Until then, at least was able to salvage this one, and hacking a ctypes-heavy ff extension implementing a SOCKS MitM proxy was quite a rewarding experience all by itself... certainly broadens horizons on just how damn accessible and simple it is to implement such seemingly-complex protocol wrappers.

Plan to also add a few other internet-observatory plugins (like OONI, CrossBear crawls, EFF Observatory, etc) there in the near future, plus some other things listed in the README here.
