Feb 13, 2017

Xorg input driver - the easy way, via evdev and uinput

Got to reading short stories in Column Reader from laptop screen before sleep recently, and for extra lazy points, don't want to drag my hand to the keyboard to flip pages (or columns, as the case might be).

Easy fix - get any input device and bind stuff there to keys you'd normally use.
As it happens, had Xbox 360 controller around for that.

Hard part is figuring out how to properly do it all in Xorg - need to build xf86-input-joystick first (somehow not in Arch core), then figure out how to make it act like a dumb event source, not some mouse emulator, and then stuff like xev and xbindkeys will probably help.

This is way more complicated than it needs to be, and gets even more so when you factor in all the Xorg driver quirks, xev's somewhat cryptic nature (modifier maps, keysyms, etc), the fact that xbindkeys can't actually do "press key" actions (have to use stuff like xdotool for that), etc.

All the while reading these events from linux itself is as trivial as evtest /dev/input/event11 (or for event in dev.read_loop(): ...) and sending them back is just ui.write(e.EV_KEY, e.BTN_RIGHT, 1) via uinput device.

Hence the whole binding thing can be done by a tiny python loop that'd read events from whatever specified evdev device and write corresponding (desired) keys to uinput.
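
For illustration, a minimal sketch of such a loop, using python-evdev (device path and the one binding here are just examples):

from evdev import InputDevice, UInput, ecodes as e

dev = InputDevice('/dev/input/event11')
with UInput() as ui: # default UInput() can emit any EV_KEY events
  for event in dev.read_loop():
    # Example binding: TL bumper press -> tap of the "right" cursor key
    if (event.type, event.code, event.value) == (e.EV_KEY, e.BTN_TL, 1):
      ui.write(e.EV_KEY, e.KEY_RIGHT, 1) # key press
      ui.write(e.EV_KEY, e.KEY_RIGHT, 0) # key release
      ui.syn()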

So instead of +1 pre-naptime story, hacked together a script to do just that - evdev-to-xev (python3/asyncio) - which reads mappings from simple YAML and runs the loop.

For example, to bind right joystick's (on the same Xbox 360 controller) extreme positions to cursor keys, plus triggers, d-pad and bumper buttons there:

map:

  ## Right stick
  # Extreme positions are ~32_768
  ABS_RX <-30_000: left
  ABS_RX >30_000: right
  ABS_RY <-30_000: up
  ABS_RY >30_000: down

  ## Triggers
  # 0 (idle) to 255 (fully pressed)
  ABS_Z >200: left
  ABS_RZ >200: right

  ## D-pad
  ABS_HAT0Y -1: leftctrl leftshift equal
  ABS_HAT0Y 1: leftctrl minus
  ABS_HAT0X -1: pageup
  ABS_HAT0X 1: pagedown

  ## Bumpers
  BTN_TL 1: [h,e,l,l,o,space,w,o,r,l,d,enter]
  BTN_TR 1: right

timings:
  hold: 0.02
  delay: 0.02
  repeat: 0.5
Run with e.g.: evdev-to-xev -c xbox-scroller.yaml /dev/input/event11
(see also less /proc/bus/input/devices and evtest /dev/input/event11).
Running the thing with no config will print an example one with comments/descriptions.

Given how all iterations of X had to work with whatever input they had at the time, and not just on linux, even when evdev was already around, it's hard to blame it for having a bit of complexity on top of the way simpler input layer underneath.

In linux, aforementioned Xbox 360 gamepad is supported by the "xpad" module (so that you'd get an evdev node for it), and /dev/uinput for simulating arbitrary evdev stuff comes from the "uinput" module.

Script itself needs python3 and python-evdev, plus evtest can be useful.
No need for any extra Xorg drivers beyond standard evdev.

Most similar tool to this script seems to be actkbd, though afaict, one'd still need to run xdotool from it to simulate input.

Github link: evdev-to-xev script (in the usual mk-fg/fgtk scrap-heap)

Aug 31, 2016

Handy tool to wait for remote TCP port to open - TCP "ping"

Lack of some basic "wait for connection" tool in the linux toolkit has always annoyed me to no end.

root@alarm~:~# reboot
Shared connection to 10.0.1.75 closed.

% ssh root@10.0.1.75

...time passes, ssh doesn't do anything...

ssh: connect to host 10.0.1.75 port 22: No route to host

% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused

...[mashing Up/Enter] start it up already!...

% ssh root@10.0.1.75
ssh: connect to host 10.0.1.75 port 22: Connection refused
% ssh root@10.0.1.75

root@alarm~:~#

...finally!
Working a lot with ARM boards, can have this thing repeating a few dozen times a day.
Same happens on every power-up, after fiddling with sd cards, etc.

And usually know for a fact that I'll want to reconnect to the thing in question asap and continue what I was doing there, but trying my luck a few times with unresponsive or insta-failing ssh is rather counter-productive and just annoying.

Instead:

% tping 10.0.1.75 && ssh root@10.0.1.75
root@alarm~:~#

That's it, no ssh timing-out or not retrying fast enough, no "Connection refused" nonsense.

tping (code link; name is ping + fping + tcp ping) is a trivial ad-hoc script that opens a new TCP connection to the specified host/port every second (the default for -r/--retry-delay), polls pending connections for success/error/timeout (configurable) in-between, and exits as soon as the first connection succeeds - which in the example above means that sshd is now ready for sure.

Doesn't need extra privileges like icmp pingers do, simple no-deps python3 script.
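
The gist of it fits in a few lines btw - a rough single-connection sketch of the same idea (the actual script juggles several pending connections and has all the timeout/retry options):

import socket, sys, time

host, port = sys.argv[1], int(sys.argv[2]) if len(sys.argv) > 2 else 22
while True:
  try:
    # Only succeeds when remote end actually accepts the connection
    socket.create_connection((host, port), timeout=3).close()
    break
  except OSError:
    time.sleep(1) # default -r/--retry-delay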

Used fping as fping -qr20 10.0.1.75 && ssh root@10.0.1.75 before finally taking time to write that thing, but it does what it says on the tin - icmp ping - and usually results in a "Connection refused" error from ssh, as there's a gap between the network and sshd starting.

One of these "why the hell it's not in coreutils or util-linux" tools for me now.

Mar 03, 2016

Python 3 killer feature - asyncio

I've been really conservative with the whole py2 -> py3 migration (shiny new langs don't seem to be my thing), but one feature that finally makes it worth the effort is well-integrated - by now (Python-3.5 with its "async" and "await" statements) - asyncio eventloop framework.

Basically, it's a twisted core, including eventloop hooked into standard socket/stream ops, sane futures implementation, all the Transports/Protocols/Tasks base classes and such concepts, standardized right there in Python's stdlib.
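
E.g. a py3.5 sketch of the tcp-ping thing from the entry above, with no deps outside stdlib:

import asyncio

async def tcp_ping(host, port):
  # Connection ops get scheduled through the stdlib eventloop
  reader, writer = await asyncio.open_connection(host, port)
  writer.close()
  return 'connected to {}:{}'.format(host, port)

loop = asyncio.get_event_loop()
print(loop.run_until_complete(tcp_ping('example.com', 80)))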

On one hand, baking this stuff into language core seems somewhat backwards, but I think it's actually a really smart thing to do - not only does it eliminate the whole "tech zoo" problem the nodejs ecosystem has, but it also gets rid of the "require huge twisted blob or write my own half-assed eventloop base" that pops up in every second script, even the most trivial ones.

Makes it worth starting any py script with py3 shebang for me, at last \o/

Dec 29, 2015

Tool to interleave and colorize lines from multiple log (or any other) files

There's the multitail thing to tail multiple logs, potentially interleaved, in one curses window, which is painful-to-impossible to browse through the way you would with simple "less".

There's lnav for parsing and normalizing a bunch of logs, and continuously monitoring these, also interactive.

There's rainbow to color specific lines based on regexp, which can't really do any interleaving.

And this has been bugging me for a while - there seem to be no easy way to get this:

[image: interleaved and colorized output from several log files]

This is an interleaved output from several timestamped log files, for events happening at nearly the same time (which can be used to establish the sequence between these and correlate output of multiple tools/instances/etc), browsable via the usual "less" (or whatever other $PAGER) in an xterm window.

In this case, logfiles are from "btmon" (bluetooth sniffer tool), "bluetoothd" (bluez) debug output and an output from gdb attached to that bluetoothd pid (showing stuff described in previous entry about gdb).

Output from these tools doesn't have timestamps by default, but that's easy to fix by piping it through any tool that adds them to every line - svlogd, for example.

To be concrete (and to show one important thing about such log-from-output approach), here's how I got these particular logs:

# mkdir -p debug_logs/{gdb,bluetoothd,btmon}

# gdb -ex 'source gdb_device_c_ftrace.txt' -ex q --args\
        /usr/lib/bluetooth/bluetoothd --nodetach --debug\
        1> >(svlogd -r _ -ttt debug_logs/gdb)\
        2> >(svlogd -r _ -ttt debug_logs/bluetoothd)

# stdbuf -oL btmon |\
        svlogd -r _ -ttt debug_logs/btmon

Note that "btmon" runs via the coreutils stdbuf tool. That can be critical for anything writing to stdout via libc's fwrite(), which can have block buffering enabled there (as opposed to the line buffering used for terminals), causing output to appear delayed and in batches, and hence with incorrect timestamps - unless stdbuf or some other option disabling such buffering is used.

With three separate logs from above snippet, natural thing you'd want is to see these all at the same time, so for each logical "event", there'd be output from btmon (network packet), bluetoothd (debug logging output) and gdb's function call traces.

It's easy to concatenate all three logs and sort them to get these interleaved, but then it can be visually hard to tell which line belongs to which file, especially if they are from several instances of the same app (not really the case here though).

Simple fix is to add a distinct per-file color to each line of each log, but then you can't sort the result anymore, as color sequences get in the way - lines have to be sorted first and colorized after - and it all adds up to a script.
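
A minimal sketch of that idea - sort lines from all files by the first (timestamp) field, and only colorize afterwards, so that escape codes don't affect ordering:

import sys

colors = [31, 32, 33, 34, 35, 36] # distinct ANSI fg color per file
lines = list()
for n, p in enumerate(sys.argv[1:]):
  with open(p) as src:
    for line in src:
      if not line.strip(): continue # skip blank lines
      # First whitespace-separated field (timestamp) is the sort key
      lines.append((line.split(None, 1)[0], colors[n % len(colors)], line))
for ts, c, line in sorted(lines):
  sys.stdout.write('\x1b[{}m{}\x1b[0m\n'.format(c, line.rstrip('\n')))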

Seems hard to find any existing tools for the job, so wrote a script to do it - liac (in the usual mk-fg/fgtk github repo), which was used to produce the output in the image above - that is, interleave lines (using any field for sorting, btw), add tags for distinct ANSI colors to ones belonging to different files, plus optional prefixes.

Thought it might be useful to leave a note for anyone looking for something similar.

[script source link]

Dec 08, 2015

GHG - simpler GnuPG (gpg) replacement for file encryption

Have been using gpg for many years now, many times a day, as I keep a lot of stuff in .gpg files, but still can't seem to get used to its quirky interface and practices.

Most notably, its "trust" thing, keyrings and arcane key editing, expiration dates, gpg-agent interaction and encrypted keys are all sources of dread and stress for me.

Last drop, following the tradition of many disastrous interactions with the tool, was me losing my master signing key password, despite it being written down on paper and having worked before. #fail ;(

Certainly my fault, but as I'll be replacing the damn key anyway, why not throw out the rest of that incomprehensible tangle of pointless and counter-productive practices and features I never use?

Took ~6 hours to write a replacement ghg tool - same thing as gpg, except with simple and sane key management (which doesn't assume you typing anything in, ever!!!), none of that web-of-trust or signing crap, good (and non-swappable) djb crypto, and only for file encryption.

Does everything I've used gpg for from the command-line, and has one flat file for all the keys, so no more hassle with --edit-key nonsense.

Highly suggest to anyone who ever had trouble and frustration with gpg to check ghg project out or write their own (trivial!) tool, and ditch the old thing - life's too short to deal with that constant headache.

Dec 07, 2015

Resizing first FAT32 partition to microSD card size on boot from Raspberry Pi

One other thing I've needed to do recently is to have Raspberry Pi OS resize its /boot FAT32 partition to full card size (i.e. "make it as large as possible") from right underneath itself.

RPis usually have first FAT (fat16 / fat32 / vfat) partition needed by firmware to load config.txt and uboot stuff off, and that is the only partition one can see in Windows OS when plugging microSD card into card-reader (which is a kinda arbitrary OS limitation).

Map of the usual /dev/mmcblk0 on RPi (as seen in parted):

Number  Start   End     Size    Type     File system  Flags
        32.3kB  1049kB  1016kB           Free Space
 1      1049kB  106MB   105MB   primary  fat16        lba
 2      106MB   1887MB  1782MB  primary  ext4

Resizing that first partition is naturally difficult, as it is followed by the ext4 one with RPi's OS, but when you want a small (e.g. <2G) and easy-to-write "rpi.img" file for any microSD card, there doesn't seem to be a way around that - the image has to have initial partitions as small as possible to fit on any card.

Things get even more complicated by the fact that there don't seem to be any tools around for resizing FAT fs'es, so it has to be re-created from scratch.

There is quite an easy way around all these issues however, which can be summed-up as a sequence of the following steps:

  • Start while rootfs is mounted read-only or when it can be remounted as such, i.e. on early boot.

    Before=systemd-remount-fs.service local-fs-pre.target in systemd terms.

  • Grab sfdisk/parted map of the microSD and check if there's "Free Space" chunk left after last (ext4/os) partition.

    If there is, there's likely a lot of it, as SD cards increase in 2x size factors, so 4G image written on larger card will have 4+ gigs there, in fact a lot more for 16G or 32G cards.

    Or there can be only a few megs there, in case of matching card size, where it's usually a good idea to make slightly smaller images, as actual cards do vary in size a bit.

  • "dd" whole rootfs to the end of the microSD card.

    This is safe with read-only rootfs, and the dumb "dd" approach to copying it (as opposed to dmsetup + mkfs + cp) seems to be the simplest and least error-prone.

  • Update partition table to have rootfs in the new location (at the very end of the card) and boot partition covering rest of the space.

  • Initiate reboot, so that OS will load from the new rootfs location.

  • Starting on early-boot again, remount rootfs rw if necessary, temporarily copy all contents of boot partition (which should still be small) to rootfs.

  • Run mkfs.vfat on the new large boot partition and copy stuff back to it from rootfs.

  • Reboot once again, in case whatever boot timeouts got triggered.

  • Avoid running same thing on all subsequent boots.

    E.g. touch /etc/boot-resize-done and have ConditionPathExists=!/etc/boot-resize-done in the systemd unit file.

That should do it \o/

resize-rpi-fat32-for-card (in fgtk repo) is a script I wrote to do all of this stuff, exactly as described.

systemd unit file for the thing (can also be printed by running script with "--print-systemd-unit" option):

[Unit]
DefaultDependencies=no
After=systemd-fsck-root.service
Before=systemd-remount-fs.service -.mount local-fs-pre.target local-fs.target
ConditionPathExists=!/etc/boot-resize-done

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/resize-rpi-fat32-for-card

[Install]
WantedBy=local-fs.target

It does use lsblk -Jnb JSON output to get the rootfs device and partition, and to check whether it's mounted read-only, then parted -ms /dev/... unit B print free to grab a machine-readable map of the device.
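
The lsblk part boils down to something like this (a sketch - exact column list here is my guess, not the script's invocation):

import json, subprocess

out = subprocess.check_output(['lsblk', '-Jnb', '-o', 'NAME,MOUNTPOINT,RO'])
for dev in json.loads(out.decode())['blockdevices']:
  for part in dev.get('children', list()):
    if part.get('mountpoint') == '/':
      # "ro" flag tells whether remount-ro step can be skipped
      print('rootfs: /dev/{} [ro={}]'.format(part['name'], part['ro']))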

sfdisk -J (also JSON output) could've been a better option than parted (an extra dep, which is only used to get that one map), except it doesn't conveniently list "free space" blocks and device size, pity.

If partition table doesn't have extra free space at the end, "fsstat" tool from sleuthkit is used to check whether FAT filesystem covers whole partition and needs to be resized.

After that, and only if needed, either "dd + sfdisk" or "cp + mkfs.vfat + cp back" sequence gets executed, followed by a reboot command.

Extra options for the thing:

  • "--skip-ro-check" - don't bother checking/forcing read-only rootfs before the "dd" step, which should be fine, if there's no activity there (e.g. early boot).

  • "--done-touch-file" - allows specifying the location of a file to create (if missing) when "no resize needed" state gets reached.

    Script doesn't check whether this file exists and always does proper checks of partition table and "fsstat" when deciding whether something has to be done, only creates the file at the end (if it doesn't exist already).

  • "--overlay-image" uses splash.go tool that I've mentioned earlier (be sure to compile it first, ofc) to set some "don't panic, fs resize in progress" image (resized/centered and/or with text and background) during the whole process, using RPi's OpenVG GPU API, covering whatever console output.

  • Misc other stuff for setup/debug - "--print-systemd-unit", "--debug", "--reboot-delay".

    Easy way to debug the thing with these might be to add StandardOutput=tty to systemd unit's Service section and ... --debug --reboot-delay 60 options there, or possibly adding extra ExecStart=/bin/sleep 60 after the script (and changing its ExecStart= to ExecStart=-, so delay will still happen on errors).

    This should provide all the info on what's happening in the script (has plenty of debug output) to the console (one on display or UART).

One more link to the script: resize-rpi-fat32-for-card

Sep 04, 2015

Parsing OpenSSH Ed25519 keys for fun and profit

While adding key derivation from OpenSSH keys to git-nerps, needed to get the actual "secret" or something deterministically (and in an obvious, stable way) derived from it (to then feed into some pbkdf2 and get the symmetric key).

Idea is for lightweight ad-hoc vms/containers to have a single "master secret", from which all others (e.g. one for git-nerps' encryption) can be easily derived or decrypted, and the omnipresent, secure, useful and easy-to-generate ssh key in ~/.ssh/id_ed25519 seems to be the best candidate.

Unfortunately, the standard set of ssh tools from openssh doesn't seem to have anything that can get key material or its hash - the next best thing is to get a "fingerprint" or such, but these are derived from public keys, so not what I wanted at all (as anyone can derive that, having the public key, which isn't secret).

And I didn't want to hash full openssh key blob, because stuff there isn't guaranteed to stay the same when/if you encrypt/decrypt it or do whatever ssh-keygen does.

What definitely stays the same is the values that openssh plugs into crypto algos, so wrote a full parser for the key format (as specified in PROTOCOL.key file in openssh sources) to get that.

While doing so, stumbled upon a fairly obvious and interesting application for such a parser - getting a really short and easy to backup, read or transcribe string which is the actual secret for Ed25519.

I.e. that's what OpenSSH private key looks like:

-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yAAAAJi1Bt0atQbd
GgAAAAtzc2gtZWQyNTUxOQAAACDaKUyc/3dnDL+FS4/32JFsF88oQoYb2lU0QYtLgOx+yA
AAAEAc5IRaYYm2Ss4E65MYY4VewwiwyqWdBNYAZxEhZe9GpNopTJz/d2cMv4VLj/fYkWwX
zyhChhvaVTRBi0uA7H7IAAAAE2ZyYWdnb2RAbWFsZWRpY3Rpb24BAg==
-----END OPENSSH PRIVATE KEY-----

And here's the only useful info in there, enough to restore whole blob above from, in the same base64 encoding:

HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=

Latter, of course, being way more suitable for the tried-and-true "write on a sticker and glue to the desk" approach. Or one can just have a file with one host key per line - also cool.

That's the 32-byte "seed" value, which can be used to derive the "ed25519_sk" field ("seed || pubkey") in that openssh blob, and all other fields are either "none", "ssh-ed25519", "magic numbers" baked into the format, or just padding.
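
For example, recovering that "seed || pubkey" field from the seed string above - a sketch using PyNaCl here instead of the bundled ed25519.py:

import base64
from nacl.signing import SigningKey

seed = base64.b64decode('HOSEWmGJtkrOBOuTGGOFXsMIsMqlnQTWAGcRIWXvRqQ=')
key = SigningKey(seed) # pubkey gets derived from seed deterministically
ed25519_sk = seed + key.verify_key.encode() # the "seed || pubkey" field
print(base64.b64encode(ed25519_sk).decode())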

So rolled the parser from git-nerps into its own tool - ssh-keyparse, which one can run and get that string above for key in ~/.ssh/id_ed25519, or do some simple crypto (as implemented by djb in ed25519.py, not me) to recover full key from the seed.

Of the output serialization formats that the tool supports, especially liked the idea of Douglas Crockford's Base32 - a human-readable one, where all confusing l-and-1 or O-and-0 chars are interchangeable, and there's an optional checksum (one letter) at the end:

% ssh-keyparse test-key --base32
3KJ8-8PK1-H6V4-NKG4-XE9H-GRW5-BV1G-HC6A-MPEG-9NG0-CW8J-2SFF-8TJ0-e

% ssh-keyparse test-key --base32-nodashes
3KJ88PK1H6V4NKG4XE9HGRW5BV1GHC6AMPEG9NG0CW8J2SFF8TJ0e

base64 (the default) is still probably the most efficient for non-binary backup (there's --raw otherwise) though.

[ssh-keyparse code link]

Sep 01, 2015

Transparent and easy encryption for files in git repositories

Have been installing things to OS containers (/var/lib/machines) lately, and looking for proper configuration management in these.

Large-scale container setups use some hard-to-integrate things like etcd, where you have to template configuration from values stored there, which is not very convenient and has a very low effort-to-results ratio (given the maintenance of that system itself) for a "10 service containers on 3 hosts" case.

Besides, such centralized value store is a bit backwards for one-container-per-service case, where most values in such "central db" are specific to one container, and it's much easier to edit end-result configs than db values, then templates, and then check how it all gets rendered on every trivial tweak.

Usual solution I have for these setups is simply putting all confs under git control, but leaving all the secrets (e.g. keys, passwords, auth data) out of the repo, in case it might be pulled to other hosts, by different people and for purposes which don't need these sensitive bits and might leak them (e.g. giving access to contracted app devs).

For more transient container setups, something should definitely keep track of these "secrets" however, as "rm -rf /var/lib/machines/..." is much more realistic possibility and has its uses.


So my (non-original) idea here was to have one "master key" per host - just one short string - with which to encrypt all secrets for that host, which can then be shared between hosts and specific people (making these public might still be a bad idea), if necessary.

This key should then be simply stored in whatever key-management repo, written on a sticker and glued to a display, or something.

Git can be (ab)used for such encryption, with its "filter" facilities, which are generally used for the opposite thing (normalization to one style), but are easy to adapt for this case too.

Git filters work by running "clear" operation on selected paths (can be a wildcard patterns like "*.c") every time git itself uses these and "smudge" when showing to user and checking them out to a local copy (where they are edited).

In case of encryption, "clean" would not be normalizing CR/LF in line endings, but rather wrapping contents (or parts of them) into a binary blob, "smudge" should do the opposite, and gitattributes patterns would match the files to be encrypted.
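
For reference, manually wired-up, such a filter looks like this ("encrypt-tool" is a placeholder command here - tools below usually manage these entries themselves):

# .gitattributes - which paths get processed by the filter
secrets/* filter=crypt

# .git/config - "clean" runs when git stores a file, "smudge" on checkout,
# both get file contents on stdin and write the result to stdout
[filter "crypt"]
  clean = encrypt-tool encrypt
  smudge = encrypt-tool decrypt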


Looking for projects that already do that, found quite a few, but still decided to write my own tool, because none seem to have all the things I wanted:

  • Use sane encryption.

    It's AES-CTR in the absolute best case, and AES-ECB (wtf!?) in some, and sometimes openssl is called with a "password" on the command line (trivial to spoof in /proc).

    OpenSSL itself is a red flag - hard to believe that someone who knows how bad its API and primitives are still uses it willingly, for non-TLS, at least.

    Expected to find at least one project using AEAD through NaCl or something, but no such luck.

  • Have tool manage gitattributes.

    You don't add a file to a git repo by typing /path/to/myfile managed=version-control some-other-flags into some config, so why should you do it here?

  • Be easy to deploy.

    Ideally it'd be a script, not some c++/autotools project to install build tools for or package to every setup.

    Though a bash script is maybe taking it a bit too far, given how messy bash gets for anything non-trivial that has to be secure and reliable in diff environments.

  • Have "configuration repository" as intended use-case.

So wrote git-nerps python script to address all these.

Crypto there is trivial yet solid PyNaCl stuff, marking files for encryption is as easy as git-nerps taint /what/ever/path, and bootstrapping the thing requires nothing more than python, git, PyNaCl (which are the norm in any of my setups) and git-nerps key-gen in the repo.

README for the project has info on every aspect of how the thing works and more on the ideas behind it.

I expect it'll have a few more use-case-specific features and convenience-wrapper commands once I get to use it in more realistic cases than it has now (initially).


[project link]

Jan 28, 2015

Sample code for using ST7032I I2C/SMBus driver in Midas LCD with python

There seems to be a surprising lack of python code on the net for this particular device, except for this nice pi-ras blog post, in japanese.
So, to give google some more food and a bit of commentary in english to that post - here goes.

I'm using Midas MCCOG21605C6W-SPTLYI 2x16 chars LCD panel, connected to 5V VDD and 3.3V BeagleBone Black I2C bus:

[image: simple digital clock on lcd]

Code for the above LCD clock "app" (python 2.7):

import smbus, time

class ST7032I(object):

  def __init__(self, addr, i2c_chan, **init_kws):
    self.addr, self.bus = addr, smbus.SMBus(i2c_chan)
    self.init(**init_kws)

  def _write(self, data, cmd=0, delay=None):
    self.bus.write_i2c_block_data(self.addr, cmd, list(data))
    if delay: time.sleep(delay)

  def init(self, contrast=0x10, icon=False, booster=False):
    assert contrast < 0x40 # 6 bits only, probably not used on most lcds
    pic_high = 0b0111 << 4 | (contrast & 0x0f) # c3 c2 c1 c0
    pic_low = ( 0b0101 << 4 |
      icon << 3 | booster << 2 | ((contrast >> 4) & 0x03) ) # c5 c4
    self._write([0x38, 0x39, 0x14, pic_high, pic_low, 0x6c], delay=0.01)
    self._write([0x0c, 0x01, 0x06], delay=0.01)

  def move(self, row=0, col=0):
    assert 0 <= row <= 1 and 0 <= col <= 15, [row, col]
    self._write([0b1000 << 4 | (0x40 * row + col)])

  def addstr(self, chars, pos=None):
    if pos is not None:
      row, col = (pos, 0) if isinstance(pos, int) else pos
      self.move(row, col)
    self._write(map(ord, chars), cmd=0x40)

  def clear(self):
    self._write([0x01])

if __name__ == '__main__':
  lcd = ST7032I(0x3e, 2)
  while True:
    ts_tuple = time.localtime()
    lcd.clear()
    lcd.addstr(time.strftime('date: %y-%m-%d', ts_tuple), 0)
    lcd.addstr(time.strftime('time: %H:%M:%S', ts_tuple), 1)
    time.sleep(1)
Note the constants in the "init" function - these are all from the "INITIALIZE(5V)" sequence on page-8 of the Midas LCD datasheet, setting up things like voltage follower circuit, OSC frequency, contrast (not used on my panel), modes and such.
Actual reference on what all these instructions do and how they're decoded can be found on page-20 there.

Even with the same exact display, but connected to 3.3V, these numbers should probably be a bit different - check the datasheet (e.g. page-7 there).

Also note the "addr" and "i2c_chan" values (0x3E and 2) - these should be taken from the board itself.

"i2c_chan" is the number of the device (X) in /dev/i2c-X, of which there seem to be usually more than one on ARM boards like RPi or BBB.
For instance, Beaglebone Black has three I2C buses, two of which are available on the expansion headers (with proper dtbs loaded).
See this post on Fortune Datko blog and/or this one on minix-i2c blog for one way to tell reliably which device in /dev corresponds to which hardware bus and pin numbers.

And the address is easy to get from the datasheet (the lcd I have uses only one static slave address), or to detect via i2cdetect -r -y <i2c_chan>, e.g.:

# i2cdetect -r -y 2
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- 3e --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- UU UU UU UU -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

Here I have DS1307 RTC on 0x68 and an LCD panel on 0x3E address (again, also specified in the datasheet).

Both "i2cdetect" command-line tool and python "smbus" module are part of i2c-tools project, which is developed under lm-sensors umbrella.
On Arch or source-based distros these all come with the "i2c-tools" package, but on e.g. debian, the python module seems to be split out into "python-smbus".

Plugging these bus number and the address for your particular hardware into the script above and maybe adjusting the values there for your lcd panel modes should make the clock show up and tick every second.

In general, upon seeing a tutorial on some random blog (like this one), please take it with a grain of salt, because it's highly likely that it was written by a fairly incompetent person (like me), since engineers who deal with these things every day don't see the above steps as any kind of accomplishment - it's a boring no-brainer routine for them, and they aren't likely to even think about it, much less write tutorials on it (all trivial and obvious, after all).

Nevertheless, hope this post might be useful to someone as a pointer on where to look to get such device started, if nothing else.

Jul 16, 2014

(yet another) Dynamic DNS thing for tinydns (djbdns)

Tried to find any simple script to update tinydns (part of djbdns) zones that'd be better than ssh dns_update@remote_host update.sh, but failed - they all seem to be hacky php scripts, doomed to run behind httpds, send passwords in url, query random "myip" hosts or something like that.

What I want instead is something that won't be making http, tls or ssh connections (and stirring all the crap behind these), but would rather just send udp or even icmp pings to remotes, which should be enough for update, given source IPs of these packets and some authentication payload.

So yep, wrote my own scripts for that - tinydns-dynamic-dns-updater project.

Tool sends UDP packets with 100 bytes of "( key_id || timestamp ) || Ed25519_sig" from clients, authenticating and distinguishing these server-side by their signing keys ("key_id" there is to avoid iterating over them all, checking which matches signature).
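
A client-side sketch of such a packet (using PyNaCl here; field sizes and exact wire layout of the real script differ):

import struct, time, socket
from nacl.signing import SigningKey

key = SigningKey.generate() # persistent and known to server in real use
msg = struct.pack('>IQ', 1, int(time.time())) # key_id || timestamp
pkt = msg + key.sign(msg).signature # ... || 64B Ed25519_sig
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(pkt, ('ns.example.com', 5533)) # hypothetical host/port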

Server zone files can have "# dynamic: ts key1 key2 ..." comments before records (separated from static records after these by comments or empty lines), which say that source IPs of packets with correct signatures (and more recent timestamps) will be recorded in the A/AAAA records (depending on source AF) that follow, instead of what's already there, leaving anything else in the file intact.

Zone file only gets replaced if something is actually updated, and it's possible to use a dynamic IP for the server as well, using a dynamic hostname on the client (which gets resolved for each delayed packet).

Lossy nature of UDP can be easily mitigated by passing e.g. "-n5" to the client script, so it'd send 5 packets (with exponential delays by default, configurable via --send-delay), plus just running the thing at fairly regular intervals from crontab.

Putting server script into socket-activated systemd service file also makes all daemon-specific pains like using privileged ports (and most other security/access things), startup/daemonization, restarts, auto-suspend timeout and logging woes just go away, so there's --systemd flag for that too.

Given how easy it is to run djbdns/tinydns instance, there really doesn't seem to be any compelling reason not to use your own dynamic dns stuff for every single machine or device that can run simple python scripts.

Github link: tinydns-dynamic-dns-updater
