Oct 11, 2017

Force-enable HDMI to specific mode in linux framebuffer console

Bumped into this issue when running latest mainline kernel (4.13) on ODROID-C2 - default fb console for HDMI output has to be configured differently there (and also needs a dtb patch to hook it up).

Old vendor kernels for that have/use a bunch of cmdline options for HDMI - hdmimode, hdmitx (cecconfig), vout (hdmi/dvi/auto), overscan_*, etc - all custom and non-mainline.

With mainline DRM (as in Direct Rendering Manager) and framebuffer modules, the video= option seems to be the way to set/force specific output and resolution instead.

When a display is connected on boot, it can work without that if the stars align correctly, but that's not always the case as it turns out - only 1 out of 3 displays worked that way.

But even if the display works on boot, plugging HDMI in after boot never works anyway, and that's the most (only) useful thing for it (debug issues, see logs or kernel panic backtrace there, etc)!

DRM module (meson_dw_hdmi in case of C2) has its HDMI output info in /sys/class/drm/card0-HDMI-A-1/, which is where one can check on connected display, dump its EDID blob (info, supported modes), etc.
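For a quick check from a script, connector state and supported modes can be read right from those sysfs files - a minimal python sketch (the connector path is specific to this board, and the function returns placeholders when the path is absent):

```python
from pathlib import Path

def drm_connector_info(conn='/sys/class/drm/card0-HDMI-A-1'):
    'Return (status, mode-list) for a DRM connector sysfs dir, or placeholders if missing.'
    conn = Path(conn)
    if not conn.is_dir(): return 'unknown', []
    status = (conn / 'status').read_text().strip()
    modes = (conn / 'modes').read_text().split()
    return status, modes

status, modes = drm_connector_info()
print(status, modes)
```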

cmdline option to force this output to be used with specific (1080p60) mode:

video=HDMI-A-1:1920x1080@60e

More info on this spec is in Documentation/fb/modedb.txt, but the gist is "<output>:<WxH>@<rate><flags>", where the "e" flag at the end means "force the display to be enabled", to avoid all that hotplug jank.
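For illustration, here's a rough parser for this kind of mode string - handling only the simple <conn>:<W>x<H>@<rate><flags> subset described above, not the full modedb grammar:

```python
import re

def parse_video_opt(spec):
    'Parse a simplified video=<conn>:<W>x<H>@<rate><flags> string.'
    m = re.match( r'(?P<conn>[^:]+):(?P<w>\d+)x(?P<h>\d+)'
        r'(@(?P<rate>\d+))?(?P<flags>[eDdm]*)$', spec )
    if not m: raise ValueError('Unrecognized mode spec: %r' % spec)
    d = m.groupdict()
    return dict( conn=d['conn'], w=int(d['w']), h=int(d['h']),
        rate=int(d['rate']) if d['rate'] else None, force='e' in d['flags'] )

print(parse_video_opt('HDMI-A-1:1920x1080@60e'))
# {'conn': 'HDMI-A-1', 'w': 1920, 'h': 1080, 'rate': 60, 'force': True}
```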

That should set the mode for the console (see e.g. fbset --info), but at least with meson_dw_hdmi it's insufficient, which the driver is happy to explain when loaded with an extra drm.debug=0xf cmdline option - it doesn't have any supported modes, and returns MODE_BAD for all non-CEA-861 modes that are default in fb-modedb.

Such modelines are usually supplied from EDID blobs by the display, but if there isn't one connected, blob should be loaded from somewhere else (and iirc there are ways to define these via cmdline).
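As an aside, EDID blobs are easy to sanity-check - they start with a fixed 8-byte header magic - so a tiny python check (assuming the blob was dumped from sysfs or elsewhere) goes like:

```python
# Fixed EDID base-block header: 00 FF FF FF FF FF FF 00
EDID_MAGIC = bytes([0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00])

def looks_like_edid(blob):
    'Check EDID header magic and minimal base-block length (128 bytes).'
    return len(blob) >= 128 and blob[:8] == EDID_MAGIC

print(looks_like_edid(EDID_MAGIC + b'\0' * 120))  # True
```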

Luckily, kernel has built-in standard EDID blobs, so there's no need to put anything to /lib/firmware, initramfs or whatever:

drm_kms_helper.edid_firmware=edid/1920x1080.bin video=HDMI-A-1:1920x1080@60e

And that finally works.

Not very straightforward, and doesn't seem to be documented in one place anywhere with examples (ArchWiki page on KMS probably comes closest).

Feb 13, 2017

Xorg input driver - the easy way, via evdev and uinput

Got to reading short stories in Column Reader from laptop screen before sleep recently, and for extra-lazy points, don't want to drag my hand to the keyboard to flip pages (or columns, as the case might be).

Easy fix - get any input device and bind stuff there to keys you'd normally use.
As it happens, had Xbox 360 controller around for that.

Hard part is figuring out how to properly do it all in Xorg - need to build xf86-input-joystick first (somehow not in Arch core), then figure out how to make it act like a dumb event source, not some mouse emulator, and then stuff like xev and xbindkeys will probably help.

This is way more complicated than it needs to be, and gets even more so when you factor-in all the Xorg driver quirks, xev's somewhat cryptic nature (modifier maps, keysyms, etc), fact that xbindkeys can't actually do "press key" actions (have to use stuff like xdotool for that), etc.

All the while reading these events from linux itself is as trivial as evtest /dev/input/event11 (or for event in dev.read_loop(): ...) and sending them back is just ui.write(e.EV_KEY, e.BTN_RIGHT, 1) via uinput device.

Hence whole binding thing can be done by a tiny python loop that'd read events from whatever specified evdev and write corresponding (desired) keys to uinput.
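The core of such a loop is trivially simple - roughly this kind of matching, sketched here with stand-in tuples instead of the real python-evdev/uinput objects (names below are made up for illustration, not the actual script's API):

```python
from collections import namedtuple

Event = namedtuple('Event', 'code value')

# (event code, value predicate, key to emit) - mirrors the YAML mappings
mappings = [
    ('ABS_RX', lambda v: v < -30_000, 'left'),
    ('ABS_RX', lambda v: v > 30_000, 'right'),
    ('ABS_Z', lambda v: v > 200, 'left'),
    ('ABS_HAT0X', lambda v: v == 1, 'pagedown'),
]

def map_event(ev):
    'Return list of key names to send to uinput for one evdev event.'
    return [ key for code, check, key in mappings
        if code == ev.code and check(ev.value) ]

print(map_event(Event('ABS_RX', 31_000)))  # ['right']
print(map_event(Event('ABS_RX', 100)))  # []
```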

So instead of +1 pre-naptime story, hacked together a script to do just that - evdev-to-xev (python3/asyncio) - which reads mappings from simple YAML and runs the loop.

For example, to bind right joystick's (on the same XBox 360 controller) extreme positions to cursor keys, plus triggers, d-pad and bumper buttons there:


  ## Right stick
  # Extreme positions are ~32_768
  ABS_RX <-30_000: left
  ABS_RX >30_000: right
  ABS_RY <-30_000: up
  ABS_RY >30_000: down

  ## Triggers
  # 0 (idle) to 255 (fully pressed)
  ABS_Z >200: left
  ABS_RZ >200: right

  ## D-pad
  ABS_HAT0Y -1: leftctrl leftshift equal
  ABS_HAT0Y 1: leftctrl minus
  ABS_HAT0X -1: pageup
  ABS_HAT0X 1: pagedown

  ## Bumpers
  BTN_TL 1: [h,e,l,l,o,space,w,o,r,l,d,enter]
  BTN_TR 1: right

  hold: 0.02
  delay: 0.02
  repeat: 0.5
Run with e.g.: evdev-to-xev -c xbox-scroller.yaml /dev/input/event11
(see also less /proc/bus/input/devices and evtest /dev/input/event11).
Running the thing with no config will print example one with comments/descriptions.

Given how all iterations of X had to work with whatever input they had at the time, and not just on linux, even when evdev was around, it's hard to blame it for having a bit of complexity on top of a way simpler input layer underneath.

In linux, aforementioned Xbox 360 gamepad is supported by "xpad" module (so that you'd get evdev node for it), and /dev/uinput for simulating arbitrary evdev stuff is "uinput" module.

Script itself needs python3 and python-evdev, plus evtest can be useful.
No need for any extra Xorg drivers beyond standard evdev.

Most similar tool to this script seems to be actkbd, though afaict, one'd still need to run xdotool from it to simulate input :O=

Github link: evdev-to-xev script (in the usual mk-fg/fgtk scrap-heap)

May 15, 2016

Debounce bogus repeated mouse clicks in Xorg with xbindkeys

My current Razer E-Blue mouse has had this issue since I got it - Mouse-2 / BTN_MIDDLE / middle-click (useful mostly as "open new tab" in browsers and "paste" in X) sometimes produces two click events (in rapid succession) instead of one.

It was more rare before, but lately it feels like it's harder to make it click once than twice.

Seems to be either a hardware problem with debouncing circuitry or logic in the controller, or maybe the button itself not mashing switch contacts against each other hard enough... or soft enough (i.e. non-elastic), actually, given that they shouldn't "bounce" against each other.

Since there's no need to ever double-click that wheel-button, it looks rather easy to debounce the click on the Xorg input level, by ignoring repeated button up/down events after producing the first full "click".
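The debouncing logic itself boils down to a couple of timestamps - something like this minimal python sketch of the same idea (the actual fix below is a guile script for xbindkeys):

```python
import time

class Debounce:
    'Accept a click only if min delay passed since the last accepted one.'

    def __init__(self, delay_min=0.2):
        self.delay_min, self.ts_done = delay_min, None

    def click(self, ts=None):
        if ts is None: ts = time.monotonic()
        if self.ts_done is not None and ts - self.ts_done < self.delay_min:
            return False  # bogus repeat - swallow it
        self.ts_done = ts
        return True

db = Debounce()
print([db.click(10.0), db.click(10.05), db.click(10.5)])  # [True, False, True]
```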

Easiest solution of that kind that I've found was to use guile (scheme) script with xbindkeys tool to keep that click-state data and perform clicks selectively, using xdotool:

(define razer-delay-min 0.2)
(define razer-wait-max 0.5)
(define razer-ts-start #f)
(define razer-ts-done #f)
(define razer-debug #f)

(define (mono-time)
  "Return monotonic timestamp in seconds as real."
  (+ 0.0 (/ (get-internal-real-time) internal-time-units-per-second)))

(xbindkey-function '("b:8") (lambda ()
  (let ((ts (mono-time)))
    ;; Enforce min ts diff between "done" and "start" of the next one
    (when (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
      (set! razer-ts-start ts)))))

(xbindkey-function '(Release "b:8") (lambda ()
  (let ((ts (mono-time)))
    (when razer-debug
      (format #t "razer: ~a/~a delay=~a[~a] wait=~a[~a]\n"
        razer-ts-start razer-ts-done
        (and razer-ts-done (- ts razer-ts-done)) razer-delay-min
        (and razer-ts-start (- ts razer-ts-start)) razer-wait-max))
    (when (and
        ;; Enforce min ts diff between previous "done" and this one
        (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
        ;; Enforce max "click" wait time
        (and razer-ts-start (<= (- ts razer-ts-start) razer-wait-max)))
      (set! razer-ts-done ts)
      (when razer-debug (format #t "razer: --- click!\n"))
      (run-command "xdotool click 2")))))

Note that xbindkeys grabs "b:8" here, which is "mouse button 8", rather than "b:2" - if it grabbed "b:2", the "xdotool click 2" command would recurse into the same code - so the wheel-clicker should be bound to button 8 in X for this to work.

Rebinding buttons in X is trivial to do on-the-fly, using standard "xinput" tool - e.g. xinput set-button-map "My Mouse" 1 8 3 (xinitrc.input script can be used as an extended example).

Running "xdotool" to do actual clicks at the end seems a bit wasteful, as xbindkeys already hooks into similar functionality, but unfortunately there are no "send input event" calls exported to guile scripts (as of 1.8.6, at least).

Still, works well enough as it is, fixing that rather annoying issue.

[xbindkeysrc.scm on github]

Nov 25, 2015

Replacing built-in RTC with i2c battery-backed one on BeagleBone Black from boot

Update 2018-02-16:

There is a better and simpler solution for switching the default rtc than the one described here - swap rtc0/rtc1 via aliases in a dt overlay.

I.e. using something like this in the dts:

fragment@0 {
  __overlay__ {

    aliases {
      rtc0 = "/ocp/i2c@4802a000/ds3231@68";
      rtc1 = "/ocp/rtc@44e3e000";
    };
  };
};

Which is also how it's done in the bb.org-overlays repo these days.

BeagleBone Black (BBB) boards have - and use - an RTC (Real-Time Clock - device that tracks wall-clock time, including calendar date and time of day) in the SoC, which isn't battery-backed, so it loses track of time each time the device gets power-cycled.

This represents a problem if keeping track of time is necessary and there's no network access (or a patchy one) to sync this internal RTC when board boots up.

Easy solution to that, of course, is plugging in an external RTC device - plenty of cheap chips with various precision are available, most common being Dallas/Maxim ICs like DS1307 or DS3231 (a better one of the line) with an I2C interface, which are all supported by the Linux "ds1307" module.

Enabling connected chip at runtime can be easily done with a command like this:

echo ds1307 0x68 >/sys/bus/i2c/devices/i2c-2/new_device

(see this post on Fortune Datko blog and/or this one on minix-i2c blog for ways to tell reliably which i2c device in /dev corresponds to which bus and pin numbers on BBB headers, and how to check/detect/enumerate connected devices there)
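A quick way to enumerate i2c buses and their driver-reported names from a script - a small python sketch over sysfs (returns an empty mapping when the sysfs path isn't there):

```python
from pathlib import Path

def i2c_buses(sys_root='/sys/bus/i2c/devices'):
    'Map i2c-N bus ids to their kernel-reported names, if any are present.'
    buses = dict()
    root = Path(sys_root)
    if not root.is_dir(): return buses
    for dev in root.iterdir():
        if not dev.name.startswith('i2c-'): continue  # skip non-bus devices
        name = dev / 'name'
        buses[dev.name] = name.read_text().strip() if name.exists() else ''
    return buses

print(i2c_buses())
```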

This obviously doesn't enable device straight from the boot though, which is usually accomplished by adding the thing to Device Tree, and earlier with e.g. 3.18.x kernels it had to be done by patching and re-compiling platform dtb file used on boot.

But since 3.19.x kernels (and also before 3.9.x), the easier way seems to be Device Tree Overlays (usually "/lib/firmware/*.dtbo" files, compiled by "dtc" from dts files), which is kinda like patching the Device Tree, only done at runtime.

Code for such patch in my case ("i2c2-rtc-ds3231.dts"), with 0x68 address on i2c2 bus and "ds3231" kernel module (alias for "ds1307", but more appropriate for my chip):


/* dtc -O dtb -o /lib/firmware/BB-RTC-02-00A0.dtbo -b0 i2c2-rtc-ds3231.dts */
/* bone_capemgr.enable_partno=BB-RTC-02 */
/* https://github.com/beagleboard/bb.org-overlays */

/ {
  compatible = "ti,beaglebone", "ti,beaglebone-black", "ti,beaglebone-green";
  part-number = "BB-RTC-02";
  version = "00A0";

  fragment@0 {
    target = <&i2c2>;

    __overlay__ {
      pinctrl-names = "default";
      pinctrl-0 = <&i2c2_pins>;
      status = "okay";
      clock-frequency = <100000>;
      #address-cells = <0x1>;
      #size-cells = <0x0>;

      rtc: rtc@68 {
        compatible = "dallas,ds3231";
        reg = <0x68>;
      };
    };
  };
};

As per comment in the overlay file, can be compiled ("dtc" comes from same-name package on ArchLinuxARM) to the destination with:

dtc -O dtb -o /lib/firmware/BB-RTC-02-00A0.dtbo -b0 i2c2-rtc-ds3231.dts

Update 2017-09-19: This will always produce a warning for each "fragment" section, safe to ignore.

And then loaded on early boot (as soon as rootfs with "/lib/firmware" gets mounted) with a "bone_capemgr.enable_partno=" cmdline addition, which should be put into something like "/boot/uEnv.txt", for example (matching the part-number from the command above):

setenv bootargs "... bone_capemgr.enable_partno=BB-RTC-02"

Docs in bb.org-overlays repository have more details and examples on how to write and manage these.

Update 2017-09-19:

Modern ArchLinuxARM always uses an initramfs image, which is the starting rootfs where the kernel looks up stuff in /lib/firmware, so all *.dtbo files loaded via bone_capemgr.enable_partno= must be included there.

With Arch-default mkinitcpio, it's easy to do via FILES= in "/etc/mkinitcpio.conf" - e.g. FILES="/lib/firmware/BB-RTC-02-00A0.dtbo" - and re-running the usual mkinitcpio -g /boot/initramfs-linux.img command.

If using e.g. i2c-1 instead (with &bb_i2c1_pins), BB-I2C1 should also be included and loaded there, strictly before rtc overlay.

Update 2017-09-19:

bone_capemgr seems to be broken in linux-am33x packages past 4.9, and produces a kernel BUG instead of loading any overlays - be sure to try loading the dtbo at runtime (checking dmesg) before putting it into cmdline, as that might make the system unbootable.

Workaround is to downgrade kernel to 4.9, e.g. one from beagleboard/linux tree, where it's currently the latest supported release.

That should ensure that this second RTC appears as "/dev/rtc1" (rtc0 is the internal one) on system startup, but unfortunately it still won't be the first one, and the kernel will have already picked up time from internal rtc0 by the time this one gets detected.

Furthermore, systemd-enabled userspace (as in e.g. ArchLinuxARM) interacts with RTC via systemd-timedated and systemd-timesyncd, which both use the "/dev/rtc" symlink (and can't be configured to use other devs), which udev by default points to rtc0 as well, so rtc1 - no matter how early it appears - gets completely ignored there.

So the two issues are the "system clock" that the kernel keeps and the userspace daemons both using the wrong RTC, which is the default in both cases.

"/dev/rtc" symlink for userspace gets created by udev, according to "/usr/lib/udev/rules.d/50-udev-default.rules", and can be overridden by e.g. "/etc/udev/rules.d/55-i2c-rtc.rules":

SUBSYSTEM=="rtc", KERNEL=="rtc1", SYMLINK+="rtc", OPTIONS+="link_priority=10", TAG+="systemd"

This sets "link_priority" to 10 to override SYMLINK directive for same "rtc" dev node name from "50-udev-default.rules", which has link_priority=-100.

Also, TAG+="systemd" makes systemd track device with its "dev-rtc.device" unit (auto-generated, see systemd.device(5) for more info), which is useful to order userspace daemons depending on that symlink to start strictly after it's there.

"userspace daemons" in question on a basic Arch are systemd-timesyncd and systemd-timedated, of which only systemd-timesyncd starts early on boot, before all other services, including systemd-timedated, sysinit.target and time-sync.target (for early-boot clock-dependent services).

So basically if proper "/dev/rtc" and system clock gets initialized before systemd-timesyncd (or whatever replacement, like ntpd or chrony), correct time and rtc device will be used for all system daemons (which start later) from here on.

Adding that extra step can be done as a separate systemd unit (to avoid messing with shipped systemd-timesyncd.service), e.g. "i2c-rtc.service":

[Unit]
Wants=dev-rtc.device
After=dev-rtc.device
Before=systemd-timesyncd.service ntpd.service chrony.service

[Service]
Type=oneshot
DeviceAllow=/dev/rtc rw
ExecStart=/usr/bin/hwclock -f /dev/rtc --hctosys


Update 2017-09-19: -f /dev/rtc must be specified these days, as hwclock seems to use /dev/rtc0 by default - pretty sure it didn't use to.

Note that Before= above should include whatever time-sync daemon is used on the machine, and there's no harm in listing non-existent or unused units there, just in case.

Most security-related stuff and conditions are picked from systemd-timesyncd unit file, which needs roughly same access permissions as "hwclock" here.

With udev rule and that systemd service (don't forget to "systemctl enable" it), boot sequence goes like this:

  • Kernel inits internal rtc0 and sets system clock to 1970-01-01.
  • Kernel starts systemd.
  • systemd mounts local filesystems and starts i2c-rtc asap.
  • i2c-rtc, due to Wants/After=dev-rtc.device, starts waiting for /dev/rtc to appear.
  • Kernel detects/initializes ds1307 i2c device.
  • udev creates /dev/rtc symlink and tags it for systemd.
  • systemd detects tagging event and activates dev-rtc.device.
  • i2c-rtc starts, adjusting system clock to realistic value from battery-backed rtc.
  • systemd-timesyncd starts, using proper /dev/rtc and correct system clock value.
  • time-sync.target activates, as it is scheduled to, after systemd-timesyncd and i2c-rtc.
  • From there, boot goes on to sysinit.target, basic.target and starts all the daemons.

udev rule is what facilitates symlink and tagging, i2c-rtc.service unit is what makes boot sequence wait for that /dev/rtc to appear and adjusts system clock right after that.

Haven't found an up-to-date and end-to-end description with examples anywhere, so here it is. Cheers!

Mar 28, 2015

Bluetooth PAN Network Setup with BlueZ 5.X

It probably won't be a surprise to anyone that Bluetooth has profiles to carry regular network traffic, and BlueZ has support for these since forever, but setup process has changed quite a bit between 2.X - 4.X - 5.X BlueZ versions, so here's my summary of it with 5.29 (latest from fives atm).

First step is to get BlueZ itself, and make sure it's 5.X (at least for the purposes of this post), as some enterprisey distros probably still have 4.X if not earlier series, which, again, have different tools and interfaces.
Should be easy enough to do with bluetoothctl --version, and if BlueZ doesn't have "bluetoothctl", it's definitely some earlier one.

Hardware in my case was two linux machines with similar USB dongles, one of them was RPi (wanted to setup wireless link for that), but that shouldn't really matter.

Aforementioned bluetoothctl makes it easy to pair both dongles interactively from the cli (and with nice colors!) - that's what should be done first, and it serves as authentication, as far as I can tell.

--- machine-1
% bluetoothctl
[NEW] Controller 00:02:72:XX:XX:XX malediction [default]
[bluetooth]# power on
Changing power on succeeded
[CHG] Controller 00:02:72:XX:XX:XX Powered: yes
[bluetooth]# discoverable on
Changing discoverable on succeeded
[CHG] Controller 00:02:72:XX:XX:XX Discoverable: yes
[bluetooth]# agent on

--- machine-2 (snipped)
% bluetoothctl
[NEW] Controller 00:02:72:YY:YY:YY rpbox [default]
[bluetooth]# power on
[bluetooth]# scan on
[bluetooth]# agent on
[bluetooth]# pair 00:02:72:XX:XX:XX
[bluetooth]# trust 00:02:72:XX:XX:XX

Not sure if the "trust" bit is really necessary, or what it does - probably allows setting up agent-less connections or something.

As I needed to connect small ARM board to what amounts to an access point, "NAP" was the BT-PAN mode of choice for me, but there are also "ad-hoc" modes in BT like PANU and GN, which seem to only need a different topology (who connects to who) and pass different UUID parameter to BlueZ over dbus.

For setting up a PAN network with BlueZ 5.X, essentially just two dbus calls are needed (described in "doc/network-api.txt"), and basic cli tools to do these are bundled in "test/" dir with BlueZ sources.

Given that these aren't very suited for day-to-day use (hence the "test" dir) and are fairly trivial, did rewrite them as a single script, more suited for my purposes - bt-pan.

Update 2015-12-10: There's also the "bneptest" tool, which comes as a part of e.g. the "bluez-utils" package on Arch, and seems to do the same thing with its "-s" and "-c" options - just hadn't found it at the time (or maybe it's a more recent addition).

General idea is that one side (access point in NAP topology) sets up a bridge and calls "org.bluez.NetworkServer1.Register()", while the other ("client") does "org.bluez.Network1.Connect()".

On the server side (which also applies to GN mode, I think), bluetoothd expects a bridge interface to be set up and configured, to which it adds individual "bnepX" interfaces created for each client by itself.

Such bridge gets created with "brctl" (from bridge-utils), and should be assigned the server IP and such, then server itself can be started, passing name of that bridge to BlueZ over dbus in "org.bluez.NetworkServer1.Register()" call:



[[ -n "$(brctl show $br 2>&1 1>/dev/null)" ]] && {
  brctl addbr $br
  brctl setfd $br 0
  brctl stp $br off
  ip addr add dev $br
  ip link set $br up
}

exec bt-pan --debug server $br

(as mentioned above, bt-pan script is from fgtk github repo)

Update 2015-12-10: This is "net.bnep" script, as referred-to in "net-bnep.service" unit just below.

Update 2015-12-10: These days, systemd can easily create and configure bridge, forwarding and all the interfaces involved, even running built-in DHCP server there - see "man systemd.netdev" and "man systemd.network", for how to do that, esp. examples at the end of both.

Just running this script will then setup a proper "bluetooth access point", and if done from systemd, should probably be a part of bluetooth.target and get stopped along with bluetoothd (as it doesn't make any sense running without it):




Update 2015-12-10: Put this into e.g. /etc/systemd/system/net-bnep.service and enable to start with "bluetooth.target" (see "man systemd.special") by running systemctl enable net-bnep.service.

On the client side, it's even simpler - BlueZ will just create a "bnepX" device and won't need any bridge, as it is just a single connection:


ExecStart=/usr/local/bin/bt-pan client --wait 00:02:72:XX:XX:XX


Update 2015-12-10: Can be /etc/systemd/system/net-bnep-client.service, don't forget to enable it (creates symlink in "bluetooth.target.wants"), same as for other unit above (which should be running on the other machine).

Update 2015-12-10: Created "bnepX" device is also trivial to setup with systemd on the client side, see e.g. "Example 2" at the end of "man systemd.network".

On top of "bnepX" device on the client, some dhcp client should probably be running, which systemd-networkd will probably handle by default on systemd-enabled linuxes, and some dhcpd on the server-side (I used udhcpd from busybox for that).

Enabling units on both machines makes them set up the AP and connect on boot, or as soon as BT dongles get plugged-in/detected.

Fairly trivial setup for a wireless one, especially wrt authentication, and it seems to work reliably so far.

Update 2015-12-10: Tried to clarify a few things above for people not very familiar with systemd, where noted. See systemd docs for more info on all this.

In case something doesn't work in such a rosy scenario, which kinda happens often, first place to look at is probably debug info of bluetoothd itself, which can be enabled with systemd via systemctl edit bluetooth and adding a [Service] section with override like ExecStart=/usr/lib/bluetooth/bluetoothd -d, then doing daemon-reload and restart of the unit.

This should already produce a ton of debug output, but I generally find something like bluetoothd[363]: src/device.c:device_bonding_failed() status 14 and bluetoothd[363]: plugins/policy.c:disconnect_cb() reason 3 in there, which is not super-helpful by itself.

"btmon" tool which also comes with BlueZ provides a much more useful output with all the stuff decoded from the air, even colorized for convenience (though you won't see it here):

> ACL Data RX: Handle 11 flags 0x02 dlen 20               [hci0] 17.791382
      L2CAP: Information Response (0x0b) ident 2 len 12
        Type: Fixed channels supported (0x0003)
        Result: Success (0x0000)
        Channels: 0x0000000000000006
          L2CAP Signaling (BR/EDR)
          Connectionless reception
> HCI Event: Number of Completed Packets (0x13) plen 5    [hci0] 17.793368
        Num handles: 1
        Handle: 11
        Count: 2
> ACL Data RX: Handle 11 flags 0x02 dlen 12               [hci0] 17.794006
      L2CAP: Connection Request (0x02) ident 3 len 4
        PSM: 15 (0x000f)
        Source CID: 64
< ACL Data TX: Handle 11 flags 0x00 dlen 16               [hci0] 17.794240
      L2CAP: Connection Response (0x03) ident 3 len 8
        Destination CID: 64
        Source CID: 64
        Result: Connection pending (0x0001)
        Status: Authorization pending (0x0002)
> HCI Event: Number of Completed Packets (0x13) plen 5    [hci0] 17.939360
        Num handles: 1
        Handle: 11
        Count: 1
< ACL Data TX: Handle 11 flags 0x00 dlen 16               [hci0] 19.137875
      L2CAP: Connection Response (0x03) ident 3 len 8
        Destination CID: 64
        Source CID: 64
        Result: Connection refused - security block (0x0003)
        Status: No further information available (0x0000)
> HCI Event: Number of Completed Packets (0x13) plen 5    [hci0] 19.314509
        Num handles: 1
        Handle: 11
        Count: 1
> HCI Event: Disconnect Complete (0x05) plen 4            [hci0] 21.302722
        Status: Success (0x00)
        Handle: 11
        Reason: Remote User Terminated Connection (0x13)
@ Device Disconnected: 00:02:72:XX:XX:XX (0) reason 3

That at least makes it clear what the decoded error message is, which protocol layer it's on and which requests it follows - enough stuff to dig into.

BlueZ also includes a crapton of cool tools for all sorts of diagnostics and manipulation, which - alas - seem to be missing on some distros, but can be built along with the package using --enable-tools --enable-experimental configure-options (all under "tools" dir).

I had to resort to these tricks briefly when trying to setup PANU/GN-mode connections, but as I didn't really need these, gave up fairly soon on that "Connection refused - security block" error (from that "policy.c" plugin) - no idea why BlueZ throws it in this context and google doesn't seem to help much, maybe polkit thing, idk.

Didn't need these modes though, so whatever.

Jan 30, 2015

Enabling i2c1 on BeagleBone Black without Device Tree overlays

Important: This way was pretty much made obsolete by Device Tree overlays, which have returned (as expected) in 3.19 kernels - it'd probably be easier to use these in most cases.

BeagleBone Black board has three i2c buses, two of which are available by default on Linux kernel with patches from RCN (Robert Nelson).

There are plenty of links on how to enable i2c1 on old-ish (by now) 3.8-series kernels, which had "Device Tree overlays" patches, but these do not apply to 3.9-3.18, though it looks like they might make a comeback in the future (LWN).
Probably it's just a bit too specific a task to expect an easy answer for.

Overlays in pre-3.9 allowed one to write a "patch" for a Device Tree, compile it and then load it at runtime, which is not possible without these patches, but it's perfectly possible and kinda-easy to patch the full Device Tree instead, compiling a dtb and loading it on boot.

It'd be great if there was a prebuilt dtb file in Linux with just i2c1 enabled (not one for some cape, bundled with other settings), but unfortunately, as of patched 3.18.4, there doesn't seem to be one, hence the following patching process.

For that, getting the kernel sources (whichever were used to build the kernel, ideally) is necessary.

In Arch Linux ARM (which I tend to use with such boards), this can be done by grabbing the PKGBUILD dir, editing the "PKGBUILD" file, uncommenting the "return 1" under "stop here - this is useful to configure the kernel" comment and running makepkg -sf (from "base-devel" package set on arch) there.
That will just unpack the kernel sources, put the appropriate .config file there and run make prepare on them.

With kernel sources unpacked, the file that you'd want to patch is "arch/arm/boot/dts/am335x-boneblack.dts" (or whichever other dtb you're loading via uboot):

--- am335x-boneblack.dts.bak    2015-01-29 18:20:29.547909768 +0500
+++ am335x-boneblack.dts        2015-01-30 20:56:43.129213998 +0500
@@ -23,6 +23,14 @@

 &ocp {
+       /* i2c */
+       P9_17_pinmux {
+               status = "disabled";
+       };
+       P9_18_pinmux {
+               status = "disabled";
+       };
+
        /* clkout2 */
        P9_41_pinmux {
                status = "disabled";
@@ -33,6 +41,13 @@

+&i2c1 {
+       status = "okay";
+       pinctrl-names = "default";
+       pinctrl-0 = <&i2c1_pins>;
+       clock-frequency = <100000>;
+};
+
 &mmc1 {
        vmmc-supply = <&vmmcsd_fixed>;

Then "make dtbs" can be used to build dtb files only, and not the whole kernel (which would take a while on BBB).

Resulting *.dtb (e.g. "am335x-boneblack.dtb" for "am335x-boneblack.dts", in the same dir) can be put into "dtbs" on boot partition and loaded from uEnv.txt (or whatever uboot configs are included from there).

Reboot, and i2cdetect -l should show i2c-1:

# i2cdetect -l
i2c-0   i2c             OMAP I2C adapter                    I2C adapter
i2c-1   i2c             OMAP I2C adapter                    I2C adapter
i2c-2   i2c             OMAP I2C adapter                    I2C adapter

As I've already mentioned before, this might not be the optimal way to enable the thing in kernels 3.19 and beyond, where "device tree overlays" patches landed - it should be possible to just load a patch on-the-fly there, without all the extra hassle described above.

Update 2015-03-19: Device Tree overlays landed in 3.19 indeed, but if migrating to use these is too much hassle for now, here's a patch for 3.19.1-bone4 am335x-bone-common.dtsi to enable i2c1 and i2c2 on boot (applies in the same way, make dtbs, copy am335x-boneblack.dtb to /boot/dtbs).

Jan 28, 2015

Sample code for using ST7032I I2C/SMBus driver in Midas LCD with python

There seems to be a surprising lack of python code on the net for this particular device, except for this nice pi-ras blog post, in japanese.
So, to give google some more food and a bit of commentary in english to that post - here goes.

I'm using Midas MCCOG21605C6W-SPTLYI 2x16 chars LCD panel, connected to 5V VDD and 3.3V BeagleBone Black I2C bus:

simple digital clock on lcd

Code for the above LCD clock "app" (python 2.7):

import smbus, time

class ST7032I(object):

  def __init__(self, addr, i2c_chan, **init_kws):
    self.addr, self.bus = addr, smbus.SMBus(i2c_chan)

  def _write(self, data, cmd=0, delay=None):
    self.bus.write_i2c_block_data(self.addr, cmd, list(data))
    if delay: time.sleep(delay)

  def init(self, contrast=0x10, icon=False, booster=False):
    assert contrast < 0x40 # 6 bits only, probably not used on most lcds
    pic_high = 0b0111 << 4 | (contrast & 0x0f) # c3 c2 c1 c0
    pic_low = ( 0b0101 << 4 |
      icon << 3 | booster << 2 | ((contrast >> 4) & 0x03) ) # c5 c4
    self._write([0x38, 0x39, 0x14, pic_high, pic_low, 0x6c], delay=0.01)
    self._write([0x0c, 0x01, 0x06], delay=0.01)

  def move(self, row=0, col=0):
    assert 0 <= row <= 1 and 0 <= col <= 15, [row, col]
    self._write([0b1000 << 4 | (0x40 * row + col)])

  def addstr(self, chars, pos=None):
    if pos is not None:
      row, col = (pos, 0) if isinstance(pos, int) else pos
      self.move(row, col)
    self._write(map(ord, chars), cmd=0x40)

  def clear(self):
    self._write([0x01], delay=0.01) # "Clear Display" instruction, needs a longer delay

if __name__ == '__main__':
  lcd = ST7032I(0x3e, 2)
  lcd.init()
  while True:
    ts_tuple = time.localtime()
    lcd.addstr(time.strftime('date: %y-%m-%d', ts_tuple), 0)
    lcd.addstr(time.strftime('time: %H:%M:%S', ts_tuple), 1)
    time.sleep(1)
Note the constants in the "init" function - these are all from the "INITIALIZE(5V)" sequence on page-8 of the Midas LCD datasheet, setting up things like voltage follower circuit, OSC frequency, contrast (not used on my panel), modes and such.
Actual reference on what all these instructions do and how they're decoded can be found on page-20 there.
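For example, the "move" method above builds the Set-DDRAM-Address instruction (0b1000_0000 | address), with the second row starting at address 0x40 - easy to check by hand:

```python
def ddram_cmd(row, col):
    'Set-DDRAM-Address instruction byte for a 2x16 panel (row offset 0x40).'
    assert 0 <= row <= 1 and 0 <= col <= 15, [row, col]
    return 0x80 | (0x40 * row + col)

print(hex(ddram_cmd(0, 0)), hex(ddram_cmd(1, 3)))  # 0x80 0xc3
```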

Even with the same exact display, but connected to 3.3V, these numbers should probably be a bit different - check the datasheet (e.g. page-7 there).

Also note the "addr" and "i2c_chan" values (0x3E and 2) - these should be taken from the board itself.

"i2c_chan" is the number of the device (X) in /dev/i2c-X, of which there are usually more than one on ARM boards like RPi or BBB.
For instance, Beaglebone Black has three I2C buses, two of which are available on the expansion headers (with proper dtbs loaded).
See this post on Fortune Datko blog and/or this one on minix-i2c blog for one way to tell reliably which device in /dev corresponds to which hardware bus and pin numbers.

And the address is easy to get from the datasheet (the lcd I have uses only one static slave address), or to detect via i2cdetect -r -y <i2c_chan>, e.g.:

# i2cdetect -r -y 2
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- 3e --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- UU UU UU UU -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --

Here I have DS1307 RTC on 0x68 and an LCD panel on 0x3E address (again, also specified in the datasheet).
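To use such probing from a script, that grid output can be parsed as well - a quick sketch (hypothetical helper, not part of i2c-tools, works on the dump above):

```python
# Hex cells in the i2cdetect grid are devices that responded, "UU" cells
# are addresses already claimed by a kernel driver.
import re

def parse_i2cdetect(output):
  found, busy = [], []
  for line in output.splitlines():
    m = re.match(r'^([0-9a-f]{2}):(.*)$', line)
    if not m: continue # skip the column-header row
    base, row = int(m.group(1), 16), m.group(2)
    for cell in re.finditer(r'\S+', row):
      if cell.group() == '--': continue # nothing detected there
      addr = base + cell.start() // 3 # cells are 3 chars wide
      (busy if cell.group() == 'UU' else found).append(addr)
  return found, busy
```

For the output above, this would return [0x3e, 0x68] as detected and 0x54-0x57 as driver-claimed addresses.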

Both "i2cdetect" command-line tool and python "smbus" module are part of i2c-tools project, which is developed under lm-sensors umbrella.
On Arch or source-based distros these all come with the "i2c-tools" package, but on e.g. debian, the python module seems to be split out into a separate "python-smbus" package.

Plugging the bus number and address for your particular hardware into the script above, and maybe adjusting the values there for your lcd panel's modes, should make the clock show up and tick every second.

In general, upon seeing tutorial on some random blog (like this one), please take it with a grain of salt, because it's highly likely that it was written by a fairly incompetent person (like me), since engineers who deal with these things every day don't see above steps as any kind of accomplishment - it's a boring no-brainer routine for them, and they aren't likely to even think about it, much less write tutorials on it (all trivial and obvious, after all).

Nevertheless, hope this post might be useful to someone as a pointer on where to look to get such device started, if nothing else.

Jan 12, 2015

Starting systemd service instance for device from udev

Needed to implement a thing that would react on USB Flash Drive inserted (into autonomous BBB device) - to get device name, mount fs there, rsync stuff to it, unmount.

To avoid whatever concurrency issues (i.e. multiple things screwing with device in parallel), proper error logging and other startup things, most obvious thing is to wrap the script in a systemd oneshot service.

The only non-immediately-obvious problem for me here was how to properly pass the device to such a service.

With a bit of digging through google results (and even finding one post from here somehow among them), eventually found the "Pairing udev's SYSTEMD_WANTS and systemd's templated units" resolved thread, where what seems to be the current-best approach is described.

Adapting it for my case and pairing with generic patterns for device-instantiated services, resulted in the following configuration.


SUBSYSTEM=="block", ACTION=="add", ENV{ID_TYPE}=="disk", ENV{DEVTYPE}=="partition",\
  PROGRAM="/usr/bin/systemd-escape -p --template=sync-sensor-logs@.service $env{DEVNAME}",\
  ENV{SYSTEMD_WANTS}+="%c"



[Unit]
BindTo=%i.device
After=%i.device

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/sync-sensor-logs /%I
This makes things stop if the script works for too long or if the device vanishes (due to BindTo=), and properly delays the script's start until the device is ready.
"sync-sensor-logs" script at the end gets passed original unescaped device name as an argument.
Easy to apply all the systemd.exec(5) and systemd.service(5) parameters on top of this.

Does not need things like systemctl invocation or manual re-implementation of systemd's escaping either, though running "systemd-escape" still seems to be a necessary evil there.
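For illustration, a very rough python approximation of what "systemd-escape -p" does to the device path, and of the "%I" unescaping that the unit gets back - just to show the round-trip, not a full reimplementation (no "." escaping or path normalization here):

```python
import re

def escape_path(path): # ~ systemd-escape -p
  out = []
  for c in path.strip('/'):
    if c == '/': out.append('-')
    elif c.isalnum() or c in ':_.': out.append(c)
    else: out.append('\\x%02x' % ord(c))
  return ''.join(out)

def unescape(name): # ~ what "%I" expands to in the unit
  return re.sub( r'\\x([0-9a-f]{2})',
    lambda m: chr(int(m.group(1), 16)), name ).replace('-', '/')

# e.g. escape_path('/dev/sda1') -> 'dev-sda1',
#   and ExecStart's "/%I" puts the leading slash back
```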

The systemd-less alternative seems to be a script that does per-device flock, timeout logic and a lot more checks for whether the device is ready and/or still there, so this approach looks way saner and clearer, with the caveat that one should probably be familiar with all these systemd features.
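The core bit of that alternative - per-device locking - might look something like this (a hypothetical sketch, "try_lock" and the lock-path scheme here are made up): an exclusive non-blocking lock keyed on the device name, so that concurrent events for the same device don't start the sync twice.

```python
import fcntl, os

def try_lock(dev, lock_dir='/run/lock'):
  lock_path = os.path.join(
    lock_dir, 'sync.%s.lock' % os.path.basename(dev) )
  fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
  try: fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
  except BlockingIOError: # some other process holds the lock
    os.close(fd)
    return None
  return fd # must be kept open for the duration of the sync
```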

May 19, 2014

Displaying any lm_sensors data (temperature, fan speeds, voltage, etc) in conky

Conky sure has a ton of sensor-related hw-monitoring options, but it still doesn't seem to be enough to represent even just the temperatures from this "sensors" output:

Adapter: ACPI interface
Vcore Voltage:      +1.39 V  (min =  +0.80 V, max =  +1.60 V)
+3.3V Voltage:      +3.36 V  (min =  +2.97 V, max =  +3.63 V)
+5V Voltage:        +5.08 V  (min =  +4.50 V, max =  +5.50 V)
+12V Voltage:      +12.21 V  (min = +10.20 V, max = +13.80 V)
CPU Fan Speed:     2008 RPM  (min =  600 RPM, max = 7200 RPM)
Chassis Fan Speed:    0 RPM  (min =  600 RPM, max = 7200 RPM)
Power Fan Speed:      0 RPM  (min =  600 RPM, max = 7200 RPM)
CPU Temperature:    +42.0°C  (high = +60.0°C, crit = +95.0°C)
MB Temperature:     +43.0°C  (high = +45.0°C, crit = +75.0°C)

Adapter: PCI adapter
temp1:        +30.6°C  (high = +70.0°C)
                       (crit = +90.0°C, hyst = +88.0°C)

Adapter: PCI adapter
temp1:        +51.0°C

Given the summertime, and faulty noisy cooling fans, decided that it'd be nice to be able to have an idea about what kind of temperatures hw operates on under all sorts of routine tasks.

Conky is extensible via lua, which - among other awesome things there - allows one to code caches for expensive operations (instead of just repeating them every other second) and to parse output of whatever tools efficiently (i.e. without forking five extra binaries plus perl).

The output of "sensors", though, is not only kinda expensive to get, but also hard to parse, likely unstable, and the tool doesn't seem to have any "machine data" option.

lm_sensors includes libsensors, which still doesn't seem possible to call from conky-lua directly (would need some kind of ffi), but which is easy to write a wrapper around - i.e. this sens.c 50-liner, to dump info in a useful way:

atk0110-0-0__in0_input 1.392000
atk0110-0-0__in0_min 0.800000
atk0110-0-0__in0_max 1.600000
atk0110-0-0__in1_input 3.360000
atk0110-0-0__in3_max 13.800000
atk0110-0-0__fan1_input 2002.000000
atk0110-0-0__fan1_min 600.000000
atk0110-0-0__fan1_max 7200.000000
atk0110-0-0__fan2_input 0.000000
atk0110-0-0__fan3_max 7200.000000
atk0110-0-0__temp1_input 42.000000
atk0110-0-0__temp1_max 60.000000
atk0110-0-0__temp1_crit 95.000000
atk0110-0-0__temp2_input 43.000000
atk0110-0-0__temp2_max 45.000000
atk0110-0-0__temp2_crit 75.000000
k10temp-0-c3__temp1_input 31.500000
k10temp-0-c3__temp1_max 70.000000
k10temp-0-c3__temp1_crit 90.000000
k10temp-0-c3__temp1_crit_hyst 88.000000
radeon-0-400__temp1_input 51.000000

It's all lm_sensors seems to know about the hw, in a simple key-value form.
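Such a dump is trivial to load from any scripting language - e.g. a python one-off (hypothetical helper, keys as in the sample output above):

```python
# Parse "key value" lines from the sens tool into a dict of floats.
def parse_sens(dump):
  sensors = {}
  for line in dump.splitlines():
    if not line.strip(): continue
    key, value = line.split()
    sensors[key] = float(value)
  return sensors
```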

Still not keen on running that on every conky tick, hence the lua cache:

sensors = {
  values=nil, cmd='sens',
  ts_read_i=120, ts_read=0 }

function conky_sens_read(name, precision)
  local ts = os.time()
  if os.difftime(ts, sensors.ts_read) > sensors.ts_read_i then
    local sh = io.popen(sensors.cmd, 'r')
    sensors.values = {}
    for p in string.gmatch(sh:read('*a'), '(%S+ %S+)\n') do
      local n = string.find(p, ' ')
      sensors.values[string.sub(p, 0, n-1)] = string.sub(p, n)
    end
    sh:close()
    sensors.ts_read = ts
  end

  if sensors.values[name] then
    local fmt = string.format('%%.%sf', precision or 0)
    return string.format(fmt, sensors.values[name])
  end
  return ''
end

Which runs the actual "sens" command every 120s at most, which is perfectly fine with me, since I don't consider conky to be an "early warning" system, and more of a "have an idea of what's the norm here" one.

Config-wise, it'd be just cpu temp: ${lua sens_read atk0110-0-0__temp1_input}C, or a more fancy template version with a flashing warning and hidden for missing sensors:

template3 ${color lightgrey}${if_empty ${lua sens_read \2}}${else}\
${if_match ${lua sens_read \2} > \3}${color red}\1: ${lua sens_read \2}C${blink !!!}\
${else}\1: ${color}${lua sens_read \2}C${endif}${endif}

It can then be used simply as ${template3 cpu atk0110-0-0__temp1_input 60} or ${template3 gpu radeon-0-400__temp1_input 80}, with 60 and 80 being manually-specified thresholds beyond which indicator turns red and has blinking "!!!" to get more attention.

Overall result in my case is something like this:

[image: conky sensors display]

sens.c (plus a Makefile with gcc -Wall -lsensors for it) and my conky config where it's utilized can all be found in the de-setup repo on github (or my git mirror, ofc).

Nov 01, 2013

Software hacks to fix broken hardware - laptop fan

Had a fan in a laptop dying for a few weeks now, but international mail being universally bad (and me too hopeful about dying fan's lifetime), replacement from ebay is still on its looong way.

Meanwhile, thing started screeching like mad, causing strong vibration in the plastic and stopping/restarting every few seconds with an audible thunk.

Things not looking good, and me being too lazy to work hard enough to be able to afford new laptop, had to do something to postpone this one's imminent death.

Cleaning the dust and hairs out of fan's propeller and heatsink and changing thermal paste did make the thing a bit cooler, but given that it's fairly slim Acer S3 ultrabook, no local repair shop was able to offer any immediate replacement for the fan, so no clean hw fix in reach yet.

The interesting thing about broken fans though, is that they seem to start vibrating madly out of control only beyond a certain speed, so one option was to slow the thing down while keeping the cpu cool somehow.

The cpupower tool that comes with the linux kernel can nicely downclock this i5 cpu to 800 MHz, but that's not really enough to keep the fan from spinning madly - some default BIOS code seems to be putting it to 100% at 50C.

Besides, from what I've seen, it seems to be quite counter-productive, making everything (e.g. opening a page in FF) take much longer and keeping the cpu at 100% of that lower rate all the time - which heats it up slower, sure, but to the same or even higher level for the same task, with the side effect of also wasting time.

Luckily, found out that the fan on Acer laptops can be controlled using /dev/ports registers, as described on the linlap wiki page.

50C doesn't seem to be high for these CPUs at all, and one previous laptop worked fine at 80C all the time, so raising the threshold for killing the fan seems like a good idea - it's not like there's much to lose anyway.

As the acers3fand script linked from the wiki was for a somewhat different purpose, wrote my own (also lighter and more self-contained) script - fan_control - to only put more than ~50% of power to the fan after the cpu goes beyond 60C, warn if it heats up way more, and never put the fan into "wailing death" mode, with max being at about 75% power, also reaching for the cpupower hack before that.

Such manual control opens up the possibility of cpu overheating though, and otherwise doesn't help much when you run cpu-intensive stuff, and I kinda don't want to worry about some cronjob, stuck dev script or hung DE app killing the machine while I'm away, so one additional hack I could think of is to just throttle CPU bandwidth enough so that:

  • short tasks complete at top performance, without delays.
  • long cpu-intensive stuff gets throttled to a point where it can't generate enough heat and cpu stays at some 60C with slow fan speed.
  • some known-to-be-intensive tasks like compilation get their own especially low limits.

So kinda like the cpupower trick, but more fine-grained and without the fixed presets one can slow things down to (as the lowest bar there doesn't cut it).

Kernel Control Groups (cgroups) turned out to have the right thing for that - the "cpu" resource controller there has cfs_quota_us/cfs_period_us knobs to control cpu bandwidth for threads within a specific cgroup.

New enough systemd has the concept of "slices" to control resources for groups of services, which are applied automatically for all DE stuff as "user.slice" and its "user-<name>.slice" subslices, so all that had to be done is to echo the right values (ones that don't cause overheating or fan-fail) into that resource controller's /sys knobs.
Similar generic limitations are easy to apply to other services there by grouping them with Slice= option.
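The echo-to-/sys part might be sketched like this - a hypothetical helper, assuming the cgroup-v1 "cpu" controller is mounted at /sys/fs/cgroup/cpu (not the case on pure cgroup-v2 systems; name and default values here are made up):

```python
import os

def throttle_cgroup(name, quota_frac,
    period_us=100000, root='/sys/fs/cgroup/cpu'):
  path = os.path.join(root, name)
  os.makedirs(path, exist_ok=True)
  # cpu time allowed per period - e.g. quota_frac=0.3 is 30% of one core
  with open(os.path.join(path, 'cpu.cfs_period_us'), 'w') as dst:
    dst.write(str(period_us))
  with open(os.path.join(path, 'cpu.cfs_quota_us'), 'w') as dst:
    dst.write(str(int(period_us * quota_frac)))
  # move the current process (and its future children) into the group
  with open(os.path.join(path, 'tasks'), 'w') as dst:
    dst.write(str(os.getpid()))
```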

For distinct limits on daemons started from cli, there's "systemd-run" tool these days, and for more proper interactive wrapping, I've had pet cgroup-tools scripts for a while now (to limit cpu priority of heavier bg stuff like builds though).

With that last tweak, the situation seems to be under control - no stray app can really kill the cpu, and the fan doesn't have to do all the hard work to prevent that either, seemingly solving that hardware fail with software measures for now.

Keeping this mobile i5 cpu around 50 degrees apparently requires the fan to barely spin at all, yet seems to allow all the desktop stuff to function without noticeable slowdowns or difference.
Makes me wonder how Intel let those low-power ARM things fly past it...

Now, if only replacement fan got here before I drop off the nets even with these hacks.
