Feb 13, 2017

Xorg input driver - the easy way, via evdev and uinput

Got to reading short stories in Column Reader from laptop screen before sleep recently, and for extra lazy points, don't want to drag my hand to keyboard to flip pages (or columns, as the case might be).

Easy fix - get any input device and bind stuff there to keys you'd normally use.
As it happens, had Xbox 360 controller around for that.

Hard part is figuring out how to properly do it all in Xorg - need to build xf86-input-joystick first (somehow not in Arch core), then figure out how to make it act like a dumb event source, not some mouse emulator, and then stuff like xev and xbindkeys will probably help.

This is way more complicated than it needs to be, and gets even more so when you factor in all the Xorg driver quirks, xev's somewhat cryptic nature (modifier maps, keysyms, etc), the fact that xbindkeys can't actually do "press key" actions (have to use stuff like xdotool for that), etc.

All the while reading these events from linux itself is as trivial as evtest /dev/input/event11 (or for event in dev.read_loop(): ...) and sending them back is just ui.write(e.EV_KEY, e.BTN_RIGHT, 1) via uinput device.

Hence whole binding thing can be done by a tiny python loop that'd read events from whatever specified evdev and write corresponding (desired) keys to uinput.

So instead of +1 pre-naptime story, hacked together a script to do just that - evdev-to-xev (python3/asyncio) - which reads mappings from simple YAML and runs the loop.

For example, to bind right joystick's (on the same XBox 360 controller) extreme positions to cursor keys, plus triggers, d-pad and bumper buttons there:

map:

  ## Right stick
  # Extreme positions are ~32_768
  ABS_RX <-30_000: left
  ABS_RX >30_000: right
  ABS_RY <-30_000: up
  ABS_RY >30_000: down

  ## Triggers
  # 0 (idle) to 255 (fully pressed)
  ABS_Z >200: left
  ABS_RZ >200: right

  ## D-pad
  ABS_HAT0Y -1: leftctrl leftshift equal
  ABS_HAT0Y 1: leftctrl minus
  ABS_HAT0X -1: pageup
  ABS_HAT0X 1: pagedown

  ## Bumpers
  BTN_TL 1: [h,e,l,l,o,space,w,o,r,l,d,enter]
  BTN_TR 1: right

timings:
  hold: 0.02
  delay: 0.02
  repeat: 0.5
Run with e.g.: evdev-to-xev -c xbox-scroller.yaml /dev/input/event11
(see also less /proc/bus/input/devices and evtest /dev/input/event11).
Running the thing with no config will print example one with comments/descriptions.
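For illustration, mapping specs like the ones in the config above can be parsed into (event-code, predicate) pairs with a few lines of python - a hypothetical sketch, not the actual script's code:

```python
import re

# Hypothetical parser for spec keys like "ABS_RX <-30_000" or "ABS_HAT0Y -1":
# split off the event code, then turn "<N", ">N" or plain "N" into a predicate.
def parse_spec(spec):
    code, cond = spec.split(None, 1)
    m = re.fullmatch(r'([<>]?)(-?[\d_]+)', cond)
    op, value = m.group(1), int(m.group(2).replace('_', ''))
    if op == '<': check = lambda v: v < value
    elif op == '>': check = lambda v: v > value
    else: check = lambda v: v == value
    return code, check

code, check = parse_spec('ABS_RX <-30_000')
print(code, check(-31000), check(0))  # → ABS_RX True False
```

The event loop then only has to run each incoming evdev value through the matching predicates and write the mapped keys to uinput.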

Given how all iterations of X had to work with whatever input they had at the time, and not just on linux, even when evdev was around, it's hard to blame it for having a bit of complexity on top of the way simpler input layer underneath.

In linux, aforementioned Xbox 360 gamepad is supported by "xpad" module (so that you'd get evdev node for it), and /dev/uinput for simulating arbitrary evdev stuff is "uinput" module.

Script itself needs python3 and python-evdev, plus evtest can be useful.
No need for any extra Xorg drivers beyond standard evdev.

Most similar tool to such script seems to be actkbd, though afaict, one'd still need to run xdotool from it to simulate input :O=

Github link: evdev-to-xev script (in the usual mk-fg/fgtk scrap-heap)

May 15, 2016

Debounce bogus repeated mouse clicks in Xorg with xbindkeys

My current Razer E-Blue mouse has had this issue since I got it - Mouse-2 / BTN_MIDDLE / middle-click (useful mostly as "open new tab" in browsers and "paste" in X) sometimes produces two click events (in rapid succession) instead of one.

It was more rare before, but lately it feels like it's harder to make it click once than twice.

Seems to be either a hardware problem with debouncing circuitry or logic in the controller, or maybe the button itself not mashing switch contacts against each other hard enough... or soft enough (i.e. non-elastic), actually, given that they shouldn't "bounce" against each other.

Since there's no need to double-click that wheel-button ever, it looks rather easy to debounce the click on Xorg input level, by ignoring repeated button up/down events after producing the first full "click".

Easiest solution of that kind that I've found was to use guile (scheme) script with xbindkeys tool to keep that click-state data and perform clicks selectively, using xdotool:

(define razer-delay-min 0.2)
(define razer-wait-max 0.5)
(define razer-ts-start #f)
(define razer-ts-done #f)
(define razer-debug #f)

(define (mono-time)
  "Return monotonic timestamp in seconds as real."
  (+ 0.0 (/ (get-internal-real-time) internal-time-units-per-second)))

(xbindkey-function '("b:8") (lambda ()
  (let ((ts (mono-time)))
    (when
      ;; Enforce min ts diff between "done" and "start" of the next one
      (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
      (set! razer-ts-start ts)))))

(xbindkey-function '(Release "b:8") (lambda ()
  (let ((ts (mono-time)))
    (when razer-debug
      (format #t "razer: ~a/~a delay=~a[~a] wait=~a[~a]\n"
        razer-ts-start razer-ts-done
        (and razer-ts-done (- ts razer-ts-done)) razer-delay-min
        (and razer-ts-start (- ts razer-ts-start)) razer-wait-max))
    (when
      (and
        ;; Enforce min ts diff between previous "done" and this one
        (or (not razer-ts-done) (>= (- ts razer-ts-done) razer-delay-min))
        ;; Enforce max "click" wait time
        (and razer-ts-start (<= (- ts razer-ts-start) razer-wait-max)))
      (set! razer-ts-done ts)
      (when razer-debug (format #t "razer: --- click!\n"))
      (run-command "xdotool click 2")))))

Note that xbindkeys actually grabs "b:8" here, which is "mouse button 8" - if it grabbed "b:2" instead, the "xdotool click 2" command would recurse right back into the same code - so the wheel-clicker should be rebound to button 8 in X for this to work.
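For reference, the click-filtering logic can be modeled as a tiny python state machine - a sketch of the same idea, not a port of the guile script:

```python
# A click counts only if it comes at least delay_min after the previous
# accepted click, and its press-release pair fits into wait_max.
class Debounce:

    def __init__(self, delay_min=0.2, wait_max=0.5):
        self.delay_min, self.wait_max = delay_min, wait_max
        self.ts_start = self.ts_done = None

    def press(self, ts):
        # Only start a new click if enough time passed since the last one
        if self.ts_done is None or ts - self.ts_done >= self.delay_min:
            self.ts_start = ts

    def release(self, ts):
        # Accept click if it's neither a bounce nor a too-long hold
        if ( (self.ts_done is None or ts - self.ts_done >= self.delay_min)
                and self.ts_start is not None
                and ts - self.ts_start <= self.wait_max ):
            self.ts_done = ts
            return True  # this is where the guile script runs "xdotool click 2"
        return False

db = Debounce()
db.press(1.00); print(db.release(1.05))  # → True (legit click)
db.press(1.10); print(db.release(1.15))  # → False (bounce, suppressed)
```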

Rebinding buttons in X is trivial to do on-the-fly, using standard "xinput" tool - e.g. xinput set-button-map "My Mouse" 1 8 3 (xinitrc.input script can be used as an extended example).

Running "xdotool" to do actual clicks at the end seem a bit wasteful, as xbindkeys already hooks into similar functionality, but unfortunately there's no "send input event" calls exported to guile scripts (as of 1.8.6, at least).

Still, works well enough as it is, fixing that rather annoying issue.

[xbindkeysrc.scm on github]

May 19, 2015

twitch.tv VoDs (video-on-demand) downloading issues and fixes

Quite often recently VoDs on twitch for me are just unplayable through the flash player - no idea what happens at the backend there, but it buffers endlessly at any quality level and that's it.

I also need to skip to some arbitrary part in the 7-hour stream (last wcs sc2 ro32), as I've watched half of it live, which turns out to complicate things a bit.

So the solution is to download the thing, which goes something like this:

  • It's just a video, right? Let's grab whatever stream flash is playing (with e.g. FlashGot FF addon).

    Doesn't work easily, since video is heavily chunked.
    It used to be 30-min flv chunks, which are kinda ok, but these days it's forced 4s chunks - apparently backend doesn't even allow downloading more than 4s per request.
  • Fine, youtube-dl it is.

    Nope. Doesn't allow seeking to time in stream.
    There's an open "Download range" issue for that.

    livestreamer wrapper around the thing doesn't allow it either.

  • Try to use ?t=3h30m URL parameter - doesn't work, sadly.

  • mpv supports youtube-dl and seek, so use that.

    Kinda works, but only for super-short seeks.
    Seeking beyond e.g. 1 hour takes AGES, and every seek after that (even skipping few seconds ahead) takes longer and longer.
  • youtube-dl --get-url gets m3u8 playlist link, use ffmpeg -ss <pos> with it.

    Apparently works exactly the same as mpv above - takes like 20-30min to seek to 3:30:00 (3.5 hour offset).

    Dunno if it downloads and checks every chunk in the playlist for length sequentially... sounds dumb, but no clue why it's that slow otherwise, apparently just not good with these playlists.

  • Grab the m3u8 playlist, change all relative links there into full urls, remove bunch of them from the start to emulate seek, play that with ffmpeg | mpv.

    Works at first, but gets totally stuck a few seconds/minutes into the video, with ffmpeg doing bitrates of ~10 KiB/s.

    youtube-dl apparently gets stuck in a similar fashion, as it does the same ffmpeg-on-a-playlist (but without changing it) trick.

  • Fine! Just download all the damn links with curl.

    grep '^http:' pls.m3u8 | xargs -n50 curl -s | pv -rb -i5 > video.mp4

    Makes it painfully obvious why flash player and ffmpeg/youtube-dl get stuck - eventually curl stumbles upon a chunk that downloads at a few KiB/s.

    This "stumbling chunk" appears to be a random one, unrelated to local bandwidth limitations, and just re-trying it fixes the issue.

  • Assemble a list of links and use some more advanced downloader that can do parallel downloads, plus detect and retry super-low speeds.

    Naturally, it's aria2, but with all the parallelism it appears to be impossible to guess which output file will be which with just a cli.

    Mostly due to links having same url-path, e.g. index-0000000014-O7tq.ts?start_offset=955228&end_offset=2822819 with different offsets (pity that backend doesn't seem to allow grabbing range of that *.ts file of more than 4s) - aria2 just does file.ts, file.ts.1, file.ts.2, etc - which are not in playlist-order due to all the parallel stuff.

  • Finally, as acceptance dawns, go and write your own youtube-dl/aria2 wrapper to properly seek necessary offset (according to playlist tags) and download/resume files from there, in a parallel yet ordered and controlled fashion.

    This is done by using --on-download-complete hook with passing ordered "gid" numbers for each chunk url, which are then passed to the hook along with the resulting path (and hook renames file to prefix + sequence number).

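For reference, the playlist-mangling step from the list above (relative links to full urls, dropping chunks from the start to emulate seek) is a few lines of python - a hypothetical sketch of it:

```python
from urllib.parse import urljoin

# Absolutize chunk links in an m3u8 playlist (relative to wherever the
# playlist itself was fetched from) and drop the first N media segments,
# along with their #EXTINF tags, to emulate seeking.
def mangle_playlist(pls_text, base_url, skip_chunks=0):
    out, skipped = [], 0
    for line in pls_text.splitlines():
        if line and not line.startswith('#'):  # media segment link
            if skipped < skip_chunks:
                skipped += 1
                if out and out[-1].startswith('#EXTINF'): out.pop()
                continue
            line = urljoin(base_url, line)
        out.append(line)
    return '\n'.join(out)

pls = '#EXTM3U\n#EXTINF:4.0,\nchunk1.ts\n#EXTINF:4.0,\nchunk2.ts'
print(mangle_playlist(pls, 'http://example.com/vod/index.m3u8', skip_chunks=1))
```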
Ended up with the chunk of the stream I wanted, locally (online playback lag never goes away!), downloaded super-fast and seekable.

Resulting script is twitch_vod_fetch (script source link).

Needs youtube-dl (for --get-url), requests python module and aria2 (does the actual downloads, in the most efficient manner).


aria2c magic bits in the script:

aria2c = subprocess.Popen([
  'aria2c',

  '--stop-with-process={}'.format(os.getpid()),
  '--enable-rpc=true',
  '--rpc-listen-port={}'.format(port),
  '--rpc-secret={}'.format(key),

  '--no-netrc', '--no-proxy',
  '--max-concurrent-downloads=5',
  '--max-connection-per-server=5',
  '--max-file-not-found=5',
  '--max-tries=8',
  '--timeout=15',
  '--connect-timeout=10',
  '--lowest-speed-limit=100K',
  '--user-agent={}'.format(ua),

  '--on-download-complete={}'.format(hook),
], close_fds=True)

Didn't bother adding extra options for tweaking these via cli, but might be a good idea to adjust timeouts and limits for a particular use-case (see also the massive "man aria2c").

Seeking in playlist is easy, as it's essentially a VoD playlist, and every ~4s chunk is preceded by e.g. #EXTINF:3.240, tag, with its exact length, so script just skips these as necessary to satisfy --start-pos / --length parameters.
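That skipping can be sketched in python - hypothetical code, not the script's own - by walking the #EXTINF tags and accumulating durations:

```python
# Pick indices of media segments in an m3u8 playlist (passed as a list of
# lines) that cover [start_pos, start_pos + length] seconds, using the
# #EXTINF:<duration>, tag that precedes each chunk url.
def seek_playlist(lines, start_pos, length=None):
    pos, picked = 0.0, []
    for n, line in enumerate(lines):
        if not line.startswith('#EXTINF:'): continue
        d = float(line.split(':', 1)[1].rstrip(','))
        if pos + d > start_pos and (length is None or pos < start_pos + length):
            picked.append(n + 1)  # chunk url follows its EXTINF tag
        pos += d
    return picked

pls = [ '#EXTM3U',
    '#EXTINF:4.0,', 'c1.ts', '#EXTINF:4.0,', 'c2.ts',
    '#EXTINF:4.0,', 'c3.ts', '#EXTINF:4.0,', 'c4.ts' ]
print([pls[n] for n in seek_playlist(pls, start_pos=5, length=6)])
```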

Queueing all downloads, each with its own particular gid, is done via JSON-RPC, as it seems to be impossible to:

  • Specify both link and gid in the --input-file for aria2c.
  • Pass an actual download URL or any sequential number to --on-download-complete hook (except for gid).

So each gid is just generated as "000001", "000002", etc, and hook script is a one-liner "mv" command.
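A sketch of how one such chunk might get queued over aria2c's JSON-RPC interface (--enable-rpc / --rpc-secret) - hypothetical code, the actual script's naming scheme may differ:

```python
import json

# aria2 gids are 16-char hex strings, hence the zero-padding of the
# sequence number; --out is set per-download so files come out in
# playlist order regardless of parallelism.
def make_addUri(secret, n, url, prefix):
    gid = format(n, '016x')
    return json.dumps(dict(
        jsonrpc='2.0', id=gid, method='aria2.addUri',
        params=[ 'token:{}'.format(secret), [url],
            dict(gid=gid, out='{}.{:06d}.ts'.format(prefix, n)) ]))

print(make_addUri('key', 1, 'http://example.com/chunk.ts?start_offset=0', 'vod'))
```

Payload like that gets POSTed to the rpc-listen-port, and the --on-download-complete hook then only needs the gid to recover the chunk's sequence number.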


Since all stuff in the script is kinda lengthy time-wise - e.g. youtube-dl --get-url takes a while, then the actual downloads, then concatenation, ... - it's designed to be Ctrl+C'able at any point.

Every step just generates a state-file like "my_output_prefix.m3u8", and next one goes on from there.
Restarting the script doesn't repeat these, and these files can be freely mangled or removed to force re-doing the step (or to adjust behavior in whatever way).
Example of useful restart might be removing *.m3u8.url and *.m3u8 files if twitch starts giving 404's due to expired links in there.
Won't force re-downloading any chunks, will only grab still-missing ones and assemble the resulting file.

End-result is one my_output_prefix.mp4 file with specified video chunk (or full video, if not specified), plus all the intermediate litter (to be able to restart the process from any point).


One issue I've spotted with the initial version:

05/19 22:38:28 [ERROR] CUID#77 - Download aborted. URI=...
Exception: [AbstractCommand.cc:398] errorCode=1 URI=...
  -> [RequestGroup.cc:714] errorCode=1 Download aborted.
  -> [DefaultBtProgressInfoFile.cc:277]
    errorCode=1 total length mismatch. expected: 1924180, actual: 1789572
05/19 22:38:28 [NOTICE] Download GID#0035090000000000 not complete: ...

There seem to be a few of these mismatches (like 5 out of 10k chunks), which don't get retried, as aria2 doesn't seem to consider these to be transient errors (which is probably fair).

Probably a twitch bug, as it clearly breaks http there, and browsers shouldn't accept such responses either.

Can be fixed by one more hook, I guess - either --on-download-error (to make script retry url with that gid), or the one using websocket and getting json notification there.

In any case, just running same command again to download a few of these still-missing chunks and finish the process works around the issue.

Update 2015-05-22: Issue clearly persists for vods from different chans, so fixed it via simple "retry all failed chunks a few times" loop at the end.

Update 2015-05-23: Apparently it's due to aria2 reusing same files for different urls and trying to resume downloads, fixed by passing --out for each download queued over api.


[script source link]

Apr 11, 2015

Skype setup on amd64 without multilib/multiarch/chroot

Did a kinda-overdue migration of a desktop machine to amd64 a few days ago.
Exherbo has multiarch there, but I didn't see much point in keeping (and maintaining in various ways) a full-blown set of 32-bit libs just for Skype, which I found that I still need occasionally.

Solution I've used before (documented in a past entry) - just grabbing the 32-bit Skype binary and the full set of libs it needs from whatever distro - still works and applies here, not-so-surprisingly.

What I ended up doing is:

  • Grab the latest Fedora "32-bit workstation" iso (Fedora-Live-Workstation-i686-21-5.iso).

  • Install/run it on a virtual machine (plain qemu-kvm).

  • Download "Dynamic" Skype version (distro-independent tar.gz with files) from Skype site to/on a VM, "tar -xf" it.

  • ldd skype-4.3.0.37/skype | grep 'not found' to see which dependency-libs are missing.

  • Install missing libs - yum install qtwebkit libXScrnSaver

  • scp build_skype_env.bash (from skype-space repo that I have from old days of using skype + bitlbee) to vm, run it on a skype-dir - e.g. ./build_skype_env.bash skype-4.3.0.37.

    Should finish successfully and produce "skype_env" dir in the current path.

  • Copy that "skype_env" dir with all the libs back to pure-amd64 system.

  • Since skype binary has "/lib/ld-linux.so.2" as a hardcoded interpreter (as it should be), and pure-amd64 system shouldn't have one (not to mention missing multiarch prefix) - patch it in the binary with patchelf:

    patchelf --set-interpreter ./ld-linux.so.2 skype
    
  • Run it (from that env dir with all the libs):

    LD_LIBRARY_PATH=. ./skype --resources=.
    

    Should "just work" \o/

One big caveat is that I don't care about any features there except for simple text messaging, which is probably not how most people use Skype, so didn't test if e.g. audio would work there.
Don't think sound should be a problem though, especially since iirc modern skype can use pulseaudio (or even uses it by default?).

Given that skype itself is a huge opaque binary, I do have an AppArmor profile for the thing (uses "~/.Skype/env/" dir for bin/libs) - home.skype.

Mar 25, 2015

gnuplot for live "last 30 seconds" sliding window of "free" (memory) data

Was looking at a weird what-looks-like-a-memleak issue somewhere in the system on changing desktop background (somewhat surprisingly complex operation btw) and wanted to get a nice graph of "last 30s of free -m output", with some labels and easy access to data.

A simple enough task for gnuplot, but resulting in a somewhat complicated solution, as neither "free" nor gnuplot are perfect tools for the job.

First thing is that free -m -s1 doesn't actually give machine-readable data, and I was too lazy to find something better (should've used sysstat and sar!) and thought "let's just parse that with awk":

free -m -s $interval |
  awk '
    BEGIN {
      exports="total used free shared available"
      agg="free_plot.datz"
      dst="free_plot.dat"}
    $1=="total" {
      for (n=1;n<=NF;n++)
        if (index(exports,$n)) headers[n+1]=$n }
    $1=="Mem:" {
      first=1
      printf "" >dst
      for (n in headers) {
        if (!first) {
          printf " " >>agg
          printf " " >>dst }
        printf "%d", $n >>agg
        printf "%s", headers[n] >>dst
        first=0 }
      printf "\n" >>agg
      printf "\n" >>dst
      fflush(agg)
      close(dst)
      system("tail -n '$points' " agg " >>" dst) }'

That might be more awk than one ever wants to see, but I imagine there'd be not too much space to wiggle around it, as gnuplot is also somewhat picky in its input (either that or you can write same scripts there).
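For comparison, the same parsing can be sketched in python (a hypothetical equivalent, assuming a procps `free` version that prints the "available" column):

```python
# Parse `free -m` output into a dict of the interesting columns,
# same set the awk script exports. Sample output is hardcoded here;
# exact columns vary between procps versions.
sample = '''\
              total        used        free      shared  buff/cache   available
Mem:          15906        4521        7890         312        3494       10773
Swap:          2047           0        2047'''

def parse_free(output, exports=('total', 'used', 'free', 'shared', 'available')):
    lines = output.splitlines()
    headers = lines[0].split()
    # "Mem:" line - first field is the label, rest line up with headers
    mem = dict(zip(headers, map(int, lines[1].split()[1:])))
    return {k: mem[k] for k in exports}

print(parse_free(sample))
```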

I thought that visualizing a "live" stream of data/measurements would be kinda typical task for any graphing/visualization solution, but meh, apparently not so much for gnuplot, as I haven't found a better way to do it than the "reread" command.

To be fair, that command seems to do what I want, just not in an obvious way, seamlessly updating output in the same single window.

Next surprising quirk was "how to plot only last 30 points from big file", as it seems to be all-or-nothing with gnuplot, and googling around, only found that people do it via the usual "tail" before the plotting.

Whatever, added that "tail" hack right to the awk script (as seen above), need column headers there anyway.

Then I also want nice labels - i.e.:

  • How much available memory was there at the start of the graph.
  • How much of it is at the end.
  • Min for that parameter on the graph.
  • Same, but max.

stats won't give first/last values apparently, unless I missed those in the PDF (the only available format for up-to-date docs, le sigh), so one solution I came up with is to do a dry-run plot command with set terminal unknown and "grab first value" / "grab last value" functions to "plot".
Which is not really a huge deal, as it's just a preprocessed batch of 30 points, not a huge array of data.

Ok, so without further ado...

src='free_plot.dat'
y0=100; y1=2000;
set xrange [1:30]
set yrange [y0:y1]

# --------------------
set terminal unknown
stats src using 5 name 'y' nooutput

is_NaN(v) = v+0 != v
y_first=0
grab_first_y(y) = y_first = y_first!=0 && !is_NaN(y_first) ? y_first : y
grab_last_y(y) = y_last = y

plot src u (grab_first_y(grab_last_y($5)))
x_first=GPVAL_DATA_X_MIN
x_last=GPVAL_DATA_X_MAX

# --------------------
set label 1 sprintf('first: %d', y_first) at x_first,y_first left offset 5,-1
set label 2 sprintf('last: %d', y_last) at x_last,y_last right offset 0,1
set label 3 sprintf('min: %d', y_min) at 0,y0-(y1-y0)/15 left offset 5,0
set label 4 sprintf('max: %d', y_max) at 0,y0-(y1-y0)/15 left offset 5,1

# --------------------
set terminal x11 nopersist noraise enhanced
set xlabel 'n'
set ylabel 'megs'

set style line 1 lt 1 lw 1 pt 2 pi -1 ps 1.5
set pointintervalbox 2

plot\
  src u 5 w linespoints linestyle 1 t columnheader,\
  src u 1 w lines title columnheader,\
  src u 2 w lines title columnheader,\
  src u 3 w lines title columnheader,\
  src u 4 w lines title columnheader,\

# --------------------
pause 1
reread

Probably the most complex gnuplot script I composed to date.

Yeah, maybe I should've just googled around for an app that does the same thing, though I like how this lore potentially gives the ability to plot whatever other stuff in a similar fashion.

That, and I love all the weird stuff gnuplot can do.

For instance, xterm apparently has some weird "plotter" interface hardware terminals had in the past:

gnuplot and Xterm Tektronix 4014 Mode

And there's also the famous "dumb" terminal for pseudographics too.

Regular x11 output looks nice and clean enough though:

gnuplot x11 output

It updates smoothly, with line crawling left-to-right from the start and then neatly flowing through. There's a lot of styling one can do to make it prettier, but I think I've spent enough time on such a trivial thing.

Didn't really help much with debugging though. Oh well...

Full "free | awk | gnuplot" script is here on github.

Mar 11, 2015

Adding hotkey for any addon button in Firefox - one unified way

Most Firefox addons add a toolbar button that does something when clicked, or you can add such button by dragging it via Customize Firefox interface.

For example, I have a button for (an awesome) Column Reader extension on the right of FF menu bar (which I have always-visible):

ff extension button

But as far as I can tell, most simple extensions don't bother with some custom hotkey-adding interface, so there seems to be no obvious way to "click" that button by pressing a hotkey.

In case of Column Reader, this is more important because pressing its button is akin to "inspect element" in Firebug or FF Developer Tools - it allows picking any box of text on the page, so it would be especially nice to call via hotkey + click (as you'd do with Ctrl+Shift+C + click).

As I did struggle with binding hotkeys for specific extensions before (in their own quirky ways), found one sure-fire way to do exactly what you'd get on click this time - by simulating a click event itself (upon pressing the hotkey).

Whole process can be split into several steps:

  • Install Keyconfig or similar extension, allowing to bind/run arbitrary JavaScript code on hotkeys.

    One important note here is that such code should run in the JS context of the extension itself, not just some page, as JS from page obviously won't be allowed to send events to Firefox UI.

    Keyconfig is very simple and seems to work perfectly for this purpose - just "Add a new key" there and it'll pop up a window where any privileged JS can be typed/pasted in.

  • Install DOM Inspector extension (from AMO).

    This one will be useful to get button element's "id" (similar to DOM elements' "id" attribute, but for XUL).

    It should be available (probably after FF restart) under "Tools -> Web Developer -> DOM Inspector".

  • Run DOM Inspector and find the element-to-be-hotkeyed there.

    Under "File" select "Inspect Chrome Document" and first document there - should update "URL bar" in the inspector window to "chrome://browser/content/browser.xul".

    Now click "Find a node by clicking" button on the left (or under "Edit -> Select Element by Click"), and then just click on the desired UI button/element - doesn't really have to be an extension button.

    It might be necessary to set "View -> Document Viewer -> DOM Nodes" to see XUL nodes on the left, if it's not selected already.

    ff extension button 'id' attribute

    There it'd be easy to see all the neighbor elements and this button element.

    Any element in that DOM Inspector frame can be right-clicked and there's "Blink Element" option to show exactly where it is in the UI.

    "id" of any box where click should land will do (highlighted with red in my case on the image above).

  • Write/paste JavaScript that would "click" on the element into Keyconfig (or whatever other hotkey-addon).

    I did try HTML-specific ways to trigger events, but none seem to have worked with XUL elements, so JS below uses nsIDOMWindowUtils XPCOM interface, which seems to be designed specifically with such "simulation" stuff in mind (likely for things like Selenium WebDriver).

    JS for my case:

    var el_box = document.getElementById('columnsreader').boxObject;
    var domWindowUtils =
      window.QueryInterface(Components.interfaces.nsIInterfaceRequestor)
        .getInterface(Components.interfaces.nsIDOMWindowUtils);
    domWindowUtils.sendMouseEvent('mousedown', el_box.x, el_box.y, 0, 1, 0);
    domWindowUtils.sendMouseEvent('mouseup', el_box.x, el_box.y, 0, 1, 0);
    

    "columnsreader" there is an "id" of an element-to-be-clicked, and should probably be substituted for whatever else from the previous step.

    There doesn't seem to be a "click" event, so "mousedown" + "mouseup" it is.

    "0, 1, 0" stuff is: left button, single-click (not sure what it does here), no modifiers.

    If anything goes wrong in that JS, the usual "Tools -> Web Developer -> Browser Console" (Ctrl+Shift+J) window should show errors.

    It should be possible to adjust click position by adding/subtracting pixels from el_box.x / el_box.y, but left-top corner seems to work fine for buttons.

  • Save time and frustration by not dragging stupid mouse anymore, using trusty hotkey instead \o/

Wish there was some standard "click on whatever to bind it to specified hotkey" UI option in FF (like there is in e.g. Claws Mail), but haven't seen one so far (FF 36).
Maybe someone should write addon for that!

Dec 23, 2014

A few recent emacs features - remote and file colors

I've been using emacs for a while now, and always on the lookout for features that'd be nice to have there. Accumulated quite a number of these in my emacs-setup repo as a result.

Most of these features start from ideas in other editors or tools (e.g. music players, irc clients, etc - emacs seems to be best for a lot of stuff), or a simplistic proof-of-concept implementation of something similar.
I usually add these to my emacs due to sheer fun of coding in lisps, compared to pretty much any other lang family I know of.

Recently added two of these, and wanted to share/log the ideas here, in case someone else might find these useful.

"Remote" based on emacsclient tool

As I use laptop and desktop machines for coding and procrastination interchangeably, can have e.g. irc client (ERC - seriously the best irc client I've seen by far) running on either of these.

But even with ZNC bouncer setup (and easy log-reading tools for it), it's still a lot of hassle to connect to the same irc from another machine and catch-up on chan history there.

Or sometimes there are unsaved buffer changes, or whatever other stuff happening, or just stuff you want to do in a remote emacs instance, which would be easy if you could just go turn on the monitor, shrug-off screen blanking, sometimes disable screen-lock, then switch to emacs and press a few hotkeys there... yeah, it doesn't look that easy even when I'm at home and close to the thing.

emacs has "emacsclient" thing, that allows you to eval whatever elisp code on a remote emacs instance, but it's impossible to use for such simple tasks without some convenient wrappers.

And these remote invocation wrappers are what this idea is all about.

Consider terminal dump below, running in an ssh or over tcp to remote emacs server (and I'd strongly suggest having (server-start) right in ~/.emacs, though maybe not on tcp socket for security reasons):

% ece b
* 2014-12-23.a_few_recent_emacs_features_-_remote_and_file_colors.rst
  fg_remote.el
  rpc.py
* fg_erc.el
  utils.py
  #twitter_bitlbee
  #blazer
  #exherbo
  ...

% ece b remote
*ERROR*: Failed to uniquely match buffer by `remote', matches:
2014-12-23.a_few_recent_emacs_features_-_remote_and_file_colors.rst,
fg_remote.el

--- whoops... lemme try again

% ece b fg_rem
...(contents of the buffer, matched by unique name part)...

% ece erc
004 #twitter_bitlbee
004 #blazer
002 #bordercamp

--- Showing last (unchecked) irc activity, same as erc-track-mode does (but nicer)

% ece erc twitter | t
[13:36:45]<fijall> hint - you can use gc.garbage in Python to store any sort of ...
[14:57:59]<veorq> Going shopping downtown. Pray for me.
[15:48:59]<mitsuhiko> I like how if you google for "London Bridge" you get ...
[17:15:15]<simonw> TIL the Devonshire word "frawsy" (or is it "frawzy"?) - a ...
[17:17:04] *** -------------------- ***
[17:24:01]<veorq> RT @collinrm: Any opinions on VeraCrypt?
[17:33:31]<veorq> insightful comment by @jmgosney about the Ars Technica hack ...
[17:35:36]<veorq> .@jmgosney as you must know "iterating" a hash is in theory ...
[17:51:50]<veorq> woops #31c3 via @joernchen ...
~erc/#twitter_bitlbee%

--- "t" above is an alias for "tail" that I use in all shells, lines snipped jic

% ece h
Available commands:
  buffer (aliases: b, buff)
  buffer-names
  erc
  erc-mark
  get-socket-type
  help (aliases: h, rtfm, wat)
  log
  switch-sockets

% ece h erc-mark
(fg-remote-erc-mark PATTERN)

Put /mark to a specified ERC chan and reset its activity track.

--- Whole "help" thing is auto-generated, see "fg-remote-help" in fg_remote.el

And so on - anything is trivial to implement as an elisp few-liner. For instance, a missing "buffer-save" command would be:

(defun fg-remote-buffer-save (pattern)
  "Saves specified buffer, matched via `fg-get-useful-buffer'."
  (with-current-buffer (fg-get-useful-buffer pattern) (save-buffer)))
(defalias 'fg-remote-bs 'fg-remote-buffer-save)

Both "buffer-save" command and its "bs" alias will instantly appear in "help" and be available for calling via emacs client.
Hell, you can "implement" this stuff from terminal and eval on a remote emacs (i.e. just pass code above to emacsclient -e), extending its API in an ad-hoc fashion right there.

"ece" script above is a thin wrapper around "emacsclient" to avoid typing that long binary name and "-e" flag with a set of parentheses every time, can be found in the root of emacs-setup repo.

So it's easier to procrastinate in bed the whole morning with a laptop than ever.
Yup, that's the real point of the whole thing.

Unique per-file buffer colors

Stumbled upon this idea in a deliberate-software blog entry recently.

There, author suggests making static per-code-project colors, but I thought - why not have slight (and automatic) per-file-path color alterations for buffer background?

Doing that makes file buffers (or any non-file ones too) recognizable, i.e. you don't need to look at the path or code inside anymore to instantly know that it's that exact file you want (or don't want) to edit - eye/brain picks it up automatically.

emacs' color.el already has all the cool stuff for colors - tools for conversion to/from L*a*b* colorspace (humane "perceptual" numbers), CIEDE2000 color diffs (JUST LOOK AT THIS THING), and so on - easy to use these for the task.

Result is "fg-color-tweak" function that I now use for slight changes to buffer bg, based on md5 hash of the file path and reliably-contrast irc nicknames (based also on the hash, used way worse and unreliable "simple" thing for this in the past):

(fg-color-tweak COLOR &optional SEED MIN-SHIFT MAX-SHIFT (CLAMP-RGB-AFTER 20)
  (LAB-RANGES ...))

Adjust COLOR based on (md5 of-) SEED and MIN-SHIFT / MAX-SHIFT lists.

COLOR can be provided as a three-value (0-1 float)
R G B list, or a string suitable for `color-name-to-rgb'.

MIN-SHIFT / MAX-SHIFT can be:
 * three-value list (numbers) of min/max offset on L*a*b* in either direction
 * one number - min/max cie-de2000 distance
 * four-value list of offsets and distance, combining both options above
 * nil for no-limit

SEED can be number, string or nil.
Empty string or nil passed as SEED will return the original color.

CLAMP-RGB-AFTER defines how many attempts to make in picking
L*a*b* color with random offset that translates to non-imaginary sRGB color.
When that number is reached, last color will be `color-clamp'ed to fit into sRGB.

Returns color plus/minus offset as a hex string.
Resulting color offset should be uniformly distributed between min/max shift limits.

It's a bit complicated under the hood - parsing all the options and limits, making sure the resulting color is not an "imaginary" L*a*b* one and converts to RGB without clamping (if possible) while maintaining the requested min/max distances, doing several hashing rounds if necessary, with fallbacks... etc.

The actual end-result is simple though - deterministic and instantly-recognizable color-coding for anything you can think of: just pass in the attribute to base the coding on and the desired min/max contrast levels, get back a hex color, apply it.

Should you use something like that, I highly suggest taking a moment to look at L*a*b* and HSL color spaces, to understand how colors can be easily tweaked along certain parameters.
For example, passing '(0 a b) as min/max-shift to the function above will produce color variants with the same "lightness", which is super-useful for making sure you won't ever get out-of-whack colors for e.g. light/dark backgrounds.
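The core trick - deriving a small deterministic color shift from a hash of some attribute - is easy to sketch outside of elisp too. Here's a tiny bash version (a hypothetical, much cruder sketch than fg-color-tweak - plain RGB offsets from md5 bytes instead of proper L*a*b* distance limits):

```shell
# Hypothetical sketch, NOT the real fg-color-tweak: mix the first three
# bytes of md5(path) into a dark base color, keeping per-channel shifts small.
path_tint() {
  local h r g b
  h=$(printf '%s' "$1" | md5sum | cut -c1-6)  # first 3 hex bytes of the digest
  r=$((16#${h:0:2} % 16))  # 0..15 offset per channel
  g=$((16#${h:2:2} % 16))
  b=$((16#${h:4:2} % 16))
  printf '#%02x%02x%02x\n' $((0x28 + r)) $((0x28 + g)) $((0x28 + b))
}
```

The same path always maps to the same tint, which is all the recognizability effect needs - the elisp version just does this properly, in a perceptual colorspace with distance limits.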

To summarize...

Coding lispy stuff is super-fun, just for the sake of it ;)

Actually, speaking of fun, I can't recommend magnars' s.el and dash.el highly enough - install them right now, unless you have these already.
They make coding elisp stuff so much more fun and trivial, to a degree that's hard to describe, so please at least try coding something with these.

All the stuff mentioned above is in (also linked here already) emacs-setup repo.

Cheers!

Jun 15, 2014

Running isolated Steam instance with its own UID and session

Finally got around to installing Steam platform to a desktop linux machine.
Been using a Win7 instance here for games before, but since yet another fan in this laptop died, I've been too lazy to reboot into the dedicated games-OS.

Given that Steam is a closed-source proprietary DRM platform for mass software distribution, it seems to be either an ideal malware spread vector or just a recipe for disaster, so of course I'm not keen on giving it any access in a non-dedicated OS.

I also feel a bit guilty about giving the thing any extra PR, as it's the worst kind of always-on DRM crap in principle, and it has already pretty much monopolized the PC gaming market.
These days even many game critics push for filtering and, essentially, abuse of that immense leverage - not a good sign at all.
To its credit, of course, Steam is nice and convenient to use, as such things (e.g. google, fb, droids, apple, etc) tend to be.

So, isolation:

  • To avoid having Steam and any games anywhere near $HOME, giving it a separate UID is a good way to go.

  • That should also allow for it to run in a separate desktop session - i.e. have its own cgroup, to easily contain, control and set limits for games:

    % loginctl user-status steam
    steam (1001)
      Since: Sun 2014-06-15 18:40:34 YEKT; 31s ago
      State: active
      Sessions: *7
        Unit: user-1001.slice
              └─session-7.scope
                ├─7821 sshd: steam [priv]
                ├─7829 sshd: steam@notty
                ├─7830 -zsh
                ├─7831 bash /usr/bin/steam
                ├─7841 bash /home/steam/.local/share/Steam/steam.sh
                ├─7842 tee /tmp/dumps/steam_stdout.txt
                ├─7917 /home/steam/.local/share/Steam/ubuntu12_32/steam
                ├─7942 dbus-launch --autolaunch=e52019f6d7b9427697a152348e9f84ad ...
                └─7943 /usr/bin/dbus-daemon --fork --print-pid 5 ...
    
  • AppArmor should make it possible to further isolate processes from having any access beyond what's absolutely necessary for them to run, warn when they try to do strange things, and restrict them from doing outright stupid things.

  • Given separate UID and cgroup, network access from all Steam apps can be easily controlled via e.g. iptables, to avoid Steam and games scanning and abusing other things in LAN, for example.


Creating the steam user should be as simple as useradd steam, but switching to that UID from within a running DE should still allow it to access the same X server and start a systemd session for it, while not inheriting any extra env, permissions, dbus access, fds and such from the main session.

By far the easiest way to do that I've found is to just ssh steam@localhost, after putting a proper pubkey into ~steam/.ssh/authorized_keys, of course.
That ensures nothing leaks from the DE but whatever ssh passes, and since ssh is a rather paranoid, security-oriented tool, it can be trusted with that.
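A sketch of that one-time setup (an admin fragment, needing root; the pubkey path here is an assumption - use whichever key you actually have):

```shell
# Create the dedicated user and authorize the main user's ssh key for it.
sudo useradd -m steam
sudo install -d -m 700 -o steam -g steam /home/steam/.ssh
sudo install -m 600 -o steam -g steam ~/.ssh/id_rsa.pub \
  /home/steam/.ssh/authorized_keys
ssh steam@localhost whoami   # sanity check - should print "steam"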
Steam comes with a bootstrap script (e.g. /usr/bin/steam) to install itself, which also starts the thing once it's installed, so the Steam AppArmor profile (github link) covers that.
It should allow both bootstrapping/installing stuff and running it, yet not let steam poke too much into other shared dirs or processes.

To allow access to X, xhost or the ~/.Xauthority cookie can be used, along with some extra env in e.g. ~/.zshrc:

export DISPLAY=':1.0'
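For the xhost route, that'd be something like the following (a config fragment, run from the main DE session - the "si:" server-interpreted form scopes access to just that one local user, instead of opening the display up entirely):

```shell
xhost +si:localuser:steam
```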

In a similar fashion to ssh, I've used pulseaudio network streaming to the main DE's sound daemon on localhost for sound (also in ~/.zshrc):

export PULSE_SERVER='{e52019f6d7b9427697a152348e9f84ad}tcp6:malediction:4713'
export PULSE_COOKIE="$HOME"/.pulse-cookie

(I have pulse network streaming set up anyway, for sharing sound from desktop to laptop - to e.g. play videos on the big screen there yet hear the sound from the laptop's headphones)

Running Steam will also start its own dbus session (maybe it's the pulse client lib doing that, didn't check), but it doesn't seem to be used for anything, so there seems to be no need to share it with the main DE.


That should allow starting Steam after ssh'ing to steam@localhost, but the process can be made much easier (and more foolproof) with e.g. ~/bin/steam as:

#!/bin/bash

cmd=$1
shift

# Wait up to ~1s for the steam user's "steam" processes to exit.
steam_wait_exit() {
  for n in {0..10}; do
    pgrep -U steam -x steam >/dev/null || return 0
    sleep 0.1
  done
  return 1
}

case "$cmd" in
  # No arguments - start steam in its own session, then show its status.
  '')
    ssh steam@localhost <<EOF
source .zshrc
exec steam "$@"
EOF
    loginctl user-status steam ;;

  # "status" - show the steam session status.
  s*) loginctl user-status steam ;;

  # "kill" - ask steam to shut down cleanly, then kill off the session.
  k*)
    steam_exited=
    pgrep -U steam -x steam >/dev/null
    [[ $? -ne 0 ]] && steam_exited=t
    [[ -z "$steam_exited" ]] && {
      ssh steam@localhost <<EOF
source .zshrc
exec steam -shutdown
EOF
      steam_wait_exit
      [[ $? -eq 0 ]] && steam_exited=t
    }
    sudo loginctl kill-user steam
    [[ -z "$steam_exited" ]] && {
      steam_wait_exit || sudo loginctl -s KILL kill-user steam
    } ;;

  *) echo >&2 "Usage: $(basename "$0") [ status | kill ]"
esac

Now just steam in the main DE will run the thing in its own $HOME.

For further convenience, there's steam status and steam kill to easily monitor or shutdown running Steam session from the terminal.

Note the complicated shutdown logic - Steam doesn't react cleanly to INT or TERM signals, passing these to running games instead, and should be terminated via its own cli option (after which any leftovers can be killed off too).


With this setup, iptables rules for outgoing connections can use the user-slice cgroup match (in kernel 3.14+, at least) or -m owner --uid-owner steam matches on the socket owner uid.

The only non-WAN things Steam connects to here are DNS servers and aforementioned pulseaudio socket on localhost, the rest can be safely firewalled.
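A sketch of such rules as a config fragment (the LAN ranges and the idea of allowing everything else through are my assumptions here - adjust to the actual network):

```shell
# All rules match on the socket owner uid of the "steam" user.
iptables -A OUTPUT -m owner --uid-owner steam -p udp --dport 53 -j ACCEPT   # DNS
iptables -A OUTPUT -m owner --uid-owner steam -o lo -p tcp --dport 4713 -j ACCEPT  # pulse
iptables -A OUTPUT -m owner --uid-owner steam -d 10.0.0.0/8 -j REJECT       # no LAN
iptables -A OUTPUT -m owner --uid-owner steam -d 192.168.0.0/16 -j REJECT
iptables -A OUTPUT -m owner --uid-owner steam -j ACCEPT                     # WAN ok
```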


Finally, running KSP there on Exherbo, I quickly discovered that the sound libs and plugins - alsa and pulse - in the ubuntu "runtime" that steam bootstrap sets up don't work well - either there's no sound or the game fails to load at all.

Easy fix is to copy the runtime it uses (the 32-bit one for me) and clean out the alien stuff that's already present in the system, i.e.:

% cp -R .steam/bin32/steam-runtime my-runtime
% find my-runtime -type f \
  \( -path '*asound*' -o -path '*alsa*' -o -path '*pulse*' \) -delete

And then add something like this to ~steam/.zshrc:

steam() { STEAM_RUNTIME="$HOME"/my-runtime command steam "$@"; }

That should keep all the known-working ubuntu libs that steam bootstrap fetches away from the rest of the system (where stuff like Mono just isn't needed, and others will cause trouble), while allowing to remove any of them from the runtime in favor of the same thing from the system.

And yay - Kerbal Space Program seems to work here way faster than on Win7.

KSP and Steam on Linux

May 19, 2014

Displaying any lm_sensors data (temperature, fan speeds, voltage, etc) in conky

Conky sure has a ton of sensor-related hw-monitoring options, but they still don't seem to be enough to represent even just the temperatures from this "sensors" output:

atk0110-acpi-0
Adapter: ACPI interface
Vcore Voltage:      +1.39 V  (min =  +0.80 V, max =  +1.60 V)
+3.3V Voltage:      +3.36 V  (min =  +2.97 V, max =  +3.63 V)
+5V Voltage:        +5.08 V  (min =  +4.50 V, max =  +5.50 V)
+12V Voltage:      +12.21 V  (min = +10.20 V, max = +13.80 V)
CPU Fan Speed:     2008 RPM  (min =  600 RPM, max = 7200 RPM)
Chassis Fan Speed:    0 RPM  (min =  600 RPM, max = 7200 RPM)
Power Fan Speed:      0 RPM  (min =  600 RPM, max = 7200 RPM)
CPU Temperature:    +42.0°C  (high = +60.0°C, crit = +95.0°C)
MB Temperature:     +43.0°C  (high = +45.0°C, crit = +75.0°C)

k10temp-pci-00c3
Adapter: PCI adapter
temp1:        +30.6°C  (high = +70.0°C)
                       (crit = +90.0°C, hyst = +88.0°C)

radeon-pci-0400
Adapter: PCI adapter
temp1:        +51.0°C

Given the summertime and the faulty, noisy cooling fans, I decided it'd be nice to have an idea of what temperatures the hardware operates at under all sorts of routine tasks.

Conky is extensible via lua, which - among other awesome things there - makes it possible to cache expensive operations (and not just repeat them every other second) and to parse the output of whatever tools efficiently (i.e. without forking five extra binaries plus perl).

Output of "sensors" though not only is kinda expensive to get, but also hardly parseable, likely unstable, and tool doesn't seem to have any "machine data" option.

lm_sensors includes libsensors, which still doesn't seem possible to call from conky-lua directly (that would need some kind of ffi), but it's easy to write a wrapper around it - i.e. this sens.c 50-liner, dumping the info in a useful way:

atk0110-0-0__in0_input 1.392000
atk0110-0-0__in0_min 0.800000
atk0110-0-0__in0_max 1.600000
atk0110-0-0__in1_input 3.360000
...
atk0110-0-0__in3_max 13.800000
atk0110-0-0__fan1_input 2002.000000
atk0110-0-0__fan1_min 600.000000
atk0110-0-0__fan1_max 7200.000000
atk0110-0-0__fan2_input 0.000000
...
atk0110-0-0__fan3_max 7200.000000
atk0110-0-0__temp1_input 42.000000
atk0110-0-0__temp1_max 60.000000
atk0110-0-0__temp1_crit 95.000000
atk0110-0-0__temp2_input 43.000000
atk0110-0-0__temp2_max 45.000000
atk0110-0-0__temp2_crit 75.000000
k10temp-0-c3__temp1_input 31.500000
k10temp-0-c3__temp1_max 70.000000
k10temp-0-c3__temp1_crit 90.000000
k10temp-0-c3__temp1_crit_hyst 88.000000
radeon-0-400__temp1_input 51.000000

That's everything lm_sensors seems to know about the hw, in a simple key-value form.
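For reference, much of the same raw data is also exposed directly under /sys/class/hwmon these days, so a rough shell-only approximation of such a dump is possible too (a hypothetical sketch, not the actual sens.c - note that sysfs gives raw integers, e.g. temps in millidegrees C, instead of libsensors' scaled floats, and limit attributes are omitted here):

```shell
# Rough sysfs-only approximation of a sensors key-value dump.
dump_hwmon() {
  local hw f name
  for hw in /sys/class/hwmon/hwmon*; do
    [ -d "$hw" ] || continue          # no hwmon devices present at all
    name=$(cat "$hw/name" 2>/dev/null)
    for f in "$hw"/temp*_input "$hw"/fan*_input "$hw"/in*_input; do
      [ -r "$f" ] || continue         # skip absent/unreadable attributes
      printf '%s__%s %s\n' "$name" "$(basename "$f")" "$(cat "$f")"
    done
  done
}
```

It lacks the chip addressing (the "-0-c3" parts) and min/max/crit limits, which is exactly what libsensors resolves for you - hence the C wrapper.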

Still not keen on running that on every conky tick, hence the lua cache:

-- Cache for parsed "sens" output, refreshed at most every ts_read_i seconds
sensors = {
  values=nil,
  cmd="sens",
  ts_read_i=120, ts_read=0,
}

function conky_sens_read(name, precision)
  local ts = os.time()
  if os.difftime(ts, sensors.ts_read) > sensors.ts_read_i then
    -- Run the wrapper binary and parse its "key value" lines into a table
    local sh = io.popen(sensors.cmd, 'r')
    sensors.values = {}
    for p in string.gmatch(sh:read('*a'), '(%S+ %S+)\n') do
      local n = string.find(p, ' ')
      sensors.values[string.sub(p, 1, n-1)] = string.sub(p, n)
    end
    sh:close()
    sensors.ts_read = ts
  end

  -- Format the cached value with the requested precision, if present
  if sensors.values[name] then
    local fmt = string.format('%%.%sf', precision or 0)
    return string.format(fmt, sensors.values[name])
  end
  return ''
end

This runs the actual "sens" command at most every 120s, which is perfectly fine with me, since I consider conky not an "early warning" system but more of a "have an idea of what's the norm here" one.

Config-wise, it'd be just cpu temp: ${lua sens_read atk0110-0-0__temp1_input}C, or a fancier template version with a flashing warning, hidden entirely for missing sensors:

template3 ${color lightgrey}${if_empty ${lua sens_read \2}}${else}\
${if_match ${lua sens_read \2} > \3}${color red}\1: ${lua sens_read \2}C${blink !!!}\
${else}\1: ${color}${lua sens_read \2}C${endif}${endif}

It can then be used simply as ${template3 cpu atk0110-0-0__temp1_input 60} or ${template3 gpu radeon-0-400__temp1_input 80}, with 60 and 80 being manually-specified thresholds beyond which the indicator turns red and blinks "!!!" to get more attention.

Overall result in my case is something like this:

conky sensors display

sens.c (plus a Makefile with gcc -Wall -lsensors for it) and my conky config where it's used can all be found in the de-setup repo on github (or my git mirror, ofc).

May 12, 2014

My Firefox Homepage

Wanted to have some sort of "homepage with my fav stuff, arranged as I want to" in firefox for a while, and finally got the resolve to do something about it - just finished a (first version of) script to generate the thing - firefox-homepage-generator.

Default "grid of page screenshots" never worked for me, and while there are other projects that do other layouts for different stuff, they just aren't flexible enough to do whatever horrible thing I want.

In this particular case, I wanted to experiment with chaotic tag cloud of bookmarks (so they won't ever be in the same place), relations graph for these tags and random picks for "links to read" from backlog.

Result is a dynamic d3 + d3.layout.cloud (interactive example of this layout) page without much style:

homepage screenshot
"Mark of Chaos" button in the corner can fly/re-pack tags around.
Clicking a tag shows bookmarks tagged as such and fades all other tags out in proportion to how related they are to the clicked one (i.e. how many links share the tag with others).

Started using FF bookmarks again in a meaningful way only recently, so not much stuff there yet, but it does seem to help a lot, especially with these handy awesome bar tricks.

Not entirely sure how useful the cloud visualization or actually having a homepage would be, but it's a fun experiment and a nice place to collect any useful web-surfing-related stuff I might think of in the future.

Repo link: firefox-homepage-generator
