May 19, 2015

VoD (video-on-demand) downloading issues and fixes

Quite often recently, twitch VoDs are just unplayable for me through the flash player - no idea what happens at the backend there, but it buffers endlessly at any quality level and that's it.

I also need to skip to some arbitrary part in the 7-hour stream (last wcs sc2 ro32), as I've watched half of it live, which turns out to complicate things a bit.

So the solution is to download the thing, which goes something like this:

  • It's just a video, right? Let's grab whatever stream flash is playing (with e.g. FlashGot FF addon).

    Doesn't work easily, since video is heavily chunked.
    It used to be 30-min flv chunks, which are kinda ok, but these days it's forced 4s chunks - apparently backend doesn't even allow downloading more than 4s per request.
  • Fine, youtube-dl it is.

    Nope. Doesn't allow seeking to time in stream.
    There's an open "Download range" issue for that.

    livestreamer wrapper around the thing doesn't allow it either.

  • Try to use ?t=3h30m URL parameter - doesn't work, sadly.

  • mpv supports youtube-dl and seek, so use that.

    Kinda works, but only for super-short seeks.
    Seeking beyond e.g. 1 hour takes AGES, and every seek after that (even skipping few seconds ahead) takes longer and longer.
  • youtube-dl --get-url gets m3u8 playlist link, use ffmpeg -ss <pos> with it.

    Apparently works exactly the same as mpv above - takes like 20-30min to seek to 3:30:00 (3.5 hour offset).

    Dunno if it downloads and checks every chunk in the playlist for length sequentially... sounds dumb, but no clue why it's that slow otherwise, apparently just not good with these playlists.

  • Grab the m3u8 playlist, change all relative links there into full urls, remove bunch of them from the start to emulate seek, play that with ffmpeg | mpv.

    Works at first, but gets totally stuck a few seconds/minutes into the video, with ffmpeg doing bitrates of ~10 KiB/s.

    youtube-dl apparently gets stuck in a similar fashion, as it does the same ffmpeg-on-a-playlist (but without changing it) trick.

  • Fine! Just download all the damn links with curl.

    grep '^http:' pls.m3u8 | xargs -n50 curl -s | pv -rb -i5 > video.mp4

    Makes it painfully obvious why flash player and ffmpeg/youtube-dl get stuck - eventually curl stumbles upon a chunk that downloads at a few KiB/s.

    This "stumbling chunk" appears to be a random one, unrelated to local bandwidth limitations, and just re-trying it fixes the issue.

  • Assemble a list of links and use some more advanced downloader that can do parallel downloads, plus detect and retry super-low speeds.

    Naturally, it's aria2, but with all the parallelism it appears to be impossible to guess which output file will be which with just a cli.

    Mostly due to links having same url-path, e.g. index-0000000014-O7tq.ts?start_offset=955228&end_offset=2822819 with different offsets (pity that backend doesn't seem to allow grabbing range of that *.ts file of more than 4s) - aria2 just does file.ts, file.ts.1, file.ts.2, etc - which are not in playlist-order due to all the parallel stuff.

  • Finally, as acceptance dawns, go and write your own youtube-dl/aria2 wrapper to properly seek necessary offset (according to playlist tags) and download/resume files from there, in a parallel yet ordered and controlled fashion.

    This is done by using --on-download-complete hook with passing ordered "gid" numbers for each chunk url, which are then passed to the hook along with the resulting path (and hook renames file to prefix + sequence number).

Ended up with the chunk of the stream I wanted, locally (online playback lag never goes away!), downloaded super-fast and seekable.

Resulting script is twitch_vod_fetch (script source link).

Update 2017-06-01: rewritten it using python3/asyncio since then, so stuff related to specific implementation details here is only relevant for old py2 version (can be pulled from git history, if necessary).

aria2c magic bits in the script:

aria2c = subprocess.Popen([
  'aria2c',
  # ... rpc, timeout, retry and bandwidth-limit options elided here,
  #  see the actual script source for the full list ...
  '--no-netrc', '--no-proxy',
], close_fds=True)

Didn't bother adding extra options for tweaking these via cli, but might be a good idea to adjust timeouts and limits for a particular use-case (see also the massive "man aria2c").

Seeking in playlist is easy, as it's essentially a VoD playlist, and every ~4s chunk is preceded by e.g. #EXTINF:3.240, tag, with its exact length, so script just skips these as necessary to satisfy --start-pos / --length parameters.
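
In code, that seek-by-EXTINF logic can be sketched roughly like this (a simplified illustration with made-up names, not the actual script's code - it assumes a flat VoD playlist with one #EXTINF tag per chunk URL):

```python
def playlist_slice(m3u8_text, start=0, length=None):
    # Pick chunk URLs from a VoD playlist, skipping/limiting by the
    # #EXTINF:<duration>, tags to emulate --start-pos / --length seeking.
    pos, duration, picked = 0.0, 0.0, []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith('#EXTINF:'):
            duration = float(line.split(':', 1)[1].split(',')[0])
        elif line and not line.startswith('#'):
            if pos >= start and (length is None or pos < start + length):
                picked.append(line)
            pos += duration  # chunk boundary - advance playback position
    return picked
```
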

Queueing all downloads, each with its own particular gid, is done via JSON-RPC, as it seems to be impossible to:

  • Specify both link and gid in the --input-file for aria2c.
  • Pass an actual download URL or any sequential number to --on-download-complete hook (except for gid).

So each gid is just generated as "000001", "000002", etc, and hook script is a one-liner "mv" command.
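
For illustration, building one such queueing request might look like this - a rough sketch, not the actual script's code (aria2's RPC wants gids as 16 hex digits, so "sequential" gids get zero-padded; the function name and output-naming scheme here are made up):

```python
import json

def addurl_request(n, url, prefix):
    # Build one aria2 JSON-RPC "aria2.addUri" request, forcing a known
    # sequential gid and an explicit output name, so the hook (or a later
    # concatenation step) can put the chunks back in playlist order.
    gid = format(n, '016x')
    return json.dumps(dict(
        jsonrpc='2.0', id=gid, method='aria2.addUri',
        params=[[url], {'gid': gid, 'out': '{}.{:08d}.ts'.format(prefix, n)}] ))
```
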

Since all stuff in the script is kinda lengthy time-wise - e.g. youtube-dl --get-url takes a while, then the actual downloads, then concatenation, ... - it's designed to be Ctrl+C'able at any point.

Every step just generates a state-file like "my_output_prefix.m3u8", and next one goes on from there.
Restarting the script doesn't repeat these steps, and the files can be freely mangled or removed to force re-doing a step (or to adjust behavior in whatever way).
An example of a useful restart might be removing the *.m3u8.url and *.m3u8 files if twitch starts giving 404's due to expired links in there.
That won't force re-downloading of any chunks - the script will only grab still-missing ones and assemble the resulting file.

End-result is one my_output_prefix.mp4 file with specified video chunk (or full video, if not specified), plus all the intermediate litter (to be able to restart the process from any point).

One issue I've spotted with the initial version:

05/19 22:38:28 [ERROR] CUID#77 - Download aborted. URI=...
Exception: [] errorCode=1 URI=...
  -> [] errorCode=1 Download aborted.
  -> []
    errorCode=1 total length mismatch. expected: 1924180, actual: 1789572
05/19 22:38:28 [NOTICE] Download GID#0035090000000000 not complete: ...

Seem to be a few of these mismatches (like 5 out of 10k chunks), which don't get retried, as aria2 doesn't seem to consider these to be transient errors (which is probably fair).

Probably a twitch bug, as it clearly breaks http there, and browsers shouldn't accept such responses either.

Can be fixed by one more hook, I guess - either --on-download-error (to make script retry url with that gid), or the one using websocket and getting json notification there.

In any case, just running same command again to download a few of these still-missing chunks and finish the process works around the issue.

Update 2015-05-22: Issue clearly persists for vods from different chans, so fixed it via simple "retry all failed chunks a few times" loop at the end.

Update 2015-05-23: Apparently it's due to aria2 reusing same files for different urls and trying to resume downloads, fixed by passing --out for each download queued over api.
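
Such a retry pass can be as simple as re-queueing whatever chunk files are still missing after a full run - a sketch with made-up names:

```python
import os

def missing_chunks(chunks, exists=os.path.exists):
    # chunks: list of (url, local_path) pairs, in playlist order.
    # Returns the ones that still need (re-)fetching after a download pass.
    return [(url, path) for url, path in chunks if not exists(path)]

# The "retry all failed chunks a few times" loop then amounts to:
#   for attempt in range(3):
#       left = missing_chunks(chunks)
#       if not left: break
#       download(left)  # hypothetical - queue via aria2 rpc as above
```
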

[script source link]

Sep 23, 2014

tmux rate-limiting magic against terminal spam/flood lock-ups

Update 2015-11-08: No longer necessary (or even supported in 2.1) - tmux' new "backoff" rate-limiting approach works like a charm with defaults \o/

Had the issue of spammy binary locking-up terminal for a long time, but never bothered to do something about it... until now.

Happens with any terminal I've seen - just run something like this in the shell there:

# for n in {1..500000}; do echo "$spam $n"; done
And for at least several seconds, terminal is totally unresponsive, no matter how many screen's / tmux'es are running there.
It's usually faster to kill term window via WM and re-attach to whatever was inside from a new one.

xterm seems to be one of the most resistant *terms to this, e.g. terminology - much less so, which I guess just means that it's fancier and hence slower to draw millions of output lines.

Anyhow, tmuxrc magic:

set -g c0-change-trigger 150
set -g c0-change-interval 100

"man tmux" says that 250/100 are defaults, but it doesn't seem to be true, as just setting these "defaults" explicitly here fixes the issue, which exists with the default configuration.

Fix just limits rate of tmux output to basically 150 newlines (which is like twice my terminal height anyway) per 100 ms, so xterm won't get overflown with "rendering megs of text" backlog, remaining apparently-unresponsive (to any other output) for a while.

Since I always run tmux as a top-level multiplexer in xterm, this totally solved the annoyance for me. Just wish I'd done it much sooner - would've saved me a lot of time and probably some rage-burned neurons.

Aug 14, 2011

Notification-daemon in python

I've delayed the update of the whole libnotify / notification-daemon / notify-python stack for a while now, because notification-daemon got too GNOME-oriented around 0.7, making it a lot simpler, but sadly dropping lots of good stuff I've used there.
Default nice-looking theme is gone in favor of black blobs (although colors are probably subject to gtkrc); it's one-note-at-a-time only, which makes reading them intolerable; configurability was dropped as well, guess blobs follow some gnome-panel settings now.
Older notification-daemon versions won't build with newer libnotify.
Same problem with notify-python, which seems to be unnecessary now, since its functionality is accessible via introspection and PyGObject (the part known as PyGI before the merge - gi.repository.Notify).
Looking for more-or-less drop-in replacements I've found notipy project, which looked like what I needed, and the best part is that it's python - no need to filter notification requests in a proxy anymore, eliminating some associated complexity.
The project has a bit different goals however - simplicity, fewer deps and concept separation - so I incorporated (more-or-less) notipy as a simple NotificationDisplay class into notification-proxy, making it into notification-thing (first name that came to mind, not that it matters).
All the rendering now is in python using PyGObject (gi) / gtk-3.0 toolkit, which seems to be a good idea, given that I still have no reason to keep Qt in my system, and gtk-2.0 being obsolete.
Exploring newer Gtk stuff like css styling and honest auto-generated interfaces was fun, although the whole mess seems to be much harder than expected. Simple things like adding a border, margins or some non-solid background to existing widgets seem to be very complex and totally counter-intuitive, unlike, say, doing the same (even in totally cross-browser fashion) with html. I also failed to find a way to just draw what I want on arbitrary widgets - looks like it was removed (in favor of GtkDrawable) on purpose.
My (uneducated) guess is that gtk authors geared toward a "one way to do one thing" philosophy, but unlike the Python motto, they had to ditch the "one *obvious* way" part. But then, maybe it's just me being too lazy to read docs properly.
All the previous features like filtering and rate-limiting are there.

Looking over Desktop Notifications Spec in process, I've noticed that there are more good ideas that I'm not using, so guess I might need to revisit local notification setup in the near future.

Feb 26, 2011

cgroups initialization, libcgroup and my ad-hoc replacement for it

Linux control groups (cgroups) rock.
If you've never used them at all, you bloody well should.
"git gc --aggressive" of a linux kernel tree killing you disk and cpu?
Background compilation makes desktop apps sluggish? Apps step on each others' toes? Disk activity totally kills performance?

I've lived with all of the above on the desktop in the (not so distant) past and cgroups just make all this shit go away - even forkbombs and super-multithreaded i/o can just be isolated in their own cgroup (from which there's no way to escape, not with any amount of forks) and scheduled/throttled (cpu hard-throttling - w/o cpusets - seem to be coming soon as well) as necessary.

Some problems with process classification and management of these limits seem to exist though.

Systemd does a great job of classification of everything outside of user session (i.e. all the daemons) - any rc/cgroup can be specified right in the unit files or set by default via system.conf.
And it also makes all this stuff neat and tidy because cgroup support there is not even optional - it's basic mechanism on which systemd is built, used to isolate all the processes belonging to one daemon or the other in place of hopefully-gone-for-good crappy and unreliable pidfiles. No renegade processes, leftovers, pids mixups... etc, ever.
Bad news however is that all the cool stuff it can do ends right there.
Classification is nice, but there's little point in it from resource-limiting perspective without setting the actual limits, and systemd doesn't do that (recent thread on systemd-devel).
Besides, no classification for user-started processes means that desktop users are totally on their own, since all resource consumption there branches off the user's fingertips. And desktop is where responsiveness actually matters for me (as in "me the regular user"), so clearly something is needed to create cgroups, set limits there and classify processes to be put into these cgroups.
libcgroup project looks like the remedy at first, but since I started using it about a year ago, it's been nothing but the source of pain.
First task that stands before it is to actually create cgroups' tree, mount all the resource controller pseudo-filesystems and set the appropriate limits there.
libcgroup project has cgconfigparser for that, which is probably the most brain-dead tool one can come up with. Configuration is stone-age pain in the ass, making you clench your teeth, fuck all the DRY principles and write N*100 lines of crap for even the simplest tasks, with as much punctuation as possible crammed in w/o making eyes water.
Then, that cool toy parses the config, giving no indication where you messed it up but the dreaded message like "failed to parse file". Maybe it's not harder to get right by hand than XML, but at least XML-processing tools give some useful feedback.
Syntax aside, the tool still sucks hard when it comes to applying all the stuff there - it either does every mount/mkdir/write w/o error or just gives you the same "something failed, go figure" indication. Something being already mounted counts as a failure as well, so it doesn't play along with anything, including systemd.
Worse yet, when it inevitably fails, it starts a "rollback" sequence, unmounting and removing all the shit it was supposed to mount/create.
After killing all the configuration you could've had, it will fail anyway. strace will show you why, of course, but if that's the feedback mechanism the developers had in mind...
Surely, classification tools there can't be any worse than that? Wrong, they certainly are.
Maybe C-API is where the project shines, but I have no reason to believe that, and luckily I don't seem to have any need to find out.

Luckily, cgroups can be controlled via regular filesystem calls, and are thoroughly documented (in Documentation/cgroups).

Anyways, my humble needs (for the purposes of this post) are:

  • isolate compilation processes, usually performed by "cave" client of paludis package mangler (exherbo) and occasionally shell-invoked make in a kernel tree, from all the other processes;
  • insulate specific "desktop" processes like firefox and occasional java-based crap from the rest of the system as well;
  • create all these hierarchies in a freezer and have a convenient stop-switch for these groups.

So, how would I initially approach it with libcgroup? Ok, here's the cgconfig.conf:

### Mounts

mount {
  cpu = /sys/fs/cgroup/cpu;
  blkio = /sys/fs/cgroup/blkio;
  freezer = /sys/fs/cgroup/freezer;
}

### Hierarchical RCs

group tagged/cave {
  perm {
    task {
      uid = root;
      gid = paludisbuild;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  cpu {
    cpu.shares = 100;
  }
  freezer {
  }
}

group tagged/desktop/roam {
  perm {
    task {
      uid = root;
      gid = users;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  cpu {
    cpu.shares = 300;
  }
  freezer {
  }
}

group tagged/desktop/java {
  perm {
    task {
      uid = root;
      gid = users;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  cpu {
    cpu.shares = 100;
  }
  freezer {
  }
}

### Non-hierarchical RCs (blkio)

group tagged.cave {
  perm {
    task {
      uid = root;
      gid = users;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  blkio {
    blkio.weight = 100;
  }
}

group tagged.desktop.roam {
  perm {
    task {
      uid = root;
      gid = users;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  blkio {
    blkio.weight = 300;
  }
}

group tagged.desktop.java {
  perm {
    task {
      uid = root;
      gid = users;
    }
    admin {
      uid = root;
      gid = root;
    }
  }
  blkio {
    blkio.weight = 100;
  }
}
Yep, it's huge, ugly and stupid.
Oh, and you have to do some chmods afterwards (more wrapping!) to make the "group ..." lines actually matter.

So, what do I want it to look like? This:

path: /sys/fs/cgroup

defaults:
  _tasks: root:wheel:664
  _admin: root:wheel:644
  freezer:

groups:
  tagged:
    _default: true
    cpu.shares: 1000
    blkio.weight: 1000

    cave:
      _tasks: root:paludisbuild
      _admin: root:paludisbuild
      cpu.shares: 100
      blkio.weight: 100

    desktop:
      roam:
        _tasks: root:users
        cpu.shares: 300
        blkio.weight: 300
      java:
        _tasks: root:users
        cpu.shares: 100
        blkio.weight: 100

It's parseable and readable YAML, not some parenthesis-semicolon nightmare of a C junkie (and if you think the indentation spaces don't matter there, btw... well, think again!).

After writing that config-I-like-to-see, I just spent a few hours writing a script to apply all the rules there, while providing all the debugging facilities I could think of, and wiped my system clean of libcgroup - it's that simple.
Didn't have to touch the parser again or debug it either (especially with - god forbid - strace), everything just worked as expected, so I thought I'd dump it here jic.

Configuration file above (YAML) consists of three basic definition blocks:

"path" to where cgroups should be initialized.
Names for the created and mounted rc's are taken right from "groups" and "defaults" sections.
Yes, that doesn't allow mounting "blkio" resource controller to "cpu" directory, guess I'll go back to using libcgroup when I'd want to do that... right after seeing the psychiatrist to have my head examined... if they'd let me go back to society afterwards, that is.
"groups" with actual tree of group parameter definitions.
Two special nodes here - "_tasks" and "_admin" - may contain (otherwise the stuff from "defaults" is used) ownership/modes for all cgroup knob-files ("_admin") and "tasks" file ("_tasks"), these can be specified as "user[:group[:mode]]" (with brackets indicating optional definition, of course) with non-specified optional parts taken from the "defaults" section.
Limits (or any other settings for any kernel-provided knobs there, for that matter) can either be defined on per-rc-dict basis, like this:
  _tasks: root:users
  cpu:
    shares: 300
  blkio:
    weight: 300
    throttle.write_bps_device: 253:9 1000000

Or just with one line per rc knob, like this:

  _tasks: root:users
  cpu.shares: 300
  blkio.weight: 300
  blkio.throttle.write_bps_device: 253:9 1000000
Empty dicts (like "freezer" in "defaults") will just create cgroup in a named rc, but won't touch any knobs there.
And the "_default" parameter indicates that every pid/tid, listed in a root "tasks" file of resource controllers, specified in this cgroup, should belong to it. That is, act like default cgroup for any tasks, not classified into any other cgroup.
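
What that "_default" handling boils down to, sketched in python (a made-up function for illustration, not cgconf's actual code):

```python
import os

def set_default_cgroup(rcs, cgroup, mount='/sys/fs/cgroup'):
    # Move every task listed in each rc's root "tasks" file into the
    # given cgroup, making it the catch-all for unclassified pids/tids.
    for rc in rcs:
        root_tasks = os.path.join(mount, rc, 'tasks')
        cg_tasks = os.path.join(mount, rc, cgroup, 'tasks')
        with open(root_tasks) as src:
            pids = src.read().split()
        for pid in pids:
            try:
                with open(cg_tasks, 'a') as dst:
                    dst.write(pid + '\n')
            except IOError:
                pass  # task can be gone by now - not an error
```
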

"defaults" section mirrors the structure of any leaf cgroup. RCs/parameters here will be used for created cgroups, unless overidden in "groups" section.

Script to process this stuff (cgconf) can be run with --debug to dump a shitload of info about every step it takes (and why it does that), plus with --dry-run flag to just dump all the actions w/o actually doing anything.
cgconf can be launched as many times as needed to get the job done - it won't unmount anything (what for? out of fear of data loss on a pseudo-fs?), it will just create/mount missing stuff, adjust defined permissions and set defined limits without touching anything else. Thus it will work alongside everything else that may also be using these hierarchies - systemd, libcgroup, ulatencyd, whatever... just set what you need to adjust in the .yaml and it will be there after a run, no side effects.
cgconf.yaml (.yaml, generally speaking) file can be put alongside cgconf or passed via the -c parameter.
Anyway, -h or --help is there, in case of any further questions.

That handles the limits and the initial (default cgroup for all tasks) classification part, but then chosen tasks also need to be assigned to dedicated cgroups.

libcgroup has pam_cgroup module and cgred daemon, neither of which can sensibly (re)classify anything within a user session, plus cgexec and cgclassify wrappers to basically do "echo $$ >/.../some_cg/tasks && exec $1" or just "echo" respectively.
These are dumb simple, nothing done there to make them any easier than echo, so even using libcgroup I had to wrap these.
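
The whole trick these wrappers perform fits in a few lines of python (a sketch with made-up names, not the actual cgrc code):

```python
import os

def task_files(cgroup, rcs, mount='/sys/fs/cgroup'):
    # "tasks" files for a given cgroup, in every rc where it exists.
    paths = (os.path.join(mount, rc, cgroup, 'tasks') for rc in rcs)
    return [p for p in paths if os.path.exists(p)]

def cg_exec(cgroup, cmd, rcs=('cpu', 'blkio', 'freezer')):
    # The "echo $$ >/.../some_cg/tasks && exec $1" thing, as python.
    for tasks in task_files(cgroup, rcs):
        with open(tasks, 'a') as dst:
            dst.write('{}\n'.format(os.getpid()))
    os.execvp(cmd[0], cmd)
```
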

Since I knew exactly which (few) apps should be confined to which groups, I just wrote simple wrapper scripts for each, putting these in a separate dir at the head of PATH. Example:

#!/usr/local/bin/cgrc -s desktop/roam
/usr/bin/firefox
cgrc script here is a dead-simple wrapper that parses the cgroup parameter, puts itself into the corresponding cgroup within every rc where it exists (with a special name-conversion for not-yet-hierarchical rc's like blkio - there's a patchset for that though), then exec's the specified binary with all the passed arguments.
All the parameters after the cgroup (or after "-g <cgroup>", for the sake of clarity) go to the specified binary. The "-s" option indicates that the script is used in a shebang, so it'll read the command from the file specified in argv after that and pass all the further arguments to it.
Otherwise the cgrc script can be used as "cgrc -g desktop/roam /usr/bin/firefox" or "cgrc desktop/roam /usr/bin/firefox", so it's actually painless and effortless to use right from the interactive shell. Amen for the crappy libcgroup tools.
Another special use-case for cgroups I've found useful on many occasions is a "freezer" thing - no matter how many processes compilation (or whatever other cgroup-confined stuff) forks, they can be instantly and painlessly stopped and resumed afterwards.
cgfreeze, a dozen-liner script, addresses this need in my case - "cgfreeze cave" will stop the "cave" cgroup, "cgfreeze -u cave" will resume it, and "cgfreeze -c cave" will just show its current status, see -h there for details. No pgrep, kill -STOP or ^Z involved.
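
For reference, the freezer trick itself is just one knob-file write per cgroup (a sketch, not the actual cgfreeze code - a real freezer may report "FREEZING" for a while before settling):

```python
import os

def cg_freeze(cgroup, thaw=False, mount='/sys/fs/cgroup/freezer'):
    # Freeze (or thaw) every process in a cgroup at once, no matter how
    # many there are, by writing to its freezer.state knob-file.
    state = os.path.join(mount, cgroup, 'freezer.state')
    with open(state, 'w') as dst:
        dst.write('THAWED' if thaw else 'FROZEN')
    with open(state) as src:
        return src.read().strip()
```
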

Guess I'll direct the next poor soul struggling with libcgroup here, instead of wasting time explaining how to work around that crap and facing the inevitable question "what else is there?" *sigh*.

All the mentioned scripts can be found here.

Dec 09, 2010

Further improvements for notification-daemon

It's been a while since I augmented libnotify / notification-daemon stack to better suit my (maybe not-so-) humble needs, and it certainly was an improvement, but there's no limit to perfection and since then I felt the urge to upgrade it every now and then.

One early fix was to let messages with priority=critical through without delays and aggregation. I've learned it the hard way when my laptop shut down because of drained battery without me registering any notification about it.
Other good candidates for high priority seem to be real-time messages like emms track updates and network connectivity loss events, which are either too important to ignore or just don't make much sense after a delay. Implementation here is straightforward - just check the urgency level and pass these unhindered to notification-daemon.
Another important feature which seems to be missing in the reference daemon is the ability to just clean the screen of notifications. Sometimes you just need to dismiss them to see the content beneath, or you've read them already and don't want them drawing any more attention.
The only available interface for that seems to be the CloseNotification method, which can only close a notification using its id, hence it's only useful from the application that created the note. Kinda makes sense to avoid apps stepping on each other's toes, but since the ids in question are sequential, it won't be much of a problem for an app to abuse this mechanism anyway.
The proxy script, sitting in the middle of dbus communication as it is, doesn't have to guess these ids, as it can just keep track of them.
So, to clean up the occasional notification-mess I extended the CloseNotification method to accept 0 as a "special" id, closing all the currently-displayed notifications.
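
The proxy-side bookkeeping for that is tiny - something like this sketch (a made-up class, not the actual script's code):

```python
class NotificationTracker(object):
    # Remember ids of currently-displayed notes, so that
    # CloseNotification(0) can be translated into "close everything".
    def __init__(self):
        self.active = set()
    def shown(self, nid):
        self.active.add(nid)
    def to_close(self, nid):
        # Returns the list of real ids to pass on to the daemon.
        if nid:
            self.active.discard(nid)
            return [nid]
        ids, self.active = sorted(self.active), set()
        return ids
```
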

Binding it to a key is just a matter of (a bit inelegant, but powerful) dbus-send tool invocation:

% dbus-send --type=method_call \
    --dest=org.freedesktop.Notifications \
    /org/freedesktop/Notifications \
    org.freedesktop.Notifications.CloseNotification uint32:0
Expanding on the idea of occasional distraction-free needs, I found quite appealing the ability to "plug" the notification system - collecting notifications into the same digest behind the scenes (yet passing urgent ones through, if that behavior is enabled) when necessary - so I just added a flag akin to the "fullscreen" check, forcing notification aggregation regardless of rate when it's set.
Of course, some means of control over this flag was necessary, so another extension of the interface was to add "Set" method to control notification-proxy options. Method was also useful to occasionally toggle special "urgent" messages treatment, so I empowered it to do so as well by making it accept a key-value array of parameters to apply.
And since now there is a plug, I also found it handy to have a complementary "Flush" method to dump the last digested notifications.
The same handy dbus-send tool comes to the rescue again when these need to be toggled or set via cli:

% dbus-send --type=method_call \
    --dest=org.freedesktop.Notifications \
    /org/freedesktop/Notifications \
    org.freedesktop.Notifications.Set ...
In contrast to the cleanup, I occasionally found myself monitoring low-traffic IRC conversations entirely through notification boxes - no point switching apps if you can read whole lines right there. But there was a catch, of course: you have to immediately switch attention from whatever you're doing to a notification box to be able to read it before it times out and disappears, which is quite inconvenient.
Easy solution is to just override "timeout" value in notification boxes to make them stay as long as you need to, so one more flag for the "Set" method to handle plus one-liner check and there it is.
Now it's possible to read them with minimum distraction from the current activity and dismiss via mentioned above extended CloseNotification method.
As if the above was not enough, sometimes I found myself willing to read and react to the stuff from one set of sources, while temporarily ignoring the traffic from the others, like when you're working at some hack, discussing it (and the current implications / situation) in parallel over jabber or irc, while heated discussion (but interesting none the less) starts in another channel.
Shutting down the offending channel in ERC, leaving a BNC to monitor the conversation, or just suppressing notifications with some ERC command would probably be the right way to handle that, yet it's not always that simple, especially since every notification-enabled app would then have to implement some way of doing that, which of course is not the case at all.
Remedy is in the customizable filters for notifications, which can be a simple set of regex'es, dumped into some specific dot-file, but even as I started to implement the idea, I could think of several different validation scenarios like "match summary against several regexes", "match message body", "match simple regex with a list of exceptions" or even some counting and more complex logic for them.
The idea of inventing yet another perlish (poorly-designed, minimal, ambiguous, write-only) DSL for filtering rules didn't strike me as an exactly bright one, so I looked for some lib implementation of a clearly-defined and thought-through syntax for such needs, yet found nothing designed purely for such a filtering task (could be one of the reasons why every tool and daemon hard-codes its own DSL for that *sigh*).
On that note I thought of some generic yet easily extensible syntax for such rules, and came to realization that simple SICP-like subset of scheme/lisp with regex support would be exactly what I need.
Luckily, there are plenty of implementations of such embedded languages in python, and since I needed a really simple and customizable one, I've decided to stick with the extended 90-line "lis.py", described by Peter Norvig here and extended here. Out went the unnecessary file-handling, in went regexes and some minor fixes, and the result is a "make it into whatever you need" language.
Just added a stat and mtime check on a dotfile, reading and compiling the matcher-function from it on any change. Contents may look like this:
(define-macro define-matcher (lambda
  (name comp last rev-args)
  `(define ,name (lambda args
    (if (= (length args) 1) ,last
      (let ((atom (car args)) (args (cdr args)))
        (,comp
          (~ ,@(if rev-args '((car args) atom) '(atom (car args))))
          (apply ,name (cons atom (cdr args))))))))))

(define-matcher ~all and #t #f)
(define-matcher all~ and #t #t)
(define-matcher ~any or #f #f)
(define-matcher any~ or #f #t)
(lambda (summary body)
  (not (or
    (and
      (~ "^erc: #\S+" summary)
      (~ "^\*\*\* #\S+ (was created on|modes:) " body))
    (all~ summary "^erc: #pulseaudio$" "^mail:"))))
Which kinda shows what you can do with it, making your own syntax as you go along (note that stuff like "and" is also a macro, just defined on a higher level).
Even with weird macros I find it much more comprehensible than rsync filters, apache/lighttpd rewrite magic or pretty much any pseudo-simple magic set of string-matching rules I had to work with.
I considered using python itself to the same end, but found that its syntax is both more verbose and less flexible/extensible for such a goal, plus it allows doing far too much for a simple filtering script which can potentially be evaluated by a process with elevated privileges, hence it would need some sort of sandboxing anyway.
In my case all this stuff is bound to convenient key shortcuts via fluxbox wm:
# Notification-proxy control
Print :Exec dbus-send --type=method_call\
    /org/freedesktop/Notifications org.freedesktop.Notifications.Set\
Shift Print :Exec dbus-send --type=method_call\
    /org/freedesktop/Notifications org.freedesktop.Notifications.Set\
Pause :Exec dbus-send --type=method_call\
Shift Pause :Exec dbus-send --type=method_call\

Pretty sure there's more room for improvement in this aspect, so I'd have to extend the system once again, which is fun all by itself.

Resulting (and maybe further extended) script is here, now linked against a bit revised scheme implementation.

Feb 26, 2010

libnotify, notification-daemon shortcomings and my solution

Everyone who uses an OSS desktop these days has probably seen libnotify magic in action - small popup windows that appear at some corner of the screen, announcing events from other apps.

libnotify itself, however, is just a convenience lib for dispatching these notifications over dbus, so the latter can pass them to an app listening on this interface, or even start it beforehand.
The standard app for rendering such messages is notification-daemon, which is developed alongside libnotify, but there are drop-in replacements like xfce4-notifyd or the e17 notification module. In the dbus rpc mechanism, call signatures are clearly defined and visible, so it's pretty easy to implement a replacement for the aforementioned daemons; plus vanilla notification-daemon has introspection calls, and dbus itself can be easily monitored (via the dbus-monitor utility), which makes its implementation even more transparent.

Now, polling every window for updates manually is quite inefficient - new mail, xmpp messages, IRC chat lines, system events etc sometimes arrive every few seconds, and going over all the windows (and by that I mean workspaces where they're stacked) just to check them is a huge waste of time, especially when some (or even most, in case of IRC) of these are not really important.
Either response time or focus (and, in extreme case, sanity) has to be sacrificed in such approach. Luckily, there's another way to monitor this stuff - small pop-up notifications allow to see what's happening right away, w/o much attention-switching or work required from an end-user.

But that's the theory.
In practice, I've found that enabling notifications in IRC or jabber is pretty much pointless, since you'll be swarmed by these as soon as any real activity starts there. And w/o them it's a stupid wasteful poll practice, mentioned above.

Notification-daemon has no tricks to remedy the situation, but since the whole thing is so abstract and transparent I've had no problem making my own fix.
Notification digest

The solution I came up with is to batch the notification messages into a digest as soon as there are too many of them, displaying such digest pop-ups at some time interval, so I can keep a grip on what's going on just by glancing at these as they arrive, switching my activities if something there is important enough.

Having played with schedulers and network shaping/policing before, not much imagination was required to devise a way to control the message flow rate.
I chose the token-bucket algorithm at first, but since a prolonged flood of I-don't-care-about activity has gradually decreasing value, I didn't want to receive digests of it every N seconds, so I batched it with a gradual digest-interval increase and a leaky-bucket mechanism, so digests won't get too fat over these intervals.
Well, the result exceeded my expectations, and now I can use libnotify freely even to indicate that some rsync just finished in a terminal on another workspace. Wonder why such stuff isn't built into existing notification daemons...
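
A minimal token-bucket core, for illustration (a sketch, not the actual implementation - which also adjusts the flow on constant under/overflow):

```python
import time

class TokenBucket(object):
    # Classic token bucket: tokens refill at "rate" per second, capped at
    # "burst". Each notification spends a token; when the bucket runs dry,
    # notifications get queued into the digest instead of shown right away.
    def __init__(self, rate, burst, now=time.time):
        self.rate, self.burst, self.now = rate, burst, now
        self.tokens, self.ts = burst, now()
    def consume(self, n=1):
        ts = self.now()
        self.tokens = min(self.burst, self.tokens + (ts - self.ts) * self.rate)
        self.ts = ts
        if self.tokens >= n:
            self.tokens -= n
            return True  # display immediately
        return False  # queue into digest
```
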
Then, there was another, even more annoying issue: notifications during fullscreen apps! WTF!?
Wonder if everyone just got used to this ugly flickering in fullscreen mplayer and huge lags in GL games like SpringRTS, or if I'm just re-inventing the wheel here, since it's probably done in gnome or kde (knotify, huh?) - but since I'm not gonna use either one, I just added a fullscreen-app check before notification output, queueing notifications to the digest if that is the case.
Ok, a few words about implementation.
Token bucket itself is based on activestate recipe with some heavy improvements to adjust flow on constant under/over-flow, plus with a bit more pythonic style and features, take a look here. Leaky bucket implemented by this class.
Aside from that it's just dbus magic and a quite extensive CLI interface to control the filters.
Main dbus magic, however, lies outside the script, since dbus calls cannot be intercepted and the scheduler can't get'em with notification-daemon already listening on this interface.
The solution is easy, of course - the scheduler can replace the real daemon and proxy mangled calls to it as necessary. It takes this sed line for notification-daemon as well, since the interface name is hard-coded there.
Needs the fgc module, but it's just a hundred lines of meaningful code.

One more step to making linux desktop more comfortable. Oh, joy ;)
