Jan 26, 2019

Old IRC interface for new IRC - Discord

While Matrix is still gaining traction and is somewhat in flux, and Slack slowly dies in a fire (hopefully!), Discord seems to be the most popular IRC of the day, especially in gaming and other non-technical communities.

So it's only fitting to use the same familiar IRC clients for the new thing, even though it doesn't officially support them.

Started using it via bitlbee and bitlbee-discord initially, but these don't seem to work well for that - you can't just /list or /join channels, multiple discord guilds aren't handled too well, some useful features like chat history aren't easily available, (surprisingly common) channel renames are a constant pain, it's hard to debug, etc - and worst of all, it just kept losing my messages, making some conversations very confusing.

So, given relatively clear protocols on both ends, I eventually wrote my own proxy-ircd bridge - https://github.com/mk-fg/reliable-discord-client-irc-daemon

Which seems to address all the things that annoyed me in bitlbee-discord nicely - which was, obviously, the point of implementing the whole thing.

Being a single dependency-light python3 script (only aiohttp for discord websockets), it should be much easier to tweak and test to one's liking.
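For illustration, here's roughly what the websocket side boils down to with aiohttp - a minimal sketch of the gateway handshake, not the actual script's code (v6 json gateway; heartbeat loop and reconnects omitted; TOKEN and client property strings are placeholders):

  import asyncio, json, aiohttp

  async def gateway_test(token):
      async with aiohttp.ClientSession() as s:
          async with s.ws_connect(
                  'wss://gateway.discord.gg/?v=6&encoding=json' ) as ws:
              hello = await ws.receive_json() # op=10, carries heartbeat_interval
              await ws.send_json(dict( op=2, # op=2 is "identify"
                  d=dict( token=token, properties={
                      '$os': 'linux', '$browser': 'test', '$device': 'test'} ) ))
              async for msg in ws: # op=0 dispatch events follow, e.g. READY
                  if msg.type != aiohttp.WSMsgType.TEXT: break
                  print(json.loads(msg.data).get('t'))

  asyncio.get_event_loop().run_until_complete(gateway_test('TOKEN'))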

Quite proud of all the debug and logging stuff there in particular - you can easily get full debug logs from the code, plus full protocol logs of irc, websocket and http traffic at the moment of any issue, and understand, find and fix it easily.
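The general approach is trivial to replicate in any python script, by the way - separate loggers per protocol stream, which can then be flipped to full traffic dumps independently (sketch below is illustrative, with made-up logger names, not the actual code from the repo):

  import logging

  log_irc = logging.getLogger('rdircd.proto.irc')
  log_ws = logging.getLogger('rdircd.proto.ws')

  def dump(log, direction, line):
      # '>>' marks outgoing lines/frames, '<<' incoming ones
      log.debug('%s %s', '>>' if direction == 'out' else '<<', line)

  # Enable full irc traffic dump only, leaving websocket logs quiet
  logging.basicConfig(level=logging.WARNING, format='%(name)s :: %(message)s')
  log_irc.setLevel(logging.DEBUG)
  dump(log_irc, 'in', ':nick!user@host PRIVMSG #chan :hello')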

Hopefully Matrix will eventually get to be more common, as it - being more open and dev-friendly - seems to have plenty of nice GUI clients, which are probably a better fit for all the new features it has over good old IRC.

Jan 25, 2013

Static pelican blog

Ditched the bloog engine here in favor of static pelican yesterday, and while I remembered to keep legacy links working, I'm pretty sure I forgot about guids in the feed, so apologies to anyone who might care. Guess it's pointless to fix these now.
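For anyone doing a similar migration, the legacy-links part is easy in pelican via URL-pattern settings in pelicanconf.py - e.g. something like this (patterns here are just an example, not necessarily what this site uses):

  # pelicanconf.py fragment - make new article paths match old permalinks
  ARTICLE_URL = '{date:%Y}/{date:%m}/{date:%d}/{slug}.html'
  ARTICLE_SAVE_AS = '{date:%Y}/{date:%m}/{date:%d}/{slug}.html'
  # Feed path can be pinned too, but entry guids are derived from
  #  article URLs/metadata, hence the breakage mentioned above
  FEED_ALL_ATOM = 'feeds/all.atom.xml'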

All the entries can be found on github now in rst format, though older ones might be a bit harder to read in the source, as they were mostly auto-converted by pandoc and I only checked whether they still render correctly to html.

As appengine also made me migrate from master-slave db replication to the shiny high-replication blobstore, I wonder if hosting static html here now counts as abuse...

Sep 16, 2012

Terms of Service - Didn't Read

Lately I've been working on the python-skydrive module and further integration of MS SkyDrive into tahoe-lafs as a cloud backend, to keep the stuff you really care about safe.

And even if you don't trust SkyDrive to keep stuff safe, you still have to register your app with these guys - especially if it's an open module, because "You are solely and entirely responsible for all uses of Live Connect occurring under your Client ID." and it's unlikely that the author of a generic python interface will vouch for all its uses like that.

What do "register app" mean? Agreeing to yet another "Terms of Service", of course!

Does anyone ever read these?

What the hell does the "You may only use the Live SDK and Live Connect APIs to create software." sentence mean there?
Did you know that "You are solely and entirely responsible for all uses of Live Connect occurring under your Client ID."? (and that's an app id, given out to app developers, not users)
How much more of such "interesting" stuff is in there?

I hardly care enough to read it all, but there's an app for exactly that, and it's relatively well-known by now.

What might not be as well-known is that there's now a campaign on IndieGoGo to keep the thing alive and make it better.
Please consider supporting the movement in any way, even just by spreading the word - right now, it's really one of the areas where filtering out all the legalese crap and noise is badly needed.

http://www.indiegogo.com/terms-of-service-didnt-read

Aug 09, 2012

Unhosted remoteStorage idea

Having a bit of free time recently, I worked a bit on the feedjack web rss reader / aggregator project.
To keep track of what's already read and what's not, I've historically used a js + client-side localStorage approach, which has quite a few advantages:
  • Works with multiple clients, i.e. everyone has their own state.
  • Server doesn't have to store any data for a possibly-infinite number of clients, not even session or login data.
  • The same pages can still be served to all clients, some will just hide unwanted content.
  • The previous point leads to pages being very cache-friendly.
  • No need to "recognize" clients in any way, which is commonly achieved with authentication.
  • No "write"-kind interaction with the server means much less potential for abuse (DDoS, spam, other kinds of exploits).

The flip side of that rosy picture is that localStorage only works in one browser (or possibly several synced instances), which is quite a drag, because one advantage of a web-based reader is that it can be accessed from anywhere, not just a single platform, where you might as well install a specialized app.

To fix that unfortunate limitation, about a year ago I added an ad-hoc storage mechanism to just dump localStorage contents as json to some persistent storage on the server, authenticated by a special "magic" header from the browser.
It was never a public feature, basically requiring some browser tweaking and being the server admin.
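Nothing fancy there - something like this wsgi sketch, to give the idea (illustrative only, not the actual feedjack code; header name and paths are made-up):

  import json, pathlib

  secret, state_dir = 'long-random-string', pathlib.Path('state')

  def app(env, start_response):
      # Shared-secret "magic" header instead of any real auth
      if env.get('HTTP_X_SYNC_KEY') != secret:
          start_response('403 Forbidden', [('Content-Type', 'text/plain')])
          return [b'denied\n']
      # Request body is a json dump of localStorage from the client js
      body = env['wsgi.input'].read(int(env.get('CONTENT_LENGTH') or 0))
      state_dir.mkdir(exist_ok=True)
      (state_dir / 'localstorage.json').write_text(json.dumps(json.loads(body)))
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return [b'ok\n']

  if __name__ == '__main__':
      from wsgiref.simple_server import make_server
      make_server('localhost', 8080, app).serve_forever()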

Recently, however, the remoteStorage project from the unhosted group caught my attention.

The idea itself and the movement's goals are quite ambitious and otherwise awesome - a return to the "decentralized web" idea, using simple, already-available mechanisms: webfinger for service discovery (reminiscent of the Thimbl concept by telekommunisten.net), WebDAV for storage and OAuth2 for authorization (meaning no special per-service passwords or similar crap).
But the most interesting thing I've found about it is that it should actually be easier to use than writing an ad-hoc client syncer and server storage implementation - just put off-the-shelf remoteStorage.js on the page (it even includes the "syncer" part to sync localStorage to a remote server), then deploy or find any remoteStorage provider, and you're all set.

In practice it works as advertised, but it will see quite significant changes soon (with the release of the 0.7.0 js version), and had only an ad-hoc proof-of-concept server implementation in python (though there are also ownCloud in php and node.js/ruby versions), so I wrote the django-unhosted implementation - basically glue between simple WebDAV, oauth2app and the Django Storage API (which has backends for everything).
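The glue in question is thin enough to sketch here - roughly this kind of view, heavily simplified (names are illustrative and OAuth2 scope checks are omitted; real code is in the django-unhosted repo):

  from django.core.files.base import ContentFile
  from django.core.files.storage import default_storage
  from django.http import HttpResponse, HttpResponseNotFound

  def dav_resource(request, path):
      # PUT stores the blob through whatever storage backend is configured
      if request.method == 'PUT':
          if default_storage.exists(path): default_storage.delete(path)
          default_storage.save(path, ContentFile(request.body))
          return HttpResponse(status=200)
      # GET serves it back - remoteStorage clients only need GET/PUT/DELETE
      if not default_storage.exists(path):
          return HttpResponseNotFound()
      with default_storage.open(path) as src:
          return HttpResponse(src.read())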
I'm using that thing in feedjack now (here, for example) instead of the hacky json cache I had before, with django-unhosted deployed on my server, which also makes it usable with all the other apps out there that support the protocol.

Looks like a really neat way to provide some persistent storage for any webapp - guess that's one problem solved for any future webapps I might deploy that will need one.
With JS now being able to load and use binary blobs (like images) that way, it becomes possible to write even an unhosted facebook, with only events like status updates still aggregated and broadcast through some central point.

I bet there's gotta be something similar with facebook, twitter or maybe github as backends, but as proven in many cases, it's not quite sane to rely on these centralized platforms for any kind of service - which is especially painful if the implementation is one-platform-specific, unlike the single remoteStorage protocol that works for any of them.
Would be really great if they'd support some protocol like that at some point, though.

But aside from the short-term "problem solved" thing, it's really nice to see such movements out there, even though the whole stack of market incentives (which heavily favors control over data, centralization and monopolies) is against them.

Feb 03, 2012

On github as well now

Following another hiatus from a day job, I finally have enough spare time to read some of the internets and do something about them.

For quite a while I've had lots of quite small scripts and projects, which I kinda documented here (and on the site pages before that).
I always kept them in some kind of scm - be it the system-wide repo for configuration files, the ~/.cFG repo for DE and misc user configuration and ~/bin scripts, or the ~/hatch repo I keep for misc stuff - but as their number grows, along with their size and complexity, I think maybe some of this stuff deserves a repo of its own, maybe some attention, and, best-case scenario, might even be useful to someone besides me.

So I thought I'd gradually push all this stuff out to github and/or bitbucket (still need to learn, or at least look at, hg for that!). github being the most obvious and easiest choice, I just created a few repos there and started the migration. More to come.

Still don't really trust a silo like github to keep anything reliably (besides, it lags like hell here, especially compared to the local servers I'm kinda used to), so I need to devise some mirroring scheme asap.
The initial idea is to take some flexible tool (hg seems ideal, being python and a proper scm) and build hooks into local repos to push stuff out to mirrors from there, ideally to both bitbucket and github, also exploiting their metadata APIs to fetch stuff like tickets/issues and the commit history of these into a separate repo branch as well.
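A first take on the hooks part can be as simple as shell hooks in a repo's .hg/hgrc, with mirrors as named paths - e.g. (repo paths below are placeholders, and the metadata-fetching part would need actual API code on top of this):

  [paths]
  # github mirror would need the hg-git extension for git+ssh urls
  github = git+ssh://git@github.com/mk-fg/some-repo.git
  bitbucket = ssh://hg@bitbucket.org/mk-fg/some-repo

  [hooks]
  # Mirror local commits and pulled-in changegroups right away
  commit.mirror = hg push github; hg push bitbucket
  changegroup.mirror = hg push github; hg push bitbucket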

The effort should be somewhat justified by the fact that such repos will be geo-distributed backups and shareable links, and I get to learn more SCM internals along the way.

For now - me on github.
