Jun 09, 2017
Tend to mention random trivial tools I write here, but somehow forgot about this
one - acme-cert-tool.
Implemented it a few months back when setting up TLS, as I wasn't satisfied with
any of the existing tools for ACME / Let's Encrypt cert management.
Wanted to find some simple python3 script that's a bit less hacky than
acme-tiny, not a bloated framework with dozens of useless deps like certbot,
and one that has ECC certs covered, but came up empty.
acme-cert-tool has all that in a single script with just one dep on a standard
py crypto toolbox (cryptography.io), and does everything through a single
command, e.g. something like:
% ./acme-cert-tool.py --debug -gk le-staging.acc.key cert-issue \
-d /srv/www/.well-known/acme-challenge le-staging.cert.pem mydomain.com
...to get a signed cert for mydomain.com, doing all the generation, registration
and authorization stuff as necessary, and caching that stuff in
"le-staging.acc.key" too, so no extra work is done on subsequent runs.
Add "&& systemctl reload nginx" to that, put it into a crontab or .timer, and done.
There's a bunch of other commands, mostly to play with accounts and such, plus
options for all kinds of cert and account settings, e.g. "... -e
email@example.com -c rsa-2048 -c ec-384" to also have a cert with an rsa key
generated for random outdated clients and to add an email for notifications
(if it's not set already).
README on the acme-cert-tool github page and -h/--help output should have more details.
Aug 05, 2016
More D3 tomfoolery!
It's been a while since I touched the thing, but recently I've been asked to make
a simple replacement for processing common-case time-series from temperature +
relative-humidity (calling these "t" and "rh" here) sensors (DHT22, sht1x, or what
have you), which until now has been painstakingly done in MS Excel (from tsv data).
So here's the plot:
Misc feats of the thing, in no particular order:
- Single-html d3.v4.js + ES6 webapp (assembled by an html-embed script) that
can be opened from localhost or any static httpd on the net.
- Drag-and-drop or multi-file browse/pick box, for uploading any number of tsv
files (in whatever order, possibly with gaps in data) instantly to JS on the page.
- Line chart with two Y axes (one for t, one for rh).
- Smaller "overview" chart below that, where one can "brush" needed timespan
(i.e. subset of uploaded data) for all the other charts and readouts.
- Mouseover "vertical line" display snapping to specific datapoints.
- List of basic stats for picked range - min/max, timespan, value count.
- Histograms for value distribution, to easily see typical values for the picked
  timespan - one each for t and rh.
Kinda love this sort of interactive vis stuff, and it only takes a bunch of
hours to put it all together with d3, as opposed to something like rrdtool,
with its dead images and quirky mini-language.
Also, surprisingly common use-case for this particular chart, as having such
sensors connected to some RPi is pretty much first thing people usually want to
do (or maybe close second after LEDs and switches).
Will probably look a bit further to make it into an offline-first Service
Worker app, just for the heck of it, see how well this stuff works these days.
No point to this post, other than forgetting to write stuff for months is bad ;)
May 19, 2015
Quite often recently, twitch VoDs are just unplayable for me through the flash
player - no idea what happens at the backend there, but it buffers endlessly at
any quality level and that's it.
I also need to skip to some arbitrary part in the 7-hour stream (last wcs sc2
ro32), as I've watched half of it live, which turns out to complicate things a bit.
So the solution is to download the thing, which goes something like this:
It's just a video, right? Let's grab whatever stream flash is playing (with
e.g. FlashGot FF addon).
Doesn't work easily, since video is heavily chunked.
It used to be 30-min flv chunks, which are kinda ok, but these days it's
forced 4s chunks - apparently backend doesn't even allow downloading more
than 4s per request.
Fine, youtube-dl it is.
Nope. Doesn't allow seeking to time in stream.
livestreamer wrapper around the thing doesn't allow it either.
Try to use ?t=3h30m URL parameter - doesn't work, sadly.
mpv supports youtube-dl and seek, so use that.
Kinda works, but only for super-short seeks.
Seeking beyond e.g. 1 hour takes AGES, and every seek after that (even
skipping few seconds ahead) takes longer and longer.
youtube-dl --get-url gets m3u8 playlist link, use ffmpeg -ss <pos>
Apparently works exactly the same as mpv above - takes like 20-30min to seek to
3:30:00 (a 3.5 hour offset).
Dunno if it downloads and checks every chunk in the playlist for length
sequentially... sounds dumb, but no clue why it's that slow otherwise,
apparently just not good with these playlists.
Grab the m3u8 playlist, change all relative links there into full urls, remove
bunch of them from the start to emulate seek, play that with ffmpeg | mpv.
Works at first, but gets totally stuck a few seconds/minutes into the video,
with ffmpeg doing bitrates of ~10 KiB/s.
youtube-dl apparently gets stuck in a similar fashion, as it does the same
ffmpeg-on-a-playlist (but without changing it) trick.
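That playlist-mangling step can be sketched in a few lines of py - a simplified take that only makes chunk URIs absolute and drops head chunks to emulate seek, not a full m3u8 parser (url and tag details here are assumptions):

```python
from urllib.parse import urljoin

def rewrite_playlist(m3u8_text, base_url, skip_chunks=0):
    """Make chunk URIs absolute, drop first skip_chunks of them
    (with their #EXTINF tags) to emulate seeking."""
    out, skipped = [], 0
    for line in m3u8_text.splitlines():
        if line and not line.startswith('#'):  # chunk URI line
            if skipped < skip_chunks:
                skipped += 1
                if out and out[-1].startswith('#EXTINF'):
                    out.pop()  # drop the matching duration tag too
                continue
            line = urljoin(base_url, line)
        out.append(line)
    return '\n'.join(out)
```

The rewritten playlist can then be piped into ffmpeg/mpv as a local file.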
Fine! Just download all the damn links with curl.
grep '^http:' pls.m3u8 | xargs -n50 curl -s | pv -rb -i5 > video.mp4
Makes it painfully obvious why flash player and ffmpeg/youtube-dl get stuck -
eventually curl stumbles upon a chunk that downloads at a few KiB/s.
This "stumbling chunk" appears to be a random one, unrelated to local
bandwidth limitations, and just re-trying it fixes the issue.
Assemble a list of links and use some more advanced downloader that can do
parallel downloads, plus detect and retry super-low speeds.
Naturally, it's aria2, but with all the parallelism it appears to be
impossible to guess which output file will be which using just the cli.
Mostly due to links having the same url-path,
e.g. index-0000000014-O7tq.ts?start_offset=955228&end_offset=2822819 with
different offsets (pity that backend doesn't seem to allow grabbing a range of
that *.ts file of more than 4s) - aria2 just does file.ts, file.ts.1,
file.ts.2, etc - which are not in playlist-order due to all the parallel
downloads.
Finally, as acceptance dawns, go and write your own youtube-dl/aria2 wrapper
to properly seek the necessary offset (according to playlist tags) and
download/resume files from there, in a parallel yet ordered and controlled fashion.
This is done by using --on-download-complete hook with passing ordered "gid"
numbers for each chunk url, which are then passed to the hook along with the
resulting path (and hook renames file to prefix + sequence number).
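The rename-on-complete hook can be sketched like this (aria2c passes gid, file count and file path as arguments to --on-download-complete hooks; the "vod" prefix here is just an example name):

```python
#!/usr/bin/env python3
# Sketch of an --on-download-complete hook: gids were assigned as ordered
# sequence numbers, so renaming to prefix + gid restores playlist order.
import os, sys

def on_complete(gid, n_files, path, prefix='vod'):
    dst = '{}.{}.ts'.format(prefix, gid)
    os.rename(path, dst)
    return dst

if __name__ == '__main__' and len(sys.argv) >= 4:
    on_complete(*sys.argv[1:4])
```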
Ended up with the chunk of the stream I wanted, locally (online playback lag
never goes away!), downloaded super-fast and seekable.
Resulting script is twitch_vod_fetch (script source link).
Update 2017-06-01: rewritten it using python3/asyncio since then, so stuff
related to specific implementation details here is only relevant for old py2 version
(can be pulled from git history, if necessary).
aria2c magic bits in the script:
aria2c = subprocess.Popen([
    'aria2c',  # note: exact options/values are in the actual script, this is a rough subset
    '--enable-rpc', '--rpc-listen-port={}'.format(port),
    '--on-download-complete={}'.format(hook_path),
    '--max-concurrent-downloads=5', '--max-connection-per-server=1',
    '--lowest-speed-limit=100K',  # detect/abort stuck chunks
    '--timeout=15', '--max-tries=0' ])
Didn't bother adding extra options for tweaking these via cli, but might be a
good idea to adjust timeouts and limits for a particular use-case (see also the
massive "man aria2c").
Seeking in the playlist is easy, as it's essentially a VoD playlist, where every
~4s chunk is preceded by e.g. an #EXTINF:3.240, tag with its exact length, so the
script just skips these as necessary to satisfy --start-pos / --length options.
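A sketch of that skipping logic, assuming a plain VoD playlist where each chunk URI line is preceded by its #EXTINF duration tag (not the script's actual code):

```python
def seek_playlist(lines, start_pos, length=None):
    """Yield chunk URIs covering [start_pos, start_pos+length] seconds,
    by summing #EXTINF durations preceding each chunk URI."""
    t, dur = 0.0, 0.0
    for line in lines:
        if line.startswith('#EXTINF:'):
            # e.g. '#EXTINF:3.240,' -> 3.240
            dur = float(line.split(':', 1)[1].split(',')[0])
        elif line and not line.startswith('#'):  # chunk URI
            chunk_end = t + dur
            if chunk_end > start_pos and (length is None or t < start_pos + length):
                yield line
            t = chunk_end
```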
Queueing all downloads, each with its own particular gid, is done via JSON-RPC,
as it seems to be impossible to:
- Specify both link and gid in the --input-file for aria2c.
- Pass an actual download URL or any sequential number to --on-download-complete
hook (except for gid).
So each gid is just generated as "000001", "000002", etc, and hook script is a
one-liner "mv" command.
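Queueing over JSON-RPC with pre-assigned ordered gids might look roughly like this - a sketch, assuming aria2c runs with --enable-rpc on the default localhost:6800, with sequence numbers padded to the 16-hex-char form aria2's RPC expects for gids (the names here are made up, not the script's):

```python
import json
from urllib.request import Request, urlopen

def make_addurl_payload(n, url, out_name):
    """One aria2.addUri JSON-RPC call with a fixed, ordered gid and an
    explicit --out name (avoids the file.ts, file.ts.1, ... collisions)."""
    gid = '{:016x}'.format(n)  # aria2 gids are 16 hex chars
    return dict(
        jsonrpc='2.0', id=gid, method='aria2.addUri',
        params=[[url], {'gid': gid, 'out': out_name}])

def queue_chunks(urls, prefix, rpc='http://localhost:6800/jsonrpc'):
    # JSON-RPC 2.0 batch request - queues all chunks in one POST
    calls = [make_addurl_payload(n, url, '{}.{:06d}.ts'.format(prefix, n))
             for n, url in enumerate(urls, 1)]
    req = Request(rpc, json.dumps(calls).encode(),
                  {'Content-Type': 'application/json'})
    return urlopen(req)
```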
Since all the stuff in the script is kinda lengthy time-wise - e.g. youtube-dl
--get-url takes a while, then the actual downloads, then concatenation, ... -
it's designed to be Ctrl+C'able at any point.
Every step just generates a state-file like "my_output_prefix.m3u8", and next
one goes on from there.
Restarting the script doesn't repeat these steps, and the files can be freely
mangled or removed to force re-doing a step (or to adjust behavior there).
Example of useful restart might be removing *.m3u8.url and *.m3u8 files if
twitch starts giving 404's due to expired links in there.
Won't force re-downloading any chunks, will only grab still-missing ones and
assemble the resulting file.
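That state-file pattern boils down to something like this helper - a generic sketch, not the script's actual code:

```python
import os

def step(path, func):
    """Run func to produce state-file at path unless it already exists;
    removing the file forces the step to be re-done on the next run."""
    if not os.path.exists(path):
        tmp = path + '.tmp'
        with open(tmp, 'w') as dst:
            dst.write(func())
        os.rename(tmp, path)  # only complete results are kept
    with open(path) as src:
        return src.read()
```

Usage then chains naturally: url = step('out.m3u8.url', get_url), then pls = step('out.m3u8', lambda: fetch(url)), and so on - Ctrl+C at any point just means the next run picks up from the last finished step.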
End-result is one my_output_prefix.mp4 file with specified video chunk (or full
video, if not specified), plus all the intermediate litter (to be able to
restart the process from any point).
One issue I've spotted with the initial version:
05/19 22:38:28 [ERROR] CUID#77 - Download aborted. URI=...
Exception: [AbstractCommand.cc:398] errorCode=1 URI=...
-> [RequestGroup.cc:714] errorCode=1 Download aborted.
errorCode=1 total length mismatch. expected: 1924180, actual: 1789572
05/19 22:38:28 [NOTICE] Download GID#0035090000000000 not complete: ...
There seem to be a few of these mismatches (like 5 out of 10k chunks), which don't
get retried, as aria2 doesn't seem to consider these to be transient errors
(which is probably fair).
Probably a twitch bug, as it clearly breaks http there, and browsers shouldn't
accept such responses either.
Can be fixed by one more hook, I guess - either --on-download-error (to make the
script retry the url with that gid), or the one using websocket and getting json
notifications there.
In any case, just running same command again to download a few of these
still-missing chunks and finish the process works around the issue.
Update 2015-05-22: Issue clearly persists for vods from different chans,
so fixed it via simple "retry all failed chunks a few times" loop at the end.
Update 2015-05-23: Apparently it's due to aria2 reusing same files for
different urls and trying to resume downloads, fixed by passing --out for each
download queued over api.
[script source link]
Mar 11, 2015
Most Firefox addons add a toolbar button that does something when clicked, or
you can add such a button yourself by dragging it via the Customize Firefox interface.
For example, I have a button for (an awesome) Column Reader extension on the
right of FF menu bar (which I have always-visible):
But as far as I can tell, most simple extensions don't bother with a custom
hotkey-adding interface, so there seems to be no obvious way to "click" that
button by pressing a hotkey.
In case of Column Reader, this is more important because pressing its button is
akin to "inspect element" in Firebug or FF Developer Tools - it allows picking any
box of text on the page, so it would be especially nice to trigger via hotkey + click
(as you'd do with Ctrl+Shift+C + click).
As I did struggle with binding hotkeys for specific extensions before (in their
own quirky ways), this time I found one sure-fire way to get exactly what you'd
get on click - simulating the click event itself (upon pressing the hotkey).
Whole process can be split into several steps:
Install Keyconfig or a similar extension, allowing to bind/run arbitrary JS code on a hotkey.
One important note here is that such code should run in the JS context of the
extension itself, not just some page, as JS from page obviously won't be
allowed to send events to Firefox UI.
Keyconfig is very simple and seems to work perfectly for this purpose - just
"Add a new key" there and it'll pop up a window where any privileged JS can be
typed in.
Install DOM Inspector extension (from AMO).
This one will be useful to get button element's "id" (similar to DOM elements'
"id" attribute, but for XUL).
It should be available (probably after FF restart) under "Tools -> Web
Developer -> DOM Inspector".
Run DOM Inspector and find the element-to-be-hotkeyed there.
Under "File" select "Inspect Chrome Document" and the first document there -
this should update the "URL bar" in the inspector window to
chrome://browser/content/browser.xul.
Now click "Find a node by clicking" button on the left (or under "Edit ->
Select Element by Click"), and then just click on the desired UI
button/element - doesn't really have to be an extension button.
It might be necessary to set "View -> Document Viewer -> DOM Nodes" to see XUL
nodes on the left, if it's not selected already.
There it'd be easy to see all the neighbor elements and this button element.
Any element in that DOM Inspector frame can be right-clicked and there's
"Blink Element" option to show exactly where it is in the UI.
"id" of any box where the click should land will do (highlighted in red in my
case on the image above).
Write JS to simulate the click and bind it to a hotkey (in Keyconfig or
whatever other hotkey-addon).
I did try HTML-specific ways to trigger events, but none seem to have worked
with XUL elements, so the JS below uses the nsIDOMWindowUtils XPCOM interface,
which seems to be designed specifically with such "simulation" stuff in mind
(likely for things like Selenium WebDriver).
JS for my case:
var el_box = document.getElementById('columnsreader').boxObject;
var domWindowUtils = window
  .QueryInterface(Components.interfaces.nsIInterfaceRequestor)
  .getInterface(Components.interfaces.nsIDOMWindowUtils);
domWindowUtils.sendMouseEvent('mousedown', el_box.x, el_box.y, 0, 1, 0);
domWindowUtils.sendMouseEvent('mouseup', el_box.x, el_box.y, 0, 1, 0);
"columnsreader" there is an "id" of an element-to-be-clicked, and should
probably be substituted for whatever else from the previous step.
There doesn't seem to be a "click" event, so "mousedown" + "mouseup" it is.
"0, 1, 0" stuff is: left button, single-click (not sure what it does here), no
modifier keys.
If anything goes wrong in that JS, the usual "Tools -> Web Developer ->
Browser Console" (Ctrl+Shift+J) window should show errors.
It should be possible to adjust the click position by adding/subtracting pixels
from el_box.x / el_box.y, but the left-top corner seems to work fine for buttons.
Save time and frustration by not dragging stupid mouse anymore, using trusty
hotkey instead \o/
Wish there was some standard "click on whatever to bind it to specified
hotkey" UI option in FF (like there is in e.g. Claws Mail), but haven't
seen one so far (FF 36).
Maybe someone should write addon for that!
May 12, 2014
As I was working on a small d3-heavy project (my weird firefox homepage), I
did use d3 scales for things like opacity of the item, depending on its
relevance, and found these a bit counter-intuitive, but with no
readily-available demo (i.e. X-Y graphs of scales with same fixed domain/range)
on how they actually work.
Basically, I needed this:
I'll be the first to admit that I'm no data scientist and not particularly good at
math, but from what memories on the subject I have, intuition tells me that
e.g. "d3.scale.pow().exponent(4)" should rise waaaay faster from the very start
than "d3.scale.log()" - but with fixed domain + range values, the exact opposite
is true!
So, a bit confused about weird results I was getting, just wrote a simple
script to plot these charts for all basic d3 scales.
And, of course, once I saw a graph, it's fairly clear how that works.
Here, it's obvious that if you want to pick something that mildly favors higher
X values, you'd pick pow(2), and not sqrt.
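That behavior is easy to reproduce without d3 at all - just normalize both scales to the same [1, 10] domain and [0, 1] range (a sketch of the underlying math, not d3's API):

```python
import math

def pow_scale(x, a, b, k):
    'Normalized pow scale: maps [a, b] to [0, 1] via x**k.'
    return (x**k - a**k) / (b**k - a**k)

def log_scale(x, a, b):
    'Normalized log scale: maps [a, b] to [0, 1] via log(x).'
    return (math.log(x) - math.log(a)) / (math.log(b) - math.log(a))

# Halfway through a [1, 10] domain:
x, a, b = 5.5, 1, 10
print(pow_scale(x, a, b, 4))  # ~0.09 - still near the bottom of the range
print(log_scale(x, a, b))     # ~0.74 - already most of the way up
```

The normalization is the key: b**k dominates the denominator so much that a high exponent keeps the curve flat until the very end of the domain, while log does most of its climbing early on.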
Feel like such chart should be in the docs, but don't feel qualified enough to
add it, and maybe it's counter-intuitive just for me, as I don't dabble with
data visualizations much and/or might be too much of a visually inclined person.
In case someone needs the script to do the plotting - it's really trivial.
May 12, 2014
Wanted to have some sort of "homepage with my fav stuff, arranged as I want" in
firefox for a while, and finally got the resolve to do something about it - just
finished a (first version of) script to generate the thing - firefox-homepage-generator.
Default "grid of page screenshots" never worked for me, and while there are
other projects that do other layouts for different stuff, they just aren't
flexible enough to do whatever horrible thing I want.
In this particular case, I wanted to experiment with chaotic tag cloud of
bookmarks (so they won't ever be in the same place), relations graph for these
tags and random picks for "links to read" from backlog.
Result is a dynamic d3 + d3.layout.cloud (interactive example of this
layout) page without much style:
"Mark of Chaos" button in the corner can fly/re-pack tags around.
Clicking tag shows bookmarks tagged as such and fades all other tags out in
proportion to how they're related to the clicked one (i.e. how many links
share the tag with others).
Started using FF bookmarks again in a meaningful way only recently, so not much
stuff there yet, but it does seem to help a lot, especially with these handy
awesome bar tricks.
Not entirely sure how useful the cloud visualization or actually having a
homepage would be, but it's a fun experiment and a nice place to collect any
useful web-surfing-related stuff I might think of in the future.
Repo link: firefox-homepage-generator
Jun 06, 2013
Wanted to share three kinda-big-deal fixes I've added to my firefox:
- Patch to remove the sticky-on-top focus-grabbing "Do you want to activate plugins
  on this page?" popup.
- Patch to prevent plugins (e.g. Adobe Flash) from ever grabbing firefox
  hotkeys like "Ctrl + w" (close tab) or F5, forcing you to click outside
  e.g. a YouTube video window to get back to ff.
- A fix for sites hijacking keyboard and mouse (e.g. overriding F5 to retweet
  instead of reloading the page, preventing copy-paste in forms and on pages, etc).
Lately, firefox seems to give more-and-more control into the hands of web
developers, who seem to be hell-bent on abusing that to make browsing UX a mess.
FF bug-reports about Flash grabbing all the focus date back to 2001 and are still open.
Sites override Up/Down, Space, PgUp/PgDown, F5, Ctrl+T/W - I've no idea why;
guess some JS developers just don't use the keyboard at all, which is somewhat
understandable given the spread of tablet-devices these days.
Overriding clicks in forms to prevent pasting email/password seems to completely
ignore the valid (or so I think) use-case of pasting these from some
password-storage app.
And the native "click-to-play" switch seems to be hilariously unusable in FF,
giving a cheerful "Hey, there's flash here! Let me pester you with this on every
page!" popup.
All are known, and none seem to be going away anytime soon, so onwards to the fixes.
Removing the "Do you want to activate plugins" thing seems to be a straightforward
js one-liner patch, as it's implemented in
"browser/base/content/browser-plugins.js" - the whole fix is adding
this._notificationDisplayedOnce = true; to break the check there.
The "notificationDisplayedOnce" thing is used to not popup that thing on the same
page within the same browsing session, afaict.
The patch for plugin focus is clever - all one has to do is switch focus to the
browser window (from the embedded flash widget) before the keypress gets
processed, and ff will handle it correctly.
A hackish plugin + ad-hoc perl script solution (to avoid patching/rebuilding ff)
can be found here.
My hat goes off to Alexander Rødseth however, who hacked together the patch
attached there - this one is a real problem-solver, though a bit (not
terribly - just context lines got shuffled around since) out-of-date.
The JS-click/key-jacking issue seems to require some JS event firewalling, as
sometimes (e.g. in JS games or some weird-design sites) such overrides can
actually be useful.
So my solution was simply to bind a JS-toggle key, which allows not only to
disable all that crap, but also to speed up some "load-shit-as-you-go" or
JS-BTC-mining (or so it feels) sites dramatically.
var prefs = Components.classes['@mozilla.org/preferences-service;1']
  .getService(Components.interfaces.nsIPrefBranch);
prefs.setBoolPref('javascript.enabled', !prefs.getBoolPref('javascript.enabled'));
That's the whole thing, bound to something like Ctrl+\ (the one above Enter
here), makes a nice "Turbo and Get Off My JS" key.
There are probably addons to toggle JS (and the stuff above) via keys without
needing any code, but I have this one.
Damn glad there are open-source (and uglifyjs-like) browsers like that, hope
proprietary google-ware won't take over the world in the nearest future.
Mentioned patches are available in (and integrated with-) the firefox-nightly
exheres in my repo, forked off awesome sardemff7-pending
firefox-scm.exheres-0 / mozilla-app.exlib work.
Apr 29, 2013
I've tried both of these in the past, but didn't have the attention budget to make
them really work for me - which I finally found now, so wanted to also give
crawlers a few more keywords on these nice things.
0bin - leak-proof pastebin
I pastebin a lot of stuff all the time - basically everything multiline -
because all my IM happens in ERC over IRC (with bitlbee linking xmpp and all
the proprietary crap like icq, skype and twitter), and IRC doesn't handle
multiline messages at all.
All sorts of important stuff ends up there - some internal credentials,
contacts, non-public code, bugs, private chat logs, etc - so I always winced a
bit when pasting something in fear that google might index/data-mine it and
preserve forever, so I figured it'll bite me eventually, somewhat like this:
Easy and acceptable solution is to use simple client-side crypto, with link
having decryption key after hashmark, which never gets sent to pastebin server
and doesn't provide crawlers with any useful data. ZeroBin does that.
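The key-after-hashmark trick works because a URL fragment never leaves the browser - easy to check by building the actual HTTP request target from a pasted link (the link and key below are made-up examples; the decryption itself is handled client-side by ZeroBin's JS, not shown here):

```python
from urllib.parse import urlsplit

def http_request_target(url):
    'Path + query actually sent to the server - the fragment is dropped.'
    u = urlsplit(url)
    return u.path + ('?' + u.query if u.query else '')

link = 'https://paste.example.net/paste/abc123#s3cr3t-decryption-key'
print(http_request_target(link))  # /paste/abc123 - no key in there
```

So the server (and any crawler of its pages) only ever sees ciphertext plus an opaque paste id.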
But the original ZeroBin is php, which I don't really want to touch, and it has
its share of problems - from the lack of a command-line client (for e.g. grep
stuff log | zerobinpaste), to overly-long urls and a flaky overloaded interface.
Luckily, there's a more hackable python version of it - 0bin, for which I hacked
together a simple zerobinpaste tool, then simplified the interface to the bare
minimum, updated it to use shorter urls (#41, #42) and put it up on my host -
the result is paste.fraggod.net - my own nice robot-proof pastebin.
URLs there aren't any longer than with regular pastebins.
Plus the links there expire reliably, and it's easy to force this expiration,
having control over app backend.
The local fork should have all the not-yet-merged stuff, as well as the
non-upstreamable simpler white-bootstrap theme.
Convergence - better PKI for TLS keys
Can't really recommend this video highly enough to anyone with even the
slightest bit of interest in security, web or SSL/TLS protocols.
There are lots of issues beyond just key distribution and authentication, but
I'd dare anyone to watch that rundown of just-as-of-2011 issues and remain
convinced that the PKI there is fine or even good enough.
Even the fairly simple Convergence tool implementation is a vast improvement,
giving a lot of control to make informed decisions about who to trust on the net.
I've been using the plugin in the past, but eventually it broke and I just
disabled it until the better times when it'd be fixed, but Moxie seems to have
moved on to other tasks and the project never got the developers' attention it
deserved.
So I finally got around to fixing a fairly massive list of issues around it myself.
Bugs around the newer firefox plugin were the priority - one was a compatibility
thing from PR #170, another the endless hanging on all requests to notaries (PR
#173), plus more minor issues with adding notaries, interfaces and just plain bugs
that were always there.
Then there was one shortcoming of the existing perspective-only verification
mechanism that bugged me - it didn't utilize existing flawed CA lists at all,
so it couldn't tell whether a random site's cert is signed by at least some
crappy CA or is completely homegrown (and thus likely doesn't belong on a
legit-looking site).
Not the deciding factor by any means, but it allows making a much more informed
decision than just perspectives for e.g. a phishing site with a typo in the URL.
So I was able to utilize (and extend a bit) the best part of Convergence - the
agility of its trust decision-making - by hacking together a verifier (which can
easily be run on desktop localhost) that queries existing CA lists.
Enabling Convergence with that doesn't even force you to give up the old model -
it just adds perspective checks on top, giving a clear picture of which of the
checks have failed on any inconsistencies.
Other server-side fixes include nice argparse interface, configuration file
support, loading of verifiers from setuptools/distribute entry points (can be
installed separately with any python package), hackish TLS SNI support (Moxie
actually filed twisted-5374 about more proper fix), sane logging, ...
Filed only a few PRs for the show-stopper client bugs, but looks like the
upstream repo is simply dead, pity ;(
But all this stuff should be available in my fork
in the meantime.
Top-level README there should provide a more complete list of links and changes.
Hopefully, upstream development will be picked-up at some point, or maybe
shift to some next incarnation of the idea - CrossBear seems to potentially be one.
Until then, at least I was able to salvage this one, and hacking a ctypes-heavy ff
extension implementing a SOCKS MitM proxy was quite a rewarding experience all by
itself... it certainly broadens horizons on just how accessible and simple
it is to implement such seemingly-complex protocol wrappers.
Plan to also add a few other internet-observatory (like OONI, CrossBear crawls,
EFF Observatory, etc) plugins there in the near future, plus some other things
listed in the README here.
Aug 09, 2012
Having a bit of free time recently, worked a bit on feedjack
web rss reader / aggregator project.
To keep track of what's already read and what's not, historically I've used
js + client-side localStorage approach, which has quite a few advantages:
- Works with multiple clients, i.e. everyone has their own state.
- Server doesn't have to store any data for possible-infinite number of
clients, not even session or login data.
- Same pages can still be served to all clients, some will just hide the
  already-read entries.
- The previous point leads to pages being very cache-friendly.
- No need to "recognize" the client in any way, which is commonly achieved
  with logins or cookies.
- No interaction of the "write" kind with the server means much less
  potential for abuse (DDoS, spam, other kinds of exploits).
Flip side of that rosy picture is that localStorage only works in one browser
(or possibly several synced instances), which is quite a drag, because one
advantage of a web-based reader is that it can be accessed from anywhere, not
just single platform, where you might as well install specialized app.
To fix that unfortunate limitation, about a year ago I've added an ad-hoc sync
mechanism to just dump localStorage contents as json to some persistent
storage on the server, authenticated by a special "magic" header from the browser.
It was never a public feature, requiring some browser tweaking and being a
server admin, basically.
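The server side of such an ad-hoc dump can be about as simple as this sketch (header name, secret and the in-memory storage here are all made up for illustration):

```python
import json

STATE = {}  # in-memory stand-in for whatever persistent storage

def handle_sync(headers, body, secret='long-random-string'):
    """Store a localStorage json dump if the magic auth header matches;
    returns (status, response-body) like a tiny WSGI-ish handler."""
    if headers.get('X-Sync-Magic') != secret:
        return 403, 'nope'
    STATE['localStorage'] = json.loads(body)
    return 200, 'ok'
```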
Recently, however, remoteStorage project from
unhosted group has caught my attention.
Idea itself and the movement's goals are quite ambitious and otherwise
awesome - to return to "decentralized web" idea, using simple already
available mechanisms like webfinger for service discovery (reminds of Thimbl
concept by telekommunisten.net), WebDAV for storage and
OAuth2 for authorization (meaning no special per-service passwords or similar
credentials).
But the most interesting thing I've found about it is that it should be
actually easier to use than writing an ad-hoc client syncer and server storage
implementation - just put the off-the-shelf remoteStorage.js on the page (it even
includes the "syncer" part to sync localStorage to a remote server), then deploy
or find any remoteStorage provider, and you're all set.
In practice, it works as advertised, but will have quite significant changes
soon (with the release of the 0.7.0 js version), and it had only an ad-hoc
proof-of-concept server implementation in python (though there are also php and
node.js/ruby versions), so I've written my own django-unhosted implementation,
being basically a glue between simple WebDAV, oauth2app and the Django Storage
API (which has pluggable backends).
Using that thing in feedjack now (here, for example) instead of that hacky
json cache I've had, with django-unhosted deployed on my server, allowing to
also use it with any other apps with remoteStorage support.
Looks like a really neat way to provide some persistent storage for any webapp
out there, guess that's one problem solved for any future webapps I might
deploy that will need one.
With JS being able to even load and use binary blobs (like images) that way
now, it becomes possible to write even an unhosted facebook, with only events
like status updates still aggregated and broadcasted through some central hub.
I bet there's gotta be something similar, but with facebook, twitter or maybe
github backends, but as proven in many cases, it's not quite sane to rely on
these centralized platforms for any kind of service - which is especially a
pain if the implementation is one-platform-specific, unlike the one
remoteStorage protocol for any of them.
Would be really great if they'd support some protocol like that at some point.
But aside from the short-term "problem solved" thing, it's really nice to see such
movements out there, even though the whole stack of market incentives (which
favor control over data, centralization and monopolies) is against them.
Feb 03, 2012
Following another hiatus from a day job, I finally have enough spare time to
read some of the internets and do something about them.
For quite a while I've had lots of quite small scripts and projects, which I
kinda documented here (and on the site pages before that).
I always kept them in some kind of scm - be it a system-wide repo for
configuration files, ~/.cFG repo for DE and misc user configuration and ~/bin
scripts, or the ~/hatch repo I keep for misc stuff - but as their number grows, as
well as their size and complexity, I think maybe some of this stuff deserves
a repo of its own, maybe some attention, and, best-case scenario, will even be
useful to someone but me.
So I thought to gradually push all this stuff out to github and/or bitbucket
(still need to learn or at least look at hg for that!). github being the most
obvious and easiest choice, just created a few repos there and started the
migration. More to come.
Still don't really trust a silo like github to keep anything reliably (besides
it lags like hell here, especially compared to local servers I'm kinda used
to), so need to devise some mirroring scheme asap.
Initial idea is to take some flexible tool (hg seems to be ideal, being python
and a proper scm) and build hooks into local repos to push stuff out to
mirrors from there, ideally both bitbucket and github, also exploiting their
metadata APIs to fetch stuff like tickets/issues and commit history of these
into a separate repo branch as well.
Effort should be somewhat justified by the fact that such repos will be
geo-distributed backups and shareable links, and I can learn more SCM internals
by doing it.
For now - me on github.