I'd like to take the opportunity to show you the trailer video of the Ultimate Hacking Keyboard, a high-end mechanical keyboard of which I'm the lead developer. Our keyboard will be launched on Kickstarter soon, so you're more than welcome to share it, follow us, and subscribe to our list to get notified when our campaign starts. See you on UltimateHackingKeyboard.com! Cheers!
Over the years, I've read a fair number of blogs, some of which suddenly disappeared, which saddens me because those were valuable resources. I think many geeks swear by self-hosting their blog because of the advanced customizability and the control over every single aspect of it. I cannot blame them because I'm one of them.
The problem is that this way your blog isn't very resilient. What happens if you get hit by a train? That's right: your bank account will eventually get depleted and your hosting provider will shut down your server, making your blog vanish. You may say that I'm being absurd talking about death, but if you consider your blog your legacy, just as I do, then you should also be concerned.
After all this mumbo-jumbo, let me introduce you to the WordPress static blog generator. To my knowledge, this is pretty much the most convenient way to back up your WordPress posts, pages, and comments as static HTML pages, which you can easily browse and push to GitHub Pages, preserving your blog for eternity.
A while ago, upon applying the freshly received Android 4.3 OTA update on my Nexus 4, the following happened:
Although the error message looked rather troubling, after rebooting, Android got successfully updated to 4.3, much to my surprise.
Even though my phone got updated, some days later the update re-appeared among my notifications. This time, I went through the same process just to make the notification disappear. Some days later, when the update notification popped up again, I really wanted to get rid of it in the long term, so I delved deeper. As it turns out, the solution is surprisingly easy.
1) Unroot your phone in SuperSU.
2) Apply the update.
3) Reinstall SuperSU through CWM.
That's it, enjoy!
Recently, I've written dxf2svg2kicad, a highly polished online DXF to SVG to KICAD_PCB converter which I'm very proud of. To be explicit, this tool converts:
- DXF to SVG
- SVG to KICAD_PCB (used by KiCad EDA)
- DXF to KICAD_PCB
Speaking to the technical-minded, my tool runs 100% on the client side and I used lots of cutting-edge web technologies to make it happen. You're welcome to check out the code on GitHub.
Nowadays I simply publish my work on GitHub and rarely blog. I've created this post solely for SEO purposes because, given the usability of my tool, Google should rank it higher.
If you're like most ssh users, a broken connection is bad news for you. Not only do you have to reconnect, but your session gets destroyed, and you have to go through all the motions to restore your previous state. It doesn't have to be this way. I'd like to say some words about two tools that solve these problems in the most elegant way possible.
tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.
In the world of tmux, there are windows and panes within windows. You can think of tmux windows as desktop workspaces arranged horizontally. It's like having a number of virtual monitors next to each other, each running a different shell session. You can move across these windows as desired. With the use of panes, you can split individual windows horizontally and/or vertically, each pane housing a different session. This is pretty useful for tailing various log files in different panes and monitoring them at once.
You simply have to run the tmux command to create a new tmux session. Once a session exists, invoke tmux attach upon reconnecting over ssh to reattach to your already existing session.
If you're like me, you may want to use tmux by default upon ssh'ing to servers. To make this happen, you have to include export LC_TMUX_SESSION_NAME=yourusername in your ~/.bashrc, wrap scp on the client side, and invoke tmux automatically on the server side. On a related note, you can also take a look at my tmux.conf, which I believe defines more intuitive shortcuts than the default configuration.
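To make this concrete, here's a sketch of what such a server-side ~/.bashrc fragment could look like. The guard conditions and the helper function name are my own assumptions, not the exact setup described above:

```shell
# Pick the tmux session name from LC_TMUX_SESSION_NAME (sent by the client
# via ssh's SendEnv/AcceptEnv), falling back to the local user name.
tmux_session_name() {
    echo "${LC_TMUX_SESSION_NAME:-$USER}"
}

# Auto-start tmux only for interactive ssh logins, and never nest sessions
# (scp and rsync invocations are non-interactive, so they skip this block).
if [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ] && [ -n "$PS1" ]; then
    name="$(tmux_session_name)"
    # attach if the session already exists, otherwise create it
    tmux attach -t "$name" 2>/dev/null || tmux new -s "$name"
fi
```

The interactivity check is what keeps scp working without a client-side wrapper in this sketch.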
There are a number of alternatives to tmux that I'd like to list, going from the most powerful to the least powerful. GNU Screen is yet another terminal multiplexer, but its feature set, usability, and configurability are rather limited compared to tmux. dtach is like a minimalistic tmux featuring one pane inside one window, and it only provides a minimal set of options. Finally, with the use of the nohup command, you can make your (typically long-running) script immune to hangups so that it can survive ssh disconnects.
Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.
mosh is the other piece of the puzzle leading to remote shell nirvana. After installing mosh on the client and mosh-server on the server, instead of invoking ssh yourserver.com, invoke mosh yourserver.com. From this point on, you don't have to worry about reconnecting to ssh or waiting for the server to echo back your characters anymore.
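To smooth the transition, a small wrapper can prefer mosh when it is installed and fall back to plain ssh otherwise. This is just a sketch, and the function name is hypothetical:

```shell
# Hypothetical wrapper: use mosh when available, plain ssh otherwise.
remote() {
    if command -v mosh >/dev/null 2>&1; then
        mosh "$@"
    else
        ssh "$@"
    fi
}

# usage: remote yourserver.com
```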
It should be no news to any well-informed geek that bias lighting is good for your eyes. I've just recently implemented my setup, which was surprisingly easy to put together. It only needed a self-adhesive LED strip, an AC adapter, a switch, and some wires. Click on the album below for your viewing pleasure!
Static resources are static because they never change, so they should always be cached forever, right? Well, more often than not, some of those files eventually change, in which case the files in question must be renamed for clients to pick up the new version, which is a pain, especially if you use revision control (which you should).
Lately I came up with a new way to cache static server-side resources as efficiently and effortlessly as possible. Consider the following nginx configuration fragment:
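A minimal sketch of what such a fragment might look like (the document root and the expiry values are placeholders):

```nginx
# Map /static/<version>/<path> onto <path> and cache it forever.
# The version segment is an arbitrary token with no meaning on the
# filesystem; bumping it invalidates clients' caches.
location ~ ^/static/[^/]+/(.*)$ {
    alias /var/www/yoursite/$1;
    expires max;
    add_header Cache-Control public;
}
```

With expires max, nginx sends a far-future Expires header, so clients never revalidate a versioned URL.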
From this point on, you can reference a resource like http://yoursite.com/images/background.png as http://yoursite.com/static/1/images/background.png to be cached forever, which you can change to http://yoursite.com/static/2/images/background.png in case the contents of the file get updated. Alternatively, instead of incremental numeric values, you may want to use the hash of the current Git commit or any other identifier.
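As a sketch, the versioned URLs can be generated rather than hand-edited; the helper below is hypothetical:

```shell
# Hypothetical helper: build a versioned static URL from a version token
# and a resource path.
static_url() {
    printf 'http://yoursite.com/static/%s/%s\n' "$1" "$2"
}

# with a manually bumped number:
static_url 2 images/background.png
# or keyed to the current Git commit (run inside a repository):
# static_url "$(git rev-parse --short HEAD)" images/background.png
```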
A while ago I embarked on the quest of extending wifi coverage to our whole backyard.
Having a venerable ASUS WL-500GPv2 sitting at the front side of the house, my natural approach was to place another access point (AP) near the back side of the house, which would cover our whole backyard. That is, in theory. As it turns out, in reality things are a little more complex.
After installing the AP, I was getting complaints from my sizeable user base (my sister and my mother) that the connectivity of their smartphones and tablets had got shitty beyond imagination. After investigating, I realized that upon entering the house from the backyard, wifi devices connected to the AP, and as they moved towards the front of the house, this connection stayed alive despite the router having a much stronger signal level at that point than the AP did. I even set up a multiple-AP (roaming) network configuration as suggested, but the same thing was happening; I just couldn't see right away which AP I was connected to.
I was dumbfounded by what I saw. I assumed that wifi devices always (re)connect to the AP with the strongest signal level. Wouldn't this be the Right Thing to do, after all? Well, not so much.
The first problem is switchover lag. Wifi is not GSM. With GSM, you can travel through the country and move across dozens of cell towers without noticing a thing. With wifi, switchover lag is noticeable and highly unwanted when using streaming applications, especially VoIP.
The second problem is deciding when to switch over. The hardcoded client policy is to switch over when the current AP becomes totally unreachable. Another policy could be to switch over as soon as there's another AP in the vicinity with a slightly stronger signal level. This wouldn't be optimal either, however. Just imagine being located right between two APs and taking some steps back and forth, over and over. That'd result in lots of unwanted switchovers. I guess manufacturers could put two wifi transceivers into each device to solve this issue, but it probably wouldn't justify the price, and this method would draw excess battery power.
Given that clients implement a hardcoded switchover policy, let's see what we could do on the server side. A buddy of mine who worked as an admin at an ISP suggested using RouterBOARD appliances, with which one can specify a dB threshold below which the appliance disconnects the relevant clients so that they can switch over to another AP in the vicinity. Unfortunately, such an uber feature is out of reach for most, and I don't know of any other devices implementing it, not even OpenWrt-based ones.
So what did I end up with? My buddy also suggested placing my router in the attic and ditching the AP. Now the overall coverage is better than it used to be. It's not perfect, but the signal is almost always within reach on our property. As a rule of thumb, one should place the wifi router at the highest and most central location. I'm pretty happy overall, although a wireless-N MIMO router would probably boost signal levels like crazy. I'm in savings mode right now, though, and I don't wanna spend a ton of money on an ASUS RT-N66U Dark Knight until it's totally justified.
About two and a half years ago, I invested in a heavily capable laptop, an Acer Aspire 8935G. After having spent all this time using it, I've finally reached the conclusion that I'll avoid laptops like the plague in the future. I understand that it's quite a harsh stance, especially given a laptop of this caliber, but there are too many reasons against them from my perspective.
My first reason for never buying a laptop again: Neither suspend nor hibernate works on Linux
Just tell me a more essential feature to expect from a laptop. When I go to sleep, I wanna suspend my laptop to have a silent environment and to be able to continue my work from where I left off. When I leave home for some hours, I'd also like to suspend my laptop just to save some power. Hibernate could also work (in a suboptimal fashion) in such situations, except that it doesn't. Upon resuming, my laptop freezes in no time. Let's also take into consideration that I use a really complex session with lots of applications spread across multiple workspaces and lots of passwords to type upon startup. This shit costs me about a boring quarter of an hour every time I wake up. It may not seem much, but I despise this ritual, and I cannot forgive such an essential feature for not working.
So far, I've surely spent more than 100 hours trying to make resume work, with no success. I've tried a number of distributions, fiddled with various parameters of s2ram, tried to suspend from the console, switched the graphics card, and did pretty much everything under the sun. According to my understanding, the major problem is that the iGPU gets resumed instead of the eGPU, and the BIOS provides no option to disable the iGPU. In general, this BIOS is dumbed-down crap, providing only a handful of options at most. I'm not in the mood to elaborate on this in detail, but it's been a sickening experience which I couldn't resolve despite having a strong Linux background and spending a *lot* of time on the issue.
The major problem, the way I see it, is that most laptop manufacturers (Acer surely included) don't give a shit about Linux support. I can't really blame them considering the 1% market share of Linux, but it's sure as hell that I won't give them a fucking cent ever again for not being able to suspend such a crazy-expensive laptop.
My second reason for never buying a laptop again: I have to pay for a sub-optimal hardware and software configuration, most of which I already have
Let's suppose that one already owns a laptop and is about to buy a new one. Let's go over which hardware components could be reused from the old laptop:
- HDDs, SSDs
- Wifi module
- Bluetooth module
(I didn't list the motherboard, the CPU and the graphics card because Moore's law ruthlessly obsoletes these components.)
Some of these components (HDDs, SSDs, the wifi module, the Bluetooth module) could easily be reused in a new laptop, but manufacturers provide no means to order a laptop without them. Other components (screen, keyboard, case) could theoretically be reused as well, but manufacturers couldn't care less about designing for reusability. As a result, customers have to pay for all components every time they buy a new laptop. This is the opposite of the PC world.
And let's not even mention that nowadays almost every laptop comes with a glossy screen, which I utterly hate because of its reflections. Hence begins my journey of searching for a replacement matte screen, making me spend a hundred-something extra bucks, and only if I get lucky enough to find one.
On the software side of things, given that I dislike Microsoft as much as I do and don't even use Windows, my first thing to do is send the laptop back to Acer for them to remove Windows. This takes about two weeks, and I get almost no money back because I have to pay for my laptop to be shipped to the Acer service center. Fail!
The portable desktop
My approach involves using 1 main station and N dock stations, N being the number of places where I frequently spend time doing heavy computing. If you're like most people, you only heavily use computers at home and at work. That's two places. I personally work from home, but I travel between two locations on a frequent basis and spend some time at each, leaving me with two places, too.
The main station is a Mini-ITX box composed of:
- Mini-ITX case
- PicoPSU power supply
- Mainboard, CPU, and RAM
- Graphics card
- Optionally, wifi and/or Bluetooth, depending on the motherboard and on your needs
A dock station is composed of:
- Monitor
- Keyboard and mouse
- USB hub
- DC power supply
Let's pick a super-capable, desktop-like laptop like the Acer Aspire 8950G, which will set you back about $1,600 and will be replaced every few years. (So far, the only Acer laptops I'd consider desktop-like are the ones with 18.4" screens.)
The permanent parts of the main station cost $216 and are composed of:
- Lian Li PC-TU200 Mini-ITX case sells for $160
- PicoPSU-160-XT DC-DC Converter, 160 W power supply sells for $56
The soon-to-be-obsoleted parts of the main station cost $474 and are composed of:
- ZOTAC H55-ITX mainboard sells for $130
- Core i5-2500K CPU sells for $220
- 2 x Kingston DDR3-RAM 4GB PC3-10667 (KVR1333D3N9/4G) sell for $44
- MSI N430GT-MD2GD3 2048MB Graphics card sells for $80
A dock station costs $400 and is composed of:
- BenQ G2420HDBL Monitor for $200
- Leopold Tenkeyless Tactile Click Keyboard sells for $110
- Logitech M500 Mouse sells for $30
- USB hub sells for $10
- 19v/8.4A 160 Watt AC-DC Power Adapter sells for $50
You surely won't get the parts for these exact prices, but the numbers are in the ballpark. That's a $1,600 recurring cost versus a $1,016 one-time cost plus a $474 recurring cost.
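The arithmetic behind these totals can be double-checked with a quick shell calculation:

```shell
# Totals from the part list above, in USD.
permanent=$((160 + 56))               # case + PicoPSU
obsoletable=$((130 + 220 + 44 + 80))  # mainboard + CPU + RAM + graphics card
dock=$((200 + 110 + 30 + 10 + 50))    # monitor + keyboard + mouse + hub + PSU
one_time=$((permanent + 2 * dock))    # permanent parts plus two dock stations

echo "permanent:   $permanent"    # 216
echo "obsoletable: $obsoletable"  # 474
echo "one dock:    $dock"         # 400
echo "one-time:    $one_time"     # 1016
```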
I personally never needed a laptop; I needed a portable desktop. The pros of this solution are fairly apparent, but I list them for completeness' sake:
- Having the exact hardware configuration that you want
- Better compatibility, allowing you to suspend and resume on Linux
Right now, I'm not sure when I will ditch my laptop. So far I'm satisfied with its performance, but the time will come eventually, inevitably.
Given the lack of portability, my approach is not for everyone, but I think it's thought-provoking because many people don't even consider the possible advantages of such a configuration in this laptop-centric world.
This morning, it's been quite an experience to notice that Google has fucked up Reader so badly that nobody could have foreseen it. How such an abomination could ever have been created by a company that is supposed to create web applications of superior usability is surely beyond me, but as a result of this event, I've migrated from the abandoned shared items page to Tumblr.
This post could have been a tweet due to its small size, but I ultimately decided to post it here because of its significance.