Minimizing CSS and JavaScript HTTP requests automatically on the CMS level

Broadband Internet connections are pretty standard nowadays, but despite this many sites feel slow to load. Why is that? There is a multitude of reasons, but the one that really hits home for me is excess HTTP requests. To be even more specific, I'd like to talk about excess CSS and JavaScript file requests. It's not unusual for sites to load a dozen or more CSS and JavaScript files combined, which I think is way overkill.

I came up with an algorithm that could be implemented by any CMS at the API level, and it could dramatically reduce the load times of sites and relieve web servers significantly.

  1. Expose a dedicated API on the CMS level for plugins, such as add_cached_css() / add_cached_js().
  2. On every page load, execute the following steps (3-7):
  3. Check the modification times of all the CSS / JavaScript files.
  4. If any file has been modified since the last page load, or any new file has been added, go on; otherwise abort.
  5. Save the modification times of all the CSS / JavaScript files, concatenate the files and md5sum the concatenated result.
  6. Save the concatenated CSS / JavaScript files under their md5sums, such as 7c1735b79f2d13052454c196259ca511.css and 9fee0c4c4391bd75ca4269dac409a0aa.js
  7. Store the md5sums so the CMS can reference the generated files from the main page.
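Just to make steps 3-7 concrete, here's a rough shell sketch of the CSS half only; the assets/css and cache paths are placeholders I made up, not anything a real CMS dictates:

#!/bin/bash
# Hedged sketch of steps 3-7 for CSS only; paths are placeholders.
set -e
cache_dir=cache
stamp="$cache_dir/css.mtimes"
mkdir -p "$cache_dir"

# Steps 3-4: collect the modification times of all registered CSS files and
# bail out early if nothing has changed since the last page load.
mtimes=$(stat -c '%n %Y' assets/css/*.css)
[ -f "$stamp" ] && [ "$mtimes" = "$(cat "$stamp")" ] && exit 0

# Step 5: remember the new modification times, then concatenate and md5sum.
echo "$mtimes" > "$stamp"
md5=$(cat assets/css/*.css | md5sum | cut -d' ' -f1)

# Step 6: save the concatenated file under its md5sum.
cat assets/css/*.css > "$cache_dir/$md5.css"

# Step 7: store the md5sum so the CMS can reference the generated file.
echo "$md5" > "$cache_dir/css.current"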

A couple of things to note:

  • The generated CSS / JavaScript files can be cached forever, as it's practically impossible for two distinct generated files to ever end up with the same name.
  • This algorithm could be implemented by any CMS so that plugins could use it with no effort.
  • New API functions may not even be necessary in every CMS.  For example, WordPress already has functions for adding CSS / JavaScript files.  A simple define should be enough to activate such an algorithm.

Let me know what you think.

How to encode uncompressed .MOV to MPEG-4 from the command line

I've been trying to find this out for a while. I have a digicam that produces uncompressed .MOV files, which are huge. I wanted a simple way to encode those to MPEG-4 from the command line.

You practically have to be a rocket scientist to comprehend ffmpeg's command line options. I finally succeeded by encoding a movie with Kdenlive and eventually realized that it can export encoder scripts. Making a flexible script from such an exported script wasn't trivial for me, but with some help from the Media Lovin' Toolkit mailing list I was able to do it.

Here's my mov2mp4 script:
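The actual script grew out of a Kdenlive-exported melt invocation; as a rough illustration only, a minimal MLT-based sketch could look like the following, where the avformat consumer properties (vcodec, vb, acodec, ab) are my assumptions and may well need tuning for your MLT / FFmpeg versions:

#!/bin/bash
# Hedged mov2mp4 sketch -- not the exported Kdenlive script. Hand the input
# file to MLT's melt with an avformat consumer; adjust the properties to taste.
set -e
in="$1"
out="${in%.*}.mp4"
melt "$in" -consumer avformat:"$out" vcodec=libx264 vb=4000k acodec=aac ab=160k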

How to embed Flash in an XHTML compliant way

Update (2011-04-27): You may wanna take a look at my Ultimate Flash XHTMLizer WordPress plugin which implements the algorithm below.

Many web developers have been there: you wanna embed Flash in an XHTML compliant way. "It shouldn't be hard", you may think at first. Most of us eventually conclude that it's nowhere near obvious. But first, a word about the importance of XHTML compliance...

I'm fully aware that most "web developers" don't give a shit about XHTML compliance, and I can understand them to a degree. Try to validate your favorite sites and you'll quickly realize that there are hardly any sites that are completely valid. There are many browser quirks that make our lives very hard, and we may not have enough time to comply with every single rule. Hell, XHTML Strict is out of control!

On the other hand, I still think that being XHTML compliant is important. I'm not actually concerned about the minor errors but rather the major ones. Missing the alt attribute of an image is no big deal, but failing to close tags can be. "The browser will correct such errors", you may say, and you may be right, but browsers handle such issues differently and you cannot tell what will happen in which browser. To summarize: even a few minor validation errors can be distracting enough that you don't notice the major ones, so your best bet is to be 100% XHTML compliant.

After I've hopefully persuaded you about the importance of XHTML compliance, let's get back to the solution. Once upon a time (back in 2002) A List Apart wrote a fascinating article about the topic. I've tested their method with the help of some of my friends and guess what: it didn't work consistently. It might have worked back then, but 8 years have passed and it failed in many of the browsers we tested with. I've also tried out validifier.com, which is a very interesting tool. It generally worked with YouTube videos but failed with Flickr slideshow embeds in many browsers. I've tested everything under the sun and nothing worked. I had to devise a solution that worked predictably all the time.

After tinkering for a while I came up with a pretty clear solution that involves using JavaScript and seems to work in all browsers all the time. The necessity of JavaScript is definitely the most inelegant aspect of my solution, but I don't consider it such a big deal in this web2.0ish age. I'd also like to note that there are some articles on the web about non-JavaScript based "solutions". Every such article revolves around transforming the embed code so that it's XHTML compliant, and the results are indeed compliant, but they do not work across every major browser. It's simply not possible without JavaScript. If you say it is, then you didn't test in every major browser on every major OS.

My method involves htmlspecialchars'ing the Flash embed code, wrapping it in a div, hiding it with CSS and un-htmlspecialchars'ing it with JavaScript. That's it, but let's see it in detail.

1) The CSS

.flash {color: #fff; }  /* Make the color the same as your background color. */

2) The HTML

htmlspecialchars'ify the embed code, put it between <div class="flash"> and </div> tags, and finally insert it into your page. I've made a handy conversion tool just for you!

3) The JavaScript

$(document).ready(function() {
    $('.flash').each(function() {
        // Re-parse the escaped embed markup in each .flash div as real HTML.
        $(this).html($(this).text());
    });
});

OK, this uses jQuery instead of pure JavaScript, but I'm certainly not going to work around IE's lack of getElementsByClassName() by hand.

Be aware that if you use this method in a CMS that generates a feed from a page then your feed will contain the embed code in plaintext format which you surely don't want. As for me, I'm only concerned about WordPress. My beloved readers who are on my feed are currently screwed in this respect but I'm thinking about writing a WordPress plugin that automatically does the above conversion process without affecting the feed.

Thanks!

I'd like to thank CsLaci, Dömi, Eszter, Gyula, JoE, Luigi and Tenta for testing this embed method in about two dozen browsers on every major OS.

How to easily log the output of your scripts with per line timestamps

I've faced this scenario a number of times: I have some scripts and I wanna log their output with a timestamp on every line, but I don't wanna write any additional code to achieve that.

Meet timestamper.
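A minimal sketch of such a filter (illustrative only; the real script may differ, for example in its timestamp format):

#!/bin/bash
# timestamper sketch: prefix every line read from stdin with a timestamp
# and write the result to stdout.
while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done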

Save it as "timestamper" to a directory in your $PATH and make it executable. Using it is a no-brainer:

./myscript 2>&1 | timestamper >> myscript.log

Could it be any simpler? I don't think so.

How to display a link in the WordPress RSS widget to your Google Reader shared page

This question puzzled me for a while. It turned out that the WordPress RSS widget uses the link element of the atom feed it fetches to turn the widget's header into a link. The problem is that the Google Reader atom feed doesn't have such an element.

I originally wanted to use Yahoo Pipes, which failed to solve this problem, so I've written a handy little script.
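One way to attack it is to serve a patched copy of the feed that has the missing link element spliced in and point the widget at that. Here's a hedged sketch as a tiny CGI; USERID and both URLs are placeholders, and the real script may well work differently:

#!/bin/bash
# Hypothetical sketch -- not the actual script. Fetch the Google Reader shared
# items atom feed, splice a <link> element pointing to the shared page right
# after the opening <feed> tag, and emit the result as a CGI response.
FEED_URL="http://www.google.com/reader/public/atom/user/USERID/state/com.google/broadcast"
SHARED_URL="http://www.google.com/reader/shared/USERID"

echo "Content-Type: application/atom+xml"
echo
curl -s "$FEED_URL" |
perl -0777 -pe "s|(<feed[^>]*>)|\$1<link rel=\"alternate\" type=\"text/html\" href=\"$SHARED_URL\"/>|"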

Installing cx_Oracle on Ubuntu Karmic Koala, 64 bit

I'm using Oracle 10g, but you're free to download any other version you want.

wget http://prdownloads.sourceforge.net/cx-oracle/cx_Oracle-5.0.2-10g-py26-1.x86_64.rpm?download
# We should use alien but it didn't work for me.
rpm2cpio cx_Oracle-5.0.2-10g-py26-1.x86_64.rpm | cpio -id
sudo cp usr/lib64/python2.6/site-packages/cx_Oracle.so /usr/lib/python2.6
# Go to the Oracle Instant Client download page and accept their fucking license, then download Instant Client Package - Basic for version 10.2.0.3, that is instantclient-basic-linux-x86-64-10.2.0.3-20070103.zip
unzip instantclient-basic-linux-x86-64-10.2.0.3-20070103.zip
sudo cp instantclient_10_2/{libclntsh.so.10.1,libnnz10.so} /usr/local/lib
sudo ldconfig
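A quick sanity check (assuming the stock Python 2.6) should print the module version without complaining about missing libraries:

python2.6 -c 'import cx_Oracle; print cx_Oracle.version'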

Installing proprietary shit like Oracle (and related software) is a bad experience far too often.

Lock your laptop and turn off display with the touch of a keystroke in Ubuntu Karmic

I think this feature will soon be standard in Ubuntu, as many users have requested it. It's absolutely mandatory for me because every time I leave my laptop I carry out this action, even at home. Yeah, call me paranoid...

I've written a simple script to deal with the issue:

#!/bin/bash
# Lock the screen, then force the display off after a short delay.
gnome-screensaver-command -l
sleep 3
xset -display :0.0 dpms force off

You're encouraged to bind it to any key combo. It should work perfectly out of the box, but a gnome-power-manager related bug randomly turns the display back on some seconds or minutes later, so we have to

killall gnome-power-manager

and it should be fine. For those who can't afford to live without gnome-power-manager, an alternative (and in my opinion suboptimal) workaround exists.
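As for binding the script to a key combo: on GNOME 2 (which Karmic ships) one way is to register a custom keybinding from the command line with gconftool-2. The custom0 slot, the script path and the combo below are just example values:

# Bind the lock-and-blank script to Ctrl+Alt+L as a GNOME custom keybinding.
gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/name "Lock and blank"
gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/action /home/you/bin/lock-and-blank
gconftool-2 --type string --set /desktop/gnome/keybindings/custom0/binding "<Control><Alt>l"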

Streamlined OpenVPN configuration for LANs

I have a recurring task of setting up OpenVPN for the LANs of small enterprises and adding / removing users.  Usually they have a dumb little TP-Link or D-Link router facing the public Internet, we bring a relatively powerful PC to their office, and my job is to configure the PC as an OpenVPN gateway (among other things).  OpenVPN traffic gets forwarded to our PC through the dumb little router using port forwarding.  This is not particularly challenging, but I was looking for a way to automate the process as much as I could, because managing clients can be cumbersome.

Let's clarify the task at hand: an OpenVPN gateway has to be set up for a /24 LAN in order to provide access to all hosts on the LAN.  Privilege management will be implemented using PKI.  On top of that we'll use tls-auth as an HMAC firewall, so the server only answers packets with a valid signature, effectively making the OpenVPN service undetectable by any scanning technique.

The LAN should reside on a class A private subnet (10.x.y.0/24) where x and y should be chosen randomly, because that minimizes the probability of address collisions with other subnets used with OpenVPN.
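Picking such a subnet is a one-liner if you want to leave it to chance:

# Print a random 10.x.y.0/24 candidate for the LAN.
printf '10.%d.%d.0/24\n' $((RANDOM % 256)) $((RANDOM % 256))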

First of all, for security reasons the PKI should not reside on the server on which the OpenVPN daemon runs.  I store it on my home partition, which is heavily encrypted and regularly backed up.  I create a directory under ~/openvpn for every OpenVPN installation, where I store the server and client configuration files and the PKI.  Only the needed files will be transferred to the server or to the clients.

This post will describe the implementation of the above configuration and will provide a set of scripts to make the task very efficient.

1) Set up the ~/openvpn infrastructure

mkdir ~/openvpn
cd ~/openvpn

# User credentials will be temporarily published under the directory below for user download.  This should be a trusted host.
# It probably goes without saying, but $PUBLISH_URL must not under any circumstances be listable by the web server.
cat >config <<END
PUBLISH_PATH=yourhost:/var/www/pki
PUBLISH_URL=http://yourhost.com/pki
END

wget /wordpress/wp-content/uploads/openvpn-scripts.tar.bz2
tar xjf openvpn-scripts.tar.bz2 -C ~/bin
rm openvpn-scripts.tar.bz2

2) Set up the server directory

cd ~/openvpn
mkdir SERVERNAME
cd SERVERNAME

3) Set up the PKI

mkdir easy-rsa
cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* easy-rsa
cd easy-rsa
# Edit all the KEY_* variables in ./vars so you won't have to type them anymore.
. ./vars
./clean-all
./build-ca
./build-key-server server
./build-dh
cd ..
mkdir ccd

4) Create server configuration

openvpn --genkey --secret ta.key

cat >server.conf << END
mode server
local 10.X.Y.Z
tls-server
dev tun
proto udp
port 1194
client-config-dir ccd
ifconfig 10.8.0.1 10.8.0.2
push "route 10.X.Y.0 255.255.255.0"
push "route 10.8.0.0 255.255.255.0"
route 10.8.0.0 255.255.255.0
keepalive 10 120
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
tls-auth ta.key 0
log server.log
verb 3
END

# This will be used by the synchronization script to rsync the configuration to the server through SSH.
echo SERVERHOSTNAME > server.hostname

5) Create general client configuration

# This is the client configuration from which all the individual client configurations will be generated.
# Don't touch "username" as it will be automatically replaced with the name of the relevant user during the generation process.

cat >client.conf << END
dev tun
proto udp
nobind
remote OPENVPN-GATEWAY-HOST 1194
client
ca server.crt
tls-auth server-ta.key 1
cert username.crt
key username.key
verb 3
END

6) Add users

openvpn-add-user username1
openvpn-add-user username2
...

# The configuration will be automatically transferred to the server.
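For the curious, here's a hedged sketch of roughly what an openvpn-add-user style script could do, pieced together from the conventions above. The real script ships in openvpn-scripts.tar.bz2 and certainly differs in its details; the clients directory and the /etc/openvpn destination below are my assumptions:

#!/bin/bash
# Hypothetical openvpn-add-user sketch -- not the shipped script.
# Run from ~/openvpn/SERVERNAME; $1 is the username.
set -e
user="$1"

# 1) Build the client key / certificate with easy-rsa.
( cd easy-rsa && . ./vars && ./build-key "$user" )

# 2) Generate the per-user client configuration from the template;
#    the "username" placeholder is replaced as described above.
mkdir -p clients
sed "s/username/$user/g" client.conf > "clients/$user.conf"

# 3) Rsync the server-side configuration to the host stored in server.hostname.
rsync -a server.conf ccd ta.key \
    easy-rsa/keys/ca.crt easy-rsa/keys/server.crt easy-rsa/keys/server.key easy-rsa/keys/dh1024.pem \
    "$(cat server.hostname):/etc/openvpn/"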

7) Publish client credentials

openvpn-publish-user-credentials username1
openvpn-publish-user-credentials username2
...

# Which outputs something like this:
# User credentials are accessible from http://yourhost.com/pki/servername-username1-65378842373270.zip
# User credentials are accessible from http://yourhost.com/pki/servername-username2-10200344763221.zip
# ...

# These URLs are meant to be mailed to the relevant users and removed eventually.

8) Unpublish client credentials

openvpn-unpublish-user-credentials username1
openvpn-unpublish-user-credentials username2
...

# Which removes the relevant files from the server.

9) Revoke client credentials

openvpn-revoke-user-credentials username

# The configuration will be automatically transferred to the server.