Boot multiple Linux iso images from USB – manual setup

April 17th, 2013

The first thing to decide is the USB key’s file system. If you just want to use it for booting Linux images, go for a Linux file system. If you also want to use it to store other data and have that data visible to other operating systems, go for vfat.

When the USB key is mounted go to the mount point and create the following two folders:
mkdir boot iso

After this, execute the following command to install GRUB to the MBR of the USB stick:
# grub-install --force --no-floppy --root-directory=/mount/point /dev/sdX

Copy the ISO file(s) to the iso folder on the USB key.

Create or edit the grub.cfg file in the boot/grub folder on the USB key (click here for an example of the file).

The menu entries vary between Linux distributions. Some of them are shown in the example above. For Ubuntu/Mint/Knoppix/Gparted/Grub Rescue Remix/Parted Magic/SystemRescueCD check here (note that those examples are written for booting from a hard drive, so the (hd0,X) options are not needed).
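As a sketch, a loopback menu entry in boot/grub/grub.cfg could look like the one below. The ISO file name and the casper kernel/initrd paths are assumptions here: they are Ubuntu-style and vary by distribution and release.

```
menuentry "Ubuntu from ISO" {
    # Path is relative to the root of the USB key's file system
    set isofile="/iso/ubuntu.iso"
    loopback loop $isofile
    # casper paths are Ubuntu-specific; other distributions use different ones
    linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile noprompt
    initrd (loop)/casper/initrd.lz
}
```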

Installing from testing source

January 14th, 2013

Recently I needed to install a package (keepass2) which was available only in the testing and unstable repositories. As I’m running the stable version of Debian (for stability and compatibility with servers), I’ve come across an installation method worth remembering. Below is an adaptation of the method mentioned here.

      Add repository
      Open the /etc/apt/sources.list file and add (in this case) the Wheezy main source repository deb-src <mirror URL> wheezy main or
      # echo "deb-src <mirror URL> wheezy main" >> /etc/apt/sources.list
      Create a folder for storing the source and build files
      $ mkdir -p ~/Build/keepass2
      $ cd !$
      Get the source files:
      $ apt-get source keepass2
      Install the build dependencies
      $ su
      # apt-get build-dep keepass2
      # exit
      Build the package from source:
      $ cd keepass2-2.18+dfsg
      $ dpkg-buildpackage -us -uc
      $ cd ..
      Install it:
      $ su
      # dpkg -i *.deb

Setting cron-apt

June 12th, 2012

First install it.

apt-get install cron-apt

I like to get an e-mail every time cron-apt runs; this way I know that everything works. To get the mail, change MAILON="error" to MAILON="always" and uncomment the line in /etc/cron-apt/config.
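The change can also be scripted with sed. A minimal sketch, demonstrated on a temporary copy of the file (on Debian the real path is /etc/cron-apt/config; point cfg there to apply it for real):

```shell
# Work on a temp copy first; set cfg=/etc/cron-apt/config to do it for real
cfg=$(mktemp)
printf '# MAILON="error"\n' > "$cfg"   # stand-in for the shipped default line
# Uncomment the line and switch "error" to "always"
sed -i 's/^#[[:space:]]*MAILON="error"/MAILON="always"/' "$cfg"
grep '^MAILON' "$cfg"   # → MAILON="always"
```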

Reconfigure exim4

dpkg-reconfigure exim4-config

Settings (in order of appearance):

  1. mail sent by smarthost; no local mail
  2. System mail name: (confirm the default option, i.e. hostname)
  3. IP-addresses to listen on for incoming SMTP connections: (confirm the default option, i.e.; ::1)
  4. Other destinations for which mail is accepted: (confirm the default option, i.e. hostname)
  5. Visible domain name for local users: (confirm the default option, i.e. hostname)
  6. IP address or host name of the outgoing smarthost: (your smarthost’s host name or IP)
  7. Keep number of DNS-queries minimal (Dial-on-Demand)?: (confirm the default option, i.e. No)
  8. Split configuration into small files?: (confirm the default option, i.e. No)
  9. Root and postmaster mail recipient:

Squeeze LXC container in Debian Squeeze

August 31st, 2011

In Debian Squeeze, the default lxc-debian script installs a Lenny container. The bug was reported and fixed in version 0.7.4 of LXC.
To install a Squeeze container, first download Sid’s version of LXC (currently 0.7.5-1), unpack it and find the lxc-debian file (lxc_0.7.5-1_amd64.deb -> data.tar.gz -> /usr/lib/lxc/templates/lxc-debian). Copy the file to your machine/server, make it executable, and in the same folder run:

./lxc-debian -p /var/lib/lxc/name_of_container

Finding duplicate entries in MySQL database

July 7th, 2011

On upgrading my Moodle installation from version 1.9 to 2.1, the upgrade process (initiated from the command line) exited with the following error:

!!! Error reading from database !!!
!! Illegal mix of collations (utf8_general_ci,IMPLICIT) and (utf8_unicode_ci,IMPLICIT) for operation '='
SELECT AS oldpage_id, po.pagename AS oldpage_pagename, po.version, po.flags,
                   po.content,, po.userid AS oldpage_userid, po.created, po.lastmodified, po.refs, po.meta, po.hits,,
          AS newpage_id, p.subwikiid, p.title, p.cachedcontent, p.timecreated, p.timemodified AS newpage_timemodified,
                   p.timerendered, p.userid AS newpage_userid, p.pageviews, p.readonly, AS entry_id, e.wikiid, e.course AS entrycourse,
                   e.groupid, e.userid AS entry_userid, e.pagename AS entry_pagename, e.timemodified AS entry_timemodified,
          AS wiki_id, w.course AS wiki_course,, w.summary AS summary, w.pagename AS wiki_pagename, w.wtype,
                   w.ewikiprinttitle, w.htmlmode, w.ewikiacceptbinary, w.disablecamelcase, w.setpageflags, w.strippages, w.removepages,
                   w.revertchanges, w.initialcontent, w.timemodified AS wiki_timemodified,
          AS cmid
              FROM wiki_pages_old po
              LEFT OUTER JOIN wiki_entries_old e ON =
              LEFT OUTER JOIN wiki w ON = e.wikiid
              LEFT OUTER JOIN wiki_subwikis s ON e.groupid = s.groupid AND e.wikiid = s.wikiid AND e.userid = s.userid
              LEFT OUTER JOIN wiki_pages p ON po.pagename = p.title AND p.subwikiid =
              JOIN modules m ON = 'wiki'
              JOIN course_modules cm ON (cm.module = AND cm.instance =

Summed up — an “Illegal mix of collations” error was occurring while querying the table wiki_pages_old. A quick search found a possible solution. Unfortunately (for now) I’m not really sure whether it helped, since the error persisted, but it is worth making a note of it until I’m sure (after migrating the production server).

So I went in with phpMyAdmin and changed the collation on the table manually, which didn’t help either, since it didn’t change the collation of the individual fields. So I changed the fields’ collation manually, except for one field, which — after changing the collation — started to report a duplicate entry. After some trial and error I’ve found a solution.
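The per-field work can also be done in one statement: ALTER TABLE ... CONVERT TO converts every character column as well as the table default, which is exactly the difference between the two manual attempts above. A sketch for this case, assuming utf8_unicode_ci is the target collation (back up the table first):

```sql
-- Converts the table default AND every character column in one go
-- (assumption: utf8_unicode_ci is the collation the rest of the DB uses)
ALTER TABLE wiki_pages_old
  CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
```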

From the table dump file I’ve deleted the offending key and after importing the table into the database I’ve searched for the offending duplicate entries.

The following MySQL query, from the MySQL forum, finds the duplicate entries.

SELECT t1.* FROM t1 INNER JOIN (
SELECT colA,colB,COUNT(*) FROM t1 GROUP BY colA,colB HAVING COUNT(*)>1) as t2
ON t1.colA = t2.colA and t1.colB = t2.colB;

In my case the query above becomes:

SELECT wiki_pages_old.* FROM wiki_pages_old INNER JOIN (
SELECT pagename,version,wiki,COUNT(*) FROM wiki_pages_old GROUP BY pagename,version,wiki HAVING COUNT(*)>1) as wiki_t2
ON wiki_pages_old.pagename = wiki_t2.pagename and wiki_pages_old.version = wiki_t2.version and =;

I’ll write some more details after migrating the production server.

Audio and Video Manipulation With ffmpeg

July 5th, 2011

In order to avoid re-learning the same things over and over, I’m listing a few useful commands. For more detail and information go here.

Any video/audio to OGG/Vorbis

ffmpeg -i source.file -vn -acodec libvorbis -aq 6 audio.ogg

The -vn option disables video recording — omit it if the source is an audio file. The -aq option stands for audio quality; a value of 6 results in roughly 192 kbit/s. For other values check this table.

Any audio/video to mp3

ffmpeg -i source.file -vn -ar 44100 -ac 2 -ab 192k -f mp3 sound.mp3

The -vn option disables video recording — omit it if the source is an audio file; -ar sets the audio sampling frequency; -ac the number of audio channels; -ab the audio bitrate (192k = 192 kbit/s); -f forces the mp3 format.
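The mp3 command above can be wrapped in a loop to convert a whole folder at once. A sketch — the .wav extension is just an assumption, adjust it to your source files:

```shell
# Convert every .wav in the current directory to mp3 with the options above;
# ${f%.wav}.mp3 swaps the extension while keeping the base name
for f in *.wav; do
  ffmpeg -i "$f" -vn -ar 44100 -ac 2 -ab 192k -f mp3 "${f%.wav}.mp3"
done
```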

Installing from source with Checkinstall

January 31st, 2011

If everything goes well, all you need to do is run the commands below.

$ ./configure
$ make -j 4
# checkinstall -D make install

The -j 4 option means that make will run 4 concurrent jobs and thus utilise 4 cores.
You’ll end up with an installed package, or a deb package to install with dpkg. In either case you can simply remove the package and all its files with apt-get or another package manager.


Setting up Logwatch

January 31st, 2011

Checking logs is the only way to know what’s happening with your servers and one way to check them is using Logwatch.

Installing it on Debian is easy:
apt-get install logwatch

On my virtual Debian host there was no configuration file in the expected place, so I copied it from the /usr/share/logwatch/default.conf folder:
cp /usr/share/logwatch/default.conf/logwatch.conf /etc/logwatch/conf/

Things you must change in this file are:

  • Output = mail
  • MailTo =
  • Detail = High

The rest is optional and subject to your needs. The logwatch.conf is well documented.

Create folder /var/cache/logwatch needed by logwatch as specified in logwatch.conf.
# mkdir /var/cache/logwatch

Test the setup by running:
# logwatch

To finish the automation, edit the /etc/cron.daily/00logwatch file, removing the --mailto root option, so that mails are sent to the address specified in the logwatch.conf file.


Upgrade Debian over slow Internet connection

October 14th, 2010

It was a moderately cold fall evening when I finally decided to upgrade to Squeeze on my home computer. I have a 2 MB line at home, shared with two other users. Apt-get dist-upgrade announced two and a half hours of downloading at full speed, and as it was evening I didn’t want to cut off the other users. I have a 1 GB optical line straight from my office to one of the Debian mirrors, so I decided to do the downloading at work.

After a short googling I came across this comment and this manual. The steps below are a combination of both.

  1. At home I’ve run the following command:
    apt-get --print-uris -y dist-upgrade | grep "^'" | gawk '{ print $1 }' | sed "s/'//g" > packages.lst
  2. Sent myself the packages.lst file to the office with the fast connection, and there ran the command below in an empty folder on a portable device. You can use the -P option to specify a destination folder.
    wget -i packages.lst
  3. Back at home I’ve run:
    apt-get -o dir::cache::archives="/folder/on/portable/disc/" dist-upgrade
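The extraction pipeline in step 1 can be sanity-checked on a single sample line of apt-get --print-uris output. The URL and fields below are made up for illustration, and plain awk behaves the same as gawk here:

```shell
# A fabricated --print-uris line: 'URL' filename size checksum
line="'http://ftp.debian.org/debian/pool/main/b/bash/bash_4.1-3_amd64.deb' bash_4.1-3_amd64.deb 1234 MD5Sum:0123"
# Keep only lines starting with a quote, take field 1, strip the quotes
echo "$line" | grep "^'" | awk '{ print $1 }' | sed "s/'//g"
# → http://ftp.debian.org/debian/pool/main/b/bash/bash_4.1-3_amd64.deb
```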

Package and Install PHP extension

May 17th, 2010

No need to use too many words.
Go here and follow the step-by-step instructions for downloading the PECL extension, making a DEB package and installing it with dpkg.