
Pausing Spotify and playing a random video in Python – A party trick for Halloween

For a Halloween party last weekend I wrote a Python script to pause Spotify, play a random video and start music playback again. The videos were basic Ogg files I cobbled together in OpenShot, each showing a scary image with evil laughter or screaming. I can’t really share them, as I don’t have rights to the media, but it’s pretty simple to recreate them yourself.

The code for this script is on GitHub, and I’ve reproduced the latest snapshot below. Feel free to fork and improve it if you want to scare your guests, or to add support for other OSes. Presently it only supports Linux, because I used D-Bus to perform the play/pause actions.

#!/usr/bin/python

'''
This is a Halloween party script which pauses Spotify and plays a video
at random intervals.
'''

import datetime
import os
import random
import subprocess
from time import sleep

# Only trigger scares between 9pm and 11pm
start_time = datetime.time(21, 0, 0)
stop_time = datetime.time(23, 0, 0)

video_dir = '/home/alex/Videos/scream/'

# Videos and their relative weights: scream1 is 30 times more
# likely to be picked than happy
videos = {'scream1_nofade.ogg': 30,
          'happy.ogg': 1,
          'evil_laugh.ogg': 5,
          }


def time_in_range(start, end, x):
    """Return true if x is in the range [start, end]."""
    if start <= end:
        return start <= x <= end
    else:
        # The range wraps around midnight
        return start <= x or x <= end


def weighted_choice(weights):
    """Return a random key from weights, biased by its weight."""
    total = sum(weights[video] for video in weights)
    r = random.uniform(0, total)
    upto = 0
    print("total: %s\nrandom: %s" % (total, r))

    for video in weights:
        w = weights[video]
        if upto + w > r:
            return video
        upto += w
    assert False, "shouldn't get here"


def spotifyPause():
    print("pausing spotify")
    command = ("dbus-send --print-reply "
               "--dest=org.mpris.MediaPlayer2.spotify "
               "/org/mpris/MediaPlayer2 "
               "org.mpris.MediaPlayer2.Player.Pause")
    os.system(command)


def spotifyPlay():
    print("playing spotify")
    command = ("dbus-send --print-reply "
               "--dest=org.mpris.MediaPlayer2.spotify "
               "/org/mpris/MediaPlayer2 "
               "org.mpris.MediaPlayer2.Player.PlayPause")
    os.system(command)


def play_video(video_file):
    print("Playing %s" % video_file)
    return subprocess.check_call(
        ['/usr/bin/mplayer', '-really-quiet', '-fs', video_file])


def playBuzz(buzzfile):
    # Play the buzz audio, starting 18 seconds in
    print("Buzz...")
    return subprocess.check_call(
        ['/usr/bin/mplayer', '-really-quiet', '-ss', '18', buzzfile])


def infiniteLoop():
    while True:
        choice = weighted_choice(videos)

        # Wait 20-40 minutes before the next scare
        random_time = random.randrange(1200, 2400)

        video_file = video_dir + choice
        print("Chose video %s after %s seconds" % (video_file, random_time))
        sleep(random_time)

        # Occasionally (about 1 in 10 times) play a buzz sound first
        buzz = random.randrange(0, 100) > 90

        # Check the time *after* sleeping, otherwise it's stale
        current_time = datetime.datetime.now().time()
        if not time_in_range(start_time, stop_time, current_time):
            print("Not playing video, outside time range")
            continue

        # Do it
        spotifyPause()
        if buzz:
            playBuzz('/home/alex/Videos/scream/audio/buzz.mp3')
        play_video(video_file)
        spotifyPlay()


if __name__ == "__main__":
    infiniteLoop()

Stopping the Intel WiFi LED from blinking in Ubuntu

My Dell E4300 has an Intel 5100 WiFi card, and its LED blinks constantly. I still don’t understand how Intel can consider blinking the WiFi LED during data transfer to be a sensible default. For most people it blinks non-stop, which is both uninformative and irritating.

Fortunately blogger Alex Cabal found a solution for Karmic, and his updated solution also works for Natty/11.04. It describes opening a text editor and pasting in a couple of lines; however, I’m much lazier, so here’s the one-line version:

echo 'options iwlcore led_mode=1' >> /etc/modprobe.d/wlan.conf

Of course, the command above must be run from a root shell (sudo -i); simply prefixing it with sudo fails with a permission error because the >> redirection is performed by your own unprivileged shell, not by the elevated command. You also need to reboot… or you could just unload and reload the module:

modprobe -r iwlagn && modprobe iwlagn

The double ampersand just executes the next command if the previous one succeeded (exit status 0).
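
If you’d rather not open a root shell at all, the usual trick is to pipe through tee, since only the command after the pipe needs root and your own shell never touches the file:

echo 'options iwlcore led_mode=1' | sudo tee -a /etc/modprobe.d/wlan.conf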

Update

As of Ubuntu 11.10 (kernel 3.0.0) the option has to be applied to the iwlagn module; options for iwlcore are ignored. Thus the full solution now becomes:

sudo -i
echo 'options iwlagn led_mode=1' >> /etc/modprobe.d/wlan.conf
modprobe -r iwlagn && modprobe iwlagn

Setting up a secure Ubuntu LAMP server

Disclaimer: This article is provided for your information only, and simply following this guide will not make your server “secure”. As the server administrator you are ultimately responsible for its security!

Intro

Having recently been through the process of setting up a few Ubuntu LAMP (Linux, Apache, MySQL, PHP) servers, I thought I’d turn my notes into an article and provide a starter’s guide to setting up the LAMP stack on Ubuntu.

It goes without saying that the only truly secure computer is one with no network connection, no ports or input devices, locked in a bank vault, but such a machine is not terribly useful. Regretfully, compromises must be made to allow functionality! Still, beyond presuming insecurity, there is a lot you can do to make your server more secure and keep out the vast majority of would-be hackers running port scans, Metasploit scripts and dictionary attacks.
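
Hardening is a big topic, but as one concrete taste of what’s possible, a host firewall alone defeats most casual port scanning. A minimal sketch using Ubuntu’s ufw (the open ports are assumptions for a typical web server):

ufw default deny incoming    # drop all inbound traffic by default
ufw limit 22/tcp             # SSH, rate-limited to blunt dictionary attacks
ufw allow 80/tcp             # HTTP
ufw enable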

Recovering a RAID5 mdadm array with two failed devices

Update
Before reading this article you should know that it is now quite old and there is a better method: ‘mdadm --assemble --force’ (it may have been there all along). This will try to assemble the array by marking previously failed drives as good. From the man page:

If mdadm cannot find enough working devices to start the array, but can find some devices that are recorded as having failed, then it will mark those devices as working so that the array can be started.

However, I would strongly suggest that you first disconnect the drive that failed first. If you need to discover which device failed first, or if assemble doesn’t work and you need to manually recreate the array, then read on.
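
As a rough sketch (array and device names here are examples, not from the incident below), comparing event counts identifies the first failure, and a forced assembly then looks like:

mdadm --examine /dev/sd[bcd]1 | egrep 'Update Time|Events'   # lowest event count failed first
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1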

I found myself in an interesting situation with my parents’ home server today (Ubuntu 10.04). Hardware-wise it’s not the best setup: two of the drives are in an external enclosure connected with eSATA cables. I did encourage Dad to buy a proper enclosure, but was unsuccessful. This is a demonstration of why eSATA is a very bad idea for RAID devices.

What happened was that one of the cables had been bumped, disconnecting one of the drives, so the array had been running in a degraded state for over a month – not good. I noticed this when logging in one day to fix something else. The device wasn’t visible so I told Dad to check the cable, but unfortunately when he went to secure the cable he must have somehow disconnected another one. This caused a second drive to fail, and the array immediately stopped.

Despite there being no hardware failure, the situation is similar to someone replacing the wrong drive in a RAID array. Recovering it was an interesting experience, so I’ve documented the process here.

Splitting files with dd

We have an ESXi box hosted with Rackspace; it took a bit of pushing to get them to install ESXi in the first place, as they tried to get us to use their cloud offering. But this is a staging environment and we want something dedicated, on hardware we control, so we can get an idea of performance without other people’s workloads muddying the water.

Anyway, I’ve been having a bit of fun getting our server template uploaded to it. It’s only 11GB compressed – not exactly large, but apparently large enough to be inconvenient.

In my experience the datastore upload tool in the vSphere client frequently fails on large files. In this case I was getting the “Failed to log into NFC server” error, which is probably due to a requisite port not being open. I didn’t like that tool anyway; moving on.

However, the trusty-but-slow scp method was also failing. Uploads would start but consistently stalled at about the 1GB mark. I’m not sure if it’s a buffer or something getting filled in dropbear (which is designed to be a lightweight SSH server and really shouldn’t need to deal with files this large), but Googling didn’t turn up much.
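
The approach in the title is the workaround: carve the file into chunks that upload reliably, then stitch them back together on the host. A sketch of the idea (file names are placeholders, not the post’s actual commands):

dd if=template.tar.gz of=chunk.0 bs=1M count=1024              # first 1GB
dd if=template.tar.gz of=chunk.1 bs=1M count=1024 skip=1024    # second 1GB (skip is in units of bs)
dd if=template.tar.gz of=chunk.2 bs=1M count=1024 skip=2048    # and so on...
# upload the chunks individually, then on the far side:
cat chunk.0 chunk.1 chunk.2 > template.tar.gz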

Bash script to alert when memory gets low

We have a web server that’s running out of memory about once every couple of weeks. The problem is that when it happens the swap file is totally full, the system is unresponsive and it usually needs a hard reboot, so it’s a bit difficult to debug. To avoid digging through log files I don’t understand, I elected to put a script in /etc/cron.hourly which checks the total amount of free memory (swap plus physical). If there is less than 256MB free (this server has 512MB of RAM and a 1GB swap, so at that point the situation is serious), it dumps the process list to /tmp/processes.txt and sends me an email with it attached.

Note that mutt must be installed (‘apt-get install mutt’ on Debian/Ubuntu, or ‘yum install mutt’ on RedHat/CentOS).

#!/bin/bash

# Total free memory (physical + swap) in MB
free=$(free -mt | grep Total | awk '{print $4}')

if [ "$free" -lt 256 ]; then
        ps -eo %mem,pid,user,args > /tmp/processes.txt
        echo "Warning, free memory is ${free}MB" | mutt -a /tmp/processes.txt -s "Server alert" email@me.com
fi

Then of course make it executable and symlink it into cron.hourly. One gotcha: run-parts ignores file names containing dots, so don’t give the symlink a “.sh” extension:

chmod +x /etc/scripts/memalert.sh
ln -s /etc/scripts/memalert.sh /etc/cron.hourly/memalert

Au Revoir Ubuntu, Bonjour Fedora

If you check the about page and previous posts you’ll note that I’ve been travelling the past few months. In fact I’ve just settled in London and started looking for a job.

There are several shortcomings on my CV that have made it difficult to get past the recruitment agents for a lot of the roles I am interested in. Firstly, there’s the lack of big corporate experience: I worked as a technical consultant on a major corporate contract for close to six months, but the majority of my experience (including almost all of my “BAU” experience) has come from the education sector. Secondly, there’s the lack of experience on 100+ Linux server sites (unfortunately no schools are that big in New Zealand, and we don’t have the federated district IT model that many US state schools operate under). Finally, and perhaps most critically, there’s the lack of production experience with Red Hat Linux.

My own personal Linux dabbling has been with Ubuntu and Debian; at work it’s been Debian and SUSE. However, the number of roles in the UK that mention these distributions is insignificant compared to the number that mention Red Hat and CentOS (CentOS is a clone of Red Hat Enterprise Linux: basically Red Hat with the branding stripped and separately maintained repositories). In New Zealand Red Hat is hardly an endangered species, but roles that mention it are similar in number to those that mention Debian, SUSE and even Ubuntu, so lack of experience with it is not really an issue.

In the UK, however, it most certainly is an issue for recruiters searching for “Red Hat” in CV databases. For this reason I must farewell Ubuntu and switch to the red team, which, if I am to take this seriously, means adopting Fedora (presently Fedora 13) for my day-to-day computing.

I made the switch yesterday, and so far no problems. There is less hand-holding for sure, but I like how it doesn’t try to hide what’s happening under the hood. The flexible installer was also nice, if not as attractive as Ubuntu’s. I also like how Fedora ships software as the upstream maintainers intended; this strikes me as a more sustainable long-term approach than backporting distribution-specific patches every release. At the same time it doesn’t have the “coherent vision” of Ubuntu, and there are some niceties in Lucid that I really appreciated, such as the messaging notification area.

As a professional tool for software developers and Linux IT professionals, Fedora is a fine choice. For end users and Linux enthusiasts who aren’t required to use a specific distribution in their work, Ubuntu is generally an easier distribution to get into. In fact I wouldn’t recommend Fedora for anyone other than Linux geeks, because only the two most recent versions are supported, which means a forced upgrade at least every 12 months. Ubuntu LTS releases, on the other hand, are supported for three years on the desktop and five on the server. The previous 8.04 LTS release will still be supported on the desktop until April 2011, and if you install Lucid today you will get security updates for it until April 2013.

Why not CentOS 5.5? Too old. Having become accustomed to the current software shipped with Ubuntu, it’s a bit hard to go back to GNOME 2.16! (GNOME 2.16 was released in September 2006 and shipped with Ubuntu 6.10; Fedora 13 ships the current stable 2.30 release.)
As desktop distributions, CentOS and RHEL are simply too far behind to be competitive, except in large environments with legacy apps that demand absolute API stability. Modern applications are increasingly browser-based though, so for any environment considering a desktop Linux deployment that doesn’t depend on legacy desktop software, I’d suggest a very hard look at Ubuntu LTS.

Despite the title I won’t be abandoning Ubuntu entirely; in fact I plan to keep tabs on each release (maybe even dual-boot to test each one), but professional needs have dictated that I upskill in “the Red Hat way”. Let’s hope this has a happy ending!

Configuring the backup system

This article is part of a series about setting up a home server. See this article for further details.

Surprisingly, this is one of the easiest bits. If you don’t mind sticking with the options presented by the GUI, Back In Time makes backups so simple it’s almost criminal not to use it. The GUI itself is fairly straightforward, so rather than going step by step I’ll cover the important bits.

Just make sure you use the root shortcut (Back In Time – root) to prevent any permissions problems.

I’ve used NTFS for the backup volume because it supports hard links and is readable by Windows machines if something goes wrong. A native Linux file system would be preferable for many, but whatever you do, don’t use FAT32 (FAT32 doesn’t support hard links, so every snapshot would consume 100% of its size whether the files had changed since the last backup or not).
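
To see why hard links matter, remember that two directory entries can point at the same inode, so an unchanged file costs almost nothing in each new snapshot (paths here are purely illustrative):

ln /media/backup/snap.1/photo.jpg /media/backup/snap.2/photo.jpg
ls -li /media/backup/snap.*/photo.jpg    # same inode number twice, one copy on disk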

Creating the Job

This is all done in the settings menu, which isn’t labelled, but is represented by the classic screwdriver-and-spanner icon – intuitive enough.

Under General, make sure you’re saving snapshots to your backup volume. Set the schedule to whatever you like, but I prefer to handle scheduling manually as the GUI doesn’t give enough options. For a desktop machine the “daily” option would make sense, but as this machine will be on 24/7 I want the backup to run at a set time each day, not whenever the scheduler feels like it, so we will set up a cron job manually later.

Under the Include tab add your data folder (/media/data). Under Exclude I removed all the preset options as I want everything on the data volume backed up. Everything, that is, except the lost+found folder, so I would suggest clicking Add folder and adding “/media/data/lost+found”.

The auto-remove options are up to you. I set the free-space threshold to 1GB, checked the smart-remove box, and chose not to remove named snapshots – the options are all fairly self-explanatory. The expert options don’t really need tweaking unless you want different schedules for different folders.

Click OK to save and you can now take a backup.

Altering the schedule

As explained above, we want the backup to run at a set time, which the Back In Time GUI doesn’t allow for, so fire up a terminal and enter the command: ‘sudo crontab -e’

The crontab is like Task Scheduler on Windows, but arguably a lot more powerful and flexible. The ‘-e’ option just tells crontab to edit the existing crontab instead of overwriting it.

My crontab has two relevant entries: the @daily line that the Back In Time GUI added, and the line I added myself. I’m not so concerned about ‘niceness’ at 4am (nice values on Linux serve the same purpose as task priority on Windows), so I left that out. The final line is:
0 4 * * * /usr/bin/backintime --backup-job >/dev/null 2>&1

For an explanation of the crontab format, see this crontab quick reference. Basically all you need to know is that the first number is the minute and the second is the hour. So if, for example, you would rather it ran at 1.30am instead of 4am, change the first number to 30 and the second to 1 so it reads:
30 1 * * * /usr/bin/backintime --backup-job >/dev/null 2>&1

Later on we will modify this to also email the result.

Important Caveat

I just discovered that the Back In Time GUI blitzes any lines in the crontab that contain the string “backintime” whenever you click OK in the preferences window. This is a rather annoying problem, as I can easily see it happening.

I recommend making sure the GUI schedule is set to every day rather than disabled, so that if someone does fiddle, at least the backup will still happen once a day. The proper solution is to call a wrapper script which does not contain “backintime” in its name… I’ll update this once I’ve written and tested it.
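
Untested as yet, but the wrapper should only need to be a couple of lines (the script name and path are my assumptions):

#!/bin/bash
# /etc/scripts/nightly-backup - deliberately no "backintime" in the
# file name, so the GUI won't blitz the crontab line that calls it
exec /usr/bin/backintime --backup-job >/dev/null 2>&1

The crontab entry then becomes ‘0 4 * * * /etc/scripts/nightly-backup’.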

Next part – Monitoring and email configuration

Creating user accounts and setting up the file shares

This article is part of a series about setting up a home server. See this article for further details.

In this section we will create accounts for each user that will access the server, create a folder for each user, make sure the permissions are sane, and configure the samba shares.

For home environments a single user account that everyone shares can be good enough. However, I like to have some semblance of security to raise the barrier for viruses (who knows what will be connecting to the network), so I set up guest access as read-only and assign write permissions only to authenticated users.

But before we proceed…

One Caveat when doing remote administration in a NeatX session

For some reason NeatX breaks PolicyKit, which means any buttons in control-panel applets that require root privileges will simply fail to work.

The way around this is to run the applets with gksu. The most convenient way to do this in my opinion is to create a desktop shortcut.

Go to System > Administration, right-click on the Users and Groups icon in the menu, and choose “Add this launcher to desktop”. Next, right-click on the resulting desktop icon, click Properties, and in the “Command” field prepend gksu so that it reads “gksu users-admin”.

Double-clicking on the icon should then prompt you for your password, and all the buttons will work. Hopefully in the future this won’t be necessary!

I also created desktop shortcuts for Disk Utility and Back In Time (root).

Adding the users

Creating users is simple enough, but afterwards we need to add each user to two groups: “sambashare” and “users” (you should be able to figure this out). After doing this, open Advanced Settings, select the Advanced tab and change the main group to users.

The reason for changing the primary group is so that any files the user creates are also accessible to others in the users group, which will include anyone we want to be able to access files on the server. If you want to keep files private, it is best to leave the primary group as the user name. Old-school Unix people tend to know this, but for Windows refugees the lower-level Linux concepts such as user groups and file system permissions can seem a bit strange, as they work quite differently.
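
For those who prefer the terminal, the same setup for a user (using “mum” as the example) is:

usermod -aG sambashare,users mum    # append mum to the two groups
usermod -g users mum                # make "users" her primary group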

Folders and Permissions

The RAID array in my home server is mounted at /media/data. I like mount points to be owned by root to avoid accidental tampering:
chown root:root /media/data
chmod 755 /media/data

The octal permissions are 755, which means read/write/execute for the owner, and read/execute for the group and all other users. For newbies I must confess that the rwx notation is easier to understand, but unfortunately I learned octal permissions and it’s become a habit!
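
For reference, each octal digit is just the sum of read (4), write (2) and execute (1), applied to owner, group and others in that order:

chmod 755 /media/data                # 7 = rwx owner, 5 = r-x group, 5 = r-x others
chmod u=rwx,g=rx,o=rx /media/data    # identical, in rwx notation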

Under the data folder I have a folder for each user. The owner of the folder is the user, and the group is users, as I want Mum and Dad to be able to see each other’s files:
mkdir /media/data/mum
chown -R mum:users /media/data/mum
chmod -R 775 /media/data/mum

Repeat this for each user (substituting the user name for “mum” in the example above). Note the -R switch, which applies the command to all sub-folders and files.

If Mum wanted to keep her files private, both the owner and group would be her user name:
chown -R mum:mum /media/data/mum
And the permissions would be:
chmod -R 700 /media/data/mum
Remember to make sure her primary group is the user name as well.

Setting up sharing (Samba)

I’m not 100% sure I’ve done this the officially sanctioned way, especially since it involves the decidedly old-school method of editing smb.conf. However, for anyone comfortable with the terminal I think it works perfectly well.

First open /etc/samba/smb.conf in your favourite text editor (my preference is vim). There’s no need to modify any of the existing configuration, so scroll down to the bottom where the shares are located. I always comment out the printer shares (print$ and printers), as sharing printers via Samba is a fool’s errand in my opinion; just get a blimmin’ network printer.

My shares are setup as follows:
[Data]
comment = Raid5 array, backed up daily
path = /media/data
browseable = yes
read only = no
guest ok = yes

[Backup]
comment = Backup drive, read only, no guest
path = /media/backup
browseable = yes
read only = yes
guest ok = no

[Media]
comment = Media files for XBMC
path = /media/media
browseable = yes
read only = no
guest ok = yes

Some explanation is definitely needed. Firstly, while [Data] allows guests and the share is not read-only, guests will not be able to write because the file system permissions only allow the owner and group to modify files. You may want to create a public folder with permissions 777, which would allow guests to copy files onto the server; or you may want to set up another share and change guest ok to “no” for the data share.
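
A guest drop folder might look something like this (the share name and path are my assumptions):

mkdir /media/data/public
chmod 777 /media/data/public

With a corresponding share in smb.conf:

[Public]
comment = Guest drop folder, anyone may write
path = /media/data/public
browseable = yes
read only = no
guest ok = yes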

The backup drive is read-only because I don’t want anyone to modify files on it, and file system permissions are no protection since it’s NTFS (Linux doesn’t, and really shouldn’t, support NTFS permissions). It would be too easy to go back to a previous version of a file and accidentally save it, and I’m not sure how Back In Time would handle a backed-up file being newer than the source. Altering a file in the backup would also change every hard-linked copy, so basically writing to files on the backup volume is bad, mmkay? It is shared only to make restoring previous versions convenient.

NFS?

I haven’t covered setting up NFS here, as Mum and Dad both run Windows machines. If you do decide to set up NFS it’s fairly straightforward, but to save yourself some pain make sure the user IDs match on all machines; NFS matches on UID, not the actual user name. Off the top of my head, the packages to install are nfs-common and portmap, and the config file to modify is /etc/exports.
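
For illustration only (the subnet is an assumption, and to actually serve NFS you’ll also want the nfs-kernel-server package), an export line and a reload look like:

echo '/media/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra    # re-read /etc/exports without a restart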

A final note on samba passwords

I have found that a login to the local machine is required in order for the Samba password to be synchronised with the Unix password. If after logging in you still can’t access Samba shares with that account, use smbpasswd to set the password, e.g.:
sudo smbpasswd mum

If you need to restart samba you can do so with the command ‘service smbd restart’.
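
Before restarting, it’s worth running testparm, which parses smb.conf and reports any syntax errors:

testparm
service smbd restart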

Next section – Configuring the backup system