Tag Archives: bash

Safely running bulk operations on Redis with lua scripts

This article was also posted on the Gumtree devteam blog

If there was one golden rule when working with redis in production, it would be

“Don’t use KEYS”

The reason for this is that it blocks the redis event loop until it completes, i.e. while it’s busy scanning its entire keyspace, it can’t serve any other clients.

Recently, we had a situation where code was storing keys in redis without setting an expiry time, with the result that our keyspace started to grow:
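For reference, the non-blocking alternative to KEYS is SCAN, which iterates the keyspace in small batches and returns a cursor to resume from, so Redis can serve other clients between calls. A quick illustration from the command line (the 'session:*' pattern is just a placeholder):

```
redis-cli SCAN 0 MATCH 'session:*' COUNT 100

# redis-cli can also drive the cursor loop for you:
redis-cli --scan --pattern 'session:*'
```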

How not to program Bash

Came across this gem today:

COMP=$1

if [ -e $COMP ]; then
    echo "Please supply a competition_id"
    exit
fi

On first read it looks backwards – “if exist $COMP then exit”, and the programmer clearly wanted to do the exact opposite. However it actually works as desired. Mostly.

If no argument is supplied, $COMP will be a blank string, i.e. “”. Because the variable is unquoted, [ -e $COMP ] expands to [ -e ], and when test is given a single argument it simply checks that the string is non-empty. “-e” is a non-empty string, so the test returns 0 (effectively true) – it isn’t testing file existence at all at that point.

When an argument is supplied, -e really does test whether a file by that name exists, so [ -e $COMP ] returns 1 (effectively false) and the script carries on. That matches what the programmer intended – UNLESS the argument supplied happens to be a valid file path.
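Both behaviours are easy to demonstrate at the shell (the filename below is a placeholder assumed not to exist):

```shell
# With a single argument, test only checks that the string is non-empty,
# so "[ -e ]" tests the literal string "-e", not any file
[ -e ] && one_arg="true" || one_arg="false"

# With two arguments, -e really is a file-existence test
COMP="no-such-file-12345"
[ -e "$COMP" ] && exists="yes" || exists="no"

echo "$one_arg $exists"
```

This prints “true no” – the one-argument form is true even though no file called “-e” exists.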

In this case a number was expected, so sure it’s unlikely to fail in this way, but it’s still an incredibly bad way to test if an argument is set. Not to mention confusing to read!

The correct way to test this, by the way, is to use -z, which tests whether a string’s length is zero (note the quotes around "$COMP" – they ensure the test still receives an argument when the variable is empty):

COMP=$1

if [ -z "$COMP" ]; then
    echo "Please supply a competition id"
    exit
fi

Or better still, use getopts. For more info run ‘man test’, or ‘help test’ for the bash builtin version.
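A minimal getopts sketch of the same check (the set -- line just simulates passing “-c 42” on the command line so the example is self-contained; in a real script getopts reads the caller’s arguments):

```shell
#!/bin/bash
# Require a -c <competition_id> option via getopts
set -- -c 42	# simulate command-line arguments for this sketch

COMP=""
while getopts "c:" opt; do
	case $opt in
		c) COMP=$OPTARG ;;
		*) echo "Usage: $0 -c competition_id" >&2; exit 1 ;;
	esac
done

if [ -z "$COMP" ]; then
	echo "Please supply a competition id" >&2
	exit 1
fi

echo "Competition id: $COMP"
```

Run with no arguments (and without the set -- line), it prints “Please supply a competition id” and exits non-zero instead of silently misbehaving.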

Photo workflow script

After trying several photo management tools I’ve concluded that the best way to manage a photo library is the simplest – copy the files yourself and name the directory.

I also often shoot in RAW+JPG mode on my camera and like to keep the files separate to make viewing saner, so under each event directory I will have a RAW and JPG directory as well.

After downloading, the first step is always to delete the photos that didn’t come out. With both raw and jpeg files to delete this process is a bit of a pain, especially when they’re in separate directories, so I wrote a script that deletes the raw file when the jpg is missing. That way I can simply browse through the JPG directory with Eye of Gnome, delete the photos I don’t want, and the script handles cleanup of the RAW directory.

https://github.com/al4/scripts/blob/master/bash/rawdel.sh

The code is below but you should use the github link above in case it gets improved at some point in the future.

#!/bin/bash

# rawdel.sh, a photo workflow script to delete raw files when the jpg has been removed
# By Alex Forbes
#
# I frequently shoot in RAW+JPG mode and when downloading from the camera I like to separate 
# the raw and jpg files into separate directories. I then go through the jpg directory and 
# delete the rejects. It is a pain to have to manually delete the corresponding raw files as 
# well, so I wrote a script to do it for me.
#
# It simply removes RAW files from a directory when the corresponding JPG file has been removed.

# Set these
rawextn="CR2"	# raw file extension (Canon is CR2)
rawdir="./RAW"	# directory where raw files reside
jpgdir="./JPG"	# directory where jpg files reside
		# rawdir and jpgdir can be the same

# Working variables, leave as-is
rawlist=""	# the list of raw files that we will delete
filecount=0	# number of files we will delete

# Operate on each raw file (a glob is safer than parsing ls output)
for f in "$rawdir"/*."$rawextn"; do
	[ -e "$f" ] || continue	# glob matched nothing

	# Corresponding JPG file is:
	jpgfile=$(basename "$f" ".$rawextn").JPG

	# If this JPG file doesn't exist, mark the raw file for deletion
	if [ ! -f "$jpgdir/$jpgfile" ]; then
		rawlist="${rawlist} $(basename "$f")"
		filecount=$((filecount + 1))
	fi
done

if [ "$filecount" -eq 0 ]; then
	echo "No files to delete"
	exit 0
fi

echo -e "About to remove $filecount files:\n${rawlist}"
read -p "Continue? [Y/N] " prompt

if [[ $prompt = "Y" || $prompt = "y" ]]; then
	# Delete all files in the list
	for f in ${rawlist}; do
		rm -v "$rawdir/$f"
	done
	exit 0
else
	echo -e "\nAborted."
	exit 1
fi

I Sincerely Hope I Never Write a Script like this Again

This sort of stuff destroys your soul:

#!/bin/bash
logfile=~/slave-watcher.log

while true; do
	status=$(mysql --execute="show slave status\G"|grep "Seconds_Behind_Master:"|awk '{print $2}')

	if [ "$status" == "NULL" ]; then
		mysql --execute="show slave status\G" | grep "Last_SQL_Error:" | tee -a "$logfile"
		mysql --execute="set global sql_slave_skip_counter=1; start slave;"
	fi

	sleep 1
done

What it’s doing is looking at MySQL’s slave status and skipping over any statement that causes an error. There are so many reasons why you should never do this, and I won’t go into detail on what necessitated it here, but you’ll be glad to know the slave I set this loose on was not in production!

Splitting files with dd

We have an ESXi box hosted with Rackspace. It took a bit of pushing to get them to install ESXi in the first place, as they tried to get us to use their cloud offering. But this is a staging environment and we want something dedicated, on hardware we control, so we can get an idea of performance without other people’s workloads muddying the water.

Anyway, I’ve been having a bit of fun getting our server template uploaded to it, which is only 11GB compressed – not exactly large, but apparently large enough to be inconvenient.

In my experience the datastore upload tool in the vSphere client frequently fails on large files. In this case I was getting the “Failed to log into NFC server” error, which is probably due to a requisite port not being open. I didn’t like that tool anyway, move on.

The trusty-but-slow scp method was failing too, however: uploads would start but consistently stall at about the 1GB mark. I’m not sure if it’s a buffer or something getting filled in dropbear (which is designed to be a lightweight SSH server and really shouldn’t need to deal with files this large), but Googling didn’t turn up much.
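The fix the title points at is to split the file into pieces with dd and reassemble them with cat on the other side: count and skip are both measured in bs-sized blocks. A small self-contained sketch of the technique (toy sizes – scale bs and count up for a real 11GB image):

```shell
# Make a small throwaway file to split
tmpdir=$(mktemp -d)
dd if=/dev/urandom of="$tmpdir/big.bin" bs=1M count=3 2>/dev/null

# Carve off consecutive pieces; skip is counted in bs-sized blocks,
# and dd stops early at end-of-file
dd if="$tmpdir/big.bin" of="$tmpdir/part0" bs=1M count=2 skip=0 2>/dev/null
dd if="$tmpdir/big.bin" of="$tmpdir/part1" bs=1M count=2 skip=2 2>/dev/null

# Reassemble and verify the pieces add back up to the original
cat "$tmpdir/part0" "$tmpdir/part1" > "$tmpdir/rejoined.bin"
cmp -s "$tmpdir/big.bin" "$tmpdir/rejoined.bin" && result="match" || result="differ"
echo "$result"
rm -r "$tmpdir"
```

Each piece stays under whatever limit is tripping up scp, and the reassembled file is byte-for-byte identical.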

Bash script to alert when memory gets low

We have a web server that’s running out of memory about once every couple of weeks. When it happens the swap file is totally full, the system is unresponsive, and it usually needs a hard reboot, which makes it difficult to debug. To avoid digging through log files I don’t understand, I elected to put a script in /etc/cron.hourly which checks the total amount of free memory, including swap and physical. If there is less than 256MB free (this server has 512MB of RAM and a 1GB swap, so at that point the situation is serious), it dumps the process list to /tmp/processes.txt and sends me an email with it attached.

Note that mutt must be installed (‘apt-get install mutt’ on Debian/Ubuntu, or ‘yum install mutt’ on RedHat/CentOS).

#!/bin/bash

# Total free memory in MB across physical and swap (the "Total" line of free -mt)
free=$(free -mt | grep Total | awk '{print $4}')

if [ "$free" -lt 256 ]; then
        ps -eo %mem,pid,user,args > /tmp/processes.txt
        echo "Warning, free memory is ${free}mb" | mutt -a /tmp/processes.txt -s "Server alert" -- email@me.com
fi

Then of course make it executable and symlink it into cron.hourly. Note that run-parts, which executes the cron.hourly scripts on Debian-family systems, ignores filenames containing dots, so name the symlink without the .sh extension:

chmod +x /etc/scripts/memalert.sh
ln -s /etc/scripts/memalert.sh /etc/cron.hourly/memalert