hints tips tricks


Although ReciPants v1.2 is still hosted on SourceForge (and Freecode), it has not been actively developed or updated since 2004. If you search for ReciPants on this blog, you will see that I’ve had some trouble migrating it between servers and keeping it working over the years. I therefore finally decided to migrate all of our recipes (close to 500 of them) out of this software and into the latest stable release of MediaWiki. While I’m probably the last person on earth using this software, I thought I’d share here how I performed the export, just in case I’m not!

MediaWiki allows export and import of pages in XML format. This page on the MediaWiki site was very helpful in providing the required format of this XML file. Additionally, I installed a fresh copy of MediaWiki on my Web server, mocked up a fake recipe page similar to the format I wanted, then exported that page in order to inspect it. One of the main differences I noticed between the example from the MediaWiki site and the actual export I performed was near the <text> tag. In the example, the tag is simply <text>, but I found that my imports using this tag were not getting rendered in my wiki as wikitext. My actual exported page, however, had the following tags preceding the <text> tag and a different <text> tag itself:

<model>wikitext</model>
<format>text/x-wiki</format>
<text xml:space="preserve">

That combination of tags resulted in the wikitext being rendered properly. Without them, the raw wikitext was shown in MediaWiki with no line breaks (LF/CR), which made it very hard to read!
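For reference, here is a minimal sketch of what a single page entry in the import file ends up looking like. The title and wikitext are placeholders, and I have left out the outer <mediawiki> wrapper and <siteinfo> block that a real export includes:

<page>
  <title>Sample Recipe</title>
  <revision>
    <contributor>
      <username>Admin</username>
    </contributor>
    <model>wikitext</model>
    <format>text/x-wiki</format>
    <text xml:space="preserve">== Ingredients ==
* 1 cup of something

== Directions ==
Mix everything together and bake.</text>
  </revision>
</page>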

With that information at hand, I set to work creating a script in Perl which would connect to the MySQL recipants database using DBI and extract the various data I wished to export into variables. The main outer loop iterates through the recipes table. Inside that loop, the other tables are queried for the data they hold about the current recipe. Everything is shoved into variables, arrays, or arrays of arrays along the way. At the end of the main loop, the XML for that page is generated.

You can download or view the script source code: rpexport.txt

This is a sample of the XML output, limited to just recipes with “berry” in the name: berry.xml

Found myself in a predicament just the other day. I had spent several hours building out template VMs for a few servers, installing vmtools, applying all the latest updates, running sysprep, etc. At the end of that effort, I exported them to .ova files and removed the original VMs. Then I decided I might want to test these .ova exports to make sure they work. Well, guess what? I found myself with broken .ova files that would not import.

The error message, “OVF Deployment Failed: File ds:///vmfs/volumes/uuid/_deviceImage-0.iso was not found” led me to VMware KB article 2034422. The resolution, of course, required use of the original source VMs, which I had overzealously deleted earlier. Thankfully, the article gave enough detail about the issue that I was able to work up the following little hack to repair my damaged .ova files:

  1. I extracted the contents of the .ova files using tar. This works because .ova files are just uncompressed tar archives. You could also use 7zip on Windows.
  2. Inside, there was one .mf or manifest file, one .ovf file, and one .vmdk. There would be more .vmdk files if I had more drives associated with the VMs.
  3. I edited the .ovf file to change the text “vmware.cdrom.iso” to “vmware.cdrom.remotepassthrough”. The reason for the failure was that the import process was trying to mount a non-existent vmware tools iso image.
  4. Once edited, the SHA1 sum of the .ovf file had changed, causing it to not match the sum contained in the manifest. I generated a new SHA1 sum and replaced the original in the .mf manifest file.
  5. Finally, I re-archived the files with tar, making sure to change the extension back to .ova. The tricky bit is adding the files to the archive in the correct order: the .ovf has to be the first file in the archive. Use tar cvf archive.ova vm.ovf to create the archive, then tar uvf archive.ova *.mf *.vmdk to append the rest of the files (see the sketch just below). Note that I couldn’t get 7zip to archive these in order. I had to use GNU tar from an Ubuntu VM.
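Put together, the whole repair looks roughly like this from a Linux shell with GNU tar. The file names here are placeholders for whatever is actually inside your .ova, and I am assuming the manifest uses the usual SHA1(file)= checksum line format:

# steps 1-2: unpack the .ova (it is just an uncompressed tar archive)
tar xvf broken.ova

# step 3: point the virtual CD-ROM at remote passthrough instead of the missing iso
sed -i 's/vmware.cdrom.iso/vmware.cdrom.remotepassthrough/' vm.ovf

# step 4: recompute the SHA1 of the edited .ovf and drop it into the manifest
NEWSUM=$(sha1sum vm.ovf | awk '{print $1}')
sed -i "s/^SHA1(vm.ovf)=.*/SHA1(vm.ovf)= $NEWSUM/" vm.mf

# step 5: rebuild the archive with the .ovf first, then append the rest
tar cvf fixed.ova vm.ovf
tar uvf fixed.ova vm.mf vm-disk1.vmdk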

I was then able to successfully import the .ova files back into my vSphere environment.

Overview

When a virtual machine (VM) is shut down, part of that process is the deletion of its .vswp or virtual machine swap file. If, however, the host on which the VM is running crashes, the .vswp file may not get removed. When the VM powers back up, it will create a new .vswp file and leave the old one in place. If there are several host crashes, this can start to eat up datastore space, robbing your VMs of space for snapshots or causing issues if you’ve over-allocated your storage.

Procedure

First off, a warning. If you delete the active .vswp file I don’t know what will happen, but I’m sure it will be Very Bad Indeed. Therefore, the most important part of this procedure is to identify the newest or youngest .vswp file created. This should be the one with the latest time stamp on it.

Another way to guarantee you identify the correct .vswp file is to shut down the virtual machine properly. This will remove the active .vswp file, leaving behind only the extra ones you no longer need. To minimize confusion, make sure there are no snapshots of the VM prior to shutting it down.

Once you’ve identified the active .vswp file or shut the VM down to remove it, you can then use the vCenter client to browse your VM’s datastore and remove the extra .vswp file or files.
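If you have SSH or local console access to the ESXi host, a quick way to eyeball the timestamps is something like the following (the datastore and VM directory names are placeholders for your own):

# newest file is listed first; that is the active one, so leave it alone
ls -lt /vmfs/volumes/datastore1/myvm/*.vswp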

OK, so I searched Google but couldn’t find the magic combination anywhere. Hopefully, this post will help you!

The setup: I wanted to compare the contents of two directories which had previously been synchronized via rsync without actually synchronizing them. The main goal was to find out the total size of the data which would need to be transferred so I could estimate how long the actual rsync run would take. To do this, you’d think the following would work, based on the rsync man pages:

rsync -avvni sourcedir/ destdir/

Broken down that is:

  • -a archive meta-option
  • -vv extra verbosity
  • -n dry run
  • -i itemize changes

The output, however, lists “total size” as the total size of all the files — NOT just the size of the changed files which would be synchronized. So I did some research using the rsync man page and some testing with several option combinations and came up with the following solution:

rsync -an --stats sourcedir/ destdir/

Here’s a mock sample output from running that command:

Number of files: 2
Number of files transferred: 1
Total file size: 4096 bytes
Total transferred file size: 2048 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 82
File list generation time: 0.013 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 110
Total bytes received: 32
sent 110 bytes  received 32 bytes  284.00 bytes/sec
total size is 4096  speedup is 28.85

The particular stats you’ll need to parse are the following:

  • Total file size: (given in bytes)
  • Total transferred file size: (also in bytes, this is the changed data to be transferred)

You can ignore Total bytes sent and Total bytes received as they only refer to the actual data transferred by the rsync process. In a dry run (-n option) this amounts to only the communication data exchanged by the rsync processes.
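If you want to script this, something like the following will pull out just the transferred size, assuming the output format shown above (newer rsync versions add thousands separators to the numbers, so you may need to strip commas as well):

rsync -an --stats sourcedir/ destdir/ | awk -F': ' '/^Total transferred file size/ {print $2}'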

Also of interest are the Number of files and Number of files transferred statistics. Note, too, that the trailing slashes on the directories are important. If you leave them out, what you are actually testing is copying sourcedir into destdir/sourcedir, which is probably not what you want if you are trying to compare their contents.

If this post was helpful to you, please spread the word and share it with others!

So you want one USB Flash stick to boot the latest versions of both System Rescue CD and Clonezilla-Live? So did I! Easy, I thought, just use UNetbootin to create each one in turn, copying the files between runs, then merge them together. Well, it wasn’t that easy.

First off, Clonezilla (1.2.5-35) installs just fine via UNetbootin, but the latest SRCD (1.5.8) does not. I noticed, however, that SRCD now includes an installer script called usb_inst.sh which essentially does the same thing UNetbootin does. Here are the steps I followed to get them both crammed onto one 1 GB USB flash stick (with about 608 MB of spare space):

  1. Install SRCD to the USB stick using the usb_inst.sh script.
  2. Boot to the USB stick to verify it worked OK.
  3. Install Clonezilla to the same USB stick with UNetbootin. Be sure to NOT overwrite the files when it prompts you to do so.
  4. Boot to the USB stick to make sure it still works for SRCD. At this point, Clonezilla will NOT show up in the boot menus.
  5. Remove the first few lines from the top of /syslinux.cfg, stopping at the blank line before the first “label” line.
  6. Merge the two config files by appending /syslinux.cfg to the end of /syslinux/syslinux.cfg; cat /syslinux.cfg >> /syslinux/syslinux.cfg does the trick (see the sketch after this list).
  7. Boot to the USB stick several times and verify you can start up each of the menu items successfully.
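Here is roughly what steps 5 and 6 look like from a shell, assuming the stick is mounted at /mnt/usb and that the blank line before the first label entry is also the first blank line in the file:

cd /mnt/usb
# step 5: delete everything from the top of syslinux.cfg through the first blank line
sed -i '1,/^$/d' syslinux.cfg
# step 6: append what is left to the end of the SRCD boot menu config
cat syslinux.cfg >> syslinux/syslinux.cfg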

Note that the only reason this works is that the SRCD install script uses that /syslinux subfolder for its boot menus, and that both are using similar boot techniques. If the SRCD and UNetbootin scripts continue to configure themselves like this, then this method should work for future versions, too.

For my next challenge... cram BackTrack, SRCD and Clonezilla onto a 4 GB USB flash stick!

Everything requires time. Writing this article required time to think, time to write a draft, time to edit, etc. Sleeping requires time. Eating requires time. Planning requires time. You get the idea.

Therefore, a complex GTD system itself requires a significant amount of time to implement. Managing tickler files, to-do lists, and the like requires time to implement and maintain. With complexity, these tasks become chores, and most people try to avoid chores. This makes the barrier to entry quite high for these systems, and that makes turning them into habits more difficult.

I say, start simple. You may decide later to build up to a more complex method, or you may find that you need go no further. My method revolves around my Calendar and Inbox. Quite simply, if it is worth spending the time to do, it should be on your calendar. Period.

Incoming tasks come from your Inbox and go to one of three places:

  • Complete the task immediately or respond if additional information is required. Move a copy to your Follow-up folder and add an alert so you can keep track later.
  • On the calendar to be completed at some point in the future.
  • In the Trash. Save what you need for long-term documentation in the appropriate location (folder, Wiki, project documentation system, etc.), then delete it.

Use your calendar to block off recurring appointments for handling e-mail, planning and scheduling, but also be flexible. If it is on your calendar, you can drag it to a different time to re-schedule. Keep it on your calendar, and it will get done. If you find that you are bumping a particular item on your calendar too much, then you should re-consider whether it should be on there at all — it obviously isn’t that important to you.

Here’s a quick tip on how to create a bootable USB flash drive from which you can install ESXi.

  1. Download UNetbootin (available for Windows and Linux).
  2. Download the latest ESXi .iso file from VMware.
  3. Format your USB flash drive (1 to 2 GB should work, I used a 4 GB one) with a FAT32 file system.
  4. Fire up UNetbootin and select the option to use your own .iso file.
  5. Make sure you choose the RIGHT USB flash drive — Best Practice would be to have only your target drive connected while doing this!
  6. Click OK and let it cook. This may take a few minutes.
  7. Cancel the prompt at the end to reboot — you don’t want to reboot, really.
  8. Unmount your USB flash drive and test it on ESXi compatible hardware!

There are some limitations to this method:

  • Target system must support USB boot (very few don’t)
  • Won’t work on an EFI system unless its firmware supports booting in legacy BIOS mode. Even then, it still may not work.

And, yes, you can do this with the regular ESX .iso file, but you’ll need to purchase licensing for those installs eventually. You can register ESXi for free use!

Here are some hints and tips for those who are new to using ssh/OpenSSH for Linux system administration. Most of these tips have come from my recent work with a large number of Linux servers hosted on a VMware ESXi 4.x server farm.

Password authentication vs. ssh key authentication

  • If you are administering only a few systems on a closed network (i.e. accessible only locally or by a secure VPN connection), then password authentication is probably OK, but you should consider using ssh keys anyway.
  • If your network needs to allow ssh access directly from the Internet or you are administering a large number of systems, then you should definitely use ssh keys.
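If you have never set up ssh keys before, the basic recipe looks something like this (the key type, size, and remote host are just examples):

# generate a key pair; protect the private key with a good passphrase
ssh-keygen -t rsa -b 4096
# copy the public key to each server you administer
ssh-copy-id admin@server1.example.com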

Ssh-agent, scripting and cron

  • ssh-agent can save you typing in the password to your ssh key every time you need it.
  • This site gives a good overview of ssh-agent and includes some code you can add to your .bash_profile script to ensure your keys get added upon login.
  • Although there are hack-ish ways to get ssh-agent and cron to work together, you are probably better off setting up special keys to use with scripts that must be called via cron. Just keep in mind that keys without passwords are a security risk.
  • If you cannot risk using keys without passwords, consider running those cron scripts locally on each system. Utilize shared file space or e-mail to collect the results.
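For reference, the basic ssh-agent workflow in a login shell looks like this (the key path and host are examples):

# start an agent for this session and load your key (enter the passphrase once)
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa
# subsequent logins reuse the cached key, with no passphrase prompt
ssh admin@server1.example.com uptime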

Bash one-liners and ssh with ssh keys

  • I’ve become a fan of using bash “one-liner” scripts to keep abreast of server stats such as load averages, available patches and disk usage.
  • Keep an up-to-date list of hosts in a file called hostlist.
  • Run your one-liners while ssh-agent has your ssh keys cached.
  • Here’s a template one-liner which checks uptime on each host listed in the file hostlist:

for e in `cat hostlist`; do echo $e; ssh $e "uptime"; done

  • In the above example, you can replace uptime with just about any command which exists on the remote host.
  • You can also synchronize some of the configurations under /etc with the above by utilizing either scp or rsync instead of ssh in that one-liner.
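For example, here is the same pattern used to push a single config file out to every host in hostlist (ntp.conf is just an example; only do this with files that really should be identical everywhere):

for e in `cat hostlist`; do echo $e; scp /etc/ntp.conf $e:/etc/ntp.conf; done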

Turn your one-liners into scripts

  • If you find yourself using the same one-liner over and over, it is time to save yourself some typing and turn it into a script.
  • I like to keep these sorts of scripts under ~/bin. I also like to add that to my $PATH and create a symlink ~/scripts.
  • Some one-liners are good candidates to be turned in to cron scripts. Just keep in mind the risks of using ssh keys without passwords, and include logic to detect conditions you want to monitor. For example, you can run /proc/loadavg through awk to isolate one of the three figures and send yourself an e-mail if that average is too high.
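As an example of that last point, here is a minimal sketch of such a cron script. The threshold, e-mail address, and hostlist location are assumptions to adjust for your environment, and it assumes a working local mail command:

#!/bin/bash
# check-load.sh: mail me when the 1-minute load average on any host is too high
THRESHOLD=4
MAILTO=admin@example.com
for e in `cat ~/hostlist`; do
  # grab the first field of /proc/loadavg on the remote host
  load=$(ssh "$e" "awk '{print \$1}' /proc/loadavg")
  # floating-point comparison done in awk
  if awk -v l="$load" -v t="$THRESHOLD" 'BEGIN { exit !(l > t) }'; then
    echo "$e load average is $load" | mail -s "High load on $e" "$MAILTO"
  fi
done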