

Although ReciPants v1.2 is still hosted on SourceForge (and Freecode), it has not been actively developed or updated since 2004. If you search for ReciPants on this blog, you will see that I’ve had some trouble migrating it between servers and keeping it working over the years. I therefore finally decided to migrate all of our recipes (close to 500 of them) out of this software and into the latest stable release of MediaWiki. While I’m probably the last person on earth using this software, I thought I’d share here how I performed the export, just in case I’m not!

MediaWiki allows export and import of pages in XML format. This page on the MediaWiki site was very helpful in providing the required format of that XML file. Additionally, I installed a fresh copy of MediaWiki on my Web server, mocked up a fake recipe page in the format I wanted, then exported that page so I could inspect it. The main difference I noticed between the example on the MediaWiki site and my actual export was around the <text> tag. In the example, the tag is simply <text>, but I found that imports using this bare tag were not rendered in my wiki as wikitext. My exported page, however, had the following tags preceding the <text> tag, and a different <text> tag itself:

<model>wikitext</model>
<format>text/x-wiki</format>
<text xml:space="preserve">

That combination of tags resulted in the wikitext being rendered properly. Without them, the raw wikitext was shown in MediaWiki with no line breaks at all — very unreadable!
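
For orientation, here is roughly how one imported page ends up looking in the import XML. The title, timestamp, and wikitext body below are placeholders, most of the optional elements are trimmed, and the whole file is wrapped in a <mediawiki> root element (attributes omitted here):

<page>
  <title>Blueberry Muffins</title>
  <revision>
    <timestamp>2014-01-01T00:00:00Z</timestamp>
    <contributor><username>Importer</username></contributor>
    <model>wikitext</model>
    <format>text/x-wiki</format>
    <text xml:space="preserve">== Ingredients ==
* 2 cups blueberries</text>
  </revision>
</page>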

With that information at hand, I set to work creating a script in Perl which would connect to the MySQL recipants database using DBI and extract the various data I wished to export into variables. The main outer loop iterates through the recipes table. Inside that loop, the other tables are queried for the data they hold about the current recipe. Everything is shoved into variables, arrays, or arrays of arrays along the way. At the end of the main loop, the XML for that page is generated.
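
To give a feel for the overall shape (the full script is linked below), here is a heavily trimmed sketch of that loop. The table names, column names, and connection credentials are illustrative placeholders rather than the exact ReciPants schema:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to the ReciPants MySQL database (credentials are placeholders).
my $dbh = DBI->connect("DBI:mysql:database=recipants;host=localhost",
                       "rp_user", "rp_password", { RaiseError => 1 });

# Outer loop: one pass per recipe (table/column names are illustrative).
my $recipes = $dbh->selectall_arrayref(
    "SELECT recipe_id, name FROM recipes", { Slice => {} });

print "<mediawiki>\n";   # root element attributes omitted in this sketch
for my $r (@$recipes) {
    # Inner queries: gather the pieces belonging to the current recipe.
    my $ingredients = $dbh->selectall_arrayref(
        "SELECT ingredient FROM ingredients WHERE recipe_id = ?",
        { Slice => {} }, $r->{recipe_id});

    # Build the wikitext body from the collected data.
    my $body = "== Ingredients ==\n";
    $body .= "* $_->{ingredient}\n" for @$ingredients;

    # Emit one <page> per recipe, using the tags that made the
    # import render as wikitext.
    print <<"XML";
<page>
  <title>$r->{name}</title>
  <revision>
    <model>wikitext</model>
    <format>text/x-wiki</format>
    <text xml:space="preserve">$body</text>
  </revision>
</page>
XML
}
print "</mediawiki>\n";
$dbh->disconnect;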

You can download or view the script source code: rpexport.txt

This is a sample of the XML output, limited to just recipes with “berry” in the name: berry.xml

Here’s a quick write-up from my presentation on The Amnesic Incognito Live System (TAILS) at the August 2014 CIALUG meeting.

The main TAILS Web site: https://tails.boum.org/

TAILS is intended to make it easy for non-technical end users to boot into a live, Linux-based OS which automatically routes its traffic over The Onion Router (TOR) network. The intention is to provide anonymity, privacy, and plausible deniability for dissidents, whistle-blowers, or anyone who feels the need to conduct searches or communicate securely while leaving little to no trace of those activities on the host system.

While TAILS does succeed at providing a bootable system that defaults to a TOR-routed connection, non-technical or even non-Linux end users will need some training from a more savvy user to make the best use of this system. Keep the following points in mind:

  • TAILS is still susceptible to any issues which affect the TOR network. Know and understand how to limit your behaviors when using TOR and apply those to your use of TAILS.
  • Out of the box, the current version of TAILS (as of this writing, 1.1, released July 2014) had 34 packages which were out of date, and TOR itself was one of those pending updates. Installing updates before each use should be a top priority, but more on that later.
  • It does NOT appear that TAILS uses the TOR Browser Bundle. This makes it more important to apply updates before each use as Firefox, Vidalia and the TOR Button may need to be updated (no updates were pending for these in version 1.1 as of this writing).

As mentioned above, the very first thing which should be done after successfully booting into TAILS and connecting to the Internet and the TOR network is to apply updates. This is accomplished by opening a terminal, elevating to root, and running ‘apt-get update’ followed by ‘apt-get upgrade’. Note that I ran into the following issues when updating version 1.1 of TAILS in this manner:

  • Updating was slow. This is actually a good thing because the updates are grabbed via the TOR network.
  • When the TOR package gets updated, it prompts whether or not to replace the configuration. I recommend keeping the existing configuration (the default choice).
  • When the TOR package gets updated, it stops the TOR service but doesn’t restart it. Later in the update process, some other packages need to download firmware; because the TOR service is stopped, those downloads fail. I had to start the TOR service again, then re-run ‘apt-get upgrade’ to successfully update those packages (see the command sketch after this list).
  • When the TOR package gets updated, it breaks the running Vidalia process. I simply closed it. TOR continued to work without that process running.
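
Putting those notes together, a typical update session ran roughly like this from a root terminal (the service name assumes Debian's standard tor init script):

apt-get update
apt-get upgrade      # keep the existing tor configuration when prompted
service tor start    # the tor upgrade leaves the service stopped
apt-get upgrade      # re-run so the packages that still need to download firmware can finish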

While this isn’t a complete summary of my presentation, I hope it is helpful. Please share this post if you found it so. Thanks!

Below is a link to the presentation I gave at the September 18th, 2013 CIALUG meeting.

Tor and the Tor Browser Bundle: Hints, Tips, and Tricks for Effective Use

Found myself in a predicament just the other day. I had spent several hours building out template VMs for a few servers, installing vmtools, applying all the latest updates, running sysprep, etc. At the end of that effort, I exported them to .ova files and removed the original VMs. Then I decided I might want to test these .ova exports to make sure they work. Well, guess what? I found myself with broken .ova files that would not import.

The error message, “OVF Deployment Failed: File ds:///vmfs/volumes/uuid/_deviceImage-0.iso was not found” led me to VMware KB article 2034422. The resolution, of course, required use of the original source VMs, which I had overzealously deleted earlier. Thankfully, the article gave enough detail about the issue that I was able to work up the following little hack to repair my damaged .ova files:

  1. I extracted the contents of the .ova files using tar. This works because .ova files are just uncompressed tar archives. You could also use 7zip on Windows.
  2. Inside, there was one .mf or manifest file, one .ovf file, and one .vmdk. There would be more .vmdk files if I had more drives associated with the VMs.
  3. I edited the .ovf file to change the text “vmware.cdrom.iso” to “vmware.cdrom.remotepassthrough”. The reason for the failure was that the import process was trying to mount a non-existent VMware Tools ISO image.
  4. Once edited, the SHA1 sum of the .ovf file had changed, causing it to not match the sum contained in the manifest. I generated a new SHA1 sum and replaced the original in the .mf manifest file.
  5. Finally, I re-archived the files with tar, making sure to change the extension back to .ova. The tricky bit is that the files must be added to the archive in the correct order: the .ovf has to be the first file in the archive. Use tar cvf archive.ova vm.ovf to create the archive, then tar uvf archive.ova *.mf *.vmdk to append the rest of the files (the full command sequence is sketched after this list). Note that I couldn’t get 7zip to archive these in order; I had to use GNU tar from an Ubuntu VM.
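
For reference, the whole repair boiled down to something like the following. The file names are examples, and OVF manifests typically list each file as a SHA1(filename)= hash line:

tar xvf template.ova                        # unpacks template.ovf, template.mf, template-disk1.vmdk
sed -i 's/vmware.cdrom.iso/vmware.cdrom.remotepassthrough/' template.ovf
sha1sum template.ovf                        # paste the new hash into the SHA1(template.ovf)= line in template.mf
tar cvf fixed.ova template.ovf              # the .ovf must go in first
tar uvf fixed.ova template.mf template-disk1.vmdk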

I was then able to successfully import the .ova files back into my vSphere environment.

The Setup

A client called in with an interesting problem. They had recently performed a planned outage due to a power issue at their premises, but upon powering everything back up one of their hosts was unable to access the iSCSI SAN volumes. Fortunately, they were able to bring up all of the VMs on the remaining hosts, but capacity was down enough that they had to disable HA and DRS.

Upon interviewing the client, it became clear that they had already checked all the usual suspects. Nothing in the configuration prior to the power cycling had changed. Switch configurations, cables, ports, port groups, iSCSI settings, etc. were all exactly as before. In desperation, they had even wiped and re-installed the host using ESXi, carefully re-configuring everything to match the working hosts in the hopes that this would resolve the issue. It did not.

The Non-Standard Configuration

We know that VMware and many other experts recommend splitting your vSphere network into separate Storage, Management, vMotion, and Production VM networks, typically using VLANs. This client, however, had opted to stay with a flat network configuration, with all port groups and vmkernel interfaces configured on the same IP subnet. While this isn’t a Best Practice, there really isn’t anything wrong with doing things this way. As long as the vmknics can talk to the SAN, all should work fine, right?

At some point in the past, however, the decision was made to place an air gap between the storage interfaces and the rest of the network. All of the storage-related physical interfaces were connected to a switch which was not uplinked to the rest of the network in an effort to isolate that traffic so it wouldn’t overload the Production VM and Management traffic. Again, this isn’t a Best Practice, but it should work (and it did for quite some time) configured this way.

Troubleshooting

Where to start? I first plugged my laptop into the storage switch and attempted to ping the IP addresses assigned to the iSCSI vmkernel ports on the troubled host. No pings were returned, yet I was able to successfully ping all other storage interfaces present on the switch. Also, as expected, I was unable to ping any of the interfaces connected to the Production/Management switches — a quick check to make sure there wasn’t an uplink between them. This pretty much established what we already knew but, more importantly, laid the groundwork for my next test.

Next, I used PuTTY to ssh into the troubled host and perform vmkpings. Here, I noticed a pattern that led me to my conclusion. I was not able to ping the iSCSI interfaces of the SAN, but when I tried to ping IPs that I knew were only on the Production/Management switches, the pings were returned. This made it clear that the Storage network traffic for the troubled host was exiting the host via the physical interfaces connected to the Management/Production switches, not via the interfaces on the Storage switch.
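
The tests themselves were nothing fancy; with made-up addresses standing in for the client's, they amounted to something like this from the ESXi shell:

vmkping 192.168.1.50    # SAN iSCSI interface -- no reply
vmkping 192.168.1.10    # an IP living only on the Production/Management switches -- replies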

So what was happening? Upon booting up, that host was binding its iSCSI software initiator to the Management vmkernel port and not the vmkernel ports uplinked to the Storage switch. Since all vmkernel ports are automatically enabled for iSCSI traffic and there is no way to disable iSCSI traffic on a vmkernel port, there was no way to force the iSCSI software initiator to bind to a particular vmkernel port — except to do things right and set them up on separate IP subnets/VLANs.

The Real Solution and The Work-Around

So, of course, the real solution is to re-work the storage network so that it uses a different IP subnet than the production network. This, however, requires planned down time to re-IP all the storage interfaces on all hosts and the SAN. Until that can be planned, they were still down a host and running without HA/DRS. On a hunch, I came up with the following work-around, which got the host back up and running until such time as the reconfiguration could take place:

  1. Power down the troubled host.
  2. Disconnect the network cables serving as uplinks for the Management vmkernel port.
  3. Power up the host, leaving those cables disconnected.
  4. Wait long enough to ensure the host has completely booted into ESXi.
  5. Plug the Management vmkernel uplink ports back in.

This worked because the only vmkernel interfaces available while the server was booting were the ones connected to active uplinks — the ones connected to the Storage switch. Once that binding took place, it would not change, so it was then safe to plug the Management vmkernel uplinks back in. Obviously not an ideal situation, but it did get the host back in service until an outage window could be scheduled to properly configure the Storage network interfaces.

The Setup

I recently purchased three albums from iTunes. After downloading them, syncing them to my iPod, and listening through them, I was happy. While driving the next day, I thought I’d shuffle through the playlist I had created with the new tracks. I quickly discovered that all the tracks on one of the albums had a flaw that was only evident when shuffling them: tracks 2 through 17 were missing the first 2 seconds, and tracks 1 through 16 had the first 2 seconds of the following track tacked on to the end. This is easy enough to fix in Audacity, but I felt it important to report it to iTunes, if only to call their attention to the issue so they could fix it before too many others reported it.

Requesting Support

After surfing through the apple.com site for a bit, I finally landed on their Express Lane Support page. I had to dig around a bit more before I found “Quality of purchased content” under “Purchases, Billing & Redemption.” I filled in the information required and opened a case describing the issue. An automated message stating my support request would be responded to within 24 hours came back very quickly.

Initial Response

The “real live person” response came back within 4 hours of posting the complaint. Impressive, since I posted my complaint at about 12:20 AM. The responding service representative apologized and gave detailed instructions on how to delete and re-download the content. I was pretty sure this would not resolve the issue (it wasn’t a corrupt download, as they seemed to think), but I went through the motions anyway. Upon listening to the re-downloaded content, I confirmed that the issue was not resolved and replied to the service representative, advising them that I believed the source files on their servers were not correct.

The Final Response

Hello Kenneth,

X here again from the iTunes Store Support. I am very sorry about my delay in responding to you. I have been away from the office for the last 2 days. I understand the album is still incorrect. When it comes to your money, I can certainly appreciate how important it is to feel that you are treated fairly, and I would be more than happy to help you out with this today.

I’m sorry to learn that this item did not meet the standard of quality you have come to expect from the iTunes Store. I have submitted this item for investigation. Apple takes the quality of the items offered on the iTunes Store seriously and will investigate the issue with this item, but I can’t say when or if the issue will be resolved.

In five to seven business days, a credit of $9.99 should be posted to your card that appears on the receipt for that purchase.

Kenneth, I want to thank you for choosing the iTunes Store and for being such a big part of the iTunes family.

Thank you for contacting iTunes Store Customer Support. Have a great day.

Fix It Yourself

Given that response, I had no choice but to spend some time in Audacity repairing the tracks. The general procedure was as follows:

  1. Open the first track in Audacity. This imports it to a native format used by Audacity for manipulation.
  2. Jump to the end and copy the bit at the end which belongs to the following track.
  3. Open the second track in Audacity and paste the copied two seconds into their rightful place at the beginning.
  4. Zoom in on the pasted part and remove the slight pause introduced by the copy/paste operation. I progressively zoomed in and removed large blank spaces until I was zoomed as far as I could, then I matched up the two ends, deleting the last bit of silence.
  5. Listen to the second track to make sure it was a seamless paste (and the right song).
  6. Go back to the first track and delete the tail end. Export that track to .mp3 and .m4a (AAC) formats. Close that track.
  7. The open second track becomes your “first track”, and the “second track” becomes the following track. Start again at step 2 above.

After all the tracks were repaired (I worked on copies outside of iTunes), I deleted the originals from iTunes and re-imported the repaired versions. I then had to go through and repair the tagging, as it was a bit messed up. For some reason, the tagging didn’t import consistently from the Apple versions (either that, or it was inconsistent to begin with).

I wonder how many free copies of this album they’ll give out before they correct the files on their server? I wonder how many people will complain and get a refund vs. the number who will just put up with the issue? I wonder how many other albums are messed up in this way?

Background

I’ve been happily running this site on Textpattern (version 4.2.0) for a couple of years now with no issues or concerns. It has been a solid platform and I found a good template which I was able to customize to my liking. Recently, however, my wife has expressed interest in starting her own blog (more on that to come), and she wanted to use WordPress primarily because she had used it before.

Over the past couple of months, I’ve been working a lot with WordPress in support of Lori’s effort to bring her blog to life. We’ve been running an internal development site during that time to find the right theme, customize that theme, play with different post presentations and build some content prior to launching. She’s very serious about taking this blog live, so I figured I should probably get serious about learning as much as I can about WordPress.

This post is all about my experience converting from Textpattern 4.2.0 to WordPress 3.0.4.

Finding a Theme and Testing Migration

The first thing I did was install WordPress 3.0.4 on my internal server so I could play with different themes and test migrating my old content. Since I’ve been doing a lot more photography recently, I tried to find a photo-centric theme. After looking at four or five different themes, I settled on F8 Lite by Graph Paper Press. I’m a big fan of its clean, simple grid presentation and the focus on photography.

During my initial testing I discovered that the standard process for creating a child theme under which to make your customizations did not work. I’ve not figured out why it breaks, so for now I’m working with a full copy of the original theme. I’ll have to keep track of what I’ve customized so I can re-apply those customizations when an updated version of this theme comes out (hopefully that won’t be often).

Another issue I found while researching migration from Textpattern to WordPress is that the built-in tools for Textpattern migration have been broken for quite some time. Some people have developed work-arounds, but the process seemed hit or miss depending on the versions of each platform in use. I noted, however, that there was a generic import tool which would use the old site’s RSS feed to import posts. I tested this out and it worked very well. All the post content was imported (some caveats on that later), and only some minor formatting issues were introduced.

After playing with the F8 theme and my imported content for several hours I decided to go ahead and start the process of migrating my live site.

Migration Preparations

Any successful migration starts with a full backup of the old site so you can restore it should something go horribly wrong. Textpattern, just like WordPress, is a MySQL/PHP based site, so there really are just two things to back up: the database and the site files.

First, I backed up the database with the following:

mysqldump -u root -p txp_database | gzip -9 > 20110116_textpattern.sql.gz

Breaking that down, I called mysqldump with a root account, prompted for that user’s password (-p) and dumped the database called txp_database. Since mysqldump outputs to standard out, I piped that through gzip with -9 for maximum compression, then re-directed it into the final file. I like to put the date as the first part of a backup filename so it is easier to distinguish later if I have a bunch of backup files in the same location.

Next it was time to back up the textpattern site files. This contained all the PHP code plus all of the customizations I’d made to that code and the site graphics. After changing to the directory under which all this lives (varies by server configuration, but could be /opt/textpattern or /usr/share/textpattern) I then issued the following:

sudo tar -czvf /tmp/20110116_textpattern_files.tgz ./

Breaking down this sequence, I escalated my privileges to root with sudo, then issued tar to create (-c) a gzip-compressed (-z; use -j for bzip2) file with verbose output (-v) and the given name (-f), containing the contents of the current directory (./). Note that I placed the output file in a different location than the current directory to avoid any problem with the tar process trying to recursively include its output file in the input. It would also be a good idea to move both the MySQL dump and this backup file to a common location for safekeeping — leaving either under /tmp is a bad idea because some systems clear the contents of that folder upon rebooting.

At this point, I could completely mess up Textpattern and I would be able to utilize the contents of these two files to restore it all back to its current condition.
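
For completeness, restoring from those two backups would be roughly the reverse (paths are examples):

gunzip < 20110116_textpattern.sql.gz | mysql -u root -p txp_database
cd /path/to/textpattern && sudo tar -xzvf /tmp/20110116_textpattern_files.tgz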

Migration and Importing Textpattern Content

Once all the backups were out of the way, it was time to move the WordPress files and database from my development site to my “live” Web server. This was the tricky bit, as I discovered that some of the initial setup had migrated with the database and there was no easy way to re-configure those settings. Most notably, the site URL kept re-directing me to my local dev site after I migrated the files. I ended up starting with a fresh copy of the database but using my modified WordPress files. I had to re-configure the site and re-import the Textpattern content, but that was easy to do since I already had a dump of the RSS feed.

First, I wrapped up my WordPress development files in a tar/gzip file, similar to the backup above, by issuing the following after changing to the root of the WordPress folder on my development server:

sudo tar -czvf /tmp/20110116_wordpress_files.tgz ./

I then copied that tgz file up to my server and uncompressed it to a WordPress folder by changing to that folder and issuing:

sudo tar -xzvf /tmp/20110116_wordpress_files.tgz

Next, I issued the following sequence to get into MySQL, then issue SQL statements to create a fresh database:

mysql -u root -p
create database wordpress;
grant all privileges on wordpress.* to 'wp_user'@'localhost' identified by 'wp_password';
quit

Breaking that down, the first line opens a MySQL command prompt as the MySQL root user. The next line creates the wordpress database. The third line simultaneously creates the user ‘wp_user’ with the password ‘wp_password’ and grants that user full access to the wordpress database. The last line quits the MySQL interface.

At this point, my old Textpattern site was still live, but I had to configure the WordPress site. I decided to quickly switch over to the WordPress site and finish up the configuration. To do this, I simply had to edit my /etc/apache2/sites-available/default file so it pointed to the location of the WordPress site instead of the Textpattern one. All the rest of the settings in that file remained the same.
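
The edit itself only touches the document root and the matching Directory block; something along these lines, with example paths for a typical Debian/Ubuntu layout:

DocumentRoot /var/www/wordpress
<Directory /var/www/wordpress/>
    AllowOverride All
</Directory>

A quick sudo /etc/init.d/apache2 reload then makes the change live.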

Once that was done, I hit the wp-admin URL to complete the site setup and create a site administrator user account. I then logged in as the site administrator and fired up the rss-importer plugin, which I had already installed as part of my development site, and it came over when I copied those files. But first, a word on getting that RSS content out of Textpattern. . .

The rss-importer plugin takes as its input an RSS XML file. In order to generate such a file from Textpattern, I had to go into the site settings and set the RSS feed to present all of my posts in the feed and set it to place the entire contents of each post in that feed. Once that was set, I visited the site and right-clicked on the RSS link, saving that link as a file called rss.file. Within my WordPress development site, I was then able to upload the contents of rss.file into the rss-importer plugin. Here are my caveats about this method and why it worked for me, but might not work for you:

  1. My Textpattern content was all code/text. All of my images were hosted from my Flickr account. I don’t believe site-embedded pictures would have transferred with this method.
  2. There is a 2 MB file upload limit in WordPress. I only had 108 posts in Textpattern and the RSS XML file was under 500 KB. I believe you can increase the 2 MB limit if needed (a sketch of the PHP settings involved follows this list).
  3. The import wasn’t perfect. Some formatting was lost. I spent a significant amount of time going through each of the 108 posts and adjusting the formatting. A better import method may have preserved this formatting.
  4. The categories did not import either. I had to go through all posts and assign categories and tags. I’m not sure if any of the other import methods would preserve categories. I wanted to re-work these anyway, so this wasn’t a big loss.
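
Regarding the 2 MB limit: the cap usually comes from PHP’s upload settings rather than from WordPress itself, so raising these values in php.ini (and reloading Apache) is the usual fix; 8M is just an example:

upload_max_filesize = 8M
post_max_size = 8M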

Aftermath and Conclusion

So, yeah, I had to touch every post and fix some formatting. I also had to set up new Categories and set tags on each post. Good thing I’m not a terribly prolific blogger, or I would have had some tough choices to make. As it was, 108 posts weren’t too bad. The later posts were more complex, requiring more attention. As the posts got older, there was less to do, so the last half to one quarter went a lot quicker.

Overall, I’m happy with the look of the site and the way this theme integrates with WordPress. As mentioned before, I still need to work out the child theme issue, but hopefully I can figure that out in my development site soon. I’m also going to dig into the CSS of the site and change up some of the colors. I don’t like the red article link headers and hyperlinks, and I want some of the fonts to be just a bit larger.

My work on this site will never be done, but that is the way of the blogger. . .

Overview

When a virtual machine (VM) is shut down, part of that process is the deletion of its .vswp or virtual machine swap file. If, however, the host on which the VM is running crashes, the .vswp file may not get removed. When the VM powers back up, it will create a new .vswp file and leave the old one in place. If there are several host crashes, this can start to eat up datastore space, robbing your VMs of space for snapshots or causing issues if you’ve over-allocated your storage.

Procedure

First off, a warning. If you delete the active .vswp file I don’t know what will happen, but I’m sure it will be Very Bad Indeed. Therefore, the most important part of this procedure is to identify the newest .vswp file — the one the running VM is actually using — so you can leave it alone. This should be the one with the latest time stamp on it.

Another way to guarantee you identify the correct .vswp file is to shut the virtual machine down properly. This will remove the active .vswp file, leaving behind only the extra ones you no longer need. To minimize confusion, make sure there are no snapshots of the VM prior to shutting it down.

Once you’ve identified the active .vswp file or shut the VM down to remove it, you can then use the vCenter client to browse your VM’s datastore and remove the extra .vswp file or files.
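
If you prefer the command line over the datastore browser, you can also enable SSH on the host and look for leftovers directly; a rough sketch, with the datastore and VM names as placeholders:

find /vmfs/volumes/ -name '*.vswp'
ls -lh /vmfs/volumes/datastore1/myvm/       # compare time stamps before deleting anything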

OK, so I searched Google but couldn’t find the magic combination anywhere. Hopefully, this post will help you!

The setup: I wanted to compare the contents of two directories which had previously been synchronized via rsync without actually synchronizing them. The main goal was to find out the total size of the data which would need to be transferred so I could estimate how long the actual rsync run would take. To do this, you’d think the following would work, based on the rsync man pages:

rsync -avvni sourcedir/ destdir/

Broken down that is:

  • -a archive meta-option
  • -vv extra verbosity
  • -n dry run
  • -i itemize changes

The output, however, lists “total size” as the total size of all the files — NOT just the size of the changed files which would be synchronized. So I did some research in the rsync man page and some testing with several option combinations, and came up with the following solution:

rsync -an --stats sourcedir/ destdir/

Here’s a mock sample output from running that command:

Number of files: 2
Number of files transferred: 1
Total file size: 4096 bytes
Total transferred file size: 2048 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 82
File list generation time: 0.013 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 110
Total bytes received: 32
sent 110 bytes  received 32 bytes  284.00 bytes/sec
total size is 4096  speedup is 28.85

The particular stats you’ll need to parse are the following (a one-liner for grabbing them follows the list):

  • Total file size: (given in bytes)
  • Total transferred file size: (also in bytes; this is the changed data to be transferred)
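
If you would rather script this than eyeball it, a simple filter does the trick (adjust the pattern if your rsync version prints thousands separators):

rsync -an --stats sourcedir/ destdir/ | grep 'Total transferred file size'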

You can ignore Total bytes sent and Total bytes received as they only refer to the actual data transferred by the rsync process. In a dry run (-n option) this amounts to only the communication data exchanged by the rsync processes.

Also of interest are the Number of files and Number of files transferred statistics. It is also worth noting that the trailing slashes on the directories are important. If you leave them out, what you are actually testing is the copying of sourcedir to destdir/sourcedir which is probably not what you want to do if you are trying to compare their contents.

If this post was helpful to you, please spread the word and share it with others!

Overview

I don’t have all of the numbers memorized, but here’s what I remember off the top of my head:

  • They had about 400 lab stations available, each with a WYSE thin client and two monitors.
  • Everything was “in the cloud” running from data centers across the country, none of them local.
  • Each lab’s VMs were created and destroyed on demand.
  • One monitor had the virtual environment and the other had your PDF lab guide.
  • Over the course of the conference they created/destroyed nearly 20,000 VMs.

Some Problems

I had to re-take a couple of labs due to some slowness issues. These appeared to be due to some storage latency when certain combinations of labs were turned up at the same time. I overheard some of the lab guides asking people to move to a different workstation when they complained of slowness. They explained that, by moving to a different station you would be logging in to a different cluster of servers, which would possibly help speed you up. I opted to come back later and re-take the two troubled labs. I was only able to get in 8 lab sessions as a result. I could have potentially completed 10 or 11.

Most of the time the lab VMs were very responsive and I was able to complete them with plenty of time to spare. The default time allotted was 90 minutes, but they would adjust that down to as low as 60 minutes if there was a long line in the waiting area. Prior to one session, I had to wait in the “Pit Stop” area. Here’s a photo I snapped while waiting:

[Photo: the “Pit Stop” lab waiting area]

List of Labs I Took

Here’s the list of labs I sat through:

  • Troubleshooting vSphere
  • Performance Tuning
  • ESXi Remote Management Utilities
  • Site Recovery Manager Basic Install & Config
  • Site Recovery Manager Extended Config & Troubleshooting
  • VMware vCenter Data Recovery
  • VMware vSphere PowerCLI
  • VMware vShield

Overall Impression

My overall impression of the lab environment was positive. Despite a few performance issues, I think they did an excellent job of presenting a very large volume of labs. I certainly learned a lot while sitting the labs and look forward to taking more next year. I’m sure the labs team gathered a lot of data which will help them improve the lab performance for next year as well.