Tag Archives: Linux

Although ReciPants v1.2 is still hosted on SourceForge (and Freecode), it has not been actively developed or updated since 2004. If you search for ReciPants on this blog, you will see that I've had some trouble migrating it between servers and keeping it working over the years. I therefore finally decided to migrate all of our recipes (close to 500 of them) out of this software and into the latest stable release of MediaWiki. While I'm probably the last person on earth using this software, I thought I'd share here how I performed the export, just in case I'm not!

MediaWiki allows export and import of pages in XML format. This page on the MediaWiki site was very helpful in providing the required format of this XML file. Additionally, I installed a fresh copy of MediaWiki on my Web server, mocked up a fake recipe page similar to the format I wanted, then exported that page in order to inspect it. One of the main differences I noticed between the example from the MediaWiki site and the actual export I performed was near the <text> tag. In the example, the tag is simply <text>, but I found that my imports using this tag were not getting rendered in my wiki as wikitext. My actual exported page, however, had the following tags preceding the <text> tag and a different <text> tag itself:

<model>wikitext</model>
<format>text/x-wiki</format>
<text xml:space="preserve">

That combination of tags resulted in the wikitext being rendered properly. Without them, the raw wikitext was shown in MediaWiki with no line breaks (LF/CR) at all, which made it very unreadable!
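For reference, here is a trimmed sketch of one imported page from my export file (the title, timestamp, username, and wikitext are placeholders, and the whole thing sits inside the usual <mediawiki> root element):

<page>
  <title>Sample Berry Pie</title>
  <revision>
    <timestamp>2014-01-01T00:00:00Z</timestamp>
    <contributor>
      <username>Importer</username>
    </contributor>
    <model>wikitext</model>
    <format>text/x-wiki</format>
    <text xml:space="preserve">== Ingredients ==
* 2 cups berries

== Directions ==
# Mix and bake.</text>
  </revision>
</page>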

With that information at hand, I set to work creating a script in Perl which would connect to the MySQL recipants database using DBI and extract the various data I wished to export into variables. The main outer loop iterates through the recipes table. Inside that loop, the other tables are queried for the data they hold about the current recipe. Everything is shoved into variables, arrays, or arrays of arrays along the way. At the end of the main loop, the XML for that page is generated.
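As a condensed sketch of that structure (the connection parameters, table names, and column names shown here are illustrative; the real ones are in the full script linked below):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to the ReciPants database (placeholder credentials).
my $dbh = DBI->connect('DBI:mysql:database=recipants;host=localhost',
                       'dbuser', 'dbpass', { RaiseError => 1 });

# Main outer loop: one pass per recipe.
my $recipes = $dbh->selectall_arrayref('SELECT recipe_id, name FROM recipes',
                                       { Slice => {} });
for my $r (@$recipes) {
    # Inner queries: gather this recipe's rows from the other tables.
    my $ingredients = $dbh->selectcol_arrayref(
        'SELECT name FROM ingredients WHERE recipe_id = ?',
        undef, $r->{recipe_id});

    # At the end of the loop, emit the <page> XML for this recipe.
    print "<page>\n  <title>$r->{name}</title>\n";
    # ... <revision>, <model>, <format>, and <text> are built here ...
    print "</page>\n";
}
$dbh->disconnect;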

You can download or view the script source code: rpexport.txt

This is a sample of the XML output, limited to just recipes with “berry” in the name: berry.xml

Here’s a quick write-up from my presentation on The Amnesic Incognito Live System (TAILS) at the August 2014 CIALUG meeting.

The main TAILS Web site: https://tails.boum.org/

TAILS is intended to make it easy for non-technical end users to boot into a live, Linux-based OS which automatically routes its traffic over The Onion Router (TOR) network. The intention is to provide anonymity, privacy, and plausible deniability for dissidents, whistle-blowers, or anyone who feels the need to conduct searches or communicate securely while leaving little to no trace of those activities on the host system.

While TAILS does succeed at providing a bootable system that defaults to a TOR-routed connection, non-technical or even non-Linux end users will need some training from a more savvy user to make the best use of this system. Keep the following points in mind:

  • TAILS is still susceptible to any issues which affect the TOR network. Know and understand how to limit your behaviors when using TOR, and apply those limits to your use of TAILS.
  • Out of the box, the current version of TAILS (as of this writing, 1.1, released July 2014) had 34 packages which were out of date, and TOR itself was one of those pending updates. Installing updates before each use should be a top priority, but more on that later.
  • It does NOT appear that TAILS uses the TOR Browser Bundle. This makes it all the more important to apply updates before each use, as Firefox, Vidalia, and the TOR Button may need to be updated (no updates were pending for these in version 1.1 as of this writing).

As mentioned above, the very first thing which should be done after successfully booting to TAILS and connecting to the Internet and TOR network is to apply updates. This is accomplished by logging in to a terminal, elevating to root, and running 'apt-get update' followed by 'apt-get upgrade' (a combined command sequence follows the list below). Note that I ran into the following issues when updating version 1.1 of TAILS in this manner:

  • Updating was slow. This is actually a good thing because the updates are grabbed via the TOR network.
  • When the TOR package gets updated, it prompts whether or not to replace the configuration. I recommend keeping the existing configuration (the default choice).
  • When the TOR package gets updated, it stops the TOR service but doesn’t restart it. Later in the update process, some other packages need to download firmware. Because the TOR service is stopped, that process fails. I had to start the TOR service again, then re-run ‘apt-get upgrade’ to successfully update those packages.
  • When the TOR package gets updated, it breaks the running Vidalia process. I simply closed it. TOR continued to work without that process running.
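Putting that together, the update sequence I ended up with looked roughly like this (a sketch based on version 1.1; the root-elevation step and service name may differ on other releases):

sudo -i                # elevate to root (requires the administration password, if you set one at boot)
apt-get update         # refresh the package lists; this is slow over TOR, so be patient
apt-get upgrade        # keep the existing TOR configuration when prompted
service tor start      # TOR is left stopped after its own upgrade; restart it
apt-get upgrade        # re-run to pick up the packages that failed to fetch firmware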

While this isn’t a complete summary of my presentation, I hope it is helpful. Please share this post if you found it so. Thanks!

Below is a link to the presentation I gave at the September 18th, 2013 CIALUG meeting.

Tor and the Tor Browser Bundle: Hints, Tips, and Tricks for Effective Use

Background

If you aren't familiar with GNU screen, you really should stop right now and familiarize yourself with it. It is a very powerful utility which allows you to run terminal-based programs on a system, disconnect from that session, and reconnect later from the same or a different location. You can also start multiple terminals within a given screen session. Whenever I ssh into a system, I almost always launch screen first. If my ssh session gets disconnected unexpectedly, I can simply reconnect and pick up where I left off by re-attaching to the screen session.
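If you have never used it, the basic cycle looks like this (Ctrl-a d is the default detach keystroke):

screen        # start a new session and work inside it
              # press Ctrl-a, then d, to detach
screen -ls    # list detached sessions
screen -r     # re-attach and pick up where you left off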

The Problem

I was recently working with a client on a process that was going to take quite some time to complete. The command we were running displayed a progress indicator, so we could monitor the progress off and on over time. I assumed that, since we both had the ability to use sudo to change user privileges, he would be able to run sudo su - myusername followed by screen -r to take over the screen session I had started which contained this command. When he tried this, however, he was greeted with the following error:

Cannot open your terminal '/dev/pts/1' - please check.

The Solution

Searching around on Google turns up a couple of different solutions. One of them suggests that the second user change the permissions on his tty to allow everyone access to it. While this works, it is definitely a Bad Idea for security, as any user on the system could then snoop on that tty.

The other solution suggests that the second user issue script /dev/null after escalating to the first user's account. This works and does not appear to have the same security implications as the method above, because everyone retains access to their own ttys.

But Why Does This Work?

What I found was that none of the posts I ran across which presented the second, preferred solution explained why this works. They merely said, “use this method,” and left it at that. Being naturally curious, and harboring a concern as to whether this also opened up a tty to others’ snooping, I had to investigate.

Prerequisites

Of course, this all assumes that at least the second user has the ability to sudo su – and escalate their privileges. That is all. Let’s move on.

Stepping Through The Process

Here’s how I went about discovering what exactly script /dev/null does and why it allows the second user to access what appeared to be an inaccessible tty.

First, usera logs in via ssh, checks which tty was assigned, checks the permissions on that tty and launches screen:

usera@localhost ~ $ ssh usera@remotehost
usera@remotehost ~ $ tty
/dev/pts/1
usera@remotehost ~ $ ls -l /dev/pts/1
crw--w---- 1 usera tty 136, 1 2011-01-09 20:14 /dev/pts/1
usera@remotehost ~ $ screen

As you can see, usera has RW permissions, group members have W permissions and others have no access at all to this tty. Next, userb logs in to the same system via ssh, checks which tty was assigned and checks the permissions on that tty:

userb@localhost ~ $ ssh userb@remotehost
userb@remotehost ~ $ tty
/dev/pts/2
userb@remotehost ~ $ ls -l /dev/pts/2
crw--w---- 1 userb tty 136, 2 2011-01-09 20:20 /dev/pts/2

Again, the same permissions are present on the tty assigned to userb. So neither user can snoop on the other’s tty at this point. Here’s where it gets interesting, though. Let’s have userb escalate to usera and check the tty assignment and permissions again:

userb@remotehost ~ $ sudo su - usera
[sudo] password for userb:
usera@remotehost ~ $ tty
/dev/pts/2
usera@remotehost ~ $ ls -l /dev/pts/2
crw--w---- 1 userb tty 136, 2 2011-01-09 20:20 /dev/pts/2

This is where I had my “aha moment.” Although userb has changed to usera, the same tty (with the same permissions) is in use. Therefore, all commands issued are now under usera but any command which tries to manipulate the tty (like screen does) will fail because the tty remains under control of userb.

So now let’s take a look at what script /dev/null does to the tty:

usera@remotehost ~ $ script /dev/null
Script started, file is /dev/null
usera@remotehost ~ $ tty
/dev/pts/3
usera@remotehost ~ $ ls -l /dev/pts/3
crw--w---- 1 usera tty 136, 3 2011-01-09 20:36 /dev/pts/3

Ahh, we now have a new tty assigned to this user. The script utility allocates a brand-new pseudo-terminal for the session it records, and that pty is created under the current effective user, which is now usera. Therefore, when screen -r is issued, the currently assigned tty, /dev/pts/3, is accessible to usera and the command succeeds! Also note that this new tty has the same permissions as the original usera tty, so it should be just as secure from snooping.

Conclusion

If you need to share a screen session with another (admin-rights holding) user, then the script /dev/null method is much preferred over mucking around with tty permissions. It appears that the script /dev/null method is just as secure as the original user’s tty because the permissions on the new tty are exactly the same.

On a more general note, be aware that solutions you find on the Internet might work, but they may not always be the best solution for the task at hand. Be sure you understand the implications of what you are doing instead of blindly copying and pasting commands you found on someone’s blog. If you are not sure what a particular solution does, I encourage you to test as I did (on a non-production system, of course) to make sure you understand it before you put it to use.

The Hardware

The Software

The netbook came with Windows 7 Starter Edition pre-installed. Because that was basically useless, I opted to upgrade it to Ubuntu. After firing up the laptop and going through the initial setup for Windows 7 (which took over an hour to complete), I rebooted the system to Clonezilla and took a drive image. Once I had that image, I wiped the system and installed Ubuntu Netbook Edition 10.04. Out of the box, that gave me Firefox, OpenOffice, and several other useful open source software products. I added the following items to round out the tools I'd need for the conference:

  • Dropbox
  • Truecrypt
  • KeePass
  • Pidgin

What Worked, What Didn’t

Most of the features I needed worked just fine under Ubuntu. Wireless and wired networking, along with suspend and hibernate, worked flawlessly the whole time. The suspend and hibernate modes helped me extend my battery life significantly: I could close the lid to suspend whenever I didn't need to take notes, then open it to resume quickly when I wanted to jot down a note or two. I also tried to remember to use hibernate between sessions to help maximize my battery life, but I probably ended up using suspend most of the time.

While I did not actually benchmark the battery life, I had no problem going a full day of conference sessions without stopping to charge up. I did aggressively use suspend and hibernate modes to maximize my battery life. I also kept the screen at its dimmest setting most of the time; all of the conference rooms and labs were lit low enough that I could easily pull this off. On a full charge, Ubuntu reported between 5 and 6 hours of run time at the beginning of each day, and I was able to realize 9 to 11 hours of usage with my battery-saving tactics. If I remember correctly, the lowest my battery ever got was down to 55 minutes of estimated run time.

I discovered that the sound card did not send audio through the audio jack on the side. Sound worked fine through the built-in speaker and would cut off when headphones were connected, but there was no audio through the headphones. I'm still researching a fix for this, but it was by no means a show stopper.

I was also annoyed that I could not turn off the wireless radio with the keyboard hot key combination. In order to use this netbook on the airplane, I had to reboot the system and enter the BIOS to disable the wireless NIC for in-flight use.

Final Thoughts

I was extremely pleased by the performance of this little netbook running Ubuntu Netbook Edition. It met all the needs I had for the conference. In fact, as I write this I’m doing so from this little netbook while riding as a passenger down an Iowa 2-lane highway using the Verizon MiFi for connection back to my server.

I think this little netbook will remain in my hardware arsenal for quite some time.

So you want one USB flash stick to boot the latest versions of both System Rescue CD and Clonezilla-Live? So did I! Easy, I thought: just use UNetbootin to create each one in turn, copying the files between runs, then merge them together. Well, it wasn't that easy.

First off, Clonezilla (1.2.5-35) installs just fine via UNetbootin, but the latest SRCD (1.5.8) does not. I noticed, however, that SRCD now includes an installer script called usb_inst.sh which essentially does the same thing UNetbootin does. Here are the steps I followed to get them both crammed onto one 1 GB USB flash stick (with about 608 MB of spare space):

  1. Install SRCD to the USB stick using the usb_inst.sh script.
  2. Boot to the USB stick to verify it worked OK.
  3. Install Clonezilla to the same USB stick with UNetbootin. Be sure to NOT overwrite the files when it prompts you to do so.
  4. Boot to the USB stick to make sure it still works for SRCD. At this point, Clonezilla will NOT show up in the boot menus.
  5. Remove the first few lines from the top of /syslinux.cfg, stopping at the blank line before the first “label” line.
  6. Merge the /syslinux.cfg file into /syslinux/syslinux.cfg: cat /syslinux.cfg >> /syslinux/syslinux.cfg does the trick, appending the Clonezilla entries to the end of the SRCD menu (see the sketch after this list).
  7. Boot to the USB stick several times and verify you can start up each of the menu items successfully.
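For steps 5 and 6, assuming the stick is mounted at /mnt/usb (your mount point will differ), the commands look something like this:

cd /mnt/usb
vi syslinux.cfg                             # delete the header lines above the first "label" line
cat syslinux.cfg >> syslinux/syslinux.cfg   # append the Clonezilla entries to the SRCD menu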

Note that the only reason this works is that the SRCD install script uses the /syslinux subfolder for its boot menus, and that both tools use similar boot techniques. If the SRCD and UNetbootin scripts continue to configure themselves like this, then this method should work for future versions, too.

For my next challenge... cram BackTrack, SRCD, and Clonezilla onto a 4 GB USB flash stick!

Here are some hints and tips for those who are new to using ssh/OpenSSH for Linux system administration. Most of these tips have come from my recent work with a large number of Linux servers hosted on a VMware ESXi 4.x server farm.

Password authentication vs. ssh key authentication

  • If you are administering only a few systems on a closed network (i.e. accessible only locally or by a secure VPN connection), then password authentication is probably OK, but you should consider using ssh keys anyway.
  • If your network needs to allow ssh access directly from the Internet or you are administering a large number of systems, then you should definitely use ssh keys.

Ssh-agent, scripting and cron

  • ssh-agent can save you from typing your ssh key's passphrase every time the key is needed.
  • This site gives a good overview of ssh-agent and includes some code you can add to your .bash_profile script to ensure your keys get added upon login (a minimal sketch of that sort of snippet appears after this list).
  • Although there are hack-ish ways to get ssh-agent and cron to work together, you are probably better off setting up special keys to use with scripts that must be called via cron. Just keep in mind that keys without passwords are a security risk.
  • If you cannot risk using keys without passwords, consider running those cron scripts locally on each system. Utilize shared file space or e-mail to collect the results.
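Here is a minimal sketch of that sort of .bash_profile snippet (the key path is a placeholder, and the version on the site linked above is more robust):

# Start an agent if one isn't already running, then load the key.
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)" > /dev/null
    ssh-add ~/.ssh/id_rsa
fi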

Bash one-liners and ssh with ssh keys

  • I’ve become a fan of using bash “one-liner” scripts to keep abreast of server stats such as load averages, available patches and disk usage.
  • Keep an up-to-date list of hosts in a file called hostlist.
  • Run your one-liners while ssh-agent has your ssh keys cached.
  • Here’s a template one-liner which checks uptime on each host listed in the file hostlist:

for e in `cat hostlist`; do echo $e; ssh $e "uptime"; done

  • In the above example, you can replace uptime with just about any command which exists on the remote host.
  • You can also synchronize some of the configurations under /etc with the above by utilizing either scp or rsync instead of ssh in that one-liner.
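For example, here is a sketch that pushes a local ntp.conf out to every host (the file and remote user are placeholders, and writing into /etc requires appropriate privileges on the remote side):

for e in `cat hostlist`; do echo $e; rsync -a /etc/ntp.conf root@$e:/etc/ntp.conf; done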

Turn your one-liners into scripts

  • If you find yourself using the same one-liner over and over, it is time to save yourself some typing and turn it into a script.
  • I like to keep these sorts of scripts under ~/bin. I also like to add that directory to my $PATH and create a symlink at ~/scripts.
  • Some one-liners are good candidates to be turned into cron scripts. Just keep in mind the risks of using ssh keys without passwords, and include logic to detect the conditions you want to monitor. For example, you can run /proc/loadavg through awk to isolate one of the three figures and send yourself an e-mail if that average is too high, as in the sketch below.
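Here's a sketch of such a cron script (the hostlist path, threshold, and e-mail address are placeholders):

#!/bin/bash
# Mail a warning for any host whose 1-minute load average exceeds a threshold.
THRESHOLD=4
for e in $(cat ~/hostlist); do
  LOAD=$(ssh $e "awk '{print \$1}' /proc/loadavg")
  if [ $(echo "$LOAD > $THRESHOLD" | bc) -eq 1 ]; then
    echo "$e load average is $LOAD" | mail -s "High load on $e" admin@example.com
  fi
done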
  • Meeting topic was “graphics”, but we pretty much had a free-for-all discussion.
  • Last night I compiled this list of graphics-related links. We really didn't talk about this list all that much.
  • Dave Weis from Internet Solver had swag to hand out (spiffy tees) in celebration of being recognized in the Business Record as a "Best Of."
  • I brought some miscellaneous electronics and books to give away.

Well, my demo of the Linux Gamers Live DVD didn't go so well. My crappy old computer did not perform very well, so we ended up borrowing someone's laptop to perform the demo. It wasn't too exciting; I just brought up each game on the disc and played it for a bit. We also discussed some other Linux-friendly gaming.

After that FAIL, I needed some success. I finally got some relief with the ReciPants database issue. The problem seems to be with my method for transferring the database from the old server to the new. Here’s a brief synopsis of what I did to get it to work:

I migrated from the old server (with MySQL 3.23) to a virtual machine running CentOS 3 (also with MySQL 3.23).

On the old server:

  • Exported the database with the following command (no extra options used; my mistake previously was exporting with --opt and/or --add-drop-tables):

mysqldump -u root -p ReciPants > ReciPants-database.sql

On the “new” server:

  • Set up ReciPants v1.2 on the new server per the Web site instructions, including running the SQL scripts tables-mysql.sql and ref_data.sql.
  • Once I confirmed that worked OK, I restored the data from the old server with the following command (the -f is necessary, as there is a non-critical error early on that would otherwise halt the process):

mysql -u root -p -f ReciPants < ReciPants-database.sql

Again, I need to test the above method on MySQL 5.x, but I believe it will work just fine.

At the Central Iowa Linux User’s Group meeting this Wednesday (10/15), the theme is “Linux Gaming.” I intend to demo the Linux Gamers Live DVD on an older AMD 2 GHz box I have.

The problem with that? I need a better video card, more RAM, and to download the bootable ISO! The on-board SiS video sucks for gaming, the system has only 512 MB of RAM installed, and I deleted my copy of the ISO file a couple of weeks ago, thinking I wouldn't need it...

I managed to purchase an ATI Radeon X1550 this afternoon, so that's one thing off the list. I'm downloading the 3+ GB ISO image as I type this. The only thing left is the RAM. I went to two local computer shops today and neither had the RAM I need (PC2100, PC2700, or PC3200). I've got one more local shop to check tomorrow.

Wish me luck.