Yeah, it has been way too long. Here’s some content from a presentation I gave to the local Linux Users’ Group a while back.
Although ReciPants v1.2 is still hosted on SourceForge (and Freecode), it has not been actively developed or updated since 2004. If you search for ReciPants on this blog, you will see that I've had some trouble migrating it between servers and keeping it working over the years. I therefore finally decided to migrate all of our recipes (close to 500 of them) out of this software and into the latest stable release of MediaWiki. While I'm probably the last person on earth using this software, I thought I'd share here how I performed the export, just in case I'm not!
MediaWiki allows export and import of pages in XML format. This page on the MediaWiki site was very helpful in providing the required format of this XML file. Additionally, I installed a fresh copy of MediaWiki on my Web server, mocked up a fake recipe page similar to the format I wanted, then exported that page in order to inspect it. One of the main differences I noticed between the example from the MediaWiki site and the actual export I performed was near the <text> tag. In the example, the tag is simply <text>, but I found that my imports using this tag were not getting rendered in my wiki as wikitext. My actual exported page, however, had the following tags preceding the <text> tag and a different <text> tag itself:
<model>wikitext</model>
<format>text/x-wiki</format>
<text xml:space="preserve">
That combination of tags resulted in the wikitext being rendered properly. Without them, the raw wikitext was displayed unrendered in MediaWiki with no line breaks at all, which made it very hard to read.
With that information at hand, I set to work creating a script in Perl which would connect to the MySQL recipants database using DBI and extract the various data I wished to export into variables. The main outer loop iterates through the recipes table. Inside that loop, the other tables are queried for the data they hold about the current recipe. Everything is shoved into variables, arrays, or arrays of arrays along the way. At the end of the main loop, the XML for that page is generated.
You can download or view the script source code: rpexport.txt
This is a sample of the XML output, limited to just recipes with “berry” in the name: berry.xml
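If you want to load the generated XML from the command line rather than through the wiki's Special:Import page, MediaWiki ships a maintenance script for exactly this. A minimal sketch, assuming the wiki is installed under /var/www/wiki (that path is just an example):
cd /var/www/wiki
php maintenance/importDump.php < berry.xml        # reads the page XML from standard input
php maintenance/rebuildrecentchanges.php          # so the imported pages show up in Recent Changes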
Here’s a quick write-up from my presentation on The Amnesic Incognito Live System (TAILS) at the August 2014 CIALUG meeting.
The main TAILS Web site: https://tails.boum.org/
TAILS is intended to make it easy for non-technical end users to boot into a live, Linux-based OS which automatically routes its traffic over The Onion Router (TOR) network. The intention is to provide anonymity, privacy, and plausible deniability for dissidents, whistle-blowers, or anyone who feels the need to conduct searches or communicate securely while leaving little to no trace of those activities on the host system.
While TAILS does succeed at providing a bootable system that defaults to a TOR-routed connection, non-technical or even non-Linux end users will need some training from a more savvy user to make the best use of this system. Keep the following points in mind:
- TAILS is still susceptible to any issues which affect the TOR network. Know and understand how to limit your behaviors when using TOR and apply those practices to your use of TAILS.
- Out of the box, the current version of TAILS (as of this writing, 1.1, released July 2014) had 34 packages which were out of date, and TOR itself was one of those pending updates. Installing updates before each use should be a top priority, but more on that later.
- It does NOT appear that TAILS uses the TOR Browser Bundle. This makes it even more important to apply updates before each use, as Firefox, Vidalia, and the TOR Button may need to be updated (no updates were pending for these in version 1.1 as of this writing).
As mentioned above, the very first thing which should be done after successfully booting to TAILS and connecting to the Internet and the TOR network is to apply updates. This is accomplished by opening a terminal, elevating to root, and running 'apt-get update' followed by 'apt-get upgrade'. Note that I ran into the following issues when updating version 1.1 of TAILS in this manner (a consolidated sketch of the whole sequence follows the list):
- Updating was slow. This is actually a good thing because the updates are grabbed via the TOR network.
- When the TOR package gets updated, it prompts whether or not to replace the configuration. I recommend keeping the existing configuration (the default choice).
- When the TOR package gets updated, it stops the TOR service but doesn’t restart it. Later in the update process, some other packages need to download firmware. Because the TOR service is stopped, that process fails. I had to start the TOR service again, then re-run ‘apt-get upgrade’ to successfully update those packages.
- When the TOR package gets updated, it breaks the running Vidalia process. I simply closed it. TOR continued to work without that process running.
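Putting the points above together, the update session I ended up running looked roughly like this. This is a sketch from memory; the 'tor' service name and the exact prompts are assumptions and may differ between TAILS releases:
sudo -i                  # elevate to root (TAILS asks for the administration password)
apt-get update           # refresh package lists over the TOR network (slow, as noted above)
apt-get upgrade          # keep the existing TOR configuration when prompted
service tor start        # the tor upgrade may leave the service stopped, so start it again
apt-get upgrade          # re-run so packages that failed to fetch firmware can finish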
While this isn’t a complete summary of my presentation, I hope it is helpful. Please share this post if you found it so. Thanks!
Below is a link to the presentation I gave at the September 18th, 2013 CIALUG meeting.
Tor and the Tor Browser Bundle: Hints, Tips, and Tricks for Effective Use
Background
If you aren't familiar with GNU screen, you really should stop right now and familiarize yourself with it. It is a very powerful utility which allows you to run terminal-based programs on a system, disconnect from that session, and reconnect later from the same or a different location. You can also start multiple terminals within a given screen session. Whenever I ssh into a system, I almost always launch screen first. If my ssh session gets disconnected unexpectedly, I can simply reconnect and pick up where I left off by re-attaching to the screen session.
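If you have never used it, the basic workflow only takes a few commands. A quick sketch:
screen                   # start a new screen session on the remote host
# run your long-lived programs inside the session,
# then press Ctrl-a followed by d to detach and leave them running
screen -ls               # list the sessions available on this host
screen -r                # re-attach to your detached session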
The Problem
I was recently working with a client on a process that was going to take quite some time to complete. The command we were running would give a progress indicator, so we could monitor the progress off and on over time. I assumed, since we both had the ability to use sudo to change user privileges, that he would be able to run sudo su - myusername followed by screen -r to take over the screen session I had started which contained this command. When he tried this, however, he was greeted with the following error:
Cannot open your terminal '/dev/pts/1' - please check.
The Solution
Searching around on Google turns up a couple of different solutions. One of these suggests that the second user should change the permissions on his tty to allow everyone access to it. While this works, it is definitely a Bad Idea for security, as any user on the system could then snoop that tty.
The other solution suggests that the second user should issue script /dev/null after escalating themselves to the first user's account. This works and does not appear to have the same security implications as the method above, because everyone retains access to their own ttys.
But Why Does This Work?
What I found was that none of the posts I ran across which presented the second, preferred solution explained why this works. They merely said, “use this method,” and left it at that. Being naturally curious, and harboring a concern as to whether this also opened up a tty to others’ snooping, I had to investigate.
Prerequisites
Of course, this all assumes that at least the second user has the ability to sudo su - and escalate their privileges. That is all. Let's move on.
Stepping Through The Process
Here’s how I went about discovering what exactly script /dev/null does and why it allows the second user to access what appeared to be an inaccessible tty.
First, usera logs in via ssh, checks which tty was assigned, checks the permissions on that tty and launches screen:
usera@localhost ~ $ ssh usera@remotehost
usera@remotehost ~ $ tty
/dev/pts/1
usera@remotehost ~ $ ls -l /dev/pts/1
crw--w---- 1 usera tty 136, 1 2011-01-09 20:14 /dev/pts/1
usera@remotehost ~ $ screen
As you can see, usera has RW permissions, group members have W permissions and others have no access at all to this tty. Next, userb logs in to the same system via ssh, checks which tty was assigned and checks the permissions on that tty:
userb@localhost ~ $ ssh userb@remotehost
userb@remotehost ~ $ tty
/dev/pts/2
userb@remotehost ~ $ ls -l /dev/pts/2
crw--w---- 1 userb tty 136, 2 2011-01-09 20:20 /dev/pts/2
Again, the same permissions are present on the tty assigned to userb. So neither user can snoop on the other’s tty at this point. Here’s where it gets interesting, though. Let’s have userb escalate to usera and check the tty assignment and permissions again:
userb@remotehost ~ $ sudo su - usera
[sudo] password for userb:
usera@remotehost ~ $ tty
/dev/pts/2
usera@remotehost ~ $ ls -l /dev/pts/2
crw--w---- 1 userb tty 136, 2 2011-01-09 20:20 /dev/pts/2
This is where I had my "aha moment." Although userb has changed to usera, the same tty (with the same permissions) is in use. Therefore, all commands are now issued as usera, but any command which tries to manipulate the tty (as screen does) will fail because the tty remains under the control of userb.
So now let’s take a look at what script /dev/null does to the tty:
usera@remotehost ~ $ script /dev/null
Script started, file is /dev/null
usera@remotehost ~ $ tty
/dev/pts/3
usera@remotehost ~ $ ls -l /dev/pts/3
crw--w---- 1 usera tty 136, 3 2011-01-09 20:36 /dev/pts/3
Ahh, we now have a new tty assigned to this user. Therefore, when screen -r is issued, the currently assigned tty, /dev/pts/3, is accessible to usera and the command succeeds! Also note that this new tty has the same permissions as the original usera tty, so it should be just as secure from snooping.
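So the whole procedure for the second user boils down to three commands:
userb@remotehost ~ $ sudo su - usera     # become the user who owns the screen session
usera@remotehost ~ $ script /dev/null    # get a fresh tty owned by usera
usera@remotehost ~ $ screen -r           # re-attach to usera's screen session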
Conclusion
If you need to share a screen session with another (admin-rights holding) user, the script /dev/null method is much preferred over mucking around with tty permissions. It appears to be just as secure as using the original user's tty, because the permissions on the new tty are exactly the same.
On a more general note, be aware that solutions you find on the Internet might work, but they may not always be the best solution for the task at hand. Be sure you understand the implications of what you are doing instead of blindly copying and pasting commands you found on someone’s blog. If you are not sure what a particular solution does, I encourage you to test as I did (on a non-production system, of course) to make sure you understand it before you put it to use.
OK, so I searched Google but couldn’t find the magic combination anywhere. Hopefully, this post will help you!
The setup: I wanted to compare the contents of two directories which had previously been synchronized via rsync without actually synchronizing them. The main goal was to find out the total size of the data which would need to be transferred so I could estimate how long the actual rsync run would take. To do this, you’d think the following would work, based on the rsync man pages:
rsync -avvni sourcedir/ destdir/
Broken down that is:
- -a archive meta-option
- -vv extra verbosity
- -n dry run
- -i itemize changes
The output, however, lists "total size" as the total size of all the files, NOT just the size of the changed files which would be synchronized. So I did some more reading of the rsync man page, tested several option combinations, and came up with the following solution:
rsync -an --stats sourcedir/ destdir/
Here’s a mock sample output from running that command:
Number of files: 2
Number of files transferred: 1
Total file size: 4096 bytes
Total transferred file size: 2048 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 82
File list generation time: 0.013 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 110
Total bytes received: 32

sent 110 bytes  received 32 bytes  284.00 bytes/sec
total size is 4096  speedup is 1.23
The particular stats you’ll need to parse are the following:
- Total file size: (given in bytes)
- Total transferred file size: (also in bytes, this is the changed data to be transferred)
You can ignore Total bytes sent and Total bytes received as they only refer to the actual data transferred by the rsync process. In a dry run (-n option) this amounts to only the communication data exchanged by the rsync processes.
Also of interest are the Number of files and Number of files transferred statistics. Note, too, that the trailing slashes on the directories are important. If you leave them out, what you are actually testing is the copying of sourcedir to destdir/sourcedir, which is probably not what you want if you are trying to compare their contents.
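If you want to script the comparison, the two size lines are easy to pull out of the --stats output. A quick sketch (the label text matches the rsync version I was using, so adjust the pattern if yours differs):
# dry run, keeping only the two totals we care about
rsync -an --stats sourcedir/ destdir/ | grep -E '^Total (file size|transferred file size):'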
If this post was helpful to you, please spread the word and share it with others!
The Hardware
- Asus Eee PC model 1018PB-BK8
- Verizon MiFi (used only at the hotel, as the conference had good wireless, but the hotel wireless sucked)
- Incase 10.2 inch netbook case (slightly larger than the laptop, but the best fit I could find at the store)
The Software
The netbook came with Windows 7 Starter Edition pre-installed. Because that was basically useless, I opted to upgrade it to Ubuntu. After firing up the laptop and going through the initial setup for Windows 7 (which took over an hour to complete), I rebooted the system to Clonezilla and took a drive image. Once I had that image, I wiped the system and installed Ubuntu Netbook Edition 10.04. Out of the box, that gave me Firefox, OpenOffice, and several other useful Open Source software products. I added the following items to round out the tools I'd need for the conference:
- Dropbox
- Truecrypt
- KeePass
- Pidgin
What Worked, What Didn’t
Most of the features I needed worked just fine under Ubuntu. Wireless and wired networking as well as suspend/hibernate worked flawlessly the whole time. The suspend and hibernate modes helped me extend my battery life significantly: I could quickly close the lid to suspend when I didn't need to take notes and open it again when I wanted to jot down a quick note or two. I also tried to remember to use hibernate between sessions to help maximize my battery life, but I probably ended up using suspend most of the time.
While I did not actually benchmark the battery life, I had no problems going a full day of conference sessions without stopping to charge up. I did aggressively use suspend and hibernate modes to maximize my battery life. I also kept the screen at its dimmest setting most of the time; all of the conference rooms and labs were lit low enough that I could easily pull this off. On a full charge, Ubuntu reported between 5 and 6 hours of run time at the beginning of each day, and I was able to realize 9-11 hours of usage with my battery-saving tactics. If I remember correctly, the lowest my battery ever got was down to 55 minutes of estimated run time.
I discovered that the sound card did not send audio through the audio jack on the side. Sound worked fine through the built-in speaker and would cut off when headphones were connected, but there was no audio through the headphones. I'm still researching a fix for this, but it was by no means a show stopper.
I was also annoyed that I could not turn off the wireless radio with the keyboard hot key combination. In order to use this netbook on the airplane, I had to reboot the system and enter the BIOS to disable the wireless NIC for in-flight use.
Final Thoughts
I was extremely pleased by the performance of this little netbook running Ubuntu Netbook Edition. It met all the needs I had for the conference. In fact, as I write this I’m doing so from this little netbook while riding as a passenger down an Iowa 2-lane highway using the Verizon MiFi for connection back to my server.
I think this little netbook will remain in my hardware arsenal for quite some time.
Large Scale Geek Assault
Moscone Center wasn’t big enough for the whole conference this year. With a record 17,000+ attendees, the halls were crowded and the lines to sessions were quite long — especially the first couple of days. I think a larger venue is in order for years to come. Not sure where they can go, though.
I was unable to get into a couple of sessions the first day, but managed to fill in some of that time with work in the labs (more on those later). Overall, though, I was able to cram in enough sessions to make it well worth the trip. My main problem was trying to narrow down my focus. This year, I tried to stick to sessions dealing with Troubleshooting and Best Practices.
In all, I took notes in 13 sessions and sat through 8 lab sessions. Not bad for a New-V?
Notes and Power Outlets
I made a good call and picked up a small netbook computer to take with me in lieu of my larger T61 ThinkPad. The longer battery life on the netbook (more info on it later) allowed me to skip the power outlets when racing to my next session. Still, I tried to conserve power by putting it into sleep or hibernate as much as possible during and between sessions. I uploaded my notes to my Dropbox account so I would have a backup.
Why was this a good call? Because there were a lot of people there with larger laptops suckling power from the outlets wherever they could be found. On the third day of the conference I found a small room on the second floor of Moscone West with a sign in front stating "VCP Lounge." Assuming I would have to prove I held a VCP certification, I quickly pulled up my transcript on my Droid, then walked in. Turns out no one was checking, so I sat down, plugged in, and caught up on some work e-mail which had accumulated over the first part of the week.
Food
The food provided at the conference was hit or miss. The breakfast area in Moscone West was huge and never seemed full when I was there (maybe it got busy later in the day?). They had croissants, muffins, danishes, bagels, fresh fruit, coffee and juices — everything you needed to fuel up for a morning of work in the lab which was in the same building.
I had a couple of cold boxed lunches. One was called Mediterranean Salad, which consisted of a main dish of mixed greens, veggies, and a vinaigrette dressing, plus an apple and a sort of fruit brownie. I grabbed that box and headed over to the Yerba Buena Gardens to eat outdoors and escape the crowds. The other cold lunch came in a similar box, also with a brownie bar, fruit, and a sandwich. The only hot lunch I had was not very good: overcooked fried chicken, cole slaw, and a biscuit. I avoided the hot lunches from that point on, and next year I'll stick to the cold lunches.
One day, I decided to escape the conference food and had a bowl of Seafood Udon at Shiki Japanese Restaurant which is across Third Street from the Moscone South building.
More to Come?
What have I missed in this first article? In the coming days I’m going to write up some articles with more detail on the following:
- My impressions of the lab environment.
- My netbook setup for the conference.
- List of labs I took and any significant notable items.
- List of sessions I attended and some of my notes from each.
So you want one USB Flash stick to boot the latest versions of both System Rescue CD and Clonezilla-Live? So did I! Easy, I thought, just use UNetbootin to create each one in turn, copying the files between runs, then merge them together. Well, it wasn’t that easy.
First off, Clonezilla (1.2.5-35) installs just fine via UNetbootin, but the latest SRCD (1.5.8) does not. I noticed, however, that SRCD now includes an installer script called usb_inst.sh which essentially does the same thing UNetbootin does. Here are the steps I followed to get them both crammed onto one 1 GB USB flash stick (with about 608 MB of spare space):
- Install SRCD to the USB stick using the usb_inst.sh script.
- Boot to the USB stick to verify it worked OK.
- Install Clonezilla to the same USB stick with UNetbootin. Be sure to NOT overwrite the files when it prompts you to do so.
- Boot to the USB stick to make sure it still works for SRCD. At this point, Clonezilla will NOT show up in the boot menus.
- Remove the first few lines from the top of /syslinux.cfg, stopping at the blank line before the first “label” line.
- Merge the two configuration files by appending /syslinux.cfg to the end of /syslinux/syslinux.cfg; cat /syslinux.cfg >> /syslinux/syslinux.cfg does the trick (see the sketch after this list).
- Boot to the USB stick several times and verify you can start up each of the menu items successfully.
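For reference, the config-file surgery in the steps above looks something like this from a shell, assuming the stick is mounted at /mnt/usb (the mount point is just an example, and the number of header lines to remove will vary):
cd /mnt/usb
cp syslinux.cfg syslinux.cfg.bak                # keep a backup of the UNetbootin-generated config
vi syslinux.cfg                                 # delete everything above the blank line before the first "label"
cat syslinux.cfg >> syslinux/syslinux.cfg       # append the Clonezilla entries to the SRCD menu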
Note that the only reason this works is that the SRCD install script uses that /syslinux subfolder for its boot menus, and that both are using similar boot techniques. If the SRCD and UNetbootin scripts continue to configure themselves like this, then this method should work for future versions, too.
For my next challenge... cram BackTrack, SRCD, and Clonezilla onto a 4 GB USB flash stick!
I was studying for LPI certification this evening and had a fit of whimsy whilst playing with regular expressions. Random thoughts passing through my head resulted in the following combination of "memes":
- Swear words in the Linux kernel source code.
- Britney Spears’ song, “If You Seek Amy.”
So I downloaded the source code (linux-source-2.6.28, Ubuntu), unzipped it and ran the following grep command against it:
grep -ri "fuck me" *
Amazingly, despite the prevalence of just the swear word itself in the kernel (33 variations including fuck, fucked, fucking, fucker, etc. in 2.6.28), there was only one hit:
fs/binfmt_aout.c: /* Fuck me plenty... */
So I guess a more apt title for this post is, “If you seek Amy Plenty.”