
Tag Archives: technology

Large Scale Geek Assault

Moscone Center wasn’t big enough for the whole conference this year. With a record 17,000+ attendees, the halls were crowded and the lines to sessions were quite long — especially the first couple of days. I think a larger venue is in order for years to come. Not sure where they can go, though.

I was unable to get into a couple of sessions the first day, but managed to fill some of that time with work in the labs (more on those later). Overall, though, I was able to cram in enough sessions to make it well worth the trip. My main problem was trying to narrow down my focus. This year, I tried to stick to sessions dealing with Troubleshooting and Best Practices.

In all, I took notes in 13 sessions and sat through 8 lab sessions. Not bad for a New-V?

Notes and Power Outlets

I made a good call and picked up a small netbook computer to take with me in lieu of my larger T61 ThinkPad. The longer battery life on the netbook (more info on it later) allowed me to skip the power outlets when racing to my next session. Still, I tried to conserve power by putting it into sleep or hibernate as much as possible during and between sessions. I uploaded my notes to my Dropbox account so I would have a backup.

Why was this a good call? Because there were a lot of people there with larger laptops suckling power from the outlets wherever they could be found. On the third day of the conference I found a small room on the second floor of Moscone West with a sign in front stating “VCP Lounge.” Assuming I would have to prove I held a VCP certification, I quickly pulled up my transcript on my Droid, then walked in. Turns out no one was checking, so I sat down, plugged in and caught up on some work e-mail which had accumulated over the first part of the week.

Food

The food provided at the conference was hit or miss. The breakfast area in Moscone West was huge and never seemed full when I was there (maybe it got busy later in the day?). They had croissants, muffins, danishes, bagels, fresh fruit, coffee and juices — everything you needed to fuel up for a morning of work in the lab which was in the same building.

I had a couple of cold boxed lunches. One was called Mediterranean Salad, which consisted of a main dish of mixed greens, veggies and a vinaigrette dressing, plus an apple and a sort of fruit brownie. I grabbed that box and headed over to the Yerba Buena Gardens to eat outdoors and escape the crowds. The other cold lunch came in a similar box, also with a brownie bar, fruit and a sandwich. The only hot lunch I had (overcooked fried chicken, cole slaw and a biscuit) was not very good, so I avoided the hot lunches from that point on. Next year, I’ll stick to the cold lunches.

One day, I decided to escape the conference food and had a bowl of Seafood Udon at Shiki Japanese Restaurant which is across Third Street from the Moscone South building.

More to Come?

What have I missed in this first article? In the coming days I’m going to write up some articles with more detail on the following:

  • My impressions of the lab environment.
  • My netbook setup for the conference.
  • List of labs I took and any significant notable items.
  • List of sessions I attended and some of my notes from each.

First off, my hat goes off to both Sean Clark and Theron Conrey for organizing an excellent gathering which mixed geeks, beer and munchies at the Thirsty Bear. I got to rub elbows with Scott Lowe, author of Mastering VMware vSphere 4, which was instrumental in my obtaining my VCP 4 certification this year. I resisted the urge to ask for a photo, but I did manage to get his business card.

Anyway, I think Theron and Sean will need a bigger venue next year. The place was packed with people, but just to capacity. I’m sure interest in this event will grow for next year so I hope they can find a suitable location. Maybe get a few kegs from the Thirsty Bear to keep the tradition going?

Hopefully I can make it again next year. I need to get better at introducing myself to people and socializing. Guess I’m just your average introverted Geek, but I’m working on it!

So you want one USB Flash stick to boot the latest versions of both System Rescue CD and Clonezilla-Live? So did I! Easy, I thought, just use UNetbootin to create each one in turn, copying the files between runs, then merge them together. Well, it wasn’t that easy.

First off, Clonezilla (1.2.5-35) installs just fine via UNetbootin, but the latest SRCD (1.5.8) does not. I noticed, however, that SRCD now includes an installer script called usb_inst.sh which essentially does the same thing UNetbootin does. Here are the steps I followed to cram them both onto one 1 GB USB flash stick (with about 608 MB to spare):

  1. Install SRCD to the USB stick using the usb_inst.sh script.
  2. Boot to the USB stick to verify it worked OK.
  3. Install Clonezilla to the same USB stick with UNetbootin. Be sure to NOT overwrite the files when it prompts you to do so.
  4. Boot to the USB stick to make sure it still works for SRCD. At this point, Clonezilla will NOT show up in the boot menus.
  5. Remove the first few lines from the top of /syslinux.cfg, stopping at the blank line before the first “label” line.
  6. Merge the two config files: cat /syslinux.cfg >> /syslinux/syslinux.cfg does the trick. Be sure to append /syslinux.cfg to the end of /syslinux/syslinux.cfg, not the other way around (see the sketch after this list).
  7. Boot to the USB stick several times and verify you can start up each of the menu items successfully.
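
For reference, here is a minimal shell sketch of steps 5 and 6. The /mnt/usb mount point is a placeholder for wherever your stick is mounted, and the sed command assumes the first blank line in /syslinux.cfg sits just above the first label entry:

# Step 5: strip the header lines, up to and including the first blank line
sed -i '1,/^$/d' /mnt/usb/syslinux.cfg
# Step 6: append the Clonezilla menu entries to the SRCD boot menu
cat /mnt/usb/syslinux.cfg >> /mnt/usb/syslinux/syslinux.cfg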

Note that the only reason this works is that the SRCD install script uses that /syslinux subfolder for its boot menus, and that both tools use similar boot techniques. If the SRCD and UNetbootin scripts continue to configure themselves like this, this method should work for future versions, too.

For my next challenge. . . cram BackTrack, SRCD and Clonezilla on a 4 GB USB Flash stick!

Everything requires time. Writing this article required time to think, time to write a draft, time to edit, etc. Sleeping requires time. Eating requires time. Planning requires time. You get the idea.

Therefore, a complex GTD system itself requires a significant amount of time to implement. Tickler files, to-do lists and the like take time to set up and maintain. With complexity, these tasks become chores, and most people try to avoid chores. That makes the barrier to entry quite high for these systems, which in turn makes turning them into habits more difficult.

I say, start simple. You may decide later to build up to a more complex method, or you may find that you need go no further. My method revolves around my Calendar and Inbox. Quite simply, if it is worth spending the time to do, it should be on your calendar. Period.

Incoming tasks arrive in your Inbox and go one of three ways:

  • Complete the task immediately or respond if additional information is required. Move a copy to your Follow-up folder and add an alert so you can keep track later.
  • On the calendar to be completed at some point in the future.
  • In the Trash. Save what you need for long-term documentation in the appropriate location (folder, Wiki, project documentation system, etc.), then delete it.

Use your calendar to block off recurring appointments for handling e-mail, planning and scheduling, but also be flexible. If it is on your calendar, you can drag it to a different time to re-schedule. Keep it on your calendar, and it will get done. If you find that you are bumping a particular item on your calendar too much, then you should re-consider whether it should be on there at all — it obviously isn’t that important to you.

This excellent animation by Nina Paley succinctly explains the dangers the Electronic Frontier Foundation fights every day. If you ever encounter someone who just doesn’t understand why most EULAs and Three Strikes (or HADOPI) laws are evil, this will help make your point clear.

And don’t forget to visit Nina Paley’s blog! There’s lots of good stuff there, too.

Here’s the video, but please visit the page for the full resolution version and more info!

Here’s a quick tip on how to create a bootable USB flash drive from which you can install ESXi.

  1. Download UNetbootin (available for Windows and Linux).
  2. Download the latest ESXi .iso file from VMware.
  3. Format your USB flash drive (1 to 2 GB should work; I used a 4 GB one) with a FAT32 file system. One way to do this on Linux is sketched after this list.
  4. Fire up UNetbootin and select the option to use your own .iso file.
  5. Make sure you choose the RIGHT USB flash drive — Best Practice would be to have only your target drive connected while doing this!
  6. Click OK and let it cook. This may take a few minutes.
  7. When it prompts you to reboot at the end, cancel; you don’t actually want to reboot.
  8. Unmount your USB flash drive and test it on ESXi compatible hardware!

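If you prefer the command line for the formatting step, here is a minimal sketch for Linux. The /dev/sdX1 device name is a placeholder; double-check which device is your stick (fdisk -l helps) before running anything destructive:

# Make sure the partition isn't mounted before formatting
umount /dev/sdX1 2>/dev/null
# Create a FAT32 file system (the label is optional)
mkfs.vfat -F 32 -n ESXIBOOT /dev/sdX1
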
There are some limitations to this method:

  • Target system must support USB boot (very few don’t)
  • Won’t work with EFI firmware unless it supports booting in a legacy BIOS mode. Even then, it still may not work.

And, yes, you can do this with the regular ESX .iso file, but you’ll need to purchase licensing for those installs eventually. You can register ESXi for free use!

Click the photos or go to this set’s Flickr page for more photos, larger versions and slide show.
[Photo set: Red Morgan (three shots), People around a Mini, Red LeMans, White Triumph, Jaguar Dash, Blue Austin, Reflections]

IV. Tactical Dispositions
1. Sun Tzu said: The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy.
2. To secure ourselves against defeat lies in our own hands, but the opportunity of defeating the enemy is provided by the enemy himself.
3. Thus the good fighter is able to secure himself against defeat, but cannot make certain of defeating the enemy.

This section brings to mind the common phrase, “low hanging fruit,” which is often used in business and political circles to refer to easy targets for adding new business or reforming policies. What is often missing from those discussions, however, is the question, “How high are we capable of reaching?”

When cutting the budgets of time, resources and money for a project, the goal can quickly get out of reach. As the time allowance shrinks, you leave less room for planning which can cause a higher margin of error — thus requiring more time to fix. As resources shrink, either more time will be needed or more resources will need to be brought in near the end — most likely at greater cost. As funding is removed, the ability to respond to unforeseen issues (hardware failures, natural disasters, personnel issues, etc.) is greatly diminished and will result in the need for emergency funding.

Keep in mind, too, that the height of the low hanging fruit is relative to your own capabilities and the capabilities of your competition. Do not take on a project which, once initiated, could easily be taken over by a more capable competitor. At the same time, be on the lookout for smaller competitors who have taken on more than they can handle.

The fruit which hang low today may be out of reach tomorrow. Be prepared to reach as high as you can, but hold back the temptation to reach too high — even if the project is just within your range.

Sun Tzu wrote:

III. ATTACK BY STRATAGEM
18. Hence the saying: If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

We assume that we possess an accurate understanding of our own skills and capabilities. Many people, however, tend to overestimate their own performance, skills or capabilities. When they do, they make a fatal mistake which guarantees they will never see success.

Take some time out of your busy schedule to assess and re-assess your own skills and capabilities. An excellent way to do this is to review and update your current resume at least once a year. That will represent your view (however accurate) of yourself. Once you’ve updated this document, seek honest feedback from your colleagues. Hand them a red pen and ask them to be as brutally honest as they dare. Promise to take their feedback seriously and without retribution. Their honesty (should they wield it) may be difficult to swallow and could shatter your self-esteem, but you will be better off for the experience.

Next, be sure to obtain as much information as possible about potential projects and the teams with which you will work. If your skills do not benefit the team or will not advance the project, you should bow out. You will have only a 50/50 chance of success and may end up damaging your standing with others. Beware that some people may misrepresent the scope or requirements of a project, so always set up an exit path to avoid those situations.

With a solid grasp of your own capabilities and enough research prior to taking on any project, you will succeed far more times than you fail.

Here are some hints and tips for those who are new to using ssh/OpenSSH for Linux system administration. Most of these tips have come from my recent work with a large number of Linux servers hosted on a VMware ESXi 4.x server farm.

Password authentication vs. ssh key authentication

  • If you are administering only a few systems on a closed network (i.e. accessible only locally or by a secure VPN connection), then password authentication is probably OK, but you should consider using ssh keys anyway.
  • If your network needs to allow ssh access directly from the Internet or you are administering a large number of systems, then you should definitely use ssh keys (a setup sketch follows this list).
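
Setting up key authentication takes only a couple of commands. A minimal sketch, assuming OpenSSH on both ends (the user and host names below are placeholders):

# Generate a key pair -- pick a strong passphrase when prompted
ssh-keygen -t rsa -b 4096
# Append your new public key to authorized_keys on the remote host
ssh-copy-id admin@server.example.com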

Ssh-agent, scripting and cron

  • ssh-agent can save you typing in the password to your ssh key every time you need it.
  • This site gives a good overview of ssh-agent and includes some code you can add to your .bash_profile script to ensure your keys get added upon login (a sketch of that sort of snippet follows this list).
  • Although there are hack-ish ways to get ssh-agent and cron to work together, you are probably better off setting up special keys to use with scripts that must be called via cron. Just keep in mind that keys without passwords are a security risk.
  • If you cannot risk using keys without passwords, consider running those cron scripts locally on each system. Utilize shared file space or e-mail to collect the results.
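
As a starting point, here is a minimal .bash_profile snippet along those lines. This is my own sketch, not the code from that site, and it assumes your key lives at ~/.ssh/id_rsa:

# Start an agent for this login if one isn't already available
if [ -z "$SSH_AUTH_SOCK" ]; then
    eval "$(ssh-agent -s)" > /dev/null
    # Prompts once for the key's passphrase, then caches it
    ssh-add ~/.ssh/id_rsa
fi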

Bash one-liners and ssh with ssh keys

  • I’ve become a fan of using bash “one-liner” scripts to keep abreast of server stats such as load averages, available patches and disk usage.
  • Keep an up-to-date list of hosts in a file called hostlist.
  • Run your one-liners while ssh-agent has your ssh keys cached.
  • Here’s a template one-liner which checks uptime on each host listed in the file hostlist:

for e in $(cat hostlist); do echo "$e"; ssh "$e" "uptime"; done

  • In the above example, you can replace uptime with just about any command which exists on the remote host.
  • You can also synchronize some of the configurations under /etc by using scp or rsync in place of ssh in that one-liner (see the example below).
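
For instance, an scp variant might look like the line below. The ntp.conf path is just an illustration, and it assumes the account your key logs in as can write to /etc on the remote hosts:

# Push a local config file out to every host in hostlist
for e in $(cat hostlist); do echo "$e"; scp /etc/ntp.conf "$e:/etc/ntp.conf"; done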

Turn your one-liners into scripts

  • If you find yourself using the same one-liner over and over, it is time to save yourself some typing and turn it into a script.
  • I like to keep these sorts of scripts under ~/bin. I also like to add that directory to my $PATH and create a symlink to it at ~/scripts.
  • Some one-liners are good candidates to be turned into cron scripts. Just keep in mind the risks of using ssh keys without passwords, and include logic to detect the conditions you want to monitor. For example, you can run /proc/loadavg through awk to isolate one of the three load figures and send yourself an e-mail if that average is too high (a sketch follows this list).
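
Here is a minimal sketch of that load-average check as a cron-friendly script. The threshold and e-mail address are placeholders, and it assumes a working local mail command:

# Grab the 5-minute load average (the second field of /proc/loadavg)
load=$(awk '{print $2}' /proc/loadavg)
# Alert if it tops 4.0 -- tune the threshold to your hardware
if awk -v l="$load" 'BEGIN { exit !(l > 4.0) }'; then
    echo "5-minute load is $load on $(hostname)" | mail -s "High load alert" you@example.com
fi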