All posts by Chris Walden

Chris Walden is a multifaceted guy with a background in technology, writing, and theatrical production. He is the force behind Mythmade Productions in Austin, Texas and enjoys creating unique experiences for people that go beyond mere entertainment. He lives in Cedar Park with his wife, daughter and some number of cats. He is a regular correspondent for Saul Ravencraft's activities.

Rolling out technology for a non-profit with volunteers and open source

Imaging

I’d like to tell you that we moved the whole office onto Linux desktops, but that’s not the case.  All the systems are going out with a standard image of Windows XP.  However, there is progress there which will allow at least some of the workstations to move to Linux in the future.  More on that later.

The first challenge was to come up with a consistent way of working with images.  It needed to be cheap.  It needed to be easy.  It needed to have the potential for automation.  In a previous life, I was responsible for the agency-wide system replacement for Y2K at the Texas Lottery Commission.  I learned a lot about system imaging at that time.  I knew the commercial players in the imaging space.  Since it was all government money at that time, I didn’t have to worry about expense.  (I’m so ashamed.)  But now we want to go for free if we can.  Enter Clonezilla.

Clonezilla is a nice little combination of open source tools that allow you to create and restore images.  There is a server version and a live CD.  We opted for the live version because of network problems that are waiting on the contractor to resolve.  (If you must know, we need a buggy run between buildings replaced.  We’re doing fiber and it will be much better!)  As it turned out, it was not difficult to put both the base system image and the bootable imaging system on an 8GB flash drive, so the entire solution fits in a pocket.

Overall, I’m pleased with the Clonezilla solution.  It did as well as commercial products I’ve used in the past, and it wasn’t much more trouble to learn.  With the flash drive I was able to build a new system in about six minutes.  Not too shabby.  Since the live CD is Linux with some tools and startup scripting added to get it going, I can adjust it to offer automated builds.  A user could potentially load their own image off of a flash drive with no technical help.
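To give you the flavor, here is a minimal sketch of the kind of unattended restore I mean, run from the live environment's shell using Clonezilla's ocs-sr tool.  The image name (winxp-golden) and target disk (sda) are stand-ins; adjust them to whatever is actually on your flash drive.

# Hypothetical hands-off restore of the golden image to the first disk,
# then reboot when finished
sudo /usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -p reboot restoredisk winxp-golden sda

If memory serves, the same command can also be handed to the live CD through its boot parameters so the restore kicks off without anyone touching a keyboard.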

Structured for support

We made some decisions about the structure of the systems based on my previous experience.  Here are some of the key choices.  You might consider some of them if you are doing your own roll-out.

1. Separate operating system and data.

System administrators have known this for years.  Keep your data on separate storage from your operating system.  That helps isolate damage when there’s a problem.  If the OS disk has problems you don’t have to worry about the data so much, and vice versa.  In the workstations we don’t have two disks, but we do have two partitions: 20GB for the OS and the rest for the data.  That way we can re-image the OS at any time and leave the user’s data intact.  The policy is: keep your information on the D: drive and everything will be fine.  Later we’ll probably change this policy to putting those things on server storage… but local data is almost always required, even if it’s just program settings.  Windows does allow us to move the user data to another drive, so that’s been pretty easy.  Of course, as Linux moves into the desktop area we’ll be able to do whatever we want because of the robust aliasing and mounting approach of the Linux file system.  (Life is so much better without drive letters!)
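Since we do our imaging from a Linux live environment anyway, the layout itself is easy to script.  Here is a rough sketch, assuming the workstation's disk shows up as /dev/sda; your device name may differ.

# Hypothetical two-partition layout from the live CD's shell:
# 20GB for the OS (C:) and the remainder for data (D:)
sudo parted -s /dev/sda mklabel msdos
sudo parted -s /dev/sda mkpart primary ntfs 1MiB 20GiB
sudo parted -s /dev/sda mkpart primary ntfs 20GiB 100%

The partitions still need an NTFS format (or a restored image) before Windows will use them, but this keeps the geometry consistent across every box.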

2.  Keep careful track of your golden image

When imaging, it is important that the golden image, the base that you use to build a workstation, is as clean as possible.  Any personal user information or configuration taints the image.  You want to keep it clean.  So, while designing the image, expect to spend a lot of time saving and restoring while you figure out what you need.  If you so much as add an icon that you want on the base image, you need to save a new one.  If you’ve been tinkering with a system, you probably shouldn’t use that one; you should build a new clean system, add the icon and then save the updated golden image.  This is tedious, but it’s the only way to keep your image pristine and reliable.  Virtual machines are becoming a usable resource for this kind of work, and I may explore what I can do with image creation that way.

Until you have time to redo the golden image, document the change steps which must be done after a new image install.  These will be the exact changes that you need to make to update the golden image.  This also provides you with a nice change log of the image… which is a good thing to have anyway.
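Nothing fancy is needed; a plain text file that lives next to the image works.  A hypothetical example of what I mean (the dates and entries are made up for illustration):

# CHANGELOG kept alongside the golden image on the flash drive
2009-06-18  winxp-base-v2  moved user shell folders to D:; added VNC service
2009-07-02  winxp-base-v3  added Adobe Reader icon; rebuilt from clean system

When the list of pending steps gets long enough, that is your cue to rebuild and re-save the image.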

3.  Have a plan for migrating the old data

I was blessed to not be as involved with the migration of old user information onto the new systems.  Some areas of user migration are very complicated, especially moving between versions of Windows.  I’ve gotten so used to simply scooping up a home directory and moving on that cleaning out system IDs and all of those other things just seems a mess.  Some of our team found a solution and were able to make it go, and I just came in after to check on some remote control and drive mapping issues.  The point is that users don’t want to lose their stuff.  You need to have a plan for how to get the old stuff over, and give some thought to how you’re going to keep it safe.  Separating the OS and data can help, but there may be some intertwining there that you aren’t aware of that will be a big nasty surprise when you actually have to restore someone.

4.  Test your procedures

You know it should work.  I know it should work.  But does it know it should work?  Workstations defy logic sometimes.  When you have a plan, try it out a few times before inflicting it on your users.  Having everyone involved do a few dry runs with the test systems during the image development phase will save a lot of trouble later.  Remember, by doing imaging you are ultimately saving the need to do many kinds of troubleshooting.  When something is wrong enough you just re-image, saving the trouble of installing the OS and other software and tweaking to the right settings for your environment.  The time you spend up front will be paid back in the time you save not doing all that re-installing and reconfiguring.

Moving on

I’m pleased with how everything has been going.  We’ve been rolling the systems out and it’s been going pretty well.  The goal of all of this is to make sure that the church doesn’t spend money it doesn’t have to, and can apply those funds to the unique things that it must buy and, of course, to its mission… which is surprisingly not IT-related.  Any volunteer organization can benefit from using more open tools and techniques with the help of volunteers.  Sure, many organizations can get pretty spectacular software licenses donated by members.  But often those members move away or reduce their involvement.  Then the organization has to make the tough decision of paying the bill to keep using the whiz-bang tools or migrating to the next thing that is donated.  Using open source solutions in these cases stops that cycle.  It also removes the barriers to entry for technical people who want to help out.  There is a little bit of a learning curve for some, though most know much more than they think they do if they stop being stubborn and give it a try.  However, they don’t need to buy expensive software or have an enterprise background to help out.

Next time I’ll probably tell you more about the software options that we used to gently introduce users to open source choices.  I’ll also tell you some of the choices that we made on the back end to help volunteers support the environment inexpensively.  Maybe later I’ll spend some time talking about getting volunteers to appreciate these choices and get on board.  (Maybe you’ll have some ideas for me too!  It can be so hard to keep people motivated on volunteer things like this, even when they are worthwhile.)

Microsoft in the Linux code

There have been a number of stories lately about Microsoft contributing code to the Linux kernel.  Some of it has been pretty cynical and would make for a good popcorn movie.  I’ll leave it up to you where my thoughts belong.

What did Microsoft contribute?

Just in case you’ve been on vacation, here is a story that gives you the basics. It was actually challenging to find a story that wasn’t filled with speculation and spin on what this means and why it’s happening.

Thoughts

This is one of those disclaimer times when I remind you that my comments and opinions are not necessarily those of IBM.

First, I think it’s interesting that Microsoft has taken this step.  It is a very different approach from the one they took in the past of citing the irrelevance of Linux.  Clearly they see that some of their audience is going to want to work with Linux and that it is to their advantage to have a level of cooperation with that.  I believe these drivers have had some availability before, but the source was closed.  The most interesting part of this is Microsoft’s decision to release the code under the GPL v2 license.  Microsoft leadership has shown some real distaste for the GPL in the past.  Releasing the code under the predominant Linux license seems enlightened to me.

Second, the code is not really a change to Linux.  It’s simply drivers to allow Linux to run more smoothly as a guest under Windows Server 2008’s Hyper-V virtualization.  It’s really more of a punch at VMware and other virtual machine developers/vendors.  It would be nice to see this as more of a trend.  There are many vendors that keep their Linux drivers proprietary, which makes for brokenness when you exercise your right to manipulate the Linux environment.  Being able to simply recompile any and all drivers into the environment of my choosing would make for a much happier Linux experience all the way around.  It also allows the Linux community to help fix driver problems based on a vendor’s incomplete understanding of Linux best practices.  (Yeah!  Go ahead and laugh.  There are Linux best practices.)

Third, the driver contribution is designed to make it easier for Linux to run subject to Windows.  Windows will be your key environment and Linux will run safely inside of it.  To me, this seems completely backwards.  From time to time I am compelled to fire up Windows for some silly activity where people are unaware of the multi-platform options.  I use VMware to run Windows in those situations so I can get the activity over with… but Windows is safely encapsulated within my Linux environment, which reduces the exposure.

I think I can safely conclude that Microsoft expects users to be running Linux.  They are accommodating that while maintaining the position that Windows should be the key environment.  They chose to go along with the Linux conventions in their licensing, which is actually pretty good form.

Overall this has little to no effect on Linux other than to make it easier for it to appear in some die-hard Windows environments.  That means that techies who appreciate Linux can now leverage this in environments where the ruling admins demand Windows.  I find that when Linux gets into an environment doing work, people notice the benefits.  An “Aha!” moment seems to come when something is reliable, or customizable, or fixable in a way that is empowering.

Does this mean a big love fest for Linux from Microsoft?  Probably not.  Microsoft will continue to aggressively leverage their position against what they perceive as a threat.  But maybe… just maybe… it will be a doorway for people inside Microsoft who see tools and techniques ported to Linux.  Maybe Microsoft goes a little more multi-platform, and that wouldn’t be so bad.

Chrome, Windows, Linux? Who wins?

I’ve read a few different articles lately about the rise of the Linux desktop and the impending Chrome operating system by Google.  Following each article is a fairly predictable line of comments which touches lightly on the information in the article and then degenerates into finger pointing and name calling like a medieval battle line.

The Linux guys talk about the deficiencies in Windows.  The Windows guys talk about how clueless and geeky the Linux guys are.  The Linux guys talk about the impending destruction of Windows.  The Windows guys say that’s been predicted several times and nothing’s happened.  Blah, blah, blah!  Yada yada yada!  Everybody’s right.  Everybody’s wrong.

I’m a full-time Linux user and a pretty avid proponent of open source.  I don’t pretend to understand the far-reaching implications of economics and the true future of technology.  In fantasy and science fiction, truly knowing the future is typically not possible because the future is always affected by free will… which is precisely my point.

The point of these OS competitions is choice.  I don’t have a grudge against Microsoft that makes me want to see them die.  I used Windows when Windows fit my needs.  When my needs changed I moved on.  No big deal.  We can still be friends.  I feel that way about any of the technologies that I had a “relationship” with.  However, I don’t want my previous choices to follow me around.  I don’t want them to badger me that I made the wrong choice and need to come back.  I don’t want them to make trouble for my newer choices in a way that makes it difficult or impossible for me to continue on my path.  (Wow!  As I write these words down, this starts to sound like the plot for a summer movie!)

Basically, all of this winds down to interoperability and standards.  In areas, like the Internet, where some open standards have been established, things go really well.  In areas like document formats, where there are fewer open standards and more commonly used tools, this becomes difficult.  I am a huge proponent of using tools that run on more than one platform.  You know some of the names:  Firefox for Web browsing, Thunderbird for email, Apache HTTP Server for Web serving.  There are probably several that you don’t know, like Audacity for sound editing, Pidgin for instant messaging, Inkscape for vector graphic editing, TightVNC for remote control (yes, I can control a Windows box from a Linux machine and vice versa!).  They don’t all have to be “free” either.  I use an XML editing tool called “Oxygen” for my editorial work in developerWorks.  It’s not at all free, but well done, and works across platforms.  There are many choices of applications that do the same work with the same protocols regardless of your operating system choice.  If you choose a protocol and/or tools that are supported across platforms, then this becomes easy.  People choose what they like and no one has to suffer needlessly about not being chosen.

There are standards that work today in this fashion.  Adobe has been good about making formats such as PostScript and PDF available.  JPEG is available for photos, but there is also the often ignored PNG (Portable Network Graphics) format that allows for transparency.  Of course HTML and good ol’ ASCII text are available.  Significant headway is coming in the form of the Open Document Format (ODF), which is used by OpenOffice.org and IBM’s Symphony, and recently supported by Microsoft Office with the addition of a plugin.  MP3 is available for audio, but I bet you’ve never even tried the completely free Ogg Vorbis format.

To me it seems that all of these environments can live together in relative peace provided that users can kick their co-dependency issues and be more open to making choices for themselves.  We don’t need to have a community where we look for the next alpha leader to destroy the previous one.  We don’t need to make sure that our side wins so we aren’t enslaved by the other side.  That’s not what’s going on here.

Don’t get bogged down in the argument.  Don’t pick a side.  Look at all of this as more choice opening up before you.  Remember that you can change your future through your choices.

Celebrating my freedom and a stupid SSH trick

I just got back into town from visiting my parents in the Dallas, Texas area (USA).  We did something that I’ve never done before, though it’s pretty obvious when I think about it.  It’s Summer Vacation for my daughter, which means no school.  My wife’s activities have slowed down.  I’m the one who gets to work.  Then I had an inspiration.

Like many in the technology world I do a lot of work remotely.  I actually work from a home office, which technically means that any time there is Internet and a telephone I’m in the office.  (Some may read that as I am always in the office, and there might be a level of truth to that.)  I don’t miss the commute or the need to keep my desk so that it would impress visitors at any time.  IBM makes it extraordinarily easy to work remotely, even from Linux.  A little VPN client connects me to the IBM network any time I want.  I have email, of course– lots and lots of email.  I also have Sametime instant messaging, which works just as well as people popping into my cube to disturb me.  It works better, in fact, because I can complete a thought before switching to them without feeling their eyes boring into my head and hearing the foot gently tapping.  In fact, when I was in the office I had people carry on instant message conversations with me from the other side of the cube wall!  Other important applications are available through the Notes client or the web browser (Firefox, of course!).

One advantage to this is I have some flexibility.  Since I work remotely anyway, it doesn’t matter if I’m in my home office or tucked in a room at the back of my parents’ house.  So, when we went to visit my parents for Father’s Day (a quiet US event where everyone is supposed to be nice to their Dad for a day, which pales in comparison to the lavish attention we spend on Mom), rather than rushing back on Sunday we extended the visit with me working from there.  My parents get extra grandkid time.  My wife gets extra time with the grandparents distracting the kid so she can think.  My daughter gets extra spoiled.  I get work done and enjoy everyone’s company in the evenings and at lunch time.  It’s win-win-win.

All of that went swimmingly.  My daughter went with grandpa to the natural history museum to look at spiders and snakes.  My wife and my mother went shopping.  I got stuff done.  But then I ended up with a weird little thing.  It had nothing to do with my IBM work.  All of that worked fine.  But I do have a life outside of IBM, you know.  My problem was with my personal email.  My Dad’s Internet provider assumes that only he will be sending email through their mail server… so as a courtesy to him they blocked access to other SMTP servers.  I guess that’s a good thing overall, but for me it was annoying.  It didn’t make sense to try to have the conversation with my dad about getting the block removed from his account.  It wasn’t worth bothering him, and since he is a Windows user it was probably good to keep any protections like that in place.  My solution ended up being simple and right under my nose.

Since I run Linux on all of my systems, SSH is built in on everything.  If you don’t already know about SSH, then you are missing something.  Basically it’s a very simple way to tunnel information through strongly encrypted sockets.  (If you are running another environment, SSH is also available as a free download.  For example, PuTTY is a popular client for Microsoft Windows.)  I use SSH to securely synchronize files between my laptop and the desktop across the Internet and to remotely access my home system.  However, SSH will also let you securely tunnel any socket to a system or through a system.
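For instance, that file synchronization is just rsync riding over SSH.  A minimal sketch, with user and myhost as stand-in names for my account and my home system:

# Sync my documents to the home desktop through an encrypted channel
rsync -avz -e ssh ~/Documents/ user@myhost:Documents/

Everything travels encrypted, and nothing needs to be set up beyond the SSH access I already have.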

Here’s the way I worked it out.  I cannot access SMTP from my current location, but my home system can access the account.  I can access my home system.  So, if I have my laptop go through my home system, I can use my mail normally.  Basically, I’m going to use my home system as a proxy server for POP and SMTP to my domain, and I’ll do it through a secured tunnel.

The command was simple.  I already use a free dynamic DNS service to update the address for my home system, so I can keep up with changing IPs from my ISP.  (That was interesting.)  Essentially, I’m setting up ports on my local system which will be proxied through my home system to my mail server destination.  Here we’ll call the home system myhost.  I log in with the username on my home system, which I’ll call user.  Here’s what the command looked like to do the magic:

ssh -l user myhost -L 10110:pop.mymailserver:110 -L 10025:smtp.mymailserver:25

I get a connection between my laptop and my home system.  Anything I send to port 10110 on my localhost goes to the POP port on my mail server.  Anything sent to 10025 on my localhost goes to the SMTP port on my mail server.  My mail server sees POP and SMTP traffic all coming from the same box, because the proxy causes it all to originate from my home system.  My dad’s ISP is oblivious.  I get the added advantage that any email transactions are encrypted from my laptop to my home PC… though communication between the home PC and the mail server gains no additional encryption.

The command above created two tunnels.  I also automatically set up several tunnels using this technique, including one to TightVNC for remote control of my system.  In other situations I use the same sort of proxy technique to connect into a network and have VNC access to multiple workstations behind the firewall.  Cool, huh?
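If you're curious, the VNC tunnel is just one more -L flag.  A sketch, assuming the VNC server sits on the usual display :0 (port 5900) on the home system:

# Forward a local port to the home system's VNC server
ssh -l user myhost -L 10590:localhost:5900

# Then point the viewer at the tunneled port
vncviewer localhost::10590

The double colon is TightVNC's notation for giving a raw port number rather than a display number.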

I’m constantly amazed at the sort of flexibility that I have in this environment and how there are always little free tools and techniques to work through a situation.  It keeps it fun.  If you’re interested in some of these techniques check some of the links that I put throughout.  Linux does this stuff out of the box (or straight off the Internet, I guess I should say).  But with the tools around you can do the same thing in just about any environment if you do a little digging.

Happy exploring!  Cheers!

I’m so proud of my dad

I really am.

A while back I won the “Ultimate” version of a prominent desktop operating system in a prize drawing.  As a Linux user, this, and really any product from this manufacturer, isn’t terribly useful to me.  I had a good laugh at the irony and tossed the box in the trunk of my car, where it sat for a couple of months.  My father is still a devoted user of this company’s operating system, so on my next visit I gave it to him.  He could use it more than me.

I got an instant message from him the other day.  He had a number of adventures.  First, there were issues getting his various types of hard drives in the correct order as the operating system and the computer’s BIOS wrestled with him.  Once he got that sorted out he discovered that the operating system in question did not recognize much of his hardware and he had to go surfing for drivers.  He eventually found everything and got the system in working order, but it was a two-day ordeal of dedicated effort.  Good thing he’s retired!

The point is that my dad is not a computer geek.  He had a technical background but spent several years as an executive, where one is fairly isolated from the day-to-day mucking about with a system in which many of us engage.  Yet he tackled this situation like a pro.  He had the persistence and patience to find the issues and knock them off one at a time.  He was also very sporting about facing the myth that I’ve always had to face as a Linux user:  “Those sorts of things don’t happen with major desktop operating systems.”

Obviously any operating system can turn into a nightmare under the right circumstances.  I was quietly amused to think that if I had walked in with one of the current Linux distributions, the majority of the hardware might have been detected and activated, and anything else I needed could have been found with relative ease on the Internet.  There might have been challenges, but it wouldn’t have been any worse than what my dad slogged through.  It would certainly be easier than what I went through several years ago on the open-source path.

My dad got to see firsthand what happens when things don’t work as expected, and that you can dig into the problems and solve them.  He got to see that it can happen with even the most “Ultimate” of mainstream environments.  I think the experience leveled the playing field in his mind a little.  In fact, he said that our conversations about what I work with in Linux helped him form his strategy to solve his problems.  Hmmmm.  I’ve already got him using Firefox, GIMP and a few other open-source packages.  Now that he’s seen what he is capable of, maybe he’ll be up to giving Linux a try!  I’ll keep a disk handy for the next time he upgrades his computer hardware.

Anyway… Way to go, dad!  I’m proud of you.

Living in the future

I actually wrote this some time ago on a blog hosted at IBM’s developerWorks. Now that I’m doing my own site I thought it made more sense to live here. I hope it’s still useful to you.


I feel like one of the “old guys” when it comes to computing.  I cut my teeth on the Apple IIe that we had at school.  (A friend of mine had one too, so I got to experiment and explore outside of the curriculum.)  My own first computer was a Commodore 64, where I discovered a lot about computing, assiduously typing in programs from the back of computing magazines.  I also managed to purchase a 300 baud modem and got into the world of bulletin boards, finding information that was available on university computers throughout the world.

I did DOS on an 8MHz computer.  I did Windows 1.x, which was mostly needed to run Excel.  However, I usually used Lotus 1-2-3.  I used Microsoft Word 2.x, which came on 5.25″ floppy disks which I had to swap in and out to use the spell checker before I got my fancy 20MB hard drive installed (more space than I would ever need).

I connected to the Internet through dial-up and Windows 3.1 and eventually found the browser for the first time.  Explorer was cool, but I ended up being more of a Netscape guy.  (I was working on the phone support queue for a company called CompuAdd at the time.)  Later I was a technician for a Value Added Reseller (VAR), where I got deeper into networking and all of the different integration issues that businesses face.  I learned about Novell NetWare (2.x) and many of the strange ways that small businesses used computing to try to get through their days.  I even did some OS/2 and some Token Ring!

I could go on and on, further alienating myself from the under-30 crowd until someone looks to get me some prune juice and a cane, but I’ll cut to the chase:  Out of all the advances and innovations I’ve seen in computing over my life, we are in the most exciting time that I’ve ever seen.  The expanded utility of the Internet is creating connectivity and functionality that was largely unimagined in my Archie and Veronica days.  (Look it up!)  Everything really can be connected to everything else.  Virtual computing, multi-platform computing, and cloud computing have all changed the landscape.  Mobile phones, video game consoles, navigation devices and more all play in the mix now.  It’s all amazing!  And it’s not just the realm of the biggest companies and the super-geeky.  Anyone and everyone has a piece of this puzzle.

Most exciting of all is what is freely available.  When I first got into the technology world there weren’t many options.  You either had to work for someone who had big technology or pay an armload for it.  Free software was not widely available.  It was all pretty serious business.  But look at it now!  Linux, open standards and open source have created more opportunity than there has ever been for someone who, like me, was just curious about everything, to learn and grow.  It’s just waiting.

In these ponderings I’ll be talking about some of the cool choices that are out there and how I use them every day.  I’ll also talk about resources to learn and grow with technology, some of which will be of use to students and teachers.  Next I’ll talk a little about why I became a Linux user.