Re-Hosted (updated)

I’m phasing out my micro hosting site on Amazon Web Services. There have been a couple of instances lately where too many web queries caused the system to run out of memory. So we’re now hosting on a 1 GB virtual machine somewhere in a cloud center near DFW.

For those who care, it’s a Xen HVM running CentOS 6.2, hosted at Thrust::VPS.


Update: 12/22

So far, so good. There have been a few bumps at Thrust::VPS, aka Damn::VPS, aka ioMart:

  • The VPS went offline once, and tech support had to bring it back.
  • There is a timeout getting to one particular site from the VPS. Every other web site is OK. Tech support is baffled, as am I. It may have something to do with routing.
  • The network card doesn’t work with the latest kernel (2.6.32-220) from CentOS.  This seems to be a CentOS bug and has been reported through their tracking system.
  • There’s some difficulty getting the rDNS set properly.  Tech support is working on it.
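For the curious: rDNS is a PTR record that resolves an in-addr.arpa name built by reversing the IP’s octets, and the provider controls those records for its address space, which is why tech support has to set it for us. A minimal sketch of how that name is formed (the address shown is an RFC 5737 documentation placeholder, not our VPS’s real IP):

```python
def reverse_dns_name(ip: str) -> str:
    """Build the in-addr.arpa name that a PTR (rDNS) query looks up."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

# RFC 5737 documentation address, standing in for the VPS's real IP
print(reverse_dns_name("192.0.2.10"))  # -> 10.2.0.192.in-addr.arpa
```

A tool like `dig -x` does this transformation for you before querying.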



Virtual desktops and beyond

I’ve been following a discussion on Desktop Virtualization on LinkedIn’s CIO Forum with a mixture of confusion, déjà vu, and real excitement.

Initially, I wasn’t sure what problem was being solved.  Server virtualization is easy.  It solves several problems — too many boxes doing too little, eating too much power in too much space with operating environments tied to the hardware.

Desktop virtualization has been around for a long time, as Windows terminal services, Citrix services, and SunRay devices. The newer desktop virtualization technologies continue to solve the same problems in more powerful ways. The problems solved, like those of server virtualization, belong to IT management: How do I deliver a controlled environment to the user? How do I take the personal out of personal computer? It’s not that I’m an evil tyrant bent on stifling the user’s creativity. I just can’t figure out any other way of ensuring that stuff "will just work". The end user is happier and more productive, and IT really has only one "computer" to maintain, even if it’s a horribly more complex, distributed, virtualized desktop. The central environment is safe from viruses, trojans, and other nasties that might be on the personal computer, as well as from software conflicts, hardware conflicts, and missing updates.

Application virtualization is coming into its own with tools like VMware ThinApp.  In desktop virtualization, I deliver an entire computer to you.  With application virtualization, I deliver a package that carries just enough of a computer to ensure that the application will run.  If you want to edit a document, you don’t need all of Windows XP. You just need Word and enough of Windows for it to work.  It lives in a little bubble on whatever computing device you’re using, and it’s the role of the virtualization software to translate for that device, whether it’s a Windows PC, a Linux PC, a Mac, an iPhone, or your TV. As with desktop virtualization, do what you want with your computer; the application in its bubble is insulated.

Both desktop and application virtualization make the end-user’s choice of hardware and operating system irrelevant.  If there’s a client for that OS and hardware, then IT can deliver a standardized application.

And here comes Google…  Why would Google develop an operating system like Chrome?  Chrome is an operating system that’s delivered as a browser.  It seems kind of redundant to run Chrome on Windows, Mac OS X, or Linux.  Those certainly are targets for Chrome, but Google Apps already run in Firefox, IE, and the rest on those platforms. There is another market: the instant-on, I-want-to-edit-a-spreadsheet-right-now end user.  We’re all moving that way. If you had to wait 60-120 seconds to use your cell phone each time you "turned it on", you’d toss it away as unusable.  When we all carry something that the current netbooks want to be but aren’t, something like the little tablet computers they carried around on Star Trek: TNG, we’ll expect instant on, instant connections, and a vast library of complex applications.  Phoenix has seen this coming; they’re building instant-on environments into the BIOS.  And Google will be there with Google Apps and hundreds of cloud-based applications that either run in the cloud or on the local processor of a device that’s "just" a browser.


Conceptual computers

The CCIM Institute is becoming just a little more virtual. Over the past couple of weeks, we’ve started replacing aging, single-use servers. Rather than buy another small box for each purpose, we’ve purchased a couple of fast, powerful Dell boxes, installed VMware, and turned each into a host for several virtual servers.

Currently, we’ve put two MX servers, two DNS servers, a batch-reporting tool server, and a backup-software hosting server into the virtual space, running Windows 2003, Windows XP, and Fedora Linux. These systems will continue in test mode through the New Year’s holiday and move into production in early January, allowing us to shut down or repurpose some of that old hardware.

Reducing the number of physical boxes has some real benefits. Space, heating, and cooling are the most obvious. (In the rebuild of the 8th floor, our server room lost about 36 sq ft.) Virtualization also lets us add single-purpose servers at almost no marginal cost beyond a possible operating system license. We can also quickly clone an existing server to create a test environment, add or remove memory from a virtual server in a matter of minutes, and re-allocate resources dynamically.