Tuesday, September 25, 2018

Using VMware Remote Console on Debian Linux

If you're a Linux user and have tried the built-in VM console in the VMware vSphere web interface, then you probably know that it's clunky, buggy, and difficult to use.  Luckily, VMware has a separate Remote Console tool which is more robust and does not suffer from the shortcomings of the web-based console.

VMware's Remote Console is not certified to run on Debian Linux, but with a small amount of trial and error, I was able to get it working flawlessly.


The Problem:

The VMware installer puts everything it needs under the /usr/lib/vmware directory.  That's fine.  But beyond that, there are several problems:

  1. The required libraries live under /usr/lib/vmware/lib, but each one is buried in a subdirectory named after the library file itself, e.g. /usr/lib/vmware/lib/libssl.so.1.0.2/libssl.so.1.0.2

  2. The libraries are not added to the system's search path during install, so when you launch Remote Console, it tries to use the existing system libraries and not the ones delivered with the product.  This can lead to a variety of different errors.

  3. Adding the libraries to the system's search path results in those libraries taking precedence over the OS-delivered versions, so then other programs like Firefox and Chrome try to use the specialized VMware-delivered versions of these libraries when they launch, and they fail.

  4. The executable -- /usr/lib/vmware/bin/vmrc -- is actually a symbolic link to /usr/lib/vmware/bin/appLoader.  Since appLoader decides what to do based on the name it was invoked under, the invoked command must be vmrc.
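To illustrate item 4, here is a self-contained sketch of the argv[0] trick: like appLoader, this little dispatcher decides what to do based on the name it was invoked under. The paths and messages are hypothetical, purely for demonstration.

```shell
tmp=$(mktemp -d)

# A dispatcher that keys its behavior off the invoked command name
cat > "$tmp/dispatcher" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    vmrc) echo "launching remote console" ;;
    *)    echo "unknown product" ;;
esac
EOF
chmod +x "$tmp/dispatcher"

# Invoke it directly, and again through a symlink named vmrc
ln -s "$tmp/dispatcher" "$tmp/vmrc"
direct=$("$tmp/dispatcher")   # invoked as "dispatcher"
linked=$("$tmp/vmrc")         # invoked as "vmrc"

echo "$direct"   # unknown product
echo "$linked"   # launching remote console

rm -rf "$tmp"
```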

The Solution:

We need a script which sets up the correct library path (adding every single one of those stupid library-named subdirectories to the LD_LIBRARY_PATH environment variable) and then runs Remote Console with that library path.  Since the name of the invoked command has to be vmrc, I ended up creating a temporary symlink to appLoader as /tmp/vmrc and calling that from the script:

#!/bin/bash

# Set the library and bin directories
LIBDIR="/usr/lib/vmware/lib"
BINDIR="/usr/lib/vmware/bin"

# Build the LD_LIBRARY_PATH from all of the $LIBDIR subdirectories
LD_LIBRARY_PATH=""
for dir in $(find ${LIBDIR} -type d) ; do
    LD_LIBRARY_PATH="${dir}:${LD_LIBRARY_PATH}"
done

export LD_LIBRARY_PATH

# Create a temporary symlink to appLoader in /tmp and run it,
# passing along any command-line arguments
ln -sf ${BINDIR}/appLoader /tmp/vmrc
/tmp/vmrc "$@"

# Delete the symlink and exit
rm -f /tmp/vmrc
exit 0

Simply delete /usr/lib/vmware/bin/vmrc and replace it with this script, and it works!

Wednesday, May 17, 2017

Passing Encrypted Data Between PHP and ColdFusion

During a recent project at work, I came across the need to pass an identifier between a page written in PHP and a web form developed in ColdFusion. The identifier is the user's unique key in our database, and while it's not a super-top-secret piece of information, it's also something we did not want directly exposed in the browser's URL in this case.

The solution turned out to be rather simple. We decided to encrypt the identifier, and pass the encrypted string in the URL. The following examples will encrypt a string in PHP and decrypt it in ColdFusion, although the process can be reversed to pass data from ColdFusion to PHP without much trouble. You need three things to be hard-coded at each end of this process:
  1. The IV (initialization vector)
  2. The encryption key
  3. The encryption cipher

Encrypting a string in PHP


The PHP side is fairly straightforward. PHP needs to be compiled with OpenSSL support, and then you simply use the openssl_encrypt() function to generate the encrypted string.

encrypt.php
<?php
    $raw    = "This is a test!";   // The string to be encrypted
    $iv     = "3a513d6df56cd0fd";  // The initialization vector
    $key    = "b341813267566422";  // The encryption key
    $cipher = "AES-128-CBC";       // The encryption cipher

    // Generate the encrypted string
    $enc = openssl_encrypt($raw, $cipher, $key, 0, $iv);

    if( $enc === FALSE ) {
        die("Encrypt operation failed.\n");
    }

    // Generate the URL with the encrypted string as a parameter called 'e'
    $url = "https://www.example.com/decrypt.cfm?e=" . rawurlencode($enc);
    print "$url";
?>

In this example, $raw is the string to be encrypted, and I have hard-coded it for simplicity. It can actually be any string, such as POST data from a form, a value from a database, etc.
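If you want to sanity-check the parameters outside of PHP, the same operation can be sketched with the openssl command-line tool, using the key and IV from the example above. openssl enc expects the key and IV as hex, so the ASCII strings are converted first; the round trip should recover the original plaintext. (This is a verification sketch, not part of the production setup.)

```shell
# Convert the ASCII key and IV from the PHP example into hex
key_hex=$(printf '%s' "b341813267566422" | od -An -tx1 | tr -d ' \n')
iv_hex=$(printf '%s' "3a513d6df56cd0fd" | od -An -tx1 | tr -d ' \n')

# Encrypt with AES-128-CBC and Base64-encode the result
enc=$(printf '%s' "This is a test!" | openssl enc -aes-128-cbc -base64 -K "$key_hex" -iv "$iv_hex")

# Decrypt it again to confirm the round trip
dec=$(printf '%s' "$enc" | openssl enc -d -aes-128-cbc -base64 -K "$key_hex" -iv "$iv_hex")

echo "$dec"   # This is a test!
```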

Decrypting a string in ColdFusion


The ColdFusion side is a bit more complicated, but still not too bad. ColdFusion supports encryption and decryption by default. You use the decrypt() function to decrypt the string, but first you have to convert the initialization vector into a binary object, convert the encryption key into a Base64-encoded string, and sanitize the string passed in the URL.

decrypt.cfm
<!--- Convert the IV to a usable binary object --->
<cfset ivString="3a513d6df56cd0fd">
<cfset ivBase64=toBase64(ivString)>
<cfset ivBinary=binaryDecode(ivBase64, "Base64")>

<!--- Convert the key to a Base64 encoded string --->
<cfset keyString="b341813267566422">
<cfset keyBase64=toBase64(keyString)>

<!--- AES-128-CBC encrypted string (Base64 encoded) --->
<cfset enc = replace(urlDecode(url.e), " ", "+", "all") />

<!--- This is the cipher method to use --->
<cfset method="AES/CBC/PKCS5Padding">

<!--- Decrypt the encrypted string --->
<cfset dec=decrypt(enc, keyBase64, method, "Base64", ivBinary)>

<!--- Show the results --->
<cfoutput>
<p>
<b>Encrypted:</b> #enc# <br />
<b>Decrypted:</b> #dec# <br />
</p>
</cfoutput>


Notes:
  • We had originally tried to use a 256-bit cipher, but it turned out that a Java Cryptography Extension is needed for ColdFusion to support higher than 128-bit encryption. So, we decided to use a 128-bit cipher instead.
  • When you're using a 128-bit cipher, the key needs to be exactly 128 bits (16 characters) long. PHP does not seem to care if the key is too long (I suspect it just truncates it to the required length), but ColdFusion will throw an error.
  • The call to replace() on the ColdFusion end is needed to work around a weird issue. The encrypted string can contain the + character, and the rawurlencode() function in PHP encodes + as %2B. However, ColdFusion automatically decodes URL parameters once, so calling urlDecode() on url.e decodes the value a second time and turns the + into a space. We need to catch that and make sure the + remains intact.
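The key-length requirement in the notes above is easy to check before wiring things up. This trivial sketch verifies that the example key is exactly 16 bytes (128 bits):

```shell
# The encryption key from the examples above
key="b341813267566422"

# For AES-128, the key must be exactly 16 bytes long
keylen=${#key}
echo "$keylen"   # 16
```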

Tuesday, October 4, 2016

Redirecting Index Pages with mod_rewrite

I recently got a request from our web team to redirect any index page to its parent.  The problem was that www.utica.edu/about and www.utica.edu/about/index.cfm both loaded the same content and were causing problems with search engines indexing our site.  To solve this, the web team wanted any request for an index page to return a 301 (resource permanently moved) redirect to the parent.  Thus, www.utica.edu/about/index.cfm would get permanently redirected to www.utica.edu/about/.

We're running Apache with mod_rewrite, so the solution was to implement a rewrite rule in the Apache configuration.  The rule to blindly redirect any index page to its parent is actually quite simple:

RewriteCond %{REQUEST_URI} ^(.*)/index\.cfm$ [NC]
RewriteRule ^(.*)/index\.cfm$ $1/ [NC,R=301,L]
Note: In our case, the index pages are always named index.cfm.  You could replace this with index.htm, index.html, index.php, or whatever your index page is defined as and it should still work.
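The substitution the rule performs can be sketched in shell: capture everything before /index.cfm and redirect to it with a trailing slash, leaving other URLs alone. (The paths are hypothetical, and unlike the [NC] flag, this sketch is case-sensitive.)

```shell
# Mimic the RewriteRule: ^(.*)/index\.cfm$ -> $1/
redirect_target() {
    case "$1" in
        */index.cfm) printf '%s/' "${1%/index.cfm}" ;;
        *)           printf '%s' "$1" ;;
    esac
}

t1=$(redirect_target /about/index.cfm)    # redirected
t2=$(redirect_target /about/history.cfm)  # left unchanged
echo "$t1"   # /about/
echo "$t2"   # /about/history.cfm
```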

The above rule works fine for blindly redirecting any index page, but causes problems if you have a web form which submits to an index page.  To work around this, you can add another condition to make sure the request method is not POST:

RewriteCond %{REQUEST_URI} ^(.*)/index\.cfm$ [NC]
RewriteCond %{REQUEST_METHOD} !^post$ [NC]
RewriteRule ^(.*)/index\.cfm$ $1/ [NC,R=301,L]

Likewise, you could make the condition even more narrow by explicitly matching GET requests only, but this might be overkill for most purposes:

RewriteCond %{REQUEST_URI} ^(.*)/index\.cfm$ [NC]
RewriteCond %{REQUEST_METHOD} !^get$ [NC]
RewriteRule ^(.*)/index\.cfm$ $1/ [NC,R=301,L]

I hope this is helpful if you're in the same situation.

Thursday, August 11, 2016

Using an SSH Server for Remote Administration

If you manage Linux or Unix servers, then getting remote access to them is often a necessity. An SSH server can be an easy way to provide secure remote access to the servers you manage. It's not hard to set one up and secure it, and once it's ready, you can use an SSH client on your PC, laptop, phone, or tablet to get to your servers any time. You can even create tunnels between your local client and a remote machine through the SSH server, allowing it to act as a proxy between you and your servers on an internal network. All you need in order to set up an SSH server is a Linux box, and it doesn't really matter what distro you choose.

Since the SSH server will have a public IP address, it's important to make sure that the SSH daemon is secure. This means we'll need to do the following:
  • Change the listen port to something other than 22
  • Allow public key authentication only
  • Disable password authentication
  • Disable root logins
  • Set a restrictive authentication timeout
  • Make sure each user's private key has a passphrase (you can't enforce this at the server level, though)
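For reference, a minimal sshd_config fragment implementing the checklist above might look like the following. The port, address, and banner path are examples; adjust them to your environment (the zip file's sshd_config is the authoritative version):

```
Port 2222
ListenAddress 203.0.113.10
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
LoginGraceTime 30
Banner /etc/ssh/sshd_banner
```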

I have created a zip file with an appropriately secured sshd_config, as well as shell scripts for performing various tasks. Before you get started, download and extract the zip file:

Download ssh_server_files.zip

Inside the zip file, there are two subdirectories. The files/ssh/ directory contains the sshd_config file, as well as a generic SSH login banner, which you can customize to your needs. The files/bin/ directory contains shell scripts for performing various tasks on the SSH server, such as adding users and allowing tunnels.

What You Will Need:
  • The ssh_server_files.zip file, which you can download here
  • A PC, server, or virtual machine with a Linux OS installed (it doesn't have to be anything fancy)

How to Do It:
1. Install OpenSSH
  • Debian / Ubuntu / Kali:
    apt-get install ssh

  • CentOS / Fedora / Red Hat:
    yum install openssh-server


2. Configure the SSH daemon (sshd)
  • Back up the existing /etc/ssh/sshd_config file:
    cp /etc/ssh/sshd_config /etc/ssh/sshd_config.ORIGINAL

  • Copy the new sshd_config and sshd_banner files to the /etc/ssh directory:
    cp files/ssh/sshd_config /etc/ssh
    cp files/ssh/sshd_banner /etc/ssh

  • Edit the sshd_config file and make sure to specify a listen address for the SSH daemon on the ListenAddress line. The IP address must be assigned to an interface on the SSH server.

  • You can also change the listen port by editing the Port line if you want to.


3. Define the tunnels you want to allow through the SSH server
  • Use the add-new-tunnel.sh script to allow tunneling to your servers. For example:

    Allow tunneling to the host mysrv1.example.com on port 443 (HTTPS)
    add-new-tunnel.sh mysrv1.example.com:443


4. Create a user account
  • Use the add-new-user.sh script to add an authorized user account for anyone who will need to use the SSH server. For example:
    add-new-user.sh dave


5. Create and install authentication keys
  • Create an RSA key pair:
    $ ssh-keygen -f mykey -t rsa -b 2048

    You should always specify a passphrase for your private key! Don't leave it empty!

  • Copy the public key into the user's authorized_keys file:
    cat mykey.pub >> /home/dave/.ssh/authorized_keys
    chown dave /home/dave/.ssh/authorized_keys
    chmod 600 /home/dave/.ssh/authorized_keys

  • Copy the private key (the file without the .pub extension) to the client machine you'll be using for remote access.


How to Use It:
  • If you simply need console access, you can SSH to the server using your private key. Under Linux or OS X, you would do something like this:
    ssh -p 2222 -i mykey dave@sshsrv.example.com

    In this example:
    • The SSH server is configured to listen on port 2222
    • The username on the SSH server is "dave"
    • There's a private key file called mykey on the local machine
    • The matching public key is contained in the authorized_keys file on the SSH server
    • The server has the hostname sshsrv.example.com

  • If you want to create a tunneled connection between your client machine and a remote machine, you can do so by creating a "local" tunnel (mapping a port on your local machine to a port on the remote machine). Under Linux or OS X, you can use the -L option for the ssh command.

    The syntax for this option is:
    -L <local_port>:<remote_host>:<remote_port>

  • Let's say you have a server on your internal network named mysrv1.example.com, which hosts an HTTPS service but is not accessible from the outside (due to having an internal IP or whatever). You can use an SSH tunnel to map a port on your local machine to port 443 on that server, and then access it using a localhost address:

    ssh -p 2222 -i mykey -L 4443:mysrv1.example.com:443 dave@sshsrv.example.com

    Then, to access the HTTPS service hosted at mysrv1.example.com, you can simply open a browser on your local machine and go to https://localhost:4443.

If you use tunnels, the SSH server makes a great proxy for most TCP-based services, such as HTTP, HTTPS, FTP, SSH, SFTP, LDAP, and MySQL. At my job, our web developers use SSH tunnels to connect their client machines to our web servers and MySQL server. And I often use a tunnel through our SSH server to connect to an LDAP server on our internal network from home.

NOTE:
Virtual hosts on a web server can sometimes prevent HTTP and HTTPS from working correctly through an SSH tunnel. For example, http://localhost:8880 won't work if a virtual host on the server requires a specific host name.


Troubleshooting and Auditing:
  • To diagnose problems with authentication, tunneling, etc., you can check the system logs. With the log level in sshd_config set to DEBUG3, the SSH daemon will generate a lot of log output. You should generally be able to find the underlying error in these logs. A few common issues may arise:
    • In the case of an authentication problem, it's usually due to sshd failing to find a public key in the user's ~/.ssh/authorized_keys file which matches the private key being used for authentication.

    • In the case of a tunneled connection not working, it's usually due to the tunnel not being allowed in sshd_config. Use the add-new-tunnel.sh script to allow tunneling to the desired host and port.

  • If you want to see which servers are being accessed through the SSH server, and by whom, you can use the audit-tunnels.sh script. The script takes a server:port pair as the only argument, and will show you any tunnels which were opened to that host recently.
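When chasing the authorized_keys mismatch described above, comparing fingerprints is the quickest check. This sketch generates a throwaway key pair just to demonstrate the technique; on a real server you would compare your actual key against the user's authorized_keys file:

```shell
tmp=$(mktemp -d)

# Generate a throwaway RSA key pair (no passphrase, demonstration only!)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$tmp/mykey"

# Install the public key the same way add-new-user.sh / step 5 would
cat "$tmp/mykey.pub" >> "$tmp/authorized_keys"

# ssh-keygen -lf prints the fingerprint of a public key (or of each key
# in an authorized_keys file); matching fingerprints mean the key pair
# and the installed public key line up.
fp_key=$(ssh-keygen -lf "$tmp/mykey.pub" | awk '{print $2}')
fp_auth=$(ssh-keygen -lf "$tmp/authorized_keys" | awk '{print $2}')
echo "$fp_key"
echo "$fp_auth"

rm -rf "$tmp"
```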


If you use this solution for remote access, please leave a comment and let me know what you think. Thanks for reading!

Thursday, June 16, 2016

Replacing Drives in an Iomega/LenovoEMC NAS

A few years ago, I came across a situation at work where a drive had failed in an old Iomega ix4-200r NAS. Replacement drives for this unit were no longer available from the manufacturer, so I tried to swap out the bad drive for a new one of the same size which we already had, but the NAS refused to recognize it. In their support forums, an Iomega tech said that it wasn't possible to mix and match drive models, so I would need to replace all 4 drives (something they obviously did not support).

I decided to try a full drive replacement. I had known for a long time that these NAS appliances were basically just Linux boxes, and once I logged into the unit, I discovered it was even using the standard Linux software RAID system. So, I acquired 4 brand new drives which were the same size as the old ones but a different model, and successfully replaced every drive in the unit.

Important Notes:
  • This procedure is intended to be used only when the NAS has suffered a drive failure and will no longer boot.
  • These recovery steps will only work if you have at least one old drive from the NAS available.
  • If more than one drive has failed and the NAS is using RAID 5, then the stored data will be lost (the operating system will still be recovered).
  • The replacement drives do not have to be the same model as the original drive, but all of the replacement drives need to be the same.
  • This procedure was successfully used to restore an Iomega ix4-200r to working condition after the boot drive failed. These steps should also work on any Iomega/LenovoEMC NAS of a similar generation, but this has not been tested.

What You'll Need:
  • Monitor and keyboard
  • Replacement drive(s)
  • Physical access to the NAS
  • Administrator password for the NAS

How to Do It:
  1. Connect a keyboard and monitor to the NAS, because you will need to run commands once it is booted.
  2. Boot the NAS with the old HDD in bay 1 and the other bays empty. Log in as root.

    The default root password is soho if using factory defaults, otherwise it's soho with your admin password appended (ex: sohoabcd9876 if the password is abcd9876).

  3. Once the NAS is booted, insert a new drive into bay 2. After a few seconds, the RAID driver will begin to mirror the first partition from disk 1 onto disk 2.
  4. Monitor the status of the RAID operation using the command mdadm -v -D /dev/md0 every 30 seconds or so.
  5. Once the RAID mirror of disk 2 is complete, insert a new drive into bay 3 and repeat Step 4.
  6. Once the RAID mirror of disk 3 is complete, insert a new drive into bay 4 and repeat Step 4.
  7. Once the RAID mirror of disk 4 is complete, remove disk 1 (the old drive) and insert a new drive into bay 1, then repeat step 4.

    *** DO NOT REBOOT THE NAS! ***

    At this point, all four of the new drives are installed and the OS partition has been mirrored to all of them. However, the MBR of the old drive has not been mirrored to the new drives, so the GRUB bootloader is not installed in the MBR of the new boot drive. To solve this, you have to install GRUB into the MBR of disk 1, and preferably the MBR of each drive so that any other drive could be used for booting should the first drive fail. You can do this by using the GRUB command shell.

  8. Run /boot/ginstall/grub

    In the GRUB command shell, run the following commands in order:

    device (hd0) /dev/sdb
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/sdc
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/sdd
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/sde
    root (hd0,0)
    setup (hd0)

    At this point, the drives are designated /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde because /dev/sda was the old drive, and that drive was removed. When you reboot, the drives will once again be sda, sdb, etc., but this does not affect the RAID or anything else as far as I can tell.

  9. Reboot the NAS
After a reboot, the NAS should come back up and then function like new. These instructions only restored the OS partition, and you may need to use the web interface to rebuild the data storage volume. In that case, the NAS will be unusable for a few hours while the storage volume is rebuilt.

Sunday, May 15, 2016

Welcome to the Computer Salad Blog!

About Me

My name is Dave, and I'm an all-around computer enthusiast.  I created this blog in order to share some of the projects, discoveries, successes, and failures in my professional and hobbyist adventures with computers, just in case this is useful to anyone else.

I hold a B.S. in computer science, and currently work as a systems administrator at Utica College.  I'm primarily responsible for the deployment, maintenance, and operation of Linux and Unix servers, as well as the applications which run on them.  My job also includes quite a bit of scripting, and a smattering of application development, data recovery, networking, database administration, user support, and lots of fun with SSL certificates.

I strongly believe in the power of open source software, and try to use open source solutions whenever possible.  I run Debian GNU/Linux as the primary OS on my work and home PCs.  I am the author and maintainer of an open source Android app called CheckValve, as well a piece of companion software called CheckValve Chat Relay, both of which are released under version 3 of the GNU General Public License (GPL).

Outside of work, my hobbies include maintaining CheckValve, writing random programs, and working with old technology.  Old computers and video game consoles hold a special place in my heart, and I seem to collect far more old stuff than anyone probably should.  My collection includes some old PCs, even older Macs, a Commodore Amiga, a Commodore 128, a TRS-80 Model III, and even a TRS-80 Model 100.  I also have an NES, a few original Game Boys, and a Sega Game Gear.  The old TI-99/4A we had as kids must still be around somewhere, and perhaps someday I'll dig it out.  Yep, lots of old stuff.

I'm truly lucky to be married to my beautiful wife, and we have two incredible 5-year-old sons.  Their never-ending interest in science, love of nature, and constant questioning of the world around them are an inspiration.  I spend every possible moment with them, loving every minute of it.  My experiments and projects with technology have certainly waned over the last few years, and I find it takes me a very long time to get things done, but I wouldn't trade it for anything.

Well, I hope something here proves useful to you during your own adventures with technology.  If you have a question, please don't hesitate to ask.  Enjoy your Computer Salad!