Archive

Archive for the ‘How To’ Category

SSH Tunneling–My Favorite Sys Admin Tool

Happy SysAdmin day everyone! While my title may not say “System Administrator” any longer (although, my business cards do… probably need to change those), I still wield those skills all the time and have the utmost respect for those in the trenches.

I thought today would be a great time to bust out a post on what I believe to be the most useful tool in my Sys Admin belt… SSH Tunneling.

SSH Overview

SSH (aka Secure Shell) is a network protocol that allows secure connections to networked devices. SSH sessions are encrypted, so your communications to and from the devices are safe. Typically, SSH connections are used with Linux, UNIX, and networking devices.

Contrast SSH with Telnet… while they provide many of the same functions (e.g., remotely connecting to a device), Telnet communications are plain text and wide open. So, when logging into a server and providing your password, someone with a packet sniffer (Wireshark, for example) would see something akin to:

SSH: B12394jfjLL1055bSSJxla,;;293
Telnet: thisismysupersecretpassword
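
For reference, a basic SSH session from an OpenSSH client looks like this (the host and user names are hypothetical):

# Open an encrypted shell session on a remote host:
ssh admin@linux1.yourdomain.com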

SSH Tunneling Overview

While SSH allows for remote connections to devices, that is not the only function it provides. Tunneling is an option that uses the SSH connection to a remote device as a launching point into another device or service. When the SSH connection is made to the device, new connections to specific hosts and TCP-based services are also made and tunneled back to you.

For those of you that have not used SSH port forwarding, the concept may be a little funky. So, please read on and learn about the coolness that is SSH tunneling. For those that use the functionality now, I hope you continue to use it wisely and appropriately… and read on for a refresher if you’d like.

The Nitty Gritty

[Note: this is all hypothetical. If you have security restrictions/policies in place, I do not suggest using this. This is just an example of how the technology works. It is up to you to figure out how to use it in your life.]

This is my “environment”. This is overly simplistic, but should get the point across.

[diagram: My Environment]

As mentioned earlier, SSH tunneling uses the connecting host as the launching pad for the tunneled connections. When a successful connection to the SSH target is made, the specified connections are also plumbed between your workstation/client and the destination.

Once a connection is established, the destination will see a connection coming from the SSH launch pad, while on your local workstation/client the tunnel will appear as a listening port on localhost.
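
If you are an OpenSSH user, the general shape of a local tunnel looks like this (just a sketch of the syntax; the placeholders are yours to fill in):

# Listen on local_port on your workstation; forward, via ssh_host, to destination_host:destination_port
ssh -L local_port:destination_host:destination_port user@ssh_host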

Scenario

You’re visiting your in-laws in Ohio. For whatever reason, you need to check something at work on the Linux2 server. All you have available is a non-work Windows desktop. Without a VPN connection or remote desktop client installed, how can you check on that thing? SSH tunneling through Linux1! Let me show you how.

  • Download PuTTY (Fantastic utility. I would probably die without it).
  • Start PuTTY

[screenshot: PuTTY Configuration window]

  • Provide the Host Name or IP of the Linux1 server

[screenshot: Host Name (or IP address) field filled in]

  • Walk down the Category section on the left and go to: Connection –> SSH –> Tunnels

[screenshot: Connection –> SSH –> Tunnels pane]

  • This is where the magic happens. In the source port, enter the TCP port that will represent the tunnel on localhost.
    • This requires you to know what is running on the workstation. Do not choose a port that is already in use; the SSH connection will still succeed, but the tunnel will fail.
  • In the destination section, enter the target of the tunneling (aka – what you’re trying to get to) AND the TCP port of the service in the following format: linux2.yourdomain.com:80
    • The destination endpoint is reached using the resources of the SSH host you are connecting to. So, DNS resolution of linux2.yourdomain.com takes place from linux1, not your workstation.
  • I am going to select a Source port of 22222 and a Destination of linux2.yourdomain.com:80
  • Click the Add button

[screenshot: tunnel listed under Forwarded ports]

  • Now, if you want other tunnels, you can add them here as well. But, we’re just going to move on. Click Open to open the SSH connection and login.
  • Once you have logged in, you can use ‘netstat -anp tcp’ to show the TCP connections on your workstation:

[screenshot: netstat output showing the TCP:22222 listener]

  • Magic! You can see that there is a listener on TCP:22222 on my workstation.
  • Now, it is time to check that super important thing you are trying to check on. Since we know this is a web-based utility, we just need to open a web browser and browse away. Open your browser-du-jour.
  • Go to http://localhost:22222/superimportant

[screenshot: browser showing http://localhost:22222/superimportant]

  • Phew! Looks like everything is alright… Time to clean up!
  • Close the browser window and logout of the SSH session.
  • If you run the ‘netstat -anp tcp’ command again, TCP:22222 is missing.

[screenshot: netstat output with TCP:22222 gone]
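
As an aside, the entire PuTTY setup above collapses into a single command if you have a command-line SSH client handy. Both of these are sketches using a hypothetical admin account:

# OpenSSH equivalent of the tunnel we just built:
ssh -L 22222:linux2.yourdomain.com:80 admin@linux1.yourdomain.com

# Or with plink, PuTTY's command-line sibling:
plink -L 22222:linux2.yourdomain.com:80 admin@linux1.yourdomain.com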

 

Conclusion

Sys Admins have a great arsenal of tools available to them at any given point in time. SSH Tunneling has provided massive functionality to me in my day job. I hope that you find some value in SSH tunneling and can add it to your tool belt for those visiting-the-in-laws moments.

Categories: How To, Systems

vSphere – Extending VMFS Datastore–Live (With Unisphere)

November 8, 2010

It seems like any time a storage environment is being carved up and put to good use, there are always one or two situations where the calculations were slightly off and need to be adjusted. In my situation, this happened on a couple of VMFS datastores on our new SAN. However, making a couple of slight adjustments to the space allocation on the SAN and extending the VMFS file system was all it took.

Note: These instructions are being made using an EMC Celerra and Unisphere 1.0. Your SAN hardware experience may vary. So, as always, take this as an example and test in your test environment FIRST.

Stage 0 – Determine how much space you need

The mistake I made in calculating some of the space was that I missed a couple vswap files. Oops… So, now I need to expand the datastore by about 3 GB in order to Storage vMotion the VM into place and keep some room for snaps (and whatnot).

Now… it is worth noting that Celerra SANs work with file systems as the base storage level (the base place you can keep data). If you are hip to Unix/Linux environments, you know that when you format a file system, a portion of the storage is consumed by inodes (and other structures depending on the system). So, a 20GB partition is not necessarily 20GB usable. By default, the Celerra is taking 2% of the disk space for whatever it wants. 🙂
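
You can see the same effect on any Linux box: format a partition, and df will report less usable space than the raw size (the mount point here is hypothetical):

# The reported 'Size' comes up short of the raw partition size:
df -h /mnt/datastore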

[screenshot: file system capacity in Unisphere, showing the overhead]

Note – I am not an EMC guy, so there may be ways to tweak this. That’s cool.

Recall that I need 3GB extra (which would make it a 23GB file system). Extending by 3GB would leave me with 22.6GB usable (again, recall 2% overhead). Extending by 4GB, though, leaves me with 23.6GB usable (377.0MB overhead). So, 4GB it is!
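
If you want to sanity-check that overhead math yourself, a quick bit of shell arithmetic does the trick (the 2% figure is the rule of thumb from above; the actual overhead varies, as my 377.0MB shows):

# Usable MB for a 24GB (20GB + 4GB) file system at roughly 2% overhead:
echo "24 * 1024 * 0.98" | bc    # ~24084MB, comfortably over the 23GB target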

Stage 1 – Extend the file system

Open the Unisphere Client and browse to the file systems. Locate the file system you are looking for.

[screenshot: Unisphere file system list]

Right-click and select “Extend”

So… now that we know we need to add 4GB, we just need to do some quick math, because the Extend function wants sizes in MB. 1024MB/GB * 4GB = 4096MB. Plug “4096” into the “Extend Size by (MB):” field and click OK.

[screenshot: Extend File System dialog]

Now Unisphere is showing that the file system has been extended. Pretty cool, huh!?

[screenshot: extended file system in Unisphere]

 

Stage 2 – Extend the iSCSI Share

Part of setting up iSCSI access to a file system is defining how much of the file system will be used by iSCSI. Initially, I wanted all of the space to be used by iSCSI, so I told the Celerra to use all 20GB (or 19.7GB, after the 2% overhead). Now, though, we are still using only 19.7GB even though we have the additional 3.9GB available.

To resolve the issue, we need to extend the iSCSI share to consume the rest of the space.

In the Unisphere client, browse over to Sharing –> iSCSI

Locate the LUN ID of the share you want to extend. For us, this is LUN #19. Right-click on the share and select “Extend”.

[screenshot: iSCSI LUN list showing LUN #19]

Notice that the size is showing 19.684GB. We’re going to change this now to include the new space we added.

This step contains a little inconsistency in the Unisphere UI that drives me crazy. Specifically, the rounding and display of significant digits. In the graphic above, you see that the iSCSI share is 19.684GB in size. However, notice in the next graphic that we can extend by 3.9GB. Hmmm… What do you want to bet that it is not quite 3.9GB!?

Instead of trying to figure out the exact size, in MB, that we want to extend by, there is a little trick we can employ. The UI catches incorrect sizes, throws an error, and then corrects the error by replacing the value with the largest value possible. So, I am going to enter 4096 into the field knowing that it will fail. However, it will correct my work for me (ah… so nice).

[screenshot: Extend iSCSI LUN dialog, with the value corrected to the maximum]

Click OK and extend away.

Now, the Unisphere UI will show that the iSCSI share has been extended.

[screenshot: extended iSCSI share in Unisphere]

Stage 3 – Extend the Datastore in vSphere

Even though we have done some pretty cool data storage extension, the vSphere environment is not showing that. Grrrr… Luckily, there is an easy fix for that.

[screenshot: vSphere storage configuration view]

Locate the proper datastore by selecting an ESX host, Configuration tab –> Storage. Right-click on the datastore and select the “Properties” option.

Next, select the “Increase” button

[screenshot: datastore Properties dialog with the Increase button]

In the following window, you will see a selection of the available devices that can become an extent of the file system. Lo and behold, the same LUN #19 has appeared (recall that the iSCSI LUN for the file system was #19)!

[screenshot: extent device list showing LUN #19]

Select that LUN and click Next.

Magically, vSphere knows exactly what to do with it.

[screenshot: current disk layout with the free space shown]

vSphere sees that there is an additional 3.94GB of space available and that the ‘Free space’ will be used to expand the VMFS volume. Click Next!

Click Next to “Maximize capacity”.

Click Finish to begin the magic.

In a mere 6 seconds (for me), the VMFS datastore has been expanded and is available to use.

[screenshot: expanded VMFS datastore]

As a nice aside, this feature will also initiate a Storage Rescan of all ESX hosts connected to the datastore to ensure the change has been reflected everywhere.
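
If a host ever misses the automatic rescan, you can kick one off by hand from the ESX service console. These are standard ESX 4.x commands, but the adapter name (vmhba33, common for the software iSCSI initiator) is an assumption for your environment:

# Rescan a specific storage adapter for new/changed LUNs:
esxcfg-rescan vmhba33
# Re-read VMFS volumes so the host notices the extra capacity:
vmkfstools -V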

Stage 4 – vPat-Yourself-On-The-Back

 

This procedure to extend a VMFS datastore is super easy. Again, different SAN vendors may have different utilities and procedures for enlarging an iSCSI share on the fly. So, make sure to do your research on the procedure. However, this should go to show that it can be super easy, and any VMs running on the datastore will see little/no impact.

vSphere–Migration to vNetwork Distributed Switch (vDS)–LIVE!

October 13, 2010

The vNetwork Distributed Switch (vDS) is an amazing new entity in the vSphere realm. As with many things, knowing a little history makes you appreciate things a little more.

A long time ago, vSphere environments relied upon each ESX host being configured with multiple virtual switches. These switches were bound to physical NICs that bridged the virtual and physical networks. However, since the switches needed to be defined on each host, there was room for error. vMotion relies upon the same switch names being defined on each ESX host, or else it will fail. Additionally, it was darn near impossible to collect any networking information about how a VM was using a switch, because the VM could be on any given switch at any time.

So, VMware saw the inefficiencies of this structure and began seeing the benefits of a more centralized virtual switching infrastructure. Enter the vNetwork Distributed Switch (vDS). The vDS removes the need to create virtual switches on each ESX/ESXi host and, instead, creates the switch at the datacenter level. So, any host inside of the datacenter has access to the virtual switch. Now, all the administrator needs to do is assign the correct physical NICs to the correct switches, and the rest of the work is done for you.

The vDS is also the cornerstone for some of the new networking functionality that VMware is introducing (vShield products, vCloud, etc…). vDS introduces the ability to add logic and control to the switching of the virtual machines. Naming standards are applied across all hosts to ensure that machines will migrate flawlessly. Network statistics on a VM are maintained, as the VM stays on the same switching entity regardless of ESX host. New functions, like Network I/O Control (the ability to prioritize network traffic), can be introduced. Again, this is the future of how VMware is going to handle vSphere networking. So, getting on the horse now will be beneficial, as you will be able to use new features and functions sooner.

Now, if you are like me, all of the NICs in your vSphere environment are being consumed. This poses a problem, because we need at least one vmnic to connect to the vDS in order for it to become useful, right?! Plus, being able to migrate the VMs without any downtime seems like a swell proposition as well. So, I have developed what I believe to be a fairly simple procedure that should allow us to reconfigure the virtual networking while keeping the VMs live, without needing to vMotion everything off to another host… it is just a little repetitive and requires a little balancing. But, that just makes it more fun! The instructions assume you are using the GUI to define the vDS.

[Stage 0 – Define the Physical Network Adjustments]

The underlying physical network needs to be configured properly for this to work. This may be a great opportunity to make some adjustments to the networking that you have always wanted to make. For me, I am taking the opportunity to move everything over to trunking ports, so the vDS is going to tag the VLANs as necessary. Just remember that any pSwitch adjustments you make ahead of time must not impact the production network. This may take some coordination with your networking team. If you, like me, are the networking team, this conversation should be nice and quick. Just make sure everyone is on the same page before starting.

[Stage 1 – Define the vDS]

This is super easy. You can define the vDS objects without interrupting anything. Part of the GUI setup asks if you would like to add physical interfaces now or later. Just select the option to add later and move on with the setup.

Home –> Inventory –> Networking

Next up, right-click on your Datacenter and select “New vNetwork Distributed Switch”

Assuming you are running vCenter 4.1 (with at least 1 ESX 4.1 host), you should have the option between version 4.0 and 4.1.

[screenshot: vDS version selection]

Selecting version 4.1.0 is preferable, as it introduces support for the most new functions. On the down side, you need to be running ESX/ESXi 4.1 to get that functionality. For this example, we will assume you are running ESX/ESXi 4.1.

The next step involves giving the vDS a name. Name the switch something descriptive that would allow you to know exactly which network it connects to just by the name. So, dvSwitch-DMZ would be a good name, for example. Additionally, you can define the number of dvUplinks (vmnics) that will connect to the switch (per host). The default value of 4 is just fine for our environment, so select it and move on.

The next step allows you to select the vmnics from each host that meet the vDS requirements. So, if you have a single 4.1 host in an environment of 10 ESX hosts, you will only see that single host in this list. Recall that we selected vDS version 4.1.0 above… and 4.1.0 requires ESX/ESXi 4.1. For the time being, we are going to select the option to “Add Later” and move on.

The final stage allows for the creation of a default port group and to finish. In this situation, we will go ahead and allow the wizard to create the port group automatically and create the vDS!

[Stage 2 – Remove One vmnic From Existing vSwitch]

Seeing as all of the vmnics are consumed, we need at least one vmnic available to move to the new vDS. Now… take into consideration the network changes that you may need to make for the vDS (back in Stage 0). You may need to coordinate the switch port change with the network team (or yourself). Also, as we are removing a vmnic from your production environment, you are responsible for double-checking that this will work for what you are doing. If you have a vSwitch with a single vmnic, you WILL lose connectivity when you pull that vmnic for the vDS.

Once you know which vmnic you are going to remove from the existing vSwitch for the new vDS and you have cleared it with your networking team, go ahead and remove it from the vSwitch via the vSphere Client.
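
For the console-inclined, the same unlink can be done from the ESX service console. A sketch, assuming vmnic2 and vSwitch0 (substitute your own names):

# List the current vSwitch layout to confirm which vmnic you are stealing:
esxcfg-vswitch -l
# Unlink vmnic2 from vSwitch0, freeing it for the vDS:
esxcfg-vswitch -U vmnic2 vSwitch0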

[Stage 3 – Add the vmnic to the new vDS]

Now that you have a vmnic available to use, ensure it has been properly configured for the network (again, see Stage 0).

The next step is to add the vmnic to the vDS. Since we did not assign any ports to the vDS during creation, we need to add the ports from the ESX/ESXi host itself.

Open the host configuration tab –> Networking –> Click on the “vNetwork Distributed Switch” button

[screenshot: Host Configuration –> Networking –> vDS view]

This view will show you the configuration of the vDS on the ESX/ESXi host itself.

Click the “Manage Physical Adapters” link on the top-right corner of the screen. This will load a page that displays the vmnics attached to the vDS. Select the “Click to Add NIC” link.

Locate the section labeled “Unclaimed adapters”. Inside the section, you should see the vmnic you just removed in Stage 2! Sweet!

[screenshot: Unclaimed adapters section]

Select the adapter and click OK.

You have been returned to the “Manage Physical Adapters” screen again. This time, you should see that the vmnic has been added. In my environment, this is vmnic2.

[screenshot: pNIC added to the vDS]
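
There is a service console route for this step too, though it is clumsier than the GUI because you must supply a free DVUplink port ID yourself. A sketch, assuming vmnic2, a hypothetical port ID of 256, and the dvSwitch-DMZ name from earlier:

# Attach vmnic2 to an unused DVUplink port on the vDS:
esxcfg-vswitch -P vmnic2 -V 256 dvSwitch-DMZ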

 

[Stage 4 – Configure the Port Group VLAN]

The Port Group is a structure inside of the vDS that handles many of the switching functions you would expect. These functions include Traffic Shaping, Failover, VLAN, etc… This is semi-analogous to the vSwitch you were used to configuring in the past.

In the event that you need the vDS to tag your network traffic with the proper VLAN, you need to edit the Port Group settings.

Home –> Inventory –> Networking

Locate the vDS in your vCenter structure. Click the [+] next to the vDS to get access to the Port Group (default label is “dvPortGroup”). Right-click on the Port Group and edit the settings.

Locate “VLAN” in the table on the left. Notice the “VLAN Type” defaults to “None”. Drop the box down and select “VLAN”. Then, provide the VLAN ID.

[screenshot: dvPortGroup VLAN definition]

Click OK to close out the windows.

Now, your Port Group is on VLAN 30!

 

[Stage 5 – Move A VM To vDS]

Next up, it is time to test your work. Find a nice utility or low-use VM. You should be able to ping the VM continuously as you move it from the old vSwitch to the vDS. So, begin pinging the VM.
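
From a Windows workstation, a continuous ping (the -t flag) makes an easy watchdog; Linux ping runs continuously by default. The VM name here is hypothetical:

# Keep this running in a window while you flip the network label:
ping -t testvm.yourdomain.com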

Now, we need to change the network that the VM’s NIC is connected to. This is a simple operation you can perform on the fly, while the machine is online.

Edit the settings of the VM.

Select the Network Adapter of the VM

On the right side of the window, you will see a section labeled “Network Connection”. Look for the Network Label box and drop it down. Magically, you will see your new vDS port group as a network label. Select it.

[screenshot: VM network adapter settings]

Hold your breath and click OK. See… that was not hard, was it?! Pings should continue uninterrupted. Now… if you have any connectivity issues to the VM, I would suggest the following:

  • Move the VM back to the original Network (select the old Network Label from the Network Connection section of the VM Settings)
  • Check to ensure the proper pSwitch port has been configured correctly. Do you need a trunk? Is it configured as a trunk? Was the correct port configured (oops)?
  • Check to ensure you have defined the VLAN in the Port Group correctly, if you are trunking.
  • Try again once everything has been verified.

 

[Stage 6 – Balancing Act]

If you have made it this far, you know that the vDS is setup as you expected. The underlying vmnic has been configured properly to allow access to the network for the VMs.

So, the next step is going to be a balancing act. You will need to migrate your VM network adapters from the old vSwitch to the new vDS. As the load on the vDS gets higher, you will need to migrate another vmnic from the vSwitch to the vDS. Ultimately, all VMs will need to be migrated before the last vmnic is removed. Otherwise, the VMs on the vSwitch will be lost in their own network-less world until you move them over.
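
If you want a gauge for when it is time to move the next vmnic, esxtop from the service console works nicely (this is standard esxtop behavior, nothing vDS-specific):

# Watch per-vmnic and per-VM network load while you migrate:
esxtop          # press 'n' to switch to the network panel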

Rinse-Wash-Repeat

[Stage 7 – vBeers]

Test everything out. The networking for the VMs should not be interrupted. After the balancing act has been completed, the VMs on the host should be safely running on the new vDS! Complete the same tasks on your other ESX/ESXi hosts (again, remember that we need them to be on v4.1, right?!).

Well done. You deserve to have some vBeers for your hard work!

 

Notes and Considerations:

  • You are the master of your environment. The example above is just that… an example. You know what is best for your environment. Take everything you read here with a grain of salt.
  • By all means, feel free to shut down VMs to change any configuration if you are more comfortable going that route. The same procedure applies, just with less concern about VM availability since you know the VM is going to be offline anyway.
  • Migrating VMs to a vDS will interrupt any vMotion activities until other hosts have been configured (recall that each ESX/ESXi host needs the same Networks to be available for the migration to complete). So, while this can be performed live, selecting a maintenance window for the work would still be wise.
  • If you have the capacity in your environment, you can always evacuate all VMs from a host and configure it without any VMs running as a safety measure. In that environment, you do not need to worry about a balancing act as you will not have any production load on either the vSwitch or the vDS. The same configuration principles apply.

Thanks for reading through this walkthrough on implementing a vDS on a live system! Look for future posts on migrating VMkernel interfaces to vDS configurations and how Host Profiles can help make this easier for you.

As always, please leave comments at the bottom. If you know of a way to improve the procedure, let me know!