
Ubuntu Server update openssl requires reboot

Posted by Michael under Linux, Systems Administration (3 Responses)

So I had the joy of updating an Ubuntu 10.04 LTS web server today. I logged in and ran an apt-get update. Next I ran an aptitude dist-upgrade, and I was told the following packages would need to be updated:

apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common bzip2 dpkg dpkg-dev libapache2-mod-php5 libbz2-1.0 libssl0.9.8 openssl php5 php5-cli
php5-common php5-curl php5-gd php5-ldap php5-mysql php5-odbc php5-sqlite

Nothing in this list looked to me like it would need a reboot, so I went through with it. The update completed successfully, and I went on about some other tasks I needed to complete on this server. To do that I needed to open another session to the server. When I logged in I was greeted with a message telling me the server needed to be rebooted. Why on earth would I need to do that for this simple update? The first place to look for the why is:

/var/run/reboot-required
/var/run/reboot-required.pkgs

In these files you find the message and the package causing the need. The only thing listed in there was libssl0.9.8. Why should this need a reboot, I thought to myself, so I did some digging and found an annoying bug report that has been open on this issue for over two years: it causes the openssl package to report that a reboot is needed when it is not. The good news is that you might not really need to reboot after this update. I did not need to, and here is how I know. When I run lsof | grep ssl on my web server I get the following:

=> lsof | grep ssl
master 912 root DEL REG 251,0 3145880 /lib/
qmgr 977 postfix DEL REG 251,0 3145880 /lib/
apache2 1456 www-data mem REG 251,0 333856 3147107 /lib/
pickup 2006 postfix mem REG 251,0 333856 3147107 /lib/
apache2 3752 www-data mem REG 251,0 333856 3147107 /lib/
apache2 3931 www-data mem REG 251,0 333856 3147107 /lib/
apache2 3932 www-data mem REG 251,0 333856 3147107 /lib/
apache2 3969 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32085 root mem REG 251,0 333856 3147107 /lib/
apache2 32272 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32360 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32439 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32440 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32442 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32447 www-data mem REG 251,0 333856 3147107 /lib/
apache2 32448 www-data mem REG 251,0 333856 3147107 /lib/

What this shows is that apache2 and postfix are the only things using ssl on my box, so all I really need to do is restart Apache and Postfix. Apache was already restarted by the updates, so that leaves only postfix. A simple service postfix restart fixes that. My system is still telling me that I need to reboot, but I can shut that up with the following:

=> rm /var/run/reboot-required*

I hope this will help some folks out there avoid unneeded reboots.
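The same check can be turned into a quick one-liner. The sketch below is my own generalization, not a command from the original write-up: it feeds sample lsof lines (abbreviated from the listing above, with the library paths truncated as in the original) through awk to list the unique process names still holding the old library. Those names are the services to restart instead of rebooting. In real use you would replace the printf with lsof | grep ssl.

```shell
# List the distinct processes still using the old libssl. The sample
# lines stand in for real `lsof | grep ssl` output.
printf '%s\n' \
    'master   912   root     DEL REG 251,0        3145880 /lib/' \
    'qmgr     977   postfix  DEL REG 251,0        3145880 /lib/' \
    'apache2  1456  www-data mem REG 251,0 333856 3147107 /lib/' \
    | awk '{ print $1 }' | sort -u
```

Each printed name maps to an init script, so a service restart (or a reload, for daemons that support it) clears the stale library without a reboot.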


Force Pandora Traffic out of a specific interface with pfsense

Posted by Michael under Systems Administration (No Responses)

Something that has started using tons of our bandwidth at work is online radio. Currently the most popular service on our network is Pandora. It accounts for nearly 1/3 of the daily traffic we pull as a company through our squid proxy server.

A little bit about us:

Our company currently has about 100 employees. Only 12 people are using online radio, and of those 12, only 2 are not using Pandora. Their traffic accounts for 1/3 of all the traffic we have. That is a lot of resources for such a small group of people to spend on something that, one could argue, has nothing to do with work. Our office is also not in just one place: these people are spread across Texas in Dallas, San Antonio, and Houston. San Antonio is the central location and is where our IT operations take place. The way our network is designed, all locations come to San Antonio, then we filter the content and pass it back along to them. Our Houston office is in a location where we could not get fiber into the building, so we have to deal with T-1 lines for connectivity. Our Dallas office is connected to us with a 4M direct connection. Each location also has its own local internet connection, so in case of failure they will be able to VPN into San Antonio and keep working.

The problem as I see it

This setup can cause one hell of a problem if everyone who works in Houston decides it's time to listen to online radio. That office currently has close to 20 people in it, and if each of them opens Pandora we do not have the bandwidth to give them both the radio and the applications from the app servers located in San Antonio. I doubt that will ever happen, but if even half decide to, it is a lot of traffic all of a sudden. Dallas causes less of a problem because that office only has 10 or 12 people and we have plenty of bandwidth to serve them with apps and music, though not in high quality, and apps over the network may get a bit slow if one of us in IT needs to log on to a Dallas desktop to support them. In the past we had a no-online-radio policy that was only enforced if things got slow and people complained: we would look through logs, figure out who was causing it, and call them and tell them to stop. Now we are trying to find ways to work with the users to allow them access to this service. The main reason is that radios do not always get good reception in buildings, and let's face it: when people have more freedom they tend to be happier, so one could conclude more productive. Some people even pay for the Pandora service (I know this isn't our problem), so they want to use it as often as possible, and I don't blame them. Pandora is pretty awesome; I use it myself.

Our solution

At least for now, our solution is to force all Pandora traffic out each location's local internet connection. Since those connections just sit there doing nothing all day, why not use them? We decided to put a wireless router in each location that is hooked to the local internet and not part of our network. This worked great for folks who had laptops and wanted to surf during lunch without being blocked by our squid server. I got to thinking: why not force all the online radio from our network out of these unused connections too? So we tried it, and it seems to be working great.

How we did it

Since we have a pfSense firewall in place at all of our locations, making this happen was a snap. After finding Pandora's netblock, I went in and added a few quick rules. First, we have a rule in place so that only the proxy server can access the internet on the http and https ports. This makes sure our users can't bypass the proxy by turning off the proxy settings in their browser. Next, I made a rule so that traffic heading to Pandora would go out the local internet connection instead of over the network and out our gateway, conditioned so that only the squid server matches it. This way we can still keep tabs on who is using it and how much they use it. From the user's side nothing changed at all: they still log into squid and go to a web site. It's just that now, when they do, we force the traffic out of a mostly unused connection and lower network traffic. It did mean adding a squid server in each location to avoid the LAN traffic, since our primary squid was only in San Antonio, but this will also be a good testing ground for forcing all internet traffic out each location's local pipe.
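In pf terms (pfSense is built on pf), the two rules described above amount to something like the sketch below. Every address, interface macro, and the Pandora netblock here is a placeholder I made up for illustration, not our production config; look up the current netblock with whois before copying anything.

```
# Hypothetical pf-style rules; all values are placeholders.
proxy       = "10.0.1.10"        # the local squid box
pandora_net = "208.85.40.0/24"   # example netblock; verify with whois
local_gw    = "192.168.1.1"      # gateway on the local internet line

# Only the proxy may reach the web directly
block out quick on $wan_if proto tcp from ! $proxy to any port { 80 443 }
# Send proxy traffic bound for Pandora out the local line instead
pass out quick route-to ( $local_if $local_gw ) proto tcp \
    from $proxy to $pandora_net port { 80 443 }
```

In the pfSense GUI the same effect comes from a firewall rule whose Gateway field is set to the local internet line, with the source restricted to the squid server.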


Who owns that IP

Posted by Michael under Linux, Systems Administration (1 Response)

You may find yourself in a situation where you need to know who owns an IP, and maybe even the netblock. Today I was in just that boat. An unnamed company that we do business with made a new feature for their website. This feature required our users to have an ActiveX control added to their computer and allowed them to more easily do business with the company. The problems started as soon as we tested this with our first user. We use a squid proxy server, and for whatever reason the company's new feature did not support using an authenticated proxy. This means that for our users to use this new feature, we need to allow direct access to their site (and any other server they use to deliver content to our users' browsers). We asked for the netblock, but they only gave us one IP. I added this IP to our firewall and allowed the outgoing traffic to skip the proxy, and all was good, or so we thought. We tried testing again, but something was causing problems: some content would load but not the rest. I quickly found that the company was not just using the one IP they gave us, and they had no idea what other IPs to tell us about (sad for them that they don't know what to tell their customers). To solve this problem, I opened a terminal window on my Linux desktop and ran the command:
whois [IP]
Assuming we were trying to find out about Google, this would be the output:

=> whois

OrgName: Google Inc.
Address: 1600 Amphitheatre Parkway
City: Mountain View
StateProv: CA
PostalCode: 94043
Country: US

NetRange: –
NetHandle: NET-74-125-0-0-1
Parent: NET-74-0-0-0-0
NetType: Direct Allocation
NameServer: NS1.GOOGLE.COM
NameServer: NS2.GOOGLE.COM
NameServer: NS3.GOOGLE.COM
NameServer: NS4.GOOGLE.COM
RegDate: 2007-03-13
Updated: 2007-05-22

OrgTechHandle: ZG39-ARIN
OrgTechName: Google Inc.
OrgTechPhone: +1-650-318-0200

# ARIN WHOIS database, last updated 2010-05-19 20:00
# Enter ? for additional hints on searching ARIN’s WHOIS database.
# ARIN WHOIS data and services are subject to the Terms of Use
# available at

As you can see from the CIDR line, Google has a nice /16.
This simple command returns the relevant information: who the netblock belongs to, and what size and range they have. Now I can add this new info to my proxy.pac file and to my firewall.
Problem solved.
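If you do this often, the CIDR line can be pulled straight out of the reply with awk. The two-line record below is a made-up stand-in for a real ARIN response (the /16 matches the example above); in practice you would pipe whois followed by the IP into the same filter.

```shell
# Extract the CIDR line from a whois reply. The sample record is a
# stand-in; in real use, replace the printf with:  whois <some IP>
printf 'NetRange: 74.125.0.0 - 74.125.255.255\nCIDR: 74.125.0.0/16\n' \
    | awk -F': *' '/^CIDR/ { print $2 }'
```

The printed value drops directly into a proxy.pac rule or a firewall alias.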


Troubleshooting your DHCP Server with tcpdump

Posted by Michael under Linux, Systems Administration (3 Responses)

Having issues with your DHCP server? Maybe tcpdump can help.

The first thing to do is to log onto your dhcp server, and gain root access.
=> ssh mike@dhcpserver
=> sudo -s
Next I need to verify that my dhcp server is up and running:

=> ps waux|grep dhcp
dhcpd 1490 0.0 0.2 16080 2452 ? S Apr26 0:00 /usr/sbin/dhcpd3 -f -d -cf /etc/dhcp3/dhcpd.conf -lf /var/lib/dhcp3/dhcpd.leases eth0

OK, so it's running; now for the tcpdump. DHCP activity happens on UDP ports 67 and 68, so we can run a simple command like this:
=> tcpdump -n port 67 or port 68
This should give us some output like so

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
08:30:40.158223 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length 548
08:30:42.452181 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length 548
08:30:44.109164 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length 548
08:30:46.464025 IP > BOOTP/DHCP, Request from 00:0c:f1:7b:09:3b, length 300
08:30:51.463328 IP > BOOTP/DHCP, Request from 00:0c:f1:7b:09:3b, length 300

If your output looks like mine above, then you have a problem: you can see clients asking for a lease, but for some reason the server is not replying to them.
Once we find the problem with our dhcpd, we should be able to run the same command and see output like this:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
09:33:42.800945 IP > BOOTP/DHCP, Request from 00:1a:a0:21:2f:04, length: 300
09:33:42.801949 IP > BOOTP/DHCP, Reply, length: 300
09:33:50.056500 IP > BOOTP/DHCP, Request from 00:18:8b:8a:b3:51, length: 318
09:33:50.096365 IP > BOOTP/DHCP, Reply, length: 323
09:34:03.377480 IP > BOOTP/DHCP, Request from 00:24:e8:2e:a6:ec, length: 300
09:34:03.380555 IP > BOOTP/DHCP, Reply, length: 300
09:34:11.697196 IP > BOOTP/DHCP, Request from 00:24:e8:2e:49:09, length: 300
09:34:11.699273 IP > BOOTP/DHCP, Reply, length: 300
09:35:20.272780 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length: 548
09:35:20.277025 IP > BOOTP/DHCP, Reply, length: 300

Now we can see that the server is replying to the requests, so everything is working.
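A quick way to quantify the difference between the broken and healthy captures is to count Requests against Replies. The sketch below is my own addition, run here over sample lines (abbreviated like the captures above, with the IP columns omitted as in the original); against a live server you would pipe a bounded capture such as tcpdump -c 50 -n port 67 or port 68 into the same greps.

```shell
# Count DHCP Requests vs Replies in captured text. Requests piling
# up with zero Replies means the server is not answering.
cap='08:30:40.158223 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length 548
08:30:42.452181 IP > BOOTP/DHCP, Request from 00:22:19:a9:e8:3d, length 548
08:30:44.109164 IP > BOOTP/DHCP, Request from 00:0c:f1:7b:09:3b, length 300'
echo "Requests: $(printf '%s\n' "$cap" | grep -c 'Request')"
echo "Replies:  $(printf '%s\n' "$cap" | grep -c 'Reply')"
```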

Check back soon, when we will cover how to find what was causing the DHCP problem.


Extending a Logical Volume

Posted by Michael under Systems Administration (2 Responses)

When you run out of space and are using LVM, it is fairly easy to extend your volume. In my case I am using vSphere 4, FalconStor IPStor 6, and CentOS 5. I will show you what to do to get the extra space you need for your CentOS 5 machine without any downtime. Many of the steps here can be done without VMware or FalconStor.

The first thing we need to do is log into our FalconStor console and create another LUN. Once that is complete, assign it to your ESX host(s). In our environment I use "Read/Write Non Exclusive"; this gives a warning, but it is required if you are using VMotion with vSphere. If you are assigning to more than one host, be sure to use the same LUN ID for each host. Once the disk has been assigned to vSphere, configure any replication or backup settings.

Next we move on to vSphere. Log into your ESX host or vCenter Server using your vSphere client as an admin user. Select the ESX host that you assigned the LUN to, then click the Configuration tab. In the Hardware menu, select Storage Adapters and pick the adapter you assigned the LUN to. In my case it is vmhba4, a Fibre Channel adapter with a nice long WWPN. I right-click the adapter and choose "Rescan". Next I look below in the Devices view and find the matching LUN ID. If this process does not show your disk the first time, give it a couple of minutes and try again. I have had to disconnect from vSphere and reconnect to get the LUN to show up; this is a more common problem when using an XP host to run your vCenter Server from. Once your LUN shows up, edit the settings of your virtual machine and add a new hard disk to it. In my case I am using a raw mapped LUN as the device type. Once this is complete, you can move on to the CentOS part. Everything from here on out needs to be done as the root user, or as an admin user using sudo, unless otherwise noted.

By default Linux will not just "see" this newly attached disk; we need to rescan the SCSI bus first. To do this I use a simple script I wrote, which I call rescan_bus:
#!/bin/sh
# Ask the kernel to scan SCSI host0 for newly attached devices.
echo "Scanning host0"
echo "- - -" > /sys/class/scsi_host/host0/scan
sleep 1
echo "Done"
exit 0

Put this into a file called rescan_bus, place the file in /usr/local/sbin, and then execute the following command:
chmod 700 /usr/local/sbin/rescan_bus && chown root:root /usr/local/sbin/rescan_bus
This makes sure that no one except root can do anything with this file. Once you have completed this, you can execute it:
=> rescan_bus
Scanning host0
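If the new disk shows up on a SCSI host other than host0, the same trick applies per host. The loop below is my generalization, not the author's script, and it only prints the rescan commands so you can review them; run the printed lines as root to trigger the actual rescans.

```shell
# Print the rescan command for each SCSI host name. Running the
# printed lines as root performs the actual bus rescans.
for name in host0 host1 host2; do
    echo "echo '- - -' > /sys/class/scsi_host/$name/scan"
done
```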

Next you should be able to see the new disk by using fdisk:
=> fdisk -l
[... suppressed output...]

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

This was the output from my command. As we can see, I now have sdc, which has no valid partition table. I want to use the whole disk here for my volume group. But which volume group do I want to use, and how many are on my system anyway? To find out, simply run vgdisplay:
=> vgdisplay
[...suppressed output...]
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.88 GB
PE Size 32.00 MB
Total PE 188
Alloc PE / Size 160 / 5.00 GB
Free PE / Size 28 / 896.00 MB
VG UUID cg5tC3-Xvr7-Vv6q-LuzB-qPV4-2fqb-2hRv3c

VolGroup00 happens to be the default name for the primary volume group on CentOS. Since I want to add this whole 10G disk to my primary volume group, we can do that now using pvcreate and vgextend.
=> pvcreate /dev/sdc
Physical volume "/dev/sdc" successfully created
=> vgextend VolGroup00 /dev/sdc
/dev/cdrom: open failed: No medium found
Volume group "VolGroup00" successfully extended

Next we can see how the volume group was extended by running vgdisplay again:

=> vgdisplay
[...suppressed output...]
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 15.84 GB
PE Size 32.00 MB
Total PE 507
Alloc PE / Size 160 / 5.00 GB
Free PE / Size 347 / 10.84 GB
VG UUID cg5tC3-Xvr7-Vv6q-LuzB-qPV4-2fqb-2hRv3c

Now you should see that the Free PE / Size has grown.
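The Free PE / Size line is just the extent count times the extent size, which makes for an easy sanity check:

```shell
# With a 32 MB physical extent, 347 free extents come to
# 347 * 32 = 11104 MB, or about 10.84 GB -- matching vgdisplay.
echo "$(( 347 * 32 )) MB free"
```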

Next we need to extend the actual volume that is out of space. Which volumes are in our group? We can get a list by running lvdisplay:

=> lvdisplay
--- Logical volume ---
[...suppressed output...]
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID cJnNf0-nBQ1-8s2W-SV1a-OIC3-mHD1-i0i4eT
LV Write Access read/write
LV Status available
# open 1
LV Size 4.00 GB
Current LE 128
Segments 2
Allocation inherit
Read ahead sectors 0
Block device 253:0

--- Logical volume ---
[...suppressed output...]

As we can see from this, we have a volume named LogVol00. This is the default volume for the Linux root filesystem, which you can verify by running df:

=> df -h
Filesystem Size Used Avail Use% Mounted on
3.9G 3.3G 411M 90% /

At this point the volume is so full that we are unable to run yum update on this server, so we have to extend it. We also can't take this machine offline, as it is the email server for our company. Fortunately, LVM allows online resizes, so we will do that now. First let's extend the volume by 10G:

=> lvextend -L+10G /dev/VolGroup00/LogVol00
Extending logical volume LogVol00 to 14.00 GB
Logical volume LogVol00 successfully resized

Great, that was successful; let's move on. This is the part that can be scary: here we will do an online resize of our filesystem using resize2fs:
=> resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 3670016 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 3670016 blocks long.

Great, so it looks like it worked! Let's see:
=> df -h
Filesystem Size Used Avail Use% Mounted on
14G 3.3G 9.6G 26% /

So it worked; now we can run yum update at last!
