I have an old drive with existing LVM data. The drive is plugged in as /dev/sdb.

[root@bt ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 300.1 GB, 300069052416 bytes
255 heads, 63 sectors/track, 36481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x92fc9607

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14       36481   292929210   8e  Linux LVM
[root@bt ~]# 
Let's check the status of the logical volumes:

[root@bt ~]# lvscan -a
  inactive          '/dev/VolGroup00/LogVol00' [278.78 GiB] inherit
  inactive          '/dev/VolGroup00/LogVol01' [576.00 MiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_home' [315.22 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [5.88 GiB] inherit
[root@bt ~]#

So the two volumes on sdb are currently inactive. The easiest way to tell whether just the logical volumes are inactive, or the entire volume group is, is to check whether the volume group's directory exists in /dev.

[root@bt ~]# ls -l /dev/VolGroup*
/dev/VolGroup:
total 0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_home -> ../dm-2
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_swap -> ../dm-1

[root@bt ~]# 
As you can see, only the volumes that are already active show up, and there is no /dev/VolGroup00 directory at all. So we know it is the volume group itself that is inactive. Let's go ahead and activate it.

[root@bt ~]# vgchange -a y VolGroup00
  2 logical volume(s) in volume group "VolGroup00" now active
[root@bt ~]#
Had the volume group been active, but the logical volumes not, you would use "lvchange -a y" instead. This is typically the case in a system recovery. Now you can confirm that it is enabled by again checking in /dev.
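For example, activating a single LV in that situation would look something like this (a sketch; substitute your own VG/LV names):

 lvchange -a y VolGroup00/LogVol00   # activate one specific logical volume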

[root@bt ~]# ls -l /dev/VolGroup*
/dev/VolGroup:
total 0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_home -> ../dm-2
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 May  9 17:11 lv_swap -> ../dm-1

/dev/VolGroup00:
total 0
lrwxrwxrwx 1 root root 7 May 19 11:42 LogVol00 -> ../dm-3
lrwxrwxrwx 1 root root 7 May 19 11:42 LogVol01 -> ../dm-4
[root@bt ~]#  
And consequently, our logical volumes are now active too:

[root@bt ~]# lvscan -a
  ACTIVE            '/dev/VolGroup00/LogVol00' [278.78 GiB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [576.00 MiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_home' [315.22 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [5.88 GiB] inherit
[root@bt ~]# 


Getting set up with the Amazon EC2 API

It was annoying as heck trying to figure out what format Amazon wanted, so I figured I would share what I did to generate the credentials and make them work.

First you want to generate the key and certificate.

 openssl genrsa 1024 > key.pem
 openssl req -new -x509 -nodes -sha1 -days 365 -key key.pem -outform PEM > cert.pem

Next you want to put them somewhere... I kept seeing people put them in ~/.ec2/, so I will do the same.

[root@unknown5cdad4100b91 ~]# mkdir .ec2
[root@unknown5cdad4100b91 ~]# mv *.pem .ec2/
[root@unknown5cdad4100b91 ~]# 

What you do with this cert is log into Amazon Web Services, navigate to the Security Credentials page, and import your certificate (cert.pem).

Now we need to set the environment variables. We might as well set the variables for the API tools at the same time.

export EC2_HOME=~/ec2-tools/ec2-api-tools-1.6.7.2
export EC2_AMITOOL_HOME=~/ec2-tools/ec2-ami-tools-1.4.0.9
export PATH=$PATH:$EC2_HOME/bin:$EC2_AMITOOL_HOME/bin
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
export EC2_PRIVATE_KEY=~/.ec2/key.pem
export EC2_CERT=~/.ec2/cert.pem

Keep in mind the above versions will change as Amazon updates their packages. Just make sure they match the versions you download in the next step. 

 
 wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
 wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
 mkdir ec2-tools
 mv *.zip ec2-tools
 cd ec2-tools
 unzip ec2-api-tools.zip 
 unzip ec2-ami-tools.zip 
 yum -y install java-openjdk

You should now be set to use the Amazon API. I also recommend making sure your system time is correct; otherwise the API will not work. Put the exports in your ~/.bashrc, then log out and back in (or reload it however you like), and you should be good to go.
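A quick sanity check I'd suggest (my own habit, not from Amazon's docs) is listing the regions; if your time, key, or certificate are wrong, this is where it will complain:

 ec2-describe-regions   # should print one line per EC2 region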


In both my current and previous jobs, LVM was not a tool we were able to use. At the previous job it was because the auto-provisioning system simply would not work well with LVM; after the company went through a merger we were able to get that added, but it was quite late in the game. At my current job we use Debian Lenny, which did not provision with LVM. This drives me up a fucking wall. I cannot stress enough how important this is.

In case you have no idea what I am talking about: LVM stands for Logical Volume Manager. Instead of writing your file system to a partition, you put LVM on the partition, and then you can chop up and group that space however you want, creating logical volumes. You then write your file system to the logical volume. The major gain is being able to resize a file system without the risk of losing it because you misaligned the blocks, file system, etc.
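If you have never set the stack up, it looks something like this end to end (a sketch with made-up VG/LV names; /dev/sdb2 stands in for your LVM partition):

 pvcreate /dev/sdb2                      # mark the partition as an LVM physical volume
 vgcreate vg_example /dev/sdb2           # pool it into a volume group
 lvcreate -L 20G -n lv_data vg_example   # carve out a 20G logical volume
 mkfs.ext4 /dev/vg_example/lv_data       # the file system lives on the LV, not the partition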

[root@media smb]# lvresize --help
  lvresize: Resize a logical volume

lvresize
[-A|--autobackup y|n]
[--alloc AllocationPolicy]
[-d|--debug]
[-f|--force]
[-h|--help]
[-i|--stripes Stripes [-I|--stripesize StripeSize]]
{-l|--extents [+|-]LogicalExtentsNumber[%{VG|LV|PVS|FREE|ORIGIN}] |
-L|--size [+|-]LogicalVolumeSize[bBsSkKmMgGtTpPeE]}
[-n|--nofsck]
[--noudevsync]
[-r|--resizefs]
[-t|--test]
[--type VolumeType]
[-v|--verbose]
[--version]
LogicalVolume[Path] [ PhysicalVolumePath... ]

[root@media smb]#

In the dark ages, we needed to shut the box down, boot it from a live CD, and then make the changes to the partition table. With LVM you only create one partition, so all of that is moot.

[root@media smb]# fdisk -l  /dev/sda

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b93ae

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64       13055   104344576   83  Linux
[root@media smb]# 

As you can see, we have a boot partition, and a partition that holds LVM. 

Now, the rule of thumb:
- If you are growing a file system, you can do it online. Yes, without even unmounting it. How cool is that?
- If you are shrinking a file system, you need to unmount it.

The reason is simple: if you are writing to a file system and you grow it, you just get more room... who cares? If you are shrinking it, and the space something is trying to write to is suddenly no longer part of the file system... bad things happen.

It is important to note that I am using the -r flag to automatically run resize2fs/fsck after the LV is resized. You need to resize the LV and ALSO the file system held within it, and then, for safety's sake, you run fsck. Using -r cuts that three-step process down to a single step.

In this example, we have a /home we want to shrink by 30G, and we want to re-allocate that space to /.

Step 1: Unmount and shrink /home by 30G.
 
[root@media smb]# umount /home
[root@media smb]# lvresize -r -L -30G  /dev/mapper/vg_media-lv_home
fsck from util-linux-ng 2.17.2
/dev/mapper/vg_media-lv_home: 20/2990080 files (5.0% non-contiguous), 233719/11954176 blocks
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg_media-lv_home to 4089856 (4k) blocks.
The filesystem on /dev/mapper/vg_media-lv_home is now 4089856 blocks long.

  Reducing logical volume lv_home to 15.60 GiB
  Logical volume lv_home successfully resized
[root@media smb]# mount /home

Now let's confirm we shrank /home down to ~16G.

[root@media smb]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_media-lv_root
                       50G   15G   32G  32% /
tmpfs                 935M     0  935M   0% /dev/shm
/dev/sda1             485M   40M  420M   9% /boot
/dev/mapper/vg_media-lv_home
                       16G  169M   15G   2% /home
[root@media smb]#


Step 2: Grow / by 30G. We don't need to unmount it, because we are growing this file system.


[root@media smb]# lvresize -r -L +30G /dev/mapper/vg_media-lv_root
  Extending logical volume lv_root to 80.00 GiB
  Logical volume lv_root successfully resized
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_media-lv_root is mounted on /; on-line resizing required
old desc_blocks = 4, new_desc_blocks = 5
Performing an on-line resize of /dev/mapper/vg_media-lv_root to 20971520 (4k) blocks.
The filesystem on /dev/mapper/vg_media-lv_root is now 20971520 blocks long.

[root@media smb]#

Now that we have grown / by 30G, let's check that df reflects the change.


[root@media smb]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_media-lv_root
                       79G   15G   61G  20% /
tmpfs                 935M     0  935M   0% /dev/shm
/dev/sda1             485M   40M  420M   9% /boot
/dev/mapper/vg_media-lv_home
                       16G  169M   15G   2% /home
[root@media smb]#

As you can see, we successfully grew / and shrank /home. We didn't need to turn the server off; if we had anything running out of /home, we would simply have needed to stop it, while anything running on / could have kept running. Compare that to what you would have had to do without LVM. Good, now I assume you will use LVM going forward.

Other neat shit you can do with LVM:
  • Use LVM snapshots to create backups, e.g. MyLVMBackup (quick sketch after this list).
  • Use CLVM (the clustered logical volume manager) to share volume groups between multiple boxen.
  • Use LVM to mirror/stripe file systems as a pseudo software RAID.
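Here is the snapshot sketch promised in the first bullet (hypothetical sizes, paths, and snapshot name; the snapshot only needs enough space for changes written while it exists):

 lvcreate -s -L 5G -n lv_home_snap /dev/vg_media/lv_home   # point-in-time copy of lv_home
 mkdir -p /mnt/snap
 mount -o ro /dev/vg_media/lv_home_snap /mnt/snap          # mount the frozen view read-only
 tar czf /backup/home.tar.gz -C /mnt/snap .                # back it up at your leisure
 umount /mnt/snap
 lvremove -f /dev/vg_media/lv_home_snap                    # drop the snapshot when done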

Stop the AXFR!

Recently I have been watching my log files a lot more closely. While doing so, I noticed A LOT of interesting things. The first I will mention is: for the love of god, just block the entire Chinese IP space. The vast majority of password-cracking attempts come from China, and reporting them accomplishes absolutely nothing.

But the other thing I noticed was this:
Feb 10 22:55:37 core named[18088]: client 83.117.170.114#2478: transfer of 'fazey.org/IN': AXFR started
Feb 10 22:55:37 core named[18088]: client 83.117.170.114#2478: transfer of 'fazey.org/IN': AXFR ended

Wait a minute... did someone just request a zone transfer, and did my DNS server hand it over?

So, what is a zone transfer (AXFR)?
When you have a slave DNS server, it periodically pulls your zones to keep itself up to date. The mechanism behind that is the zone transfer, which hands over every record in a given zone. By default it is allowed from anywhere, so if you configured bind/named yourself it is easy to miss. Oddly enough, it is very commonly missed.

So how do we do it?
You run dig against a nameserver for the domain and append AXFR to the request. If successful, the output will look like this:

[root@core log]# dig @ns1.fazey.org fazey.org AXFR

; <<>> DiG 9.6.2-P2-RedHat-9.6.2-4.P2.fc11 <<>> @ns1.fazey.org fazey.org AXFR
; (1 server found)
;; global options: +cmd
fazey.org. 86400 IN SOA fazey.org. root.fazey.org. 2012040905 28800 7200 604800 86400
fazey.org. 86400 IN TXT "v=spf1 ip4:64.85.161.114 -all"
fazey.org. 86400 IN NS ns1.fazey.org.
fazey.org. 86400 IN NS ns2.fazey.org.
fazey.org. 86400 IN MX 10 mail.fazey.org.
fazey.org. 86400 IN A 64.85.161.114
core.fazey.org. 86400 IN A 64.85.161.115
mail.fazey.org. 86400 IN A 64.85.161.115
ns1.fazey.org. 86400 IN A 64.85.161.114
ns2.fazey.org. 86400 IN A 64.85.161.115
project.fazey.org. 86400 IN A 64.85.161.115
www.fazey.org. 86400 IN CNAME fazey.org.
fazey.org. 86400 IN SOA fazey.org. root.fazey.org. 2012040905 28800 7200 604800 86400
;; Query time: 5 msec
;; SERVER: 64.85.161.114#53(64.85.161.114)
;; WHEN: Tue Feb 12 18:50:50 2013
;; XFR size: 26 records (messages 1, bytes 619)

[root@core log]#

As you can see, being able to dump my entire zone file would make doing recon a breeze for any attacker. 

So how do we fix it?
Edit your /etc/named.conf; inside the options block, add the two directives shown at the bottom here:
options {
        directory "/var/named";
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside . trust-anchor dlv.isc.org.;
        allow-transfer { none; };
        version "[null]";
};

The two options we add are:
1. allow-transfer { none; };
2. version "[null]";

These two directives disable version requests and zone transfers. If you have a slave DNS server, put its IP where it says none; that lets your slave keep functioning without leaving you wide open to transfers from anyone. I recommend everyone use these options in their global options config.
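For example, if your slave sat at 192.0.2.53 (a made-up documentation address), the line would become:

 allow-transfer { 192.0.2.53; };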

So, we added our config, and we restarted named. Now, lets take a look at what the request returns:

[root@core log]# dig @ns1.fazey.org fazey.org AXFR

; <<>> DiG 9.6.2-P2-RedHat-9.6.2-4.P2.fc11 <<>> @ns1.fazey.org fazey.org AXFR
; (1 server found)
;; global options: +cmd
; Transfer failed.
[root@core log]# 

As you can see, we now get the desired effect: the transfer is rejected. Do yourself a favor and update your configuration before you notice foreign IPs attempting zone transfers.





Prelude LML: Windows server 2008 rules

At a previous job, we were using Prelude IDS. The product suffered a bit due to a change in management, and the Windows Server 2008 rules never seemed to get published. So Nick started them, and I finished them after he moved on to a new job. There is no point in others suffering through writing windows2008.rules from scratch.

This is by no means complete. It covers user logon/logoff and logon successful/failed. While you can definitely expand it to match additional Event IDs, this was enough for what I needed. Feel free to build on it.

lftp: the file transfer swiss army knife

Working in this industry, you will have to transfer files at one time or another, and I'm sure in a variety of different ways. For the most part everyone has their favorites for each situation, but I would prefer to have one utility on all servers to handle all of those situations. So my choice has to be a badass.

Let's look at some requirements:
1. support for all of my common protocols
2. easy and logical navigation
3. parallel threads!
4. full command line usage.

lftp has been the only thing I've come across that meets my criteria. Let me prove it by giving you some examples:

Example 1:
At some point, everyone has had to mirror a directory that was being served by Apache with directory indexing turned on. Something like http://pkgs.repoforge.org/bsc/.

Let's demonstrate lftp's versatility.

debian:/tmp/outgoing# lftp
lftp :~> open http://10.100.15.10/log/dists/lenny-20120514/binary-i386/
cd ok, cwd=/log/dists/lenny-20120514/binary-i386                             
lftp 10.100.15.10:/log/dists/lenny-20120514/binary-i386> mirror
Total: 1 directory, 69 files, 0 symlinks                                                                  
New: 69 files, 0 symlinks
180374247 bytes transferred in 136 seconds (1.27M/s)
lftp 10.100.15.10:/log/dists/lenny-20120514/binary-i386>

This is downloading a Debian package tree from a local box. But look at the protocol... http://. I'm able to treat a web page like a CLI. It does lack the depth to do anything crazy, though; as far as I could see, there isn't a one-liner like "mirror http://10.100.15.10/log/dists/lenny-20120514/binary-i386/a*".
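That said, mirror does accept include/exclude filters once you are cd'd in; something like this should narrow the transfer to names starting with "a" (hedging slightly; -I is mirror's include-glob option):

 mirror -I 'a*'   # only mirror entries whose names match the glob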

Other ways to solve this? Yes. I could have done a fancy curl request, stripped the HTML with links/lynx, and then wgetted the result.

Example 2:

You need an entire directory copied from your server to another server (a put)... but you only have SSH.

[root@core ~]# ls -l lame
total 300
-rw-r--r-- 1 root root 100002 2013-01-13 05:32 a
-rw-r--r-- 1 root root 100003 2013-01-13 05:32 b
-rw-r--r-- 1 root root 100004 2013-01-13 05:32 c
[root@core ~]#

Now let's go ahead and log in over SFTP.

[root@core ~]# lftp sftp://root@g1.ragenetworks.com
Password: 
lftp root@g1.ragenetworks.com:~> mirror -R --parallel=3 lame 
Total: 1 directory, 3 files, 0 symlinks
New: 3 files, 0 symlinks
300009 bytes transferred in 2 seconds (133.2K/s)
lftp root@g1.ragenetworks.com:~>  

What I did was reverse (-R) mirror the directory; in other words, I put the directory from my server onto the remote box. But I also did it using parallel threads (--parallel=N).


Example 3:

Along with command-line usage comes scriptability. There are many times you need to simply back up a directory with a cron job. This time we are going to use FTP and script our remote commands in a file.

[root@core ~]# cat script-file                        
open ftp://username:password@fazey.org
mirror -R /root/local /home/james/remote
exit
[root@core ~]#  

Now we call lftp with the "-f" flag to give it a script input. 

[root@core ~]# lftp -f script-file 
at 80527360 (80%) 35.12M/s eta:1s [Sending data]
...
[root@core ~]# 
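Wiring that into cron is then a one-liner (a sketch; assuming the script lives at /root/script-file and you want a 2 AM run):

 0 2 * * * /usr/bin/lftp -f /root/script-file >/dev/null 2>&1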


As you can see, lftp is a hell of a tool. 

N2N VPN

After having a bunch of success with PF_RING, I decided to check out some of ntop.org's other creations. One I came across that I had a use for was N2N. Basically, you run a supernode daemon and create tunnels to it from your edge nodes. The setup is about as simple as it can be.

It is pretty much exactly what the manual says:

Set up your supernode (a relay, for lack of a better phrase):
supernode -l 9939

Then all you need for an edge node is:
edge -a 10.10.2.1 -c some_community -k some_key -l <supernode ip>:9939

Next edge node:
edge -a 10.10.2.2 -c some_community -k some_key -l <supernode ip>:9939

Then from either node, you should be able to reach the other.
[root@core ~]# ping 10.10.2.1
PING 10.10.2.1 (10.10.2.1) 56(84) bytes of data.
64 bytes from 10.10.2.1: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.10.2.1: icmp_seq=2 ttl=64 time=0.070 ms
64 bytes from 10.10.2.1: icmp_seq=3 ttl=64 time=0.063 ms
^C
--- 10.10.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2496ms
rtt min/avg/max/mdev = 0.063/0.068/0.073/0.010 ms
[root@core ~]#


That's it. Seriously... Now, if you want it to persist, you need to make init scripts for supernode and edge. I'm also not a huge fan of the key sitting there visible in the process list on the edge servers.

[root@core ~]# ps aux | grep edge
root      2367  0.0  0.1   3644   724 ?        Ss   Aug30   0:33 edge -a 10.10.2.1 -c HOME -k superkey -l g1.poop.com:4099
root     22730  0.0  0.1   4200   728 pts/0    S+   20:04   0:00 grep edge
[root@core ~]#

That is kind of blatant to just leave lying around; it pretty much screams its key in the process list. So I would wrap it in a shell script or something, so it's a little less obvious.
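A minimal wrapper might look like this (hypothetical paths; keeping the key in a root-only file at least gets it out of init scripts and shell history, though edge's own argv will still show it):

 #!/bin/sh
 # read the key from a root-owned, mode-600 file instead of hardcoding it
 KEY=$(cat /etc/n2n/edge.key)
 exec edge -a 10.10.2.1 -c HOME -k "$KEY" -l g1.poop.com:4099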

reading a config file with perl

Recently I have had to do a lot of work with Perl. I have always been curious about Perl: it's a scripting language, but its performance never ceases to amaze me when it's done right.

I absolutely can't take credit for this. It just happens to be the most useful method I have found for reading a config file.

my $filename = '/etc/configfile.conf';
my %pref;
open(CONFIG, '<', $filename) or die "Cannot open $filename: $!";
while (<CONFIG>) {
    chomp;                  # no newline
    s/#.*//;                # no comments
    s/^\s+//;               # no leading white
    s/\s+$//;               # no trailing white
    next unless length;     # leftovers
    my ($var, $value) = split(/\s*=\s*/, $_, 2);
    $pref{$var} = $value;
}
close(CONFIG);

This loads every line that is not blank or a comment, and stores each one as:
$pref{"directive"} = value

For example... if I had a config file containing:

bash-# cat /etc/configfile.conf
# zomg list
poop = 11
kitten = 8

bash-#

Then:

print $pref{"poop"};

would print 11.

building a debian package

Coming from an RPM-based background, switching to dpkg required a little research. This is by no means a comprehensive guide; in fact, this is the simplest approach I could come up with.

Debian packages have a directory called debian (in source packages) or DEBIAN (inside the binary package itself). This directory contains several key files that get read and set properties on our package. For now, let's just create the directory.

debian:~# mkdir /tmp/example
debian:~# mkdir /tmp/example/DEBIAN


The way we put files in our package is by making their paths relative to our build directory. In this case, my build directory is /tmp/example/, and I want the package to install a file at /etc/init.d/balls. Therefore I put the file at /tmp/example/etc/init.d/balls.
 
debian:~# mkdir -p /tmp/example/etc/init.d
debian:~# echo "omg dpkgs yay" >> balls
debian:~# cat balls
omg dpkgs yay
debian:~# mv balls /tmp/example/etc/init.d/
debian:~#


Now, let's go ahead and make some of those important build files we were talking about. One is the control file; it contains your basic package information.

debian:~# vim /tmp/example/DEBIAN/control
debian:~# cat /tmp/example/DEBIAN/control
Package: example-balls
Version: 0.1
Provides: example-balls
Section: headers
Priority: optional
Architecture: i386
Essential: no
Depends:
Installed-Size: 1M
Maintainer: James <james@thisurl.com>
Description: some example
debian:~#

You should also create the following two files properly, but I just touched them for simplicity's sake.

debian:~# touch /tmp/example/DEBIAN/changelog
debian:~# touch /tmp/example/DEBIAN/copyright
debian:~#


Now, let's go ahead and build our package.

debian:/home/builduser/test/example# dpkg -b /tmp/example/ example-balls_0.1-1_i386.deb
dpkg-deb: building package `example-balls' in `example-balls_0.1-1_i386.deb'.
debian:/home/builduser/test/example#


Now, let's watch our package in action!
debian:/home/builduser/test/example# ls -l /etc/init.d/balls
ls: cannot access /etc/init.d/balls: No such file or directory
debian:/home/builduser/test/example# dpkg -i example-balls_0.1-1_i386.deb
Selecting previously deselected package example-balls.
(Reading database ... 55760 files and directories currently installed.)
Unpacking example-balls (from example-balls_0.1-1_i386.deb) ...
Setting up example-balls (0.1) ...
debian:/home/builduser/test/example# ls -l /etc/init.d/balls
-rw-r--r-- 1 root root 14 2012-08-03 12:55 /etc/init.d/balls
debian:/home/builduser/test/example#


If you need to actually build anything, there is a ./debian/rules file. It is typically auto-generated, and you then customize it to your needs based on your ./configure line. The concept is exactly the same: the build runs under fakeroot so file ownership gets recorded as root, and 'make install' is pointed at a staging directory under your build tree, so every path ends up relative to your package root. When dpkg-buildpackage gets to the packaging step, it bundles up whatever binaries landed in that staging directory.
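For reference, on debhelper 7 or newer the whole rules file can be as small as this (a sketch; older packages spell out each dh_* call instead):

#!/usr/bin/make -f
# the dh sequencer runs every standard build/install/package step
%:
	dh $@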

 

aus-snmpd in Adaptec StorMan package

While working on a box, I noticed that the aus-snmpd binary wouldn't stay running on some machines. This is problematic, because snmpd uses the aus-snmpd binary to remotely poll the RAID card for failed drives.

I had previously created an init script for it; just in case anyone wants it, here. It should work on RHEL/CentOS/Debian/Ubuntu.

--
I ran the binary with strace, and then poked through the log to discover:

10497 select(8, [3 5 7], NULL, NULL, {0, 999999}) = 1 (in [7], left {0, 999999})
10497 getsockname(7, {sa_family=AF_FILE, path=@""}, [9579780311546331138]) = 0
10497 recvfrom(7, "\1\22\0\0\221\1\0\0\0\0\0\0\263\316\330x \0\0\0F\323\250\25\0\0\0\0\5\0\0\0"..., 65536, 0, NULL, NULL) = 52
10497 times({tms_utime=2, tms_stime=0, tms_cutime=0, tms_cstime=0}) = 938907755
10497 times({tms_utime=2, tms_stime=0, tms_cutime=0, tms_cstime=0}) = 938907755
10497 semget(IPC_PRIVATE, 1, IPC_CREAT|0) = -1 ENOSPC (No space left on device) <--------- Right there.
10497 exit_group(0)

So the binary dies because semget() cannot allocate another semaphore set: the kernel's limit has been exhausted, most likely by stale sets nothing ever cleaned up. The solution is to clear them:
for i in `ipcs -s | awk '{print $2}' | egrep '[[:digit:]]{3,6}'`; do ipcrm -s $i ; done

Then restart aus-snmpd, and it should stay started this time.