Archive for the ‘General’ Category

The Dying Art of C Programming

June 21st, 2014

Most people think of C as a cruel mistress. To the contrary, C is a submissive slave. C will do exactly what you tell it to do. I’ve found that people that learn high level languages like PHP, Java, Ruby, and so on hate C because of how much work it is.

I recently came across a Ruby programmer that didn’t know C. The very idea of a programmer that doesn’t know C boggles my mind. He had a problem in Ruby which I solved by downloading the source code to Ruby (which is written in C) and figuring out what Ruby was actually doing.

The fact of the matter is that, much like the old days when “real” programming was ASM, C abstracts the machine language away at the lowest possible level. C is so awesome that it doesn’t even judge you on architecture. You can embed ASM directly into your C (although you probably shouldn’t).

So why do so many people hate C? It’s because C lets you fuck yourself over. And it does so happily. For instance, this is a perfectly valid C program that will compile and execute properly:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char *argv[]) {
	void *string;
	string = malloc(1);
	strcpy(string, "are you kidding me?  this works?");
	printf("%s\n", string);
}

Feel free to compile and run it yourself. It will happily run. But any C programmer will immediately tell you what is so horrendously wrong with this program. If you don’t understand why this program is horrifically bad — or more importantly why it works at all — then you cannot understand how the underlying system works.

Which brings me to the point: If you do not understand how the underlying system of your programming language works, then you will never be a good programmer.

So let’s look at the above program and why it’s so bad:

The includes grab the header files. These files define the functions you’re going to use. You can’t just randomly make a function in C and call it. It has to be defined. If you try to call an undefined function in C, the compiler will bitch and moan about it.
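
As a rough sketch of what a declaration buys you (add() and subtract() are made-up names for illustration, not anything from a real library):

#include <stdio.h>
/* a header is basically a file full of declarations like this one */
int add(int a, int b);

int main(int argc, char *argv[]) {
	printf("%d\n", add(2, 3));   /* fine: add() was declared above */
	/* subtract(2, 3); */        /* the compiler would bitch about this one */
	return 0;
}

int add(int a, int b) {
	return a + b;
}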

Next, we have the “main” function. The main function is always what gets executed first, and it is always defined as exactly:

int main(int argc, char *argv[])

Some compilers will only issue warnings when you implement main() differently, but if there is one thing you need to know about C, it is that main() is always defined this way. “int main(int argc, char *argv[])” is basically muscle memory for any C programmer. As an example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
main() {
        void *string;
        string = malloc(1);
        strcpy(string, "are you kidding me?  this works?");
        printf("%s\n", string);
}

The compiler will accept this, but it will probably complain about it. Feel free to compile and run it yourself: the compiler grumbles, but the program still runs.

Compilers aside, why is this program bad in the first place? It’s because I’ve allocated 1 byte of memory and then I copied 33 bytes of data into that memory space. At this point, the astute reader would say “are you kidding me? this works?” is only 32 bytes! The string library in C always adds a null character to the end. So your 32 byte string actually takes up 33 bytes of memory.

So what the hell, C? How can I allocate 1 byte of memory and you let me put 33 bytes into it? Because like the honey badger, C doesn’t give a fuck.
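
If you want to see the extra byte for yourself, here is a quick sketch; strlen() does not count the terminating null, so the storage you actually need is always strlen() + 1:

#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[]) {
	char *string = "are you kidding me?  this works?";
	printf("characters  : %d\n", (int)strlen(string));      /* 32 */
	printf("bytes needed: %d\n", (int)strlen(string) + 1);  /* 33, counting the '\0' */
}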

It depends on the architecture and the implementation of malloc(), but you generally cannot allocate a single byte of memory. When you call malloc(), the allocator hands you memory carved out of a larger region (at least a page), which is going to be more than 1 byte. If you want to find out how far past your 1 byte allocation you can actually write before the system notices, a simple C program can tell you:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char *argv[]) {
	char *string;
	char *ptr;
	string = malloc(1);
	for (ptr = string; ptr < string+1024*1024; ptr++) {
		printf("%d\n", (int)(ptr-string));
		*ptr = 0;
	}
}

This will run until it crashes. When it crashes, the last number printed tells you roughly where that writable region ends. This awesome little program is also a great example of another great feature of C: pointers. Lots of high level languages hint at pointers with emulated pass by reference and whatnot, but this is where C shines. I don't want to get into how pointers work, but if you look at the above code and are confused, then you don't understand pointers.

All programmers of any language should know C. Even if you hate how it doesn't "do things for you." Even if it annoys you that there are no "objects" (hint: there are, but not like you're used to). Not learning C as a programmer is like hobbling yourself intentionally.

General

The Greatest AWS Advertisement Ever

February 22nd, 2013

John,

  1. There is going to be a lot of equipment coming into the facility. I’d guess somewhere around 70 pieces of equipment total. Of those, most will be liquidated. I spoke with Chris about that yesterday and he’s aware of what’s happening.
  2. Either a rack or a cabinet is fine. It does not matter to us.
  3. All equipment to be racked will have rail kits and they are standard 4 post 19″ equipment
  4. To start, we’re only asking for a single gigabit ethernet drop. We’ll run our own distribution switch from that drop (1U 19″ rack mount cisco 3560 48 port gig-e switch). I think Matthew can provide the full cross-connect information for that drop. All equipment in the rack will then connect to that switch for networking.
  5. I’m not sure how you guys do power distribution. All of the servers have dual power supplies. We are looking to rack no more than 15 servers total. This means we’re going to need about 30 power outlets (standard 110V) and it should not draw more than 40 amps total. I assume you run redundant power circuits, in which case if you could run two power drops with 16 outlet PDUs of 30 amps each, that would be more than enough. If needed, we can provide managed PDUs as well. We have about 16 8 port 20 amp APC PDUs and 2 16 port 30 amp APC PDUs that will be available. I would prefer having 2 30 amp drops if that’s possible.
  6. Our team will work directly with asset management to get everything straightened out. That’s probably going to be a lengthy process.
  7. We are requesting — if possible — one piece of equipment be moved from our Equinix Ashburn location by IT so we can get started building our infrastructure. Equipment from our Equinix San Jose datacenter will be shipped by a vendor on or about March 4th. It should arrive at the Sterling location in a crate that same week. Equipment from our QTS Atlanta datacenter will be unracked and moved to the Atlanta office location. Equipment from our Equinix Ashburn will be unracked and moved by Igor and Shawn on the week of the 11th. Again, most of this equipment will be liquidated.

  8. Igor, Shawn, and I are the only ones with access to the current Ashburn Equinix datacenter. However, we can contact the facility and grant escorted access to whoever is going to be moving the equipment. If need be, we can even have the datacenter remote hands staff pull the server and have it ready for pickup.

I’m available for a meeting at any time.

# ./create-instance

General

Sync A Large Directory Structure to S3

October 23rd, 2012

There are a handful of tools out there that handle command line operations for S3. The most popular (I think) is s3tools’ s3cmd. However, we have a filesystem that we would like to keep in sync with S3 while we are working on migrating. s3cmd has a sync command that works really well for filesystems with a small to medium number of files (not total file size… total file count). We have a filesystem that contains many millions of files, which can be problematic for programs like s3cmd (even rsync has issues with this many files). The problem (or feature) is that they tend to calculate the changes for everything recursively all at once, and only then start performing operations.

If you do not need this feature, it takes a lot less memory to calculate all the changes on a directory by directory basis. Of course, if you’re syncing a single directory with millions of files, you have bigger problems anyway and this won’t help. Luckily, we tend to split up the files into categorized directories.

So, I wrote this very simple little PHP script that keeps S3 in sync with a local directory structure. It shouldn’t be too hard to rewrite this in just about any language. It’s not complicated at all.

IMPORTANT NOTES:

  • This WILL dereference symlinks. So make sure you do not have recursive symlinks in your directory structure. For example: “ln -s . recurseme” would be bad
  • The local filesystem is always authoritative. If it doesn’t exist locally, it will get deleted from S3
  • It does not compare MD5 sums (even though you can see that I thought about it in the code)
  • It does not update the S3 side timestamp with the local timestamp and will only sync if the file size is different or the local timestamp is later than the S3 timestamp
#!/usr/bin/php
<?php
require_once('AWSSDKforPHP/sdk.class.php');

$s3 = new AmazonS3();
$basepath = '/path/to/sync';
$bucket = 'your-bucket-name';

function getDirectoryList($localdir) {
    global $directoryList;

    /*
    // this is useful for testing
    if (substr_count($localdir, '/') > 2) {
        return;
    }
    */
    $d = opendir($localdir);
    while (($ent = readdir($d)) !== false) {
        if ($ent == '.' || $ent == '..') {
            continue;
        }
        if (is_dir($localdir . '/' . $ent)) {
            $directoryList[] = $localdir . '/' . $ent;
            getDirectoryList($localdir . '/' . $ent);
        }
    }
    closedir($d);
}

function syncDirectory($basepath, $localdir) {
    global $s3;

    $remotedir = preg_replace('%^' . $basepath . '/?%', '', $localdir);
    echo "getting s3 file list for $remotedir\n";
    $s3filelist = getRemoteDirectory($remotedir);
    echo "getting local file list for $localdir\n";
    $localfilelist = getLocalDirectory($basepath, $localdir);
    echo "calculating differences\n";
    foreach ($localfilelist as $key => $linfo) {
        if (! array_key_exists($key, $s3filelist)) {
            syncFile($basepath . '/' . $key, $key);
            continue;
        }
        $rinfo = $s3filelist[$key];
        if ($linfo['lastmodified'] > $rinfo['lastmodified']) {
            syncFile($basepath . '/' . $key, $key);
            continue;
        }
        if ($linfo['size'] != $rinfo['size']) {
            syncFile($basepath . '/' . $key, $key);
            continue;
        }
    }
    foreach ($s3filelist as $key => $rinfo) {
        if (! array_key_exists($key, $localfilelist)) {
            deleteFile($key);
            continue;
        }
    }
}

function getRemoteDirectory($remotedir) {
    global $s3, $bucket;

    $s3filelist = array();
    do {
        $args['delimiter'] = '/';
        if (strlen($remotedir)) {
            $args['prefix'] = $remotedir . '/';
        }
        if (isset($lastkey)) {
            $args['marker'] = $lastkey;
        }
        $response = $s3->list_objects($bucket, $args);
        if (! $response->isOK()) {
            echo "error: failed to get S3 object list for static $remotedir\n";
            return false;
        }
        foreach ($response->body->Contents as $s3object) {
            $s3filelist[(string)$s3object->Key] = array(
                    'md5' => preg_replace('/^\"(.*)\"$/', '$1',
                        (string)$s3object->ETag),
                    'size' => (string)$s3object->Size,
                    'lastmodified' => strtotime((string)$s3object->LastModified),
                    );
            $lastkey = (string)$s3object->Key;
        }
        $isTruncated = (string)$response->body->IsTruncated;
        unset($response);
    } while ($isTruncated == 'true');
    return $s3filelist;
}

function getLocalDirectory($basepath, $localdir) {
    $d = opendir($localdir);
    if (! $d) {
        return false;
    }
    $localfilelist = array();
    while (($ent = readdir($d)) !== false) {
        if ($ent == '.' || $ent == '..') {
            continue;
        }
        if (is_dir($localdir . '/' . $ent)) {
            continue;
        }
        $localfile = $localdir . '/' . $ent;
        $key = preg_replace('%^' . $basepath . '/?%', '', $localfile);
        $localfilelist[$key] = array(
                'md5' => ! empty($GLOBALS['checkmd5']) ? md5_file($localfile) : null,
                'size' => filesize($localfile),
                'lastmodified' => filemtime($localfile),
                );
    }
    closedir($d);
    return $localfilelist;
}

function syncFile($localfile, $remotefile) {
    global $s3, $bucket;

    echo "     sync  : $localfile -> s3://$bucket/$remotefile\n";
    try {
        $response = $s3->create_object($bucket, $remotefile,
                array('fileUpload' => $localfile));
        if (! $response->isOK()) {
            echo "error: failed to sync $localfile\n";
            echo $response->body->Code . ": " . $response->body->Message . "\n";
        }
    } catch (Exception $e) {
        echo "error: failed to sync $localfile\n";
        echo $e->getMessage() . "\n";
    }
}

function deleteFile($remotefile) {
    global $s3, $bucket;

    echo "     delete: s3://$bucket/$remotefile\n";
    try {
        $response = $s3->delete_object($bucket, $remotefile);
        if (! $response->isOK()) {
            echo "error: failed to delete s3://$bucket/$remotefile:\n";
            echo $response->body->Code . ": " . $response->body->Message . "\n";
        }
    } catch (Exception $e) {
        echo "error: failed to delete s3://$bucket/$remotefile\n";
        echo $e->getMessage() . "\n";
    }
}

$directoryList = array();
getDirectoryList($basepath);
foreach ($directoryList as $localdir) {
    syncDirectory($basepath, $localdir);
}

?>

General

Ping Is Too Pessimistic

June 23rd, 2012

There are many billions of packets flying across the Internet every single second. The fact that a packet can get from one host in one part of the world to another host in another part of the world in a matter of milliseconds is absolutely amazing. The ping utility has long been used as a way of checking that this awesomeness works… but unfortunately, it is very pessimistic.

To think that I can send 56 bytes of nonsense data from my machine to any Internet-connected machine on the planet and have ping tell me “0% packet loss” seems rather depressing. Instead, ping should happily exclaim “100% packets found!”

I wrote a little patch that makes ping a happier utility, and the user benefits from seeing just how awesome the Internet is (unless, of course, some packets were not found). ping should be a “glass is half-full” kinda program if you ask me.

eric@lolbuntu:/tmp/iputils-20071127.new$ sudo ./ping -c 3 www.google.com
PING www.l.google.com (74.125.45.147) 56(84) bytes of data.
64 bytes from yx-in-f147.1e100.net (74.125.45.147): icmp_seq=1 ttl=51 time=51.5 ms
64 bytes from yx-in-f147.1e100.net (74.125.45.147): icmp_seq=2 ttl=51 time=50.0 ms
64 bytes from yx-in-f147.1e100.net (74.125.45.147): icmp_seq=3 ttl=51 time=49.7 ms

--- www.l.google.com ping statistics ---
3 packets transmitted, 3 received, 100% packets found, time 2002ms
rtt min/avg/max/mdev = 49.792/50.433/51.504/0.762 ms

Here’s the patch:

--- iputils-20071127/ping_common.c	2007-12-09 20:56:22.000000000 -0700
+++ iputils-20071127.new/ping_common.c	2012-06-23 00:16:47.838210690 -0600
@@ -795,9 +795,9 @@
 	if (nerrors)
 		printf(", +%ld errors", nerrors);
 	if (ntransmitted) {
-		printf(", %d%% packet loss",
-		       (int) ((((long long)(ntransmitted - nreceived)) * 100) /
-			      ntransmitted));
+		printf(", %d%% packets found",
+		       (int) (100 - ((((long long)(ntransmitted - nreceived)) * 100) /
+			      ntransmitted)));
 		printf(", time %ldms", 1000*tv.tv_sec+tv.tv_usec/1000);
 	}
 	putchar('\n');

General

A Proper Post-Mortem

June 17th, 2012

There are three companies that I really enjoy doing business with. They are USAA, Amazon (on the consumer side), and Internap (on the tech side).

Let’s skip USAA, because it isn’t tech oriented and let’s look at Internap first. When there is a failure in Internap’s service, 9 times out of 10 they tell me before I realize it. Most of the time, these failures are transient and I would have never even known there was a problem had Internap not sent me an email giving me the info.

Here’s an email I got from Internap on May 30th, 2012:

At approximately 12:18 EDT on May 30, 2012 we were notified that the BGP session for our Verio provider in the ACS PNAP (Atlanta, GA) was in an active (down) state. The session recovered at 12:22 EDT and has been stable since that time.

During this time period, you may have noticed some sub-optimal routing and slight latency or packet loss as traffic destined for the Verio network was re-routed through our other providers in the PNAP. Once the session recovered, you may have noticed sub-optimal routing and slight latency again, as traffic was re-routed back onto Verio.

This type of outage is routine and isn’t a big deal. Internap lost an upstream provider at their PNAP. So what? I don’t really care. I pay them to deal with this and I experienced no downtime. But what happens when Internap itself fucks up? We’ve had two major Internap outages: one was an internal error by a sysadmin, and the other was a faulty Cisco command module. Most importantly, we received a full RFO (reason for outage) each time.

Now that we’ve moved to AWS, the June 14th, 2012 outage RFO from Amazon makes me incredibly happy. From Amazon:

We would like to share some detail about the Amazon Elastic Compute Cloud (EC2) service event last night when power was lost to some EC2 instances and Amazon Elastic Block Store (EBS) volumes in a single Availability Zone in the US East Region.

This is the most beautiful thing I can imagine. They are not hiding the failures. They are admitting that it failed and they are giving both the reason why it failed and what they’re going to do to prevent future failures. The best part of this is that I didn’t have to wake up and deal with this all night. This is why IaaS is a good idea… as long as this communication continues.

General

Introduction To Syslog Log Levels/Priorities

June 7th, 2012

A very common question about syslog is how to decide the appropriate log priority (a.k.a. log level) for a specific log message. Deciding on the correct priority depends on a number of different factors.

Syslog allows you to define a facility and a log level for each individual log message. The syslog “facility” is used to separate out log messages by application or by function. For example, email logs are normally logged to the LOG_MAIL facility. This groups all email related logs together. Your systems administrator will assign you a log facility to use. You should not use an arbitrary facility.
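
As a quick sketch (the program name here is a made-up placeholder), the facility is set once with openlog() and every syslog() call after that is tagged with it:

#include <syslog.h>
int main(int argc, char *argv[]) {
	/* a hypothetical mail-handling program, so it logs to LOG_MAIL */
	openlog("mymailer", LOG_PID, LOG_MAIL);
	syslog(LOG_INFO, "delivered %d messages", 42);
	closelog();
}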

Log priorities are not as cut and dried. For custom applications, developers and systems administrators need to work together to define what constitutes a certain priority for a message. In the most generic terms possible, the syslog levels are defined as:

       LOG_EMERG      system is unusable
       LOG_ALERT      action must be taken immediately
       LOG_CRIT       critical conditions
       LOG_ERR        error conditions
       LOG_WARNING    warning conditions
       LOG_NOTICE     normal, but significant, condition
       LOG_INFO       informational message
       LOG_DEBUG      debug-level message

Before digging into specifics on the priority definitions, let me address the developers directly. These priorities are defined in syslog.h as follows:

#define LOG_EMERG       0       /* system is unusable */
#define LOG_ALERT       1       /* action must be taken immediately */
#define LOG_CRIT        2       /* critical conditions */
#define LOG_ERR         3       /* error conditions */
#define LOG_WARNING     4       /* warning conditions */
#define LOG_NOTICE      5       /* normal but significant condition */
#define LOG_INFO        6       /* informational */
#define LOG_DEBUG       7       /* debug-level messages */

Normally, your application should call syslog() from some sort of internal logging function/method. This allows you to set an application-wide maximum log level. For example, on a production system, you do not want to waste CPU cycles generating debug error messages. Your application’s logging function should specify a maximum log level and filter the messages internally via some configuration variable.

For instance, in production, you may only care about messages of priority LOG_ERR and higher. So you would specify via some variable that your maximum logging level is LOG_ERR (3) and if a message comes into your logging function with a priority of 4 or greater, it is ignored.
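
Here is a minimal sketch of that kind of wrapper. The names (app_log, max_log_level, "myapp", LOG_LOCAL0) are placeholders, and vsyslog() is a common BSD/glibc extension rather than part of the C standard:

#include <stdarg.h>
#include <syslog.h>

/* hypothetical application-wide maximum level, e.g. read from a config file */
static int max_log_level = LOG_ERR;

static void app_log(int priority, const char *fmt, ...) {
	va_list ap;

	if (priority > max_log_level)
		return;   /* anything numerically greater than LOG_ERR (WARNING..DEBUG) is dropped */
	va_start(ap, fmt);
	vsyslog(priority, fmt, ap);
	va_end(ap);
}

int main(int argc, char *argv[]) {
	openlog("myapp", LOG_PID, LOG_LOCAL0);
	app_log(LOG_ERR, "remote connection failed: %s", "connection refused");
	app_log(LOG_DEBUG, "this one never reaches syslog in production");
	closelog();
}

The C library can also do this filtering for you with setlogmask(LOG_UPTO(LOG_ERR)), but doing the check in your own wrapper lets you skip the message formatting work entirely.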

The syslog server has the ability to filter messages of specific priorities as well. Systems administrators may choose to log only LOG_ERR or higher. So if the application is generating LOG_DEBUG messages and the syslog server is only logging LOG_ERR or higher, this is just wasted processing time for the application and the syslog server, and possibly wasted network I/O as well.

But the main question here is what is the most appropriate log level/priority for a particular message. Again, this is a hard question to answer because it can vary wildly by application, but in general, I would define them as such:

  1. LOG_EMERG – The application has completely crashed and is no longer functioning. Normally, this will generate a message on the console as well as all root terminals. This is the most serious error possible. This should not normally be used by applications outside of the system level (filesystems, kernel, etc.). This usually means the entire system has crashed.
  2. LOG_ALERT – The application is unstable and a crash is imminent. This will generate a message on the console and on root terminals. This should not normally be used by applications outside of the system level (filesystems, kernel, etc.).
  3. LOG_CRIT – A serious error occurred during application execution. Someone (systems administrators and/or developers) should be notified and should take action to correct the issue.
  4. LOG_ERR – An error occurred that should be logged, however it is not critical. The error may be transient by nature, but it should be logged to help debug future problems via error message trending. For example, if a connection to a remote server failed, but it will be retried automatically and is fairly self-healing, it is not critical. But if it fails every night at 2AM, you can look through the logs to find the trend.
  5. LOG_WARNING – The application encountered a situation that it was not expecting, but it can continue. The application should log the unexpected condition and continue on.
  6. LOG_NOTICE – The application has detected a situation that it was aware of; it can continue, but the condition may indicate that something is not quite right.
  7. LOG_INFO – For completely informational purposes, the application is simply logging what it is doing. This is useful when trying to find out where an error message is occurring during code execution.
  8. LOG_DEBUG – Detailed error messages describing the exact state of internal variables that may be helpful when debugging problems.

So as an application developer, you may be asking yourself why you should not be using LOG_EMERG or LOG_ALERT. This is a valid question and this depends on you working with your systems administrator to determine if these log levels are appropriate. By default, almost every syslog implementation will log all LOG_EMERG and LOG_ALERT messages to the console which can make it difficult to actually work on a system to fix the problem if the log messages are flying by on the screen. Your systems administrator can set up filters on the syslog server to log them to the appropriate place, but before using those two priority levels, you should definitely consult with your systems administrator.

LOG_CRIT should be reserved for error messages that actually need to be visible to systems administrators and/or developers. If the error message you are logging will be ignored by everyone receiving the error, it should not be considered critical. Excuse the tautology, but “Critical errors are critical.” A critical error requires user intervention. If it does not require user intervention, it should be logged as LOG_ERR.

Priorities of LOG_WARNING and lower should be used at the developer’s discretion. It is common practice not to log any message with a priority lower than LOG_ERR on a production system.

General

Connecting a Fortinet VPN to Amazon AWS VPC

May 5th, 2012

There is a lot of spotty information out there on the Internet on how to connect a Fortinet VPN router to an Amazon AWS VPC VPN, but a lot of it is confusing, wants you to use the GUI, is outdated, or simply doesn’t work that well. It took me a bit to get all of the pieces put together, but here are the basic steps involved:

  1. Enable asymmetric routing – this allows packets to go out through one of the tunnels and come back through the other
  2. Create interface based VPN tunnels (phase1 and phase2)
  3. Configure the wan1 sub-interfaces automatically created in step 2
  4. Configure BGP
  5. Configure firewall rules

So here’s a generic configuration that does this. If you right click on the VPN gateway in the AWS Console and download the “Generic” configuration, you can easily change the values in this config.

Also, you need to make sure that the policy numbers I put in for the firewall configuration (policies 200-203) do not conflict with any existing policy numbers you have configured. If they do, just pick a different number; the number doesn’t matter. Note that these policies allow all traffic in and out of your internal network and the VPC. After you get it working, you’ll probably want to tighten those policies up quite a bit.

So without further ado:

config system settings
    set asymroute enable
end

config vpn ipsec phase1-interface
    edit "amazon1"
        set interface "wan1"
        set dpd enable
        set dhgrp 2
        set proposal aes128-sha1
        set remote-gw <CHANGE: Tunnel #1 Outside Virtual Private Gateway>
        set psksecret <CHANGE: Tunnel #1 Pre-Shared Key>
        set dpd-retryinterval 10
    next
    edit "amazon2"
        set interface "wan1"
        set dpd enable
        set dhgrp 2
        set proposal aes128-sha1
        set remote-gw <CHANGE: Tunnel #2 Outside Virtual Private Gateway>
        set psksecret <CHANGE: Tunnel #2 Pre-Shared Key>
        set dpd-retryinterval 10
    next
end

config vpn ipsec phase2-interface
    edit "amazon1"
        set dhgrp 2
        set pfs enable
        set phase1name "amazon1"
        set proposal aes128-sha1
        set replay enable
    next
    edit "amazon2"
        set dhgrp 2
        set pfs enable
        set phase1name "amazon2"
        set proposal aes128-sha1
        set replay enable
    next
end

config system interface
    edit "amazon1"
        set vdom "root"
        set ip <CHANGE: Tunnel #1 Inside Customer Gateway> 255.255.255.255
        set type tunnel
        set remote-ip <CHANGE: Tunnel #1 Inside Virtual Private Gateway>
        set interface "wan1"
    next
    edit "amazon2"
        set vdom "root"
        set ip <CHANGE: Tunnel #2 Inside Customer Gateway> 255.255.255.255
        set type tunnel
        set remote-ip <CHANGE: Tunnel #2 Inside Virtual Private Gateway>
        set interface "wan1"
    next
end

config router bgp
    set as <CHANGE: BGP Customer Gateway ASN>
        config neighbor
            edit <CHANGE: Tunnel #1 Inside Virtual Private Gateway>
                set remote-as <CHANGE: Tunnel #1 BGP Virtual Private Gateway ASN>
            next
            edit <CHANGE: Tunnel #2 Inside Virtual Private Gateway>
                set remote-as <CHANGE: Tunnel #2 BGP Virtual Private Gateway ASN>
            next
        end
        config network
            edit 1
                set prefix <CHANGE: Your Local Net> <CHANGE: Your Local netmask>
            next
        end
        config redistribute "connected"
        end
        config redistribute "rip"
        end
        config redistribute "ospf"
        end
        config redistribute "static"
        end
    set router-id <CHANGE: Tunnel #1 Inside Virtual Private Gateway>
end

config firewall policy
    edit 200
        set srcintf "internal"
        set dstintf "amazon1"
            set srcaddr "all"
            set dstaddr "all"
        set action accept
        set schedule "always"
            set service "ANY"
    next
    edit 201
        set srcintf "amazon1"
        set dstintf "internal"
            set srcaddr "all"
            set dstaddr "all"
        set action accept
        set schedule "always"
            set service "ANY"
    next
    edit 202
        set srcintf "internal"
        set dstintf "amazon2"
            set srcaddr "all"
            set dstaddr "all"
        set action accept
        set schedule "always"
            set service "ANY"
    next
    edit 203
        set srcintf "amazon2"
        set dstintf "internal"
            set srcaddr "all"
            set dstaddr "all"
        set action accept
        set schedule "always"
            set service "ANY"
    next
end

General

Conditionally Installing Packages With Puppet

March 15th, 2012

If you want to install a package using puppet only if another package is already installed, you can use puppet’s virtual resources to accomplish this. The proper way to do this is to define your two classes and then realize the virtual package in the dependent class. For example, if I wanted to install php5-dev only if gcc was installed, I would make two modules: a gcc module and a php5 module.

In the php5 module:


class php5($type) {
    package { 'php5-common':
        ensure => installed,
    }
    package { 'php5-cli':
        ensure => installed,
        require => Package['php5-common'],
    }
    @package { 'php5-dev':
        ensure => installed,
        tag => 'develpkgs',
    }
}

The ‘@’ symbol defines the php5-dev package as a virtual resource, so it doesn’t actually get managed when the puppet manifest is compiled unless some other module realizes it. To realize it, we go into our gcc module:


class gcc {
    package { 'gcc': ensure => installed, }
    package { 'g++': ensure => installed, }
    package { 'make': ensure => installed, }
    Package <| tag == 'develpkgs' |>
}

This will search through all of your modules and realize any virtual resource that is tagged with ‘develpkgs’. So for example, if you have another module called mysql and you want to install the mysql development package:


class mysql {
    package { 'mysql': ensure => installed, }
    package { 'mysql-server': ensure => installed, }
    @package { 'libmysqlclient-dev':
        ensure => installed,
        tag => 'develpkgs',
    }
}

General, Puppet

Using LAME to Concatenate MP3 Files

February 13th, 2012

I needed a way to concatenate multiple MP3 files of varying bitrate/sample rate/channels and I needed it to be scriptable to handle pretty much any permutation of various input MP3 formats.

I came up with a simple script that does just that. It’s certainly not ideal, because it requires re-encoding everything 2 (more) times, but it works well enough for me. Of course, the input files can be anything lame supports, so you can pass in AIFF files, which makes this a little better.

The goal was to take a short intro audio file, a long content audio file, and a short outro audio file and pull them all together. To do this, I first transcode each audio file to an MP3 with a known sample rate, channel count, and bitrate. Then I decode that newly encoded file to PCM. Finally, I encode the PCM to a new MP3 file with my desired final MP3 settings.

for f in intro.mp3 content.mp3 outro.mp3 ; do
        lame -m m -b 192 --resample 44.1 $f - | lame --decode -t --mp3input - -
done | lame -r -m m -s 44.1 --resample 22.05 - outfile.mp3

I’m going to have to do this same thing with video using ffmpeg in the near future. I have a feeling that’s going to be a lot more difficult.

Thanks to this guy for sending me down this path.

General

Dencor Energy Control Systems – Bad Idea Or Worst Idea?

July 16th, 2011

I’m going to deviate slightly from what I normally post about on here, but I guess this is somehow tangentially related to technology. I bought a new house a few months back and it had a Dencor Energy Control System in it. Of course, I had no clue what this system does (and frankly, I’m still not entirely sure), but it wasn’t that big of a deal until recently.

Basically, the system consists of a programmable interface inside the house and a relay disconnect outside of the house. I’ve spoken to two different electricians about the system and they both say they also know nothing about it. The system that I have was installed by the original builders back in the late 70s or so, so we’re talking about pretty old technology here. I’m sure things have progressed since then, but that’s not really the point of this post.

The problem is that I have 3 electrical outlets on different breakers that mysteriously stopped working. This may or may not be related to this Dencor Energy Management System, but since I have no idea how this thing works, it seemed like a good thing to investigate. When I first bought the house, I was kind of curious how the system worked, but when I called the Dencor headquarters, they told me it was going to cost me $20 or $30 to get a copy of the manual. I wasn’t that curious.

So now that these outlets have died and I can’t charge my razor or my fancy electronic toothbrushes, I decided to try again and emailed the president of Dencor Energy Control Systems, Matt Essig, with this email:

I purchased a home back in February and it seems the original builder installed Dencor energy management systems throughout the neighborhood (back in the 70s). We’ve recently had a handful of outlets on various breakers stop working and I can’t figure out any reason why other than possibly this system. I’ve asked all of my neighbors if they know how this thing works and no one knows anything about it.

It says DDS-809 on the outer cover and on the circuit board it lists 809-1002.

I spoke with someone a few months back and they said you would have to charge me $20 or $30 for a manual for this, but that seems a bit extreme just to buy some instructions for a product.

I can find no information about this system online and your website isn’t very informative. If you have a manual for this, can’t you just scan it and post it on your website or at the very least email it to me. Or if that’s too much effort, simple photocopies of the manual pages would be fine and I can stop by and pick it up since I live in south Denver. I’ve attached a picture of the control panel (sorry it’s blurry.. i can get a real picture if needed) and I can provide photos of the relay box in the back of the house if that helps too.

If I can’t figure out how it works, my next step is going to be trying to figure out how to disable the whole system without killing myself by electric shock.

-eric

That seems pretty reasonable to me. But then something strange happened. Here is the email exchange between me and Matt Essig, the president of Dencor Energy Control Systems.

Eric,

The manual and spec sheet are attached.

I know actually charging for products and services when you are a for profit business in a market driven economy seems odd but maybe your approach is the right one; when I’m at the grocery store I’m going to insist they give me everything for free because the prices they charge are excessive.

We stopped producing the 809 decades ago; in 20 years would you support a product you stopped developing and selling, or giving away, today? How about Microsoft? Oracle? Thought so….

Maybe you should disconnect the system and watch your power bills go up (assuming the system is currently programmed properly)…

Matt

Since Matt is a big fan of free market economics, I figured I’d teach him a thing or two. So I responded with this:

Matt,

Thank you for the manual.

I am well aware of how markets work, but it seems you are not. In a market driven economy, customer service is incredibly important. This is increasingly more important now that the masses have such innovations as the Internet in order to share information about how companies treat their customers.

I see that you’re beginning to understand this since you responded to “Sandra’s” 2008 post on ripoffreport.com just a week or two ago on July 5th, 2011. I agree that Sandra was being a bit unreasonable, but given your response to me, I can see why she might be a tad bit upset with you.

Now there’s a pretty distinct difference between what I’m asking of your company versus what you suggested I should ask at a grocery store. I think a more apt analogy would be me contacting the grocery store to help me out with instructions on how to microwave a pizza I bought. Or even better yet, contacting the *manufacturer* of the pizza… say Red Baron (via the toll free number on the back of the box that says “questions?”) and asking them how to microwave it. Now granted, I’m not going to ask how to microwave a 20 year old pizza, but we’ll discuss that next.

You see, you think I want something for free, but I am not asking you to give me any actual product or service for free unless you consider the instruction manual for your real product yet another product. That’s quite the stretch. But you asked quite an interesting question. Can I, in fact, find support for say…. Windows 3.1? You bet your ass I can. As bad as Microsoft support is, they appear to be doing a better job than your company. It’s unfortunate that you happened to pick the industry I am in for your examples.

Here is Microsoft supporting 20+ year old products:

Oh? I can download an updated vshare.386 binary for Windows 3.1? Yep… right here: http://www.microsoft.com/download/en/details.aspx?id=16991

Holy crap! Look at this! Windows 3.0 instructions on editing an autoexec.bat and config.sys file? Wow, that brings back some memories of the 80s…. http://support.microsoft.com/kb/85194

Of course, there are plenty more examples, but I think that should be sufficient for now. If you want me to give you some more examples (maybe HP printer manuals from the 80s?) I could certainly dig that up as well if you’d like. But at any rate, that’s not really the issue anymore now, is it? I possibly would have hired someone to come fix and/or upgrade the system, which of course, would benefit you, because as you well know, in a market driven economy if people can make money working on your products, your product’s future value increases in non-real terms (hint: think advertising).

But back to the point: The issue now is that your level of customer service has made my decision quite easy. I will post my email to you as well as your email back (and this one too) in its entirety on my website. I think others would be glad to hear how the president of Dencor responds to requests from users of their products.
I’ve also noticed that you seem to have a bit of a litigious streak in you. You can contact “Christian Onsager, at Onsager Staelin & Guyerson” and let them know that you want to file suit against me when I post this information online as well. There’s no need for a John Doe subpoena though, you can have them serve notice directly to me at the following address:

Eric [redacted]
[redacting my actual address here as well]

Remember…. all I asked for was a simple manual. And again, thanks for the manual as well as the incredibly quick response.

Eventually, however, you’ll learn one of the greatest lessons of the market driven economy: Don’t be a dick to your customers.

-eric

Now I assumed that’d be the end of the story. Only an idiot would respond to that email. But Mr. Matt Essig, the president of Dencor Energy Management Systems didn’t want to leave it at that. He said he would sue me if I posted these emails:

Eric,

If you would like to post the emails on your website then go ahead. The email was meant for you, and you only, hence it was addressed to you. I will litigate over this if you choose to do so…just try me.

Matt

Well, I didn’t want to let him down, and of course I haven’t been sued in a long time, so here we are. I responded with this:

Matt,

Seriously? Emails are certainly not confidential. Furthermore, Colorado doesn’t even require two-party consent for recording and publishing of phone calls, let alone other electronic communications. You may want to contact your attorneys before you continue digging yourself into a bigger hole. You would think that for a president of a company, you would be a little better informed about the ramifications of your communications and your business conduct in general. But again, you have my address. Instead, you sound like a petulant toddler trying desperately to undo the damage that you’ve already done. Feel free to have your legal team serve notice of a lawsuit.

I will contact you again when I post the information online with a web address where you can find your emails and my commentary on my dealings with you today.

Kindest Regards,

Eric

P.S. The manuals you sent don’t mention anything about programming the system. I appreciate the documentation you provided, but if you could send the actual programming manual, that’d be incredibly helpful. Thanks again.

Well, Mr. Matt Essig of Dencor Energy Management Systems, your move. Best regards, and I would appreciate that manual if you could forward over a copy. You have my physical and email address.

Also, I’ve sent him a link to this post. I look forward to hearing from you again, Mr. Essig.

General