Archive

Posts Tagged ‘linux’

Conditionally Installing Packages With Puppet

March 15th, 2012

If you want to install a package using puppet only when another package is already being managed, you can use puppet’s virtual resources to accomplish this. The proper way to do this is to declare the package as a virtual resource in its own class and then realize it from the class it depends on. For example, if I wanted to install php5-dev only if gcc was installed, I would make two modules: a gcc module and a php5 module.

In the php5 module:


class php5($type) {
    package { 'php5-common':
        ensure => installed,
    }
    package { 'php5-cli':
        ensure => installed,
        require => Package['php5-common'],
    }
    @package { 'php5-dev':
        ensure => installed,
        tag => 'develpkgs',
    }
}

The ‘@’ symbol declares the php5-dev package as a virtual resource, so it doesn’t actually get included in the catalog when the puppet manifest is compiled unless some other module realizes it. To realize it, we go into our gcc module:


class gcc {
    package { 'gcc': ensure => installed, }
    package { 'g++': ensure => installed, }
    package { 'make': ensure => installed, }
    Package <| tag == 'develpkgs' |>
}

This will search through all of your modules and realize any virtual resource that is tagged with ‘develpkgs’. So for example, if you have another module called mysql and you want to install the mysql development package:


class mysql {
    package { 'mysql': ensure => installed, }
    package { 'mysql-server': ensure => installed, }
    @package { 'libmysqlclient-dev':
        ensure => installed,
        tag => 'develpkgs',
    }
}
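
To tie it all together, here’s a rough sketch of what the node definitions might look like. The node names and the ‘cli’ value for $type are placeholders, not anything from my actual manifests:

# Hypothetical node definitions -- names and the $type value are placeholders.
node 'buildbox' {
    class { 'php5': type => 'cli' }   # php5-dev is declared, but only virtually
    include mysql                     # libmysqlclient-dev is declared virtually too
    include gcc                       # the collector realizes everything tagged 'develpkgs'
}

node 'webbox' {
    class { 'php5': type => 'cli' }   # no gcc class here, so php5-dev never gets installed
}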

General, Puppet

nVidia Overscan Correction fixed in Latest Drivers

April 1st, 2010

My solution for fixing overscan on nVidia cards is now obsolete! I did find out just a few days ago, though, that the solution does actually work.

The person that I was originally helping with this problem decided to give Linux another shot. He tested it out and reported that it did indeed fix his overscan problems.

However… for no particular reason I decided to check out the nVidia settings control panel again. When I opened it up in Ubuntu 10.04, I noticed this (and tested it to make sure it works, which it does):

[Screenshot: NVIDIA X Server Settings]

General

Solaris ZFS vs. Linux with Hardware Raid

April 1st, 2010

I’ve had to start using Xen virtualization for a project we’re currently working on. I always hate switching back to Linux servers, because all of our fancy automation tools and scripts are written for Solaris since we only have a handful of Linux servers.

At any rate, I’ve got Xen all figured out and have really started to dig into Linux’s LVM for the first time. There are some similarities between LVM and ZFS, but most noticeably LVM doesn’t deal with RAID at all. You have to set up Linux software RAID manually and put a VolumeGroup on the RAID meta-device. So I set up a nice software RAID5 device, created a VolumeGroup, and off I went.
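
For reference, that software RAID + LVM stack looks roughly like this; the device names and sizes below are placeholders, not my actual configuration:

# Sketch only -- device names and sizes are placeholders.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]   # software RAID5
pvcreate /dev/md0                  # turn the RAID meta-device into an LVM physical volume
vgcreate xenvg /dev/md0            # VolumeGroup on top of the meta-device
lvcreate -L 20G -n guest01 xenvg   # carve out a logical volume for a Xen guest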

The write performance was horrendous.

So I begrudgingly went into the RAID controller BIOS, set up hardware RAID5, and put LVM on top of that. After the installation, I decided to see how fast this was compared to ZFS raidz1 (which is more or less RAID5).
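
For comparison, the ZFS side needs no separate RAID layer or volume manager at all; creating a raidz1 pool is a single command. This is just an illustrative sketch (the disk names are placeholders; ‘zonepool’ matches the mountpoint used in the tests below):

# Illustrative only -- disk names are placeholders.
zpool create zonepool raidz1 c1t1d0 c1t2d0 c1t3d0 c1t4d0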

The machines are identical:

  • Dual 6 Core Opteron
  • Sun STK RAID Controller (Adaptec) — 256MB cache, write-back cache mode enabled
  • 16 Gigs of memory

Here are the results:

Linux — 21GB Write

# time dd if=/dev/zero of=/root/test bs=10240 count=2009600
2009600+0 records in
2009600+0 records out
20578304000 bytes (21 GB) copied, 146.226 seconds, 141 MB/s

real    2m26.377s
user    0m4.068s
sys     1m53.823s

Linux — 1GB Write

# time dd if=/dev/zero of=/root/test bs=10240 count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 2.69437 seconds, 389 MB/s

real    0m2.702s
user    0m0.108s
sys     0m2.584s

Solaris — 21GB Write

# time dd if=/dev/zero of=/zonepool/test bs=10240 count=2009600
2009600+0 records in
2009600+0 records out
20578304000 bytes (21 GB) copied, 55.3566 s, 372 MB/s

real    0m55.412s
user    0m0.913s
sys     0m27.012s

Solaris — 1GB Write

# time dd if=/dev/zero of=/zonepool/test bs=10240 count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 1.25254 s, 837 MB/s

real    0m1.257s
user    0m0.046s
sys     0m1.211s

837MB/s for burst writes on raidz1! ZFS is too awesome.

Here are the controller configurations:

[Screenshot: Linux controller configuration]
[Screenshot: Solaris controller configuration]

General, Solaris

PulseAudio: An Async Example To Get Device Lists

October 13th, 2009

I have a love/hate relationship with PulseAudio. The PulseAudio simple API is… well… simple. For 99% of the applications out there, you’ll rarely need anything more than the simple API. The documentation leaves a little to be desired, but it’s not too hard to figure out since you have the sample source code for pacat and parec.

The asynchronous API, on the other hand, is really complex. The learning curve isn’t really a curve. It’s more like a brick wall. Compounding the issue is that the documentation is atrocious: it’s only really helpful if you already know exactly what you’re looking for and how it all works.

More importantly, simple example code is nearly impossible to come by. So, since I took the time to figure it out, I figured I would document it here in the hopes that this little example will help someone else. This is not production-ready code; there’s a lot of error checking that’s not being done. But it should at least give you an idea of how to use the PulseAudio asynchronous API.
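
The full example is linked below, but just to give a feel for the moving parts the asynchronous API makes you juggle, here is a rough, untested sketch that lists sinks: a mainloop, a context, a state callback, and an info-list callback. Treat it as an outline of the approach, not as the code from this post:

#include <stdio.h>
#include <pulse/pulseaudio.h>

/* Rough sketch only -- not the full example from this post. */

static void sink_cb(pa_context *c, const pa_sink_info *i, int eol, void *userdata) {
    if (eol > 0) {                /* end of the list: tear the connection down */
        pa_context_disconnect(c);
        return;
    }
    printf("sink #%u: %s (%s)\n", i->index, i->name, i->description);
}

static void state_cb(pa_context *c, void *userdata) {
    pa_mainloop_api *api = userdata;

    switch (pa_context_get_state(c)) {
    case PA_CONTEXT_READY:
        /* connected: ask for the sink list (sources work the same way
         * via pa_context_get_source_info_list) */
        pa_operation_unref(pa_context_get_sink_info_list(c, sink_cb, NULL));
        break;
    case PA_CONTEXT_FAILED:
    case PA_CONTEXT_TERMINATED:
        api->quit(api, 0);        /* stop the mainloop */
        break;
    default:
        break;
    }
}

int main(void) {
    pa_mainloop *ml = pa_mainloop_new();
    pa_mainloop_api *api = pa_mainloop_get_api(ml);
    pa_context *ctx = pa_context_new(api, "device-list-sketch");
    int ret = 0;

    pa_context_set_state_callback(ctx, state_cb, api);
    pa_context_connect(ctx, NULL, 0, NULL);   /* NULL = default server */

    pa_mainloop_run(ml, &ret);                /* everything happens in callbacks */

    pa_context_unref(ctx);
    pa_mainloop_free(ml);
    return ret;
}

Something like “gcc sketch.c -o sketch $(pkg-config --cflags --libs libpulse)” should build it.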

Update: I spoke with the PulseAudio team and they encouraged me to put this source code on their wiki. So now you can find it at the main PulseAudio wiki: http://pulseaudio.org/wiki/SampleAsyncDeviceList

Read more…

General

Time Lapse Video using gphoto2 and ffmpeg

August 30th, 2009

An interesting little project I’ve been working on is time lapse photography. I picked up a used Canon Powershot A520 pretty cheap, and set up a laptop with Ubuntu to communicate with the camera. I’m still working on the best angle to minimize the power lines out front, but I’ve got a good start going.

What you’ll need:

  • A Linux machine (a laptop really helps)
  • gphoto2 >= 2.4.5 (note that you can upgrade jaunty’s gphoto2 with the karmic packages to get this version)
  • A camera that supports remote capture
  • An AC power outlet near where you want to take your photos (and an AC adapter for your camera, unless you have really awesome batteries).
  • jpeg2yuv and ffmpeg (with libx264 support)
  • Something relatively interesting to take pictures of

This is what I ended up with:

[Embedded time-lapse video]

So here’s what I did:

  1. Connect the USB cable to the camera
  2. Run the following command (in a while loop in case it crashes):
    while true ; do
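        # -I 30 captures a frame every 30 seconds; the outer loop just restarts gphoto2 if it crashes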
        gphoto2 --capture-and-download -I 30
    done

  3. Wait about 8 hours or so
    1. If you’re impatient like me, you can NFS-mount the laptop after about 45 frames (about 20 minutes) and get a preview.
    2. You can rsync the laptop’s NFS-mounted directory to a local copy so you don’t have to pull every file over the (most likely) wireless link each time you want to encode the latest version.
  4. Collect all of your images and make sure that each frame is numbered sequentially (see the renumbering sketch after this list).
  5. Create an MPEG with jpeg2yuv by piping the output to ffmpeg:
    startframenum=XXXX # put the number of the first image in the sequence here
    jpeg2yuv -b $startframenum \
            -v 0 \
            -j the/path/to/your/images/IMG_%04d.JPG \
            -f 15 \
            -I p | ffmpeg -threads 2 -y -i - \
            -vcodec libx264 \
            -b 2500k \
            -acodec libfaac -ab 48k -ar 48000 -ac 2 \
            -s 1024x768 -f mp4 \
            outputfile.mp4

  6. In the options above, the important ones are jpeg2yuv’s “-f”, which is the framerate. You can change this to speed up or slow down your movie. The ffmpeg “-s” option is the size. Keep in mind that the width and height of your images need to be a multiple of 16 (i.e. 640×480, 1024×768, 1920×1152, etc). Note that 1080 is not divisible by 16; 1280×720 will work for widescreen (16:9) hi-def, though. Lastly, the ffmpeg “-b” option is the video encoding bitrate (not to be confused with jpeg2yuv’s “-b”, which is the starting frame number). Increase it for better quality and decrease it for smaller output movie files.
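
For the renumbering in step 4, a crude loop like the one below does the trick. It assumes the files sort correctly by name (adjust the glob to match whatever your captures are actually called) and renames them to the IMG_%04d.JPG pattern that the jpeg2yuv command in step 5 expects:

# Crude renumbering sketch -- adjust the glob to match your capture filenames.
i=1
for f in *.jpg; do                            # the glob expands in sorted order
    mv "$f" "$(printf 'IMG_%04d.JPG' "$i")"   # match the IMG_%04d.JPG pattern in step 5
    i=$((i + 1))
done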

General

DVI to HDMI overscan (screen edge cutoff) on an HDTV

July 3rd, 2009

Update – 4/1/2010: Latest nVidia drivers have overscan correction built in

Well I learned something new recently. I have a friend that’s making the Ubuntu switch and he called me up with a bizarre problem. He’s using an nVidia card (although other cards have the same issue) with a DVI out port to a DVI->HDMI converter to an HDMI input on a 26″ HDTV that he uses as a monitor.

He called me up and described the problem, and I confessed that I had never heard of it before. All four sides of his screen were getting cut off. He could only see part of his menu bars at the top and bottom, and the left and right edges were cut off as well. After some Googling, I at least found the name for the problem: overscan.

And once I figured out the name, that’s when my Google searches became eye-openers. There are a lot of people out there with overscan problems and very few solutions in Linux. The Windows nVidia drivers allow dynamic overscan correction inside their driver toolbox. The X server nVidia drivers have no such option for DVI out (for TV out there apparently are some).

The problem, as I understand it, is that the PC is sending a DVI PC style output, but the TV is reading a HDMI TV style input. As such, the TV thinks it’s receiving a TV signal and acts accordingly. If your TV has a DVI input, it should treat that as a PC input and give you 1:1 pixel mapping (which is what you’re looking for). If not, you’ll need to adjust for the overscan on the PC side. Some TVs even have an option to treat an HDMI signal as if it were PC. Check your TV’s manual.

Anyhow, there are a lot of people asking for help with this issue, but it is very hard to find any actual information.

Option 1 – Manually

I don’t know if this works, but it looks like good info. If you’re looking for a way to fix this (and you’re ready to spend quite a while doing it), you should read this:

Ubuntu Forums: Nvidia, Modelines, Overscan…8.10

Basically it’s trial and error to get the correct Modeline for your X server config. It’s mind-boggling that no one (especially nVidia, which seems to care about Linux a little bit) has put out any definitive information on this topic.

Option 2 – A little less manually

I definitely don’t know if this works. I don’t know if anyone has even tried it. If this works/doesn’t work for you, post in the comments.

You can see if the XFree modeline generator will give you something that works. I don’t really understand what all the modeline timings mean, but here’s a shot in the dark (you’re probably desperate at this point anyway… and I have no way of testing this, so I don’t know if it even works at all). Also, I’ll give the same warning everyone gives on this… I take no responsibility at all if this damages your television. Try this at your own risk.

First things first, back up your xorg.conf file (/etc/X11/xorg.conf) somewhere safe (like your home directory).

I wrote a quick program that will help you determine your visible screen size:

Source: findcoords.c (source)
Binary: findcoords (compiled on Ubuntu 9.04)

If the binary doesn’t work for you or you’d prefer to compile from source, you’ll need the libx11 development packages installed (as well as the standard stuff like gcc and whatnot). On Ubuntu, running “sudo apt-get install build-essential libx11-dev” should do the trick. To compile it, run: gcc -o findcoords findcoords.c -lX11

Now run it by typing ./findcoords

It’ll tell you to click the upper left and bottom right corners of the screen. Get the very tip of the cursor as close to the edge as possible; in the bottom right, that means you should only be able to see about 1 pixel of your cursor. When you’ve done that, it’ll calculate your viewable screen size and output something like this:

Root Window Size: 2880x900
Viewable Size: 2764x798
Your screen is cut off by the following number of pixels:
Left  : 31
Right : 85
Top   : 24
Bottom: 78

Armed with the actual visible screen size, head over to the XFree Modeline Calculator (it works for Xorg too).

1. Enter the values under “Monitor Configuration” if you know them. If not, leave that section blank.
2. Under “Basic Configuration”, enter the viewable size reported by findcoords.
3. If you know the max refresh rate for your TV, you can enter it here. If not, just use 60Hz.
4. If you know the dot clock frequency, enter it as well; otherwise, just leave it blank.
5. IMPORTANT: If your TV is interlaced at max resolution (i.e. 1080i), check the interlaced button.
6. Click the “Calculate Modeline” button and it should give you a modeline at the top of the screen.
7. In your xorg.conf file, put the modeline it gives you into the Monitor section.
8. Add this line to your Monitor section as well:

Option "ExactModeTimingsDVI" "TRUE"

9. Now, to use this, you’ll need to add this line to your Device section:

Option "UseEDID" "FALSE"

10. Then in the Display subsection (of the Screen section), add a line that LOOKS like this, but with the mode defined in the modeline that the generator gave you:

Modes "1960x1080@60i"

In other words, if the modeline generator spit out:

Modeline "1816x980@60i" 65.89 1816 1848 2096 2128 980 1002 1008 1031 interlace

You would put the following in the Display subsection:

Modes "1816x980@60i"

That text has to match EXACTLY. When it’s all said and done, you should end up with an xorg.conf that looks something like this:

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "DELL S199WFP"
    HorizSync       30.0 - 83.0
    VertRefresh     56.0 - 75.0
    Option         "ExactModeTimingsDVI" "TRUE"
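    # Modeline produced by the modeline calculator for the actual visible screen area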
    Modeline "1816x980@60i" 65.89 1816 1848 2096 2128 980 1002 1008 1031 interlace
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce 9800 GT"
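    # Ignore the TV's EDID so the custom modeline above isn't overridden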
    Option         "UseEDID" "FALSE"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
        Modes       "1816x980@60i"
    EndSubSection
EndSection

Give that a shot and see what you get. Can’t be much worse, can it? If it doesn’t work, just revert back to what you had by replacing your xorg.conf file from the backup. If you get any halfway decent results at all, let me know.

More terms to know:

1-to-1 pixel mapping: If your HDTV (as a monitor) supports this option, chances are this will solve your problem. This means that every pixel sent by the PC will be mapped to a pixel on the screen (i.e. disable overscan).

Full Pixel: This is the same as 1:1 pixel mapping

Modelines: Definitions of video modes that control the display size in the X server

Overscan: Part of standard TV input where a percentage of the edges of the screen are cut off. Not noticeable for normal TV viewing, but very noticeable on a PC desktop.

EDID: Monitor/TV device information telling the PC what modes are supported (stored in the monitor and not configurable)

Good luck.

General