Archive for the ‘Programming’ Category

Plack Vs FastCGI

I’ve been sold on Plack for a while. This clarifies a part of the reason nicely [1].

If you design a fibonacci() function, would you make it print the result to STDOUT, or return the result as a return value?

(It’s referring to the PSGI spec rather than Plack, but I figure the latter needs a bit more press)

Quick thought though – if a page takes a long time to generate, the stdout technique can deliver a bit at a time. Is there a Plack plugin for that, or is the answer to go for ajax these days and deliver the page outline asap?

[1] Namely that printing to stdout always seemed like a horrible and inefficient hack.
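For what it’s worth, the PSGI spec itself covers the incremental-delivery question: the streaming interface lets an app return a coderef instead of the usual array ref, then push chunks through a writer object as they become ready. A minimal sketch (the page content is made up, and it needs a server that sets psgi.streaming to true):

```perl
use strict;
use warnings;

# PSGI streaming: instead of [$status, $headers, $body], return a
# coderef.  The server calls it with a responder; responding with
# only [$status, $headers] yields a writer for incremental output.
my $app = sub {
    my $env = shift;
    return sub {
        my $responder = shift;
        my $writer = $responder->(
            [ 200, [ 'Content-Type' => 'text/html' ] ]
        );
        $writer->write('<html><body>Page outline, sent immediately...');
        # ... slow parts of the page are generated here ...
        $writer->write('...slow parts, sent later.</body></html>');
        $writer->close;
    };
};
```

Servers like Starman support this interface, so the page outline can go out before the slow content is computed.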

Read Full Post »

One thing I like about developing the emacs environment (i.e. writing emacs lisp) is programming in the image. In contrast, when I run a perl script, any data loaded into the script vanishes when the script exits.

my $image = {};
# ... more populate functions ...

If the populate functions take a long time to run and there is a syntax error in one of them, I might waste a lot of time waiting for data to be loaded over and over and potentially annoy my DBA.

Hence my thinking about REPLs earlier in the year.

Instead, I can mitigate this problem and support exploratory programming by serialising my data structure to disk after each section has been loaded.

use Storable;

use constant IMAGE_FILE => '/tmp/.jareds-image-RANDOMTEXT';

sub populate_particular_set_of_people {
    my $image = shift;

    return if (exists $image->{'store'}{'people'});

    # Get the set of people

    store $image->{'store'}, IMAGE_FILE;
}

sub populate_particular_set_of_orders {
    my $image = shift;

    return if (exists $image->{'store'}{'orders'});

    # Get the set of orders

    store $image->{'store'}, IMAGE_FILE;
}

my $image = {};
$image->{'store'} = retrieve(IMAGE_FILE) if -f IMAGE_FILE;

# more populate functions

Error handling has been elided.

I’m only storing part of the hashref, $image->{'store'}, in this example. This is fairly typical of my standard use case for this technique – quick one-off reports. Some of the data is slow to load in from the database, so I persist it to disk. The rest is calculated based on the loaded data, so I don’t persist it. I keep all of the data together so I can pass it to each subroutine as a single parameter.

Of course, I wouldn’t recommend this technique for a production system.

Read Full Post »

Neat solution for deploying/running Perl Apps on Windows:

… [share] your production Perl installation, with all the packages required by all your apps correctly installed. And place the top-level .pl scripts (or some placeholder .cmd files if some local setup is required), within that installation. Then your users can run those applications from that shared installation.

Read Full Post »

Happy Perl Devs

I’m kinda amused to see this old post suggesting that perl developers are happier than other developers. I can well believe it. I’ve always been happier playing with perl than with the other languages I know, although I’m not quite convinced by the method.

Just a thought though – personally, I’m happier on smaller code bases than larger code bases. Could it be that perl sees more use for moderately sized systems and other languages are used to create developer-depressing byzantine balls of mud?

Read Full Post »

I recommend Dave Rolsky

I can’t believe Dave is still on the job market.

I don’t know him personally, but I know him by his posts and his code and I can recommend him without hesitation. The guy is one of the great thinkers of the Perl Community.

If I had hiring authority, I’d get him for my firm.

Read Full Post »

Forked Processes and Pipes

Last time, I linked to some example code that forks a bunch of processes and communicates with them via pipes. This is the main feature of the code I’m interested in, but the explanation in the article is kinda sparse, so you can consider this to be supplemental.

As usual, the perl doco is pretty good for covering this stuff.

Creating a child process (a kid) involves two pipes, one for the parent to send data to the kid, and one for the kid to send data back to the parent.

One probably obvious thing to note: you can’t directly send a reference down a pipe (well, not in any reasonable way, and that’s a feature, not a bug), so you’ll be interested in serialisation modules. I’ve mentioned them in passing before and I generally use JSON::XS these days.

Another hopefully obvious thing is if the writer is buffered and the reader is waiting for something specific, there will probably be some deadlock in your future. Remember to unbuffer the writer.
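To make both points concrete, here’s a sketch (the payload is invented; assumes JSON::XS is installed): the parent serialises a hashref with encode_json, unbuffers its end of the pipe, and sends one JSON document per line for the child to decode.

```perl
use strict;
use warnings;
use IO::Pipe;
use JSON::XS;

my $pipe = IO::Pipe->new;
my $child_ok;

defined( my $pid = fork ) or die "fork failed: $!";

if ($pid) {                 # parent: the writer end
    $pipe->writer;
    $pipe->autoflush(1);    # unbuffer, or the child may wait forever
    # One JSON document per line makes a simple framing protocol
    # (the task/id fields are made up for illustration)
    print $pipe encode_json( { task => 'fetch', id => 42 } ), "\n";
    close $pipe;
    waitpid $pid, 0;
    $child_ok = ( $? == 0 );
}
else {                      # child: the reader end
    $pipe->reader;
    chomp( my $line = <$pipe> );
    my $job = decode_json($line);    # back to a hashref
    exit( $job->{id} == 42 ? 0 : 1 );
}
```

The newline-delimited framing is one choice among many; anything works as long as both ends agree on where one message stops and the next starts.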

I made a couple more comments inline, prefixed with ‘J -’:

sub create_kid {
    my $to_kid = IO::Pipe->new;
    my $from_kid = IO::Pipe->new;

    # J - Fork returns undef for failure, 0 for the child, and the
    # J - child PID for the parent

    # J - Handle fork error
    defined (my $kid = fork) or return; # if can't fork, try to make do

    unless ($kid) { # I'm the kid
        # J - The kid reads from $to_kid and writes to $from_kid
        $to_kid->reader;
        $from_kid->writer;

        # J - unbuffer writing to the pipes.  Otherwise may deadlock
        $from_kid->autoflush(1);

        # J - Reset all of the signal handling
        $SIG{$_} = 'DEFAULT' for grep !/^__/, keys %SIG; # very important!
        do_kid($to_kid, $from_kid);
        exit 0; # should not be reached
    }

    # J - parent here...
    # J - The parent reads from $from_kid and writes to $to_kid
    $from_kid->reader;
    $to_kid->writer;

    # J - unbuffer writing to the pipes.  Otherwise may deadlock
    $to_kid->autoflush(1);

    $kids{$kid} = [$to_kid, $from_kid];
}

Read Full Post »

Parallel Tasks using Fork

Randal Schwartz wrote an example link checker which used forked processes to run tasks in parallel. Each child process created has a read pipe from and a write pipe to the parent (created with IO::Pipe).

The result is an inverted version of my preferred architecture. I like the parent to dump work on a queue and have whichever child is ready pull it off. This is pretty easy to do with threads.
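A minimal sketch of that queue-based shape, using the core threads and Thread::Queue modules (the doubling is a stand-in for real work, and it needs a perl built with thread support):

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $work    = Thread::Queue->new;    # parent dumps work here
my $results = Thread::Queue->new;    # children push answers back

# Whichever child is free pulls the next item off the shared queue
my @workers = map {
    threads->create( sub {
        while ( defined( my $item = $work->dequeue ) ) {
            $results->enqueue( $item * 2 );    # stand-in for real work
        }
    } );
} 1 .. 3;

$work->enqueue($_) for 1 .. 10;
$work->enqueue(undef) for @workers;    # one stop marker per worker
$_->join for @workers;
```

The nice property is that the parent never has to figure out which child is idle – a busy child simply isn’t blocked on dequeue, so the next free one picks the item up.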

In Randal’s version, the parent figures out which child is available to do work.

Read Full Post »
