• Redirecting STDOUT to variable in perl and running child scripts

    January 18, 2010 • perl

    I recently created a perl script (let's call it script (1.)) to process a network trace and compute the number of bytes received for a given protocol (e.g. UDP, TCP, or routing protocols such as AODV, DSR) over a given time interval for each node in the trace. Its output is comma-separated values (CSV) written to STDOUT, which I redirect into a file using the standard > operator when invoking it from the command line, for graphing.

    Today I wanted to compute routing protocol overhead, as a percentage of routing packets vs. all packets (data and routing) for a network trace. This requires running perl script (1.) twice with different protocol numbers, so I thought writing a container perl script to invoke script (1.) was the way to go. I admit I’m a bit of a perl newbie so please bear with me if this is obvious!

    The perl script needed to achieve several objectives:

    1. Redirect STDOUT to a variable
    2. Execute perl script (1.) for ALL protocols
    3. Process the variable holding STDOUT into a data structure
    4. Repeat steps 2 and 3 for the routing protocol only.
    5. Restore STDOUT.
    6. Perform a calculation over the two data structures and output the routing overhead as CSV, which I could redirect to a file using the standard > operator.

    Here’s the code:

    my $var;

    open(OLDOUT, ">&STDOUT") or die "Unable to save STDOUT: $!\n"; # save STDOUT handle to OLDOUT
    close STDOUT;
    open(STDOUT, ">", \$var) or die "Unable to open STDOUT: $!"; # open STDOUT handle to write into $var

    @ARGV = ($opt_infile, $opt_class, $opt_ipaddr, "all", "table", $opt_start_time, $opt_end_time);

    # invoke perl script (1.) with above args
    do("datareceived2.pl");
    my @dataandroutingcsv = split "\n", $var;

    close STDOUT;
    open(STDOUT, ">&OLDOUT") or die "Unable to restore STDOUT: $!"; # restore STDOUT from OLDOUT

    # process @dataandroutingcsv removed for brevity

    # reinit $var
    undef $var;

    close STDOUT;
    open(STDOUT, ">", \$var) or die "Unable to open STDOUT: $!"; # reopen STDOUT to write into $var

    @ARGV = ($opt_infile, $opt_class, $opt_ipaddr, $opt_protocol, "table", $opt_start_time, $opt_end_time);
    # invoke perl script (1.) again with new args
    do("datareceived2.pl");
    my @routingcsv = split "\n", $var;

    close STDOUT;
    open(STDOUT, ">&OLDOUT") or die "Unable to restore STDOUT: $!"; # restore STDOUT (for printing results to console)
    close OLDOUT;

    # process @routingcsv, print result CSV removed for brevity
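The per-node calculation elided above could be sketched roughly like this. This is only a guess at the shape of the data: the sub name routing_overhead and the assumption that script (1.) emits node,bytes rows are my own, not the original script's.

```perl
use strict;
use warnings;

# Hypothetical sketch: compute routing overhead per node (as a
# percentage) from the two captured CSV arrays. Assumes each row
# is "node,bytes" — the real column layout may differ.
sub routing_overhead {
    my ($all_rows, $routing_rows) = @_;
    my (%total, %routing, %overhead);
    for (@$all_rows)     { my ($node, $bytes) = split /,/; $total{$node}   = $bytes; }
    for (@$routing_rows) { my ($node, $bytes) = split /,/; $routing{$node} = $bytes; }
    for my $node (keys %total) {
        next unless $total{$node};                            # avoid division by zero
        $overhead{$node} = 100 * ($routing{$node} // 0) / $total{$node};
    }
    return \%overhead;
}

my $overhead = routing_overhead(["n1,200", "n2,400"], ["n1,50"]);
printf "%s,%.1f\n", $_, $overhead->{$_} for sort keys %$overhead;
# prints: n1,25.0 then n2,0.0
```

(The // defined-or operator needs perl 5.10 or later; on older perls, `defined $routing{$node} ? $routing{$node} : 0` does the same.)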

    Some gotchas which caused a little head-scratching:

    1. exit(0) in the child script (datareceived2.pl) terminated the calling perl script as well, so I removed it.
    2. datareceived2.pl used non-strict mode, which meant its variables were global and caused a little confusion, so I added use strict; at the top of both scripts to flag those variables which needed to be localised through the my keyword. http://en.wikibooks.org/wiki/Perl_Programming/Functions#Important_note:_global_and_local_variables gave me good insight.
    3. use of require vs. do: require loads and executes a perl script once and only once, whilst do executes it every time you call it. In my case, I needed do. Thanks to http://soniahamilton.wordpress.com/2009/05/09/perl-use-require-import-and-do/

    About

    .NET developer at thetrainline.com, previously web developer at MRM Meteorite. Awarded a PhD in misbehaviour detection in wireless ad-hoc networks. A keen C# ASP.net developer bridging the gap with APIs and JavaScript frameworks, one web app at a time.

    http://www.paulkiddie.com
