My code currently looks something like this:
my $fh;
if (...) {
    open $fh, '|-', 'cmd1', 'arg1', ...;
} elsif (...) {
    open $fh, '|-', 'cmd2', 'arg1', ...;
} elsif (...) {
    open $fh, '|-', 'cmd3', 'arg1', ...;
}
while (...) {
    # do stuff
    print $fh $stuff;
}
close $fh;
if ($!) {
    # error handling
}
So depending on some conditions, I execute a different command, to which I then write the same content. I now want to conditionally replace one of the commands with a pipeline, so I tried this (error handling omitted for brevity):
my $fh;
if (...) {
    open $fh, '|-', 'cmd1', 'arg1', ...;
} elsif (...) {
    open $fh, '|-', 'cmd2', 'arg1', ...;
} elsif (...) {
    my ($reader, $writer);
    pipe $reader, $writer;
    my $pid = fork();
    if ($pid == 0) {
        open(STDIN, '<&', $fh);
        open(STDOUT, '>&', $writer);
        close($reader);
        exec 'filtercmd';
    }
    close $writer;
    open $reader, '|-', 'cmd3', 'arg1', ...;
}
while (...) {
    # do stuff
    print $fh $stuff;
}
close $fh;
if ($!) {
    # error handling
}
The intention is to spawn a background process to which I hand $fh as standard input, and which then uses a pipe to write to another process. But of course this doesn't work, because at that point $fh is not defined. That is no problem when using open $fh, '|-', but it is a problem here. Do I have to do something with $fh before I can use it as stdin for my forked process, which I later want to write to?
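For reference, here is one way the handles could be arranged so that the write end exists before any child needs it: create the pipe between filtercmd and cmd3 first, fork cmd3 off its read end, and then use the implicit-fork form of open so that filtercmd reads from the parent's write handle and writes into that pipe. This is only a sketch, reusing the hypothetical filtercmd/cmd3 commands from above:

use strict;
use warnings;

# Pipe connecting filtercmd's stdout to cmd3's stdin.
pipe(my $reader, my $writer) or die "pipe failed: $!";

# First child: cmd3 reads from the pipe.
my $pid_cmd3 = fork() // die "fork failed: $!";
if ($pid_cmd3 == 0) {
    open(STDIN, '<&', $reader) or die "dup failed: $!";
    close $reader;
    close $writer;
    exec 'cmd3', 'arg1' or die "exec cmd3 failed: $!";
}
close $reader;

# Second child: the implicit-fork open gives the parent a write handle ($fh)
# connected to the child's STDIN; the child points its STDOUT at the pipe
# and becomes filtercmd.
my $pid_filter = open(my $fh, '|-') // die "fork failed: $!";
if ($pid_filter == 0) {
    open(STDOUT, '>&', $writer) or die "dup failed: $!";
    close $writer;
    exec 'filtercmd' or die "exec filtercmd failed: $!";
}
close $writer;

# The parent can now print to $fh; data flows $fh -> filtercmd -> cmd3.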
EDIT
Comments suggested using IPC::Open2 instead of rolling it myself with fork(). I'm trying to change a project which so far has not used the IPC module, so I'd like to avoid adding this dependency in my patch, but for the sake of argument, let's try to create a minimal example using IPC::Open2. The following is supposed to do something similar to echo foo | tee /dev/stderr | tee /dev/stderr. I'm using tee instead of cat so that we can see foo getting printed at each step of the pipeline:
use IPC::Open2 qw(open2);

my ($reader, $writer, $fh);
pipe $reader, $writer or die "pipe failed: $!";
open2($reader, $fh, 'tee', '/dev/stderr') or die "cannot open2: $!";
open($writer, '|-', 'tee', '/dev/stderr') // die "open failed: $!";
print $fh "foo";
This will print foo
only once, so it reaches the first process but then does not get passed on further down the pipeline. Why?
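One way the two stages could be wired with IPC::Open2 is to forward the first command's output to the second one by hand, since open2 only provides handles for the child's stdin and stdout and does not connect processes to each other. A minimal sketch along those lines, assuming the same tee /dev/stderr commands (it prints foo three times):

use strict;
use warnings;
use IPC::Open2 qw(open2);

# First tee: we write to $to_first and read its stdout back from $from_first.
my $pid = open2(my $from_first, my $to_first, 'tee', '/dev/stderr');

# Second tee: a plain output pipe.
open(my $to_second, '|-', 'tee', '/dev/stderr') or die "open failed: $!";

print $to_first "foo\n";
close $to_first;                            # EOF for the first tee

# Forward the first tee's stdout to the second tee explicitly.
print {$to_second} $_ while <$from_first>;

close $to_second;
waitpid $pid, 0;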
The easy way to do this is to just overcome your distaste for modules and use IPC::Run. Modules written by smart people that let you simply do things which are fairly complex and/or tedious to implement are great; they should be embraced, not shunned. We're not talking Node levels of gratuitous dependencies on is-odd here.
A skeleton framework using it:
#!/usr/bin/env perl
use warnings;
use strict;
use IPC::Run qw/start pump finish/;
use Symbol;

my $h;           # The IPC::Run harness returned by start
my $fh = gensym; # IPC::Run can open pipes automatically but needs a valid glob reference first

if (...) {
    # Simple case; a single command that reads anything written to $fh
    $h = start ['cmd1', 'arg1', ...], '<pipe', $fh;
} elsif (...) {
    $h = start ['cmd2', 'arg1', ...], '<pipe', $fh;
} elsif (...) {
    # Complex case; a pipeline of multiple commands
    $h = start ['filtercmd'], '<pipe', $fh, '|', ['cmd3', 'arg1', ...];
}

while (...) {
    # do stuff
    print $fh $stuff;
    pump $h; # Might not be needed when not using strings as input/output sources, but let's be safe
}

close $fh;
finish $h;
and an echo foo | tee /dev/stderr | tee /dev/stderr equivalent:
#!/usr/bin/env perl
use warnings;
use strict;
use IPC::Run qw/start finish/;
use Symbol;
# Prints foo three times, twice to standard error, once to standard output
my @tee = qw{tee /dev/stderr};
my $fh = gensym;
my $h = start \@tee, '<pipe', $fh, '|', \@tee;
print $fh "foo\n";
close $fh;
finish $h;
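A note on the error handling the original code did after close: finish returns a true value only when every command in the harness exited with status 0, so the check can hang off its return value instead of $!. A sketch, reusing $fh and $h from the skeleton above:

close $fh;                     # send EOF down the pipeline
finish $h
    or die "pipeline failed";  # false if any command exited non-zero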