Raku Gather, I Take

by Arne Sommer


Published 31 March 2019.

Perl 6 → Raku

This article has been moved from «perl6.eu» and updated to reflect the language rename in 2019.

Raku's gather and take allow us to set up a producer and a consumer of a stream of data, without race condition issues. It is an interesting alternative to asynchronous programming. At least some of the time.

The Promotional Video for my Beginning Perl 6 Class at PerlCon 2019 in August has scrolling code, and I wrote a little program to produce the output - with a time delay after each line to make it almost readable.

I'll start with a more traditional approach, before showing the gather/take version.

lines()

This version of the scrolling program uses $*ARGFILES implicitly, and will output the content of every file specified on the command line:

File: scroller-argfiles
constant indent     = 2;        # [2]
constant pause      = 0.05;     # [3]

for lines() -> $line            # [1]
{
  say $line.indent(indent);     # [2]
  sleep pause;                  # [3]
}

[1] This gives us all the lines from all the files.

[2] I added two spaces of indentation on the output, as that made it easier to make a nice looking video. The indent method is primarily used by Pod (the Raku documentation format), but here I actually found a use for it. (And yes, it is contrived.)

[3] I ended up with a delay of 0.05 seconds after some testing.

The constant keyword gives us a constant variable (which is a contradiction in terms, so forget about the variable part). Use it when the value doesn't change after the initial declaration. See docs.raku.org/routine/constant for more information.
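A minimal sketch of constant in action (the names here are made up for illustration):

```raku
constant answer  = 42;           # sigil-less constant, set at compile time
constant @primes = 2, 3, 5, 7;   # constants can have sigils, too

say answer;       # 42
say @primes[2];   # 5
```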

These two lines give the same result:

$ raku scroller-argfiles hello sum
$ cat hello sum | raku scroller-argfiles

See docs.raku.org/language/variables#index-entry-$*ARGFILES for more information about $*ARGFILES.

Displaying the file name

The first version works quite well, but I would like to show the file name before the content, for each file. That isn't possible without accessing the individual files. So a complete rewrite is required:

File: scroller-say
constant indent     = 2;
constant pause      = 0.05;
constant ansi-start = "\e[33m"; # [2]
constant ansi-stop  = "\e[0m";  # [3]

for @*ARGS -> $file             # [1]
{
  say-with-delay "{ ansi-start }File: { $file }{ ansi-stop }"; # [2] [4] [3]
  say-with-delay $_ for $file.IO.lines;                        # [5]
  say-with-delay "";                                           # [6]
}

sub say-with-delay ($line)                                     # [7]
{ 
  say $line.indent(indent);
  sleep pause;
}

[1] We iterate over the file names, as given on the command line.

[2] I use an ANSI Control Sequence to turn on yellow text ("\e[33m") in the terminal.

[3] And another to reset to the default ("\e[0m").

[4] Displaying the file name.

[5] Displaying the lines in the file, one at a time.

[6] Add a blank line after each file.

[7] We call sleep after printing every single line.

The problem with this version is that very large files take a moment to load, possibly causing a noticeable lag in the scrolling.

The array version

This version reads everything in (and pushes the content to an array), before displaying anything. This removes the possibility of lag.

File: scroller-array
constant indent     = 2;
constant pause      = 0.05;
constant ansi-start = "\e[33m";
constant ansi-stop  = "\e[0m";

my @lines;

for @*ARGS -> $file
{
  @lines.push: "{ ansi-start }File: { $file }{ ansi-stop }";
  @lines.push: $_ for $file.IO.lines;
  @lines.push: "";
}

for @lines -> $line
{
  say $line.indent(indent);
  sleep pause;
}

The «problem» with this version is that it reads the entire content of all the files (into an array) before outputting anything. That is a problem with extremely large files, or if we specify a lot of files. And if we only use smaller files, the original lag in the previous version shouldn't be an issue, as it is drowned out by the explicit delay. So we have solved one problem, and introduced another.

The gather/take version

And finally the gather/take version:

File: scroller
my @files = @*ARGS || dir('.', test => { .IO.f }).pick(*); # [1]

constant indent     = 2;
constant pause      = 0.05; 
constant ansi-start = "\e[33m"; 
constant ansi-stop  = "\e[0m"; 

my $lines := gather                                    # [2]
{
  for @files -> $file 
  {
    take "{ ansi-start }File: { $file }{ ansi-stop }"; # [3]
    for $file.IO.lines -> $line { take $line; }        # [3]
    take "";                                           # [3]
  }
}

for $lines -> $line                                    # [4]
{
  say $line.indent(indent);
  sleep pause;
}

[1] The files can either be given on the command line (and we get the names with @*ARGS), or else we use all the files in the current directory in random order.

[2] We wrap the values in a gather block, to «gather them together». We use binding (:=) to ensure that the sequence is kept as a lazy data structure; assigning it to an array instead would force the whole sequence to be evaluated up front.

[3] This is very similar to the «say» and «array» versions, but we use take instead of say or push.

[4] We iterate over the values, and they are computed only when needed.
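To see this on-demand behaviour in isolation, here is a minimal sketch (separate from the scroller) where an infinite gather is bound to a scalar; only the values we actually ask for are ever produced:

```raku
my $squares := gather          # binding keeps the sequence lazy
{
  my $n = 0;
  loop
  {
    take $n * $n;              # pause here until the next value is demanded
    $n++;
  }
}

say $squares[^5];              # (0 1 4 9 16) - only five values are computed
```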

This version has the same potential lag problem as the «scroller-say» version. But it isn't really a problem in practice.

We can consider the gather block as a procedure, and the take statement as its return statement - except that the block keeps track of where it was, and continues after that take the next time a value is demanded.

gather/take isn't really asynchronous, so the introduction on the main page is somewhat misleading.

Exercise

Compare the «scroller-say», «scroller-array» and «scroller» programs. The code is very similar. Which approach do you like best?

Commercial Break

This article was written as part of the work on my upcoming course «Advanced Raku».