# HG changeset patch
# User František Kučera
# Date 1543172286 -3600
# Node ID ee7e9615167333c3258421dd14e3b5375212b449
# Parent 297da74fcab2cb149255b5d27925f0a5434c178e
classic pipeline example

diff -r 297da74fcab2 -r ee7e96151673 relpipe-data/animals.txt
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/relpipe-data/animals.txt	Sun Nov 25 19:58:06 2018 +0100
@@ -0,0 +1,6 @@
+large white cat
+medium black cat
+big yellow dog
+small yellow cat
+small white dog
+medium green turtle
diff -r 297da74fcab2 -r ee7e96151673 relpipe-data/classic-example.xml
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/relpipe-data/classic-example.xml	Sun Nov 25 19:58:06 2018 +0100
@@ -0,0 +1,119 @@
+
+
+ Classic pipeline example
+ Explained example of classic pipeline
+
+

+ Assume that we have a text file containing a list of animals and their properties: +

+ + + +

+ We can pass this file through a pipeline: +

+ + + +

+ The individual steps of the pipeline are separated by the | pipe symbol.
+ In the first step, we just read the file and print it on STDOUT. (Of course, this is a UUoC, but in examples this order is easier to read than a < file redirection.)
+ In the second step, we filter only the dogs and get:

+ +
+ +

+ In the third step, we select the second field (fields are separated by spaces) and get the colours of our dogs:

+ +
+ +

+ In the fourth step, we translate the values to uppercase and get: +

+ +
+ +

+ So we have the list of our dogs' colors printed in upper-case.
+ If we had several dogs of the same color, we could avoid duplicates simply by adding | sort -u to the pipeline (after the cut part).
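The whole example can be reproduced in any POSIX shell. A sketch; the initial printf merely recreates the animals.txt file shown above, so the snippet is self-contained:

```shell
# Recreate the sample file from the beginning of this page:
printf '%s\n' \
    'large white cat' \
    'medium black cat' \
    'big yellow dog' \
    'small yellow cat' \
    'small white dog' \
    'medium green turtle' > animals.txt

cat animals.txt | grep dog                                  # big yellow dog / small white dog
cat animals.txt | grep dog | cut -d " " -f 2                # yellow / white
cat animals.txt | grep dog | cut -d " " -f 2 | tr a-z A-Z   # YELLOW / WHITE

# Deduplicated variant (useful when several dogs share a colour):
cat animals.txt | grep dog | cut -d " " -f 2 | tr a-z A-Z | sort -u
```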

+ +

The great parts

+ +

+ The authors of the cat, grep, cut or tr programs don't have to know anything about cats (n.b. the cat in the command name is a different cat than the one in our text file) and dogs, or about our business domain.
+ They can focus on their own tasks: reading files, filtering by regular expressions, extracting substrings and converting text. And they do them well, without being distracted by any animals.

+ +

+ And we don't have to know anything about low-level programming in the C language or compile anything.
+ We simply build a pipeline in a shell (e.g. GNU Bash) from existing programs and focus on our business logic.
+ And we do it well, without being distracted by any low-level issues.

+ +

The pitfalls

+ +

+ This simple example looks quite flawless.
+ But it is actually very brittle.

+ +

+ What if we have a very big cat that can be described by this line in our file? +

+ +
dog-sized red cat
+ +

In the second step of the pipeline, grep will include this record too, and the final result will be:

+ +
+ +

Which is a really unexpected and unwanted result. We don't have a RED dog; this is just an accident. The same would happen if we had a monkey of a doggish color.

+ +

+ This problem is caused by the fact that grep dog filters lines containing the word dog regardless of its position (first, second or third field).
+ Sometimes we could avoid such problems with a slightly more complicated regular expression and/or by using Perl, but then our pipeline wouldn't be as simple and legible as before.
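A sketch of the false positive, together with one such "more complicated regular expression" (the anchored pattern ' dog$' is our illustration, not part of the original example):

```shell
printf '%s\n' \
    'big yellow dog' \
    'dog-sized red cat' \
    'small white dog' > animals.txt

# The naive filter also matches the cat, because "dog" may occur anywhere:
cat animals.txt | grep dog | cut -d " " -f 2 | tr a-z A-Z
# YELLOW
# RED     <- the dog-sized cat sneaks in
# WHITE

# Anchoring the pattern so "dog" must be the last field filters correctly (for now):
cat animals.txt | grep ' dog$' | cut -d " " -f 2 | tr a-z A-Z
# YELLOW
# WHITE
```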

+ +

+ What if we have a turtle that has a lighter color than the other turtles?

+ +
small light green turtle
+ +

+ If we do grep turtle, it will work well in this case, but our pipeline will fail in the third step, where cut will select only light (instead of light green).
+ And the final result will be:

+ +
+ +

+ Which is definitely wrong, because the second turtle is not LIGHT, it is LIGHT GREEN.
+ This problem is caused by the fact that we don't have well-defined separators between the fields.
+ Sometimes we could avoid such problems by restrictions/assumptions, e.g. that a color must not contain a space character (we could replace spaces with hyphens).
+ Or we could use some other field delimiter, e.g. ; or | or ,. But we would still not be able to use such a character in the field values.
+ So we must invent some kind of escaping (e.g. \; is not a separator but part of the field value)
+ or add some quotes/apostrophes (which still requires escaping, because what if we have e.g. a name field containing an apostrophe?).
+ And parsing such inputs with classic tools and regular expressions is not easy and sometimes not possible at all.
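Both failure modes can be sketched in a few lines; the ;-delimited files are our illustrative assumption, not part of the original example:

```shell
# A multi-word colour shifts the field positions:
printf '%s\n' 'medium green turtle' 'small light green turtle' > animals.txt
cat animals.txt | grep turtle | cut -d " " -f 2 | tr a-z A-Z
# GREEN
# LIGHT   <- should have been LIGHT GREEN

# A different delimiter tolerates spaces inside values...
printf '%s\n' 'medium;green;turtle' 'small;light green;turtle' > animals2.txt
cut -d ";" -f 2 animals2.txt
# green
# light green

# ...but breaks again once a value contains the delimiter itself:
printf '%s\n' 'small;light green; cheap;turtle' > animals3.txt
cut -d ";" -f 3 animals3.txt   # prints " cheap" instead of "turtle"
```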

+ +

+ There are also other problems like character encoding, missing meta-data (e.g. field names and types), joining multiple files (Is there always a new-line character at the end of the file? Or a BOM at the beginning?)
+ or passing several types of data in a single stream (besides the list of animals we could have e.g. also a list of foods or a list of our staff, where each list has different fields).
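The new-line question is easy to demonstrate with a sketch (the file names are hypothetical):

```shell
# The first file is missing its final new-line character:
printf 'small white dog' > part1.txt
printf 'medium green turtle\n' > part2.txt

# cat concatenates raw bytes, so the two records merge into one:
cat part1.txt part2.txt
# small white dogmedium green turtle
```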

+ +
+ +
diff -r 297da74fcab2 -r ee7e96151673 relpipe-data/index.xml
--- a/relpipe-data/index.xml	Sun Nov 25 01:03:26 2018 +0100
+++ b/relpipe-data/index.xml	Sun Nov 25 19:58:06 2018 +0100
@@ -20,10 +20,14 @@
 Each running program (process) has one input stream (called standard input or STDIN), one output stream (called standard output or STDOUT) and also one additional output stream for logging/errors/warnings (STDERR).
 We can connect programs and pass the STDOUT of the first one to the STDIN of the second one (etc.) using pipes.

+ +

+ A classic pipeline example (explained): +

+ + + +

Bytes, text, structured data? XML, YAML, JSON, ASN.1

+ +

Rules:

+ + + +

What are?

@@ -101,12 +118,12 @@

diff -r 297da74fcab2 -r ee7e96151673 relpipe-data/makra/classic-example.xsl
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/relpipe-data/makra/classic-example.xsl	Sun Nov 25 19:58:06 2018 +0100
@@ -0,0 +1,17 @@
+
+
+
cat animals.txt | grep dog | cut -d " " -f 2 | tr a-z A-Z
+ +
+