--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/relpipe-data/examples-in-sql-reading-sql.xml Tue Oct 29 13:39:42 2019 +0100
@@ -0,0 +1,90 @@
+<stránka
+ xmlns="https://trac.frantovo.cz/xml-web-generator/wiki/xmlns/strana"
+ xmlns:m="https://trac.frantovo.cz/xml-web-generator/wiki/xmlns/makro">
+
+ <nadpis>Reading SQL scripts</nadpis>
+ <perex>read an SQL file and display it in a tabular way</perex>
+ <m:pořadí-příkladu>03200</m:pořadí-příkladu>
+
+ <text xmlns="http://www.w3.org/1999/xhtml">
+
+	<p>
+		SQL scripts consist of
+		DDL (Data Definition Language) and DML (Data Manipulation Language) statements
+		and thus describe both the data structures (relations) and the data itself (records).
+		A simple example:
+	</p>
+
+ <m:pre jazyk="sql" src="examples/relpipe-in-sql-1.sql"/>
+
+	<p>
+		We can read such data using the <code>relpipe-in-sql</code> command
+		in the same way as we read CSV, XML or Recfile streams – just pipe the stream into the particular input filter
+		and let it convert the data to the relational format:
+	</p>
+
+ <m:pre jazyk="bash"><![CDATA[cat relpipe-in-sql-1.sql | relpipe-in-sql | relpipe-out-tabular]]></m:pre>
+
+	<p>
+		In the next step we use an output filter and convert the relational data to some other format, e.g. a tabular output displayed in our terminal:
+	</p>
+
+ <m:pre jazyk="text" src="examples/relpipe-in-sql-1.txt"/>
+
+
+ <p>
+ Of course, we can add further steps in our pipeline and use any transformation tool for filtering or modifying data:
+ </p>
+
+ <m:pre jazyk="bash"><![CDATA[# AWK transformation:
+cat relpipe-in-sql-1.sql \
+ | relpipe-in-sql \
+ | relpipe-tr-awk \
+ --relation 'a' --where 'message == "Hello,"' \
+ --relation '.*' --drop \
+ | relpipe-out-tabular
+
+# Guile transformation:
+cat relpipe-in-sql-1.sql \
+ | relpipe-in-sql \
+ | relpipe-tr-guile \
+ --relation 'a' --where '(string= $message "Hello,")' \
+ --relation '.*' --drop \
+ | relpipe-out-tabular]]></m:pre>
+
+ <p>and get filtered output:</p>
+ <m:pre jazyk="text" src="examples/relpipe-in-sql-1.filtered.txt"/>
+
+	<p>
+		However, this is usually not necessary because, once the data are in an in-memory database (which happens on-the-fly in the <code>relpipe-in-sql</code> step),
+		we can use the SQL language for filtering and transformations and get the same output as above:
+	</p>
+
+ <m:pre jazyk="bash"><![CDATA[cat relpipe-in-sql-1.sql \
+ | relpipe-in-sql \
+ --relation 'a' "SELECT * FROM a WHERE message = 'Hello,'" \
+ | relpipe-out-tabular]]></m:pre>
+
+	<p>
+		Actually, <code>relpipe-in-sql</code> is just a symbolic link to <code>relpipe-tr-sql</code>
+		and has the same capabilities (with slightly different default behavior to match the general logic of the input filters).
+		So if we do not need any special features of Guile, AWK or another transformation tool, it is better to stay with SQL (since we already use the SQL module anyway).
+	</p>
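+	<p>
+		We can verify this on our own installation, for example (the exact path is just an illustration and will differ between systems):
+	</p>
+
+	<m:pre jazyk="bash"><![CDATA[# resolve the symbolic link to see its target:
+readlink -f $(which relpipe-in-sql)
+# prints the target of the link, e.g. …/bin/relpipe-tr-sql]]></m:pre>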
+
+	<p>
+		The difference between <code>relpipe-tr-sql</code> and <code>relpipe-in-sql</code> is that
+		the SQL input filter reads SQL scripts from the STDIN while the SQL transformation reads relational data from the STDIN.
+		The input filter also has an implicit <code>--copy '.*'</code> option when executed without any arguments, so it passes through all relations from the input without the need to write any SELECTs.
+		We can override this default behavior by passing any argument – e.g. copy only certain relations using <code>--copy 'a|c'</code> (a regular expression matching relation names),
+		rename them using <code>--copy-renamed 'a|c' 'copy_of_$0'</code>, or run an arbitrary SELECT using <code>--relation 'a' "SELECT * FROM …"</code> as we have seen above.
+	</p>
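+	<p>
+		For illustration – assuming a script that defines relations named <code>a</code>, <code>b</code> and <code>c</code> (a hypothetical sketch, not one of the bundled examples) – we could do:
+	</p>
+
+	<m:pre jazyk="bash"><![CDATA[# pass through only the relations a and c:
+cat some-script.sql | relpipe-in-sql --copy 'a|c' | relpipe-out-tabular
+
+# pass them through under the names copy_of_a and copy_of_c:
+cat some-script.sql | relpipe-in-sql --copy-renamed 'a|c' 'copy_of_$0' | relpipe-out-tabular]]></m:pre>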
+
+	<p>
+		SQL scripts can be used for storing relational data as an alternative to other human-readable and human-editable text formats like XML, CSV or Recfiles.
+		Compared to those formats, SQL scripts may even contain some logic (e.g. calls to SQL functions) or views.
+	</p>
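+	<p>
+		Such a hand-written script might look like this (a hypothetical sketch, not one of the bundled examples):
+	</p>
+
+	<m:pre jazyk="sql"><![CDATA[CREATE TABLE contact (id INTEGER, name TEXT, email TEXT);
+INSERT INTO contact VALUES (1, 'Alice', 'alice@example.com');
+INSERT INTO contact VALUES (2, 'Bob', 'bob@example.com');
+-- besides plain data, the script may also carry some logic, e.g. a view:
+CREATE VIEW contact_name AS SELECT name FROM contact;]]></m:pre>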
+
+
+ </text>
+
+</stránka>
\ No newline at end of file