relpipe-data/examples-in-xmltable-tr-sql-xhtml-table.xml
branchv_0
changeset 268 1b8576c9640c
267:1826d1cce404 268:1b8576c9640c
       
<stránka
	xmlns="https://trac.frantovo.cz/xml-web-generator/wiki/xmlns/strana"
	xmlns:m="https://trac.frantovo.cz/xml-web-generator/wiki/xmlns/makro">

	<nadpis>Processing data from an XHTML page using XMLTable and SQL</nadpis>
	<perex>reading a web table and computing some statistics</perex>
	<m:pořadí-příkladu>03000</m:pořadí-příkladu>

	<text xmlns="http://www.w3.org/1999/xhtml">

		<p>
			Sometimes there are interesting data in a semi-structured form on a website.
			We can read such data and process them as relations using the XMLTable input and e.g. the SQL transformation.
			This example shows how to read the list of available Relpipe implementations,
			filter the commands (executables) and compute statistics, so we can see how many input filters, output filters and transformations we have:
		</p>

		<m:pre jazyk="bash" src="examples/xhtml-table-sql-statistics.sh"/>
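
		<p>
			For illustration, such a pipeline might look roughly like the sketch below.
			This is not the linked script: the URL, XPath expressions, attribute names and the SQL query are placeholders invented for this page,
			and the exact option names should be checked against the <code>relpipe-in-xmltable</code> and <code>relpipe-tr-sql</code> documentation.
		</p>

		<pre>
# a minimal sketch, not the actual example script; URL, XPaths, attribute names and SQL are placeholders
curl -s "https://relpipe.globalcode.info/" \
	| relpipe-in-xmltable \
		--namespace "h" "http://www.w3.org/1999/xhtml" \
		--relation "implementation" \
			--records "(//h:table)[1]/h:tbody/h:tr" \
			--attribute "command" string "h:td[1]" \
			--attribute "type" string "h:td[2]" \
	| relpipe-tr-sql \
		--relation "statistics" "SELECT type, count(*) AS cnt FROM implementation GROUP BY type ORDER BY cnt DESC" \
	| relpipe-out-tabular
</pre>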
       
		<p>This script will generate the following relation:</p>

		<m:pre jazyk="text" src="examples/xhtml-table-sql-statistics.txt"/>

		<p>
			Using these tools we can build e.g. an automated system that watches a website and notifies us about changes.
			In SQL, we can use the EXCEPT operation to compare the current data with an older snapshot and SELECT only the new or changed records.
		</p>
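
		<p>
			A rough sketch of such a comparison follows. It assumes that yesterday's data were saved in a file in the relpipe format
			with the relation renamed to <code>old_implementation</code>, and that the relation and attribute names match the example above;
			all of these names are made up for this illustration.
		</p>

		<pre>
# rows that are in the current snapshot but were not in the old one
cat old-snapshot.dat current-snapshot.dat \
	| relpipe-tr-sql \
		--relation "changes" "SELECT command, type FROM implementation EXCEPT SELECT command, type FROM old_implementation" \
	| relpipe-out-tabular
</pre>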
       

		<p>
			There are also some caveats:
		</p>

		<p>
			What if the table structure changes?
			First of all, we must say that parsing a web page (which is a presentation format, not designed for machine processing) is always suboptimal and hackish.
			The proper way is to arrange a machine-readable format for data exchange (e.g. XML with a well-defined schema).
			But if we do not have this option and must parse some web page, we can make it more robust in two ways:
		</p>

		<ul>
			<li>modify the <code>--records</code> XPath expression so that it selects the table with the exact number of columns and proper header names instead of simply selecting the first table (see the sketch below this list),</li>
			<li>use XQuery, which is much more powerful than XMLTable and can generate even dynamic relations with attributes derived from the content of the XHTML table, so if new columns are added, we automatically get new attributes.</li>
		</ul>
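
		<p>
			The first option from the list might look like the following fragment.
			The expected column count and header name are of course just assumptions about the particular page:
		</p>

		<pre>
# select the table by its shape and header names instead of by its position on the page
--records "//h:table[count(h:thead/h:tr/h:th) = 2 and h:thead/h:tr/h:th[1] = 'Command']/h:tbody/h:tr"
</pre>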
       

		<p>
			What if the web page is invalid? Unfortunately, the current web is full of invalid and faulty documents that cannot be easily parsed.
			In such a case, we can pass the stream through the <code>tidy</code> tool, which fixes these defects, and then pass the result to <code>relpipe-in-xmltable</code>.
			It is just one additional step in our pipeline.
		</p>
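
		<p>
			A sketch of such an extended pipeline, with the same caveats as above (the URL is a placeholder and the
			<code>tidy</code> options may need tuning for a particular page):
		</p>

		<pre>
# clean up the invalid HTML first, then parse it as XHTML
curl -s "https://example.com/some-page.html" \
	| tidy -asxhtml -numeric -quiet 2> /dev/null \
	| relpipe-in-xmltable \
		--namespace "h" "http://www.w3.org/1999/xhtml" \
		--relation "rows" \
			--records "(//h:table)[1]/h:tbody/h:tr" \
			--attribute "first_column" string "h:td[1]" \
	| relpipe-out-tabular
</pre>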
       

	</text>

</stránka>