Usage
aext is a tool used to extract full XML documents out of CLAIMS Direct. It is installed as part of the CLAIMS Direct repository. Please see the Client Tools Installation Instructions for more information about how to install this tool.
```
aext [ Options ... ]

  --pgdbname=s    Database configuration name (default=alexandria)
  --solrurl=s     Solr index url (default=http://solr.alexandria.com:8080/alexandria-index/alexandria)
  --loadid=i      modified-load-id to extract
  --table=s       extract from table
  --sqlq=s        extract from SQL statement
  --solrq=s       extract from Solr query
  --root=s        directory to deposit output file(s) or into which files will be archived
  --prefix=s      prefix for output files (default=batch)
  --archive       archive data into predictable path structure
  --nthreads=i    number of parallel processes (default=4)
  --batchsize=i   number of documents per process (default=500)
  --dbfunc=s      specific user-defined database function
```
Detailed Description of the Parameters
Connectivity
Parameter | Description |
---|---|
pgdbname | As configured in /etc/alexandria.xml, the database entry pointing to the on-site CLAIMS Direct PostgreSQL instance. The default value is alexandria, as this value is pre-configured in /etc/alexandria.xml. |
solrurl | Available with the optional Solr on-site installation only, this is the URL of the standalone CLAIMS Direct Solr instance or, if used, the URL of the load balancer. Although a default value is provided, this parameter is required whenever --solrq is specified. |
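For example, a minimal sketch of overriding the connectivity settings might look like the following; the database name must match an entry in your /etc/alexandria.xml, and the Solr URL is a placeholder for your own instance:

```
# Override the database entry only (must exist in /etc/alexandria.xml)
aext --pgdbname=alexandria --loadid=261358 --root=/tmp

# Point at a specific Solr instance when using a Solr query as the source
aext --solrurl=http://SOLR-INSTANCE-URL/alexandria-v2.1/alexandria --solrq='loadid:261358' --root=/tmp
```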
Source
The following parameters determine the source criteria for extracting CLAIMS Direct XML. Only one may be specified.
Parameter | Description |
---|---|
loadid | The modified_load_id from the table xml.t_patent_document_values. Please see the documentation on content updates, which describes the various load-ids. |
table | The name of a user-created table containing, at minimum, a publication_id column. |
sqlq | Any raw SQL that returns one or more publication_id values. |
solrq | Any raw Solr query. |
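As an illustrative sketch, each of the following invocations targets the same set of documents (modified_load_id 261358), one per source selector; the table name and Solr URL are placeholders taken from the examples later in this article:

```
# One source selector per invocation
aext --loadid=261358
aext --table=mySchema.t_load_261358
aext --sqlq="SELECT publication_id FROM xml.t_patent_document_values WHERE modified_load_id=261358"
aext --solrurl=http://SOLR-INSTANCE-URL/alexandria-v2.1/alexandria --solrq='loadid:261358'
```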
Extract Naming and Destination
Parameter | Description |
---|---|
root | The output location of either the batches or, if --archive is specified, the root directory for files in the predictable path structure. The default is the current working directory. |
prefix | The standard extract is run in batches. This parameter specifies the prefix for each output file. The default is batch . |
archive | Archive the XML into a predictable path structure: <root>/<country>/<kind>/nnnnnn/nn/nn/nn/ucid.xml, where the numeric path segments are derived from the document number, for example <root>/JP/B2/000H07/11/02/83/JP-H07110283-B2.xml (see the archive listing in the examples below). |
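As a sketch of how these parameters interact (the load-id and paths are examples only), batch extraction and archive extraction differ only in these flags:

```
# Batched output files named TEST.*.xml in /tmp
aext --loadid=261358 --root=/tmp --prefix=TEST

# Archived output under /tmp/<country>/<kind>/... instead of batches
aext --loadid=261358 --root=/tmp --archive
```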
Process Options
Parameter | Description |
---|---|
nthreads | For increased speed, the extraction of data by default is done using parallel processes. This parameter specifies exactly how many parallel processes will be used. A general rule of thumb is to set this parameter to the number of CPU cores the machine has. |
batchsize | This parameter specifies the number of documents to extract per thread. If you know the content you are extracting, this parameter can be used to increase speed: bibliographic-only content benefits from a larger value, while full-text content benefits from a smaller value. |
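As a rough sketch only, an 8-core machine extracting lighter-weight content might use settings along these lines; the values are illustrative starting points, not benchmarks:

```
# Illustrative tuning on an 8-core machine: more workers, larger batches
aext --loadid=261358 --nthreads=8 --batchsize=1000 --root=/tmp
```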
Output XML Filtering
Parameter | Description |
---|---|
dbfunc | By default, aext uses the internal PostgreSQL function xml.f_patent_document_s to extract full XML documents. This parameter allows you to specify a custom extract function (see Extracting Using a Custom Database Function below). |
Examples
Extracting Using a Specific load-id
The following example uses modified_load_id 261358. The resulting XML batches will be in /tmp and will be prefixed with TEST. The logging output may be different depending on your logging configuration.
```
aext --loadid=261358 --root=/tmp --prefix=TEST

##
## the results in /tmp
##

ls -l /tmp/TEST*.xml
-rw-r--r-- 1 root root 56626271 Apr 6 03:52 /tmp/TEST.00000001-00000001.00000500.001491465129.xml
-rw-r--r-- 1 root root 68733642 Apr 6 03:52 /tmp/TEST.00000002-00000501.00001000.001491465129.xml
-rw-r--r-- 1 root root 91214345 Apr 6 03:52 /tmp/TEST.00000003-00001001.00001500.001491465129.xml
-rw-r--r-- 1 root root 91201427 Apr 6 03:52 /tmp/TEST.00000004-00001501.00002000.001491465129.xml
-rw-r--r-- 1 root root 79966094 Apr 6 03:52 /tmp/TEST.00000005-00002001.00002500.001491465129.xml
-rw-r--r-- 1 root root 86552704 Apr 6 03:52 /tmp/TEST.00000006-00002501.00003000.001491465129.xml
-rw-r--r-- 1 root root 35221625 Apr 6 03:52 /tmp/TEST.00000007-00003001.00003500.001491465129.xml
-rw-r--r-- 1 root root 68582397 Apr 6 03:52 /tmp/TEST.00000008-00003501.00004000.001491465129.xml
-rw-r--r-- 1 root root 80311992 Apr 6 03:52 /tmp/TEST.00000009-00004001.00004500.001491465129.xml
-rw-r--r-- 1 root root 17395649 Apr 6 03:52 /tmp/TEST.00000010-00004501.00004613.001491465129.xml
```
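As a quick sanity check, you can count the documents across the batch files. This is a sketch that assumes each extracted document opens with a patent-document element, as emitted by the extract functions shown elsewhere in this article:

```
# Rough document count across all batches (assumes a patent-document root element)
grep -o '<patent-document ' /tmp/TEST*.xml | wc -l
```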
Extracting Using a Table
The following example uses the table parameter. A user-defined table is created with a subset of documents, which are then extracted using aext.
First we create the table in a private schema.
```
CREATE TABLE mySchema.t_load_261358 (
  publication_id integer
);
```
Next, we load the table with publication-ids. For the purposes of this example, all documents associated with modified_load_id 261358 will be selected.
```
INSERT INTO mySchema.t_load_261358 ( publication_id )
  SELECT t.publication_id
    from xml.t_patent_document_values as t
   where t.modified_load_id=261358;
```
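Before extracting, it can be worth confirming that the table was populated as expected. A minimal sketch using psql (connection options omitted and dependent on your installation):

```
# Sanity check: count the publication ids loaded above
psql -c "SELECT count(*) FROM mySchema.t_load_261358;"
```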
Finally, extract the documents into a predictable path structure in the current directory.
```
aext --table=mySchema.t_load_261358 --archive

##
## abbreviated listing
##

./JP
./JP/B2
./JP/B2/000H07
./JP/B2/000H07/11
./JP/B2/000H07/11/02
./JP/B2/000H07/11/02/83
./JP/B2/000H07/11/02/83/JP-H07110283-B2.xml
./JP/B2/000H07/11/56
./JP/B2/000H07/11/56/83
./JP/B2/000H07/11/56/83/JP-H07115683-B2.xml
etc ...
```
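To verify the archive run, a simple file count is often enough. This sketch assumes it is run from the directory that now contains the country subdirectories:

```
# Count all archived documents under the current directory
find . -name '*.xml' | wc -l
```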
Extracting Using SQL
This example takes the raw SQL used to populate the private table in the example above, and uses it directly as a parameter to aext.
```
aext --sqlq="SELECT t.publication_id from xml.t_patent_document_values as t where t.modified_load_id=261358" \
     --archive \
     --root=/tmp
```
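Because --sqlq accepts any SQL returning publication_id values, the query can be more selective than a plain load-id filter. A hedged sketch, assuming the country column of xml.t_patent_document_values (referenced by the extract function shown later in this article):

```
# Illustrative: restrict the same load to Japanese documents only
aext --sqlq="SELECT t.publication_id from xml.t_patent_document_values as t where t.modified_load_id=261358 and t.country='JP'" \
     --archive \
     --root=/tmp
```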
Extracting Using Solr
If the optional CLAIMS Direct Solr instance is installed, the power of Solr can be used to search, filter, and extract documents. This example simply pulls the same set of documents as above using Solr query syntax.
```
aext --solrurl=http://SOLR-INSTANCE-URL/alexandria-v2.1/alexandria --archive --solrq='loadid:261358'

[aindex01] [2017/04/06 04:17:11] [DEBUG ] [preparing extract ...]
[aindex01] [2017/04/06 04:17:11] [DEBUG ] [creating t_tmp_000000000000_001491466631 ... ]
[aindex01] [2017/04/06 04:17:11] [DEBUG ] [querying SOLR (http://SOLR-INSTANCE-URL/alexandria-v2.1/alexandria { loadid:261358 })]
[aindex01] [2017/04/06 04:17:12] [DEBUG ] [running extract ...]
[aindex01] [2017/04/06 04:17:27] [DEBUG ] [finalizing extract ...]
[aindex01] [2017/04/06 04:17:27] [INFO ] [extract complete: { 4613 documents across 10 batches in 15.643s (294.894/s) }]
```
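Any valid Solr query syntax can be passed to --solrq. As an illustrative sketch using only the loadid field shown above (other field names depend on your Solr schema), a standard Solr range query could extract several consecutive loads in one run:

```
# Hypothetical: extract a range of load-ids in a single run
aext --solrurl=http://SOLR-INSTANCE-URL/alexandria-v2.1/alexandria \
     --archive \
     --solrq='loadid:[261358 TO 261360]'
```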
Extracting Using a Custom Database Function
The following example describes a use case in which only CPC classifications are of interest. It makes use of a custom extract function created in a private schema.
Manipulating the content of the XML carries the risk of producing invalid XML. If you are validating the XML against the CLAIMS Direct DTD, be aware of required elements.
First, we create the function that extracts only publication information and classification information.
```
CREATE OR REPLACE FUNCTION mySchema.f_cpc_only( integer, text )
  RETURNS SETOF xml AS
$BODY$
  select
    xmlelement(name "patent-document",
      xmlattributes(modified_load_id as "file-reference-id",
                    $1 as "mxw-id",
                    ucid as ucid,
                    lang as lang,
                    country as country,
                    doc_number as "doc-number",
                    kind as kind,
                    to_char(published, 'YYYYMMDD') as "date",
                    to_char(produced, 'YYYYMMDD') as "date-produced",
                    family_id as "family-id",
                    case when withdraw=true then 'yes' end as "withdrawn"),
      xmlelement(name "bibliographic-data",
        (select content from xml.t_publication_reference where publication_id=$1),
        (select xmlelement(name "technical-data",
                  (select xmlagg(content) from xml.t_classifications_cpc where publication_id=$1)
               )
        )
      ) -- end bibliographic-data
    ) -- end patent-document
  from xml.t_patent_document_values
  where publication_id=$1
$BODY$
  LANGUAGE sql VOLATILE
  COST 100
  ROWS 1000;
```
Together with the --loadid parameter, we can now extract XML that includes only publication and CPC information.
```
aext --loadid=261358 --dbfunc=mySchema.f_cpc_only
```
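To spot-check the reduced output, you can confirm that the expected elements appear in the batch files. This sketch assumes batches with the default prefix in the current working directory and greps for element names emitted by the function above:

```
# Rough spot-check: every patent-document emitted by the custom function
# should contain a bibliographic-data element
grep -o '<patent-document ' batch*.xml | wc -l
grep -o '<bibliographic-data' batch*.xml | wc -l
```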
Checking Status
To determine the current status of the data extraction, check the log output for the batch number currently being extracted, then insert it into the following formula:
( ( total-documents / batch-size ) - current-batch-number ) * batch-size = number of documents left to extract
For example, given 17000000 total documents, a batch size of 500, and a current batch number of 31000, the formula would determine that there are 1500000 documents left to extract:
( ( 17000000 / 500 ) - 31000 ) * 500 = 1500000
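The same arithmetic can be done directly in the shell; the figures below are the ones from the example above:

```
# 17000000 total documents, batch size 500, current batch number 31000
echo $(( (17000000 / 500 - 31000) * 500 ))   # prints 1500000
```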