
Related Pages

For step-by-step instructions, see the PostgreSQL Installation Instructions.

The PostgreSQL component is the heart of CLAIMS Direct. It stores the XML for the entire data warehouse collection, processes updates from the primary, and serves as the data source for the optional SOLR index.

Hardware Requirements

  System Memory       24GB
  Storage Capacity    6TB (SSD preferred)

Software Requirements

Supported Versions

  Operating System    RHEL 7-8, CentOS 7-8, Amazon Linux AMI
  PostgreSQL          9.2-9.6, 10

yum -y install \
  postgresql postgresql-contrib \
  postgresql-odbc postgresql-plperl
IFI CLAIMS Repository

yum -y install \

Note: CentOS 7 needs an additional repository: yum -y install epel-release

Amazon Linux 1
yum -y install \

yum -y install \

Note: This installs EPEL and the PowerTools repositories


Some CLAIMS Direct loading and maintenance code uses the PostgreSQL Perl extension (plperl) and relies heavily on the libxml2 XML parsing library. IFI CLAIMS has produced a patched release of libxml2 as an RPM; updating libxml2 from the IFI CLAIMS software repository is highly recommended. For additional distributions, please contact IFI CLAIMS support.
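A quick way to confirm that plperl and libxml2-backed XML parsing are available in an instance is a short psql session. This is a sketch using the same `<POSTGRESQL-HOST>` placeholder as the examples below:

psql -Upostgres -h <POSTGRESQL-HOST> postgres <<'SQL'
-- plperl must be installable for the loading/maintenance code
CREATE EXTENSION IF NOT EXISTS plperl;
-- xpath() exercises libxml2 via the built-in xml type
SELECT xpath('/doc/text()', '<doc>ok</doc>');
SQL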

PostgreSQL Schema and Tools

To further streamline CLAIMS Direct PostgreSQL schema versioning, we are introducing a schema-and-tools package in the CLAIMS Direct yum repository. The package provides the main CLAIMS Direct database schema as well as the supporting users database needed to deploy on-site web services. It can be installed as follows:

sudo yum -y install alexandria-schema-tools

The package contains the SQL needed to create the database(s) for PostgreSQL versions 9.2 - 10.x, as well as tools to check the installation, populate the tables, and perform extracts. To create the databases, load the SQL into the instance via psql:

echo "create role alexandria with superuser login;" \
  | psql -Upostgres postgres

gunzip -c /usr/share/alexandria-schema-tools/alexandria-dbs-10.x.gz \
  | psql -Ualexandria -h <POSTGRESQL-HOST> postgres

# There's also support for 9.x,
# gunzip -c /usr/share/alexandria-schema-tools/alexandria-dbs-9.x.gz \
#  | psql -Ualexandria -h <POSTGRESQL-HOST> postgres
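After loading, you can verify that the expected databases exist with psql's database-listing flag (same host placeholder as above):

# -l lists all databases in the instance; the schema SQL should have
# created the alexandria database and the supporting users database
psql -Ualexandria -h <POSTGRESQL-HOST> -l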

In addition to the schema, the package includes a collection of tools that help determine instance loading feasibility, populate tables, perform bulk extracts, report table counts and sizes, and more. All tools accept the same parameters, with reasonable defaults for a Type-1 CLAIMS Direct installation.


  Option        Description                     Default
  -H|--host     specify host                    localhost
  -P|--port     specify port                    5432
  -d|--database specify database                alexandria
  -u|--user     specify database user name      alexandria
  -h|--help     print this usage and exit
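Since every tool shares this option set, an invocation generally follows the pattern below. Here `<tool>` stands in for any of the installed utilities, and the host is the same placeholder used elsewhere on this page:

<tool> --host <POSTGRESQL-HOST> --port 5432 \
       --database alexandria --user alexandria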


Use this tool to test the viability of populating the newly created database. It runs a few simple checks.

Testing localhost/alexandria ...
  OK    : procedural language sql
  OK    : procedural language plpgsql
  OK    : procedural language plperl
  OK    : procedural language plperlu
  OK    : XML capability (test 1/libxml): 
  OK    : XML capability (test 2/libxml): 

You can use this tool to populate the database with the initial data delivery. After unpacking or otherwise making the initial data available, cd into the top-level directory and run:

Note: This tool expects there to be a ./data directory in the current working directory. 
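Because the tool resolves ./data relative to the current working directory, a quick pre-flight check can save a failed run. A minimal sketch, where DELIVERY_DIR is a placeholder for wherever the delivery was unpacked:

```shell
# Confirm the working directory contains the ./data directory
# the populate tool expects before invoking it
DELIVERY_DIR=${DELIVERY_DIR:-.}
cd "$DELIVERY_DIR"
if [ -d ./data ]; then
  echo "OK: ./data present in $(pwd)"
else
  echo "ERROR: no ./data directory in $(pwd)" >&2
fi
```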

This tool counts the rows in all tables in the main schemas (cdws, xml):
683	cdws.t_applications
0	cdws.t_cited_documents
0	cdws.t_class_hierarchies
813	cdws.t_priority_documents
366	xml.t_abstract
0	xml.t_amended_claims
366	xml.t_application_reference
# etc.

Analyze all tables in the main schema

Calculate the on-disk size of each table in the main schema
192 kB	cdws.t_applications
48 kB	cdws.t_cited_documents
40 kB	cdws.t_class_hierarchies
272 kB	cdws.t_priority_documents
416 kB	xml.t_abstract
32 kB	xml.t_amended_claims
232 kB	xml.t_application_reference
# etc ... 
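A report like the one above can be reproduced with PostgreSQL's built-in size functions. The query below is a sketch along those lines, not necessarily the tool's exact implementation (host placeholder as before):

psql -Ualexandria -h <POSTGRESQL-HOST> alexandria <<'SQL'
-- pg_total_relation_size includes indexes and TOAST data
SELECT schemaname || '.' || relname AS "table",
       pg_size_pretty(pg_total_relation_size(relid)) AS size
  FROM pg_catalog.pg_statio_user_tables
 WHERE schemaname IN ('cdws', 'xml')
 ORDER BY 1;
SQL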

Extract all relevant table data to the ./data directory

Next Steps

Once the data has been loaded, proceed to:
