Note: It is recommended to copy and paste the code provided in these instructions.
1. IFI CLAIMS will create a single tar.gz file that includes several bash scripts used during implementation and a subdirectory of tar.gz files, one for each table in the PostgreSQL data warehouse. This file will be placed into an S3 bucket, and we will provide you with a link to access and download it.
2. Extract the tar.gz file into your local environment.
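For example, assuming the delivered file is named claims-direct.tar.gz and you extract to /srv/claims-direct (both names are placeholders; substitute your actual file and directory):

```bash
# Extract the delivered bundle into a working directory.
mkdir -p /srv/claims-direct
tar -xzf claims-direct.tar.gz -C /srv/claims-direct
cd /srv/claims-direct
```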
3. Use yum to install PostgreSQL.
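A minimal sketch; package names and available versions vary by operating system:

```bash
# Install the PostgreSQL server and client packages.
yum -y install postgresql-server postgresql
```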
4. If you are using CentOS 7 or RHEL 7, add the EPEL repository using the commands shown below. Otherwise, continue to step 5.
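A sketch of the usual EPEL setup on each platform; the exact commands IFI CLAIMS provides may differ:

```bash
# CentOS 7: epel-release is available from the default Extras repository.
yum -y install epel-release

# RHEL 7: install the EPEL release package directly from Fedora.
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```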
5. Install the appropriate repository for your operating system using the command listed in the Repository column below. If necessary, adjust the code to reflect the version you are using.
|Operating System|Repository|
|---|---|
|Amazon Linux 1|`yum -y install \`|

Note: This installs the EPEL and PowerTools repositories.
6. After installing the repository, run a yum update to pull in the patched version of libxml2 from the IFI CLAIMS repository.
Note: Reboot if kernel was upgraded.
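For example:

```bash
# Pull in updated packages, including the patched libxml2.
yum -y update

# Check whether the kernel was upgraded; reboot if the newest installed
# kernel differs from the one currently running.
uname -r
rpm -q kernel
```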
7. CLAIMS Direct requires a working PostgreSQL cluster. If you do not have an initialized cluster, the following commands will initialize the cluster and give you the rudimentary authentication and access levels needed to run CLAIMS Direct. Note that the initdb command has to be run by the user who owns PostgreSQL (user postgres). Enter:
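A minimal sketch, assuming the default Red Hat-style data directory /var/lib/pgsql/data (your layout may differ):

```bash
# Initialize the cluster as the postgres user.
sudo -u postgres initdb -D /var/lib/pgsql/data
```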
8. Using a text editor, modify the IP addresses in the following configuration files.
Note: If you are installing the client tools on a separate machine, other hosts will be required. Be sure to remove the hash (#) at the start of the ‘other hosts’ entry if you need to enable access for other hosts or subnets.
Note: If you already have an initialized cluster, be certain that local access is enabled for a stand-alone installation. For a distributed installation, if a separate services machine is created, its IP address needs access as well. This is imperative for the client update procedures.
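As an illustration, the relevant entries typically live in postgresql.conf and pg_hba.conf; the addresses below are placeholders for your own machines:

```
# postgresql.conf: listen on localhost plus the server's network interface.
listen_addresses = 'localhost,192.0.2.10'    # placeholder IP

# pg_hba.conf: enable local access, plus other hosts if needed.
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   127.0.0.1/32    trust
#host   all       all   192.0.2.0/24    trust   # other hosts; remove the hash to enable
```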
9. Enable and restart the PostgreSQL cluster.
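For example, on a systemd-based system (the unit name varies with the PostgreSQL version and packaging):

```bash
systemctl enable postgresql
systemctl restart postgresql
```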
10. Create the role alexandria and load the SQL via psql into the instance.
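A sketch, assuming the SQL ships in the extracted bundle; the file name below is hypothetical, so substitute the actual file from your delivery:

```bash
# Create the alexandria role with login privileges.
sudo -u postgres psql -c "CREATE ROLE alexandria WITH LOGIN;"

# Load the shipped SQL (hypothetical file name).
sudo -u postgres psql -f alexandria-schema.sql
```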
11. Change to the directory into which you extracted the tar.gz file and create the database. If desired, you can redirect errors (if any) to LOG.2.
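For example, using the placeholder extraction directory from step 2:

```bash
cd /srv/claims-direct
# Create the alexandria database owned by the alexandria role.
sudo -u postgres createdb -O alexandria alexandria 2> LOG.2
```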
12. To ensure that the database has been created, run:
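One way to list the databases, as a sketch:

```bash
sudo -u postgres psql -l
```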
The results should show the alexandria database.
13. Run pgtune (this requires Python). Alternatively, you can use the online tool https://pgtune.leopard.in.ua/#/ and fill in the required values as well as those that correspond to your system. Add or change the appropriate settings and restart PostgreSQL.
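A sketch of a typical pgtune invocation for a data-warehouse workload; flags vary between pgtune versions, so check pgtune --help:

```bash
# Generate a tuned configuration from the current one.
pgtune -i /var/lib/pgsql/data/postgresql.conf -o postgresql.conf.tuned -T DW
```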
14. Run the pre-flight check script to confirm that your system is properly configured to load the data.
Note: The scripts used in these instructions are located in the directory into which you extracted the tar.gz file.
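For example, with a hypothetical script name (check the extracted bundle for the actual pre-flight script):

```bash
./cd-preflight-check.sh
```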
The sample output of a properly configured system would look like this:
Resolve any errors you recognize. For unfamiliar errors, please contact email@example.com.
15. Use the load script to load the CLAIMS Direct data into PostgreSQL tables. Since the loading process will take 1-2 days, we recommend that you use the nohup command to detach the script from the terminal and allow it to run in the background.
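A sketch, with a hypothetical load script name:

```bash
# Detach the load from the terminal and log its output.
nohup ./cd-load.sh > load.log 2>&1 &
```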
16. Use the ps command periodically to check whether the loading process has completed.
Note: If you want to check on the process while it is running, use the following command to show the progress of the tables which are being copied:
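For example (the grep pattern assumes the hypothetical script name above; the shipped progress command may differ):

```bash
# Check whether the load script is still running.
ps -ef | grep -i cd-load

# Show the tables currently being copied (a sketch using pg_stat_activity).
sudo -u postgres psql -c \
  "SELECT pid, query FROM pg_stat_activity WHERE query ILIKE 'COPY%';"
```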
17. Once the loading process is complete, you can run the cd-count.sh script, a simple QA of table counts, to ensure that the tables have loaded correctly. Modify the IP address to reflect the PostgreSQL server. This may take an hour or more to run.
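For example, after editing the IP address in the script; since it can run for an hour or more, you may want to detach it as well:

```bash
nohup ./cd-count.sh > counts.log 2>&1 &
```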
The results should show that 39 tables have loaded. The following tables are deprecated and will show a count of 0:
The following tables will be populated if you have a Premium Plus subscription. For Basic and Premium subscriptions, they will show a count of 0:
For more information about the tables, see Data Warehouse Design.
18. Optional: you may want to run a simple SQL query as an additional test to confirm that the data is present.
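As a final sanity check, a query along these lines works; the table name below is illustrative, so see Data Warehouse Design for the actual table names:

```bash
sudo -u postgres psql -d alexandria -c \
  "SELECT count(*) FROM xml.t_patent_document_values;"  # illustrative table name
```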