
Postgres Administration Notes

postgres is a great open-source database. i'm going on 200 million rows per month in one of my postgres installations and have found it to be very stable and fast. this is a quick cheat sheet for the seasoned systems administrator.

compile, install, start
./configure
make
su
make install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &

install procedural language in template1 db
shell> psql template1 -U postgres
template1=# create function plpgsql_call_handler ( ) returns opaque as '$libdir/plpgsql' language c;
CREATE
template1=# create trusted procedural language plpgsql handler plpgsql_call_handler;
CREATE
template1=#
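
a note: on installations that still ship the createlang utility (it was removed in postgres 10), the same thing can be done in one step from the shell. assuming the install prefix used above:

```shell
/usr/local/pgsql/bin/createlang plpgsql template1
```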

give postgres user a password
template1=# alter user postgres with password 'pig_blanket';
ALTER USER
template1=#

create user with db create priv, login as user, create database, revoke db create priv.
template1=# create user web with password 'g-money' createdb;
CREATE USER
template1=# \q
shell> psql template1 -U web
template1=> create database web;
CREATE DATABASE
template1=> \q
shell> psql template1 -U postgres
template1=# alter user web nocreatedb;
ALTER USER
template1=# \q
shell>

startup script
#! /bin/sh

case "$1" in
  start)
    su -c '/usr/local/pgsql/bin/pg_ctl start -D /usr/local/pgsql/data -l /usr/local/pgsql/logs/server.log -o "-i"' postgres
        ;;
  stop)
    su -c '/usr/local/pgsql/bin/pg_ctl stop -D /usr/local/pgsql/data' postgres
        ;;
  *)
        echo "Usage: /etc/init.d/postgresql {start|stop}"
        exit 1
esac

exit 0

Alternatively, you can run postgres under daemontools from /service with the following run file.
#!/bin/sh
exec setuidgid postgres /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -i 2>&1


Ruby and Postgres
When installing the native postgres driver for ruby, you probably want to tell it where you installed postgres:

gem install postgres -- --with-pgsql-dir=/usr/local/pgsql

dump a table as CSV

Set the field separator to a comma, show only tuples and unalign output:
\f ','
\t
\a
\o things.cdr
select * from thing;
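
on psql versions with csv support, \copy can do the same dump in one line without fiddling with output settings. a sketch, assuming the same thing table:

```sql
\copy thing to 'things.csv' csv header
```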

UUIDs in postgres 9.4 and above

template1=# create extension pgcrypto;
CREATE EXTENSION
template1=# select gen_random_uuid();
gen_random_uuid
--------------------------------------
45dd2777-b401-4822-9fe2-7df51276ef1b
(1 row)

performance tuning

make use of indexes to make selects faster
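
a sketch of that, assuming a thing table with an id column you filter on (the table and column names are just placeholders):

```sql
create index thing_id_idx on thing ( id );
explain select * from thing where id = 42;
```

explain should then show an index scan on thing_id_idx instead of a sequential scan, provided the planner statistics are current.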

if you are doing lots of inserts and they are slow, consider recoding so all inserts can be part of one transaction if possible. i limit inserts to no more than 1,000 per transaction so as not to blow out the transaction log.

using JDBC, you might turn off autoCommit, run your inserts through a PreparedStatement and do one big commit at the end:
conn.setAutoCommit( false );
PreparedStatement ps = conn.prepareStatement( "insert into thing ( name ) values ( ? )" );
for ( String name : names ) {
    ps.setString( 1, name );
    ps.executeUpdate( );
}
ps.close( );
conn.commit( );

if multiple inserts kill performance in your app, and doing everything in one transaction is not feasible, try turning off fsync( ) calls with the -F option to postmaster. at the end of each transaction, an fsync( ) is normally called to flush changes down to disk, which can greatly impede performance if lots of transactions need to happen in a short period of time. you shouldn't leave fsyncs off in production: in a crash, the data on the filesystem could be left in a dirty state regardless of raid or journaling filesystem setups. (this was an issue for me when populating millions of rows in a timely manner. for general production usage, you shouldn't fool with fsyncs, so forget i suggested this...)
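
a sketch of starting the server that way, using the same data directory as the install above (again, not for production):

```shell
/usr/local/pgsql/bin/postmaster -F -D /usr/local/pgsql/data >logfile 2>&1 &
```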

if you do lots of updates to a table, selects will begin to slow down unless you vacuum (and index well, but i've covered that). they can slow for two primary reasons: deleted rows are only marked for deletion, so there is more data to sift through, and the query planner lacks current statistics about the existing data, so it can't make efficient decisions about how to execute a select. a vacuum full blows away deleted tuples, helps make selects quick again, and should recover the drive space. adding analyze to that statement updates the planner statistics as well. example: vacuum full analyze

in a heavily insert / delete environment, drive usage may seem to grow without signs of slowing even though you vacuum full, because vacuum full recovers space from deleted rows, but not unused index space.

if you are, for example, deleting all records older than X that have an indexed id or date column, filespace may continue to be endlessly eaten. postgres will not recover the "deleted" index space because a b-tree index won't reuse emptied pages when the new data isn't within that range (which it won't be if the data is sequential in nature, like an id or "date created" column). to fix this, you must reindex with: reindex index myindex, where myindex is the name of your index. by far the most common index type is the b-tree, which doesn't know how to rebalance and drop unused keys on its own, so periodic reindexing is necessary if your dataset, for example, moves with time.

after a big dump / reload of a large table with an index, you have to run a vacuum analyze to update planner statistics, or the optimizer will probably pick a sequential scan over an index scan. explain your select to see which type of scan the optimizer picks. if it's picking a sequential scan, you can test the performance of an index scan by disabling sequential scans for your current session with set enable_seqscan to off. if results are better, try a vacuum analyze and then explain the select again to see if it picks an index scan. you should always analyze before running an explain.
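
a sketch of that test in a psql session, assuming the thing table from above with an index on id:

```sql
explain select * from thing where id = 42;
set enable_seqscan to off;
explain select * from thing where id = 42;
set enable_seqscan to on;
```

if the second explain is cheaper, run vacuum analyze and re-check the first plan.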


