Having a Lyris E-mail List Server can be a blessing or a nightmare depending on how well you care for it.  We inherited a Lyris machine through an acquisition.  Here are some tips based on what we learned.

WARNING:  THIS INFORMATION IS PROVIDED “AS IS” WITH NO WARRANTY EXPRESS OR IMPLIED.  THIS INFORMATION IS NOT PRESUMED TO BE COMPLETE OR INFALLIBLE.  USE IT AT YOUR OWN RISK.  IF YOU BREAK YOUR COMPUTER OR HOSE YOUR SERVER USING ANY OF THIS INFORMATION, THAT IS YOUR RESPONSIBILITY, NOT OURS.

Administration Tips for Lyris 7.0 and POSTGRES Database

This is a Lyris 7.0 server running on Linux with a postgres database.  When we acquired this server, we had basically no experience with postgres, and we have picked up a little here and there as time has passed.

When we first began overseeing the operation of this machine, it was very slow.  For some users, the web interface ranged from barely usable to non-existent.  We were told to run a vacuum on the database, which we did often during those first few weeks.  That helped a little.
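
For anyone else new to postgres, a vacuum can be run from the shell with the vacuumdb tool or from the psql monitor; a minimal sketch, assuming the lm70 database name used later in this article:

vacuumdb -d lm70 -z -v     # vacuum and analyze, with verbose output

or, from within psql:

VACUUM VERBOSE ANALYZE;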

We found that most of the time we could not delete old lists from the server.  Command-line support for those kinds of tasks was pretty much non-existent.  Lyris has an incredible dearth of good resources for system administrators of its software.

A programmer on staff was able to integrate the statistics on mailings with Billmax, our ISP billing software, which at least gave us a good reference point for customer activity.

However, the database itself still seemed sluggish.  It was taking up around 14-16 GB of space on disk (/var/lib/pgsql).

I read up on the possibility of doing an export of the database and then a reimport.  We set a date for this, announced it to the customers, and then began the work.

Rather than simply do it and toss caution to the wind, I shut down Lyris and postgres and made a copy of the /var/lib/pgsql directory structure.  After doing that, I ran the pg_dumpall command and waited a while (!) for the dump to complete.  The command I used looked like this:

nohup /usr/bin/pg_dumpall -C -v | split -b 1900m - /path/to/file/dumpdb &

After a couple of hours or more, the dump was complete.  I then dropped the database and began an import.  It failed immediately.  I was instantly thankful that I had made a copy of the directory.  It seems that for some databases, postgres needs a more thorough dump routine that uses actual INSERT statements (the -d flag) rather than just copying the data.

So… after copying the directory back to its original location, I began the more thorough dump routine.

nohup /usr/bin/pg_dumpall -C -d | split -b 1900m - /path/to/file/dumpdb &

This took several more hours.  (This is not for the faint of heart; you need a good window of time planned and a supportive customer base.  We chose the day before Thanksgiving, figuring that most people would not be doing a lot of mailing list activity then.  That worked out pretty well, considering the whole task took over 24 hours to accomplish!)

After the export was complete, I dropped the database again and began the import.  This time it began without a hitch.  But the insert method is admittedly much slower.  I kept an eye on the process off and on throughout Thanksgiving Day as time permitted.  The import took about 18-20 hours to complete.

When it was finished, I restarted the Lyris service and began handling customer activity again.  (The messages had been queued on a secondary MX server and were available for sending almost immediately.)

Total space used?  The /var/lib/pgsql directory was now at 6.2 GB.  We were elated.

Our journey with Lyris and postgres didn’t end there, however.  As I began to spend more time learning about some of the basic postgres monitor commands (psql), I began to look at the Lyris tables in more detail.

What I found was that many of the lists we had inherited had been set up to archive messages forever.  This included lists which were no longer active.  As I began to query the messages_ table in Lyris and sort the output by list, I found that some lists had as many as 27,000 archived messages.  And that was for an inactive list!  When I issued a handful of delete queries, I found that (for whatever reason) postgres is pretty slow about deleting messages in Lyris.  At the outset, it was deleting about four messages per minute, roughly 15 seconds per delete query.  Even at that rate, it would have taken about five full days just to delete the messages on that one list.  Since we had numerous lists like this, I knew another solution was in order.
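
For reference, the deletes I was timing were simple one-message queries of this form (the messageid_ value here is just an example):

delete from messages_ where messageid_='65345';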

After thinking through the situation, it seemed that dumping the messages_ table by itself and then reimporting only a portion of it would be the best solution.  With our list service, we have offered up to a full year of archives for lists.  The messages_ table we were dealing with contained an additional six years' worth of archived messages.  I dumped the table a few times in order to test the best way to edit out the older messages.

I decided on using sed, the Unix stream editor, for the task.  But first I had to determine exactly what I was going to edit and how.  I used this query:

select messageid_,list_,creatstamp_ from messages_;

to select the information I needed and redirected the output to a file called all-mids-datesort.txt.  This gave me each messageid_ along with the list name and the date and time the message was created.  The output looked like this:

 messageid_ |             list_              |      creatstamp_
------------+--------------------------------+------------------------
      65345 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65346 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65347 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65348 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65349 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65350 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65351 | somelist-admin-announce        | 2000-03-21 15:27:00-05
      65352 | somelist-admin-announce        | 2000-03-21 15:27:00-05

etc.

(393277 rows)
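
A minimal sketch of how that output can be captured to the file from the shell, assuming the lm70 database name used later in this article:

psql lm70 -c 'select messageid_,list_,creatstamp_ from messages_;' > all-mids-datesort.txt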

Once I had this information, I ran a script to count, for each year/month segment, the matching rows in the file.  The script looked like this:

#!/bin/bash

grep '2000-03' all-mids-datesort.txt | wc -l  > wc-lyris-totals.txt
grep '2000-04' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-05' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-06' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-07' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-08' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-09' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-10' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt
grep '2000-11' all-mids-datesort.txt | wc -l >> wc-lyris-totals.txt

etc.
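
The same counts could be generated with a loop instead of one grep per month; a brief sketch covering the full 2000-2007 range dealt with in this article:

#!/bin/bash
# Count the messages created in each year/month of the range.
> wc-lyris-totals.txt    # start with an empty totals file
for year in 2000 2001 2002 2003 2004 2005 2006 2007; do
  for month in 01 02 03 04 05 06 07 08 09 10 11 12; do
    grep -c "$year-$month" all-mids-datesort.txt >> wc-lyris-totals.txt
  done
done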

The output in the wc-lyris-totals.txt file looked like this:

196
1
6
2
2
1
3
6592
5464
4162
6010
5514
5107
5472
6287

Basically, I was getting the number of messages that had been created for each month.  After getting this information, I pulled the totals into an Excel spreadsheet.  I then pulled in the all-mids-datesort.txt file and took just the column with the time stamp.  After splitting that column so that I only had the year and month (e.g. 2000-03), I copied that column right next to the column with the total number of messages for the month.  The output looked like this:

196 2000-03
1 2000-04
6 2000-05
2 2000-06
2 2000-07

etc.
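
As an aside, the same month/count pairs, already sorted from most messages to least, could be produced straight from the shell rather than a spreadsheet.  A hedged sketch, assuming creatstamp_ is the third pipe-delimited field, the year-month sits in characters 2-8 of that field, and psql's header and footer lines have been stripped from the file:

cut -d'|' -f3 all-mids-datesort.txt | cut -c2-8 | sort | uniq -c | sort -rn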

I took out the months from 2006-03 forward in order to preserve archives for the currently active lists.  Then it was a simple task to sort the dates by the number of messages.  Once I had the months ordered from most messages to least, I deleted the column with the number of messages (leaving ONLY the column of date fields, sorted to my satisfaction), saved the file as text, and used editing tools to turn the data into a bash script that looked like this:

#!/bin/bash

/bin/sed -e '/2003-06/d' -e '/2003-07/d' -e '/2003-08/d' -e '/2004-05/d' -e '/2003-10/d' -e '/2005-01/d' -e '/2003-09/d' -e '/2004-03/d' -e '/2004-04/d' -e '/2004-11/d' -e '/2005-03/d' -e '/2004-07/d' -e '/2004-02/d' -e '/2004-12/d' -e '/2004-06/d' -e '/2004-10/d' -e '/2000-10/d' -e '/2003-05/d' -e '/2004-09/d' -e '/2001-07/d' -e '/2001-05/d' -e '/2001-09/d' -e '/2002-03/d' -e '/2002-01/d' -e '/2001-08/d' -e '/2001-01/d' -e '/2002-09/d' -e '/2003-04/d' -e '/2004-08/d' -e '/2004-01/d' -e '/2003-11/d' -e '/2005-02/d' -e '/2001-11/d' -e '/2003-03/d' -e '/2002-04/d' -e '/2003-01/d' -e '/2001-02/d' -e '/2001-04/d' -e '/2000-11/d' -e '/2002-02/d' -e '/2001-10/d' -e '/2003-12/d' -e '/2002-10/d' -e '/2001-03/d' -e '/2003-02/d' -e '/2002-08/d' -e '/2001-06/d' -e '/2002-05/d' -e '/2001-12/d' -e '/2002-06/d' -e '/2000-12/d' -e '/2002-07/d' -e '/2005-09/d' -e '/2005-05/d' -e '/2002-11/d' -e '/2005-04/d' -e '/2006-01/d' -e '/2005-11/d' -e '/2005-08/d' -e '/2005-10/d' -e '/2005-06/d' -e '/2002-12/d' -e '/2005-07/d' -e '/2005-12/d' -e '/2000-03/d' -e '/2000-05/d' -e '/2000-09/d' -e '/2000-06/d' -e '/2000-07/d' -e '/2000-04/d' -e '/2000-08/d' dumpfile > dumpfile.pruned

The sed command set is all on one long line.  What is happening is that we are taking an input file and deleting every line that contains the string 2003-06, then 2003-07, then 2003-08, then 2004-05 and so on.  With that script in hand, we were ready to act.

The next step was to shut down Lyris and do a postgres dump of just the messages_ table.  The reason for using split is the same as above:  older Linux systems had a 2 GB file size limit that a large postgres dump could exceed.

nohup /usr/bin/pg_dump -C -d -t messages_ lm70 | split -b 500m - /path/to/file/dumpfile &

cat dumpfilea* > dumpfile

Once I had the split files, I used 'cat' to put them all back into one file (cat dumpfileaa dumpfileab dumpfileac > dumpfile), which was 2.4 GB in size.  I then ran the bash script, which began editing the file.  THIS is why I took the extra-long steps of getting my date fields sorted by number of messages per month from most to least.  I figured that if sed had to look through the file, it would be best to have it pick out the most lines first, the next most and so on, until there was little data left to sort through.  The run took about twenty minutes, and the file size was down to a reasonable 267 MB.  Quite a savings.

The next step was to import the messages_ table back into postgres.  Since the sed script only edited out INSERT statements, the table creation commands were still in the dumpfile.pruned file.
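
The import itself can be as simple as feeding the pruned dump back through psql; a minimal sketch, assuming the old messages_ table is dropped first so the CREATE TABLE statement in the dump can run (the Create Database statement a -C dump includes will fail against the existing lm70 database, but psql continues past it):

psql lm70 -c 'drop table messages_;'
cat dumpfile.pruned | psql lm70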

This brought our database size down more, but there was still much to accomplish.  Lyris takes each message that comes in and analyzes it for unique words.  Once it completes that task, it writes an entry to the uniquewords_ table for any new word that has not yet been recorded.  Then it writes entries to the wordmessage_ table that define which unique words are contained in that particular message, keyed on the messageid_ of the message.  In our case, the wordmessage_ table contained a lot of entries for the messages that had been archived from 2000-2005.  Although we ran a series of scripts to delete these based on the messageid_ fields from the data we had kept about the deleted messages, I won't go into that here simply because, in the long run, we ended up truncating that table along with the uniquewords_ table.  The reason we chose to do so was that it would have been too labor-intensive to figure out how many of the 820,000+ unique words were really contained in the messages that we had imported back into the messages_ table.

So, I stopped Lyris and did a dump of the wordmessage_, uniquewords_ and uniquewords_seq tables.  I then truncated the tables and dropped the sequence in the psql monitor ('truncate table uniquewords_', followed by 'truncate table wordmessage_', followed by 'drop sequence uniquewords_seq').  Then I used the head and tail commands to pull out the sections of the dump files that recreate the tables, saved the data to files named uniquewords_table, uniquewords_seq_table and wordmessage_table, and ran the following from the command prompt:

cat uniquewords_table | psql lm70

cat uniquewords_seq_table | psql lm70

cat wordmessage_table | psql lm70
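
For the head/tail step above, something along these lines works; the dump file name and line counts here are hypothetical, so find the real boundaries in your own dump first:

grep -n 'INSERT INTO' uniquewords_dump | head -1    # where the data section begins
head -n 41 uniquewords_dump > uniquewords_table     # schema before the data (hypothetical count)
tail -n 12 uniquewords_dump >> uniquewords_table    # index/trigger statements after the data (hypothetical count)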

Between the data and the indexes, this saved over 2 GB in the database.

Other tables that I worked on include the following:

The metricevents_ table is used to store performance metrics data about each list.  Since it does this EVERY MINUTE, you can imagine how large this table can get and how quickly it can do so.  In our case, we truncated this table and recreated it after dumping it to a file.
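
A hedged sketch of that dump-then-truncate step (output file name hypothetical; use split as elsewhere if the dump might exceed your file size limit):

/usr/bin/pg_dump -C -d -t metricevents_ lm70 > /path/to/file/metricevents_dump
psql lm70 -c 'truncate table metricevents_;'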

The log_ table is used to store data about each mailing Lyris attempts for each list.  This includes responses to subscribe commands and the like.  Since this is essentially the heart of billing data, we did not want to simply truncate this table.  Instead, we used a sed script similar to the one above to remove the data for each year going back to 2003 (the earliest year in this table).  Then we saved the data for 2006 and 2007 in a file.  After truncating this table, we imported it with the 2006 and 2007 data intact.
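
The year-based pass over the log_ dump might look like this (dump file names hypothetical), leaving the 2006 and 2007 rows intact:

/bin/sed -e '/2003-/d' -e '/2004-/d' -e '/2005-/d' logdump > logdump.pruned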

The docs_ and docparts_ tables are used for the different autoresponders that Lyris makes available to list customers.  For whatever reason, our tables contained docs from the late 1990s.

First, I had to find the docid_ for each document and/or list that we did not want to keep.  We ran a query to get all of the list names referenced in the docs_ table, then edited that by hand to remove all the lists and/or sites which were still active.  Then I executed queries to gather the docid_ values for the lists that we did want to clean up.  The queries looked like this:

select docid_ from docs_ where site_='list: some-list';

select docid_ from docs_ where site_='list: some-other-list';

select docid_ from docs_ where site_='list: yet-another-list';

etc.

That returned a list of numbers.  I edited that file to insert a delete query for each number.  The result looked like this:

delete from docparts_ where docid_='1000';

delete from docparts_ where docid_='1005';

delete from docparts_ where docid_='1007';

delete from docparts_ where docid_='1008';

There were about 1300 of these old docparts_ entries that needed to be removed.
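
Editing 1300 of those by hand is no fun; one hedged way to generate the delete statements mechanically from the file of numbers (file names hypothetical):

sed "s/^/delete from docparts_ where docid_='/;s/\$/';/" docids.txt > delete-docparts.sql
cat delete-docparts.sql | psql lm70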

Then I ran a set of queries to delete the main records in the docs_ table, using queries like these:

delete from docs_ where site_='list: some-list';

delete from docs_ where site_='list: some-other-list';

delete from docs_ where site_='list: yet-another-list';

etc.

There were 978 of these on our server.  Not a lot of wasted disk space, but certainly unnecessary.

After that step, Lyris runs another query on docparts_.  I copied the previous docparts_ query file and edited it to look like this:

select oid from only docparts_ where docid_='1000' for update of docparts_;

select oid from only docparts_ where docid_='1005' for update of docparts_;

select oid from only docparts_ where docid_='1007' for update of docparts_;

select oid from only docparts_ where docid_='1008' for update of docparts_;

Next, we updated the phrases_ table by editing the docs_ query above to look like the queries below.  Phrases are used for automatic handling of e-mail based on some word or phrase contained in the message.  It’s quite powerful and can be used for anything from simple things, like editing our “Get your own free e-mail at xyzprovider.com” statements, to generating notifications to an e-mail address based on content.

delete from phrases_ where site_='list: some-list';

delete from phrases_ where site_='list: some-other-list';

delete from phrases_ where site_='list: yet-another-list';

etc.

Then we updated the respond_ table with a similar query.

delete from respond_ where site_='list: some-list';

delete from respond_ where site_='list: some-other-list';

delete from respond_ where site_='list: yet-another-list';

etc.

Not all list administrators actively pay attention to the moderation requests they receive for their lists.  Some of our lists had messages needing approval that spanned several months and, in some cases, years.  We queried the moderate_ table to get a listing of lists, then sorted and edited that to get the lists that were obviously closed or no longer existed on the server.  The final queries we sent to postgres looked like this:

delete from moderate_ where list_='some-list';

delete from moderate_ where list_='some-other-list';

delete from moderate_ where list_='yet-another-list';

I have not found a setting in this version of Lyris that allows an administrator or list owner to set a limit on how long messages are kept in “pending moderation” status before deleting them.  This is, of course, a big annoyance.

Another good query to execute to find out your lists’ archive settings is this:

select archivdays_,archivkeep_,archivnum_,archivsize_,name_ from lists_ where archivdays_ <> 0 order by name_;

You can also use =0 to get those lists that are set to archive forever:

select archivdays_,archivkeep_,archivnum_,archivsize_,name_ from lists_ where archivdays_='0' order by name_;

THE END RESULT

After doing this work on our Lyris server and postgres database, we went from an original 14-16 GB of database size down to 1.2 GB.  That has made a tremendous impact on server performance, web accessibility for list administrators and end users, and the ability of Lyris to perform its own routine maintenance.  Whereas before the server would be bogged down with DELETE processes trying to clear out old messages, it can now do that pretty quickly.  Worth the effort?  Definitely.

****************************************************************************************

Other notes on the sed command:

I found a few useful functions that helped make this task easier.  When I would generate output from a query and get a long list of numbers, one per row, the next task was to build a query (delete, insert or whatever) around each number.  I used this command:

sed 's/$/#;/' file1.txt > file2.txt

quite often.  It takes the input file (in my case, a long list of numbers) and puts a "#;" at the end of each line.  Once you have that, you can use search-and-replace on the "#" marker in your editor to wrap each number in whatever query text you need.

If you have a blank space at the beginning of the line and want to edit that out, you can use:

sed -e 's/^ //' file1.txt > file2.txt

If you want to replace that blank space with a “#” symbol for later use with search/replace functions in your editor, you can use:

sed -e 's/^ /# /' file1.txt > file2.txt

****************************************************************************************

ROUTINE QUERIES THAT LYRIS RUNS

delete from inmail_ where Status_ = 'done' and Created_ < '2007-01-23'

delete from outmail_ where Status_ = 'done' and Created_ < '2007-01-23' and Type_ <> 'list'

where the date string is something current based on server settings.

****************************************************************************************

Below are samples of some of the commands used with psql to create tables for Lyris.  These are not complete by any means.

CREATING THE METRICEVENTS_ TABLE

-- Selected TOC Entries:

\connect - postgres

-- TOC Entry ID 1 (OID 0)
-- Name: lm70 Type: DATABASE Owner: postgres

Create Database "lm70";

\connect lm70 postgres

\connect - lyris

-- TOC Entry ID 2 (OID 92176100)
-- Name: metricevents_ Type: TABLE Owner: lyris

CREATE TABLE "metricevents_" (
    "key_" integer NOT NULL,
    "subkey_" character varying(100),
    "value_" integer NOT NULL,
    "when_" timestamp with time zone NOT NULL
);

-- Data for TOC Entry ID 4 (OID 92176100)
-- Name: metricevents_ Type: TABLE DATA Owner: lyris

-- TOC Entry ID 3 (OID 92176100)
-- Name: "ix_metricevents" Type: INDEX Owner: lyris

CREATE INDEX "ix_metricevents" on "metricevents_" using btree ( "when_" "timestamp_ops", "key_" "int4_ops" );

-- TOC Entry ID 5 (OID 134010071)
-- Name: "RI_ConstraintTrigger_134010070" Type: TRIGGER Owner: lyris

CREATE CONSTRAINT TRIGGER "key_1717165245" AFTER INSERT OR UPDATE ON "metricevents_" FROM "performancemetrics_" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" ('key_1717165245', 'metricevents_', 'performancemetrics_', 'UNSPECIFIED', 'key_', 'key_');

****************************************************************************************

CREATING THE UNIQUEWORDS_ TABLE

-- Selected TOC Entries:

\connect - postgres

-- TOC Entry ID 1 (OID 0)
-- Name: lm70 Type: DATABASE Owner: postgres

Create Database "lm70";

\connect lm70 postgres

\connect - lyris

-- TOC Entry ID 2 (OID 92175571)
-- Name: uniquewords_ Type: TABLE Owner: lyris

CREATE TABLE "uniquewords_" (
    "word_" character varying(30) NOT NULL,
    "wordid_" integer DEFAULT nextval('uniquewords_seq'::text) NOT NULL,
    Constraint "pk_uniquewords" Primary Key ("wordid_")
);

-- Data for TOC Entry ID 4 (OID 92175571)
-- Name: uniquewords_ Type: TABLE DATA Owner: lyris

-- TOC Entry ID 3 (OID 92175571)
-- Name: "ix_uniquewords_word" Type: INDEX Owner: lyris

CREATE UNIQUE INDEX "ix_uniquewords_word" on "uniquewords_" using btree ( "word_" "varchar_ops" );

-- TOC Entry ID 5 (OID 134010049)
-- Name: "RI_ConstraintTrigger_134010048" Type: TRIGGER Owner: lyris

CREATE CONSTRAINT TRIGGER "wordid_1415270972" AFTER DELETE ON "uniquewords_" FROM "wordmessage_" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_del" ('wordid_1415270972', 'wordmessage_', 'uniquewords_', 'UNSPECIFIED', 'wordid_', 'wordid_');

-- TOC Entry ID 6 (OID 134010051)
-- Name: "RI_ConstraintTrigger_134010050" Type: TRIGGER Owner: lyris

CREATE CONSTRAINT TRIGGER "wordid_1415270972" AFTER UPDATE ON "uniquewords_" FROM "wordmessage_" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_upd" ('wordid_1415270972', 'wordmessage_', 'uniquewords_', 'UNSPECIFIED', 'wordid_', 'wordid_');

****************************************************************************************

CREATING THE WORDMESSAGE_ TABLE

-- Selected TOC Entries:

\connect - postgres

-- TOC Entry ID 1 (OID 0)
-- Name: lm70 Type: DATABASE Owner: postgres

Create Database "lm70";

\connect lm70 postgres

\connect - lyris

-- TOC Entry ID 2 (OID 92175586)
-- Name: wordmessage_ Type: TABLE Owner: lyris

CREATE TABLE "wordmessage_" (
    "context_" character(1) NOT NULL,
    "listid_" integer NOT NULL,
    "messageid_" integer NOT NULL,
    "wordid_" integer NOT NULL,
    CONSTRAINT "wordmessage__context_" CHECK ((((context_ = 'H'::bpchar) OR (context_ = 'B'::bpchar)) OR (context_ = 'A'::bpchar)))
);

-- Data for TOC Entry ID 4 (OID 92175586)
-- Name: wordmessage_ Type: TABLE DATA Owner: lyris
-- THE INSERT STATEMENTS WOULD GO HERE IF YOU HAD INSERTS TO DO

-- TOC Entry ID 3 (OID 92175586)
-- Name: "ix_wordmessage_compound" Type: INDEX Owner: lyris

CREATE UNIQUE INDEX "ix_wordmessage_compound" on "wordmessage_" using btree ( "listid_" "int4_ops", "wordid_" "int4_ops", "messageid_" "int4_ops", "context_" "bpchar_ops" );

-- TOC Entry ID 5 (OID 134010035)
-- Name: "RI_ConstraintTrigger_134010034" Type: TRIGGER Owner: lyris

CREATE CONSTRAINT TRIGGER "listid_207359138" AFTER INSERT OR UPDATE ON "wordmessage_" FROM "lists_" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" ('listid_207359138', 'wordmessage_', 'lists_', 'UNSPECIFIED', 'listid_', 'listid_');

-- TOC Entry ID 6 (OID 134010047)
-- Name: "RI_ConstraintTrigger_134010046" Type: TRIGGER Owner: lyris

CREATE CONSTRAINT TRIGGER "wordid_1415270972" AFTER INSERT OR UPDATE ON "wordmessage_" FROM "uniquewords_" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_check_ins" ('wordid_1415270972', 'wordmessage_', 'uniquewords_', 'UNSPECIFIED', 'wordid_', 'wordid_');

****************************************************************************************

COMMAND SET TO DUMP ALL THE TABLES (uses split command to keep files below 500 MB for manageability)

/usr/bin/pg_dump -C -d -t axistypes_ lm70 | split -b 500m - /path/to/file/axistypes
/usr/bin/pg_dump -C -d -t chartdescription_ lm70 | split -b 500m - /path/to/file/chartdescription
/usr/bin/pg_dump -C -d -t chartdescription_seq lm70 | split -b 500m - /path/to/file/chartdescription_seq
/usr/bin/pg_dump -C -d -t clickgroups_ lm70 | split -b 500m - /path/to/file/clickgroups
/usr/bin/pg_dump -C -d -t clickgroups_seq lm70 | split -b 500m - /path/to/file/clickgroups_seq
/usr/bin/pg_dump -C -d -t clickstreamdata_ lm70 | split -b 500m - /path/to/file/clickstreamdata
/usr/bin/pg_dump -C -d -t clicktracking_ lm70 | split -b 500m - /path/to/file/clicktracking
/usr/bin/pg_dump -C -d -t clicktracking_seq lm70 | split -b 500m - /path/to/file/clicktracking_seq
/usr/bin/pg_dump -C -d -t config_ lm70 | split -b 500m - /path/to/file/config
/usr/bin/pg_dump -C -d -t country_ lm70 | split -b 500m - /path/to/file/country
/usr/bin/pg_dump -C -d -t ctrylang_ lm70 | split -b 500m - /path/to/file/ctrylang
/usr/bin/pg_dump -C -d -t ctrylang_seq lm70 | split -b 500m - /path/to/file/ctrylang_seq
/usr/bin/pg_dump -C -d -t dnsbypass_ lm70 | split -b 500m - /path/to/file/dnsbypass
/usr/bin/pg_dump -C -d -t dnsbypass_seq lm70 | split -b 500m - /path/to/file/dnsbypass_seq
/usr/bin/pg_dump -C -d -t docparts_ lm70 | split -b 500m - /path/to/file/docparts
/usr/bin/pg_dump -C -d -t docparts_seq lm70 | split -b 500m - /path/to/file/docparts_seq
/usr/bin/pg_dump -C -d -t docs_ lm70 | split -b 500m - /path/to/file/docs
/usr/bin/pg_dump -C -d -t docs_seq lm70 | split -b 500m - /path/to/file/docs_seq
/usr/bin/pg_dump -C -d -t doctypes_ lm70 | split -b 500m - /path/to/file/doctypes
/usr/bin/pg_dump -C -d -t inmail_ lm70 | split -b 500m - /path/to/file/inmail
/usr/bin/pg_dump -C -d -t inmail_seq lm70 | split -b 500m - /path/to/file/inmail_seq
/usr/bin/pg_dump -C -d -t language_ lm70 | split -b 500m - /path/to/file/language
/usr/bin/pg_dump -C -d -t lists_ lm70 | split -b 500m - /path/to/file/lists
/usr/bin/pg_dump -C -d -t lists_seq lm70 | split -b 500m - /path/to/file/lists_seq
/usr/bin/pg_dump -C -d -t log_ lm70 | split -b 500m - /path/to/file/log
/usr/bin/pg_dump -C -d -t log_seq lm70 | split -b 500m - /path/to/file/log_seq
/usr/bin/pg_dump -C -d -t members_ lm70 | split -b 500m - /path/to/file/members
/usr/bin/pg_dump -C -d -t members_seq lm70 | split -b 500m - /path/to/file/members_seq
/usr/bin/pg_dump -C -d -t messagerelationship_ lm70 | split -b 500m - /path/to/file/messagerelationship
/usr/bin/pg_dump -C -d -t messages_ lm70 | split -b 500m - /path/to/file/messages
/usr/bin/pg_dump -C -d -t messages_seq lm70 | split -b 500m - /path/to/file/messages_seq
/usr/bin/pg_dump -C -d -t metricevents_ lm70 | split -b 500m - /path/to/file/metricevents
/usr/bin/pg_dump -C -d -t mimeexts_ lm70 | split -b 500m - /path/to/file/mimeexts
/usr/bin/pg_dump -C -d -t mimeexts_seq lm70 | split -b 500m - /path/to/file/mimeexts_seq
/usr/bin/pg_dump -C -d -t mimetypes_ lm70 | split -b 500m - /path/to/file/mimetypes
/usr/bin/pg_dump -C -d -t mimetypes_seq lm70 | split -b 500m - /path/to/file/mimetypes_seq
/usr/bin/pg_dump -C -d -t mimetypetoext_ lm70 | split -b 500m - /path/to/file/mimetypetoext
/usr/bin/pg_dump -C -d -t moderate_ lm70 | split -b 500m - /path/to/file/moderate
/usr/bin/pg_dump -C -d -t moderate_seq lm70 | split -b 500m - /path/to/file/moderate_seq
/usr/bin/pg_dump -C -d -t outmail_ lm70 | split -b 500m - /path/to/file/outmail
/usr/bin/pg_dump -C -d -t outmail_seq lm70 | split -b 500m - /path/to/file/outmail_seq
/usr/bin/pg_dump -C -d -t people_ lm70 | split -b 500m - /path/to/file/people
/usr/bin/pg_dump -C -d -t performancemetrics_ lm70 | split -b 500m - /path/to/file/performancemetrics
/usr/bin/pg_dump -C -d -t performancemetrics_seq lm70 | split -b 500m - /path/to/file/performancemetrics_seq
/usr/bin/pg_dump -C -d -t phrases_ lm70 | split -b 500m - /path/to/file/phrases
/usr/bin/pg_dump -C -d -t phrases_seq lm70 | split -b 500m - /path/to/file/phrases_seq
/usr/bin/pg_dump -C -d -t referrals_ lm70 | split -b 500m - /path/to/file/referrals
/usr/bin/pg_dump -C -d -t respond_ lm70 | split -b 500m - /path/to/file/respond
/usr/bin/pg_dump -C -d -t respond_seq lm70 | split -b 500m - /path/to/file/respond_seq
/usr/bin/pg_dump -C -d -t sites_ lm70 | split -b 500m - /path/to/file/sites
/usr/bin/pg_dump -C -d -t subsets_ lm70 | split -b 500m - /path/to/file/subsets
/usr/bin/pg_dump -C -d -t subsets_seq lm70 | split -b 500m - /path/to/file/subsets_seq
/usr/bin/pg_dump -C -d -t topics_ lm70 | split -b 500m - /path/to/file/topics
/usr/bin/pg_dump -C -d -t uniquewords_ lm70 | split -b 500m - /path/to/file/uniquewords
/usr/bin/pg_dump -C -d -t uniquewords_seq lm70 | split -b 500m - /path/to/file/uniquewords_seq
/usr/bin/pg_dump -C -d -t urls_ lm70 | split -b 500m - /path/to/file/urls
/usr/bin/pg_dump -C -d -t urls_seq lm70 | split -b 500m - /path/to/file/urls_seq
/usr/bin/pg_dump -C -d -t usercolumninfo_ lm70 | split -b 500m - /path/to/file/usercolumninfo
/usr/bin/pg_dump -C -d -t usercolumninfo_seq lm70 | split -b 500m - /path/to/file/usercolumninfo_seq
/usr/bin/pg_dump -C -d -t wordmessage_ lm70 | split -b 500m - /path/to/file/wordmessage
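
The same command set can be driven by a loop; a hedged sketch (the table list here is abbreviated, so fill in the rest from the commands above):

#!/bin/bash
# Dump each Lyris table to its own set of split files (< 500 MB each).
# Extend the list with the remaining tables from the command set above.
for t in axistypes_ chartdescription_ chartdescription_seq clickgroups_ \
         uniquewords_ uniquewords_seq urls_ wordmessage_; do
  # ${t%_} strips a trailing underscore so the file names match those above
  /usr/bin/pg_dump -C -d -t "$t" lm70 | split -b 500m - "/path/to/file/${t%_}"
done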