The firm I support has a problem with email, like some people have a problem with alcohol or shoes. The users I support either think they need multiple copies of every email message related to every client, or they are required to receive and save copy after copy until their mail folders are overflowing with redundant messages. To add insult to injury, the process works like this: the legal assistants get copied on EVERY message that comes in to the attorneys they support. Long story short, this causes a problem for the mail server, which has trouble keeping up with indexing so many folders that are near or over 10,000 messages (items). And yes, I have argued with the shareholders and legal assistants that there are better ways to do this, but they just aren't ready to make any big changes.
Based on my experience with various servers, file systems, 32-bit vs. 64-bit, and available RAM, every server is going to have some upward limit on how many individual items it can reasonably index in a given amount of time. With numerous mail users' folders creeping up to 10,000 messages or beyond, the Kerio Connect system can fall behind simply because it hasn't finished indexing. If it can't finish indexing, it can't add new messages to that user's folder, and so on (somewhat similar to a cascade failure).
We are running Kerio Connect on a fast Intel Xserve with a RAID 5 of 7,200 RPM SATA drives. We could switch the drives to a RAID 0 to speed things up a bit, but that's not an option right now (actually, I am giving serious thought to putting in a flash-based boot drive for the OS and Kerio Connect and keeping the data on the RAID), or we could find some other way to speed up disk access, but these feel mostly like workarounds to me. The server just isn't that slow.
The one thing that must happen is that each user's mail folders must be kept under 10,000 messages, so I established a policy with a 5,000-message limit (this gives us some cushion). After the break are the details of how I am using a shell script, Lingon, and Splunk to help us efficiently keep track of each user's mail folder contents.
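For reference, the kind of launchd job Lingon writes to run a script on a weekly schedule looks roughly like the plist below. The label, script path, and Sunday-at-2-a.m. schedule are illustrative assumptions; the plist Lingon actually generates for you will differ.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical job label and script path -->
    <key>Label</key>
    <string>com.example.kerio-folder-count</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/kerio_folder_count.sh</string>
    </array>
    <!-- Run once a week: Sunday (0) at 02:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Weekday</key>
        <integer>0</integer>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```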
When I contacted Kerio about our issue they provided me with a find command that searches through the index file for each folder and outputs the totals to a text file. I modified this command so that it outputs the text file to someplace that Splunk can index. If you are not familiar with Splunk, it is a self-contained log file index and search tool. I have found it to be a very powerful and useful tool. It is also very expensive, probably worth it, but I have been unable to sell the firm's shareholders on the idea of paying for it. What I do is run the free version on each individual server, instead of centrally on a single server, and this gets me what I need. The downside is that instead of managing one Splunk installation I am managing ten.
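This is not Kerio's exact command, but a sketch of the general idea: walk the mail store, count the message files in each folder, and append one timestamped "folder count" line per folder to a log file that Splunk watches. The store path, the "#msgs" directory name, and the key=value output format are my assumptions about the on-disk layout, not something Kerio has published here.

```shell
#!/bin/sh
# Hedged sketch: count messages per mail folder and append the totals
# to a log file for Splunk to index. Directory layout is an assumption.

count_folders() {
    store="$1"   # root of the mail store, e.g. the Kerio store directory
    out="$2"     # log file that Splunk is configured to watch

    # Each mail folder is assumed to keep its messages in a "#msgs"
    # subdirectory; emit one 'date folder="..." count=N' line per folder.
    find "$store" -type d -name '#msgs' | while IFS= read -r dir; do
        count=$(find "$dir" -type f | wc -l | tr -d ' ')
        printf '%s folder="%s" count=%s\n' \
            "$(date '+%Y-%m-%d %H:%M:%S')" "$dir" "$count"
    done >> "$out"
}

# Illustrative usage (both paths are hypothetical):
# count_folders /usr/local/kerio/mailserver/store/mail /var/log/kerio_folder_counts.log
```

Writing the output as key=value pairs is a deliberate choice: Splunk extracts fields like `count` from that format automatically, which makes the later searches simpler.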
Once a week my script runs and outputs the data to the text file, and then I search through the data to find who needs help with their folders. To get this to work I first had to figure out how to get Splunk to identify which folders in the text file are over 5,000 messages.
Because the file with the data is just a text file and not actual log data, Splunk doesn't know what to do other than treat each line as a separate log entry. It turns out it isn't that difficult to get Splunk to recognize and understand the data. Splunk has a built-in regex tool to make this step easy. Below is an example of the basic search to identify all folders that have more than 5,000 items in them.
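I can't reproduce the screenshot here, but assuming the text file holds one line per folder with key=value pairs such as `folder="..."` and `count=N` (which Splunk extracts as fields automatically), the basic search would be along these lines. The source path and field names are my assumptions:

```
source="/var/log/kerio_folder_counts.log" count>5000
| table _time, folder, count
| sort - count
```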
The way Splunk works, you can simply click on a user's name in the search results and it will automatically refine the search to include just that user. Below is the final search string for the user "donk" (that's me), and below that is the result of the refined search.
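Again standing in for the screenshot: the refined search Splunk builds when you click a user simply adds that user as an extra term. Under the same assumed field names and log path as before, it would look something like:

```
source="/var/log/kerio_folder_counts.log" count>5000 folder="*donk*"
```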