I learned an important tip that I thought I should share with others who are considering Octopress with publishing to S3/CloudFront. If the S3 deployment line in your Rakefile includes the s3cmd option "--cf-invalidate", you should be aware that it will drive up your costs quite a bit. (Three guesses how I know this.) I ran up about a year's worth of expected CloudFront costs in my first month because I didn't understand what that option was doing.
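For reference, the deploy line in question looks roughly like this (bucket name and paths are placeholders, not my actual config):

```shell
# Typical Octopress-to-S3 deploy command in a Rakefile. With
# --cf-invalidate, s3cmd issues a CloudFront invalidation for every
# changed file on every deploy -- invalidation requests are billed
# per path, which is where the surprise costs come from.
s3cmd sync --acl-public --delete-removed --cf-invalidate public/ s3://example-blog-bucket/

# Cheaper alternative: drop the flag and let cached copies expire on
# their own via a short Cache-Control header.
s3cmd sync --acl-public --delete-removed \
  --add-header="Cache-Control: max-age=3600" \
  public/ s3://example-blog-bucket/
```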
I’ve run my own hosted server for many years now. Originally it was just to have my own mail server, but later I decided to start this blog using WordPress. It’s great to have the control and flexibility to do whatever you want with a server.. but there are definitely some downsides as well. WordPress is great.. but there are constant security issues to worry about, and people attacking your server trying to hack it with automated tools. Customizing it has never exactly been easy either.. and it’s a huge code base that I don’t really understand.
So this new version of the blog is far enough along that it was time to turn it loose.
Progress has slowed a bit on the fraud management system… other priorities have come up over the last month or so, but here’s a new walkthrough. The backend hasn’t changed much. What has changed is the hardware requirements. When I started this project I had no idea how much processing power, disk space, or I/O would be required, so I built the system in such a way that it could be easily scaled at several points. As it turns out.. for a network of this size (about 250k customers) that was unnecessary.
It’s been about a month since I posted the initial overview, and I thought it would be good to post an update on the progress. While I haven’t been able to devote 100% of my time to this project over the last month, there have been some significant improvements. The most visible ones are to the web interface. I’ve added a very flexible application-level authorization system, plus an interface for managing one of the key inputs to the system, protected by a two-level approval process. I’ve also been tweaking the scoring system to better handle corner cases as I’ve seen them. There’s still plenty of work to be done, but it’s starting to take shape. For the overview of this system check out my first post about it. Screen shots after the break.
I’ve been working on a big new project since just before the new year, and it’s starting to take shape and generate useful results. I can’t give away too many details on how exactly it works, but I wanted to share this with some of you who are also working in telecom. I was asked to develop a real-time system to identify toll fraud across our entire VoIP carrier network, which currently originates calls from 19 different countries for residential, SMB, and wireless customers. For those who don’t know.. I spent a year working for another telecom software company helping to run and debug a call mediation and rating platform for a tier-2 carrier. That experience was useful in that I was able to quickly develop a scalable, distributed processing framework while avoiding the cumbersome overhead I’ve observed in other systems. Continue after the jump for more details…
As part of a new and fairly large project I need to partition a few Postgres tables with a rolling daily window. That is.. I want to organize the data by a timestamp, storing each day in its own partition, and maintain 90 days of historical data. This is possible in PostgreSQL, but setting it up isn’t pretty or very clean. To simplify the process I wrote a Perl script that (when run daily) will pre-create a certain number of empty partitions into the future and remove the oldest partitions from the window.
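The Perl script itself isn't reproduced here, but the rolling-window idea can be sketched in a few lines of shell. This sketch uses the declarative partitioning syntax of modern PostgreSQL (10+); older releases required inheritance plus CHECK constraints instead. The table name (cdr) and window sizes are made-up examples — pipe the output to psql to actually apply it:

```shell
#!/bin/sh
# Run daily: emit DDL to pre-create empty future partitions and drop
# the one that just aged out of the retention window.

DAYS_AHEAD=7   # empty partitions to keep pre-created
DAYS_KEEP=90   # historical window

make_ddl() {
  # One partition per day, bounded [start, end)
  for i in $(seq 0 "$DAYS_AHEAD"); do
    suffix=$(date -d "+$i days" +%Y%m%d)
    start=$(date -d "+$i days" +%Y-%m-%d)
    end=$(date -d "+$((i + 1)) days" +%Y-%m-%d)
    echo "CREATE TABLE IF NOT EXISTS cdr_$suffix PARTITION OF cdr FOR VALUES FROM ('$start') TO ('$end');"
  done
  # Drop the partition that fell off the back of the window; running
  # this every day keeps exactly DAYS_KEEP days of history.
  old=$(date -d "-$DAYS_KEEP days" +%Y%m%d)
  echo "DROP TABLE IF EXISTS cdr_$old;"
}

make_ddl
```

Generating plain DDL (rather than connecting directly) keeps the script easy to dry-run and audit before it touches the database.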
I was called on to provide a method of alerting from within Nagios that was more active and direct than the usual email or SMS messages. So I came up with a simple way to have a Nagios notification place a phone call to our off-hours tier-3 support line to report certain very rare but serious problems.
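The post doesn't show the mechanism, but one common way to let a script place a phone call is an Asterisk "call file": drop a specially formatted file into Asterisk's outgoing spool and it dials the channel, then runs the given dialplan context. A minimal sketch along those lines — the trunk name, phone number, and context below are all placeholders, and this assumes an Asterisk box is available:

```shell
#!/bin/sh
# Nagios notification command sketch: args are host, service, state
# (wired up via Nagios macros like $HOSTNAME$ $SERVICEDESC$ $SERVICESTATE$).
place_call() {
  spool="${ASTERISK_SPOOL:-/var/spool/asterisk/outgoing}"
  tmp=$(mktemp)
  cat > "$tmp" <<EOF
Channel: SIP/ourtrunk/15555550100
MaxRetries: 2
RetryTime: 60
WaitTime: 30
Context: nagios-alerts
Extension: s
Priority: 1
Set: ALERTMSG=$1 $2 is $3
EOF
  # mv (not cp) into the spool so Asterisk never reads a half-written file
  mv "$tmp" "$spool/alert-$$.call"
}
```

The nagios-alerts dialplan context would then read ALERTMSG and speak or play the alert once the call is answered.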
Anyone who has to manage servers and other equipment in remote datacenters can appreciate the need for good documentation. One reason you appreciate good documentation so much is that it’s so rare. People are lazy and forgetful, and when changes are made by lots of different people in lots of different locations, it’s easy for reference docs to get out of date and unreliable. Out of those two observations was born a need to create dynamic rack face diagrams and end the dependence on manually updating and distributing static Visio diagrams. Attached are templates for the most common rack sizes and example PHP code showing how to make it work.
I’m often asked what it is I do for a living… and being lazy I usually just say ‘computer stuff’. In an effort to provide a little more context to anyone who may be interested, this is one in a series of postings where I’ll cover some aspect of what it is I do.
In my current role I spend part of my time on development projects. (aka programming) I’m not a hard-core developer though.. it’s not my full-time occupation, nor do I want it to be. I work mostly with Perl, and PHP when necessary, against MySQL and occasionally PostgreSQL or Oracle, all under various flavors of Linux (Debian is my favorite). Usually these development tasks are related to some sort of management automation for a global VoIP network, but sometimes they involve making complex things easier to understand. Part of that involves automating the collection of large amounts of data and then presenting it in a meaningful way so that problems and long-term trends can be identified. What follows are some examples of the sorts of things I mean.