16.4.11

You have to set up and tear down

Recently I witnessed the execution of performance test runs that did not include the elements necessary for successful and reliable result generation.

The test run should have included the following activities:

  1. Test setup
    1. Seed the database with fresh data
    2. Rebuild indexes
    3. Recreate statistics
  2. Test execution
  3. Test tear down
    1. Delete data generated during the test
    2. Clear caches
    3. Recycle application pools
    4. Remove all database deadlocks
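The cycle above can be sketched as a small harness. The phase hooks below are placeholders (the names are mine; real implementations would call out to the database and application servers), but the try/finally shape is the point: tear down runs after every execution, so each run starts from the same baseline.

```python
# Hypothetical hooks: each one records that its phase ran.
phases = []

def seed_database():         phases.append("seed")
def rebuild_indexes():       phases.append("rebuild-indexes")
def recreate_statistics():   phases.append("recreate-stats")
def run_tests():             phases.append("execute")
def delete_generated_data(): phases.append("delete-data")
def clear_caches():          phases.append("clear-caches")
def recycle_app_pools():     phases.append("recycle-pools")

def performance_run():
    # 1. Test setup -- every run starts from the same baseline
    seed_database()
    rebuild_indexes()
    recreate_statistics()
    try:
        # 2. Test execution
        run_tests()
    finally:
        # 3. Test tear down -- runs even if execution fails, so the
        # next run measures the same SUT rather than a degraded one
        delete_generated_data()
        clear_caches()
        recycle_app_pools()

for _ in range(2):  # repeat the full cycle for every run, not just execution
    performance_run()
```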

The initial performance run included steps 1-2; subsequent runs only included step 2. When the results from all runs were compared, it was observed that the performance of the system under test (SUT) gradually deteriorated.

The main point is that without proper tear down and setup between performance runs, the tests were measuring a different SUT each time.

For performance comparisons to be meaningful, you must compare similar SUTs.

If you can’t perform steps 1-3 because of complexities in your environment, then suspend your performance tests until these prerequisites are resolved.

5.4.11

Google Chart API

We are building a lightweight reporting module. Our charts need to scale, and given that we could be hosting the reporting module on a *nix platform, the .NET charting APIs were not an option.

After a bit of digging we settled on the Google Chart API, which is driven by simple HTTP requests. All we needed to do to integrate was supply the data series, labels, chart type, chart dimensions and so on.

https://chart.googleapis.com/chart?cht=p3&chs=300x100&chl=Agribusiness|Haulage|Jobs|Other|Property&chd=t:6,3,2,2,1 
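The URL above can be assembled programmatically. A minimal sketch, using only the query parameters shown in the example (cht = chart type, chs = size, chl = labels, chd = data in text encoding); the pie_chart_url helper name is my own, and urlencode will percent-encode the | and , separators, which standard server-side URL decoding handles.

```python
from urllib.parse import urlencode

def pie_chart_url(labels, values, size="300x100", chart_type="p3"):
    # Hypothetical helper: builds a Google Chart API request for a
    # 3D pie chart from parallel lists of labels and values.
    params = {
        "cht": chart_type,                              # chart type (p3 = 3D pie)
        "chs": size,                                    # chart dimensions WxH
        "chl": "|".join(labels),                        # slice labels
        "chd": "t:" + ",".join(str(v) for v in values), # data, text encoding
    }
    return "https://chart.googleapis.com/chart?" + urlencode(params)

url = pie_chart_url(
    ["Agribusiness", "Haulage", "Jobs", "Other", "Property"],
    [6, 3, 2, 2, 1],
)
```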

The rendered chart is displayed below.

This is a great service from Google and demonstrates the power of software as a service (SaaS). Software development should be about assembling and orchestrating various services to build a cohesive feature set.

3.4.11

Zam-Track on track

Based on current data volumes and interest expressed in the Z-Track platform, we expect to process about 1 million data points this year. Each data item has the following structure:

%%800704,A,110402201650,N5121.3489W00011.1780,000,230,NA,47000000,531,CFG:Z31,10,1|

A JSON representation of this feed looks like this:

{"ID": "800704", "GPSValid": "A", "DateTime": "110402201650", "Loc": "N5121.3489W00011.1780", "Speed": "000", "Dir": "230", "Temp": "NA", "Status": "47000000", "Event": "531", "Message": "CFG:Z31,10,1|"}
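A sketch of turning the raw record into that shape, assuming the field names from the sample above, that the first nine fields are comma-separated, and that everything after the ninth comma is the free-form message (parse_record is a hypothetical helper, not part of the platform):

```python
# Field names taken from the JSON sample above.
FIELDS = ["ID", "GPSValid", "DateTime", "Loc", "Speed",
          "Dir", "Temp", "Status", "Event"]

def parse_record(raw):
    body = raw.strip().lstrip("%")        # drop the %% record prefix
    parts = body.split(",", len(FIELDS))  # the message keeps its own commas
    record = dict(zip(FIELDS, parts[:len(FIELDS)]))
    record["Message"] = parts[len(FIELDS)]  # trailing "|" kept, as in the sample
    return record

rec = parse_record("%%800704,A,110402201650,N5121.3489W00011.1780,"
                   "000,230,NA,47000000,531,CFG:Z31,10,1|")
```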

The structure of the data makes it ideal for a document-centric data store. We have evaluated MongoDB as the storage solution; CouchDB is also in the running. Fast data writes and reads are essential for our application.

The feed gives us vast opportunities to integrate with HR, Payroll and financials systems.