I have put together a quick test of Data::MySQL to assess its suitability for a project involving real-time data generation: typically 5-10 records generated per second, with occasional bursts of up to 100 records per second. Each record has 20 or so fixed-width columns, requiring less than 100 bytes in total.

I would have thought this workload should be well within the capabilities of Data::MySQL, but in my first test I struggled to achieve any rate beyond this with repeated "INSERT INTO" queries based on a stored query employing lots of "use(record.this), use(record.that), use(record.other)" syntax. If I integrated this into the real-time application that generates the data, it would struggle to run at all. The test involves a single table with a single auto-increment integer as the primary key; there are no complicated indexing arrangements.
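For reference, my insert loop looks roughly like the sketch below. This assumes the POCO Data MySQL connector (which matches the use() binding syntax above); the table name, column names, and connection string are placeholders, not my actual schema:

```cpp
// Illustrative sketch only -- assumes Poco::Data::MySQL.
// Table/column names and the connection string are placeholders.
#include <Poco/Data/Session.h>
#include <Poco/Data/Statement.h>
#include <Poco/Data/MySQL/Connector.h>

using namespace Poco::Data::Keywords;

struct Record {
    int    a;       // stand-ins for the ~20 fixed-width fields
    int    b;
    double c;
};

int main() {
    Poco::Data::MySQL::Connector::registerConnector();
    Poco::Data::Session session("MySQL",
        "host=localhost;user=test;password=test;db=test");

    Record record{};

    // The statement is prepared once; use() binds by reference,
    // so each execute() re-reads the current field values.
    Poco::Data::Statement insert(session);
    insert << "INSERT INTO samples (a, b, c) VALUES (?, ?, ?)",
        use(record.a), use(record.b), use(record.c);

    for (int i = 0; i < 1000; ++i) {
        record.a = i;          // fill in the generated data
        record.b = i * 2;
        record.c = i * 0.5;
        insert.execute();      // one statement execution per record
    }
}
```

Each execute() here is a separate round trip to the server, which is where I suspect the time is going.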
Is this a reasonable result, or should I expect to achieve hundreds or thousands of records per second via this method, or some other method? What is the preferred way of submitting hundreds or thousands of records (or more) per second to the database?