Efficient data imports
An application's performance is affected, among other things, by the performance of its parts. Many current applications contain a database layer which, in my experience, gets neglected more often than it deserves. This is unfortunate, because there are a lot of quick performance wins to be had by playing to a database's strengths.
Let's think of an application which periodically collects large amounts of data, adapts it from a foreign structure into its native domain and stores the results in a database for further use. Data units must be unique, something we need to enforce each time a new import takes place.
One way of achieving this is to construct domain-native objects or structures by parsing the external data feeds and to check the database for duplicates using a custom hashcode identity mechanism. We can store the hashcode values in a UNIQUE database column to ensure data integrity.
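A minimal Sequel schema along these lines might look like the following; the entries table and its payload and hashcode columns are illustrative assumptions, not part of any particular application.

DB.create_table :entries do
  primary_key :id
  String :payload                      # the adapted, domain-native data
  String :hashcode, :unique => true    # custom identity hash, backed by a UNIQUE index
end

With the table in place, the most straightforward import is a loop that inserts each entry on its own and swallows uniqueness violations: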
DATA.each {|e| DB[:entries] << e rescue nil}
This code iterates over the adapted object enumeration and attempts a database insert for each entry, ignoring any exceptions due to uniqueness violations. It also introduces significant overhead: one database query for every entry in the imported collection.
Bulk inserts are nothing new and most, if not all, modern databases offer this functionality, which is also supported by the majority of database access libraries. Ruby's Sequel, for instance, allows bulk insert operations with the multi_insert method.
DB[:entries].multi_insert(DATA)
There's a caveat here, as this operation will terminate the moment a duplicate entry violation error occurs. MySQL offers the INSERT IGNORE construct, which is particularly useful in this scenario. Using the IGNORE keyword will cause errors that occur while executing the INSERT statement to be treated as warnings.
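For illustration, this is roughly the statement we want Sequel to generate; the table and column names are carried over from the earlier sketch, and the SQL is issued by hand through Sequel's generic DB.run:

# The duplicate hashcode in the second tuple is skipped instead of aborting the statement.
DB.run "INSERT IGNORE INTO entries (hashcode, payload) VALUES ('a1b2', 'first'), ('a1b2', 'duplicate, ignored')"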
Looking to investigate the performance boost associated with the above technique, I've put together a small extension for Sequel, enabling the toolkit to make use of INSERT IGNORE.
module InsertIgnore
  # Flag the dataset so that the generated multi-row insert uses INSERT IGNORE.
  def ignore_duplicates!
    @ignore = true
    self
  end

  def multi_insert_sql(columns, values)
    columns = column_list(columns)
    values = values.map {|r| literal(Array(r))}.join(Sequel::MySQL::Dataset::COMMA_SEPARATOR)
    ignore = @ignore ? " IGNORE " : ' '
    ["INSERT#{ignore}INTO #{source_list(@opts[:from])} (#{columns}) VALUES #{values}"]
  end
end
It can be used like this:
Sequel::MySQL::Dataset.send(:include, InsertIgnore)
DB[:entries].ignore_duplicates!.multi_insert(DATA)
Inserting 100,000 records, some of them duplicates, using the application loop approach, which issues an insert query for each entry, took about 49 seconds on my laptop. Its INSERT IGNORE counterpart took about 4 seconds.
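A rough way to reproduce the comparison, assuming the DATA collection and entries table from above, is Ruby's built-in Benchmark module:

require 'benchmark'

naive = Benchmark.realtime do
  DATA.each {|e| DB[:entries] << e rescue nil}         # one INSERT per entry
end

DB[:entries].delete                                    # reset between runs for a fair comparison

bulk = Benchmark.realtime do
  DB[:entries].ignore_duplicates!.multi_insert(DATA)   # a single INSERT IGNORE
end

puts "per-entry: #{naive.round(2)}s, bulk: #{bulk.round(2)}s"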
There are things to watch out for when using the latter approach. Depending on the number of records we intend to insert, we can end up constructing very large queries. MySQL sets the maximum length of a packet, and therefore of a single statement, with the max_allowed_packet system variable, which defaults to 1 megabyte and can be increased up to 1 gigabyte. Loading such large datasets into memory can also prove problematic, so slicing the import into chunks is probably a good idea.
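One simple way to do that is Ruby's Enumerable#each_slice; the chunk size of 1,000 here is an arbitrary choice meant to keep each generated statement comfortably below the packet limit:

DATA.each_slice(1000) do |chunk|
  DB[:entries].ignore_duplicates!.multi_insert(chunk)
end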
In like manner, it's worth mentioning MySQL's ON DUPLICATE KEY UPDATE, which, instead of failing the insert, updates the specified columns of the existing row when a duplicate value violation occurs.
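As a rough illustration, again using the hypothetical entries table and issuing the SQL directly through Sequel's generic DB.run:

# If the hashcode already exists, refresh the stored payload instead of raising an error.
DB.run "INSERT INTO entries (hashcode, payload) VALUES ('a1b2', 'fresh data') ON DUPLICATE KEY UPDATE payload = VALUES(payload)"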