1.10.2017
In the last article about parallel programming I outlined how parallel calculations in Qt work and what is needed for them. As a reminder, Qt parallel calculations require at least two components: input data in one of the Qt containers (typically a QList or QVector) and a function (the kernel) that can perform the desired action on one item of the input data.
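As a minimal illustration of that pattern (my own toy example, not taken from the earlier article), a kernel that squares integers stored in a QVector might look like this:

#include <QCoreApplication>
#include <QVector>
#include <QtConcurrent>

// Kernel: operates on a single element, modifying it in place.
void square(int &value) {
    value *= value;
}

int main(int argc, char **argv) {
    QCoreApplication app(argc, argv);
    QVector<int> data {1, 2, 3, 4};           // input data in a Qt container
    QtConcurrent::blockingMap(data, square);  // the kernel runs on all items in parallel
    // data now holds {1, 4, 9, 16}
    return 0;
}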
Sometimes I have a directory full of different pictures and I need to scale them down to some maximum size. I really do solve this problem from time to time, with a simple netpbm script written directly on the command line. I have to say that when scripting on the command line I have never manipulated the pictures in parallel, always one by one; it is simply the easiest way to do it. I would never write such a script in C++. As an example of parallel programming, however, the task is quite appropriate.
The input data in this example are stored in a QStringList, one file name per item.
File names are passed to the program as arguments on the command line, and the arguments can be easily obtained using QCoreApplication::arguments(). Note that the first item in the arguments list is not the first file but the program name, a convention inherited from Unix, so the first item should be removed:
QStringList files = QCoreApplication::arguments();
files.takeFirst();
The computational function receives one item of the input data as a parameter and performs the required operation on it. In our case the function receives the name of an input image file; its task is to load the image, scale it down, and save it to a new file with a modified name. In Qt, such a function can be very simple:
void resize(const QString& filename) {
    QImage image(filename);
    QImage scaled = (image.width() > image.height())
                        ? image.scaledToWidth(196)
                        : image.scaledToHeight(196);
    scaled.save("resized-" + filename);
}
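For real use, a slightly more defensive variant (my sketch, not part of the original article) could skip files that QImage cannot read and report failed writes; the name resizeSafely is hypothetical:

// Requires <QImage> and <QDebug>.
void resizeSafely(const QString& filename) {
    QImage image(filename);
    if (image.isNull()) {   // the file is missing or not a readable image
        qWarning() << "Cannot read" << filename;
        return;
    }
    QImage scaled = (image.width() > image.height())
                        ? image.scaledToWidth(196)
                        : image.scaledToHeight(196);
    if (!scaled.save("resized-" + filename)) {
        qWarning() << "Cannot write" << ("resized-" + filename);
    }
}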
For parallel calculations, the Qt library offers several different functions (map, filter, reduce, their blocking variants, and combinations thereof). For our purposes, the blocking map is the best option:
QtConcurrent::blockingMap(
    files,    // Input data
    resize    // Map function
);
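As an aside (my addition; the article itself sticks to the blocking call): the non-blocking QtConcurrent::map() returns a QFuture immediately, so the caller decides when to wait:

// Non-blocking variant: returns at once, work continues in the background.
QFuture<void> future = QtConcurrent::map(files, resize);
// ... other work can happen here, or a QFutureWatcher can report progress ...
future.waitForFinished();   // block only when the results are actually needed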
The whole program put together:

#include <QCoreApplication>
#include <QtConcurrent>
#include <QImage>
#include <QDebug>

void resize(const QString& filename) {
    QImage image(filename);
    QImage scaled = (image.width() > image.height())
                        ? image.scaledToWidth(196)
                        : image.scaledToHeight(196);
    scaled.save("resized-" + filename);
}

int main(int argc, char **argv) {
    QCoreApplication app(argc, argv);
    QStringList files = QCoreApplication::arguments();
    files.takeFirst();
    if (files.isEmpty()) {
        qDebug() << "Usage: resize filename1.jpg filename2.jpg ...";
        return 1;
    }
    QtConcurrent::blockingMap(files, resize);
    return 0;
}
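To build it with qmake, a minimal project file might look like this (my assumption; the article does not show the build setup):

# resize.pro (assumed, not shown in the article)
QT += concurrent     # QtConcurrent is a separate module in Qt 5
CONFIG += console
SOURCES += main.cpp
TARGET = resize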
It could not be easier.
I tested the program on a thousand JPEG images, using an eight-core AMD FX-8350 processor.
For comparison I made a single-threaded variant, in which the QtConcurrent::blockingMap() call was replaced with a simple loop:
for (int i = 0; i < files.size(); i++) {
    resize(files[i]);
}
The result of the single-threaded version:
time ../resize *.jpg
real    0m3.507s
user    0m3.344s
sys     0m0.155s
And the result of the parallel version:

time ../resize *.jpg
real    0m0.633s
user    0m4.302s
sys     0m0.171s

At first glance I was surprised that the program consumed much more CPU time than its total run time, but then I realized: it ran in parallel on multiple cores. Interestingly, the parallel calculation consumed more CPU time than the single-threaded variant, yet it completed several times faster.
Although an eight-core processor was used, the speedup is only about five-fold. Speeding up the calculation much further will probably be impossible: a significant part of the time is spent in I/O operations (reading from and writing to disk). In another article I will show that in purely computational operations the speedup tracks the number of CPU cores much more closely.
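One more aside (my addition, not from the article): QtConcurrent takes its worker threads from the global QThreadPool, whose size defaults to one thread per CPU core, so the degree of parallelism can be tuned when experimenting with such measurements:

#include <QThreadPool>

// Before starting the map, limit QtConcurrent to four worker threads
// (the default is QThread::idealThreadCount(), one per CPU core):
QThreadPool::globalInstance()->setMaxThreadCount(4);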
This article is one of a series on parallel programming in Qt. Watch this site and our Twitter; further parts will follow.