Hi, this will be a long mail. I have organized it in different sections.
This work is sponsored by the Pharo consortium and financially supported by Lifeware and Schmidt.
– There are performance issues in Pharo 7.
– I have made benchmarks.
– With Guille we made an improvement in saving source code in methods.
– More improvements to come.
– Automation and CI in the future.
Since the release of Pharo 7 (and even before it), it has been well known that
there is a performance regression in the normal execution of the
image. By checking the usual operations, we have seen (and many others have
also detected) that there was an issue with the loading, compilation,
and unloading of code, and also with the creation of classes and traits and
the use of slots.
Although we were sure that there was a performance regression, we had
to show it in a way that we could test and measure. If we cannot
measure it or repeat its execution, it is worthless.
For doing so, I have created an initial set of micro-benchmarks to
cover normal operations of the image.
The set of benchmarks is available here:
These benchmarks are developed on top of SMark, only adding a command
line tool and the ability to generate a CSV file.
The idea is to run the benchmarks on different versions of Pharo and
assert that we are not breaking anything.
The first results were basically a nightmare: some operations take
almost 20 times longer in Pharo 7, especially the ones related to the
compilation of methods.
In the attached document, you will find the details of all the benchmarks,
the different results, and an analysis of the improvements and
regressions (positive percentages are regressions (more time);
negative ones are improvements (less time)).
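To show what that comparison amounts to, here is a minimal sketch of comparing two exported CSV files. The column names (`benchmark`, `time`) and all the numbers are invented for illustration; the real export format may differ:

```python
import csv
import io

def load_results(csv_text):
    """Parse benchmark results from CSV text into {name: time}."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["benchmark"]: float(row["time"]) for row in reader}

def compare(baseline, candidate):
    """Return {name: percent change} for benchmarks present in both runs.
    Positive = regression (more time), negative = improvement (less time),
    matching the convention used in the attached document."""
    return {
        name: (candidate[name] - baseline[name]) / baseline[name] * 100.0
        for name in baseline
        if name in candidate
    }

# Toy data standing in for two exported CSV files (values invented).
pharo6 = "benchmark,time\ncompileMethod,10.0\ncreateClass,5.0\n"
pharo7 = "benchmark,time\ncompileMethod,200.0\ncreateClass,4.0\n"

changes = compare(load_results(pharo6), load_results(pharo7))
for name, pct in sorted(changes.items()):
    print(f"{name}: {pct:+.1f}%")
```

With these toy numbers, a 20x slowdown shows up as +1900% and a small speedup as a negative percentage, which is how the percentages in the attached document should be read.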
I have checked the results on OSX with 64-bit images. But as the
problem is in pure Smalltalk implementations, the problems (and the
solutions) are cross-platform.
Having the benchmarks, it was easy to start looking for the problems.
Thanks to the help of Guille, we have detected some problems in the
implementation of SourceFile. Objects of this class have the
responsibility of saving the source code when a method is compiled.
By improving this implementation, we have gotten results similar to
Pharo 6 in the compilation of methods.
Comparing a stock Pharo 8 image with one including the fix, we have the
following results. Again, there are more details in the attached file.
Also, we have ported this fix to Pharo 7.
– Making it part of the CI infrastructure: running the benchmarks on each PR
and build to detect regressions introduced by the changes.
– Adding more micro and macro benchmarks. I have a list of things to
test, but I am open to more suggestions:
– Slot Implementation
– Process handling
– Files (open / write / read)
– Loading: Moose / Seaside
– Recompile All
– Condense Sources
We also know that there are platform-related issues (especially on
Windows), so the idea will be the same: build a benchmark, measure
it, improve it.
The idea is to have a more robust way of detecting and handling
performance issues in Pharo. Of course, I am open to all your comments.