To support the start of the new season of the Pharo Mooc (https://www.fun-mooc.fr/courses/course-v1:inria+41010+self_paced/about) on May 6th, we are happy to announce a new version of the book: TinyBlog: a First web App in Pharo in English and French.
Stéphane, Luc and Olivier. http://mooc.pharo.org
IMTDouai Team (L. Fabresse/N. Bouraqadi) just won the Best paper award in the “Industrial Robot” category @ ICARSC 2019 (https://web.fe.up.pt/~icarsc2019/ ) with our paper titled “PolySLAM: A 2D Polygon-based SLAM Algorithm” implemented in Pharo
@pharoproject @johannDichtl @lxsang @glozenguez @nourybouraqadi @IMTLilleDouai
Hello everyone,

We’ve been working on a workflow engine written in Pharo. You can check it out at: https://github.com/skaplar/NewWave. It is still in early development, and we discussed making it public so everyone interested can join, take a look, or provide any kind of feedback. I’m also on Discord, so you can contact me @skaplar.

Best regards,
Sebastijan Kaplar
Really great talks at PharoDays
Hi, this will be a long mail. I have organized it in different sections.
This work is sponsored by the Pharo consortium and financially supported by Lifeware and Schmidt.
– There are performance issues in Pharo 7.
– I have made benchmarks.
– With Guille we made an improvement in saving source code in methods.
– More improvements to come.
– Automation and CI in the future.
Since the release of Pharo 7 (and before it), it was well known that
there was a performance regression in the normal execution of the
image. By checking the usual operations, we have seen (and many have
also detected) that there was an issue with the loading, compilation,
and unloading of code, as well as with the creation of classes and
traits and the use of slots.
Although we were sure that there was a performance regression, we had
to show it in a way that we could test and measure. If we cannot
measure it or repeat its execution, it is worthless.
To do so, I have created an initial set of micro-benchmarks to
cover the normal operations of the image.
The set of benchmarks is available here:
These benchmarks are developed on top of SMark, adding only a command-line
tool and the ability to generate a CSV file.
The idea is to run the benchmarks on different versions of Pharo and
assert that we are not breaking anything.
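As an illustration of the setup described above, a micro-benchmark in the SMark style could look roughly like this. The class, selector, and package names here are hypothetical; only the convention that SMark runs every method whose selector starts with 'bench' comes from the framework itself.

```smalltalk
"Hypothetical SMark micro-benchmark suite: SMark picks up every
 method whose selector starts with 'bench'."
SMarkSuite subclass: #ImageOperationBenchmarks
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'Pharo-MicroBenchmarks'

"Measure the cost of building (and removing) a class, one of the
 operations where the regression was observed."
ImageOperationBenchmarks >> benchClassCreation
	| cls |
	cls := Object subclass: #BenchScratchClass
		instanceVariableNames: ''
		classVariableNames: ''
		package: 'Pharo-MicroBenchmarks-Scratch'.
	cls removeFromSystem
```

The command-line tool and CSV export mentioned above would then make it possible to run such a suite against several Pharo versions and compare the numbers.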
The first results were basically a nightmare: some operations take
almost 20 times longer in Pharo 7, especially the ones related to
loading and compiling code.
The attached document details all the benchmarks, the different
results, and the analysis of the improvements and regressions
(positive percentages are regressions, i.e. more time; negative ones
are improvements, i.e. less time).
I have checked the results on macOS with 64-bit images, but since the
problems lie in pure Smalltalk implementations, both the problems and
the solutions are cross-platform.
Having the benchmarks, it was easy to start looking for the problems.
Thanks to the help of Guille, we have detected some problems in the
implementation of SourceFile. Objects of this class are responsible
for saving the source code when a method is compiled.
By improving this implementation, we have obtained results similar to
Pharo 6 for the compilation of methods.
Comparing a stock Pharo 8 image with one containing the fix shows the
improvement; again, there are more details in the attached file.
Also, we have ported this fix to Pharo 7.
– Making it part of the CI infrastructure: running it on each PR
and build to detect regressions in the changes.
– Adding more micro and macro benchmarks. I have a list of things to
test, but I am open to more:
– Slot Implementation
– Process handling
– Files (open / write / read)
– Loading: Moose / Seaside
– Recompile All
– Condense Sources
We also know that there are platform-related issues (especially on
Windows), so the idea there is the same: build a benchmark, measure
it, improve it.
The idea is to have a more robust way of detecting and handling the
performance of Pharo. Of course, I am open to all your comments.
This already existed in various forms; a couple of years ago I made a newer version. They can all be found at http://www.smalltalkhub.com/#!/~BenComan/DNS/, including unit tests (but some of the older code in there is a bit stale).
It covers most record types, although most of them are rarely used.
NeoSimplifiedDNSClient default addressForName: 'pharo.org'. "22.214.171.124"
One of my goals was to use it as a more reliable, non-blocking ‘do we have internet access’ test:
NeoNetworkState default hasInternetConnection. "true"
From the class comments:
I am NeoSimplifiedDNSClient.
I resolve fully qualified hostnames into low level IP addresses.
NeoSimplifiedDNSClient default addressForName: 'stfx.eu'.
I use the UDP DNS protocol.
I handle localhost and dot-decimal notation.
I can be used to resolve Multicast DNS addresses too.
NeoSimplifiedDNSClient new useMulticastDNS; addressForName: 'zappy.local'.
I execute requests sequentially and do not cache results.
This means that only one request can be active at any single moment.
It is technically not really necessary to use my default instance as I do not hold state.
I am NeoDNSClient.
I am a NeoSimplifiedDNSClient.
NeoDNSClient default addressForName: 'stfx.eu'.
I add caching respecting ttl to DNS requests.
I allow for multiple outstanding requests to be handled concurrently.
UDP requests are asynchronous and unreliable by definition. Since DNS requests can take some time, it should be possible to have multiple in flight at the same time, thus concurrently. Replies will arrive out of order and need to be matched to their outstanding request by id.
If a request has been seen before and its response is not expired, it will be answered from the cache.
Each incoming request is handled by creating a NeoDNSRequest object and adding that to the request queue. This triggers the start up of the backend process, if necessary. The client then waits on the semaphore inside the request object, limited by the timeout.
The backend process loops while there are still outstanding requests that have not expired. It sends all unsent requests at once, and then listens briefly for incoming replies. It cleans up expired requests. When a reply comes in, it is connected to its request by id. The semaphore in the request object is then signalled so that the waiting client can continue and the request is removed from the queue. The process then loops. If the queue is empty, the backend process stops.
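The client-side flow described above can be sketched roughly as follows. All method and variable names here are illustrative, not NeoDNSClient's actual code; only the overall shape (request object, queue, backend process, semaphore wait with timeout) comes from the description.

```smalltalk
"Sketch of handling one incoming request, following the description
 above. Names are illustrative, not the real implementation."
handleRequest: hostName
	| request |
	request := NeoDNSRequest for: hostName.   "wraps an id and a Semaphore"
	requestQueue add: request.
	self ensureBackendProcessRunning.         "start the backend if needed"
	"Block until the backend signals the request's semaphore, or time out.
	 In Pharo, Semaphore>>waitTimeoutSeconds: answers true on timeout."
	(request semaphore waitTimeoutSeconds: self timeout)
		ifTrue: [ ^ self error: 'DNS request timed out' ].
	^ request response
```

The backend, for its part, signals `request semaphore` once it has matched an incoming reply to the request by id, which is what lets the waiting client proceed.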