This is my weekly ChangeLog, from 11 September 2017 to 17 September 2017.
You can see it in a better format by going here: http://log.smallworks.eu/web/search?from=11/9/2017&to=17/9/2017
14 September 2017:
* Since Ronie came to spend two weeks with us, I made a “stop the world” call to work with him on a couple
of projects I want to integrate into Pharo.
So, I’ve been working on that the last two days: Ronie has a working implementation of a “real headless”
VM that can choose to start a world using SDL2.
Why do we want this? For many reasons… one of them is that it is lame to open a hidden window when
we just want a command line, but also because the decision to open a window (the World or
whatever) should be the responsibility of the image (the user), not the VM.
Anyway, the problem is that Ronie made his VM using CMake while we use plain Makefiles (a very complex
structure of makefiles), so we needed to convert that.
Also, since this is experimental, a lot of small details are missing and we are scrambling to fill them in.
But well… I’m happy to say that we have a Pharo 7.0 running 100% in headless mode, with an SDL2 window
serving the world, on 32-bit macOS.
Tomorrow we will work on fixing some remaining details and extending the build to Linux and Windows (32-bit);
then we will jump to 64 bits.
11 September 2017:
* I just released [iceberg v0.5.8](https://github.com/pharo-vcs/iceberg/releases/tag/v0.5.8).
This version is a maintenance release and contains these fixes:
* speed up of commits by not traversing the full tree to detect staged files (just compare later)
* fix refresh option in PR tool
* do not use hardcoded colours in diffs
* add guessing of source dirs to ease adding local repositories
* recategorise methods
It will be integrated into Pharo 7.0 in [case 20406](https://pharo.fogbugz.com/f/cases/20406).
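The commit speed-up in the first bullet above amounts to replacing a full tree walk with a snapshot comparison. A rough Python sketch of the idea (the function name and data shapes are my own illustration, not Iceberg’s actual code):

```python
# Hypothetical sketch: find changed entries by comparing two
# path -> content-hash snapshots, instead of re-walking the whole tree.

def changed_paths(before, after):
    """Return sorted paths whose content hash differs between snapshots
    (covers edited, added and removed files)."""
    return sorted(
        path
        for path in before.keys() | after.keys()
        if before.get(path) != after.get(path)
    )

before = {"A.st": "h1", "B.st": "h2", "C.st": "h3"}
after  = {"A.st": "h1", "B.st": "h9", "D.st": "h4"}  # B edited, C removed, D added

print(changed_paths(before, after))  # ['B.st', 'C.st', 'D.st']
```

The point is that the comparison cost is proportional to the number of entries, not to the work of re-hashing an entire working tree on every commit.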
* Last week I was at ESUG 🙂
Anyway, I’m working on [tonel](http://github.com/pharo-vcs/tonel), the new file format made to replace
[filetree](http://github.com/pharo-vcs/filetree) (for many reasons, the most important being the
non-scalable nature of the file-per-method strategy).
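To give a feel for the difference: filetree stores every method in its own file, while Tonel keeps a whole class (definition plus methods) in a single file. An illustrative fragment in approximate Tonel syntax, written from memory and not taken from any real package:

```smalltalk
Class {
	#name : #Point,
	#superclass : #Object,
	#instVars : [ 'x', 'y' ],
	#package : 'Kernel-BasicObjects'
}

{ #category : #accessing }
Point >> x [
	^ x
]

{ #category : #accessing }
Point >> y [
	^ y
]
```

One file per class instead of hundreds of tiny files per package is what makes the format scale better on ordinary filesystems and in git.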
Uncategorized methods in SystemVersionTest, SystemProgressMorph, SugsWatchpointSuggestion, SugsSuggestionSwapMethodTest, SugsNau
add Gt and Github support to minimal image
Uncategorized methods in TextKern, TextSelectionColor, TextURL
Enforce Proper method categorization – Part 1 – SUnit
Use Pharo.org instead of disney
Uncategorized methods in WorldMenuHelp,WeakValueAssociation, WeakOrderedCollectionTest, WeakOrderedCollection, WeakMessageSendTe
Pharo 7 Help should already use Pharo 7 instead of 6 and show some first highlights
Categorize methods in UserOfFooSharedPool, UnusedVariable, UnlimitedInstanceVariableSlotTest, UnknownSelector, UndefinedVariable
Categorize methods in TabLabelMorph, TaskbarTask, TermInfoCharacter, TestAutoFractionComputation, TextClassLink, TextComposer, T
Uncategorized methods in VSUnloadUnit and VMWindowDriver
Uncategorized methods in ZipStringMember, ZipStore, ZipFileMember, ZipDirectoryMember
MailMessage API Improvement
Add Retry of tests in CI
Categorize methods in RubNotificationStrategy
free and beNull missing methods in referenced structures
New warning text color is not readable on white theme
deprecated call in QANautilusPluginMorph>>#displayCritique:
RecursionStopper methods not categorized
Categorize method in PharoShortcuts
Uncategorized methods in CP1253TextConverter, CairoPNGPaint, CairoScaledFont, CheckboxButtonMorph, CheckboxMorph, CheckboxMorph,
Uncategorized methods in DummyUIManager, DropListMorph class, DropListMorph, DoesNotUnderstandDebugAction, DockingBarMenuMorph
Move SymbolicBytecode>># and hash from package “GT-BytecodeDebugger” into “Kernel”
[ Pharo 70] Senders-of-String—isLegalInstVarName-isLegalClassName-
[ Pharo 70] Categorize methods in AnnouncementLogger #240
[ Pharo 70 ] Build 83 PR 236 introduce-at-at-in-dictionary
[ Pharo 70 ] Build XX PR 236 free-and-beNull-missing-methods-in-referenced-structures
[ Pharo 70 ] Build XX PR 233 cleanup #isMorphicModel
… lots not recorded, build is at 79.
[ Pharo 70 ] Build 69 PR 225 20350-include 32-bit sources in the 64-bit Pharo archive
[ Pharo 70 ] Build 66 PR 223 20348-testWideStringClassName-needs-to-be-unmarked-as-expected-failure
The Pharo Consortium is very happy to announce that Zweidenker GmbH
has upgraded to Gold Member status.
The goal of the Pharo Consortium is to allow companies and institutions to
support the ongoing development and future of Pharo.
Individuals can support Pharo via the Pharo Association:
Hi – I neglected to mention “the catch” with Lambda next to my results. On a tiny EC2 instance you get those kinds of results (this is where I measured the 50ms numbers) – however on Lambda you aren’t entirely sure what hardware it’s running on, and there are two aspects to consider: a cold start (where you are allocated a new Lambda instance, so it has to bring in your deployed package) and what appears to be a cached start, where one of your old Lambda environments can be reused. On top of both of these states there is an extra cost of spawning out to Pharo, as it’s not supported natively.
I mention this in the Readme on the gitlab page (it’s possibly a bit subtle) – but I was pointed to the Sparta GoLang project (which isn’t supported natively either), where they have measured that the cost of spawning out to GoLang (and it looks fairly similar for Pharo) is 700ms. Essentially this spawning is the cost of loading up a NodeJS environment (presumably from some Docker-like image they have already prepared – although they don’t reveal how this is done), “requiring” the ‘child_process’ node module to get an exec method, and then your code shelling out. (In my repo, this is the PharoLambda.js file.)
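The shim itself is NodeJS, but the shape of the trick – receive the event, exec the external program, relay its output – fits in a few lines. Here it is sketched in Python for brevity; the handler and the stand-in command are hypothetical, not PharoLambda’s actual code:

```python
import json
import subprocess

def handler(event):
    """Hypothetical shim: hand the Lambda event to an external
    executable as a JSON argument, and relay its stdout as the response."""
    # In PharoLambda the real shim is NodeJS exec'ing the Pharo VM + image;
    # here we exec a stand-in command (echo) so the sketch is runnable.
    result = subprocess.run(
        ["echo", json.dumps(event)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(handler({"name": "world"}))  # {"name": "world"}
```

Every invocation pays the process-spawn cost on top of the host runtime’s own startup, which is exactly the overhead being measured here.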
Empirically I am seeing results from 500ms to 1200ms, which are in line with Sparta (possibly better? I haven’t loaded up a Go environment to understand what they need to package up to deploy an app that can be exec’d, or how that compares to our 10MB-ish footprint).
If I look at a basic NodeJS hello-world app, I see 0.5ms to 290ms responses (the minimum billing unit is 100ms). I got the impression from a recent serverless meet-up that sub-500ms is what people aim for, which means we are at least in the running.
I don’t know how sensitive the ‘overhead’ load time is to the size of the package you deploy (I saw a big improvement when I got my package below 10MB), or whether it truly is the NodeJS tax. I would love to get hold of the AWS team and suggest they provide another fixed solution: efficiently exec, in C, a named executable with configurable parameters and the “event” parameter serialised as JSON (on the surface it seems overkill to use NodeJS for just that simple operation).
All this said, the free tier gives you “1M free requests per month and 400,000 GB-seconds of compute time per month” – so assuming we can do interesting things in under a second (which I’ve shown), you can process 400,000 of them a month for free (which isn’t bad, really).