Report from AWS: Pharo Lambda

Hi – I neglected to mention “the catch” with Lambda alongside my results. On a tiny EC2 instance you get those kinds of results (this is where I measured the 50ms numbers) – however on Lambda you aren’t entirely sure what hardware it’s running on, and there are two aspects to consider: a cold start (where you are allocated a new Lambda instance, so it has to pull in your deployed package), and what appears to be a cached start, where one of your old Lambda environments can be reused. On top of both of these states there is the extra cost of spawning out to Pharo, as it isn’t supported natively.

I mention this in the README on the GitLab page (it’s possibly a bit subtle) – but I was pointed to the Sparta GoLang project (Go isn’t supported natively either), where they have measured that the cost of spawning out to GoLang (and it looks fairly similar for Pharo) is 700ms. Essentially this spawning cost is that of loading up a NodeJS environment (presumably from some Docker-like image they have already prepared – although they don’t reveal how this is done), “requiring” the ‘child_process’ node module to get an exec method, and then shelling out to your code. (In my repo this is the PharoLambda.js file.)

Empirically I am seeing results from 500ms to 1200ms which are in line with Sparta (possibly better? I haven’t loaded up a Go environment to understand what they need to package up to deploy an app that can be exec’d and how that compares to our 10mb’ish footprint).

If I look at a basic NodeJS hello-world app, I see responses from 0.5ms to 290ms (the minimum billing unit is 100ms). I got the impression from a recent serverless meet-up that sub-500ms is what people aim for, which means we are at least in the running.

I don’t know how sensitive the ‘overhead’ load time is to the size of the package you deploy (I saw a big improvement when I got my package below 10mb), or whether it truly is the NodeJS tax. I would love to get hold of the AWS team and suggest they provide another fixed solution: an efficient C program that exec’s a named executable with configurable parameters and the “event” parameter serialised as JSON (on the surface it seems overkill to use NodeJS for just that simple operation).

All this said, the free tier gives you “1M free requests per month and 400,000 GB-seconds of compute time per month” – so assuming we can do interesting things in under a second (which I’ve shown), you can process 400,000 of them a month for free (which isn’t bad really).
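As a sanity check on that figure, assuming the worst case of a full 1 GB memory allocation running for a full second per invocation (my assumed numbers – a smaller allocation would stretch the quota further):

```javascript
// Back-of-envelope free-tier maths from the quoted quotas.
const freeGbSeconds = 400000;  // monthly compute quota (GB-seconds)
const freeRequests = 1000000;  // monthly request quota
const memoryGb = 1.0;          // assumed memory allocation
const secondsPerRun = 1.0;     // "interesting things in under a second"

const computeLimited = freeGbSeconds / (memoryGb * secondsPerRun);
const monthlyRuns = Math.min(computeLimited, freeRequests);
console.log(monthlyRuns); // → 400000
```

So the compute quota, not the request quota, is the binding limit at one second per run.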



Hi – the script I’m using is nothing fancy; look at the GitLab .yml file.
This is all Linux running in an Ubuntu Docker image on GitLab CI.


