I'm using Comlink on a Deno server that is handling a lot of traffic. I'm just passing a string to the worker, and the worker is sending back ArrayBuffers (I'm using `Comlink.transfer` for sending those, of course), and it turns out that Comlink has become a bottleneck on the main thread - specifically this code:

I'd have thought that V8 would have some magic optimizations to make the function creation here not as expensive as it seems to be. It looks like it's creating a "fresh" function each time, and then isn't able to optimize it since it only gets called once, so even simple stuff like `!ev.data || !ev.data.id` runs slow because it's basically running in "interpreted" mode. That's my guess here, anyway.
I'm wondering if you'd welcome a pull request which optimizes this, and if so, do you have a preferred approach? My thinking is that you'd have a single long-lived handler function (rather than creating a new one for every request), and it would use a `Map` which maps an `id` to its resolver.
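To make the idea concrete, here's a rough sketch of what I mean (illustrative only - this is not Comlink's actual code, and `request`, `pending`, and the echo endpoint are hypothetical names I made up for the example):

```javascript
// One long-lived "message" handler shared by all requests, routing replies
// through a Map keyed by request id, instead of a fresh closure per request.

const pending = new Map(); // id -> resolve function
let nextId = 0;

// Created once, called many times, so V8 can actually optimize it.
function onMessage(ev) {
  if (!ev.data || !ev.data.id) return;
  const resolve = pending.get(ev.data.id);
  if (resolve) {
    pending.delete(ev.data.id);
    resolve(ev.data.value);
  }
}

// Hypothetical request helper: register a resolver, then post the message.
function request(endpoint, payload) {
  const id = ++nextId;
  return new Promise((resolve) => {
    pending.set(id, resolve);
    endpoint.postMessage({ id, value: payload });
  });
}

// Minimal in-process stand-in for a worker endpoint, for demonstration:
// it "replies" asynchronously with the same id and an uppercased value.
const echoEndpoint = {
  postMessage(msg) {
    queueMicrotask(() =>
      onMessage({ data: { id: msg.id, value: msg.value.toUpperCase() } })
    );
  },
};

request(echoEndpoint, "hello").then((reply) => {
  console.log(reply); // "HELLO"
});
```

The key point is that `onMessage` is installed once per endpoint, so per-request cost drops to a `Map` set/get/delete rather than allocating and warming up a new closure every time.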