With the current two-compiler model, it is not possible to tier up to optimized code much faster. Optimization can be sped up, but at some point you can only gain more speed by removing optimization passes, which reduces peak performance.
Sparkplug is designed to compile fast. It is so fast that V8 can compile pretty much whenever it wants, allowing it to tier up to Sparkplug code far more aggressively than it can to TurboFan code, Google says.
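To illustrate why a cheap compiler permits aggressive tier-up, here is a minimal sketch of a tiering decision based on a hotness counter. The threshold values and tier names are invented for illustration; V8's real heuristics are considerably more involved.

```python
# Hypothetical tier-up thresholds (not V8's actual values).
BASELINE_THRESHOLD = 10      # cheap compile: tier up almost immediately
OPTIMIZING_THRESHOLD = 1000  # expensive compile: wait until clearly hot

def choose_tier(call_count):
    """Pick an execution tier from a function's call count."""
    if call_count >= OPTIMIZING_THRESHOLD:
        return "optimized"    # TurboFan-style optimizing compiler
    if call_count >= BASELINE_THRESHOLD:
        return "baseline"     # Sparkplug-style fast compiler
    return "interpreter"

print(choose_tier(5), choose_tier(50), choose_tier(5000))
# interpreter baseline optimized
```

Because a baseline compile costs so little, its threshold can be set very low without risking wasted work on code that only runs a few times.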
One reason for this speed is that the functions Sparkplug compiles have already been compiled to bytecode, and the bytecode compiler has already done most of the hard work. The other is that Sparkplug, unlike most compilers, does not generate any intermediate representation (IR). Instead, it compiles directly to machine code in a single linear pass over the bytecode, emitting code that matches the behavior of each bytecode.
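The single-pass, template-based approach described above can be sketched as follows. This is an illustrative model only, not V8's actual code: the bytecode names loosely follow V8's Ignition conventions, and the "machine code" is represented as strings rather than native instructions.

```python
# Hypothetical bytecode for something like `return a + b`.
BYTECODE = [("Ldar", "a"), ("Add", "b"), ("Return",)]

# One fixed code template per bytecode; a real baseline compiler
# emits native instructions instead of strings.
TEMPLATES = {
    "Ldar":   lambda arg: [f"mov acc, [{arg}]"],
    "Add":    lambda arg: [f"add acc, [{arg}]"],
    "Return": lambda: ["ret"],
}

def baseline_compile(bytecode):
    """Single linear pass: emit the template for each bytecode in order."""
    code = []
    for op, *args in bytecode:
        code.extend(TEMPLATES[op](*args))
    return code

print(baseline_compile(BYTECODE))
# Three bytecodes -> three templates, emitted in order; no IR is
# built, so there is nothing to analyze or optimize between passes.
```

Each bytecode maps to a fixed snippet, so compile time is linear in bytecode length, which is what makes this tier so cheap.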
Because Sparkplug generates no intermediate representation, its opportunities for optimization are limited, but this is not a problem: an optimizing compiler (TurboFan) still sits further along the pipeline.
Google explains how Sparkplug works in this very interesting technical note.