This repository has been archived by the owner on May 7, 2022. It is now read-only.

Should Node.js be VM neutral in the future? #54

Closed
mikeal opened this issue Jan 22, 2016 · 147 comments

@mikeal
Contributor

mikeal commented Jan 22, 2016

First and foremost, a bit of a reality check: The "Node.js Platform" is already available on a variety of VMs other than the V8 runtime we ship with:

  • Compile-to systems like the early Tessel project and now Samsung's JerryScript/IoT.js.
  • Microsoft ships Node.js on top of Chakra for Xbox, Windows Phone, and Windows 10 IoT.
  • Nodyn is a Node.js API on top of the JVM.
  • JXCore is an "extended Node.js API" that can bind to SpiderMonkey, V8, and Chakra.

Because of Node.js' massive ecosystem of packages, educational materials, and mind-share, we should expect this to continue in the future. Part of Node.js going everywhere is that it may need to run on other VMs in environments V8 can't reach. There's not much that we can do to prevent this.

So the question becomes: Should Node.js Core move towards being VM neutral and supporting more VMs in the main project?

Some of the advantages would be:

  • Normalization of the Native API (which currently breaks every major release)
  • De-facto standard for any JS VM to add Node.js support.
  • Centralization of test infrastructure would increase stability of non-V8 platforms.
  • Standardization of API could allow us to adopt new V8 versions outside of breaking major releases.

There's a long discussion to be had about how to do this. Without guarantees from all the target VM vendors that they will support this neutral API, it could fall on us to make that work. Historically, V8 has made drastic API changes that were not supportable through the API nan had implemented.

There's also an open question about how to structure the tree and the build, and whether we should keep vendoring V8 and every supported VM in a vendor directory or pull them in at build time.

Anyway, let's have the discussion :)

@nodejs/ctc

@Fishrock123

Also, I think some increased competition in this area will likely push for performance improvements in VMs, which would be really good for us.

@misterdjules

JXcore is an "extended Node.js API" on top of the JVM.

I didn't know JXcore had anything to do with the JVM. Where can I find more details about this?

@MadaraUchiha

How do we handle the compatibility issues, speaking from a package maintainer's perspective? Worst-case scenario: this becomes the cross-browser hell all over again (albeit on a smaller scale, because there's no DOM).

@brycebaril

I'm very much in favor of this. My hope is that the WebAssembly story is a big part of this. It would be fantastic to have this on a roadmap in a way that allows Node.js to become part of the WebAssembly discussions, as that is already an area where the major VM vendors are working on common APIs and one with a potential path toward the shared-VM goal.

@RReverser
Member

JXcore is an "extended Node.js API" on top of the JVM.

Correct me if I'm wrong, but AFAIK JXCore is not based on the JVM. It just provides an extended Node.js API on top of whichever supported JavaScript engine you choose (V8, SpiderMonkey, and Chakra are all supported).

@mikeal
Contributor Author

mikeal commented Jan 22, 2016

@misterdjules @RReverser you're right, my bad, I was confusing Nodyn with JXCore. Updated the description.

@RReverser
Member

@mikeal Nice. Although I see that you removed JXCore completely, while I believe it does belong in the list and the discussion, as it also adds a couple of supported non-V8 engines.

@mikeal
Contributor Author

mikeal commented Jan 22, 2016

@MadaraUchiha So, the native module ecosystem currently binds directly to V8 (often through nan, which abstracts the V8 API). We suffer a break in compatibility every time we take a new version of V8 because of that, so moving towards a neutral (stable) API might actually improve compatibility beyond what we currently have.
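
For concreteness, a minimal nan-based addon looks roughly like this (essentially nan's hello-world example; the module calls Nan:: wrappers plus a few v8:: types rather than the raw, version-specific V8 API, yet the compiled binary is still tied to the V8 it was built against):

#include <nan.h>

// A native method exposed to JavaScript. nan papers over V8 API changes at
// the source level, but the compiled addon still targets one V8 version.
NAN_METHOD(Hello) {
  info.GetReturnValue().Set(Nan::New("world").ToLocalChecked());
}

// Register the method on the module's exports object.
NAN_MODULE_INIT(Init) {
  Nan::Set(target,
           Nan::New("hello").ToLocalChecked(),
           Nan::GetFunction(Nan::New<v8::FunctionTemplate>(Hello)).ToLocalChecked());
}

NODE_MODULE(addon, Init)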

@indutny
Member

indutny commented Jan 22, 2016

I don't think that it will create more competition than there is for browsers, but it is surely a good thing in my opinion.

There are lots of obstacles along the way, though: addon compatibility, ES6 features, etc.

@mikeal
Contributor Author

mikeal commented Jan 22, 2016

@RReverser it's back now :)

@rdodev

rdodev commented Jan 22, 2016

Making Node VM-agnostic sounds like a good idea. My question would be: is there a difference in terms of effort, speed of fixing 0-day vulnerabilities, and maintenance between a V8-centric Node and a VM-agnostic one?

@ELLIOTTCABLE

Not on any of the teams, or really important in any way, but I wanted to kind of re-phrase simplistically what I think the majority view of this is going to be:

  1. This is almost incontrovertibly Good for The Community. Good for users. A good thing.
  2. This is a lot of work. Work that could, theoretically, be invested elsewhere.

Unless I'm super-off-base there, then, I'd assume that this conversation needs to directly assault “whether that amount of work being spent on this effort is proportional to the benefits to users.”

(Sorry if this seems obvious! I just didn't see it laid out in so many words above.)

@MadaraUchiha

@mikeal Wouldn't this mean that this abstracted API would have to always be the lowest common denominator in terms of newest ES features etc? Otherwise, you're risking syntax and APIs working in one build, but breaking on another.

Another option is to put newest features behind a flag, but that's kind of meh, and we've been trying to (at least I think) avoid that going forward.

@rdodev

rdodev commented Jan 22, 2016

My thoughts exactly, @ELLIOTTCABLE. I think before Node jumps on "let's do it because it's good," let's ask if they're accidentally overcommitting and using time that could be better spent improving other aspects of Node.

@inikulin

If the aim is to bring Node to new platforms (like Windows Phone), benefit in speed (e.g. by using Chakra on Windows), and keep everything under the hood without changing the end-user API (e.g. Node for Windows will use Chakra as the default engine, for macOS V8, and so on), then it's the way to go. But if developers start using language features available only in a particular engine, it may cause significant fragmentation, which is not a good thing.

@ELLIOTTCABLE

(@rdodev To be clear, I'm not against it. I happen to personally think that the benefit at least slightly outweighs the effort required, despite the huge amount of effort. I'm just … trying to preclude arguments about whether or not it is beneficial in certain ways; or at least, I want to see such arguments prefixed by a big “No. I disagree. I definitely think this is not beneficial, in way” … to differentiate them from what are I expect are the far-more-salient “Yeah, but ” arguments. Making a meta-argument, if you will. :P)

@RReverser
Member

@rdodev Personally I see two ways of doing this: one is to provide an API for external engines to use, and have a separate core team plus teams who maintain each engine's bindings. This is basically what we have now, but with a proper API instead of V8's always-changing one, and with the V8 bindings living in a separate repo just like any other implementation.

The second way is to have the Node team continue to take care of the V8 bindings officially, as it does now, in addition to the neutral API for external consumers and a list of approved engines (which will happen if nodejs/node#4675 is merged directly into Node). That approach greatly increases the Node team's responsibilities and testing matrix, so it will be harder to react to all possible 0-days in each possible engine in a reasonable time, IMO.

Personally I was very glad to use JXCore and to see nodejs/node#4675 coming, but option 1 feels preferable to me, as it gives a clear split of responsibilities between maintainers and allows the Node team to focus on what it has always been good at: developing a stable, safe, and scalable core for server-side applications. Meanwhile, the people who worked on the V8 part of Node would be able to continue that work in the new repo, and Microsoft would be able to work on their Chakra bindings, but without all the mess associated with merging core changes and fixing conflicts over and over as before.

@fivdi

fivdi commented Jan 22, 2016

Samsung JerryScript is a JavaScript engine, so perhaps Samsung IoT.js should have been mentioned together with JerryScript in the original post. Given that the IoT.js API is a subset of the Node.js API, the question of what "The Node.js Platform" actually is needs to be answered, as not everything that runs on Node.js will run on IoT.js.

@LPGhatguy

I think that making Node.js VM-neutral would be beneficial purely on the front of dealing with V8 -- nan is helpful but not the end-all solution. Gaining compatibility with other VMs might be a secondary gain over improved API stability.

@RReverser
Member

@inikulin

But if developers start using language features available only in a particular engine, it may cause significant fragmentation, which is not a good thing.

Note that this is exactly what already happens, even if we don't take into account all the engines listed in the first message. Even with just V8, we already have ES6 evolving, and there is a growing number of modules being published untranspiled, targeting only the latest versions of Node with native ES6 support. So you can't be sure that an installed module will work on your Node unless you keep it up to date or transpile everything, including the node_modules folder, in your own app.

So this change won't bring anything new to that problem, but rather will push harder toward finding better, more generic solutions.

@rdodev

rdodev commented Jan 22, 2016

@RReverser if it is a foregone conclusion that Node is going VM-agnostic, then option 1 sounds like the way to go, for sure.

@ELLIOTTCABLE

“the question as to what "The Node.js Platform" actually is needs to be answered as not everything that runs on Node.js will run on IoT.js” is a really good point, but … I think one that's already answered?

Nothing above said we necessarily need to absorb every project that implements a subset of the current Node API. There can still be ‘pseudo-Nodes’ that aren't blessed, isn't that the case?

@RReverser
Member

@rdodev In that case I believe the answer to your question is easy, as each team will be interested in fixing 0-days that are specific to their engine; whether it's for a browser or for the server doesn't matter that much.

@whitfin

whitfin commented Jan 22, 2016

As an outsider I really can't think of a downside to doing this aside from workload and a bit of complexity in specifying the Node.js interface.

Different VMs will likely function better on different platforms, so it allows developers (hardware vendors being a specific example) to select an appropriate VM for the system they're on, the obvious example being Chakra for Windows-related systems.

I suppose you could argue that because the Node.js API would have to be common across all implementations, you might either have a) VM implementations with missing features, or b) a very slow adoption of things like new ES versions. That would need to be well thought out, but I don't expect it would be that difficult to figure out.

@ELLIOTTCABLE

Another shot-from-a-distance, re @zackehh's point about slow adoption of new features: maybe this is okay / a good thing? Node.js is getting a little older; and I think a slower progression of new features is acceptable, perhaps even could be viewed as a positive thing (stability) at this point. (=

@inikulin

Node.js is getting a little older; and I think a slower progression of new features is acceptable, perhaps even could be viewed as a positive thing (stability) at this point.

Wait a sec. AFAIK the io.js fork was made to fight exactly that state of things.

@mikeal
Contributor Author

mikeal commented Jan 22, 2016

@MadaraUchiha

Wouldn't this mean that this abstracted API would have to always be the lowest common denominator in terms of newest ES features etc? Otherwise, you're risking syntax and APIs working in one build, but breaking on another.

In reality almost everyone binds to nan today, which is an abstract API supporting V8 versions back to v0.10. So most of the ecosystem is already on a "lowest common denominator" API.

Also, one of the current proposals is to use the current V8 API as a starting point and do the work of making newer V8 versions shim to the old API.

Another option is to put newest features behind a flag, but that's kind of meh, and we've been trying to (at least I think) avoid that going forward.

I think we'll end up doing this anyway during the early phase of supporting any new VM. It's the best way we can get support into people's hands and see how stable it is.

@mikeal
Contributor Author

mikeal commented Jan 22, 2016

Wait a sec. AFAIK the io.js fork was made to fight exactly that state of things.

io.js was about project governance and release cadence. We now have great governance, a growing contributor base, and a good release cadence. Having a stable native API would allow us to move even faster, because we'd be able to take a new V8 as quickly as every 6 weeks rather than every 6 months (which we do in order to avoid breaking the native ecosystem so often).

@mhdawson
Member

@kobalicek there are plans coming out of the discussion last week at https://github.com/jasnell/vm-summit for @ianwjhalliday and @stefanmb to get started on an API. There will be a summary of the overall meeting coming out soon, and one of the next steps is to schedule an API WG (https://github.com/nodejs/api) meeting to present the current plan from the summit and to see if there are others who also have time to contribute to the effort. Sounds like we should somehow sync what you are doing with this effort.

@kobalicek

@mhdawson I checked out the documents, thanks for pointing them out!

What I'm missing here is that there are a lot of questions, but no analysis of existing VMs. I don't even see which VMs would be the target. I'm writing this because I have studied SpiderMonkey, V8, and ChakraCore so far, and these engines really do have different APIs and handle types. I think that whoever defines the initial neutral API proposal should be really aware of all the differences these engines have. Supporting simpler engines like duktape shouldn't be an issue, as they have much simpler APIs and usually only one type of handle.

My observations so far:

  • SpiderMonkey:
    • C++ API
    • Exposes different kinds of handles; Rooted doesn't seem to have an equivalent in V8
    • Doesn't have as rich a type system as V8: basically Value, JSString, and JSObject, and it doesn't expose primitive types like v8::Boolean
    • Function templates are done differently compared to V8
    • The size of any value seems to always be 64 bits, even on a 32-bit machine
    • SpiderMonkey doesn't just cast between values and other types like JSString; it has a concept called a payload, so to use a JSString you first need to extract it from the value. This is in contrast to V8, which just casts a value to any type that inherits from it
    • Probably much more; these are the biggest differences I have found so far
  • ChakraCore:
    • I found only the JSRT C API; it seems to be the only API for embedding, and it's C-based
    • Exposes a very slim API that doesn't go into much detail from an embedder's perspective
    • Doesn't expose types, just JsValueRef
    • Is based on error codes; all calls return JsErrorCode

Based only on "my observations" I can answer some of the questions asked:

  • C vs. C++? It should be C++; trying to make a neutral C API is suicide.
  • Binary compatibility between various VM implementations? Dreaming. Handles aren't even guaranteed to have the same size across VM implementations, and VMs pass arrays of handles to native functions. That is not the only deal-breaker. Only source-code compatibility should be the goal.
  • Low-level vs. high-level API? The high-level one should probably be preferred (see the sketch after this list), maybe with some macros to create proper native functions (each engine exposes functions with different signatures). Low-level APIs, especially for creating function prototypes and binding classes, are different and hard to unify. I think the neutral API should use signatures (V8) and other mechanisms by default so that no JS code can crash the VM by calling a native function with an improper this argument, etc.
  • How many value types to provide? I guess Value should be enough, maybe String, Object, and Function, but I'm not sure.
  • How many handle types to provide? This is the most complex thing to answer.
  • Should the neutral API provide an FFI? There is no point unless all target VMs support it natively.
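
A purely hypothetical sketch of such a high-level, source-compatible neutral layer; all names here are invented for illustration and do not correspond to any existing or proposed Node API:

// Hypothetical sketch only; none of these names exist in Node or any engine.
#include <cstddef>

namespace vmneutral {

// A single handle type, per the "Value should be enough" answer above. The
// payload is engine-specific; its size and layout differ per VM, which is one
// reason only source-level (not binary) compatibility is targeted here.
class Value {
 public:
  void* payload = nullptr;
};

// Per-call context. Each backend (V8, SpiderMonkey, ChakraCore, ...) would
// ship its own implementation of these members, selected at compile time, and
// perform the `this`/argument checks so native code can't crash the VM.
class Context {
 public:
  std::size_t ArgumentCount() const;
  Value Argument(std::size_t index) const;
  Value MakeString(const char* utf8, std::size_t length);
  Value MakeNumber(double value);
};

// Native callbacks see only neutral types, never v8::, JS::, or JsValueRef.
using NativeCallback = Value (*)(Context& ctx);

}  // namespace vmneutral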

Well, sorry for the long reply; I just wanted to share some of my observations.

@stefanmb

@mhdawson The API that @ianwjhalliday and I are planning is a module API. This is related to the VM-agnostic problem but not the same thing, as established during https://github.com/jasnell/vm-summit; if anything, it is closer to an evolution of nan.

@kobalicek The detailed list of concerns is very useful, thanks. There have been several other attempts at shimming and it is likely worthwhile examining them as well. Here are the ones I am aware of:
https://github.com/jxcore/jxcore/blob/master/doc/native/Embedding_API_Details.md
https://github.com/martine/v8c/blob/v8c/include/v8c.h
https://github.com/tjfontaine/node-addon-layer/blob/master/include/shim.h

@Fishrock123

See also nodejs/vm#1

@kobalicek

I think another question that should be answered is about performance. If the neutral API wraps the underlying engine completely (i.e. the JXCore or v8c approach), then there will be some overhead.

For example, in https://github.com/tjfontaine/node-addon-layer/blob/master/include/shim.h I don't really like the dynamic memory allocations that happen inside the implementation. I really think the wrapper should be thin and shouldn't need to allocate additional memory; the VM architects have already taken care of that.

@kzc

kzc commented Apr 11, 2016

My 2 cents regarding the engine-neutral API: macros and C++ templates do not lend themselves to stable, long-term ABIs. A goal of this project should be the ability to use the same Node native module shared library against any engine (V8, Chakra, or other) on the same platform without recompilation. Engine state should not leak out into the API. C linkage provides the most stable ABI and is compiler-agnostic on the same platform. But the Qt/KDE projects have also proven that this can be done with a restricted subset of C++ for the same compiler.

https://community.kde.org/Policies/Binary_Compatibility_Issues_With_C%2B%2B

@kobalicek

@kzc Sorry, but I think that some C++ magic would actually help to create something that can be stable and useful in the long term. Interacting with VMs that use concurrent garbage collection isn't easy, and C++ really helps here. Even SpiderMonkey moved from C to C++, because it makes interaction with the VM easier. If I take into consideration that V8 and SM already provide C++ APIs, then proposing something C-only doesn't make much sense here. That's my 2 cents :)

@kzc

kzc commented Apr 11, 2016

@kobalicek My point was having a stable ABI, whether it is C or C++: one that remains backwards compatible without the need to recompile native modules for new Node releases, even as new engines are supported. Compile-time solutions are not as useful or convenient for users.

@mhdawson
Member

Sorry for causing some confusion. I should have made it clear that the API work I mentioned is focused initially on modules. It was discussed that it might become part of the solution for the VM-layer integration, but that remains to be seen.

I do believe that native modules need to be usable, without recompilation, with different Node binaries. Similarly, although I acknowledge there will be challenges, I also think we should target being able to use different engines with the same Node binary without recompilation. This has some desirable characteristics and will help ensure that engine internals don't leak out.

@kobalicek

@mhdawson I just wonder: what is Node going to expose if you plan to make it ABI compatible?

@bobmcwhirter

FWIW, progress on Nodyn stalled because of the lack of VM neutrality.

Should this become a reality, Nodyn may certainly be re-invested in.

@lance
Member

lance commented Jun 15, 2016

@ariya et al. I am the creator of Nodyn, and just stumbled on this issue recently. There were a lot of reasons I (and Red Hat) stopped development on Nodyn. As @bobmcwhirter noted, a lack of VM neutrality was one. Another was the fact that we were just winging it: working with no documentation other than the Node.js source code.

I've now seen https://github.com/nodejs/vm and will follow along.

@trevnorris

Another option that I've investigated and found promising is creating a lower level JS API that the existing API can sit on top of. Then the binding point for the VM is on the JS layer, which reduces the native API problem to the public API.

+1 👍

@DemiMarie

One approach is to ditch native modules entirely and replace them with an FFI. However, there is a catch: an FFI would need to be implemented in each VM and integrated with the VM's JIT and GC in order to have good performance. Furthermore, an FFI is almost useless for browsers (lack of isolation in JS ensures that this would lead to security problems).

The big advantages of a C API (vs. C++) are:

  • Much easier to have a stable ABI. In C++, this basically requires that you expose what is essentially a C API with a little bit of syntactic sugar (constructors, destructors, overloaded operators) on top of it. In that case, one might as well expose a C API and write the C++ API on top of it.
  • Much easier to write native modules in other languages. Rust, in particular, could dramatically reduce the risk of native-module related security vulnerabilities. I am sure that there are other languages in which people want to write modules (instead of C/C++).

@mhdawson
Member

@DemiMarie you may want to check out the latest API working group meeting. I've not written up the notes, but the recording is available and a link to the raw notes is in the meeting issue as well:

nodejs/api#22

I'm suggesting this as there is work on the stable ABI (well just for modules at this point). We are heading down the path of a C API with C++ sugar on top.

@ianwjhalliday

@DemiMarie agreed. FFI does have its interesting merits, but I think mostly as a convenience to the end users, and I don't know if the effort is worth the benefit. @ofrobots is experimenting with this idea. I am interested to see what he comes up with.

@jinderek

@kobalicek I think the overhead may not be avoidable right now. The VMs are already mature and different; we have to make some sacrifices in the implementation.

@kobalicek

kobalicek commented Jul 29, 2016

@mhdawson So you wrap the C++ API in a C API and then create a C++ layer to provide sugar on top of the C API? To me this seems like a step backward, sorry.

@Xuyv Talking about overhead without having measured it is premature. If you wrap every interaction with V8 that is now inlined into an external function call, the overhead can be 5x or even 10x, which doesn't seem like a good idea, especially if you consider that Node.js is a high-performance environment.

BTW @mhdawson, I think you are doing it the opposite way: if you decide to change the API, you should start with Node internals first and then propagate those changes to Node modules, rather than exposing the new thing (which will be highly untested and unstable at the very beginning) to all the modules that depend on V8 now while keeping the old API internally. Another reason is that the new API should really be able to do everything the old API does.

I just hope I will be able to keep using V8 in the future and keep my modules high-performance; others are welcome to use the new API :)

@mhdawson
Member

mhdawson commented Aug 2, 2016

@kobalicek the challenge is that it may just not be possible to maintain ABI stability while using C++. Ian found this document which covers some of the issues: http://www.oracle.com/technetwork/articles/servers-storage-dev/stablecplusplusabi-333927.html.

The net seems to be that the stable ABI would be in C, but C++ wrappers that are fully inlined and only use the stable ABI are possible, because the C++ would be built into the module; the module could then continue to work provided the C exports remain ABI stable.
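
To illustrate that layering, here is a hedged sketch with made-up names (this is not a real Node header or API): only the C symbols below would form the stable ABI, while the C++ sugar is header-only and gets compiled into each module.

#include <cstddef>
#include <string>

// Stable ABI surface: plain C symbols and opaque handles only.
extern "C" {
typedef struct neutral_env__*   neutral_env;    // opaque per-instance state
typedef struct neutral_value__* neutral_value;  // opaque JS value handle

// Returns 0 on success, a non-zero error code otherwise.
int neutral_create_string(neutral_env env, const char* utf8,
                          std::size_t length, neutral_value* result);
}

// Header-only C++ sugar. Because it is fully inlined into the module and only
// calls the C exports above, an old module binary keeps working as long as
// those exports stay ABI stable, even if the engine underneath changes.
namespace sugar {

inline neutral_value MakeString(neutral_env env, const std::string& s) {
  neutral_value out = nullptr;
  // A real wrapper would translate a non-zero status into an exception or a
  // maybe/expected type; error handling is elided in this sketch.
  neutral_create_string(env, s.data(), s.size(), &out);
  return out;
}

}  // namespace sugar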

Discussion about whether to start with internal use or module use was extensive at the VM summit, with people landing on starting with modules first. The view was that the ABI-stable API could address 90-95% of native modules, while those with specific requirements could continue to use the V8 APIs directly.

We are at the point of starting to assess the performance overhead in these issues (still early days though so take them with a grain of salt):

So that is definitely part of the analysis.

@kobalicek

kobalicek commented Aug 2, 2016

If we will be allowed to use V8 directly, then I have no problem with that. I personally consider wrapping this in a C API (with a lot of external function calls) a significant overhead that I just can't accept. I have already tried to make a C++ wrapper around the V8 API in a way that hides everything; my attempt is available here: https://github.com/kobalicek/njs

The thing is, from my own experience, that wrapping VM APIs at a low level is much more complicated than creating a higher-level interface, because the low-level APIs are what differ the most. For example, compare how you wrap classes in V8 and in SpiderMonkey: both engines provide a completely different way of doing it.

The way I use NJS looks like this (sorry for the somewhat long code; it shows only one simple class):

// C++ header (allows other modules to use that in other C++ code).
struct JSLinearGradient : public JSGradient {
  NJS_INHERIT_CLASS(JSLinearGradient, JSGradient, "LinearGradient")

  NJS_INLINE JSLinearGradient(double x0, double y0, double x1, double y1) NJS_NOEXCEPT
    : JSGradient(b2d::Gradient::kTypeLinear, x0, y0, x1, y1) {}
};

// C++ source (implementation).
NJS_BIND_CLASS(JSLinearGradient) {
  NJS_BIND_CONSTRUCTOR() {
    unsigned int argc = ctx.ArgumentsCount();

    double x0, y0, x1, y1;
    if (argc == 0) {
      x0 = y0 = x1 = y1 = 0.0;
    }
    else if (argc == 4) {
      NJS_CHECK(ctx.UnpackArgument(0, x0));
      NJS_CHECK(ctx.UnpackArgument(1, y0));
      NJS_CHECK(ctx.UnpackArgument(2, x1));
      NJS_CHECK(ctx.UnpackArgument(3, y1));
    }
    else {
      return ctx.InvalidArgumentsCount();
    }

    JSLinearGradient* self = new(std::nothrow) JSLinearGradient(x0, y0, x1, y1);
    ctx.Wrap(ctx.This(), self);
    return ctx.Return(ctx.This());
  }

  NJS_BIND_GET(x0) {
    return ctx.Return(self->_obj.getValue(b2d::Gradient::kScalarIdLinearX0));
  }

  NJS_BIND_SET(x0) {
    double x0;
    NJS_CHECK(ctx.UnpackValue(x0));
    self->_obj.setValue(b2d::Gradient::kScalarIdLinearX0, x0);
    return njs::kResultOk;
  }

  NJS_BIND_GET(y0) {
    return ctx.Return(self->_obj.getValue(b2d::Gradient::kScalarIdLinearY0));
  }

  NJS_BIND_SET(y0) {
    double y0;
    NJS_CHECK(ctx.UnpackValue(y0));
    self->_obj.setValue(b2d::Gradient::kScalarIdLinearY0, y0);
    return njs::kResultOk;
  }

  NJS_BIND_GET(x1) {
    return ctx.Return(self->_obj.getValue(b2d::Gradient::kScalarIdLinearX1));
  }

  NJS_BIND_SET(x1) {
    double x1;
    NJS_CHECK(ctx.UnpackValue(x1));
    self->_obj.setValue(b2d::Gradient::kScalarIdLinearX1, x1);
    return njs::kResultOk;
  }

  NJS_BIND_GET(y1) {
    return ctx.Return(self->_obj.getValue(b2d::Gradient::kScalarIdLinearY1));
  }

  NJS_BIND_SET(y1) {
    double y1;
    NJS_CHECK(ctx.UnpackValue(y1));
    self->_obj.setValue(b2d::Gradient::kScalarIdLinearY1, y1);
    return njs::kResultOk;
  }
};

The code itself has zero overhead as it doesn't have to call external functions; everything possible is inlined, and everything that would expand is marked NJS_NOINLINE so it expands only once in the binary. Maybe I can discuss my approach with some Node.js devs somewhere? Not sure if this is the right thread.

@DemiMarie

DemiMarie commented Aug 3, 2016 via email

@bnoordhuis
Member

The code itself has zero overhead as it doesn't have to call external functions; everything possible is inlined, and everything that would expand is marked NJS_NOINLINE so it expands only once in the binary.

That sounds like the approach that nan takes, but that is not good enough for what we're discussing here, because it only maintains source compatibility, not binary compatibility (API vs. ABI).

@kobalicek

It's not exactly what nan does: nan builds on top of V8, while NJS defines an interface which is then provided by a V8 integration layer. But yes, it's not an ABI; it's a source-level compatibility layer.

@DemiMarie

A rather wild option:

What about using libclang and LLVM to JIT compile C++ add-ons at run-time?

@ianwjhalliday

Hmm, no. That wouldn't help with the problem of API/ABI breakage with different versions of Node/V8. I believe that would be more or less equivalent to the status quo, albeit deferring compilation to runtime, which would impact performance. It may also have issues on Windows; I'm not sure how good libclang's MSVC compatibility is.

@DemiMarie

It would solve the ABI problem by requiring that source always be available. The compiled code could (and should) be put in a persistent cache, so the performance penalty could be minimized.

But this is just working around the problem. I once believed that a standard FFI integrated into each engine's JIT was the solution. But it is a LOT of work that has basically no use in browsers for obvious security reasons, since JS lacks adequate encapsulation to allow writing fully safe wrappers, nor does it have in-realm access control. An FFI works great for languages like Rust, OCaml, D, C#, and Haskell that enforce encapsulation, allowing wrappers to check arguments. Doing the same in JS is nearly impossible, since JS has too much reflection.

Another issue is that any FFI will allow crashing the VM or, worse, C-style memory-unsafety security vulnerabilities.

If we do use an FFI exclusively, there will need to be ways for library authors to provide safe wrappers with little performance penalty.


@valera-rozuvan

A year has passed without a comment. Has this been discussed elsewhere? Will there be a common ABI for a VM to integrate itself with Node?

@williamkapke
Contributor

Has this been discussed elsewhere?

Yup! Check out https://github.com/nodejs/vm

@Trott
Member

Trott commented May 6, 2022

Closing all issues in this archived repository.

@Trott closed this as completed on May 6, 2022.