Make a websocket server to expose the APIs



From @Anthony

Right now the API uses gRPC with streams, and streams cannot be consumed directly from a client-side application. gRPC plans to support that, but only in a few months.

The idea is to have a websocket API that acts as a proxy in front of the gRPC API and lets users access the gRPC API even from a client-side application.

With that we could easily create a website that lets you deploy your service, test it, and even build apps directly on a website alone (with the Core running in the background).

some ideas for implementation
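One possible shape for such a proxy, as a sketch: the websocket side carries small framed messages that the proxy maps onto gRPC calls. Everything here — the frame fields, the method path, the handler table — is a hypothetical illustration, not an existing MESG API.

```javascript
// Sketch of a minimal framing scheme a websocket→gRPC proxy could use.
// All names (frame fields, method paths) are hypothetical.

// Encode a client request into a websocket text frame.
function encodeFrame(id, method, payload) {
  return JSON.stringify({ id, method, payload });
}

// Decode a frame back into { id, method, payload }.
function decodeFrame(text) {
  const { id, method, payload } = JSON.parse(text);
  if (typeof id !== 'number' || typeof method !== 'string') {
    throw new Error('malformed frame');
  }
  return { id, method, payload };
}

// The proxy would look the method up in a table of gRPC client calls.
// Here the table is stubbed with a plain function for illustration.
const handlers = {
  '/api.Core/ListServices': payload => ({ services: [], filter: payload }),
};

function dispatch(text) {
  const frame = decodeFrame(text);
  const handler = handlers[frame.method];
  if (!handler) throw new Error('unknown method: ' + frame.method);
  return { id: frame.id, result: handler(frame.payload) };
}
```

The `id` field lets the client correlate replies (and stream messages) with the request that opened them, which is the main thing a websocket proxy has to add on top of gRPC.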

Reply from @ilgooz

I watched a video earlier about Envoy. They call it a special proxy gateway sitting between a gRPC server and grpc-web. The video gives a general technical overview of Envoy. Got to check it out.


I think we don’t need a built-in proxy in Core. Why not just use Envoy as a proxy, as recommended in the gRPC-Web docs? That way we don’t need to maintain any kind of websocket proxy inside Core. People who want to access Core’s gRPC APIs could just use Envoy, and I think this option supports bidirectional streams as well.

We can also publish a special Dockerfile for Envoy that is configured for Core. And we can make mesg-js compatible with browsers, so that devs can use this pre-generated gRPC client for coreapi.


The best would be to create a system service that contains Envoy :slight_smile:


Clarification about Envoy proxy & supporting websocket

gRPC is originally a TCP-based communication protocol, but browsers don’t allow creating raw TCP connections to servers. Because of this limitation, the folks at gRPC created an official spec for supporting web clients through websockets.

In the future, all gRPC servers should accept both TCP and websocket connections from clients. It’s also possible to access a gRPC server from the web by using proxies like Envoy. Envoy is able to proxy websocket/HTTP-based gRPC requests to a gRPC server that only accepts TCP connections.

So, we have two valid options here. For now, we can just use Envoy to support web clients, and in the future we can use a simple package to support websocket connections next to TCP connections in the Core. This package is very simple to use and doesn’t add any complexity to our code. I haven’t checked the code yet, but it’s some kind of wrapper around net connections that upgrades them to a websocket connection or keeps them as TCP depending on the request style. Anyway, we can adopt it at any time, no need to rush this.
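The switching such a wrapper does boils down to inspecting the incoming request: a websocket client starts with the standard HTTP upgrade handshake (RFC 6455), while a plain gRPC client does not. A sketch of that check (header names are the standard handshake fields; the function itself is illustrative, not taken from any particular package):

```javascript
// Decide from an incoming HTTP request's headers whether the client is
// asking for a websocket upgrade or should keep a plain connection.
// Per the websocket handshake (RFC 6455), the client sends
// "Connection: Upgrade" and "Upgrade: websocket".
function isWebSocketUpgrade(headers) {
  // Node.js lowercases header names; values are case-insensitive.
  const connection = (headers['connection'] || '').toLowerCase();
  const upgrade = (headers['upgrade'] || '').toLowerCase();
  // "Connection" is a comma-separated list of tokens.
  const wantsUpgrade = connection
    .split(',')
    .map(token => token.trim())
    .includes('upgrade');
  return wantsUpgrade && upgrade === 'websocket';
}
```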


Difference between grpc/grpc-web & improbable-eng/grpc-web

They both exist to allow websocket clients to make gRPC requests to servers/proxies that accept websocket connections.

They both have the same public APIs, but in addition, improbable-eng/grpc-web also supports bidi streams where the official one doesn’t. The gRPC team plans to support this in the future, and their implementation will have the same public APIs as improbable-eng’s version.

Also, improbable-eng provides a plugin to output TS types as well when generating the client code.

So, it’s totally OK to use improbable-eng/grpc-web in order to get bidi streams and TS types.


Supporting both Server and Browser in mesg-js

Please read first:
That comment on the issue explains how we can provide our gRPC APIs to consumers while supporting both browsers and Node.js.

This is related to the require(‘mesg-js’).application().api API. This object exposes the Core gRPC APIs. It currently only supports Node.js, but it should support browsers as well.

We first thought about separating the packages for browser and server, so that we don’t include extra code generated for Node while mesg-js is being used in a browser environment.

But on second thought, I think separating them wouldn’t be good, because that way it’s not possible to support universal SPAs natively. And if we separate them, consumers have to do some magic in their webpack configs to switch between the browser and Node.js versions of mesg-js. This shouldn’t be needed. It’s always nice to provide a single mesg-js that can run in browsers and Node.js without any effort. It should support UMD, so that it can be used with script tags in browsers or imported as a package in both browser and Node environments easily.

In the future we can think about optimizations for removing Node-related code when mesg-js runs in a browser, but it’s too early to think about that now. And the extra code is just a few hundred lines, so it’s OK to keep it.

We can make the extra code removable when the webpack target is web, or detect browsers via env vars by checking process.env.NODE_ENV. This way we can give our consumers an option to reduce their mesg-js file size in the browser.
Or we can make it possible to remove Node-related code with some webpack plugins and put this info into the docs. Also see.
Or we can provide separate builds: one (mesg-js) that supports both client and server, and another (mesg-js-web.js) that supports only the web, etc…
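The detection itself is tiny; the point is that when a bundler substitutes a constant for the environment check at build time, the unused branch becomes dead code and can be stripped by the minifier. A sketch (the transport names are placeholders):

```javascript
// Detect the runtime environment. When a bundler replaces the check
// with a constant at build time (e.g. because the target is "web"),
// the unused branch is dead code and gets stripped by the minifier.
const isBrowser =
  typeof window !== 'undefined' && typeof window.document !== 'undefined';

function getTransport() {
  if (isBrowser) {
    return 'grpc-web'; // browser build: talk to Envoy / websocket proxy
  }
  return 'grpc';       // Node build: plain TCP gRPC
}
```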

I propose to make require(‘mesg-js’).application().api support both browsers and Node.js for now, without any optimizations in mind. And let’s not touch require(‘mesg-js’).service().api, because the service API is only planned to run in a server environment for now, not browsers. We can change this in the future.
Note that this is a breaking change, because we’re also going to use plugins to generate clients and types, which introduces setters/getters for XRequest, XReply types. We used to set values directly on mutable fields, but this is no longer an option.
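In consumer code the change looks roughly like this — the `ListServicesRequest` class below only simulates what setter/getter-style codegen emits; the type name and its field are hypothetical:

```javascript
// Simulation of a generated message class with setters/getters, the
// style emitted by grpc-web codegen. "ListServicesRequest" and its
// "filter" field are hypothetical, for illustration only.
class ListServicesRequest {
  constructor() { this._filter = ''; }
  setFilter(value) { this._filter = value; return this; }
  getFilter() { return this._filter; }
}

// Before (plain mutable object) — no longer an option with generated classes:
//   const req = { filter: 'online' };
// After (generated setters/getters):
const req = new ListServicesRequest();
req.setFilter('online');
```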

  • Update current APIs to support both browsers and Node.js, without any file size optimizations in mind, and generate client code automatically.
  • Think about file size optimizations, avoid the breaking change that setters/getters would introduce, and manually create client code (only types).

0 voters


Great research @ilgooz!

I really like the concept of having a universal lib that works in both browser and Node with the same API.

We can make a breaking change because we are already removing the when/then logic of the application lib.

Even if I really like the universal lib, I would like to do it step by step.

  • Firstly, we focus only on the Node.js version with the generated TS file (so the new API). Once done, we release it as a new major version on NPM. I would like to release this new version and update the docs by Friday, January 4th at the latest.
  • Secondly, we then have time to make the Core compatible with gRPC-Web and upgrade the lib to make it compatible with browsers as a universal lib. Having more time will also allow us to build a better universal lib, and to release the feature with new docs, Medium articles, and a big public announcement!

What do you think guys?


Yeah, I agree! Let’s go step by step.