Instance DB

The goal of this feature is to split the Service DB into Service DB and Instance DB.

The Service DB will only store the service definitions. Its primary index is the hash of the service definition, calculated by the Engine, called the Service Hash.

The Instance DB will store the info about the actual running services (docker service / container IDs, network IDs, and also the Service Definition Hash). Its primary index is the hash of the service definition plus the custom env, calculated by the Engine, called the Instance Hash.

Note on custom env: the custom env is NOT stored in any DB; it is only used for the calculation of the Instance Hash AND injected into the docker service on start.

Let’s see the full start and stop processes:

Start (service hash, env) -> instance hash

This api is similar to the current start api, except that it will download and build the service, then create and save the container-related info in Instance DB.

The steps of this api are:

  • Fetch service definition from service hash in Service DB
  • Download the service source
  • Build the docker image
  • Merge custom user env
  • Calculate Instance Hash (based on the service hash and the custom env)
  • Check if the Instance Hash already exists. If not, then:
  • Save the Instance object in Instance DB
  • Start the Service docker services
  • Update Instance DB with the docker services / containers / networks IDs
  • Return Instance Hash
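Assuming both DBs can be modelled as simple key-value stores, the steps above can be sketched in Go. Everything here is illustrative (the hashing scheme, the stand-in maps, and all names are assumptions, not the Engine's actual code):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// Instance is the object saved in Instance DB.
type Instance struct {
	ServiceHash  string
	ContainerIDs []string
	NetworkIDs   []string
}

// Stand-ins for the two DBs.
var (
	serviceDB  = map[string]string{}    // service hash -> definition
	instanceDB = map[string]*Instance{} // instance hash -> instance
)

// instanceHash derives a deterministic hash from the service hash and
// the custom env; env is sorted so ordering does not change the result.
func instanceHash(serviceHash string, env []string) string {
	sorted := append([]string(nil), env...)
	sort.Strings(sorted)
	sum := sha256.Sum256([]byte(serviceHash + "|" + strings.Join(sorted, "|")))
	return hex.EncodeToString(sum[:])
}

// Start mirrors the steps above: fetch the definition, compute the
// instance hash, refuse duplicates, save the instance, then start the
// docker services and record their IDs.
func Start(serviceHash string, env []string) (string, error) {
	if _, ok := serviceDB[serviceHash]; !ok {
		return "", fmt.Errorf("service %s not found", serviceHash)
	}
	// (download the service source and build the docker image here)
	hash := instanceHash(serviceHash, env)
	if _, ok := instanceDB[hash]; ok {
		return "", fmt.Errorf("instance %s already exists", hash)
	}
	instanceDB[hash] = &Instance{ServiceHash: serviceHash}
	// (start the docker services, then update the instance with the
	// docker service / container / network IDs)
	return hash, nil
}
```

Note how the duplicate check happens after hashing but before anything is saved, so starting the same service with the same env twice is rejected deterministically.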

Stop (instance hash)

This is basically the same as the current implementation, except that it uses Instance DB.

  • Check if the Instance Hash already exists. If yes, then:
  • Stop the docker services
  • Wait for the docker containers to be removed / delete the docker containers
  • Delete the networks
  • Delete the instance from Instance DB
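The stop steps can be sketched the same way, with maps standing in for Instance DB and the docker engine (all names are hypothetical):

```go
package main

import "fmt"

// Instance as stored in Instance DB.
type Instance struct {
	ContainerIDs []string
	NetworkIDs   []string
}

// Stand-ins for Instance DB and the docker engine.
var (
	instanceDB        = map[string]*Instance{}
	removedContainers []string
	deletedNetworks   []string
)

func stopAndRemoveContainer(id string) { removedContainers = append(removedContainers, id) }
func deleteNetwork(id string)          { deletedNetworks = append(deletedNetworks, id) }

// Stop mirrors the steps above: bail out if the instance hash does not
// exist, otherwise tear down containers and networks, then delete the
// instance from Instance DB.
func Stop(instanceHash string) error {
	inst, ok := instanceDB[instanceHash]
	if !ok {
		return fmt.Errorf("instance %s does not exist", instanceHash)
	}
	// stop the docker services and wait for containers to be removed
	for _, id := range inst.ContainerIDs {
		stopAndRemoveContainer(id)
	}
	// delete the networks
	for _, id := range inst.NetworkIDs {
		deleteNetwork(id)
	}
	// finally delete the instance from Instance DB
	delete(instanceDB, instanceHash)
	return nil
}
```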

Delete (service hash)

The only difference with the current implementation is that this api has to return an error if any instance referencing the service hash exists. The user has to stop the instances first.
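A minimal sketch of this guard, with maps standing in for the two DBs (names are hypothetical):

```go
package main

import "fmt"

// Instance references the service it was started from.
type Instance struct{ ServiceHash string }

// Stand-ins for the two DBs.
var (
	serviceDB  = map[string]string{}
	instanceDB = map[string]Instance{}
)

// DeleteService refuses to delete a service while any instance still
// references its hash; the user has to stop those instances first.
func DeleteService(serviceHash string) error {
	for instanceHash, inst := range instanceDB {
		if inst.ServiceHash == serviceHash {
			return fmt.Errorf("instance %s still references service %s: stop it first",
				instanceHash, serviceHash)
		}
	}
	delete(serviceDB, serviceHash)
	return nil
}
```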


Now that Service DB only stores “static” data, it can be used to synchronise / publish Service Definitions across the Network and can replace the current Marketplace running on Ethereum.

Another very important modification: all APIs that require a Service ID to interact with a running Service (listenEvent, listenTask, executeTask, StopService) will now use the Instance Hash instead of the Service Hash. Be careful: this applies only to running services; for example, the API DeleteService deletes from Service DB and thus uses the Service Hash.


In orange, the steps that differ from the current implementation.

Start schema

graph TD
  1[Fetch service definition from service hash in Service DB] --> 2
  2[Download the service source] --> 3
  3[Build the docker image] --> 4
  4[Merge custom user env] --> 5
  5[Calculate Instance Hash] --> 6
  6{Check if Instance Hash already exists} -- if not --> 7
  6 -- if yes --> error
  7[Save the Instance object in Instance DB] --> 8
  8[Start the Service docker services] --> 9
  9[Update Instance DB with the docker services / containers / networks IDs] --> 10
  10[Return Instance Hash]
  error[return error]
  classDef diff fill:orange;
  class 2,3,5,6,7,9,10 diff;

Stop schema

graph TD
  1{Check if Instance Hash already exists} -- if yes --> 2
  1 -- if no --> error
  error
  2[Stop the docker services] --> 3
  3[Wait for the docker containers to be removed / delete the docker containers] --> 4
  4[Delete the networks] --> 5
  5[Delete the instance from Instance DB]
  classDef diff fill:orange;
  class 1,5 diff;

gRPC definition, server & sdk

As with Service compilation & deploy, this feature should not modify any existing gRPC definitions, server, or sdk functions, but rather create new ones to start with a fresh and clean mindset, even if code duplication is necessary :wink:

Edit #1

This new gRPC api should be created in /protobuf/api/instance.proto and contain the following protobuf definition:

syntax = "proto3";

package api;

service Instance {
  // proto3 rpcs must take and return message types, not bare strings,
  // so request/response wrapper messages are used here.
  rpc Create (CreateRequest) returns (CreateResponse) {}
  rpc Delete (DeleteRequest) returns (DeleteResponse) {}
}

message CreateRequest {
  string serviceHash = 1;
  repeated string env = 2;
}

message CreateResponse { string instanceHash = 1; }

message DeleteRequest { string instanceHash = 1; }

message DeleteResponse {}

This Instance api will only manage resources in Instance DB and follow a CRUD-like design.

Why? Shouldn’t Bob start the service just by sending the service hash? Why does core need to redeploy the service every time?

What are the pros of passing the SD to deploy? When did we decide to drop the deploy api?

Why can’t you run two instances of the same SD with the same custom env?

What if I want to delete all running instances of a given service, no matter what the custom envs are?

This is because of the future decentralized network and deterministic hash calculation.
Two nodes should create the same instance hash from the same service definition and the same env variables.

In this case, you will use a list instance api that returns the instance hash of each instance, and pass them to the stop instance api.
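A sketch of how the two apis chain together, with a map standing in for Instance DB (all names are hypothetical):

```go
package main

// Stand-in for Instance DB: instance hash -> service hash.
var instanceDB = map[string]string{}

// listInstances plays the role of the list instance api: it returns the
// instance hash of every instance referencing the given service hash.
func listInstances(serviceHash string) []string {
	var hashes []string
	for instanceHash, svc := range instanceDB {
		if svc == serviceHash {
			hashes = append(hashes, instanceHash)
		}
	}
	return hashes
}

// stopInstance plays the role of the stop instance api.
func stopInstance(instanceHash string) { delete(instanceDB, instanceHash) }

// StopAll deletes all running instances of a given service, no matter
// what their custom envs are, by chaining the two apis.
func StopAll(serviceHash string) {
	for _, h := range listInstances(serviceHash) {
		stopInstance(h)
	}
}
```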

Should we calculate the instance hash from env + service definition hash, or env + docker image hash? The second seems more precise to me, since the built image might actually contain different dependencies when their versions aren’t pinned by the dev in the Dockerfile.

env + docker image hash is not possible yet, as docker doesn’t generate deterministic image hashes.
This might be solved later on; once we can make sure that docker builds are deterministic, then yes, instance hash = hash(definition, env, docker build), but for now instance hash = hash(definition, env).

small correction:
service hash = hash(definition, source files)
instance hash = hash(service hash, env)
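The two formulas can be made concrete with a small sketch; sha256, the separator, and the env sorting are assumptions (the Engine may hash differently), but the shape of the computation is the one described above:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"sort"
	"strings"
)

// hash is a stand-in for the Engine's hashing function.
func hash(parts ...string) string {
	sum := sha256.Sum256([]byte(strings.Join(parts, "\x00")))
	return hex.EncodeToString(sum[:])
}

// serviceHash = hash(definition, source files)
func serviceHash(definition, sourceFiles string) string {
	return hash(definition, sourceFiles)
}

// instanceHash = hash(service hash, env); env is sorted first so that
// two nodes compute the same hash from the same inputs regardless of
// the order in which the env variables were passed.
func instanceHash(serviceHash string, env []string) string {
	sorted := append([]string(nil), env...)
	sort.Strings(sorted)
	return hash(append([]string{serviceHash}, sorted...)...)
}
```

Sorting the env before hashing is one way to satisfy the determinism requirement of the decentralized network: two nodes given the same definition and env always agree on the instance hash.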

As requested by @ilgooz, here is the explanation about service or instance status.

  • Service status: simply remove it. A service is not directly related to any docker container, so there is no logic in a service having a status.
  • Instance status: very similar to the current service’s status. It needs to query the docker api to get the status of each related docker container and aggregate them into one status (like it’s currently done for the service’s status).

So basically, service’s status is moved to instance’s status.
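The aggregation can be sketched as a pure function over container states; the exact rule here (Running only if every container runs, Stopped only if none does, Starting for anything in between) is an assumption modelled on the description above, not the Engine's actual logic:

```go
package main

// ContainerStatus is a simplified docker container state.
type ContainerStatus int

const (
	Stopped ContainerStatus = iota
	Starting
	Running
)

// InstanceStatus aggregates the statuses fetched from the docker api
// into one instance status: Running only if every related container
// runs, Stopped only if none does, Starting otherwise (partial or
// transitioning).
func InstanceStatus(containers []ContainerStatus) ContainerStatus {
	if len(containers) == 0 {
		return Stopped
	}
	allRunning, allStopped := true, true
	for _, s := range containers {
		if s != Running {
			allRunning = false
		}
		if s != Stopped {
			allStopped = false
		}
	}
	switch {
	case allRunning:
		return Running
	case allStopped:
		return Stopped
	default:
		return Starting
	}
}
```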