Network implementation with CosmosSDK


Decentralization of the different resources managed by the Engine:

  • Services (to create this marketplace of services in our own network)
  • Instances (to be able to delegate execution to specific nodes)
  • Executions (to process executions on the network)

In order to distribute these resources, we need to make sure that every change of state is broadcast on the network and verified by each node before being applied.

For that, we will use CosmosSDK & Tendermint, which already implement this distributed state machine along with a lot of really useful machinery.


Details about CosmosSDK App concept, Architecture, Design

We will use what Cosmos provides:

  • Keeper/Store
  • Messages
  • Handlers
  • Querier


Keeper/Store

The place to store the data. We currently have everything in the database package; this package will disappear and each resource package (service, instance, and execution) will manage storage based on these keepers.

Keepers expose getters and setters for the data. We should only read/write data through these keepers.
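As a rough sketch of the pattern (using illustrative names and a stand-in Store interface instead of the real cosmos types), a keeper could look like:

```go
package main

import "fmt"

// Store is a minimal stand-in for Cosmos's KVStore interface.
type Store interface {
	Get(key []byte) []byte
	Set(key, value []byte)
	Has(key []byte) bool
}

// memStore is an in-memory Store used for illustration only.
type memStore struct{ data map[string][]byte }

func newMemStore() *memStore { return &memStore{data: map[string][]byte{}} }

func (s *memStore) Get(key []byte) []byte { return s.data[string(key)] }
func (s *memStore) Set(key, value []byte) { s.data[string(key)] = value }
func (s *memStore) Has(key []byte) bool   { _, ok := s.data[string(key)]; return ok }

// ServiceKeeper is the only place that reads/writes service data.
type ServiceKeeper struct{ store Store }

// SetService writes the (already encoded) service under its hash.
func (k ServiceKeeper) SetService(hash, encoded []byte) { k.store.Set(hash, encoded) }

// GetService reads the encoded service back by hash.
func (k ServiceKeeper) GetService(hash []byte) []byte { return k.store.Get(hash) }

func main() {
	k := ServiceKeeper{store: newMemStore()}
	k.SetService([]byte("hash1"), []byte("service-data"))
	fmt.Println(string(k.GetService([]byte("hash1"))))
}
```

The rest of the codebase never touches the store directly; it goes through the keeper's getters and setters.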


Messages

Types of messages that can trigger actions on the data. These are the messages that transit on the network (or within the local instance). A message validates its own basic data (light validation).

Messages are handled directly by Tendermint and propagated automatically (magic)
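A minimal sketch of such a message with its light validation, assuming an illustrative MsgCreateService (the real cosmos types.Msg interface also requires routing and signer methods, omitted here):

```go
package main

import (
	"errors"
	"fmt"
)

// MsgCreateService is an illustrative message that triggers a state change.
// The real cosmos-sdk Msg interface also requires Route, Type, GetSignBytes
// and GetSigners; only the light validation is sketched here.
type MsgCreateService struct {
	Owner string // address of the node deploying the service
	Hash  string // hash of the service definition
}

// ValidateBasic performs the stateless "light validation" described above:
// it checks the message is well formed, without touching the store.
func (m MsgCreateService) ValidateBasic() error {
	if m.Owner == "" {
		return errors.New("missing owner")
	}
	if m.Hash == "" {
		return errors.New("missing service hash")
	}
	return nil
}

func main() {
	valid := MsgCreateService{Owner: "node1", Hash: "abc"}
	empty := MsgCreateService{}
	fmt.Println(valid.ValidateBasic())
	fmt.Println(empty.ValidateBasic())
}
```

Anything that needs the current state (e.g. "does this service already exist?") belongs in the handler, not here.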


Handlers

Handlers are the actions that update the Keeper based on the Message received. They contain most of the logic and could be delegated to the sdk package.

Handlers are called either directly from the sdk or through the routing defined by the application.
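A simplified sketch of the handler pattern (illustrative names; real cosmos handlers receive a Context and return a Result):

```go
package main

import (
	"errors"
	"fmt"
)

// MsgCreateService is the illustrative message handled below.
type MsgCreateService struct {
	Owner string
	Hash  string
}

// Keeper is a minimal stand-in holding services by hash.
type Keeper struct{ services map[string]string }

// NewHandler returns the function that applies messages to the keeper,
// dispatching on the message type, mirroring cosmos's handler pattern.
func NewHandler(k Keeper) func(msg interface{}) error {
	return func(msg interface{}) error {
		switch msg := msg.(type) {
		case MsgCreateService:
			k.services[msg.Hash] = msg.Owner
			return nil
		default:
			return errors.New("unrecognized message type")
		}
	}
}

func main() {
	k := Keeper{services: map[string]string{}}
	handle := NewHandler(k)
	fmt.Println(handle(MsgCreateService{Owner: "node1", Hash: "abc"}))
	fmt.Println(k.services["abc"])
}
```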


Querier

Not sure exactly how this is useful in our case, but it allows reading the data. We should probably only read data through a querier and never read directly from the keeper.
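A simplified sketch of the querier pattern (illustrative names; real cosmos queriers receive a Context and an abci request):

```go
package main

import (
	"errors"
	"fmt"
)

// Keeper is a minimal stand-in; the querier is the only reader on top of it.
type Keeper struct{ services map[string]string }

// NewQuerier routes read requests by path, mirroring cosmos's querier
// pattern in a simplified form.
func NewQuerier(k Keeper) func(path []string) ([]byte, error) {
	return func(path []string) ([]byte, error) {
		switch path[0] {
		case "get":
			owner, ok := k.services[path[1]]
			if !ok {
				return nil, errors.New("service not found")
			}
			return []byte(owner), nil
		default:
			return nil, errors.New("unknown query path")
		}
	}
}

func main() {
	k := Keeper{services: map[string]string{"abc": "node1"}}
	query := NewQuerier(k)
	res, err := query([]string{"get", "abc"})
	fmt.Println(string(res), err)
}
```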

We can implement all these objects directly in the resource packages (service, instance, and execution), each with the same layout:

   - type.go
   - keeper.go # implement the keeper
   - msgs.go # implement the messages
   - handler.go # implement the handlers
   - querier.go # implement the queries
   - codec.go # needed codec to save the data

Few notes from development:

  • the docs for app.go are rather poor (cosmos-sdk-tutorial) and there is a lot going on there

  • I would rather not go with the proposed file structure. The sdk should be kept as a separate package and only save the instance/service/execution objects in a single database (each db as a kvstore put in an app struct like here

  • the rest package can show us how to expose a protobuf api for communication. I have to implement BaseReq for the protobuf. It is used in every request

  • how do we solve the creation of owner accounts for executions, services, and instances?
    do we want some api to create them? How do we handle tokens for them, or is there no token at all?

  • How do we handle the update of an execution: suppose service A creates an execution and is its owner, then service B wants to put an output for the execution, but B is not the owner. I need to figure out this scenario.

So I will continue with the creation of a very simple example (no cli, no rest api, just the protobuf api) and simplify app.go to have only one store for executions (and just one method to create an execution)

Could we start with no token and no owner of account? A pure trusted environment?

Yes let’s not worry about token yet, for the creation of resources:

  • Service: the node that deploys the service is the owner of it, other nodes cannot edit it
  • Instance: same as service
  • Execution: shared edition; one node creates the execution, and the rest of the nodes can add their signature to the list of emitters (the execution will be executed when enough emitters have observed the event and created/updated the same execution)
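The shared-edition model for executions could be sketched like this (all names and the quorum threshold are illustrative, not values defined by the Engine):

```go
package main

import "fmt"

// Execution sketches the shared-edition model: any node can append itself
// to the emitters list; the execution runs once enough emitters agree.
type Execution struct {
	Hash     string
	Emitters map[string]bool // set of node addresses that observed the event
}

// AddEmitter records that a node observed the event; appending is idempotent
// and existing entries are never modified.
func (e *Execution) AddEmitter(node string) { e.Emitters[node] = true }

// Ready reports whether enough emitters observed the event (the quorum
// value here is purely illustrative).
func (e *Execution) Ready(quorum int) bool { return len(e.Emitters) >= quorum }

func main() {
	e := &Execution{Hash: "exec1", Emitters: map[string]bool{}}
	e.AddEmitter("nodeA")
	fmt.Println(e.Ready(2))
	e.AddEmitter("nodeB")
	fmt.Println(e.Ready(2))
}
```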

The proposed structure is not in the sdk but in the dedicated resource packages, /service, /instance, /execution, but I’m open to discussing that and improving it

let’s not worry about rest or even grpc api for now

That would be great. As I saw it, we can start without tokens, but we have to have accounts.

Yes, instances and services are rather easy; this is why I asked mainly about executions.

Have you seen an api somewhere for shared edition?

How do you want to test it then? How do you want to create/stop instances or update executions without a grpc api? Without an api, I won’t be able to test the code.

And last: if accounts must exist, then how do we manage them? (The accounts must also be handled to test the cosmos-sdk stories.)

This is just in the handler part and its validation: either only the owner can edit and we trigger an error if the sender is not the owner, or, in the case of the execution, there is no owner and the ones that already appended their data just cannot change it.

We already have these apis; I was thinking you wanted to add new apis. We keep the current apis and adapt them to use the store.

So this is the question: can you store an object (in our case an execution) in the kvstore without an owner?

Yes, I need to adapt them :slight_smile:

I’m pretty sure you can; the owner is just data like any other data. The ownership is controlled by the validation you are doing in the handlers and messages.
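A minimal sketch of that validation, with the owner stored as plain data and the handler enforcing ownership (all names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Service stores the owner as plain data, like any other field.
type Service struct {
	Hash  string
	Owner string
}

// MsgUpdateService carries the sender so the handler can validate ownership.
type MsgUpdateService struct {
	Sender string
	Hash   string
}

// handleUpdate sketches the validation described above: the store itself
// knows nothing about ownership; the handler enforces it.
func handleUpdate(services map[string]Service, msg MsgUpdateService) error {
	svc, ok := services[msg.Hash]
	if !ok {
		return errors.New("service not found")
	}
	if svc.Owner != msg.Sender {
		return errors.New("sender is not the owner")
	}
	// ... apply the update ...
	return nil
}

func main() {
	services := map[string]Service{"abc": {Hash: "abc", Owner: "node1"}}
	fmt.Println(handleUpdate(services, MsgUpdateService{Sender: "node1", Hash: "abc"}))
	fmt.Println(handleUpdate(services, MsgUpdateService{Sender: "node2", Hash: "abc"}))
}
```

For an execution with no owner, the same pattern applies with a different rule: the check becomes "the sender may append to the emitters list but may not modify entries appended by others".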

Step-by-step guide to implement Cosmos:

DEPRECATED VERSION. Scroll down to see new one.

  1. Database

    • Add a context param to each function. This param should use a new interface/struct that implements a getter to fetch the store. In the case of the database, it returns the goleveldb instance; in the case of the keeper, it returns the kvstore from cosmos.
    • Move the database interface to the sdk sub-package. Make it private because only the SDK uses the interface.
    • Add the (useless) context param to the current database struct to make it implement its corresponding interface
    • Create a keeper that respects the interface. Put it in the service package? @core what do you think?
  2. SDK

    • Create the interface in interface.go
    • Split the current sdk implementation into a Classic struct and a logic struct. The Classic implements the interface; the logic implements the logic functions. The first version of Classic will basically delegate everything to logic.
    • Create a Cosmos struct that implements the interface
      • Create new messages that inherit the ones already used by the gRPC API (eg: CreateServiceRequest).
        • the messages need to implement the types.Msg interface, so the inherited versions will implement it.
      • Add the functions NewHandler and NewQuerierHandler to start implementing the AppModule interface of cosmos. Those functions should use the logic struct and encode/decode the inputs and outputs if needed.
      • More functions need to be added in order to fully implement AppModule, but it can be done by a generic struct that makes the SDK compatible with Cosmos with ease (lots of the functions can return default values).
    • When the flag experimental is set, initialize the Cosmos struct instead of the Classic one.
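The context/store-getter idea from step 1 could be sketched like this (all names here are illustrative assumptions, not the real packages):

```go
package main

import "fmt"

// Store is the common read/write surface both backends would expose.
type Store interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

// StoreProvider is the "context" param discussed above: the database version
// returns the goleveldb-backed store, the keeper version returns the cosmos
// kvstore loaded from the current call's context.
type StoreProvider interface {
	Store() Store
}

// memStore and staticProvider stand in for a real backend.
type memStore struct{ data map[string][]byte }

func (s *memStore) Get(key []byte) []byte { return s.data[string(key)] }
func (s *memStore) Set(key, value []byte) { s.data[string(key)] = value }

type staticProvider struct{ s Store }

func (p staticProvider) Store() Store { return p.s }

// SetService shows a database function taking the provider param: it loads
// the store on every call, since a cosmos context can point at a different
// version of the database each time.
func SetService(p StoreProvider, hash, encoded []byte) {
	p.Store().Set(hash, encoded)
}

func main() {
	p := staticProvider{s: &memStore{data: map[string][]byte{}}}
	SetService(p, []byte("h"), []byte("v"))
	fmt.Println(string(p.Store().Get([]byte("h"))))
}
```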
  • why does the database need a context, and what kind of context interface are you writing about?

  • why do you want to move the database to the sdk sub-package? We can keep it separate as it is right now (in fact, we could move service, instance, and everything else to the sdk package because only the sdk uses them). I don’t understand your irresistible desire to move everything into the sdk package :stuck_out_tongue:

  • sdk -> Create interface in interface.go (an interface for what? :))

Split the current sdk implementation into a Classic and a logic struct. The Classic implements the interface; the logic implements the logic functions. The first version of Classic will basically delegate everything to logic.

I don’t understand this part of splitting classic and logic.

  • Create new messages that inherit the ones already used by the gRPC API (eg: CreateServiceRequest). The same topic as on Friday. I’m against passing CreateServiceRequest to the cosmos package because:
  1. it’s a different api and will have different messages (for example, it already contains the service owner). You want to unify it with grpc, and I think it should stay separated

  2. cosmos is a key-value database and it will store pbtypes.Service, so why does it need to know about more types like CreateServiceRequest?

  3. if cosmos will accept CreateServiceRequest, then the database interface should accept it as well?

  • there is no info about user management

it is required by cosmos in order to load the actual kvstore. If we want the sdk to work both with the cosmos kvstore and without it, then it needs to be there.
I need to check more, but from what I understand, the context is different at each call, so the store needs to be loaded every time. The context contains the height of the blockchain and can load a completely different version of the database.
We could pass the goleveldb database or the cosmos kvstore instead, using a common interface. But I worry that the context will be useful later to access more data about the current request. We could still change back to the context if it makes stuff easier now and we really need it.

i’m speaking only about the interface. It will remove the dependency between the sdk and the actual implementation of the database.

Interface of the current public API. When I’m speaking about the SDK, I’m speaking about the sub-sdk packages (Service, Instance, etc…).

The goal is to separate the logic of the package from the public “api” of the same package. Like this, we could create another public api (using a new struct) and still use the same logic functions.

That’s a good point. Maybe not all data will make sense on both messages.

The CreateServiceRequest is not the data to store in the database (state) but the action to change the database (state transition). Whatever messages we use, they will not be saved in the database; only the result of the modification of state will be saved in the database.
The SDK has the logic to apply the action, so it needs to know both the action and the modification of state it produces.

yes right. I didn’t put anything because it’s not a functionality that the Engine is currently doing.
We should create a new SDK for it and new gRPC APIs.


Are we going to change our sdk implementation now? If not, then why do we need an interface in that case? We don’t know exactly how this interface should look, but we can add it in the future.

As I see it, MsgSetService (same as MsgRemoveService, etc…) describes the action to be taken, so that action has the right data (pbtypes.Service) in order to transition the state. So MsgSetService is an actual action, because based on that message the cosmos app takes the action.


Also, what do you think about two database interfaces, just for readability?

// this interface is a general key-value database interface
type KVDatabase interface {
  Set(key, value []byte) error
  // Remove, List, etc...
}

// this interface will be used directly by the sdk, so the mapping from bytes
// to structs is done here and not directly in the sdk methods
type ServiceDatabase interface {
  SetService(pbtypes.Service) error
  // Remove, List, etc...
}

type InstanceDatabase interface {
  SetInstance(pbtypes.Instance) error
  // Remove, List, etc...
}

type Database interface {
  ServiceDatabase
  InstanceDatabase
  // etc...
}

It requires more code, and I’m wondering whether this is necessary or whether we should just use Marshal directly in the sdk


The above database interface doesn’t have to be an interface; it can be just a separate struct in the sdk package that does the marshal/unmarshal, saves, etc. So it will look like this:

type Database struct {
  serviceKV  KVDatabase
  instanceKV KVDatabase
  // etc...
}

func (db *Database) SetService(service pbtypes.Service) error {
  return db.serviceKV.Set(service.Hash, service.MarshalBinary())
}

// the same for list, remove, instance, execution...

The interface will be useful for being able to switch from the Classic to the Cosmos implementation of the sdk using the flag experimental. It’s also something we’ve needed for a while for unit tests :wink:

Classic is the current implementation of the sub-sdk packages that directly calls the database.
Cosmos will be the version using cosmos’s abci queries and transactions to create state transitions of the db.

I’m ok with the KVDatabase. ServiceDatabase and InstanceDatabase should keep the current API. Simple Get, List, etc…
Why do you want to create one Database struct that contains all the databases?

Some more suggestion:

  • The database or future keeper should have a lot more verification of the data that is saved in the db. In the Cosmos design, a keeper can access another keeper. I think we should do the same. This way, an sdk sub-package will NOT access another sdk sub-package, but the keeper of Instances can access the keeper of Services (for example). Thus, the keepers should be put in their attached data package (the Keeper of Services in service, etc…); this will not create a dependency cycle, as the data themselves don’t create a cycle.

To have only one database interface and rely only on the database object. It could be split as we have right now with InstanceDatabase, ExecutionDatabase, etc., but I don’t think we need the split for the sub-sdk packages.

// New creates a new SDK with given options.
func New(c container.Container, serviceDB database.ServiceDB, instanceDB database.InstanceDB, execDB database.ExecutionDB, workflowDB database.WorkflowDB, engineName, port string) *SDK {
  ps := pubsub.New(0)
  serviceSDK := servicesdk.New(c, serviceDB)
  instanceSDK := instancesdk.New(c, serviceSDK, instanceDB, engineName, port)
  workflowSDK := workflowsdk.New(instanceSDK, workflowDB)
  executionSDK := executionsdk.New(ps, serviceSDK, instanceSDK, workflowSDK, execDB)
  eventSDK := eventsdk.New(ps, serviceSDK, instanceSDK)
  // ...
}
So, for now, sdk instances are passed around only to access their getters (e.g. the instance sdk has the service sdk just to call a getter). This is why I think the database should be passed instead of the sdk.

If someone needs the database, it has to receive only a simple database as a dependency, not the whole sdk. It’s like with a SQL db: a package receives an open connection to the db to insert/select the data it needs, but it doesn’t mean the package can read/write anything it wants from the database.

With this approach, the New function should look more like:

// New creates a new SDK with given options.
func New(c container.Container, db database.Database, engineName, port string) *SDK {
  ps := pubsub.New(0)
  serviceSDK := servicesdk.New(c, db)
  instanceSDK := instancesdk.New(c, db, engineName, port)
  workflowSDK := workflowsdk.New(db)
  executionSDK := executionsdk.New(ps, db)
  eventSDK := eventsdk.New(ps, db)
  // ...
}

Step-by-step v2 guide to implement Cosmos

I will use service for this guide.


  • remove the ServiceDB interface
  • Transform LevelDBServiceDB into ServiceDB and change the dependency from leveldb.DB to store.Store
  • update the rest of the file and the associated tests accordingly
    • use the Has function to check if a key exists instead of leveldb.ErrNotFound
    • the store doesn’t implement transactions; it could be necessary to implement a similar system
  • update the rest of the codebase:
    • database.ServiceDB to *database.ServiceDB
    • update the initialisation of ServiceDB to use the goleveldb store:
    - serviceDB, err := database.NewServiceDB(filepath.path)
    + store, err := store.NewLevelDBStore(path)
    if err != nil {
    	return nil, err
    }
    + serviceDB := database.NewServiceDB(store)
    • update tests!
  • Create PR (eg:
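A minimal sketch of a ServiceDB wrapping a store with a Has check instead of relying on leveldb.ErrNotFound (the store signatures here are assumptions, not the real store.Store package):

```go
package main

import (
	"errors"
	"fmt"
)

// Store is a stand-in for the store.Store dependency mentioned above
// (illustrative signatures).
type Store interface {
	Get(key []byte) ([]byte, error)
	Put(key, value []byte) error
	Has(key []byte) (bool, error)
}

// memStore is an in-memory Store used for illustration.
type memStore struct{ data map[string][]byte }

func (s *memStore) Get(key []byte) ([]byte, error) { return s.data[string(key)], nil }
func (s *memStore) Put(key, value []byte) error    { s.data[string(key)] = value; return nil }
func (s *memStore) Has(key []byte) (bool, error)   { _, ok := s.data[string(key)]; return ok, nil }

// ServiceDB wraps a Store instead of depending on leveldb.DB directly.
type ServiceDB struct{ s Store }

func NewServiceDB(s Store) *ServiceDB { return &ServiceDB{s: s} }

// Get uses Has to detect missing keys instead of leveldb.ErrNotFound.
func (db *ServiceDB) Get(hash []byte) ([]byte, error) {
	ok, err := db.s.Has(hash)
	if err != nil {
		return nil, err
	}
	if !ok {
		return nil, errors.New("service not found")
	}
	return db.s.Get(hash)
}

func main() {
	db := NewServiceDB(&memStore{data: map[string][]byte{}})
	_, err := db.Get([]byte("missing"))
	fmt.Println(err)
}
```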

SDK step 1

  • Create an interface Service of the current public API in a new file type.go
  • Update the codebase to use the interface Service. Basically, remove the pointer on servicesdk.Service
  • Rename the file to deprecated.go
  • Rename the Service struct to Deprecated and the New function to NewDeprecated
  • Update the codebase to use the NewDeprecated function
  • Create PR (eg:

SDK step 2

  • Extract the logic functions to utils.go
  • Add a new implementation SDK that implements the interface but panics for now
  • Add a new struct module that implements the querier and handler functions and registers its module and store to cosmos
  • Init the new structs sdk and module in the New function of sdk/sdk.go
  • Update main.go for the new dependencies
  • Create PR (eg:

SDK step 3

TODO: Concrete implementation of tx, handler and query.

Problem to fix before implementing step 3

Should use the service proto struct for storing in the DB and in the cosmos transactions / queries
  • Problem with marshalling the current struct using Cosmos Amino
    • Amino is based on proto, so using a struct generated from proto solves the problem
    • Better performance to encode/decode in “proto” binary rather than JSON in the db
    • Can still encode in classic JSON for transactions and queries for human-readability
  • Need to find a way to keep using the struct tags for hash and validation
    • gogoproto has an extension to do it (set the golang tags in the proto files)
    • if the validation is set in the proto file, then it could be possible to have multi-language validation
  • No more mapping functions
  • One source of truth for the structure of data: the proto files
  • Need to move the generated proto files into the “resource” (eg: service) folder to be able to add the helper functions on the struct of the data.
    • This doesn’t necessarily have to be done this way, but it will be way nicer.
  • I would recommend using the proto struct as the data in the database and in the cosmos transactions and queries.

Solution: use proto only for transaction and query. Will use proto for db later on.

The public API of the SDK needs to receive the user’s account (account name + password)
  • Required to sign the transactions before broadcasting them
  • Could be a new struct passed as a parameter of the write functions (create, delete)
  • Should it be inside a context variable? I worry that the context variable could become a “garbage” of all the required data.

Solution: new struct Account passed as parameter in SDK function. Use gRPC metadata on the network.
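A minimal sketch of such an Account struct passed as a parameter to a write function (names are illustrative; the signing itself is elided):

```go
package main

import (
	"errors"
	"fmt"
)

// Account is the new struct passed to the SDK's write functions so the
// transaction can be signed before broadcasting (fields are illustrative).
type Account struct {
	Name     string
	Password string
}

// Validate is a minimal sanity check before using the account to sign.
func (a Account) Validate() error {
	if a.Name == "" || a.Password == "" {
		return errors.New("account name and password are required")
	}
	return nil
}

// CreateService sketches a write function taking the account as an explicit
// parameter rather than hiding it inside a context variable.
func CreateService(acc Account, definition []byte) error {
	if err := acc.Validate(); err != nil {
		return err
	}
	// ... sign the transaction with acc and broadcast it ...
	return nil
}

func main() {
	fmt.Println(CreateService(Account{Name: "alice", Password: "pass"}, []byte("svc")))
	fmt.Println(CreateService(Account{}, []byte("svc")))
}
```

Passing the struct explicitly keeps the write functions' dependencies visible, which is the concern raised above about a catch-all context variable.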

Should we use the gRPC API’s messages as Cosmos transactions and queries?
  • They should contain mostly the same data
  • Differences: user account management
    • In a Cosmos transaction, the address / public key and a signature will be used
    • In the gRPC API message, the account name and password should be provided (could be put in the header maybe? but not the best place, I think).
  • It would really make the gRPC API super close to the SDK api
  • We would be able to get rid of the custom gRPC server implementation in order to use a generic one (the sdk could register itself directly to the gRPC server)
  • but there would be no flexibility in the data “reserved” for the gRPC API versus the ones for the cosmos transactions
  • I would recommend not using the gRPC API messages for now, to have maximum flexibility. In a few weeks (maybe when the implementation without cosmos is deleted), we could reopen the question in order to refactor and reduce the codebase.

Solution: define cosmos transaction and query for now. let’s see later if a merge with gRPC message is possible.

Helpers on top of cosmos: