Create container for development scripts

As discussed in the weekly meeting, we need a Docker container with the dependencies needed by our scripts pre-installed.

We can share the current working copy of core directly with the -v flag to provide the schema, proto files, etc., and get back the built Go files.

example usage:
docker run -v `pwd` mesg-gen ./scripts/
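A fuller invocation could look like the sketch below. The mount target /project, the -w working directory, and the wrapper function itself are assumptions for illustration, since mesg-gen is only a local image name from this thread. A DRY_RUN switch prints the command instead of running it, so the volume mapping is easy to inspect even without Docker installed.

```shell
# Hypothetical wrapper around the docker run call above.
# /project (mount target) and mesg-gen (image name) are assumptions.
gen() {
  cmd="docker run --rm -v $(pwd):/project -w /project mesg-gen $1"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"   # print the command instead of executing it
  else
    $cmd
  fi
}

DRY_RUN=1 gen ./scripts/
```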

Thanks for putting that here. @krhubert was talking about an image that just writes the result to the terminal; that way we can redirect it into whatever file we want, and the Docker image can be totally generic.

I guess it will look something like this:

cat fileToProcess | docker run -i mesg-asset >> pathToTheAssetFileWeWantToOutput
cat protoToProcess | docker run -i mesg-proto >> pathToTheProtoFileWeWantToGenerate

If we can do something like that it’s really nice, but otherwise sharing the volume is enough 🙂

The goal here is that everyone needs to have the exact same versions, so one way or the other will solve this problem.

Please check the new Makefile & Dockerfile-gen here.

You can run make build-gen to build the mesg-gen image and then run make build-proto to create the pb files with the container. I think this is a very simple way to solve the problem.
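For reference, a minimal sketch of what those two targets might look like; the image name, Dockerfile name, and proto paths are placeholders, not the actual Makefile from the repo:

```makefile
# Hypothetical sketch; paths and image name are placeholders.
build-gen:
	docker build -t mesg-gen -f Dockerfile-gen .

build-proto: build-gen
	docker run --rm -v $(PWD):/project -w /project mesg-gen \
		protoc --go_out=. ./types/service.proto
```

Making build-proto depend on build-gen keeps the image fresh, at the cost of a (cached) docker build on every generation run.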


@krhubert any feedback on this?

Yes, but unfortunately not every tool has an option to read/write from stdin/stdout.

For protoc there is an issue to allow this.

So reading from stdin and writing to stdout should be the first choice, mounting a dir otherwise.

@krhubert what are the disadvantages of mounting a dir with the -v flag?

If we really want to read from stdin we can create a bash script. I think this might work and save the Docker image's users some time, e.g.:

# - this file would be run from CMD in dockerfile
until [ -n "$DONE" ]; do
  read -r || DONE=true
  echo "$REPLY" >> /tmp/p.proto
done

# note: protoc has no --out flag; the Go plugin's flag is --go_out
protoc --proto_path=/tmp --go_out=/tmp /tmp/p.proto

cat /tmp/p.pb.go

And this trick allows us to use stdin/stdout.
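The same stdin → temp file → stdout pattern can be exercised without Docker or protoc. In the sketch below, tr stands in for the code generator purely for illustration, and cat replaces the manual read loop (which has the same effect):

```shell
# to_upper buffers stdin to a temp file, runs a "generator" over it,
# and emits the result on stdout; tr stands in for protoc here.
to_upper() {
  tmp=$(mktemp)
  cat > "$tmp"              # same effect as the read loop above
  tr 'a-z' 'A-Z' < "$tmp"   # placeholder for the real code generation
  rm -f "$tmp"
}

echo "message foo {}" | to_upper   # prints: MESSAGE FOO {}
```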

It’s ok to use the volume. The only “problem” with the volume is that it needs to be documented, so it’s not as self-explanatory, but we will probably have this in a script, so it’s totally fine for me.

will be implemented by


We should build the tools in separate images; here are the reasons:

  • we might have tools written in different languages (therefore we can’t use the Go image)
  • we might have tools that require a different Go version
  • we can control the version of Go and of the tool itself from the container build process

Right now we have one container, and some versions (Go) are defined in the Dockerfile while others are defined in the main repo's dep file. I think we should keep them together.

We only have 3 Go tools and the protoc binary as dependencies at the moment.

I think, for now, we don’t need to complicate things for a possible versioning problem.
I don’t think we’ll have tens of dependencies in the future, so to me a single Dockerfile is enough for now.
If we need to split them up later, we can always do it.

We can still use the base Go image even if some dependencies are written in other languages. They’ll mostly be binaries anyway, and if we run into problems we can always split the containers.

I think most Go tools will be compatible with the latest version of Go, so we should be fine on that one.

Yes, there are versions defined in both the Dockerfile-dev and the Gopkg.toml file. Gopkg.toml only keeps the versions of the Go tools; Dockerfile-dev keeps the versions of the container itself and of the protoc dependency. I agree that protoc and the protoc-gen-doc Go tool being defined/downloaded in separate places looks a bit ambiguous, but it should be fine as long as we don’t have tens of them. It’s also possible to version the Go tools inside the Dockerfile too, but that would require more lines of code, and I didn’t want to complicate things for now because we only have a few dependencies.
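If we ever do want all the versions in one place, pinning inside the Dockerfile could look roughly like this; the base image, version numbers, and URL are placeholders, not what the repo actually uses:

```dockerfile
# Hypothetical sketch; versions and URLs are placeholders.
FROM golang:1.10

# Pin protoc explicitly instead of taking whatever apt ships
ENV PROTOC_VERSION=3.5.1
RUN apt-get update && apt-get install -y --no-install-recommends unzip \
 && curl -sSL -o /tmp/protoc.zip \
      https://github.com/google/protobuf/releases/download/v$PROTOC_VERSION/protoc-$PROTOC_VERSION-linux-x86_64.zip \
 && unzip /tmp/protoc.zip -d /usr/local bin/protoc

# Pin the Go tool to a tag instead of relying on Gopkg.toml
RUN go get -d github.com/pseudomuto/protoc-gen-doc/... \
 && cd $GOPATH/src/github.com/pseudomuto/protoc-gen-doc \
 && git checkout v1.0.0 \
 && go install ./cmd/protoc-gen-doc
```

The trade-off is exactly the one mentioned above: more lines in the Dockerfile in exchange for a single source of truth for versions.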

I agree that having a Docker container for each service is ideal, but for now I think having one that aggregates them all is enough, and we can always improve it later on.
