Thanks for this post, it’s really nice!
I like your syntax: it’s really close to the api we have right now, which is nice. I have one concern though. In the future we need to be able to chain executions. In my first proposal I was thinking of flattening all the executions and resolving their dependencies. I’m afraid that with your syntax it might be hard to chain executions.
It’s something we should think about. Maybe something like this:
```yaml
when:
  serviceX:
    event:
      eventX:
        execute:
          nameofexecution:
            serviceY: taskY
            result:
              resultY:
                map:
                  foo: $nameofexecution.resultY.outputY
                  bar: $event.dataX
                execute:
                  nameofexecution2:
                    serviceZ: taskZ
                    ...
```
With something like that we could even flatten all the executions and resolve them based on the data they need. This might be too much to implement for now, but I just want a syntax that we can easily migrate to support that.
All good for that. I would just remove the update part; let’s keep it simple for now: we delete and create a new one, like the services. We can add an id system on top of that later to mimic an update.
I would be careful to keep the naming consistent between the service and workflow actions, e.g. remove vs delete.
Multiple task executions
Totally necessary. It’s related to my first point, but I think here you are talking more about executing them all in parallel rather than chained, which is something we should cover too, though we will hit the same problems. The execution part can be extended, but the mapping might be totally different for each execution, and this is why I think we should group the mapping inside the execution part.
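To show why I’d group the mapping inside each execution, here is a rough sketch of what two parallel executions on the same event could look like. This is purely hypothetical syntax; the service/task names (`sendEmail`, `logEvent`, `serviceY`, `serviceZ`) are invented for illustration:

```yaml
when:
  serviceX:
    event:
      eventX:
        execute:
          # both executions are triggered in parallel by the same event,
          # each one carries its own mapping
          sendEmail:
            serviceY: taskY
            map:
              to: $event.emailAddress
          logEvent:
            serviceZ: taskZ
            map:
              payload: $event.dataX
```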
This one is tricky, especially if we want something simple. We definitely need a filter system. For me, filters should be based on all the data from the execution (data, tag, outputKey) but also on the data from the parent execution (in case of nested executions). For the kind of filters, at least equality is necessary and all the other comparison primitives would be ideal, but for now, as you propose, we can use services for that. Let’s just make sure the syntax leaves room to add filters later, and for now we can have special services for that.
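Just so we keep a spot for it in the syntax, a filter could sit next to the execution it guards. Again a purely hypothetical sketch; the filter keys and values are invented:

```yaml
when:
  serviceX:
    event:
      eventX:
        filter:
          # only trigger the executions below when the event matches;
          # equality first, other comparison primitives later
          dataX: valueX
          tag: production
        execute:
          nameofexecution:
            serviceY: taskY
```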
I think this one is too much; I would recommend going with a service for that. We will never be able to cover all the different needs here, so let’s not try.
I think we should always have something reacting to events from services. For now we can have something simple and listen through the api we already have, based on the workflow information, basically what you’ve already done. But we should store all this workflow information in a database and, for every event, query this database to see if we need to execute a task. This way we remove the whole “listening” part, which is not really scalable and hard to manage.
In conclusion, it’s really nice, and for now we can use the listener system, but we should keep in mind that this will evolve toward a database (even a distributed one), and the syntax needs to be “future friendly”. I really think we should name the executions and do the processing inside them; that way we will be really flexible. I might be biased by my previous research though, so I’m definitely open to rethinking that.