
Parallel processing in index.lp
#1
Hello, 

I have a separate controller application hosted on GCP, and would like to send requests to the LogicMachine from my application and therefore need an endpoint. 
I have created a custom application within the LogicMachine using index.lp, and was able to send requests to it. The LogicMachine will then send requests to the KNX server using grp.write().
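For context, a minimal sketch of such an `index.lp` endpoint might look like the following. This is an illustration only: `getvar()` for reading request parameters and the `json` library are assumptions based on typical LM examples, so check your firmware's documentation; `grp.write()` is the call mentioned above.

```lua
<?
-- index.lp sketch: accept ?addr=1/1/1&value=1 and forward it to KNX.
-- NOTE: getvar() and the json library are assumptions; verify against
-- your LogicMachine firmware documentation.
local json = require('json')

local addr = getvar('addr')               -- group address, e.g. "1/1/1"
local value = tonumber(getvar('value'))   -- numeric value to write

if addr and value ~= nil then
  grp.write(addr, value)                  -- forward to the KNX bus
  print(json.encode({ status = 'ok' }))
else
  print(json.encode({ status = 'error', reason = 'missing addr/value' }))
end
?>
```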

Now my question is, is there a way to use Lua scripting and JavaScript within index.lp (or maybe widget.lp?) in order to send requests to the KNX server in parallel, using something like JavaScript's Promise.all()?
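To clarify the pattern I mean, this is roughly what the controller side would do with Promise.all (a sketch only: `sendToLM` is a hypothetical stand-in for a real HTTP call to the LogicMachine, simulated here with a timer):

```javascript
// Sketch of the controller-side pattern: fire several requests at once and
// wait for all of them to finish. sendToLM is a hypothetical stand-in for a
// real HTTP call, e.g. fetch('http://lm.local/user/index.lp?addr=...&value=...').
function sendToLM(addr, value) {
  // Simulate network latency with a 10 ms timer.
  return new Promise((resolve) =>
    setTimeout(() => resolve({ addr, value, status: 'ok' }), 10)
  );
}

async function writeAll(writes) {
  // Promise.all issues every request before awaiting any of them,
  // so all requests are in flight concurrently.
  return Promise.all(writes.map(([addr, value]) => sendToLM(addr, value)));
}

writeAll([['1/1/1', 1], ['1/1/2', 0], ['1/1/3', 255]])
  .then((results) => console.log(results.length)); // prints 3
```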

Any other ideas are also appreciated.

Thanks, and kind regards.
#2
It's not possible to do it in parallel, and even if it were, there would be no performance gain. Using plain HTTP for requests is very inefficient. Does your application support any other transport, such as MQTT?
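If the controller can speak MQTT, a resident script on the LM subscribing to a command topic might look roughly like this. This is a hedged sketch: it assumes the lua-mosquitto binding and a `json` library are available on your firmware, and the broker address and topic name are placeholders.

```lua
-- Sketch: LM resident script receiving write commands over MQTT instead of
-- HTTP. Assumes the lua-mosquitto binding is present; broker address and
-- topic are placeholders to adapt.
local mosq = require('mosquitto')
local json = require('json')

local client = mosq.new()

client.ON_MESSAGE = function(mid, topic, payload)
  local ok, msg = pcall(json.decode, payload)
  if ok and msg and msg.addr then
    grp.write(msg.addr, msg.value)  -- forward to KNX as in the HTTP version
  end
end

client:connect('192.168.1.10')  -- broker address (placeholder)
client:subscribe('knx/write')   -- command topic (hypothetical)
client:loop_forever()
```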
#3
Thanks for the reply!

Though I understand that HTTP requests are inefficient, I'd like to stick with it for now.
If doing it in parallel within one application is not possible, is it possible to run multiple separate custom LogicMachine applications in parallel?
For example, if index1.lp, index2.lp, and index3.lp exist and I send requests to all of them, would they be processed in parallel?
#4
The web server can handle multiple connections at once, but it does not matter whether you use a single endpoint or multiple endpoints.
#5
So do you mean that with multiple custom LM apps, when a request is sent to each application at the same time, they are all processed in a single OS process?
Reply
#6
The web server is a single OS process, but it can handle multiple concurrent requests.

Since the LM has a single-core CPU, nothing can truly happen in parallel. The OS constantly switches between tasks, but only a single task can actually be doing something at any given time. grp calls are also processed one-by-one by any process that is listening to the object value changes.

As I've already mentioned, if you have a lot of object updates, your first bottleneck is the overhead that comes with using HTTP requests. From my tests, the LM can handle around 20..30 object writes per second via remote services, but that consumes quite a lot of CPU resources.
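One common way to reduce that per-request overhead is to batch several writes into a single HTTP request. A sketch of such a batch endpoint is below; it assumes the request body arrives as a JSON array in a single POST variable (the `getvar('writes')` convention and the `json` library are assumptions, so adapt to your firmware):

```lua
<?
-- Sketch: batch endpoint accepting a JSON array of writes in one request,
-- e.g. writes=[{"addr":"1/1/1","value":1},{"addr":"1/1/2","value":0}].
-- NOTE: getvar() and the json library are assumptions; verify against
-- your LogicMachine firmware documentation.
local json = require('json')

local body = getvar('writes') or ''
local ok, writes = pcall(json.decode, body)

if ok and type(writes) == 'table' then
  for _, w in ipairs(writes) do
    grp.write(w.addr, w.value)  -- one KNX write per batch entry
  end
  print(json.encode({ status = 'ok', count = #writes }))
else
  print(json.encode({ status = 'error', reason = 'invalid JSON body' }))
end
?>
```

The writes are still processed one-by-one on the bus, as explained above, but the HTTP overhead is paid once per batch instead of once per object update.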

