
CPU I/O High Load when using Modbus RTU
#1
Hi,

We have an LM that communicates with 8 Modbus slaves over RS-485. There aren't any scripts running, nothing else...

When RTU is enabled, the CPU/IO load rises to 2 or 3 most of the time. Is this normal?

Firmware: 20230612
RTU 2 - 19200 - N/8/1 - Half Duplex
8 slaves (4 with a 5 s poll time and 4 with 10 s)
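For reference, a manual poll of a single slave from the Script editor looks roughly like this. It is only a sketch using the luamodbus library that ships with the LM firmware; the port path /dev/RS485-2, slave ID 1 and register address 0 are placeholders for my setup, and the built-in RTU mapper should be disabled on that port while testing:

```lua
require('luamodbus')

-- open the second RS-485 port with the same settings as the RTU mapper:
-- 19200 baud, no parity, 8 data bits, 1 stop bit, half duplex
local mb = luamodbus.rtu()
mb:open('/dev/RS485-2', 19200, 'N', 8, 1, 'H')
mb:connect()

mb:setslave(1)                      -- placeholder slave ID
local value = mb:readregisters(0)   -- read one holding register at address 0
log('register 0', value)            -- log() writes to the LM Logs tab

mb:close()
```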

Thanks.
#2
How many registers per device, and do you use Value send delta to limit the traffic?
------------------------------
Ctrl+F5
#3
About 20 registers on each slave, and I use value delta.

The profiles have more than 20 registers, but most of them are not mapped to KNX (the devices are input/output modules; I have generic profiles and I'm mapping only what I need).
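Roughly, a stripped-down generic profile looks like this (just a sketch of the JSON profile format as I use it; the names, addresses, datatypes and delta value are examples, not my real map):

```json
{
  "manufacturer": "Generic",
  "description": "I/O module, only the registers I need",
  "mapping": [
    { "name": "Output 1", "bus_datatype": "bool", "type": "coil", "address": 0, "writable": true },
    { "name": "Temperature", "bus_datatype": "float16", "type": "register", "address": 2, "value_delta": 0.5 }
  ]
}
```

Only the entries in the mapping become objects, and value_delta means a value is only sent on when it changes by more than the threshold, which is what keeps the traffic down.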

I've noticed
#4
Install the System load app and see what it says. What about KNX traffic?
------------------------------
Ctrl+F5
#5
Hi,

There is no KNX traffic: there is no KNX TP line and KNX IP is not used (the KNX connection is set to IP Routing).

System load app output (in these conditions the CPU I/O load is over 1):

PID | Process | CPU | Memory
1101 | nginx: worker process | 0.79% | 3.61MB
1077 | lua /lib/genohm-scada/plugins/modbus/daemon.lua 1 | 0.20% | 2.43MB
1 | /sbin/init | 0.00% | 0.61MB
714 | /sbin/syslogd -C16 | 0.00% | 0.63MB
716 | /sbin/klogd | 0.00% | 0.65MB
718 | /sbin/hotplug2 --override --persistent --set-rules-file /etc/hotplug2.rules --set-coldplug-cmd /sbin/udevtrigger --max-children 1 | 0.00% | 0.66MB
912 | /sbin/watchdog -t 5 /dev/watchdog | 0.00% | 0.60MB
979 | /usr/sbin/gpiod -l /lib/restore/defaults.sh -d /lib/restore/restore.sh -b 9 | 0.00% | 0.29MB
988 | /usr/sbin/ntpd -n -p 0.europe.pool.ntp.org -p 1.europe.pool.ntp.org -p 2.europe.pool.ntp.org -p 3.europe.pool.ntp.org | 0.00% | 0.67MB
997 | /usr/sbin/redis-server /etc/redis.conf | 0.00% | 1.16MB
1058 | lua /lib/genohm-scada/core/groupmonitor.lua | 0.00% | 2.20MB
1069 | lua /lib/genohm-scada/core/scenes.lua | 0.00% | 2.05MB
1070 | lua /lib/genohm-scada/core/ipblocker.lua | 0.00% | 0.93MB
1072 | /usr/bin/eibd -e 15.15.255 -q 100 -L 1 -Q 0 -T -f eth0 -D -S224.0.23.12 -F n,n,n,n,0,0 ip:224.0.23.12 | 0.00% | 0.98MB
1076 | lua /lib/genohm-scada/plugins/modbus/daemon.lua 0 | 0.00% | 2.28MB
1087 | lua /lib/apps/daemon.lua lmcloud /home/apps/store/daemon/lmcloud/daemon.lua | 0.00% | 2.18MB
1099 | nginx: master process nginx -c /tmp/nginx.conf | 0.00% | 1.02MB
1107 | /usr/sbin/crond -l 20 -c /etc/crontabs | 0.00% | 0.55MB
1113 | /usr/bin/dbus-daemon --system | 0.00% | 0.84MB
1125 | /usr/sbin/vsftpd /tmp/vsftpd.conf | 0.00% | 0.63MB
1137 | avahi-daemon: running [LM.local] | 0.00% | 1.05MB
- | system | 0.00% | -
- | total | 0.99% | 26.02MB
#6
The actual CPU usage is low. IO load is high because the Modbus mapper is constantly waiting on data from RS485. This is normal.
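You can check this from a script: the Linux load average also counts tasks blocked in uninterruptible I/O wait, not only tasks using the CPU, so it can sit at 2 or 3 while the CPU is mostly idle. A quick sketch (plain Lua reading the standard Linux /proc files, runnable from the Script editor; log() writes to the Logs tab):

```lua
-- 1, 5 and 15 minute load averages (these include tasks waiting on I/O)
local f = io.open('/proc/loadavg')
local load1, load5, load15 = f:read('*l'):match('^(%S+)%s+(%S+)%s+(%S+)')
f:close()

-- aggregate CPU counters since boot: user nice system idle iowait ...
f = io.open('/proc/stat')
local user, nice, system, idle, iowait =
  f:read('*l'):match('^cpu%s+(%d+)%s+(%d+)%s+(%d+)%s+(%d+)%s+(%d+)')
f:close()

log('load averages', load1, load5, load15)
log('cpu counters', 'user=' .. user, 'system=' .. system, 'idle=' .. idle, 'iowait=' .. iowait)
```

If idle stays high while the load average is elevated, the load comes from waiting on the serial port, not from actual CPU work.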
#7
(14.03.2024, 12:03)admin Wrote: The actual CPU usage is low. IO load is high because the Modbus mapper is constantly waiting on data from RS485. This is normal.

Ok, thanks!

