In CAN, the maximum latency grows only linearly with bus load, and even that applies only to the lowest-priority (highest ID number) device.
Assume each message takes 1 ms (at 125 kbit/s) and you put 90 devices on the bus with different message IDs, each transmitting a message every 100 ms. The device with the lowest ID will suffer 0 to 1 ms of latency, the 45th 0 to 45 ms, and the last 0 to 90 ms.

In CANopen, one configuration option is to have all slaves triggered by the SYNC message from the master. All slaves become ready to transmit at the same time, but due to the arbitration, the messages are transmitted in roughly increasing ID order in a single burst, and the bus is idle until the next SYNC. Any low-priority non-realtime message is delayed by the length of the burst, so a non-realtime message is delayed either 0 ms or the length of the burst.
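The worst-case figures in the example can be sketched with a simple model: under CAN's fixed-priority, non-preemptive arbitration, a frame queued at the same instant as all higher-priority frames waits for each of them in turn. The function name and the 1 ms frame time are illustrative assumptions, not anything standard:

```python
# Sketch (assumptions: every frame occupies the bus for frame_ms, and all
# nodes queue one message at the same instant, as after a SYNC).
def worst_case_latency_ms(priority_rank, frame_ms=1.0):
    """Worst-case queuing latency for the device with the given priority
    rank (1 = lowest CAN ID = highest priority). The frame itself can be
    delayed by every higher-priority frame plus one frame time of blocking."""
    return priority_rank * frame_ms

# Matches the example in the text: 90 devices, 1 ms per frame.
print(worst_case_latency_ms(1))   # first device: up to 1 ms
print(worst_case_latency_ms(45))  # 45th device: up to 45 ms
print(worst_case_latency_ms(90))  # last device: up to 90 ms
```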
When devices transmit at random times but with specified intervals, the situation is similar, except that there may (or may not) be gaps between the realtime frames, so the latency for a non-realtime message can be anything from 0 to the length of the burst.
In practice, keeping the load from realtime messages below 50 % leaves plenty of time for non-realtime messages.
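The load check itself is simple arithmetic; here is a small sketch (the function name and parameters are my own, chosen to match the numbers used earlier in the example):

```python
# Sketch: estimate the fraction of bus time consumed by periodic
# realtime traffic (nodes sending one frame_ms frame per period_ms).
def bus_load(nodes, frame_ms, period_ms):
    return nodes * frame_ms / period_ms

# The earlier example: 90 nodes, 1 ms frames, 100 ms period.
load = bus_load(90, 1.0, 100.0)
print(f"{load:.0%}")  # 90% -- well above the suggested 50 % ceiling
```

By that measure the 90-device example is already heavily loaded, which is exactly why its lowest-priority node can see latencies close to the full cycle time.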
There is a risk of flooding if you send a message on every input-signal state change and, for some reason (e.g. a bouncing contact), these messages are generated at a huge rate. In CANopen you can specify an inhibit time for each message, which is the smallest interval at which a change-of-state message is allowed to be transmitted.

The risk with high bus load is that if there are error frames on the bus (i.e. some node occasionally hears a garbled message whose CRC does not match and therefore generates an error frame), the load could rise to 100 % and the messages with the highest IDs would not get through. Of course, the electrical quality of the bus should be fixed as soon as possible to avoid error frames.
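The inhibit-time behaviour can be sketched as a rate filter in front of the transmit queue. This is a hypothetical helper to show the idea, not the API of any CANopen stack:

```python
# Sketch of inhibit-time filtering: suppress change-of-state messages
# that arrive sooner than inhibit_ms after the previous transmission.
class InhibitFilter:
    def __init__(self, inhibit_ms):
        self.inhibit_ms = inhibit_ms
        self.last_tx_ms = None

    def may_transmit(self, now_ms):
        """Return True if a message may go out now, and record the time."""
        if self.last_tx_ms is not None and now_ms - self.last_tx_ms < self.inhibit_ms:
            return False  # still inside the inhibit window: suppress
        self.last_tx_ms = now_ms
        return True

# A bouncing contact toggling every 2 ms, with a 50 ms inhibit time:
f = InhibitFilter(inhibit_ms=50)
sent = [t for t in range(0, 100, 2) if f.may_transmit(t)]
print(sent)  # only the transitions at 0 ms and 50 ms get onto the bus
```

With the inhibit time in place, a contact bouncing at hundreds of hertz still generates at most one frame per inhibit window, so it cannot flood the bus.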
Paul