When using the serial Modbus RTU decode function in PicoScope 6.13.2.3439 Beta I encounter some problems.

1. Why do I have to invert the A-B RS485 signal for proper decoding?

2. How are the start and end frame times calculated? I get "invalid start" and "invalid end" errors. If I calculate the 3.5-character time -> 1/115200 * 11 * 3.5 = 334.2 us. PicoScope only gives a valid decode when the gap is bigger than 2.66 ms.
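For reference, here is a small sketch of my own (assuming the usual 11-bit Modbus RTU character: 1 start + 8 data + 1 parity + 1 stop bit) showing how the 3.5-character inter-frame time works out at different baud rates, including the fixed 1.75 ms value the spec uses above 19200 baud:

```python
# Sketch: Modbus RTU inter-frame silence (t3.5), assuming an 11-bit
# character (1 start + 8 data + 1 parity + 1 stop bit).
BITS_PER_CHAR = 11

def char_time(baud):
    """Duration of one character in seconds."""
    return BITS_PER_CHAR / baud

def inter_frame_time(baud):
    """Modbus RTU t3.5: 3.5 character times at or below 19200 baud,
    a fixed 1.75 ms above that."""
    if baud <= 19200:
        return 3.5 * char_time(baud)
    return 1.75e-3

for baud in (4800, 9600, 19200, 115200):
    print(f"{baud:>6} baud: t3.5 = {inter_frame_time(baud) * 1e6:.1f} us")
# At 115200 baud, 3.5 raw character times would be only
# 3.5 * 11 / 115200 = 334.2 us, but the spec fixes t3.5 at 1.75 ms.
```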

If by your first question you mean "why do I have to reverse my signals going to channels A and B in order to get valid decoded data", then the answer is that you just have to make sure that Channel A is +ve-going and Channel B is -ve-going. As you can see in the serial decoding drop-down list for the Master (the channel selection for the Master on the bus), the listed option is A-B, and for our decoding of RS-485, what generates a data value of 1 is a +ve differential of A relative to B.

Regarding your second question/comment, are you saying that for the lower bit rates (down to our limit of 4.8k baud) the inter-frame delay is less than 3.5 characters? If not, then I'm not sure that I understand what you're asking. If the minimum inter-frame delay is correct according to the specification, and the increased inter-frame delay at bit rates lower than 19200 baud still provides a valid decode, why would you want to adjust the inter-frame delay for bit rates higher than 19200 baud (which would invalidate the specification)?

First question: In the attached picture you can see my serial decode setup. In the previously attached data file you can see that +ve (A) is blue, so this should work as you describe. Why, then, do I have to check the "invert" box in the serial decode setup to get the proper data?

Second question: Our inter-frame time is shorter than 1.75 ms, so the first two frames are seen as one frame. I know this is not according to the Modbus spec, but we know that our slaves respond much faster than 1.75 ms, so we use the 3.5-character time as the minimum inter-frame time.

I was wondering if this 1.75ms could be altered (e.g. in the serial decode setup), so that we can decode our messages even when we don't have a minimum inter-frame time of 1.75ms.

I see what you're referring to, and I was under the impression that manufacturers just arbitrarily label the polarity of their terminals (which, unfortunately, they do..... incorrectly).

In actual fact, according to the RS485 standard, when labelling the two terminals of a differential balanced line alphabetically, they should be labelled A for negative and B for positive (which, I didn't realize, is what we do). The confusion over A and B arises because some manufacturers will label inputs incorrectly with A positive w.r.t. B, while most manufacturers will instead use alternatives to alphabetic labels such as + and –, or some variation of D+, D, or D-.

So, because of this confusion (and any confusion over the labelling in other serial decoding protocols), we have the invert option in our setup. But the decoding in our PicoScope 6 software for Modbus RTU is done with A negative w.r.t. B.

Regarding making the minimum inter-frame delay adjustable, I can put the request forward to our development team on your behalf. However, our aim is to keep our serial decoding compliant with the standards of the individual protocols, so the reality is that not a lot of time is typically available for pursuing customer requests for custom decoding.

Actually, the "0" = "ON" (VA > VB) and "1" = "OFF" (VA < VB) part of the spec is only confusing for newcomers to serial comms, as it also applies to single-ended RS-232 and common UARTs, which are much more accessible from a technical point of view (far more ubiquitous, and an elementary piece of code to program) than the polarity of a hardware transceiver (more the domain of hardware engineers, and less trivial to design).
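As a concrete illustration of that part of the spec, here is a minimal sketch of my own (not PicoScope's actual decoder) mapping the differential voltage to a logic level under that convention:

```python
def rs485_bit(va, vb):
    """Per the convention quoted above: binary 0 ("ON") when VA > VB,
    binary 1 ("OFF") when VA < VB. A sketch, not PicoScope's decoder."""
    return 0 if va > vb else 1

# Idle (mark) line: VA < VB -> logic 1; a start bit drives VA > VB -> logic 0.
print(rs485_bit(0.2, -0.2))  # 0 (space / start bit)
print(rs485_bit(-0.2, 0.2))  # 1 (mark / idle)
```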

So, regarding the polarity of A & B: at this short notice my only sources of information were an expert outside Pico Technology (so somewhere along the line the standard was misinterpreted) and a number of different texts and diagrams that state why the convention should be that A is -ve (some go into quite a bit of detail, e.g. http://www.bb-elec.com/Learning-Center/ ... S-422.aspx).

However, thanks for posting the excerpt from the actual standard on the subject (we will need to change the default polarity of our Modbus RTU decoding in order to conform). Unfortunately, this may not 100% clarify the labelled polarity of the equipment that you're using, as explained in the section "What else can go wrong? Check your signal polarity." of this document: https://www.csimn.com/CSI_pages/RS-485-FAQ.html. The excerpt you posted states that "Texas Instruments datasheets for RS-485 transceivers follow the convention that A is the non-inverting pin and B is the inverting pin. Most, if not ALL manufacturers of RS-485 transceivers use the same convention for polarity and pin naming as Texas Instruments", so there is the possibility that the standard has once again been misinterpreted in my last document link. From all of the info I've read, conflicting equipment has caused this uncertainty (the phrase 'Most, if not all' implies that conflicting equipment may be out there), and because the fix is relatively easy (just swapping the connections) there has been no burning need to 'fix' any 'wrong' hardware.

So, in conclusion, there IS confusion over the labelling of polarity for RS-485. If you are implementing the Modbus RTU protocol, you are always best advised to make sure that all pieces of equipment are connected up with reference to the "inverting" and "non-inverting" inputs, rather than the A and B inputs, to avoid having to fault-find and reverse connections. If you are analyzing Modbus RTU transmissions, e.g. in a piece of test and measurement equipment with serial decoding capability, it's less important when the equipment has a get-out-of-jail card such as our 'invert' button.

Today I was also using the PicoScope at work for Modbus. I was confused that I always got an "invalid start". In the end the reason was that I was zoomed in too much on the frame, so there was no gap of 3.5 characters in front of the frame/trigger. I think in this case it should not say "invalid start", but "start not checked" or something like that. Because I am absolutely no expert on Modbus, I actually thought something was wrong with the start, and spent hours configuring many different start options on the Siemens PLC.

I also could not find a modbus specific trigger option (maybe I looked in the wrong place?), making it hard to always trigger at the start of the message.

I'm sorry that you had to waste so much time looking for the problem, and I understand your frustration. Without knowing the algorithm used to perform the decoding, it's easy to specify what we would like to see, but not necessarily easy to implement at the time the function is created. The first check may not be the inter-frame delay; it might be where the start bit should be if there is an inter-frame delay, in which case it's difficult to differentiate between a start bit that is wrong because the framing delay is too long/short and a start bit that is wrong because the start bit itself is too short.

However, "Start not checked" is only marginally better as an error message (it doesn't tell you why it wasn't checked; a more informative one for this particular problem would be "framing error at start"). We do strive to make our software as good as it can be, so I can put in a request to our development team for an error message that better reports the reason for the error, and let you know the outcome.

Regarding triggering correctly (note that, from a developer's perspective, this is a bit like the start-bit error problem that you were trying to make sense of): we don't have serial-protocol or application-specific triggers. You need to use the available advanced triggers, with the correct parameters, and adjust the capture window (if necessary) to set up a trigger that will work. In your case, you need to trigger on something unique to a data packet, so you should use the inter-frame delay: trigger when a high state on the non-inverting input is longer than 1 character (in case a packet contains data that is high for 8 bits) and shorter than 3.5 characters (it doesn't matter exactly where the trigger occurs, because you will adjust the start of the capture relative to the trigger to get all of the data). Within PicoScope 6 there are different triggers you could use, a simpler one being a pulse-width trigger set to fire when a period is longer than, say, 2 characters. Then, to allow the decoder to detect the frame start correctly, you need to add enough pre-trigger samples (by reducing the % capture time in the pre-trigger window) to capture the event that the decoder is looking for before the trigger fires, i.e. the inter-frame delay. (Note that you need to capture at least the minimum inter-frame delay period, but you don't want any more pre-trigger data than that, because you want to maximize the amount of post-trigger data that you capture.)
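To make the pulse-width numbers concrete, here is a sketch of my own arithmetic (assuming 11-bit Modbus RTU characters; the function name is illustrative, not a PicoScope API) for choosing a trigger threshold between 1 and 3.5 character times:

```python
# Sketch: choose a pulse-width trigger threshold for the idle (high) gap,
# somewhere between 1 and 3.5 character times (e.g. 2 characters).
BITS_PER_CHAR = 11  # 1 start + 8 data + 1 parity + 1 stop bit

def pulse_width_threshold(baud, chars=2.0):
    """Trigger when the idle state lasts longer than this (seconds)."""
    return chars * BITS_PER_CHAR / baud

baud = 115200
thr = pulse_width_threshold(baud)
print(f"Pulse-width trigger: high for > {thr * 1e6:.0f} us at {baud} baud")
# 2 * 11 / 115200 = 191 us, safely between 1 and 3.5 character times.
```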

I hope that clarifies things for you (I went into a lot more detail than planned to explain the concept behind setting up an optimum trigger for serial decoding).

Thanks for your reply. I was indeed using the advanced triggering, but was just wondering if I had missed how to set up a specialized trigger. I must say I was really glad to find out that the PicoScope 6 beta software supports Modbus, because none of the other scopes we had at hand did (so a specialized trigger would be more the icing on the cake).

Gerry wrote: Hi _Wim_, "A more informative one for this particular problem would be 'framing error at start'."

I am afraid I do not fully agree with the above, as to me it still seems to imply that something is wrong with the DUT, while the actual issue is that the time between the detected start bit and the beginning of the scope screen is too short to check the inter-frame delay (the scope cannot check that the 3.5-character delay is present, but it also cannot check that it is absent). So something like "cannot check inter-frame delay, change time base", or the like, would be clearer to me.

I take your point, but the problem is not necessarily with the time base. It could be that the time base is long enough, but the trigger is positioned so that not enough data is being captured to represent a complete inter-frame delay. So perhaps just "Not enough data for inter-frame delay check" (which would account for both scenarios) is enough. In any case, our development team will ultimately make the call.