I'm trying to interface a Mega2560 to 3 Siemens DL3416 alphanumeric displays. Since there are no libraries written for these displays, I'm trying to learn some details of how things work in the libraries. Since it seemed like a good example, I read through the LiquidCrystal library .cpp file and the .h file, but I'm not seeing what I expected. I confess I'm not very good at any programming language, especially C or C++.

Right now I can do simple things like turn on the cursor in any given character position, so timing is not an issue, but it's very tedious addressing the displays using bitwise I/O. Is there a way to read or write an entire 8-bit "port", or do I have to shift array values and write a bit at a time? I've searched the forums for bytewise, bytewide, etc., but didn't find the right thread I guess. I'm sure this has been a topic before.

dhenry

I don't know your particular device, but those devices typically don't "read" the data pins until they are "strobed", i.e. a particular pin goes through a transition. In that case, the lack of atomicity when writing to the data pins makes no difference.
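To illustrate the point, here is a minimal sketch of a write cycle. The pin number and the function name are hypothetical (check the DL3416 datasheet for the actual strobe pin and edge); the host-side stand-ins just let it compile off-target:

```cpp
#include <stdint.h>
#ifdef __AVR__
#include <Arduino.h>
#include <avr/io.h>
#else
// host-side stand-ins so the sketch compiles off-target
uint8_t PORTA;
int wrLevel = 1;
const int LOW = 0, HIGH = 1;
void digitalWrite(int pin, int level) { (void)pin; wrLevel = level; }
#endif

const int WR_PIN = 30;  // hypothetical choice for the display's /WR strobe

// The display only samples the bus when /WR is pulsed, so it doesn't
// matter whether the data pins were set one at a time or all at once
// beforehand -- only the state at the strobe counts.
void strobeByte(uint8_t data) {
    PORTA = data;                // set up the data lines (order irrelevant)
    digitalWrite(WR_PIN, LOW);   // pulse the strobe...
    digitalWrite(WR_PIN, HIGH);  // ...and release it; data is now latched
}
```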

Yes, you can directly manipulate the I/O ports to read or write all the bits in a given I/O port in a single instruction. However, depending on which Arduino board type you have, not all 8 bits of each port are available for use.

So here is the software method: http://www.arduino.cc/en/Reference/PortManipulation

And here is a pin mapping diagram to associate arduino pin numbers to port/bit numbers for the 168/328 chips and the 1280/2560 chips:
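As a minimal sketch, assuming the display's D0-D7 lines are wired to Mega2560 digital pins 22-29 (which all map to PORTA on the ATmega2560), a whole byte can be written at once. The `#ifdef` stand-ins are only there so it compiles off-target:

```cpp
#include <stdint.h>
#ifdef __AVR__
#include <avr/io.h>      // brings in DDRA/PORTA on a real Mega2560
#else
uint8_t DDRA, PORTA;     // host-side stand-ins so the sketch compiles off-target
#endif

// Assumption: the display's D0-D7 are wired to Arduino pins 22-29,
// i.e. bits PA0-PA7 of PORTA on the ATmega2560.
void writeDataBus(uint8_t value) {
    DDRA  = 0xFF;        // configure all eight PORTA pins as outputs
    PORTA = value;       // drive the whole byte in one instruction
}
```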

A far better way to accomplish this would be to use binary. Typically it is the bits that are important, not the numerical value. When you try to do the conversion yourself the very best you can do is get it correct. The compiler gets it correct every time.

In Arduino it works fine (as there are #defines in the Binary.h file which define all of the 8-bit binary numbers). However, binary literals are not part of the C specification, so the strictest compilers reject them. GCC has built-in support for binary literals, so you can use 0b or 0B to denote a binary number. It is beyond me why the people at Arduino decided to define their own binary numbers rather than use the GCC ones.

I read that all 8 bit binary numbers had been explicitly defined in Arduino and was surprised that they were not "natively" supported in C/C++. But I have a question. I just tried this: int x = B01001100; and the compiler didn't complain. Then I tried adding 8 more bits and it squawked. I guess that makes sense if only the first 256 binary numbers are defined, but doesn't it take 16 bits to be an integer type? Does this mean you can access only the low byte in an integer using binary?

Try using the 0b syntax that is built into gcc, e.g. int x = 0b10101010101010;

The microcontroller itself works in binary and has no problems using 16-bit binary numbers - in fact, all decimal and hex numbers are converted to binary anyway. The problem is just telling the compiler that the number you have written should be treated as if it is already in binary, and that is what is missing from the C specification. When you try to save an 8-bit number to a 16-bit int, the microcontroller simply assumes the upper 8 bits are all zero (unsigned) or all one (both are signed and the 8-bit number is negative).
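A small sketch of that widening behavior, using the gcc 0b extension (the variable names are mine, just for illustration):

```cpp
#include <stdint.h>

uint8_t u = 0b10101010;           // 170 stored in 8 bits
int16_t widened_u = u;            // unsigned source: upper byte zero-filled, still 170

int8_t  s = (int8_t)0b10101010;   // same bit pattern read as signed: -86
int16_t widened_s = s;            // signed and negative: upper byte one-filled, still -86
```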

My preference is to use a "bit" macro that defines a constant for an individual bit, then OR them together to create other constants. I also like to use individual defines for each bit because usually the bits mean something, and you can assign them mnemonic names for clarity.

It makes the code easier to read and much less prone to errors, since when typing in binary constants it is easy to miscount the bits and create incorrect constants.

For example, all these assignments to "mask" are the same, yet at least in my view the last one makes it easiest to tell what is going on:
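Something along these lines (the mnemonic names are made up for the example; on a real device they would come from the datasheet):

```cpp
#include <stdint.h>

#define BIT(n)      (1u << (n))

// Hypothetical mnemonic names -- use whatever your device's datasheet calls the bits
#define DISPLAY_ON  BIT(6)
#define CURSOR_ON   BIT(3)
#define BLINK_ON    BIT(2)

uint8_t mask_dec = 76;                                 // decimal
uint8_t mask_hex = 0x4C;                               // hexadecimal
uint8_t mask_bin = 0b01001100;                         // binary (gcc extension)
uint8_t mask_sym = DISPLAY_ON | CURSOR_ON | BLINK_ON;  // mnemonic: self-documenting
```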

For some unfathomable reason this is not possible with some C compilers but as far as I know it works ok with the Arduino.

Don

This is not a C compiler issue, but rather a self-created issue by team Wiring/Arduino.

All C compilers support binary constants. The problem is what I can only describe as what appears to be a NIH (not invented here) mindset in team Wiring/Arduino, which often decides that having a similar yet proprietary way of doing things is better than simply explaining the real way of doing things in the language being used: C/C++. In this case the team Wiring/Arduino guys invented their own binary constants. This kind of deliberate duplication of functionality has always frustrated and annoyed me, particularly because, just like in this case, it isn't a complete replacement for the built-in solution.

I mean why not simply explain that the binary syntax is

0bxxxxxxxx

How is that any harder than

Bxxxxxxxx

Which is non-standard, requires a custom header file, and a define for every single Bxxxxxxxx? Today's Arduino solution only works for 8-bit values. Take a look at binary.h down in the core directory sometime. It is comical and yet sad at the same time.

The method they have used does not scale beyond 8 bits.

Think of the size of that header file if it needs to support 16-bit numbers, or the impossibility of using it for 32-bit binary constants for something like the Due?
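A quick sketch of the scaling point: the gcc 0b extension works at any width, while the Bxxxxxxxx macros stop at 8 bits because there is simply no define beyond that.

```cpp
#include <stdint.h>

uint16_t pattern16 = 0b1100110011001100;                    // 16-bit literal: just works
uint32_t pattern32 = 0b10101010101010101010101010101010U;   // 32 bits: also fine

// uint16_t nope = B1100110011001100;  // no such macro in Binary.h -- won't compile
```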

The 0bxxxxx format is something GCC adds as an extension to the standard. I think a couple of other compilers have similar extensions as well, but not all by any means.

I stand corrected. Thank you for that clarification. I feel silly for not verifying it first. I'm so used to gcc (been using it since the late 80's) that I forget to check on what the real standard supports. Kind of odd that there is no support for binary literals in ANSI C. Oddly enough, in over 30 years of embedded C programming I have yet to ever use binary literals.