If you are stepping through the code, it is ultimately up to the debugger and how it interprets advancing by one statement. I'm guessing it will take two steps: the first for the increment, the second for the function call. Some complex lines of code, such as those where temporaries are created or function call results are used as parameters, will stop for single-step advances three or four or more times, depending on how many pieces the debugger thinks the line is made of.

If you find the debugger's decisions troublesome, you can switch to a mixed source/disassembly view and step one instruction at a time rather than one C++ line at a time. Sometimes this can reveal details you didn't realize were happening, such as implicit conversions, overloaded operators, or other less visible features of the language.

The debugger jumps to the code marked as being on the next line. It's not aware of C statements, only of the line information in the source. Note that this is the same information the preprocessor macro __LINE__ outputs, meaning it can be modified with #line directives.

If two statements are typed on the same line, how does this affect the debugger, given that it apparently steps by line numbers (in both GDB and the Visual Studio debugger)?

In my experience with the VS debugger, a single line containing multiple statements is treated as a single unit (for things like single-stepping), but if the statements are spread over multiple lines, each will be executed individually.

This has the result that formatting affects how stepping works:

for(i=0; i<1000; i++)DoSomething();

vs:

for(i=0; i<1000; i++)
    DoSomething();

where the former will be stepped over all at once, but the latter will involve needing to step through each iteration of the loop.

An exception seems to be function calls, which in my experience (normally) behave as a single unit (although it is possible to step into called functions and similar within argument lists).

Multiple statements on one line could potentially cause your debugger to stop multiple times on that line as you step through. The debugger gets to choose what it means:

The debugger might step over the entire line in one step.
The debugger might stop execution many times as it goes through each of the function calls, maybe even stopping before running x's constructor.

You can always drop down to the assembly level if you specifically need to break between the two statements, in case it matters. Debuggers don't actually work with source code or care in the least about it; it's just the UIs (be they CLI or GUI) that take the concepts you as a programmer mostly deal with (lines in files) and map them to assembly instructions. Likewise, you can debug binaries without debug information; it's just (often significantly) harder, because you're stuck manually correlating the assembly with the original source.

This causes me to think: is there a standard for debuggers? Searching a while at stackoverflow.com seemed to turn up no meaningful results.

Standard for _what_ in debuggers?

Different OSes expose different debugging-relevant features in different ways. Different hardware exposes debugging features in different ways. Different executable formats encode their debug-symbol information in different ways. Different programming languages and runtimes have different semantics that can be debugged and different ways of being hooked into by a debugger.

Hence the question: why do programming languages have standards but debuggers don't?

Because there is no need for portability, and because if they did add something to ensure portability it would come at a cost that most would be unwilling to pay.

Note that all this debugger talk implicitly assumes "debug" builds. These builds are mostly unoptimized and usually quite slow. When you attempt to debug an optimized build, the debugger may jump back and forth between lines many times, and will also frequently land in regions of assembly for which it has no idea what the C++ equivalent is.

There is no need, no desire, and whatever standardization were recommended would likely be ignored.

Searching a while at stackoverflow.com seemed to turn up no meaningful results.

There is a standard format for debug information: DWARF. MSVC has its own format that it uses. There is also a slew of front-end features that people using debuggers expect, mostly based on GDB.