I am fairly certain you can do that by applying the translation of the card after the projection matrix. Just insert the projection matrix between steps 3 and 4 in your previously "incorrect" list:
1. Start with vertices comprising an upright card centered around the origin.
2. Apply a rotation matrix to rotate the card.
3. Apply a translation matrix moving the card forward so it's dead center in front of the eye.
4. Apply the projection matrix here.
5. Apply another translation matrix moving the card so it's in the lower-right corner of the screen.
The amount to translate in step 5 is left as an exercise for the reader (i.e., I'm too lazy to check myself), but I believe it has to be done in normalised device units. That is, a translation of 0.5 would translate the card by half the size of the window (or rather the viewport, but I assume you're not changing the viewport in this approach and that the viewport is the full size of the window).

I see now what you intend to do. If you consider the view of a single card as its own viewport then what you describe here is precisely what glViewport is for; you define the region of the window where you want the scene (your single card) to be drawn.
It even gives you possibly positive side effects, such as clipping. Consider, for example, if you zoom in a little on the card while it's rotating, or if the viewport is too small. If the corners of the card extend beyond the viewport while rotating it, they will be clipped to the viewport region and won't interfere with things outside the defined viewport. For this purpose, the viewport is typically set together with the scissor region; look up glScissor.

The parameters x and y can be negative, but the width and height parameters must be non-negative.
The viewport transform is just a coordinate transformation from normalised device coordinates to window (or pixel) coordinates.
Once all transformations (model, view, perspective, or any other transformation you may have) have been applied, you end up in a coordinate system called normalized device coordinates. It is a coordinate system where the visible coordinates are in the range [-1, 1] along all three axes. For example, -1 to 1 along the X-axis corresponds to what is visible along the X-axis from left to right, independent of the Y- and Z-coordinates.
This range [-1, 1] along both the X- and Y-axes is then transformed by the viewport transform, so that -1 ends up at the pixel coordinate x or y (the parameters to glViewport), and 1 ends up at the pixel coordinate x+width or y+height. The -1 to 1 range along the Z-axis is subject to the depth buffer process, so it will not be covered at this stage.
So where your point (100,100) ends up in window space depends on all the transformations you apply to it, and what its normalised device coordinates are. But if you move the viewport around by changing the x and y parameters, you will effectively just translate the rendering region around the window, like grabbing the title bar of any window and moving it around.
Stencil testing, however, is tied to the actual pixel coordinates. Thus, moving the viewport around will render your scene over a different set of actual pixels within the window, and therefore subject the rendered scene to a different region of the stencil buffer.
As far as I understand what you want to do and the way you want to use the viewport to achieve this, you need negative width or height to achieve the flipping effect. That is not possible in the first place; see Q1.

Original post has been restored and some replies regarding post history have been hidden. You can continue that discussion in the corresponding thread; clicky. Do not fundamentally change your posts like this again.

What you want then is for the compiler to find a type Type such that Dummy<Type>::type resolves to int. The type Type is unrelated to the parameter type int, and the compiler would have to instantiate Dummy with every possible type in order to find the ones where Dummy<Type>::type is an int. That is, as you can imagine, quite an unreasonable task. There could be some obscure and hidden type, somewhere, that specializes the Dummy template with using type = int, and that type would necessarily have to be a legal candidate for Type.
The template parameter is in a non-deduced context, and the language simply doesn't allow deduction in this case. I imagine my argument above is a reasonable rationale for that.
Possible ways around it depend on the use case and what other constraints you can impose on the types. But as it stands, it simply isn't a context where a template parameter can be deduced. For example, if you pass other parameter types to the function, Type being one of them, then those parameters can deduce the type:
template<typename Type>
void dummyFunction(typename Dummy<Type>::type type, Type other)
{ ... }
Now the parameter other can deduce the template parameter, and Dummy<Type> is instantiated accordingly.

If you don't initialize a member in the initializer list, its default constructor will be executed as part of the initialization of the object. Once the initializer list is executed (including the default constructors for members you don't explicitly initialize), the constructor body is executed. The difference between constructing a member in the initializer list and "constructing" it in the body of the containing class' constructor is this: the former directly calls the proper constructor, while the latter default-constructs the member and then calls its assignment operator with whatever you're assigning to it.
If the member type cannot be default constructed, you must initialize it in the initializer list. If the member type is expensive to default construct, then you pay for an unnecessary default initialization and then an assignment to override the value from the default constructor.
edit: To expand on the above a little more. Once the constructor body starts executing, all members have had one of their constructors called and they are all properly constructed. I mentioned in my last post that some things cannot be initialized other than in the initializer list. For example, base class constructors in an inheritance tree must be called from the initializer list; objects without a default constructor cannot be default constructed; const objects cannot be assigned to in the constructor body, since that would change a const object.

The two are not equivalent in the constructor implementation. In the initializer list, however, they produce equivalent results; the first is value-initialization which for pointers means its value is set to the null pointer, while the second (explicitly) initialises it with a null pointer. Start using the initializer list; some things just cannot be initialized in the constructor body.

Correct. In mathematical notation, a half-open range [0, x) includes zero but stops just short of x. This is also common in many other systems. Graphics, for example, typically draw segments in the [A, B) form, starting exactly at A and ending the instant before B.
I tried this in VS2015 and it does return RAND_MAX... is it not supposed to?
#include <cstdlib>
#include <iostream>

int main() {
    for (int i = 0; i < 200000; ++i) {
        int r = rand();
        if (r == RAND_MAX)
            std::cout << "MAX\n";
        else if (r == RAND_MAX - 1)
            std::cout << "ALMOST\n";
        else if (r == 0)
            std::cout << "ZERO\n";
    }
}
The range for both rand() and the <random> library (at least the uniform integer distributions, when comparing with rand) is inclusive at both ends. The range is therefore [0, RAND_MAX], not [0, RAND_MAX). This is different from, for example, iterator ranges in the standard library, which are half-open.

That may explain why I was surprised there wasn't much information about it. It's in VS 2013 at least, where I tried it, but if it was actually removed then it should be fairly easy to make the necessary types to handle the particular problem raised in this thread.
template<typename T>
struct identity {
    using type = T;
};
The idea of making the second parameter a non-deduced one still applies.

A third and not-so-crazy option is to make the type of the second parameter a non-deduced type.
#include <type_traits>

template<typename T>
Foo<T>& operator*=(Foo<T>& left, typename std::identity<T>::type right) {
    ...
}
Passing the type through a dependent type like that excludes that T from the deduction process. The parameter instead participates in conversion once the actual type has been resolved.

Assuming that "top" points upwards, the quad (A, EndA, EndB, B) is clockwise, since if you look at the face from above, the vertices are in clockwise order. If you rotate the cube slightly around the X-axis so that the "bottom" face is visible, you'll see that (EndD, EndC, C, D) is counter-clockwise if you look at it from below. Thus, the "bottom" and "top" faces of that cube have inconsistent winding order.
Two faces with the same winding order that share an edge between two vertices must traverse that edge in opposite order. For example, if "front" has the edge from A to B (in that order), then "top" must connect from B to A. And, indeed, that is the case; the last vertex B in your list connects back to the first vertex A. However, this is not the case for the "bottom" face and the edge between C and D; both the "front" and "bottom" faces share the edge from C to D in the same direction, and therefore they have different winding orders.