Almost arbitrary symbolic step (calc. of var.)

1. This is really more of a math question than a physics one... it comes up in the calculus of variations, in the derivation of the Euler equation. I always hit a mental roadblock at this step:

2. [itex]I(e) = F(x,Y,Y') \rightarrow \frac{dI}{de} = \frac{\partial F}{\partial Y} \frac{dY}{de} + \frac{\partial F}{\partial Y'} \frac{dY'}{de}[/itex] (Note: I'm leaving out the integral because it's not what I'm asking about; if you'd rather see it there, just imagine it.)

3. The issue I have is: why is there a sum? How do we know these variables (really Y and Y') are linearly independent? At least, that's what I'm assuming the sum represents. I can imagine it failing if they aren't... am I missing something obvious? Or should a function and its derivative be considered linearly independent in some function space or something?

This is just the chain rule for a multi-variable function. You have to add up the contribution from each variable separately. You should be able to find a more detailed discussion of this procedure in any good multivariate calculus textbook.
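To make that chain-rule sum concrete, here is a quick numerical sanity check (my own toy example, with a made-up F, Y, and Y', not anything from the thread): treat F as an ordinary function of two input slots, feed it Y(e) and Y'(e), and compare dI/de against the chain-rule sum of the two partial-derivative contributions.

```python
# Toy check of the multivariable chain rule (illustrative example only).
# F is just a function of two slots; it never "knows" the inputs are related.

def F(a, b):
    return a**2 + 3 * a * b   # any smooth function of two inputs

def Y(e):                      # hypothetical variation evaluated at one x
    return 1.0 + e * 1.0

def Yp(e):                     # its derivative-slot companion at the same x
    return 1.0 + e * 2.0

h = 1e-6
e0 = 0.3

def d(f, t):                   # central finite difference d f / d t
    return (f(t + h) - f(t - h)) / (2 * h)

I = lambda e: F(Y(e), Yp(e))

# Partial derivatives of F: wiggle one slot while holding the other fixed.
dF_dA = (F(Y(e0) + h, Yp(e0)) - F(Y(e0) - h, Yp(e0))) / (2 * h)
dF_dB = (F(Y(e0), Yp(e0) + h) - F(Y(e0), Yp(e0) - h)) / (2 * h)

lhs = d(I, e0)                             # direct dI/de
rhs = dF_dA * d(Y, e0) + dF_dB * d(Yp, e0) # chain-rule sum
print(round(lhs, 6), round(rhs, 6))        # the two agree to numerical precision
```

The point of the sum shows up in `rhs`: each slot of F contributes separately, weighted by how fast that slot's input changes with e.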

I understand the chain rule for the most part, and I understand why there would normally be a sum: it makes sense when the two variables are independent. But here I have a Y and a Y', and I'm not convinced they can be treated independently so that the same rules apply. I thought I had made that clear in my question; I guess I didn't. I wouldn't ask something that could be settled in two seconds by opening my calculus book. Perhaps I'm missing some crucial but subtle reason why they can be treated independently... or did someone just "decide" to treat them as independent variables? If so, is that allowable?

I keep looking for a direct answer to this particular question, but it's not easy to find one. I figure it's much more efficient and effective to ask a relative expert directly here.

As far as F is concerned, Y and Y' are independent variables. It doesn't know that one is defined in terms of the other, or that they vary together. All it knows is that when Y varies by this much, and Y' varies by this much, it will vary by this much in response to that. So if you're looking for the response of F to a change in e, you just run the numbers like you would for any other multi-variable function.

Does that make sense? If not, maybe you could give a little more context about this equation, so that we can see in more detail how the various functions are defined.

Thanks, that helped a lot. It's still strange to me that one can argue, as you just did, "as far as F is concerned." I have trouble seeing why that is okay, since treating them as independent seems to amount to a false statement. But it does make sense that F can't really tell them apart; and who's to say that anything treated as independent isn't actually dependent? Just because it's treated that way doesn't mean it is, I guess...

But this is why I brought up the question of the function space: are functions and their derivatives actually independent in a certain context, and is that necessary to justify the "pretending"?
For instance, the Wronskian uses the derivatives of functions to test for linear independence.
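For reference, the Wronskian mentioned here is the standard textbook one (not something specific to this thread), and it tests a different kind of independence than the one at issue:

```latex
% Wronskian of two differentiable functions f and g:
W(f,g)(x) = f(x)\,g'(x) - f'(x)\,g(x)
% If W(f,g)(x_0) \neq 0 for some x_0, then f and g are linearly
% independent as functions of x. That is independence over the whole
% x-domain, which is a different notion from the slot-wise independence
% of Y and Y' as inputs to F.
```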

At its simplest level, a function of two variables like F just takes in two numbers and spits out a result. The fact that we label those inputs Y and Y' doesn't imply anything at all about their relationship--they're just two input parameters that you specify in order to get a value for F. It might make more sense to mentally rename them to something else like A and B, to reinforce the idea that they are just numbers. At this level, all you can do is take the partial derivative of F with respect to A or with respect to B.

Things get more interesting when the sources for A and B are functions, because like you said, they may be related to each other in some way. If both of those inputs are themselves dependent on a third variable e, then you get into the situation you listed above, where you can say ##I(e) = F(Y(e), Y'(e))##. At that point, you can take the derivative of I with respect to e, for which you use the chain rule to reduce to the partial derivatives of F that we computed above: ##\frac {dI}{de} = \frac{\partial F}{\partial A}\frac{dY}{de} + \frac{\partial F}{\partial B}\frac{dY'}{de}##. A change in e will in general result in a change to both Y and Y', and if that's the case, then both of them will contribute in turn to a change in F.
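To tie this back to the calculus-of-variations setting: assuming the standard variation ##Y(x,e) = y(x) + e\,\eta(x)## (an assumed setup, since the original post never spelled it out), the two e-derivatives in the chain-rule sum become explicit:

```latex
% Assumed standard variation (not stated in the thread):
%   Y(x,e)  = y(x)  + e\,\eta(x)
%   Y'(x,e) = y'(x) + e\,\eta'(x)
\frac{dY}{de} = \eta(x), \qquad \frac{dY'}{de} = \eta'(x)
% so the chain-rule sum becomes
\frac{dI}{de} = \frac{\partial F}{\partial Y}\,\eta(x)
             + \frac{\partial F}{\partial Y'}\,\eta'(x)
```

Both terms are generally nonzero, which is exactly why the sum has two contributions: a change in e moves both the Y slot and the Y' slot of F.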

Hello Jere and welcome to PF. Nice way to use the template, but it was intended differently.
My imagination falls short, and I don't see a sum.
Are you aware of the meaning of the partial derivative?

When I say "sum" I'm just referring to the "+" sign after the arrow. And yes, I understand the meaning of the partial derivative, for the most part... I think. My lack of confidence doesn't come from that, at least. Here is a more detailed depiction of the step I'm talking about: