In article <57gjs0$dad@eng-ser1.erg.cuhk.hk>, mcau@ee.cuhk.hk (Au_Yeung_Man_Ching) writes:
|> Hi,
|>
|> I would like to ask some questions:
|>
|> (1) Given the following problem,
|>
|>        min  J(q,S)
|>        q,S
|>
|> we do the above problem like this:
|>
|>        min  min  J(q,S)
|>         q    S
|>
|> i.e. doing the problem sequentially as two separate minimization
|> problems. With q fixed first, minimize J(q,S) w.r.t. S to obtain S*.
|> Then, minimize J(q,S*) w.r.t. q. Finally, get the optimal point
|> (local/global??) (q*,S*).
|>
|> CAN WE DO THE PROBLEM AS DESCRIBED ABOVE??
|> IF CANNOT, how can we do?

If J is convex in the joint variable (q,S), this works and has been
known for a long time (blockwise coordinate descent). The classical
book of Blum & Oettli, 'Mathematical Optimization' (or a similar title),
which appeared with the Springer publisher, is a good source.

Otherwise you might try to solve grad_{q,S} J = 0 by solving
(analytically or numerically) a subsystem for S = S(q), plugging this
into the remaining equations, and solving these for q. (But I am afraid
that this will not work well globally, even if your problem is quite
well behaved but nonconvex.)

hope this helps
peter
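
To make the blockwise coordinate descent idea concrete, here is a minimal sketch on a toy objective of my own choosing (not from the original question): J(q,S) = (q-1)^2 + (S+2)^2 + q*S, whose Hessian [[2,1],[1,2]] is positive definite, so J is convex in the joint variable (q,S) and alternating exact minimization over each block converges.

```python
# Blockwise coordinate descent (alternating minimization) on the
# illustrative jointly convex objective
#
#     J(q, S) = (q - 1)^2 + (S + 2)^2 + q*S
#
# Each inner minimization is solved exactly from the first-order
# condition in one variable with the other held fixed.

def argmin_S(q):
    # dJ/dS = 2(S + 2) + q = 0  =>  S = -2 - q/2
    return -2.0 - q / 2.0

def argmin_q(S):
    # dJ/dq = 2(q - 1) + S = 0  =>  q = 1 - S/2
    return 1.0 - S / 2.0

def coordinate_descent(q0=0.0, tol=1e-12, max_iter=1000):
    q = q0
    for _ in range(max_iter):
        S = argmin_S(q)       # minimize J(q, .) with q fixed
        q_new = argmin_q(S)   # minimize J(., S) with S fixed
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q_new, argmin_S(q_new)

q_star, S_star = coordinate_descent()
print(q_star, S_star)  # close to (8/3, -10/3)
```

The same stationary point also illustrates the elimination approach from the reply: solve the S-subsystem analytically, S(q) = -2 - q/2, substitute into dJ/dq = 2(q-1) + S(q) = 0, and solve the remaining scalar equation to get q* = 8/3 and hence S* = -10/3. For this convex example both routes agree; for a nonconvex J, as the reply warns, neither is guaranteed to find the global minimum.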