Visual Computing Seminar (Spring 2016)

General information

The Visual Computing Seminar is a new weekly seminar series on topics in visual computing.

Why: The motivation for creating this seminar is that EPFL has a critical mass of people who are working on subtly related topics in computational photography, computer graphics, geometry processing, human–computer interaction, computer vision and signal processing. Having a weekly point of interaction will provide exposure to interesting work in this area and increase awareness of our shared interests and other commonalities like the use of similar computational tools — think of this as the visual computing edition of the “Know thy neighbor” seminar series.

Who: The target audience is faculty, students and postdocs in the visual computing disciplines, but the seminar is open to anyone and guests are welcome. There is no need to formally enroll in a course. The format is very flexible and will include 45-minute talks with Q&A, talks by external visitors, as well as shorter presentations. In particular, the seminar is also intended as a way for students to obtain feedback on shorter (~20-minute) talks preceding a presentation at a conference. If you are a student or postdoc in one of the visual computing disciplines, you’ll probably receive an email from me soon about scheduling a presentation.

Where and when: every Wednesday in BC02 (next to the ground floor atrium). Food is served at 11:50, and the actual talk starts at 12:15.

How to be notified: If you want to be kept up to date with announcements, please send me an email and I’ll put you on the list. If you are working in LCAV, CVLAB, IVRL, LGG, LSP, IIG, CHILI, LDM or RGL, you are automatically subscribed to future announcements, so there is nothing you need to do.

Schedule

I’ll introduce the seminar and will give a presentation about a technique to generate high-quality meshes from many different kinds of input.

Abstract: We present a novel approach to remesh a surface into an isotropic triangular or quad-dominant mesh using a unified local smoothing operator that optimizes both the edge orientations and vertex positions in the output mesh. Our algorithm produces meshes with high isotropy while naturally aligning and snapping edges to sharp features. The method is simple to implement and parallelize, and it can process a variety of input surface representations, such as point clouds, range scans and triangle meshes. Our full pipeline executes instantly (less than a second) on meshes with hundreds of thousands of faces, enabling new types of interactive workflows. Since our algorithm avoids any global optimization, and its key steps scale linearly with input size, we are able to process extremely large meshes and point clouds, with sizes exceeding several hundred million elements. To demonstrate the robustness and effectiveness of our method, we apply it to hundreds of models of varying complexity.
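The unified operator described above alternates between edge-orientation and vertex-position updates. As a loose, hypothetical illustration of the position half only, the sketch below averages each vertex with its neighbors and keeps the update tangential to the surface; the function name and the tangent-plane projection are assumptions for illustration, not the authors' exact operator.

```python
import numpy as np

def local_smoothing_pass(V, normals, neighbors):
    """One parallel local smoothing pass: move each vertex toward the
    average of its neighbors, then remove the normal component of the
    update so the motion is (approximately) tangential to the surface."""
    V_new = V.copy()
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        avg = V[nbrs].mean(axis=0)
        delta = avg - V[i]
        # project out the normal component -> isotropy-improving tangential motion
        delta -= np.dot(delta, normals[i]) * normals[i]
        V_new[i] = V[i] + delta
    return V_new
```

Because each vertex reads only the previous iterate, every vertex update is independent, which is what makes such passes easy to parallelize.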

Abstract: We present a new algorithm for computational caustic design. Our algorithm solves for the shape of a transparent object such that the refracted light paints a desired caustic image on a receiver screen. We introduce an optimal transport formulation to establish a correspondence between the input geometry and the unknown target shape. A subsequent 3D optimization based on an adaptive discretization scheme then finds the target surface from the correspondence map. Our approach supports piecewise smooth surfaces and non-bijective mappings, which eliminates a number of shortcomings of previous methods. This leads to a significantly richer space of caustic images, including smooth transitions, singularities of infinite light density, and completely black areas. We demonstrate the effectiveness of our approach with several simulated and fabricated examples.
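The optimal transport formulation above is continuous; as a rough discrete analogue, here is a sketch of entropy-regularized transport solved with Sinkhorn iterations. This is a standard generic method, not necessarily the solver used in the work, and all names are illustrative.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=200):
    """Entropy-regularized optimal transport: returns a coupling P whose
    row sums match a and column sums match b (approximately), minimizing
    <P, C> plus an eps-weighted entropy term, via Sinkhorn scaling."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)         # scale columns to match b
        u = a / (K @ v)           # scale rows to match a
    return u[:, None] * K * v[None, :]
```

The resulting coupling plays the role of the correspondence map between source and target distributions.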

Abstract: Continuous-domain visual signals are usually captured as discrete (digital) images. This operation is not invertible in general, in the sense that the continuous-domain signal cannot be exactly reconstructed from the discrete image unless it satisfies certain constraints. In this talk, we focus on the problem of recovering binary shape images from a set of samples. First, we propose a scheme for sampling and reconstruction of shape images with finite rate of innovation (FRI). More specifically, we model the shape boundaries as a subset of an algebraic curve given by an implicit bivariate polynomial. We show that the image parameters (i.e., the polynomial coefficients) are the solutions of a set of linear annihilation equations whose coefficients are the image moments. We then replace conventional 2D moments with more stable generalized moments that are adjusted to the given sampling kernel. This leads to successful reconstruction of shape images of moderate complexity from samples generated with realistic sampling kernels and in the presence of low to moderate noise levels.
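The annihilation system in the talk is built from image moments; as a simpler stand-in that shows the same nullspace idea, the sketch below fits an implicit bivariate quadratic to boundary samples by taking the smallest singular vector of a monomial matrix. This is a generic technique, not the moment-based formulation of the talk.

```python
import numpy as np

def implicit_conic(points):
    """Fit an implicit quadratic c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2 = 0
    through boundary samples: the coefficient vector spans the nullspace of
    the monomial matrix, recovered as the smallest right singular vector."""
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]  # coefficients, defined up to scale
```

For noisy data the same construction degrades gracefully: the smallest singular vector is the least-squares algebraic fit.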

The proposed FRI scheme falls short of reconstructing shape images with intricate geometries from realistic samples. In the second part of this talk, we propose a scheme for recovering shape images with smooth boundaries from a set of samples. The reconstructed image is constrained to regenerate the same samples (measurement consistency) as well as to form a bilevel image. We initially formulate the reconstruction technique by minimizing the shape perimeter over the set of consistent binary shapes. Next, we relax the non-convex shape constraint to transform the problem into minimizing the total variation over consistent non-negative-valued images. We also introduce a requirement (called reducibility) that guarantees equivalence between the two problems. We illustrate that the reducibility property effectively sets a requirement on the minimum sampling density. In this scheme, unlike FRI schemes, we do not constrain the boundary curves by any specific model. Instead, we let the sampling kernel and the sample values determine them. As a result, there is less restriction on the achievable shape geometries.
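A minimal sketch of the relaxed problem, assuming a simple box-average sampling kernel and a smoothed TV penalty (both assumptions for illustration; the talk's kernels and solver are more general): minimize total variation over nonnegative images while penalizing violation of measurement consistency.

```python
import numpy as np

def smoothed_tv_grad(u, eps=1e-3):
    """Gradient of the smoothed anisotropic TV sum(sqrt(d^2 + eps))."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    dx = np.diff(u, axis=1); dy = np.diff(u, axis=0)
    sx = dx / np.sqrt(dx**2 + eps); sy = dy / np.sqrt(dy**2 + eps)
    gx[:, 1:] += sx; gx[:, :-1] -= sx
    gy[1:, :] += sy; gy[:-1, :] -= sy
    return gx + gy

def reconstruct(samples, factor, lam=0.1, steps=300, lr=0.05):
    """Recover a nonnegative image whose box-averaged samples match the
    measurements while total variation stays small (a convex surrogate
    of the binary-shape problem)."""
    H, W = samples.shape[0] * factor, samples.shape[1] * factor
    u = np.full((H, W), samples.mean())
    for _ in range(steps):
        avg = u.reshape(samples.shape[0], factor,
                        samples.shape[1], factor).mean(axis=(1, 3))
        resid = avg - samples
        # adjoint of box averaging spreads the residual back to pixels
        g_data = np.repeat(np.repeat(resid, factor, 0), factor, 1) / factor**2
        u -= lr * (g_data + lam * smoothed_tv_grad(u))
        u = np.clip(u, 0, None)  # nonnegativity constraint
    return u
```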

Abstract: Mean-field variational inference is one of the most popular approaches to inference in discrete random fields. Standard mean-field optimization is based on coordinate descent and in many situations can be impractical. Thus, in practice, various parallel techniques are used, which either rely on ad hoc smoothing with heuristically set parameters, or put strong constraints on the type of models. In this paper, we propose a novel proximal gradient-based approach to optimizing the variational objective. It is naturally parallelizable and easy to implement. We prove its convergence, and then demonstrate that, in practice, it yields faster convergence and often finds better optima than more traditional mean-field optimization techniques. Moreover, our method is less sensitive to the choice of parameters.
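As a rough illustration of damped parallel mean-field, here is a mirror-descent-style update for a pairwise MRF in which the entropy prox reduces to a softmax over each node's marginal. This is a standard construction, not necessarily the exact update of the paper, and all names are illustrative; with damping eta = 1 it falls back to plain parallel mean-field.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def parallel_mean_field(unary, W, pairwise, eta=0.5, iters=100):
    """Damped parallel mean-field for a pairwise MRF.
    unary: (N, L) potentials, W: (N, N) coupling weights,
    pairwise: (L, L) label compatibility. All nodes update at once;
    the log-domain interpolation damps oscillations."""
    q = softmax(unary)
    for _ in range(iters):
        msg = W @ q @ pairwise  # expected pairwise potentials
        q = softmax((1 - eta) * np.log(q + 1e-12) + eta * (unary + msg))
    return q
```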

Abstract: By incorporating computational methods into the image acquisition pipeline, computational photography has opened up new avenues in the representation and visualization of real-world objects in the digital world. Stained glass windows are a dynamic art form whose appearance changes constantly due to the ever-changing outdoor illumination. They are therefore an exceptional candidate for virtual relighting. However, as they are anchored and very large in size, it is often impossible to sample their entire light transport with controlled illumination. We present a new acquisition and modeling framework for inverse rendering of stained glass windows by exploiting sparsity-inducing priors on the light transport. We show by experiments that our proposed solution preserves volume impurities under both controlled and uncontrolled natural illumination. We also present a matrix completion approach to synthesize glass slabs based on learnt priors. Finally, we present an easy-to-use, handheld acquisition framework to acquire relightable photographs of more general, reflective scenes such as paintings. Our acquisition system consists of two synchronised smartphones, one observing the scene while the other is moved along an arbitrary trajectory by the user. We then propose a compressive sensing reconstruction of the sparsely sampled light transport, which in turn is used for scene relighting.
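The compressive sensing step above amounts to sparse recovery from few measurements. A generic sketch using ISTA (iterative shrinkage-thresholding) for the l1-regularized least-squares problem, under a random sensing matrix; this is an illustration of the recovery principle, not the authors' solver:

```python
import numpy as np

def ista(A, b, lam=0.05, iters=1000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: gradient step on
    the quadratic, then soft thresholding as the prox of the l1 term."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x
```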

Abstract: We present a computational method for interactive 3D design and rationalization of surfaces via auxetic materials, i.e., flat, flexible materials that can stretch uniformly up to a certain extent. A key motivation for studying such materials is that one can approximate doubly-curved surfaces (such as the sphere) using only flat pieces, making them attractive for fabrication. We physically realize surfaces by introducing cuts into approximately inextensible material such as sheet metal, plastic, or leather. The cutting pattern is modeled as a regular triangular linkage that yields hexagonal openings of spatially-varying radius when stretched. In the same way that isometry is fundamental to modeling developable surfaces, we leverage conformal geometry to understand auxetic design. In particular, we compute a global conformal map with bounded scale factor to initialize an otherwise intractable nonlinear optimization. We demonstrate that this global approach can handle non-trivial topology and non-local dependencies inherent in auxetic material. Design studies and physical prototypes are used to illustrate a wide range of possible applications.

Abstract: We describe the color reproduction workflow that enables converting intensity-tunable emitted or reflected spectral radiation stimuli into full color images. This workflow comprises the characterization of the tunable radiation stimuli, the establishment of a model for predicting the colors that these tunable radiations achieve, the mapping of colors defined in an input color space such as sRGB to colors achievable by these tunable radiations, and the calculation of the intensity of these radiations in order to display a given color. As concrete examples, we consider the creation of fluorescent color images visible only under UV light as well as the creation of classical cyan, magenta and yellow prints.
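A toy version of the mapping and intensity-calculation steps, under an additive mixing assumption and hypothetical primaries (the PRIMARIES matrix and all names below are illustrative, not measured data): linearize sRGB, convert to XYZ with the standard sRGB matrix, solve the mixing model for stimulus intensities, and clip as a crude in-gamut projection.

```python
import numpy as np

# Hypothetical measured XYZ tristimulus values of three tunable
# radiation stimuli at full intensity (columns = X, Y, Z weights).
PRIMARIES = np.array([[0.41, 0.36, 0.18],
                      [0.21, 0.72, 0.07],
                      [0.02, 0.12, 0.95]])

def srgb_to_linear(c):
    """Undo the sRGB transfer function (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def intensities_for_srgb(rgb):
    """Map an sRGB color to stimulus intensities: linearize, convert to
    XYZ, solve the additive mixing model, clip to [0, 1] for gamut."""
    M_srgb = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
    xyz = M_srgb @ srgb_to_linear(rgb)
    w = np.linalg.solve(PRIMARIES, xyz)
    return np.clip(w, 0.0, 1.0)
```

A real workflow would replace the clip with perceptual gamut mapping and the linear model with a characterized prediction model.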

Abstract: We adapted the classic color-reproduction workflow in order to print color images on highly specular metallic sheets. Because of the strong specular reflection of the metallic prints, the resulting color images have a colorful appearance unmatched by conventional prints on paper.

Additionally, by optimizing both the diffusing white ink and colored ink halftones, we were able to independently control lightness under specular and under non-specular observation conditions. This allows us to hide patterns or grayscale shapes under one viewing condition, specular or non-specular, and reveal them under the other viewing condition.

Finally, the anisotropic line halftones printed on the metallic sheet change color upon in-plane rotation by 90°. This color change occurs due to the directional optical dot-gain effect. By analyzing this phenomenon and modeling it, we were able to create an innovative color reproduction workflow relying on a 6-dimensional spectral prediction model. This workflow enables creating two different images at the same position at once, one viewable without rotation and one viewable after a 90° azimuthal rotation of the print.

Abstract: In this talk I will present a parallel prioritized Jacobian-based inverse kinematics algorithm for multi-threaded architectures. The approach solves damped least-squares inverse kinematics using a parallel line search by identifying and sampling critical input parameters. Parallel competing execution paths are spawned for each parameter in order to select the optimum which minimizes the error criterion. The algorithm is highly scalable and can handle complex articulated bodies at interactive frame rates. The results are shown on complex skeletons consisting of more than 600 degrees of freedom while being controlled using multiple end effectors. We implement our algorithm both on multi-core and GPU architectures and demonstrate how the GPU can further exploit fine-grained parallelism not directly available on a multi-core processor. The implementations are 10–150 times faster compared to state-of-the-art serial implementations while providing higher accuracy. We also demonstrate the scalability of the algorithm over multiple scenarios and explore the GPU implementation in detail.
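The damped least-squares step at the heart of this approach is easy to state; below is a single-threaded sketch on a planar two-link arm. The arm, names, and damping value are assumptions for illustration; the talk's contribution is the parallel prioritized line search around such steps, which is not reproduced here.

```python
import numpy as np

def dls_step(J, err, damping=0.1):
    """One damped least-squares IK step:
    dtheta = J^T (J J^T + lambda^2 I)^-1 err."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), err)

def fk_2link(theta, lengths=(1.0, 1.0)):
    """End-effector position of a planar two-link arm."""
    a, b = theta
    l1, l2 = lengths
    return np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                     l1 * np.sin(a) + l2 * np.sin(a + b)])

def jacobian_2link(theta, lengths=(1.0, 1.0)):
    a, b = theta
    l1, l2 = lengths
    return np.array([[-l1 * np.sin(a) - l2 * np.sin(a + b), -l2 * np.sin(a + b)],
                     [ l1 * np.cos(a) + l2 * np.cos(a + b),  l2 * np.cos(a + b)]])

def solve_ik(target, theta0=(0.3, 0.3), iters=200):
    """Iterate DLS steps until the end effector reaches the target."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        err = target - fk_2link(theta)
        theta = theta + dls_step(jacobian_2link(theta), err)
    return theta
```

The damping term trades convergence speed for stability near singular configurations, which matters when many competing execution paths sample the parameter space in parallel.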

Where: note that on this day, the seminar takes place in INM 11 at the same time as usual.

Abstract: Consistent reconstruction is a very well-known concept in the signal processing community for sampling and reconstruction of different classes of signals. Consistent algorithms work by enforcing the constraint that the reconstructed signal, after sampling, leads to the same set of samples as the input of the reconstruction algorithm.

In this talk, we will investigate the idea of consistent reconstruction in the area of multiple-view geometry, especially the 3-D triangulation problem. We will analyse different state-of-the-art triangulation algorithms to see whether or not they satisfy the consistency condition. Moreover, we show that, under certain conditions, consistent reconstruction can lead to an optimal decay in the reconstruction error rate with respect to the number of cameras.
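To make the consistency condition concrete: triangulate a 3D point with a standard linear (DLT) method, then reproject the estimate and compare against the original image samples. A consistent algorithm regenerates the samples exactly in the noise-free case; this sketch is generic, not one of the specific algorithms analysed in the talk.

```python
import numpy as np

def triangulate_dlt(Ps, xs):
    """Linear (DLT) triangulation: for each camera P and observation
    (u, v), stack the constraints u*P[2] - P[0] and v*P[2] - P[1],
    then take the smallest right singular vector as the homogeneous 3D point."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]
    return X[:3] / X[3]

def reproject(P, X):
    """Project a 3D point through camera matrix P (consistency check)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```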

Moreover, by applying consistent reconstruction, we can obtain a confidence area for the reconstructed result, which lets us efficiently attack other problems in image processing, including multi-camera tracking in a reconfigurable camera set-up and optimal camera array design. I will briefly explain these directions as well as some results on real and synthetic datasets.

Abstract: Spectral Filter Arrays (SFAs) are an emerging technology for multispectral image acquisition. SFAs make it possible to use a compact multispectral sensor to acquire still images and/or video. An introduction will present and compare typical multispectral and color image acquisition systems in order to clearly identify the underlying differences. I will then present the different aspects that may be taken into account while designing or using such a sensor (filter sensitivity, energy balance, demosaicing, applications, etc.). Finally, I will present the characteristics of a prototype camera that we developed at the University of Bourgogne.
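For the demosaicing aspect mentioned above, here is a naive baseline sketch that fills each spectral band of an SFA mosaic by normalized box filtering of that band's own samples. This is only an illustration under assumed names and a periodic pattern; real SFA demosaicing is considerably more sophisticated.

```python
import numpy as np

def box_filter(im, k):
    """k x k box sum with zero padding, via 2-D cumulative sums."""
    p = k // 2
    c = np.pad(im, p).cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    H, W = im.shape
    return c[k:k + H, k:k + W] - c[:H, k:k + W] - c[k:k + H, :W] + c[:H, :W]

def demosaic_sfa(mosaic, pattern):
    """Per-band normalized filtering: each band keeps its own sampled
    pixels and fills the gaps with a local average of those samples."""
    H, W = mosaic.shape
    ph, pw = pattern.shape
    bands = int(pattern.max()) + 1
    tiled = np.tile(pattern, (H // ph + 1, W // pw + 1))[:H, :W]
    k = 2 * max(ph, pw) - 1   # window large enough to always catch a sample
    out = np.zeros((bands, H, W))
    for b in range(bands):
        mask = (tiled == b).astype(float)
        out[b] = box_filter(mosaic * mask, k) / np.maximum(box_filter(mask, k), 1e-9)
    return out
```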

Bio: Jean-Baptiste Thomas received his Bachelor in Applied Physics in 2004 and his Master in Optics, Image and Vision in 2006, both from the Université Jean Monnet in France. He received his PhD from the University of Bourgogne in 2009. After a stay as a researcher at Gjøvik University College and then at the C2RMF (Centre de Recherche et de Restauration des Musées de France), he is now Maître de Conférences (Associate Professor) at the Université de Bourgogne. His research focuses on color science and on color and multi-spectral imaging.

Abstract: We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame.

We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state of the art by a large margin on the Human3.6M, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.

What Players do with the Ball: A Physically Constrained Interaction Modeling

Abstract: Tracking the ball is critical for video-based analysis of team sports. However, it is difficult, especially in low-resolution images, due to the small size of the ball, its speed that creates motion blur, and its often being occluded by players. In this paper, we propose a generic and principled approach to modeling the interaction between the ball and the players while also imposing appropriate physical constraints on the ball’s trajectory. We show that our approach, formulated in terms of a Mixed Integer Program, is more robust and more accurate than several state-of-the-art approaches on real-life volleyball, basketball, and soccer sequences.