11:46:31 <drmeister> The static analyzer is running on AWS - it's time to bring MPS back up.

12:10:14 <drmeister> ::notify attila_lendvai I made a change to the parallel scraping code - so that it always does 8 parallel scraping jobs. It looks like every time we pass a different -j xx argument to ./waf it starts scraping everything all over again. My compromise hack is to stop that from happening. We need a better way.

13:08:48 <balrog> The only way to programmatically find out if it's needed is using otool to check if the lib is linked against libffi. Or maybe compile and attempt to run a test program (that's the autotools way, right?)
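A minimal sketch of the otool check balrog describes. The function name and the line parsing are assumptions, not anything from the Clasp build; `otool -L` itself exists only on macOS, so the function takes the command's captured output rather than running it:

```python
# Hypothetical helper: decide whether a Mach-O library links against libffi
# by parsing the output of `otool -L <lib>` (macOS only). `otool_output`
# is the text that command would print.
def links_against_libffi(otool_output):
    # The first line of `otool -L` output is the library's own path;
    # each following line is one linked dependency.
    for line in otool_output.splitlines()[1:]:
        dep = line.strip().split(' ')[0]  # path comes before version info
        if 'libffi' in dep:
            return True
    return False
```

In a build script one would feed this the result of `subprocess.run(["otool", "-L", lib], capture_output=True, text=True).stdout`, falling back to the compile-and-run probe on platforms without otool.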

16:24:52 <frgo> I just read the log. drmeister: Re the number of scraper jobs: You said you pinned the number of jobs to 8. Why not use something like https://github.com/muyinliu/cl-cpus to have a more dynamic solution, e.g. for high-end workstations with a lot more cores? Yes, I know, one more dependency. If that's not a viable option then why not "steal" the relevant code?
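Since the scraping is driven from the wscript (Python), the "steal the relevant code" option is small. A hedged sketch - the function name and the cap parameter are made up for illustration:

```python
import os

# Derive a scraper job count from the machine's CPU count instead of
# hard-coding 8. `cap` optionally bounds the count (e.g. to avoid
# oversubscribing I/O on very wide workstations).
def scraper_jobs(cap=None):
    n = os.cpu_count() or 1  # os.cpu_count() may return None
    return min(n, cap) if cap is not None else n
```

Whether this helps depends on fixing the separate problem first: the job count must not feed into waf's dependency signatures, or changing core counts would retrigger the same spurious re-scrape as changing -j does now.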

16:29:45 <drmeister> I wrote something in the wscript file to do parallel scraping and then attila wrote something else, and his (and maybe mine) have this annoying unintended side effect that they force waf to rebuild things that it shouldn't when you change the -j option.

17:18:43 <drmeister> We were talking about adding value numbering and Kildall to eliminate unnecessary temporaries. Then we talked about how mem2reg converts our code to SSA and that eliminates unnecessary temporaries.

17:19:28 <drmeister> Is value numbering a stronger approach to eliminating temporaries? Is that why we don't just convert to SSA and remove useless stores/loads?
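One way value numbering is stronger: mem2reg-style store/load elimination only removes a temporary when it is a plain copy of another variable, while value numbering also detects that two *computations* produce the same value. A textbook local-value-numbering sketch over a hypothetical three-address IR (tuples of `(dest, op, arg1, arg2)`; all names here are invented for illustration, this is not Clasp's IR):

```python
# Local value numbering over one basic block. Instructions are tuples
# (dest, op, arg1, arg2); arg2 may be None. Redundant computations are
# replaced with copies from the first instruction that computed the value.
def local_value_numbering(block):
    table = {}    # (op, vn1, vn2) -> value number
    var2vn = {}   # variable name -> value number
    vn2var = {}   # value number -> canonical variable holding it
    next_vn = 0
    out = []
    for dest, op, a, b in block:
        key_parts = []
        for arg in (a, b):
            if arg is None or isinstance(arg, int):
                key_parts.append(arg)       # constant or absent operand
            else:
                if arg not in var2vn:       # live-in variable: fresh number
                    var2vn[arg] = next_vn
                    vn2var[next_vn] = arg
                    next_vn += 1
                key_parts.append(var2vn[arg])
        key = (op, key_parts[0], key_parts[1])
        if key in table:
            # Same op on the same value numbers: reuse the earlier result.
            vn = table[key]
            out.append((dest, 'copy', vn2var[vn], None))
        else:
            vn = next_vn
            table[key] = vn
            vn2var[vn] = dest
            next_vn += 1
            out.append((dest, op, a, b))
        var2vn[dest] = vn
    return out
```

Running it on `[('t1','add','x','y'), ('t2','add','x','y')]` turns the second add into a copy of `t1` - a redundancy that store/load elimination alone would not catch, since `t2` was never written as a copy. (The sketch ignores reassignment of a destination variable, which SSA form would make a non-issue.)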

17:20:26 <karlosz> I rewrote SSA entirely, using basic blocks. The algorithm is essentially straight out of the textbook.

17:20:47 <Bike> The issue is we'd like to analyze things about closure variables as well, which can't be SSAd in general.