I also noticed that, just as with Mavericks, the paths aren't sticking in ML;
every time I open a new Terminal window I have to enter:
export PATH=/Developer/NVIDIA/CUDA-6.0/bin:$PATH
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-6.0/lib:$DYLD_LIBRARY_PATH
or the compiler can't find nvcc. This is starting to be annoying.

The OS X CUDA Toolkit 7.5 download is 1 GiB... this could take a little while, lol

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions

Toolkit samples building and running fine. Next stop: the BOINC libraries and Xbranch checkout.

Up to the same point, with missing INT_MIN and INT_MAX references. I'll be able to go through the makefiles removing the dead file references, and then find where limits.h is supposed to be included and probably isn't.

I also noticed that, just as with Mavericks, the paths aren't sticking in ML;
every time I open a new Terminal window I have to enter:
export PATH=/Developer/NVIDIA/CUDA-6.0/bin:$PATH
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-6.0/lib:$DYLD_LIBRARY_PATH
or the compiler can't find nvcc. This is starting to be annoying.

That's basically what Petri suggested. In Mavericks I have a file in the home folder named .bashrc. When I enter nano ~/.bashrc it even pops up saying "GNU nano 2.0.6  File: /Users/Tom/.bashrc" and has the above paths listed there. However, I still get:
checking for nvcc... checking for nvcc... no
configure: error: NVCC compiler not found!
if I don't run the paths first. I suppose I'm missing some magic symbol(s) or something? Silly me just pasted the above lines into the file after creating it.
BTW, /Developer/NVIDIA/CUDA-6.0/bin is even in the configure line and the compiler still can't find NVCC:
./configure --disable-graphics --disable-shared --enable-bitness=64 --enable-client --enable-static-client --enable-static --enable-dependency-tracking --enable-intrinsics --build=x86_64-apple-darwin --host=x86_64-apple-darwin --target=x86_64-apple-darwin --with-boinc-platform=x86_64-apple-darwin --enable-fast-math CC="/usr/bin/clang" CPPFLAGS=" -DUSE_I386_OPTIMIZATIONS -DUSE_I386_XEON -DSETI7 -DUSE_SSE41 -O3 -I/Developer/NVIDIA/CUDA-6.0/bin -I/Users/Tom/sah_v7_opt/Xbranch/client/vector -I/usr/local/cuda/lib" LDFLAGS=" -mmacosx-version-min=10.8 -ldl /usr/lib/libz.1.dylib -mtune=core2 -march=core2 -fstrict-aliasing -framework Carbon" BOINCDIR="/Users/Tom/boinc"

While waiting for a CUDA app I decided to try this version 8 thing with OpenCL. I used the same configure line I used months ago on a fresh r3185 sah_v7_opt file I downloaded just after Raistmer finished his last commit. I received MBv7_7.08r3185_ati5_ssse3_x86_64-apple-darwin, obviously a version 7 app. Is there also some magic needed to get a version 8 app?
Unfortunately MBv7_7.08r3185 isn't any faster than my old MBv7_7.08r2955 and still gives the same occasional error; in fact, I believe it's slower and gives more errors: http://setiathome.berkeley.edu/result.php?resultid=4561265556

In El Capitan, for command-line building I had to create export entries in .profile for the paths. For graphical samples that are dynamically linked I had to create an environment.plist file with setenv entries; with that in place, when an app actually builds completely, graphical applications can find the libraries. That won't do for standalone running, of course, so eventually I'll work out how to add the origin into the exe (like before, and on Linux).

First, though, a few things.
Near the top of x86_float4.h there is an #if defined(__linux__) that needs to become #if defined(__linux__) || (defined(__APPLE__) && defined(__MACH__)).
That is the guard that enables inclusion of the <climits> standard header.

Next is a minor hamfist I injected fairly recently to print the driver version during CUDA initialisation. That API is only available on Windows, so it needs disabling.

Last main thing is that, depending on toolkit version, the code generators present for old CUDA devices (compute capability 1.x) will make the compile cack out. I may just switch to an older CUDA version in the morning, so I don't have to deal with adding logic just yet; adding exceptions depending on whether you're using CUDA 7+ will require some fiddling.

From there the CPU files seem to compile under clang here, and the first CUDA file builds, but then the build system starts trying to include something called "Core Framework" which I'm not familiar with, and chokes nastily. I suspect some weird option somewhere, because the samples don't do that. More digging tomorrow.

Can commit the minimal changes to get to that point, which may work on older Xcode+CUDA already, after some rest (crossed eyes make for bad commits).

Thanks Jason, it appears we are making progress. I was going to build a version with CUDA 6.0 for the older devices and test it with my GTS 250. Then I was going to try a version with CUDA 6.5 with my 750Ti; I have another 750Ti due in a couple of days. When I try Petri's modded files with CUDA 6.0 I also receive errors saying the code is for newer hardware, so I suppose if the mod actually works in OS X it can't be used with older devices. Oh well, time to buy a newer card I suppose.

I also noticed that, just as with Mavericks, the paths aren't sticking in ML;
every time I open a new Terminal window I have to enter:
export PATH=/Developer/NVIDIA/CUDA-6.0/bin:$PATH
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-6.0/lib:$DYLD_LIBRARY_PATH
or the compiler can't find nvcc. This is starting to be annoying.

That's basically what Petri suggested. In Mavericks I have a file in the home folder named .bashrc. When I enter nano ~/.bashrc it even pops up saying "GNU nano 2.0.6  File: /Users/Tom/.bashrc" and has the above paths listed there. However, I still get:

Yes you do! .bashrc is read by interactive non-login shells; the OS X Terminal starts login shells, which read ~/.bash_profile (or ~/.profile) instead.
From the link above, about 3/4 of the way to the end:

A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile, ~/.bash_login, and ~/.profile and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

From there the CPU files seem to compile under clang here, and the first CUDA file builds, but then the build system starts trying to include something called "Core Framework" which I'm not familiar with, and chokes nastily. I suspect some weird option somewhere, because the samples don't do that. More digging tomorrow.

Yep, that seems likely, since that Framework stuff looks like totally GUI/Apple stuff.

The book said the best way was to start a new project in Xcode: Mac OS application, command-line tool. That will give you an empty hello world which you can use to compare all the build settings. Remember to hit the All button on the settings to see all of them. Hope that lets you find the one(s) that is being a PITA.

Wasn't paying super attention, but if it was a library, there is a blank template tool for that too.

@Tbar, yes you have to create the file. vi, vim, emacs, or some GUI tool.

Works fine to compile the one file to a .o... but ONLY after you comment out the cudaAcceleration.h include at the top of the source file first (otherwise the same unrequested framework errors). So the gremlin blocking CUDA file compilation is in cudaAcceleration.h (or something it includes).

Will test a bit later how much easier a clean Xcode project will be, and decide whether or not to fix the Mac part of the makefiles then. A lot will depend on what Xcode feels like compared to the more or less fine but broken makefiles. Probably best I head straight toward Gradle automation, ditching all the environments, lol, after v8 prelims are covered. Then I can trigger from one set of Gradle build files across the development fleet, and auto-upload to a website while drinking beer (work setting up, but totally need that, lol).

So maybe you need to #include <cuda_runtime_api.h> wherever you get compiler errors saying cudaError_t is not defined.

To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones

Yeah, there was that (mostly because of a breakage I made putting some Windows code in), then weird Apple problems I've never seen before after that.

I thought Apples were supposed to be all shiny and you just mash things with your forehead.... apparently not.

Looks like we'll be juggling and dodging v8 updates amidst the figuring out. Just something to factor into the process :)

...Can commit the minimal changes to get to that point, which may work on older Xcode+CUDA already, after some rest (crossed eyes make for bad commits)...

Any luck on some new commits? Should I just give up on CUDA Toolkit 6.0 and the older hardware, and move on to 6.5?

Status is juggling ready for v8 at the moment, which messes with the plan quite a bit.

Looking at some of the changes that Petri's put forward, some of which need to go straight into stock, 6.5 or newer would be a better choice where available (maybe 7+ if something he's going to be playing with soon works out). I'll be isolating the mid-range issues, putting those bits on hold along with my major infrastructure changes (for later).

We've been loosely comparing notes on various options from here. (of which there are many)

With preliminary v8 support enabled, and some tweaks in place, I'm hoping for simultaneous Mac+Linux+Win alpha, quickly followed by wider beta (at least for existing main v7 functionality). v8 support is going to depend on last minute changes at beta (Have been told there may be changes there, and could need to hold back until they iron out stock CPU + Splitters)

The Mac part is going to depend on whether Xcode plays ball this week. For me, one more week of work before holidays start, then probably my once-a-year development burst can go from then until (hopefully) February. Since that burst didn't happen last year (for various reasons), most likely I'll stage the ramp-up. But historically the practice is rapid incremental commits & alpha builds with a prescribed test sequence (rather than immediate deployment).

Will put a call out for specific cases that will need testing as things go, though for Mac-CUDA purposes the alpha test team probably amounts to you and me alone, as far as I can tell at the moment.

...It reads the first file that it can find out of ~/.bash_profile, ~/.bash_login, and ~/.profile and does not read any further files.

Hmmm, ALL of those have the same effect. If I have one present the compiler cannot find pkg-config and terminates immediately. Same thing in Mavericks and Mountain Lion. Wasted about 30 minutes trying to reinstall/fix pkg-config without any success; remove the file and it works the way it used to.

Now with CUDA Toolkit 6.5 I'm seeing about the same as with CUDA 6.0. In Mavericks I still get:

clang: error: no such file or directory: 'seti_cuda-analyzeFuncs_sse2.o'
clang: error: no such file or directory: 'seti_cuda-analyzeFuncs_sse3.o'
clang: error: no such file or directory: 'seti_cuda-analyzeFuncs.o'
clang: error: no such file or directory: 'cudaAcceleration.o'
clang: error: no such file or directory: 'cudaAcc_CalcChirpData.o'
clang: error: no such file or directory: 'cudaAcc_fft.o'
clang: error: no such file or directory: 'cudaAcc_gaussfit.o'
clang: error: no such file or directory: 'cudaAcc_PowerSpectrum.o'
clang: error: no such file or directory: 'cudaAcc_pulsefind.o'
clang: error: no such file or directory: 'cudaAcc_summax.o'
clang: error: no such file or directory: 'cudaAcc_transpose.o'
clang: error: no such file or directory: 'cudaAcc_utilities.o'
clang: error: no such file or directory: 'cudaAcc_autocorr.o'
make[2]: [seti_cuda] Error 1 (ignored)
...
/bin/cp seti_cuda setiathome_x41zc_x86_64-apple-darwin_cuda65
cp: seti_cuda: No such file or directory
make[2]: [setiathome_x41zc_x86_64-apple-darwin_cuda65] Error 1 (ignored)
strip setiathome_x41zc_x86_64-apple-darwin_cuda65
error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/strip: can't open file: setiathome_x41zc_x86_64-apple-darwin_cuda65 (No such file or directory)
make[2]: [setiathome_x41zc_x86_64-apple-darwin_cuda65] Error 1 (ignored)

In Mountain Lion the SSE errors are not there.
Also, with the modified code it is just complaining about 'compute_10' now, but it has the same complaint with the stock code: