I checked the dispatch sequence and Octave is passing the arguments to the dbetai.f function. This function comes from SLATEC (http://www.netlib.org/slatec/fnlib/). I looked a bit further, and the version of the Fortran code used by Octave is up to date. However, the last modification was in 1992, so this is very old code. It appears that the Fortran code itself is the source of the inaccuracies.

Just to check, I wrote a quick C program that uses the GNU Scientific Library to calculate betainc, and it arrives at the same values as Matlab and Wolfram Alpha. So it would appear that the Fortran code is simply too old and should be replaced.

It's more complicated than that. The actual code that should be called for double precision values is dbetai.f.

Perhaps there is a dispatch problem when passing execution from the Octave interpreter written in C++ (betainc.cc), to liboctave, also written in C++ (lo-specfun.cc), and finally to Fortran (dbetai.f). That would be the first thing to check.

If it is dispatching to the correct Fortran code, then maybe the SLATEC Fortran routines are simply too old.

The following example is meant to point out not an incompatibility with Matlab, but an inaccuracy in Octave's implementation.
e.g. try the following:
Octave's results are:
while e.g. Matlab or Wolfram Alpha (CDF[BetaDistribution[250.005, 49750.995], 0.00780]) yields:
In my opinion, the Octave function betainc.cc
http://octave.org/doxygen/4.0/dd/d3c/betainc_8cc_source.html
returns double precision, while the underlying
Fortran77 code betai.f:
http://octave.org/doxygen/4.0/d1/dc2/betai_8f_source.html
returns single precision only (according to the source header).
betainc.cc does distinguish between double and single precision.

All tests implemented for betainc.cc check to single precision only (a sqrt(eps) tolerance in the assert function), even those declared as double-precision tests.

The deviations have real-life consequences, e.g. for the Harrell-Davis estimator, which can return negative weights if no error handling is performed.