Topic: Are Some Bets Worse Than Others? (Read 5722 times)

That's how I do it, Dobble: long-term testing on certain criteria to generate a "confidence level".

Quote

In mathematics, a combination is a selection of items from a collection, such that (unlike permutations) the order of selection does not matter. For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. If the set has n elements, the number of k-combinations is equal to the binomial coefficient
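The quoted definition can be checked directly with the Python standard library; this is just an illustration of the fruit example above, nothing specific to roulette.

```python
# Illustrating the quoted definition: k-combinations of a set, counted
# by the binomial coefficient. Uses only the Python standard library.
from itertools import combinations
from math import comb

fruits = ["apple", "orange", "pear"]

# All 2-combinations of the 3 fruits (order does not matter).
pairs = list(combinations(fruits, 2))
print(pairs)   # [('apple', 'orange'), ('apple', 'pear'), ('orange', 'pear')]

# The count matches the binomial coefficient C(n, k) = n! / (k! * (n - k)!).
print(comb(3, 2))   # 3
```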

Bagging (stands for Bootstrap Aggregating) is a way to decrease the variance of your prediction by generating additional data for training from your original dataset using combinations with repetitions to produce multisets of the same cardinality/size as your original data.
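As a minimal sketch of what that paragraph describes: resample the original data with replacement to produce multisets of the same size, estimate something on each, and average. The "estimator" here is just the sample mean, purely for illustration.

```python
# Toy sketch of bagging (bootstrap aggregating): draw bootstrap samples
# (same size as the original data, with replacement), compute an estimate
# on each, and aggregate by averaging to reduce variance.
import random

random.seed(42)
data = [random.gauss(0, 1) for _ in range(200)]

def bootstrap_sample(xs):
    # Combination with repetition: draw len(xs) items with replacement.
    return [random.choice(xs) for _ in xs]

n_bags = 500
means = [sum(bootstrap_sample(data)) / len(data) for _ in range(n_bags)]

# Aggregating the per-bag estimates gives a lower-variance prediction
# than any single bootstrap estimate.
bagged_estimate = sum(means) / n_bags
print(round(bagged_estimate, 3))
```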

The above quote comes from the site you linked and resonates with the way I see roulette. The quote I referenced comes from the link in the above quote, which is to a Wikipedia article about Combinations in statistics.

I'm just trying to figure out how to apply this to a roulette sequence of say, 16M spins.

I was going to ask a similar question. Reading further, I just realized that the purpose of "reducing variance" is to obtain a more "solid" statistical model, so it doesn't ACTUALLY reduce the variance; it just "averages" it so statisticians can study the "normal" data?

So if that is true, we still have to analyze and deal with the variance because we can't make the wheel "bootstrap itself"?

The way I look at roulette now, we actually WANT the variance so we can profit from it!?

1) Hot numbers are POSITIVE variance (but I have not found how to predict this [yet?])
2) Gapping is NEGATIVE variance that can be observed and acted upon for profit
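One hedged way to quantify the "gapping" idea is to track, for each number, how many spins have passed since it last hit. The spin sequence below is simulated, and the helper name `gaps_since_hit` is my own label, not anyone's actual code.

```python
# Sketch: measure per-number gaps (spins since last hit) over a spin
# history. Simulated single-zero wheel; purely illustrative.
import random

random.seed(1)
spins = [random.randrange(37) for _ in range(1000)]

def gaps_since_hit(spins, wheel_size=37):
    last_seen = {}
    for i, n in enumerate(spins):
        last_seen[n] = i
    last_index = len(spins) - 1
    # A number never seen gets a gap equal to the whole history length.
    return {n: last_index - last_seen[n] if n in last_seen else len(spins)
            for n in range(wheel_size)}

gaps = gaps_since_hit(spins)
coldest = max(gaps, key=gaps.get)   # the widest current gap
print(coldest, gaps[coldest])
```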

So instead of reducing variance we want to study and quantify it!?

I mean granted in a perfect (not you Mr. Perfect! XD) world according to our desires, we would all love our selection to have reduced variance so we would always win without worry or effort but I don't think that is a practical goal??

Reyth, you're right that the purpose of reducing variance is to obtain a better statistical model, but this in turn will result in an algorithm which makes better predictions, resulting in fewer losing bets.

Quote

The way I look at roulette now, we actually WANT the variance so we can profit from it!?

1) Hot numbers is POSITIVE variance (but I have not found how to predict this

You could try using some of the ML (machine learning) algorithms given in this PDF. ML is not an easy subject to learn because of the higher maths involved, but this is a nice step-by-step guide which gives the basic theory and examples and only assumes minimal maths. If you can code you should have no trouble writing the algorithms in any programming language or Excel.

The 2 'ensemble' methods given at the end are ways of 'boosting' performance but you need to learn the basics first.

It's all pointless. What's the point of bothering with all of this? Fancy words? The model is physical; stats, or applied stats, are used to create and refine that model. Physics comes first, stats second. You guys don't dry the rope before washing it, right?

1. In terms of the representation used by the algorithm (the actual numbers stored in a file).

2. In terms of the abstract repeatable procedures used by the algorithm to learn a model from data and later to make predictions with the model.

3. With clear worked examples showing exactly how real numbers plug into the equations and what numbers to expect as output.

This approach doesn't sound useless to me. I don't think the Physics of Statistics™ is useless either. Probability is useful, and the HE is minuscule, overrated and only used as an excuse not to investigate Random Bias™ and to berate system players!

If players had put as much effort into investigating the Physics of Statistics as they do into quoting the minuscule and insignificant HE to system players, they would already understand Random Bias™!

Ya I am on page 8 and I am already starting to formulate the beginnings of my f(InputVector)!

I want to learn 3 things:

1) When does a hot streak start
2) When does a hot streak end
3) When does the first substantial gap occur

#3 is the most important. So, instance #1 is GAP >= 27, and the input variables are the most difficult and important part:

a) HS Ratio (hits to spins)
b) Previous gap (<=26) history
c) Current streak (hits within the selection)
d) Current gapping (within the selection)
e) Number of current coups
f) NSP (number of total spins as a sum aggregate since a hit has been obtained by each number)
g) [Relevant data from all the other selections] <=== very large & detailed
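The attribute list above could be packed into an input vector along these lines. The field names are my own labels for attributes a)–f), the values are placeholders, and attribute g) is omitted because its exact layout isn't specified in the post.

```python
# Sketch of assembling the described input vector as a typed record.
# All values here are illustrative placeholders.
from dataclasses import dataclass, astuple

@dataclass
class GapInstance:
    hs_ratio: float        # a) hits-to-spins ratio
    prev_gap_history: int  # b) count of previous gaps <= 26
    current_streak: int    # c) consecutive hits within the selection
    current_gap: int       # d) spins since last hit within the selection
    coups: int             # e) number of current coups
    nsp: int               # f) aggregate spins since each number last hit

x = GapInstance(hs_ratio=0.42, prev_gap_history=3,
                current_streak=2, current_gap=11, coups=57, nsp=210)
input_vector = astuple(x)   # f(InputVector) would consume this tuple
print(input_vector)
```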

And what types of algorithms have you been using up till now? A loose translation of the term "non-parametric algorithm" would be "I have no idea what I'm doing or what I'm going to do next"... Past data is just past data. You need to model, and look at how current data fits your model. That's simply impossible with non-parametric algorithms. Fancy words.

At this point, though, I have a large set of data with many attributes to analyze, and so if I can get the computer to analyze them for me and tell me what causes PLEs without me having to manually sort through millions of loss events, I think that's helpful!

It's non-parametric because I don't know what attribute configurations cause loss events, but I have a lot of data that can be analyzed to see if there is a correlation between any of the attributes and the loss events; i.e. f(cause).
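That correlation check could look something like the sketch below: rows of attribute values and a 0/1 loss-event flag, with a Pearson correlation per attribute. The data is synthetic, with the loss flag deliberately tied to the "gap" attribute so that a correlation exists to find.

```python
# Hedged sketch: correlate each attribute with a loss-event flag.
# Synthetic data only; attribute names are illustrative.
import random

random.seed(7)
rows = [{"gap": random.randrange(40), "streak": random.randrange(5)}
        for _ in range(500)]
# Synthetic loss flag loosely driven by "gap" so a signal is present.
losses = [1 if r["gap"] > 27 and random.random() < 0.8 else 0 for r in rows]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for attr in ("gap", "streak"):
    r = pearson([row[attr] for row in rows], losses)
    print(attr, round(r, 3))
```

On this synthetic data the "gap" attribute correlates with losses and "streak" does not, which is the kind of screening step the post describes.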

And if you try and tell me there is no f(cause) in random data, I will refer you back to the first part of my last post!

Quote

It's all pointless. What's the point of bothering with all of this? Fancy words? The model is physical; stats, or applied stats, are used to create and refine that model. Physics comes first, stats second. You guys don't dry the rope before washing it, right?

You can use physical parameters or a physical model just as easily as 'statistical' parameters with any of these algorithms, so it's not pointless. And what you're saying is that the stats don't reflect anything happening 'physically' at all, which makes no sense. If that were true it wouldn't be possible to identify biased wheels purely by recording spins.

Reyth, remember 'Garbage in, Garbage out'. I don't think there's much point in using the algos over millions of spins, because the only thing you will 'predict' is what the law of large numbers and basic probability tells you, and it's unlikely that using an algo on one sequence of 'raw' spins will work either, because of the randomness.

My approach is to create a diverse set of bet selections for a number or group of numbers, generate the sequence of wins and losses for each, and then use one of the ensemble algorithms on the whole set. From this you will get a sequence of predictions which will show lower variance than any one of the bet selections used on its own.

All machine learning algorithms are basically trying to fit a line to data, and the main problem is 'overfitting', which means that the line doesn't generalise well to new data. This raises the variance, and is what the ensemble algorithms are designed to reduce.
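The ensemble step described above can be sketched on toy data: several bet selections each yield a 0/1 win sequence over the same spins, and averaging them lowers the variance of the combined prediction. The win sequences here are random stand-ins, not real bet selections.

```python
# Sketch: averaging several win/loss sequences (one per bet selection)
# produces a combined prediction with lower spin-to-spin variance than
# any single selection. Sequences are random placeholders.
import random

random.seed(3)
n_spins = 200
selections = [[random.randint(0, 1) for _ in range(n_spins)]
              for _ in range(5)]

# Ensemble prediction for each spin: the average across selections.
averaged = [sum(s[i] for s in selections) / len(selections)
            for i in range(n_spins)]

def variance(seq):
    m = sum(seq) / len(seq)
    return sum((x - m) ** 2 for x in seq) / len(seq)

print("single:", round(variance(selections[0]), 3),
      "averaged:", round(variance(averaged), 3))
```

Averaging five roughly independent 0/1 sequences cuts the variance by about a factor of five, which is the variance-reduction effect the post is pointing at; real bet selections would of course be correlated, so the reduction would be smaller.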

Jason also has a blog which covers all aspects of Machine Learning for beginners. The emphasis is practical not theoretical. It's a good starting point for anyone who wants to get into the subject, which is vast. https://machinelearningmastery.com/

Guys, really... If you stop confusing spins with past numbers, half of your problems will resolve themselves. Numbers are the result of spins, not vice versa!!! Spins are spins, numbers are numbers; no need to mix beefsteak and fly, they have nothing to do with each other. Spins are processes: things are spinning (ball/wheel), the ball is jumping, things are going on. Numbers are just numbers; when the ball is in the number, nothing is going on.

With spins we can judge the likelihood of the resulting numbers, create a model, forecast, make regressions, test variables, create hypotheses, etc. What can we do with numbers? We can count them, yes. But can we predict (forecast)? What should we base our forecast upon, and how reliable is it? It makes no difference in roulette which numbers came up on previous trials; there is absolutely no reason to believe that numbers which fell before will continue to fall because of past results. Spins we can measure (direction, velocities, distances, timings, ball behaviours, etc.)... but what is there to measure or model in past numbers??

Please, can anyone explain physical parameters and a physical model in relation to a random number event? My dictionary gives me no answer.

As I understand it a physical model is created only from physical parameters relating to the spin, such as direction, distances, timings etc. The AP view is that *only* these parameters or observations can have a bearing on the outcome of the next spin. I disagree.