Articles with maths

You can merge cells in Excel. “Duh!” you say. So how does Excel ensure that the cells you want to merge don’t overlap any existing merged cells? More to the point, how do you check that programmatically?

Well, that was the problem I faced while writing my spreadsheet library software, SpreadsheetLight. The inkling of an idea formed when I visited StackOverflow. I can’t remember what I was there for, but the answer page wasn’t about Excel or spreadsheets. It was about collision detection.

Yes, the type of collision detection used in games so you don’t walk through walls and stuff.

And thus I learnt a new mathematical theorem: the separating axis theorem (or hyperplane separation theorem). From what I understand, given two N-dimensional thingies, if you can find one (N-1)-dimensional hyperplane that separates those two thingies, then those two thingies are separate (or not overlapping).

Yes, “thingies” is a technical term.

For example, given two 3D objects, if you can find one 2D plane that separates them, then the 3D objects are not overlapping. Another example is when you’re given two 2D objects, and if you can find one line (which is 1D) that separates them, then the 2D objects are not overlapping.

There are some conditions to be fulfilled, such as the objects being convex and disjoint and whatever. (yes I’m a mathematician…) I’ll leave it to you to do your own reading.

But for our purposes, how do we check if that rectangle of Excel cells we want to merge won’t overlap with any existing merged cells? We have these conditions that make our lives easier:

The merged cells are rectangular

The 4 end points (top-left, top-right, bottom-left, bottom-right) of a merged cell are in whole numbers

The (merged) cells map strictly to a 2D rectangular grid

Since the merged cells are rectangular, they’re also convex (I’m not going to explain why, just trust me). Since the 4 end points are in whole numbers, line boundaries can be easily calculated (it’s easy to check if something is <7 or >=8 for example). And since they map strictly to a 2D rectangular grid, the separating axis is a line. And even better, you can always find a subset of solutions of those separating axes that are either horizontal or vertical.

Ok, a diagram is in order. Picture the two merged cells as two rectangles on a grid.

So in our case, as long as you can find one horizontal line or one vertical line that separates the 2 merged cells we’re checking, then the 2 merged cells are separate.

Let’s try an example. Excel uses cell references in the A1 format, which means row 3 column 4 becomes “D3”. The column index uses letters and is effectively in base-26.

We’re going to simplify that. Let’s say our existing set of merged cells only has one entry. This merged cell has the top-left corner at row 1 column 1, and the bottom-right corner at row 3 column 4. Suppose we want to check if this merge cell overlaps: top-left corner at row 7 column 8, bottom-right corner at row 10 column 10.

The horizontal line at 5 fits the bill, or the line y = 5 if you want to be mathematical about it (but y increases downwards here instead of upwards as in the usual Cartesian plane). Or y = 6. Or even y = 7 (note that the line can “touch” one of the merged cells, but not both. This is where the “whole number” condition comes in).

The vertical lines x = 5, x = 6, x = 7 or even x = 8 also fit the bill.

Thus, our 2 merged cells don’t overlap.

So what’s the programmatic way to check? You’d be surprised at the simplicity. To make it easier: variables with “StartRowIndex” refer to the top row of a merged cell, and “EndRowIndex” to the bottom row. Likewise, “StartColumnIndex” refers to the left-most column of a merged cell, and “EndColumnIndex” to the right-most column.

So the merge cell we want to check has top-left corner at (iStartRowIndex, iStartColumnIndex) and bottom-right corner at (iEndRowIndex, iEndColumnIndex).
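The check itself boils down to a single boolean condition. Here is a sketch of it in Python (the library itself is in C#; `MergedCell` here is a hypothetical stand-in for the custom merged-cell class):

```python
class MergedCell:
    """Hypothetical stand-in for the custom merged-cell class ("mc")."""
    def __init__(self, start_row, start_col, end_row, end_col):
        self.StartRowIndex = start_row
        self.StartColumnIndex = start_col
        self.EndRowIndex = end_row
        self.EndColumnIndex = end_col

def is_overlapping(iStartRowIndex, iStartColumnIndex,
                   iEndRowIndex, iEndColumnIndex, mc):
    # Each condition inside the parentheses asserts that a separating
    # axis exists. Negating the whole thing asks: is there NO separating
    # axis? If so, the two rectangles overlap.
    return not ((iEndRowIndex < mc.StartRowIndex)
                or (iStartRowIndex > mc.EndRowIndex)
                or (iEndColumnIndex < mc.StartColumnIndex)
                or (iStartColumnIndex > mc.EndColumnIndex))
```

With the earlier example (rows 1 to 3, columns 1 to 4 against rows 7 to 10, columns 8 to 10), is_overlapping(7, 8, 10, 10, MergedCell(1, 1, 3, 4)) returns False: the two cells don’t overlap.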

The variable “mc” refers to a custom class I use to represent merged cells. Obviously, you’d run that condition in a loop through all your existing merged cells. If you can’t find an overlap after running through that loop, then the merge cell you’re checking is good to go.

(iEndRowIndex < mc.StartRowIndex) means our merged cell is completely above the existing merged cell

(iStartRowIndex > mc.EndRowIndex) means our merged cell is completely below the existing merged cell

(iEndColumnIndex < mc.StartColumnIndex) means our merged cell is completely to the left of the existing merged cell

(iStartColumnIndex > mc.EndColumnIndex) means our merged cell is completely to the right of the existing merged cell

The first 2 conditions check for the existence of a horizontal separating axis. The next 2 conditions check for the existence of a vertical separating axis.

Note the negation of the entire boolean condition. Those 4 conditions check for existence of solutions. Negation means checking for overlaps.

Note also that we use strictly greater than or strictly less than checks. If the merge cells share a row or column, then they overlap, right?

Did you know my first draft of writing the code had me checking 6 different conditions? Each condition had its own if statements. This one just had one. I’m so lucky to have learnt the separating axis theorem. I was checking if a point was above, within or below the merge cell, then I had to check if the point was to the left, within or to the right of the merge cell, and … urgh, it was horrible…

I never thought my maths training would come in handy again. I was working on a financial report, and one of the requirements was to have a particular calculated value show up. The formula didn’t make sense to me, but it was a business/financial logic requirement, so I just dealt with it.

So here’s the core of the problem (specific values had been changed):

select sum(A)/sum(B) - 0.7 from sometable

where “A” and “B” are columns of the database table “sometable”.

So what’s the problem? That select query won’t run. Or at least it didn’t run on a Sybase database (or was it an SQL Server database?). I’m not saying it ran but gave the wrong value. I mean it didn’t even execute. And just in case you were wondering, “A” and “B” are numeric data columns, so the sum function works on them.

I don’t know how I came up with the idea of using partial fractions. Given that only 0.08%* of staff in the entire office building had maths background, and I probably made up the entire 0.08%, I didn’t have anyone to bounce ideas off of and be told “How about you try using partial fractions on that, Vincent?”

(* a completely made up statistic)

Anyway, I tried using partial fractions, and it worked. Now in partial fractions, you typically deal with decomposing a fraction into 2 or more fractions. Here, we’re combining fractions into 1 fraction. Let me show you.
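Here’s the idea, sketched out: combine the two terms over the common denominator sum(B), so that sum(A)/sum(B) − 0.7 becomes (sum(A) − 0.7 × sum(B)) / sum(B). A query of the form select (sum(A) - 0.7*sum(B))/sum(B) from sometable then does the whole calculation in one fraction. A quick numeric sanity check of the algebra (the column sums here are made-up numbers purely for illustration):

```python
# Sanity check: sum(A)/sum(B) - 0.7 equals (sum(A) - 0.7*sum(B)) / sum(B).
sum_a, sum_b = 1234.56, 789.01  # hypothetical column sums
original = sum_a / sum_b - 0.7
combined = (sum_a - 0.7 * sum_b) / sum_b
assert abs(original - combined) < 1e-12
```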

Alternatives

Now I know there’s another option. I could get sum(A) and sum(B) individually, and then do the required calculation in code (C# code as opposed to database SQL code. I was dealing with ASP.NET then).

After considering my options, I decided to leave all the calculations at the database side. This makes the ASP.NET code “cleaner”. Then I only have to deal with one return value (instead of 2, sum(A) and sum(B)), and I can bind it directly to my database objects for display on the web browser.

Also, there were where (and group-by? Can’t remember…) clauses in the SQL query. I didn’t know whether obtaining sum(A) and sum(B) individually (even within the same query) would affect their values. I decided to play it safe and get it all as one resulting value from the same query.

I didn’t check for efficiency. It wasn’t an oft-used report, so the code wouldn’t be run frequently enough to matter.

But if you’re curious enough to do some tests, go ahead. If you then want to share your results, I’d very much appreciate it too.

I came upon a realisation in my first year of university. There are very few absolutes in the universe.

I was sitting in the lecture theatre, taking notes of the professor’s lesson. I was in my first year, and in my first semester, and I haven’t any friends I knew there. “Friends” were people I saw frequently (if you could call the first 2 weeks “frequently”). After two and a half years of being conscripted in the army, my brain was raring to go (about the most mentally strenuous activity I remember was figuring out how dBase IV and FoxPro work [those are database systems by the way]. Oh and how to make the records print nicely. That’s interesting, considering that I had to send letters to over 500 military personnel, and one choke of the printer sent hours and tons of paper to waste. Lining up address information on the paper could be quite challenging. Another story for another time…).

It was a maths lecture (not surprisingly…) about fundamental logic (I think). I learnt of the term “for all” (or “for every” or something similar), which is represented by an inverted capital A. For example, for all even numbers, one of the factors is 2.

This “for all” term is not used lightly. When you say, “for all”, it means for all conditions, circumstances, situations, universes, even alternate realities, that the next statement is true.

And as the professor said, all you need is one example of the statement being false, and the whole statement collapses.

A related concept is the negative. For example, “There is no such thing as a zombicorn.” First, there’s no such thing as a zombie. Second, there’s no such thing as a unicorn. Uh, I’m not going to contest you on that. Go Google it or something…

What you mean by “There is no such thing as XYZ” is that for all conditions, circumstances, situations, universes, even alternate realities, the statement is true. And all you need is one existence of XYZ for that statement to be false.

Because of this, the professor also said that it’s impossible to prove a negative. We just barely found Earth-sized, possibly habitable planets at the outer reaches of our exploration of space. How can anyone possibly search the entire known (and unknown) universe to find that one piece of proof? Have you searched all the planets out there? Do you know for sure, that there is absolutely no way that XYZ can possibly exist?

This is why I find maths interesting. Some of its concepts can be proven. For example, take the statement “There is no such thing as an even number with a factor that’s 2.” All I need to disprove that statement is to find one even number with 2 as a factor. For example, 6 (with 2 and 3 as factors).

Once I realised that about the only absolute statements are in maths (and possibly physics, but I suck at physics, so, yeah…), and even then those absolute statements are held in the strictest of scrutiny, I also realised that (almost?) everything is relative. Yes, I know I’m way behind some fellow named Einstein who mentioned some theory of relativity somewhat…

The notion that nothing is absolute scares the hashbrown out of some people. Those people need to know that this is right and that is wrong. That this is black and that is white. That there are clear-cut lines they can stand behind. That there are “right” arguments and opinions they can back themselves with.

For example, the statement “You’re always late.” really means the person is saying you’re always late in the person’s mind. Never mind the few times that you are early, because that person will conveniently forget about those instances (because of cognitive dissonance. The person had to convince herself that she’s right about you being always late, and thus fabricates proof that substantiates her opinion). Also, it presupposes that you will continue to always be late. And all you need is to be early once, and that statement is false (but cognitive dissonance will thwart your attempts at defending yourself).

This taught me to be tolerant of other people’s views. When nothing is absolute, then nothing is certain. Like I said, this scares the cranberry out of some people.

And this coming from a maths lecture. More interesting still, it came from a discipline that’s known for its strictness.

A while ago, I received an email from a distraught salesman. He believed his sales commissions were wrongly calculated, and asked me to shed some light.

Note that I’m not using the exact numbers he gave in his email.

The story goes that Michael (as I’ll call him) and his colleagues were given sales targets that were negative. How could sales targets be negative? Shouldn’t you be trying to sell something? The reason given was that the current economy was disastrous, and basically each sales person was trying to not lose sales.

Now I’m not going to discuss how that works out. I can’t presume to understand the business logic behind the sales commission in this case, but I’ll discuss the mathematics behind the numbers.

The normal sales targets and commission

Let’s say your sales target for this month is $1000. This means you’re expected to sell about $1000 worth of products or services. We’ll ignore the condition that you will get some commission based on what you sell, regardless of how much you sold (my brother’s a sales person), as well as other types of commissions.

Let’s say the sales commission is based on how much extra you sold beyond your sales target. Makes sense, right? Let’s use simple percentages.

If you sold $1100 worth of products or services, then your percentage commission might be calculated as follows:
(Difference between Your Sales and Your Sales Target) / (Your Sales Target)

Or ($1100 – $1000) / ($1000) = 10% commission.

This is assuming that your sales amount exceeded the sales target, of course.

The case of negative sales targets

Now if the sales target is negative, as in Michael’s case, the mathematical formula still applies. But you have to note the negative sign. For some reason, “business” people (no offense to business people) tend to see -4567 as larger than 12, even though 12 > -4567. They see the magnitude first, not the value itself. (It’s also why I get emails about calculations involving negative numbers… anyway…)

Let’s say the sales target is -$1000. Everyone’s expected to lose money, but you try not to lose more than $1000. At least that’s what I’m interpreting it as.

So the formula above has to be modified to this:
(Difference between Your Sales and Your Sales Target) / (Magnitude of Your Sales Target)

In maths and programming terms, the “magnitude” part refers to the absolute function. Meaning you ignore any negative signs. Actually, the modified version works for the normal case too (which is why you should use it for the normal version anyway to take care of weird cases like this but I digress…).

That’s the “foolproof” version. Consider it a bonus for reading this far. Frankly speaking, any competent programmer should be able to come up with that formula, even without much maths background. You just need to think about the situation a little carefully (ask “what if?” more often).
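As a sketch (in Python here, though any language works), the foolproof formula is a one-liner; commission_rate is a hypothetical helper name:

```python
def commission_rate(sales, target):
    # The "foolproof" version: difference between sales and target,
    # divided by the MAGNITUDE of the target (abs), so it still behaves
    # sensibly when the target is negative.
    return (sales - target) / abs(target)
```

With the normal case, commission_rate(1100, 1000) gives 0.10 (10% commission); with Michael’s numbers, commission_rate(-50, -1000) gives 0.95, which rewards losing less money rather than more.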

Michael’s calculated commission

When Michael wrote to me, he said his commission was calculated as follows (given that he only lost $50):
-$50 / -$1000 = 5%

Clearly it makes more sense to lose more money! This was why Michael wrote to me.

I don’t propose the method I gave is correct, business-logic-wise. Michael didn’t give me any details on what he’s selling, or what his company is (or even why it’s acceptable to have negative sales targets, regardless of the economy). So I cannot give any help other than from a pure mathematical point of view. But I hope it’ll at least give Michael a fairer commission amount.

Questions

Given Michael’s situation, what do you think is an appropriate calculation formula?

Can you think of (or know of) a realistic situation where a negative sales target is acceptable? I say “acceptable”, but seriously, no company should “accept” that they lose money every month.

This is a mini-series on how revenue sharing and operations research are linked. You might want to read part 1 on the specific business problem I was solving, and part 2 for the mathematical formulation of the problem. In this final part, I’ll tell you what was the solution eventually used.

First, although I said it was basically an assignment problem in part 1, on further thought, I don’t believe it is. What I was thinking was “how to assign that $0.01 such that there are no rounding errors”. Thus the “assignment” part. I apologise for any confusion.

Second, I said the financial/maths formulation was an integer problem. But the values are money values with decimal values, right? How can it be an integer problem? Because it also has to be correct up to 2 decimal places. That means fixed-point arithmetic. Therefore it becomes an integer problem. Just think of the values as cents (instead of dollars correct up to 2 decimal places).

Now, if you’ve read part 2 (and I applaud you if you actually sat through that mass of maths formulations), you should have guessed that using operations research to solve the business problem was not advisable. It might not even be suitable.

However, the problem still needed to be solved. How do you get rid of any extra or missing money?

More maths…

Going back to the example I gave in part 1, there were 3 products with revenue to be split between 2 parties. So there were 6 parts. If each part generated a rounding error of +$0.01, then there was a maximum potential difference of $0.06 between the original revenue to be shared and the sum of the parts after revenue sharing calculations.

I remind you that whatever solution I came up with had to make mathematical sense and financial sense to the programmers and the users. There are actually 2 goals:

To have the sum of the parts be equal to the original revenue amount

To have each part’s amount (after rounding) be as close to the calculated value as possible

The 1st goal ensures no summation errors. After revenue sharing, there are no extra or missing money amounts. This part is actually easy to fulfill. The 2nd goal is fulfilled with a bit of adjustments. So here’s the solution.

The easy-to-understand maths solution

We split the revenue to each part first, rounding each part’s amount to 2 decimal places. Then we sort the parts in ascending order and sum all the parts’ amounts together. If there’s a discrepancy, we correct it by adjusting the largest amount. This calls for an example.

Here’s the original example used. Total revenue for the 3 products are:

ProductA: $63.13

ProductB: $20.75

ProductC: $16.12

Assuming a 30-70 percentage split, we have:

ProductA: $18.94 (us), $44.19 (them)

ProductB: $6.23 (us), $14.53 (them)

ProductC: $4.84 (us), $11.28 (them)

Sorting all the parts in ascending order, we have:

ProductC: $4.84 (us)

ProductB: $6.23 (us)

ProductC: $11.28 (them)

ProductB: $14.53 (them)

ProductA: $18.94 (us)

ProductA: $44.19 (them)

The sum of the parts’ amounts is $100.01, which is not equal to the original revenue being shared ($100). The discrepancy is a +$0.01. So we adjust the largest amount. Specifically, we deduct $0.01 from the largest amount (because our discrepancy is positive).

So the revenue share for the content provider for ProductA becomes $44.18, and thus the sum of the parts become $100.

This method ensures that the sum of the parts’ amounts is still equal to the original revenue, which is very important (because this is a financial operation!). This satisfies the 1st goal.

And each part’s amount is rounded to 2 decimal places, so it is as close to the calculated split value as possible. The only exception is the largest amount, which might be off a little.

Now I chose the largest amount to “absorb” any rounding discrepancy precisely because it is the largest amount. Note that the term “largest” refers to the magnitude, so if you happen to deal with negative values (it happens, even in financial situations. Consider debt as an example), use the maths absolute function to do the sorting.
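Here’s a hedged Python sketch of the whole procedure (share_revenue is a hypothetical name; Decimal keeps the cents exact, and the half-up rounding mode is my assumption, since the article doesn’t specify one):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def share_revenue(totals, our_split):
    """Split each product's revenue, round each part to whole cents,
    sort by magnitude, and let the largest part absorb any discrepancy."""
    parts = []
    for product, revenue in totals.items():
        ours = (revenue * our_split).quantize(CENT, ROUND_HALF_UP)
        theirs = (revenue * (1 - our_split)).quantize(CENT, ROUND_HALF_UP)
        parts.append([product + " (us)", ours])
        parts.append([product + " (them)", theirs])
    # Sort ascending by magnitude (abs), per the note about negative values.
    parts.sort(key=lambda p: abs(p[1]))
    discrepancy = sum(p[1] for p in parts) - sum(totals.values())
    # The largest amount absorbs the discrepancy so the sum matches exactly.
    parts[-1][1] -= discrepancy
    return parts
```

Running it on the worked example above gives a parts total of exactly $100.00, with ProductA (them) adjusted to $44.18.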

Any discrepancy can be mathematically shown to be at most equal to (number of parts) multiplied by $0.01 (the maximum rounding error per part):

|D| <= N × $0.01

where D is the discrepancy and N is the number of parts.

Note that the discrepancy is bounded, which is the mathematical way of saying it has an upper and lower limit (or bound).

Note also that in a fraction, a larger numerator means a larger fraction and a smaller numerator means a smaller fraction. A larger denominator means a smaller fraction and a smaller denominator means a larger fraction.

Now, whatever the discrepancy value is, it is bounded, it is fixed, and it is a small value. If we want any amount to “absorb” this discrepancy, then the larger the amount, the smaller the resulting error fraction or error percentage.

For example, if the discrepancy is $0.01 and the amount is $1, the resulting error percentage is 1% ($0.01 / $1.00 * 100%). If the amount is $5, the resulting error percentage becomes 0.2% ($0.01 / $5.00 * 100%).

Suppose the discrepancy is $0.02. We could spread the discrepancy among the largest 2 amounts, each absorbing $0.01. But this makes the programming a little more complicated than it needs to be. Also, it makes the algorithm a bit “dynamic”, which makes tracing any calculations difficult for a programmer or user.

Implementing it in code

All the revenue amounts were stored in the database. Because of this, I recommended that any revenue sharing calculations be done within the database environment itself. Namely, with stored procedures running SQL statements.

Benefits of using stored procedures within the database environment:

Can sort values easily (use the ORDER BY clause)

Can handle dynamic number of values (with temp tables or the original tables)

Some calculations can be grouped into a single UPDATE statement

All the values are in the database!

The last benefit means there’s little switching of context from the database environment to the… whatever environment. In that project, it was scheduled Unix shell scripts combined with C programs that called the stored procedures. We didn’t want the context to switch back to a Unix environment to do calculations. Doing calculations in the Unix environment with C might be fast, but there were many content providers and many products; the context switching might eat up any performance gains. Besides, having the calculations in a few stored procedures means better modularity and separation of functions.

Further considerations

Because we’re in the business of … uh, doing business, we might want the customer to have a better deal. Or at least an easier report to read.

In the method above, we sorted the amounts in ascending order, regardless of whether they’re ours or theirs. So it could well be the content provider’s largest revenue share that absorbs the discrepancy.

This might mean when they read the revenue sharing report, they might question why that amount is $44.18 instead of $44.19. It might be just $0.01 to you, but it’s still money!

What we can do is sort the amounts by theirs first, then ours. And within each, sort by ascending order. So we could have this instead:

ProductC: $11.28 (them)

ProductB: $14.53 (them)

ProductA: $44.19 (them)

ProductC: $4.84 (us)

ProductB: $6.23 (us)

ProductA: $18.94 (us)

In this case, we adjust our revenue share for ProductA to be $18.93 (instead of $18.94) so the revenue sum is correct. Basically, we absorb any discrepancy, using our largest revenue share amount.

And that’s the end of the discussion of revenue sharing: the business part, the maths part and the implementing/programming part. Let me know if you have any questions.

This is a mini-series on how revenue sharing and operations research are linked. You might want to read part 1 on the specific business problem I was solving. In this part, I’ll be telling you about the maths behind the business solution.

But first, I have to tell you about a couple of maths concepts.

Converging from opposite directions

Let’s say someone asked you how tall you are. Instead of giving a straight answer, you decide to give a simple maths puzzle. You say you are at least 6 feet tall. Then you also say you are at most 6 feet tall. So what’s your height? Let’s put those 2 conditions into mathematical form:

H >= 6

H <= 6

To satisfy those 2 conditions, there’s only 1 answer: You must be 6 feet tall.

It looks elementary, but it will come into play later on. Just keep in mind that an equality can be split into 2 inequalities. This actually reminds me of the squeeze theorem, but I digress.

Reversing the direction of an inequality

The next maths concept is how to reverse the direction of an inequality. For example, we have

x >= 5

This can also be expressed as

-x <= -5

This will also come in handy later on.

System of linear inequalities

My experience in operations research was confined to course work in an academic semester. Operations research is mainly about maximising or minimising some objective. You are given a series of conditions to fulfill and an objective to maximise or minimise, and you translate those conditions and that objective into a mathematical model formulation. Let’s look at an example.

Jake’s mom gave him $15 to buy some sweets (because he did awesomely in a maths test. Parental note: I don’t think this is a good reward, but hey, it’s your kid). But mom told Jake that he cannot buy more than 3 lollipops (although Jake already decided he’s not buying more than 4). There’s this contest where if you submit wrappers from peppermints and lollipops, you win an iPad (I’m totally making this up!). Jake needed just 4 more wrappers, and he didn’t want more than 4 because he wanted to buy more chocolates. Also, Jake had secretly decided to buy at least $10 worth of sweets. For this example, say chocolates cost $2 each, peppermints $1 each and lollipops $3 each, and let C, P and L be the number of chocolates, peppermints and lollipops Jake buys.

Now, logically speaking the condition for the number of wrappers doesn’t quite make sense. We could say the number of wrappers from peppermints and lollipops should be at least 4, which makes better sense. But this is a hypothetical example. More to the point, it’s my hypothetical example, and I needed an equality condition. So there.

You might also note that there are redundant conditions.

L <= 3
L <= 4

can be represented by just

L <= 3

because if it’s satisfied, then L is definitely <= 4. This is to show you that in your mathematical formulation, you can get redundant conditions. It’s up to you to eliminate them so you work with fewer conditions. Real-life problems are messy, don’t you know?

Let’s clean up the formulation.

Objective: max C + P + L
2C + P + 3L <= 15
-2C - P - 3L <= -10
P + L <= 4
-P - L <= -4
L <= 3

I’ve rearranged the conditions a little for clarity. Note the inequality reversal for the “at least $10” condition. Note the split of the equality into 2 inequalities. Why are we doing this? So we get the general form Ax <= B, where

A is the coefficient matrix
x is the variable vector
B is the value vector

We need to do the inequality reversal so the “direction” of all the inequalities is the same. Otherwise we cannot have Ax <= B.
In this case, A is

| 2  1  3 |
|-2 -1 -3 |
| 0  1  1 |
| 0 -1 -1 |
| 0  0  1 |

x is

| C |
| P |
| L |

and B is

| 15 |
|-10 |
|  4 |
| -4 |
|  3 |

With this Ax <= B form, we’re dealing with matrices and vectors. That means we can use all the mathematical techniques for solving such formulations, such as Gaussian elimination or even Gauss-Jordan elimination. The idea is to solve all of our unknowns at one go.

You might notice that we have more inequalities than unknowns. Unlike a system of equations, the inequalities don’t pin the unknowns down to a single point, so we can have multiple feasible solutions.

For example, a possible solution for (C, P, L) is (2, 3, 1). That solution satisfies all the conditions. It doesn’t mean it’s optimal, but it’s a solution. This means solutions exist for the problem (sometimes just knowing this is reassuring…).
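This toy problem is small enough to brute-force, which is a handy way to check such claims (a sketch, not how real operations research software works; solve_sweets is a made-up name):

```python
def solve_sweets():
    # Brute force over a box that certainly contains every feasible point:
    # the $15 budget alone caps C at 7, P at 15 and L at 5.
    best, best_val = None, -1
    for C in range(8):
        for P in range(16):
            for L in range(6):
                feasible = (2*C + P + 3*L <= 15      # budget
                            and 2*C + P + 3*L >= 10  # spend at least $10
                            and P + L == 4           # exactly 4 wrappers
                            and L <= 3)              # mom's lollipop limit
                if feasible and C + P + L > best_val:
                    best_val, best = C + P + L, (C, P, L)
    return best, best_val
```

It confirms (2, 3, 1) is feasible, and also happens to find the integer optimum, (C, P, L) = (5, 4, 0) with objective value 9.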

The dual

I want you to know that the objective can always be stated in the opposite manner. If the objective is to “maximise X”, it is equivalent to “minimising -X”. (This negation trick is closely related to what’s called the dual of the problem, though strictly speaking the formal dual is a separate, companion formulation.) For example, maximising profit is equivalent to minimising the negative value of profit. NOTE: Maximising profit is NOT equivalent to minimising cost. Profit and cost are generally not the opposite of each other (they’re different business terms), even if they sound like they are (and sometimes give the same solutions). The mathematical formulation and solution are different for both of them. Know what you’re solving.

This duality property is useful if your program is optimised to solve only minimisation problems. So you just convert a maximisation problem to a minimisation problem, and your program works just fine!

Linearity

And you might also note that the objective and the conditions are all linear, meaning unknowns are up to power of 1 (as opposed to quadratic or cubic). Sometimes you get a non-linear condition, and depending on the situation, you might be able to form linear conditions that are equivalent to that non-linear condition.

For example, CP = 2 (number of chocolates multiply by number of peppermints equal to 2).
This set of conditions might be equivalent:

C >= 1

P >= 1

C <= 2

P <= 2

C + P = 3

The first 2 conditions are inferred from the fact that C and P cannot be zero. If they are, then CP = 2 is never satisfied. If they’re never zero, then they must be at least 1.

The 3rd and 4th conditions are inferred from the fact that C and P cannot be more than 2. If they are, then the other unknown is fractional. For example, if C is 4, then P must be 1/2. But these are number of items, so they must be integer. Therefore, C and P must be at most 2.

Based on the first 4 conditions, C and P can only take on either 1 or 2 as values. The only combination of permutations that still satisfies the “CP = 2” condition is one unknown must be 1 and the other unknown be 2. Hence the sum of the two unknowns must be equal to 3.
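A quick brute-force check (over non-negative integers up to an arbitrary cap of 9) confirms the equivalence:

```python
# Pairs satisfying the non-linear condition C*P == 2 ...
nonlinear = {(C, P) for C in range(10) for P in range(10) if C * P == 2}
# ... versus pairs satisfying the proposed linear conditions.
linear = {(C, P) for C in range(10) for P in range(10)
          if 1 <= C <= 2 and 1 <= P <= 2 and C + P == 3}
assert nonlinear == linear == {(1, 2), (2, 1)}
```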

Not all non-linear conditions can be translated into linear conditions. Go ask an operations research professor for more.

Integral conditions

And we come to the worst part. All the unknowns must be integers (you can’t buy 2.37 chocolates. Well, at least not without the candy store owner going crazy). This integral condition makes the problem much harder to solve. If you solve the Ax <= B form, you generally get fractional values for the unknowns. This is the optimum solution, but it doesn’t satisfy the integral condition.

If I remember correctly, you solve Ax <= B as usual. Then you take one of the unknowns and work with the 2 integers that are the floor and ceiling values of that unknown. Then you solve iteratively again, with reduced and reformulated conditions Py <= Q.

For example, if you get C = 2.37, then you split off 2 scenarios with C = 2 and C = 3 as the candidate values. Scenario 1 is

Objective: max P + L
P + 3L <= 11
-P - 3L <= -6
P + L <= 4
-P - L <= -4
L <= 3

Scenario 2 is

Objective: max P + L
P + 3L <= 9
-P - 3L <= -4
P + L <= 4
-P - L <= -4
L <= 3

Basically, you just substitute C = 2 and C = 3. For each scenario, you continue splitting with the other unknowns. There are 3 unknowns, so there are eventually 8 scenarios (2^3 = 8). It’s a binary tree. Well, it’s at most 8 scenarios, because you might get lucky and one or more of the unknowns already takes an integer value in the original optimised solution.

For each of the scenarios, you either get a solution or you get a contradiction somewhere (meaning it’s unsolvable). The contradiction can come from the splitting, because when you substitute a possible value into one of the unknowns, the resulting set of conditions contains a contradiction. For example, out of the set of conditions, you get these 2:

x <= 4
-x <= -8 (meaning x >= 8)

Since an unknown variable cannot be both (less than or equal to 4) AND be (more than or equal to 8), the scenario becomes unsolvable.

Then you examine all those scenarios with solutions, substitute the solutions to the unknowns, and compare the result. If it’s a maximising problem, you check which scenario gave a solution that’s the largest. If it’s a minimisation problem, you check which scenario gave a solution that’s the smallest. And that particular solution becomes the optimum integral solution.
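That final comparison step can be sketched with the two scenarios above (best_for_fixed_C is a made-up helper that brute-forces the remaining unknowns rather than doing Gaussian elimination, purely to illustrate the comparison):

```python
def best_for_fixed_C(C):
    # With C fixed at 2 or 3 (the floor and ceiling of the hypothetical
    # fractional solution C = 2.37), search the remaining unknowns P and L
    # under the original conditions with C substituted in.
    best = None
    for P in range(16):
        for L in range(4):
            if 10 <= 2*C + P + 3*L <= 15 and P + L == 4:
                value = C + P + L
                if best is None or value > best[0]:
                    best = (value, (C, P, L))
    return best  # None would mean the scenario is unsolvable
```

Scenario 1 (C = 2) gives a best objective of 6, while scenario 2 (C = 3) gives 7, so scenario 2 supplies the better integral solution.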

Let me tell you, you do not want to do this by hand. This is why software is written to handle the Gaussian eliminations and checking all the possible scenarios due to the integral condition. With just 3 unknowns, you’re expected to do 9 Gaussian eliminations (1 for the original, 8 for the branch scenarios), and check through all 8 scenarios. Imagine having hundreds of variables and conditions…
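For a problem this small, you can even skip the Gaussian eliminations and brute-force the two scenarios above to see what the branching gives you. This is just a sketch to illustrate the idea (the search range 0..20 is my own assumption to bound the loop, and real solvers do nothing this naive):

```python
# Brute-force the two branch scenarios (C = 2 and C = 3) from above.
# Each constraint is stored as (a, b, rhs), meaning a*P + b*L <= rhs.

def best(constraints):
    """Maximise P + L over integer P, L subject to the constraints."""
    best_val, best_pt = None, None
    for P in range(21):
        for L in range(21):
            if all(a * P + b * L <= rhs for (a, b, rhs) in constraints):
                if best_val is None or P + L > best_val:
                    best_val, best_pt = P + L, (P, L)
    return best_val, best_pt

scenario1 = [(1, 3, 11), (-1, -3, -6), (1, 1, 4), (-1, -1, -4), (0, 1, 3)]
scenario2 = [(1, 3, 9),  (-1, -3, -4), (1, 1, 4), (-1, -1, -4), (0, 1, 3)]

print(best(scenario1))  # (4, (1, 3)) -- P + L is pinned to 4 by the constraints
print(best(scenario2))  # (4, (2, 2))
```

Both scenarios happen to reach the same objective value here, so either branch would do; in general you'd compare the two and keep the better one.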

Mathematics and programming in one. Having skills in one but not the other makes writing the software quite difficult. Imagine a mathematician who cannot express his ideas in programming code, or a programmer who cannot understand the maths involved.

Back to the business problem

After that extremely long mathematical discussion, we can now turn back to the original business problem. If you haven’t read part 1, you should do so now, otherwise you’d be lost.

Let R be the total revenue for a content provider for that particular month
Let R1 be the total revenue for ProductA
Let R2 be the total revenue for ProductB
Let R3 be the total revenue for ProductC
(so R1 + R2 + R3 = R)
Let X1 be the revenue share for the telecommunications company for ProductA
Let X2 be the revenue share for the telecommunications company for ProductB
Let X3 be the revenue share for the telecommunications company for ProductC
Let Y1 be the revenue share for the content provider for ProductA
Let Y2 be the revenue share for the content provider for ProductB
Let Y3 be the revenue share for the content provider for ProductC

Let the revenue sharing split for the telecommunications company be 30%
Let the revenue sharing split for the content provider be 70%

The objective is to minimise rounding errors. Actually, there should be no errors, but we can live with a possible solution first. If it’s non-zero, then we panic…

As you can see, there are a lot of equality conditions. And it’s a problem with integer conditions. Frankly speaking, I don’t want to program a solution either. Just looking at what I’ve written is already giving me a headache…
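To see where the rounding errors actually come from, here's a tiny sketch (my own example figures, and I'm assuming amounts are rounded to cents with half-up rounding; the real billing system may round differently). It splits each product's revenue 30/70 and checks whether the rounded shares still add back to the original revenue:

```python
# A sketch of the rounding-error objective: split each product's
# revenue 30% (telco) / 70% (content provider), round both shares
# to cents, and measure how much the rounded shares drift from the
# original revenue. Decimal avoids binary floating-point noise.
from decimal import Decimal, ROUND_HALF_UP

def split(revenue, telco_rate=Decimal("0.30")):
    cent = Decimal("0.01")
    telco = (revenue * telco_rate).quantize(cent, rounding=ROUND_HALF_UP)
    provider = (revenue * (1 - telco_rate)).quantize(cent, rounding=ROUND_HALF_UP)
    return telco, provider

# Made-up revenues for ProductA, ProductB, ProductC
revenues = [Decimal("10.01"), Decimal("0.05"), Decimal("3.33")]

total_error = Decimal("0")
for r in revenues:
    x, y = split(r)
    total_error += (x + y) - r   # non-zero means the split leaks cents

print(total_error)  # 0.01 -- the 0.05 revenue rounds both shares upward
```

That stray cent is exactly the kind of error the objective wants driven to zero, and with more products each one is another chance to leak a cent in either direction.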

I’m glad my ex-colleague said no. *whew* This was too complicated to implement. And the content provider might have more products, so the number of unknowns is unknown (no pun intended).

There’s actually a little bit more to the maths formulation, but I’ll tell you about it in the 3rd part. I had to come up with a solution that made mathematical sense and could be understood by other people. Mostly, the users (that is, “normal” people) must be able to understand the logic behind it.

Leave a comment if you have any questions about the maths involved (or even tell me I’m wrong), and I’ll do my best to answer them.

Back when I was writing my thesis, I came across this abbreviation: a.e. It took me a while, but I eventually found that it meant almost every. This might have been almost everywhere in measure theory, but I don’t think so. I seem to recall just “almost every”.

Almost every what, you ask?

Well, I was doing research on computer virus behaviour, so I had books from computer security, graph theory, biological viruses, mathematical models (with exponents and ordinary differential equations and such). I think it was in graph theory. An author was talking about a result or theorem and the proof included almost every type of graph, which was good enough.

I thought that was interesting, because I’d always thought of mathematics as absolute. Maybe this was why I suck at statistics… The idea of some event having a probability of happening, instead of just being or not being, shakes my world somewhat. Of course, I’m less shaken now since life isn’t really absolute…

My greatest accomplishment came when I was in the computer lab, and a Masters student was around. She’s from China, and you know those people are wicked clever. She held up a book, looked at me, then walked over to me.

“Do you know what a.e. means?”

AHA! Me, honours student, knew something a Masters student didn’t!

“I think it means almost every.”
“Oh. OHHHH! Thanks!”

I suspect she asked me because she believed my English was better than hers, and not because I was more knowledgeable in whatever topic that book of hers was about. Have you seen programming books translated to Chinese? I can read the words, but that doesn’t mean I know what the hashbrown they mean… She approached me probably (yay statistics) because she believed a.e. stood for something that someone moderately versed in English would know.

Still, it was an accomplishment. I could tell you I went on to brag about it to all my friends, but truth be told, I continued working on my program code… I was in the computer lab for something, you know…

I was mucking around in my image editor (Paint.NET) because I was doing some CSS colour editing. While I was playing around with the HSV of the colour, I saw this in the RGB box: 314159. You know what that reminds me of? PI. No, not that pie, PI! Great, now I’m hungry…

So I wondered what numbers would look like if they had colours.

First, we have PI as 3.14159 (ratio of a circle’s circumference to its diameter)

We also have the golden ratio, 1.61803. It’s also the limit of the ratio of the (n+1)th term to the nth term in the Fibonacci sequence.

For some reason, I remember the Avogadro constant, even though I don’t do chemistry or physics anymore… It’s 6.02214179 x 10^23. Yes, it’s a big number… The colour also reminds me of bromine gas, which is reddish-brown in colour. Wait, how come I still remember these things?

Here’s an interesting one. Absolute zero is the theoretical temperature where everything barely has energy. It’s defined as 0 Kelvin, and is equal to -273.15 degree Celsius. And you thought ice at 0 degree Celsius was cold…
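If you want to play along, here's a quick sketch of the idea (the digit-extraction rule is my own invention): take the first six digits of a constant and read them as an RRGGBB hex colour.

```python
# "What colour is this number?" -- take the first six digits of a
# constant (ignoring the decimal point) and treat them as a hex
# RRGGBB colour code, like the 314159 that started all this.

def number_to_colour(x):
    digits = "".join(ch for ch in f"{x:.10f}" if ch.isdigit())[:6]
    return "#" + digits

print(number_to_colour(3.14159))   # #314159, the colour of PI
print(number_to_colour(1.61803))   # #161803, the golden ratio
```

Paste the resulting code into any colour picker to see what PI looks like.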

John Cook wrote an article on where to wait for an elevator. Where do you wait so you walk the minimum distance to the elevator? Read his article for the full explanation and solution. Here’s a pictorial summary of the solution:

The problem was from a paper written by James Handley (in collaboration with others). Basically, you don’t wait at the average position among all the elevators. You wait at the elevator with the median position.
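You can convince yourself numerically. Here's a small sketch (the elevator positions are made up, and I'm assuming each elevator is equally likely to arrive) comparing the total walking distance from the median position against the average position:

```python
# Median vs mean for the elevator problem: total walking distance
# to each elevator from a chosen waiting spot. The median minimises
# the sum of absolute distances; the mean minimises squared distance,
# which is not what your legs care about.

positions = [1, 2, 3, 10, 12]   # made-up elevator positions along a wall

def total_distance(wait_at):
    return sum(abs(p - wait_at) for p in positions)

mean = sum(positions) / len(positions)            # 5.6
median = sorted(positions)[len(positions) // 2]   # 3

print(total_distance(median))  # 19
print(total_distance(mean))    # 21.6 -- worse than waiting at the median
```

The outlier elevators at 10 and 12 drag the mean away from the cluster, but the median stays put, which is exactly why it wins.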

In reality, however, you still have to enter the space where the elevators are. Therefore, the best position with the minimum distance to walk is

to remain at the point of entry, and to not move at all!

That of course, assumes that the elevators had been activated to arrive at your floor. You know, you need to push the button. How do you push that button if you’re 5 metres away? With a long pole? Other people? Telekinesis?

Here’s something to think about. James Handley is a mathematician. What’s he doing in the faculty of medicine? Because he’s helping with epidemiology. If you’re not familiar with the word, “epidemiology” means the study of epidemics. The spread of viruses, rates of infection and the like.

This reminds me of the time when my thesis mentor (back in my university days) suggested I work in the field of epidemiology. Because my thesis dealt with computer virus behaviour. Oh my god, I just looked at my thesis again… oh dear, ordinary differential equations! *closes thesis hurriedly*