Category: Case Study

Okay. Okay. OKAY. Look. I know you have a problem. You've been screwed by someone and now want your money back. Totally understandable.

But first take a deep breath and relax - you don't want to get into bigger trouble. Let's do it another way. I want to help you go one step further and do it like a PRO. And believe me, this makes a huge difference.

So go grab your drink, and read these 5 tips.

How to do this

Read each step carefully. At the end of each step, you will find what you should have after accomplishing it.

Formulate the hypothesis you want to validate

A null hypothesis (H0) is a statement we want to validate. Unless we find sufficient evidence against it, there will be no reason to reject it.

A customer doubts a drug's purity. He states that it contains more than 10% additives, while the dealer guarantees at most 10%. Writing p for the probability that a single deck meets the purity norm, the null hypothesis is H0: p = 0.9, and the alternative hypothesis can be H1: p < 0.9.

After this step, you should have formulated (H0) and (H1).

Choose the test statistic

Our overall aim is to validate the null hypothesis. We have to assume that it is true and then look for arguments to demolish it. Yeah. In more scientific speech, we have to come up with the probability distribution of a test statistic under the assumption that the null hypothesis is correct.

A customer bought 15 decks of a drug. After hosting a big party, he realized that ONLY 11 decks met the norm guaranteed by the dealer (the test statistic). Remembering the wise words of the dealer, his test distribution can be the binomial distribution X ~ B(15, 0.9). Someone will have a problem.

After this step, you should have figured out the test statistic (based on the experiment) and the test distribution.

Choose a critical region (one-tail or two-tail test)

Right now we have the probability distribution of our test statistic, but we still need to choose for which values the null hypothesis gets rejected (the critical region) and for which it is accepted (the acceptance region). We use the term significance level (alpha): a parameter describing the probability below which an event is considered unlikely enough for the null hypothesis to be rejected.

The customer has chosen a significance level of alpha = 0.05, meaning that the critical region (where we reject the null hypothesis) consists of the outcomes whose total probability under H0 is at most 0.05.

Depending on the form of (H1), we can also specify whether the critical region is one-tailed or two-tailed.

A one-tailed critical region occurs when the alternative hypothesis is expressed with an inequality. For example, if H1: p < 0.9 we should use a left one-tailed critical region, and for H1: p > 0.9 a right one-tailed one.

When (H1) is expressed with the "not equal" sign, we are dealing with a two-tailed critical region. In this case, the critical region is placed in both tails of the distribution, where each side corresponds to alpha/2 probability.

Because the alternative hypothesis is H1: p < 0.9, the scammed customer is dealing with a left one-tailed critical region.

After this step, you should have specified the significance level (alpha) and know whether the critical region is one-tailed or two-tailed.
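As a sketch of how a left one-tailed critical region can be found numerically: assuming the running example's test distribution is X ~ B(15, 0.9) and the significance level is alpha = 0.05 (both values reconstructed from context, so treat them as assumptions), the critical region is the largest set {X <= k} whose total probability stays below alpha:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

alpha, n, p = 0.05, 15, 0.9
# Largest k for which P(X <= k) still fits under alpha: reject H0 when X <= k.
critical = max(k for k in range(n + 1) if binom_cdf(k, n, p) <= alpha)
print(critical)  # 10 -> the critical region is {X <= 10}
```

With 11 pure decks observed, the customer lands just outside this region.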

Calculate the probability (p-value)

The p-value is the probability of getting the same (or more extreme) results, assuming the null hypothesis is true. Its value depends on two things:

the form of the alternative hypothesis (H1) (one or two tails),

the value of the test statistic (based on the test distribution).

In the case of our customer, the test statistic is 11 (decks of pure drugs) and the critical region is located in the left tail. The formula for the p-value is P(X <= 11). Taking into consideration X ~ B(15, 0.9), its value is approximately 0.0556, which he calculated with a short script.
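The original snippet is not preserved here, so below is a minimal stand-in in Python that computes the same left-tailed binomial p-value using only the standard library:

```python
from math import comb

def binom_pvalue_left(k, n, p):
    """Left-tailed p-value P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 11 of the 15 decks met the norm; the dealer's implied success rate is 0.9.
p_value = binom_pvalue_left(11, 15, 0.9)
print(round(p_value, 4))  # 0.0556

alpha = 0.05
print("reject H0" if p_value < alpha else "fail to reject H0")  # fail to reject H0
```

With SciPy installed, `scipy.stats.binom.cdf(11, 15, 0.9)` gives the same number.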

After this step, you should have obtained the p-value.

Make a decision

In this last step, we finally decide whether the null hypothesis gets rejected or not (i.e. whether the dealer was right). The null hypothesis gets rejected if the test statistic falls into the critical region - equivalently, if the p-value is smaller than alpha. For example, if the critical region is in the left tail, (H0) gets rejected when p-value < alpha.

The customer has to drop his claim (H1). In this case, the p-value (0.0556) is greater than the significance level (0.05), which means that the drug dealer was right ((H0) is not rejected). DAMN.

After this step, you finally know if there are reasons to reject (H0).

Q&A

Question: What value of significance level should I choose?

Answer: It all depends on how sure you want to be that you are making no mistake when rejecting the null hypothesis. For example, choosing alpha = 0.01 gives you more certainty that your decision about rejecting (H0) was correct than alpha = 0.05.

Summary

I have to admit it. I'm a bit scared. You have received a powerful tool - a tool that can help prove that you're RIGHT in many cases.

But please remember about others who might still need some help. Share it with them, and make them your debtors.

Overview

Networking events are becoming more and more popular. People attend them expecting to find someone capable of solving their concerns.

The problem of telling "who should talk to whom" is relatively easy when the number of attendees is low. In this case, the organizers can do the matching manually. However, when the number of people (or the variety of skills) increases, things begin to get complicated.

Desired scenario

Let's consider the following scenario:

Before the event, each person fills in a form describing his skills and needs.

For example:


Person A

Skills:

- web development,

- mobile development

Needs:

- investor


Person B

Skills:

- investor,

- entrepreneur

Needs:

- graphic designer,

- internet marketer

Later on, each person receives information telling which people are especially valuable and worth talking to.


Person A should talk to:

- Person X

- Person Y

- Person Z

Problem

The main question is: "Who should each person talk to?"

To estimate the difficulty of the problem, let's assume that there are n people and each one receives exactly three recommendations.

This gives a total number of combinations expressed with:

n * C(n-1, 3) = n * (n-1)(n-2)(n-3) / 6

which is estimated as follows:


4 people = 4 combinations

5 people = 20 combinations

6 people = 60 combinations

7 people = 140 combinations

...

20 people = 19 380 combinations

21 people = 23 940 combinations

...

50 people = 921 200 combinations

...

100 people = 15 684 900 combinations

It is obvious that the number of possible combinations grows rapidly with the number of participants.
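The counts above can be reproduced with a short Python sketch, assuming each participant receives an unordered triple of recommendations chosen from the remaining attendees (i.e. n * C(n-1, 3)):

```python
from math import comb

def total_combinations(n):
    # Each of the n participants gets an unordered set of 3 recommendations
    # drawn from the other n - 1 attendees.
    return n * comb(n - 1, 3)

for n in (4, 5, 6, 7, 20, 21, 50, 100):
    print(n, total_combinations(n))
```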

In the naive approach (i.e. brute force), each combination would have to be evaluated and compared with the rest (lots of computation).

Additionally, to compare the solutions (to know whether they are good or bad), they have to be somehow measured - this leads to the concept of a fitness function.

Fitness function

A fitness function (f) is generally a function taking a possible combination as an argument and returning a numeric value.

For example, a fitness function for a very bad combination (lots of mismatches) will return a very low score.

In our case, the function can return values in the range from 0 (worst) to 1 (best).

In this case, a fitness function will check two things:

do the matched candidates have skills matching our needs?

how often is the analyzed person being recommended to others?

The last condition deals with the problem of rare competencies. There is a possible scenario with one person having rare skills and lots of others needing them.

Finally, each condition has a weight assigned to it. The fact of fulfilling needs will be more important than the popularity.
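The post does not show the fitness implementation at this point, so here is a minimal Python sketch under assumed data shapes and weights (0.8 for needs coverage, 0.2 for popularity - both invented for illustration):

```python
# Assumed weights: fulfilling needs matters more than popularity.
W_NEEDS, W_POPULARITY = 0.8, 0.2

def fitness(person, matches, all_match_ids):
    """Score one participant's matches in the range [0, 1].

    person        -- dict with a "needs" set
    matches       -- matched candidates: dicts with "id" and a "skills" set
    all_match_ids -- ids of every match made so far, to measure popularity
    """
    if not person["needs"] or not matches:
        return 0.0
    # Share of this person's needs covered by the candidates' skills.
    offered = set().union(*(m["skills"] for m in matches))
    needs_score = len(person["needs"] & offered) / len(person["needs"])
    # Penalize candidates who are already recommended to everyone else.
    popularity = sum(all_match_ids.count(m["id"]) for m in matches) / max(len(all_match_ids), 1)
    return W_NEEDS * needs_score + W_POPULARITY * (1.0 - popularity)
```

For Person A from the example (needs: investor) matched with Person B (skills: investor, entrepreneur), the needs term is 1.0 and the final score depends on how often B is recommended elsewhere.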

Having a way of evaluating the solutions, we can proceed with the algorithm.

Algorithm

To approach this problem, a variation of an evolutionary algorithm (EA) will be used. EAs are a sort of meta-heuristic based on Darwinian principles of evolution.

Intuitively, their workflow looks as follows:

Description of the process:

1. Randomly assign 3 people to each participant

2. Randomly pick two participants

3. Make the connection if the second one is interesting to the first one (cross-over)

4. Randomly pick two participants and connect them (mutation, happens very rarely)

5. Go to step 2

Each pass through steps 2-5 is called an epoch or generation. An epoch is represented by a possible solution (the list of all participants with their matchings) that can also be scored with the fitness function (averaged over all individuals).


final def POPULATION_SIZE = 50
final def GENERATIONS = 3000

Population population = new Population(POPULATION_SIZE)
for (generation in 2..GENERATIONS) {
    population = Algorithm.evolvePopulation(population)
}

In step 3, we randomly select two people (cross-over) and try to match the second one to the first one. The candidate is tried as the first, second, and third match; in each case the overall fitness function is calculated. If a solution better than the current one is found, the matching is performed.


private static def crossover(Population pop) {
    def indiv1 = (Participant) pop.getRandomIndividual()
    def indiv2 = (Participant) pop.getRandomIndividual()
    if (indiv1 != indiv2) {
        if (indiv1.isUseful(indiv2)) {
            def alternatives = []
            indiv1.matches.eachWithIndex { match, index ->
                def altPop = (Population) pop.clone()
                altPop.match(indiv1.id, indiv2.id, index)
                alternatives << [index: index, fitness: altPop.fitness()]
            }
            alternatives.sort { -it.fitness }
            def bestAlt = alternatives.first()
            if (bestAlt.fitness > pop.fitness()) {
                pop.match(indiv1.id, indiv2.id, bestAlt.index)
            }
        }
    }
}

Step 4 represents a mutation - a very rare chance of accepting a worse solution. The main idea is to introduce some diversity into the solution.
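The post's implementation is in Groovy; as a language-neutral illustration, here is a standalone Python sketch of the mutation step (the rate and data layout are assumptions, not the original code):

```python
import random

MUTATION_RATE = 0.01  # assumed value: mutations should be very rare

def maybe_mutate(matchings, participant_ids, rng=random):
    """With a small probability, overwrite one random recommendation slot,
    accepting the change even if it makes the overall fitness worse.

    matchings       -- dict: participant id -> list of 3 recommended ids
    participant_ids -- list of all participant ids
    """
    if rng.random() < MUTATION_RATE:
        a, b = rng.sample(participant_ids, 2)  # two distinct participants
        slot = rng.randrange(3)                # which of the 3 slots to replace
        matchings[a][slot] = b
    return matchings
```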

Results

The testing was performed for 3 cases:

20 people,

50 people,

100 people

In each one, the initial population (skills, needs, and matches) was randomly generated. In a real-case scenario, you would have to load this data from other sources (like Excel, a DB, or a CSV file). Also, there was an upper limit of 5000 generations (all runs took about 5 seconds to complete).

After looking at the plot, you can see some interesting facts:

All cases start from a random solution, which is generally bad (the worst one is for 50 participants),

All cases get very close to the perfect matching after 3000 generations,

There was one case of mutation (the red line drop, near the 5000th generation),

Learning is slower for a greater number of participants.

Conclusion

This experiment shows that evolutionary algorithms provide a very efficient way of solving match-making problems. They are easy to implement and easily extendible with custom restrictions and limitations.

The usage of an EA can also provide extra value by generating online recommendations during the event. It's common that some participants are absent, which disturbs the expectations of others.
