Click Fraud Resistant Methods for Learning Click-Through Rates


Nicole Immorlica, Kamal Jain, Mohammad Mahdian, and Kunal Talwar
Microsoft Research, Redmond, WA, USA

Abstract. In pay-per-click online advertising systems like Google, Overture, or MSN, advertisers are charged for their ads only when a user clicks on the ad. While these systems have many advantages over other methods of selling online ads, they suffer from one major drawback: they are highly susceptible to a particular style of fraudulent attack called click fraud. Click fraud happens when an advertiser or service provider generates clicks on an ad with the sole intent of increasing the payment of the advertiser. Leaders in the pay-per-click marketplace have identified click fraud as the most significant threat to their business model. We demonstrate that a particular class of learning algorithms, called click-based algorithms, is resistant to click fraud in some sense. We focus on a simple situation in which there is just one ad slot, and show that fraudulent clicks cannot increase the expected payment per impression by more than $o(1)$ in a click-based algorithm. Conversely, we show that other common learning algorithms are vulnerable to fraudulent attacks.

1 Introduction

The Internet is probably the most important technological creation of our times. It provides many immensely useful services to the masses for free, including such essentials as web portals and web search. These services are expensive to maintain and depend upon advertisement revenue to remain free. Many services such as Google, Overture, and certain components of MSN generate advertisement revenue by selling clicks. In these pay-per-click systems, an advertiser is charged only when a user clicks on his ad.
A scenario of particular concern for service providers and advertisers in pay-per-click markets is click fraud: the practice of gaming the system by creating fraudulent clicks, usually with the intent of increasing the payment of the advertiser. As each click can cost on the order of $1, it does not take many fraudulent clicks to generate a large bill. Just a million fraudulent clicks, generated perhaps by a simple script, can cost the advertiser $1,000,000, easily exhausting his budget. Fraudulent behavior threatens the very existence of the pay-per-click advertising market and has consequently become a subject of great concern [5, 7, 8]. Recently, Google CFO George Reyes said, in regards to the click fraud problem, that "I think something has to be done about this really, really quickly, because I think, potentially, it threatens our business model." [8]

A variety of proposals for reducing click fraud have surfaced. Most service providers currently approach the problem of click fraud by attempting to automatically recognize fraudulent clicks and discount them. Fraudulent clicks are recognized by machine learning algorithms, which use information regarding the navigational behavior of users to try to distinguish between human- and robot-generated clicks. Such techniques require large datasets to train the learning methods, have high classification error, and are at the mercy of the wisdom of the scammers. Recent tricks, like using cheap labor in India to generate these fraudulent clicks [9], make it virtually impossible to use these machine learning algorithms. Another line of proposals attempts to reduce click fraud by removing the incentives for it. Each display of an ad is called an impression. Goodman [2] proposed selling advertisers a particular percentage of all impressions rather than user clicks. Similar proposals have suggested selling impressions. For a click-through rate of 1%, the expected price per impression in the scenario mentioned above is just one cent. Thus, to force a payment of $1,000,000 upon the advertiser, 100,000,000 fraudulent impressions must be generated, versus just 1,000,000 fraudulent clicks in the pay-per-click system. When such large quantities of fraud are required to create the desired effect, it ceases to be profitable to the scammer. Although percentage- and impression-based proposals effectively eliminate fraud, they suffer from three major drawbacks. First, the developed industry standard sells clicks, and any major departure from this model risks a negative backlash in the marketplace. Second, by selling clicks, the service provider assumes some of the risk due to natural fluctuations in the marketplace (differences between day and night or week and weekend, for example).
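The arithmetic behind this comparison is easy to sketch. The snippet below is illustrative only; the $1-per-click price and 1% CTR are the figures used in the text, and the function name is ours:

```python
# Illustrative sketch of the fraud-economics comparison above.
# Assumptions (from the text): a click costs about $1 and the CTR is 1%,
# so an impression is worth about one cent in expectation.

def fraud_events_needed(target_cost, price_per_click=1.0, ctr=0.01):
    """Return (fraudulent clicks needed under pay-per-click,
    fraudulent impressions needed under pay-per-impression)
    to make the advertiser pay `target_cost`."""
    price_per_impression = price_per_click * ctr
    return target_cost / price_per_click, target_cost / price_per_impression

clicks, impressions = fraud_events_needed(1_000_000)
print(f"{clicks:,.0f} fraudulent clicks vs {impressions:,.0f} fraudulent impressions")
```

With the text's numbers this reproduces the 1,000,000-click versus 100,000,000-impression gap: a hundredfold increase in the effort the scammer must expend.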
Third, by requesting a bid per click, the service provider lessens the difficulty of the strategic calculation for the advertiser. Namely, the advertiser only needs to estimate the worth of a click, an arguably easier task than estimating the worth of an impression. In this paper, we attempt to eliminate the incentives for click fraud in a system that sells clicks. We focus on a common pay-per-click system, generally believed to be used by Google [3] among others, which has been shown empirically to have higher revenue [1, 4] than other pay-per-click systems like that of Overture [6]. This system is based on estimates of the click-through rate (CTR) of an ad. The CTR is defined as the likelihood, or probability, that an impression of an ad generates a click. In this system, each advertiser submits a bid which is the maximum amount the advertiser is willing to pay per click on the ad. The advertisers are then ranked based on the product of their bids and the respective estimated CTRs of their ads. This product can be interpreted as an expected bid per impression. The ad space is allocated in the order induced by this ranking. Advertisers are charged only if they receive a click, and they are charged an amount inversely proportional to their CTR.

In pay-per-click systems, when a fraudulent click happens, an advertiser has to pay for it, resulting in a short-term loss to the advertiser whose ad is being clicked fraudulently. However, in the system described above, there is a long-term benefit too. Namely, a fraudulent click will be interpreted as an increased likelihood of a future click and so result in an increase in the estimate of the CTR. As the payment is inversely proportional to the CTR, this results in a reduction in the payment. If the short-term loss and the long-term benefit exactly cancel each other, then there will be less incentive to generate fraudulent clicks; in fact, a fraudulent click or impression will only cost the advertiser as much as a fraudulent impression in a pay-per-impression scheme. Whether this happens depends significantly on how the system estimates the CTRs. There are a variety of sensible algorithms for this task. Some options include taking the fraction of all impressions so far that generated a click, or the fraction of impressions in the last hour that generated a click, or the fraction of the last hundred impressions that generated a click, or the inverse of the number of impressions after the most recent click, and so on. We prove that a particular class of learning algorithms, called click-based algorithms, has the property that the short-term loss and long-term benefit in fact cancel. Click-based algorithms are a class of algorithms whose estimates are based upon the number of impressions between clicks. To compute the current estimate, a click-based algorithm computes a weight for each impression based solely on the number of clicks after it and then takes the weighted average. An example of an algorithm in this class is the one which outputs an estimate equal to the reciprocal of the number of impressions before the most recent click. We show that click-based algorithms satisfying additional technical assumptions are fraud-resistant in the sense that a devious user cannot change the expected payment of the advertiser per impression (see Section 3 for a formal definition). We provide an example showing that a traditional method for estimating the CTR (that is, taking the average over a fixed number of recent impressions) is not fraud-resistant.
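As a concrete illustration, the example algorithm just mentioned (the reciprocal of the number of impressions back to the most recent click) can be sketched in a few lines. The convention of counting the clicked impression itself is our assumption; the text does not pin it down:

```python
# Minimal sketch of the click-based estimator mentioned above: the CTR
# estimate is the reciprocal of the number of impressions back to (and,
# by our convention, including) the most recent click.

def click_based_estimate(history):
    """`history` lists past impression outcomes, most recent first
    (1 = click, 0 = no click). Returns None if no click has occurred."""
    for gap, outcome in enumerate(history, start=1):
        if outcome == 1:
            return 1.0 / gap  # one click observed per `gap` impressions
    return None

print(click_based_estimate([0, 0, 1, 0, 1]))  # most recent click 3 back: 1/3
```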
The structure of this paper is as follows. In Section 2, we describe the setting. In Section 3, we define a very general class of algorithms for learning the CTR of an ad, called CTR learning algorithms. In Section 4, we define a special class of these algorithms, called click-based algorithms, and prove that they are fraud-resistant. In Section 5, we give examples showing that other common algorithms for learning the CTR are not fraud-resistant.

2 Setting

We consider a simple setting in which a service provider wishes to sell space for a single ad on a web page. There are a number of advertisers, each of whom wishes to display their ad on the web page. The service provider sells the ad space according to the pay-per-click model and through an auction: the advertiser whose ad is displayed is charged only when a user clicks on his ad. Each advertiser $i$ submits a bid $b_i$ indicating the maximum amount he is willing to pay the service provider when a user clicks on his ad. The allocation and price are computed using the mechanism described below.

For each ad, the service provider estimates the probability that the ad receives a click from the user requesting the page, if it is displayed. This probability is called the click-through rate (CTR) of the ad. Each bid $b_i$ is multiplied by the estimate $\lambda_i$ of the CTR of the ad. The product $\lambda_i b_i$ thus represents the expected willingness-to-pay of advertiser $i$ per impression. The slot is awarded to the advertiser $i$ with the highest value of $\lambda_i b_i$. If the user indeed clicks on the ad, then the winning advertiser is charged a price equal to the second highest $\lambda_i b_i$ divided by his (that is, the winner's) estimated CTR. Thus, if we label advertisers such that $\lambda_i b_i > \lambda_{i+1} b_{i+1}$, then the slot is awarded to advertiser 1 and, upon a click, he is charged a price $\lambda_2 b_2 / \lambda_1$. In this paper, we study the mechanism over a period of time during which the same advertiser wins the auction and the value of $\lambda_2 b_2$ does not change. If the advertisers do not change their bids too frequently and $\lambda_1 b_1$ and $\lambda_2 b_2$ are not too close to each other, it is natural to expect this to happen most of the time. We will henceforth focus on the winner of the auction, defining $p := \lambda_2 b_2$ and $\lambda := \lambda_1$.

3 CTR Learning Algorithms

Of course, we have left unspecified the method by which the algorithm learns the CTRs. The subject of this work is to study different algorithms for computing the CTR of an advertiser. There are a variety of different algorithms one could imagine for learning the CTR of an ad. Some simple examples, described below, include averaging over time, impressions, or clicks, as well as exponential discounting.

Average over fixed time window: For a parameter $T$, let $x$ be the number of clicks received during the last $T$ time units and $y$ be the number of impressions during the last $T$ time units. Then $\lambda = x/y$.

Average over fixed impression window: For a parameter $y$, let $x$ be the number of clicks received during the last $y$ impressions. Then $\lambda = x/y$.

Average over fixed click window: For a parameter $x$, let $y$ be the number of impressions since the $x$-th last click. Then $\lambda = x/y$.
Exponential discounting: For a parameter $\alpha$, let $e^{-\alpha i}$ be a discounting factor used to weight the $i$-th most recent impression. Take a weighted average over all impressions, that is, $\sum_i x_i e^{-\alpha i} / \sum_i e^{-\alpha i}$, where $x_i$ is an indicator variable for the event that the $i$-th impression resulted in a click.

These algorithms are all part of a general class defined below. The algorithm estimates the CTR of the ad for the current impression as follows. Label the previous impressions, starting with the most recent, by $1, 2, \ldots$. Let $t_i$ be the amount of time that elapsed between impression $i$ and impression 1, and $c_i$ be the number of impressions that received clicks between impression $i$ and impression 1 (impression 1 included). The learning algorithms we are interested in are defined by a constant $\gamma$ and a function $\delta(t_i, i, c_i)$ which is decreasing in all three parameters. This function can be thought of as a discounting parameter, allowing the learning algorithm to emphasize recent history over more distant history. Let $x_i$ be an indicator variable for the event that the $i$-th impression resulted in a click. The learning algorithm then computes
$$\lambda = \frac{\sum_{i=1}^{\infty} x_i\,\delta(t_i, i, c_i) + \gamma}{\sum_{i=1}^{\infty} \delta(t_i, i, c_i) + \gamma}.$$
The constant $\gamma$ is often a small constant that is used to guarantee that the estimated click-through rate is strictly positive and finite. Notice that in the above expression, the summation is over every $i$ from 1 to $\infty$. This is ambiguous, since the advertiser has not always been present in the system. To remove this ambiguity, the algorithm assumes a default infinite history for every advertiser that enters the system. This default sequence could be a sequence of impressions all leading to clicks, indicating that the newly arrived advertiser is initialized with a CTR equal to one, or (as is often the case in practice) it could be a sequence indicating a system-wide default initial CTR for new advertisers. For most common learning algorithms, the discount factor becomes zero or very small for the far distant history, and hence the choice of the default sequence only affects the estimate of the CTR at the arrival of a new advertiser. Note that all four learning methods discussed above are included in this class (with $\gamma = 0$).

Average over fixed time window: The function $\delta(t_i, i, c_i)$ is 1 if $t_i \le T$ and 0 otherwise.

Average over fixed impression window: The function $\delta(t_i, i, c_i)$ is 1 if $i \le y$ and 0 otherwise.

Average over fixed click window: The function $\delta(t_i, i, c_i)$ is 1 if $c_i \le x$ and 0 otherwise.

Exponential discounting: The function $\delta(t_i, i, c_i)$ is $e^{-\alpha i}$.

4 Fraud Resistance

For each of the methods listed in the previous section, for an appropriate setting of parameters (e.g., large enough $y$ in the second method), on a random sequence generated from a constant CTR the estimate computed by the algorithm gets arbitrarily close to the true CTR, and so it is not a priori apparent which method we might prefer.
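For concreteness, here is a sketch of this general estimator over a finite history (the paper assumes an infinite default history; we simply truncate, which matches the common case where the discount vanishes for distant impressions). The function names and example window sizes are ours:

```python
# Sketch of the general CTR learning class: a discounted weighted average
#   lambda = (sum_i x_i * delta(t_i, i, c_i) + gamma)
#          / (sum_i       delta(t_i, i, c_i) + gamma),
# computed here over a finite (truncated) history.

def estimate_ctr(history, delta, gamma=1e-6):
    """`history` is a list of (t_i, x_i) pairs, most recent first, where t_i
    is the time elapsed since impression 1 and x_i indicates a click."""
    num = den = gamma
    clicks = 0  # c_i: clicked impressions between impression i and impression 1
    for i, (t, x) in enumerate(history, start=1):
        clicks += x  # the i-th impression itself is counted if it was clicked
        w = delta(t, i, clicks)
        num += x * w
        den += w
    return num / den

# Two of the discounting functions from the text (window sizes are examples):
def fixed_impression_window(t, i, c, y=100):
    return 1.0 if i <= y else 0.0

def fixed_click_window(t, i, c, x=5):
    return 1.0 if c <= x else 0.0

history = [(0.0, 1), (1.2, 0), (2.5, 0), (3.1, 1)]  # 2 clicks in 4 impressions
print(estimate_ctr(history, fixed_impression_window))  # close to 0.5
```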
Furthermore, when the learning algorithm computes the true CTR, the expected behavior of the system is essentially equivalent to a pay-per-impression system, with substantially reduced incentives for fraud. This might lead to the conclusion that all of the above algorithms are equally resistant to click fraud. However, this conclusion is wrong, as the scammer can sometimes create fluctuations in the CTR, thereby taking advantage of the failure of the algorithm to react quickly to the change in the CTR to harm the advertiser. In this section, we introduce a notion of fraud resistance for CTR learning algorithms and prove that a class of algorithms is fraud-resistant. The definition of fraud resistance is motivated by the way various notions of security are defined in cryptography: we compare the expected amount the advertiser has to pay in two scenarios, one based on a random sequence generated from a constant CTR without any fraud, and the other with an adversary who can change a fraction of the outcomes (click vs. no-click) on a similar random sequence.

Any scenario can be described by a time-stamped sequence of the outcomes of impressions (i.e., click or no-click). More precisely, if we denote a click by 1 and a no-click by 0, the scenario can be described by a doubly infinite sequence $s$ of zeros and ones, and a doubly infinite increasing sequence $t$ of real numbers indicating the time stamps (the latter sequence is irrelevant if the learning algorithm is time-independent, which will be the case for the algorithms we consider in this section). The pair $(s, t)$ indicates a scenario where the $i$-th impression ($i$ can be any integer, positive or negative) occurs at time $t_i$ and results in a click if and only if $s_i = 1$.

Definition 1. Let $\epsilon$ be a constant between zero and one, and $(s, t)$ be a scenario generated at random as follows: the outcome of the $i$-th impression, $s_i$, is 1 with an arbitrary fixed probability $\lambda$ and 0 otherwise, and the time difference $t_i - t_{i-1}$ between two consecutive impressions is drawn from a Poisson distribution with an arbitrary fixed mean. For a value of $n$, let $(s', t')$ be a history obtained from $(s, t)$ by letting an adversary insert at most $\epsilon n$ impressions after the impression indexed 0 in $(s, t)$. The history $(s', t')$ is indexed in such a way that impression 0 refers to the same impression in $(s, t)$ and $(s', t')$. We say that a CTR learning algorithm is $\epsilon$-fraud resistant if for every adversary, the expected average payment of the advertiser per impression during the impressions indexed $1, \ldots, n$ in scenario $(s', t')$ is bounded by that of scenario $(s, t)$, plus an additional term that tends to zero as $n$ tends to infinity (holding everything else constant). More precisely, if $q_j$ ($q'_j$, respectively) denotes the payment of the advertiser for the $j$-th impression in scenario $(s, t)$ ($(s', t')$, respectively), then the algorithm is $\epsilon$-fraud resistant if for every adversary,
$$E\Big[\frac{1}{n}\sum_{j=1}^{n} q'_j\Big] \;\le\; E\Big[\frac{1}{n}\sum_{j=1}^{n} q_j\Big] + o(1).$$

Intuitively, in a fraud-resistant algorithm, a fraudulent click or impression only costs the advertiser as much as a fraudulent impression in a pay-per-impression scheme. Some details are intentionally left ambiguous in the above definition. In particular, we have not specified how much knowledge the adversary has. In practice, an adversary can probably gain knowledge about some statistics of the history, but not the complete history. However, our positive result in this section holds even for an all-powerful adversary that knows the whole sequence (even the future) in advance. We prove that even for such an adversary, there are simple learning algorithms that are fraud-resistant. Our negative result (presented in the next section) shows that many learning algorithms are not fraud-resistant even if the adversary only knows the learning algorithm and the frequency of impressions in the scenario. Therefore, our results are quite robust in this respect.

Another point worth mentioning is that the assumption that the true click-through rate $\lambda$ is a constant in the above definition is merely a simplifying assumption. In fact, our results hold (with the same proof) even if the parameter $\lambda$ changes over time, as long as the value of $\lambda$ at every point is at least a positive constant (i.e., does not get arbitrarily close to zero). Also, the choice of the distribution for the time stamps in the definition was arbitrary, as our positive result only concerns CTR learning algorithms that are time-independent, and our negative result in the next section can be adapted to any case where the time stamps come from an arbitrary known distribution.

In this section, we show that CTR learning algorithms for which the discounting factor $\delta$ depends only on the number of impressions in the history which resulted in clicks, that is, the parameter $c_i$ defined above (and not on $i$ and $t_i$), are fraud-resistant. We call such algorithms click-based algorithms.

Definition 2. A CTR learning algorithm is click-based if $\delta(t_i, i, c_i) = \delta(c_i)$ for some decreasing function $\delta(\cdot)$.

Of the schemes listed in the previous section, it is easy to see that only averaging over clicks is click-based. Intuitively, a click-based algorithm estimates the CTR by estimating the expected click-wait (ECW), the number of impressions it takes to receive a click. The following theorem shows that click-based algorithms are fraud-resistant.

Theorem 1. Consider a click-based CTR learning algorithm $A$ given by a discounting function $\delta(\cdot)$ and $\gamma = 0$. Assume that $\sum_{i=1}^{\infty} i\,\delta(i)$ is bounded. Then for every $\epsilon \le 1$, the algorithm $A$ is $\epsilon$-fraud-resistant.

Proof. The proof is based on a simple charging argument. We distribute the payment for each click over the impressions preceding it, and then bound the expected total charge to any single impression due to the clicks after it. We begin by introducing some notation. For any scenario $(s, t)$ and index $i$, let $S_{i,j}$ denote the set of impressions between the $j$-th and the $(j-1)$-st most recent click before impression $i$ (including click $j$ but not click $j-1$), and let $n_{i,j} = |S_{i,j}|$.
Then the estimated CTR at $i$ can be written as
$$\frac{\sum_{j=1}^{\infty} \delta(j)}{\sum_{j=1}^{\infty} n_{i,j}\,\delta(j)}, \qquad (1)$$
and hence the payment at $i$, if impression $i$ receives a click, is
$$p\,\frac{\sum_{j=1}^{\infty} n_{i,j}\,\delta(j)}{\sum_{j=1}^{\infty} \delta(j)}.$$
We now introduce a scheme to charge this payment to the preceding impressions (for both the with-fraud and without-fraud scenarios). Fix an impression $i'$ with $i' < i$ and let $j$ be the number of clicks between $i'$ and $i$ (including $i$ if it is a click). If impression $i$ leads to a click, we charge $i'$ an amount equal to
$$p\,\frac{\delta(j)}{\sum_{j'=1}^{\infty} \delta(j')} \qquad (2)$$
for this click. Summing over all impressions preceding click $i$, we see that the total payment charged is equal to the payment for the click at impression $i$. The crux of the argument in the remainder of the proof is to show that in both scenarios, with or without fraud, the average total payment charged to an impression $i$ in the interval $[1, n]$ by clicks occurring after $i$ is $p \pm o(1)$.

We start by proving an upper bound on the total amount charged to the impressions. We first focus on bounding the total amount charged to impressions before impression 0. The impressions in the set $S_{0,j}$ are charged by the $i$-th click after impression 0 a total of
$$p\,\frac{n_{0,j}\,\delta(i+j)}{\sum_{k=1}^{\infty} \delta(k)}.$$
Summing over all clicks between impression 0 and impression $n$ (inclusive), we see that the total charge to the impressions in $S_{0,j}$ is at most
$$\sum_{i=1}^{\infty} p\,\frac{n_{0,j}\,\delta(i+j)}{\sum_{k=1}^{\infty} \delta(k)}.$$
Summing over all sets $S_{0,j}$, we see that the total charge to impressions before 0 is at most
$$\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} p\,\frac{n_{0,j}\,\delta(i+j)}{\sum_{k=1}^{\infty} \delta(k)}.$$
Since the denominator $\sum_{k=1}^{\infty} \delta(k)$ is a positive constant, we only need to bound the expected value of the numerator $\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} n_{0,j}\,\delta(i+j)$. Since in both the with-fraud and without-fraud scenarios the probability of a click for each impression before impression 0 is $\lambda$, the expected value of $n_{0,j}$ in both scenarios is $1/\lambda$. Therefore, the expectation of the total amount charged to impressions before 0 in both scenarios is at most
$$\frac{p}{\lambda}\,\frac{\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \delta(i+j)}{\sum_{k=1}^{\infty} \delta(k)} = \frac{p}{\lambda}\,\frac{\sum_{k=1}^{\infty} k\,\delta(k)}{\sum_{k=1}^{\infty} \delta(k)},$$
which is bounded by a constant (i.e., independent of $n$) since $\sum_{k=1}^{\infty} k\,\delta(k)$ is finite. We now bound the payments charged to impressions after impression 0. For a fixed impression $i$ with $0 \le i \le n$, the payment charged to $i$ by the $j$-th click after $i$ is given by Equation (2). Summing over all $j$, we see that the total payment charged to $i$ is at most
$$\sum_{j=1}^{\infty} p\,\frac{\delta(j)}{\sum_{j'=1}^{\infty} \delta(j')} = p. \qquad (3)$$
By the above equation, along with the fact that the expected total charge to impressions before impression 0 is bounded, we see that the expected total charge to all impressions is at most $np + O(1)$, and therefore the expected average payment per impression (in both the with-fraud and without-fraud scenarios) is at most $p + o(1)$.

We now show that in the scenario $(s, t)$ (i.e., without fraud), the expected average payment per impression is at least $p - o(1)$. Let $k$ be the number of clicks in the interval $[1, n]$ and consider an impression in $S_{n,j}$. Then, by Equation (2), this impression is charged an amount equal to
$$\sum_{i=1}^{j-1} p\,\frac{\delta(i)}{\sum_{i'=1}^{\infty} \delta(i')} = p - p\,\frac{\sum_{i=j}^{\infty} \delta(i)}{\sum_{i'=1}^{\infty} \delta(i')}.$$
Therefore, the total amount charged to the impressions in the interval $[1, n]$ is at least
$$np - p \sum_{j=1}^{k+1} n_{n,j}\,\frac{\sum_{i=j}^{\infty} \delta(i)}{\sum_{i'=1}^{\infty} \delta(i')}.$$
As in the previous case, the expected value of $n_{n,j}$ in the scenario without any fraud is precisely $1/\lambda$. Therefore, the expectation of the total charge to impressions in $[1, n]$ is at least
$$np - \frac{p}{\lambda} \sum_{j=1}^{k+1} \frac{\sum_{i=j}^{\infty} \delta(i)}{\sum_{i'=1}^{\infty} \delta(i')} \;\ge\; np - \frac{p}{\lambda}\,\frac{\sum_{i=1}^{\infty} i\,\delta(i)}{\sum_{i=1}^{\infty} \delta(i)}.$$
Therefore, since $\sum_{i=1}^{\infty} i\,\delta(i)$ is bounded, the expected average payment per impression in the scenario without fraud is at least $p - o(1)$. This shows that the difference between the expected average payment per impression in the two scenarios is at most $o(1)$, and hence the algorithm is $\epsilon$-fraud resistant.

5 Non-Click-Based Algorithms

In this section, we give an example showing that in many simple non-click-based algorithms (such as averaging over a fixed time window or impression window, presented in Section 3), an adversary can use a simple strategy to increase the average payment of the advertiser per impression. We present the example for the learning algorithm that takes the average over a fixed impression window. It is easy to see that a similar example exists for averaging over a fixed time window.

Consider a history defined by setting the outcome of each impression to click with probability $\lambda$ for a fixed $\lambda$. Denote this sequence by $s$. We consider the algorithm that estimates the CTR as the number of click-throughs during the past $l$ impressions plus a small constant $\gamma$, divided by $l + \gamma$, for a fixed $l$.
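Before stating the proposition, it is instructive to simulate this fixed-impression-window estimator directly. The Monte-Carlo sketch below (our own construction, with illustrative parameters) implements the alternating attack analyzed in the proof sketch that follows: fraudulent clicks in every other block of $l$ impressions, fraudulent no-clicks in the rest:

```python
import random

# Monte-Carlo sketch (ours, illustrative parameters) of an attack on the
# fixed-impression-window estimator: the CTR estimate is the fraction of
# clicks among the last l impressions, and the advertiser pays p / estimate
# for each click.

def avg_payment_per_impression(n, lam, eps, p=1.0, l=200, fraud=False, seed=0):
    rng = random.Random(seed)
    window = [1 if rng.random() < lam else 0 for _ in range(l)]  # warm-up history
    total = 0.0
    for i in range(n):
        if fraud and rng.random() < eps:
            # fraudulent impressions: clicks in even blocks of l, no-clicks in odd
            outcome = 1 if (i // l) % 2 == 0 else 0
        else:
            outcome = 1 if rng.random() < lam else 0
        estimate = max(sum(window), 1) / l  # clicks among the last l impressions
        if outcome:
            total += p / estimate           # price charged for this click
        window.pop(0)
        window.append(outcome)
    return total / n

base = avg_payment_per_impression(20_000, lam=0.1, eps=0.5)
attacked = avg_payment_per_impression(20_000, lam=0.1, eps=0.5, fraud=True)
print(f"without fraud: {base:.2f}, with fraud: {attacked:.2f}")
```

In runs of this sketch the no-fraud average stays near $p$, while the attacked average is substantially higher, consistent with the closed-form expression derived in the proof sketch below.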
If $l$ is large enough and $\gamma$ is small but positive, the estimate provided by the algorithm is often very close to $\lambda$, and therefore the average payment per impression on any interval of length $n$ is arbitrarily close to $p$. In the following proposition, we show that an adversary can increase the average payment by a non-negligible amount.

Proposition 1. In the scenario defined above, there is an adversary that can increase the average payment per impression over any interval of length $n$, for any large enough $n$, by inserting $\epsilon n$ fraudulent impressions and clicking on some of them.

Proof Sketch: Consider the following adversary: the adversary inserts $\epsilon n$ fraudulent impressions distributed uniformly in the interval starting at the impression indexed 1 and ending at the impression indexed $(1-\epsilon)n$ (with outcomes described below), so that in the scenario with fraud, each of the first $n$ impressions after impression 0 is fraudulent with probability $\epsilon$. Divide the set of impressions after impression 0 into a set of intervals $I_1, I_2, \ldots$, where each interval $I_j$ contains $l$ impressions (real or fraudulent). In other words, $I_1$ is the set of the first $l$ impressions after impression 0, $I_2$ is the set of the next $l$ impressions, etc. The adversary sets the outcome of every fraudulent impression in $I_j$ to click for $j$ odd and to no-click for $j$ even. This means that the true CTR during $I_j$ is $(1-\epsilon)\lambda + \epsilon$ for odd $j$ and $(1-\epsilon)\lambda$ for even $j$.

The algorithm estimates the CTR for the $r$-th impression of the interval $I_j$ by dividing the number of clicks during the last $l$ impressions by $l$. Of these impressions, $r$ are in $I_j$ and $l - r$ are in $I_{j-1}$. Therefore, for $j$ even, the expected number of clicks during the last $l$ impressions is $r(1-\epsilon)\lambda + (l-r)\big((1-\epsilon)\lambda + \epsilon\big)$, and the estimated CTR is, in expectation, this value plus $\gamma$, divided by $l + \gamma$. When $l$ is large and $\gamma$ is small, this value is almost always close to its expectation. Therefore, the price of a click for this impression is close to $pl$ divided by the above expression. Thus, since the probability of a click for this impression is $(1-\epsilon)\lambda$, the expected payment of the advertiser for this impression can be approximated by
$$\frac{p\,l\,(1-\epsilon)\lambda}{r(1-\epsilon)\lambda + (l-r)\big((1-\epsilon)\lambda + \epsilon\big)}.$$
The average of these values, for all $r$ from 1 to $l$, can be approximated using the following integral:
$$\frac{1}{l}\int_0^l \frac{p\,l\,(1-\epsilon)\lambda}{r(1-\epsilon)\lambda + (l-r)\big((1-\epsilon)\lambda + \epsilon\big)}\,dr = \frac{p(1-\epsilon)\lambda}{\epsilon}\,\ln\Big(1 + \frac{\epsilon}{(1-\epsilon)\lambda}\Big).$$
Similarly, the average expected payment of the advertiser per impression in the interval $I_j$ for $j$ odd and $j > 1$ can be approximated by
$$\frac{p\big((1-\epsilon)\lambda + \epsilon\big)}{\epsilon}\,\ln\Big(1 + \frac{\epsilon}{(1-\epsilon)\lambda}\Big).$$
Denote $\alpha = \frac{\epsilon}{(1-\epsilon)\lambda}$. Therefore, the average payment of the advertiser per impression can be written as
$$p\,\Big(1 + \frac{\alpha}{2}\Big)\,\frac{\ln(1+\alpha)}{\alpha}.$$

Since $\alpha$ is a positive constant, the above expression is strictly greater than $p$; this follows from the inequality $\ln(1+\alpha) > \frac{2\alpha}{2+\alpha}$ for $\alpha > 0$.

6 Discussion

In this paper, we discussed pay-per-click marketplaces and proved that a particular class of learning algorithms can reduce click fraud in a simplified setting. Our results lead to several interesting extensions and open questions.

Pay-Per-Acquisition Marketplaces. We focused on pay-per-click marketplaces. Our reasons for this were three-fold: it is a common industry model, it absorbs risk due to market fluctuations for the advertiser, and it simplifies the strategic calculations of the advertiser. The latter two of these comments can equally well be employed to argue the desirability of a pay-per-acquisition marketplace. In these marketplaces, a service provider receives payment from an advertiser only when a click results in a purchase. Such systems are used by Amazon, for example, to sell books on web pages: a service provider, say Expedia, can list an Amazon ad for a travel guide with the understanding that, should a user purchase the product advertised, the service provider will receive a payment. The problem with pay-per-acquisition systems is that the service provider must trust the advertiser to truthfully report those clicks which result in acquisitions. Our results hint at a solution for this problem. We have seen that in a simple scenario with a single ad slot, click-based algorithms are fraud-resistant in the sense that the expected payment per impression of an advertiser cannot be increased by click fraud schemes. In fact, it can also be shown that this payment cannot be decreased either. Thus, just as click-based learning algorithms reduce fraud in pay-per-click systems, acquisition-based learning algorithms induce truthful reporting in pay-per-acquisition systems.

Computational Issues. We have shown that click-based learning algorithms eliminate click fraud.
However, in order to be practical and implementable, learning algorithms must also be easily computed with constant memory. The computability of a click-based algorithm depends significantly on the choice of the algorithm. Consider, for example, a simple click-based exponentially-weighted algorithm with $\delta(i) = e^{-\alpha i}$. Just two numbers are needed to compute this estimate: the estimate of the click-through rate as of the most recent impression that led to a click, and a counter of the number of impressions since the last click. However, other click-based algorithms have worse computational properties. Consider an algorithm in which $\delta(i) \in \{0, 1\}$ with $\delta(i) = 1$ if and only if $i \le l$ for some (possibly large) $l$. Then at least $l$ numbers must be recorded to compute this estimate exactly. One interesting question is how efficiently (in terms of space) a given estimate can be computed.

7 Acknowledgements

We would like to thank Omid Etesami and Uriel Feige for fruitful discussions.
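To make the constant-memory point concrete, here is one way (our construction, not the paper's) to maintain a click-based exponentially-weighted estimate with two running sums. We weight each impression by $e^{-\alpha k}$, where $k$ is the number of clicks observed after it, so observing a click simply decays both sums:

```python
import math

# Constant-memory sketch of a click-based exponentially-weighted estimator
# (our construction): an impression's weight is exp(-alpha * k), where k is
# the number of clicks observed after that impression. Only two running
# sums are stored, regardless of history length.

class ClickBasedEstimator:
    def __init__(self, alpha=0.1, initial_ctr=1.0):
        self.decay = math.exp(-alpha)
        # A default infinite all-click history puts the weighted click sum
        # at its fixed point 1 / (1 - decay).
        self.num = 1.0 / (1.0 - self.decay)   # weighted sum of clicks
        self.den = self.num / initial_ctr     # weighted sum of impressions

    def estimate(self):
        return self.num / self.den

    def observe(self, clicked):
        if clicked:
            # every past impression now has one more click after it
            self.num = self.decay * self.num + 1.0
            self.den = self.decay * self.den + 1.0
        else:
            self.den += 1.0  # a fresh impression has no clicks after it yet

est = ClickBasedEstimator(alpha=0.1)
for _ in range(2000):            # deterministic pattern: 1 click per 4 impressions
    est.observe(True)
    for _ in range(3):
        est.observe(False)
print(round(est.estimate(), 3))  # converges to 0.25
```

Note that the weight of an impression depends only on the number of clicks after it, so this estimator is click-based in the sense of Definition 2 (up to the exact convention for counting the current click).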


CHAPTER 1 Compound Interest 1. Compound Interest The simplest example of interest is a loan agreement two children might make: I will lend you a dollar, but every day you keep it, you owe me one more penny.

Math Circle Beginners Group October 18, 2015 Warm-up problem 1. Let n be a (positive) integer. Prove that if n 2 is odd, then n is also odd. (Hint: Use a proof by contradiction.) Suppose that n 2 is odd

Chapter 3 Sequences In this chapter, we discuss sequences. We say what it means for a sequence to converge, and define the limit of a convergent sequence. We begin with some preliminary results about the

Online Ad Auctions By Hal R. Varian Draft: February 16, 2009 I describe how search engines sell ad space using an auction. I analyze advertiser behavior in this context using elementary price theory and

THE FUNDAMENTAL THEOREM OF ARBITRAGE PRICING 1. Introduction The Black-Scholes theory, which is the main subject of this course and its sequel, is based on the Efficient Market Hypothesis, that arbitrages

Calculus for Middle School Teachers Problems and Notes for MTHT 466 Bonnie Saunders Fall 2010 1 I Infinity Week 1 How big is Infinity? Problem of the Week: The Chess Board Problem There once was a humble

A New Interpretation of Information Rate reproduced with permission of AT&T By J. L. Kelly, jr. (Manuscript received March 2, 956) If the input symbols to a communication channel represent the outcomes

CPC/CPA Hybrid Bidding in a Second Price Auction Benjamin Edelman Hoan Soo Lee Working Paper 09-074 Copyright 2008 by Benjamin Edelman and Hoan Soo Lee Working papers are in draft form. This working paper

2 Applications to Business and Economics APPLYING THE DEFINITE INTEGRAL 442 Chapter 6 Further Topics in Integration In Section 6.1, you saw that area can be expressed as the limit of a sum, then evaluated

MAT2400 Analysis I A brief introduction to proofs, sets, and functions In Analysis I there is a lot of manipulations with sets and functions. It is probably also the first course where you have to take

MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way

Teacher s Guide Getting Started Shereen Khan & Fayad Ali Trinidad and Tobago Purpose In this two-day lesson, students develop different strategies to play a game in order to win. In particular, they will

WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural,

Pascal s wager So far we have discussed a number of arguments for or against the existence of God. In the reading for today, Pascal asks not Does God exist? but Should we believe in God? What is distinctive

8.7. MATHEMATICAL INDUCTION 8-135 8.7 Mathematical Induction Objective Prove a statement by mathematical induction Many mathematical facts are established by first observing a pattern, then making a conjecture

Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

ORDERS OF ELEMENTS IN A GROUP KEITH CONRAD 1. Introduction Let G be a group and g G. We say g has finite order if g n = e for some positive integer n. For example, 1 and i have finite order in C, since

Evaluating Trading Systems By John Ehlers and Ric Way INTRODUCTION What is the best way to evaluate the performance of a trading system? Conventional wisdom holds that the best way is to examine the system

6.42/8.62J Mathematics for Computer Science Srini Devadas and Eric Lehman May 3, 25 Lecture otes Expected Value I The expectation or expected value of a random variable is a single number that tells you

CHAPTER 2 Mathematics of Cryptography Part I: Modular Arithmetic, Congruence, and Matrices Objectives This chapter is intended to prepare the reader for the next few chapters in cryptography. The chapter

ROI-Based Campaign Management: Optimization Beyond Bidding White Paper October 2009 www.marinsoftware.com Executive Summary The major search engines get paid only when an ad is clicked. Their revenue is

Distributivity and related number tricks Notes: No calculators are to be used Each group of exercises is preceded by a short discussion of the concepts involved and one or two examples to be worked out

Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

MATH REVIEW KIT Reproduced with permission of the Certified General Accountant Association of Canada. Copyright 00 by the Certified General Accountant Association of Canada and the UBC Real Estate Division.

We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

ANALYTICAL MATHEMATICS FOR APPLICATIONS 206 LECTURE NOTES 8 ISSUED 24 APRIL 206 A series is a formal sum. Series a + a 2 + a 3 + + + where { } is a sequence of real numbers. Here formal means that we don