Statistically speaking, finding the right shade of makeup can be tough, no matter what your age. The data experts at Sephora, one of the world’s largest cosmetics retailers, indicate that the average woman tries seven foundations before she finds the ideal one. (I wonder if we could create a computer algorithm to speed that up?) Given the price of quality makeup, that can be an expensive proposition.

All hope is not lost, however, as Sephora is attempting to make the process easier by teaming up with Pantone, the company renowned as an authority on color and color systems, to help women, and maybe even some adventurous men, find the correct shade of makeup on the first try.

The new system is called Color IQ. The name is quite fitting, given the staggering amount of research, development and data processing behind it. It features digital imaging technology that records 27 separate color-corrected images in under two seconds, using eight visible-light settings and one ultraviolet setting, to recommend the ideal foundation color.

Sephora’s makeup experts indicate that skin tone is dictated by ethnicity, melanin levels, skin conditions, hemoglobin, sun exposure and freckles. Julie Bornstein, Sephora senior vice president, said that Sephora, which has spent decades working on the science behind color, has, in conjunction with Pantone, mapped 110 skin tone shades in the United States.

After Color IQ measures your signature skin tone, it takes the digital images of your face and breaks them into 100-pixel blocks. Sephora then creates a color composite along with a Pantone Skin Tone number for you.

Sephora has tested skin from women all over the world, producing 1,000 different skin combinations. That data will determine which foundation will work best for you, according to Bornstein. Once you get your Skin Tone number, a Sephora representative will enter it into the company’s iPad app for individualized suggestions.

Sephora’s New York and San Francisco stores will make this process available to women in those cities within the next few weeks. Roll-out of the skin scans to the company’s 300-plus stores in the United States will happen at a later date.

Bornstein noted that the concept of virtually trying on clothing and makeup is growing in both industries. What sets Sephora’s process apart is a precision that makes it genuinely effective.

Many will consider these customized fashion and makeup trends, which help women find their best beauty solutions, to be the next must-haves; others will pass on what may strike them as artificial.

As Bornstein stated, however, this process gives the industry the ability to truly recommend a product based on an individual’s skin.

As we saw with binary search, certain ageless data structures, such as the binary search tree, can help improve the efficiency of searches. Moving from linear search to binary search improved our search efficiency from O(n) to O(log n). We now present a new data structure, called a hash table, that will increase our efficiency to O(1), or constant time.

In computer science, a hash table, or a hash map, is a data structure that associates keys with values. The primary operation it supports efficiently is a lookup: given a key (e.g., a person’s name), find the corresponding value (e.g., that person’s telephone number). It works by transforming the key using a hash function into a hash, a number that is used to index into an array and locate the desired location (“bucket”) where the value should be.

Hash tables are often used to implement associative arrays, sets and caches. Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table. (O(1) means that it takes a constant amount of time independent of the number of items involved.) However, the rare worst-case lookup time can be as bad as O(n). Compared to other associative array data structures, hash tables are most useful when large numbers of records are to be stored, especially if the size of the data set can be predicted.

Hash tables may be used as in-memory data structures. Hash tables may also be adopted for use with persistent data structures; database indexes sometimes use disk-based data structures based on hash tables, although balanced trees are more popular.

Data transmission and storage cost money. The more information being dealt with, the more it costs. In spite of this, most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as ASCII text from word processors, binary code that can be executed on a computer, individual samples from a data acquisition system, and so on.

Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program is used to convert data from an easy-to-use format to one optimized for compactness. Likewise, a decompression program returns the information to its original form.

There are many different reasons for and ways of encoding data, and one of these ways is Huffman coding. This is used as a compression method in digital imaging and video as well as in other areas. The idea behind Huffman coding is simply to use shorter bit patterns for more common characters, and longer bit patterns for less common characters.

Huffman coding is based on the frequency of occurrence of a data item (a pixel, in images). The principle is to use a lower number of bits to encode the data that occur more frequently.

The first version of the Huffman encoding program follows.

#include &lt;iostream&gt;
#include &lt;map&gt;     // a multimap will be used, so we have to include it
#include &lt;cctype&gt;  // for toupper
#include &lt;cstdio&gt;  // for getchar
using namespace std;

void binary(int number)     // this procedure is used to convert a
{                           // number into a binary number
    int remainder;
    if (number <= 1) {
        cout << number;
        return;
    }
    remainder = number % 2;
    binary(number >> 1);
    cout << remainder;
}

int main()
{
    int ch;                 // store each char in ch; int so it can hold getchar()'s return value
    int alphabet[31];       // 31 counters used: 26 letters plus dot, comma and space
    typedef multimap<int, char> huffman; // make a multimap sorted by integers; we use a multimap
    huffman sentence;                    // because some characters may have the same number of occurrences
    int i = 0, j = 0;
    for (j = 0; j < 31; j++)            // alphabet[0..30] = 0;
        alphabet[j] = 0;
    while ((ch = getchar()) != '\n')    // insert chars into ch until Enter (Return) is pressed
    {
        if (ch == ' ')
            alphabet[28]++;             // increment number of spaces
        else if (ch == ',')
            alphabet[29]++;             // increment number of commas
        else if (ch == '.')
            alphabet[30]++;             // increment number of dots
        else {
            ch = toupper(ch);           // convert all chars to upper case
            alphabet[static_cast<int>(ch) - 65]++; // increment (char ASCII value) - 65 (this is the index number)
        }
    }
    for (j = 0; j < 31; j++)
    {
        if (alphabet[j] > 0)            // check if char occurred in the text
        {
            i++;                        // count the number of found chars, commas, dots, spaces
            if (j == 28)
                sentence.insert(make_pair(alphabet[j], ' ')); // make the pair from space and its occurrences
            else if (j == 29)
                sentence.insert(make_pair(alphabet[j], ',')); // make the pair from comma and its occurrences
            else if (j == 30)
                sentence.insert(make_pair(alphabet[j], '.')); // make the pair from dot and its occurrences
            else
                sentence.insert(make_pair(alphabet[j], static_cast<char>(65 + j))); // make the pair from a char and its occurrences
        }
    }
    huffman::iterator pos;              // make an iterator for the multimap
    j = 0;
    for (pos = sentence.begin(); pos != sentence.end(); ++pos)
    {
        j++;                            // number of different chars seen so far
        cout << "Letter " << pos->second << " "; // print out the letter and then its occurrence code
        binary(i - j);                  // convert the number to binary
        cout << endl;
    }
    return 0;
}