
8.
 Webpage classification (or webpage categorization) is the process of assigning a webpage to one or more category labels, e.g. “News”, “Sport”, “Business”. GOAL: survey the existing web classification techniques to find new areas for research, including web-specific features and algorithms that have been found to be useful for webpage classification.

9.
 What will you learn?  A detailed review of useful features for web classification  The algorithms used  The future research directions Webpage classification can help improve the quality of web search. Knowing these techniques can also help you improve your SEO skills, since each search engine keeps its own techniques secret.

11.
 The general problem of webpage classification can be divided into  Subject classification: the subject or topic of a webpage, e.g. “Adult”, “Sport”, “Business”.  Function classification: the role that the webpage plays, e.g. “Personal homepage”, “Course page”, “Admission page”.

12.
 Based on the number of classes, webpage classification can be divided into  binary classification  multi-class classification Based on the number of classes that can be assigned to an instance, classification can be divided into single-label classification and multi-label classification.

15.
 How is it being done?  By human effort ▪ In July 2006, it was reported that there were 73,354 editors in the dmoz ODP. As the web changes and continues to grow, “Automatic creation of classifiers from web corpora based on user-defined hierarchies” was introduced by Huang et al. in 2004. The starting point of this presentation!

21.
 In this section, we review the types of features that are useful in webpage classification research.  The most important characteristic that makes webpage classification different from plain-text classification is the HYPERLINK <a>…</a> We classify features into  On-page features: directly located on the page  Neighbor features: found on pages related to the page to be classified.

23.
 Textual content and tags  N-gram features ▪ Imagine two different documents: one contains the phrase “New York”, the other contains the separate terms “New” and “York”. A 2-gram feature distinguishes them. ▪ At Yahoo!, 5-gram features were used.  HTML tags or DOM ▪ Title, headings, metadata and main text ▪ Each is assigned an arbitrary weight. ▪ Nowadays most websites use nested lists (<ul><li>), which really helps in webpage classification.
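The n-gram idea above can be sketched in a few lines of Python. This is a minimal illustration, not the survey's implementation; the sample documents and whitespace tokenizer are made up:

```python
def ngrams(tokens, n):
    """Return all contiguous n-grams of a token list as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# A 2-gram (bigram) feature distinguishes the phrase "New York"
# from a page that merely mentions "New" and "York" separately.
doc_a = "he moved to New York last year".split()
doc_b = "New questions about York emerged today".split()

has_ny_a = ("New", "York") in ngrams(doc_a, 2)  # True
has_ny_b = ("New", "York") in ngrams(doc_b, 2)  # False
```

Unigram (bag-of-words) features would treat both documents identically; the bigram feature separates them.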

24.
 Textual content and tags  URL ▪ Kan and Thi (2004) ▪ Demonstrated that a webpage can be classified based on its URL alone
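A toy sketch of URL-based features in Python. The tokenizer and the keyword-to-label table are illustrative assumptions, not Kan and Thi's actual method:

```python
import re

def url_tokens(url):
    """Split a URL into lowercase alphanumeric tokens usable as features."""
    return [t for t in re.split(r"[^a-z0-9]+", url.lower()) if t]

def classify_by_url(url, keyword_labels):
    """Naive rule: return the labels whose keyword appears among the URL tokens."""
    tokens = set(url_tokens(url))
    return sorted(label for kw, label in keyword_labels.items() if kw in tokens)

# hypothetical keyword-to-label rules
rules = {"news": "News", "sport": "Sport", "admissions": "Admission page"}
labels = classify_by_url("http://www.example.edu/admissions/apply.html", rules)
# labels == ["Admission page"]
```

Real URL-based classifiers learn such associations from data rather than hand-writing them, but the feature extraction step looks much like this.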

25.
 Visual analysis  Each webpage has two representations 1. The text, represented in HTML 2. The visual representation rendered by a web browser  Most approaches focus on the text while ignoring the visual information, which is useful as well  Kovacevic et al. (2004) ▪ Each webpage is represented as a hierarchical “visual adjacency multigraph.” ▪ In the graph, each node represents an HTML object and each edge represents a spatial relation in the visual representation.

27.
 Motivation  The useful on-page features discussed previously may, on a particular page, be missing or unrecognizable

28.
 Underlying assumptions  When exploring the features of neighbors, some assumptions are implicitly made in existing work.  The presence of many “Sport” pages in the neighborhood of a page P increases the probability of P being in “Sport”.  Chakrabarti et al. (2002) and Menczer (2005) showed that linked pages were more likely to have terms in common. Neighbor selection  Existing research mainly focuses on pages within two steps of the page to be classified, i.e. at a distance no greater than two.  There are six types of neighboring pages: parent, child, sibling, spouse, grandparent and grandchild.

29.
 Neighbor selection (cont.)  Furnkranz (1999) ▪ The text on the parent pages surrounding the link is used to train a classifier, instead of the text on the target page. ▪ A target page will be assigned multiple labels; these labels are then combined by some voting scheme to form the final prediction of the target page's class.  Sun et al. (2002) ▪ In addition to the text on the target page, using the page title and anchor text from parent pages can improve classification compared with a pure text classifier.

30.
 Neighbor selection (cont.)  Summary ▪ Parent, child, sibling and spouse pages are all useful in classification; siblings are found to be the best source. ▪ Using information from neighboring pages may introduce extra noise and should be used carefully.

31.
 Features  Labels: assigned by editors or keyworders  Partial content: anchor text, the text surrounding the anchor text, titles, headers  Full content ▪ Among the three types of features, using the full content of neighboring pages is the most expensive, but it generates better accuracy.

32.
 Utilizing artificial links (implicit links)  Hyperlinks are not the only choice. What is an implicit link?  A connection between pages that appear in the results of the same query and are both clicked by users. Implicit links can help webpage classification as well as hyperlinks do.

33.
 However, since the results of different approaches are based on different implementations and different datasets, it is difficult to compare their performance. Sibling pages are even more useful than parents and children.  The reason may lie in the process of hyperlink creation.  A page often acts as a bridge to connect its outgoing links, which are likely to have a common topic.

36.
 Feature weighting o Another important role in webpage classification o A way of boosting classification by emphasizing the features with the better discriminative power o A special case of weighting: “feature selection”

37.
 A special case of “feature weighting”: a zero weight is assigned to the eliminated features The roles: o Reduce the dimensionality of the feature space o Reduce the computational complexity o Classification can be more accurate in the reduced space
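The weighting-vs-selection relationship can be shown in a few lines; the term weights below are hypothetical, chosen only to illustrate that a zero weight eliminates a feature:

```python
def apply_weights(tf, weights):
    """Scale term frequencies by per-term weights; a zero (or missing)
    weight eliminates the term, which is exactly feature selection."""
    return {t: v * weights[t] for t, v in tf.items() if weights.get(t, 0) != 0}

page = {"goal": 2, "the": 5, "match": 1}
weights = {"goal": 1.5, "match": 1.0, "the": 0.0}  # stopword "the" selected out
reduced = apply_weights(page, weights)
# reduced == {"goal": 3.0, "match": 1.0}
```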

39.
 Using the first fragment of each document  Assumption: a summary is at the beginning of the document  Fast and accurate classification for news articles  Not satisfying for other types of documents  The first fragment has also been applied to hierarchical classification of web pages  Useful for web documents
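A minimal version of the first-fragment idea; the fragment length and sample text are arbitrary assumptions:

```python
def first_fragment(text, n_tokens=50):
    """Keep only the first n tokens, assuming a summary leads the document."""
    return " ".join(text.split()[:n_tokens])

article = "Stocks fell sharply today as markets reacted. " + "filler " * 200
frag = first_fragment(article, 8)
# frag == "Stocks fell sharply today as markets reacted. filler"
```

Any downstream classifier then runs on `frag` instead of the full document, trading recall of late-occurring terms for speed.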

40.
 Using expected mutual information and mutual information  Two well-known metrics, used with a variation of the k-Nearest Neighbor algorithm  Terms are weighted according to the HTML tags they appear in  Terms within different tags carry different importance Using information gain  Another well-known metric  It is still not apparent which metric is superior for web classification
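Information gain for a term can be computed directly from a labeled corpus. Below is a small self-contained sketch; the four-document toy corpus is invented for illustration:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy H(C) of a label list, in bits."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """IG(t) = H(C) - [P(t)H(C|t) + P(~t)H(C|~t)] over (doc, label) pairs."""
    with_t = [y for d, y in zip(docs, labels) if term in d]
    without_t = [y for d, y in zip(docs, labels) if term not in d]
    n = len(labels)
    conditional = (len(with_t) / n) * entropy(with_t) \
                + (len(without_t) / n) * entropy(without_t)
    return entropy(labels) - conditional

docs = [{"goal", "match"}, {"goal", "team"}, {"stock", "market"}, {"market", "profit"}]
labels = ["sport", "sport", "business", "business"]
ig_goal = information_gain(docs, labels, "goal")    # 1.0: perfectly discriminative
ig_match = information_gain(docs, labels, "match")  # lower: appears in one doc only
```

Ranking terms by such a metric and keeping the top scorers is one common way to do the feature selection described earlier.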

41.
 Improving the performance of SVM classifiers  By aggressive feature selection  Developed a measure with the ability to predict the selection effectiveness without training and testing classifiers A popular approach: Latent Semantic Indexing (LSI)  In text documents: ▪ Docs are reinterpreted in a smaller, transformed, but less intuitive space ▪ Cons: high computational complexity makes it inefficient to scale  In web classification ▪ Experiments are based on small datasets (to avoid the above cons) ▪ Some work has been proposed to make it applicable to larger datasets, which still needs further study

45.
• Flow of the algorithm 1. Nodes are assigned class probabilities by their text classifier 2. Nodes are considered in turn: each node's probabilities are reevaluated, taking into account the latest estimates of its neighbors' probabilities 3. The same process is applied to each node
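The flow above can be sketched as a simple relaxation loop. This is a generic illustration with an assumed blending factor `alpha` and a fixed round count, not the exact algorithm from the paper:

```python
def relax(neighbors, text_probs, rounds=10, alpha=0.5):
    """Start from text-classifier probabilities, then repeatedly blend each
    node's estimate with the average of its neighbors' latest estimates."""
    probs = {n: dict(p) for n, p in text_probs.items()}
    classes = list(next(iter(text_probs.values())))
    for _ in range(rounds):
        updated = {}
        for node, nbrs in neighbors.items():
            updated[node] = {}
            for c in classes:
                nbr_avg = (sum(probs[m][c] for m in nbrs) / len(nbrs)
                           if nbrs else probs[node][c])
                updated[node][c] = (1 - alpha) * text_probs[node][c] + alpha * nbr_avg
        probs = updated
    return probs

graph = {"A": ["B"], "B": ["A"]}
start = {"A": {"sport": 0.9, "business": 0.1},
         "B": {"sport": 0.5, "business": 0.5}}  # B is ambiguous on text alone
final = relax(graph, start)
# final["B"]["sport"] > 0.5: the confident neighbor pulls B toward "sport"
```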

46.
 Using a combined logistic classifier  Based on content and link information ▪ Shows improvement over a textual classifier ▪ Outperforms a single flat classifier based on both content and link features Selecting the proper neighbors ONLY  Not all neighbors are qualified  The chosen neighbors must be: ▪ Similar enough in content

47.
 Two popular link-based algorithms:  Loopy belief propagation  Iterative classification Both show better performance on a web collection than textual classifiers During this study, a toolkit was implemented  Toolkit features ▪ Classifies networked data ▪ Utilizes a relational classifier and a collective inference procedure ▪ Demonstrated great performance on several datasets, including web collections

49.
 The traditional algorithms, adjusted to the context of webpage classification  k-Nearest Neighbors (kNN) ▪ Quantify the distance between the test document and each training document using a dissimilarity measure ▪ Cosine similarity or the inner product is used by most existing kNN classifiers  Support Vector Machine (SVM)
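A minimal kNN classifier over sparse term-frequency vectors using cosine similarity; the toy training set is invented for illustration:

```python
from math import sqrt
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency dicts."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(doc, training, k=3):
    """Majority vote among the k training documents most similar to doc."""
    ranked = sorted(training, key=lambda ex: cosine(doc, ex[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

training = [({"goal": 2, "team": 1}, "sport"),
            ({"match": 1, "goal": 1}, "sport"),
            ({"stock": 2, "profit": 1}, "business"),
            ({"market": 1, "profit": 2}, "business")]
pred = knn_classify({"goal": 1, "match": 1}, training)
# pred == "sport"
```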

50.
 A variety of modifications:  Using term co-occurrence in documents  Using probability computation  Using “co-training”

51.
 Using term co-occurrence in documents  An improved similarity measure  The more co-occurring terms two documents have in common, the stronger the relationship between them  Better performance than normal kNN (with cosine similarity and inner product measures) Using probability computation  Condition: ▪ The probability of a document d being in class c is determined by the distances between d and its neighbors and by the neighbors' probabilities of being in c ▪ Simple equation: ▪ P(d in c) = Σ over neighbors n of sim(d, n) · P(n in c)
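That equation can be written out directly. The neighbor list and the normalization over classes below are illustrative assumptions:

```python
def class_probs(neighbors):
    """neighbors: list of (similarity, {class: prob}) pairs for document d.
    Returns normalized P(d in c) = sum over n of sim(d, n) * P(n in c)."""
    scores = {}
    for sim, probs in neighbors:
        for c, p in probs.items():
            scores[c] = scores.get(c, 0.0) + sim * p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()} if total else scores

# One very similar "sport" neighbor outweighs a weakly similar "business" one.
nbrs = [(0.9, {"sport": 1.0}), (0.3, {"business": 1.0})]
p = class_probs(nbrs)
# p == {"sport": 0.75, "business": 0.25}
```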

52.
 Using “co-training”  Makes use of both labeled and unlabeled data  Aims to achieve better accuracy  Scenario: binary classification ▪ Classifying the unlabeled instances ▪ Two classifiers are trained on different sets of features ▪ The prediction of each one is used to train the other ▪ Compared with classifying using only the labeled instances, co-training can cut the error rate in half  When generalized to multi-class problems ▪ When the number of categories is large, co-training is not satisfying ▪ On the other hand, combining error-correcting output coding (with more than enough classifiers in use) with co-training can boost performance
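The co-training loop can be sketched generically. Everything concrete here — the overlap-based toy learner, the round count, the two-set corpus — is an assumption for illustration, not a detail of the original algorithm:

```python
def co_train(labeled, unlabeled, train, rounds=3):
    """labeled: [((view1, view2), label)]; unlabeled: [(view1, view2)].
    train(view_index, examples) -> classifier(features) -> (label, confidence).
    Each round, each view's classifier labels its most confident unlabeled
    example and adds it to the shared pool used to retrain both views."""
    pool, remaining = list(labeled), list(unlabeled)
    for _ in range(rounds):
        for view in (0, 1):
            if not remaining:
                return pool
            clf = train(view, [(x[view], y) for x, y in pool])
            best = max(remaining, key=lambda x: clf(x[view])[1])
            pool.append((best, clf(best[view])[0]))
            remaining.remove(best)
    return pool

def train(view, examples):
    """Toy learner: predict the label of the most word-overlapping example."""
    def clf(feats):
        return max(((y, len(feats & f)) for f, y in examples), key=lambda t: t[1])
    return clf

labeled = [(({"goal"}, {"stadium"}), "sport"),
           (({"stock"}, {"bank"}), "business")]
unlabeled = [({"goal", "match"}, {"arena"}),
             ({"stock", "profit"}, {"bank", "loan"})]
pool = co_train(labeled, unlabeled, train)
# both unlabeled pages end up in the pool with labels "sport" and "business"
```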

53.
 In classification, both positive and negative examples are usually required SVM-based aim:  To eliminate the need for manual collection of negative examples while still retaining similar classification accuracy

56.
 Not much research, since most web classification work focuses on same-level approaches Approaches:  Based on “divide and conquer”  Error minimization  Topical hierarchy  Hierarchical SVMs  Using the degree of misclassification  Hierarchical text categorization

57.
 The use of hierarchical classification based on “divide and conquer”  Classification problems are split into sub-problems hierarchically ▪ More efficient and accurate than the non-hierarchical way Error minimization  When the lower-level category is uncertain, ▪ Minimize the error by shifting the assignment to the higher level Topical hierarchy  Classify a web page into a topical hierarchy  Update the category information as the hierarchy expands
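Divide-and-conquer hierarchical classification can be sketched as a walk down a category tree, with one local classifier decision per node. The tree and the keyword-matching local classifier are invented for illustration:

```python
def hierarchical_classify(doc, tree, choose, root="root"):
    """At each internal node, a local classifier chooses one child branch;
    descend until reaching a leaf. Returns the path of chosen categories."""
    node, path = root, []
    while tree.get(node):          # stop at leaves (no children)
        node = choose(node, doc)
        path.append(node)
    return path

tree = {"root": ["Sport", "Business"],
        "Sport": ["Football", "Tennis"],
        "Business": ["Finance"],
        "Football": [], "Tennis": [], "Finance": []}

def choose(node, doc):
    """Toy local classifier: pick the child whose name appears in the page."""
    for child in tree[node]:
        if child.lower() in doc:
            return child
    return tree[node][0]

path = hierarchical_classify({"football", "sport", "goal"}, tree, choose)
# path == ["Sport", "Football"]
```

Each local decision only discriminates among a node's few children, which is why this can be faster and more accurate than one flat classifier over all leaf categories.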

58.
 Hierarchical SVMs  Observations: ▪ Hierarchical SVMs are more efficient than flat SVMs ▪ Neither is satisfyingly effective for large taxonomies ▪ Hierarchical settings do more harm than good to kNN and naive Bayes classifiers Hierarchical classification by the degree of misclassification  As opposed to measuring “correctness”  Distances are measured between the classifier-assigned classes and the true class Hierarchical text categorization  A detailed review was provided in 2005

60.
 Different sources are utilized Combining link and content information is quite popular The common combination approach:  Treat information from different sources as different (usually disjoint) feature sets on which multiple classifiers are trained  Then the FINAL decision is made by combining the classifiers' outputs This mostly has the potential to perform better than any single method
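The combination step can be sketched as a weighted vote over per-source classifiers. The per-class scores and equal weights below are made up for illustration:

```python
def combine(classifiers, feature_sets, weights):
    """Each classifier scores classes from its own feature set;
    the final decision is the class with the highest weighted total."""
    totals = {}
    for clf, feats, w in zip(classifiers, feature_sets, weights):
        for c, score in clf(feats).items():
            totals[c] = totals.get(c, 0.0) + w * score
    return max(totals, key=totals.get)

# One classifier trained on page content, one on link (neighbor) features.
content_clf = lambda feats: {"sport": 0.6, "business": 0.4}
link_clf = lambda feats: {"sport": 0.1, "business": 0.9}

decision = combine([content_clf, link_clf], [None, None], weights=[1.0, 1.0])
# decision == "business"  (totals: sport 0.7 vs business 1.3)
```

The confident link-based classifier overrules the weakly leaning content classifier, which is the behavior these combination schemes aim for.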

64.
 A web site is a collection of web pages One branch of research focuses only on web site contents. Another branch focuses on utilizing the structural properties of web sites. There is also research that utilizes both structural and content information. Classification of web pages is helpful for classifying a web site.

65.
 Pierre (2001)  Proposed an approach to the classification of web sites into industry categories using HTML tags  Accuracy around 90% Amitay et al. (2003) used the structural information of a web site to determine its functionality (such as search engine, web directory, corporate site) Ester et al. (2002)  Investigated three different approaches to determining the topical category of a web site, based on different web site representations  By a single virtual page  By a vector of topic frequencies  By a tree of its pages with topics

67.
 The word “blog” was originally a short form of “web log” As blogging has gained popularity in recent years, an increasing amount of research about blogs has also been conducted. It breaks into three types  Blog identification (determining whether a web document is a blog)  Mood or sentiment classification of blogs  Genre classification

71.
 Webpage classification is a type of supervised learning problem that aims to categorize webpages into a set of predefined categories based on labeled training data. The authors expect that future web classification efforts will certainly combine content and link information in some form.

72.
 Future work would be well advised to  Emphasize text and labels from siblings over other types of neighbors.  Incorporate anchor text from parents.  Utilize other sources of (implicit or explicit) human knowledge, such as query logs and click-through behavior, in addition to existing labels, to guide classifier creation.