This question detector (in Python) works with any sentence but is designed specifically for Twitter or instant messages (IM). It mainly relies on the following:

One core assumption is made: any sentence containing a question mark is considered a question.

A Twitter-aware word tokenizer from NLTK is used to make sure all sentences classified as questions contain at least one of these key tokens: what, why, how, when, where, did, do, does, have, has, am, is, are, can, could, may, would, will, “?”, etc. It’s highly unlikely that a question contains none of these tokens.
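The idea can be sketched in a few lines of pure Python. Note this is a minimal sketch: the keyword list is the illustrative one from above, and a simple regex tokenizer stands in for NLTK’s TweetTokenizer.

```python
import re

# Illustrative keyword list from the post; "?" counts as a token too.
QUESTION_WORDS = {
    "what", "why", "how", "when", "where", "did", "do", "does",
    "have", "has", "am", "is", "are", "can", "could", "may",
    "would", "will", "?",
}

def tokenize(text):
    """Crude stand-in for NLTK's TweetTokenizer: words, with '?' kept separate."""
    return re.findall(r"[\w']+|\?", text.lower())

def is_question(sentence):
    """Classify as a question if any token is a question keyword (or '?')."""
    return any(tok in QUESTION_WORDS for tok in tokenize(sentence))
```

With this sketch, `is_question("What time is it")` and `is_question("Really?")` both come out true, while a plain statement like `is_question("I like pizza")` does not.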

Question vs Query

Note that a question is simply an interrogative sentence and does not necessarily imply a query. According to this paper from Google Research [1], below are the six different types of interrogative sentences:

Advertisement. This kind of tweet asks the reader a question and then delivers an advertisement. E.g., ‘Incorporating your business this year? Call us today for a free consultation with one of our attorneys. 855-529-8753. http://buz.tw/FjJCV’

Question with Answer. These tweets contain questions followed by their answers. E.g., ‘I even tried staying away from using my Internet for a couple of hours. The result? Insanity’

Question as Quotation. These tweets contain questions in quoted sentences, as references to what other people said. E.g., ‘I think Brian’s been drinking in there because I’m hearing him complain about girls, and then he goes “Wright, are you sure you’re not gay?”’

Rhetorical Question. This kind of tweet contains rhetorical questions, which look like questions but carry no expectation of an answer. In other words, these tweets encourage readers to think about the obvious answer. E.g., ‘You ruined my life and I’m supposed to like you’

Qweet (Query). These tweets ask for some information or help. E.g., ‘What’s your favorite Harry Potter scene?’

In the last type, the tweet author posts a question asked by someone else on the web, e.g., on CQA portals, forums, etc. For example: ‘Questions about panda update. When will the effect end? http://goo.gl/fb/iiRjn’

References

Labels found useful for Native C++ Development

Using labels in modern programming is often considered a bad practice, as it results in spaghetti (unmaintainable) code. However, I often find them very useful in native C/C++ programming, where no exceptions are used (error handling via return values is extensive) and manual memory management is needed. They increase the readability and maintainability of the code to a great extent.

Representing a state in a compact way

Many algorithms depend on representing the states of many objects in the domain world as compactly as possible.

This is some C++ code to do this:

int GetWorldState(vector<Obj> *Objs)
{
    int state = 0;
    for (size_t i = 0; i < Objs->size(); i++)
    {
        // Shift left to make room for the next bit
        state <<= 1;
        // Get the current object
        Obj *ch = &(*Objs)[i];
        // Set the bit if the object has the special property, otherwise leave it 0
        if (ch->isLeft) state |= 1;
    }
    // Return the packed state
    return state;
}
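The same bit-packing idea can be sketched in Python, together with the inverse step the C++ snippet leaves out (function names here are illustrative, not from the original post):

```python
def pack_state(flags):
    """Pack a list of booleans into a single integer, one bit per flag."""
    state = 0
    for f in flags:
        state = (state << 1) | int(f)  # shift, then set the new low bit
    return state

def unpack_state(state, n):
    """Recover the n boolean flags from the packed integer."""
    return [bool((state >> (n - 1 - i)) & 1) for i in range(n)]
```

For example, `pack_state([True, False, True])` yields `0b101` (i.e. 5), and `unpack_state(5, 3)` recovers the original flags.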

In a recent post, I announced that our paper “Intelligent Online Case Based Planning Agent Model for RTS Games” would be published in ISDA’10 and stated its abstract and keywords. Today you can download the paper; below is the link. Enjoy!

AI research aims to devise techniques to make the computer perceive, reason and act. On the other hand, Software Engineering (SWE) aims to support humans in developing large software faster and more effectively. This article gives the big picture of the relationship between SWE and AI and how they can contribute to each other.

Software Engineering

The main concern of SWE is the efficient and effective development of high-quality, often very large software systems. The goal is to support software engineers and managers in developing better software faster, with (intelligent) tools and methods.

How do they overlap?

Both deal with modeling objects from the real world, such as business processes, expert knowledge or process models.

How can AI contribute to SWE Research?

3) AI Testing: The diversity of test cases when testing software may cause a buggy release (since not all test cases can be applied). AI’s role here is to apply only a sufficient subset of the test cases (instead of all of them), just as humans do, to save time.

Cloud Computing

Cloud computing simply means that the programs you run and the data you store live somewhere on a server around the world. You won’t bother storing information on your personal computer, or even use it to run a complicated program that requires sophisticated hardware. Your personal computer will do little more than upload the information to be processed or download the information you need to access. All the programs you use will be web-based, via the internet. Cloud computing is considered the paradigm shift following the shift from mainframes to client–server in the early 1980s.

If you look around, you will notice that cloud computing is taking over. Many desktop applications are turning into web applications, and current web applications are getting more powerful. What really makes good use of cloud computing nowadays is mobile phones: since they have relatively little processing power, they benefit greatly from processing on the cloud instead. To understand more about the pros and cons of cloud computing, visit this link.

The Effect of Cloud Computing on Computer Hardware Industry

I think that cloud computing will polarize the computer hardware industry into two distinct poles: one pole is the giant servers that hold all the data and programs and carry out all the processing for the clouds; the other is the simple computer terminals with relatively minimal storage and processing power, which use the clouds as their main storage and computation resource. This means the hardware industry will not care about advancing personal computers’ hardware like it did before (as everything is done in the cloud).

AI as cloud-based services

Google has launched the cloud-based service Google Prediction API, which provides a simple way for developers to create software that learns how to handle incoming data. For example, the Google-hosted algorithms could be trained to sort e-mails into “complaints” and “praise” categories using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API and handled accordingly. (Technology Review Reference)
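The complaint/praise idea can be sketched with a tiny naive Bayes text classifier in pure Python. To be clear, this is not the actual Prediction API, just an illustration of the kind of learning it offered:

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs. Returns word counts per label."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label with the highest log-likelihood, with add-one smoothing."""
    words = text.lower().split()
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(math.log((c[w] + 1) / (total + len(vocab))) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best
```

Trained on a handful of labeled messages, `classify` will route an unseen “awful slow service” e-mail to the complaint bucket and “love it” to praise.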

On the other hand, AI Solver Studios said they will be rolling out cloud computing services to allow instant access alongside their desktop application, AI Solver Studio. AI Solver Studio is a unique pattern recognition application that finds optimal solutions to classification problems using several powerful and proven artificial intelligence techniques, including neural networks, genetic programming and genetic algorithms.

How can Cloud Computing improve AI?

Since cloud computing means that all the data, as well as the running programs, are stored somewhere in a cloud, a large amount of data becomes available for AI programs to analyze, via data mining or other AI techniques, to deduce useful information.

For example, consider WordPress.com, where you have your own space to store whatever posts and multimedia you need. If the data and behavior of users such as you weren’t all stored in WordPress.com’s cloud, there wouldn’t be enough data available for AI purposes.

Thus I consider cloud computing to enhance AI by providing a lot of data for AI techniques to work on.

Cloud Computing and Ubiquitous Computing

Cloud computing is essential for ubiquitous computing (see my previous post to learn more about it) to flourish. Most ubiquitous computers will have relatively limited hardware resources (due to their ubiquitous nature), which will make them benefit greatly from the resources of a cloud on the internet.

Conclusion

There’s no doubt that merging the two trends (ubiquitous and cloud computing) and supporting them with AI will result in tremendous technological advances. I think I will be talking about them more in the future!

My First Research Paper ! (To Be Published)

Greetings! I wanted to share with you my first AI-related paper, which will be published soon. I might (or might not) upload the whole paper in another post later to gather your reviews, but for now, I’m showing the abstract and keywords.

Abstract

Research in learning and planning in real-time strategy (RTS) games is of great interest to several industries, such as the military industry, robotics, and most importantly the game industry. Recently published work on online case-based planning in RTS games does not include the capability of online learning from experience, so knowledge certainty remains constant, which leads to inefficient decisions. In this paper, an intelligent agent model based on both online case-based planning (OLCBP) and reinforcement learning (RL) techniques is proposed. In addition, the proposed model has been evaluated using empirical simulation on Wargus (an open-source clone of the well-known real-time strategy game Warcraft 2). This evaluation shows that the proposed model increases the certainty of the case base by learning from experience, and hence improves the decision-making process of selecting more efficient, effective and successful plans.

I’m personally hungry for the big picture of everything around me; natural language processing (NLP) is highly important for computers to control humankind; many AI-related careers depend on it; and it is used extensively for commercial purposes. For all those reasons, I will give the big picture about it (yes, I mean NLP).

What’s Natural Language Processing?

The simple definition is obvious: making computers understand or generate human text in a certain language. The complete definition, however, is: a set of computational techniques for analyzing and representing naturally occurring texts (at one or more levels) for the purpose of achieving human-like language processing for a range of applications.

NLP can be applied to any language, of any mode or genre, in oral or written form. It works over multiple types, or levels, of language processing, ranging from understanding a single word up to understanding the big picture of a complete book.

A brain with language samples. (Image courtesy of MIT OCW.)

Related Sciences?

Linguistics: focuses on formal, structural models of language and the discovery of language universals – in fact the field of NLP was originally referred to as Computational Linguistics.

Computer Science: is concerned with developing internal representations of data and efficient processing of these structures.

Cognitive Psychology: looks at language usage as a window into human cognitive processes, and has the goal of modeling the use of language in a psychologically plausible way.

Language Processing vs Language Generation

NLP may focus on language processing or generation. The first of these refers to the analysis of language for the purpose of producing a meaningful representation, while the latter refers to the production of language from a representation. The task of language processing is equivalent to the role of reader/listener, while the task of language generation is that of the writer/speaker. While much of the theory and technology are shared by these two divisions, Natural Language Generation also requires a planning capability. That is, the generation system requires a plan or model of the goal of the interaction in order to decide what the system should generate at each point in an interaction.

What are its sub-problems?

NLP is performed by solving a number of sub-problems, where each sub-problem constitutes a level (mentioned earlier). Note that only a portion of these levels may be applied, not necessarily all of them. For example, some applications require only the first three levels. Also, the levels can be applied in a different order, independent of their granularity.

Level 1 – Phonology: This level applies only if the text originated as speech. It deals with the interpretation of speech sounds within and across words. Speech sound can give a big hint about the meaning of a word or a sentence.

Level 2 – Morphology: Deals with understanding distinct words according to their morphemes (the smallest units of meaning). For example, the word preregistration can be morphologically analyzed into three separate morphemes: the prefix “pre”, the root “registra”, and the suffix “tion”.
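The preregistration example can be sketched with a naive affix-stripper in pure Python. The affix lists here are illustrative toy data, not a real morphological lexicon:

```python
PREFIXES = ["pre", "un", "re"]   # toy prefix list, checked in order
SUFFIXES = ["tion", "ing", "ed"]  # toy suffix list

def split_morphemes(word):
    """Greedily strip at most one known prefix and one known suffix."""
    parts = []
    for p in PREFIXES:
        if word.startswith(p):
            parts.append(p)
            word = word[len(p):]
            break
    suffix = None
    for s in SUFFIXES:
        if word.endswith(s):
            suffix = s
            word = word[:-len(s)]
            break
    parts.append(word)  # what remains is treated as the root
    if suffix:
        parts.append(suffix)
    return parts
```

Here `split_morphemes("preregistration")` yields `["pre", "registra", "tion"]`, matching the analysis above; real morphological analyzers are of course far more sophisticated.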

Level 3 – Lexical: Deals with understanding everything about individual words: their part of speech, their meanings, and their relations to other words.

Level 4 – Syntactic: Deals with analyzing the words of a sentence so as to uncover its grammatical structure.

Level 5 – Semantic: Determines the possible meanings of a sentence by focusing on the interactions among word-level meanings in the sentence. Some people may think this is the level that determines meaning, but actually all the levels do.

Level 6 – Discourse: Focuses on properties of the text as a whole that convey meaning by making connections between component sentences.

Level 7 – Pragmatic: Explains how extra meaning is read into texts without actually being encoded in them. This requires much world knowledge, including an understanding of intentions, plans, and goals. Consider the following two sentences:

The city councilors refused the demonstrators a permit because they feared violence.

The city councilors refused the demonstrators a permit because they advocated revolution.

The meaning of “they” differs between the two sentences. To figure out the difference, world knowledge stored in knowledge bases and inferencing modules must be utilized.

What are the approaches for performing NLP?

Natural language processing approaches fall roughly into four categories: symbolic, statistical, connectionist, and hybrid. Symbolic and statistical approaches have coexisted since the early days of this field. Connectionist NLP work first appeared in the 1960s.

Symbolic Approach: Symbolic approaches perform deep analysis of linguistic phenomena and are based on explicit representation of facts about language through well-understood knowledge representation schemes and associated algorithms. The primary source of evidence in symbolic systems comes from human-developed rules.

Statistical Approach: Statistical approaches employ various mathematical techniques and often use large text input to develop approximate generalized models of linguistic phenomena based on actual examples of these phenomena provided by the text input without adding significant linguistic or world knowledge. In contrast to symbolic approaches, statistical approaches use observable data as the primary source of evidence.

Connectionist Approach: Similar to the statistical approaches, connectionist approaches also develop generalized models from examples of linguistic phenomena. What separates connectionism from other statistical methods is that connectionist models combine statistical learning with various theories of representation; the connectionist representations thus allow transformation, inference, and manipulation of logic formulae. In addition, in connectionist systems, linguistic models are harder to observe, because connectionist architectures are less constrained than statistical ones.

OK! Today I give a very big picture of that crucial term “Ubiquitous Computing” and its relation to AI (YES, I mean Artificial Intelligence). Actually, it’s a shame for any AI geek not to know it. Just remember that the word “ubiquitous” means “existing or being everywhere, or in all places, at the same time”.

What is Ubiquitous Computing?

I would describe Ubiquitous Computing (UbiComp) as “the incorporation of computers into the background of human life without any physical interaction with them”. It is considered the coming third era of computing, where the first era was the mainframe era and the second is the personal computer era (the one we are living in now). The term Ubiquitous Computing, also known as Calm Technology, was coined by Mark Weiser, the father of Ubiquitous Computing, in the late 80s (so it’s not a new thing).

Ubiquitous Computing involves tens, hundreds or even thousands of different-sized (often tiny) computers sensing the environment, deducing things, communicating and performing actions to help a human, without any explicit interface. Thousands of computers would be embedded in everyday objects such as paper, pens, books, doors, buildings, walls, food containers, clothes, furniture, equipment, etc. to support a human’s life.

Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box (the computer itself). Ubiquitous Computing, by contrast, means no human attention is paid to any computer interface while using it. Take motor technology, which is already ubiquitous: a glance through the shop manual of a typical automobile reveals twenty-two motors and twenty-five more solenoids. They start the engine, clean the windshield, lock and unlock the doors, and so on. By paying careful attention, it might be possible to notice whenever one activated a motor, but there would be no point to it. Computers in the Ubiquitous Computing era should be like that.

Suppose you want to lift a heavy object. You can call in your strong assistant to lift it for you, or you can be yourself made effortlessly, unconsciously, stronger and just lift it. There are times when both are good. Much of the past and current effort for better computers has been aimed at the former; ubiquitous computing aims at the latter.

An example of life with Ubiquitous Computing.

AI and UbiComp ?

According to this essay, AI will play a major role in UbiComp in 3 different ways:

Ubiquitous Computing needs a transparent interface to work, which means a natural way for communication with human-kind. This involves a lot of artificial intelligence such as gesture recognition, sound and speech recognition and computer vision.

Ubiquitous Computing needs computers to be aware of their context, such as location and time. Artificial intelligence plays an important role in context awareness, helping the computer perceive people’s location and generate the proper service accordingly. For example, when you are at the office, you may want to read some business reports, but when you go back home, you want to watch a movie and enjoy a coffee for a rest. Such scenarios impose requirements on artificial intelligence agents.

Ubiquitous Computing will also benefit from automated learning from past experience and from capturing people’s experience. Learning agents are introduced into this framework to perceive people’s behavior and make decisions based on people’s preferences.