it’s just common sense!

Posted on Aug 26, 2008 in artificial intelligence

Common-sense in communication

Common sense is one of the most important but fuzziest concepts to consider in order to understand the challenges of Artificial Intelligence. One approach is to consider that common sense should be acquired through Machine Learning algorithms, and should therefore emerge naturally as the complexity of the learning scope increases. Another is to consider that it should result from the adequate use of a very sophisticated knowledge base. Yet another is to consider that it should be encoded in a very complex logic following a case-based reasoning approach (this last approach now seems quite obsolete, however, given the obviously excessive complexity such a system would have).

Common sense is also what is expected from a system for it to be qualified as “intelligent”. Indeed, given the sentence “Ralph is a duck”, people will consider that a system qualified as “intelligent” should know that the sentences “Ralph can fly” and “Ralph is a bird” are true, while the sentence “Ralph is a human” is false. They also consider that a sentence like “Lily is a nice chick” should not imply that Lily is a female chicken, though they might be more tolerant in this case.
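As a side note for the more technically minded reader, here is a minimal sketch of the kind of taxonomic inference people implicitly expect in the “Ralph” example. Everything in it (the categories, the properties, the helpers) is hand-written for illustration and is obviously not a real common sense knowledge base:

```python
# A toy taxonomy and property table standing in for a common sense
# knowledge base (hand-written here; a real one would be vastly larger).
IS_A = {"duck": "bird", "bird": "animal", "human": "animal"}
PROPERTIES = {"bird": {"fly"}}

def ancestors(category):
    """Return the category and every broader category above it."""
    chain = []
    while category is not None:
        chain.append(category)
        category = IS_A.get(category)
    return chain

def entails(entity_category, statement):
    """Check whether a statement follows from 'X is a <entity_category>'."""
    kind, value = statement
    categories = ancestors(entity_category)
    if kind == "is_a":
        return value in categories
    if kind == "can":
        return any(value in PROPERTIES.get(c, set()) for c in categories)
    return False

# "Ralph is a duck"
print(entails("duck", ("can", "fly")))     # True  -> "Ralph can fly"
print(entails("duck", ("is_a", "bird")))   # True  -> "Ralph is a bird"
print(entails("duck", ("is_a", "human")))  # False -> "Ralph is a human"
```

Note that even this toy reads “not entailed” as “false”, which is itself a commonsense shortcut: nothing in the data says a duck cannot be a human, we simply assume the little taxonomy is complete enough.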

In reality, nobody knows what common sense really is, but everyone believes otherwise: it’s common sense to understand what common sense is. It is therefore somehow a performative ability: the ones, only the ones, and all the ones who have common sense understand what common sense is. For this reason, it would be great to have computers with common sense, so they could explain to us a little more clearly what it is; they would probably have better skills for defining it than human beings.

Now it’s interesting to analyse how people use their common sense to judge the “intelligence” of a system while they also believe the system has no common sense. While a more structured way of thinking would imply that another measure is needed to resolve this contradiction, people usually have no problem glossing over this technicality. Let’s take the example of Machine Learning, and more specifically the domain of classification. People will judge the “intelligence” of the system based on the perceived accuracy of the classification and on how difficult the task would be for themselves. However, in this judgment, they assume the system does have common sense, and also that humans are individually the only frame of reference for judging the level of “intelligence”. Of course, this is in complete contradiction with the fact that common sense is itself considered the proof of intelligence: in other words, people use common sense to evaluate the difficulty of a task that is supposed to show how close a system is to having common sense.

Now let’s consider some interesting aspects of the way people judge the “intelligence” of a classification task:

– The perception of accuracy is not systematically shared between people (some might disagree about whether a document belongs to one category or another). However, each person tends to see every perceived mistake as proof of a lack of “intelligence”, without considering that their own perception is debatable, which they would naturally do if the classification came from a human decision and not from a computer.

– The perceived complexity of the classification task is based on the effort and level of expertise it would require from a human. Therefore, if the classification is very fast, the system is perceived as more intelligent than a slower one; if it classifies legal documents, it is perceived as more intelligent than one classifying pornographic web sites, even with the same accuracy.

An interesting point is that, at the beginning, Google’s search engine was considered more intelligent than the others because it was faster. The contradiction, of course, is that nobody believes the same computer becomes smarter by upgrading its CPU and therefore its speed. The main reason is that what makes the computer faster is very clear to the user (a new piece of hardware), while the functioning of Google seems much more magical (it is able to look at billions of pages in a fraction of a second; how does it do that? It must be intelligent!).

What is even more interesting is that, although considering the complexity of the task makes full sense when analysing the intelligence of a system, people define this complexity based on how difficult the task would be for a human being: everybody can classify pornographic web sites, but not many people can classify legal documents. This, of course, makes no sense, as the evaluation of “intelligence” would then rely on the effort required from a human being who is endowed with common sense to perform the task, while machines have no common sense to start with.

Strangely, when people speak about the architecture of a system, they agree that what makes it intelligent is the complexity and efficiency of its learning model. However, when evaluating a specific application, they rely only on a human reference, and will therefore rank two systems as having highly different “intelligence” even though the classification algorithms are the same and only the database changes.

To conclude on this point: people deeply believe that computers have no common sense; they believe that only a system which can demonstrate common sense should be qualified as “intelligent”; yet they judge the intelligence of such a system based on how difficult the task would be for a human with preexisting common sense abilities.

Now, I would like to link this post to my last post about the context-free concept. Indeed, context-free and common sense are highly connected, for the following reason. Let’s consider a speech recognition system and the sentence “I am learning pi by heart”. You can see that, when this sentence is read aloud, a speech recognition system could understand “I am learning pie by heart”. A system able to resolve the phonetic ambiguity of this sentence needs to rely on algorithms which cannot be context-free.
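To make this concrete, here is a toy sketch of why context matters for the “pi”/“pie” ambiguity. It is not how any real speech recognition system works, and the association scores below are invented for illustration; the point is only that choosing between the two candidates requires looking at the surrounding words:

```python
# Toy disambiguation of the homophones "pi" / "pie" using context words.
# The association scores are invented; a real system would estimate them
# from a language model or corpus statistics.
ASSOCIATION = {
    ("pi", "learning"): 0.6, ("pi", "heart"): 0.4, ("pi", "digits"): 0.9,
    ("pie", "learning"): 0.1, ("pie", "heart"): 0.1, ("pie", "eating"): 0.9,
}

def score(candidate, context_words):
    """Sum how strongly the candidate word is associated with the context."""
    return sum(ASSOCIATION.get((candidate, word), 0.0) for word in context_words)

context = ["i", "am", "learning", "by", "heart"]
best = max(["pi", "pie"], key=lambda c: score(c, context))
print(best)  # -> "pi": the context favours the mathematical reading
```

The tiny association table plays the role of contextual knowledge here: remove it and the two candidates become indistinguishable.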

One reason why common sense is an important concept for AI is that a context-free system is not very useful for addressing complex problems that require preexisting knowledge, and the most important and general prior knowledge is common sense itself. In other words, a context-free system can be the tool that an AI system uses to make very smart analyses and decisions, while common sense is the generic knowledge base the system can use to extend the coverage of its analysis capacities to any context. Of course, if the set of available knowledge were more or less infinite, any problem could somehow become a context-free problem.

It is clear that these statements don’t make full sense from an algorithmic perspective, and that we are not able to extend a context-free algorithm by simply connecting it to a common sense knowledge base. However, I am convinced that it does make sense to speak about it this way, because the assumption of a useful common sense knowledge base implies the existence of a complete structure controlling all the links between the common sense concepts. Assuming the existence of such a structure automatically implies (at least to a large extent) a standard way to access it, and therefore to use it within context-free algorithms.

Let me use an illustration to make the last paragraph easier to understand. When a human thinks about something particular in order to draw a valuable conclusion, he naturally simplifies the world as he knows it into a well-defined context where every concept has one specific meaning and is connected to the other concepts in a well-defined way. Therefore, this process, even if it requires a contextual preparation relying on common sense, results in a context-free processing. In that logic, my claim is that, since a common sense knowledge base could extract the relevant set of concepts for a specific usage (and therefore reduce the number of meanings of every concept to the one matching the current context), the output would be a simplified data set in which context-free algorithms could perform appropriately.
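One crude but concrete way to picture this “one meaning per concept” reduction is a Lesk-style overlap between each candidate sense and the surrounding words. To be clear, this is a standard word-sense disambiguation trick I am borrowing for illustration, not a common sense knowledge base, and the sense descriptions below are hand-written:

```python
# Toy "one meaning per concept" reduction: for an ambiguous word, keep the
# sense whose description overlaps most with the current context.
# The sense glosses are hand-written stand-ins for a knowledge base.
SENSES = {
    "duck": {
        "waterbird": {"bird", "water", "fly", "green", "neck", "pond"},
        "dodge":     {"avoid", "move", "down", "quickly", "blow"},
    },
}

def pick_sense(word, context_words):
    """Return the sense of `word` sharing the most words with the context."""
    context = set(context_words)
    return max(SENSES[word], key=lambda sense: len(SENSES[word][sense] & context))

context = "ralph has a green neck and lives on a pond".split()
print(pick_sense("duck", context))  # -> "waterbird"
```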

Let me take an example: “Ralph is a duck”. Now let’s consider the longevity of birds. We can see that a common sense knowledge base could provide us with all the useful concepts about the longevity of birds as well as about ducks. The context around the sentence could give us information about “Ralph”: “Ralph is five years old”; “Ralph has a green neck”; … Based on all this information, we simplify the world to the current context and then use logic to draw a conclusion: “Ralph will probably live another 5 years, because ducks with a green neck are birds with an average longevity of 10 years, and Ralph is already 5 years old.” We can reach this conclusion without requiring all the information because of our common sense, but we do select the specific information needed to complete the assumptions and finally process it in a context-independent manner.
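For what it is worth, once the world has been simplified this far, the final step really is context-free and almost trivially mechanical. Here is a toy version of the “Ralph” conclusion, using only the figures from the paragraph above:

```python
# Toy version of the "Ralph" reasoning: a general fact from a common sense
# knowledge base combined with contextual facts about the individual.
AVERAGE_LONGEVITY_YEARS = {"duck": 10}   # general knowledge (figure from the text)

context_facts = {"name": "Ralph", "category": "duck", "age_years": 5}

def expected_remaining_years(facts):
    """Estimate remaining lifespan as average longevity minus current age."""
    longevity = AVERAGE_LONGEVITY_YEARS[facts["category"]]
    return max(longevity - facts["age_years"], 0)

print(f"{context_facts['name']} will probably live another "
      f"{expected_remaining_years(context_facts)} years.")
# -> "Ralph will probably live another 5 years."
```

All the “intelligence” is in filling the two little tables; the arithmetic at the end needs no context at all.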

To finish this post, I would like to illustrate how common sense is a good representation of the eternal struggle between the data used by algorithms and the data used to store the algorithms themselves. Indeed, algorithms are data, and data can contain the results of an algorithm’s processing. Common sense is nothing else than the incarnation of the contradiction between needing knowledge versus needing learning to make a system intelligent: knowledge doesn’t come from nowhere and must contain internal linkage, which is very similar to processing; and learning must always perform analysis on data and is of no value without it. Indeed, when we speak about common sense, we don’t only refer to the existence of knowledge, but also to a way of using it: “it’s just common sense to do that” means that common sense is both the data required to know what to do and the processing needed to use this knowledge to do the right thing.

For all the above reasons, common sense is a key problem for AI and is probably not addressed enough by research institutes. Of course, it is more intellectually rewarding to work on very complex learning algorithms complying with very complex statistical measures and providing very specific properties, but we have to admit that creating a very simple system with just a bit of common sense would be a much greater breakthrough.

My personal opinion is that common sense should not be considered a structured database, for the simple reason that the number of links between concepts is simply too large for this to be feasible. I also believe you don’t need a lot of knowledge to have common sense (most people know only a fraction of the words in a dictionary but still have common sense). However, it is clear that reference data are necessary for common sense to emerge. But the links within these data, both in their existence and in their definition, must be learned dynamically. Therefore, as the structure must result from an AI process and not be provided as a premise, I am convinced that the database required to create common sense is already available today (since we don’t need it to be structured) and is nothing else than the world wide web.
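To give a flavour of what “learning the links dynamically” from unstructured web text could look like, here is a deliberately naive sketch built around a single hand-written extraction pattern. A real system would need many more patterns, a huge corpus and statistical filtering, and I am not claiming this is how any existing project does it:

```python
import re

# Deliberately naive extraction of "is-a" links from unstructured sentences,
# standing in for learning commonsense links dynamically from web text.
IS_A_PATTERN = re.compile(r"(\w+) (?:is|are) (?:an? )?(\w+)")

def singular(word):
    """Crude plural stripping, good enough for this toy."""
    return word[:-1] if word.endswith("s") else word

def extract_is_a_links(sentences):
    """Collect (specific, general) pairs found by the pattern."""
    links = set()
    for sentence in sentences:
        for specific, general in IS_A_PATTERN.findall(sentence.lower()):
            links.add((singular(specific), singular(general)))
    return links

corpus = [
    "A duck is a bird that can often be seen on ponds.",
    "Ducks are animals kept on many farms.",
]
print(sorted(extract_is_a_links(corpus)))
# -> [('duck', 'animal'), ('duck', 'bird')]
```

Even this caricature shows the key point: the links are not given in advance, they fall out of processing the raw text.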

In other words, “The Web” + “Context-Free Algorithms” are the ingredients which can enable an “intelligent” system, in the sense that it would be recognized as having common sense. What we need now is to build the processing and the required infrastructure to make such a system extract the required information from the web, and this is exactly what Tom Mitchell is currently doing at Carnegie Mellon (the CMU World Wide Knowledge Base (Web->KB) project).

We need common-sense!

