In the past twenty years, much time, effort, and money have been expended on designing an unambiguous representation of natural languages to make them accessible to computer processing. These efforts have centered on creating schemata designed to parallel logical relations with the relations expressed by the syntax and semantics of natural languages, which are clearly cumbersome and ambiguous as vehicles for the transmission of logical data. Understandably, there is a widespread belief that natural languages are unsuitable for the transmission of many ideas that artificial languages can render with great precision and mathematical rigor.
In the early 1900s, analytic philosophers such as Russell, and initially Wittgenstein as well, tried to develop artificial languages which, unlike ordinary language, would provide them with a more logical grammar and words with unambiguous meanings. Language remained a major preoccupation for later analytic philosophers such as Austin, although he felt that ordinary language itself could serve the philosopher's purpose.
On the subject of generative grammar, linguist Noam Chomsky observed that grammar books do not show how to generate even simple sentences without depending on the implicit knowledge of the speaker. He said this is true even of grammars of "great scope" such as Jespersen's 'A Modern English Grammar on Historical Principles'. There is some "unconscious knowledge" that makes it possible for a speaker to "use his language", and it is this unconscious knowledge that generative grammar must render explicit. Chomsky noted that there were classical precedents for generative grammar, Panini's grammar being the "most famous and important case".
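Chomsky's idea of a grammar that explicitly generates sentences, rather than relying on a speaker's implicit knowledge, can be sketched as a small rewriting system. The rules and the tiny lexicon below are illustrative inventions, not a fragment of any real grammar:

```python
import random

# A toy generative grammar: explicit rewrite rules that produce
# sentences mechanically, with no appeal to a speaker's intuition.
# Non-terminals (S, NP, VP, ...) expand via rules; anything not in
# the rule table is treated as a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["grammarian"], ["sentence"]],
    "V":   [["writes"], ["parses"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying a randomly chosen rule."""
    if symbol not in GRAMMAR:          # terminal: emit the word itself
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(generate(sym))
    return words

print(" ".join(generate()))  # e.g. "the grammarian parses a sentence"
```

Every string this program prints is licensed by the rules alone, which is the sense in which a generative grammar makes the "unconscious knowledge" explicit.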
There is at least one language, Sanskrit, which for the duration of almost 1,000 years was a living spoken language with a considerable literature of its own. Besides works of literary value, there was a long philosophical and grammatical tradition that has continued to exist with undiminished vigor until the present century. Among the accomplishments of the grammarians can be reckoned a method for paraphrasing Sanskrit in a manner that is identical not only in essence but in form with current work in Artificial Intelligence.
Sanskrit is one language in a tree-like family of languages with Proto-Indo-European at its root. Like many of its sister languages, it features an extensive system of verb conjugation that encodes person, tense, mood, and so on. It also has a rich system of noun declension that indicates the nouns' relationships to one another within a sentence. Together, these two features mean that Sanskrit relies less on word order than, for example, English or Chinese. Sanskrit is not unique in this respect, as these characteristics are shared by Russian, Latin, and Greek.
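The point about declension can be made concrete with a toy sketch: if a word's ending marks its grammatical role, then the role can be recovered regardless of where the word sits in the sentence. The endings below are a drastic simplification for illustration only, not an account of real Sanskrit morphology:

```python
# Toy illustration: in a heavily case-marked language, grammatical
# role comes from a word's ending rather than its position, so
# permuting the words leaves the parse unchanged.

# Hypothetical, simplified endings standing in for a nominative
# (agent), an accusative (object), and a third-person verb form.
CASE_ENDINGS = {
    "as":  "agent",
    "am":  "object",
    "ati": "verb",
}

def parse(sentence):
    """Map each word to its grammatical role using its ending, ignoring order."""
    roles = {}
    for word in sentence.split():
        for ending, role in CASE_ENDINGS.items():
            if word.endswith(ending):
                roles[role] = word
                break
    return roles

# The same three words in two different orders parse identically.
s1 = parse("devas gacchati gramam")   # roughly, "the god goes to the village"
s2 = parse("gramam devas gacchati")
assert s1 == s2
```

A parser for English, by contrast, could not ignore order this way, since "the god" and "the village" carry no case marking distinguishing agent from object.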
Sanskrit has a long written and oral tradition, and Panini's codification of the language has given scholars unprecedented knowledge of its inner workings. His work illustrates meticulously how an unlimited number of things can be expressed in Sanskrit. However, the claim that one natural language, however well documented its grammar may be, is more fit than another for use in AI sounds highly suspect. A meticulously defined grammar does not absolve Sanskrit of the ambiguities that can arise in actual expression: anyone who reads Sanskrit prose can attest to how challenging it can be to tease out the meaning of those beautiful and rich verses. Furthermore, a fundamental property of every natural language is its ability to express virtually every thought possible.
The idea of logic in languages is part of a larger debate in linguistics about the role language plays in shaping human thought. The Sapir-Whorf hypothesis proposed that language directly constrains what thoughts humans can conceive. Most linguists have since softened this stance, holding that language influences one's thoughts to some degree rather than strictly constraining them.
Proponents nevertheless conclude that Sanskrit is the best language for computing, on the grounds that natural language processing could be done faster in it: because the meaning of a Sanskrit sentence does not change when the order of its words is changed, the claim goes, a sentence can be processed from either side.