No, for several reasons: wrong grammatical parsing, incorrect anaphora resolution, and idiomatic sentences. Moreover, the current version does not understand direct speech. We are constantly working to improve the framework.
The database is a compromise between speed, memory, and accuracy. Personalized solutions - tailored to specific problems - would be more efficient.
The text is tagged using the tag frequencies provided by the Open American National Corpus (OANC), an open-source project available under this license. Sentences are then parsed using parsing frequencies extracted from the OANC. A "distance" between words is obtained using the WordNet corpus (3.1), available freely under the WordNet license. The parsing is then improved by choosing the interpretations that make the most sense according to the FrameNet dataset, distributed under a Creative Commons license.
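As a rough illustration of frequency-based tagging, the sketch below assigns each word its most frequent part-of-speech tag from a lookup table. The table and its counts are invented for the example; in practice the frequencies would come from a corpus such as the OANC, and this is not NLUlite's actual implementation.

```python
from collections import Counter

# Toy tag-frequency table; the counts are made up for illustration.
# A real table would be built from corpus statistics (e.g. the OANC).
TAG_FREQ = {
    "the":  Counter({"DT": 1000}),
    "dog":  Counter({"NN": 120, "VB": 3}),
    "runs": Counter({"VBZ": 80, "NNS": 15}),
}

def tag(words, default="NN"):
    """Assign each word its most frequent tag (simple unigram tagging)."""
    return [(w, TAG_FREQ[w].most_common(1)[0][0] if w in TAG_FREQ else default)
            for w in words]

print(tag(["the", "dog", "runs"]))
# -> [('the', 'DT'), ('dog', 'NN'), ('runs', 'VBZ')]
```

A unigram tagger like this ignores context, which is why later stages (parsing frequencies, WordNet distances, FrameNet sense checks) are needed to resolve ambiguous cases.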
Please note that NLUlite is developed independently and is not endorsed by any of the previously mentioned projects.