I have completed an enhanced robots.txt file.
The initial XML sitemaps I generated returned just under 500 results. The sitemap now returns 45 results, one of which is an error for which I have filed a separate bug report. No pages are duplicated, and all pages with relevant content are crawled.
Of note: I have re-enabled crawling of the file gallery. It had been disabled because it returned a large number of files and put pressure on the web server. Its results can now be indexed without the massive duplication that existed before.
The file contains a section that prevents duplicated pages when SEF URLs are enabled; on a site without SEF URLs it should be commented out, because there it would prevent the entire website from being crawled.
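As a sketch of how such a section might look (the directives and paths below are illustrative assumptions, not the actual contents of the file):

```
User-agent: *

# --- SEF URL section: enable only when SEF URLs are on ---
# With SEF URLs enabled, every page is reachable both at its
# SEF address and at a tiki-index.php?page=... address; blocking
# the script form removes the duplicates. Without SEF URLs this
# same rule would match every page and block the entire site.
# Disallow: /tiki-index.php
# ----------------------------------------------------------
```

The key design point is that the same Disallow rule is either a de-duplication measure or a site-wide block, depending on whether SEF URLs are enabled, which is why it must be toggled by hand rather than applied unconditionally.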
Also important: I have only tested this with the feature set I am using. It will still improve crawling, and therefore search engine ranking, with other features enabled, but it may not go as far as could be achieved.
Using this robots.txt file not only decreases server load, but also leads to better search engine rankings.