This example process uses the SPARQL Data Importer operator provided by the RapidMiner LOD Extension to retrieve additional data about books from DBpedia (www.dbpedia.org), including the author, ISBN, country, abstract, number of pages, and language.
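Outside of RapidMiner, the same lookup can be sketched as a SPARQL query against the public DBpedia endpoint. The property names below (`dbo:author`, `dbo:isbn`, etc.) are illustrative assumptions, not necessarily the exact ones the operator generates:

```python
# Sketch of a SPARQL query for the book attributes named above.
# Property names are assumptions; nothing is fetched here.
from urllib.parse import urlencode

ENDPOINT = "https://dbpedia.org/sparql"

def book_query(title):
    """SPARQL asking for author, ISBN, country, abstract, number of
    pages and language of a book with the given English label."""
    return f"""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?author ?isbn ?country ?abstract ?pages ?language WHERE {{
      ?book rdfs:label "{title}"@en ;
            dbo:author ?author ;
            dbo:isbn ?isbn ;
            dbo:country ?country ;
            dbo:abstract ?abstract ;
            dbo:numberOfPages ?pages ;
            dbo:language ?language .
      FILTER (lang(?abstract) = "en")
    }}
    """

def request_url(title):
    """URL that would return a JSON result set for the query."""
    params = urlencode({"query": book_query(title),
                        "format": "application/json"})
    return f"{ENDPOINT}?{params}"
```

Sending `request_url("Dune")` to the endpoint would return the matching bindings as JSON.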

This process was created by the MLWizard Extension.
Start RapidMiner Studio, install the extension from the RapidMiner Marketplace, open the Tools menu and click "Automatic System Construction".
A wizard opens and suggests the models that best fit your data. You can choose the one with the highest accuracy, and the wizard creates the RapidMiner process for you.

This example process crawls the web (the RapidMiner forum) for entries, extracts the information with the Process Documents operator and applies clustering to the results. The process shows the interaction between the Web Mining Extension and the Text Mining Extension for RapidMiner.
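The crawl / process-documents / cluster pipeline can be pictured in plain Python. The hard-coded "posts", the whitespace tokenizer, and the tiny k-means below are stand-ins for the actual crawling and the Process Documents and Clustering operators:

```python
# Pure-Python analogue of crawl -> Process Documents -> Clustering.
# The posts are invented; real input would come from the forum crawl.
import math
from collections import Counter

posts = [
    "how to install the text mining extension",
    "install extension from the marketplace",
    "clustering with k means on text data",
    "k means clustering of documents",
]

def tokenize(text):
    return text.lower().split()

vocab = sorted({t for p in posts for t in tokenize(p)})

def vectorize(text):
    """Term-frequency vector over the shared vocabulary."""
    counts = Counter(tokenize(text))
    return [counts[w] for w in vocab]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans2(vectors, iters=10):
    """k-means with k=2, deterministically seeded with the first
    and last vectors so the sketch is reproducible."""
    centroids = [list(vectors[0]), list(vectors[-1])]
    labels = [0] * len(vectors)
    for _ in range(iters):
        labels = [min(range(2), key=lambda c: dist(v, centroids[c]))
                  for v in vectors]
        for c in range(2):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

labels = kmeans2([vectorize(p) for p in posts])
```

With this toy input, the two "installation" posts end up in one cluster and the two "clustering" posts in the other.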

This example process can be used with the Multimedia Extension (http://www.burgsys.com/) in RapidMiner Studio. It shows how to create a QR code with the extension and how to perform image transformations.
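The kind of transformations involved can be pictured on a tiny grayscale "image" (a list of pixel rows); QR code generation itself requires the extension and is not reproduced here:

```python
# Two simple image transformations on a matrix of pixel values.

def rotate90(image):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def invert(image, max_val=255):
    """Invert intensities, e.g. black <-> white in a QR-like bitmap."""
    return [[max_val - p for p in row] for row in image]

img = [[0, 255],
       [255, 0]]
```

For example, `rotate90([[1, 2], [3, 4]])` yields `[[3, 1], [4, 2]]`, and `invert(img)` swaps the black and white pixels.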

This process uses the RapidMiner Linked Open Data extension and the Recommender extension to build a hybrid, Linked Open Data-enabled recommender system for books. The input data for the process can be found here. More information about the process can be found here.
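A hybrid recommender of this kind combines a collaborative signal (who likes which books) with a content signal (Linked Open Data features of the books). The sketch below uses invented titles, DBpedia-style categories, and user likes to illustrate the scoring idea; it is not the extension's actual algorithm:

```python
# Hybrid scoring sketch: collaborative part from co-liking users,
# content part from (assumed) DBpedia categories. All data invented.

categories = {
    "Dune": {"Science_fiction", "Novels"},
    "Neuromancer": {"Science_fiction", "Cyberpunk", "Novels"},
    "Emma": {"Romance", "Novels"},
}
likes = {  # user -> set of liked books
    "u1": {"Dune", "Neuromancer"},
    "u2": {"Dune"},
    "u3": {"Emma"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def collab_sim(b1, b2):
    """Overlap of the two books' fan bases."""
    fans1 = {u for u, s in likes.items() if b1 in s}
    fans2 = {u for u, s in likes.items() if b2 in s}
    return jaccard(fans1, fans2)

def hybrid_score(user, book, alpha=0.5):
    """Blend of collaborative and content similarity to liked books."""
    liked = likes[user]
    collab = max(collab_sim(book, b) for b in liked)
    content = max(jaccard(categories[book], categories[b]) for b in liked)
    return alpha * collab + (1 - alpha) * content

def recommend(user):
    unseen = set(categories) - likes[user]
    return max(unseen, key=lambda b: hybrid_score(user, b))
```

For user `u2`, who liked only "Dune", the blend favours "Neuromancer" (shared fans and shared categories) over "Emma".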

This process uses features extracted from DBpedia to predict the fuel consumption of cars, combining operators from the Linked Open Data and Weka extensions. The process first reads a list of cars with their fuel consumption values. The DBpedia lookup linker is used to link the car names to DBpedia resources. The established links are then used to generate additional features, i.e. the direct types and categories of each car. Finally, an M5 Rules operator is used to predict the fuel consumption.
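The feature-generation step can be sketched as turning each car's categories into binary columns. The car names and categories below are invented, and a simple 1-nearest-neighbour predictor stands in for the M5 Rules learner, which is not reproduced here:

```python
# Binary features from (invented) DBpedia categories, plus a 1-NN
# predictor as a stand-in for M5 Rules.

cars = {
    "VW_Golf": {"Compact_cars", "Front-wheel-drive_vehicles"},
    "Ford_F-150": {"Pickup_trucks", "Four-wheel-drive_vehicles"},
    "Fiat_500": {"City_cars", "Front-wheel-drive_vehicles"},
}

def binary_features(entities):
    """One column per category; 1 if the entity has it, else 0."""
    columns = sorted({c for cats in entities.values() for c in cats})
    rows = {e: [int(c in cats) for c in columns]
            for e, cats in entities.items()}
    return columns, rows

columns, rows = binary_features(cars)

consumption = {"VW_Golf": 6.0, "Ford_F-150": 12.0}  # l/100km, training labels

def predict(entity):
    """Predict by copying the label of the nearest training car
    (Hamming distance on the binary feature vectors)."""
    vec = rows[entity]
    nearest = min(consumption,
                  key=lambda e: sum(a != b for a, b in zip(rows[e], vec)))
    return consumption[nearest]
```

Here the "Fiat_500" shares a category with the "VW_Golf", so it inherits the Golf's consumption rather than the truck's.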

This simple workflow shows how to detect wrong links in Linked Open Data. It uses operators from the Linked Open Data and the Anomaly Detection extensions.
The process first reads a list of links from the EventMedia endpoint, linking to DBpedia, then creates feature vectors for each of those links. Finally, an outlier detection operator is employed to find suspicious links.
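The outlier step can be illustrated with a minimal distance-based detector: each link gets a feature vector, and the link farthest on average from all the others is flagged as suspicious. The link names and vectors below are invented for the sketch:

```python
# Minimal distance-based outlier detection over link feature vectors.
import math

links = {
    "event1->dbpedia:Paris":  [1.0, 0.9, 1.0],
    "event2->dbpedia:London": [0.9, 1.0, 0.8],
    "event3->dbpedia:Banana": [0.1, 0.0, 0.2],  # deliberately off
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def outlier_scores(vectors):
    """Mean Euclidean distance from each point to all others."""
    return {k: sum(dist(v, w) for j, w in vectors.items() if j != k)
               / (len(vectors) - 1)
            for k, v in vectors.items()}

scores = outlier_scores(links)
suspicious = max(scores, key=scores.get)
```

The deliberately mismatched third link scores highest and is reported as the suspicious one.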