
SEKT Portal


Demo Videos

by Denny Vrandecic last modified 2007-04-16 06:20 PM

A list of videos of demos of the software tools developed in the SEKT project.

SEKT Demo Videos

This website lists videos of the demos of a number of SEKT achievements.
To view some of the AVI videos, you may need the TSCC codec or the Camtasia player (both Windows only). To view the Macromedia Flash encoded videos ([Flash]), the Flash plug-in is required.

Ontology generation


In this demonstration we present how OntoGen, a system for semi-automatic construction of ontologies, can be used to build a topic ontology on top of a collection of companies from Yahoo! Finance.

Simultaneous Ontologies

In this demonstration we present how OntoGen can be used to model different ontologies on top of the same data. We use Reuters news articles as the data and construct two different ontologies: the first is based on geographic information associated with the news articles, and the second on topic information provided by Reuters.

Metadata generation

CLIE: Controlled Language Information Extraction

CLIE (Controlled Language Information Extraction) allows the user to define and populate ontologies with a simplified sublanguage of English. This controlled language has a restricted syntax but the lexicon (of class, instance and property names) is defined according to what the user enters rather than in advance.
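The idea of a restricted syntax with an open, user-defined lexicon can be sketched in a few lines. The sentence patterns and class names below are illustrative assumptions, not CLIE's actual controlled language:

```python
import re

# Toy controlled-language parser: two sentence patterns with an open
# lexicon, i.e. class and instance names are whatever the user enters.
CLASS_DEF = re.compile(r"^There is a class (\w+)\.$")
INSTANCE_DEF = re.compile(r"^(\w+) is a (\w+)\.$")

def parse(sentences):
    """Turn controlled-language sentences into (classes, instances)."""
    classes, instances = set(), {}
    for s in sentences:
        if m := CLASS_DEF.match(s):
            classes.add(m.group(1))
        elif m := INSTANCE_DEF.match(s):
            name, cls = m.groups()
            if cls not in classes:
                raise ValueError(f"unknown class: {cls}")
            instances[name] = cls
        else:
            raise ValueError(f"not in the controlled language: {s}")
    return classes, instances

classes, instances = parse([
    "There is a class City.",
    "Ljubljana is a City.",
])
print(instances)  # {'Ljubljana': 'City'}
```

The restricted syntax keeps parsing unambiguous, while the lexicon grows from the user's own input rather than being fixed in advance.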

OBIE: Ontology Based Information Extraction

OBIE (Ontology-Based Information Extraction) allows the user to inspect a document that has been automatically annotated with respect to an ontology and manually improve it. The user adds, deletes and changes annotations, then sends the corrected document back to the machine learning tool's trainer, so that further automatic annotation will be better.

Massive Automatic Annotation

The Massive Semantic Annotation demo movie shows the capability of next-generation web software to analyse textual data and provide powerful access to the content. The analysis part, known as semantic annotation, identifies real-world entities in texts (such as brands, companies, and people). These entities are then represented in an instance base with respect to a basic upper-level ontology, and the content is indexed with respect to them, allowing entity-centric access methods to be implemented.

The demonstration shows a number of variations of semantic search in which one can specify the entities (or content) of interest through restrictions on their class/type, names/aliases, attributes or relations to other entities. A semantic search may return a list of entities matching the criteria, whose semantic descriptions can then be explored, or textual content that refers to these entities.

Although semantic annotation, indexing and retrieval are still neither as scalable nor as usable as traditional keyword-based search, they have huge potential to change the way we access information on the web, over corporate networks or on our own computers. By letting the user specify the search criteria more accurately (e.g. through class restrictions), results referring to entities with the same (or a similar) name but of another class are omitted: fewer results are returned, while all the correct ones are retained. The semantic search is based not only on the index over the content, but also on background knowledge (ontologies and instance bases). Using this background knowledge, more correct results can be obtained when entities have alternative names in their semantic description. For example, a search for Beijing can return results that mention only Peking, since both aliases refer to the same entity and its URI is used for indexing the content.

In conclusion, although semantic search is still not as mature as traditional keyword search, it has significant potential to bring information access to a new level.
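The Beijing/Peking effect follows from indexing by entity URI rather than by surface form. A minimal sketch, with made-up URIs and a hand-written alias table standing in for the annotation pipeline:

```python
# Documents are indexed by entity URI, not by the literal string, so a
# search for one alias ("Beijing") also finds documents that mention
# another ("Peking").  URIs and aliases here are invented for illustration.
ALIASES = {
    "Beijing": "urn:ex:Beijing",
    "Peking": "urn:ex:Beijing",
    "London": "urn:ex:London",
}

def build_index(docs):
    index = {}  # entity URI -> set of document ids
    for doc_id, text in docs.items():
        for alias, uri in ALIASES.items():
            if alias in text:
                index.setdefault(uri, set()).add(doc_id)
    return index

def search(index, name):
    return sorted(index.get(ALIASES.get(name, name), set()))

docs = {1: "Flights to Peking resume.", 2: "London markets open."}
index = build_index(docs)
print(search(index, "Beijing"))  # [1] -- matched via the alias "Peking"
```

In a real system the alias table comes from the instance base's semantic descriptions rather than being hard-coded.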

KIM CORE Search - KIM Co-Occurrence and Ranking of Entities

CORE Search stands for Co-Occurrence and Ranking of Entities Search. It is a hybrid technology combining Semantic Web technology, information extraction and relational databases. The essence of CORE Search is to record information about the co-appearance of entities in the same context, which indicates "soft" or "associative" relations between them. This information can be interpreted in two different ways. First, it can be used to implement an alternative search technique based on co-occurrence, as presented in the demo: by selecting entities, the set of textual documents is narrowed down, and the information space is limited to the other entities co-appearing with the selected ones. Through a sequence of such selections, the user can reduce the entities and the content to readable and manageable result sets. Second, the co-occurrence information can be used to calculate statistics about the popularity of entities in a given context, information sub-space and period, a technique also referred to as timeline generation. It allows the tracking of trends and tendencies; and since each point in the timeline is associated with the set of documents that forms it, the user can navigate from the timeline to the documents where the events behind the peaks or drops are evident.
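Both interpretations rest on the same counts: entity pairs per context and entity mentions per period. A minimal sketch, with invented documents and annotations assumed as input:

```python
from collections import Counter
from itertools import combinations

# Each "document" carries the entities found in it (normally produced
# by semantic annotation) plus a time period for the timeline.
docs = [
    {"entities": ["BT", "IBM"], "year": 2005},
    {"entities": ["BT", "IBM", "Siemens"], "year": 2005},
    {"entities": ["BT"], "year": 2006},
]

# Co-occurrence: count each unordered entity pair appearing together.
cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc["entities"])), 2):
        cooc[(a, b)] += 1

# Timeline: popularity of one entity per period.
timeline = Counter(doc["year"] for doc in docs if "BT" in doc["entities"])

print(cooc[("BT", "IBM")])  # 2
print(timeline[2005])       # 2
```

CORE Search keeps such counts in a relational store so that selections and timelines can be computed over large collections.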

Ontology and metadata management


The demo video shows the graphical user interface of Text2Onto - a framework for incremental ontology learning and data-driven ontology evolution.
First, the user sets up an ontology learning workflow by selecting appropriate algorithms for each ontology learning task. When multiple algorithms are applied for the same task, so-called combiners can be chosen to merge the individual results.
After having specified the ontology learning workflow, the user creates a corpus by adding a small number of text or HTML files. The documents selected for this demo are paper abstracts belonging to the 'semantic web' information space of the BT digital library.
As soon as the results of the ontology learning process are available, the Model of Possible Ontologies (POM), consisting of concepts, instances and various types of relations, is displayed in the results panel. Each learned ontology element is associated with a confidence or relevance value, which indicates how certain Text2Onto is about the correctness of the respective element, or how relevant it is to the current domain.
When a new document is added to the corpus, new ontology elements are extracted and all confidence and relevance values of existing ontology elements are adapted incrementally.
The user can access the change history and explanations for each ontology element by means of a context menu.
By giving positive or negative feedback, the user can explicitly include individual ontology elements in, or exclude them from, the model of possible ontologies.
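The role of a combiner can be sketched briefly. Averaging is just one plausible merging strategy and the algorithm names are invented; Text2Onto's actual combiners may work differently:

```python
# A "combiner" merges per-term confidences produced by several ontology
# learning algorithms for the same task into a single value per term.
def combine(results):
    """results: list of {term: confidence} dicts, one per algorithm."""
    merged = {}
    for result in results:
        for term, conf in result.items():
            merged.setdefault(term, []).append(conf)
    return {term: sum(cs) / len(cs) for term, cs in merged.items()}

tfidf_based   = {"ontology": 0.9, "paper": 0.4}   # hypothetical algorithm 1
entropy_based = {"ontology": 0.7}                  # hypothetical algorithm 2
pom = combine([tfidf_based, entropy_based])
print(round(pom["ontology"], 2))  # 0.8
```

Incremental updates then amount to recomputing these values when a document is added, rather than re-learning from scratch.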

Question Answering with KAON2 and ORAKEL

In this demo we present an approach to query answering over knowledge sources that makes use of different ontology management components within an application scenario of the BT Digital Library. The novelty of the approach lies in the combination of different semantic technologies providing a clear benefit for the application scenario considered.

KAON2 OWL tools

The KAON2 OWL tools are a set of tools for working on OWL files, exposing some capabilities of the KAON2 ontology infrastructure on the command line. They are a continuation of the dlpconvert development done in the first year of SEKT.

DION: Debugger of Inconsistent Ontologies

DION is a debugger of inconsistent ontologies, based on an informed bottom-up approach. DION is powered by XDIG, an extended DIG Description Logic Interface for Prolog, in particular for SWI-Prolog.

PION: a Reasoner/System for Processing Inconsistent ONtologies

The classical entailment in logics is explosive: any formula is a logical consequence of a contradiction. Therefore, conclusions drawn from an inconsistent ontology by classical inference may be completely meaningless. An inconsistency reasoner is one which is able to return meaningful answers to queries, given an inconsistent ontology.
PION is a reasoner/system which can return meaningful answers to queries on inconsistent ontologies. PION is powered by XDIG, an extended DIG Description Logic Interface for Prolog, in particular for SWI-Prolog. PION supports TELL requests both in DIG and in OWL, and ASK requests in DIG.
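The contrast with explosive classical entailment can be illustrated on a toy fact base. This is a drastic simplification under assumed conventions (signed propositional facts instead of DL axioms, and a four-way answer scheme), not PION's actual algorithm:

```python
# Signed facts: ("+", s) asserts statement s, ("-", s) asserts its negation.
# The fact base below is inconsistent about "penguin_flies".
kb = [("+", "penguin_is_bird"),
      ("+", "birds_fly"),
      ("-", "penguin_flies"),
      ("+", "penguin_flies")]   # contradicts the line above

def answer(kb, query):
    """Answer a query from only the evidence relevant to it, instead of
    letting one contradiction entail everything."""
    pos = ("+", query) in kb
    neg = ("-", query) in kb
    if pos and neg:
        return "overdetermined"   # contradictory evidence for this query
    if pos:
        return "accepted"
    if neg:
        return "rejected"
    return "undetermined"

print(answer(kb, "birds_fly"))      # accepted, despite the inconsistency
print(answer(kb, "penguin_flies"))  # overdetermined
```

A classical reasoner would accept every query over this fact base; restricting attention to query-relevant evidence keeps unrelated answers meaningful.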

MORE: Multi-version Ontology REasoner

MORE is a multi-version ontology reasoner based on a temporal logic approach. MORE is powered by XDIG, an extended DIG Description Logic Interface for Prolog, in particular for SWI-Prolog.

Ontology mediation

Mapping API

This demo presents the results of research on ontology-based information integration. This work, carried out as part of SEKT work package 4, shows how to relate two structured data sources, in this case an ontology and a digital library database. The demo shows how to create the mappings between them using OntoMap and how to query the integrated data using KAON2.

FOAM and OntoMap

OntoMap is a tool that helps to map ontologies intuitively onto each other.

Knowledge access

Squirrel Search and Browse

Squirrel combines keyword-based and semantic searching: the user initially enters free-text terms and sees results immediately, and can then refine the query with ontological support, e.g. by selecting from a set of returned topics or matching entities. The intention is to strike a balance between the speed of simple free-text search and the power of semantic search. In addition, the ontological approach gives the user far more opportunity to browse to related documents and entities. The key features of Squirrel are:
  • Combined free text and semantic search
  • Based on W3C's OWL recommendation
  • Ontology based browsing through documents, authors, topics, entities, etc.
  • Highly configurable interface
  • 'Meta-result' to help guide search by indicating the types of resource that match the query
  • Topic-based browsing and refinement
  • Use of rules and reasoning through KAON2 (ontology management tool from the University of Karlsruhe) to infer knowledge that is not explicitly stated.
  • Natural language summaries of ontological knowledge
  • User profile based result ranking
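The search-then-refine interaction above can be sketched in a few lines. The documents, topic labels and refinement step are invented for illustration and stand in for Squirrel's ontology-backed facets:

```python
# A free-text pass returns immediate results; the topics attached to
# those results are then offered as facets for semantic refinement.
docs = [
    {"id": 1, "text": "semantic web services", "topic": "Web Services"},
    {"id": 2, "text": "semantic annotation of text", "topic": "NLP"},
    {"id": 3, "text": "keyword indexing", "topic": "IR"},
]

def keyword_search(docs, term):
    return [d for d in docs if term in d["text"]]

def refine_by_topic(results, topic):
    return [d for d in results if d["topic"] == topic]

hits = keyword_search(docs, "semantic")      # immediate free-text results
topics = sorted({d["topic"] for d in hits})  # facets offered to the user
refined = refine_by_topic(hits, "NLP")
print(topics)                      # ['NLP', 'Web Services']
print([d["id"] for d in refined])  # [2]
```

The 'meta-result' in Squirrel plays the role of the `topics` list here: it summarises what kinds of resources matched before the user commits to a refinement.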


This video shows a demo of the visualisation component. The main goal of this component is to render ontologies in a user-friendly visualisation using 3D technologies. The approach chosen is based on the definition of a new ontology, called the visualisation ontology. This ontology defines which elements of the domain ontology will be visualised in the final application, as well as graphical properties such as colour and graphical contexts. The video shows an example use of this component in the visualisation of the Digital Library Ontology.


Integrated Active Learning and Text2Onto

The demo video shows the interaction of a user with the Active Learning module of Text2Onto - a machine learning approach to instance classification.
First, the user sets up the standard ontology learning workflow using the main Text2Onto components Concept Extraction, Instance Extraction and Instance Classification.
After running the selected algorithms on the specified corpus from the geographical domain, the user sets up the information for the active learning task. In the demo example, the task is to classify instances according to whether they belong to the concept "island" or not.
The pattern-based instance classification of Text2Onto provides initial input for the active learning task in the form of positive training examples; the user may add further training examples.
After the active learning task is started, the learning module builds SVM classification models based on the currently available training examples. Based on these models, the user is asked about those unlabelled examples for which the prediction confidence is low. The user gives explicit feedback on whether the instantiation relationship holds between the instance and the concept in question, and the model is retrained. This process can be repeated until the user decides to stop, e.g. because the prediction confidence has become high.
The user can inspect the results of the final classification sorted according to confidence and add them to the model of possible ontologies.
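The query-the-least-confident loop can be sketched with a toy classifier. A simple centroid classifier stands in for the SVM, and the data points are invented; only the uncertainty-sampling idea carries over:

```python
# Active learning by uncertainty sampling: query the user about the
# unlabelled example whose prediction confidence (here, the margin
# between distances to the two class centroids) is lowest.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def confidence(x, pos, neg):
    """Margin between the distances to the two class centroids."""
    return abs(dist(x, centroid(neg)) - dist(x, centroid(pos)))

pos = [(0.0, 0.0)]                 # labelled "island" examples
neg = [(4.0, 4.0)]                 # labelled non-examples
unlabelled = [(0.5, 0.5), (2.0, 2.0), (3.8, 3.9)]

# One round: pick the least confident example and ask the user for a label.
query = min(unlabelled, key=lambda x: confidence(x, pos, neg))
print(query)  # (2.0, 2.0) -- nearly equidistant from both centroids
pos.append(query)  # suppose the user labels it positive; then retrain
```

In the demo the loop continues, retraining the SVM after each answer, until the remaining predictions are confident enough.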


EvA: Evolution Annotator

EvA is an OntoStudio plugin that enables the collaborative engineering of ontologies based on the DILIGENT methodology. Stakeholders in the ontology can discuss its evolution and refinement through an interface that supports the argumentation framework which was evaluated as the most effective for ontology engineering.

Semantic MediaWiki

Semantic MediaWiki is a plug-in for the MediaWiki software that runs popular wikis such as Wikipedia. It enables users to add semantic data and to export it as RDF. It also allows the collaborative evolution of large amounts of instance data, and offers discussion facilities that are known to work well.

KMM Knowledge Maturity Model Questionnaire - SIMONET

SIMONET implements the KMM Questionnaire and thus allows the user to assess the maturity of knowledge management within a given organization. It points to the areas that deserve more attention, and offers a way to monitor the organization's improvement.

Case Studies

Intelligent Integrated Decision Support for Legal Professionals

This video shows some of the results of applying SEKT technologies in the case study Intelligent Integrated Decision Support for Legal Professionals. The goal of this case study is to support professional judges through the development of an intelligent frequently asked questions (FAQ) system. This system uses Semantic Web technologies to find the question-answer pairs that best match a question posed by the user (usually a new judge). The video also shows the second part of the system, whose goal is to explain the question-answer pair provided by the system in the form of jurisprudence. Finally, it shows the integration of search and browse technologies for searching the jurisprudence database.