Semantic Web Days @ I-Semantics 2007

Abstracts

Presentation: Semantic web policies for security and privacy

Piero Bonatti (Università di Napoli Federico II / REWERSE)
Semantic web policies are semantic markup for static and dynamic resources. Such policies are not only executable specifications of a service's access control mechanism or of a user's information disclosure criteria: based on machine-understandable policies, one may select the service that best fits one's privacy requirements, understand how to obtain a service, and find out why a request has been denied. Semantic web policies are self-describing artifacts and may include small, lightweight ontologies.
With suitable tools, this feature improves software agent interoperability as well as user awareness and control. For example, policy-driven trust negotiation services enable flexible and cooperative security/privacy policy enforcement. Natural-language front-ends, such as controlled natural language parsers and second-generation explanation facilities, enable automated documentation and help non-expert users understand negotiations, write their own policies, and validate them. Deploying and maintaining user-friendly secure services becomes easier and faster with such tools.
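As a rough illustration of what an executable, machine-understandable policy might look like (a minimal sketch with hypothetical names, not material from the presentation), the following Python fragment evaluates a simple disclosure policy and explains why a request is denied, in the spirit of the explanation facilities mentioned above:

# Minimal sketch of an executable disclosure policy (hypothetical names).
# Each rule lists the credentials a requester must present before a
# resource may be disclosed.
from dataclasses import dataclass, field

@dataclass
class Rule:
    resource: str
    required_credentials: set = field(default_factory=set)

def evaluate(policy: list, resource: str, presented: set):
    """Return (allowed, explanation) for a disclosure request."""
    for rule in policy:
        if rule.resource == resource:
            missing = rule.required_credentials - presented
            if not missing:
                return True, "all required credentials presented"
            return False, "denied: missing credentials " + str(sorted(missing))
    return False, "denied: no rule covers this resource"

policy = [Rule("medical_record", {"doctor_id", "patient_consent"})]
print(evaluate(policy, "medical_record", {"doctor_id"}))
# -> (False, "denied: missing credentials ['patient_consent']")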

Presentation: Semantic Technologies for Business Intelligence applications

Paul Buitelaar (DFKI)
Real-world application scenarios for semantic technologies used in Business Intelligence (BI) are developed within the RTD project MUSING (MUlti-industry, Semantic-based next generation business IntelliGence, www.musing.eu), co-funded by the European Commission under the 6th Framework Programme, Information Society. MUSING provides semantics-driven BI services for three application areas with high social impact: Financial Risk Management (FRM), Internationalisation and IT Operational Risk.
In one of the use cases in the FRM area we investigate the use of an ontology infrastructure, together with rule-based and statistical natural language processing, to guide the automated process of extracting and merging relevant information from both structured and unstructured (financial) documents. This process makes it possible to include qualitative information, typically contained in unstructured documents such as news articles or web pages, in systems designed for rating and/or scoring companies, thus adding a high level of semantics and transparency to decision procedures for granting credit to companies.
A further issue is to distinguish the most up-to-date among several sources, to organize the extracted information along a timeline, and to infer relevant knowledge on this basis.
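To illustrate the last point (a minimal sketch with hypothetical data, not part of MUSING), extracted statements can be organized chronologically and, where sources conflict, the most recent one preferred:

# Illustrative sketch: organise extracted facts along a timeline and
# keep the most recent value when sources disagree (hypothetical data).
from datetime import date

facts = [
    {"company": "ACME", "attribute": "rating", "value": "BBB",
     "date": date(2006, 5, 1), "source": "annual report"},
    {"company": "ACME", "attribute": "rating", "value": "BB",
     "date": date(2007, 2, 10), "source": "news article"},
]

def most_recent(facts, company, attribute):
    """Return the latest extracted value for a given company attribute."""
    relevant = [f for f in facts
                if f["company"] == company and f["attribute"] == attribute]
    return max(relevant, key=lambda f: f["date"], default=None)

timeline = sorted(facts, key=lambda f: f["date"])   # chronological view
latest = most_recent(facts, "ACME", "rating")       # newest source wins
print(latest["value"], latest["source"])            # -> BB news article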

Presentation: GroupMe! - Capturing Semantics in Social Tagging Systems

Fabian Abel (L3S Hannover, NoE REWERSE)
Today, we can observe many different ways to edit, search, and share Web content.
Social tagging systems allow users to annotate single resources; blogs and wikis support the creation of content. However, the automated deduction of sufficiently high-quality semantics from annotations or created content is still very limited, and convincing solutions have yet to be found. On the other hand, more sophisticated and powerful systems that can automatically produce semantic information do not seem to match the way Web users like to interact: drop some content here, tag something there, post a note, and so on.
The GroupMe! application was designed to close the gap between these two needs: handy, "fun-to-use" tools for users that allow the automatic capture of high-quality semantics.
GroupMe! extends the idea of social tagging systems by enabling users to build groups of arbitrary (multimedia) Web resources via simple drag & drop operations, by allowing users to (re-)arrange the resources contained in such groups (the visualization of each resource is adapted to its content type), and by capturing users' grouping, tagging, and arranging activities as RDF descriptions in an easy and collaborative way.
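As a rough sketch of what such RDF descriptions could look like (using a hypothetical vocabulary, not the actual GroupMe! schema), a group and a tagged resource might be captured with rdflib as follows:

# Hypothetical example of capturing a group and a tag as RDF
# (placeholder namespace, not the actual GroupMe! vocabulary).
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/groupme/")

g = Graph()
group = EX["group/42"]
photo = URIRef("http://example.org/some-photo.jpg")

g.add((group, RDF.type, EX.Group))
g.add((group, RDFS.label, Literal("Semantic Web Days 2007")))
g.add((group, EX.contains, photo))
g.add((photo, EX.taggedWith, Literal("conference")))

print(g.serialize(format="turtle"))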

Presentation: NeOn Toolkit - an extensible Ontology Engineering Environment

Hans-Peter Schnurr (ontoprise GmbH, NeOn)
The NeOn project is a multi-million Euro project involving 14 European partners and co-funded by the European Commission's Sixth Framework Programme. NeOn started in March 2006 and aims, over the course of four years, to advance the state of the art in using ontologies for large-scale semantic applications in distributed organizations. In particular, it aims to improve the capability to handle multiple networked ontologies that exist in a particular context, are created collaboratively, and may be highly dynamic and constantly evolving.
One of the first outcomes of the NeOn project is the NeOn Toolkit, an extensible Ontology Engineering Environment that is part of the reference implementation of the NeOn architecture. It contains plugins for ontology management and visualization. A number of commercial plugins extend the toolkit with various functionalities, including rule support (graphical/textual editing, debugging), mediation between ontologies (graphical mapping, interpretation of mapping rules), database integration (import of database schemas, live database access during query answering), and query and reasoning support.
The NeOn reference architecture, together with the NeOn Toolkit, enables the efficient implementation of semantic applications in open environments such as the Semantic Web, in support of the automation of business-to-business relationships, and also in company intranets.

Wernher Behrendt (Salzburg Research)
We present an ontology for content-related knowledge objects (KCOs) which consists of six components: a domain model that describes what the content is about; a task model that explains how the content is intended to be used by different stakeholders; a business model that specifies the rules and regulations under which the content (and the knowledge object itself) can be traded; a presentation and interaction model that specifies the rendering of the content and the intended modes of interaction between the user and the host system of the KCO; and a model of trust and security, where trust concerns measures of confidence for the users of the content and security concerns measures of confidence for its owners. Finally, there is a self-reflection component which exposes the semantics of the five functional components to intelligent services or agents.
The six components define, in essence, an ontology-based generic schema which can be specialised to handle more specific application semantics, and we propose a specialisation methodology which leads to four ontological layers: foundational, sector-specific standard, enterprise layer, and application layer. The paper also argues that KCOs can be the basis for an exchange format for semantic wikis. We ask the workshop participants to challenge two claims: (1) KCOs as a means of declaring finite knowledge bases on the Semantic Web, thus re-introducing closed-world inferencing at the Semantic Web level; (2) KCOs as a sufficiently complete and sound model to be the basis for an exchange format between applications of semantic social software, wikis in particular.
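As a very rough illustration of the six-component structure (a hypothetical data sketch, not the ontology defined in the paper), a KCO could be outlined as follows:

# Hypothetical outline of the six-component KCO schema (illustrative only;
# the actual ontology is defined in the paper).
from dataclasses import dataclass

@dataclass
class KnowledgeContentObject:
    domain_model: dict          # what the content is about
    task_model: dict            # how stakeholders intend to use the content
    business_model: dict        # rules and regulations for trading the content
    presentation_model: dict    # rendering and intended user interaction
    trust_security_model: dict  # confidence measures for users and owners
    self_reflection: dict       # exposes the other components to services/agents

kco = KnowledgeContentObject(
    domain_model={"topic": "semantic wikis"},
    task_model={"use": "exchange between wiki applications"},
    business_model={"licence": "CC-BY"},
    presentation_model={"render_as": "article"},
    trust_security_model={"signed": True},
    self_reflection={"components": 6},
)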

 
