Area 1 - Internet Computing
|
Title: |
EXPERIMENTATION MANAGEMENT KNOWLEDGE SYSTEM |
Author(s): |
R. William Maule, Shelley P. Gallup and Gordon
Schacher |
Abstract: |
A current focus in the DoD involves the
integration of information across the different military
branches for operations. Network-centric information methods
will enable efficiencies through the integration of best-of-
breed software and hardware from each branch of the military,
together with the latest advances from government laboratories
and the private sector. Information merging will promote synergy
and expand effective use of the enterprise infrastructure to
realize improved operational and organizational processes.
Research to date has focused on core network and infrastructure
capabilities but has not fully addressed strategic
organizational objectives in the context of systems integration.
A model is advanced that establishes variables for enterprise
analysis to assess strategic technical objectives enabled or
hindered through new network-centric capabilities. Examples are
derived from operational experimentation in network-centric
warfare but presented generically to apply to any organization
seeking to assess the effectiveness of organizational strategy
as enabled or hindered through network-based communications and
enterprise-level systems integration. |
|
Title: |
WEB SERVICES AS AN INFORMATION ENABLER IN THE
RETAIL INDUSTRY |
Author(s): |
Sudeep Mallick and Anuj Sharma |
Abstract: |
Retail organizations work on thin margins and hence it is imperative that they utilize information technology to optimize time and space across the entire retail supply chain in order to remain competitive. However, while emerging technologies present new opportunities for lowering the cost of operation and increasing efficiency, productivity and overall customer satisfaction, experimenting with these new technologies is also a big risk, especially in a low-margin scenario. Web services are an emerging technology holding tremendous promise as a platform-neutral, easy-to-implement mechanism to achieve information and business process integration in the extended enterprise. In this paper, we review how Web services in consonance with other technologies can prove to be an effective business enabler and analyze the application of Web services in the retail vertical. The paper also proposes a road map approach towards adoption of Web services in this industry vertical which will help increase ROI and at the same time minimize the risk of adopting this emerging technology. |
|
Title: |
INTEGRATED METHODOLOGY FOR INTERNET-BASED
ENTERPRISE INFORMATION SYSTEMS DEVELOPMENT |
Author(s): |
Sergey V. Zykov |
Abstract: |
The paper considers software development issues for large-scale enterprise information systems (IS) with databases (DB) in a global, heterogeneous, distributed computational environment. Due to the high rate of IT development, present-day society has accumulated, and continues to rapidly increase, an extremely large data burden. Manipulating such huge data arrays has become an essential problem, particularly due to their global distribution and their heterogeneous, weakly structured character. A conceptual approach to integrated Internet-based IS design, development and implementation is presented, including formal models, a software development methodology and original software development tools for visual problem-oriented development and content management. IS implementation results showed shorter implementation times and lower costs compared to commercially available software. |
|
Title: |
PRESERVING IMMUTABLE SERVICES THROUGH WEB SERVICE
IMPLEMENTATION VERSIONING |
Author(s): |
Robert Steele and Takahiro Tsubono |
Abstract: |
Widespread adoption of a Web services-based
paradigm for software applications will imply that applications
will typically have potentially many dependencies upon Web
services that they invoke or consume. These invoked services
might typically be available from a remote site and be under the
administration of third parties. This scenario implies a
significant vulnerability of a Web service-based application:
one or more of the services which it consumes may become
altered, hence potentially “breaking” the application. Such alterations might be those that alter the WSDL signature of the service, or changes to the underlying service implementation that do not change the WSDL signature. In this
paper, we will focus on the second of these two cases and will
introduce a versioning system that can detect changes to service
implementations and that can avoid the breaking of applications
that call services in the face of changes to the implementations
of those called services. |
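A minimal Python sketch of the general idea behind implementation-change detection, offered here only as an illustration and not as the authors' versioning system: the response to a fixed probe request is fingerprinted, and a changed fingerprint signals a changed implementation. The probe, the hashing choice and the fingerprint store are assumptions.

# Illustrative sketch: detect changes to a service implementation by
# fingerprinting its response to a fixed "probe" request. The probe request
# and fingerprint store are assumptions, not the paper's mechanism.
import hashlib
import json

def fingerprint(response_body: bytes) -> str:
    """Return a stable hash of a canonical probe response."""
    return hashlib.sha256(response_body).hexdigest()

def implementation_changed(probe_response: bytes, stored_fp: str) -> bool:
    """True if the service's observable behaviour differs from the recorded version."""
    return fingerprint(probe_response) != stored_fp

if __name__ == "__main__":
    old = json.dumps({"total": 42}).encode()
    new = json.dumps({"total": 43}).encode()
    fp = fingerprint(old)
    print(implementation_changed(old, fp))  # False: same behaviour
    print(implementation_changed(new, fp))  # True: implementation altered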
|
Title: |
EMPOWERING DISABLED USERS THROUGH THE CONCEPT CODING FRAMEWORK: AN APPLICATION OF THE SEMANTIC WEB |
Author(s): |
Andy Judson, Nick Hine, Mats Lundälv and Bengt
Farre |
Abstract: |
The World Wide Web offers many services from
typical textual web content to shopping, banking and educational
services, for example, virtual learning environments. These
technologies are inherently complex to use, but in their very
nature offer many benefits to the disabled person. The traversal
of the web relies upon the cognitive skills of the user. You
need to know what you want to do. You need to understand what
the site is allowing you to do, and you need to be able to
complete the task by interacting with the website. The emergence
of the semantic web offers the potential to reduce the cognitive
burden of understanding what the site can do and how to complete
a task, whilst also offering new solutions to typical
accessibility issues. In this paper, we aim to present how the
semantic web can be used to enhance accessibility. Firstly we’ll
give some examples of what is currently possible. Secondly we’ll
motivate some research initiatives to enhance user independence
for the disabled person, particularly those that use
Augmentative and Alternative Communication (AAC) systems and/or
are Learning Impaired. |
|
Title: |
FORMAL VERIFICATION OF TRANSACTIONAL SYSTEMS |
Author(s): |
Mark Song, Adriano Pereira and Sergio Campos |
Abstract: |
Today, the trend in software is toward bigger, more complex systems. This is due in part to the fact that computers become more powerful every year, leading users to expect more from them. People want software that is better adapted to their needs, which, in turn, makes software more complex. This trend has also been influenced by the expanding use of the Internet for exchanging all kinds of information. As a new computational infrastructure has become available, new distributed applications which were previously too expensive or too complex have become common. In this context, web-based systems have become a popular topic for business and academic research. However, web applications tend to generate complex systems, and as new services are created, the frequency with which errors appear has increased significantly. This paper presents UML-CAFE, an environment which can be used to help the designer in the development of transactional systems, such as web-based ones. It is divided into the UML-CAFE methodology, a set of transformation patterns, and the UML-CAFE translator, which describe and map UML specifications into a formal model to be verified. |
|
Title: |
A META-MODEL FOR THE DIALOG FLOW NOTATION |
Author(s): |
Matthias Book, Volker Gruhn and Nils Mirbach |
Abstract: |
While the separation of presentation and
application logic is widely practiced in web-based applications
today, many do not cleanly separate application and dialog
control logic, which leads to inflexible implementations
especially when multiple presentation channels shall be served
by the same application logic. We therefore present a notation
for specifying the complete dialog flow of an application
separately from the application logic and show how to construct
a formal metamodel for it using the OMG's Meta-Object Facility
(MOF). This allows the validation of dialog flow models, as well
as the generation of machine-readable dialog flow specifications
from graphical models. |
|
Title: |
LOGIC-BASED MOBILE AGENT FRAMEWORK USING WEB
TECHNOLOGIES |
Author(s): |
Shinichi Motomura, Takao Kawamura and Kazunori
Sugahara |
Abstract: |
We have proposed Maglog, a framework for mobile multi-agent systems. Maglog is based on Prolog and has the concept of a field. A field is an object which can contain a knowledge base. With the concept of a field, Maglog provides a simple and unified interface for 1) inter-agent communication, 2) agent migration between computers, and 3) utilization of data and programs on computers. An agent migrates using HTTP as the transport protocol and XML as the encoding format. In this paper, we present the implementation of Maglog on the Java environment in detail. Since we have implemented both a command-line shell and a GUI for Maglog, users can choose between them according to their needs. In addition, through the XML-RPC interface for Maglog, which we have also implemented, other systems can easily utilize Maglog. As examples, we outline several applications developed through this XML-RPC interface. |
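As an illustration only, a Python sketch of how an external system might call an agent framework such as Maglog over XML-RPC; the server URL and the remote method name "execute" are assumptions, since the paper's actual interface is not given here.

# Illustrative sketch only: calling a hypothetical Maglog XML-RPC endpoint.
import xmlrpc.client

def ask_maglog(server_url: str, goal: str):
    """Send a Prolog-style goal to an XML-RPC endpoint and return the reply."""
    proxy = xmlrpc.client.ServerProxy(server_url)
    return proxy.execute(goal)  # hypothetical remote method name

if __name__ == "__main__":
    # Example usage (requires a running server at this hypothetical address):
    # print(ask_maglog("http://localhost:8080/maglog", "ancestor(X, taro)."))
    pass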
|
Title: |
P2P WEB-BASED TRAINING SYSTEM USING MOBILE AGENT
TECHNOLOGIES |
Author(s): |
Shinichi Motomura, Takao Kawamura, Ryosuke
Nakatani and Kazunori Sugahara |
Abstract: |
In this paper, we present a novel framework for asynchronous Web-based training. The proposed system has two distinguishing features. Firstly, it is based on a P2P architecture for scalability and robustness. Secondly, all contents in the system are not only data but also agents, so that they can mark users' answers, tell the correct answers, and show extra information without human instruction. We also present a prototype implementation of the proposed system on Maglog, a Prolog-based framework for building mobile multi-agent systems that we have developed. The agents migrate using HTTP as the transfer protocol and XML as the encoding format. The user interface program of the proposed system is built on Squeak. Performance simulations demonstrate the effectiveness of the proposed system. |
|
Title: |
AN ARCHITECTURE FOR CONTEXT-SENSITIVE
TELECOMMUNICATION APPLICATIONS |
Author(s): |
Agnieszka Lewandowska, Maik Debes and Jochen
Seitz |
Abstract: |
Nowadays, everybody utilizes information from
different sources and services from various providers. However,
one must know how to find the interesting information and how to
access services. Wouldn’t it be better if information and
service access could adapt to the user and his interests? This
approach is the central point of ubiquitous computing research, where adaptation is based on the current context describing the user and his equipment. Although there
are many projects that claim to be context sensitive, a general
architecture for context-sensitive applications has not yet been
introduced. This paper closes this gap and describes an
architecture that has been developed for assisting handicapped
tourists during their holidays in the Thuringian Forest.
However, this architecture can easily be generalized for any
context-sensitive telecommunication application, because it
gives a well-defined and flexible model for context information
and defines several processes to gather, transfer, process and
apply context information. |
|
Title: |
A DISTRIBUTED ALGORITHM FOR MINING FUZZY
ASSOCIATION RULES |
Author(s): |
George Stephanides, Mihai Gabroveanu, Mirel
Cosulschi and Nicolae Constantinescu |
Abstract: |
Data mining, also known as knowledge discovery in databases, is the process of discovering potentially useful, hidden knowledge or relations among data in large databases. An important topic in data mining research is the discovery of association rules. Nowadays, the majority of databases are distributed. In this paper, an algorithm for mining fuzzy association rules from such distributed databases is presented. This algorithm is inspired by the DMA (Distributed Mining of Association rules) algorithm for mining Boolean association rules. |
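For readers unfamiliar with fuzzy association rules, a small Python sketch of the distributed fuzzy-support computation that such an algorithm builds on (not the DMA-inspired algorithm itself): each site sums the minimum membership degree of the itemset over its own transactions, and the partial counts are then combined. The item names and membership values are invented.

# Illustrative sketch of distributed fuzzy support counting.
from typing import Dict, List

Transaction = Dict[str, float]  # fuzzy item -> membership degree in [0, 1]

def local_fuzzy_support(itemset: List[str], db: List[Transaction]) -> float:
    """Sum of minimum membership degrees of the itemset over one site's transactions."""
    return sum(min(t.get(item, 0.0) for item in itemset) for t in db)

def global_fuzzy_support(itemset: List[str], sites: List[List[Transaction]]) -> float:
    """Combine the per-site partial counts into a global (relative) fuzzy support."""
    total_tx = sum(len(db) for db in sites)
    total_sup = sum(local_fuzzy_support(itemset, db) for db in sites)
    return total_sup / total_tx if total_tx else 0.0

if __name__ == "__main__":
    site_a = [{"age_young": 0.8, "income_high": 0.3}, {"age_young": 0.2}]
    site_b = [{"age_young": 0.9, "income_high": 0.7}]
    print(global_fuzzy_support(["age_young", "income_high"], [site_a, site_b]))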
|
Title: |
AN AUTOMATIC GENERATION METHOD OF DIFFERENTIAL
XSLT STYLESHEET FROM TWO XML DOCUMENTS |
Author(s): |
Takeshi Kato, Norihiro Ishikawa and Norihiro
Ishikawa |
Abstract: |
We propose a differential XSLT stylesheet generation method for arbitrary pairs of XML contents. The revised XML document can be obtained by applying the XSLT stylesheet containing the differential data to the original XML document. Compared with sending the whole revised XML document, the original XML document can thus be updated by sending less information: the differential data. This paper introduces a difference detection algorithm based on the DOM tree and a difference representation method that permits the expression of difference information. We also discuss a new XSLT function for the proposed method. We also introduce prototype software implemented based on the proposed method and evaluation results that show its effectiveness. An experiment shows that the proposed method is suitable for updating XML contents, especially for web services over costly mobile networks. |
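A rough Python sketch of DOM-tree difference detection in general, not the paper's algorithm: two parsed documents are walked in parallel and differing nodes are reported, which is the kind of information a differential XSLT stylesheet would carry. The paths and change categories here are simplifications.

# Illustrative sketch: naive parallel walk of two XML trees reporting differences.
import xml.etree.ElementTree as ET

def diff(old: ET.Element, new: ET.Element, path: str = ""):
    """Yield (path, kind) tuples for nodes whose tag, text or child count changed."""
    here = f"{path}/{old.tag}"
    if old.tag != new.tag:
        yield here, "tag-changed"
    if (old.text or "").strip() != (new.text or "").strip():
        yield here, "text-changed"
    for o_child, n_child in zip(list(old), list(new)):
        yield from diff(o_child, n_child, here)
    if len(old) != len(new):
        yield here, "children-added-or-removed"

if __name__ == "__main__":
    a = ET.fromstring("<doc><title>v1</title><body>hello</body></doc>")
    b = ET.fromstring("<doc><title>v2</title><body>hello</body></doc>")
    print(list(diff(a, b)))  # [('/doc/title', 'text-changed')]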
|
Title: |
ON GEOSPATIAL AGENTS |
Author(s): |
Merik Meriste, Tõnis Kelder, Jüri Helekivi,
Andres Marandi and Leo Motus |
Abstract: |
As access to spatial and real-time data improves,
the need for appropriate software tools has become prevalent. To
develop geospatial applications in this context requires an
approach to software architecture that helps developers evolve
their solutions in flexible ways. Two concepts are today
considered reasonable here – web services and agents. This paper
presents generic geospatial agents in a prototype of agent
development environment KRATT. Pilot applications are described
and experience discussed. |
|
Title: |
PIGGYBACK META-DATA PROPAGATION IN DISTRIBUTED
HASH TABLES |
Author(s): |
Erik Buchmann, Sven Apel and Gunter Saake |
Abstract: |
Distributed Hashtables (DHT) are intended to provide Internet-scale data management. Following the peer-to-peer paradigm, DHT consist of independent peers and operate without central coordinators. Consequently, global knowledge is not available, and any information has to be exchanged by local interactions between the peers. Besides data management operations, a lot of meta-data has to be exchanged between the nodes, e.g., status updates, feedback for reputation management or application-specific information. Because of the large scale of the DHT, it would be expensive to disseminate meta-data by dedicated messages. In this article we investigate a lazy dissemination protocol that piggybacks attachments onto messages the peers send out anyhow. We present a software engineering approach based on mixin layers and aspect-oriented programming to cope with widely differing application-specific requirements. The applicability of our protocol is confirmed by means of experiments with a CAN implementation. |
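An illustrative sketch of piggybacking as such, not of the mixin/aspect-based implementation described in the paper: queued meta-data items are attached to the next regular DHT message instead of being sent in dedicated messages. The message structure and the attachment limit are assumptions.

# Illustrative sketch: attach queued meta-data to outgoing DHT messages.
from collections import deque

class PiggybackSender:
    def __init__(self, max_attachments: int = 4):
        self.pending = deque()              # queued meta-data items
        self.max_attachments = max_attachments

    def queue_metadata(self, item: dict):
        self.pending.append(item)

    def send(self, peer: str, payload: dict) -> dict:
        """Build an outgoing message, attaching as much queued meta-data as allowed."""
        attachments = [self.pending.popleft()
                       for _ in range(min(self.max_attachments, len(self.pending)))]
        return {"to": peer, "payload": payload, "meta": attachments}

if __name__ == "__main__":
    node = PiggybackSender()
    node.queue_metadata({"type": "status", "load": 0.7})
    print(node.send("peer-42", {"op": "get", "key": "k1"}))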
|
Title: |
A NEW VISION OF CONTROL FOR A SMART HOME |
Author(s): |
Pavlo Krasovsky and Jochen Seitz |
Abstract: |
Intelligent systems that provide integrated control for many functions, such as lighting, safeguarding, air conditioning, heating, housekeeping equipment and maintenance of electronics, are close at hand for the mass market. The main factors driving the increased demand for intelligent systems are reduction of price, reputation of products, and technical improvements. There are many corresponding systems, control equipment and services on the market now. The main goal is the complete integration of all functions, which demands a high level of interoperability between equipment and subsystems. The most important and advanced stage of development is remote control via the Internet or telephone. In the context of our project we developed a new concept of interaction between server-side hardware and end-user software. The main purpose is to develop a control system for the smart home in terms of communication and automation modules, first of all over a wireless range. A communication interface should allow changing properties of the end-user software without recompilation. All necessary changes should happen in the server-side software with the help of configuration files, which can be edited with any text editor. The client software has to have full control of the automation equipment in real time. |
|
Title: |
USING RELEVANT SETS FOR OPTIMIZING XML INDEXES |
Author(s): |
Paz Biber and Ehud Gudes |
Abstract: |
Local bisimilarity has been proposed as an approximate structural summary for XML and other semi-structured databases. Approximate structural summaries, such as the A(k)-Index and D(k)-Index, reduce the index's size (and therefore reduce query evaluation time) by compromising on long path queries. We introduce the A(k)-Simplified and the A(k)-Relevant, approximate structural summaries for graph documents in general, and for XML in particular. Like the A(k)-Index and D(k)-Index, our indexes are based on local bisimilarity; however, unlike the previous indexes, they support the removal of non-relevant nodes. We also describe a way to eliminate false drops that might occur due to node removal. Our experiments show that A(k)-Simplified and A(k)-Relevant are much smaller than the A(k)-Index, and give accurate results with better performance for short relevant path queries. |
|
Title: |
RESIDENTIAL GATEWAY FOR THE INTELLIGENT BUILDING:
A DESIGN BASED ON INTEGRATION, SERVICE AND SECURITY PERSPECTIVE |
Author(s): |
Budi Erixson and Jochen Seitz |
Abstract: |
In this paper we present the architecture of a residential gateway, designed with OSGi (Open Service Gateway Initiative) coordinating with LDAP (Lightweight Directory Access Protocol) in order to integrate and connect the various home networks and appliances with the Internet securely. The presented architecture is applied in the intelligent building project LISTIG (LAN-integrated control system for intelligent building technique), a cooperation project between Technische Universität Ilmenau, Desotron (Sömmerda), the University of Applied Sciences Jena and HFWK (Hörmann Funkwerk Kolleda GmbH) in Germany. This project is still in progress, working towards the full integration of several home networking technologies, protocols and services. |
|
Title: |
SEAMLESS AND SECURE AUTHENTICATION FOR GRID
PORTALS |
Author(s): |
Jean-Claude Côte, Mohamed Ahmed, Gabriel
Mateescu, Roger Impey and Darcy Quesnel |
Abstract: |
Grid portals typically store user grid
credentials in a credential repository. Credential repositories
allow users to access Grid portals from any machine having a Web
browser, but their usage requires several authentication steps.
Current portals require users to explicitly go through these
steps, thereby hindering their usability. In this paper we
present intuitive and easy to use tools to manage certificates.
We also describe the integration of Grid Security Infrastructure
authentication into a Java-based SSH terminal tool. Based on
these tools, we build an innovative portal authentication
mechanism that enables transparent delegation of credentials
between clients, grid portal and the credential repository. |
|
Title: |
DEVELOPING A WEB CACHING ARCHITECTURE WITH
CONFIGURABLE CONSISTENCY: A PROPOSAL |
Author(s): |
Francisco J. Torres-Rojas, Esteban Meneses and
Alexander Carballo |
Abstract: |
In recent years, Web caching has been considered one of the key areas for improving web usage efficiency. However, caching objects from the web raises many considerations about the validity of the cache. Ideally, it would be valuable to have a consistent cache, where no invalid relations between objects are held. Several alternatives have been offered to keep consistency in the web cache, each one being better in different situations and for diverse requirements. Usually, web caches implement just one strategy for maintaining consistency, sometimes giving bad results if circumstances are not appropriate for that strategy. Given that, a web cache whose consistency policy can be adapted to different situations can offer good results over a long execution. |
|
Title: |
DYNAMIC AND DECENTRALIZED SERVICE COMPOSITION WITH CONTEXTUAL ASPECT-SENSITIVE SERVICES |
Author(s): |
Thomas Cottenier and Tzilla Elrad |
Abstract: |
This paper introduces a new technique to
dynamically compose Web Services in a decentralized manner. Many
of the shortcomings of current Web Service composition
mechanisms stem from the difficulty of defining, modularizing
and managing service behaviour that is dependent on the context
of service invocation. Contextual Aspect-Sensitive Services
(CASS) enables crosscutting and context-dependent behaviour to
be factored out of the Service implementations and modularized
into separate units of encapsulation that are exposed as Web
Services. Service orchestrations can then be defined in a much
more flexible way, as Services can be dynamically customized to
address context-dependent requirements. Moreover, CASS does not
require a centralized orchestration engine to coordinate the
message exchanges. The coordination logic is woven directly into
the composed services. The CASS specification language offers a
powerful alternative to static and centralized business process
specification languages such as BPEL4WS. |
|
Title: |
XML-BASED EVALUATION OF SYNTHESIZED QUERIES |
Author(s): |
Ron G. McFadyen, Yangjun Chen and Fung-Yee Chan |
Abstract: |
XML repositories are a common means for storing
documents that are available through Web technologies. As the
use of XML increases, there is a need to integrate XML
repositories with other data sources to supply XML-oriented
applications. In this paper, we examine documents that express
business rules in XML format, and where the triggering and
instantiation of rules requires execution of database queries.
In this way, an inference process is governed by an XML document
tree that controls the synthesis and evaluation of database
queries. |
|
Title: |
UDDI ACCESS CONTROL FOR THE EXTENDED ENTERPRISE |
Author(s): |
Robert Steele and Juan Dai |
Abstract: |
Web Services are designed to provide easier B2B
integration among enterprises. UDDI defines a standard way for
businesses to list their services and discover each other on the
Internet. Due to security concerns organizations prefer to build
their own private UDDI registries in their corporate network,
which are only accessible by invited business partners. Since an organization may want the right business partners to see only the service information they are entitled to, access control mechanisms inside the private registry are required.
Hence we propose a role based access control model in private
UDDI registries to help achieve information confidentiality
inside corporate registries. Based on XACML, the model exploits
XML’s own ability to build access control in the UDDI. |
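A minimal Python sketch of role-based visibility filtering over registry entries, purely to illustrate the access-control idea; the paper's XACML-based model is not reproduced, and the entry and role names below are invented.

# Illustrative sketch: each registry entry lists the roles allowed to see it,
# and a lookup returns only what the caller's role is entitled to discover.
SERVICES = [
    {"name": "OrderStatusService", "allowed_roles": {"supplier", "logistics"}},
    {"name": "PricingService",     "allowed_roles": {"supplier"}},
]

def visible_services(role: str):
    """Return only the registry entries the given role may discover."""
    return [s["name"] for s in SERVICES if role in s["allowed_roles"]]

if __name__ == "__main__":
    print(visible_services("logistics"))  # ['OrderStatusService']
    print(visible_services("supplier"))   # both services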
|
Title: |
RAPID PROTOTYPING OF MULTIMEDIA ANALYSIS SYSTEMS:
A NETWORKED HARDWARE/SOFTWARE SOLUTION |
Author(s): |
Fons de Lange and Jan Nesvadba |
Abstract: |
This paper describes a hardware/software
framework and approach for fast integration and testing of
complex real-time multimedia analysis algorithms. It enables the
rapid assessment of combinations of multimedia analysis
algorithms, in order to determine their usefulness in future
consumer storage products. The framework described here consists
of a set of networked personal computers, running a variety of
multimedia analysis algorithms and a multi-media database. The
database stores both multimedia content and metadata – as
generated by multimedia content analysis algorithms – and
maintains links between the two. The multimedia (meta)database
is crucial in enabling applications to offer advanced content
navigation and searching capabilities to the end-user. The full
hardware/software solution functions as a test-bed for new,
advanced content analysis algorithms; new algorithms are easily
plugged-in into any of the networked PCs, while outdated
algorithms are simply removed. Once a selected consumer system
configuration has passed important user-tests, a more dedicated
embedded consumer product implementation is derived in a
straightforward way from the framework. |
|
Title: |
XPACK: A HIGH-PERFORMANCE WEB DOCUMENT ENCODING |
Author(s): |
Daniel Rocco, James Caverlee and Ling Liu |
Abstract: |
XML is an increasingly popular data storage and
exchange format whose popularity can be attributed to its
self-describing syntax, acceptance as a data transmission and
archival standard, strong internationalization support, and a
plethora of supporting tools and technologies. However, XML's
verbose, repetitive, text-oriented document specification syntax
is a liability for many emerging applications such as mobile
computing and distributed document dissemination. This paper
presents XPack, an efficient XML document compression system
that exploits information inherent in the document structure to
enhance compression quality. Additionally, the utilization of
XML structure features in XPack's design should provide valuable
support for structure-aware queries over compressed documents.
Taken together, the techniques employed in the XPack compression
scheme provide a foundation for efficiently storing,
transmitting, and operating over Web documents. Initial
experimental results demonstrate that XPack can reduce the
storage requirements for Web documents by up to 20% over
previous XML compression techniques. More significantly, XPack
can simultaneously support operations over the documents,
providing up to two orders of magnitude performance improvement
for certain document operations when compared to equivalent
operations on unencoded XML documents. |
|
Title: |
SFS-KNOPPIX WHICH BOOTS FROM INTERNET |
Author(s): |
Kuniyasu Suzaki, Kengo Iijima, Toshiki Yagi,
Hideyuki Tan and Kazuhiro Goto |
Abstract: |
KNOPPIX is a bootable CD with a collection of GNU/Linux software. KNOPPIX is very convenient, but it requires downloading a 700 MB ISO image and burning a CD-ROM whenever it is renewed. In order to solve this problem we have made SFS-KNOPPIX, which boots from the Internet with SFS (Self-certifying File System). SFS-KNOPPIX requires only a 20 MB boot loader with a Linux kernel and miniroot; the root file system is obtained from the Internet with SFS at boot time. This makes it possible to change the root file system and easy to try new versions of KNOPPIX. SFS-KNOPPIX is also customized for the Linux kernel emulators “UserModeLinux” and “coLinux”, which enable us to use KNOPPIX as an application on Linux and Windows. In this paper we describe the details of SFS-KNOPPIX and its performance. |
|
Title: |
A FRAMEWORK FOR IDENTIFYING ARCHITECTURAL
PATTERNS FOR E-BUSINESS APPLICATIONS |
Author(s): |
Feras T. Dabous, Fethi A. Rabhi and Tariq
Al-Naeem |
Abstract: |
The success of today's enterprises is critically dependent on their ability to automate the way they conduct business with customers and other enterprises by means of e-business applications. Legacy systems are valuable assets that must play an important role in this process. Selecting the most appropriate architectural design for an e-business application has a critical impact on the participating enterprises. This paper discusses an initiative towards a systematic framework that assists in identifying a range of possible alternative architectural designs for such applications and shows how some of these alternatives can evolve into formal architectural patterns or anti-patterns. This paper focuses on a category of e-business applications with special requirements and assumptions that are often present in a few specific domains. The concepts presented in this paper are demonstrated using a real-life case study in the domain of e-finance, and in particular capital markets trading. |
|
Title: |
A COSMIC-FFP APPROACH TO ESTIMATE WEB APPLICATION
DEVELOPMENT EFFORT |
Author(s): |
Gennaro Costagliola, Sergio Di Martino, Filomena
Ferrucci, Carmine Gravino, Genoveffa Tortora and Giuliana
Vitiello |
Abstract: |
Web applications are constantly increasing both
in complexity and number of offered features. In this paper we
address the problem of estimating the effort required to develop
dynamic web applications, which represents an emerging issue in
the field of web engineering. In particular, we formalize a
method which is based on the main ideas underlying COSMIC-FFP
(Cosmic Full Function Point), which is an adaptation of the
Function Point method, especially devised to tackle real-time
and embedded applications. The method is focused on counting
data movements and turns out to be suitable for capturing the
specific aspects of dynamic web applications which are
characterized by data movements to and from web servers. The
method can be applied to analysis and design documentation in
order to provide an early estimation. We also describe the
empirical analysis carried out to verify the usefulness of the
method for predicting web application development effort. |
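As background on the counting principle the method builds on, a small Python sketch of COSMIC-style functional sizing: each identified data movement (Entry, Exit, Read or Write) contributes one unit of functional size. The example processes and their movement lists are invented, not taken from the paper's case study.

# Illustrative sketch of COSMIC-style counting: one CFP per data movement.
MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(processes: dict) -> int:
    """Total functional size in CFP: one unit per identified data movement."""
    total = 0
    for name, movements in processes.items():
        assert all(m in MOVEMENT_TYPES for m in movements), name
        total += len(movements)
    return total

if __name__ == "__main__":
    web_app = {
        "show_product_page": ["Entry", "Read", "Exit"],          # request, DB read, page out
        "submit_order":      ["Entry", "Read", "Write", "Exit"],
    }
    print(cosmic_size(web_app), "CFP")  # 7 CFP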
|
Title: |
SERVICE ORIENTED GRID RESOURCE MODELING AND
MANAGEMENT |
Author(s): |
Youcef Derbal |
Abstract: |
Computational grids (CGs) are large scale
networks of geographically distributed aggregates of resource
clusters that may be contributed by distinct providers. The
exploitation of these resources is enabled by a collection of
decision-making processes; including resource management and
discovery, resource state dissemination, and job scheduling.
Traditionally, these mechanisms rely on a physical view of the
grid resource model. This entails the need for complex
multi-dimensional search strategies and a considerable level of
resource state information exchange between the grid management
domains. Consequently, it has been difficult to achieve the
desirable performance properties of speed, robustness and
scalability required for the management of CGs. In this paper we
argue that with the adoption of the Service Oriented
Architecture (SOA), a logical service-oriented view of the
resource model provides the necessary level of abstraction to
express the grid capacity to handle the load on hosted services.
In this respect, we propose a Service Oriented Model (SOM) that
relies on the quantification of the aggregated resource
behaviour using a defined service capacity unit that we call
servslot. The paper details the development of SOM and
highlights the pertinent issues that arise from this new
approach. A preliminary exploration of SOM integration as part
of a nominal grid architectural framework is provided along with
directions for future works. |
|
Title: |
THE DESIGN OF THE MIRAGE SPATIAL WIKI |
Author(s): |
Nels Anderson, Adam Bender, Carl Hartung, Gaurav
Kulkarni, Anuradha Kumar, Isaac Sanders, Dirk Grunwald and Bruce
Sanders |
Abstract: |
Location based services can simplify information
access but despite the numerous efforts and prototypes that
attempt to provide location based services, there are very few
such systems in wide spread use. There are three common problems
that face the designers of location based systems - the basic
location technology, the complexity of establishing a service
for a particular location and the complexity of maintaining and
presenting information to users of the systems. This paper
outlines the software architecture of and experience with the
Mirage Spatial Wiki. We describe the design decisions that have
led to a system that is easy to deploy and use. |
|
Title: |
QOS-AWARE MULTIMEDIA WEB SERVICES ARCHITECTURE |
Author(s): |
Ikbal Taleb, Abdelhakim Hafid and Mohamed Adel
Serhani |
Abstract: |
Due to the increasing growth of Web Services, Quality of Service (QoS) is becoming a key issue in the web services community. Providers and clients need to use QoS-aware architectures to get/ensure end-to-end QoS. The QoS delivered to clients is highly affected by the performance of the web service itself, by the hosting platform (e.g., Application Server) and by the underlying network (e.g., the Internet). Thus, even if web services together with the hosting platform provide acceptable QoS, they also require sufficient available network resources to deliver end-to-end QoS. In this paper, we propose a solution approach to the problem of end-to-end QoS support for web services. Our approach relies on the utilization of a web service, called the Network Resources Manager (NRM), to take care of QoS support in the network connecting the client location and the matching web service location. NRM either relies on the network QoS capabilities (e.g., Integrated Services, Differentiated Services, Multiprotocol Label Switching), if any, or uses a measurement-based scheme to estimate the quality that can be delivered between the two locations. One of the key differentiators of our solution is that it does not require any changes to the infrastructure currently used by users and web service providers. |
|
Title: |
DESIGN, IMPLEMENTATION AND TESTING OF MOBILE
AGENT PROTECTION MECHANISM FOR MANETS |
Author(s): |
Khaled E. A. Negm |
Abstract: |
A caching proxy server acts as an invisible intermediary between browsing clients and Internet servers. In the case of a Web cache, cacheable objects are always separate and are always read in their entirety, with no pre-fetching. In the present study we present a novel design for a system to remotely control an array of proxy servers. The administrator can monitor and configure the caching array system from any normal client computational facility. The current system emphasizes the fact that the administrator can change the configuration of the system without needing to restart the system or alter the clients' statuses. This is achieved by implementing the concurrency control system under the system hierarchy. Preliminary local testing of the system shows promising results for implementation at the scale of large enterprise systems. |
|
Area 2 - Web Interfaces and
Applications
|
Title: |
CHARACTERISTICS OF THE BOOLEAN WEB SEARCH QUERY:
ESTIMATING SUCCESS FROM CHARACTERISTICS |
Author(s): |
Sunanda Patro and Vishv Malhotra |
Abstract: |
Popular web search engines use Boolean queries as
their main interface for users to search their information
needs. The paper presents results based on a user survey
employing volunteer web searchers to determine the effectiveness
of the Boolean queries in meeting the information needs. A
metric for measuring the quality of a web search query is also
presented. This enables us to relate attributes of the search
session and the Boolean query with its success. Certain easily observable characteristics of a good web search query are identified. |
|
Title: |
WEB MINING FOR AN AMHARIC - ENGLISH BILINGUAL
CORPUS |
Author(s): |
Atelach Alemu Argaw and Lars Asker |
Abstract: |
We present recent work aimed at constructing a bilingual corpus consisting of comparable Amharic and English news texts. The Amharic and English texts were collected from an Ethiopian news agency that publishes daily news in Amharic and English through its web page. The Amharic texts are represented using Ethiopic script and archived according to the Ethiopian calendar. The overlap between the corresponding Amharic and English news texts in the archive is comparatively small; only approximately one article out of ten has a corresponding translated version. Thus a major part of the work has been to identify the subset of matching news texts in the archive, transliterate the Amharic texts into an ASCII representation, and align them with their respective corresponding English versions. In doing so, we utilised a number of available software and data sources that were (mainly) found on the Internet. Amharic is a language for which very few computational linguistic tools or corpora (such as electronic lexica, part-of-speech taggers, parsers or tree-banks) exist. A challenge has therefore been to show that it is possible to create a comparable corpus even in the absence of these resources. We used fuzzy string matching between words in the English and Amharic titles as a way to determine how likely it is that two news items refer to the same event. In order to restrict the matching algorithm further, we only compared titles of news items that were published on the same date and at the same place. We present an experimental evaluation of the algorithm, based on data from one year, and show that fuzzy string matching of news titles can be sufficient to align Amharic and English news texts with relatively high precision despite the obvious differences between the two languages. |
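A small Python sketch of the fuzzy title-matching step in isolation, using difflib's similarity ratio as one possible fuzzy string measure; the threshold and the transliterated example titles are assumptions, not the paper's data.

# Illustrative sketch: fuzzy similarity between a transliterated Amharic title
# and an English title as a signal that two news items report the same event.
from difflib import SequenceMatcher

def title_similarity(amharic_translit: str, english: str) -> float:
    return SequenceMatcher(None, amharic_translit.lower(), english.lower()).ratio()

def likely_same_event(t1: str, t2: str, threshold: float = 0.6) -> bool:
    return title_similarity(t1, t2) >= threshold

if __name__ == "__main__":
    print(likely_same_event("ethiopia japan investment forum",
                            "Ethio-Japan investment forum held"))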
|
Title: |
XML-BASED RDF QUERY LANGUAGE (XRQL) AND ITS
IMPLEMENTATION |
Author(s): |
Norihiro Ishikawa, Takeshi Kato, Hiromitsu
Sumino, Johan Hjelm and Kazuhiro Miyatsu |
Abstract: |
Resource Description Framework (RDF) is a
language which represents information about resources. In order
to search RDF resource descriptions, several RDF query languages
such as RQL and SquishQL have been proposed. However, these RDF
query languages do not use XML syntax and they have limited
functionality. xRQL is proposed to solve these issues by
defining an XML-based RDF query language with enhanced
manipulations of the RDF metadata. xRQL is a logic language
relying on a functional approach. It consists of an operator declaration, an RDF data description and a result description. Based on the RDF graph data model, xRQL defines a graphical path
expression with variables, which is similar to GOQL for
describing RDF data. It also adopts the object-oriented model
for creation, modification and deletion operations of RDF data.
Users can define their favorite XML-compliant result
descriptions by themselves, which is similar to XQuery. In
addition, a set of RDF operations for RDF schema is defined to
manipulate the class and property hierarchies in RDF schema.
xRQL has been implemented as an RDF query language over a native
RDF database management system. This paper also briefly
describes the evaluation and implementation status. |
|
Title: |
SEMANTIC DISCOVERY OPTIMIZATION: MATCHING
COMPOSED SEMANTIC WEB SERVICES AT PUBLISHING TIME |
Author(s): |
Andreas Friesen and Michael Altenhofen |
Abstract: |
This paper describes an algorithm optimizing the
discovery process for composed semantic web services. The
algorithm can be used to improve discovery of appropriate
component services at invocation time. It performs semantic
matchmaking of goals of a composed service to appropriate
component services at publishing time. The semantic discovery
problem at invocation time is therefore reduced to a selection
problem from a list of available (already discovered) component
services matching a goal of the composed service. |
|
Title: |
EFFICIENT RSS FEED GENERATION FROM HTML PAGES |
Author(s): |
Jun Wang and Kanji Uchino |
Abstract: |
Although RSS offers a promising solution to track and personalize the flow of new Web information, many current Web sites are not yet enabled with RSS feeds. The availability of convenient approaches to “RSSify” existing suitable Web contents has become a pressing necessity. This paper presents EHTML2RSS, an efficient system that translates semi-structured HTML pages into structured RSS feeds, and which proposes different approaches based on various features of HTML pages. For information items with a release time, the system provides an automatic approach based on time pattern discovery. Another automatic approach, based on repeated tag pattern discovery, is applied to convert regular pages without a time pattern. A semi-automatic approach based on labelling is available to process irregular pages or specific sections of Web pages according to the user's requirements. Experimental results show that our system is efficient and effective in facilitating RSS feed generation. |
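To illustrate the time-pattern idea only (not the EHTML2RSS system), a Python sketch that treats lines of an HTML list carrying a recognizable date as news items and emits minimal RSS item elements; the regular expression and the assumed HTML shape are simplifications.

# Illustrative sketch: turn date-bearing HTML list entries into RSS <item>s.
import re

ITEM_RE = re.compile(
    r'(?P<date>\d{4}-\d{2}-\d{2}).*?<a href="(?P<url>[^"]+)">(?P<title>[^<]+)</a>',
    re.DOTALL)

def html_to_rss_items(html: str) -> str:
    items = []
    for m in ITEM_RE.finditer(html):
        items.append(
            "  <item>\n"
            f"    <title>{m['title']}</title>\n"
            f"    <link>{m['url']}</link>\n"
            f"    <pubDate>{m['date']}</pubDate>\n"
            "  </item>")
    return "<channel>\n" + "\n".join(items) + "\n</channel>"

if __name__ == "__main__":
    page = '<li>2005-05-26 <a href="/news/1.html">New release</a></li>'
    print(html_to_rss_items(page))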
|
Title: |
ERGOMANAGER: A UIMS FOR MONITORING AND REVISING
USER INTERFACES FOR WEB SITES |
Author(s): |
Walter de Abreu Cybis, Dominique L. Scapin and
Marcelo Morandini |
Abstract: |
This paper describes the results of studies
dedicated to the specification of ErgoManager, a UIMS (User
Interface Management System) specifically intended to support
the user interface revision phase over changeable Web sites
running B2B, ERP or Intranets transactions. This UIMS contains
two basic components: ErgoMonitor and ErgoCoIn. ErgoMonitor
applies task-oriented analysis and usability oriented processing
on interaction traces stored in log files as a way to identify
“average” usability levels that have been occurring during the
accomplishment of transactional tasks on web sites. ErgoCoIn is
a checklist based CSEE (Computer Supported Ergonomic Evaluation)
tool that features automatic services to inquire context of use
aspects and to recognize web page components as a way to conduct
inspections of only the context pertinent aspects of a Web page.
By integrating these tools, ErgoManager aims to support quality
assurance strategies over the revision phase of web sites
lifecycle by confronting, in an iterative way, usability
quantitative metrics and qualitative aspects of user interfaces. |
|
Title: |
MODELING PREFERENCES ONLINE |
Author(s): |
Maria Cleci Martins and Rosina Weber |
Abstract: |
The search for an online product that matches
e-shoppers’ needs and preferences can be frustrating and
time-consuming. Browsing large lists arranged in tree-like
structures demands focused attention from e-shoppers. Keyword
search often results in either too many useless items (low
precision) or few or none useful ones (low recall). This can
cause potential buyers to seek another seller or choose to go in
person to a store. This paper introduces the SPOT (Stated
Preference Ontology Targeted) methodology to model e-shoppers’
decision-making process and use it to refine a search and show
products and services that meet their preferences. SPOT combines probabilistic discrete choice theory and the theory of stated preferences with knowledge modeling (i.e., ontologies). The
probabilistic theory on discrete choices coupled with
e-shoppers’ stated preferences data allow us to unveil
parameters e-shoppers would employ to reach a decision of choice
related to a given product or service. Those parameters are used
to rebuild the decision process and evaluate alternatives to
select candidate products that are more likely to match
e-shoppers’ choices. We use a synthetic example to demonstrate
how our approach differs from currently used methods for e-commerce. |
|
Title: |
BUILDING E-COMMERCE WEB APPLICATIONS: AGENT- AND
ONTOLOGY-BASED INTERFACE ADAPTIVITY |
Author(s): |
Oscar Martinez, Federico Botella, Antonio
Fernández-Caballero and Pascual González |
Abstract: |
E-commerce Web-based applications designed to facilitate data-exchange collaboration are enjoying growing popularity. In the next few years, business companies will want their web resources linked to ontological content, because of the many powerful tools that will be available for using it by potential customers. Thus, product information will be exchanged between applications, allowing computer programs to collect and process web content, and to exchange information freely with each other. In this paper, a few pointers to this emerging area are given, and we then go on to show how the ontology languages of the semantic web can lead directly to more powerful agent-based approaches to using services offered on the web. As a result, an e-commerce architecture is outlined as an agent-based system to retrieve information products. In this framework, an ontology representing the fashion clothing domain used by potential consumers is also introduced, expressed in RDF-S (Resource Description Framework Schema). |
|
Title: |
PERFORMANCE ANALYSIS OF WEB SERVERS: APACHE AND
MICROSOFT IIS |
Author(s): |
Andrew J. Kornecki, Nick Brixius and Ozeas Vieira
Santana Filho |
Abstract: |
The Internet has become the leading means for people to get information and for organizations to interact with each other. Each year the number of Internet users increases.
Organizations must be aware of the performance of their web
servers to be able to accommodate this growing demand. Networks,
connections, hardware, web servers and operating systems each
have a role to play in this market, but the web server could be
a bottleneck for the entire system. The goal of this research
paper is to discuss the issues related to the performance
analysis of web servers. The focus is on measurement technique
as a solution to performance analysis. Also, the paper describes
a practical method to compare two web servers. |
|
Title: |
A DISTRIBUTED INFORMATION FILTERING: STAKES AND
SOLUTION FOR SATELLITE BROADCASTING |
Author(s): |
Sylvain Castagnos, Anne Boyer and François
Charpillet |
Abstract: |
This paper is a preliminary report presenting information filtering solutions designed within the scope of a collaboration between our laboratory and the satellite broadcasting company SES ASTRA. The latter has finalized a system, sponsored by advertisement, supplying users with free high-bandwidth access to hundreds of web sites. This project aims at highlighting the benefits of collaborative filtering by including such a module in the architecture of their product. The term collaborative filtering (Goldberg et al., 2000) denotes techniques using the known preferences of a group of users to predict the unknown preferences of a new user. Our problem consisted in finding a way to scale to hundreds of thousands of people, while preserving the anonymity of users (personal data remain on the client side). Thus, we use an existing clustering method, which we have improved so that it is distributed between the client and server sides. Nevertheless, in the absence of numerical votes, for marketing reasons, we have chosen to make an innovative combination of this decentralized collaborative filtering method with a user profiling technique. We were also subject to constraints such as a short response time on the client side, in order to be compliant with the ASTRA architecture. |
|
Title: |
WHAT HAPPENS IF WE SWITCH THE DEFAULT LANGUAGE OF
A WEBSITE? |
Author(s): |
Te Taka Keegan and Sally Jo Cunningham |
Abstract: |
In this paper we investigate the effect of the
default interface language setting on a bilingual website. Log
file analysis is undertaken to determine usage patterns of the
Niupepa digital library (a collection of historic Māori language
newspapers) when the default interface language is switched
between Māori and English in alternate weeks. Activity is
grouped into active user sessions, which are further analysed to
determine methods of access and searching patterns. The results
clearly show that changing the default language of a website
will affect the ways in which users access information. |
|
Title: |
SUPPORTING AWARENESS IN ASYNCHRONOUS
COLLABORATIVE ENVIRONMENTS |
Author(s): |
Shang Gao, Dongbai Xue and Igor Hawryszkiewycz |
Abstract: |
One of the major challenges in asynchronous collaborative environments is to provide a sense of awareness of other users' actions. The amount of awareness needed varies with the specific roles users undertake during collaboration. While emphasizing the importance of roles, this paper discusses the awareness-role relationship and proposes a role-based approach to specifying awareness characteristics in asynchronous collaborative environments. An example implementation of a role-based awareness support system, LiveNet4, is also illustrated at the end of this paper. |
|
Title: |
A TAXONOMY OF PROGRAMMABLE HTTP PROXIES FOR
ADVANCED EDGE SERVICES |
Author(s): |
Delfina Malandrino and Vittorio Scarano |
Abstract: |
In this paper, we present the state of the art in
the field of programmability in HTTP proxies. In particular, we
first deal with programmability and show how it is a crucial
requirement to easily realize and assemble edge services that
can enhance the quality and the user perception of the
navigation into a crowded and confusing World Wide Web. Then, we
compare some of the most used HTTP proxies to provide an
analysis of their programmability and, finally, show some
evidence of successful edge services realized on top of existing
programmable HTTP proxy frameworks. |
|
Title: |
EVALUATION OF TEXT CLASSIFICATION ALGORITHMS FOR A WEB-BASED MARKET DATA WAREHOUSE |
Author(s): |
Carsten Felden and Peter Chamoni |
Abstract: |
Decision makers in enterprises cannot handle information flooding without serious problems. A market data information system (MAIS), which is the basis of a decision support system for German energy trading, uses search and filter components to provide decision-relevant information from Web documents to enterprises. The already implemented filter component, in the form of a Multilayer Perceptron, has to be benchmarked against different existing algorithms to enhance the classification of search results. An evaluation environment with appropriate algorithms and a metric is developed for this purpose. A set of test data is also provided, and a tool selection as well as the implementation of different text mining algorithms for classification took place. The benchmark results are shown in the paper. |
|
Title: |
BUILDING WEB APPLICATIONS WITH XQUERY -
INTEGRATING TECHNOLOGIES IN WEB DEVELOPMENT |
Author(s): |
Javier J. Gutiérrez, María J. Escalona, Manuel
Mejías and Jesús Torres |
Abstract: |
Today, a set of heterogeneous technologies is needed to implement every layer or element of a web application. These technologies must be combined and must work together, which implies the need for heterogeneous development teams with heterogeneous training and high costs in tools and training. This work shows how a combination of XML and XQuery could be a valid choice to unify the technologies used in web development. Thus, it is possible to decrease costs in tools and training by applying only one technology in web development. To justify why XML with XQuery is a valid technology to implement a whole system, this work first shows the main characteristics of XQuery relevant to web development. It then shows how to apply those characteristics in web development and how to implement every layer or component of a web application with XQuery. Finally, this work gives a brief overview of the open-source tools available to implement a web application developed with XQuery. |
|
Title: |
EFFICIENTLY LOCATING COLLECTIONS OF WEB PAGES TO
WRAP |
Author(s): |
Lorenzo Blanco, Valter Crescenzi and Paolo
Merialdo |
Abstract: |
Many large web sites contain highly valuable
information. Their pages are dynamically generated by scripts
which retrieve data from a back-end database and embed them into
HTML templates. Based on this observation several techniques
have been developed to automatically extract data from a set of
structurally homogeneous pages. These tools represent a step
towards the automatic extraction of data from large web sites,
but currently their input sample pages have to be manually
collected. To scale the data extraction process this task should
be automated as well. We present techniques for automatically gathering structurally similar pages from large web sites. We
have developed an algorithm that takes as input one sample page,
and crawls the site to find pages similar in structure to the
given page. The collected pages can feed an automatic wrapper
generator to extract data. Experiments conducted over real life
web sites gave us encouraging results. |
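One simple way to score structural similarity between pages, shown only as an illustration of the notion the crawler relies on (the paper's actual algorithm is not reproduced): each page is reduced to its set of root-to-node tag paths and two pages are compared with a Jaccard index. The 0.8 threshold is an assumption.

# Illustrative sketch: tag-path sets and a Jaccard-based structural similarity.
from html.parser import HTMLParser

class TagPathCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.paths = [], set()
    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        self.paths.add("/".join(self.stack))
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def tag_paths(html: str) -> set:
    collector = TagPathCollector()
    collector.feed(html)
    return collector.paths

def structurally_similar(a: str, b: str, threshold: float = 0.8) -> bool:
    pa, pb = tag_paths(a), tag_paths(b)
    return len(pa & pb) / max(1, len(pa | pb)) >= threshold

if __name__ == "__main__":
    p1 = "<html><body><div><h1>A</h1><p>x</p></div></body></html>"
    p2 = "<html><body><div><h1>B</h1><p>y</p></div></body></html>"
    print(structurally_similar(p1, p2))  # True: same template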
|
Title: |
AN ADAPTABLE MIDDLEWARE FOR PERSONALIZING WEB
APPLICATIONS |
Author(s): |
Zahi Jarir and Mohammed Erradi |
Abstract: |
Personalization is an important topic for the Web industry. It consists of providing the capabilities to adapt Web applications to users' requirements, such as defining preferences on the execution of the application, associating the provided application with a specific terminal, specifying or modifying QoS parameters, and so on. The contribution of this paper is to present a solution that ensures advanced Web application personalization by focusing on the middleware level rather than the application level. We provide an enhanced architecture to personalize Web applications using the EJB technology. An implementation using the JOnAS environment is presented. It has the advantage of adapting and/or reconfiguring a Web application's behavior at runtime according to the user's specific needs. |
|
Title: |
ON-THE-FLY ANNOTATION OF DYNAMIC WEB PAGES |
Author(s): |
Mamdouh Farouk, Samhaa R. El-Beltagy and Mahmoud
Rafea |
Abstract: |
The annotation of web pages is a critical task
for the success of the semantic web. While many tools exist to
facilitate the annotation of static web pages, annotation of
dynamically generated ones has not been sufficiently addressed.
This paper addresses the task of annotating web pages whose
dynamic content is derived from a database. The approach adopted
is based on annotating a database schema using public ontologies and then using this database annotation to generate annotations, on the fly, for dynamic web pages that access that database. This paper presents details of the adopted approach as well as a tool that supports it. |
|
Title: |
IDIOLECT-BASED IDENTITY DISCLOSURE AND AUTHORSHIP
ATTRIBUTION IN WEB-BASED SOCIAL SPACES |
Author(s): |
Natalie Ardet |
Abstract: |
In this paper, we inspect new possible methods of
Web surveillance combining web mining with sociolinguistic and
semiotic related knowledge of human discourse. We first give an
overview of telecommunication surveillance methods and systems,
with focus on the Internet, and we describe the legal issues
involved in Web or Internet communications investigations. We
put the emphasis on identity disclosure and anonymity or
pseudonymity undermining in open web spaces. Further, we give an
overview of new trends in Internet mediated communication, and
examine the virtual social networks they create. Finally, we
present the results of a new method using the semiotic features
of web documents for authorship attribution and identity
disclosure. |
|
Title: |
AUTOMATIC IDENTIFICATION OF SPECIFIC WEB
DOCUMENTS BY USING CENTROID TECHNIQUE |
Author(s): |
Udomsit Sukakanya and Kriengkrai Porkaew |
Abstract: |
In order to reduce the time needed to find specific information within the high volume of information on the Web, this paper proposes the implementation of an automatic identification of specific Web documents using a centroid technique. The initial training set in this experiment consists of 4113 Thai e-Commerce Web documents. After the training process, the system obtains a centroid e-Commerce vector. In order to evaluate the system, six test sets were taken under consideration; each test set has 100 Web pages, containing both known e-Commerce and non-e-Commerce Web pages. The average system performance is about 90%. |
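A generic Python sketch of the centroid technique, offered for illustration; the paper's Thai e-Commerce features and training data are not available here, so the vocabulary, training snippets and similarity threshold below are invented.

# Illustrative sketch: centroid of training term vectors plus cosine similarity.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def centroid(documents):
    total = Counter()
    for d in documents:
        total.update(vectorize(d))
    return Counter({t: f / len(documents) for t, f in total.items()})

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_ecommerce(page_text: str, center: Counter, threshold: float = 0.5) -> bool:
    return cosine(vectorize(page_text), center) >= threshold

if __name__ == "__main__":
    training = ["add to cart checkout price baht", "shopping cart buy price delivery"]
    center = centroid(training)
    print(is_ecommerce("cart price checkout for this product", center))  # True
    print(is_ecommerce("history of ancient temples", center))            # False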
|
Title: |
IMPLICIT INDICATORS FOR INTERESTING WEB PAGES |
Author(s): |
Hyoung-rae Kim and Philip K. Chan |
Abstract: |
A user’s interest in a web page can be estimated
by observing the user’s behavior unobtrusively (implicitly)
without asking the user directly (explicitly). Implicit methods
are naturally less accurate than explicit methods, but they do
not waste a user’s time or effort. Implicit indicators can also
be used to create models that change with a user’s interest over
time. Research has shown that a user’s behaviour is related to
his/her interest in a web page. We compare previous implicit
indicators and examine the time spent on a page in more detail
depending on whether a user is really looking at the monitor.
Our results indicate that the duration is related to a user's interest in a web page regardless of the user's attention to the web page. |
|
Title: |
EFFICIENT INFORMATION ACCESS FROM CONSTRAINT
WIRELESS TERMINALS - EXPLOITING PERSONALIZATION AND
LOCATION-BASED SERVICES |
Author(s): |
Hans Weghorn |
Abstract: |
Today, the success of data services used from
small mobile devices, like digital phones or PDAs, appears very
limited. Different reasons can be identified, which prevent the
average customer from broadly using wireless data services: first, the user has to deal with very uncomfortable devices in terms of UI ergonomics, and on the other hand, the costs for wireless data communication are extremely high. These
restrictions can be overcome by employing a system concept,
which is built up on two main components: A personalized display
software allows simplifying the information access on the
wireless terminal, while an intermediate agent residing on the
Internet takes care of mining the desired contents from the open
Web. In addition to the improved UI handling, this concept
offers a reduction of costs and an increase in access speed.
Real-world experiments with an information system on actual
train departures are reported for measuring and demonstrating
the benefit of the described system concept. |
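As a rough illustration of the two-component concept (the paper's actual agent and data sources are not reproduced here), the Python sketch below uses invented departure records and a made-up user profile: the Internet-side agent filters the mined content down to the few lines the personalized display software needs, so only a small payload crosses the costly wireless link.

    # Conceptual sketch only: the departure data, line names and filtering rule are
    # invented; the real system mines live timetable content from the open Web.
    departures = [                                        # what the Internet-side agent has mined
        {"line": "S1", "dest": "Stuttgart", "time": "14:05"},
        {"line": "S4", "dest": "Marbach",   "time": "14:09"},
        {"line": "S1", "dest": "Stuttgart", "time": "14:35"},
    ]

    user_profile = {"lines": {"S1"}, "max_results": 2}    # personalization for this user

    def agent_response(departures, profile):
        """Intermediate agent: reduce the mined data to the compact payload the terminal needs."""
        wanted = [d for d in departures if d["line"] in profile["lines"]]
        return "; ".join(f'{d["line"]} {d["time"]} -> {d["dest"]}'
                         for d in wanted[: profile["max_results"]])

    print(agent_response(departures, user_profile))       # compact text shown by the display software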
|
Title: |
AN INNOVATIVE TOOL TO EASILY GET USABLE WEB SITES |
Author(s): |
Cosimo Antonio Prete, Pierfrancesco Foglia and
Michele Zanda |
Abstract: |
This paper describes current methodologies for
developing usable web sites. We consider significant tools for
modelling web sites and pages, and then propose an innovative
approach to creating usable web sites rapidly and easily. Our
goal has been the inclusion of new methodologies in the web
application development process. |
|
Area 3 - Web Security
|
Title: |
A NEW MECHANISM FOR OS SECURITY: SELECTIVE
CHECKING OF SHARED LIBRARY CALLS FOR SECURITY |
Author(s): |
Dae-won Kim, Geun-tae Bae, Yang-woo Roh and
Dae-yeon Park |
Abstract: |
This paper presents a systematic solution to the
serious problem of GOT/PLT exploitation attacks. A large class
of security mechanisms has been defeated by those attacks. While
some security mechanisms are concerned with preventing GOT/PLT
exploitation attacks, they either remain incomplete against such
attacks or incur a considerable performance decline. We describe
the selective checking of shared library calls, called SCC. The
SCC dynamically relocates a program’s Global Offset Table (GOT)
and checks whether accesses via the Procedure Linkage Table
(PLT) are legal. The SCC is implemented by modifying only the
Linux dynamic loader; hence it is transparent to applications
and easily deployable. Experimental results show that the SCC is
effective in defeating GOT/PLT exploitation attacks and incurs
very low runtime overhead. |
|
Title: |
THE USE OF DATA MINING IN THE IMPLEMENTATION OF A
NETWORK INTRUSION DETECTION SYSTEM |
Author(s): |
John Sheppard, Joe Carthy and John Dunnion |
Abstract: |
This paper focuses on the domain of Network
Intrusion Detection Systems, an area where the goal is to detect
security violations by passively monitoring network traffic and
raising an alarm when an attack occurs. The problem, however, is
that new attacks are being deployed all the time. The system
described here has been developed using a range of data mining
techniques to automatically classify network traffic as normal
or intrusive. We evaluate decision trees and their performance
on the large data set used in the 1999 KDD Cup contest. |
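As a hedged illustration of the classification step (the abstract names decision trees and the 1999 KDD Cup data but not a toolkit), the sketch below trains a scikit-learn decision tree on a handful of invented connection records; the features and labels are stand-ins for the 41-feature KDD records.

    # Minimal sketch, not the authors' implementation: a decision tree separating
    # "normal" from "attack" connection records. The toy features below are assumptions.
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # toy records: [duration, src_bytes, dst_bytes, failed_logins]
    X = [[0, 181, 5450, 0], [2, 239, 486, 0], [0, 0, 0, 5],
         [0, 0, 0, 7], [1, 300, 1200, 0], [0, 0, 0, 9]]
    y = ["normal", "normal", "attack", "attack", "normal", "attack"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
    clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))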
|
Title: |
DESIGN, IMPLEMENTATION AND TESTING OF MOBILE
AGENT PROTECTION MECHANISM FOR MANETS |
Author(s): |
Khaled E. A. Negm |
Abstract: |
In the current research, we present an operational
framework and protection mechanism that facilitate a secure
environment to protect mobile agents against tampering. The
system depends on the presence of an authentication authority.
The advantage of the proposed system is that security measures
are an integral part of the design, so common security
retrofitting problems do not arise. This is achieved by using
the ElGamal encryption mechanism to protect the agent's
confidential content and any data collected by the agent from
the visited hosts, so that eavesdropping on the agent can no
longer reveal confidential information. In addition, the
inherent security constraints within the framework allow the
system to operate as an intrusion detection system for any
mobile agent environment. The mechanism was tested against most
of the well-known severe attacks on agents and networked
systems. The scheme showed promising performance, which makes it
well suited to transactions that require highly secure
environments, e.g., business-to-business. |
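For readers unfamiliar with the cited primitive, the sketch below shows textbook ElGamal encryption and decryption in Python; the toy prime, generator and message are illustrative only and say nothing about the key sizes or protocol details of the proposed framework.

    # Textbook ElGamal sketch (illustration only, not the paper's implementation);
    # the prime is far too small for real security and the generator is an assumption.
    import random

    p = 2147483647        # 2**31 - 1, a known prime (toy size)
    g = 3                 # group element used as generator (assumed)

    x = random.randrange(2, p - 1)    # agent owner's private key
    h = pow(g, x, p)                  # public key carried with the agent

    def encrypt(m, h):
        """Encrypt an integer 0 < m < p under public key h (e.g., on the visited host)."""
        k = random.randrange(2, p - 1)
        return pow(g, k, p), (m * pow(h, k, p)) % p

    def decrypt(c1, c2, x):
        """Recover m with the private key x (e.g., back at the agent's home platform)."""
        s = pow(c1, x, p)
        return (c2 * pow(s, -1, p)) % p     # multiply by the modular inverse of the shared secret

    collected = 123456                      # toy stand-in for data collected by the agent
    c1, c2 = encrypt(collected, h)
    assert decrypt(c1, c2, x) == collected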
|
Area 4 - Society and
e-Business
|
Title: |
THE EFFECT OF ORGANIZATIONAL CULTURE ON KNOWLEDGE
SHARING INTENTIONS AMONG INFORMATION SYSTEM PROFESSIONALS |
Author(s): |
Jin-Shiang Huang |
Abstract: |
In the knowledge management discipline, little
empirical research has been carried out to verify differences in
knowledge sharing among individuals within different
organizational settings. In the current study, the Competing
Values Approach (CVA) and knowledge classification structures
from the existing literature are applied to construct a
conceptual framework for exploring the knowledge sharing
intentions, for different knowledge categories, of information
system professionals from firms that exhibit various strengths
on distinct cultural dimensions. The hypothesized model is
tested by Pearson correlation analysis and canonical analysis
with data from 172 full-time workers of various job titles
engaged in system development and maintenance projects of
different firms in Taiwan. The findings support the notion that
the knowledge sharing intentions of information system
professionals under distinct cultural types are quite different.
The evidence also shows that, given the same organizational
culture, the observed sharing intentions for the various
knowledge categories are of equal level. |
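The survey items themselves are not reproduced here, but the two analyses named in the abstract can be sketched on synthetic data: the Python fragment below computes a Pearson correlation with SciPy and canonical correlations with scikit-learn; the variables and effect sizes are invented.

    # Illustration with synthetic data only; the actual survey items and scales are not ours to reproduce.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 172                                    # sample size reported in the abstract
    culture = rng.normal(size=(n, 3))          # e.g., scores on three cultural dimensions (assumed)
    sharing = 0.5 * culture[:, [0, 1]] + rng.normal(scale=0.8, size=(n, 2))  # two sharing-intention scores (assumed)

    # Pearson correlation between one cultural dimension and one sharing intention
    r, p_value = pearsonr(culture[:, 0], sharing[:, 0])
    print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

    # Canonical correlations between the set of cultural scores and the set of sharing intentions
    cca = CCA(n_components=2).fit(culture, sharing)
    U, V = cca.transform(culture, sharing)
    canonical_corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
    print("canonical correlations:", np.round(canonical_corrs, 2))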
|
Title: |
A MEDIATOR FOR E-BUSINESS |
Author(s): |
Sven Apel, Gunter Saake, Sebastian Herden and
André Zwanziger |
Abstract: |
Partner Relationship Management systems are being
implemented in many companies to improve relations with
important partners along the value chain. The goal is to
differentiate themselves from competitors and their products.
This leads to different types of relationships between
customers, suppliers and other market participants. Some of
these types require a mediator that connects the partners when a
direct connection is impossible. This article introduces the
ideas of Partner Relationship Management within E-Business and
presents a system called Mediator, which is able to tie together
business partners in a distributed environment. The two parts of
the mediator (a communication system and an information system)
are presented, and their key technologies, a peer-to-peer
network and an agent-based system, are discussed. |
|
Title: |
EFFICIENT MANAGEMENT OF MULTI-VERSION XML
DOCUMENTS FOR E-GOVERNMENT APPLICATIONS |
Author(s): |
Federica Mandreoli, Riccardo Martoglia, Fabio
Grandi and Maria Rita Scalas |
Abstract: |
This paper describes our research activities in
developing efficient systems for the management of multi-version
XML documents in an e-Government scenario. The application aim
is to enable citizens to access personalized versions of
resources, like norm texts and information made available on the
Web by public administrations. In the first system developed,
four temporal dimensions (publication, validity, efficacy and
transaction times) were used to represent the evolution of norms
in time and their resulting versioning, and a stratum approach
was used for its implementation on top of a relational DBMS.
Recently, the multi-version management system has been migrated
to a different architecture (a "native" approach) based on a
multi-version XML query processor developed for this purpose.
Moreover, a new semantic dimension has been added to the
versioning mechanism, in order to represent applicability of
norms to different classes of citizens according to their
digital identity. Classification of citizens is based on the
management of an ontology with the deployment of semantic Web
techniques. Preliminary experiments showed an encouraging
performance improvement with respect to the stratum approach and
a good scalability behaviour. Current work includes a more
accurate modeling of the citizen's ontology, which could also
require a redesign of the document storage scheme, and the
development of a complete infrastructure for the management of
the citizen's digital identity. |
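To make the versioning idea concrete, the sketch below selects the pertinent version of a norm for a given date and citizen class; for brevity it models only the validity and efficacy dimensions plus the semantic (applicability) dimension, and the record layout is an assumption rather than the authors' schema.

    # Hypothetical sketch of selecting the applicable norm version; field names and
    # data layout are assumptions, and publication/transaction time are omitted for brevity.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class NormVersion:
        text: str
        valid_from: date
        valid_to: date          # validity time
        efficacy_from: date
        efficacy_to: date       # efficacy time
        applies_to: set         # citizen classes (semantic dimension)

    def pertinent(versions, on_date, citizen_class):
        """Return the versions valid and efficacious on `on_date` for the given citizen class."""
        return [v for v in versions
                if v.valid_from <= on_date <= v.valid_to
                and v.efficacy_from <= on_date <= v.efficacy_to
                and citizen_class in v.applies_to]

    versions = [
        NormVersion("Art. 1 (original)", date(2000, 1, 1), date(2003, 12, 31),
                    date(2000, 1, 1), date(2003, 12, 31), {"resident", "student"}),
        NormVersion("Art. 1 (amended)", date(2004, 1, 1), date(9999, 12, 31),
                    date(2004, 1, 1), date(9999, 12, 31), {"resident"}),
    ]
    print(pertinent(versions, date(2005, 6, 1), "resident"))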
|
Title: |
INTERNET DIFFUSION AMONG ITALIAN FIRMS: THE
DIGITAL DIVIDE EXISTS |
Author(s): |
Maurizio Martinelli, Irma Serrecchia and Michela
Serrecchia |
Abstract: |
This paper reports on a study analysing Internet
diffusion among Italian firms, using domain names registered
under the .it ccTLD as a metric. The penetration rate,
calculated with respect to the number of companies, is computed
for highly dissimilar geographical areas (regions). A
concentration analysis was performed in order to discover
whether the geographical distribution of the Internet is less
concentrated than both the number of companies present in Italy
and the income level, suggesting a diffusive effect. Regression
analysis was carried out using social, economic and
infrastructure indicators. The results show that a “digital
divide” exists in terms of geographical distribution (i.e.,
across macro-areas – Northern, Central and Southern Italy – and
at the regional level). In the future we plan to carry out
research comparing the number of domains registered to
businesses with the number registered in the non-profit
sector. |
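The exact metrics are not given in the abstract, so the sketch below assumes a penetration rate of .it domains per 1,000 firms and a Gini coefficient as the concentration measure; all regional figures are made up and serve only to show the shape of the computation.

    # Illustrative only: every figure below is invented, and the penetration-rate and
    # concentration formulas are assumptions based on the abstract.
    import numpy as np

    domains = {"Lombardia": 120_000, "Lazio": 60_000, "Campania": 25_000, "Calabria": 6_000}
    firms   = {"Lombardia": 800_000, "Lazio": 450_000, "Campania": 350_000, "Calabria": 110_000}

    # Penetration rate: registered .it domains per 1,000 firms in each region
    penetration = {r: 1000 * domains[r] / firms[r] for r in domains}
    print(penetration)

    def gini(x):
        """Simple Gini coefficient as a concentration measure."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    print("concentration of domains:", round(gini(list(domains.values())), 3))
    print("concentration of firms:  ", round(gini(list(firms.values())), 3))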
|
Area 5 - e-Learning
|
Title: |
A FRAMEWORK FOR DEVELOPMENT AND MANAGEMENT OF
E-LESSONS IN E-LEARNING |
Author(s): |
Azita A. Bahrami |
Abstract: |
The use and/or re-use of existing e-lessons to
create new ones makes e-learning both time- and cost-effective.
Accomplishing this, however, first requires the removal of some
obstacles. This paper presents a framework
for that purpose. The progression of the concepts leading to the
framework includes the introduction of a multi-dimensional
e-lesson model that leads to the construction of an e-lesson
cube. This cuboid is the backbone of an e-lesson warehouse,
which is the main component of the proposed framework. |
|
Title: |
IMPACTS OF E-LEARNING ON ORGANISATIONAL STRATEGY |
Author(s): |
Charles A. Shoniregun, Paul Smith, Alex
Logvynovskiy and Vyacheslav Grebenyuk |
Abstract: |
E-learning is a relatively new concept. It has
been developed to describe the convergence of a whole range of
learning tools, which use technology as their basis for
delivery. E-learning uses technology to assist in delivering
learning experiences to learners. It is also a concept built
around the philosophy of “anytime and anywhere” learning,
meaning that learners can access learning materials when and as
required, no matter where in the world they happen to be located
or, indeed, off world. E-learning gives both strategic and
competitive advantage to organisations. Business organisations
recognise that knowledge and people are critical resources that
should be treated as treasures. In the information age, the
speed with which new products and services are introduced
requires employees to learn and consolidate new information
quickly and effectively. Organisations worldwide are now seeking
more innovative and efficient ways to deliver training to their
geographically dispersed workforces, and with traditional
training methods companies generally spend more money on
transporting and housing trainees than on the actual training
programs. E-learning has the capacity to reduce these costs
significantly and enables organisations to secure their product
or service knowledge. This paper focuses on how organisations
can secure their internal E-learning development and advocates a
combination of technological approaches. It also provides a
framework for how an organisation or academic institution should
make a rational decision regarding the implementation of
E-learning. The question posed by this paper is: can
organisations improve their E-learning while also securing a
higher level of knowledge base? |
|
Title: |
TOWARDS A GRID-BASED COLLABORATIVE PLATFORM FOR
E-LEARNING |
Author(s): |
Wang GuiLing, Li YuShun, Yang ShengWen, Miao
ChunYu, Xu Jun and Shi MeiLin |
Abstract: |
Large-scale cooperation support for learners
becomes even more important when e-Learning is implemented in a
scalable, open, dynamic and heterogeneous environment. This
paper presents how to realize collaborative learning support in
distributed learning environments based on grid technology. Our
approach fills the existing gap between current cooperative
platforms and complex, cross-organization infrastructures. We
propose a grid architecture for establishing a collaborative
platform for e-Learning, in which grid middleware and CSCW
services are provided. A Learning Assessment Grid, abbreviated
as LAGrid, is built on top of these services and provides
collaborative learning in a large-scale cross-organization
environment. |
|
Title: |
DEVELOPMENT AND DEPLOYMENT OF A WEB-BASED COURSE
EVALUATION SYSTEM - TRYING TO SATISFY THE FACULTY, THE STUDENTS,
THE ADMINISTRATION, AND THE UNION |
Author(s): |
Jesse M. Heines and David M. Martin Jr. |
Abstract: |
An attempt to move from a paper-based university
course evaluation system to a Web-based one ran into numerous
obstacles from various angles. While development of the system
was relatively smooth, deployment was anything but. Faculty had
trouble with some of the system's basic concepts, and students
seemed insufficiently motivated to use the system. Both faculty
and students demonstrated mistrust of the system’s security and
anonymity. In addition, the union threatened grievances
predicated on their perception that the system was in conflict
with the union contract. This paper describes the system’s main
technical and, perhaps more important, political aspects,
explains implementation decisions, relates how the system
evolved over several semesters, and discusses steps that might
be taken to improve the entire process. |
|
Title: |
ETHEMES: AN INTERNET INSTRUCTIONAL RESOURCE |
Author(s): |
Laura Diggs and John Wedman |
Abstract: |
This paper describes a major initiative to
support teachers in integrating Internet resources into the
instructional process while shifting their instruction to a more
constructivist approach. This paper also presents the background
of this initiative and results of a study to determine teacher
perceptions of this initiative based on the Performance
Pyramid. Referred to as eThemes, this service accepts requests
from teachers, finds Web sites that meet the requirements
specified in the requests, and creates an archive of quality
Internet resources for easy access and searching. It minimizes
teachers’ resource-seeking time and maximizes their
resource-using time in their instruction to enhance teaching
practice and student performance. |
|
Title: |
AN AFFECTIVE ROLE MODEL OF SOFTWARE AGENT FOR
EFFECTIVE AGENT-BASED E-LEARNING BY INTERPLAYING BETWEEN
EMOTIONS AND LEARNING |
Author(s): |
Shaikh Mostafa Al Masum and Mitsuru Ishizuka |
Abstract: |
E-learning could become the major form of
training and development in organizations as technologies
improve to create a fully interactive and humanized learning
environment (Wentling et al., 2000). To help realize this
objective, this paper explains an affective role model of a
software agent that facilitates interactive online learning by
considering and incorporating the emotional features associated
with learning, with a view to strengthening the expectation of
Lister et al. (1999) that the differences between face-to-face
and purely web-based courses are rapidly disappearing. The paper
first presents the relationships between emotion and learning
drawn from different literatures and surveys. Then an affective
model for e-learning is explored. After presenting the model,
the paper details the emotion dynamics underpinned by the
software agent. The paper concludes with directions for further
research. |
|
Title: |
SOFTWARE-AGENT ARCHITECTURE FOR INTERACTIVE
E-LEARNING |
Author(s): |
Shaikh Mostafa Al Masum and Mitsuru Ishizuka |
Abstract: |
Many universities worldwide have developed a
variety of web-based e-learning environments, hoping to benefit
from this new and fast spreading IT (Harasim, 2000). The main
intention of this paper is to describe an e-learning model that
would act as a prudent teacher to teach and test the aptitude of
e-learners based on an available knowledge base. Here, we
provide an overall view of the proposed model and then briefly
describe the purposes of its different components. This paper
presents a visualization model named Web Online Force-Directed
Animated Visualization (WebOFDAV) (Huang et al., 1998) and also
points out the implementation issues. The proposed model is
designed to be compatible with any e-learning module designed
according to the guidelines mentioned in this paper. Finally, an
animated cartoon-like character agent interacts with the
learner. An agent-based software system following this model is
in the development phase, with the necessary linguistic and
emotion support. |
|
Title: |
BEYOND COLLABORATIVE LEARNING - COMMUNAL
CONSTRUCTION OF KNOWLEDGE IN AN ONLINE ENVIRONMENT |
Author(s): |
John Cuthell |
Abstract: |
The drive for e-learning as a cost-effective and
flexible channel for distance and life-long learning has focused
on the benefits of a just-in-time delivery of content to the
learner. The assumption is that knowledge is inseparable from,
and follows, content. An obvious and important aspect of
e-learning has been the need for online tutors to deploy a range
of Soft Skills to support learners. E-learning relies on
e-tutoring: the concept of e-tutoring embodies mentoring,
coaching and facilitating techniques. In an online environment
in which student discussion forums constitute one of the tools
for knowledge construction, the role of the facilitator assumes
greater importance than that of mentor, moderator or coach. The
ability to facilitate a discussion or a debate becomes central
to the construction of new knowledge for the participants
(Holmes et al., 2001). In spring and early summer 2004, a group of
teachers from diverse backgrounds engaged in an intensive course
in e-facilitation techniques. This paper describes how they
learned and were taught, and evaluates the ways in which an
online collaborative environment enabled the development of the
basic skills required for e-facilitation. The paper then
assesses the effectiveness of individuals as both contributors
and e-facilitators in a range of online educational forums. It
examines the contribution each made, and details the
e-facilitation techniques deployed in various forums. Outcomes
are measured against the input that individuals made. The ways
in which the participants were able to construct new knowledge
in the online communal context are detailed. These are compared
with some other models of learning in an online environment:
Cuthell (2001) and Salmon (2002). Finally, the paper evaluates the
ways in which e-facilitation enables individuals to construct
new knowledge, both with and for others. An interesting
consequence of participating in a course of this nature is that
perceptions of teaching, learning and knowledge change. Do these
perceptions follow through into the daily praxis of the
teachers? The implications for teaching and learning in a range
of educational environments are identified. |
|
Title: |
IDENTIFYING FACTORS IMPACTING ONLINE LEARNING |
Author(s): |
Dennis Kira, Raafat Saade and Xin He |
Abstract: |
The study presented in this paper sought to
explore several dimensions of online learning. Identifying these
dimensions entails important basic issues of great relevance to
educators today. The primary question is: “what are the factors
that contribute to the success/failure of online learning?” To
answer this question we need to identify the important variables
that (1) measure the learning outcome and (2) help us understand
the learning experience of students using specific learning
tools. In this study, the dimensions we explored are students’
attitude, affect, motivation and perception of the usage of an
Online Learning Tool. A survey utilizing validated items from
previous relevant research was conducted to help us determine
these variables. An exploratory factor analysis (EFA) was used
as the basis of our analysis. The results of the EFA identified
the items that are relevant to the study and that can be used to
measure the dimensions of online learning. Affect and perception
were found to be measured strongly by the adopted items, while
motivation was measured most weakly. |
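The abstract does not state which software performed the exploratory factor analysis; the sketch below uses scikit-learn's FactorAnalysis with a varimax rotation on synthetic Likert-style responses to show how item loadings on latent dimensions are obtained. Item names, the number of factors and the data are assumptions.

    # Synthetic illustration only (requires scikit-learn >= 0.24 for the rotation option):
    # item responses, loadings and factor count are assumptions, not the study's data.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n_students = 200
    latent = rng.normal(size=(n_students, 3))             # hypothetical attitude / affect / motivation factors
    loadings = np.array([[0.9, 0.0, 0.1],                 # item1 loads on factor 1
                         [0.8, 0.1, 0.0],                 # item2 loads on factor 1
                         [0.1, 0.9, 0.0],                 # item3 loads on factor 2
                         [0.0, 0.8, 0.2],                 # item4 loads on factor 2
                         [0.0, 0.1, 0.7],                 # item5 loads (weakly) on factor 3
                         [0.2, 0.0, 0.5]])                # item6 loads (weakly) on factor 3
    items = latent @ loadings.T + rng.normal(scale=0.5, size=(n_students, 6))

    fa = FactorAnalysis(n_components=3, rotation="varimax").fit(items)
    print(np.round(fa.components_.T, 2))                  # estimated item loadings per factor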
|
Title: |
A WEB-BASED ALGORITHM ANALYSIS TOOL - AN ONLINE
LABORATORY FOR CONDUCTING SORTING EXPERIMENTS |
Author(s): |
James TenEyck |
Abstract: |
In this paper, an on-line laboratory is described
in which students can test theoretical analyses of the run-time
efficiency of common sorting algorithms. The laboratory contains
an applet that allows students to select an algorithm with a
type of data distribution and sample size and view the number of
compares required to sort a particular instance of that
selection. It provides worksheets for tabulating the results of
a sequence of experiments and for entering qualitative and
quantitative observations about the results. It also contains a
second applet that directly measures the goodness of fit of
recorded data with common functions such as cn² and cn·lg(n).
The laboratory is intended to reinforce classroom learning
activities and other homework assignments with a practical
demonstration of the performance of a variety of sorting
algorithms on different kinds of data sets. It is a singular
on-line tool that complements other online learning tools such
as animations of various sorting algorithms and visualizations
of self-adjusting data structures. The laboratory has been used
in algorithms courses taught by the author at (omitted) and
(omitted), and is available on-line for use by a more general
audience. |
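In the spirit of the laboratory (though not its applet), the sketch below counts the comparisons performed by insertion sort and mergesort on random inputs and fits the constant c in c·n² and c·n·lg(n), which is essentially the goodness-of-fit exercise the worksheets support.

    # Sketch of the laboratory's idea (not the actual applet): count comparisons made by
    # two sorts on random inputs and estimate the constant c in c*n^2 and c*n*lg(n).
    import math, random

    def insertion_sort_compares(a):
        a, count = list(a), 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                count += 1                      # one comparison
                if a[j - 1] > a[j]:
                    a[j - 1], a[j] = a[j], a[j - 1]
                    j -= 1
                else:
                    break
        return count

    def merge_sort_compares(a):
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, cl = merge_sort_compares(a[:mid])
        right, cr = merge_sort_compares(a[mid:])
        merged, i, j, count = [], 0, 0, cl + cr
        while i < len(left) and j < len(right):
            count += 1                          # one comparison
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, count

    for n in [256, 512, 1024, 2048]:
        data = [random.random() for _ in range(n)]
        ins = insertion_sort_compares(data)
        _, mrg = merge_sort_compares(data)
        # fitting a one-parameter model y = c * f(n) is just c = y / f(n) per data point
        print(f"n={n}: insertion ~ {ins / n**2:.3f} * n^2,  mergesort ~ {mrg / (n * math.log2(n)):.3f} * n*lg(n)")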
|
Title: |
THE AUTOTUTOR 3 ARCHITECTURE: A SOFTWARE
ARCHITECTURE FOR AN EXPANDABLE, HIGH-AVAILABILITY ITS |
Author(s): |
Patrick Chipman, Andrew Olney and Arthur C.
Graesser |
Abstract: |
Providing high quality of service over the
Internet to a variety of clients while simultaneously providing
good pedagogy and extensibility for content creators and
developers are key issues in the design of the computational
architecture of an intelligent tutoring system (ITS). In this
paper, we describe an ITS architecture that attempts to address
both issues using a distributed hub-and-spoke metaphor similar
to that of the DARPA Galaxy Communicator. This architecture is
described in the context of the natural language ITS that uses
it, AutoTutor 3. |
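The AutoTutor 3 modules and message formats are not detailed in the abstract, so the sketch below only illustrates the hub-and-spoke idea: a central hub routes typed messages to whichever registered spokes subscribe to them; the module names are invented.

    # Minimal hub-and-spoke sketch with invented module names; it is not the AutoTutor 3
    # code base, only an illustration of a hub routing messages between registered spokes.
    from collections import defaultdict

    class Hub:
        def __init__(self):
            self.subscribers = defaultdict(list)         # message type -> handler callables

        def register(self, message_type, handler):
            self.subscribers[message_type].append(handler)

        def send(self, message_type, payload):
            for handler in self.subscribers[message_type]:
                handler(payload)

    hub = Hub()

    # Two hypothetical spokes: a language-analysis module and a dialog manager
    hub.register("student_utterance", lambda text: print("analyzer received:", text))
    hub.register("student_utterance", lambda text: hub.send("tutor_move", "give_hint"))
    hub.register("tutor_move", lambda move: print("dialog manager selects:", move))

    hub.send("student_utterance", "I think force equals mass times velocity")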
|
Title: |
ACTIVE LEARNING BY PERSONALIZATION - LESSONS
LEARNT FROM RESEARCH IN CONCEPTUAL CONTENT MANAGEMENT |
Author(s): |
Hans-Werner Sehring, Sebastian Bossung and
Joachim W. Schmidt |
Abstract: |
Due to their increasing popularity, e-learning
systems form a system class of growing importance. We believe,
however, that contemporary e-learning systems can be further
enhanced by improving the support for active participation of
learners, which is believed to improve recall. In this paper we
describe how personalization of both content and structure can
be used to enable this active learning. Personalization is also
a key feature of research-oriented content management systems.
Therefore systems for learning and research can efficiently be
integrated by linking and exchanging content between them.
Synergies which stem from this include processes which
transparently span system boundaries, and sharing of content
between systems. We explain how personalization can be used for
enabling autonomous learning activities and also for supporting
research-oriented workflows. Personalization and content
coupling technologies are at the heart of one of our operational
web-based application systems, the Warburg Electronic Library.
This system is successfully used in a number of research as well
as learning projects, during which advantages of joint research
and learning systems have been identified. |
|
Title: |
A METHODOLOGY TO BUILD E-LEARNING MULTIMEDIA
RESOURCES |
Author(s): |
Giovanni Casella, Gennaro Costagliola and
Filomena Ferrucci |
Abstract: |
In this paper, we present a methodology for
developing e-learning resources. It focuses on resource
creation, so it can be integrated into a general methodology for
large e-learning projects or used in small e-learning projects
in which the main task is content creation. The methodology has
been conceived by taking into account several crucial issues,
such as e-learning standards, accessibility, and ease of
application. A case study is also described to show the
effectiveness of the proposal. |
|
Title: |
THE TEACHING OF AUDITING: FROM SCHOOL ATTENDANCE
TO VIRTUAL SCHOOL |
Author(s): |
Agostinho Inácio Bucha, Francisco Alegria
Carreira and Maria da Conceição Aleixo |
Abstract: |
The concept of education has evolved over time and
is now seen not only as a natural phenomenon, but especially as
a social phenomenon essential to the development of society.
Schools should not be organizations closed in on themselves,
apart from the surrounding environment; on the contrary, they
should be open organizations that take part in constant change.
Higher education institutions, in particular, cannot remain
indifferent to the economic, social and especially technological
changes happening everywhere, which demand different behaviour
from teachers, who now need new skills to face today's
competitive environment. We therefore believe that scientific
and technical training must be complemented by further training
capable of stimulating other skills concerning attitude and
emotion. Along with developing management skills, teachers
should also invest in new learning procedures related to the New
Information and Communication Technologies (NICT), namely web
pages, portals and enterprise simulation. Hence we have tried to
identify these new environments, reflecting on face-to-face and
virtual teaching and sharing our experience of teaching auditing
using the Internet. We have created a web page for the auditing
course; this has been a positive experience, as the results show
a clear preference from the students, who find it relevant,
convenient and a valuable aid to improving the teaching/learning
process. |
|