
Showing papers in "Journal of Information Science and Engineering in 2010"


Journal ArticleDOI
TL;DR: It is observed that carry-and-forward is the new and key consideration for designing all routing protocols in VANETs, and min-delay and delay-bounded routing protocols for VANETs are discussed.
Abstract: Vehicular Ad hoc Network (VANET), a subclass of mobile ad hoc networks (MANETs), is a promising approach for the intelligent transportation system (ITS). The design of routing protocols in VANETs is an important and necessary issue for supporting the smart ITS. The key differences between VANETs and MANETs are the special mobility pattern and rapidly changing topology. The existing routing protocols of MANETs cannot be effectively applied to VANETs. In this investigation, we mainly survey new routing results in VANETs. We introduce unicast, multicast, geocast, mobicast, and broadcast protocols. It is observed that carry-and-forward is the new and key consideration for designing all routing protocols in VANETs. With the consideration of multi-hop forwarding and carry-and-forward techniques, min-delay and delay-bounded routing protocols for VANETs are discussed. Besides, the temporary network fragmentation problem and the broadcast storm problem are further considered in designing routing protocols for VANETs. The temporary network fragmentation problem, caused by the rapidly changing topology, influences the performance of data transmissions. The broadcast storm problem seriously affects the success rate of message delivery in VANETs. The key challenge is to overcome these problems to provide routing protocols with low communication delay, low communication overhead, and low time complexity. The challenges and perspectives of routing protocols for VANETs are finally discussed.
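The carry-and-forward idea highlighted in this survey can be sketched in a few lines: a vehicle hands a message to a neighbor closer to the destination if one exists, and otherwise buffers (carries) it until such a neighbor appears. The function name, positions, and distance metric below are illustrative assumptions, not a protocol from the survey.

```python
# Minimal carry-and-forward sketch for a 1-D road. A vehicle forwards a
# message to the neighbor closest to the destination, provided that
# neighbor makes progress; otherwise it keeps carrying the message.

def carry_or_forward(my_pos, neighbor_positions, dest_pos):
    """Return the neighbor to forward to, or None to keep carrying."""
    my_dist = abs(dest_pos - my_pos)
    closer = [n for n in neighbor_positions if abs(dest_pos - n) < my_dist]
    if not closer:
        return None  # temporary network fragmentation: carry the message
    return min(closer, key=lambda n: abs(dest_pos - n))

# Vehicle at 100 m, destination at 500 m:
print(carry_or_forward(100, [80, 250], 500))  # forwards via the node at 250
print(carry_or_forward(100, [80, 90], 500))   # no closer neighbor: carry (None)
```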

243 citations


Journal ArticleDOI
TL;DR: An adaptive traffic control system based on a new traffic infrastructure using Wireless Sensor Network (WSN) and using new techniques for controlling the traffic flow sequences, which is dynamically adaptive to traffic conditions on both single and multiple intersections.
Abstract: Vehicular traffic is continuously increasing around the world, especially in large urban areas. The resulting congestion has become a major concern to transportation specialists and decision makers. The existing methods for traffic management, surveillance and control are not adequately efficient in terms of performance, cost, maintenance, and support. In this paper, the design of a system that utilizes and efficiently manages traffic light controllers is presented. In particular, we present an adaptive traffic control system based on a new traffic infrastructure using a Wireless Sensor Network (WSN) and new techniques for controlling the traffic flow sequences. These techniques are dynamically adaptive to traffic conditions on both single and multiple intersections. A WSN is used as a tool to instrument and control traffic signals on roadways, while an intelligent traffic controller is developed to control the operation of the traffic infrastructure supported by the WSN. The controller embodies the traffic system communication algorithm (TSCA) and the traffic signal time manipulation algorithm (TSTMA). Both algorithms are able to provide the system with adaptive and efficient traffic estimation, represented by the dynamic change in the traffic signals' flow sequence and traffic variation. Simulation results show the efficiency of the proposed scheme in solving traffic congestion in terms of the average waiting time and average queue length on an isolated (single) intersection, and efficient global traffic flow control on multiple intersections. A test bed was also developed and deployed for real measurements. The paper concludes with some future highlights and useful remarks.
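The adaptive signal-timing idea can be illustrated with a toy rule that lengthens the green phase as the sensed queue grows. The function name, base time, per-vehicle increment, and clamp below are invented for illustration; the paper's TSCA and TSTMA algorithms are far more elaborate.

```python
# Toy adaptive signal timing: green time grows with the queue length
# reported by roadside sensors, clamped to a maximum. All constants
# here are illustrative assumptions, not values from the paper.

def green_time(queue_len, base=10, per_vehicle=2, max_green=60):
    """Seconds of green for one phase given the sensed queue length."""
    return min(max_green, base + per_vehicle * queue_len)

print(green_time(0))    # empty approach: base green only (10 s)
print(green_time(5))    # moderate queue: 20 s
print(green_time(100))  # saturated approach: clamped at 60 s
```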

156 citations


Journal ArticleDOI
TL;DR: This paper reviews fuzzy conceptual data models proposed in the literature, where fuzzy ER/EER, IFO and UML data models are mainly discussed, and reviews the applications of fuzzy conceptualData models.
Abstract: Fuzzy set theory has been extensively applied to extend various database models, resulting in numerous contributions, mainly with respect to the popular relational model or some related form of it. To satisfy the need for modeling complex objects with imprecision and uncertainty, much recent research has concentrated on fuzzy semantic (conceptual) data models. This paper reviews fuzzy conceptual data models proposed in the literature, where fuzzy ER/EER, IFO and UML data models are mainly discussed, and reviews the applications of fuzzy conceptual data models.

63 citations


Journal Article
TL;DR: The experimental results show that the Random Forest enhances the prediction accuracy as well as reduces time consumption of the protein fold prediction task, compared to the previous works found in the literature.
Abstract: The functioning of a protein in biological reactions crucially depends on its three-dimensional structure. Prediction of the three-dimensional structure of a protein (tertiary structure) from its amino acid sequence (primary structure) is considered a challenging task in bioinformatics and molecular biology. Recently, due to tremendous advances in the pattern recognition field, there has been growing interest in applying classification approaches to tackle the protein fold prediction problem. In this paper, Random Forest, a kind of ensemble method, is employed to address this problem. Random Forest is a recently introduced method based on the bagging algorithm that trains a group of base classifiers on randomly selected sets of features and then combines the results obtained from the base classifiers by majority voting. To investigate the effect of the number of base learners on the performance of the Random Forest, twelve different numbers of base classifiers (between 30 and 600) are applied for this classifier. To study the performance of the Random Forest and compare its results with previously reported results, the dataset produced by Ding and Dubchak is used. Our experimental results show that the Random Forest enhances the prediction accuracy (using the same set of features proposed by Dubchak et al.) as well as reduces the time consumption of the protein fold prediction task, compared to previous works found in the literature.
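The majority-voting step that combines the base classifiers can be sketched as follows. The labels and per-classifier outputs are made up; real base learners would be decision trees trained on random feature subsets.

```python
from collections import Counter

# Majority voting as used in Random Forest: each base classifier emits a
# label per sample; the ensemble picks the most frequent label per sample.

def majority_vote(per_classifier_preds):
    """per_classifier_preds: one list of predicted labels per base classifier."""
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*per_classifier_preds)]

preds = [
    ["a", "b", "b"],   # base classifier 1
    ["a", "a", "b"],   # base classifier 2
    ["c", "a", "b"],   # base classifier 3
]
print(majority_vote(preds))  # ['a', 'a', 'b']
```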

50 citations


Journal ArticleDOI
TL;DR: To overcome found security weaknesses, tag privacy divulgence and the new known DoP attack in previous proofs schemes, this paper introduces three anonymous coexistence proofs protocols that possess all required security properties with the same complexity order of the clumping-proofs protocol on computation cost.
Abstract: In a world with RFID carriers everywhere, the coexistence proof of multiple RFID-tagged objects shown at the same time and the same place can become a very useful mechanism and be adopted in many application areas such as computer forensics, evidence in law, valuables security, and warning or notification systems. In order to support the correctness of derived proofs, it is necessary to design secure and robust coexistence proofs protocols based on RFID characteristics. In this paper we address the security and privacy requirements for a secure coexistence proofs protocol on RFID tags to defend against tag privacy divulgence, forward secrecy disclosure, denial-of-proof (DoP) attack, and authentication sequence disorder. Along with these design criteria, a recently published secure proofs protocol [11] is evaluated to identify the areas demanding security enhancement. To overcome the security weaknesses found in previous proofs schemes, namely tag privacy divulgence and the newly identified DoP attack, we introduce three anonymous coexistence proofs protocols. According to our security and performance analyses, the proposed protocols possess all required security properties with the same complexity order as the clumping-proofs protocol in computation cost.

46 citations


Journal Article
TL;DR: Experimental results show that the proposed secure deletion scheme reduces secure deletion overhead compared with the simple encryption scheme, and ensures that a single erase operation is sufficient to securely delete a file.
Abstract: Secure deletion for flash file systems is essential for enhancing the security of embedded systems. Due to the characteristics of flash memory, existing secure deletion schemes cannot be directly adapted to flash file systems. In this paper, we propose an efficient secure deletion scheme for flash file systems. It encrypts a file's data and stores all keys of the file in the same block. This ensures that a single erase operation is sufficient to securely delete a file. Experimental results show that our scheme significantly reduces secure deletion overhead compared with the simple encryption scheme.
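The core idea, encrypting a file's data with per-file keys kept in a single erasable block so that one erase operation securely deletes the file, can be sketched as below. The dict-based flash model and the SHA-256 XOR stream cipher are illustrative assumptions, not the paper's actual construction.

```python
import hashlib
import os

# Sketch of secure deletion by key erasure: file data is stored encrypted,
# all keys for a file live in one "key block", and erasing that single
# block renders every encrypted chunk unrecoverable.

def keystream(key, n):
    """Deterministic SHA-256-based keystream of n bytes (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

flash = {}                                           # flash data blocks
key_block = {"report.txt": os.urandom(16)}           # all keys in one block
flash["data0"] = xor(b"top secret", key_block["report.txt"])

# Reading works while the key block exists:
assert xor(flash["data0"], key_block["report.txt"]) == b"top secret"

# Secure deletion = one erase of the key block; the ciphertext remains
# in flash but is useless without the key.
key_block.clear()
```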

42 citations


Journal ArticleDOI
TL;DR: This work proposes a methodology for switching hardware context in dynamically partially reconfigurable FPGA systems that can reduce the reconfiguration time and memory size of hardware context-switching by analyzing the characteristic of bit indexes and frame addresses.
Abstract: Nowadays, the hardware of field programmable gate arrays (FPGAs) can be reconfigured both dynamically and partially. A dynamically and partially reconfigurable system can share hardware contexts among various hardware tasks. However, such FPGA systems require much memory to save the hardware context. To solve this problem, this work proposes a methodology for switching hardware contexts in dynamically partially reconfigurable FPGA systems. This method can reduce the reconfiguration time and memory size of hardware context switching by analyzing the characteristics of bit indexes and frame addresses. The experimental results show that the proposed method reduces hardware reconfiguration time by 10.674%, memory size by 47.47%, and resource overhead by 41.25%.

39 citations


Journal ArticleDOI
TL;DR: In this article, a difference image based JPEG communication scheme and water level measurement scheme using sparsely sampled images in time domain were proposed to measure water levels from remote sites using a narrowband channel.
Abstract: To measure water levels at remote sites using a narrowband channel, this paper proposes a difference-image-based JPEG communication scheme and a water level measurement scheme using images sparsely sampled in the time domain. In the slave system located in the field, the image is converted to a difference image and compressed using JPEG; then the larger changes are sampled and transmitted. To measure the water level from the images received in the master system, which may contain noise caused by various sources, an averaging filter and a Gaussian filter are used to reduce the noise, and the Y-axis profile of an edge image is used to read the water level. Considering the harsh conditions of the field, a simplified camera calibration scheme is also introduced. The implemented slave system was installed in a river and its performance has been tested with data collected over a year.

36 citations


Journal ArticleDOI
TL;DR: This study evaluates the user experience of the Taiwan high speed rail ticket vending machine in light of recent research results in human computer interaction and suggests a set of user interface design heuristics that can help kiosk designers avoid the most common design mistakes.
Abstract: Low hardware costs and high staffing costs have resulted in an increase in public information and transaction kiosks. Ticket vending machines are one important subset of such kiosks. These machines must offer efficient service with zero tolerance for human error, require no user training and be accessible to a wide range of users. This study evaluates the user experience of the Taiwan high speed rail ticket vending machine in light of recent research results in human computer interaction. Several design problems are identified and improvements are suggested. A set of user interface design heuristics is suggested that can help kiosk designers avoid the most common design mistakes.

33 citations


Journal ArticleDOI
TL;DR: CRSVM that builds the model of RSVM via RBF (Gaussian kernel) construction and Systematic Sampling RSVM that incrementally selects the informative data points to form the reduced set while the RSVM used random selection scheme are introduced.
Abstract: In dealing with large datasets, the reduced support vector machine (RSVM) was proposed with the practical objective of overcoming computational difficulties as well as reducing model complexity. In this paper, we propose two new approaches to generate a representative reduced set for RSVM. First, we introduce the Clustering Reduced Support Vector Machine (CRSVM), which builds the RSVM model via RBF (Gaussian kernel) construction. Applying a clustering algorithm to each class, we generate the cluster centroids of each class and use them to form the reduced set used in RSVM. We also estimate the approximate density of each cluster to obtain the parameter used in the Gaussian kernel, which saves a lot of tuning time. Secondly, we present the Systematic Sampling RSVM (SSRSVM), which incrementally selects informative data points to form the reduced set, whereas the original RSVM used a random selection scheme. SSRSVM starts with an extremely small initial reduced set and iteratively adds a portion of the misclassified points to the reduced set, based on the current classifier, until the validation set correctness is large enough. We also show that our methods, CRSVM and SSRSVM, with a smaller reduced set, have superior performance to the original random selection scheme.
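The incremental selection loop behind SSRSVM can be sketched with a stand-in classifier: start from a tiny reduced set, add a portion of the misclassified points, and stop once validation accuracy is high enough. The 1-nearest-neighbor rule and the toy 1-D two-class data below are assumptions replacing the actual RSVM classifier.

```python
# SSRSVM-style incremental reduced-set selection (sketch). A 1-NN rule on
# the reduced set stands in for the RSVM; data is a toy 1-D two-class set.

def predict(x, reduced):
    """Label of the closest reduced-set point (stand-in classifier)."""
    return min(reduced, key=lambda p: abs(p[0] - x))[1]

data = [(i / 10, 0) for i in range(10)] + [(2 + i / 10, 1) for i in range(10)]
reduced = [data[0]]                        # extremely small initial reduced set

while True:
    wrong = [(x, y) for x, y in data if predict(x, reduced) != y]
    acc = 1 - len(wrong) / len(data)
    if acc >= 0.95:                        # validation correctness large enough
        break
    reduced.extend(wrong[: max(1, len(wrong) // 2)])  # add a portion of errors

print(acc, len(reduced))  # reaches full accuracy with only 6 reduced points
```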

31 citations


Journal ArticleDOI
TL;DR: This algorithm uses the low cost and high sensitive magnetic sensors to measure the magnetic field distortion when vehicle crosses the sensors and detect vehicle via an adaptive threshold, and finally identifies vehicle type from an intelligent neural network classifier.
Abstract: To improve the accuracy of real-time vehicle surveillance, we utilize advances in wireless sensor networks to develop a vehicle classification methodology based on magnetic signatures and length estimation, with binary proximity magnetic sensor networks and an intelligent neuron classifier. In this algorithm, we use low-cost and highly sensitive magnetic sensors to measure the magnetic field distortion when a vehicle crosses the sensors, and detect the vehicle via an adaptive threshold. The vehicle length is estimated from the geometrical characteristics of the proximity sensor networks, and the vehicle type is finally identified by an intelligent neural network classifier. Simulations and on-road experiments achieve a high recognition rate of over 90%, verifying that this algorithm enhances vehicle surveillance with high accuracy and solid robustness.

Journal ArticleDOI
TL;DR: A real-time traffic surveillance system for the detection, recognition, and tracking of multiple vehicles in roadway images that has undergone roadside tests in Hsinchu and Taipei, Taiwan and demonstrates the robustness, accuracy, and responsiveness of the method.
Abstract: This paper proposes a real-time traffic surveillance system for the detection, recognition, and tracking of multiple vehicles in roadway images. Moving vehicles can be automatically separated from the image sequences by a moving object segmentation method. Since CCD surveillance cameras are typically mounted at some distance from roadways, occlusion is a common and vexing problem for traffic surveillance systems. The segmentation and recognition method uses the length, width, and roof size to classify vehicles as vans, utility vehicles, sedans, mini trucks, or large vehicles, even when occlusive vehicles are continuously merging from one frame to the next. The segmented objects can be recognized and counted in accordance with their varying features, via the proposed recognition and tracking methods. The system has undergone roadside tests in Hsinchu and Taipei, Taiwan. Experiments using complex road scenes under various weather conditions are discussed and demonstrate the robustness, accuracy, and responsiveness of the method.

Journal ArticleDOI
TL;DR: This paper addresses the problem of delivering multimedia content to a sparse vehicular network from roadside info-stations, using efficient vehicle-to-vehicle collaboration, and describes an efficient way to achieve reliable dissemination to all nodes (even disconnected clusters in the network).
Abstract: In this paper, we address the problem of delivering multimedia content to a sparse vehicular network from roadside info-stations, using efficient vehicle-to-vehicle collaboration. Due to the highly dynamic nature of the underlying vehicular network topology, we depart from architectures requiring centralized coordination, reliable MAC scheduling, or global network state knowledge, and instead adopt a distributed paradigm with simple protocols. In other words, we investigate the problem of reliable dissemination from multiple sources when each node in the network shares a limited amount of its resources for cooperating with others. By using rateless coding at the Road Side Unit (RSU) and using vehicles as data carriers, we describe an efficient way to achieve reliable dissemination to all nodes (even disconnected clusters in the network). In a nutshell, we explore vehicles as mobile storage devices. We then develop a method to keep the density of the rateless coded packets, as a function of distance from the RSU, at the level set for the target decoding distance. We investigate various tradeoffs involving buffer size, maximum capacity, and the mobility parameter of the vehicles.

Journal ArticleDOI
TL;DR: This work proposes HarpiaGrid, a geography-aware grid-based routing protocol for VANETs, which uses map data to generate a shortest transmission grid route, effectively trades route discovery communication overhead with insignificant computation time.
Abstract: Vehicular Ad Hoc Networks (VANETs) are a research field attracting growing attention. Current routing protocols in VANETs usually use route discovery to forward data packets to the destination. In addition, if vehicle density in the network is low, there might not be vehicles available to deliver the packets. This paper proposes HarpiaGrid, a geography-aware grid-based routing protocol for VANETs. The protocol uses map data to generate a shortest transmission grid route, effectively trading route discovery communication overhead for an insignificant amount of computation time. By restricting packets to grid sequences rather than a blind greedy search, and by making use of a route cache approach, HarpiaGrid eliminates many unnecessary transmissions, thus greatly improving routing efficiency. Moreover, for route maintenance, this work proposes a local recovery scheme and uses backtracking techniques to generate a new grid forwarding route, providing superior fault-tolerance capability. Experiments were conducted, and the results demonstrated that the proposed scheme is indeed more efficient than other protocols.

Journal ArticleDOI
TL;DR: A more general model of Wang et al.'s LSB substitution scheme, called transforming LSB substitution, is proposed to find a better solution and to overcome the long running time of the two previous approaches.
Abstract: Simple least-significant-bit (LSB) substitution is a method used to embed secret data in the least significant bits of pixels in a host image. LSB approaches typically achieve high capacity. Simple LSB substitution, which hides secret data directly in the LSBs, is easily implemented but results in poor quality of the stego-image. In order to reduce the degradation of the host image after embedding, an LSB substitution scheme was proposed by Wang et al., who used a genetic algorithm to search for approximate solutions. Also, Chang et al. proposed a dynamic programming strategy to efficiently obtain a solution. In this paper, we propose a more general model of Wang et al.'s LSB substitution scheme, called transforming LSB substitution. In order to overcome the long running time associated with the two previous approaches, a more efficient approach, referred to as the matching approach, is proposed to find a better solution. Experiments, demonstrations and analyses are presented to validate the new scheme and approach.
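Baseline simple LSB substitution, the scheme the paper generalizes, can be sketched directly; the transforming and matching optimizations themselves are not reproduced here.

```python
# Simple LSB substitution: hide one secret bit in the least significant
# bit of each pixel. Embedding changes each pixel value by at most 1.

def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with the secret bits."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n):
    """Read back the first n secret bits from the stego-pixels."""
    return [p & 1 for p in pixels[:n]]

host = [100, 101, 102, 103, 104]
secret = [1, 0, 1, 1]
stego = embed(host, secret)
print(stego)                 # [101, 100, 103, 103, 104]
assert extract(stego, 4) == secret
assert all(abs(a - b) <= 1 for a, b in zip(host, stego))  # tiny distortion
```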

Journal ArticleDOI
TL;DR: The simulation results demonstrate that the efficient VLSI of extended linear interpolation at 267MHz with 25980 gates in a 450 × 450μm^2 chip is able to process digital image scaling for HDTV in real-time.
Abstract: This paper presents a novel image interpolation method, extended linear interpolation, which is a low-cost and high-speed architecture with interpolation quality comparable to that of bi-cubic convolution interpolation. A method for reducing the computational complexity of generating the weighting coefficients is proposed. Based on this approach, an efficient hardware architecture is designed to meet real-time requirements. Compared to the latest bi-cubic hardware design work, the architecture saves about 60% of the hardware cost. The architecture is implemented on the Virtex-II FPGA, and the high-speed VLSI has been successfully designed and implemented with the TSMC 0.13μm standard cell library. The simulation results demonstrate that the efficient VLSI implementation of extended linear interpolation, running at 267MHz with 25980 gates in a 450 × 450μm^2 chip, is able to process digital image scaling for HDTV in real time.
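For reference, plain bilinear interpolation, the family of methods the extended linear approach builds on, looks like this in software; the paper's hardware-oriented weighting-coefficient scheme is not reproduced here.

```python
# Standard bilinear interpolation for image scaling: each output sample is
# a distance-weighted average of the four surrounding input pixels.

def bilinear(img, y, x):
    """Sample an image (list of rows) at fractional coordinates (y, x)."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx   # blend along x, top row
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx   # blend along x, bottom row
    return top * (1 - dy) + bot * dy                  # blend along y

img = [[0, 10],
       [20, 30]]
print(bilinear(img, 0.5, 0.5))  # 15.0, the average of the four neighbors
```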

Journal Article
TL;DR: By using the limited dominance-based rough set approach, higher accuracies of approximations can be obtained than with the traditional dominance-based rough set in the incomplete decision system.
Abstract: In this paper, we introduce a new rough set approach, called the limited dominance-based rough set model, for the incomplete decision system. The limited dominance relation differs from the traditional dominance relation in the incomplete environment because we assume that an unknown value can only be compared with the maximal or minimal value in the domain of the corresponding attribute. By using the limited dominance-based rough set approach, we can obtain higher accuracies of approximations than with the traditional dominance-based rough set in the incomplete decision system. Furthermore, the problem of knowledge reduction in terms of the limited dominance relation is also addressed. Some numerical examples are employed to substantiate the conceptual arguments.

Journal ArticleDOI
TL;DR: This paper considers a key-insulated signature scheme for certifying anonymous public keys of vehicles to the system model to issue on-the-fly anonymous public key certificates to vehicles by road-side units and shows that the proposed protocol has better performance than other protocols based on group signature schemes.
Abstract: As vehicular communications bring the promise of improved road safety and optimized road traffic through cooperative systems applications, it becomes a prerequisite to make vehicular communications secure for the successful deployment of vehicular ad hoc networks. In this paper, we propose an efficient authentication protocol with anonymous public key certificates for secure vehicular communications. The proposed protocol follows a system model to issue on-the-fly anonymous public key certificates to vehicles by road-side units. In order to design an efficient authentication protocol, we consider a key-insulated signature scheme for certifying anonymous public keys of vehicles to the system model. We demonstrate experimental results to confirm that the proposed protocol has better performance than other protocols based on group signature schemes.

Journal ArticleDOI
TL;DR: A texture tiling approach that combines poly-cube-maps and Wang tiles for 3D models is proposed and the results show that the proposed approach leads to seamless texture mapping on 3D models.
Abstract: In mapping textures onto 3D models, it is essential to eliminate all seams and avoid excessive distortions. To achieve these goals, texture tiling provides an alternative approach for texturing surfaces. In this paper, a texture tiling approach that combines poly-cube-maps and Wang tiles for 3D models is proposed. The poly-cube of a 3D model is first constructed automatically and a tiling mechanism is then used to fill the tiles on the polycube. Finally, rectangular cells which are transformed from the polycube are generated and textures are mapped onto the 3D model. The results show that the proposed approach leads to seamless texture mapping on 3D models.

Journal Article
TL;DR: In this summarization approach, context-sensitive information is used to determine sentiment polarity while opinioned-feature frequency is used to determine feature scores; the method improved the accuracy of the calculated feature scores and outperformed existing methods in determining the sentiment polarity of context-sensitive words.
Abstract: With the steadily increasing volume of e-commerce transactions, the amount of user-provided product reviews on the Web is increasing. Because many customers feel that they can purchase products based on the experiences of others, obtainable through product reviews, the review summarization process has become important. In particular, feature-based product review summarization is needed in order to satisfy the detailed needs of some customers. To achieve such summarization, numerous techniques using natural language processing (NLP), machine learning, and statistical approaches that can evaluate product features within a collection of review documents have been studied. Many of these techniques require sentiment analysis or feature scoring methods. However, existing sentiment analysis methods are limited when determining the sentiment polarity of context-sensitive words, and existing feature scoring methods are limited when only the overall user score is used to evaluate individual product features. In our summarization approach, context-sensitive information is used to determine sentiment polarity while opinioned-feature frequency is used to determine feature scores. Based on experiments with actual review data, our method improved the accuracy of the calculated feature scores and outperformed existing methods.

Journal Article
TL;DR: A new optimal arbiter is designed, a set of optimal Boolean functions and the corresponding circuit for it are proposed, and it is shown that the arbitration Boolean functions derived are optimal (simplest).
Abstract: A new optimal arbiter is designed. We propose a set of optimal Boolean functions and the corresponding circuit for it, and show that the arbitration Boolean functions derived are optimal (simplest). This new arbiter is fair for any input combination and faster than all previous arbiters we know of. Using Synopsys design tools with TSMC 0.18μm technology, the design results show that our arbiter achieves a 22.8% improvement in execution time and a 39.1% reduction in cost (area) compared with the existing fastest arbiter, SA [7]. Because of this small arbiter's high performance, it is extremely useful for the realization of NoC routers, MPSoC arbitration, and ultra-high-speed switches. This new arbiter is the subject of a patent application in the R.O.C. (application No.: 0972020612-0).
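For contrast with the optimized design, the textbook fixed-priority arbiter can be expressed as simple Boolean functions: request i is granted iff it is asserted and no higher-priority request is. This baseline is unfair under sustained load, which is exactly what fair arbiters such as the paper's are built to avoid.

```python
# Fixed-priority arbiter in Boolean form: grant[i] = req[i] AND NOT
# (req[0] OR ... OR req[i-1]). Index 0 is the highest priority.

def arbitrate(req):
    """req: list of request bits; returns the one-hot (or all-zero) grant vector."""
    grant = []
    blocked = 0                      # OR of all higher-priority requests so far
    for r in req:
        grant.append(r & ~blocked & 1)
        blocked |= r
    return grant

print(arbitrate([0, 1, 1, 0]))  # [0, 1, 0, 0]: requester 1 wins
print(arbitrate([0, 0, 0, 0]))  # [0, 0, 0, 0]: no requests, no grant
```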

Journal ArticleDOI
TL;DR: Experimental results suggest that the proposed L-Fisherfaces provides a better representation and achieves higher accuracy in face recognition.
Abstract: An appearance-based face recognition approach called L-Fisherfaces is proposed in this paper. By using Local Fisher Discriminant Embedding (LFDE), the face images are mapped into a face subspace for analysis. Different from Linear Discriminant Analysis (LDA), which effectively sees only the Euclidean structure of the face space, LFDE finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. Different from Locality Preserving Projections (LPP) and Unsupervised Discriminant Projections (UDP), which ignore the class label information, LFDE searches for the projection axes on which data points of different classes are far from each other while data points of the same class remain close to each other. We compare the proposed L-Fisherfaces approach with PCA, LDA, LPP, and UDP on three different face databases. Experimental results suggest that the proposed L-Fisherfaces approach provides a better representation and achieves higher accuracy in face recognition.

Journal Article
TL;DR: In this paper, the authors proposed an energy-aware, cluster-based routing algorithm (ECRA) for wireless sensor networks to maximize the network's lifetime, which selects some nodes as cluster-heads to construct Voronoi diagrams and rotates the cluster-head to balance the load in each cluster.
Abstract: Cluster-based routing protocols have special advantages that help enhance both scalability and efficiency of the routing protocol. Likewise, finding the best way to arrange clustering so as to maximize the network's lifetime is now an important research topic in the field of wireless sensor networks. In this paper, we present an Energy-Aware, Cluster-Based Routing Algorithm (ECRA) for wireless sensor networks to maximize the network's lifetime. The ECRA selects some nodes as cluster-heads to construct Voronoi diagrams and rotates the cluster-head to balance the load in each cluster. A two-tier architecture (ECRA-2T) is also proposed to enhance the performance of the ECRA. The simulations show that both the ECRA-2T and ECRA algorithms outperform other routing schemes such as direct communication, static clustering, and LEACH. This strong performance stems from the fact that the ECRA and ECRA-2T rotate intra-cluster-heads to balance the load to all nodes in the sensor networks. The ECRA-2T also leverages the benefits of short transmission distances for most cluster-heads in the lower tier.
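The cluster-head rotation idea can be sketched by electing the node with the most residual energy as head each round, so the extra head workload is spread across the cluster. The energy costs below are invented numbers, not ECRA's actual radio model.

```python
# Cluster-head rotation for load balancing (sketch). Each round the node
# with the most residual energy becomes head and pays a higher energy cost;
# other nodes pay the cheaper member cost.

def run_rounds(energy, rounds, head_cost=3, member_cost=1):
    """energy: dict node -> residual energy. Returns the head chosen each round."""
    heads = []
    for _ in range(rounds):
        head = max(energy, key=energy.get)   # rotate to the richest node
        heads.append(head)
        for node in energy:
            energy[node] -= head_cost if node == head else member_cost
    return heads

energy = {"a": 10, "b": 9, "c": 8}
print(run_rounds(energy, 3))  # ['a', 'b', 'a'] — headship rotates with energy
```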

Journal ArticleDOI
TL;DR: This paper presents a novel example-based deformation transfer approach to solve the problem of how to transfer the deformation of a source deforming model to a target model, with the aid of a few target examples.
Abstract: Deformation transfer is the transfer of the deformation of a source deforming model to a target model. Not only the pose but also the detailed deformations of the source model are transferred to the target model, causing the characteristic deformations of the target model to be mixed with those of the source model. This leads to unnatural results. In this paper, we present a novel example-based deformation transfer approach to solve this problem. With the aid of a few target examples, the characteristic deformations of the transferred target models are recovered. We evaluate our approach with several full-body articulated models. The experimental results show that our approach can generate more natural and convincing deformation transfer results than other approaches.

Journal Article
TL;DR: It is shown that the Kim-Koc protocol is vulnerable to impersonation, guessing, and stolen-verifier attacks, and improvements are proposed to increase the security level of the protocol.
Abstract: In 2008, Kim and Koc proposed a secure hash-based strong-password authentication protocol using one-time public key cryptography. They claimed that the protocol was secure against guessing, stolen-verifier, replay, denial-of-service, and impersonation attacks. However, we show that the protocol is vulnerable to impersonation, guessing, and stolen-verifier attacks. We propose improvements to increase the security level of the protocol.

Journal Article
TL;DR: In this article, a semi-probabilistic routing (SPR) protocol is proposed to address routing problem in ICMANs, which takes into account information about host mobility and connectivity changes to produce estimates enabling accurate message forwarding.
Abstract: ICMANs (Intermittently Connected Mobile Ad hoc Networks) are wireless networks in which, most of the time, there does not exist a complete path from the source to the destination. In this paper, a novel routing approach, SPR (Semi-Probabilistic Routing), is proposed to address the routing problem in ICMANs. SPR takes into account information about host mobility and connectivity changes to produce estimates enabling more accurate message forwarding. These include maintaining proactive routing zones for stable local topology to minimize blind message forwarding, and identifying potential carriers to maximize message delivery despite network partitions and intermittent connectivity. That information is also utilized to manage buffer space more efficiently. Under energy-constrained circumstances, an energy-aware delivery-probability model is adopted to conserve the energy of nodes that provide critical intermittently connected paths. We compare the performance of our protocol against others, using a mobility model validated with real-world traces.
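The delivery-probability estimates that such protocols maintain can be sketched with update rules in the style of PRoPHET: reinforce the estimate on contact, decay it during disconnection. The constants and formulas below are illustrative assumptions, not SPR's actual model.

```python
P_INIT = 0.75   # reinforcement on contact (assumed constant)
GAMMA = 0.98    # aging factor per time unit (assumed constant)

def update_on_encounter(p_old: float) -> float:
    """Raise the delivery-probability estimate when two hosts meet."""
    return p_old + (1.0 - p_old) * P_INIT

def age(p: float, k: int) -> float:
    """Decay the estimate after k time units without contact."""
    return p * GAMMA ** k

# A host met the destination once, then was out of range for 10 time units.
p = update_on_encounter(0.0)
p = age(p, 10)
```

A relay is chosen as a carrier only when its estimate for the destination exceeds the current holder's, which bounds blind forwarding.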

Journal Article
TL;DR: EWC (Event-driven Web service Composer) is the tool for supporting the proposed quality-driven web service composition methodology and it is illustrated how users can benefit from EWC to provide ubiquitous services transparently, with an example of monitoring diabetics.
Abstract: Ubiquitous computing makes computational services pervasive. Web services are an efficient technology for providing interoperability between components dispersed across networks and various devices, regardless of platforms and languages, and are thus widely used to develop ubiquitous computing applications. In order to provide transparent services in a ubiquitous environment, we need to consider various quality constraints during the execution of web services, the selection of contexts to use, and the determination of operational devices. In this paper, we define a quality model for ubiquitous computing applications, and propose a quality-driven web service composition methodology. EWC (Event-driven Web service Composer) is our tool for supporting the proposed methodology. We illustrate how users can benefit from EWC to provide ubiquitous services transparently, with an example of monitoring diabetics.

Journal Article
TL;DR: This paper proposes a workflow enactment event logging mechanism supporting three types of event log information ― workcase event type, activity event type and workite m event type so as to be embedded into the e-Chautauqua system that has been recently developed by the CTRL research group as a very large scale workflow management system.
Abstract: As workflow/BPM systems and their applications prevail across a wide variety of industries, we can easily predict not only that very large-scale workflow systems (VLSW) will become more prevalent and much more needed in the markets, but also that the quality of workflow (QOW) and its related topics will become issues in the near future. Particularly, among QOW issues such as workflow knowledge/intelligence, workflow validation, workflow verification, workflow mining, and workflow rediscovery problems, the toughest and most challenging issue is the workflow knowledge mining and discovery problem, which is based upon workflow enactment event history information logged by workflow engines equipped with a certain logging mechanism. Therefore, having an efficient event logging mechanism is the most valuable asset as well as the A and Ω of those QOW issues and solutions. In this paper, we propose a workflow enactment event logging mechanism supporting three types of event log information ― workcase event type, activity event type, and workitem event type ― and describe the implementation details of the mechanism so as to be embedded into the e-Chautauqua system, which has recently been developed by the CTRL research group as a very large scale workflow management system. Finally, we summarize the implications of the mechanism and its log information for workflow knowledge mining and discovery techniques.
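A minimal sketch of what enactment log records covering the three event types might look like; the field names and values are hypothetical, not the e-Chautauqua schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorkflowEvent:
    event_type: str      # "workcase" | "activity" | "workitem"
    entity_id: str
    timestamp: datetime
    state: str           # e.g. "started", "completed"

# A fragment of an enactment history as a workflow engine might log it:
# a case starts, an activity within it starts, and one performer's
# workitem for that activity completes.
log = [
    WorkflowEvent("workcase", "case-17", datetime(2010, 1, 5, 9, 0), "started"),
    WorkflowEvent("activity", "approve-order", datetime(2010, 1, 5, 9, 2), "started"),
    WorkflowEvent("workitem", "approve-order/alice", datetime(2010, 1, 5, 9, 5), "completed"),
]
```

Mining and rediscovery techniques consume exactly this kind of ordered event stream to reconstruct control flow and performer behavior.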

Journal ArticleDOI
TL;DR: The fuzzy inventory model is developed based on Chang et al.'s model by fuzzifying the rate of interest charges, the rateof interest earned, and the deterioration rate into the triangular fuzzy number by utilizing the fuzzy set theory.
Abstract: The inventory problem associated with trade credit is a popular topic in which interest income and interest payments are important issues. Most studies related to trade credit assume that the interest rate is both fixed and predetermined. However, in the real market, many factors such as financial policy, monetary policy and inflation, may affect the interest rate. Moreover, within the environment of merchandise storage, some distinctive factors arise which ultimately affect the quality of products such as temperature, humidity, and storage equipment. Thus, the rate of interest charges, the rate of interest earned, and the deterioration rate in a real inventory problem may be fuzzy. In this paper, we deal with these three imprecise parameters in inventory modeling by utilizing the fuzzy set theory. We develop the fuzzy inventory model based on Chang et al.'s [1] model by fuzzifying the rate of interest charges, the rate of interest earned, and the deterioration rate into the triangular fuzzy number. Subsequently, we discuss how to determine the optimal ordering policy so that the total relevant inventory cost, in the fuzzy sense, is minimal. Furthermore, we show that Chang et al.'s [1] model (the crisp model) is a special case of our model (the fuzzy model). Finally, numerical examples are provided to illustrate these results.
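The triangular fuzzy numbers mentioned above can be illustrated briefly. The signed-distance formula below is one common defuzzification choice and the example rate is invented, so this is a sketch rather than the paper's actual derivation.

```python
def signed_distance(a: float, b: float, c: float) -> float:
    """Signed-distance defuzzification of the triangular fuzzy number (a, b, c),
    where b is the most plausible value and [a, c] the support."""
    return (a + 2.0 * b + c) / 4.0

# An imprecise annual deterioration rate, "about 10%" (invented numbers):
theta = (0.08, 0.10, 0.13)
crisp_theta = signed_distance(*theta)   # ≈ 0.1025
```

Setting a = b = c recovers the crisp value, which mirrors the paper's observation that the crisp model is a special case of the fuzzy one.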

Journal Article
TL;DR: A new algorithms decoding the (23, 12, 7, 7) and the (41, 21, 9) Quadratic Residue (QR) codes are presented, based on one-to-one mapping between the syndromes "S 1 " and correctable error patterns, suitable for DSP software implementation.
Abstract: In this paper, new algorithms for decoding the (23, 12, 7) and the (41, 21, 9) Quadratic Residue (QR) codes are presented. The key idea behind this decoding technique is a one-to-one mapping between the syndrome S1 and correctable error patterns. Such algorithms determine the error locations directly by lookup tables without multiplication operations over a finite field. Moreover, a method utilizing a shift-search algorithm to dramatically reduce the memory requirement is given for decoding QR codes. The algorithms have been verified through a software simulation programmed in the C language. The new approach is modular, regular, and naturally suitable for DSP software implementation.
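The syndrome-to-error-pattern lookup idea can be illustrated on a toy code. The sketch below uses the (7, 4) Hamming code, where the syndrome directly equals the error position, so the "lookup table" degenerates to the identity map; the paper's (23, 12, 7) and (41, 21, 9) QR decoders are far more involved.

```python
def syndrome(word: int) -> int:
    """XOR of the positions (1..7) of the set bits; 0 for a valid codeword.

    This encodes the parity-check matrix whose columns are the binary
    representations of the bit positions, so a single error at position
    p yields syndrome p -- no finite-field multiplication needed.
    """
    s = 0
    for pos in range(1, 8):
        if word & (1 << (pos - 1)):
            s ^= pos
    return s

def decode(word: int) -> int:
    """Correct a single error by direct syndrome lookup (identity table here)."""
    s = syndrome(word)
    return word ^ (1 << (s - 1)) if s else word

codeword = 0b0110100             # ones at positions 3, 5, 6: 3 ^ 5 ^ 6 == 0
received = codeword ^ 0b0000010  # channel flips position 2
assert decode(received) == codeword
```

For multi-error-correcting codes like the QR codes above, the table maps each syndrome to a whole error pattern rather than a single position, which is where the shift-search memory reduction pays off.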