Missing values are a crucial problem in big data analysis, hindering data integrity. Various regression methods have been employed for estimating missing values, but they exhibit significant prediction errors. To ensure the integrity of data collected from a wood sensor monitoring system and to address data loss and anomalies, we propose a missing value estimation method based on the random forest regression model. This study focuses on the environmental data surrounding the wood subjects, including temperature, relative humidity, and absolute humidity. We simulate a number of methods on the data for comparison purposes. The experimental results indicate that the random forest regression model we developed for estimating missing moisture content values yields favourable outcomes with consistently low estimation errors. © 2024 IEEE.
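To make the imputation step concrete, here is a minimal sketch using scikit-learn's RandomForestRegressor: a model trained on rows where the target is known fills in the rows where it is missing. The DataFrame layout, column names, and synthetic data are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: impute missing sensor readings with a random forest,
# assuming a pandas DataFrame with (hypothetical) columns
# "temperature", "rel_humidity", "abs_humidity".
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def rf_impute(df: pd.DataFrame, target: str, features: list[str]) -> pd.Series:
    """Fill NaNs in `target` using the remaining columns as predictors."""
    known = df[target].notna()
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(df.loc[known, features], df.loc[known, target])
    filled = df[target].copy()
    filled[~known] = model.predict(df.loc[~known, features])
    return filled

# Usage with synthetic data:
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temperature": rng.normal(20, 5, 500),
    "rel_humidity": rng.uniform(30, 90, 500),
})
df["abs_humidity"] = 0.1 * df.temperature + 0.05 * df.rel_humidity + rng.normal(0, 0.2, 500)
df.loc[rng.choice(500, 50, replace=False), "abs_humidity"] = np.nan
df["abs_humidity"] = rf_impute(df, "abs_humidity", ["temperature", "rel_humidity"])
```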
This study conducts an in-depth verification and analysis of the authenticity of information in WeChat group chats by integrating Latent Dirichlet Allocation (LDA) topic modeling with advanced Natural Language Processing (NLP) techniques such as XLNet and BERT. LDA reveals the thematic structure of group chat content, and the integration of XLNet and BERT enables a comprehensive analysis of the information. Experimental results demonstrate that our model performs exceptionally well in identifying the authenticity of information, confirming the effectiveness of this method in the domain of social media information verification. This research not only deepens our understanding of the authenticity of information in WeChat group chats but also provides a more effective tool for social media platforms to detect and prevent the spread of false information. It opens up a new perspective on social media information authentication research and points out future research directions. © 2024 IEEE.
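As a hedged illustration of the LDA stage only, the sketch below recovers per-message topic mixtures from a few toy messages with scikit-learn; the messages, topic count, and the idea of feeding the mixtures alongside transformer embeddings into a downstream authenticity classifier are assumptions for illustration.

```python
# Minimal sketch of the LDA stage: recover topic structure from chat
# messages with scikit-learn. The messages and topic count are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "the vaccine is free at the community clinic tomorrow",
    "forward this to ten friends to unlock your prize",
    "city government announces new subway line opening",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(messages)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Per-message topic mixtures; these could be fed alongside BERT/XLNet
# embeddings into a downstream authenticity classifier.
doc_topics = lda.transform(X)
for topic in lda.components_:
    top = [vec.get_feature_names_out()[i] for i in topic.argsort()[-5:]]
    print("topic words:", top)
```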
This study explores the complex relationship between wood moisture content and two environmental factors, temperature and relative humidity. Using a novel Autoregressive Polynomial Regression Model (APRM), we analyzed data from sensors placed at various positions in reconstituted bamboo and pine planks. The APRM, adept at handling polynomial and interaction terms, revealed a nuanced, non-linear relationship between moisture content and environmental conditions. The findings underscore significant material-specific differences in response to environmental changes. This study not only contributes to the understanding of wood-environment interactions but also demonstrates the efficacy of the APRM in environmental science, providing a foundational approach for future research in this field. © 2024 IEEE.
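The abstract does not give the APRM specification; the sketch below only illustrates the general shape of such a model, a lagged moisture term combined with degree-2 polynomial and interaction features of temperature and relative humidity, fitted on synthetic data. The lag order and polynomial degree are assumptions.

```python
# Sketch of an autoregressive polynomial regression in the spirit of APRM:
# lagged moisture content plus polynomial/interaction terms of temperature
# and relative humidity.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
n = 300
temp = 20 + 5 * np.sin(np.linspace(0, 10, n)) + rng.normal(0, 0.5, n)
rh = 60 + 10 * np.cos(np.linspace(0, 10, n)) + rng.normal(0, 1.0, n)
mc = 8 + 0.05 * rh + 0.01 * temp * rh / 100 + rng.normal(0, 0.1, n)  # synthetic moisture content

lag = 1
X_env = np.column_stack([temp[lag:], rh[lag:]])
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X_env)
X = np.column_stack([mc[:-lag], X_poly])     # AR term + polynomial/interaction terms
y = mc[lag:]
model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))
```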
To maintain the life of building materials, it is critical to understand the hygrothermal transfer mechanisms (HTM) between walls and the layers inside the walls. Because weather data are extremely unstable, actual data models of the HTM, built from data collected in real buildings using modern sensor technologies, can differ greatly from theoretical models, particularly for wood building materials. In this paper, we consider a variety of data analysis tools for hygrothermal transfer features. A novel approach for peak and valley detection is proposed based on discrete differentiation of the original data. Beyond measuring peak and valley delays for the HTM, we propose a cross-correlation analysis to obtain the general delay between two daily time series. Furthermore, the seasonal pattern of the hygrothermal transfer, combined with the correlation analysis, reveals a reasonable relationship between the delays and the indoor and outdoor climates. © 2023 by the authors.
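The cross-correlation delay estimate can be sketched in a few lines: the lag at which the normalized cross-correlation between two series peaks. The sampling interval and toy signals below are assumptions for illustration.

```python
# Sketch of the cross-correlation delay estimate: the lag at which the
# normalized cross-correlation between two daily series peaks.
import numpy as np

def daily_delay(outdoor: np.ndarray, indoor: np.ndarray) -> int:
    """Return the lag (in samples) at which `indoor` best follows `outdoor`."""
    a = (outdoor - outdoor.mean()) / outdoor.std()
    b = (indoor - indoor.mean()) / indoor.std()
    corr = np.correlate(b, a, mode="full")          # lags from -(n-1) to n-1
    lags = np.arange(-len(a) + 1, len(a))
    return int(lags[np.argmax(corr)])

# Example: a sinusoidal "day" and a copy shifted by 6 samples.
t = np.linspace(0, 2 * np.pi, 96)                   # e.g. 15-minute readings
outdoor = np.sin(t)
indoor = np.roll(outdoor, 6)
print(daily_delay(outdoor, indoor))                 # ~6
```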
Due to the uncertainty and inconsistency of measurement data from multiple sensors in the same space, a multi-sensor data fusion algorithm is used to fuse the measurement data of multiple nodes. We propose a multi-Bayesian estimation method for fusing multi-sensor data, and combine Bayesian estimation with an ARIMA model to predict the ambient temperature of bamboo and wood building materials. The method exploits data redundancy to reduce uncertainty and improve the reliability of subsequent predictions. © 2023 ACM.
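A common form of Bayesian multi-sensor fusion, which may differ from the paper's exact formulation, is inverse-variance weighting under independent Gaussian noise; the sketch below shows it, with sensor variances assumed known. The fused series could then be passed to an ARIMA model (e.g. statsmodels' ARIMA) for prediction.

```python
# Sketch of multi-sensor fusion by Bayesian (inverse-variance) weighting,
# assuming independent Gaussian sensor noise.
import numpy as np

def bayes_fuse(readings: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
    """Fuse one-instant readings from several sensors.

    Returns the posterior mean and variance; more reliable (lower-variance)
    sensors receive larger weights.
    """
    w = 1.0 / variances
    fused_var = 1.0 / w.sum()
    fused_mean = fused_var * (w * readings).sum()
    return fused_mean, fused_var

# Three temperature sensors observing the same spot:
mean, var = bayes_fuse(np.array([21.3, 21.8, 20.9]), np.array([0.04, 0.25, 0.09]))
print(mean, var)
```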
This review comprehensively assesses and synthesizes the existing literature on data-driven methods for studying hygrothermal transfer in building exterior walls. An exhaustive search strategy was used to identify relevant articles from the Web of Science and Scopus databases. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, 20 eligible studies were included. The most commonly used data-driven methods are traditional neural networks, such as Multi-Layer Perceptrons and 2D Convolutional Neural Networks. The results suggest that neural network models hold potential for accurately predicting hygrothermal attributes of building exteriors. However, a conspicuous gap in the literature is the absence of studies drawing direct comparisons between data-driven methodologies and conventional simulation techniques. © 2023 ACM.
In the era of big data, applying data science and technology to big data analysis is indispensable for solving big data problems. Even with the advancement of big data technologies, we face many problems when dealing with big data and its studies. Big data are becoming "bigger" and more complex than before, with large numbers of attributes and features in various formats and styles. On the other hand, many data analysis techniques have been proposed for various application domains and purposes. This complicates the task of choosing an appropriate method for the right problem and the right data. In this paper, the author proposes an approximation approach to this problem by discussing ways of identifying patterns in the original data, be they patterns of data features or of analysis methods. The author applies the idea to a case of fault detection in a household photovoltaic system. © 2023 IEEE.
As a stochastic optimiser, the firefly algorithm (FA) has been successfully and widely used in solving various optimisation problems. Recent research shows that the standard FA does not sufficiently balance exploration and exploitation; especially in high-dimensional problems, it easily falls into local optima and converges prematurely. To overcome these problems, we propose DMTgFA, which combines three strategies: a dynamic step length setting strategy (DS), a non-elite two-way guidance model (TG), and an elite dimensional mutation strategy (DM). The dynamic step length setting strategy accelerates convergence, while the non-elite two-way guidance model and the elite dimensional mutation strategy cooperate to balance global and local search. Experimental results show that DMTgFA has stronger optimisation ability and faster convergence than other state-of-the-art FA variants.
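For context, the standard FA movement rule with a dynamic step length looks roughly as follows; the linear decay schedule here is an assumed illustration of the DS idea, not DMTgFA's actual strategy.

```python
# Illustrative sketch of the firefly movement rule with a dynamic step
# length: alpha decays over iterations, shifting from exploration to
# exploitation. The decay schedule is an assumption, not DMTgFA's.
import numpy as np

def firefly_step(xi, xj, t, t_max, beta0=1.0, gamma=1.0, alpha0=0.5, rng=None):
    """Move firefly xi toward a brighter firefly xj at iteration t."""
    rng = rng or np.random.default_rng()
    r2 = np.sum((xi - xj) ** 2)
    beta = beta0 * np.exp(-gamma * r2)          # attractiveness decays with distance
    alpha = alpha0 * (1.0 - t / t_max)          # dynamic step length (assumed linear decay)
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)
```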
In the field of arrhythmia classification, classification accuracy has always been a research hotspot. However, the noise in electrocardiogram (ECG) signals, the class imbalance of ECG data, and the complexity of the spatiotemporal features of ECG data all affect the accuracy of ECG arrhythmia classification. In this article, a novel DSCSSA ECG arrhythmia classification framework is proposed. First, the discrete wavelet transform (DWT) is used to denoise and reconstruct ECG signals to improve feature extraction. Then, the synthetic minority oversampling technique (SMOTE) is used to synthesize new minority-class ECG samples to reduce the impact of data imbalance on classification. Finally, a convolutional neural network (CNN) and a sequence-to-sequence (Seq2Seq) classification model with an attention mechanism, using bidirectional long short-term memory (Bi-LSTM) as the codec, are used for arrhythmia classification; the model weights heartbeat features according to their importance and improves the extraction and filtering of the spatiotemporal features of heartbeats. In the classification of five heartbeat types, normal beat (N), supraventricular ectopic beat (S), ventricular ectopic beat (V), fusion beat (F), and unknown beat (Q), the proposed method achieved an overall accuracy (OA) of 99.28% and a Macro-F1 score of 95.70% on the public Massachusetts Institute of Technology - Boston's Beth Israel Hospital (MIT-BIH) arrhythmia database. These methods help improve the effectiveness and clinical reference value of computer-aided automatic ECG classification and diagnosis.
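The DWT denoising stage can be sketched with PyWavelets: decompose the signal, soft-threshold the detail coefficients, and reconstruct. The wavelet choice, decomposition level, and universal threshold below are common defaults, not necessarily the paper's settings.

```python
# Sketch of the DWT denoising stage, assuming the PyWavelets package.
import numpy as np
import pywt

def dwt_denoise(signal: np.ndarray, wavelet: str = "db6", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```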
The chicken swarm optimization (CSO) algorithm is a new swarm intelligence optimization (SIO) algorithm that has been widely used in many engineering domains. However, the CSO algorithm has two apparent problems: slow convergence and difficulty in reaching globally optimal solutions. To address these two problems, we propose an adaptive fuzzy chicken swarm optimization (FCSO) algorithm. The proposed FCSO uses a fuzzy system to adaptively adjust the number of chickens and the random factors of the CSO algorithm, achieving an optimal balance between exploitation and exploration. We integrate the cosine function into the FCSO to compute the position updates of roosters and improve convergence speed. We compare the FCSO with eight commonly used, state-of-the-art SIO algorithms in both low- and high-dimensional spaces, and verify it with the nonparametric Friedman test. The results of experiments on the 30 black-box optimization benchmarking (BBOB) functions demonstrate that the FCSO outperforms the other SIO algorithms in both convergence speed and optimization accuracy. To further test its applicability, we apply the FCSO to four typical engineering problems with constraints on the optimization processes. The results show that the FCSO achieves better optimization accuracy than the standard CSO algorithm.
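To indicate where the cosine function could enter the rooster update, here is a heavily hedged sketch: the variance rule is the standard CSO rooster update, while the cosine-decay factor is an assumed illustration, not FCSO's published modulation.

```python
# Hedged sketch of a rooster position update in CSO with a cosine-modulated
# factor; the exact modulation used by FCSO is not reproduced here.
import numpy as np

def rooster_update(x, f_self, f_other, t, t_max, eps=1e-12, rng=None):
    rng = rng or np.random.default_rng()
    # Standard CSO: variance shrinks when the rooster is already fitter.
    sigma2 = 1.0 if f_self <= f_other else np.exp((f_other - f_self) / (abs(f_self) + eps))
    scale = np.cos(np.pi * t / (2 * t_max))   # assumed cosine decay over iterations
    return x * (1.0 + scale * rng.normal(0.0, np.sqrt(sigma2), size=x.shape))
```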
Power generation prediction for residential photovoltaic (PV) systems has become an increasingly important topic as such energy sources are widely adopted in daily life. In this paper, the four seasons are identified more scientifically by studying the variation of solar altitude angles over a year, and the meteorological factors hidden in the data collected from a PV system are extracted by clustering and used in the model. Combining the advantages of online learning and transfer learning, an online transfer learning model is developed to predict power generation. Our experimental results show that the proposed online transfer learning model outperforms the other learning methods. © 2021 ACM.
In early spring 2020, Covid-19 was categorized as a pandemic and has since infected several million people in many countries and claimed hundreds of thousands of lives. Various strict strategies and prevention measures, such as curfews and lockdowns of cities or entire countries, have been enforced by governments to mitigate the spread of the virus. While the results of these measures appeared promising for some countries, the same could not be said for others. This paper serves as an initial analysis of the effect of government-enforced strategies and safety measures on the transmission of Covid-19. We propose a three-stage periodic model, comprising a rise stage, a plateau stage, and a decline stage, to describe changes in the spread of Covid-19. The results support the proposed three-stage model. © 2020 ACM.
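One simple way to realize the three-stage idea, sketched below under assumed functional forms (the paper's model may differ), is a piecewise curve: exponential rise, constant plateau, exponential decline. The breakpoints and parameters here are illustrative.

```python
# Sketch of the three-stage idea: fit daily new-case counts with a piecewise
# model (rise, plateau, decline). Breakpoints and data are illustrative.
import numpy as np

def three_stage(t, t1, t2, r, peak, d):
    """Rise (exponential), plateau (constant), decline (exponential decay)."""
    rise = peak * np.exp(r * (t - t1))
    decline = peak * np.exp(-d * (t - t2))
    return np.where(t < t1, rise, np.where(t <= t2, peak, decline))

t = np.arange(0, 120)
cases = three_stage(t, 30, 70, 0.15, 1000, 0.08)
# scipy.optimize.curve_fit could estimate (t1, t2, r, peak, d) from real data.
```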
It is generally agreed that Knowledge Sharing (KS) is an effective process within organizational settings, and it is the cornerstone of many firms' Knowledge Management (KM) strategies. Despite the growing significance of KS for organizational competitiveness and performance, the difficulty of analyzing the level of KS makes it hard for KM to achieve the optimum level of KS. For these reasons, this study develops a conceptual model based on an IS theory well suited to evaluating the level of KS: the DeLone and McLean IS Success Model, which is grounded in Communication Theory and covers multiple perspectives for assessing Information Systems (IS). These dimensions make it a multidimensional measurement model suitable for determining the level of KS.
Dynamic Risk Management (RM) provides monitoring, recognition, assessment, and follow-up action to reduce risk whenever it rises. The main problem with dynamic RM, when applied to planning how unknown risks under unexpected conditions should be addressed in information systems, is to design a special control to recover from or avoid risks and attacks; such a control is proposed in this research. The methodology, called Dynamic Intelligent RM (DIRM), comprises four interactively linked phases: (1) aggregation of data and information, (2) risk identification, (3) RM using an optional control, and (4) RM using a special control. This study therefore investigated the use of artificial neural networks to improve risk identification via adaptive neuro-fuzzy inference systems and control specification using learning vector quantization. Further experimental investigations are needed to assess the results of DIRM under unexpected conditions in real environments.
The firefly algorithm (FA) is a recent heuristic intelligent optimization algorithm with excellent performance on many optimization problems. However, on some multimodal and high-dimensional problems, the algorithm easily falls into local optima. To avoid this phenomenon, this paper proposes an improved firefly algorithm with a proportional adjustment strategy for the parameters alpha and beta. Thirteen well-known benchmark functions are used to verify the performance of the proposed algorithm; the computational results show that it is more efficient than many other FA variants.
The RRT algorithm is widely used for high-dimensional path planning in dynamic environments and adapts well to the motion dynamics of mobile nodes. However, in large-scale wireless sensor networks (WSN), the RRT algorithm lacks stability and easily deviates from the optimal path. In this paper we propose a path planning algorithm called E-RRT to address these problems. The proposed method uses the obstacle coverage density to initialize the search area of the exploring random tree, and a gradually extended region to ensure that a path can be found. It also adopts a greedy algorithm to delete intermediate points from the path's point sequence to obtain an optimal path, and a quadratic Bezier curve to smooth the path for the mobile sensor node. The resulting path is short, collision-free, and smooth, satisfying the path planning requirements of mobile sensor nodes. The simulation results show that the E-RRT algorithm outperforms the RRT algorithm.
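The Bezier smoothing step can be sketched directly: each waypoint corner is replaced by a quadratic curve through the midpoints of its adjacent segments. The midpoint construction is a common choice and an assumption here, not necessarily E-RRT's exact scheme.

```python
# Sketch of the quadratic Bezier smoothing step for a waypoint path.
import numpy as np

def bezier2(p0, p1, p2, n=20):
    """Quadratic Bezier curve with control points p0, p1, p2."""
    t = np.linspace(0, 1, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def smooth_path(waypoints: np.ndarray, n: int = 20) -> np.ndarray:
    pts = [waypoints[0][None, :]]
    for i in range(1, len(waypoints) - 1):
        m0 = (waypoints[i - 1] + waypoints[i]) / 2   # midpoint before the corner
        m1 = (waypoints[i] + waypoints[i + 1]) / 2   # midpoint after the corner
        pts.append(bezier2(m0, waypoints[i], m1, n))
    pts.append(waypoints[-1][None, :])
    return np.vstack(pts)

path = np.array([[0, 0], [2, 0], [2, 2], [4, 2]], dtype=float)
print(smooth_path(path).shape)
```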
The most widely used method in recommendation systems is collaborative filtering, a critical step of which is to analyze a user's preferences and recommend products or services based on similarity analysis with other users' ratings. However, collaborative filtering is less usable when facing the 'cold start' problem, i.e. when few comments have been given on products or services. To tackle this problem, we propose an improved method that combines collaborative filtering and data classification. We use hotel recommendation data to test the proposed method, determining recommendation accuracy by rankings. Evaluations of the accuracies of the Top-3 and Top-10 recommendation lists are conducted using 10-fold cross-validation and ROC curves. The results show that the Top-3 hotel recommendation list produced by the combined method outperforms the Top-10 list under the cold start condition in most cases.
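The collaborative-filtering step described here follows a standard pattern; a minimal user-based sketch, with a toy rating matrix and cosine similarity as assumptions, looks like this:

```python
# Minimal sketch of the collaborative-filtering step: predict a user's
# rating for an item from similar users' ratings (cosine similarity).
import numpy as np

def predict_rating(R: np.ndarray, user: int, item: int, k: int = 2) -> float:
    """R is a users x items rating matrix with 0 meaning 'not rated'."""
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[user] / (norms * norms[user] + 1e-12)   # cosine similarity
    sims[user] = -np.inf                                  # exclude the user itself
    rated = np.where(R[:, item] > 0)[0]                   # neighbours who rated the item
    top = rated[np.argsort(sims[rated])[-k:]]
    w = sims[top]
    return float(w @ R[top, item] / (w.sum() + 1e-12))

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
print(predict_rating(R, user=0, item=2))
```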
Semantic analysis is an important part of natural language processing and has broad application in network information processing. This paper presents a semantic analysis framework based on a directed weighted graph, which is used for semantic classification. The methodology targets the semantic analysis of network-oriented information, particularly e-commerce user reviews. It treats formal semantic lexical items as semantic bodies, denoted by nodes, and uses weighted links to represent the relationships between the nodes. Directed links connect nodes whose semantic vocabularies are interrelated. A directed weighted graph is then constructed from the semantic nodes and their interrelationships. The experimental results and analysis show that the proposed method can classify semantics into different classes according to a path-length threshold.
Semantic and sentiment analysis plays an important role in natural language processing, especially in textual analysis, and has a wide range of applications in web information processing and management. This paper presents a sentiment analysis framework based on the directed weighted graph method, used for semantic classification of textual comments, i.e. user reviews collected from e-commerce websites. The directed weighted graph defines each formal semantic lexical item as a semantic body, denoted as a node in the graph. The directed links in the graph, representing the relationships between the nodes, connect nodes to each other with their weights. A directed weighted graph is then constructed from the semantic nodes and their interrelationships. The experimental results show that the proposed method can classify semantics into different classes based on computed path lengths against a threshold. © Springer International Publishing AG 2016.
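The path-length classification rule in this and the preceding abstract can be sketched with networkx: two semantic nodes fall into the same class when the weighted path length between them stays under a threshold. The node names, weights, and threshold below are illustrative assumptions.

```python
# Sketch of the classification rule on a directed weighted semantic graph.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("excellent", "good", 0.3),
    ("good", "satisfied", 0.4),
    ("poor", "terrible", 0.3),
])

def same_class(g: nx.DiGraph, a: str, b: str, threshold: float = 1.0) -> bool:
    try:
        return nx.shortest_path_length(g, a, b, weight="weight") <= threshold
    except nx.NetworkXNoPath:
        return False

print(same_class(G, "excellent", "satisfied"))   # True: 0.3 + 0.4 <= 1.0
print(same_class(G, "excellent", "terrible"))    # False: no path
```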
Recommendation systems aim to help users make decisions more efficiently. The most widely used method in recommendation systems is collaborative filtering, a critical step of which is to analyze a user's preferences and recommend products or services based on similarity analysis with other users' ratings. However, collaborative filtering is less usable when facing the "cold start" problem, i.e. when few comments have been given on products or services. To tackle this problem, we propose an improved method that combines collaborative filtering and data classification. We use hotel recommendation data to test the proposed method, determining recommendation accuracy by rankings. Evaluations of the accuracies of the Top-3 and Top-10 recommendation lists are conducted using 10-fold cross-validation and ROC curves. The results show that the Top-3 hotel recommendation list produced by the combined method outperforms the Top-10 list under the cold start condition in most cases.
With the development of e-commerce and the Internet, the number of items keeps growing, bringing a so-called information overload problem: it is hard for users to find the items they would be interested in. Recommender systems emerged in response to this problem, automatically discovering user interests from rating information. However, rating information is usually sparse compared with all possible ratings between users and items, which makes it hard to discover user interests, the most important part of a recommender system. In this paper, we propose a recommendation method, TT-Rec, that employs trust propagation and topic-level user interest expansion to predict user interests. TT-Rec uses a reputation-based method to weight users' influence on other users when propagating trust, and discovers user interests by expanding them at the topic level. In the evaluation, we use three metrics, MAE, Coverage, and F1, to evaluate TT-Rec through comparative experiments. The results show that TT-Rec performs well.
Mobile Social Networks (MSNs) are a kind of opportunistic network composed of a large number of mobile nodes with social characteristics. To date, prevalent community-based routing algorithms mostly select the node with the best social characteristics to forward messages, but they rarely consider the effect of community distribution on mobile nodes or the time-varying characteristics of the network. Used directly in mobile social networks, these algorithms usually result in high consumption of network resources and low delivery ratios. In this paper, we build a time-varying community-based network model and propose a community-aware message opportunistic transmission algorithm (CMOT). For inter-community message transmission, the CMOT chooses an optimal community path by comparing community transmission probabilities. Within a community, messages are forwarded according to the encounter probability between nodes. The simulation results show that the CMOT improves the message delivery ratio and significantly reduces network overhead compared with classical routing algorithms such as PRoPHET, MaxProp, Spray and Wait, and CMTS.
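For readers unfamiliar with encounter-probability forwarding, here is a hedged sketch of the PRoPHET-style bookkeeping that such intra-community decisions rely on; the constants follow common PRoPHET defaults and the dictionary representation is an assumption, not this paper's implementation.

```python
# Hedged sketch of PRoPHET-style delivery-predictability updates.
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

def on_encounter(P: dict, a: str, b: str) -> None:
    """Update a's delivery predictabilities when it meets b."""
    P[(a, b)] = P.get((a, b), 0.0) + (1 - P.get((a, b), 0.0)) * P_INIT
    # Transitivity: meeting b raises confidence in b's frequent contacts.
    for (x, c), p_bc in list(P.items()):
        if x == b and c != a:
            old = P.get((a, c), 0.0)
            P[(a, c)] = old + (1 - old) * P[(a, b)] * p_bc * BETA

def age(P: dict, k: int) -> None:
    """Decay all predictabilities after k time units without encounters."""
    for key in P:
        P[key] *= GAMMA ** k
```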
An opportunistic network is a wireless self-organizing network in which no fixed connection needs to be established between the source and destination nodes; communication depends on opportunistic node encounters. Classical message transmission algorithms include PRoPHET and MaxProp. In opportunistic networks with community characteristics, different message transmission strategies can be used within and between communities, which significantly improves the message delivery ratio; classical algorithms of this kind are CMTS and CMOT. In this paper, we propose an energy-efficient message forwarding algorithm (EEMF) for community-based opportunistic networks. When a message is transmitted, we consider not only the community characteristics but also the residual energy of each node. The simulation results show that the EEMF algorithm improves the message delivery ratio and significantly reduces network overhead compared with classical routing algorithms such as PRoPHET, MaxProp, CMTS, and CMOT. Meanwhile, the EEMF algorithm reduces node energy consumption and prolongs the network lifetime.
Business development or renovation introduces newer, more efficient routines and processes through the redesign or re-engineering of businesses, forming a set of business patterns. Business patterns encapsulate the best solutions for business practices and tasks, conforming to the business strategies of the enterprise. Nowadays, services with SOA (Service-Oriented Architecture) are increasingly important in implementing and supporting business routines and processes, and an enterprise that can encapsulate its SOA solutions into patterns makes its business more agile and effective. However, to automate the location of relevant instance services for business patterns with minimal human intervention, one has to address the semantic and operational difference between the description of a business pattern and that of an instance service, a gap between the two levels of description. In this paper, the authors introduce a conceptual modelling method to bridge this gap, using a semantic service description for a usage-contextual approach formalized with the conceptual graphs formalism. Most importantly, they evaluate this model to study its usability in practice.
Twitter is one of the biggest social networks in the world, and every day millions of tweets are posted and discussed, expressing various views and opinions. A large variety of research activities have studied how these opinions can be clustered and analyzed so that tendencies can be uncovered. Due to the inherent weaknesses of tweets, very short texts and very informal styles of writing, it is rather hard to analyze tweet data with good performance and accuracy. In this paper, we approach the problem from another angle, using a two-layer structure to analyze the Twitter data: LDA with topic map modelling. The experimental results demonstrate that this approach makes progress in Twitter data analysis. However, more experiments with this method are needed to ensure that accurate analytic results can be maintained.
Social commerce is a promising new paradigm of e-commerce. Given the open and dynamic nature of social media infrastructure, the governance structures of social commerce are usually realized through reputation mechanisms. However, the existing approaches to the prediction of trust in future interactions are based on personal observations and/or publicly shared information in social commerce applications. As a result, the indications are unreliable and biased because of limited first-hand information and stakeholder manipulation for personal strategic interests. Methods that extract trust values from social links among users can improve the performance of reputation mechanisms. Nonetheless, these links may not always be available and are typically sparse in social commerce, especially for new users. Thus, this study proposes a new graph-based comprehensive reputation model to build trust by fully exploiting the social context of opinions based on the activities and relationship networks of opinion contributors. The proposed model incorporates the behavioral activities and social relationship reputations of users to combat the scarcity of first-hand information and identifies a set of critical trust factors to mitigate the subjectivity of opinions and the dynamics of behaviors. Furthermore, we enhance the model by developing a novel deception filtering approach to discard "bad-mouthing" opinions and by exploiting a personalized direct distrust (risk) metric to identify malicious providers. Experimental results show that the proposed reputation model outperforms other trust and reputation models in most cases. (C) 2014 Elsevier Inc. All rights reserved.
In view of the current service development of the express delivery industry and the data quality problems experienced therein, we construct an index-based evaluation system for service quality in the express delivery industry through market investigation and data analysis. The system applies the analytic hierarchy process (AHP) to survey experts' opinions and obtain the index weights. The analytical evaluation of service quality for a specific express delivery company is then conducted with the fuzzy comprehensive evaluation method, generating a service satisfaction degree that the company can use to improve its overall service performance. By evaluating the results, this paper aims to establish a guideline for express delivery enterprises to improve their service quality, and to develop a novel method for service quality evaluation in this fast-growing industry.
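The two numerical steps, AHP weighting and fuzzy comprehensive evaluation, can be sketched as follows; the 3x3 comparison matrix, grade set, and membership degrees are illustrative assumptions, not the paper's survey data.

```python
# Sketch: AHP weights from a pairwise comparison matrix (principal
# eigenvector), then a fuzzy comprehensive evaluation.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # pairwise importance of 3 service criteria
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                      # AHP criterion weights

# Fuzzy evaluation matrix: each row gives a criterion's membership degrees
# over the grades (excellent, good, fair, poor).
R = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.4, 0.3, 0.2]])
B = w @ R                            # composite grade distribution
print("weights:", w.round(3), "grades:", B.round(3))
```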
In wireless sensor networks, the hardware limitations of sensor nodes cause high transmission failure rates. The density of nodes is usually increased to improve the quality of information transmission; however, the limited energy supply, storage, and communication bandwidth make it difficult to transfer large amounts of redundant sensory data. We therefore use data fusion technology to remove as much redundant data as possible before transmission. Data fusion has become a research hotspot in recent years. In this paper we propose a multilayer, multi-agent data fusion model and analyze its performance in three respects: hops, energy consumption, and network delay. The simulation experiments show that, when suitable parameters are selected, such as the network scale, the number and size of agents, and the data processing cost, the mobile agent model performs much better than the client/server model.
In the past few years, geo-spatial data quality has received increasing attention and concern. As more and more business decisions are made based on analytic results from geo-spatial data, low-quality data means wrong or inappropriate decisions, which can have substantial effects on a business's future. In this paper, we propose a framework that can systematically ensure and improve geo-spatial data quality throughout the whole life cycle of the data.
This paper reports the development and evaluation of a mobile-based telemedicine framework for remote monitoring of Parkinson's disease (PD) symptoms. The system consists of different measurement devices for remote collection, processing, and presentation of symptom data from advanced PD patients. Different numerical analysis techniques were applied to the raw symptom data to extract clinically relevant symptom information, which was then used in a machine learning process to be mapped to standard clinician-based measures. The methods for quantitative and automatic assessment of symptoms were then evaluated for their clinimetric properties, such as validity, reliability, and sensitivity to change. Results from several studies indicate that the methods had good metrics, suggesting that they are appropriate for quantitatively and objectively assessing the severity of motor impairments of PD patients.
E-health (or e-healthcare) aims to apply modern information and telecommunication technologies in the healthcare sector. Recently, more and more research attention has been paid to this area, from clinical data analysis to patient record management. While e-health initially dealt mainly with patient data analysis and disease diagnosis, nowadays various data sources, such as social networks, physicians' and patients' blogs, and monitoring data (like sensors attached to PD patients at home), are widely used to aid disease diagnosis by gathering richer expert knowledge and expertise, richer data (on patients, diseases, diagnoses, medical experiments, etc.) for analysis, and quicker, more timely access to various resources (e.g. telemedicine) pervasively.
With the rapid advancement of the web technology, more and more educational resources, including software applications for teaching/learning methods, are available across the web, which enables learners to access the learning materials and use various ways of learning at any time and any place. Researchers from both computer science and education are working together, collaboratively focusing on development of pedagogically enabling technologies which are believed to improve the infrastructure of education systems and processes, including curriculum development models, teaching/learning methods, management of educational resources, systematic organization of communication and dissemination of knowledge and skills required by and adapted to users. In this paper we address the following two aspects of systematic integration architecture of educational systems: 1) learning objects – a semantic description and organization of learning resources using the web service models and methods, and 2) learning services discovery and learning goals match for educational coordination and learning service planning.
With the rapidly increasing number of independently developed Web services that provide similar functionalities with varied quality of service (QoS), service composition is viewed as the problem of selecting component services in accordance with users' QoS requirements, known as the QoS-aware service composition problem. However, current solutions are unsuitable for most real-time decision-making service composition applications, which must obtain a relatively optimal result within a reasonable amount of time. These services are also unreliable (or even risky) given the open service-oriented environment. In this paper, we address these problems and propose a novel heuristic algorithm for the efficient and reliable selection of trustworthy services in a service composition. The proposed algorithm consists of three steps. First, a trust-based selection method filters out untrustworthy component services. Second, convex hulls are constructed to reduce the search space in the process of service composition. Finally, a heuristic global optimization approach is used to obtain a near-optimal solution. The results demonstrate that our approach obtains a close-to-optimal and reliable solution within a reasonable computation time.
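The convex-hull pruning step rests on a simple observation: for a linear aggregation of QoS attributes, the optimum over a finite candidate set is attained at a vertex of the candidates' convex hull, so interior candidates can be discarded. A hedged sketch under that assumption (two QoS attributes, random candidates, not the paper's exact construction):

```python
# Hedged sketch of the search-space reduction idea: for each service class,
# keep only candidates on the convex hull of the (cost, response-time)
# QoS plane, since interior points cannot be optimal for linear aggregates.
import numpy as np
from scipy.spatial import ConvexHull

def hull_filter(qos: np.ndarray) -> np.ndarray:
    """qos: candidates x attributes (lower is better). Returns kept rows."""
    if len(qos) <= 3:                 # a 2-D hull needs more points
        return qos
    hull = ConvexHull(qos)
    return qos[hull.vertices]

candidates = np.random.default_rng(2).uniform(0, 1, size=(50, 2))
print(len(hull_filter(candidates)), "of 50 candidates survive")
```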
Business development or renovation introduces newer, more efficient routines and processes through the redesign or re-engineering of businesses, forming a set of business patterns. Business patterns encapsulate the best solutions for business practices and tasks, conforming to the business strategies of the enterprise. Nowadays, services with SOA (Service-Oriented Architecture) are increasingly important in implementing and supporting business routines and processes, and an enterprise that can encapsulate its SOA solutions into patterns makes its business more agile and effective. However, to automate the location of relevant instance services for business patterns with minimal human intervention, one has to address the semantic and operational difference between the description of a business pattern and that of an instance service, a gap between the two levels of semantic description. In this paper, we propose a conceptual modeling method to bridge this gap, using a semantic service description for a usage-contextual approach formalized with the conceptual graphs formalism.
The inherent weaknesses of user rating data collected from the web, such as sparsity and cold start, have limited data analysis capability and prediction accuracy. To alleviate this problem, trust has been incorporated into collaborative filtering (CF) approaches with encouraging experimental results. In this paper, we propose a computational model for trust-based CF with a method to generate and propagate trust in a social network. We apply this method to measure trust in users' ratings of hotels and show its feasibility by comparing the test results with traditional CF methods, e.g. by Mean Absolute Error.
Recommender Systems (RS) aim to suggest to users items that they might like, based on users' opinions of items. In practice, information about users' opinions of items is usually sparse compared with the vast information about users and items, so it is hard to analyze and justify users' preferences, particularly those of cold start users. In this paper, we propose a trust model based on the user trust network, which is composed of the trust relationships among users. We also introduce the widely used conceptual model Topic Maps, with which we classify items into topics for recommender analysis. We combine trust relations among users with Topic Maps in a novel way to resolve the sparsity and cold start problems. The evaluation shows that our model and method achieve a good recommendation effect.
Making good use of Service Level Agreements (SLAs) is crucial for an enterprise, both to provide value-added products and services to customers and to protect the interests of the parties involved in business activities. Well-formed and effective structural representation and management of SLAs in conceptual modeling can greatly support the understanding and communication of service development and deployment, as well as the maintenance of quality of service. Existing specifications and structures for SLAs do not fully formalize or support the automatic and dynamic behavioral aspects needed within business enterprises, owing to a lack of study focusing on SLA templates and their contents, which are mostly written in natural language (NL). We address the issues of how to use conceptual models to describe the structures of SLAs and the various relationships between SLAs and their items, and hence to better depict business domains. Focusing on the contents, processes, and dependencies among SLAs, we aim to use the resulting conceptual model for service discovery, service delivery, and scheduling.
With the rapid advancement of web technology, more and more educational resources, including software applications for teaching/learning methods, are available across the web, which enables learners to access learning materials and use various ways of learning at any time and any place. Moreover, various web-based teaching/learning approaches have been developed during the last decade to enhance the capability of both educators and learners. In particular, researchers from both computer science and education are working together, collaboratively focusing on the development of pedagogically enabling technologies which are believed to improve the infrastructure of education systems and processes, including curriculum development models, teaching/learning methods, management of educational resources, systematic organization of communication, and dissemination of knowledge and skills required by and adapted to users. Despite this fast development, however, there are still great gaps between learning intentions, organization of supporting resources, management of educational structures, knowledge points to be learned and inter-knowledge-point relationships such as prerequisites, assessment of learning outcomes, and technical and pedagogic approaches. More concretely, the issues widely addressed in the literature include a) availability and usefulness of resources, b) smooth integration of various resources and their presentation, c) learners' requirements and supposed learning outcomes, d) automation of the learning process in terms of its schedule and interaction, and e) customization of the resources and agile management of the learning services for delivery, as well as necessary human interventions. Considering these problems, and bearing in mind the advanced web technology of which we should make full use, in this report we address the following two aspects of the systematic architecture of learning/teaching systems: 1) learning objects – a semantic description and organization of learning resources using the web service models and methods, and 2) learning services discovery and learning goals match for educational coordination and learning service planning.
Use of Service Level Agreements (SLAs) is crucial for a business organization to provide value-added goods and services to customers and to achieve its business goals successfully. Efficient structural representation and management of SLAs can solve the problems of quality-of-service evaluation during service development and deployment. Existing specifications and structures for SLAs do not fully formalize or support the automatic and dynamic behavioural aspects needed for SLAs within business organizations. We address the issues of how to formalize and document the structures of SLAs for better utilization and improved results in various business domains. Our main focus is on the contents and processes of SLAs during service discovery, service delivery, and the effective and efficient scheduling of services.
In a service-oriented environment, it is inevitable and indeed quite common to deal with web services whose reliability is unknown to the users. The reputation system is a popular technique currently used to provide requesters with a global quality score for a service provider. However, such global information is far from sufficient for service requesters to choose the most qualified services. To tackle this problem, the authors present a trust-based architecture containing a computational trust model for quantifying and comparing the trustworthiness of services. In this trust model, they first construct a network based on the direct trust relations between participants and rating similarity in service-oriented environments; they then propose an algorithm for propagating trust in this social network, which produces personalized trust information for a specific service requester; finally, they implement the trust model and simulate various malicious behaviors in both dense and sparse networks, verifying the attack resistance and robustness of the proposed approach. The experimental results also demonstrate the feasibility and benefit of the approach.
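A common way to realize path-based trust propagation, sketched below as an illustration rather than this paper's algorithm, is to decay trust multiplicatively along direct-trust edges and let the requester take the strongest path; since edge values lie in (0, 1], a best-first search is exact. The edge values and decay rule are assumptions.

```python
# Sketch of path-based trust propagation: trust decays multiplicatively
# along direct-trust edges, and a requester takes the strongest path.
import heapq

def propagated_trust(edges: dict, source: str, target: str) -> float:
    """edges: {node: [(neighbour, direct_trust in (0,1]), ...]}."""
    best = {source: 1.0}
    heap = [(-1.0, source)]                      # max-heap on trust
    while heap:
        t, u = heapq.heappop(heap)
        t = -t
        if u == target:
            return t
        for v, w in edges.get(u, []):
            cand = t * w                         # trust decays along the path
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return 0.0

edges = {"alice": [("bob", 0.9), ("carol", 0.6)],
         "bob": [("dave", 0.8)],
         "carol": [("dave", 0.9)]}
print(propagated_trust(edges, "alice", "dave"))  # 0.72 via bob
```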
Web services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organizations (VO). However, the lack of semantics in the current Web service standards has become a major barrier to service discovery and composition. To tackle the semantic issues of Web services, this paper proposes a comprehensive semantic service description framework, CbSSDF, and a two-step service discovery mechanism based on CbSSDF, to help service users easily locate the services they require. The authors give a detailed explanation of CbSSDF and then evaluate the framework by comparing it with OWL-S, examining how the proposed framework can improve the efficiency and effectiveness of service discovery and composition. The evaluation is carried out by analysing the solutions proposed under these two frameworks for achieving a series of tasks in a scenario.