The main objective of this thesis is to develop a communication link between Runrev Revolution (IDE) and JADE (Multi-Agent System) through socket programming over TCP/IP. The two independent platforms are connected using a socket programming technique. Since socket-based communication between these two platforms is an emerging approach, the work done in this thesis should be considered a prototype. A graphical simulation model was developed by Salixphere (a company in Hedemora) to simulate logistic problems using Runrev Revolution (IDE); the simulation software is called "BIOSIM". The logistic problems are complex, and conventional optimization techniques are unlikely to be very successful. "BIOSIM" can present graphical representations of logistic problems depending on the problem domain. As the simulation model is developed in Revolution's programming language (Transcript), which is dynamically typed and English-like, it is quite slow compared to other high-level programming languages. The thesis therefore aims to add intelligent behaviour to the graphical objects by linking Runrev Revolution (IDE) to JADE (Multi-Agent System) over TCP/IP. Tests show intelligent behaviour in the graphical objects and successful communication between Runrev Revolution (IDE) and JADE (Multi-Agent System).
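As a rough illustration of the TCP exchange such a link requires, the sketch below (in Python, purely for illustration; the actual system pairs Transcript code with JADE's Java agents) opens a socket, sends one command and reads a reply. The host, port and message format are assumptions.

```python
# Minimal sketch of the kind of TCP exchange the thesis builds; Python is used
# here for illustration only (the real link pairs Revolution with JADE/Java).
import socket

HOST, PORT = "127.0.0.1", 5055  # hypothetical endpoint of the agent platform

def send_command(command: str) -> str:
    """Open a TCP connection, send one newline-terminated command, read a reply."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall((command + "\n").encode("utf-8"))
        return sock.makefile().readline().strip()

# e.g. a graphical object asking an agent for its next action (hypothetical):
# reply = send_command("MOVE_REQUEST object=truck1")
```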
The aim of this paper is to investigate the feasibility of using the Dynamic Time Warping (DTW) method to measure motor states in advanced Parkinson's disease (PD). Data were collected from 19 PD patients who performed leg agility motor tests with motion sensors on their ankles, once before and multiple times after an administration of 150% of their normal daily dose of medication. Data from 22 healthy controls were also included. Three movement disorder specialists rated the motor states of the patients according to the Treatment Response Scale (TRS) using recorded videos of the experiments. A DTW-based motor state distance score (DDS) was constructed using the acceleration and gyroscope signals collected during the leg agility motor tests. Mean DDS showed trends similar to mean TRS scores across the test occasions. Mean DDS was able to differentiate between PD patients in Off and On motor states. DDS was able to classify the motor state changes with good accuracy (82%). The PD patients who showed more response to medication were selected using the TRS scale, and the DTW-based features most related to their TRS scores were investigated. Individual DTW-based features were identified for each patient. In conclusion, the DTW method can provide information about the motor states of advanced PD patients, which can be used in the development of methods for automatic motor scoring of PD.
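For readers unfamiliar with DTW, the following minimal sketch computes the classic DTW distance between two 1-D signals; the paper's DDS aggregates such distances over multi-axis accelerometer and gyroscope channels, which is not reproduced here.

```python
# Classic O(n*m) dynamic time warping with absolute-difference local cost.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# toy usage: two time-shifted sinusoids still match closely under DTW
t = np.linspace(0, 2 * np.pi, 100)
print(dtw_distance(np.sin(t), np.sin(t + 0.3)))
```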
Parkinson's disease (PD) is a progressive movement disorder caused by the death of dopamine-producing cells in the midbrain. There is a need for frequent symptom assessment, since treatment needs to be individualized as the disease progresses. The aim of this paper was to verify and further investigate the clinimetric properties of an entropy-based method for measuring PD-related upper-limb temporal irregularities during spiral drawing tasks. More specifically, properties of a temporal irregularity score (TIS) were investigated for patients at different stages of PD and at different medication time points. Nineteen PD patients and 22 healthy controls performed repeated spiral drawing tasks on a smartphone. Patients performed the tests before a single levodopa dose and at specific time intervals after the dose was given. Three movement disorder specialists rated videos of the patients based on the Unified PD Rating Scale (UPDRS) and the Dyskinesia scale. Differences in mean TIS between the groups of patients and healthy subjects were assessed. Test-retest reliability of the TIS was measured. The ability of TIS to detect changes from baseline (before medication) to later time points was investigated. Correlations between TIS and clinical rating scores were assessed. The mean TIS was significantly different between healthy subjects and patients in advanced groups (p-value = 0.02). Test-retest reliability of TIS was good, with an intra-class correlation coefficient of 0.81. When assessing changes in relation to treatment, TIS contained some information to capture changes from Off to On and wearing-off effects. However, the correlations between TIS and clinical scores (UPDRS and Dyskinesia) were weak. TIS was able to differentiate spiral drawings drawn by patients in an advanced stage from those drawn by healthy subjects, and TIS had good test-retest reliability. TIS was somewhat responsive to single-dose levodopa treatment. Since TIS is based on high-frequency components of upper-limb movement, it captures information that cannot be detected by eye during clinical assessment.
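The exact construction of TIS is not given in the abstract; as one plausible instance of the entropy family it builds on, the sketch below computes a sample-entropy-style irregularity measure of a movement signal. Parameter choices and the synthetic input are illustrative.

```python
# Sample entropy (SampEn) as an illustrative irregularity measure; NOT the
# paper's TIS, only a representative entropy-based statistic.
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """-log of the ratio of (m+1)-length to m-length template matches."""
    x = (x - x.mean()) / x.std()
    tol = r  # tolerance in units of standard deviation

    def count_matches(length: int) -> int:
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(dist < tol)) - 1  # exclude the self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=300)))  # irregular noise -> higher entropy
```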
Objectives: The aim of this paper is to investigate whether a smartphone-based system can be used to quantify dexterity in Parkinson's disease (PD). More specifically, the aim was to develop data-driven methods to quantify and characterize dexterity in PD. Methods: Nineteen advanced PD patients and 22 healthy controls participated in a clinical trial in Uppsala, Sweden. The subjects were asked to perform tapping and spiral drawing tests using a smartphone. Patients performed the tests before, and at pre-specified time points after, they received 150% of their usual levodopa morning dose. Patients were video recorded and their motor symptoms were assessed by three movement disorder specialists using three Unified PD Rating Scale (UPDRS) motor items from part III, the dyskinesia scoring and the treatment response scale (TRS). The raw tapping and spiral data were processed and analyzed with time series analysis techniques to extract 37 spatiotemporal features. For each of the five scales, separate machine learning models were built and tested by using principal components of the features as predictors and mean ratings of the three specialists as target variables. Results: There were weak to moderate correlations between smartphone-based scores and mean ratings of UPDRS item #23 (0.52; finger tapping), UPDRS #25 (0.47; rapid alternating movements of hands), UPDRS #31 (0.57; body bradykinesia and hypokinesia), the sum of the three UPDRS items (0.46), dyskinesia (0.64), and TRS (0.59). When assessing the test-retest reliability of the scores, it was found that, in general, the clinical scores had better test-retest reliability than the smartphone-based scores. Only the smartphone-based predicted scores on the TRS and dyskinesia scales had good repeatability, with intra-class correlation coefficients of 0.51 and 0.84, respectively. Clinician-based scores had larger effect sizes than smartphone-based scores, indicating better responsiveness in detecting changes in relation to treatment interventions. However, the first principal component of the 37 features was able to capture changes throughout the levodopa cycle and showed trends similar to the clinical TRS and dyskinesia scales. Smartphone-based scores differed significantly between patients and healthy controls. Conclusions: Quantifying PD motor symptoms via instrumented dexterity tests employed on a smartphone is feasible, and data from such tests can also be used for measuring treatment-related changes in patients.
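A hedged sketch of the modelling pipeline outlined above: extracted features are reduced to principal components and mapped to clinician ratings by a regression model. The abstract does not name the learner, so the SVR here, like the random placeholder data, is an assumption.

```python
# Feature matrix -> principal components -> one regression model per scale.
# The 37 tapping/spiral features themselves are not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 37))   # placeholder for the 37 extracted features
y = rng.normal(size=100)         # placeholder for mean specialist ratings (e.g. TRS)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR())
model.fit(X, y)
predicted_scores = model.predict(X)  # smartphone-based scores for one scale
```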
This paper reviews objective assessments of Parkinson's disease (PD) motor symptoms, both cardinal symptoms and dyskinesia, using sensor systems. It surveys the manifestations of PD symptoms, the sensors used for their detection, the types of signals (measures), and the signal processing (data analysis) methods applied. A summary of the review's findings is presented in a table listing the devices (sensors), measures and methods used in each reviewed motor symptom assessment study. Among the sensors in the gathered studies, accelerometers and touch-screen devices are the most widely used to detect PD symptoms, and among the symptoms, bradykinesia and tremor were found to be the most frequently evaluated. In general, machine learning methods appear promising for this task. PD is a complex disease that requires continuous monitoring and multidimensional symptom analysis. Combining existing technologies to develop new sensor platforms may assist in assessing the overall symptom profile more accurately and in developing useful tools to support a better treatment process.
AIM: To construct a Treatment Response Index from Multiple Sensors (TRIMS) for quantification of motor state in patients with Parkinson's disease (PD) during a single levodopa dose. Another aim was to compare TRIMS to sensor indexes derived from individual motor tasks.
METHOD: Nineteen PD patients performed three motor tests, including leg agility, pronation-supination movement of the hands, and walking, in a clinic while wearing inertial measurement unit sensors on their wrists and ankles. They performed the tests repeatedly before and after taking 150% of their individual oral levodopa-carbidopa equivalent morning dose. Three neurologists, blinded to treatment status, viewed the patients' videos and rated their motor symptoms, dyskinesia, and overall motor state based on selected items of the Unified PD Rating Scale (UPDRS) part III, the Dyskinesia scale, and the Treatment Response Scale (TRS). To build TRIMS, 39 of the 178 features initially extracted from upper- and lower-limb data were selected by a stepwise regression method and used as input to support vector machines, mapped to mean reference TRS scores using 10-fold cross-validation. Test-retest reliability, responsiveness to medication, and correlation to TRS as well as other UPDRS items were evaluated for TRIMS.
RESULTS: The correlation of TRIMS with TRS was 0.93. TRIMS had good test-retest reliability (ICC = 0.83). Responsiveness of TRIMS to medication was good compared to TRS, indicating its power in capturing treatment effects. TRIMS was highly correlated to the dyskinesia (R = 0.85), bradykinesia (R = 0.84) and gait (R = 0.79) UPDRS items. The correlation of the upper-limb sensor index with TRS was 0.89.
CONCLUSION: Using the fusion of upper- and lower-limb sensor data to construct TRIMS provided accurate estimation of PD motor states and was responsive to treatment. In addition, quantification of upper-limb sensor data during the walking test provided strong results.
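The sketch below outlines the TRIMS construction as described: stepwise feature selection (approximated here with scikit-learn's SequentialFeatureSelector) feeding a support vector machine evaluated with 10-fold cross-validation. Data sizes, random data, and the SVR variant are placeholders, not the study's actual pipeline.

```python
# Approximate sketch of: 178 features -> stepwise selection of 39 -> SVM -> TRS.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 178))   # placeholder upper- and lower-limb features
y = rng.normal(size=120)          # placeholder mean reference TRS scores

svr = SVR()
selector = SequentialFeatureSelector(svr, n_features_to_select=39, direction="forward")
X_sel = selector.fit_transform(X, y)

trims = cross_val_predict(svr, X_sel, y, cv=10)  # sensor-based TRS estimates
```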
A decision support system (DSS) was implemented based on a fuzzy logic inference system (FIS) to provide assistance in dose alteration of Duodopa infusion in patients with advanced Parkinson's disease, using data from motor state assessments and dosage. A three-tier architecture with an object-oriented approach was used. The DSS has a web-enabled graphical user interface that presents alerts indicating non-optimal dosage and motor states, new recommendations (typical advice with a typical dose), and statistical measurements. One data set was used for the design and tuning of the FIS, and another data set was used for evaluating performance compared with the actual given dose. Overall goodness-of-fit for the new patients (design data) was 0.65 and for the ongoing patients (evaluation data) 0.98. User evaluation is ongoing. The system could work as an assistant to clinical staff for Duodopa treatment in advanced Parkinson's disease.
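To make the idea concrete, here is a minimal, self-contained sketch of the kind of fuzzy rule the DSS could encode ("if the patient is Off, increase the dose; if dyskinetic, decrease it"). Membership shapes and rule consequents are illustrative, not the tuned FIS from the paper.

```python
# Toy fuzzy inference: motor state on a -3..+3 scale -> relative dose change (%).
def falling(x, a, b):   # membership 1 below a, falling to 0 at b
    return max(0.0, min(1.0, (b - x) / (b - a)))

def rising(x, a, b):    # membership 0 below a, rising to 1 at b
    return max(0.0, min(1.0, (x - a) / (b - a)))

def triangle(x, a, b, c):
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def dose_adjustment(motor_state: float) -> float:
    off = falling(motor_state, -3.0, 0.0)        # undermedicated side of scale
    ok = triangle(motor_state, -1.0, 0.0, 1.0)
    dyskinetic = rising(motor_state, 0.0, 3.0)   # overmedicated side of scale
    # weighted-average (Sugeno-style) defuzzification over rule consequents
    increase, keep, decrease = +10.0, 0.0, -10.0
    w = off + ok + dyskinetic
    return (off * increase + ok * keep + dyskinetic * decrease) / w if w else 0.0

print(dose_adjustment(-2.0))   # clearly Off -> positive suggested dose change
```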
This thesis focused on a hardware-based load balancing solution for web traffic through an F5 content switch. In this project, the implemented scenario for distributing HTTP traffic load is based on the different CPU usages (processing speeds) of multiple member servers. Two widely used load balancing algorithms, Round Robin (RR) and the Ratio model (weighted Round Robin), were implemented through the F5 load balancer. To evaluate the performance of the F5 content switch, experimental tests were performed on the implemented scenarios using the RR and Ratio model load balancing algorithms. The performance is examined in terms of throughput (bits/sec) and response time of the member servers in a load balancing pool. From these experiments we observed that the Ratio model load balancing algorithm is the most suitable in an environment of load balancing servers with different CPU usages, as it allows assigning weights according to CPU usage in both static and dynamic load balancing of servers.
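The difference between the two algorithms can be shown in a few lines; in the sketch below, server names and weights are illustrative, with weights standing in for relative CPU capacity.

```python
# Plain Round Robin vs. Ratio model (weighted Round Robin) selection order.
from itertools import cycle

servers = {"web1": 1, "web2": 2, "web3": 4}  # weight ~ relative CPU capacity

round_robin = cycle(servers)                                       # each in turn
ratio_model = cycle([s for s, w in servers.items() for _ in range(w)])

print([next(round_robin) for _ in range(6)])  # web1 web2 web3 web1 web2 web3
print([next(ratio_model) for _ in range(7)])  # web1 web2 web2 web3 web3 web3 web3
```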
In recent years, it has been observed that software clones and plagiarism are becoming an increasing threat to creativity. Clones are the result of copying and using others' work. According to the Merriam-Webster dictionary, "a clone is one that appears to be a copy of an original form"; it is a synonym for duplicate. Clones lead to redundancy of code, but not all redundant code is a clone. On the basis of this background knowledge, in order to safeguard original ideas and to discourage intentional code duplication that passes off others' work as one's own, software clone detection should be emphasized more. The objective of this paper is to review methods for clone detection, to apply those methods for finding the extent of plagiarism among Master-level computer science theses at Swedish universities, and to analyze the results. The rest of the paper discusses software plagiarism detection using a data analysis technique, followed by a statistical analysis of the results. Plagiarism is an act of stealing and passing off the ideas and words of another person as one's own. Using the data analysis technique, samples (Master-level computer science thesis reports) were taken from various Swedish universities and processed with the Ephorus anti-plagiarism detection software. Ephorus gives a plagiarism percentage for each thesis document; from these results, a statistical analysis was carried out using the Minitab software. The results show a very low percentage of plagiarism among the Swedish universities, which suggests that plagiarism is not a threat to Sweden's standard of education in computer science. This paper is based on data analysis, intelligence techniques, the Ephorus plagiarism detection tool and Minitab statistical software analysis.
"Moln" är ett känt ord använt för att beskriva molntjänster. Begreppet moln uppstod för längesen, och användes bland annat på 70-talet för att representera delar som ingen riktigt förstod sig på. Det är inte bara ordet moln som fortfarande orsakar förvirring, utan även dess flertaliga beskrivningar. Dessa gör det svårt att få grepp om vad molntjänster egentligen innebär. Kraven för molntjänster är flera, men man talar oftast om bred tillgänglighet, konsolidering av resurser, snabb elasticitet och mätbara, självbetjäningstjänster. Implementering av säkerhet är viktig för molntjänster för att kunna skydda dem från säkerhetshot. En vanlig men också viktig säkerhetskontroll är autentisering. Autentisering tillsammans med federationslösningar möjliggör för integration mellan molntjänster som vill dela autentisering.
The report is structured in sections containing the background to the problem, aims leading to research questions, and the chosen methodology. The theoretical part consists of a literature study covering, among other things: security threats against authentication in the cloud; protection against these; authentication methods suitable for the cloud; and federation solutions. In the analysis, the report compares differences and similarities between the results from the literature study and from several interviews. The conclusion presents answers to the research questions and results from the analysis. Integration between cloud services' authentication solutions can be achieved through federated identities, which also bring the advantages of user-friendliness and simpler administration. However, gathering all identities in one place can become an attractive target for attacks. Security threats against cloud services very much exist, although they do not differ greatly from those against traditional services. The conclusion shows that causality still holds for security in cloud services, but that trust has become an additional explanatory variable.
The use of Intrusion Detection Systems is commonplace today in bigger companies, but the solutions available on the market are often too expensive for smaller companies. Therefore, we saw the need to investigate whether there is a more affordable solution. In this report, we show that it is possible to use low-cost single-board computers as part of a bigger, centralized Intrusion Detection System. To investigate this, we set up a test system including two Raspberry Pi 3 Model B units, a cloud server, and two home networks, one with port mirroring implemented in firmware and the other with a dedicated SPAN port. The report shows how we set up the environment and the testing we did to prove that this is a working solution.
With the rapid development of the telecommunications industry, the IP Multimedia Subsystem (IMS) could very well be the panacea for most telecom operators. It was originally defined as the core network for 3G mobile systems by the 3rd Generation Partnership Project (3GPP); more recent development is merging fixed-line and wireless networks. This report investigates the characteristics of IMS data and proposes an IMS characterization analysis. We captured IMS traffic data from about 10,000 users over roughly 41 hours. By analyzing the characteristics of the IMS, we found that the most important application in the IMS is the VoIP call. We then used a tool designed by Tsinghua University and Ericsson to recognize the data, and the results can be used to build traffic models. The traffic models give the types of session and the types of VoIP call. We introduce the concept of the busy hour, which identifies the period with the peak VoIP call load; in our data, the busy hour is from 10:00 to 11:00 in the morning. We also introduce the connection ratio, which is significant because it can evaluate whether VoIP calls perform well over the IMS network. By comparing our traffic models with previously published models, we found differing results in both accuracy and busy hour; from this contrast, the advantages of our traffic models are apparent.
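As a small illustration of how a busy hour can be derived from call records, the sketch below counts call setups per hour and picks the maximum; the timestamps and the connection-ratio figures are invented for illustration.

```python
# Derive a busy hour from call-setup timestamps (all values illustrative).
from collections import Counter
from datetime import datetime

call_setups = [
    datetime(2008, 5, 12, 10, 5), datetime(2008, 5, 12, 10, 40),
    datetime(2008, 5, 12, 11, 15), datetime(2008, 5, 12, 10, 55),
]

per_hour = Counter(t.hour for t in call_setups)
busy_hour = max(per_hour, key=per_hour.get)
print(f"busy hour starts at {busy_hour}:00 with {per_hour[busy_hour]} calls")

# connection ratio: answered calls / attempted calls (counts illustrative)
connection_ratio = 3 / 4
```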
Today there are many companies offering various programs/databases whose task is to store and manage data. Oracle is the world's second largest software company, and Oracle's database generates roughly 75% of its revenue. Over the past 40 years, database management has evolved from simple file systems to object-relational databases, which means that more and more companies today offer tools that database developers can work with. The tools facilitate database management itself. Many companies offer different kinds of tools, and the database developer, or the database company itself, finds it hard to tell the tools apart. The range of tools is so extensive that it is nearly impossible to be familiar with all of them. The purpose of this thesis is to investigate which tool an Oracle developer should use: to compare different SQL and PL/SQL tools, and to give a recommendation on which tool or tools are best to use from different perspectives. For the comparison, I downloaded and tested the various tools against certain criteria. I also contacted four companies to get answers to the question from the companies' perspective. Three of the companies answered my questionnaires, while one answered my questions orally by telephone. In carrying out this study, I came to the conclusion that all the tools essentially share the same basic functions. The choice of tool is governed by how many extra functions the developer is willing to pay for. It is, however, the developers themselves who must decide which tool suits them. The leading tool is Toad, and it is best suited to experts. For simpler functions, the questionnaires showed that SQL*Plus is used.
This paper presents a methodology to formulate natural language rules for an adaptive neuro-fuzzy system based on discovered knowledge, supported by prior knowledge and statistical modeling. Relationships between disease-related variables and fluctuations in Parkinson's disease are often complex. Experts have simplified but mostly reliable "fuzzy" rules based on experience. These rules could be improved using statistical methods and neural nets. This gives clinicians a valuable tool to explore the importance of different variables and their relations in a disease, and could aid treatment selection. A prototype using the proposed methodology has been used to induce an Adaptive Neuro-Fuzzy Inference Model that "discovers" relationships between fluctuation, treatment and disease severity. More data is needed to confirm these findings. The project shows that artificial intelligence techniques and methods, in combination with statistical methods, offer medical research and applications valuable opportunities.
Condition monitoring of wooden railway sleepers is generally carried out by visual inspection and, if necessary, some impact acoustic examination carried out intuitively by skilled personnel. In this work, a pattern recognition solution has been proposed to automate the process and achieve robust results. The study presents a comparison of several pattern recognition techniques together with various nonstationary feature extraction techniques for the classification of impact acoustic emissions. Pattern classifiers such as the multilayer perceptron, learning vector quantization and Gaussian mixture models are combined with nonstationary feature extraction techniques such as the Short Time Fourier Transform, Continuous Wavelet Transform, Discrete Wavelet Transform and Wigner-Ville Distribution. Due to the presence of several different feature extraction and classification techniques, data fusion has been investigated, mainly on two levels: the feature level and the classifier level. Fusion at the feature level demonstrated the best results, with an overall accuracy of 82% when compared to the human operator.
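As one example of the nonstationary feature extraction named above, the sketch below computes a Short Time Fourier Transform of a synthetic impact-acoustic signal and summarizes it into a feature vector; the window settings, sampling rate and toy signal are assumptions.

```python
# STFT of a toy exponentially decaying "impact" tone, reduced to a feature vector.
import numpy as np
from scipy.signal import stft

fs = 44100                                   # assumed sampling rate (Hz)
t = np.linspace(0, 0.1, int(fs * 0.1))
signal = np.exp(-40 * t) * np.sin(2 * np.pi * 1500 * t)   # toy impact response

f, times, Z = stft(signal, fs=fs, nperseg=256)
features = np.abs(Z).mean(axis=1)            # mean magnitude per frequency bin
```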
In this work, the ability to create a prototype for a thoroughly secure and childproof system for Android is shown. This is done by combining already available products with some custom code. Internet traffic is pushed through a VPN tunnel to an existing LAN in the user's own home. This mitigates the risks of eavesdropping attacks and other threats, and by using DNS filtering, unwanted web pages can be blocked beforehand.
The mobile unit can be used for calls (and, with further development, SMS), but incoming calls from unknown numbers, as well as SMS, will be blocked. With the addition of Applocker, a locking mechanism for software access, the use of unwanted applications is restricted. By adding a custom app launcher, this restriction can be made even firmer.
The complete solution is a system where the smartphone can preferably be handled by a child and where chosen functions can be used in their regular manner, but where predetermined functions have been eliminated from the unit or are only accessible by the administrator. What is needed for this prototype, besides a smartphone, is a working LAN in the home and an always-on computer that acts as a server, preferably a Raspberry Pi.
Performing a computer forensic investigation can sometimes be compared to putting a puzzle together, especially if the person doing the investigation was not present at the initial stages where the IT devices were seized. Getting an idea of the usage of a computer solely from the information on the hard drive can be a difficult and time-consuming task. Situations that can be considered particularly complicated arise when devices have been overlooked during the raid, in other words when pieces of the puzzle are missing. During the examination of a hard drive there are often traces of removable media previously connected to the computer. These kinds of traces can be of interest during the computer forensic investigation. One aspect can be to relate different devices to each other, linking a digital trace to a physical action. Another possible conclusion can be that some items of relevance were missed during the raid and therefore not seized. This thesis describes the goals that are relevant to strive for when conducting a computer forensic investigation. Multiple aspects were reviewed to make the result usable to a computer forensic examiner working in law enforcement. The work was based on an experiment where a removable USB thumb drive was connected to a computer. The computer was subsequently examined for changes, and the files identified were analysed with digital tools built for these purposes. All results were reviewed with secondary digital tools or validated with manual validation techniques. The traces from the analysis provided a method, intended for use when a computer forensic examiner working in law enforcement searches for removable USB storage devices that were previously connected to a computer. Finally, the method was reviewed and conclusions regarding its advantages and disadvantages were drawn.
This paper presents a novel traffic sign recognition system which can aid the development of Intelligent Speed Adaptation. The system is based on extracting the speed limit sign from the traffic scene by the Circular Hough Transform (CHT), with the aid of colour and non-colour information of the traffic sign. The digits of the speed limit sign are then extracted and classified using an SVM classifier trained for this purpose. In general, the system first detects prohibitory traffic signs, specifies whether a detected sign is a speed limit sign, and then determines the allowed speed if it is. The SVM classifier was trained with 270 images collected in different light conditions. To check the robustness of the system, it was tested against 210 images containing 213 speed limit traffic signs and 288 non-speed-limit signs. The recognition accuracy was 98%, which clearly indicates the high robustness targeted by this system.
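A minimal sketch of the detection stage follows: a Circular Hough Transform locating circular sign candidates with OpenCV. Parameter values and the input path are illustrative, and the digit extraction and SVM stages are not reproduced.

```python
# Circular Hough Transform candidate detection (parameters illustrative).
import cv2
import numpy as np

image = cv2.imread("scene.jpg")                      # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
    param1=100, param2=40, minRadius=10, maxRadius=80,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        candidate = image[y - r:y + r, x - r:x + r]  # crop for classification
```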
The main objective of this degree project was to analyze the endpoint security solutions developed by Cisco, Microsoft and a third, smaller company, InfoExpress. The solutions analyzed are Cisco Network Admission Control, Microsoft Network Access Protection and InfoExpress CyberGatekeeper. An explanation of how each solution functions is given, as well as an analysis of the differences between them. This thesis also proposes a tutorial for the installation of Cisco Network Admission Control for easier implementation. The research was done by reading articles on the internet and by experimenting with the Cisco Network Admission Control solution. My background knowledge of Cisco routing and ACLs was also used. Based on the analysis done in this thesis, the conclusion was drawn that none of the existing solutions is yet ready for large-scale use in corporate networks. Moreover, all solutions are proprietary and incompatible. The possible future standard for endpoint solutions may be driven by Cisco and Microsoft, and a fierce competition has begun between these two giants.
The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors to minimize the completion time of the program has been extensively studied as an academic optimization problem arising when optimizing the execution time of parallel algorithms on parallel computers. In this paper, we propose an application of Ant Colony Optimization (ACO) to the multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines. The problem seeks a schedule that minimizes the total completion time. We therefore rely on heuristics to find solutions, since exact solution methods are not feasible for most problems of this kind. In this novel heuristic search approach to multiprocessor scheduling, based on the ACO algorithm, a collection of agents cooperates to effectively explore the search space. A computational experiment is conducted on a suite of benchmark applications. Comparing our algorithm's results with those of a previous heuristic algorithm shows that the ACO algorithm exhibits competitive performance with a small error ratio.
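The following compact sketch shows the ACO idea on a deliberately simplified version of the problem (independent tasks, minimize makespan); the paper's algorithm additionally handles DAG precedence constraints and setup times. All constants are illustrative.

```python
# Toy ACO: ants assign tasks to processors guided by pheromone, best schedule
# reinforces pheromone after evaporation.
import random

durations = [3, 7, 2, 5, 4, 6]   # task processing times
n_procs, n_ants, n_iters, rho = 2, 10, 50, 0.1
tau = [[1.0] * n_procs for _ in durations]       # pheromone: task x processor

best_assign, best_makespan = None, float("inf")
for _ in range(n_iters):
    for _ in range(n_ants):
        loads = [0.0] * n_procs
        assign = []
        for t, d in enumerate(durations):
            # choose a processor proportionally to pheromone / resulting load
            weights = [tau[t][p] / (loads[p] + d) for p in range(n_procs)]
            p = random.choices(range(n_procs), weights)[0]
            loads[p] += d
            assign.append(p)
        makespan = max(loads)
        if makespan < best_makespan:
            best_assign, best_makespan = assign, makespan
    # evaporate, then reinforce the best-so-far schedule
    tau = [[(1 - rho) * v for v in row] for row in tau]
    for t, p in enumerate(best_assign):
        tau[t][p] += 1.0 / best_makespan

print(best_assign, best_makespan)   # e.g. a balanced 2-processor schedule
```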
This work is an example of how to adapt a classification method, in this case a classification tree, to the present standardized method for developing settings for strength grading machines. Data from commercially available industrial strength grading equipment were used on a large sample (approximately 1440 pieces) of Norway spruce (Picea abies (L. Karsten)) in various sawn dimensions. The equipment is a multisensor scanning device combining planar X-ray and resonance frequency measurement. Destructive testing was done according to European standard EN408. The goal was to make the classification, based on machine data, as close as possible to the optimum grading, which was done according to the standard. Two different approaches to classification by cost-sensitive decision trees were applied to the data and compared to classification accredited according to EN14081. Classification accuracy increased from 64% to 73% correctly classified, the false negative rate was reduced from 33% to 23%, and the false positive rate increased from 3% to 4%. The outcome was an increase in value for the producer of 0.9%-2.1% at the 2007 average price level. The improvement came mainly from an in-yield increase in C30 of 10%.
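As a brief illustration of a cost-sensitive classification tree of the kind adapted in this study, the sketch below uses class weights to penalize the costlier error of grading a board too high; the synthetic data and weight values are invented.

```python
# Cost-sensitive decision tree via class weights (data and weights illustrative).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))          # stand-ins for X-ray/resonance indicators
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = meets grade, 0 = reject

# make false positives (accepting a board into a grade it fails) cost more
tree = DecisionTreeClassifier(class_weight={0: 5.0, 1: 1.0}, max_depth=4)
tree.fit(X, y)
```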
The aim of this thesis is to investigate computerized voice assessment methods to classify between normal and dysarthric speech signals. In the proposed system, computerized assessment methods built on signal processing and artificial intelligence techniques are introduced. Each subject read the sentences used for the measurement of inter-stress intervals (ISI), and measures computed from these sentences were compared between normal and impaired voices. A band-pass filter was used for the preprocessing of the speech samples. Speech segmentation was performed using signal energy and spectral centroid to separate voiced and unvoiced areas in the speech signal. Acoustic features were extracted from the LPC model and from the speech segments of each audio signal to find anomalies. The speech features assessed for classification are energy entropy, zero-crossing rate (ZCR), spectral centroid, mean fundamental frequency (Meanf0), jitter (RAP), jitter (PPQ), and shimmer (APQ). Naïve Bayes (NB) was used for speech classification. For speech tests 1 and 2, classification accuracies of 72% and 80% between healthy and impaired speech samples were achieved respectively using NB. For speech test 3, 64% correct classification was achieved using NB. The results indicate the possibility of classifying speech impairment in PD patients based on the clinical rating scale.
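To illustrate two of the named features and the classifier, the sketch below computes zero-crossing rate and energy entropy on synthetic frames and fits a Gaussian Naïve Bayes model; the LPC-based features and real speech data are not reproduced.

```python
# Two toy speech features + Gaussian Naive Bayes (data synthetic).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def zero_crossing_rate(frame: np.ndarray) -> float:
    return float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))

def energy_entropy(frame: np.ndarray, n_blocks: int = 10) -> float:
    e = np.array([np.sum(b ** 2) for b in np.array_split(frame, n_blocks)])
    p = e / (e.sum() + 1e-12)
    return float(-np.sum(p * np.log2(p + 1e-12)))

rng = np.random.default_rng(3)
frames = rng.normal(size=(40, 1024))               # toy speech frames
X = np.c_[[zero_crossing_rate(f) for f in frames],
          [energy_entropy(f) for f in frames]]
y = rng.integers(0, 2, size=40)                    # 0 = healthy, 1 = impaired
clf = GaussianNB().fit(X, y)
```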
In order to achieve high performance, we need an efficient scheduling of a parallel program onto the processors in a multiprocessor system that minimizes the entire execution time. This multiprocessor scheduling problem can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length is minimized [10]. This scheduling problem is known to be NP-hard. In multiprocessor task scheduling, we have a number of CPUs on which a number of tasks are to be scheduled such that the program's execution time is minimized. According to [10], the task scheduling problem is a key factor for a parallel multiprocessor system to gain better performance. A task can be partitioned into a group of subtasks and represented as a DAG (directed acyclic graph), so the problem can be stated as finding a schedule for a DAG to be executed on a parallel multiprocessor system so that the schedule length is minimized. This helps to reduce processing time and increase processor utilization. The aim of this thesis is to compare the results obtained by the Bee Colony algorithm with the best known results in the multiprocessor task scheduling domain.
A customer is presumed to gravitate to a facility by the distance to it and the attractiveness of it. However, regarding the location of the facility, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently solved by the introduction of the gravity p-median model. The model is yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare-parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population patronize the nearest facility and that the distance between the resident and the facility may be measured by the Euclidean distance. Carling, Han, and Håkansson (2012) compared two network distances with the Euclidean in a rural region with a sparse, heterogeneous network and a non-symmetric distribution of the population. For a coarse network and P small, they found, in contrast to the literature, the Euclidean distance to be problematic. In this paper we extend their work by use of a refined network and study systematically the case when P is of varying size (1-100 facilities). We find that the network distance gives as good a solution as the travel-time network distance. The Euclidean distance gives solutions some 4-10 per cent worse than the network distances, and the solutions tend to deteriorate with increasing P. Our conclusions extend to intra-urban location problems.
The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population patronize the nearest facility and that the distance between the resident and the facility may be measured by the Euclidean distance. Carling, Han, and Håkansson (2012) compared two network distances with the Euclidean in a rural region with a sparse, heterogeneous network and a non-symmetric distribution of the population. For a coarse network and P small, they found, in contrast to the literature, the Euclidean distance to be problematic. In this paper we extend their work by use of a refined network and study systematically the case when P is of varying size (2-100 facilities). We find that the network distance gives as good a solution as the travel-time network distance. The Euclidean distance gives solutions some 2-7 per cent worse than the network distances, and the solutions deteriorate with increasing P. Our conclusions extend to intra-urban location problems.
Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility by the distance to it and the attractiveness of it. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
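For reference, the p-median objective underlying these papers can be stated in a few lines: choose P facility sites minimizing total distance from residents to their nearest chosen facility. The exhaustive search and Euclidean distance below are only for illustration on a tiny instance; the papers argue that network distances are preferable.

```python
# Brute-force p-median on a toy instance (coordinates illustrative).
from itertools import combinations

residents = [(0, 0), (1, 3), (4, 2), (5, 5), (2, 1)]
candidates = [(0, 1), (3, 3), (5, 4)]
P = 2

def dist(a, b):   # Euclidean here; network distances would replace this
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

best = min(
    combinations(candidates, P),
    key=lambda sites: sum(min(dist(r, s) for s in sites) for r in residents),
)
print(best)   # the P sites minimizing total distance to nearest facility
```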
A system for weed management on railway embankments that is both adapted to the environment and efficient in terms of resources requires knowledge and understanding of the growing conditions of vegetation, so that methods to control its growth can be adapted accordingly. Automated records could complement present-day manual inspections and over time come to replace them. One challenge is to devise a method that will result in a reasonable breakdown of gathered information that can be managed rationally by affected parties and, at the same time, serve as a basis for decisions with sufficient precision. The project examined two automated methods that may be useful for the Swedish Transport Administration in the future: 1) a machine vision method, which makes use of camera sensors as a way of sensing the environment in the visible and near-infrared spectrum; and 2) an N-Sensor method, which transmits light within an area that is reflected by the chlorophyll in the plants. The amount of chlorophyll provides a value that can be correlated with the biomass. The choice of technique depends on how the information is to be used. If the purpose is to form a general picture of the growth of vegetation on railway embankments as a way to plan maintenance measures, then the N-Sensor technique may be the right choice. If the plan is to form a general picture as well as monitor and survey the current and exact vegetation status of the surface over time, as a way to fight specific vegetation with the correct means, then the machine vision method is the better of the two. Both techniques involve registering data using GPS positioning. In the future, it will be possible to store this information in databases that are directly accessible to stakeholders online during or in conjunction with measures to deal with the vegetation. The two techniques were compared with manual (visual) estimations of the levels of vegetation growth. The observers' (raters') visual estimations of weed coverage (%) differed statistically from person to person. In terms of estimating the frequency (number) of woody plants (trees and bushes) in the test areas, the observers were generally in agreement. The same person is often consistent in his or her estimation: it is the comparison with the estimations of others that can lead to misleading results. The system for using the information about vegetation growth requires development. The threshold for the amount of weeds that can be tolerated on different track types is an important component of such a system. The classification system must be capable of dealing with the demands placed on it so as to ensure the quality of the track and other pre-conditions such as traffic levels, conditions pertaining to track location, and the characteristics of the vegetation.
The project recommends that the Swedish Transport Administration:
- discusses how threshold values for the growth of vegetation on railway embankments can be determined;
- carries out registration of the growth of vegetation over longer and a larger number of railway sections, using one or more of the methods studied in the project;
- introduces a system that effectively matches the information about vegetation to its position;
- includes information about the growth of vegetation in the records that are currently maintained of the track's technical quality, and links the data material to other maintenance-related databases;
- establishes a number of representative surfaces in which weed inventories (by measuring) are regularly conducted, as a means of developing an overview of the long-term development that can serve as a basis for more precise prognoses in terms of vegetation growth;
- ensures that necessary opportunities for education are put in place.
The aim of this work was to design a set of rules for levodopa infusion dose adjustment in Parkinson's disease based on simulation experiments. Using a simulator, optimal infusion doses in different conditions were calculated. There are seven conditions (-3 to +3) appearing in a rating scale for Parkinson's disease patients. By finding the mean of the differences between conditions and optimal dose, two sets of rules were designed. The sets of rules were optimized through repeated testing. Their usefulness for optimizing the titration procedure for new infusion patients based on rule-based reasoning was investigated. Results show that both the number of steps and the errors in finding the optimal dose were reduced by the new rules. Finally, the new rules predicted the dose well on each single occasion for the majority of patients in the simulation experiments.
Automated timetabling and scheduling is one of the hardest problem areas. This is because of the constraints that must be satisfied to obtain a feasible and optimized schedule, and the problem has been proved NP-complete [1]. The basic idea behind this study is to investigate the performance of a genetic algorithm on a general scheduling problem under predefined constraints, to check the validity of the results, and then to carry out a comparative analysis with other available approaches such as tabu search, simulated annealing, direct and indirect heuristics [2], and expert systems. It is observed that the genetic algorithm is a good solution technique for solving such problems, and the later analysis supports this argument. The program is written in C++ and the analysis is done by varying the parameters.
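A compact, hedged sketch of a genetic algorithm on a toy timetabling instance follows: a chromosome assigns each event to a timeslot, and fitness counts clash-constraint violations. The thesis's constraints, encoding and C++ implementation are richer than this.

```python
# Toy GA for timetabling: minimize clashes between conflicting events.
import random

n_events, n_slots, pop_size, n_gens = 8, 4, 30, 100
clashes = [(0, 1), (1, 2), (3, 4), (5, 6)]       # event pairs that must differ

def fitness(chrom):
    return -sum(chrom[a] == chrom[b] for a, b in clashes)   # 0 means feasible

pop = [[random.randrange(n_slots) for _ in range(n_events)] for _ in range(pop_size)]
for _ in range(n_gens):
    pop.sort(key=fitness, reverse=True)          # best chromosomes first
    parents = pop[: pop_size // 2]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_events)
        child = a[:cut] + b[cut:]                # one-point crossover
        if random.random() < 0.2:                # mutation
            child[random.randrange(n_events)] = random.randrange(n_slots)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```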
Penetration testing has become a popular method for organizations to view computer security through an attacker's eyes, and a penetration test often covers both systems and personnel. The report covers the attacks SQL Injection, Cross-site Scripting (XSS) and Cross-site Request Forgery (CSRF), which are specific to web applications, and also suggests how they can be counteracted. The purpose of the study is to check whether the company's web application leaks information, or whether a user can bypass authentication or escalate their privileges in the web application. Furthermore, the tools used to gather information for further analysis are described. The discussion can be used as a basis for creating a more secure web application. The report concludes with an indication that the company has good security, but also provides further material for future investigations. For an executive summary of the work, see Appendix 1.
The widespread use of smartphones today, and the hardware available in these smartphones, should make it possible to use these devices as a foundation for digital guides at, for example, tourist attractions with a historical connection. This report examines whether it is possible to create an Android application for smartphones that functions as a digital guide. Furthermore, an attempt is made to delete sensitive data, in this case location data, from the volatile memory of a smartphone running Android. It turns out that a smartphone can be used as a foundation for a digital guide as long as the area to be covered by the guide has mobile network coverage and GPS reception. Deleting all the sensitive data from the volatile memory, however, is more or less impossible.
This paper presents a new boosting algorithm called NormalBoost, which is capable of classifying a multi-dimensional binary-class dataset. It adaptively combines several weak classifiers to form a strong classifier. Unlike many boosting algorithms which have high computation and memory complexity, NormalBoost is capable of classification with low complexity. Since NormalBoost assumes the dataset to be continuous, it is also noise resistant, because it only deals with the means and standard deviations of each dimension. Experiments conducted to evaluate its performance show that NormalBoost performs almost the same as AdaBoost in classification rate. However, NormalBoost runs 189 times faster than AdaBoost and uses very little memory when a dataset of 2 million samples with 50 dimensions is processed.
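NormalBoost's exact update rules are not reproduced in the abstract, so the sketch below only illustrates the general idea it describes: an AdaBoost-style ensemble whose weak learners derive thresholds solely from per-dimension weighted class means. Treat it as an assumption-laden approximation, not the published algorithm.

```python
# AdaBoost-style loop with stumps thresholded at the midpoint between the
# weighted class means of the best dimension (an approximation, not NormalBoost).
import numpy as np

def fit_stump(X, y, w):
    best = None
    for d in range(X.shape[1]):
        m0 = np.average(X[y == -1, d], weights=w[y == -1])
        m1 = np.average(X[y == +1, d], weights=w[y == +1])
        thr, sign = (m0 + m1) / 2, 1 if m1 > m0 else -1
        pred = sign * np.where(X[:, d] > thr, 1, -1)
        err = np.sum(w[pred != y])
        if best is None or err < best[0]:
            best = (err, d, thr, sign)
    return best

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = np.where(X[:, 2] + 0.3 * X[:, 0] > 0, 1, -1)
w = np.full(len(y), 1 / len(y))
ensemble = []
for _ in range(20):
    err, d, thr, sign = fit_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = sign * np.where(X[:, d] > thr, 1, -1)
    w = w * np.exp(-alpha * y * pred)
    w /= w.sum()                                  # reweight toward errors
    ensemble.append((alpha, d, thr, sign))

score = sum(a * s * np.where(X[:, d] > t, 1, -1) for a, d, t, s in ensemble)
print(np.mean(np.sign(score) == y))               # training accuracy
```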
Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care areas, due to regulations and to the availability of computers, a large amount of data is becoming available [27]. On the one hand, practitioners are expected to use all this data in their work, but, at the same time, such a large amount of data cannot be processed by humans in a short time to make diagnoses, prognoses and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can help make rather accurate decisions. The goal of this thesis is to find a pattern among patients who developed pneumonia by clustering lab values that were recorded every day. This pattern can then be generalized to patients who have not been diagnosed with the disease but whose lab values show the same trend as those of pneumonia patients. Ten tables were extracted from a large database of a hospital in Jena for this work. In the ICU (intensive care unit), COPRA, a patient management system, is used. All tables and data are stored in a German-language database.
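The abstract does not name a specific clustering algorithm; as one plausible choice, the sketch below groups patients by the trend of a daily lab value with k-means on synthetic data.

```python
# Cluster patients by the shape of a daily lab-value series (data synthetic).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# rows = patients, columns = one lab value over 7 consecutive days
stable = rng.normal(5.0, 0.3, size=(30, 7))
rising = 5.0 + np.cumsum(rng.normal(0.4, 0.2, size=(30, 7)), axis=1)
X = np.vstack([stable, rising])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# patients in the 'rising' cluster could be flagged as showing a pneumonia-like trend
```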
The Amharic language is the official language of over 70 million people, mainly in Ethiopia. An extensive literature survey and a government report reveal that no Amharic character recognition system exists in the country. The Amharic script has 33 basic characters, each with seven orders, giving 310 distinct characters including numbers and punctuation symbols. The characters are visually similar; there is a typeface, but no capitalization. Besides this, there is no standard font for using the language on computers; different stakeholders have developed different fonts in their own ways, without keeping to a standard, which creates incompatibility between different fonts and documents. This project investigates why Amharic optical character recognition has not been addressed by local and international researchers and developers, and finally develops an Amharic optical character recognition system that uses the features and facilities of Microsoft Windows Vista or 7 with the Unicode standard.
In 2001 Högskolan Dalarna launched a master's programme in computer science. This programme has attracted a large number of applications from international students. This has yielded many exciting opportunities, but also given rise to some problems, both practical and academic. A key element of the success in solving some of these problems has been to make the programme highly modular in structure, allowing two intakes per year. This has been the key to developing a peer group support system that is much appreciated by the students. Another key element of the modular structure is that studies can be partly done at partner universities that are members of the INHEE network. INHEE is an International Network for Higher Education in Engineering, and most partners are small European universities.
In this article intelligent systems are placed in the context of accelerated Turing machines. Although such machines are not currently a reality, the very real gains in computing power made over previous decades require us to continually reevaluate the potential of intelligent systems. The economic theories of Adam Smith provide us with a useful insight into this question.
Binocular vision disorders such as heterophoria and hyperphoria are relatively common. This paper shows that if a cave survey team does not take careful account of the possibility of one or more of its members suffering from such a disorder, very serious errors can occur. Both theoretical and empirical evidence is presented. The likely magnitude of these errors implies that BCRA grade 5 cannot be achieved unless surveyors check for, or if necessary correct for, binocular vision disorders. Various techniques to eliminate such errors are explained and compared. Finally, a recommendation is made concerning an additional clause to the notes which accompany the BCRA cave surveying grade definitions: to achieve BCRA grade 5, the possibility of binocular vision disorders must be taken into account.
This paper explores certain issues concerning the Turing test: non-termination, asymmetry and the need for a control experiment. A standard diagonalisation argument to show the non-computability of AI is extended to yield a so-called "April Fool Turing Test", which bears some relationship to Wizard of Oz experiments and involves placing several experimental participants in a symmetrical paradox. The fundamental question asked is whether escaping from this paradox is a sign of intelligence. An important ethical consideration with such an experiment is that, in order to place humans in such a paradox, it is necessary to fool them. Results from an actual April Fool Turing Test experiment are reported. It is concluded that the results clearly illustrate some of the difficulties and paradoxes which surround the classical Turing Test.