Journal of Advanced Computer Science and Technology, 2 (xx) (2013) xxx-xxx
Science Publishing Corporation
www.sciencepubco.com/index.php/JACST
Computer-Aided Diagnosis of Diabetes Using Least Square Support Vector Machine

Behnaz Naghash Almasi, Omid Naghash Almasi*, Mina Kavousi, Amirhossein Sharifinia

Department of Medical Science, Faculty of Nursing and Midwifery, Islamic Azad University, Mashhad, Iran
Department of Electrical Engineering, Islamic Azad University, Gonabad Branch, Iran
Department of Electrical Engineering, Semnan University, Semnan, Iran
Department of Electrical Engineering, Islamic Azad University, Mashhad Branch, Iran

*o.almasi@ieee.org
Abstract
Diabetes is one of the most serious health challenges in both industrial and developing countries; however, early detection and accurate diagnosis of this disease can reduce the risk of diabetic patients developing other related diseases. Because of their effective classification and high diagnostic capability, expert systems and machine learning techniques are gaining popularity in this field. In this study, the least square support vector machine (LS-SVM) was used for diabetes diagnosis. The effectiveness of the LS-SVM was examined on the Pima Indian diabetes dataset using the k-fold cross validation method. Compared with thirteen well-known methods for diabetes diagnosis from the literature, the results show the effectiveness of the proposed method.
Keywords: Diabetes disease diagnosis, k-fold cross validation, Medical diagnosis, Least square support vector machine, Pima Indian Diabetes Datasets.
1 Introduction
Diabetes is a major health problem in both developed and developing countries, and its morbidity is increasing. It is a disease in which the body is unable to produce or properly use insulin [1]. It has been shown that diabetes increases the risks of kidney disease, blindness, nerve damage, and blood vessel damage, and that it contributes to heart disease.
The cause of diabetes continues to be a mystery, although both genetics and environmental factors such as obesity and lack of exercise appear to play roles [2]. The most common form of diabetes is Type 2 diabetes [3]. This type of diabetes results from insulin resistance (a condition in which the body fails to use insulin properly), combined with relative insulin deficiency.
Although detection of diabetes is improving, about half of the patients with Type 2 diabetes are undiagnosed, and the delay from disease onset to diagnosis may exceed 10 years. Thus, earlier detection of Type 2 diabetes and treatment of hyperglycemia and related metabolic abnormalities are of vital importance.
Fortunately, in recent years with an increased emphasis on diagnostic techniques and more effective treatments, the mortality rate from diabetes has declined. A key factor in this approach is the early detection and accurate diagnosis of this affliction [4-6].
Undoubtedly, the evaluation of data taken from patients and the decisions of experts are the most important factors in diagnosis. Therefore, the use of classifier systems in medical diagnosis has been gradually increasing. After all, expert systems and various artificial intelligence techniques for classification also help experts to a considerable extent. Classification systems can help minimize possible errors that might occur due to inexperienced experts, and also allow medical data to be examined in a shorter time and in more detail [6, 7].
Proposed as effective statistical learning methods for classification [8], Support Vector Machines (SVMs) rely on support vectors (SV) to identify the decision boundaries between different classes. An SVM is based on a linear machine in a high dimensional feature space that is nonlinearly related to the input space, which has allowed the development of somewhat quick training techniques despite the large number of input variables and large training sets. SVMs have successfully been used to address many problems including handwritten digit recognition [9], object recognition [10], speaker identification [11], face detection in images [12], and text categorization [13].
The Least Square Support Vector Machine (LS-SVM) was first proposed by Suykens et al. by modifying the formulation of the standard SVM [14]. The LS-SVM modifies the standard SVM at two points: first, it replaces the inequality constraints with equality constraints, which turns the quadratic programming problem into a linear system of equations; second, it uses a squared loss function on the error variable [14, 15].
In this study, LS-SVM was employed to diagnose diabetes. For the training and testing experiments, the Pima Indian diabetes dataset taken from the University of California at Irvine (UCI) machine learning repository was used. It was observed that the proposed method yielded the highest classification accuracy among the thirteen other methods from the literature. Performance was evaluated by the well-known k-fold cross validation method.
The rest of the paper is organized as follows. Section 2 briefly discusses the methods and results of previous studies on diabetes disease diagnosis. Section 3 reviews basic SVM and LS-SVM concepts. Section 4 elaborates on the Pima Indian diabetes dataset. Section 5 presents the experimental results achieved by applying the proposed method to diagnose diabetes. Finally, Section 6 presents our concluding remarks.
2 Review of literature
A great number of approaches have been proposed for automated diagnosis of diabetes on the Pima Indian diabetes dataset, and most of them have managed to achieve high generalization performance.
Deng et al. obtained 78.4% classification accuracy using ESOM [16]. Kayaer et al. [17] achieved 77.08% classification accuracy using a multilayer neural network with the LM algorithm; they used the conventional (one training and one test set) validation method. Temurtas et al. used a probabilistic neural network (PNN) and obtained 78.05% classification accuracy [18]. Smith et al. [19] proposed the neural-network ADAP algorithm to build associative models and achieved an accuracy of 76%. Quinlan [20] applied the C4.5 algorithm and obtained a classification accuracy of 71.1%. Sahan et al. [21] used an Attribute Weighted Artificial Immune System with the 10-fold cross validation method and obtained a classification accuracy of 75.87%. Kumari and Chitra used SVM as a well-known computer-aided diagnostic system for diabetes disease and obtained an accuracy of 78% [22]. In [23], the researchers reached 76.73% accuracy with the SSVM approach. Moreover, in order to compare the performance of the LS-SVM in automated diagnostics, five different variants of Artificial Neural Networks (ANNs), which are frequently used in the literature, were employed in this study. Different kinds of ANNs are determined by their training algorithms and topologies. Training an ANN means adjusting its weights and biases, i.e., selecting from the set of allowed models the one that minimizes the generalization error. In this study, four training algorithms were used to train a three-layer ANN: the first is the well-known Levenberg-Marquardt Back Propagation (LM BP), the second is Gradient Descent Back Propagation (GD BP), the third is Gradient Descent with Momentum Back Propagation (GDM BP), and the fourth is Gradient Descent with Adaptive learning rate Back Propagation (GDA BP). The fifth comparison method is the Radial Basis Function (RBF) network, a well-known ANN variant.
3 Support vector machines for classification
In this section, we summarize the basic SVM concepts with regard to typical two-class classification problems. The support vector machine (SVM), originally developed by Boser et al. [24] and Vapnik [25], is based on Vapnik-Chervonenkis (VC) theory and the structural risk minimization (SRM) principle [25]: it seeks a trade-off between minimizing the training set error and maximizing the margin, in order to achieve the best generalization ability and remain resistant to overfitting. Moreover, one major advantage of SVM is its use of convex quadratic programming, which yields only global minima and therefore avoids being trapped in local minima. For more details, cf. [25, 26], which give a complete description of the theory of SVM.
3.1 Linearly separable case: hard margin SVM
Let us consider a binary classification task with training data $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^n$ are data points and $y_i \in \{-1, +1\}$ are the corresponding labels. The two classes are separated by a hyperplane $w^T x + b = 0$, where $w$ is an $n$-dimensional coefficient vector normal to the hyperplane and $b$ is the offset from the origin.
There are many hyperplanes that can separate the two classes, but the decision boundary should be as far away from the data of both classes as possible; the support vector algorithm therefore seeks an optimal separating hyperplane that maximizes the margin between the two classes of data. Since a wider margin yields better generalization ability, we define a canonical hyperplane [25] such that $w^T x_i + b = +1$ for the closest points on one side (hyperplane $H_1$) and $w^T x_i + b = -1$ for the closest points on the other side ($H_2$). Maximizing the separating margin is then equivalent to maximizing the distance between $H_1$ and $H_2$, which equals $2 / \|w\|$. To maximize the margin, the task is therefore:
$$\min_{w,b} \; \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i(w^T x_i + b) \ge 1, \;\; i = 1, \ldots, N \qquad (1)$$
Therefore, the learning task could be reduced to minimization of the primal Lagrangian:
$$L_p = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{N} \alpha_i \left[ y_i(w^T x_i + b) - 1 \right] \qquad (2)$$
where $\alpha_i \ge 0$ are the Lagrange multipliers. The minimum of the Lagrangian $L_p$ with respect to $b$ and $w$ is given by:
$$\frac{\partial L_p}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{N} \alpha_i y_i = 0, \qquad \frac{\partial L_p}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{N} \alpha_i y_i x_i \qquad (3)$$
Now we substitute $w$ and $b$ back into the primal, which gives the dual Lagrangian:
$$\max_{\alpha} \; L_d = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j x_i^T x_j \quad \text{s.t.} \quad \alpha_i \ge 0, \;\; \sum_{i=1}^{N} \alpha_i y_i = 0 \qquad (4)$$
Obviously, this is a quadratic optimization problem (QP) with linear constraints. From the Karush-Kuhn-Tucker (KKT) conditions, we know that $\alpha_i \left[ y_i(w^T x_i + b) - 1 \right] = 0$; thus, only the support vectors have $\alpha_i > 0$, and they carry all the relevant information about the classification problem. Hence the solution has the form $w = \sum_{i=1}^{N_{SV}} \alpha_i y_i x_i$, where $N_{SV}$ is the number of support vectors, and $b$ is obtained from $b = y_s - w^T x_s$, where $x_s$ is any support vector. Therefore, the linear discriminant function takes the form $f(x) = \operatorname{sgn}(w^T x + b)$.
3.2 Linearly non-separable case: soft margin SVM
In practice, it is often impossible to separate the two classes perfectly, because the data are always subject to noise or outliers. To extend the support vector algorithm to imperfect separation, positive slack variables $\xi_i \ge 0$ [25, 26] are introduced to allow misclassification of noisy data points, and a penalty value $C$ is introduced to account for the misclassification errors of points that cross the boundaries. In fact, the parameter $C$ can be viewed as a way of controlling overfitting. The new optimization problem can therefore be reformulated as follows:
$$\min_{w,b,\xi} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i \quad \text{s.t.} \quad y_i(w^T x_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; i = 1, \ldots, N \qquad (5)$$
Translating this problem into its Lagrangian dual gives:
$$\max_{\alpha} \; \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j x_i^T x_j \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{N} \alpha_i y_i = 0 \qquad (6)$$
The solution to this minimization problem is identical to the separable case except for the upper bound $C$ on the Lagrange multipliers $\alpha_i$.
3.3 Non-linearly separable case: the kernel trick
In most cases, one cannot linearly separate the two classes. To extend the linear learning machine to nonlinear cases, a general idea is introduced: the original input space is mapped into some higher-dimensional feature space where the training set is separable. With this mapping, the discriminant function has the following form:
$$f(x) = \operatorname{sgn}\left( w^T \phi(x) + b \right) \qquad (7)$$
where each point $x$ in the input space is represented as $\phi(x)$ in the feature space. The functional form of the mapping $\phi(\cdot)$ does not need to be known, since it is implicitly defined by the choice of kernel $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$. Thus, the optimization problem can be rewritten as:
$$\max_{\alpha} \; \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{N} \alpha_i y_i = 0 \qquad (8)$$
After the optimal values of $\alpha_i$ have been found, the decision function is based on the sign of:
$$f(x) = \sum_{i=1}^{N} \alpha_i y_i K(x_i, x) + b \qquad (9)$$
As a rule, any positive semi-definite function $K(x, y)$ that satisfies Mercer's condition can serve as a kernel function [27]. A kernel function corresponds to a dot product of two feature vectors in some expanded feature space. Many kernel functions can be employed in SVM; the most commonly used ones are listed in Table 1.
Table 1: The conventional Kernel function
Name                Kernel function
Linear kernel       $K(x, y) = x^T y$
Polynomial kernel   $K(x, y) = (x^T y + c)^d$
RBF kernel          $K(x, y) = \exp\!\left(-\|x - y\|^2 / 2\sigma^2\right)$
MLP kernel          $K(x, y) = \tanh(\kappa\, x^T y + \theta)$
In this table, $c$, $d$, $\sigma$, $\kappa$, and $\theta$ are constants that must be set by the user. For the MLP kernel, a suitable choice of $\kappa$ and $\theta$ is needed for the kernel function to meet Mercer's condition.
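As an illustration, the four kernels of Table 1 can be sketched in a few lines of Python/NumPy. The parameter defaults ($c$, $d$, $\sigma$, $\kappa$, $\theta$) below are arbitrary choices for demonstration, not values used in this study.

```python
import numpy as np

# Sketch of the four kernels listed in Table 1. The default parameter
# values are illustrative only.

def linear_kernel(x, y):
    return float(np.dot(x, y))

def polynomial_kernel(x, y, c=1.0, d=2):
    return float((np.dot(x, y) + c) ** d)

def rbf_kernel(x, y, sigma=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def mlp_kernel(x, y, kappa=1.0, theta=-1.0):
    # The tanh kernel meets Mercer's condition only for suitable
    # (kappa, theta), as noted in the text.
    return float(np.tanh(kappa * np.dot(x, y) + theta))
```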
3.4 Least Square Support Vector Regression
The Least Square Support Vector Regression (LS-SVR), fully described in [28], is used as an approximation tool in this study. Suykens et al. modified the SVR formulation at two points: first, they replaced the inequality constraints with equality constraints, which turns the quadratic programming problem into a linear system of equations; second, they used a squared loss function on the error variable [28, 29]. These modifications greatly simplify the problem, which can be described as follows:
$$\min_{w,b,e} \; J(w, e) = \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^2 \quad \text{s.t.} \quad y_i = w^T \phi(x_i) + b + e_i, \;\; i = 1, \ldots, N \qquad (10)$$
where $e_i$ are error variables that play a role similar to the slack variables $\xi_i$ in Vapnik's SVM formulation, and $\gamma$ is a regularization parameter that determines the trade-off between minimizing the training errors and minimizing the model complexity.
The Lagrangian corresponding to (10) can be defined as:
$$L(w, b, e; \alpha) = J(w, e) - \sum_{i=1}^{N} \alpha_i \left( w^T \phi(x_i) + b + e_i - y_i \right) \qquad (11)$$
where $\alpha_i$ are the Lagrange multipliers. The KKT optimality conditions for a solution are obtained by partially differentiating with respect to $w$, $b$, $e_i$, and $\alpha_i$:
$$\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_{i=1}^{N} \alpha_i \phi(x_i), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{N} \alpha_i = 0, \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow w^T \phi(x_i) + b + e_i - y_i = 0 \qquad (12)$$
After elimination of the variables $w$ and $e$, the following linear system is obtained:
$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix} \qquad (13)$$
where $y = [y_1, \ldots, y_N]^T$, $\mathbf{1} = [1, \ldots, 1]^T$, and $\alpha = [\alpha_1, \ldots, \alpha_N]^T$. The kernel trick is applied here as follows:
$$\Omega_{ij} = \phi(x_i)^T \phi(x_j) = K(x_i, x_j), \;\; i, j = 1, \ldots, N \qquad (14)$$
where $K(\cdot, \cdot)$ is a kernel function meeting Mercer's condition. $b$ and $\alpha$ can then be obtained from the solution of the linear system:
Writing $A = \Omega + \gamma^{-1} I$,
$$b = \frac{\mathbf{1}^T A^{-1} y}{\mathbf{1}^T A^{-1} \mathbf{1}} \qquad (15)$$
$$\alpha = A^{-1} (y - b \mathbf{1}) \qquad (16)$$
Eventually, the resulting LS-SVR model for function estimation can be expressed as:
$$f(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b \qquad (17)$$
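To make the derivation concrete, the following sketch builds the kernel matrix, solves the linear system (13) directly with NumPy, and predicts with (17). The RBF kernel and the default values of $\gamma$ and $\sigma$ are illustrative assumptions, not the tuned values used in the experiments of this paper.

```python
import numpy as np

# Minimal LS-SVM sketch: solve system (13) for (b, alpha), predict
# with (17). gamma and sigma defaults are illustrative only.

def rbf_matrix(X, Z, sigma=1.0):
    # Pairwise squared distances, then the RBF kernel of Table 1.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                                   # 1^T row
    A[1:, 0] = 1.0                                   # 1 column
    A[1:, 1:] = rbf_matrix(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                           # b, alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_matrix(X_new, X_train, sigma) @ alpha + b
```

Note that, unlike the standard SVM, training reduces to a single dense linear solve, which is the simplification the equality constraints and squared loss buy.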
3.5 Model selection
LS-SVMs have two adjustable sets of parameters: the kernel parameter(s) and the regularization parameter $\gamma$. The generalization ability of the LS-SVM depends on the proper choice of these parameters, and its best performance is realized with an optimal choice of both. The optimal choice of these parameters is known as the LS-SVM model selection problem [30-32].
The kernel parameter(s) implicitly characterize the geometric structure of the data in the high dimensional space named the feature space. In the feature space, the data become linearly separable in such a way that the maximal margin of separation between the two classes can be reached. The selection of kernel parameter(s) changes the shape of the separating surface in the input space. Selecting improperly large or small kernel parameter values results in over-fitting or under-fitting of the LS-SVM model surface, so the model becomes unable to separate the data accurately [33, 34].
In non-separable problems, noisy training data introduce slack variables that measure the violation of the margin. Therefore, a penalty factor $\gamma$ is used to control the amount of margin violation. In other words, the penalty factor $\gamma$ determines the trade-off between minimizing the empirical error and the structural risk, and also guarantees the accuracy of the classifier outcome in the presence of noisy training data. Higher $\gamma$ values make the margin hard and the cost of violation too high, so the separating model surface over-fits the training data. In contrast, lower $\gamma$ values make the margin soft, which results in an under-fitting separating model surface. In both cases, the generalization performance of the classifier is unsatisfactory, making the LS-SVM model useless [33, 35].
In this research, we employ a grid-search technique [36] with 5-fold cross-validation to find the optimal LS-SVM model parameters.
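The grid search can be sketched as follows. Here `fit` and `error` are placeholders for the LS-SVM training and validation-error routines (they are not the implementation used in this study); only the search loop over candidate $(\gamma, \sigma)$ pairs with k-fold validation is shown.

```python
import numpy as np
from itertools import product

# Hedged sketch of grid search with k-fold cross-validation over the
# regularization parameter gamma and the RBF width sigma. `fit` trains
# a model; `error` returns its validation error (both caller-supplied).

def grid_search(X, y, fit, error, gammas, sigmas, k=5, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    best_pair, best_err = None, np.inf
    for g, s in product(gammas, sigmas):
        errs = []
        for i in range(k):
            val = folds[i]
            trn = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[trn], y[trn], g, s)
            errs.append(error(model, X[val], y[val]))
        if np.mean(errs) < best_err:
            best_pair, best_err = (g, s), float(np.mean(errs))
    return best_pair, best_err
```

In practice the candidate values are usually spaced on a logarithmic grid, since both parameters act multiplicatively.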
4 The Pima Indian diabetes disease diagnosis problem
In order to perform the research reported in this manuscript, the Pima Indian diabetes dataset taken from the UCI machine learning repository was used [17-19, 37]. This dataset was chosen because it is very commonly used by the other classification systems with which this study is compared on the Pima Indian diabetes diagnosis problem. All patients in this database are Pima Indian women at least 21 years old and living near Phoenix, Arizona, USA.
The dataset, which consists of Pima Indian diabetes disease measurements, contains two classes and 768 samples. The class distribution is:
Class 1: normal (500)
Class 2: Pima Indian diabetes (268)
All samples have eight features. These features are:
1. Number of times pregnant ($x_1$).
2. Plasma glucose concentration at 2 h in an oral glucose tolerance test ($x_2$).
3. Diastolic blood pressure (mm Hg) ($x_3$).
4. Triceps skin fold thickness (mm) ($x_4$).
5. 2-h serum insulin (μU/ml) ($x_5$).
6. Body mass index (weight in kg/(height in m)^2) ($x_6$).
7. Diabetes pedigree function ($x_7$).
8. Age (years) ($x_8$).
Although the dataset is documented as having no missing values, some zeros were in fact recorded where values were missing: five patients had a glucose of 0, 28 had a diastolic blood pressure of 0, 11 more had a body mass index of 0, 192 others had skin fold thickness readings of 0, and 140 others had serum insulin levels of 0. After deleting these cases, there were 460 cases with no missing values.
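This cleaning step can be sketched as follows. The column indices assume the 0-based feature order listed above (glucose, blood pressure, skin fold, insulin, and BMI at indices 1-5), and the toy array in the usage note is illustrative, not the actual dataset.

```python
import numpy as np

# Drop rows where any physiologically impossible zero appears
# (glucose, diastolic blood pressure, skin fold thickness, serum
# insulin, body mass index). Indices follow the feature order of
# Section 4; a final column holds the class label.

def drop_zero_rows(data, cols=(1, 2, 3, 4, 5)):
    mask = np.all(data[:, list(cols)] != 0, axis=1)
    return data[mask]
```

Applied to a toy array with a zero glucose row and a zero blood-pressure row, only the fully measured row survives.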
5 Experimental Results and Discussion
In this section, we introduce the performance evaluation method used to evaluate the proposed method, and then present the experimental results and discuss our observations. All the experiments reported here use RBF kernels for the following reasons:
First, when the relation between the desired output and the input attributes is nonlinear, the RBF kernel maps the dataset nonlinearly into the feature space, where it can be handled. Second, the number of hyper-parameters influences the complexity of model selection, and the RBF kernel has fewer hyper-parameters than the polynomial kernel. Finally, the RBF kernel presents fewer numerical difficulties [38-41].
5.1 Performance evaluation methods
In this study, the k-fold cross validation method was used for performance evaluation of diabetes diagnosis using LS-SVM. k-fold cross validation improves on the holdout method. The dataset is divided into k subsets, and the holdout method is repeated k times: each time, one of the k subsets is used as the test set, and the other k-1 subsets are combined to form the training set. The average error across all k trials is then calculated. The advantage of this method is that it matters less how the data is divided: every data point appears in a test set exactly once and in a training set k-1 times. As k increases, the variance of the resulting estimate decreases. The downside is that the training algorithm must be rerun k times from scratch; in other words, the evaluation takes k times as much computation. A variant of this method randomly divides the data into a test set and a training set k different times; its advantage is that the size of each test set and the number of trials to average over can be chosen independently [42].
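The procedure above can be sketched as follows. `train_and_score` is a placeholder for any routine that trains on the training folds and returns a score on the test fold; it stands in for, but is not, the LS-SVM code used in this study.

```python
import numpy as np

# Sketch of k-fold cross-validation: every point lands in exactly one
# test fold and in k-1 training folds; the k fold scores are averaged.

def kfold_indices(n, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_val_score(X, y, train_and_score, k=10):
    folds = kfold_indices(len(y), k)
    scores = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(train_and_score(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores))
```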
5.2 Results and Discussion
We conducted experiments on the Pima Indian diabetes disease dataset described in Section 4 to evaluate the effectiveness of LS-SVM, and compared our results with those of earlier methods. Table 2 shows the classification accuracies of our method and thirteen previous methods. As the results show, our method with 10-fold cross validation obtained the highest classification accuracy reported so far, 79.66%.
Given the research findings, the SVM-based model that we have developed yielded very promising results in classifying the disease. We believe that the proposed system could be very helpful for physicians in their final decisions about their patients. Using such a tool, they can make reasonably accurate decisions.
Table 2: Classification accuracies obtained with LS-SVM and other classifiers from the literature
Method                 Classification accuracy (%)
ESOM [16]              78.40
MLNN with LM [17]      77.08
PNN [18]               78.05
SVM [19]               75.53
C4.5 [20]              71.10
AWAIS [21]             75.87
SVM [22]               78.00
SSVM [23]              76.73
BP-GD                  70.48
BP-GDM                 71.36
BP-GDA                 73.28
BP-LM                  75.14
RBF                    75.00
LS-SVM                 79.66
6 Conclusion
Classification systems used in medical decision making allow medical data to be examined in a shorter time and in more detail. In this study, a medical decision-making system based on LS-SVM was applied to diagnosing diabetes, and the most accurate learning methods were evaluated. To diagnose diabetes in a fully automatic manner using LS-SVM, experiments were conducted on the Pima Indian diabetes disease dataset. The experimental results strongly suggest that LS-SVM can be helpful in the diagnosis of this disease: compared with thirteen well-known methods from the literature, the proposed method was the most effective.
References
Ehab I. Mohamed, R. Linder, G. Perriello, N. Di Daniele, S. J. Poppl, and A. De Lorenzo, Predicting type 2 diabetes using an electronic nose-based artificial neural network analysis, Diabetes Nutrition & Metabolism, Vol. 15, No. 4, 2002, pp. 215-221.
Kemal Polat, Salih Gunes, and Ahmet Aslan, A cascade learning system for classification of diabetes disease: Generalized discriminant analysis and least square support vector machine, Expert Systems with Applications, Vol. 34, No. 1, 2008, pp. 214-221.
Rajendra Acharya, Peck Ha Tan, Tavintharan Subramaniam, Toshiyo Tamura, Kuang Chua Chua, Seach Chyr Ernest Goh, Choo Min Lim, Shu Yi Diana Goh, Kang Rui Conrad Chung, and Chelsea Law, Automated identification of diabetic type 2 subjects with and without neuropathy using wavelet transform on pedobarograph, Journal of Medical Systems, Vol. 32, No. 1, 2008, pp. 21-29.
David West, Paul Mangiameli, Rohit Rampal, and Vivian West, Ensemble strategies for a medical diagnosis decision support system: A breast cancer diagnosis application, European Journal of Operational Research, Vol. 162, No. 2, 2005, pp. 532-551.
Kemal Polat and Salih Gunes, Breast cancer diagnosis using least square support vector machine, Digital Signal Processing, Vol. 17, No. 4, 2007, pp. 694-701.
Hui-Ling Chen, Bo Yang, Jie Liu, and Da-You Liu, A support vector machine classifier with rough set-based feature selection for breast cancer diagnosis, Expert Systems with Applications, Vol. 38, No. 7, 2011, pp. 9014-9022.
Mehmet Fatih Akay, Support vector machines combined with feature selection for breast cancer diagnosis, Expert Systems with Applications, Vol. 36, No. 2, 2009, pp. 3240-3247.
Vladimir N. Vapnik, Statistical Learning Theory, New York: Wiley, 1998.
Bernhard Scholkopf, Kah-Kay Sung, Christopher J. Burges, Federico Girosi, Partha Niyogi, Tomaso Poggio, and Vladimir Vapnik, Comparing support vector machines with Gaussian kernels to radial basis function classifiers, IEEE Transactions on Signal Processing, Vol. 45, No. 11, 1997, pp. 2758-2765.
Massimiliano Pontil and Alessandro Verri, Support vector machines for 3-D object recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 6, 1998, pp. 637-646.
Vincent Wan and William M. Campbell, Support vector machines for speaker verification and identification, Proceedings of the IEEE Workshop on Neural Networks for Signal Processing, 2000, pp. 775-784.
Edgar Osuna, Robert Freund, and Federico Girosi, Training support vector machines: Application to face detection, In Proceedings of Computer Vision and Pattern Recognition, 1997, pp. 130-136.
Thorsten Joachims, Transductive inference for text classification using support vector machines, In Proceedings of the International Conference on Machine Learning, Vol. 99, 1999, pp. 200-209.
Johan A. K. Suykens, Tony Van Gestel, Jos De Brabanter, Bart De Moor, and Joos Vandewalle, Least Squares Support Vector Machines, World Scientific Publishing, Singapore, 2002.
Johan A. K. Suykens, Joos Vandewalle, and Bart De Moor, Optimal control by least squares support vector machines, Neural Networks, Vol. 14, No. 1, 2001, pp. 23-35.
Deng, D., & Kasabov, N. (2001). On-line pattern analysis by evolving self-organizing maps. In Proceedings of the Fifth Biannual Conference on Artificial Neural Networks and Expert Systems (ANNES) (pp. 46-51).
Kayaer, K., & Yıldırım, T. (2003). Medical diagnosis on Pima Indian diabetes using general regression neural networks. In Proceedings of the International Conference on Artificial Neural Networks and Neural Information Processing (ICANN/ICONIP) (pp. 181-184).
Hasan Temurtas, Nejat Yumusak, and Feyzullah Temurtas, A comparative study on diabetes disease diagnosis using neural networks, Expert Systems with Applications, Vol. 36, 2009, pp. 8610-8615.
Smith, J. W., J. E. Everhart, et al., Using the ADAP learning algorithm to forecast the onset of diabetes mellitus, Proceedings of the Symposium on Computer Applications and Medical Care (Washington, DC), IEEE Computer Society Press, 1988, pp. 261-265.
Quinlan, J. R., C4.5: Programs for Machine Learning, San Mateo, Calif., Morgan Kaufmann Publishers, 1993.
S. Sahan, K. Polat, H. Kodaz, and S. Gunes, The medical applications of attribute weighted artificial immune system (AWAIS): Diagnosis of heart and diabetes diseases, in ICARIS, 2005, pp. 456-468.
V. Anuja Kumari and R. Chitra, Classification of Diabetes Disease Using Support Vector Machine, International Journal of Engineering Research and Applications, Vol. 3, No. 2, 2013, pp. 1797-1801.
Santi Wulan Purnami, Abdullah Embong, Jasni Mohd Zain, and S. P. Rahayu, A New Smooth Support Vector Machine and Its Applications in Diabetes Disease Diagnosis, Journal of Computer Science, Vol. 5, No. 12, 2009, pp. 1003-1008.
Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik, A training algorithm for optimal margin classifiers, In Fifth Annual Workshop on Computational Learning Theory, 1992, pp. 144-152.
Corinna Cortes and Vladimir Vapnik, Support-vector networks, Machine Learning, Vol. 20, No. 3, 1995, pp. 273-297.
Nello Cristianini and John Shawe-Taylor, An Introduction to Support Vector Machines: And Other Kernel-Based Learning Methods, Cambridge, UK: Cambridge University Press, 2000.
Bernhard Scholkopf and Alexander J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, The MIT Press, 2002.
Johan A. K. Suykens, Support vector machines: a nonlinear modeling and control perspective, European Journal of Control, Vol. 7, No. 2-3, 2001, pp. 311-327.
Chen-Chia Chuang, Fuzzy weighted support vector regression with a fuzzy partition, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 37, No. 3, 2007, pp. 630-640.
Xinjun Peng and Yifei Wang, A geometric method for model selection in support vector machine, Expert Systems with Applications, Vol. 36, No. 3, 2009, pp. 5745-5749.
Shuzhou Wang and Bo Meng, Parameter selection algorithm for support vector machine, Procedia Environmental Sciences, Vol. 11, 2011, pp. 538-544.
Olivier Chapelle, Vladimir N. Vapnik, Olivier Bousquet, and Sayan Mukherjee, Choosing multiple parameters for support vector machines, Machine Learning, Vol. 46, No. 1, 2002, pp. 131-159.
S. Sathiya Keerthi, Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms, IEEE Transactions on Neural Networks, Vol. 13, No. 5, pp. 1225-1229.
Peter Williams, Sheng Li, Jianfeng Feng, and Si Wu, A geometrical method to improve performance of the support vector machine, IEEE Transactions on Neural Networks, Vol. 18, No. 3, 2007, pp. 942-947.
Sheng Ding and Xiaoming Liu, Evolutionary computing optimization for parameter determination and feature selection of support vector machines, IEEE Conference on Computational Intelligence and Software Engineering, 2009, pp. 1-5.
Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, A practical guide to support vector classification, Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 2003. Available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
Christopher J. Merz and Patrick M. Murphy, UCI repository of machine learning databases, http://www.ics.uci.edu/~mlearn/MLRepository.html, 1996.
S. Sathiya Keerthi and Chih-Jen Lin, Asymptotic behavior of support vector machines with Gaussian kernel, Neural Computation, Vol. 15, No. 7, 2003, pp. 1667-1689.
Hsuan-Tien Lin and Chih-Jen Lin, A study on sigmoid kernels for SVM and the training of non-PSD kernels by SMO-type methods, Technical report, Department of Computer Science, National Taiwan University, 2003, pp. 1-32.
Antoine Bordes, Seyda Ertekin, Jason Weston, and Leon Bottou, Fast kernel classifiers with online and active learning, The Journal of Machine Learning Research, Vol. 6, 2005, pp. 1579-1619.
Jiancheng Sun, Chongxun Zheng, Xiaohe Li, and Yatong Zhou, Analysis of the distance between two classes for tuning SVM hyperparameters, IEEE Transactions on Neural Networks, Vol. 21, No. 2, 2010, pp. 305-318.
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%j}AhIhICJEHPJUaJ***\*]*s*t*u*v*k+l+++[,\,^,_,,,,,,,,,̹wwwwmcmT>+jU
hL3hICJEHPJUVaJhL3hICJEHPJaJhN\RCJPJaJh7CJPJaJhICJPJaJ%jMhL3hICJEHPJUaJ+j\U
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%jJhL3hICJEHPJUaJ,,,,,,,,,--l-m-n-s-t-u------ٽ̊zizi̊VGhL3hICJEHPJaJ%jhL3hICJEHPJUaJ!hL3hI6CJH*PJ]aJhL3hI6CJPJ]aJhICJPJaJ%jThIhICJEHPJUaJ+jU
hL3hICJEHPJUVaJhL3hICJEHPJaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%jQhIhICJEHPJUaJ---------- .
...
..öv`M@1h},hCJPJ_H)aJh},hCJPJaJ%ju\h},hCJEHPJUaJ+jYU
h},hCJEHPJUVaJh},hCJEHPJaJ%jh},hCJEHPJUaJhH*!CJPJaJhCJPJaJhICJPJaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%jIXhH*!h)CJEHPJUaJ+jNU
hL3h)CJEHPJUVaJ...j.k.l.m...........ubS=*%jfhL3hICJEHPJUaJ+j[U
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJ%jgah},hCJEHPJUaJ+jZU
h},hCJEHPJUVaJh},hCJEHPJaJ%jh},hCJEHPJUaJhCJPJaJhICJPJaJhL3hICJPJaJh},hCJPJaJ"h},hCJPJZ_H)aJo(...k.l...}$$7$8$H$Ifa$gd},$$7$8$H$Ifa$gd},$7$8$H$a$gdITkda$$Ifl0&?@
t644
layt},...2/3/N/R/s$$7$8$H$Ifa$gd},7$7$8$H$If`7gd}, 7$8$H$gdI$7$8$H$a$gdITkdef$$Ifl0&?@
t644
layt},........///
/!/"/#/1/2/3/4/J/K/кߗߗߗ|r_P:+j]U
h},hTCJEHPJUVaJh},hTCJEHPJaJ%jh},hTCJEHPJUaJhTCJPJaJhICJPJaJ!hL3hI6CJH*PJ]aJhL3hI6CJPJ]aJ%jihL3hICJEHPJUaJ+j\U
hL3hICJEHPJUVaJhL3hICJEHPJaJhL3hICJPJaJ%jhL3hICJEHPJUaJK/L/M/N/O/P/Q/S/T/k/l/q/r///////̥̿~taR<+j^U
h},hTCJEHPJUVaJh},hTCJEHPJaJ%jh},hTCJEHPJUaJhTCJPJaJhICJPJaJhL3hI6CJPJ]aJhL3hICJPJaJhL3hCJPJaJh},h8CJPJaJh},hxhCJPJaJh},hCJPJaJ%jh},hTCJEHPJUaJ%jlh},hTCJEHPJUaJR/S/T/////$$7$8$H$Ifa$gd},7$7$8$H$If`7gd}, 7$8$H$gdITkdq$$Ifl0&?@
t644
layt},/////////K0L0b0c0d0e0f0000̥̿v`M3zgXB/g%jhL3hICJEHPJUaJ+jeU
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhxhCJPJaJhL3hICJPJaJ"hghZ$5CJ\aJnH tH hIhI^JaJnH tH hgh^JaJnH tH hgh^-CJ^JaJnH tH hL3hICJaJ!jhL3hICJEHUaJ!jhIhICJEHUaJ>3@3A3C3D3E3F333334444444444444ջջձo\OBOh},h8CJPJaJh},hCJPJaJ%jh},hTCJEHPJUaJ+jfU
h},hTCJEHPJUVaJh},hTCJEHPJaJ%jh},hTCJEHPJUaJhCJPJaJhICJPJaJhL3hI6CJPJ]aJhxhCJPJaJhL3hICJPJaJhQ{CJPJaJhN\RCJPJaJh^pCJPJaJ444444
55|p$$Ifa$gd},
7$If`7gd},gdI$7$8$H$a$gdITkdJ$$Ifl0&?@
t644
layt},$$7$8$H$Ifa$gd},444444 5
555
5555555ҿzmzZK5+jhU
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJh},h8CJPJaJh},hTCJPJaJ%jh},hTCJEHPJUaJ+jgU
h},hTCJEHPJUVaJh},hTCJEHPJaJ%jh},hTCJEHPJUaJhTCJPJaJhICJPJaJhL3hICJPJaJhL3hCJPJaJ5555555Q7l7p7ve$$7$8$H$Ifa$gd},7$7$8$H$If`7gd},$7$8$H$a$gdIgdI 7$8$H$gdIgdITkd$$Ifl0&?@
t644
layt}, 5555555566k6l666661727P7Q7R7h7i7j7k7¶̎̎̎̎}p\K}!jih},h_*CJEHUaJ'jiU
h},h_*CJEHUVaJh},h_*CJEHaJ!jh},h_*CJEHUaJhxhCJPJaJhghrCJaJnH tH hIhI5\]nH tH hI5\]nH tH hICJPJaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%jhL3hICJEHPJUaJk7l7o7q7r7w7x7777777777788+8,8̹̹kX̹B+jlU
hL3hICJEHPJUVaJ%jަhL3hICJEHPJUaJ+jkU
hL3hICJEHPJUVaJ%jhL3hICJEHPJUaJ+jjU
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhL3hICJPJaJhL3h_*CJPJaJh},h8CJPJaJh},h_*CJPJaJp7q7r78888s$$7$8$H$Ifa$gd},7$7$8$H$If`7gd}, 7$8$H$gdI$7$8$H$a$gdITkd$$Ifl0&@
t644
lalyt},,8-8.88888888888888̹wm`M>h},h_*CJEHPJaJ%jh},h_*CJEHPJUaJhL3hxhCJPJaJhICJPJaJhxhCJPJaJ%jOhL3hICJEHPJUaJ+jmU
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhL3hICJPJaJ%jhL3hICJEHPJUaJ%j.hL3hICJEHPJUaJ8888888899%9&9'9(9n9o9p9q9öt^KC2!jh},hxhCJEHUaJhICJaJ%j϶hL3hICJEHPJUaJ+joU
hL3hICJEHPJUVaJhL3hICJEHPJaJ%jhL3hICJEHPJUaJhL3hICJaJhxhCJaJh},h8CJPJaJh},h_*CJPJaJ%jh},h_*CJEHPJUaJ%jh},h_*CJEHPJUaJ+jnU
h},h_*CJEHPJUVaJ888o9p999$$7$8$H$Ifa$gd},Y$7$8$H$If`Ygd}, 7$8$H$gdITkdd$$Ifl0&@
t644
lalyt},q99999999999999::::E:F:::::;;;νuiu]uiiiQh0b{B*CJaJphhN\RB*CJaJphhwjB*CJaJphh^pB*CJaJphhH*!B*CJaJphhL3hIB*CJaJphhL3hxhCJaJh},h8CJaJh},hxhCJaJ!jh},hxhCJEHUaJ!jh},hxhCJEHUaJ'jpU
h},hxhCJEHUVaJh},hxhCJEHaJ999;;0;5;P;znn$$Ifa$gdk-$nn]n^na$gd0b{$7$8$H$a$gdQ{$7$8$H$a$gdN\R 7$8$H$gdITkd$$Ifl0&@
t644
lalyt},;;;;
;;;/;0;P;_;`;w;x;y;z;{;;;;;պudWE#jU
h0b{h0b{CJUVaJh0b{h\CJaJ!jӾh0b{h0b{CJEHUaJ#j^U
h0b{h0b{CJUVaJh0b{h\CJaJjh0b{h\CJUaJh0b{hICJaJh0b{hI5CJ\aJh0b{hICJaJnH tH h0b{CJaJnH tH h0b{h0b{CJaJnH tH hL3h_|kB*CJaJphhIB*CJaJphP;Q;_;{;thY$d$Ifa$gd_|k$$Ifa$gdk-kd$$IfTl0|Q
t09644
lapytk-T{;|;;;thY$d$Ifa$gd_|k$$Ifa$gdk-kdV$$IfTl0|Q
t09644
lapytk-T;;;;;;;;;;;;;;;;;;;;;;
<<<߹Ĺ߹sdYJ?hIhICJaJjhIhICJUaJh%hICJaJhL3hIB*CJaJph!jh0b{h0b{CJEHUaJ#jU
h0b{h0b{CJUVaJ!jch0b{h0b{CJEHUaJ#jU
h0b{h0b{CJUVaJh0b{h\CJaJh0b{hICJaJh0b{h\5CJ\aJjh0b{h\CJUaJ!jh0b{h0b{CJEHUaJ;;;;thY$d$Ifa$gd_|k$$Ifa$gdk-kd$$IfTl0|Q
t09644
lapytk-T;;;;thY$d$Ifa$gd_|k$$Ifa$gd\kdG$$IfTl0|Q
t09644
lapytk-T;;;U=V===tk_SSJgd;!$7$8$H$a$gdI$7$8$H$a$gd\ 7$8$H$gdIkd$$IfTl0|Q
t09644
lapytk-T<<<<<-<.</<0<1<6<7<><?<@<A<B<Y<Z<[<\<]<<<<<<<<<<Ȼأ،rأe[NjU
h\EHUVjfh\EHUj!hIhIEHUjh\h\EHUjU
h\EHUVjh\EHUj*hIhIEHUh%hICJaJjTh\h\EHUjU
h\EHUV
h\EHjh\EHUjhIhICJUaJhIhICJaJjhIhIEHU<<<<<<<<<<<=
====T=V=\========ö٬΄xi]RJRBRhN\RCJaJhKRCJaJh$JhICJaJhICJaJnH tH hIhI5\]nH tH hI5\]nH tH hICJaJjth\h\EHUjU
h\EHUV
h\EHjh\EHUjLhIhIEHUhIhICJaJh%hICJaJjhIhICJUaJjfh\EHUjIh\h\EHU==:>@>y>z>???? ??????????????????ʿtmeZMej[|hxhh8EHUjݢU
h8UVjHfh8UhUsh8)hIh8B*CJOJQJ^JaJphh8h8CJaJhIh8jhwhk-h8EHUj&ޢU
h8UVh8jh8Uh$JhwjCJaJhICJaJh<,&CJaJhN\RCJaJhQ{hICJaJh$JhICJaJhKRCJaJ=???????^HYxx$7$8$H$If`Ygdk-Xkd{$$Ifl40;!&u
t644
laytk-$xx$7$8$H$Ifa$gdk-7$7$8$H$If`7gdk-$d1$7$8$H$a$gdI$d1$7$8$H$a$gdN\R?????????????????????@ȽȥԓobUboKE
hk-EHjfhk-EHUjhIhIEHUhIhICJPJaJ!jhIhICJPJUaJhICJaJh*hICJaJ#h hI6CJOJQJ^JaJh8h8CJaJjhk-hwjEHUjU
hwjUVh8jh8UhUsh8 hIh8CJOJQJ^JaJ%hIh8CJOJQJ^JaJ?????(Xkd$$Ifl40;!&u
t644
laytk-$xx$7$8$H$Ifa$gdk-Y$7$8$H$If`Ygdk-XkdJ$$Ifl40;!&u
t644
laytk-??IAAAAAAANXkd$$Iflz0=&
t644
laytk-$xx$7$8$H$Ifa$gdk-7xx$7$8$H$If`7gdk-$7$8$H$a$gdI$7$8$H$a$gdk-$d1$7$8$H$`a$gdI@
@@@@U@V@]@^@_@`@a@x@y@z@{@|@@@@@@@@@ʽʰʙyʽj_R_jHjhghk-EHUjhIhIEHUhIhICJaJjhIhICJUaJjhk-hk-EHUjߢU
hk-EHUV
hk-EHj ghk-EHUjhIhIEHUhIhICJPJaJh*hICJPJaJ!jhIhICJPJUaJjfhk-EHUj˫hk-hk-EHUjߢU
hk-EHUV@@@@@AAAAAAAAAAAAAAAAAAAxq`UHU>jHlhk-EHUj^hIhIEHUhIhICJaJ hL3hICJOJQJ^JaJhUshk-h8hk-CJaJhIhk-jchk-hk-EHUjxߢU
hk-UVhk-jhk-Uh*hk-CJaJhICJaJh*hICJaJjhIhICJUaJjhghk-EHUjhk-hk-EHUjU
hk-EHUVAAAAAAAAYBZBaBbBcBdBeB|B}B~BBBBBBBBBBBBǼǩǒxǼǩkaTj5U
hk-EHUVjhhk-EHUjG\hIhIEHUjxYhk-hk-EHUj(U
hk-EHUVjhhk-EHUj3hIhIEHUhIhICJaJhH*!CJaJh*hICJaJjhIhICJUaJjHlhk-EHUj]0hk-hk-EHUj_U
hk-EHUV
hk-EHBBBBBBBBBBBBBBBBBBBBBBBBBBνylν_UHjIU
hk-EHUVjHlhk-EHUjOhIhIEHUh*hICJPJaJjhk-hk-EHUj?U
hk-EHUV
hk-EHjlhk-EHUjhIhIEHUhIhICJPJaJ!jhIhICJPJUaJh*hICJaJjhIhICJUaJjhhk-EHUjhk-hk-EHUBBBBBCCCCCCCC4C5C6C7C8C=C>C?C@CWC轰轃vcXQ@ hIhICJOJQJ^JaJhUshk-hIhICJaJ%hIhICJOJQJ^JaJj0hk-hk-EHUjU
hk-EHUV$hIhICJOJPJQJ^JaJjhH*!hH*!EHvUjU
hH*!EHUV
hk-EHhk-CJPJaJhICJPJaJ!jhIhICJPJUaJjHlhk-EHUjhk-hk-EHUABCC8C=C>C?CCkkd$$Ifl4F>"%
t|644
laytB$xx$7$8$H$Ifa$gdB$7$8$H$If`gdk-$xx$7$8$H$Ifa$gdI$7$8$H$a$gdk- 7$8$H$gdH*!WCXCYCZC[C\C]C^C_CvCwCxCyCzC{C|C}C~CCCCCCCCCsfSHh*hICJaJ%hIhICJOJQJ^JaJjphBhH*!EHUjU
hH*!EHUV
hH*!EHjHlhH*!EHUjhBhBEHUjU
hBEHUV
hBEHjHlhBEHUhUshk- hIhICJOJQJ^JaJhBhBEHjHlhk-EHUjhk-hk-EHUjVU
hk-EHUV?C[C\C]C^CzC{CmX$xx$7$8$H$Ifa$gdIkkd5$$Ifl4F>"%
t|644
laytB$xx$7$8$H$Ifa$gdI$7$8$H$If`gdB{C|C}CCCq\$xx$7$8$H$Ifa$gdI
$7$8$H$IfgdH*!$xx$7$8$H$Ifa$gdIkkd$$Ifl4F>"%
t|644
laytBCC5D6DRDWDt_$xx$7$8$H$Ifa$gdB7$7$8$H$If`7gdB$7$8$H$a$gdBkkd$$Ifl4F>"%
t|644
laytBCCCCCCCCCCCCCCDDDDD4D5D6D7DNDwqdWwJB>hBjhBUh*hH*!CJPJaJj>hBhBEHUjIU
hBEHUV
hBEHj9hBEHUj8hIhIEHUhIhICJPJaJ!jhIhICJPJUaJh*hICJPJaJhICJPJaJ%jihBhBCJEHPJUaJ!jQU
hBCJPJUVaJhBCJPJaJjhBCJPJUaJNDODPDQDRDWDXD^D_DvDwDxDyDzDDDDDDDD{j]P]jCj]h$JhICJPJaJj&hIhIEHUhIhICJPJaJ!jhIhICJPJUaJ!j"hBhBCJEHUaJj&U
hBCJUVaJhBCJaJjhBCJUaJh$JhICJaJhUshIhIhICJaJ%hIhBCJOJQJ^JaJjhBUjhBhBEHUj~U
hBUVWDXDDDEEs$xx$7$8$H$Ifa$gd87$7$8$H$If`7gd8$7$8$H$a$gdBVkd"$$Ifl0a!&&b
t644
laytBDDDDDDDDDDDDDDDDDDDDDDEǻǉzi_RD:h8CJPJaJjh8CJPJUaJh$JhBCJPJaJhICJPJaJ!jhBhBCJEHUaJjU
hBCJUVaJhBCJPJaJ!j6hBhBCJEHUaJjYU
hBCJUVaJhBCJaJjhBCJUaJh$JhICJPJaJ!jhIhICJPJUaJhIhICJPJaJjVhIhIEHUEEEEEE E%E&E=E>E?E@EAEHEIEJELEEEͺziZOBOZ6jhwjCJUaJjhIhIEHUhIhICJaJjhIhICJUaJ!jh8h8CJEHUaJj#U
h8CJUVaJh8CJaJjh8CJUaJh$JhICJaJhUsh8h8h8CJaJ%hIh8CJOJQJ^JaJjh8CJPJUaJ%jFhBhwjCJEHPJUaJ!jU
hwjCJPJUVaJE EEEFFxc$xx$7$8$H$Ifa$gd\7x$7$8$H$If`7gd\$7$8$H$a$gdI$7$8$H$a$gdwjXkds$$Ifl0!&_)
t644
layt8EEEEEEEEEEEEEEEEEEEEEFF}pbVN?jU
h8CJUVaJh8CJaJjh8CJUaJhICJOJQJ^JaJjhwjhwjEHUjU
hwjEHUVj#Q hwjEHU
hwjEHjhIhIEHUhIhICJaJjhIhICJUaJh$JhICJaJjhwjCJUaJ!jhwjhwjCJEHUaJj|U
hwjCJUVaJhwjCJaJFFFFFFF6F7F8F9F:F?F@FAFFFFFFFĽĽujbWOK@jsU
h\UVh\j} h\Uh*h\CJaJhICJaJh*hICJaJ hL3hICJOJQJ^JaJ%hIh8CJOJQJ^JaJjh8h\EHUjU
h\UVh8jh8UhUshIh\hICJaJ%hIhICJOJQJ^JaJjh8CJUaJ!jnh8h8CJEHUaJFF:F?F@F}&Vkd$$Ifl0-!&|
t644
layt\$xx$7$8$H$Ifa$gd\7xx$7$8$H$If`7gd\Vkd&$$Ifl0-!&|
t644
layt\@FAFFFFFFFFFHlcccW$7$8$H$a$gdN\RgdVVkd!$$Ifl0-!&|
t644
layt\$xx$7$8$H$Ifa$gd\7$7$8$H$If`7gd\$7$8$H$a$gdI 7$8$H$gdI
FFFFFFFFFFFFFAGBG\G]GtGuGvGwGzGŷtlt`lQ@`t!jhRhRCJEHUaJjU
hRCJUVaJjhRCJUaJhRCJaJh.h.CJaJh.CJaJ!hVhV5\]_HnH tH hgK5\]_HnH tH !hVhgK5\]_HnH tH hV5\]_HnH tH hUshIh\hICJaJ%hIh\CJOJQJ^JaJj} h\Ujh\h\EHUzG}GGG-H.HyHHHHHHII{I|IIIJJJJJJJJJJJKKKoKpKKKKKKKKKLLLʿݿݿ|k!j5h+h+CJEHUaJjU
h+CJUVaJ!j`h+h+CJEHUaJjU
h+CJUVaJh+CJaJjh+CJUaJh<,&h<,&CJaJh<,&h.CJaJh<,&CJaJhN\RCJaJhRCJaJh.h.CJaJhOCJaJ*HJNO?OQQQQQQRPRRR:SuSS+TXTU%
&F7$8$H$gdQ{ 7$8$H$gdQ{$7$8$H$a$gdQ{$7$8$H$a$gdy
&Fd7$8$H$^`gdQ{$7$8$H$a$gdN\RLL`LaLLLLLLLZM[MMMMMMMMMWNXNbNhNwNyNzN}N~NNNNNNNNNNNNNɸɧ藏{pphphhgKCJaJhgKhgKCJaJ*h.CJaJh<,&h<,&CJaJhN\RCJaJh<,&CJaJhOCJaJ!jh+h+CJEHUaJ!j
h+h+CJEHUaJjU
h+CJUVaJh+CJaJhRCJaJh.h.CJaJjh+CJUaJ(NN O
OO>O?O|O}OOOOOOOOOOO
PPJPKPPPPPQQWQXQQQQq`UhyhQ{CJaJ!hyhN\RB*CJPJaJphh`1B*CJPJaJph!hyhyB*CJPJaJph!hyhQ{B*CJPJaJphhQ{B*CJPJaJph!h@4hQ{B*CJPJaJphhIhI^JaJnH tH hQ{hQ{^JaJnH tH hgKhgK5\]nH tH hOCJaJhgKhgKCJaJ!QQQQQQQRR1R3R4RJRKRLRMRNRORPRSRRRRRRļļxaļļļG3j@IU
hL3hQ{CJEHOJQJUV^JaJ-jhL3hQ{CJEHOJQJU^JaJ3j?IU
hL3hQ{CJEHOJQJUV^JaJ$hL3hQ{CJEHOJQJ^JaJ-jhL3hQ{CJEHOJQJU^JaJhQ{CJaJhKFhQ{CJaJ!hKFhQ{B*CJPJaJph!h@4hQ{B*CJPJaJphhQ{B*CJPJaJphRRRRRRRRRRRRRRRRSSS4S5S6S7S8S:S~~ h=hICJOJQJ^JaJhIhI^JaJnH tH hL3hICJOJQJ^JaJh@4hQ{CJaJ-j&hL3hQ{CJEHOJQJU^JaJ3jFIU
hL3hQ{CJEHOJQJUV^JaJ$hL3hQ{CJEHOJQJ^JaJhKFhQ{CJaJhQ{CJaJ-jhL3hQ{CJEHOJQJU^JaJ-j7hL3hQ{CJEHOJQJU^JaJUUV3WXXXXn]o]]]<_w`x`y`z`$7$8$H$a$gdQ{$7$8$H$a$gd_|k$7$8$H$a$gdQ{ 7$8$H$gdI 7$8$H$gdR$7$8$H$a$gdN\R$7$8$H$a$gdI
&Fd^`gdI$7$8$H$a$gdJ3(WWZX[XcXdXxXyXXXXXXXXXXXXXXXXXXYY Y/Yŷŷᩘ}qbN&h$"hI6CJOJQJ]^JaJhRhR5\]nH tH hI5\]nH tH hRhI5\]nH tH hR5\]nH tH h=hRCJOJQJ^JaJhICJOJQJ^JaJhN\RCJOJQJ^JaJhJUCJOJQJ^JaJh]CJOJQJ^JaJ h=hICJOJQJ^JaJhKRCJOJQJ^JaJ/Y0YNYVYWYnYoYpYYYYYYZZ ZPZSZZZZZU[V[w[z[[[[[[\ \\\7\8\\\\\i]k]m]ᴠ~~phN\RCJOJQJ^JaJ&hBS%hI6CJOJQJ]^JaJhBS%CJOJQJ^JaJ&h$"hI6CJOJQJ]^JaJh$"CJOJQJ^JaJ h=hQ{CJOJQJ^JaJhQ{CJOJQJ^JaJ h=hICJOJQJ^JaJhsUmCJOJQJ^JaJ+m]n]o]u]]]]]]c^d^^^^^^^!_'_0_1_;_<_A_ƺxjjjYxjH hL3h_|kCJOJQJ^JaJ h?hQ{CJOJQJ^JaJhQ{CJOJQJ^JaJ hQ{CJOJQJZ^JaJo( h
/hQ{CJOJQJ^JaJ h=hQ{CJOJQJ^JaJhRhR5\]nH tH hI5\]nH tH hRhI5\]nH tH hR5\]nH tH h=hRCJOJQJ^JaJhICJOJQJ^JaJA_B_y_z______ `!`v`w`y`|``aaaaaaa$a+a,a/a4a;akl0mzmOoppqUrsstv?w\xxy%$
&F7$8$H$a$gd'%$
&F7$8$H$a$gd
%$
&Fa$gd%$
&F7$8$H$a$gdPSfh0m8m9m$gdZ|$!$a$gd&!$a$gdZ|Skd$$$Ifl0&
t644
la/y0y$7$8$9DH$^a$gd9
0&P 1h:p&.. A!n"n#n$n%DdL
C(ABB-Logob9yj苇ӌnpD
n
yj苇ӌnpPNG
IHDR4;L~gIFxXMP DataXMP?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?> ~}|{zyxwvutsrqponmlkjihgfedcba`_^]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!
.PLTEPUኍ%*駪nq38BG噛|춸ͤ_czQj!tRNS!bKGD k=cmPPJCmp0712HsIDATx^b<gGti_!a^ ƤΟo>,ɲ-M?i(e鷕vy%SnK41.lXLDNWvφ>mbo
]2ӾovMx^\+_G}Nҋ꒚^qp'yUW4ҁ3aEwGWݞ/RH#soAoR'ǊO&}CE}Eߊu>u^h~,,6A4_?E-oX=F})\om P?zN@fgJU]bBT@g[]Vl11_ƟfVe힃'с+o~~?Y/Kg~\l%
ė_CU2M?udYzÏ==
X^_(Z0|OlXTfL,3?laRO9#x}ҏ'bgZ_c6?B8{R|s*5'ݴ}?
''Y+Ǉ'Ez~B"=Gğyd`pw
˺_63|V#台UoFR3߉߿p)-7ď~OS~>|zm(AOW#`f T=ʺ>V%|du!#>0vv?17P_~O{?|R7o
͞Ug~?{ʟd`A̛EEgZ %Ww.+j}*=?{S/(V$ٗu /\|#J^d']ѹq%O)G(l-ezSmnsr8]қѾtjAfzӼcqCGx=an{>oM-->CӖr>߰vL_QoKi>%Qft?=S+ϳ}gw{=ў>C+#~L{~X'T;B
{zwy)~prhE/oL˾އa%dͣv/v3JNMJz߯=Fx)-X4MYć10+W6h_.Rs*|Fk"\i^_\EJ>Sը/gIC1]z2kw u>}/s4\6˥=MQEQTz{c&܍}{$C}yʳo[#D?eT_Yxx+NzruN >hN]Rg˶[vgAuH58>/Ƿ?,w%
نyI8ɏ:0Qn_QG^sDgOo'ׄ쩂V?(5@يIbCws_Kqʿmxa-mZbǏ?f4u@1k>bgEmؒp:fVd"xqX#a;R#~V|np4'w4mBoˏ 7QH9ZA֍QB?yŻ]x\3>3{AЍ=Tz6nᖽg
12g|~~**5|xuҔ@eSzyt\>_\d/yP3]JOIA~JT ue3oc2$FoT: ~2:[z}qyn:MST0|/[`zV<
_hwktOxIC9|>q8)|_[pwG5n)ektG>9>]L[ ~UOu>h˓Migwju{MRçLUekӓڋ|u|AzR?]fO~C|:"Uz@1)e|RӟIy(ҧVɐW;s*}zSnE5E%JnIVƧ3y./uzivOo'>_e(?!}:VWIszӹO0q_؞`t#/(?w=BW%s^/zǽaK;]^?0{B_X7C|q["z{[{˥ߧHs}qC!>פӹ.q_~i>w3Me\ОANOM{\lp\Τt֓XW>5YNEZe(%s'|vim_#4.! *%}'Sw&dbJJO[Sݞ
%WT#J`ϰ{?Ms4>}1řh@,Kp&tuwЀ_NEE
ɄҿyM< +YJϗGۏh θl#6}o6osy{#tX\XMdϟI>.{#j7E};~*[G@vF~!lA#Ts_a"0ܯ@)t0}*%L_0}4J˯4,˟x=PJ__p|_p|H=s+U~~{dQS!SNB/D}!Q_$13!SPu!Q_B/D}|ĠDԇ{pCHmGRl TIENDB`l$$If!vh5_5#v_#v:Vl4I6+5_54l$$If!vh5_5#v_#v:Vl4iI6+5_54l$$If!vh5_5#v_#v:Vl4I6+5_54K$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65TK$$If!vh5#v:Vl
t65T%Dd
$
3A"##
12**