
ACKNOWLEDGEMENT

It gives me great pleasure to present this dissertation on Evolutionary Algorithms for Multi-Objective Optimization: Modelling and Comparative Evaluation. The project work has certainly been a tremendous learning experience.

Apart from my own efforts, the success of this project depended largely on the encouragement and guidance of many others. I take this opportunity to express my gratitude to the people who have been instrumental in the successful completion of this project.

First and foremost, I would like to thank my guide, Dr. Satish S. Chinchanikar (Professor, Department of Mechanical Engineering, VIIT, Pune) for his valuable guidance and advice. My special thanks to my co-guide, Mr. Mahendra G. Gadge (Assistant Professor, Department of Mechanical Engineering, VIIT, Pune) for his valuable guidance and advice. He inspired me greatly to work on this project, and his willingness to motivate me contributed tremendously to it.

I would like to thank Dr. Atul P. Kulkarni (Associate Professor and Head, Department of Mechanical Engineering, VIIT, Pune) for allowing me to work on this project and giving me valuable guidance and advice.

I am highly grateful to Dr. Bilavari S. Karkare (Principal), and I also thank our institution and the faculty and technical staff of the Mechanical Engineering department, who helped me directly or indirectly during this project work. I also extend my heartfelt thanks to my family, siblings, and all my friends and well-wishers.

Omkar Mahesh Manav

LIST OF FIGURES

Fig. No. Name of Figure Page No.

1.1 Hierarchy of Computational Intelligence 2

1.2 Approaches to Computational Intelligence 3

1.3 Global Search Optimization Hierarchy 6

1.4 Synergies of Computational Intelligence 7

1.5 Applied Optimization and Learning Methodology 12

3.1 Workflow for Chapter 3 28

3.2 Evolutionary Model 33

3.3 NSGA II Algorithm 41

3.4 Rank and Pareto for Ra (35HRC) 50

3.5 Pareto-front for Tf (35HRC) 50

3.6 Average distance between consecutive generations (35HRC) 50

3.7 Rank and Pareto for Ra (45HRC) 52

3.8 Pareto-front for Tf (45HRC) 52

3.9 Average distance between consecutive generations (45HRC) 52

3.10 SPEA2 Algorithm 54

3.11 Rank and Pareto for Ra (35HRC) 58

3.12 Pareto-front for Tf (35HRC) 58

3.13 Average distance between consecutive generations (35HRC) 58

3.14 Rank and Pareto for Ra (45HRC) 59

3.15 Pareto-front for Tf (45HRC) 59

3.16 Average distance between consecutive generations (45HRC) 60

3.17 PSO Algorithm 66

3.18 Pareto spread surface roughness and cutting force 35HRC 73

3.19 3D surface plot of optimal Ra with best position 35HRC 73

3.20 3D surface plot of Tf with best position 35HRC 74

3.21 Depth of cut influence on cutting forces 35 HRC 74

3.22 Pareto spread surface roughness and cutting force 45HRC 75

3.23 3D surface plot of optimal Ra with best position 45HRC 75

3.24 3D surface plot of Tf with best position 45 HRC 76

3.25 Depth of cut influence on cutting forces 45 HRC 76

3.26 Solution Spectrum for 35 HRC NSGA II 77

3.27 Solution Spectrum for 45 HRC NSGA II 77

3.28 Solution Spectrum for 35 HRC PSO 77

3.29 Solution Spectrum for 45 HRC PSO 77

3.30 Solution Spectrum for 35 HRC SPEA2 77

3.31 Solution Spectrum for 45 HRC SPEA2 77

4.1 Workflow for Chapter 4 80

4.2 Simple network 82

4.3 Multi-layer feed forward network structure 85

4.4 Feed forward neural network for AISI 4340 Hard turning 88

Plots for NN 35 HRC

4.5 Performance plot of Network 89

4.6 Training state of Network at each epoch 89

4.7 Training error in Ra 90

4.8 Regression fit plot for Ra 90

4.9 Training error in Ft 90

4.10 Regression fit plot for Ft 90

4.11 Training error in Fa 91

4.12 Regression fit plot for Fa 91

4.13 Training error in Fr 91

4.14 Regression fit plot for Fr 91

4.15 Training error in Tf 91

4.16 Regression fit plot for Tf 91

Plots for NN 45 HRC

4.17 Performance plot of Network 92

4.18 Training state of Network at each epoch 92

4.19 Training error in Ra 92

4.20 Regression fit plot for Ra 92

4.21 Training error in Ft 92

4.22 Regression fit plot for Ft 92

4.23 Training error in Fa 93

4.24 Regression fit plot for Fa 93

4.25 Training error in Fr 93

4.26 Regression fit plot for Fr 93

4.27 Training error in Tf 93

4.28 Regression fit plot for Tf 93

4.29 ANFIS two input model 95

4.30 Applied ANFIS grid partitioning architecture 101

ANFIS Grid Partitioning Plots for 35 HRC

4.31 Training Error Plots for Ra (Target vs Output) 103

4.32 Testing Error Plots for Ra (Target vs Output) 103

4.33 Validation Error Plots for Ra (Target vs Output) 103

4.34 Regression Plots for Ra (Train /Test/Validate) 103

4.35 Response Surface Plot for Ra 104

4.36 Training Error Plots for Ft (Target vs Output) 104

4.37 Testing Error Plots for Ft (Target vs Output) 104

4.38 Validation Error Plots for Ft (Target vs Output) 105

4.39 Regression Plots for Ft (Train /Test/Validate) 105

4.40 Response Surface Plot for Ft 105

4.41 Training Error Plots for Fa (Target vs Output) 105

4.42 Testing Error Plots for Fa (Target vs Output) 105

4.43 Validation Error Plots for Fa (Target vs Output) 106

4.44 Regression Plots for Fa (Train /Test/Validate) 106

4.45 Response Surface Plot for Fa 106

4.46 Training Error Plots for Fr (Target vs Output) 106

4.47 Testing Error Plots for Fr (Target vs Output) 106

4.48 Validation Error Plots for Fr (Target vs Output) 107

4.49 Regression Plots for Fr (Train /Test/Validate) 107

4.50 Response Surface Plot for Fr 107

4.51 Training Error Plots for Tf (Target vs Output) 108

4.52 Testing Error Plots for Tf (Target vs Output) 108

4.53 Validation Error Plots for Tf (Target vs Output) 108

4.54 Regression Plots for Tf (Train /Test/Validate) 108

4.55 Response Surface Plot for Tf 108

ANFIS Grid Partitioning Plots For 45 HRC

4.56 Training Error Plots for Ra (Target vs Output) 109

4.57 Testing Error Plots for Ra (Target vs Output) 109

4.58 Validation Error Plots for Ra (Target vs Output) 109

4.59 Regression Plots for Ra (Train /Test/Validate) 109

4.60 Response Surface Plot for Ra 109

4.61 Training Error Plots for Ft (Target vs Output) 110

4.62 Testing Error Plots for Ft (Target vs Output) 110

4.63 Validation Error Plots for Ft (Target vs Output) 110

4.64 Regression Plots for Ft (Train /Test/Validate) 110

4.65 Response Surface Plot for Ft 110

4.66 Training Error Plots for Fa (Target vs Output) 111

4.67 Testing Error Plots for Fa (Target vs Output) 111

4.68 Validation Error Plots for Fa (Target vs Output) 111

4.69 Regression Plots for Fa (Train /Test/Validate) 111

4.70 Response Surface Plot for Fa 111

4.71 Training Error Plots for Fr (Target vs Output) 112

4.72 Testing Error Plots for Fr (Target vs Output) 112

4.73 Validation Error Plots for Fr (Target vs Output) 112

4.74 Regression Plots for Fr (Train /Test/Validate) 112

4.75 Response Surface Plot for Fr 112

4.76 Training Error Plots for Tf (Target vs Output) 113

4.77 Testing Error Plots for Tf (Target vs Output) 113

4.78 Validation Error Plots for Tf (Target vs Output) 113

4.79 Regression Plots for Tf (Train /Test/Validate) 113

4.80 Response Surface Plot for Tf 113

4.81 Developed ANFIS (Subtractive Cluster) 115

ANFIS Subtractive Cluster plots For 35 HRC

4.82 Training Error Plots for Ra (Target vs Output) 117

4.83 Testing Error Plots for Ra (Target vs Output) 117

4.84 Validation Error Plots for Ra (Target vs Output) 117

4.85 Regression Plots for Ra (Train /Test/Validate) 117

4.86 Response Surface Plot for Ra 117

4.87 Training Error Plots for Ft (Target vs Output) 118

4.88 Testing Error Plots for Ft (Target vs Output) 118

4.89 Validation Error Plots for Ft (Target vs Output) 118

4.90 Regression Plots for Ft (Train /Test/Validate) 118

4.91 Response Surface Plot for Ft 118

4.92 Training Error Plots for Fa (Target vs Output) 119

4.93 Testing Error Plots for Fa (Target vs Output) 119

4.94 Validation Error Plots for Fa (Target vs Output) 119

4.95 Regression Plots for Fa (Train /Test/Validate) 119

4.96 Response Surface Plot for Fa 119

4.97 Training Error Plots for Fr (Target vs Output) 120

4.98 Testing Error Plots for Fr (Target vs Output) 120

4.99 Validation Error Plots for Fr (Target vs Output) 120

4.100 Regression Plots for Fr (Train /Test/Validate) 120

4.101 Response Surface Plot for Fr 120

4.102 Training Error Plots for Tf (Target vs Output) 121

4.103 Testing Error Plots for Tf (Target vs Output) 121

4.104 Validation Error Plots for Tf (Target vs Output) 121

4.105 Regression Plots for Tf (Train /Test/Validate) 121

4.106 Response Surface Plot for Tf 121

ANFIS Subtractive Cluster plots For 45 HRC

4.107 Training Error Plots for Ra (Target vs Output) 122

4.108 Testing Error Plots for Ra (Target vs Output) 122

4.109 Validation Error Plots for Ra (Target vs Output) 122

4.110 Regression Plots for Ra (Train /Test/Validate) 122

4.111 Response Surface Plot for Ra 122

4.112 Training Error Plots for Ft (Target vs Output) 123

4.113 Testing Error Plots for Ft (Target vs Output) 123

4.114 Validation Error Plots for Ft (Target vs Output) 123

4.115 Regression Plots for Ft (Train /Test/Validate) 123

4.116 Response Surface Plot for Ft 123

4.117 Training Error Plots for Fa (Target vs Output) 124

4.118 Testing Error Plots for Fa (Target vs Output) 124

4.119 Validation Error Plots for Fa (Target vs Output) 124

4.120 Regression Plots for Fa (Train /Test/Validate) 124

4.121 Response Surface Plot for Fa 124

4.122 Training Error Plots for Fr (Target vs Output) 125

4.123 Testing Error Plots for Fr (Target vs Output) 125

4.124 Validation Error Plots for Fr (Target vs Output) 125

4.125 Regression Plots for Fr (Train /Test/Validate) 125

4.126 Response Surface Plot for Fr 125

4.127 Training Error Plots for Tf (Target vs Output) 126

4.128 Testing Error Plots for Tf (Target vs Output) 126

4.129 Validation Error Plots for Tf (Target vs Output) 126

4.130 Regression Plots for Tf (Train /Test/Validate) 126

4.131 Response Surface Plot for Tf 126

4.132 Developed ANFIS Fuzzy C Mean Clustering architecture 128

ANFIS FCM plots For 35 HRC

4.133 Training Error Plots for Ra (Target vs Output) 130

4.134 Testing Error Plots for Ra (Target vs Output) 130

4.135 Validation Error Plots for Ra (Target vs Output) 130

4.136 Regression Plots for Ra (Train /Test/Validate) 130

4.137 Response Surface Plot for Ra 130

4.138 Training Error Plots for Ft (Target vs Output) 131

4.139 Testing Error Plots for Ft (Target vs Output) 131

4.140 Validation Error Plots for Ft (Target vs Output) 131

4.141 Regression Plots for Ft (Train /Test/Validate) 131

4.142 Response Surface Plot for Ft 131

4.143 Training Error Plots for Fa (Target vs Output) 132

4.144 Testing Error Plots for Fa (Target vs Output) 132

4.145 Validation Error Plots for Fa (Target vs Output) 132

4.146 Regression Plots for Fa (Train /Test/Validate) 132

4.147 Response Surface Plot for Fa 132

4.148 Training Error Plots for Fr (Target vs Output) 133

4.149 Testing Error Plots for Fr (Target vs Output) 133

4.150 Validation Error Plots for Fr (Target vs Output) 133

4.151 Regression Plots for Fr (Train /Test/Validate) 133

4.152 Response Surface Plot for Fr 133

4.153 Training Error Plots for Tf (Target vs Output) 134

4.154 Testing Error Plots for Tf (Target vs Output) 134

4.155 Validation Error Plots for Tf (Target vs Output) 134

4.156 Regression Plots for Tf (Train /Test/Validate) 134

4.157 Response Surface Plot for Tf 134

ANFIS FCM plots For 45 HRC

4.158 Training Error Plots for Ra (Target vs Output) 135

4.159 Testing Error Plots for Ra (Target vs Output) 135

4.160 Validation Error Plots for Ra (Target vs Output) 135

4.161 Regression Plots for Ra (Train /Test/Validate) 135

4.162 Response Surface Plot for Ra 135

4.163 Training Error Plots for Ft (Target vs Output) 136

4.164 Testing Error Plots for Ft (Target vs Output) 136

4.165 Validation Error Plots for Ft (Target vs Output) 136

4.166 Regression Plots for Ft (Train /Test/Validate) 136

4.167 Response Surface Plot for Ft 136

4.168 Training Error Plots for Fa (Target vs Output) 137

4.169 Testing Error Plots for Fa (Target vs Output) 137

4.170 Validation Error Plots for Fa (Target vs Output) 137

4.171 Regression Plots for Fa (Train /Test/Validate) 137

4.172 Response Surface Plot for Fa 137

4.173 Training Error Plots for Fr (Target vs Output) 138

4.174 Testing Error Plots for Fr (Target vs Output) 138

4.175 Validation Error Plots for Fr (Target vs Output) 138

4.176 Regression Plots for Fr (Train /Test/Validate) 138

4.177 Response Surface Plot for Fr 138

4.178 Training Error Plots for Tf (Target vs Output) 139

4.179 Testing Error Plots for Tf (Target vs Output) 139

4.180 Validation Error Plots for Tf (Target vs Output) 139

4.181 Regression Plots for Tf (Train /Test/Validate) 139

4.182 Response Surface Plot for Tf 139

Comparison Error Plots of Neural Network and ANFIS Prediction

FOR 35 HRC

4.183 Error Estimation Plots For Ra 140

4.184 Error Estimation Plots For Ft 140

4.185 Error Estimation Plots For Fa 141

4.186 Error Estimation Plots For Fr 141

FOR 45 HRC

4.187 Error Estimation Plots For Ra 141

4.188 Error Estimation Plots For Ft 141

4.189 Error Estimation Plots For Fa 141

4.190 Error Estimation Plots For Fr 141

Error Plots of ANFIS (Grid Partitioning Clustering) Results

FOR 35 HRC

4.191 Error Estimation Plots For Ra 142

4.192 Error Estimation Plots For Ft 142

4.193 Error Estimation Plots For Fa 142

4.194 Error Estimation Plots For Fr 142

FOR 45 HRC

4.195 Error Estimation Plots For Ra 142

4.196 Error Estimation Plots For Ft 142

4.197 Error Estimation Plots For Fa 143

4.198 Error Estimation Plots For Fr 143

Error Plots of ANFIS (Subtractive Clustering)

FOR 35 HRC

4.199 Error Estimation Plots For Ra 143

4.200 Error Estimation Plots For Ft 143

4.201 Error Estimation Plots For Fa 143

4.202 Error Estimation Plots For Fr 143

FOR 45 HRC

4.203 Error Estimation Plots For Ra 144

4.204 Error Estimation Plots For Ft 144

4.205 Error Estimation Plots For Fa 144

4.206 Error Estimation Plots For Fr 144

Error Plots of ANFIS (Fuzzy C-Mean Clustering)

FOR 35 HRC

4.207 Error Estimation Plots For Ra 144

4.208 Error Estimation Plots For Ft 144

4.209 Error Estimation Plots For Fa 145

4.210 Error Estimation Plots For Fr 145

FOR 45 HRC

4.211 Error Estimation Plots For Ra 145

4.212 Error Estimation Plots For Ft 145

4.213 Error Estimation Plots For Fa 145

4.214 Error Estimation Plots For Fr 145

5.1 Chapter 5 flow chart 147

5.2 Developed NSGA-NN architecture 149

NSGA-NN 35HRC

5.3 Performance plot of Network 151

5.4 Training state of Network at each epoch 151

5.5 Training error in Ra 151

5.6 Regression fit plot for Ra 151

5.7 Training error in Ft 151

5.8 Regression fit plot for Ft 151

5.10 Training error in Fa 152

5.11 Regression fit plot for Fa 152

5.12 Training error in Fr 152

5.13 Regression fit plot for Fr 152

5.14 Training error in Tf 152

5.15 Regression fit plot for Tf 152

Plots for NSGA-NN 45 HRC

5.16 Performance plot of Network 153

5.17 Training state of Network at each epoch 153

5.18 Training error in Ra 153

5.19 Regression fit plot for Ra 153

5.20 Training error in Ft 153

5.21 Regression fit plot for Ft 153

5.22 Training error in Fa 154

5.23 Regression fit plot for Fa 154

5.24 Training error in Fr 154

5.25 Regression fit plot for Fr 154

5.26 Training error in Tf 154

5.27 Regression fit plot for Tf 154

5.28 SI-NN collaborative combination 155

5.29 Applied SI-NN synergy architecture 155

Results of PSO-NN 35 HRC

5.30 Performance plot of Network 157

5.31 Training state of Network at each epoch 157

5.32 Training error in Ra 157

5.33 Regression fit plot for Ra 157

5.34 Training error in Ft 157

5.35 Regression fit plot for Ft 157

5.36 Training error in Fa 158

5.37 Regression fit plot for Fa 158

5.38 Training error in Fr 158

5.39 Regression fit plot for Fr 158

5.40 Training error in Tf 158

5.41 Regression fit plot for Tf 158

5.42 GA based ANFIS (FCM) applied Strategy 159

ANFIS-GA FCM plots For 35 HRC

5.43 Training Error Plots for Ra (Target vs Output) 160

5.44 Testing Error Plots for Ra (Target vs Output) 160

5.45 Regression Plots for Ra (Train /Test/Validate) 161

5.46 Response Surface Plot for Ra 161

5.47 Training Error Plots for Ft (Target vs Output) 161

5.48 Testing Error Plots for Ft (Target vs Output) 161

5.49 Regression Plots for Ft (Train /Test/Validate) 162

5.50 Response Surface Plot for Ft 162

5.51 Training Error Plots for Fa (Target vs Output) 162

5.52 Testing Error Plots for Fa (Target vs Output) 162

5.53 Regression Plots for Fa (Train /Test/Validate) 163

5.54 Response Surface Plot for Fa 163

5.55 Training Error Plots for Fr (Target vs Output) 163

5.56 Testing Error Plots for Fr (Target vs Output) 163

5.57 Regression Plots for Fr (Train /Test/Validate) 164

5.58 Response Surface Plot for Fr 164

5.59 Training Error Plots for Tf (Target vs Output) 164

5.60 Testing Error Plots for Tf (Target vs Output) 164

5.61 Regression Plots for Tf (Train /Test/Validate) 165

5.62 Response Surface Plot for Tf 165

GA based ANFIS (FCM) plots For 45 HRC

5.63 Training Error Plots for Ra (Target vs Output) 166

5.64 Testing Error Plots for Ra (Target vs Output) 166

5.65 Regression Plots for Ra (Train /Test/Validate) 166

5.66 Response Surface Plot for Ra 166

5.67 Training Error Plots for Ft (Target vs Output) 167

5.68 Testing Error Plots for Ft (Target vs Output) 167

5.69 Regression Plots for Ft (Train /Test/Validate) 167

5.70 Response Surface Plot for Ft 167

5.71 Training Error Plots for Fa (Target vs Output) 168

5.72 Testing Error Plots for Fa (Target vs Output) 168

5.73 Regression Plots for Fa (Train /Test/Validate) 168

5.74 Response Surface Plot for Fa 168

5.75 Training Error Plots for Fr (Target vs Output) 169

5.76 Testing Error Plots for Fr (Target vs Output) 169

5.77 Regression Plots for Fr (Train /Test/Validate) 169

5.78 Response Surface Plot for Fr 169

5.79 Training Error Plots for Tf (Target vs Output) 170

5.80 Testing Error Plots for Tf (Target vs Output) 170

5.81 Regression Plots for Tf (Train /Test/Validate) 170

5.82 Response Surface Plot for Tf 170

5.83 PSO-ANFIS applied strategy 171

PSO based ANFIS (FCM) Plots For 35 HRC

5.84 Training Error Plots for Ra (Target vs Output) 172

5.85 Testing Error Plots for Ra (Target vs Output) 172

5.86 Regression Plots for Ra (Train /Test/Validate) 172

5.87 Response Surface Plot for Ra 172

5.88 Training Error Plots for Ft (Target vs Output) 173

5.89 Testing Error Plots for Ft (Target vs Output) 173

5.90 Regression Plots for Ft (Train /Test/Validate) 173

5.91 Response Surface Plot for Ft 173

5.92 Training Error Plots for Fa (Target vs Output) 174

5.93 Testing Error Plots for Fa (Target vs Output) 174

5.94 Regression Plots for Fa (Train /Test/Validate) 174

5.95 Response Surface Plot for Fa 174

5.96 Training Error Plots for Fr (Target vs Output) 175

5.97 Testing Error Plots for Fr (Target vs Output) 175

5.98 Regression Plots for Fr (Train /Test/Validate) 175

5.99 Response Surface Plot for Fr 175

5.100 Training Error Plots for Tf (Target vs Output) 176

5.101 Testing Error Plots for Tf (Target vs Output) 176

5.102 Regression Plots for Tf (Train /Test/Validate) 176

5.103 Response Surface Plot for Tf 176

PSO based ANFIS (FCM) Plots FOR 45 HRC

5.104 Training Error Plots for Ra (Target vs Output) 177

5.105 Testing Error Plots for Ra (Target vs Output) 177

5.106 Regression Plots for Ra (Train /Test/Validate) 177

5.107 Response Surface Plot for Ra 177

5.108 Training Error Plots for Ft (Target vs Output) 178

5.109 Testing Error Plots for Ft (Target vs Output) 178

5.110 Regression Plots for Ft (Train /Test/Validate) 178

5.111 Response Surface Plot for Ft 178

5.112 Training Error Plots for Fa (Target vs Output) 179

5.113 Testing Error Plots for Fa (Target vs Output) 179

5.114 Regression Plots for Fa (Train /Test/Validate) 179

5.115 Response Surface Plot for Fa 179

5.116 Training Error Plots for Fr (Target vs Output) 180

5.117 Testing Error Plots for Fr (Target vs Output) 180

5.118 Regression Plots for Fr (Train /Test/Validate) 180

5.119 Response Surface Plot for Fr 180

5.120 Training Error Plots for Tf (Target vs Output) 181

5.121 Testing Error Plots for Tf (Target vs Output) 181

5.122 Regression Plots for Tf (Train /Test/Validate) 181

5.123 Response Surface Plot for Tf 181

Comparison Error Plots of NN-NSGA Prediction

FOR 35 HRC

5.124 Error Estimation Plots For Ra 182

5.125 Error Estimation Plots For Ft 182

5.126 Error Estimation Plots For Fa 183

5.127 Error Estimation Plots For Fr 183

For 45 HRC

5.128 Error Estimation Plots For Ra 183

5.129 Error Estimation Plots For Ft 183

5.130 Error Estimation Plots For Fa 183

5.131 Error Estimation Plots For Fr 183

Error Plots of PSO-NN Results FOR 35 HRC

5.132 Error Estimation Plots For Ra 184

5.133 Error Estimation Plots For Ft 184

5.134 Error Estimation Plots For Fa 184

5.135 Error Estimation Plots For Fr 184

Error Plots of ANFIS (FCM)-GA FOR 35 HRC

5.136 Error Estimation Plots For Ra 184

5.137 Error Estimation Plots For Ft 184

5.138 Error Estimation Plots For Fa 185

5.139 Error Estimation Plots For Fr 185

Error Plots of ANFIS (FCM)-GA FOR 45 HRC

5.140 Error Estimation Plots For Ra 185

5.141 Error Estimation Plots For Ft 185

5.142 Error Estimation Plots For Fa 185

5.143 Error Estimation Plots For Fr 185

Error Plots of ANFIS (FCM)-PSO FOR 35 HRC

5.144 Error Estimation Plots For Ra 186

5.145 Error Estimation Plots For Ft 186

5.146 Error Estimation Plots For Fa 186

5.147 Error Estimation Plots For Fr 186

Error Plots of ANFIS (FCM)-PSO FOR 45 HRC

5.148 Error Estimation Plots For Ra 186

5.149 Error Estimation Plots For Ft 186

5.150 Error Estimation Plots For Fa 187

5.151 Error Estimation Plots For Fr 187

LIST OF TABLES

Table No. Name of Table Pg. No.

3.1 Machining Constraints 30

3.2 NSGA II Setting 41

3.3 Results of NSGA II family of best solutions for AISI 4340 35 HRC Steel 48

3.4 Results of NSGA II family of best solutions for AISI 4340 45 HRC Steel 50

3.5 SPEA2 Parameter Setting 54

3.6 Results of SPEA II family of best solutions for AISI 4340 35 HRC Steel 57

3.7 Results of SPEA II family of best solutions for AISI 4340 45 HRC Steel 58

3.8 PSO Setting 67

3.9 MOPSO family of optimal solutions for 35 HRC AISI 4340 steel 72

3.10 MOPSO family of optimal solutions for 45 HRC AISI 4340 steel 74

3.11 Diversity of Evolutionary Algorithm 78

4.2.4 (a) Description of Neural network 88

4.2.4(b) Calibrated weights and bias of the above Neural Network 89

4.3.4(a) Grid Partitioning Fuzzy Structure 102

4.3.4(b) Statistical Results of ANFIS Grid Partitioning Cluster for 35 HRC and 45 HRC 102

4.3.16(a) Subtractive Fuzzy Structure 115

4.3.16(b) Statistical Error analysis of ANFIS Subtractive clustering for 35 HRC 116
and 45 HRC

4.3.26 (a) Fuzzy structure 127

4.3.26(b) Statistical Results of ANFIS FCM for 35 HRC and 45HRC 129

4.4.1 Statistical Comparison of Neural Network and ANFIS Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC 140

5.2.1 (a) Description of NSGA-NN 150

5.2.1 (b) Calibrated weights and bias for NSGA-NN 35 HRC and 45 HRC Steel 150

5.3.1 (a) Description of PSO-NN 156

5.3.2 (b) Calibrated weights and bias for 35 HRC and 45 HRC Steel 156
5.4.1 (a) Statistical Error analysis of GA and PSO based ANFIS (FCM) for 35 HRC and 45 HRC 159

5.6.1 (a) Statistical Comparison of Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC and 45 HRC 182

6.1 Tradeoffs among forces for Surface roughness and Tool life 190

6.2 Tradeoffs among forces for Surface roughness and Tool life 192

6.3 Tradeoffs among forces for Surface roughness and Tool life 193

6.4 Tradeoffs among forces for Surface roughness and Tool life 194

6.5 Mean error and Standard Deviation between Experimental and Predicted 201
statistics

6.6 MSE and RMSE between Experimental and Predicted statistics 201

6.7 Mean error and Standard Deviation between Experimental and Predicted 205
statistics

6.8 Mean Square Error and Root Mean Square Error between Experimental 205
and Predicted statistics

NOMENCLATURE
HRC Rockwell C Hardness
V, (vc) Cutting Speed
f Feed Rate
d Depth Of Cut
Ra Surface Roughness
Ft Tangential Force
Fa Axial Force
Fr Radial Force
Tf Tool Life
AI Artificial Intelligence
CI Computational Intelligence
HC Hard Computation
SC Soft Computing
FS Fuzzy Systems
NN Neural Network
EA Evolutionary Algorithm
GSO Global Search Optimization
SI Swarm Intelligence
GA Genetic Algorithm
DE Differential Evolution
CA Culture Algorithm
PSO Particle Swarm Optimization
FA Fire Fly Algorithm
BBO Biogeography Based Optimization

W1/W2/W3 Weights
EA-NN Evolutionary Based Neural Network
NN-EA Neural Network Based Evolutionary Algorithm
NSGA Non-Dominated Sorting Genetic Algorithm
PESA Pareto Envelope Based Selection Algorithm
SPEA Strength Pareto Based Evolutionary Algorithm
MF Membership Function
ANFIS Adaptive Neuro-Fuzzy Inference System
ACO Ant Colony Optimization
HLGA Hybrid Adaptive Learning Based Genetic Algorithm
Pmt Probability of m bits in K string length to Mutate

s_t^{n,d}, r_t^{n,d} Random number generators
X_t, V_t Particle Position and Velocity
G_t Global Best attractor
L_t^{n'}, L_t^{n} Local Guide attractors
n_{t,d} Swarm Potential
S_ij Layers
u Membership cluster vector
d(x_i, j) Dissimilarity Function
J_q(·, u) Cluster Function

CONTENTS

Sr. No. Title Pg. No.
Acknowledgement i
List of Figures ii
List of Tables xix
Nomenclature xxi
Contents xxiii
Abstract xxviii

1 INTRODUCTION
1.1 Computational Intelligence (CI) 1
1.2 Approaches to Computational Intelligence 2
1.2.1 Fuzzy logic 4
1.2.2 Neural Network 4
1.2.3 Evolutionary Computing 4
1.2.4 Learning Theory 5
1.2.5 Probabilistic Methods 5
1.2.6 Swarm Intelligence 6
1.2.7 Global Search Optimization 6
1.3 Synergies of Computational Intelligence Techniques 6
1.4 Applications of Computational Intelligence 8
1.4.1 Application of NN 8
1.4.2 Application of Evolutionary Systems 9
1.4.3 Application of Fuzzy system 9
1.5 Overview of the chapter 10
1.6 Problem Statement 10
1.7 Objectives 10
1.8 Methodology 11
1.9 Thesis Outline 12

2 LITERATURE REVIEW

2.1 Literature Review 14
2.2 Literature Summary 27

3 GLOBAL SEARCH ALGORITHMS FOR MULTI-OBJECTIVE OPTIMIZATION


3.1 Introduction 28
3.2 Multi-objective Optimization 29
3.3 Application of MOOPs to Machining system 30
3.3.1 Machining Model (Surface Roughness and Cutting force components; work material hardness: 35 HRC) 31
3.3.2 Machining Model (Surface Roughness and Cutting force components; work material hardness: 45 HRC) 31
3.3.3 Tool life model for 35 HRC 32
3.4 Evolutionary Algorithms (EAs) 32
3.4.1 Mathematical Formulation of Evolutionary Algorithms 33
3.4.1.1 Definition of Evolutionary systems 34
3.4.1.2 Convergence Analysis of Evolutionary Algorithm 34
3.4.1.3 Criteria for mutation 35
3.4.1.4 Criteria for Crossover 37
3.4.2 Key points from convergence analysis 40
3.5 NSGA II Algorithm 41
3.5.1 Initialize Variables and Evaluate Objectives 41
3.5.2 Non_dominated_sort 42
3.5.3 Selection 45
3.5.4 Genetic Operators 45
3.5.5 Recombination of parent and off springs 47
3.5.6 Plots for NSGA II results (35 HRC) 50
3.5.7 Plots for NSGA II results (45 HRC) 52
3.6 Strength Pareto Evolutionary Algorithm 2 52
3.6.1 Initialize Variables and Evaluate Objectives 54
3.6.2 Tournament Selection 55
3.6.3 Genetic Operator 55
3.6.4 Plots for SPEA 2 results (35 HRC) 57
3.6.5 Plots for SPEA 2 results (45 HRC) 59

3.7 Swarm Intelligence 60
3.8 Mathematical Formulation of PSO Algorithm 60
3.8.1 Typical Initialization strategy 62
3.8.2 Topologies of PSO 62
3.8.3 Definitions of Swarm topology 63
3.8.4 Convergence criteria 63
3.8.5 Criteria for Inertia clamping and acceleration coefficient 64
3.9 PSO Algorithm 66
3.9.1 Initialize population and Evaluate fitness 67
3.9.2 Create Grid Index 68
3.9.3 Select Leader 69
3.9.4 Delete extra elements 70
3.9.6 Swarm Movement 71
3.9.7 Plots for MOPSO results (35 HRC) 73
3.9.8 Plots for MOPSO results (45 HRC) 75
3.10 Comparison between EA and SI technique 76
3.10.1 Comparison Based on Spectrum of solution space 77
3.10.2 Comparison Based on Diversity in solution space 78

4 PREDICTION MODELS FOR MACHINING SYSTEM THROUGH INTELLIGENT LEARNING TECHNIQUES
4.1 Introduction 80
4.2 Neural Network 81
4.2.1 Feed forward Neural network 81
4.2.2 Mathematical background of neural network 82
4.2.2.1 Gradient Descent Approach 83
4.2.3 Key notes form feed forward analysis 87
4.2.4 Multi-layer Perceptron for Turning of AISI 4340 Steel 87
4.2.5 Results of perceptron for 35 HRC Steel 89
4.2.6 Results of perceptron for 45 HRC Steel 92
4.3 Adaptive Neuro-Fuzzy Inference System (ANFIS) 94
4.3.1 Hybrid learning in ANFIS 96

4.3.2 Backpropagation Learning 97

4.3.3 Fuzzy clustering Algorithms 98


4.3.4 Grid Partition clustering based Adaptive Neuro-Fuzzy Inference System 101

4.3.5 ANFIS Grid Partitioning Cluster Plots For Ra 35 HRC 103

4.3.6 ANFIS Grid Partitioning Cluster Plots For Ft 35 HRC 104
4.3.7 ANFIS Grid Partitioning Cluster Plots For Fa 35 HRC 105
4.3.8 ANFIS Grid Partitioning Cluster Plots For Fr 35 HRC 106
4.3.9 ANFIS Grid Partitioning Cluster Plots For Tf 35 HRC 108
4.3.10 ANFIS Grid Partitioning Cluster Plots For Ra 45 HRC 109
4.3.11 ANFIS Grid Partitioning Cluster Plots For Ft 45 HRC 110
4.3.12 ANFIS Grid Partitioning Cluster Plots For Fa 45 HRC 111
4.3.13 ANFIS Grid Partitioning Cluster Plots For Fr 45 HRC 112
4.3.14 ANFIS Grid Partitioning Cluster Plots For Tf 45 HRC 113
4.3.15 Subtractive Clustering 114
4.3.16 ANFIS Subtractive Cluster Plots For Ra 35 HRC 117
4.3.17 ANFIS Subtractive Cluster Plots For Ft 35 HRC 118
4.3.18 ANFIS Subtractive Cluster Plots For Fa 35 HRC 119
4.3.19 ANFIS Subtractive Cluster Plots For Fr 35 HRC 120
4.3.20 ANFIS Subtractive Cluster Plots For Tf 35 HRC 121
4.3.21 ANFIS Subtractive Cluster Plots For Ra 45 HRC 122
4.3.22 ANFIS Subtractive Cluster Plots For Ft 45 HRC 123
4.3.23 ANFIS Subtractive Cluster Plots For Fa 45 HRC 124
4.3.24 ANFIS Subtractive Cluster Plots For Fr 45 HRC 125
4.3.25 ANFIS Subtractive Cluster Plots For Tf 45 HRC 126
4.3.26 Fuzzy C Mean Clustering 127
4.3.27 ANFIS FCM Cluster Plots For Ra 35 HRC 130
4.3.28 ANFIS FCM Cluster Plots For Ft 35 HRC 131
4.3.29 ANFIS FCM Cluster Plots For Fa 35 HRC 132
4.3.30 ANFIS FCM Cluster Plots For Fr 35 HRC 133
4.3.31 ANFIS FCM Cluster Plots For Tf 35 HRC 134
4.3.32 ANFIS FCM Cluster Plots For Ra 45 HRC 135

4.3.33 ANFIS FCM Cluster Plots For Ft 45 HRC 136

4.3.34 ANFIS FCM Cluster Plots For Fa 45 HRC 137


4.3.35 ANFIS FCM Cluster Plots For Fr 45 HRC 138
4.3.36 ANFIS FCM Cluster Plots For Tf 45 HRC 139
4.4 Comparison of Prediction Results with Experimental Statistics 140
4.4.2 Error Plots of Neural Network Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC 140
4.4.3 Error Plots of Neural Network Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC 141
4.4.4 Error Plots of ANFIS (Grid Partitioning Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC 142
4.4.5 Error Plots of ANFIS (Grid Partitioning Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC 142
4.4.6 Error Plots of ANFIS (Subtractive Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC 143
4.4.7 Error Plots of ANFIS (Subtractive Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC 144
4.4.8 Error Plots of ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC 144
4.4.9 Error Plots of ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC 145
4.5 Conclusion 146

5 HYBRIDISATION OF C.I SYNERGIES


5.1 Introduction 147
5.2 EA-NN Synergism 148

5.2.1 NSGA combined Neural Network 149


5.2.2 Results of NSGA-NN 35 HRC Steel 151
5.2.3 Results of NSGA-NN for 45 HRC Steel 153
5.3 SI-NN synergism 154
5.3.1 PSO combined Neural Network 155
5.3.2 Results of PSO-NN for 35 HRC Steel 157

5.4 Synergies of EA and ANFIS 158
5.4.1 ANFIS GA 159
5.4.2 GA based ANFIS (FCM) Plots For Ra 35 HRC 160
5.4.3 GA based ANFIS (FCM) Plots For Ft 35 HRC 161
5.4.4 GA based ANFIS (FCM) Plots For Fa 35 HRC 162
5.4.5 GA based ANFIS (FCM) Plots For Fr 35 HRC 163
5.4.6 GA based ANFIS (FCM) Plots For Tf 35 HRC 164
5.4.7 GA based ANFIS (FCM) Plots For Ra 45 HRC 166
5.4.8 GA based ANFIS (FCM) Plots For Ft 45 HRC 167
5.4.9 GA based ANFIS (FCM) Plots For Fa 45 HRC 168
5.4.10 GA based ANFIS (FCM) Plots For Fr 45 HRC 169
5.4.11 GA based ANFIS (FCM) Plots For Tf 45 HRC 170
5.5 PSO based ANFIS (FCM) 35 HRC and 45HRC 171
5.5.1 PSO based ANFIS (FCM) 171
5.5.2 PSO based ANFIS (FCM) Plots For Ra 35 HRC 172
5.5.3 PSO based ANFIS (FCM) Plots For Ft 35 HRC 173
5.5.4 PSO based ANFIS (FCM) Plots For Fa 35 HRC 174
5.5.5 PSO based ANFIS (FCM) Plots For Fr 35 HRC 175
5.5.6 PSO based ANFIS (FCM) Plots For Tf 35 HRC 176
5.5.8 PSO based ANFIS (FCM) Plots For Ra 45 HRC 177
5.5.9 PSO based ANFIS (FCM) Plots For Ft 45 HRC 178
5.5.10 PSO based ANFIS (FCM) Plots For Fa 45 HRC 179

5.5.11 PSO based ANFIS (FCM) Plots For Fr 45 HRC 180


5.5.12 PSO based ANFIS (FCM) Plots For Tf 45 HRC 181
5.6 Comparison of Prediction Results with Experimental Statistics 182
5.6.2 Error Plots of NSGA-NN Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC 182
5.6.3 Error Plots of NSGA-NN Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC 183
5.6.4 Error Plots of PSO-NN Results with Experimental Statistics for AISI 4340 Steel 35 HRC 184
5.6.5 Error Plots of PSO-NN Results with Experimental Statistics for AISI 4340 Steel 45 HRC 184
5.6.6 Error Plots of GA based ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC 185
5.6.7 Error Plots of PSO based ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC 186
5.6.8 Error Plots of PSO based ANFIS Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC 186
5.7 Conclusion 187

6 RESULTS AND DISCUSSION


6.1 Global Search Optimization 189
6.1.1 NSGAII 189
6.1.1 (a) For AISI 4340 35HRC 189
6.1.1(b) For AISI 4340 45HRC 191
6.1.2 Results for SPEA 2 192
6.1.2 (a) For AISI 4340 35HRC 192
6.1.2 (b) For AISI 4340 45HRC 193
6.1.3 Results for PSO 194
6.1.3(a) For AISI 4340 35HRC 194
6.1.3(b) For AISI 4340 45HRC 195
6.1.4 Comparison between EA and SI 196
6.1.4 (a) Comparison among the Solution spectrum 196
6.1.5 Evident from the literature 197
6.2 Intelligent Learning Techniques 198
6.2.1 Neural Network 198
6.2.2 Adaptive Neuro Fuzzy Inference Technique 199
6.2.2(a) ANFIS Grid partition 199
6.2.2 (b) ANFIS Subtractive Clustering 200
6.2.2 (c) ANFIS Fuzzy C-mean Clustering 200
6.2.3 Comparative Evaluation of the predictive technique on Experimental statistics 200
6.3 Synergies of CI 202

6.3.1 EA-NN 203


6.3.2 SI-NN 203

6.3.3 ANFIS Synergies 204
6.3.4 Comparative Evaluation of the predictive technique on Experimental statistics 204

7 CONCLUSION 207

Appendix- A: Conferences and Publications

Appendix- B: PG-CON Certificate

Appendix- C: Certificates

ABSTRACT

Obtaining the process parameters that optimize machining performance is vital in machining execution, since they significantly affect the productivity rate, cost, and quality of the machining operation. Although process parameter optimization has been widely investigated for conventional machining operations, very limited work is reported on the optimization of hard turning using evolutionary algorithms. In this work, multi-objective optimization of hard turning with evolutionary optimization techniques (i.e., NSGA II, SPEA II, PSO) is attempted for hardened AISI 4340 steel at different hardness levels (35 and 45 HRC), with experiment-based multiple regression models as objective functions. The process variables are cutting speed, feed rate, and depth of cut, subject to appropriate constraints. Furthermore, different intelligent learning techniques (i.e., neural networks and adaptive neural network based fuzzy learning) were applied in supportive combinations (EA-NN and NN-EA) to recognize the pattern of the optimal solutions through learning. The learnt prediction models are compared with the experimental statistics, and the comparative evaluation shows good agreement with the experimental data.

Keywords: Multi-Objective Optimization; Adaptive Neuro-Fuzzy Optimization; Hard Turning


Chapter-1

1. Introduction

The complexity of design optimization of any dynamic system has many aspects; the major facets among them are the ambiguity of objectives, the conflicting nature of objectives, and the multitude of possible solutions, which together make it difficult to characterize the design optimization task. Any design solution is a combination of values for the parameters of a solution, and the first challenge lies in identifying that solution. The second issue is the functionality of the obtained solution: it should be practical, look appealing, and have moderate cost. The third issue contributing to the ambiguity of design optimization is the presence of conflicting objectives, which inhibits a unidirectional solution. Several attempts have been made to tackle this complexity through conventional methods, but the solutions are in general only partially satisfactory. To address this, new computational approaches are followed that employ multi-agent systems, where each agent is defined by its behavior; these approaches are classified into various categories of Computational Intelligence.
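
Because of such conflicts, candidate solutions can only be compared through Pareto dominance, a notion used throughout this work. A minimal Python sketch, with hypothetical objective values for two machining settings, illustrates the idea:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one of them."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical settings scored on (surface roughness Ra, cutting force):
print(dominates((0.8, 120.0), (1.1, 150.0)))  # True: better in both objectives
print(dominates((0.8, 160.0), (1.1, 150.0)))  # False: the two objectives conflict
```

When neither setting dominates the other, both belong on the Pareto front, which is exactly why no single "best" solution exists for conflicting objectives.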

1.1 Computational Intelligence (CI)

Computational Intelligence deals with the design of intelligent agents that act intelligently to attain goals in any circumstances and are flexible enough to adapt to changing environments and goals. Computational Intelligence has the ability to comprehend, reason, learn, and simulate intelligent behavior in systems for complete knowledge formulation. Much real-time system behavior cannot be captured exactly through classical mathematical description in spite of complex formulations; moreover, the complexity of the mathematical description inhibits development of the system model. Hence it is advantageous to model a real-time system with piece-wise linearity and non-linearity, so that highly complex and unanticipated behavior can be captured by intelligent agents.

Any real-time problem has uncertainties involved in it, with multiple objectives, and the risk in decision making should be managed such that the performance criteria are maintained even under drastic change; this need to capture the dynamic behavior of a system is replacing conventional techniques with intelligent techniques. Computational Intelligence techniques are thus an alternative to conventional techniques when system knowledge is highly important in system modeling and control. The structure of such systems is determined by experimental evidence, where the direct input-output response behavior is utilized to develop the system model. Intelligent systems are meant for processes that are not properly defined, complex, stochastic in nature, and time varying. The fundamental property of any intelligent system is that it must sense and reason without prior knowledge about the environment and adapt its control action in a robust manner. Different researchers have attempted to define the field; a common view is that a system is computationally intelligent if it deals with numerical data and has the ability of pattern recognition. CI is a subdivision of machine intelligence, where the subtle difference between the techniques lies in the type of computing. Machine intelligence has two constituents: Artificial Intelligence, based on hard computing (HC), and Computational Intelligence, based on soft computing (SC) [Fuzzy Sets (FS), Neural Networks (NN), Evolutionary Algorithms (EA)]. Fig. 1.1 clearly distinguishes the components of machine intelligence.

Fig. 1.1 Hierarchy of Computational Intelligence

1.2 Approaches to Computational Intelligence


The core of computational intelligence is designing a process or system model that is not responsive to mathematical modeling, since the process exhibits the following attributes:

Too complex to represent with a mathematical model
Models difficult to compute
Uncertainties in operations
Nonlinear, stochastic and disturbed in nature

The system is capable of learning to adapt to unknown situations and is able to make predictions about the process status at future time steps. CI is a combination of soft computing and numerical techniques, with methods involving adaptive control, optimal control, learning theory, fuzzy logic, neural networks, and evolutionary computing (Fig. 1.2), all tuned to attain a common goal set. There are six elemental methods of CI:

Fuzzy logic
Neural network
Evolutionary computing
Learning theory
Probabilistic methods
Swarm Intelligence

Fig. 1.2 Approaches to Computational Intelligence

1.2.1 Fuzzy logic

In any real-time process, the measurement, process modeling, and control can never be exact to the theoretical definitions [1]. There is always a certain amount of uncertainty, i.e., incompleteness and randomness of data. Fuzzy logic assimilates human experiential knowledge and converts it into engineering models and control, and it suits processes that are ill-defined, nonlinear, and uncertain. Fuzzy logic is essentially a reasoning and inference technique based on high-level linguistic or semantic rules and operations.
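
As a minimal illustration of how such linguistic rules are grounded numerically, the sketch below implements a triangular membership function in Python; the variable and its breakpoints are hypothetical, chosen only to show partial membership in overlapping sets:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for a feed rate variable (mm/rev):
low = lambda f: tri_mf(f, 0.00, 0.10, 0.20)
medium = lambda f: tri_mf(f, 0.10, 0.20, 0.30)

f = 0.15
print(low(f), medium(f))  # 0.5 0.5 -- the value is partly "low" and partly "medium"
```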

1.2.2 Neural Network

A neural network is a technique adopted from the biological brain, with the neuron as its fundamental building block [1]. Neurons receive signals from neighboring neurons through their cell body and transfer the results through a long fiber called an axon; the axon behaves like a signal-conducting device. The electrical analogue of the biological neural network is the artificial neural network, which is characterized by computational power, learning from real-time data, error tolerance, pattern recognition, and generalization capabilities; its low-level computational algorithms manifest good performance in numerical data processing. Learning takes different forms: supervised, unsupervised, competitive, and reinforcement learning.
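
A minimal sketch of the forward pass through one hidden layer is given below; the layer sizes, random weights, and tanh activation are illustrative assumptions only, not the network configuration developed in Chapter 4:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Feed-forward pass: hidden layer with tanh activation, linear output layer."""
    h = np.tanh(W1 @ x + b1)  # hidden neuron activations
    return W2 @ h + b2        # network outputs

rng = np.random.default_rng(0)
# 3 inputs (e.g., speed, feed, depth of cut) -> 5 hidden neurons -> 2 outputs
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = np.array([0.5, 0.3, 0.4])  # inputs scaled to [0, 1], as is usual before training
print(forward(x, W1, b1, W2, b2))
```

Training then amounts to adjusting W1, b1, W2, b2 so that the outputs match measured responses, for example by the gradient descent approach discussed in Chapter 4.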

1.2.3 Evolutionary Computing

Evolutionary computing is the imitation of the process of natural selection in a search procedure, based on the evolutionary theory of Charles Darwin [1]. A species undergoes reproduction and gives birth to new offspring with features for combating an adverse environment and surviving. The process of natural selection ensures that individuals with better fitness have the opportunity to reproduce most of the time, with the expectation that the offspring will have similarly high fitness levels. Evolutionary computation uses iterative progress and development in a population, which is then subjected to randomized selection and variation to achieve the desired population of solutions.
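
The generational loop can be sketched in a few lines. The sketch below uses truncation selection, blend crossover, and Gaussian mutation on a toy single-objective function; these are simplifying assumptions, not the NSGA II/SPEA2 operators applied in Chapter 3:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=50, mut_rate=0.1):
    """Minimal real-coded EA: truncation selection, blend crossover, Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]  # fitter half reproduces
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]   # blend crossover
            child = [min(max(g + random.gauss(0, 0.1) * (hi - lo), lo), hi)
                     if random.random() < mut_rate else g
                     for g, (lo, hi) in zip(child, bounds)]  # mutation within bounds
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy use: minimize the sphere function over three variables.
print(evolve(lambda v: sum(g * g for g in v), bounds=[(-5, 5)] * 3))
```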

1.2.4 Learning Theory

Learning theory is based on human learning capabilities, which operate without much effort in the conventional sense. The mechanism of learning in humans is the process of bringing together cognitive, emotional, and environmental effects to acquire, enhance, or change knowledge and skills. In general, learning is characterized by how information is input, processed, and stored. Learning theories fall into three frameworks: behaviorism, cognitive theories, and constructivism. Behaviorism is learning based on objectively observable features; cognitive theories describe how learning occurs in the brain; constructivist learning is a process in which permutations of existing ideas build a new idea. In most machine learning, four basic forms of learning are adopted: supervised learning, where a mapping of input to desired output is learned; unsupervised learning, where a set of input features is modeled and inputs with similar patterns are grouped; semi-supervised learning, where a combination of labeled and unlabeled datasets is used to generate an appropriate classifier; and reinforcement learning, which involves decision making on a given observation, with feedback taken from the consequence to supervise the learning process [1].

1.2.5 Probabilistic Methods

Probabilistic theory is a methodology that guides the handling of uncertainties and imprecision. Probabilistic methods involve a space consisting of the probabilities of the whole system. The uncertainties of a complex dynamic system are calculated, and the combined behavior of the system is analyzed for its degree of chaoticity. The chaotic behavior of a system is estimated from its past; in general, the chaotic behavior of a system grows exponentially with time [1].


1.2.6 Swarm Intelligence

Swarm systems are based on the behavior of flocks of birds, insects, and fireflies: the twisting of a flock of birds, the V-shaped structure of migrating geese, winter birds hunting for food, and the synchronized flashing of fireflies are imitated. This well-choreographed collective behavior, without any leader, is adopted to search for optimal solutions. For instance, ants living in a colony are driven by the goal of colony survival instead of individual survival; while searching for food, ants initially explore the surroundings of the nest in a random manner. A similar behavior is observed with flocks of birds, where a leader keeps guiding the flock to an updated food location [1].
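
Particle swarm optimization formalizes this social behavior. One iteration of the standard inertia-weight update is sketched below; the coefficient values w, c1, and c2 are typical textbook choices, not the settings tuned in Chapter 3:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: each particle is pulled toward its own best position
    (cognitive term) and the swarm's best position (social term)."""
    for x, v, p in zip(positions, velocities, pbest):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]                        # inertia keeps exploring
                    + c1 * r1 * (p[d] - x[d])       # pull toward personal best
                    + c2 * r2 * (gbest[d] - x[d]))  # pull toward global best
            x[d] += v[d]
```

After each step, the personal and global bests are refreshed from fresh objective evaluations, and the loop repeats until a convergence criterion is met.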

1.2.7 Global Search Optimization

Both EA and SI together form a broader class of optimization-driven search techniques, defined as global search optimization techniques, as shown in Fig. 1.3 below.

Fig. 1.3 Global Search Optimization Hierarchy

1.3 Synergies of Computational Intelligence Techniques

Different combinations of these methodologies can be used to design intelligent systems. A particular technique might be excellent at approximate reasoning and modeling uncertainty but not so good at learning and adapting from experimental data. A combined approach of computational intelligence techniques and their implementations can help in designing better intelligent agents.


Different forms of synergism (Fig. 1.4) of fuzzy logic, neural networks, and evolutionary algorithms are possible. The common forms of weakly coupled synergism of neural networks and evolutionary algorithms include training and designing networks, optimizing the architecture and parameters of neural networks, and feature selection and scaling of training data for neural networks using evolutionary algorithms. In a strongly coupled synergism between the two methodologies, genetic operators are represented in the form of a neural network, and the epochs are taken to be the generations of evolution [1].

Synergisms of neural networks and fuzzy systems have proven to be very powerful for system modeling and learning. In weakly coupled synergism, the neural network and fuzzy system work independently towards a common goal, where the neural network assists the fuzzy logic in forming rules and tuning membership functions. In strongly coupled synergism, the fuzzy system assists the neurons in assigning weights through its membership functions while the neural network learns the data over the epochs. Other synergisms are possible between swarm intelligence, fuzzy systems, evolutionary algorithms, and neural networks.
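
To make the weakly coupled EA-NN idea concrete, the sketch below evolves the flattened weight vector of a tiny network against a toy regression target instead of training it by backpropagation. The network size, data, and (mu + lambda)-style evolution settings are illustrative assumptions, not the NSGA-NN model of Chapter 5:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(theta, X, n_hidden=4):
    """Decode a flat weight vector theta into a 2-4-1 network and evaluate it."""
    n_in = X.shape[1]
    k = n_hidden * n_in
    W1, b1 = theta[:k].reshape(n_hidden, n_in), theta[k:k + n_hidden]
    W2, b2 = theta[k + n_hidden:k + 2 * n_hidden], theta[-1]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

X = rng.uniform(size=(30, 2))          # toy inputs
y = np.sin(3 * X[:, 0]) + X[:, 1]      # toy regression target

def mse(theta):
    return float(np.mean((predict(theta, X) - y) ** 2))

dim = 4 * 2 + 4 + 4 + 1                # total number of weights and biases
pop = rng.normal(size=(30, dim))
for _ in range(200):                   # evolve weights instead of backpropagating
    pop = pop[np.argsort([mse(t) for t in pop])][:10]               # keep 10 best
    kids = pop[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, dim))
    pop = np.vstack([pop, kids])
print("best MSE:", min(mse(t) for t in pop))
```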

Fig. 1.4 Synergies of Computational Intelligence


1.4 Applications of Computational Intelligence

A system designed on the basis of computational intelligence elucidates data irrespective of its orientation, and it has the upper hand over classical domain-based analysis, where data computing and processing become difficult. CI exhibits the characteristic of domain independence, where the same methodology can be applied to different fields. For instance, both a neural network and fuzzy logic can be applied to solve the same problem; the only difference would be in the performance.

Neural networks can be applied in five ways, i.e., data analysis, classification, clustering, pattern recognition, and control strategy, and have been successfully applied to problems behaving non-linearly, whereas fuzzy logic has been applied to appliances where modular control is required, a notable implementation being the stabilization of an unsteady image. Fuzzy expert systems are applied to medical systems, diagnostics, scheduling, and financial systems.

1.4.1 Application of NN

1. In aerospace, neural networks are applied to high-performance autopilots, flight path simulation, aircraft control systems, and fault detection systems.

2. In automotive systems, neural networks are used for automatic guidance systems.

3. In banking, finance, and business, they are applied for document reading, credit application evaluation, and credit activity monitoring.

4. In defense, they are used for weapon steering, target tracking, object discrimination, and facial recognition.

5. In industrial, manufacturing, and electronics settings, for process control, process identification, machine diagnosis, and quality inspection.

6. In medicine, for cancer cell analysis, EEG and ECG signal analysis, and optimization of transplantation.


7. In speech, applied for speech recognition, compression, and text-to-speech synthesis.

8. In telecommunications, applied for image and data compression, speech processing, and real-time translation of spoken language.

1.4.2 Application of Evolutionary Systems

1. In automotive design, including research on composite material design and multi-objective design of automotive components for crashworthiness, weight savings, and other characteristics.

2. In optimizing the structural and operational design of industrial and manufacturing systems, for optimization of mechanical systems like heat exchangers, turbines, and flywheels, and in computer-assisted engineering design.

3. In the design of mechatronic systems using bond graphs, and in industrial equipment design using catalogs of exemplar lever patterns.

4. In the Travelling Salesman Problem (TSP) and sequence scheduling.

5. In control (gas pipelines, pole balancing, missile evasion) and design (semiconductor layout, aircraft design, keyboard configuration, communication networks).

6. In combinatorial optimization and scheduling applications, including job-shop scheduling with the objective of scheduling jobs, both sequence-dependent and non-sequence-dependent, for maximum production volume.

1.4.3 Application of Fuzzy system

1. Fuzzy systems are used in automobiles and vehicle subsystems such as automatic transmissions, ABS, and cruise control.

2. In air conditioners, washing machines, and other home appliances.

3. In digital image processing, such as edge detection, and in video game artificial intelligence.


4. In pattern recognition in remote sensing, microcontrollers, and microprocessors.

5. In hydrometeor classification for polarimetric weather radar.

1.5 Overview of the chapter

In this chapter, a complete overview of the ideas and foundations of Computational Intelligence and its methodologies (i.e., evolutionary systems, neural networks, and fuzzy systems) has been given. The possible interactions of these techniques, and the types of synergism in which the limitations of one technique can be surpassed by a combination of two or more CI techniques, have been discussed. Depending on the compatibility of the individual methodologies, a better computational model can be built which complements the respective methodologies.

1.6 Problem Statement

Computational Intelligence has captured the attention of major manufacturing segments for its robust and dynamic adaptability, flexibility, versatility in problem solving, and decision-making skill. Computational Intelligence has the competency to capture and compare real-time data. Yet its application to machining problems has not been explored to its fullest potential; most machining problems are modeled and optimized through conventional techniques.

In the present work, different soft computing techniques are applied to a machining system to optimize machining performance and recognize machining patterns, with a case study from the literature [2-9] in which extensive machinability aspects of AISI 4340 alloy steel during hard turning, with different machining characteristics, are discussed from different facets.

1.7 Objectives

In the present work, an attempt is made to apply Computational Intelligence techniques and their synergies with the objective of optimizing and building prediction models for a conventional machining system. Different computational methods are applied to the machining system for optimization and learning of machining data. Further, the optimized and learnt data are compared with the results obtained from the literature.

1. To optimize hard turning of AISI 4340 steel by applying evolutionary algorithms and swarm intelligence (i.e., Non-Dominated Sorting Genetic Algorithm, Strength Pareto Evolutionary Algorithm, and Particle Swarm Optimization) and compare the optimized results with the literature.
2. To develop prediction models for hard turning of AISI 4340 steel by applying intelligent learning techniques, i.e., Neural Networks (NN) and Adaptive Neuro-Fuzzy systems (ANFIS, a Takagi-Sugeno fuzzy based neural network), and compare the prediction models with the experimental statistics.
3. To develop a supportive combination of an evolutionary algorithm and a prediction neural network; furthermore, the predicted model will be tested against experimental data.
4. To develop an Adaptive Neuro-Fuzzy based evolutionary estimator for predicting optimized parameters for the AISI 4340 steel hard turning operation.

1.8 Methodology

The adopted methodology (Fig. 1.5) is developed to achieve the above-mentioned objectives, with a focus on optimization and on developing prediction models.

1. The methodology follows two directions: in one division, optimization and prediction models are applied exclusively, and in the other division, a supportive combination of optimization and prediction models is applied.
2. After obtaining results from each technique, a comparative evaluation of the respective techniques is made against the experimental statistics of the hard turning operation.


Fig. 1.5 Applied Optimization and Learning Methodology

1.9 Thesis Outline

The present thesis comprises seven chapters.

In Chapter 1, a brief introduction to Computational Intelligence, the various techniques and their combinations, and the applications of CI techniques, including how CI can be applied to machining systems, are discussed, along with the problem statement, objectives, and methodology for the current work.

In Chapter 2, a review of the literature pertaining to the problem and objectives is made, and conclusions from the literature are drawn.


In Chapter 3, optimization techniques are applied to the machining model of hard-turned AISI 4340 steel; both the evolutionary and the swarm intelligence based algorithms are described in brief, and their mathematical aspects and pseudo code, along with the results, are discussed.

In Chapter 4, the applied predictive models, i.e., the neural network and the adaptive neuro-fuzzy based learning model, are discussed, along with their mathematical aspects, pseudo code, and results.

In Chapter 5, synergies of CI techniques are described along with their mathematical background and applied pseudo code, and the results of the applied techniques are discussed.

Finally, in Chapter 6, the relative comparison of the applied techniques and their evaluation against the experimental statistics are discussed, and the conclusions of the present research work and the future scope are briefed.


CHAPTER-2

2.1 LITERATURE REVIEW

The literature was explored with the aim of gathering the research work of authors who utilized different evolutionary and learning techniques. In the literature, authors have utilized optimization and predictive techniques exclusively; multi-objective machining systems were converted to single-objective problems for performance evaluation. Many authors have optimized machining systems with a unit control parameter to enhance machining performance, avoiding the complex nature of conflicting objectives. The literature review emphasized the process parameters and control parameters utilized to model the machining system, the applied optimization techniques, and their degree of accuracy in comparison to conventional techniques.

Ansalam et al. [10] investigated improving the machining performance of a hard turning operation on SS420 steel. The process parameters considered were cutting speed, feed rate, and depth of cut, used to model surface roughness. An RSM-based regression model was built for prediction, and an improved optimization technique, the Integrated GA (IGA), was applied and compared with the Conventional GA (CGA). The IGA gave better results than the CGA.

Hesam et al. [11] carried out an EDM process on DIN 1.452 stainless steel, in which surface roughness and the white layer were the control parameters. The machining model was built with the Taguchi technique, and NSGA II was applied for the optimization, which produced convincing results.

Garg et al. [12] improved the turning operation of AISI 1040 steel with surface roughness as the control parameter. The Taguchi technique was applied to model surface roughness; apart from that, Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques were used to build regression models. Genetic programming coupled with a classifier (C-GP) was used as the optimization technique. The results suggested that C-GP was on par with ANN, while SVR performed worse than both C-GP and ANN.


Khaider et al. [13] examined the hard turning operation of AISI 52100 bearing steel with a CBN (7020) tool. The machining performance measures, surface roughness, tool wear, and material removal rate, were modeled with Taguchi, RSM, and grey-relation techniques; these models were utilized to optimize performance by applying GA, and the parameters predicted by GA gave better machining performance.

Ozel and Karpat [14] investigated enhancing the performance of AISI H13 grade steel turning with a CBN tool. Prediction models for surface roughness and tool wear were built on the process parameters (cutting speed, feed rate, depth of cut). Experimental data of AISI 52100 steel were taken from the literature, and further experiments were performed on AISI H13 steel; these data were utilized to train neural networks, and regression was also carried out. Two feed-forward neural networks were modeled: in the first, the input layer comprised edge geometry, hardness, cutting speed, feed rate, and depth of cut to predict tool wear and surface roughness, while in the second, material hardness, cutting speed, feed rate, depth of cut, and forces were utilized to model tool wear and surface roughness. The second network performed better than the first.

Alhameri et al. [15] studied multi-pass turning of austenitic AISI 302 steel. A Box-Behnken design was utilized to develop the model. Prediction models were also built by regression analysis and NN to predict tool life and machining economics, with the motive of minimizing machining cost and maximizing tool life.

Abbas et al. [16] carried out research on the turning operation of J steel with a tungsten-carbide insert. Models were built to predict surface roughness and material removal rate by applying the Taguchi technique. The formulated regression equations were utilized as objectives with appropriate constraints on the process parameters. A multi-objective EGO algorithm was implemented to optimize machining performance.

Zhenghua et al. [17] investigated high speed milling of aluminum alloy AlMn1Cu with a carbide-tipped tool. Both linear and quadratic regression models were built, and a Bayesian Neural Network (BNN) was trained on experimental data to predict surface roughness. The regression models were utilized as objective functions with precise constraints applied to the process parameters. GA was applied to optimize the parameters; GA predicted optimized parameters for each surface roughness value, which were verified by the BNN.

Yunguang et al. [18] worked on micro-grinding of a nickel-based superalloy (DD98); surface roughness was modeled in linear and non-linear degrees using CCD-based RSM. GA was applied to predict the control parameters for the best machining performance. The results were verified experimentally and were found to show good agreement.

Shaharam et al. [19] examined cellular manufacturing systems with the objective of minimizing cellular movement distance and machine idle time. Regression models were developed, and optimization techniques, viz., NSGA II, LINGO, and Fuzzy-GA, were applied; the results concluded that NSGA II gives better results than LINGO and Fuzzy-GA.

N. Alberti and Perrone [20] worked on a multi-pass turning operation to predict least power consumption, machining economics, and surface roughness, for which three different modelling approaches were adopted, viz., a deterministic model, a possibilistic model, and a fuzzy possibilistic-GA model with constrained and unconstrained search spaces. The results established that the fuzzy-possibilistic model predicted most failures and that the fuzzy-possibilistic-GA optimized the objectives to practically feasible solutions.

Garge et al. [21] experimented on EDM of titanium and Inconel alloys, in which surface roughness and cutting speed were the control parameters modeled with the process parameters. NSGA II was applied to optimize performance.

Pramanic et al. [22] worked on EDM of ZrB2, where cutting speed, material removal rate, and surface roughness were modeled with the process parameters by applying the Taguchi technique, and optimization was based on Taguchi-based grey relation. ANN was used to predict cutting speed and surface roughness; the prediction accuracy was checked against the experimental statistics for the confidence level and gave appreciable results.

Sahali et al. [23] worked on a multi-point turning operation, modeling machining economics with the process parameters; constraints on surface roughness, chip-tool temperature, tool life, and force were applied. The optimization techniques applied were a deterministic technique, a probabilistic technique, and probabilistic NSGA II (P-NSGA II). The results concluded that P-NSGA II outperformed the deterministic and probabilistic techniques.

Dureja et al. [24] reviewed the different optimization and modeling techniques used in hard turning operations, viz., RSM, Taguchi, regression analysis, NN, fuzzy modeling, and GA.

Ganesan and Kumar [25] investigated performance enhancement of the turning operation by predicting machining cost, machining time, and tool wear from the process parameters; GA was applied to optimize the objective function.

Jawahir [26] presented analytical and numerical solutions to 2D and 3D chip formation; a hybrid predictive model was developed to characterize and optimize chip breakability and chip curl geometry. Furthermore, GA was used to optimize chip formation so that machining happened according to desirability; the optimized results were verified with FEM simulation results.

Sundaraman et al. [27], in contrast, worked on fixture design and layout for end milling; a quadratic model was built using RSM, and the optimization was done by GA and PSO. The model was built to predict workpiece deformation with the positions of clamps and locators as parameters. The optimization results suggested that RSM-PSO gave a better solution than the RSM-GA technique. Furthermore, these results were compared with an FEM simulation of the fixture layout.

Costa [28] investigated multi-pass turning with the objective of minimizing unit production cost, constituted of the actual machining cost, the machine idle cost, and the tool replacement cost. The characteristic equation was built on the process parameters, viz., cutting speed, depth of cut, and feed rate, in both rough and finish passes, and operation constraints were applied on tool life, cutting forces, power, and surface roughness. A novel hybrid PSO technique was formulated for the optimization and compared with other techniques: the hybrid PSO suggested a unit production cost of 2.035, while FEGA gave 2.3057 as the optimal cost, SA gave 2.29, MGA gave 2.30, HC gave 2.27, and ACO gave 2.25. The hybrid PSO technique could thus find a solution superior to the other techniques.


Bharathi et al. [29] investigated the turning operation with a diamond-shaped tungsten carbide tool on four different materials, i.e., brass, aluminium, copper, and mild steel. Machining was characterized by forces and surface roughness, with cutting speed, feed rate, and depth of cut as process parameters. These equations were then optimized using the PSO technique; the optimal solutions suggested the trend that higher cutting speed with lower feed rate and depth of cut gave better surface roughness. The optimal surface roughness obtained from PSO was 0.07 μm for brass, 0.08 μm for copper, 0.08 μm for aluminium, and 0.08 μm for mild steel.

Bharathi et al. [30] carried out an investigation on the milling operation of an aluminum bar with a carbide tool. Machining time and surface roughness were characterized with process parameters such as spindle speed, feed rate, and depth of cut. The characterized equations were optimized using the PSO technique, and the solution obtained from PSO was verified by conducting a confirmation test. The solution trend showed that higher speed, lower depth of cut, and lower feed rate gave better surface roughness, and that feed rate had the greatest influence on surface roughness. The prediction ability of the approach was found to be 96% for machining time and 85% for surface roughness.

Bharathi et al. [31] investigated modeling and optimizing both turning and grinding operations. The turning operation was done in single and multiple passes, while grinding was done in a single pass. The performance of turning was measured on machining time, while grinding was measured on machining time and material removal rate. The optimization techniques applied were PSO, GA, and SA, whose optimal solutions were comprehensively evaluated. The computational time taken by PSO in single and multi-pass turning was 11 s and 12 s respectively, and 4 s for grinding; GA took 15 s in both single and multi-pass turning and 6 s for grinding as the optimal computational time; likewise, SA took 12, 13, and 5 s respectively. The optimal material removal rate in grinding was in the range of 0.17-0.44 m. From these results it can be inferred that PSO proved better than GA and SA.

Chandrasen et al. [32] reviewed the different soft computing techniques that can be applied to machining performance prediction. Any machining system can be generalized by its corresponding inputs and outputs: the inputs in general are process parameters, material properties, and sensory feeds, while the outputs of the system concern machining performance, i.e., dimensional deviation, cutting forces, and tool wear. After the machining characterization is done, various soft computing techniques are applied to optimize the machining model. The review concluded that the best strategy to predict performance is to couple fuzzy logic with a neural network; likewise, for precise optimization, GA, PSO, and similar heuristic techniques are the best.

Prabhakaran et al. [33] carried out work on machining fixture analysis where the location and displacement of the clamp and locator were the objective functions. Regression models were developed for displacement and location and optimized using GA and ACA. The ant colony algorithm gave a solution nearer to optimal than GA.

Farahnakian et al. [34] investigated the end milling operation; performance was modeled in cutting force and surface roughness, with cutting speed, depth of cut, and feed rate as process parameters. The characterized equations were utilized to frame the optimization problem, and a coupled PSO-NN technique was applied for the optimization. The applied technique gave a better Pareto spread in the solution space with good convergence.

Yang et al. [35] carried out work on a multi-pass face milling operation. Performance was characterized by unit production cost with process parameters such as number of passes, depth of cut, cutting speed, and feed rate. A fuzzy-based multi-objective PSO was applied to optimize the process parameters, which gave better solutions with fast convergence.

Escamilla et al. [36] experimented with end milling, with performance characterized in surface roughness and cutting speed, feed rate, and depth of cut as process parameters. A Taguchi-based regression equation was utilized to formulate the optimization, and PSO was used to optimize the parameters.

Li et al. [37] worked on improving the performance of the milling operation. Performance was measured in cutting force, tool life, surface roughness, and cutting power, with spindle speed, feed rate, and depth of cut as process parameters. The PSO technique was applied to optimize the process parameters. The Pareto spread of the optimal solutions was wide and converged well.

Chen and Li [38] worked on optimizing the grinding operation by maximizing material removal rate; PSO was applied for the optimization.

Sukla and Singh [39] investigated Abrasive Water Jet Machining (AWJM) of an aluminum alloy with garnet abrasive particles. A machining model was built for kerf width and taper angle prediction from the process parameters by applying the Taguchi technique, and the optimization techniques applied were PSO, Firefly, Simulated Annealing, Black Hole, Bio-Geographical, and NSGA. PSO gave better results than the other techniques.

Asilturk and Cunkas [40] carried out an experimental investigation on the turning operation of AISI 1040 steel with an Al2O3-coated carbide insert. The tool variables considered were tool material, nose radius, rake angle, and cutting edge geometry, while the workpiece variable considered was material hardness, together with cutting conditions such as cutting speed, feed rate, and depth of cut. With these variables, a multi-regression model was developed and a full factorial experimental design was built. Further, an ANN was developed with the back propagation training algorithm to predict surface roughness. ANN and multi-regression gave close estimates of surface roughness; ANN performed better, with a 99% regression coefficient against 97% for the regression.

Senthil et al. [41] predicted the performance of cutting tool inserts using a neural network. Experiments were performed on the workpiece with carbide inserts, with process parameters such as cutting speed, feed rate, depth of cut, material hardness, and cutting insert shape (relief angle, nose radius), to model surface roughness and flank wear. A Taguchi-based ANN model was built with these process parameters as the input layer. The results predicted by the neural network model were compared with the experimental values and were found to be close to the experimental statistics.

Miron et al. [42] worked on the dynamic characterization and vibration analysis of a lathe machining system, by which the machine condition was determined. Modal analysis was done to determine the natural frequencies; the frequencies were compared with a numerical model, and a validation experiment was performed.

Dilbag and Venkateshwara [43] developed an analytical tool wear model for turning bearing steel with a ceramic tool. The model incorporated the abrasion, adhesion, and diffusion wear mechanisms; it was further validated by conducting experiments. The analytical model was capable of predicting flank wear using the cutting parameters and tool geometry.

Yahya et al. [44] worked on the turning operation of tool steel with a P25 HSS tool at different working conditions. Surface roughness, flank wear, and crater wear were modeled with the process parameters to determine the machinability of the tool steel. The relative degree of influence of each parameter on the control parameters was quantified. This work can help in sorting the priority of the objective functions and their contributions to overall machining performance.

Hamdi et al. [45] investigated the behaviour of hard turning while machining AISI H11 steel with a CBN tool. Forces and the surface profile were considered as process responses. A CCD-based RSM was applied to build the machining model; furthermore, a comprehensive analysis was done on the influence of the process parameters on machining quality.

Shihab et al. [46] conducted experiments on hard turning of AISI 52100 steel alloy with a coated carbide tool, in which surface roughness and micro-hardness were modelled and optimized utilizing a CCD-based RSM approach. The RSM-based optimization technique gave satisfactory results, but only by reducing the multi-objective problem to a single objective.

Waleed et al. [47] worked on hard turning of AISI 4340 steel with a CBN tool; in this work, surface roughness and tool flank wear were modelled by the Taguchi technique to form multi-regression equations. These equations were used as objective functions along with constraints on the process parameters; S/N ratio analysis was done on the regression to optimize the control parameters.

Saha et al. [48] experimented on EDM hard facing on Nano-card-11. Mathematical models were built for material removal rate, cutting speed, and machining time, for both brass wire and zinc-coated brass wire, by applying both RSM and multi-criteria grey relation. RSM was utilized to optimize the objectives individually for both wires.

Emeryl et al. [49] worked on hard turning of a Ni-steel alloy (62 HRC) with a CBN tool insert; the performance was modeled to predict cutting forces and surface roughness from the process parameters by applying the Taguchi method, and the optimization was done by Taguchi-based S/N ratio. The results were in agreement with the experimental data with a good level of confidence.

Ilhan and Akkus [50] worked on hard turning of AISI 4140 (51 HRC) steel with a carbide tool coated with Al2O3 and TiC. A three-level full factorial, Taguchi-based experimental design was applied to model surface roughness from the cutting conditions and control factors. The process variability was measured by the S/N ratio. The Taguchi-based S/N response suggested that a larger difference in S/N ratio has a more significant effect on surface roughness. The optimum process variables for surface roughness were a cutting speed of 120 m/min, a feed rate of 0.18 mm/rev, and a depth of cut of 0.4 mm.

Gaurav and Choudhary [51] focused their study on hard turning of EN31 bearing steel (58-62 HRC) with a CBN tool insert. A three-level full factorial experimental design was developed, and ANOVA was performed to find the relative contributions. RSM was utilized to build regression equations for cutting forces and surface roughness, and RSM optimization was then done. The results showed that depth of cut had the greatest influence on the cutting forces, while speed had the least. The results also revealed that the forces initially decreased with increasing speed and later increased with speed due to thermal softening of the tool material.

Ashvin and Nanavati [52] enquired into the turning operation of AISI 410 steel with carbide inserts of the TNMG series differing in nose radius. A three-level full factorial experimental design was done; further, RSM was utilized to model and optimize surface roughness, which suggested an optimal solution of 225 m/min, 0.1 mm/rev, 0.3 mm, and 0.12 mm for cutting speed, feed rate, depth of cut, and tool nose radius respectively.


Asilturk and Suleyman [53] investigated hard turning of AISI 304 austenitic stainless steel with carbide inserts (SNMG series). A three-level full factorial, Taguchi-based experimental design was built. An RSM-based regression equation was modeled for the surface parameters (Ra and Rz) using the process variables; the S/N ratio was determined, and then RSM-based optimization was done. The optimized control factor settings for Ra were found to be a cutting speed of 50 m/min, a feed rate of 0.15 mm/rev, and a depth of cut of 1.5 mm, and for Rz a cutting speed of 150 m/min, a feed rate of 0.15 mm/rev, and a depth of cut of 1 mm. These authors applied Taguchi and RSM to model and optimize the machining parameters. RSM relates responses to input parameters from experimental statistics by applying regression; it consists of three stages, design of experiments, regression, and optimization, and to find the best optimal solution RSM is coupled with a meta-heuristic technique.

Aggarwal and Singh [54] reviewed the different modeling techniques for conventional machining models and the types of optimization methodology used to optimize and characterize machine models.

Chinmaya et al. [55] experimented on hybrid machining, where laser-assisted machining (LAM) was coupled with the turning operation. A high-strength alloy (Ti-6Al-4V) was machined with cobalt-bound tungsten carbide, and liquid nitrogen was used as coolant. The LAM hybrid turning operation reduced the specific cutting energy and improved surface roughness when compared to conventional machining.

Wang et al. [56] worked on a multi-pass turning operation of AISI 1045 with a different set of tools (TNMG carbide inserts). A hybrid model was built to predict machining performance; surface roughness, forces, and chip breakability were characterized by the process parameters, with operational constraints on surface roughness, forces, and tool life. RSM was utilized to optimize the process parameters. The hybrid model developed could predict the slip line field accurately, which was verified by finite element modelling results.

Devender and Kumar [57] worked on turning of aluminum matrix composites reinforced with SiC (Al 6061) using a coated tungsten carbide tool. The effect of reinforcement on the cutting forces was characterized to improve machining performance, and it was found that the reinforcement weight fraction had the maximum impact on the cutting forces. A quadratic model was developed using RSM. The optimal solution concluded that the cutting forces were majorly affected by the type of reinforcement.

EranAlsan et al. [58] investigated hard turning of AISI 4140 steel with a ceramic tool mixed with Al2O3 and TiCN. Machining performance was modeled using Taguchi-based RSM, with flank wear and surface roughness as control parameters and the process variables as inputs. The optimization results obtained from the RSM technique suggested a cutting speed of 250 m/min, a feed rate of 0.1 mm/rev, and a depth of cut of 0.25-0.4 mm for surface roughness and flank wear.

Hashimoto et al. [59] identified the fundamental differences in the surface integrity of hard turned and ground surfaces, and their subsequent impact on rolling contact fatigue life. The work concluded that mechanical deformation plays a larger role during hard turning than in grinding, while the size effect in grinding introduced surface hardening; furthermore, a hard turned surface may have more than 100% longer fatigue life than a ground one with an equivalent surface finish, owing to the very different characterization of the surface integrity. Considering turned and ground surfaces free of white layer, a superfinished turned surface may have twice the fatigue life of a ground surface.

Ozel et al. [60] investigated hard turning of AISI 4340 steel with uniform and variable edge PCBN inserts, where the forces and tool wear were measured, and a 3D finite element model was utilized to predict chip formation, temperature, and tool wear for both types of inserts; the predicted tool wear and forces were compared with experimentation. The results showed that the variable edge tool insert has the advantages of less tool wear and a good temperature distribution profile.

Ravinder and Santram [61] investigated the effects of cutting parameters on surface roughness in turning of an Al7075 hard ceramic composite and an Al7075 hybrid composite using a polycrystalline diamond (PCD) tool; dry turning was conducted, and the roughness trend was examined using a roughness tester for both composites. It was concluded that the surface roughness of the hybrid composite was lower in all combinations of the experiment. Further, an RSM-based artificial neural network was applied to validate the results obtained during experimentation and to predict the behavior of the system under any condition within the operating range.

Mia and Dhar [62] developed a predictive model of the average tool-workpiece interface temperature in hard turning of AISI 1060 steel with a coated-carbide insert. The cutting conditions, cutting speed, feed rate, and depth of cut, were utilized to model the temperature profile. Experiments were conducted in both dry and high-pressure coolant environments with a full-factorial design, and the temperature was measured using a tool-work thermocouple. Response Surface Methodology (RSM) and an Artificial Neural Network (ANN) were employed to predict the temperature, and the accuracy of both models was within the region of acceptance. The regression coefficient of the ANN for both environments was greater than 99.8%; the ANN model demonstrated a higher accuracy, which was found convincing enough for controlling the cutting temperature in turning of hardened steel.

Pontes et al. [63] worked on turning of AISI 52100 hardened steel with a multi-layer coated (Al2O3+TiC+TiN) chamfered-edge tool. Experiments were conducted with training sets of different sizes to compare the performance of the best network in each experiment. The process parameters considered were cutting speed, feed rate, and depth of cut, used to model performance in surface roughness. A radial basis function (RBF) neural network was developed, with Taguchi's orthogonal array used as a tool to design the network parameters. The factors considered in designing the RBF-NN were the number of radial units, the algorithm for selection of the radial centers, and the algorithm for selection of the spread factor. The results revealed that the algorithm for calculation of the radial spread factor was the most influential among the three factors, and the trained RBF-NN gave the least mean standard deviation for the worst trained case. The results suggested that DOE-based RBF networks are more efficient and effective than trial-and-error based NN architectures.

Gaitonde et al. [64] investigated the influence of process parameters on the machinability characteristics in turning AISI D2 (cold work) tool steel with different ceramic inserts. A multi-layer feed-forward neural network was developed with ceramic insert grade, cutting speed, feed rate, and machining time as inputs to predict specific cutting force, surface roughness, and tool wear. A statistical comparison was made between the predicted and experimental results; furthermore, the interaction effects among the process parameters were studied.

Wang [65] developed a neural network based optimal estimator for predicting CBN tool wear during the hard turning operation. The prediction model was based on a fully forward connected neural network, with the cutting conditions and machining time as inputs and tool flank wear as the predicted output. The feed forward fully connected neural network (FFCNN) based estimator was validated with experimental data. The comparison showed that the FFCNN estimated values close to the experimental tool wear, and the developed FFCNN model was found to be faster and more accurate than other neural network approaches.

Umbrello et al. [66] developed a predictive hybrid model based on a neural network and the finite element method with the objective of predicting the residual stress profile in hard turning for different combinations of material properties, cutting tool geometry, and cutting conditions. A converse prediction of the cutting conditions and geometry was made for a given residual stress profile, which acted as a constraint-based determination of the process parameters. Furthermore, this model was utilized in closed feedback, where the residual stresses predicted by the ANN were applied to simulate cutting conditions in FEA and vice versa. The results obtained from the ANN-based FE simulation were practical.

Ravinder and Santram [67] investigated the effect of cutting parameters (cutting speed, feed rate, and approach angle) on roughness while turning an Al 7075 hard ceramic based composite using a polycrystalline diamond (PCD) tool. The surface roughness was modeled by both RSM and ANN; moreover, the influence of the parameters on surface roughness was analyzed, and both the RSM and ANN models correlated fairly well with the experimental data.

Yildiz [68] presented a Hybrid Differential Evolution Algorithm (HDEA) for minimizing production cost in multi-pass turning operations; the algorithm was illustrated with two case studies. Taguchi-based differential evolution was applied to solve the machining economics problem. Further, the hybrid differential evolution based optimization technique was compared with PSO, HEGA, Scatter Search (SS), Simulated Annealing (SA), Pattern Search (PS), the Floating Encoding Genetic Algorithm (FEGA), and Hybrid Harmony Search (HHS); the HDEA outperformed all the other techniques.

Kara et al. [69] worked on turning of AISI 310L stainless steel with both coated (TiCN+Al2O3+TiN) and uncoated cutting tools. The cutting conditions (cutting speed, feed rate, and depth of cut) were used to model the tangential force and the feed force, and prediction models were developed for both responses with ANN. Two learning methods were deployed, i.e., scaled conjugate gradient learning and Levenberg-Marquardt learning. The predicted forces were accurate, with error within 5%.

Sener Karabulut [70] worked on milling of a metal matrix composite (aluminum alloy 7039/Al2O3, powder metallurgy) with CVD carbide tools. The process parameters were material removal rate, cutting speed, feed rate, and axial depth of cut, used to model machining performance in surface roughness and cutting force. An ANN model was developed from the cutting conditions to predict performance. The predicted performance was compared with the experimental model and gave close results, with 99.8% regression.

2.2 Literature Summary

Authors have applied evolutionary techniques to different machining systems for optimization. From the literature it can be concluded that very few authors have applied swarm intelligence techniques for optimization. Concerning regression-based learning techniques, authors have applied neural network based prediction models, and few authors have utilized hybrid-learning based evolutionary optimization estimators. As noted above, optimization and predictive techniques have been utilized exclusively, multi-objective machining systems have been converted to single-objective problems, and machining systems have often been optimized with a unit control parameter, avoiding the complex nature of conflicting objectives. The literature thus motivates research work on the multi-performance of machining systems and multi-regression prediction models, as it lacks the application of synergies of computational techniques for performance evaluation and prediction.


CHAPTER 3

GLOBAL SEARCH ALGORITHMS FOR MULTI-OBJECTIVE OPTIMIZATION

3.1 Introduction

In this chapter, Multi-Objective Optimization Problems (MOOPs) of the machining system are solved by applying Evolutionary Algorithms (EAs) and Swarm Intelligence (SI), which are global search based optimization techniques (ref. Fig. 1.3). In the first section, the evolutionary based NSGA II (Non-Dominated Sorting Genetic Algorithm) and SPEA 2 (Strength Pareto Evolutionary Algorithm) algorithms are applied to the machining MOOPs, and a comparison between the two is made on the basis of the diversity of solutions. In the second section, Particle Swarm based swarm intelligence is applied to the same machining MOOPs, and the optimized results are interpreted through swarm surface and Pareto plots. In the third section, a comparative evaluation of the results obtained by the EA and SI techniques is made, and the difference in the nature of the solution space between the two search optimization techniques is assessed. The workflow of this chapter is shown below.

Fig.3.1 Workflow for Chapter 3


3.2 Multi-objective Optimization

Most of the practical problems are complex and their definition of optimality is not
simple as they need to satisfy multiple competing objective functions at the same time.
Moreover, some of these objectives may have conflicting relations with others, which
makes the optimization difficult. Problems requiring simultaneous optimization of more
than one objective function are known as multi-objective optimization problems
(MOOPs). They can be defined as problems consisting of multiple objectives, which are
to be minimized or maximized while maintaining some constraints. Formally, they can be
defined as:

Minimize/maximize f_m(x), m = 1, 2, ..., M, subject to the constraints

    g_j(x) >= 0,  j = 1, 2, 3, ..., J
    h_k(x) = 0,   k = 1, 2, 3, ..., K

Here, the problem optimizes the M objectives while satisfying the J inequality constraints g_j(x) and the K equality constraints h_k(x). This type of problem has no unique perfect solution. In traditional
multi-objective optimization, it is very common to simply combine all the objectives
together to form a single (scalar) fitness function. But the obtained solution using a single
scalar is sensitive to the weight vector used in the scaling process. This requires
knowledge about the underlying problem which is not known before in most cases.
Moreover, the objectives can interact or conflict with each other. Therefore, trade-offs
exists when dealing with such MOOPs, rather than a single solution. Most MOOPs do not
provide a single solution; rather, they offer a set of solutions. Such solutions are the
trade-offs or good compromises among the objectives. In order to generate these trade-
off solutions, an old notion of optimality called the Pareto-optimum set is normally
adopted.
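
As an illustrative sketch of this formulation, the bi-objective problem below is stated and evaluated in Python; the objective functions f1, f2 and the constraint g1 are hypothetical placeholders chosen only to show the structure, not part of the machining model.

# Minimal sketch of a bi-objective MOOP of the form above.
# f1, f2 and g1 are hypothetical, for illustration only.

def f1(x):
    return x[0] ** 2 + x[1] ** 2          # first objective (minimize)

def f2(x):
    return (x[0] - 2) ** 2 + x[1] ** 2    # second objective (minimize)

def g1(x):
    return x[0] + x[1] - 0.5              # inequality constraint g1(x) >= 0

def evaluate(x):
    """Return the objective vector and constraint feasibility at x."""
    objectives = [f1(x), f2(x)]
    feasible = g1(x) >= 0
    return objectives, feasible

if __name__ == "__main__":
    x = [1.0, 0.5]
    print(evaluate(x))   # ([1.25, 1.25], True)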

In multi-objective optimization, the definition of the quality of a solution is more complex than for single-objective optimization problems. The main challenges in multi-objective optimization are to converge as closely as possible to the Pareto-optimal front and to maintain as diverse a set of solutions as possible. The first task ensures that the obtained set of solutions is near optimal, while the second ensures that a wide range of trade-off solutions is obtained.

3.3 Application of MOOPs to Machining system

Machining data from the hard turning of AISI 4340 steel [2] are utilized as the machining objectives. The machining was performed at two different hardness levels, and regression equations were built using RSM, with cutting speed, feed rate, and depth of cut as process variables, to model surface roughness, cutting forces, and tool life. The machining constraints and objectives are as follows.

Table 3.1 Machining Constraints [2]

Parameter group      Constraint              Lower bound (35 HRC)  Upper bound (35 HRC)  Lower bound (45 HRC)  Upper bound (45 HRC)

Process parameters   Velocity (m/min)        142                   265                   125                   175
                     Feed rate (mm/rev)      0.15                  0.25                  0.15                  0.25
                     Depth of cut (mm)       1                     2                     1                     2

Control parameters   Tangential force (N)    337                   1197                  492                   1296
                     Axial force (N)         219                   605                   298                   663
                     Radial force (N)        197                   496                   256                   564


3.3.1 Machining Model (Surface Roughness, Cutting Force Components, and Tool Life; work material hardness: 35 HRC)

Ra = 12.793 + 0.03118 v + 28.8786 f + 2.8599 d + 0.0354 v f
     + 0.000236 v d + 11 f d + 0.00000381 v^2 + 32.039 f^2
     + 0.2853 d^2                                                  (3.1)

Ft = 373.0294 + 0.5308 v + 788.39 f + 697.2733 d + 7.2420 v f
     + 1.9860 v d + 235 f d + 0.00075 v^2 + 6659.8 f^2
     + 0.598 d^2                                                   (3.2)

Fa = 375 + 2.971 v + 360.24 f + 76.68 d + 7.9052 v f + 0.4 v d
     + 145 f d + 0.000398 v^2 + 1528.4 f^2 + 66.71 d^2            (3.3)

Fr = 239.69 + 2.4094 v + 755.0606 f + 133.18 d + 0.0559 v f
     + 0.2472 v d + 585 f d + 0.000415 v^2 + 2593.5 f^2
     + 22.93 d^2                                                   (3.4)

3.3.2 Machining Model (Surface Roughness, Cutting Force Components, and Tool Life; work material hardness: 45 HRC)

Ra = 11.3037 + 0.0614 v + 16.075 f + 2.3075 d + 0.0006 v f
     + 0.102 v d + 3.7 f d + 0.0000128 v^2 + 49 f^2 + 0.22 d^2    (3.5)

Ft = 50.57 + 0.1484 v + 3270 f + 143.102 d + 33.5 v f
     + 1.11 v d + 1175 f d + 0.01183 v^2 + 5909.091 f^2
     + 38.9091 d^2                                                 (3.6)

Fa = 260.483 + 0.86523 v + 287.159 f + 113.8068 d + 6.9 v f
     + 1.39 v d + 695 f d + 0.01210 v^2 + 3477.273 f^2
     + 41.271 d^2                                                  (3.7)

Fr = 86.3465 + 1.5970 v + 948.0681 f + 212.0113 d + 7.9 v f
     + 1.19 v d + 95 f d + 0.01518 v^2 + 2695.4545 f^2
     + 26.95 d^2                                                   (3.8)

3.3.3 Tool Life Models

Tool life model for 35 HRC:

    Tf = 423 / ( v^0.59 f^0.4697 d^0.47 )                          (3.9)

Tool life model for 45 HRC:

    Tf = 23135.13 / ( v^0.59 f^0.4697 d^0.47 )                     (3.10)
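
For later reference, the 35 HRC surface roughness and tool life objectives can be coded as below. Note that the arithmetic signs of the printed regression terms did not survive extraction, so the coefficients are taken with the signs shown in Eqs. 3.1 and 3.9 above as an assumption; the functions should be verified against the source [2] before any real use.

# Sketch of the 35 HRC machining objectives as Python functions.
# The coefficient signs follow the reconstructed Eqs. 3.1 and 3.9
# and are an assumption -- verify against the source [2].

def surface_roughness_35(v, f, d):
    """Ra model (Eq. 3.1): v in m/min, f in mm/rev, d in mm."""
    return (12.793 + 0.03118 * v + 28.8786 * f + 2.8599 * d
            + 0.0354 * v * f + 0.000236 * v * d + 11 * f * d
            + 0.00000381 * v ** 2 + 32.039 * f ** 2 + 0.2853 * d ** 2)

def tool_life_35(v, f, d):
    """Tf model (Eq. 3.9)."""
    return 423.0 / (v ** 0.59 * f ** 0.4697 * d ** 0.47)

# Process-parameter bounds for the 35 HRC case (Table 3.1)
BOUNDS_35 = {"v": (142, 265), "f": (0.15, 0.25), "d": (1, 2)}

if __name__ == "__main__":
    v, f, d = 200.0, 0.20, 1.5
    print(surface_roughness_35(v, f, d), tool_life_35(v, f, d))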

3.4 Evolutionary Algorithms (EAs)

Optimum seeking is one of the central issues in manufacturing systems. Every problem solved is the outcome of the best possible choice, for which a variety of tools and techniques have been developed and applied to systems for optimum seeking.

Meanwhile, optimum seeking in natural, biological, and social systems takes place in a completely different way, i.e., through natural evolution: species adapt themselves to a constantly shifting and changing environment in order to survive. The weaker and less fit members of a species tend to die away, leaving the stronger and fitter to mate and create offspring, ensuring the continuing survival of the species; it is upon this idea that evolutionary computing is based. Evolutionary computing is an emulation of the process of natural selection in a search procedure (as shown in Fig. 3.2), in which each new population P(p + 1) is obtained by applying variation and selection operators, v and s, to the current population P(p):

    P(p + 1) = s[ v( P(p) ) ]

Fig 3.2 Evolutionary Model

The current EAs are applied to Multi-Objective Optimization Problems (MOOPs), and the combination has become known as a multi-objective evolutionary algorithm (MOEA). An MOEA is considered good only if both the goals of convergence and diversity are satisfied simultaneously. The MOEA's population-based approach helps to preserve and utilize the non-dominated, diverse set of solutions in a population. The MOEA converges to a Pareto-optimal front with a good spread of solutions in some fixed number of generations. Most MOEAs use the concept of domination to attain the set of Pareto-optimal solutions.
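
The domination test that underlies this concept can be sketched as follows (minimization of all objectives is assumed):

def dominates(p, q):
    """Return True if objective vector p dominates q (minimization):
    p is no worse than q in every objective and strictly better in
    at least one."""
    no_worse = all(pi <= qi for pi, qi in zip(p, q))
    strictly_better = any(pi < qi for pi, qi in zip(p, q))
    return no_worse and strictly_better

# Example: (1.0, 2.0) dominates (1.5, 2.0); neither of (1.0, 3.0)
# and (2.0, 1.0) dominates the other, so they are non-dominated.
assert dominates((1.0, 2.0), (1.5, 2.0))
assert not dominates((1.0, 3.0), (2.0, 1.0))
assert not dominates((2.0, 1.0), (1.0, 3.0))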

3.4.1 Mathematical Formulation of Evolutionary Algorithms

Evolutionary algorithms are stochastic in nature: finding the best solution or no solution at all are equally likely outcomes if the parameters of the genetic operators are not appropriate for the nature of the problem at hand. A prior convergence analysis is therefore essential for the favorable working of an evolutionary algorithm, so at each genetic operation the probability of obtaining the best solution, and of its inheritance in the subsequent operation, must be considered.


3.4.1.1 Definition of Evolutionary systems

Consider a generic function f(x) >= 0 on X, with no constraints imposed on X, and an EA system which does not use any specific properties of the set X; the only condition is that the function f should be defined at every point of X, i.e., X is a subset of R^n. Then the problem of optimization can be defined as

    max/min_{x in X} f(x),  where f : X -> R,  f(x) >= 0,  X a subset of R^n.

Let S be a space of binary strings and C be an encoding function, C : X -> S. Discretizing the search space X to X_D by a simple binary code as the encoding function, the optimization problem is converted to one over the finite set S_D, a subset of S, with S_D = C(X_D):

    max_{s in S_D} f(s),  where f : S_D -> R.

There is a variety of evolutionary systems with different types of selection, crossover, and mutation; this section discusses the most generalized terms, with no bias towards any particular genetic operation.

3.4.1.2 Convergence Analysis of Evolutionary Algorithm

Consider the following events in the evolutionary system with their respective properties; let P be a population and n the size of the population:

P{A|H}: probability that the population does not contain a solution after mutation, provided it contained one after crossover;

P{A|H'}: probability that no solution is found after mutation and crossover;

P{H}: probability that a solution will be found after a crossover;

P{H'}: probability of no solution after crossover.

The objective is max f(s), where f >= 0 and s in S; S is finite with |S| = 2^m, where m is the encoding capacity (number of bits).

3.4.1.3 Criteria for mutation

Let p_mt be the probability of mutation and s in S a selected individual, and let [m/2] denote the integral part of the quotient m/2. Let k be the number of bits in which s and the solution s* differ, 0 <= k <= m. Then the probability that s mutates into s* is given by the binomial theorem:

    P{s -> s*} = P_mk * (1 / C_m^k) = p_mt^k (1 - p_mt)^(m-k)

where

    P_mk = C_m^k p_mt^k (1 - p_mt)^(m-k) is the probability of exactly k mutations,
    (1 - p_mt)^(m-k) is the probability that the remaining (m - k) bits do not mutate,
    1 / C_m^k is the probability that precisely the necessary k bits mutate but no other bits.

Since C_m^k attains its maximum at k = [m/2], the probability that the best solution is obtained after mutation is bounded by

    P{s -> s*} = (1 / C_m^k) C_m^k p_mt^k (1 - p_mt)^(m-k) >= p_mt^m / C_m^([m/2]),

and hence the probability that s does not mutate into s* is

    P{s -/-> s*} = 1 - P{s -> s*} <= 1 - p_mt^m / C_m^([m/2]).

In the worst case, for an individual to mutate into s*, half of its bit length must participate in the mutation. Assuming, to the contrary, that the mutation does not produce s* anywhere in the population,

    P{s* not in mut(P)} = prod_{s in P} P{s -/-> s*} <= (1 - p_mt^m / C_m^([m/2]))^n        (3.2)

The above condition is valid when the probability of mutation p_mt < 0.5; if the probability of mutation exceeds 0.5, then the probability that the population does not contain the solution s* is

    P{s* not in mut(P)} <= (1 - (1 - p_mt)^m / C_m^([m/2]))^n.

Now, evaluating the binary strings over the objective, f(P^k) = max_{s in P^k} f(s). If the elite method is utilized for selection, then

    f* >= f(P^k) >= f(P^(k-1)) >= ... >= f(P^0),

where f(P^0) is evaluated over the randomly generated initial population. Here P_f^k denotes the probability that f(P^k) = f after the kth iteration, and P_f*^k the probability that f(P^k) = f* after the kth iteration. The required expectation therefore does not decrease:

    E[P^k] = sum_f f * P_f^k,  k = 0, 1, 2, ...,
    E[P^k] >= E[P^(k-1)] >= E[P^(k-2)] >= ... >= E[P^0].

Now, if l individuals with fitness f*(x) exist in the search space S (i.e., the optimization problem has l solutions s*_1, ..., s*_l) and elitism is applied, then the limiting expectation is unchanged, i.e., E[P^k] -> f*.

Consider the situation when the event A occurs, that no solution is found after the first iteration, and suppose a hypothesis H stating that at least one of the solutions results from crossover. Then, over the possible events,

    P{A} = P{A|H} P{H} + P{A|H'} P{H'}.

3.4.1.4 Criteria for Crossover

P{A|H} and P{A|H'} are estimated from the above; they differ only in that the population contains one solution before mutation in the first case, whereas it does not contain any in the second. Applying the above,

    P{A|H} = P{s*_1, ..., s*_l not in mut(cross(P^0))} <= (1 - p_mt^m / C_m^([m/2]))^(n-l)

    P{A|H'} = P{s*_1, ..., s*_l not in mut(cross(P^0))} <= (1 - p_mt^m / C_m^([m/2]))^n.

While estimating P{H}, we say that a pair (s_1, s_2) is good if it yields a solution after crossover; it can be concluded that a pair is good if both the individuals in the pair contain fragments of the same solution as sub-strings.


Consider the following events:

    B: at least one good pair is chosen;
    C: all the pairs chosen for crossover are good.

    P{H} = P{H|B} P{B} + P{H|B'} P{B'} <= P{H|C}

This follows from the fact that if all the pairs are good, then the probability of obtaining a solution is maximal compared with the other events.

So the probability that a good pair (s_1, s_2) yields a solution can be written as

    P{s_1 x s_2 -> s*} = p_c * q / (m - 1) <= p_c,  q <= m - 1,
    P{s_1 x s_2 -/-> s*} >= 1 - p_c.

On the contrary, P{H'|C}, the probability that a solution does not arise after a crossover provided that all pairs are good, is

    P{H'|C} = prod_{(s_1,s_2) in cross(P)} P{s_1 x s_2 -/-> s*_1, ..., s*_l} >= (1 - p_c)^n,

assuming that at least n pairs take part in the crossover. Then the probability of good pairs yielding a solution after crossover is

    P{H} <= P{H|C} = 1 - P{H'|C} <= 1 - (1 - p_c)^n.

Now, if the event A happens, its probability can be written as follows:

    P{A} = P{A|H} P{H} + P{A|H'} P{H'}
         <= (1 - p_mt^m / C_m^([m/2]))^n (1 - (1 - p_c)^n) + (1 - p_mt^m / C_m^([m/2]))^n * 1

    S = (1 - p_mt^m / C_m^([m/2]))^n (2 - (1 - p_c)^n)        (3.3)

Therefore, the probability at the kth iteration can be written through mathematical induction as

    P{A_k} = P{A_k | A_(k-1)} P{A_(k-1)} <= S * S^(k-1) = S^k,

where P{A_k | A_(k-1)} is the probability that no solution arises after the kth iteration. From the above framework, the expectation of the solution after the kth iteration is

    E[P^k] = sum_f f * p_f^k >= f* p_f*^k >= f* (1 - S^k) -> f*.

From this expectation, the following conclusions can be drawn regarding the parameters which influence the mean convergence rate:

1. As p_c (0 < p_c < 1) increases, S also increases.
2. As p_mt (0 < p_mt < 1) increases, S decreases.
3. As m increases, S also increases.
4. The dependence of S on n can be drawn from the expressions

    (1 - p_mt^m / C_m^([m/2]))^n -> 0 as n -> infinity,  and  (2 - (1 - p_c)^n) -> 2 as n -> infinity,

with the minimum value S = 1 at n = 0.

In order to extract the extreme limits of convergence, we consider the extremum of the function S(n). To find the optimal parameters for the above events, let

    a = 1 - p_mt^m / C_m^([m/2]),  b = 1 - p_c,

    S(n) = a^n (2 - b^n).

For an extremum, S'(n) = 0:

    S'(n) = a^n [ (2 - b^n) ln a - b^n ln b ] = 0.

The optimum n is given by

    n = log_b( 2 ln a / ln(ab) ).
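
A short numeric sketch of these expressions is given below; the values of p_mt, p_c, and m are illustrative only, and math.comb(m, m // 2) supplies C_m^([m/2]).

import math

# Numeric sketch of the convergence bound S(n) = a^n (2 - b^n) and
# the optimal population size n* = log_b(2 ln a / ln(ab)) derived above.
# The parameter values are illustrative, not taken from the thesis.

p_mt, p_c, m = 0.1, 0.8, 10      # mutation prob., crossover prob., string length

a = 1 - p_mt ** m / math.comb(m, m // 2)   # a = 1 - p_mt^m / C_m^[m/2]
b = 1 - p_c                                # b = 1 - p_c

def S(n):
    return a ** n * (2 - b ** n)

n_star = math.log(2 * math.log(a) / math.log(a * b), b)
print(f"a={a:.15f}, b={b}, optimal n ~ {n_star:.2f}, S(n*)={S(n_star):.6f}")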

3.4.2 Key points from convergence analysis

1. With an increase in population size, the string length increases, reaches an optimum peak, and then decreases. The least string length possible for encoding is 1, for a zero-size population.
2. From the above result it can be concluded that the optimal n exists and is real, and that S attains its maximum there; i.e., as n increases, S first increases and then decreases.
3. For accelerating convergence, p_c should be minimum while p_mt should be maximum; but the drawback of a minimum p_c is that the best solutions are not inherited by the next generation, and a high p_mt destroys the best regions of the solution space.
4. With elitism, the expectation of a solution getting transferred to the next generation increases; moreover, if the crossover probability is increased, the density of the best solutions increases, including good solutions in every iteration.
5. If the mutation probability is decreased, the string length participating in mutation decreases, which reduces the possibility of killing the best solutions; but if the mutation is accurately tuned, there is quite a possibility that the worst strings could give good solutions.
6. If the crossover size is increased, convergence decelerates, but the chances of obtaining a good solution increase. In contrast, if the mutation rate is increased to accelerate convergence, good solutions are lost, leading to no solution; so a good balance between convergence time and the crossover and mutation rates is essential.
7. In order to obtain good solutions, it is suggested that the convergence rate be allowed to float freely whenever possible; when the convergence rate is a strict criterion, it is suggested that the string length be kept minimum, so that the Hamming effect and the relative degree of change in the string characters remain small.


With these key points in mind, evolutionary algorithms are chosen with properties which can overcome these drawbacks: the fast elitist NSGA II and SPEA 2 algorithms are utilized to optimize the machining system.

3.5 NSGA II Algorithm

Fig. 3.3 NSGA II Algorithm [69-70]

Table 3.2 NSGA II Settings

Population size          1000
Generations              100
Crossover probability    0.8
Crossover constant       0.1
Mutation probability     0.1
Mutation constant        0.2
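
For use in the sketches that follow, these settings can be collected in a small configuration structure (the key names are illustrative):

# NSGA II run settings of Table 3.2, collected for the sketches below
NSGA2_SETTINGS = {
    "population_size": 1000,
    "generations": 100,
    "crossover_probability": 0.8,
    "crossover_constant": 0.1,
    "mutation_probability": 0.1,
    "mutation_constant": 0.2,
}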

3.5.1 Initialize Variables and Evaluate Objectives

Initialize Variables uses the bounds of the V decision variables and randomly generates a population of size N over the bounded range; each of the M objectives is then evaluated over this population pop for fitness through Evaluate Objectives.

Initialize Variables (N, M, V, range)

1. For i : [1-N]
2.   For j : [1-V]
3.     X[i,j] = Rmin(j) + random(0,1) * Range(j)
4. Pop : X

Evaluate Objectives (Pop, M, V, N)

5. For i : [1-N]
6.   For j : [1-M]
7.     Objectives[i,j] = f_j(Pop[i, 1:V])
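
A minimal NumPy sketch of these two routines follows; the two demonstration objectives are hypothetical placeholders, not the machining objectives.

import numpy as np

def initialize_variables(N, V, lower, upper, rng=np.random.default_rng(0)):
    """Randomly generate N individuals with V decision variables,
    each drawn uniformly within its [lower, upper] bound."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + rng.random((N, V)) * (upper - lower)

def evaluate_objectives(pop, objective_fns):
    """Evaluate each of the M objective functions over the population;
    returns an (N x M) array of objective values."""
    return np.column_stack([np.apply_along_axis(f, 1, pop) for f in objective_fns])

if __name__ == "__main__":
    # Hypothetical placeholder objectives, for demonstration only
    f1 = lambda x: np.sum(x ** 2)
    f2 = lambda x: np.sum((x - 1) ** 2)
    pop = initialize_variables(N=6, V=3, lower=[0, 0, 0], upper=[1, 1, 1])
    print(evaluate_objectives(pop, [f1, f2]))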

3.5.2 Non_dominated_sort

Now the evaluated population is sorted using the non-dominated sort technique, where the population is ranked on the basis of domination into fronts.

Procedure for non-dominated sorting

1. For each individual p in the main population P, do the following:
2.   Initialize Sp = {}. This set will contain all the individuals that are dominated by p.
3.   Initialize np = 0. This is the number of individuals that dominate p.
4.   For each individual q in P: if p dominates q, then add q to the set Sp, i.e. Sp = Sp U {q}; else if q dominates p, then
5.     increment the domination counter for p, i.e. np = np + 1.
6.   If np = 0, i.e. no individual dominates p, then p belongs to the first front; set the rank of individual p to one, i.e. prank = 1, and update the first front set by adding p to front one, i.e. F1 = F1 U {p}.
   This is carried out for all the individuals in the main population P.
7. Initialize the front counter to one, i = 1. The following is carried out while the ith front is non-empty, i.e. Fi is not {}:
8.   Q = {}. This is the set for storing the individuals of the (i + 1)th front.
9.   For each individual p in front Fi, and for each individual q in Sp (Sp is the set of individuals dominated by p):
10.    nq = nq - 1, decrement the domination count for individual q;
11.    if nq = 0, then none of the individuals in the subsequent fronts dominate q; hence set qrank = i + 1, and
12.    update the set Q with individual q, i.e. Q = Q U {q}.
13.  Increment the front counter by one.
14.  Now the set Q is the next front, and hence Fi = Q.
Non_dominated_sort (pop, M, V, N)

1. For i :[1-N] //Initialize domination set
2. Initiate pop[i].dominated set = []
3. pop[i].domination count = 0
4. //Initialize empty front F[1] = []
5. For i :[1-N]
6. For j :[i+1-N]
7. p = pop[i], q = pop[j] //Pairwise comparison
8. //Check for domination: dominates(p,q) = all(p.cost <= q.cost) && any(p.cost < q.cost)
9. If dominates(p.cost, q.cost)
10. p.dominated set = [p.dominated set, j]
11. q.domination count = q.domination count + 1
12. Else if dominates(q.cost, p.cost)
13. q.dominated set = [q.dominated set, i]
14. p.domination count = p.domination count + 1
15. Exchange pop[i] with p and pop[j] with q
16. If pop[i].domination count == 0
17. F[1] = [F[1], i] & pop[i].rank = 1
18. While (front F[k] is not empty)
19. //Calculate the subsequent fronts
20. For each p in F[k] and each q in p.dominated set
21. q.domination count = q.domination count − 1
22. If q.domination count == 0
23. Q = [Q, j] & q.rank = k + 1
24. F[k+1] = Q
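
To make the above procedure concrete, a minimal Python sketch of fast non-dominated sorting is given below; the function and variable names are illustrative and do not correspond to the actual implementation used in this work.

def dominates(p, q):
    # p dominates q if it is no worse in every objective and strictly better in at least one
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(costs):
    # Returns a list of fronts; each front is a list of indices into costs
    n = len(costs)
    S = [[] for _ in range(n)]          # S[p]: individuals dominated by p
    n_dom = [0] * n                     # n_dom[p]: number of individuals dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(costs[p], costs[q]):
                S[p].append(q)
            elif dominates(costs[q], costs[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:
            fronts[0].append(p)         # rank 1: the non-dominated individuals
    i = 0
    while fronts[i]:
        Q = []                          # members of the (i+1)th front
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    Q.append(q)
        i += 1
        fronts.append(Q)
    return fronts[:-1]                  # drop the trailing empty front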

Once the non-dominated sort is complete, the crowding distance is assigned. Since
individuals are selected based on rank and crowding distance, all the individuals in the
population are assigned a crowding distance value. Crowding distance is assigned front-
wise; comparing the crowding distance of two individuals in different fronts is
meaningless. The crowding distance is calculated as below:

15. For each front Fi, n is the number of individuals.


16. Initialize the distance to be zero for all the individuals i.e. Fi (dj ) = 0, where j

corresponds to the jth individual in front Fi.


17. for each objective function m
18. Sort the individuals in front Fi based on objective m i.e. I = Sort (Fi, m).

19. Assign infinite distance to the boundary individuals in Fi, i.e. I(d1) = ∞ and I(dn) = ∞
20. for k = 2 to (n − 1)

I(d_k) = I(d_k) + [ I(k+1).m − I(k−1).m ] / ( f_m^max − f_m^min )        (3.4)

I(k).m is the value of the mth objective function of the kth individual in I
Distance (M, V, N)

1. For front : [1-length(F)] //Calculate distance front-wise
2. //Initiate distance d = 0
3. //Index the members of the current front F
4. Push the members of front F onto stack S
5. Push[S, pop[i]]
6. I = sort(pop(S), m) //Sort front members on objective m
7. Assign infinite distance to the first and last members of I
8. Set f_m^max = max(pop[I].cost(m)) and f_m^min = min(pop[I].cost(m))

I(d_k) = I(d_k) + [ I(k+1).m − I(k−1).m ] / ( f_m^max − f_m^min )

9. Return fronts and distance
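
A minimal Python sketch of the crowding-distance assignment of Eq. (3.4) is shown below; it assumes the objective values of a single front are available as a list of tuples, and the names are illustrative only.

def crowding_distance(front_costs):
    # front_costs: list of M-objective tuples for the members of one front
    n = len(front_costs)
    dist = [0.0] * n
    for obj in range(len(front_costs[0])):
        order = sorted(range(n), key=lambda i: front_costs[i][obj])
        f_min = front_costs[order[0]][obj]
        f_max = front_costs[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions are always kept
        if f_max == f_min:
            continue                                      # degenerate objective, no spread
        for k in range(1, n - 1):
            dist[order[k]] += (front_costs[order[k + 1]][obj]
                               - front_costs[order[k - 1]][obj]) / (f_max - f_min)
    return dist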

3.5.3 Selection

Once the individuals are sorted based on non-domination and assigned a crowding
distance, the selection is carried out using a crowded-comparison operator based on:

Non-domination rank prank, i.e. individuals in front Fi have rank prank = i

Crowding distance Fi(dj)

1. p ≺n q if
2. prank < qrank
3. or, if p and q belong to the same front Fi, then Fi(dp) > Fi(dq), i.e. the crowding
distance should be larger.

The individuals are selected by using a binary tournament selection with the crowded-
comparison operator.

Tournament selection (pop, Toursize, V, M, N)


1. For i :[1-N]
2. For j :[1-Tour size]
3. I : random(N, Tour size)
4. Get [i1,i2] : I(j)
5. //Check rank and distance of the candidates
6. The winner is the candidate with minimum rank; ties are broken by maximum crowding distance
7. I_min : pop[find(min(pop.rank) && max(pop.distance))]
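
The crowded-comparison operator of the tournament can be sketched in a few lines of Python; the dictionary keys 'rank' and 'distance' are assumed field names, not those of the actual implementation.

import random

def crowded_tournament(pop):
    # pop: list of individuals carrying 'rank' and 'distance'; one binary tournament
    a, b = random.sample(pop, 2)
    if a['rank'] != b['rank']:
        return a if a['rank'] < b['rank'] else b        # lower (better) rank wins
    return a if a['distance'] > b['distance'] else b    # same front: larger crowding distance wins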

3.5.4 Genetic Operators


Real-coded GAs use the Simulated Binary Crossover (SBX) operator for crossover and
polynomial mutation for mutation. SBX simulates the working of single-point binary
crossover on real-valued variables.

Simulated Binary Crossover (SBX)

c_{1,k} = 0.5 [ (1 + β_k) p_{1,k} + (1 − β_k) p_{2,k} ]        (3.5)

c_{2,k} = 0.5 [ (1 − β_k) p_{1,k} + (1 + β_k) p_{2,k} ]        (3.6)

where c_{i,k} is the ith child with kth component, p_{i,k} is the selected parent and β_k (≥ 0) is a
sample from a random number generator having the density

p(β) = 0.5 (η_c + 1) β^{η_c},              if 0 ≤ β ≤ 1        (3.7)

p(β) = 0.5 (η_c + 1) / β^{η_c + 2},        if β > 1            (3.8)

This distribution can be obtained from a uniformly sampled random number u between
(0, 1); η_c is the distribution index for crossover:

β(u) = (2u)^{1/(η_c + 1)},                     if u ≤ 0.5       (3.10)

β(u) = ( 1 / (2(1 − u)) )^{1/(η_c + 1)},       if u > 0.5       (3.11)
Crossover (parent pop, M, V, Rang, PC)

1. For i:[1-N]
2. If Random(0,1)<PC
3. //Child initiation child1 and child 2
4. Select parents
5. P1: round [N*random(0,1)]
6. P2: round[N*random(0,1)]
7. Parent 1=parent pop[P1,:]
8. Parent 2=parent pop[P2,:]
9. //Simulated Binary Crossover


10. U[i] : random(0,1)

11. If U[i] ≤ 0.5
    β = (2U[i])^{1/(η_c + 1)}
12. Else
    β = ( 1 / (2(1 − U[i])) )^{1/(η_c + 1)}
13. Evaluate objective(child1,M,V)


14. Evaluate objective(child2,M,V)
15. Polynomial Mutation

c_k = p_k + ( p_k^u − p_k^l ) δ_k

δ_k = (2 r_k)^{1/(η_m + 1)} − 1,             if r_k < 0.5        (3.12)
δ_k = 1 − [ 2(1 − r_k) ]^{1/(η_m + 1)},      if r_k ≥ 0.5

16. Mutate (parent pop, M, V, Range, Pm)


17. For i :[1-N]
18. P3=round[N*random(0,1)]
19. Child3=parentpop(P3,1)
20. m(i) = random(0,1)
21. If m(i) < 0.5
    δ(i) = (2·m(i))^{1/(η_m + 1)} − 1
Else
    δ(i) = 1 − ( 2(1 − m(i)) )^{1/(η_m + 1)}

22. Child3 = child3[i] + δ(i)


23. evaluate objective(child 3,M,V,N)
where r_k is a uniformly sampled random number between (0, 1) and η_m is the mutation distribution index.
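
A minimal Python sketch of SBX (Eqs. 3.5-3.11) and polynomial mutation (Eq. 3.12) for a single decision variable is given below; the default distribution indices and bounds are illustrative assumptions.

import random

def sbx(p1, p2, eta_c=10.0, low=0.0, high=1.0):
    # Simulated binary crossover: spread factor beta drawn via Eqs. (3.10)-(3.11)
    u = random.random()
    if u <= 0.5:
        beta = (2 * u) ** (1.0 / (eta_c + 1))
    else:
        beta = (1.0 / (2 * (1 - u))) ** (1.0 / (eta_c + 1))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)      # Eq. (3.5)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)      # Eq. (3.6)
    clip = lambda x: min(max(x, low), high)
    return clip(c1), clip(c2)

def polynomial_mutation(x, eta_m=20.0, low=0.0, high=1.0):
    # Polynomial mutation: perturbation delta drawn via Eq. (3.12)
    r = random.random()
    if r < 0.5:
        delta = (2 * r) ** (1.0 / (eta_m + 1)) - 1
    else:
        delta = 1 - (2 * (1 - r)) ** (1.0 / (eta_m + 1))
    return min(max(x + (high - low) * delta, low), high)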

3.5.5 Recombination of parent and off springs


The best parents from the former generation and the offspring of the current generation
are combined, so that elite parents are conserved in consecutive generations.
Replace pop (Intermediate pop, M, V, N)
1. For i : [1-N]
2. Sort pop by rank
3. Max_rank = Intermediate pop[find(max Rank)]
4. For i : 1 to max_rank
5. J = max(find(sorted_pop.rank == i))
6. If (j > N)
7. //Sorted with rank
8. //Find the number of individuals with the current rank
9. k = j − N
10. p = sorted_pop(k:N)
11. //Sort according to distance
12. For j : [1-N]
13. F[N+k:] = p(N:j)
14. Else if j < N
15. F[N:j] = sorted_pop[j:N]
16. For i : [1-Kmax] //Main generation loop
17. Pool = round(Np/2), tour = 2
18. Parent pop = Tournament selection (pop, Tour)
19. [Child 1, Child 2] = Crossover (Parent pop, M, V, Range, PC)
20. [Child 3] = Mutate (Parent pop, M, V, Range, Pm)
21. Offspring pop = [Child 1, Child 2, Child 3]
22. Intermediate pop = [pop, offspring pop]
23. Replace pop (Intermediate pop, M, V, N)
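
The elitist replacement step can be sketched as below, reusing the non_dominated_sort and crowding_distance functions from the earlier sketches; whole fronts are copied until the population budget N is reached, and the last partial front is truncated by crowding distance.

def environmental_selection(costs, N):
    # costs: objective tuples of the combined parent + offspring population
    fronts = non_dominated_sort(costs)
    survivors = []
    for front in fronts:
        if len(survivors) + len(front) <= N:
            survivors.extend(front)                  # the whole front fits
        else:
            dist = crowding_distance([costs[i] for i in front])
            order = sorted(range(len(front)), key=lambda k: -dist[k])
            survivors.extend(front[k] for k in order[:N - len(survivors)])
            break                                    # population budget reached
    return survivors                                 # indices of the retained individuals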

Table.3.3 Results of NSGA II family of best solution for AISI 4340 35 HRC Steel

Vc f d Ra Ft Fa Fr Tf R D


265 0.15 2 1.440575 689.4132 257.6872 376.0769 27.67777 1 65535


265 0.15 1 1.732035 481.3859 108.6272 196.3489 38.33675 1 65535
265 0.25 2 2.972375 889.7312 341.015 436.8659 21.77353 1 65535
169.434 0.15 1 4.023594 412.8885 152.2218 231.225 49.91373 1 65535
261.1212 0.15 1 1.823688 475.9385 108.9812 196.2864 38.67171 1 65535
265 0.25 2 2.972375 889.7312 341.015 436.8659 21.77353 1 65535
264.9902 0.25 2 2.972568 889.7437 341.0119 436.8632 21.774 1 65535
264.9983 0.241889 1.927966 2.753395 851.9404 321.1578 419.3945 22.49811 1 0.02106
231.6619 0.150082 1 2.523533 442.0419 115.6496 199.9704 41.49104 1 0.017777
256.2502 0.157696 1.086001 1.906649 498.2758 125.4584 216.8083 36.74274 1 0.016073
265 0.241178 1.922463 2.735899 848.8689 319.5894 417.9773 22.55941 1 0.015979
255.922 0.160701 1.078365 1.924486 500.1209 127.8311 218.4533 36.56701 1 0.01586
264.9996 0.246009 1.921864 2.81867 861.1237 322.9102 422.1392 22.35356 1 0.015844
185.7026 0.15 1 3.628576 414.8737 139.6661 219.9264 47.28547 1 0.015625
177.0656 0.15 1 3.838041 413.3254 146.0696 225.6509 48.63301 1 0.015374
265 0.197867 1.329979 1.873269 621.6293 191.4162 291.9741 29.43819 1 0.014759
264.961 0.161601 1.172733 1.681103 531.3475 137.4695 234.1206 34.35057 1 0.014673
218.6432 0.150107 1 2.834868 431.2042 120.7764 203.8858 42.92803 1 0.014661
264.9999 0.187048 1.311578 1.792259 598.7628 178.8963 278.8227 30.42464 1 0.014372
264.8573 0.241097 1.982864 2.795903 862.7214 331.3484 426.7589 22.24438 1 0.014369
174.3167 0.15 1 3.904826 413.0673 148.2322 227.6029 49.08405 1 0.014283
265 0.244146 1.891249 2.756463 849.2464 315.7876 416.0992 22.60344 1 0.014065
265 0.198594 1.349524 1.881922 627.2379 194.4536 295.5071 29.18677 1 0.014015
225.3303 0.150002 1 2.674819 436.3252 117.9001 201.6127 42.18555 1 0.013776
185.282 0.15 1 3.638763 414.7724 139.9641 220.1908 47.34876 1 0.013629
186.0766 0.15 1 3.619519 414.966 139.4022 219.6925 47.22937 1 0.013486
264.9959 0.240879 1.953051 2.760167 855.148 325.316 422.2024 22.40593 1
189.9865 0.15 1 3.524895 416.0567 136.7103 217.3171 46.65346 1 0.013037
264.9613 0.150352 1.247266 1.609129 533.0857 133.4604 236.7962 34.52022 1 0.01294
264.5977 0.171423 1.103768 1.747466 529.6774 141.0782 232.7329 34.4049 1 0.01288
232.9983 0.150418 1 2.491644 443.7511 115.4963 199.9651 41.30708 1 0.012815
265 0.246117 1.759588 2.664722 823.9152 293.6823 399.6592 23.29502 1 0.012699
208.8492 0.150014 1 3.070018 424.5623 125.443 207.6562 44.11734 1 0.01264
Table 3.3 consists of the optimized results for 35 HRC: the first three columns contain the
process parameters (vc, f, d), while the latter columns contain the optimized results for the
objective functions (Ra, Ft, Fa, Fr, Tf). Out of the 1000 chromosome solutions, a few of
the best chromosomes are listed in the above table.

3.5.6 Plots for NSGA II results (35 HRC)

Fig. 3.4 Rank and Pareto for Ra(35HRC) Fig. 3.5 Pareto-front for Tf (35HRC)

Fig. 3.6 Average distance between consecutive generations (35HRC)

Table.3.4 Results of NSGA II family of best solution for AISI 4340 45 HRC Steel
Vc f d Ra Ft Fa Fr Tf R D
175 0.2500 2 5.4703 1.1135e+03 576.9927 494.4395 8.9403 1 65535
135.9606 0.1500 1 3.9895 519.5461 331.3059 282.9869 28.2786 1 65535
130.8263 0.1500 1 4.0725 524.2045 331.6248 282.5873 30.0149 1 65535


175 0.1500 1 3.5798 504.5262 349.7478 312.2032 19.1319 1 65535


175 0.2500 2 5.4703 1.1135e+03 576.9927 494.4395 8.9403 1 65535
171.7381 0.1500 1 3.5991 504.4006 346.7949 307.9906 19.6973 1 65535
170.6747 0.1512 1 3.6100 505.0240 346.2216 307.2963 19.8126 1 0.0195
175.0000 0.2487 1.9979 5.4454 1.1089e+03 574.4013 492.7024 8.9698 1 0.0172
160.3800 0.1500 1.0066 3.6897 508.6927 339.0356 296.4326 21.7907 1 0.0168
174.9624 0.2458 1.9982 5.3980 1.1015e+03 569.8832 489.6478 9.0210 1 0.0160
175 0.2373 2 5.2635 1.0800e+03 556.7698 480.8408 9.1639 1 0.0156
171.7067 0.1500 1.0037 3.6009 505.9857 346.9955 308.2302 19.6489 1 0.0154
167.9552 0.1514 1.0066 3.6322 508.2227 344.5538 304.7724 20.2004 1 0.0154
168.5030 0.1500 1.0865 3.6577 541.3019 350.1705 311.2217 19.0637 1 0.0146
175 0.2306 2 5.1613 1.0631e+03 546.5998 474.0485 9.2885 1 0.0145
175 0.1857 1.4158 4.0521 716.6107 404.1508 368.3416 13.3300 1 0.0144
136.3303 0.1500 1 3.9838 519.2347 331.3076 283.0465 28.1599 1 0.0143
174.4113 0.1512 1.1067 3.6398 551.0508 356.2463 320.0333 17.7614 1 0.0142
174.8377 0.1511 1.0501 3.6089 526.7210 352.9048 316.2096 18.4048 1 0.0141
145.7619 0.1500 1 3.8500 512.3850 332.4684 285.9718 25.3898 1 0.0141
174.9083 0.1500 1 3.5803 504.5192 349.6613 312.0804 19.1474 1 0.0140
175 0.2393 2 5.2960 1.0853e+03 559.9760 482.9890 9.1264 1 0.0139
174.9987 0.2313 1.5566 4.6773 856.1857 468.7186 420.8659 11.1902 1 0.0137
174.9734 0.2064 1.1378 4.0238 611.7845 388.5914 358.3491 14.9399 1 0.0137
171.7164 0.1528 1.0761 3.6445 538.6585 352.6485 315.3194 18.4855 1 0.0136
175 0.1504 1.0131 3.5873 510.3848 350.6039 313.3299 18.9233 1 0.0136
175 0.2458 2 5.4006 1.1023e+03 570.2226 489.8743 9.0119 1 0.0135
175.0000 0.2028 1.1931 4.0322 634.4821 392.0288 360.3824 14.5323 1 0.0135
174.9971 0.2481 2 5.4377 1.1083e+03 573.8392 492.3111 8.9735 1 0.0133
175 0.1500 1.0286 3.5935 516.9707 351.4236 314.2841 18.7316 1 0.0132
168.5690 0.1500 1.0653 3.6481 532.3890 348.6946 309.5442 19.3321 1 0.0131
175 0.2295 1.0450 4.1928 593.8836 396.2472 370.1817 15.1377 1 0.0131

Table 3.4 consists of the optimized results for 45 HRC: the first three columns contain the
process parameters (vc, f, d), while the latter columns contain the optimized results for the
objective functions (Ra, Ft, Fa, Fr, Tf). Out of the 1000 chromosome solutions, a few of
the best chromosomes are listed in the above table.


3.5.7 Plots for NSGA II results (45 HRC)

Fig. 3.7 Rank and Pareto for Ra (45HRC) Fig. 3.8 Pareto-front for Tf (45HRC)

Fig. 3.9 Average distance between consecutive generations (45HRC)


3.6 Strength Pareto Evolutionary Algorithm (Type 2)

The population structure is initialized with fields for position, cost, fitness variables,
dominance and cumulative fitness. The objectives are then evaluated over the random
pop.position to give the initial fitness pop.cost. A fitness-subset archive for the best
individuals is initiated: first, all non-dominated population members are copied to the
archive; any dominated individuals or duplicates are removed from the archive during this
update operation. If the size of the updated archive exceeds a predefined limit, further
archive members are deleted by a clustering technique which preserves the characteristics
of the non-dominated front. Afterwards, fitness values are assigned to both archive and
population members:

Each individual i in the archive is assigned a strength value S(i) ∈ [0, 1), which at the same
time represents its fitness value F(i); S(i) is the number of population members j
that are dominated by or equal to i with respect to the objective values, divided by the
population size plus one.

The fitness F (j) of an individual j in the population is calculated by summing the


strength values S(i) of all archive members i that dominate or are equal to j, and adding
one at the end. To avoid the situation that individuals dominated by the same archive
members have identical fitness values, for each individual both dominating and
dominated solutions are taken into account. Each individual i in the archive P̄_t and the
population P_t is assigned a strength value S(i), representing the number of solutions it
dominates:

S(i) = | { j | j ∈ P_t ∪ P̄_t ∧ i ≻ j } |

On the basis of the S values, the raw fitness R(i) of an individual i is calculated:

R(i) = Σ_{j ∈ P_t ∪ P̄_t, j ≻ i} S(j)

That is, the raw fitness is determined by the strengths of its dominators in both archive
and population. Fitness is to be minimized here, i.e., R(i) = 0 corresponds to a non-
dominated individual, while a high R(i) value means that i is dominated by many
individuals.

Additional density information is incorporated to discriminate between individuals


having identical raw fitness values.

The density estimation utilizes the kth nearest neighbour method, where the density at any
point is a (decreasing) function of the distance to the kth nearest data point; here, the
inverse of the distance to the kth nearest neighbour is used as the density estimate. For each
individual i the distances (in objective space) to all individuals j in archive and population


are calculated and stored in a list. After sorting the list in increasing order, the kth element
gives the distance sought, denoted σ_i^k. The parameter k is defined as the square root of
the sample size, k = √(N + N̄), and the density is defined as

D(i) = 1 / ( σ_i^k + 2 )        (3.13)

F(i) = R(i) + D(i)               (3.14)

In the denominator, two is added to ensure that its value is greater than zero and that
D(i) < 1. Finally, adding D(i) to the raw fitness value R(i) of an individual i yields its
fitness F (i).
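
A compact Python sketch of this fitness assignment (strength, raw fitness and density, Eqs. 3.13-3.14) is given below; it reuses the dominates helper from the NSGA II sketch and treats costs as the union of archive and population, so the names are illustrative only.

import math

def spea2_fitness(costs):
    n = len(costs)
    # Strength S(i): number of solutions dominated by i
    S = [sum(dominates(costs[i], costs[j]) for j in range(n)) for i in range(n)]
    # Raw fitness R(i): sum of the strengths of the dominators of i
    R = [sum(S[j] for j in range(n) if dominates(costs[j], costs[i])) for i in range(n)]
    k = max(1, int(math.sqrt(n)))                     # k-th nearest neighbour, k = sqrt(sample size)
    F = []
    for i in range(n):
        dists = sorted(math.dist(costs[i], costs[j]) for j in range(n) if j != i)
        sigma_k = dists[min(k, len(dists)) - 1]       # distance to the k-th neighbour
        F.append(R[i] + 1.0 / (sigma_k + 2))          # D(i) = 1/(sigma_k + 2) < 1
    return F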

Fig. 3.10 SPEA2 Algorithm[71]

Table 3.5 SPEA2 Parameter Setting


Population size 1000
Generation 100
Archive size 300
Crossover probability 0.7
Crossover constant 0.1
Mutation probability 0.1
Mutation constant 0.2

3.6.1 Initialize Variables and Evaluate Objectives

Initialize {pop.position, pop.cost, pop.S, pop.R, pop.σ, pop.D, pop.F}


1. For i : [1-N]


2. Pop.position : random(range, N)
3. Pop.cost = evaluate function(pop.position)
4. //Initialize archive
5. Archive={ }

3.6.2 Tournament Selection


The tournament selection is similar to that applied in NSGA II

Binary tournament selection(archive,[archive.F],N)

6. I=random(N,2)
7. I1=I(1)
8. I2=I(2)
9. If F(i1)<F(i2)
10. P=pop[i1]
11. Else
12. P=pop[i2]

3.6.3 Genetic Operator


The genetic operators applied are similar to those of NSGA II, with a slight variation in
the mutation technique.
Crossover (p1, p2, crossover parameters)
13. Parameters : (γ, range)
14. α : random(−γ, 1+γ, N)
15. y1 = α*p1 + (1−α)*p2
    y2 = α*p2 + (1−α)*p1
16. Y1 = min[max(y1, range)]
17. Y2 = min[max(y2, range)]
18. Mutate (p3, mutation parameters, range)
19. Parameters : σ, range
20. Rmin = min(range)
21. Rmax = max(range)
22. dr = Rmax − Rmin
23. σ : σ*dr
24. y = p3 + σ*randn(N)
25. y = min(max(y, range))
26. Main learning
27. Do until (max iteration IT)
28. P = [pop, archive]
29. //Check for domination
30. [dom, p.S] = dominates(p[i])
31. S = [p.S]
32. P[i].R = sum(S[dom])
33. Q = [p.cost]
34. σ : Euclidean distance[Q]
35. σ : sort[σ]
36. p[i].σ = σ
37. p[i].σk = p[i].σ[k]
38. p[i].D = 1 / (p[i].σ[k] + 2)
39. //Fitness calculation
40. P[i].F = p[i].R + p[i].D
41. Fit = sum(find(p.R == 0))
42. P.F = [fit]
1. Archive = p[size[p.R]]
2. While [min(σ) == max(σ(k)) && k < size(σ)]
3. Pareto front = archive[archive.R == 0]
4. [p1, p2] = binary tournament selection(archive, [archive.F], N)
5. [child 1, child 2] = crossover(p1, p2, crossover parameters)
6. Popc.cost = evaluate fitness[child 1, child 2]
7. [p3] = binary tournament selection(archive, [archive.F], N)
8. [child 3] = mutate(p3, mutation parameters, N)
9. Popm.cost = evaluate fitness[child 3]
10. pop = [popc.cost popm.cost]

Table 3.6 Results of SPEA II family of best solution for AISI 4340 35 HRC Steel
Position Cost S R sigmaK D F
[259.15 0.17 1.07] [1.88 514.11 136.22 225.95 35.51] [9.00] [0.00] 0.375 [0.32] [0.32]
[265.00 0.16 1.84] [1.60 674.93 238.28 350.68 27.63] [16.00] [0.00] 0.3563 [0.32] [0.32]
[265.00 0.24 1.81] [2.57 813.37 295.99 398.32 23.38] [17.00] [0.00] 0.2863 [0.34] [0.34]
[265.00 0.20 1.96] [2.05 757.72 292.73 391.58 24.60] [9.00] [0.00] 0.305 [0.33] [0.33]
[264.60 0.15 1.94] [1.51 683.95 250.23 366.11 27.68] [6.00] [0.00] 0.4035 [0.32] [0.32]
[264.83 0.17 1.74] [1.64 661.12 224.65 335.27 28.00] [26.00] [0.00] 0.3147 [0.33] [0.33]
[264.98 0.21 1.77] [2.11 737.27 265.23 367.09 25.17] [26.00] [0.00] 0.2558 [0.34] [0.34]
[265.00 0.19 1.79] [1.89 706.15 252.72 356.92 26.15] [20.00] [0.00] 0.3483 [0.33] [0.33]
[264.23 0.21 1.92] [2.25 776.56 295.04 392.75 24.13] [11.00] [0.00] 0.2511 [0.34] [0.34]
[264.85 0.22 1.47] [2.09 689.39 226.10 330.51 26.92] [34.00] [0.00] 0.3906 [0.32] [0.32]
[265.00 0.24 1.69] [2.57 802.55 280.54 388.16 23.84] [16.00] [0.00] 0.2971 [0.34] [0.34]
[264.99 0.18 1.88] [1.81 711.36 262.25 367.37 26.09] [17.00] [0.00] 0.3296 [0.33] [0.33]
[264.72 0.25 1.93] [2.91 874.97 328.11 427.52 22.13] [6.00] [0.00] 0.2971 [0.34] [0.34]
[256.63 0.16 1.09] [1.90 505.03 129.90 221.26 36.21] [13.00] [0.00] 0.3225 [0.33] [0.33]
[264.97 0.19 1.45] [1.83 636.05 200.63 303.73 28.72] [39.00] [0.00] 0.3818 [0.32] [0.32]
[263.42 0.20 1.31] [1.91 616.67 188.86 289.03 29.75] [30.00] [0.00] 0.368 [0.32] [0.32]
[264.62 0.21 1.91] [2.16 763.63 289.27 387.95 24.44] [12.00] [0.00] 0.2511 [0.34] [0.34]
[264.57 0.23 1.41] [2.16 695.37 225.90 331.84 26.94] [30.00] [0.00] 0.3986 [0.32] [0.32]
[265.00 0.21 1.97] [2.28 789.31 306.06 402.04 23.74] [8.00] [0.00] 0.2477 [0.34] [0.34]

Table 3.6 presents the optimized solutions obtained from the SPEA2 algorithm for 35 HRC.
The position matrix represents the process parameters (vc, f, d), the cost matrix represents
the optimized solutions for the objectives (Ra, Ft, Fa, Fr, Tf), and the remaining cells
correspond to SPEA2 parameters. Out of the 300 solutions in the archive, only a few are
listed in the above table.

3.6.4 Plots for SPEA 2 results (35 HRC)


Fig. 3.11 Rank and Pareto for Ra (35HRC)    Fig. 3.12 Pareto-front for Tf (35HRC)

Fig. 3.13 Average distance between consecutive generations (35HRC)

Table 3.7 Results of SPEA II family of best solution for AISI 4340 45 HRC Steel
Position Cost S R sigmaK D F
[175.00 0.22 1.90] [4.92 997.97 515.49 452.74 9.82] [14.00] [0.00] 0.238 [0.35] [0.35]
[174.72 0.23 1.54] [4.67 849.67 466.88 419.61 11.30] [10.00] [0.00] 0.271 [0.34] [0.34]
[174.92 0.18 1.34] [3.93 674.57 389.76 355.44 14.19] [29.00] [0.00] 0.427 [0.31] [0.31]
[174.83 0.23 1.70] [4.80 920.84 489.93 435.20 10.54] [14.00] [0.00] 0.247 [0.35] [0.35]
[174.85 0.18 1.55] [4.14 773.13 418.81 380.47 12.54] [21.00] [0.00] 0.299 [0.33] [0.33]
[175.00 0.16 1.33] [3.83 656.51 380.43 346.35 14.81] [26.00] [0.00] 0.482 [0.30] [0.30]
[174.99 0.19 1.70] [4.37 852.70 448.58 404.23 11.40] [21.00] [0.00] 0.253 [0.34] [0.34]
[175.00 0.19 1.44] [4.14 737.76 413.56 376.50 12.91] [24.00] [0.00] 0.373 [0.32] [0.32]


[174.99 0.22 1.47] [4.45 793.05 442.97 401.34 11.96] [11.00] [0.00] 0.278 [0.34] [0.34]
[175.00 0.21 1.84] [4.64 934.49 483.32 430.12 10.45] [20.00] [0.00] 0.238 [0.35] [0.35]
[175.00 0.22 1.51] [4.54 819.43 453.20 409.14 11.62] [13.00] [0.00] 0.26 [0.34] [0.34]
[173.87 0.19 1.34] [4.06 693.36 401.38 366.13 13.76] [26.00] [0.00] 0.435 [0.31] [0.31]
[175.00 0.21 1.40] [4.23 738.85 419.98 382.61 12.77] [23.00] [0.00] 0.348 [0.33] [0.33]
[175.00 0.24 1.29] [4.51 735.25 436.39 400.33 12.70] [7.00] [0.00] 0.346 [0.33] [0.33]
[168.21 0.16 1.02] [3.67 519.42 348.53 310.79 19.45] [15.00] [0.00] 1.069 [0.33] [0.33]
[175.00 0.24 1.19] [4.48 688.80 426.53 395.02 13.40] [8.00] [0.00] 0.425 [0.31] [0.31]
[174.95 0.23 1.26] [4.33 698.11 419.46 385.62 13.28] [16.00] [0.00] 0.347 [0.33] [0.33]
[175.00 0.22 1.70] [4.68 901.42 478.44 426.62 10.73] [18.00] [0.00] 0.266 [0.34] [0.34]
[175.00 0.24 1.82] [5.07 1000.50 524.52 459.35 9.80] [11.00] [0.00] 0.353 [0.32] [0.32]

Table 3.7 presents the optimized solutions obtained from the SPEA2 algorithm for 45 HRC.
The position matrix represents the process parameters (vc, f, d), the cost matrix represents
the optimized solutions for the objectives (Ra, Ft, Fa, Fr, Tf), and the remaining cells
correspond to SPEA2 parameters. Out of the 300 solutions in the archive, only a few are
listed in the above table.

3.6.5 Plots for SPEA 2 results (45 HRC)

Fig. 3.14 Rank and Pareto for Ra (45HRC) Fig. 3.15 Pareto-front for Tf (45HRC)


Fig. 3.16 Average distance between consecutive generations (45HRC)

3.7 Swarm Intelligence

Swarm systems are based on the behaviour of schools and flocks of birds, insects and
fireflies: the twisting of a flock of birds, the V-shaped formation of migrating geese, winter
birds hunting for food and the synchronized flashing of fireflies are all imitated. This well-
choreographed collective behaviour, achieved without any leader, is adopted to search for
optimal solutions. For instance, ants living in a colony behave according to the goal of
colony survival rather than individual survival; while searching for food, ants initially
explore the surroundings of the nest in a random manner. A similar behaviour is observed
in flocks of birds, where a leader keeps guiding the flock to an updated food location.

3.8 Mathematical Formulation of PSO Algorithm

Assume a swarm S of N particles moving through dimension D in the search space R^D.
Let f be the objective of our optimization problem, defined over the search space as
f : R^D → R. The definition of the swarm can then be condensed to

S = (S_t)_{t∈N} = (X_t, V_t, L_t, G_t)_{t∈N} = [ (X_0, V_0, L_0, G_0), (X_1, V_1, L_1, G_1), ..., (X_n, V_n, L_n, G_n) ]


Where

X_t = ( X_t^{n,d} ),  1 ≤ n ≤ N and 1 ≤ d ≤ D

(dth coordinate of the position of particle n after time step t)

V_t = ( V_t^{n,d} ),  1 ≤ n ≤ N and 1 ≤ d ≤ D

(dth coordinate of the velocity of particle n after time step t)

L_t = ( L_t^{n,d} ),  1 ≤ n ≤ N and 1 ≤ d ≤ D

(dth coordinate of the local best (attractor) of particle n after time step t)

G_t = ( G_t^{n,d} ),  1 ≤ n ≤ N and 1 ≤ d ≤ D

(dth coordinate of the global best (attractor) of particle n after time step t)

Furthermore G_t^{n,d} = G_t^{n+1,d} if n < N, satisfying the initial and final conditions
G_t^{N,d} = G_{t+1}^{1,d}, with a given distribution for the initial position and velocity
(x0, v0). The initial global attractor G_0 is determined by the minimum argument of the
objective function:

G_0 = argmin_{1 ≤ n ≤ N} { f(X_0^n) },  with L_0 = X_0

The next swarm state S_{t+1} = (X_{t+1}, V_{t+1}, L_{t+1}, G_{t+1}) is determined by the movement equation

v_{t+1}^{n,d} = v_t^{n,d} + C1·r_t^{n,d} ( L_t^{n,d} − X_t^{n,d} ) + C2·s_t^{n,d} ( G_t^{n,d} − X_t^{n,d} )        (3.15)

where C1 and C2 control the influence of the personal best of the particle and the common
knowledge of the swarm; they are known as the acceleration coefficients. s_t^{n,d} and
r_t^{n,d} are random numbers drawn uniformly from [0, 1].


If a particle velocity component exceeds a certain interval [−Vmax, Vmax], it is set back to
that interval. The movement equation is altered by clamping inertia:

v_{t+1}^{n,d} = ω·v_t^{n,d} + C1·r_t^{n,d} ( L_t^{n,d} − X_t^{n,d} ) + C2·s_t^{n,d} ( G_t^{n,d} − X_t^{n,d} )

and the position is altered as

X_{t+1}^{n,d} = X_t^{n,d} + v_{t+1}^{n,d},    ω ∈ (0, 1)

with typical values ω = 0.72 and C1 = C2 = 1.49.
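
A one-particle Python sketch of the clamped-inertia movement equation is shown below; vector operations are written element-wise and all parameter values are placeholders.

import random

def pso_step(x, v, l_best, g_best, w=0.72, c1=1.49, c2=1.49, v_max=None):
    # One move of a single particle; x, v, l_best, g_best are equal-length lists
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (l_best[d] - x[d])
              + c2 * random.random() * (g_best[d] - x[d]))
        if v_max is not None:
            vd = max(-v_max, min(v_max, vd))          # clamp the velocity component
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v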

3.8.1 Typical Initialization strategy

1) Random initiation of position and velocity

2) Initiate velocity with a zero matrix; movement:

v^{n,d} = ω·v^{n,d} + C1·rand(0,1) ( L^{n,d} − X^{n,d} ) + C2·rand(0,1) ( G^{n,d} − X^{n,d} )

3) Update the position along with the personal and global best:

X^{n,d} = X^{n,d} + v^{n,d}

If f(X^n) < f(L^n) then L^n = X^n

If f(X^n) < f(G) then G = X^n

3.8.2 Topologies of PSO

To better represent the social learning process, the global best particle (G^d) is replaced by
a local guide particle (L^d). The topology is typically represented as a graph whose nodes
are particles and whose edges connect neighbouring particles. When a particle n selects its
own local guide from among its neighbours N(n), the guide is determined by the minimum
argument of the objective function:

P_t^n = argmin { f(L_t^{n',n}) | n' ∈ N(n) }


L_t^{n',n} is the local guide attraction of particle n at the time step t when particle n
makes its move:

L_t^{n',n} = L_t^{n'} if n' ≠ n,    and    L_t^{n',n} = L_{t−1}^{n} if n' = n

Attempts to form the neighbourhood topology could depend on the distance between
particles in the search space; in general, however, the neighbourhood topology is chosen
independent of the particle positions in the search space.

3.8.3 Definitions of Swarm topology

For a swarm, the definition of the topology and its potential drives the solution space; the
leaders of the swarm, both local and global, depend on the topology. Although the topology
remains static, the coordinates of the particle attractors monitor the distribution of the
swarm. In general, the global and local attractors are determined by fitness arguments, i.e.

G_0 = argmin_{1 ≤ n ≤ N} { f(X_0^n) },  with L_0 = X_0

L_t^n = argmin [ f(X_{t+1}^n), f(L_t^n) ]

For a given swarm S with stochastic process (X_t, V_t, L_t, G_t), the potential function
Φ_t^{n,d} in dimension d determines the swarm fitness level at time step t. It is determined
by the global best and personal best of the swarm at every interval:

Φ_t^{n,d} = Σ_{n'=1}^{n−1} ( |v_{t+1}^{n',d}| + |G_{t+1}^{n',d} − X_t^{n',d}| ) + Σ_{n'=n}^{N} ( |v_t^{n',d}| + |G_{t+1}^{n',d} − X_{t−1}^{n',d}| )        (3.16)

3.8.4 Convergence criteria

It is important to control the swarm topology for determining the desired solution space,
which depends on the movement constants applied to the swarm: for the swarm to converge,
the inertia damping coefficient keeps the swarm in bounds, while the acceleration
coefficients drive the coordinates of the local and global attractors.

3.8.5 Criteria for inertia clamping and acceleration coefficients

At topological development both the global and local attractors are bound to the same value:

X_{t+1}^{1,1} = X_0^{1,1} + Σ_{s=0}^{t} ω^s v_0^{1,1}

For consecutive iterations the local attractor follows the same partial sums:

L_t = X_t^{1,1} = X_0^{1,1} + Σ_{s=0}^{t−1} ω^s v_0^{1,1}

As t → ∞ the geometric series converges, giving

X_∞^{1,1} = X_0^{1,1} + v_0^{1,1} [ 1 / (1 − ω) ]

So for the swarm to remain in bounds, the inertia damping coefficient should satisfy
0 < ω < 1. For the acceleration coefficients, the movement equation is analyzed:

v_{t+1}^{n,d} = ω·v_t^{n,d} + C1·r_t^{n,d} ( L_t^{n,d} − X_t^{n,d} ) + C2·s_t^{n,d} ( G_t^{n,d} − X_t^{n,d} )

X_{t+1}^{n,d} = X_t^{n,d} + v_{t+1}^{n,d}

The consecutive velocities can be related as follows:

v_{t+1}^{n,d} = X_{t+1}^{n,d} − X_t^{n,d}

v_t^{n,d} = X_t^{n,d} − X_{t−1}^{n,d}

and the position can be rewritten as


X_{t+1}^{n,d} = [ 1 + ω − ( C1·r_t^{n,d} + C2·s_t^{n,d} ) ] X_t^{n,d} − ω X_{t−1}^{n,d} + C1·r_t^{n,d} L_t^{n,d} + C2·s_t^{n,d} G_t^{n,d}

Now, to calculate the error in position between the expected and attained values, the
expectancy operator is applied to the position vector (with E[r] = E[s] = 0.5):

E[X_{t+1}^{n,d}] = E[X_t^{n,d}] [ 1 + ω − 0.5 (C1 + C2) ] − ω E[X_{t−1}^{n,d}] + 0.5 [ C1·L_t^{n,d} + C2·G_t^{n,d} ]        (3.17)

Assuming the expectancy to be λ, for a particle in the swarm the calculated and expected
positions are expected to be equal; for optimum goal attainment the homogeneous part of
the recurrence must vanish, which converts the position expectancy equation to the
characteristic equation

λ² − [ 1 + ω − 0.5 (C1 + C2) ] λ + ω = 0

Finding the roots of the expectancy:

λ_{1,2} = 0.5 ( [ 1 + ω − 0.5 (C1 + C2) ] ± √( [ 1 + ω − 0.5 (C1 + C2) ]² − 4ω ) )

Now, for the expectancy to always be a positive real,

[ 1 + ω − 0.5 (C1 + C2) ]² − 4ω ≥ 0        (3.18)

and we arrive at the optimal coefficient criterion for better convergence:

0 < (C1 + C2) < 4(1 + ω)
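
The derived criterion is easy to check numerically; the small sketch below verifies it for the parameter values of Table 3.8 (w = 0.5, a1 = 1, a2 = 2).

def pso_params_stable(w, c1, c2):
    # Convergence criterion derived above: 0 < w < 1 and 0 < c1 + c2 < 4(1 + w)
    return 0 < w < 1 and 0 < c1 + c2 < 4 * (1 + w)

assert pso_params_stable(w=0.5, c1=1.0, c2=2.0)       # 3 < 6, criterion satisfied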


3.9 PSO Algorithm

Fig.3.17 PSO Algorithm[74]

Table 3.8 PSO Setting


MOPSO Definition
Stopping/convergence 100


Total particle population 1000


Max no of repository elements 500
Inertia weight ( w) 0.5
Inertia weight damping rate (w.damp) 0.99
Personal learning co efficient (a1) 1
Global learning co-efficient (a2) 2
No of grid in each dimension 7
Inflation rate (α) 0.1
Leader selection pressure (β) 2
Deletion selection pressure (γ) 2
Mutation rate (mu) 0.1

3.9.1 Initialize population and Evaluate fitness

Initialize the population structure with the fields of Particle. Evaluate the fitness over the
particle positions and determine the dominance level of each particle using the dominates function.

//Initialize particle structure

{Particle.Position, Particle.Velocity, Particle.Cost, Particle.Best Position,


Particle.Bestcost, Particle.Is Dominated, Particle.Grid Index, Particle.Grid Subindex}

//Evaluate particle.position and cost

1. For i :[1-N]
2. Pop[i].position=random[Range,N]
3. Pop[i].velocity : zeros[N]
4. Pop[i].cost :evaluate (pop[i].position)
5. //update personal best
6. Exchange pop[i].best position with pop[i].position
7. Exchange pop[i].best cost with pop[i].cost
Initiate the repository, a subset of all particles holding the best positions and costs; then
select a leader for the swarm at every iteration through the select leader function and
update the particle structure for the current leader of the swarm.

//determine domination level


Pop : domination(pop,N)

1. For i:[1-N]
2. For j:[i+1-N]
3. If dominates (pop[i], pop[j])
4. True(pop[j] Is dominated)
5. Else if dominates(pop[j],pop[i])
6. True(pop[i] Is dominated)
7. b = dominates(pop[i], pop[j])
8. b = all(x <= y) && any(x < y)
9. //Initiate repository element
10. Rep = pop[~dominated pop]

Apply the mutation operator on the updated particle structure, then calculate the
dominance level of the current structure. Create a neighbourhood for the swarm by
initiating the grid topology, and update the swarm in the repository with the current
dominance level.

3.9.2 Create Grid Index

Now the topology is built for the swarm that was initialized through the particle structure;
the topology is static and remains unchanged at every generation. A von Neumann/grid-based
topology is built which utilizes Euclidean coordinates based on the position of each particle.

//create grid index

Grid = create grid(rep, ngrid, α)

1. P=[pop.cost]
2. Rmin=min(p,[],2)
3. Rmax=max(p,[],2)
4. dr=Rmax-Rmin


5. Rmin = Rmin − α*dr && Rmax = Rmax + α*dr
6. //Initiate grid index
7. Grid.L : [] Grid.H :[]
8. Object : size[p,1]
9. For j: [1-object]
10. p.object[equal spacing in R]
11. Grid[j].L = [−∞, p.object]
12. Grid[j].H = [p.object, +∞]
13. For i :[1-size(Rep)]
14. Rep[i]=find Grid index (Rep[i],Grid)
15. Obj=size(particle.cost)
16. Grid size=size(Grid.L)
17. Particle.Grid sub Index :zeros(p.object)
18. For i : [1-obj]
19. Particle.Grid sub Index[i]=find(particle.cost[i]<grid]
20. Particle.Grid Index=N*Grid size*particle grid index+ particle grid sub index
21. //Initiate repository element
22. Rep = pop[~dominated pop]
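
A minimal Python sketch of the grid construction and cell indexing is given below; it pads each objective range by the inflation rate α as in the pseudocode, and the names are illustrative.

def make_grid(costs, n_grid=7, alpha=0.1):
    # One list of cell edges per objective; the last edge is +inf, the lower end is open
    edges = []
    for obj in range(len(costs[0])):
        lo = min(c[obj] for c in costs)
        hi = max(c[obj] for c in costs)
        pad = alpha * (hi - lo)
        lo, hi = lo - pad, hi + pad                   # inflate the objective range
        step = (hi - lo) / n_grid
        edges.append([lo + k * step for k in range(1, n_grid)] + [float('inf')])
    return edges

def grid_sub_index(cost, edges):
    # Cell coordinate per objective: first edge the cost value falls below
    return [next(k for k, e in enumerate(edge) if c <= e)
            for c, edge in zip(cost, edges)]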

3.9.3 Select Leader

After building the topology, each particle in the repository is recognized by its position and
velocity coordinates; the identity of each particle is recognized through these coordinates.
The swarm is led by the fittest particle(s). At each generation the swarm changes its
leader according to the swarm velocities and positions evaluated through the swarm
movement equation.

//select leader

Select leader(rep, β)


1. //Grid Index of all repository


2. I=[rep.Grid Index]
3. //occupied cells
4. C=select unique cells(I)
5. Q=find(I==C)
6. //selection probability
7. p = exp(−β*N)
8. P = p / Σp
9. s=select(P>random(0,1))
10. //select cell
11. Sc=unique (s)
12. Find(I==s)
13. Leader=Rep(sc)
14. Mutation(pop,pm,Range)
15. If pm < random[0,1]
16. dr = pm*(Rmax − Rmin)
17. Pop[i] = Pop[i] + dr*random(0,1)
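
Leader selection can be sketched as a roulette wheel over the occupied grid cells, with selection probability decaying exponentially in cell occupancy so that sparsely populated cells are preferred; beta plays the role of the leader selection pressure of Table 3.8, and the names are illustrative.

import math, random
from collections import Counter

def select_leader_cell(rep_cells, beta=2.0):
    # rep_cells: grid index of every repository member
    counts = Counter(rep_cells)                       # occupancy of each occupied cell
    cells = list(counts)
    weights = [math.exp(-beta * counts[c]) for c in cells]
    return random.choices(cells, weights=weights, k=1)[0]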

3.9.5 Delete extra elements

Excess particles in the repository are either deleted or replaced by fitter particles in each
generation. If the repository exceeds its size, the inertia damping factor reduces the
velocity of the swarm, resulting in poor convergence; hence at every generation the
repository is checked for its size.

//Delete extra elements

Delete Rep member(Rep, γ)

[1] Grid Index =(Rep.Grid Index)


[2] Deletion = select leader (Rep, γ)


[3] Deletion={ }

3.9.6 Swarm Movement

Now that the swarm and its topology are built, the swarm is allowed to move over the
topology according to the governing movement equation, followed by a slight mutation of
fit particles which accelerates the motion of the swarm by randomly changing the velocity
and position coordinates.

Main Learning

Do until max (IT)


For i : [1-N]
//Select Leader
1. leader=Select leader(rep, )
pop[i].Velocity = w * pop[i].Velocity
    + c1 * rand(VarSize) .* (pop[i].Best.Position − pop[i].Position)
    + c2 * rand(VarSize) .* (leader.Position − pop[i].Position)
pop[i].Position = pop[i].Position + pop[i].Velocity
pop[i].Position = max(pop[i].Position, VarMin)
pop[i].Position = min(pop[i].Position, VarMax)
pop[i].Cost = Evaluate(pop[i].Position)

2. New pop = mutate(pop, pm, Range)
3. New pop.cost = evaluate(new pop.position)
4. Determine domination(Rep)
5. If dominates(new pop.position, pop.position)
6. True(Is dominated pop.position)
7. Else if dominates(pop.position, new pop.position)
8. True(Is dominated new pop.position)
9. Grid = Create Grid(Rep, grid size, α)
10. Check if rep size > max rep
11. Rep = delete rep members (Rep, γ).

Table 3.9 MOPSO family of optimal solutions for 35 HRC AISI 4340 steel
Grid
Position Velocity Cost Best Position Best Cost Index GridSubIndex
[265.00 0.16 [1.55 702.05 266.48 379.44 [263.49 0.22 [2.01 595.25 181.77 277.57
2.00] [1.62 -0.06 1.07] 26.93] 1.01] 31.89] 10668 [2,6,6,7,3]
[212.34 0.15 [-43.26 -0.04 - [2.99 426.70 123.66 206.18 [255.60 0.19 [2.03 659.89 217.50 320.77
1.00] 0.97] 43.69] 1.59] 28.39] 33631 [6,2,2,2,7]
[193.76 0.15 [137.29 -0.02 - [3.43 417.33 134.23 215.15 [193.76 0.15 [3.43 417.33 134.23 215.15
1.00] 0.35] 46.12] 1.00] 46.12] 40282 [7,2,3,3,7]
[174.19 0.15 [-85.70 -0.01 - [3.91 413.06 148.34 227.70 [174.19 0.15 [3.91 413.06 148.34 227.70
1.00] 0.24] 49.11] 1.00] 49.11] 46844 [8,2,3,3,8]
[265.00 0.15 [1.45 652.07 221.19 340.52 [265.00 0.15 [1.45 652.07 221.19 340.52
1.82] [7.93 -0.01 0.20] 28.93] 1.82] 28.93] 9850 [2,5,5,6,4]
[174.72 0.15 [-32.71 -0.00 - [3.89 413.10 147.91 227.31 [207.43 0.15 [3.10 427.11 127.30 209.77
1.00] 0.01] 49.02] 1.01] 44.08] 46844 [8,2,3,3,8]
[215.30 0.15 [-30.50 -0.03 - [2.92 428.66 122.23 205.01 [215.30 0.15 [2.92 428.66 122.23 205.01
1.00] 1.23] 43.34] 1.00] 43.34] 27070 [5,2,2,2,7]
[211.21 0.15 [3.01 425.98 124.22 206.64 [211.21 0.15 [3.01 425.98 124.22 206.64
1.00] [-1.61 -0.04 -0.73] 43.83] 1.00] 43.83] 33631 [6,2,2,2,7]
[194.42 0.15 [-14.35 -0.05 - [3.42 417.57 133.80 214.78 [194.42 0.15 [3.42 417.57 133.80 214.78
1.00] 0.63] 46.02] 1.00] 46.02] 40282 [7,2,3,3,7]
[265.00 0.20 [1.95 694.04 238.22 342.10 [265.00 0.20 [1.95 694.04 238.22 342.10
1.66] [37.22 -0.00 0.20] 26.52] 1.66] 26.52] 17139 [3,6,5,6,3]
[171.25 0.15 [-32.00 -0.03 - [3.98 412.91 150.71 229.85 [203.25 0.18 [3.13 673.17 231.63 322.24
1.00] 0.62] 49.60] 1.60] 33.15] 46844 [8,2,3,3,8]
[265.00 0.15 [1.66 509.07 120.80 217.67 [265.00 0.15 [1.66 509.07 120.80 217.67
1.13] [32.08 -0.01 0.13] 36.15] 1.13] 36.15] 14684 [3,3,2,3,5]
[265.00 0.15 [26.00 -0.05 - [1.49 603.83 180.32 296.71 [265.00 0.15 [1.49 603.83 180.32 296.71
1.59] 0.03] 30.84] 1.59] 30.84] 9031 [2,4,4,5,4]
[249.95 0.16 [2.10 480.93 125.34 211.22 [249.95 0.16 [2.10 480.93 125.34 211.22
1.00] [8.34 -0.09 -0.99] 37.96] 1.00] 37.96] 21236 [4,3,2,2,5]
[265.00 0.16 [1.50 696.28 262.59 377.89 [265.00 0.16 [1.50 696.28 262.59 377.89
2.00] [32.06 0.00 1.15] 27.25] 2.00] 27.25] 10659 [2,6,6,6,3]
[265.00 0.16 [1.57 608.71 184.51 296.17 [265.00 0.24 [2.38 749.62 248.99 359.27
1.55] [2.84 -0.08 0.06] 30.28] 1.49] 25.43] 9031 [2,4,4,5,4]
[265.00 0.15 [1.48 610.04 185.18 302.22 [265.00 0.15 [1.49 603.83 180.32 296.71
1.62] [1.51 -0.02 0.03] 30.57] 1.59] 30.84] 9031 [2,4,4,5,4]
[265.00 0.20 [1.96 698.30 241.07 344.82 [265.00 0.20 [1.96 698.30 241.07 344.82
1.68] [10.04 0.01 0.04] 26.38] 1.68] 26.38] 17139 [3,6,5,6,3]
[265.00 0.17 [125.17 -0.00 - [1.74 530.80 141.76 233.24 [265.00 0.17 [1.74 530.80 141.76 233.24
1.10] 0.30] 34.33] 1.10] 34.33] 15494 [3,4,3,3,5]


[265.00 0.16 [1.61 708.49 270.60 381.20 [265.00 0.15 [1.49 603.83 180.32 296.71
2.00] [0.68 0.01 0.47] 26.59] 1.59] 30.84] 10668 [2,6,6,7,3]
[265.00 0.24 [28.76 -0.01 - [2.70 846.43 323.67 419.78 [265.00 0.24 [2.70 846.43 323.67 419.78
1.96] 0.04] 22.55] 1.96] 22.55] 31169 [5,7,7,8,2]
[265.00 0.15 [1.48 694.34 261.24 377.37 [265.00 0.15 [1.48 694.34 261.24 377.37
2.00] [3.61 0.00 0.67] 27.37] 2.00] 27.37] 10659 [2,6,6,6,3]
[250.88 0.17 [2.09 486.86 128.60 214.65 [250.88 0.17 [2.09 486.86 128.60 214.65
1.00] [-7.54 -0.02 -0.11] 37.52] 1.00] 37.52] 21326 [4,3,3,3,5]

Table 3.9 presents the results obtained from MOPSO for 35 HRC. The Position and Best Position
matrices represent the process parameters (vc, f, d), Cost and Best Cost represent the objective
fitness (Ra, Ft, Fa, Fr, Tf), and the latter columns correspond to the grid index and grid sub-index
of the PSO topology. Out of the 500 repository elements, a few are listed in the above table.

3.9.7 Plots for MOPSO results (35 HRC)

Fig. 3.18 Pareto spread surface roughness and cutting force 35HRC    Fig. 3.19 3D surface plot of optimal Ra with best position 35HRC


Fig. 3.20 3D surface plot of Tf with best position 35HRC    Fig. 3.21 Depth of cut influence on cutting forces 35 HRC

Table 3.10 MOPSO family of optimal solutions for 45 HRC AISI 4340 steel
Position Velocity Cost Best Position Best Cost Grid Index GridSubIndex
[131.18 0.15 [4.07 523.87 331.58 282.59 [169.76 0.15 [3.91 730.05 393.60 357.11
1.00] [-38.59 -0.02 -0.73] 29.89] 1.55] 14.41] 20510 [4,2,2,2,8]
[145.95 0.15 [3.85 512.27 332.51 286.06 [145.95 0.15 [3.85 512.27 332.51 286.06
1.00] [-29.05 -0.08 -1.19] 25.34] 1.00] 25.34] 13948 [3,2,2,2,7]
[175.00 0.24 [5.24 1075.52 554.08 479.04 [172.92 0.18 [4.53 953.91 482.02 431.27
2.00] [2.94 0.05 0.95] 9.20] 2.00] 10.62] 45011 [7,8,7,7,2]
[175.00 0.15 [3.88 720.39 390.67 356.83 [175.00 0.15 [3.88 720.39 390.67 356.83
1.52] [18.14 -0.00 0.06] 13.99] 1.52] 13.99] 15501 [3,4,3,4,3]
[175.00 0.19 [4.67 977.77 494.94 440.17 [175.00 0.19 [4.67 977.77 494.94 440.17
2.00] [2.26 0.01 0.10] 10.11] 2.00] 10.11] 31070 [5,7,6,6,2]
[157.18 0.15 [3.72 506.91 336.75 293.12 [157.18 0.15 [3.72 506.91 336.75 293.12
1.00] [-17.82 -0.04 -1.28] 22.59] 1.00] 22.59] 13947 [3,2,2,2,6]
[175.00 0.19 [4.62 969.60 489.97 436.99 [136.93 0.15 [4.01 617.69 362.52 315.19
2.00] [56.89 -0.05 0.12] 10.21] 1.26] 23.55] 31070 [5,7,6,6,2]
[175.00 0.24 [4.99 944.81 506.19 447.74 [175.00 0.24 [4.99 944.81 506.19 447.74
1.68] [4.45 -0.01 0.22] 10.29] 1.68] 10.29] 36911 [6,6,6,7,2]
[132.82 0.15 [4.04 522.32 331.42 282.65 [132.82 0.15 [4.04 522.32 331.42 282.65
1.00] [-53.49 -0.04 -0.47] 29.32] 1.00] 29.32] 20510 [4,2,2,2,8]
[149.38 0.15 [3.80 510.32 333.49 287.81 [149.38 0.15 [3.80 510.32 333.49 287.81
1.00] [-7.05 -0.03 -1.52] 24.44] 1.00] 24.44] 13947 [3,2,2,2,6]
[154.57 0.15 [3.75 507.89 335.50 291.14 [154.57 0.15 [3.75 507.89 335.50 291.14
1.00] [-20.43 -0.12 -0.45] 23.19] 1.00] 23.19] 13947 [3,2,2,2,6]
[162.55 0.15 [3.67 505.40 339.86 297.86 [166.04 0.15 [3.64 504.79 342.25 301.40
1.00] [-3.48 -0.02 -0.06] 21.45] 1.00] 20.75] 7395 [2,2,2,3,6]
[173.21 0.15 1.10] [0.62 -0.06 -0.09] [3.64 549.52 354.83 317.92 18.04] [172.59 0.20 1.19] [4.02 635.55 389.86 357.26 14.92] 8204 [2,3,3,3,5]


[175.00 0.19 [4.07 721.13 405.90 369.87 [175.00 0.19 [4.07 721.13 405.90 369.87
1.42] [4.58 0.01 0.22] 13.24] 1.42] 13.24] 22143 [4,4,4,4,3]
[175.00 0.23 [4.70 864.69 471.99 423.23 [175.00 0.23 [4.70 864.69 471.99 423.23
1.57] [1.33 -0.02 -0.43] 11.10] 1.57] 11.10] 36822 [6,6,5,6,3]
[175.00 0.23 [4.70 876.32 474.30 424.45 [175.00 0.23 [4.70 876.32 474.30 424.45
1.61] [0.30 0.01 0.40] 10.97] 1.61] 10.97] 36822 [6,6,5,6,3]
[175.00 0.20 [4.04 639.51 392.88 360.94 [175.00 0.20 [4.04 639.51 392.88 360.94
1.21] [0.75 -0.05 -0.42] 14.45] 1.21] 14.45] 14854 [3,3,4,4,4]
[175.00 0.24 [5.28 1082.79 558.47 481.98 [172.40 0.25 [4.61 740.79 441.72 405.77
2.00] [1.67 0.02 1.18] 9.14] 1.25] 13.04] 45020 [7,8,7,8,2]
[175.00 0.15 [3.63 548.62 356.03 319.83 [175.00 0.15 [3.63 548.62 356.03 319.83
1.10] [2.22 -0.01 0.10] 17.79] 1.10] 17.79] 8204 [2,3,3,3,5]
[131.18 0.15 [4.07 523.87 331.58 282.59 [169.76 0.15 [3.91 730.05 393.60 357.11
1.00] [-38.59 -0.02 -0.73] 29.89] 1.55] 14.41] 20510 [4,2,2,2,8]
[145.95 0.15 [3.85 512.27 332.51 286.06 [145.95 0.15 [3.85 512.27 332.51 286.06
1.00] [-29.05 -0.08 -1.19] 25.34] 1.00] 25.34] 13948 [3,2,2,2,7]
[175.00 0.24 [5.24 1075.52 554.08 479.04 [172.92 0.18 [4.53 953.91 482.02 431.27
2.00] [2.94 0.05 0.95] 9.20] 2.00] 10.62] 45011 [7,8,7,7,2]

Table 3.10 presents the results obtained from MOPSO for 45 HRC. The Position and Best Position
matrices represent the process parameters (vc, f, d), Cost and Best Cost represent the objective
fitness (Ra, Ft, Fa, Fr, Tf), and the latter columns correspond to the grid index and grid sub-index
of the PSO topology. Out of the 500 repository elements, a few are listed in the above table.

3.9.8 Plots for MOPSO results (45 HRC)

Fig. 3.22 Pareto spread surface roughness and cutting force 45HRC    Fig. 3.23 3D surface plot of optimal Ra with best position 45HRC


Fig. 3.24 3D surface plot of Tf with best position 45 HRC    Fig. 3.25 Depth of cut influence on cutting forces 45 HRC

3.10 Comparison between EA and SI techniques

Fig. 3.26 Solution Spectrum for 35 HRC NSGA II    Fig. 3.27 Solution Spectrum for 45 HRC NSGA II


Fig. 3.28 Solution Spectrum for 35 HRC PSO    Fig. 3.29 Solution Spectrum for 45 HRC PSO

Fig. 3.30 Solution Spectrum for 35 HRC SPEA 2    Fig. 3.31 Solution Spectrum for 45 HRC SPEA 2

3.10.1 Comparison Based on Spectrum of solution space

1. The search exploration for solutions in both EA and SI varies in the demography of
population size and density, which can be observed from the spectrum distribution
of the solution space.
2. The spectrum of the solution space reveals the demographic changes in the solution space
at each generation; hence it is crucial to compare the saturation levels in the solution
spectrum.
3. Solution Spectrum in NSGA II:
3. Solution Spectrum in NSGA II :


3.1 The solution spectrum distribution in NSGA II (as shown in Fig. (3.26-3.27)) attains
a uniform amplitude with periodic crests and troughs in all the objectives, exhibiting
quite a good saturation level in the demography of the solution space for both
hardness levels.
3.2 Solution Spectrum in SPEA 2: the demography of the solution space in SPEA 2
(as shown in Fig. (3.30-3.31)) is quite different from that of NSGA II, as the solution
space is built on niche Pareto sharing.
3.3 The spectrum is considerably disturbed, with low levels of saturation and crests and
troughs varying throughout the wavelength of the data, for both hardness levels.
4. Solution Spectrum in PSO

4.1 The trend in PSO shows a moderate level of disturbance in the spectrum of the solution
space (as shown in Fig. (3.28-3.29)), but the level of saturation is appreciable when
compared to SPEA 2, and the change in demography is not as pronounced as in SPEA 2;
in comparison with NSGA II, however, it is inferior in terms of saturation levels.

From the nature of the spectrum in the solution space, the conclusion can be condensed as follows:

NSGA II is relatively better than PSO and SPEA 2, and PSO is better than SPEA 2.

3.10.2 Comparison Based on Diversity in solution space.

1. The diversity of the solution space is evaluated through the average Pareto spread in
each generation; diversity measures the exploration potential in the search space.
2. Higher diversity across generations gives greater chances of solutions being retained
from varying locales of the search space, hence increasing the strength of the solution.

Table 3.11 Diversity of Evolutionary Algorithms


Evolutionary Algorithm Diversity
NSGA II [0.01-1]
SPEA 2 [0.27-0.37]


3 The diversity of NSGA II is superior to that of SPEA 2: the range of search exploration
between generations in NSGA II varied over a Euclidean spread of [0.01 to 1] (Fig. 3.6
and Fig. 3.9), which shows that exploration happened between two extrema, and in the
mid generations the solution spread curled towards the mean solution and then drifted
away from it.

4 In SPEA 2 the Euclidean spread lay in the short range 0.27-0.37 (as shown in Fig. 3.13
and Fig. 3.16), with equally dominating and non-dominated solutions in the spread; the
diversity oscillated over a short range at each generation, with most generations between
an average spread of [0.3-0.35].

5 The overall analysis between EA and SI suggests that NSGA II performs well in terms
of diverse solutions, while SPEA 2 and PSO perform well when the solution spread is
short: in NSGA II more diverse solutions are preserved, while in PSO and SPEA 2
neighbourhood solutions are preserved.

The further potential of both EA and SI is explored through synergism with prediction models
in Chapter 5, where EA and PSO are utilized to enhance learning in the prediction models.


CHAPTER 4

PREDICTION MODELS USING INTELLIGENT LEARNING


TECHNIQUES

4.1 Introduction

This chapter illustrates the development of prediction models using intelligent
learning techniques for the machining system. Learning algorithms and their
mathematical framework are extensively discussed and applied to the existing machining
system. In the first segment a prediction model applying a neural network is
developed for both steels; in the second segment adaptive learning techniques are
developed; and in the third section the developed models are analyzed over the machining
statistics. Further, extensive statistical analysis is carried out between the experimental data
and the prediction results to evaluate the accuracy of the developed models.
The objective of this chapter is explained through the following workflow.

Fig.4.1 Workflow for Chapter 4


4.2 Neural Network

A neural network consists of nodes connected by directed links; each link has a numeric
weight wij associated with it, which determines the strength and configuration of the
connection. A threshold (activation) function f(.) is applied to the model, which alters the
topology of the links; these connections between nodes form a layered pattern, the so-called
network architecture. Depending on the direction of propagation of weights along the
links, neural models are classified into feed-forward and feed-backward/recurrent networks.

4.2.1 Feed forward Neural network

Feed-forward networks are arranged in layers such that each unit receives input only
from the units in the preceding layer. The architecture of the feed-forward network has the
following composition:

The first layer is the input layer, the receiver of data or input from external stimuli;
the incoming data is then sent to the next layer, where the number of layers
can be more than one.
The second layer consists of the hidden layers, in which the number of nodes depends
on the complexity and non-linearity of the data to be handled, with weights
defining the connections between nodes and a bias at each node. A single hidden
layer constitutes a network activated by a threshold/activation function which
takes the augmented matrix of weights and biases from the net; this augmented net is
propagated to the subsequent layers depending on the number of hidden layers.
Data processed in the hidden layers is routed to the output layer. This layer
plays a role in determining the validity of the data, which is analyzed based on the
existing limits in the activation function.
The neural network runs training examples through the net one at a time,
adjusting the weights slightly after each example to reduce the error. Each
cycle through the examples is called an epoch.


4.2.2 Mathematical background of neural network

Having discussed the essential elements of a neural network, the working matrix
by which each layer learns is now discussed through a simple network.

Fig.4.2 Simple network

Let X be the input vector and Y the output layer vector, with a weight matrix mapping
between the input and output layers. Then the neural network is characterized by the
learning model

Y_i = Σ_{i=1}^{n} W_x X_i + b_i

where the input and output layers are defined by the vectors

X = { x1, x2, x3, ..., xl }
Y = { y1, y2, y3, ..., yl }

Now, for training and mapping between the input and output layers, the learning law
states that for the prediction accuracy to increase, the weights of the nodes should be
correlated so as to attain minimum error in the predictor, and consequently to store a prototype
(xi, yi). The weights are altered by the weight matrix update

Δw = η · y_i x_i^T

where η is the learning factor and is generally kept positive; the elements of the weight matrix w
start from zero and grow to a perfectly associative neuron weight. Inverse mapping is possible
at any instance of the learning stage by recalling the weight matrix as


W = Σ_{i=1}^{L} Y_i X_i^T

W · X_k = Y_k,    k = 1, 2, ..., l
Now that the weight matrix is defined and a one-to-one mapping between the input and
output layers is established, the next step is to minimize the mapping error in the weight
matrix by applying the gradient descent approach.
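
A tiny numerical illustration of the associative recall W·x_k = y_k is given below; the prototype pairs are made up for the example, and the inputs are chosen orthonormal so that the Hebbian sum recalls the outputs exactly.

import numpy as np

X = np.eye(3)                                   # orthonormal input prototypes x_i
Y = np.array([[1., 0.], [0., 1.], [1., 1.]])    # illustrative desired outputs y_i
W = sum(np.outer(Y[i], X[i]) for i in range(len(X)))   # W = sum_i y_i x_i^T
assert np.allclose(W @ X[0], Y[0])              # recall: W x_k = y_k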

4.2.2.1 Gradient Descent Approach

According to gradient descent, the mean square error E(w) is associated with the gradient of
the expected error; the error gradient ∇E points in the direction in which E(w) will decrease
at the fastest possible rate:

w(k+1) = w(k) − η (∇E)

where η is an arbitrary constant. Similarly, the least-mean-square error for the predicted
output and weight is calculated as

e_k[ w(k), y(k) ] = (1/2) [ | w(k)·y(k) | − w(k)·y(k) ]

To minimize this error, the error gradient should be headed in the direction where the error is
least:

∂e_k/∂w(k) = (1/2) [ y(k)·T_b( w(k)·y(k) ) − y(k) ]

From the extremum of the error gradient we get

T_b( w(k)·y(k) ) = +1 if w(k)·y(k) ≥ 0
                  = −1 if w(k)·y(k) < 0

By fixed incrementing of the weights at every epoch we get



w k 1 w k
2

y k y k Tb w k y k
Now, validating the weights against a desired output d(k), the arguments w(k) and y(k)
determine the rate at which the desired output is approached:

e_k = d(k) − w(k)·y(k)

Evaluating the least squared error for the error function:

e_k² = [ d(k) − w(k)·y(k) ]ᵀ [ d(k) − w(k)·y(k) ]

To minimize the least squared error, the expectation operator E[e_k²] is applied over the squared
gradient of the error:

∇_k = ∂e_k²/∂w = −2 e_k y(k)

E[∇_k] = −2 E[ e_k y(k) ]

E[∇_k] = 2 E[ y(k)·yᵀ(k)·w(k) − d(k)·y(k) ]

By estimating the mean of the gradient, the direction of least error propagation can be
known. The expectancy of the gradient is expected to be zero, and with a few manipulations
the weight matrix which has the least error propagation is obtained as

E[∇_k] = 0

P = E[ y(k)·yᵀ(k) ],    Q = E[ d(k)·y(k) ]

W* = P⁻¹·Q
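
The stochastic counterpart of the closed-form solution above is the least-mean-squares update, sketched below in Python; the learning rate and epoch count are illustrative.

import numpy as np

def lms_fit(X, d, eta=0.01, epochs=100):
    # X: (n_samples, n_features) inputs, d: (n_samples,) desired outputs
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x          # prediction error for this exemplar
            w += eta * e * x            # step against the error gradient (-2 e x)
    return w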


Fig.4.3 Multi-layer feed forward network structure

Now that the weight matrix for a single layer is established, an extension of the weight matrix to consecutive layers is evaluated for the above multi-layer network:

$$E(k) = \frac{1}{2} \sum_{j=1}^{m} \big[\,y_j(k) - d(k)\,\big]^2$$

$$E_T = \sum_{k=1}^{N} E(k)$$

where $E_T$ is the expectation over all the layers. Similarly, evaluating the gradients of the expectation:

$$\frac{\partial E(k)}{\partial y_j} = y_j - d_j$$

$$s_j = \sum_{i} w_{ij}\, y_i + \theta_j$$

The activation function, or transfer function, characterizes the input-output relationship:

$$y_j = f_j(s_j)$$

The most common choice of activation function is the sigmoidal function, which is continuous and differentiable everywhere:

$$y_j = \frac{1}{1 + \exp(-s_j)} = \frac{1}{1 + \exp\big[-\big(\sum_{i=1}^{n} w_{ij}\, y_i^{(1)} + \theta_j\big)\big]}$$


For the above function, the error propagation for every input to a layer can be written by the chain rule as

$$\frac{\partial E(k)}{\partial s_j} = \frac{\partial E(k)}{\partial y_j} \cdot \frac{\partial y_j}{\partial s_j}$$

After a few assumptions in the activation function and applying the above chain rule, the error propagation in unit layer $j$ can be expressed as

$$\frac{dy_j}{ds_j} = \frac{d}{ds_j}\Big(\frac{1}{1 + \exp(-s_j)}\Big) = y_j (1 - y_j)$$

$$\frac{\partial E(k)}{\partial s_j} = (y_j - d_j)\, y_j (1 - y_j)$$

where $\theta_j$ is the threshold value, generally referred to as the bias, and $y_j$ in the consecutive layers is determined by the transfer function. So at each layer the targets change with the weights, which are expected to approach the desired matrix. Thus the error gradient at each layer with respect to the weights can be written by the chain rule as

$$\frac{\partial E(k)}{\partial w_{ml}} = \frac{\partial E(k)}{\partial y_l} \cdot \frac{\partial y_l}{\partial s_l} \cdot \frac{\partial s_l}{\partial w_{ml}}$$

Summing up all the errors in all the layers by the chain rule we arrive at

$$\frac{\partial E(k)}{\partial w_{ij}} = y_i\, \frac{\partial E(k)}{\partial s_j}$$

There are two approaches to applying the gradient descent method to the training of a multi-layer feed-forward neural network. The first is based on periodic (batch) updating and the second on continuous (online) updating. In both cases the weights are repeatedly adjusted, either sequentially or randomly, until the convergence criterion is satisfied:

$$\frac{\partial E_T}{\partial w} = \Big[\frac{\partial E(k)}{\partial w_1}, \frac{\partial E(k)}{\partial w_2}, \frac{\partial E(k)}{\partial w_3}, \ldots, \frac{\partial E(k)}{\partial w_m}\Big]^T$$


with $m$ denoting the number of weights in the network. When the weights are updated only once every epoch, after all the training patterns are evaluated the weights are updated by the generalized fixed increment/decrement rule

$$w_{\text{new}} = w_{\text{old}} - \eta\, \frac{\partial E(k)}{\partial w}$$

where $\eta$ is a small constant referred to as the learning rate, and $w_{\text{new}}$ and $w_{\text{old}}$ are the weight vectors at epochs $k+1$ and $k$ respectively.
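A minimal sketch of this fixed-increment rule with periodic (once-per-epoch) updating is given below, assuming a single sigmoid layer and synthetic data. It is illustrative, not the thesis network.

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_epoch(W, X, D, eta=0.1):
    S = X @ W                        # net input s_j for every pattern
    Y = sigmoid(S)                   # layer output y_j
    delta = (Y - D) * Y * (1 - Y)    # dE/ds_j = (y_j - d_j) y_j (1 - y_j)
    grad = X.T @ delta               # dE/dw accumulated over all patterns
    return W - eta * grad            # w_new = w_old - eta * dE/dw (one update per epoch)

rng = np.random.default_rng(2)
W = rng.normal(scale=0.1, size=(3, 2))
X, D = rng.normal(size=(50, 3)), rng.random((50, 2))
for epoch in range(100):
    W = train_epoch(W, X, D)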

4.2.3 Key notes from feed-forward analysis

In order to build an accurate prediction model, sufficient input and output vectors must be included in the network.

A reasonably sufficient number of exemplars is essential for a prediction model to work accurately. There is no hard rule for selecting the number of nodes and layers; it is purely a trial-and-error mapping exercise. If the developed model is accurate enough for a given set of nodes and layers and satisfies the stopping criterion, then the network hypothesis is acceptable.

4.2.4 Multi-layer Perceptron for Turning of AISI 4340 Steel

With the above fundamentals, a prediction model for the machining system [1] is built for both steels, i.e., 35 HRC and 45 HRC, with the process parameters (cutting speed, feed and depth of cut) as input vectors and surface roughness, cutting forces and tool life as output vectors. Fig. 4.4 gives a schematic description of the developed prediction model. This model is adopted for predicting both steels.


Fig.4.4 Feed forward neural network for AISI 4340 Hard turning

Table 4.2.4 (a) Description of Neural network

Neural network type: Feed-forward neural network
Training function: Levenberg-Marquardt
No. of neurons in hidden layer: 10
No. of neurons in output layer: 5
Weights in hidden layer: 30 [3×10]
Weights in output layer: 50 [5×10]
Training samples: [700 3]
Testing samples: [150 3]
Validation samples: [150 3]
Transfer function: Tan-sigmoid
Training performance: 2.861*10^-4 (35 HRC), 7.930*10^-4 (45 HRC)
Testing performance: 4.147*10^-4 (35 HRC), 0.00106 (45 HRC)
Validation performance: 1.777*10^-4 (35 HRC), 0.00118 (45 HRC)


Table 4.2.4 (b) Calibrated weights and bias above Neural Network
Calibrated weights and bias for 35 HRC Steel

Hidden layer Definition(sij) Output layer Definition(sij)


Bias(i) W1(vc) W2(f) W3(d) bias(i) W1(Ra) W2(Ft) W3(Fa) W4(Fr) W5(Tf)
0.660 -0.135 -0.086 0.005 4.388 -3.260 -2.039 -8.270 -6.538 -4.691
-0.660 0.0024 -0.034 0.1812 -0.80 1.545 -3.425 3.573 2.873 3.957
0.661 0.002 0.148 -0.099 11.57 4.809 -0.977 1.910 -5.394 0.541
0.901 -0.710 0.535 0.138 10.89 0.429 0.058 -0.07 0.042 -0.035
-0.677 0.002 -0.180 -0.119 12.74 6.030 0.239 0.002 -3.204 3.619
0.167 -0.307 -0.020 0.295 1.690 0.597 0.981 -0.136 -0.025
-0.180 -0.441 0.334 0.086 0.489 0.539 -0.648 0.390 -0.115
-0.672 -0.134 -0.005 -0.07 -0.067 -6.242 6.290 6.749 10.459
-0.99003 -0.255 -0.0165 0.2469 1.143 2.986 4.917 -0.703 0.274
-1.376 -0.465 0.351 0.0912 0.6855 1.169 -1.4074 0.850 -0.113
Calibrated weights and bias for 45 HRC Steel
2.991 0.541 1.972 -2.656 0.746 0.176 0.0013 -0.007 -0.027 -0.017
0.245 0.276 0.134 0.323 0.635 -0.127 -1.326 0.707 1.233 -1.894
-2.418 2.11 1.853 1.524 0.974 0.200 0.0059 0.0128 0.0091 -0.010
0.210 0.180 0.099 0.428 0.620 0.860 2.276 0.0923 0.0118 0.566
0.519 -2.88 -1.639 -0.557 0.261 0.676 -0.0322 -0.0670 -0.039 -0.100
0.615 2.144 1.797 1.122 0.728 -0.0492 -0.088 -0.0488 -0.106
-1.914 -0.230 -0.261 -2.013 0.198 -0.063 0.0148 0.0102 0.072
0.493 0.646 -0.628 0.776 -0.298 0.0085 -0.029 -0.188 -0.062
6.36 3.857359 1.379878 2.528731 -0.051 -0.0027 -0.004 -0.011 -0.08
0.733 0.268918 -0.332 -0.4 -1.295 -1.324 -1.956 -1.324 -0.01

4.2.5 Results of perceptron for 35 HRC Steel

Fig.4.5 Performance plot of Network    Fig.4.6 Training state of Network at each epoch


Fig.4.7 Training error in Ra Fig.4.8 Regression fit plot for Ra

Fig.4.9 Training error in Ft Fig.4.10 Regression fit plot for Ft


Fig.4.11 Training error in Fa    Fig.4.12 Regression fit plot for Fa

Fig.4.13 Training error in Fr


Fig.4.14 Regression fit plot for Fr

Fig.4.15 Training error in Tf


Fig.4.16 Regression fit plot for Tf


4.2.6 Results of perceptron for 45 HRC Steel

Fig.4.17 Performance plot of Network    Fig.4.18 Training state of Network at each epoch

Fig.4.19 Training error in Ra Fig.4.20 Regression fit plot for Ra

Fig.4.21 Training error in Ft Fig.4.22 Regression fit plot for Ft


Fig.4.23 Training error in Fa    Fig.4.24 Regression fit plot for Fa

Fig.4.25 Training error in Fr Fig.4.26 Regression fit plot for Fr

Fig.4.27 Training error in Tf Fig.4.28 Regression fit plot for Tf

The network hypotheses for both steels were quite accurate, with the mean square error approaching the order of 10^-3. The error gradients and mean error gradients in the learning stage were found to converge to the minimum criterion. Error analysis for each target output was evaluated at the learning stages, i.e., training, testing and validation, and the targets were fit to a linear regression model with the regression coefficient approaching 1.

4.3 Adaptive Neuro-Fuzzy Inference System (ANFIS)

A neuro-fuzzy inference system utilizes the human ability of recognizing patterns in modelling information, either numeric or linguistic, by employing fuzzy membership functions and fuzzy if-then rules combined with a neural network architecture. ANFIS shares the methodology of fuzzy sets and neural networks for building a learning model by interpreting the fuzzy system in terms of neural nodes, with a slight modification: the fuzzy rule base is replaced by weights instead of linguistic rules.

There are two ways of implementing fuzzy neural systems. In the first method the fuzzy rules are modified with no change in the input and output membership functions. In the second method, fuzzy neural systems with learning algorithms such as back-propagation or hybrid learning are applied to learn and adjust the membership function parameters.

Different combinations of fuzzy neural systems are possible with varying input and output membership functions. The applied adaptive neuro-fuzzy inference system is explained through a two-input, single-output model utilizing a Sugeno-type fuzzy system (also known as a Takagi-Sugeno-Kang fuzzy system), where the rule base is replaced by neural network weights and the output memberships are defined by linear functions instead of a fuzzy linguistic model.


Fig.4.29 ANFIS two input model

Layer 1: Every node in this layer is defined by a membership function; each node gives membership values after evaluating the inputs over the membership function. The applied membership function can be linear or exponential, with each input divided into the desired subsets of the membership function. In most cases a Gaussian membership function is applied:

$$\mu_{A,i}(x) = \exp\Big[-\Big(\frac{x - c_i}{2 a_i}\Big)^2\Big]$$

$$O_{1,i} = \mu_{A,i}(x), \qquad i = 1, 2, \ldots, k$$

$$O_{1,i} = \mu_{B,i}(y), \qquad i = 1, 2, \ldots, k$$

Layer 2: In this layer the node is fixed and takes the fuzzified value as input from layer 1. The output of this node is the result of fuzzy multiplication of the membership functions, which goes into the next node. Each node represents the firing strength of a rule. The T-norm operator with the AND operation is applied to obtain the output. This layer is known as the antecedent layer:

$$O_{2,i} = w_i = \mu_{A,i}(x) \times \mu_{B,i}(y), \qquad i = 1, 2, \ldots, k$$


Layer 3: In this layer the normalized weight of each firing strength with respect to the cumulative firing strength is calculated. The resulting weights are called normalized firing strengths:

$$O_{3,i} = \bar{w}_i = \frac{w_i}{\sum_i w_i}$$

Layer 4: Every node in this layer is an adaptive node, known as the consequent layer, with node function

$$O_{4,i} = \bar{w}_i f_i = \bar{w}_i\, (p_i x + q_i y + r_i)$$

Layer 5: A single fixed node that calculates the overall output from the consequent layer:

$$O_5 = \sum_i \bar{w}_i f_i$$
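The five layers above can be traced end-to-end in a short sketch. The following code assumes a two-rule, two-input Sugeno model with placeholder premise and consequent parameters (not the calibrated values reported later) and evaluates layers 1-5 in order.

import numpy as np

def gauss_mf(v, c, a):
    # layer 1: Gaussian membership value mu(v) = exp[-((v - c) / (2a))^2]
    return np.exp(-((v - c) / (2 * a)) ** 2)

def anfis_forward(x, y, prem, cons):
    # layers 1-2: firing strength of each rule, w_i = mu_A,i(x) * mu_B,i(y)
    w = np.array([gauss_mf(x, cx, ax) * gauss_mf(y, cy, ay)
                  for (cx, ax, cy, ay) in prem])
    w_bar = w / w.sum()                                       # layer 3: normalization
    f = np.array([p * x + q * y + r for (p, q, r) in cons])   # layer 4: rule outputs
    return np.dot(w_bar, f)                                   # layer 5: overall output

prem = [(0.0, 1.0, 0.0, 1.0), (1.0, 0.5, 1.0, 0.5)]  # assumed (c_x, a_x, c_y, a_y) per rule
cons = [(0.2, 0.4, 0.1), (-0.3, 0.9, 0.5)]           # assumed (p, q, r) per rule
print(anfis_forward(0.5, -0.2, prem, cons))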

4.3.1 Hybrid learning in ANFIS

The adaptive layers, i.e., the first and fourth layers, contain parameters which can be modified at every iteration. The antecedent and consequent parameters can be updated through the learning method. There are two learning paths, forward and backward. In the forward path the recursive least squares method is used to alter the consequent parameters, while in the backward path the antecedent parameters are changed through the gradient descent method at each iteration (also called an epoch).

Forward learning: In forward learning the consequent parameters are adjusted:

$$f = \bar{w}_1 f_1 + \bar{w}_2 f_2 = \bar{w}_1 (p_1 x + q_1 y + r_1) + \bar{w}_2 (p_2 x + q_2 y + r_2) = (\bar{w}_1 x) p_1 + (\bar{w}_1 y) q_1 + \bar{w}_1 r_1 + (\bar{w}_2 x) p_2 + (\bar{w}_2 y) q_2 + \bar{w}_2 r_2$$

When N training data are given as input vectors, the consequent function changes to


$$(\bar{w}_1 x)_1 p_1 + (\bar{w}_1 y)_1 q_1 + (\bar{w}_1 r_1)_1 + (\bar{w}_2 x)_1 p_2 + (\bar{w}_2 y)_1 q_2 + (\bar{w}_2 r_2)_1 = f_1$$
$$\vdots$$
$$(\bar{w}_1 x)_n p_1 + (\bar{w}_1 y)_n q_1 + (\bar{w}_1 r_1)_n + (\bar{w}_2 x)_n p_2 + (\bar{w}_2 y)_n q_2 + (\bar{w}_2 r_2)_n = f_n$$

The above equations are simplified and expressed in matrix form:

$$A\,\theta = y$$

where $\theta$ is the M×1 vector of consequent parameters (M being their number), A is the P×M matrix assembled from the P training data presented to the adaptive network, and y is the P×1 output vector whose elements are the corresponding network outputs. The optimum solution for $\theta$ is defined as

$$\theta^* = (A^T A)^{-1} A^T y$$

where $(A^T A)^{-1} A^T$ is the pseudo-inverse of A provided $A^T A$ is non-singular. Using the recursive LSE method,

$$\theta_{i+1} = \theta_i + P_{i+1}\, a_{i+1}\, \big(y_{i+1}^T - a_{i+1}^T \theta_i\big)$$

where $a_i$ is a row vector of the matrix A and $P_i$, sometimes called the covariance matrix, is defined by

$$P_i = (A^T A)^{-1}$$
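A hedged sketch of this forward pass is shown below: with the normalized firing strengths held fixed, the consequent parameters are recovered from A·θ = y by the pseudo-inverse. The firing strengths and data are synthetic stand-ins, not thesis values.

import numpy as np

rng = np.random.default_rng(3)
N = 200
x, y_in = rng.uniform(size=N), rng.uniform(size=N)
w1 = rng.uniform(0.2, 0.8, size=N)
w2 = 1.0 - w1                                 # normalized firing strengths per datum

# A theta = y with theta = [p1, q1, r1, p2, q2, r2]
A = np.column_stack([w1 * x, w1 * y_in, w1, w2 * x, w2 * y_in, w2])
theta_true = np.array([0.5, -1.0, 0.3, 2.0, 0.7, -0.2])
target = A @ theta_true

theta, *_ = np.linalg.lstsq(A, target, rcond=None)   # pseudo-inverse solution
print(np.allclose(theta, theta_true))                # True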

4.3.2 Back-propagation Learning

The Gaussian membership parameters are trained to minimize the error. For a given adaptive network consisting of five layers with a total of N(L) nodes in layer L, the squared error in layer L for training datum n ($1 \le n \le N$) can be written as

$$E_n = \sum_{k=1}^{N(L)} \big(d_k - X_{k,n}^L\big)^2$$

where $d_k$ is the k-th component of the desired output vector and $X_{k,n}^L$ is the k-th component of the actual output vector generated by the adaptive network for input vector n. The main aim of the adaptive learning system is to reduce the error that occurs, i.e.,

$$\epsilon_{L,i} = \frac{\partial E_n}{\partial X_{i,n}^L} = -2\,\big(d_{i,n} - X_{i,n}^L\big)$$

Applying the chain rule to consecutive layers for the error propagation we get

$$\frac{\partial E_n}{\partial X_{i,n}^{l}} = \sum_{m=1}^{N(l+1)} \frac{\partial E_n}{\partial X_{m,n}^{l+1}} \cdot \frac{\partial X_{m,n}^{l+1}}{\partial X_{i,n}^{l}}$$

with $0 \le l \le L-1$; the internal node error is a cumulative node error over layer l+1. For a specific node in an adaptive layer, the error rate corresponding to a parameter $\alpha$ is given as

$$\frac{\partial E_n}{\partial \alpha} = \sum_{x \in S} \frac{\partial E_n}{\partial x} \cdot \frac{\partial x}{\partial \alpha}$$

where S is the set of nodes containing the parameter $\alpha$. The total error specific to this parameter is

$$\frac{\partial E}{\partial \alpha} = \sum_{n=1}^{N} \frac{\partial E_n}{\partial \alpha}$$
4.3.3 Fuzzy Clustering Algorithms

Fuzzy clustering algorithms are utilized to discretize the membership function into subsets of the input vectors, so that each input vector is subdivided into topologies defined by the densities of points in the respective region. Building a fuzzy set requires the following key points: selection of the input-output vectors, choice of the specific type of fuzzy inference system, the number of membership functions and their subsets, and generation of the antecedent and consequent rules.

a) Choosing an appropriate family of parametric membership functions: To illustrate fuzzy clustering, a fuzzy set with the following definition is declared. Let u(t), y(t), x(t) denote the input, output and state of a system S at time t:


$$x(t+1) = f(x(t), u(t)), \qquad y(t+1) = g(x(t), u(t))$$

with the maps f and g defined as

$$f : X \times U \to X, \qquad g : X \times U \to Y$$

b) Clustering approaches: A clustering approach can be applied to estimate the data distribution, the resulting clusters producing the membership functions.

c) Clustering in membership functions: In any clustering technique the goal is to estimate the parameter vector $\theta$ that characterizes the best clusters for the input vectors X. The parameter vector is sensitive to the shape of the clusters. To define the topology of the clusters, a set of m points $m_i$ in the l-dimensional space is required, each corresponding to a cluster:

$$\theta = [m_1^T, m_2^T, \ldots, m_m^T]^T \quad \text{or} \quad \theta = (c_1^T, r_1, c_2^T, r_2, \ldots, c_m^T, r_m)$$

d) Definition of a cluster: Let X be the data set

$$X = \{x_1, x_2, \ldots, x_N\}$$

for which an m-clustering is defined as a partition of X into m sets such that the following three conditions are met:

$$1.\; C_i \neq \emptyset, \quad i = 1, \ldots, m$$

$$2.\; \bigcup_{i=1}^{m} C_i = X$$

$$3.\; C_i \cap C_j = \emptyset, \quad i \neq j, \;\; i, j = 1, \ldots, m$$

The alternative definition in terms of fuzzy sets is characterized by m membership functions $u_j$, where


$$u_j : X \to [0,1], \qquad j = 1, \ldots, m$$

and

$$\sum_{j=1}^{m} u_j(x_i) = 1, \quad i = 1, 2, \ldots, N, \qquad 0 < \sum_{i=1}^{N} u_j(x_i) < N, \quad j = 1, 2, \ldots, m$$

These are called membership functions. The value of a fuzzy membership function is a mathematical characterization of clusters that are not precisely defined, and each vector x may belong to more than one cluster simultaneously.


e) Proximity measures: A proximity measure quantifies the similarity or dissimilarity between two clusters and within clusters, with no bias toward any selected cluster: each cluster should contribute equally, with no domination among them. The proximity measure has two property functions, which measure dissimilarity and similarity between two vectors. The dissimilarity measure is a function

$$d : X \times X \to \mathbb{R}$$

for which there exists $d_0 \in \mathbb{R}$ such that $d_0 \le d(x_i, x_j) < +\infty$ for all $x_i, x_j \in X$.

f) Clustering algorithm: Having adopted a proximity measure, a clustering criterion is applied to choose the specific algorithmic scheme that forms the clustering structure. Most fuzzy clustering algorithms are derived by minimizing a function of the form


$$J_q(\theta, u) = \sum_{i=1}^{N} \sum_{j=1}^{m} u_{ij}^q \, d(x_i, \theta_j)$$

where $J_q(\theta, u)$ is the clustering criterion, $u_{ij}^q$ is the membership of input vector $x_i$ in cluster j, and $d(x_i, \theta_j)$ is the proximity measure between the vector and the cluster. Now that fuzzy clustering is defined, the clusters are applied to the adaptive layer of the neuro-fuzzy inference model.

4.3.4 Grid Partition Clustering based Adaptive Neuro-Fuzzy Inference System

Grid partitioning is a morphological clustering technique in which the membership functions are subdivided into grid elements. Depending on the fuzzy rules R, a fuzzy partitioning is done on the input membership functions with the cluster definition

$$J_q(\theta, u) = \sum_{i=1}^{N} \sum_{j=1}^{m} u_{ij}^q \, d(x_i, \theta_j)$$
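As a small check of the grid construction used here, the snippet below enumerates the rule base implied by the structure table that follows: with 3 inputs (vc, f, d) and 5 membership functions per input, grid partitioning generates 5^3 = 125 rules.

from itertools import product

# One rule per combination of membership-function indices across the inputs.
n_inputs, n_mfs = 3, 5
rules = list(product(range(n_mfs), repeat=n_inputs))
print(len(rules))  # 125, matching the fuzzy structure table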

Fig.4.30 Applied ANFIS grid partitioning architecture


Table 4.3.4 (a) Fuzzy Structure

ANFIS type: Grid partitioning
Fuzzy system: Sugeno
AND method: Prod
OR method: Max
Defuzzification method: Weighted average
Implication method: Prod
Aggregation method: Max
Membership functions per input: Five
Fuzzy rules: 125
Input MF type: Gaussmf
Output MF type: Linear
Max epochs: 200
Error goal: 0
Initial step: 0.01
Step-size decrease rate: 0.9
Step-size increase rate: 1.1

Table 4.3.4 (b) Statistical Results of ANFIS Grid Partitioning Cluster for 35 HRC and 45 HRC
(columns per steel: Error mean | Error STD | MSE | RMSE)

Surface roughness (Ra)
Train Ra (μm):      35 HRC: 7.74*10^-4 | 6.41*10^-4 | 4.11*10^-7 | 6.41*10^-4    45 HRC: 5.49*10^-7 | 6.185*10^-4 | 3.82*10^-7 | 6.181*10^-4
Test Ra (μm):       35 HRC: 1.74*10^-4 | 0.0020 | 3.99*10^-6 | 0.0020            45 HRC: 1.09*10^-4 | 0.0019 | 3.44*10^-6 | 0.0019
Validation Ra (μm): 35 HRC: 2.63*10^-5 | 9.74*10^-4 | 9.84*10^-7 | 9.74*10^-4    45 HRC: 1.56*10^-5 | 9.17*10^-4 | 8.41*10^-7 | 9.17*10^-4

Tangential force (Ft)
Train Ft (N):      35 HRC: 4.49*10^-5 | 0.1352 | 0.0183 | 0.1351    45 HRC: 5.69*10^-5 | 0.0788 | 0.0062 | 0.0787
Test Ft (N):       35 HRC: -0.0501 | 0.5101 | 0.2610 | 0.5109       45 HRC: 7.612*10^-4 | 0.3501 | 0.128 | 0.3148
Validation Ft (N): 35 HRC: -0.0023 | 0.714 | 0.41 | 0.57            45 HRC: 1.62*10^-4 | 0.153 | 0.0235 | 0.153

Axial force (Fa)
Train Fa (N):      35 HRC: 1.54*10^-5 | 0.0847 | 0.0072 | 0.0847    45 HRC: 3.76*10^-5 | 0.0675 | 0.0045 | 0.0674
Test Fa (N):       35 HRC: 0.0203 | 0.3453 | 0.118 | 0.3448         45 HRC: 0.0055 | 0.1399 | 0.0195 | 0.1396
Validation Fa (N): 35 HRC: 0.0031 | 0.1547 | 0.0239 | 0.1547        45 HRC: 8.58*10^-4 | 0.0824 | 0.0068 | 0.0824

Radial force (Fr)
Train Fr (N):      35 HRC: 2.4*10^-5 | 0.082 | 0.0068 | 0.0823      45 HRC: 3.59*10^-5 | 0.0654 | 0.00043 | 0.0654
Test Fr (N):       35 HRC: 0.0567 | 0.3802 | 0.1468 | 0.3831        45 HRC: -0.0207 | 0.2173 | 0.0473 | 0.0654
Validation Fr (N): 35 HRC: 0.0085 | 0.1665 | 0.0278 | 0.1666        45 HRC: -0.0031 | 0.1036 | 0.0107 | 0.1036

Tool life (Tf)
Train Tf (min):      35 HRC: 7.56*10^-6 | 0.0120 | 1.43*10^-4 | 0.0120     45 HRC: 4.44*10^-6 | 0.0074 | 5.409*10^-5 | 0.0074
Test Tf (min):       35 HRC: -6.69*10^-4 | 0.0271 | 7.28*10^-4 | 0.0270    45 HRC: 0.0014 | 0.0235 | 5.487*10^-4 | 0.0234
Validation Tf (min): 35 HRC: -9.34*10^-4 | 0.0152 | 2.313*10^-4 | 0.015    45 HRC: 2.206*10^-4 | 0.011 | 1.282*10^-4 | 0.0113

4.3.5 ANFIS Grid Partitioning Cluster Plots For Ra 35 HRC

Fig.4.31 Training Error Plots for Ra (Target vs Fig.4.32 Testing Error Plots for Ra (Target
vs Output)

Fig.4.33 Validation Error Plots for Ra (Target vs Fig.4.34 Regression Plots for Ra (Train
Output) /Test/Validate)


Fig.4.35 Response Surface Plot for Ra

4.3.6 ANFIS Grid Partitioning Cluster Plots For Ft 35 HRC

Fig.4.36 Training Error Plots for Ft (Target vs Fig.4.37 Testing Error Plots for Ft (Target vs
Output) Output)


Fig.4.38 Validation Error Plots for Ft (Target vs Fig.4.39 Regression Plots for Ft (Train

Output) /Test/Validate)

Fig.4.40 Response Surface Plot for Ft


4.3.7 ANFIS Grid Partitioning Cluster Plots For Fa 35 HRC

Fig.4.41 Training Error Plots for Fa (Target vs Fig.4.42 Testing Error Plots for Fa (Target vs
Output) Output)


Fig.4.43 Validation Error Plots for Fa (Target vs Fig.4.44 Regression Plots for Fa (Train
Output) /Test/Validate)

Fig.4.45 Response Surface Plot for Fa


4.3.8 ANFIS Grid Partitioning Cluster Plots For Fr 35 HRC

Fig.4.46 Training Error Plots for Fr (Target vs Output) Fig.4.47 Testing Error Plots for Fr (Target vs Output)


Fig.4.48 Validation Error Plots for Fr (Target vs Output)    Fig.4.49 Regression Plots for Fr (Train/Test/Validate)

Fig.4.50 Response Surface Plot for Fr


4.3.9 ANFIS Grid Partitioning Cluster Plots For Tf 35 HRC

Fig.4.51 Training Error Plots for Tf (Target vs Fig.4.52 Testing Error Plots for Tf (Target
Output) vs Output)

Fig.4.53 Validation Error Plots for Tf (Target vs Fig.4.54 Regression Plots for Tf (Train
Output) /Test/Validate)

Fig.4.55 Response Surface Plot for Tf


4.3.10 ANFIS Grid Partitioning Cluster Plots For Ra 45 HRC

Fig.4.56 Training Error Plots for Ra (Target Fig.4.57 Testing Error Plots for Ra (Target
vs Output) vs Output)

Fig.4.58 Validation Error Plots for Ra Fig.4.59 Regression Plots for Ra (Train
(Target vs Output) /Test/Validate)

Fig.4.60 Response Surface Plot for Ra


4.3.11 ANFIS Grid Partitioning Cluster Plots For Ft 45 HRC

Fig.4.61 Training Error Plots for Ft (Target vs Fig.4.62 Testing Error Plots for Ft
Output) (Target vs Output)

Fig.4.63 Validation Error Plots for Ft (Target vs Fig.4.64 Regression Plots for Ft (Train
Output /Test/Validate)

Fig.4.65 Response Surface Plot for Ft


4.3.12 ANFIS Grid Partitioning Cluster Plots For Fa 45 HRC

Fig.4.66 Training Error Plots for Fa (Target Fig.4.67 Testing Error Plots for Fa
vs Output) (Target vs Output)

Fig.4.68 Validation Error Plots for Fa (Target Fig.4.69 Regression Plots for Fa (Train
vs Output /Test/Validate)

Fig.4.70 Response Surface Plot for Fa


4.3.13 ANFIS Grid Partitioning Cluster Plots For Fr 45 HRC

Fig.4.71 Training Error Plots for Fr (Target Fig.4.72 Testing Error Plots for Fr (Target
vs Output) vs Output)

Fig.4.73 Validation Error Plots for Fr (Target vs Output)    Fig.4.74 Regression Plots for Fr (Train/Test/Validate)

Fig.4.75 Response Surface Plot for Fr


4.3.14 ANFIS Grid Partitioning Cluster Plots For Tf 45 HRC

Fig.4.76 Training Error Plots for Tf (Target vs Fig.4.77 Testing Error Plots for Tf
Output) (Target vs Output)

Fig.4.78 Validation Error Plots for Tf (Target Fig.4.79 Regression Plots for Tf (Train

vs Output) /Test/Validate)

Fig.4.80 Response Surface Plot for Tf


4.3.15 Subtractive Clustering

In subtractive clustering, each point of the membership function is a candidate cluster centre, and a point with high influence on a cluster centre is taken into that cluster. Recalling the cluster function

$$J_q(\theta, u) = \sum_{i=1}^{N} \sum_{j=1}^{m} u_{ij}^q \, d(x_i, \theta_j)$$

for subtractive clustering the dissimilarity function $d(x_i, \theta_j)$ is defined by an exponential function:

$$d_i = \sum_{j=1}^{N} \exp\Big[-\frac{\lVert x_i - x_j \rVert^2}{(r_a/2)^2}\Big]$$

where $r_a/2$ is the neighbourhood radius and $x_i$, $x_j$ are membership function points. If a point has many other points surrounding it, it is a high-density point.

The highest-density point is taken as the first cluster centre $x_c$, and in consecutive iterations the density measure of each point is updated by subtracting the influence of the chosen cluster centre:

$$d_i \leftarrow d_i - d_c\, \exp\Big[-\frac{\lVert x_i - x_c \rVert^2}{(r_b/2)^2}\Big]$$

This continues until the points in the membership functions are exhausted. After the dissimilarity at each point is calculated, the first cluster centre is identified as the point having the highest density. All points within the defined radius around the first cluster centre are then eliminated; for the next iteration the density is updated and the cluster function applied again.
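The density-and-subtract procedure described above can be sketched as follows; the radii, the squash factor r_b = 1.5·r_a, and the test data are illustrative assumptions rather than the thesis settings.

import numpy as np

def subtractive_clusters(X, ra=0.5, n_clusters=3):
    rb = 1.5 * ra                                          # assumed squash radius
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    density = np.exp(-d2 / (ra / 2) ** 2).sum(axis=1)      # initial density of each point
    centres = []
    for _ in range(n_clusters):
        c = np.argmax(density)                             # highest-density point
        centres.append(X[c])
        # subtract the chosen centre's influence from every point's density
        density -= density[c] * np.exp(-((X - X[c]) ** 2).sum(-1) / (rb / 2) ** 2)
    return np.array(centres)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.1, size=(30, 2)) for m in (0.0, 1.0, 2.0)])
print(subtractive_clusters(X))   # roughly one centre per synthetic blob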


Fig.4.81 Developed ANFIS (Subtractive Cluster) for machining system

Table 4.3.15 (a) Fuzzy Structure

ANFIS type: Subtractive clustering
Fuzzy system: Sugeno
AND method: Prod
OR method: Probor
Defuzzification method: Weighted average
Implication method: Prod
Aggregation method: Max
Cluster radius: 0.5
No. of clusters: 12
Max epochs: 100
Error goal: 0
Initial step: 0.01
Step-size decrease rate: 0.9
Step-size increase rate: 1.1


Table 4.3.15 (b) Statistical Error analysis of ANFIS Subtractive Clustering for 35 HRC and 45 HRC
(columns per steel: Error mean | Error STD | MSE | RMSE)

Surface roughness (Ra)
Train Ra (μm):      35 HRC: 1.0458*10^-7 | 0.0044 | 1.922*10^-5 | 0.0044    45 HRC: 6.64*10^-6 | 4.32*10^-4 | 2.63*10^-7 | 5.12*10^-4
Test Ra (μm):       35 HRC: -2.82*10^-5 | 0.0052 | 2.86*10^-5 | 0.0052      45 HRC: 1.165*10^-5 | 0.0039 | 3.48*10^-5 | 5.89*10^-3
Validation Ra (μm): 35 HRC: -4.121*10^-6 | 0.0045 | 2.036*10^-5 | 0.0045    45 HRC: 5.58*10^-5 | 0.0035 | 1.25*10^-5 | 0.0035

Tangential force (Ft)
Train Ft (N):      35 HRC: -4.28*10^-6 | 0.5928 | 0.3510 | 0.5924    45 HRC: -1.29*10^-6 | 0.4917 | 0.2415 | 0.4914
Test Ft (N):       35 HRC: -0.0135 | 0.6604 | 0.433 | 0.658          45 HRC: 0.0128 | 0.6335 | 0.3988 | 0.6315
Validation Ft (N): 35 HRC: -0.0020 | 0.6031 | 0.3633 | 0.6028        45 HRC: 0.0019 | 0.515 | 0.2651 | 0.5149

Axial force (Fa)
Train Fa (N):      35 HRC: 2.98*10^-6 | 0.3191 | 0.1017 | 0.318      45 HRC: 3.621*10^-6 | 0.2806 | 0.07806 | 0.2805
Test Fa (N):       35 HRC: -0.0233 | 0.4351 | 0.1886 | 0.434         45 HRC: 0.0414 | 0.4176 | 0.1749 | 0.4183
Validation Fa (N): 35 HRC: -0.35 | 0.3389 | 0.1147 | 0.3387          45 HRC: 0.0062 | 0.305 | 0.0931 | 0.3051

Radial force (Fr)
Train Fr (N):      35 HRC: -3.86*10^-6 | 0.3432 | 0.1176 | 0.3430    45 HRC: 4.85*10^-6 | 0.3020 | 0.0911 | 0.3013
Test Fr (N):       35 HRC: -0.0484 | 0.380 | 0.1458 | 0.3818         45 HRC: -0.0051 | 0.352 | 0.1233 | 0.3512
Validation Fr (N): 35 HRC: -0.0073 | 0.3849 | 0.121 | 0.349          45 HRC: -7.62*10^-4 | 0.3099 | 0.096 | 0.3098

Tool life (Tf)
Train Tf (min):      35 HRC: 1.49*10^-6 | 0.0325 | 0.011 | 0.0325      45 HRC: 7.78*10^-4 | 0.0249 | 6.19*10^-4 | 0.0249
Test Tf (min):       35 HRC: 0.0048 | 0.0358 | 0.0013 | 0.0360         45 HRC: 0.0018 | 0.0283 | 7.96*10^-4 | 0.0282
Validation Tf (min): 35 HRC: 7.24*10^-4 | 0.0311 | 0.0011 | 0.0311     45 HRC: 2.776*10^-4 | 0.0254 | 6.45*10^-4 | 0.0254


4.3.16 ANFIS Subtractive clustering Plots For Ra 35 HRC

Fig.4.82 Training Error Plots for Ra (Target Fig.4.83 Testing Error Plots for Ra (Target
vs Output) vs Output)

Fig.4.84 Validation Error Plots for Ra Fig.4.85 Regression Plots for Ra (Train
(Target vs Output) /Test/Validate)

Fig.4.86 Response Surface Plot for Ra


4.3.17 ANFIS Subtractive clustering Plots For Ft 35 HRC

Fig.4.87 Training Error Plots for Ft (Target vs Fig.4.88 Testing Error Plots for Ft
Output) (Target vs Output)

Fig.4.89 Validation Error Plots for Ft (Target vs Fig.4.90 Regression Plots for Ft
Output) (Train /Test/Validate

Fig.4.91 Response Surface Plot for Ft


4.3.18 ANFIS Subtractive clustering Plots For Fa 35 HRC

Fig.4.92 Training Error Plots for Fa (Target vs Fig.4.93 Testing Error Plots for Fa
Output) (Target vs Output)

Fig.4.94 Validation Error Plots for Fa (Target Fig.4.95 Regression Plots for Fa
vs Output) (Train /Test/Validate)

Fig.4.96 Response Surface Plot for Fa


4.3.19 ANFIS Subtractive clustering Plots For Fr 35 HRC

Fig.4.97 Training Error Plots for Fr (Target Fig.4.98 Testing Error Plots for Fr
vs Output) (Target vs Output)

Fig.4.99 Validation Error Plots for Fr (Target vs Output)    Fig.4.100 Regression Plots for Fr (Train/Test/Validate)

Fig.4.101 Response Surface Plot for Fr


4.3.20 ANFIS Subtractive clustering Plots For Tf 35 HRC

Fig.4.102 Training Error Plots for Tf (Target Fig.4.103 Testing Error Plots for Tf
vs Output) (Target vs Output)

Fig.4.104 Validation Error Plots for Tf (Target Fig.4.105 Regression Plots for Tf (Train
vs Output) /Test/Validate)

Fig.4.106 Response Surface Plot for Tf


4.3.21 ANFIS Subtractive clustering Plots For Ra 45 HRC

Fig.4.107 Training Error Plots for Ra Fig.4.108 Testing Error Plots for Ra (Target
(Target vs Output) vs Output)

Fig.4.109 Validation Error Plots for Ra Fig.4.110 Regression Plots for Ra (Train
(Target vs Output) /Test/Validate)

Fig.4.111 Response Surface Plot for Ra


4.3.22 ANFIS Subtractive clustering Plots For Ft 45 HRC

Fig.4.112 Training Error Plots for Ft Fig.4.113 Testing Error Plots for Ft
(Target vs Output) (Target vs Output)

Fig.4.114 Validation Error Plots for Ft Fig.4.115 Regression Plots for Ft (Train
(Target vs Output) /Test/Validate)

Fig.4.116 Response Surface Plot for Ft


4.3.23 ANFIS Subtractive clustering Plots For Fa 45 HRC

Fig.4.117 Training Error Plots for Fa (Target Fig.4.118 Testing Error Plots for Fa (Target vs
vs Output) Output)

Fig.4.119 Validation Error Plots for Fa (Target Fig.4.120 Regression Plots for Fa (Train
vs Output) /Test/Validate)

Fig.4.121 Response Surface Plot for Fa


4.3.24 ANFIS Subtractive clustering Plots For Fr 45 HRC

Fig.4.122 Training Error Plots for Fr (Target vs Fig. 4.123 Testing Error Plots for Fr (Target
Output) vs Output)

Fig.4.124 Validation Error Plots for Fr (Target Fig.4.125 Regression Plots for Fr (Train
vs Output) /Test/Validate)

Fig.4.126 Response Surface Plot for Fr


4.3.25 ANFIS Subtractive clustering Plots For Tf 45 HRC

Fig.4.127 Training Error Plots for Tf Fig.4.128 Testing Error Plots for Tf
(Target vs Output) (Target vs Output)

Fig.4.129 Validation Error Plots for Tf (Target vs Output)    Fig.4.130 Regression Plots for Tf (Train/Test/Validate)

Fig.4.131 Response Surface Plot for Tf


4.3.26 Fuzzy C Mean Clustering

Fuzzy C-Mean clustering is another, circularly invariant clustering technique in which the radii of the clusters are calculated from the membership functions. Recall the cluster function

$$J_q(\theta, u) = \sum_{i=1}^{N} \sum_{j=1}^{m} u_{ij}^q \, d(x_i, \theta_j)$$

Compute the cluster means:

$$\theta_i = \frac{\sum_{j=1}^{M} u_{ij}^m \, x_j}{\sum_{j=1}^{M} u_{ij}^m}$$

Compute the dissimilarity function:

$$d(x_i, \theta_j) = \lVert x_i - \theta_j \rVert$$

Update the membership partition matrix $u_{ij}$ by

$$u_{ij} = \Bigg[\sum_{k=1}^{M} \Big(\frac{d_{ij}}{d_{ik}}\Big)^{2/(m-1)}\Bigg]^{-1}$$

Evaluate the cluster function $J_q(\theta, u)$ and repeat until the clustering criterion is reached.
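A compact sketch of this iteration is given below, using the partition-matrix exponent m = 2 from the structure table; the initialization and data are illustrative assumptions.

import numpy as np

def fcm(X, n_clusters=3, m=2.0, iters=100, tol=1e-5):
    rng = np.random.default_rng(5)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))    # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]     # theta_i = sum u^m x / sum u^m
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=-1) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:                  # improvement-level stop
            U = U_new
            break
        U = U_new
    return centres, U

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(mu, 0.1, size=(40, 2)) for mu in (0.0, 1.0, 2.0)])
centres, U = fcm(X)
print(centres)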

Table 4.3.26 (a) Fuzzy Structure

ANFIS type: Fuzzy C-Mean
Fuzzy system: Sugeno
AND method: Prod
OR method: Probor
Defuzzification method: Weighted average
Implication method: Prod
Aggregation method: Max
No. of clusters: 15
Partition matrix exponent: 2
Maximum iterations: 200
Improvement level: 1*10^-5
Max epochs: 200
Error goal: 0
Initial step: 0.01
Step-size decrease rate: 0.9
Step-size increase rate: 1.1


Fig.4.132 Developed ANFIS Fuzzy C-Mean Clustering architecture


Table 4.3.26 (b) Statistical Results of ANFIS FCM for 35 HRC and 45 HRC
(columns per steel: Error mean | Error STD | MSE | RMSE)

Surface roughness (Ra)
Train Ra (μm):      35 HRC: 7.014*10^-8 | 0.0046 | 2.077*10^-5 | 0.0046    45 HRC: 5.58*10^-8 | 0.0033 | 1.076*10^-5 | 0.0033
Test Ra (μm):       35 HRC: 1.165*10^-4 | 0.0053 | 2.78*10^-5 | 0.0053     45 HRC: -0.0032 | 0.0457 | 0.0021 | 0.0456
Validation Ra (μm): 35 HRC: 1.753*10^-5 | 0.0047 | 2.182*10^-5 | 0.0047    45 HRC: 2.64*10^-5 | 8.89*10^-4 | 7.34*10^-7 | 8.56*10^-4

Tangential force (Ft)
Train Ft (N):      35 HRC: -1.02*10^-5 | 1.2296 | 1.5101 | 1.228     45 HRC: -2.93*10^-6 | 1.2581 | 1.581 | 1.2574
Test Ft (N):       35 HRC: -0.0224 | 1.897 | 3.577 | 1.893           45 HRC: -0.1034 | 1.4454 | 2.085 | 1.44
Validation Ft (N): 35 HRC: -0.0034 | 1.3498 | 1.8202 | 1.3491        45 HRC: -0.0155 | 1.287 | 1.656 | 1.287

Axial force (Fa)
Train Fa (N):      35 HRC: -3.10*10^-6 | 0.7534 | 0.5669 | 0.753     45 HRC: 6.36*10^-6 | 0.466 | 0.2172 | 0.4661
Test Fa (N):       35 HRC: 0.0835 | 0.9753 | 0.953 | 0.976           45 HRC: -0.0626 | 0.59206 | 0.3500 | 0.5916
Validation Fa (N): 35 HRC: 0.0125 | 0.7908 | 0.625 | 0.7905          45 HRC: -0.0094 | 0.4871 | 0.2371 | 0.487

Radial force (Fr)
Train Fr (N):      35 HRC: -2.149*10^-6 | 0.863 | 0.744 | 0.862      45 HRC: -3.36*10^-7 | 2.12*10^-4 | 0.0041 | 0.064
Test Fr (N):       35 HRC: 0.1296 | 0.935 | 0.886 | 0.941            45 HRC: -0.0112 | 0.559 | 0.3132 | 0.559
Validation Fr (N): 35 HRC: 0.0194 | 0.875 | 0.765 | 0.875            45 HRC: -0.0017 | 0.559 | 0.3132 | 0.5597

Tool life (Tf)
Train Tf (min):      35 HRC: 2.648*10^-6 | 0.052 | 0.0027 | 0.0524     45 HRC: 1.347*10^-6 | 0.0460 | 0.0021 | 0.0469
Test Tf (min):       35 HRC: 0.0058 | 0.072 | 0.0053 | 0.0729          45 HRC: -0.0032 | 0.0457 | 0.0021 | 0.0456
Validation Tf (min): 35 HRC: 8.8668*10^-4 | 0.056 | 0.0031 | 0.0560    45 HRC: -4.80*10^-4 | 0.0459 | 0.0021 | 0.0459


4.3.27 ANFIS Fuzzy C-Mean Clustering Plots For Ra 35 HRC

Fig. 4.133 Training Error Plots for Ra Fig.4.134 Testing Error Plots for Ra
(Target vs Output) (Target vs Output)

Fig.4.135 Validation Error Plots for Ra Fig.4.136 Regression Plots for Ra (Train
(Target vs Output) /Test/Validate

Fig.4.137 Response Surface Plot for Ra


4.3.28 ANFIS Fuzzy C-Mean Clustering Plots For Ft 35 HRC

Fig.4.138 Training Error Plots for Ft (Target Fig.4.139 Testing Error Plots for Ft
vs Output) (Target vs Output)

Fig.4.140 Validation Error Plots for Ft Fig.4.141 Regression Plots for Ft

(Target vs Output) (Train /Test/Validate

Fig.4.142 Response Surface Plot for Ft


4.3.29 ANFIS Fuzzy C-Mean Clustering Plots For Fa 35 HRC

Fig.4.143 Training Error Plots for Fa Fig.4.144 Testing Error Plots for Fa
(Target vs Output) (Target vs Output)

Fig.4.145 Validation Error Plots for Fa (Target vs Output)    Fig.4.146 Regression Plots for Fa (Train/Test/Validate)

Fig.4.147 Response Surface Plot for Fa


4.3.30 ANFIS Fuzzy C-Mean Clustering Plots For Fr 35 HRC

Fig.4.148 Training Error Plots for Fr Fig.4.149 Testing Error Plots for Fr
(Target vs Output) (Target vs Output)

Fig.4.150 Validation Error Plots for Fr Fig.4.151 Regression Plots for Fr


(Target vs Output) (Train /Test/Validate

Fig.4.152 Response Surface Plot for Fr


4.3.31 ANFIS Fuzzy C-Mean Clustering Plots For Tf 35 HRC

Fig.4.153 Training Error Plots for Tf Fig.4.154 Testing Error Plots for Tf
(Target vs Output) (Target vs Output)

Fig.4.155 Validation Error Plots for Tf Fig.4.156 Regression Plots for Tf


(Target vs Output) (Train /Test/Validate)

Fig.4.157 Response Surface Plot for Tf


4.3.32 ANFIS Fuzzy C-Mean Clustering Plots For Ra 45 HRC

Fig.4.158 Training Error Plots for Ra Fig.4.159 Testing Error Plots for Ra
(Target vs Output) (Target vs Output)

Fig.4.160 Validation Error Plots for Ra Fig.4.161 Regression Plots for Ra (Train
(Target vs Output) /Test/Validate

Fig.4.162 Response Surface Plot for Ra


4.3.33 ANFIS Fuzzy C-Mean Clustering Plots For Ft 45 HRC

Fig.4.163 Training Error Plots for Ft Fig.4.164 Testing Error Plots for Ft
(Target vs Output) (Target vs Output)

Fig.4.165 Validation Error Plots for Ft Fig.4.166 Regression Plots for Ft (Train
(Target vs Output) /Test/Validate

Fig.4.167 Response Surface Plot for Ft


4.3.34 ANFIS Fuzzy C-Mean Clustering Plots For Fa 45 HRC

Fig.4.168 Training Error Plots for Fa (Target Fig.4.169 Testing Error Plots for Fa (Target vs
vs Output) Output)

Fig.4.170 Validation Error Plots for Fa (Target vs Output)    Fig.4.171 Regression Plots for Fa (Train/Test/Validate)

Fig.4.172 Response Surface Plot for Fa


4.3.35 ANFIS Fuzzy C-Mean Clustering Plots For Fr 45 HRC

Fig.4.173 Training Error Plots for Fr Fig.4.174 Testing Error Plots for Fr
(Target vs Output) (Target vs Output)

Fig.4.175 Validation Error Plots for Fr Fig.4.176 Regression Plots for Fr (Train
(Target vs Output) /Test/Validate)

Fig.4.177 Response Surface Plot for Fr


4.3.36 ANFIS Fuzzy C-Mean Clustering Plots For Tf 45 HRC

Fig.4.178 Training Error Plots for Tf Fig.4.179 Testing Error Plots for Tf
(Target vs Output) (Target vs Output)

Fig.4.180 Validation Error Plots for Tf Fig.4.181 Regression Plots for Tf


(Target vs Output) (Train /Test/Validate)

Fig.4.182 Response Surface Plot for Tf


4.4 Comparison of Prediction Results with Experimental Statistics

4.4.1 Statistical Comparison of Neural Network and ANFIS Prediction Results with Experimental Statistics for AISI 4340 Steel (35 HRC and 45 HRC)
(columns per steel: Error mean | Error STD | MSE | RMSE)

NN vs Experimental
Ra:     35 HRC: 0.0442 | 1.0814 | 1.1130 | 1.0550      45 HRC: -0.1700 | 0.2769 | 0.1018 | 0.3190
Ft (N): 35 HRC: 0.1866 | 126.4 | 594.81 | 24.38        45 HRC: -57.23 | 128.3 | 1.8929*10^4 | 137.5
Fa (N): 35 HRC: 103.98 | 33.77 | 1.897*10^4 | 109.07   45 HRC: 54.67 | 33.57 | 4.0607*10^3 | 63.72
Fr (N): 35 HRC: 1.1318 | 93.95 | 93.95 | 9.69          45 HRC: 0.415 | 14.0720 | 188.29 | 13.72

ANFIS grid partitioning vs Experimental
Ra:     35 HRC: -0.2308 | 0.5964 | 0.3912 | 0.6255     45 HRC: -0.9278 | 0.5964 | 1.198 | 1.0949
Ft (N): 35 HRC: 17.49 | 106.9 | 1.116*10^4 | 105.66    45 HRC: -22.35 | 77.58 | 6.128*10^3 | 78.85
Fa (N): 35 HRC: 157 | 80 | 3.09*10^4 | 176             45 HRC: -7.5709 | 40.08 | 1.5841*10^3 | 39.8
Fr (N): 35 HRC: 32.57 | 41.29 | 2.6*10^3 | 51.78       45 HRC: -7.69 | 38.35 | 1.457*10^3 | 38.17

ANFIS Subtractive vs Experimental
Ra:     35 HRC: -0.226 | 0.5964 | 0.3875 | 0.6225      45 HRC: -0.947 | 0.594 | 1.23 | 1.109
Ft (N): 35 HRC: 15.29 | 105.36 | 1.07*10^4 | 103       45 HRC: -23.106 | 76.64 | 6.1143*10^3 | 78.19
Fa (N): 35 HRC: 157.79 | 82.3 | 3.133*10^4 | 177.016   45 HRC: -6.9912 | 39.58 | 1.537*10^3 | 39.213
Fr (N): 35 HRC: 27.89 | 40.81 | 2.3604*10^3 | 48.58    45 HRC: -8.525 | 36.57 | 1.343*10^3 | 36.65

ANFIS FCM vs Experimental
Ra:     35 HRC: -0.2339 | 0.5964 | 0.392 | 0.6266      45 HRC: -0.856 | 0.596 | 1.0714 | 1.0351
Ft (N): 35 HRC: 18.19 | 104.9 | 1.08*10^4 | 103.9      45 HRC: -21.7 | 78.8 | 6.38*10^3 | 79.8
Fa (N): 35 HRC: 157 | 79.42 | 3.073*10^4 | 175.32      45 HRC: -6.8003 | 38.45 | 6.36*10^-6 | 0.4664
Fr (N): 35 HRC: 32.23 | 42.20 | 2.73*10^3 | 52.25      45 HRC: -6.63 | 36.89 | 1.33*10^3 | 36.56

4.4.2 Error Plots of Neural Network Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.4.183 Error Estimation Plots for Ra    Fig.4.184 Error Estimation Plots for Ft


Fig.4.185 Error Estimation Plots for Fa    Fig.4.186 Error Estimation Plots for Fr

4.4.3 Error Plots of Neural Network Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.4.187 Error Estimation Plots for Ra    Fig.4.188 Error Estimation Plots for Ft

Fig.4.189 Error Estimation Plots for Fa    Fig.4.190 Error Estimation Plots for Fr


4.4.4 Error Plots of ANFIS (Grid Partitioning Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.4.191 Error Estimation Plots for Ra    Fig.4.192 Error Estimation Plots for Ft

Fig.4.193 Error Estimation Plots for Fa    Fig.4.194 Error Estimation Plots for Fr

4.4.5 Error Plots of ANFIS (Grid Partitioning Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.4.195 Error Estimation Plots for Ra    Fig.4.196 Error Estimation Plots for Ft


Fig.4.197 Error Estimation Plots for Fa    Fig.4.198 Error Estimation Plots for Fr

4.4.6 Error Plots of ANFIS (Subtractive Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.4.199 Error Estimation Plots for Ra    Fig.4.200 Error Estimation Plots for Ft

Fig.4.201 Error Estimation Plots for Fa    Fig.4.202 Error Estimation Plots for Fr


4.4.7 Error Plots of ANFIS (Subtractive Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.4.203 Error Estimation Plots for Ra    Fig.4.204 Error Estimation Plots for Ft

Fig.4.205 Error Estimation Plots for Fa    Fig.4.206 Error Estimation Plots for Fr

4.4.8 Error Plots of ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.4.207 Error Estimation Plots for Ra    Fig.4.208 Error Estimation Plots for Ft


Fig.4.209 Error Estimation Plots for Fa    Fig.4.210 Error Estimation Plots for Fr

4.4.9 Error Plots of ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.4.211 Error Estimation Plots for Ra    Fig.4.212 Error Estimation Plots for Ft

Fig.4.213 Error Estimation Plots for Fa    Fig.4.214 Error Estimation Plots for Fr


4.5 Conclusion

From the comparison plots and tables, the mean error and RMS error of the neural network were found to be lower compared to the ANFIS models.

Though the errors in the neural network were lower, its prediction curve for surface roughness showed a poor match with the experimental curve, while in ANFIS the curve match for surface roughness was better in comparison to the neural network.

With these results, the learning techniques were further improved through synergies, as attempted in the next chapter.


CHAPTER 5

HYBRIDIZATION OF C.I SYNERGIES

5.1 Introduction

In the previous chapters, optimization and prediction techniques were applied exclusively, and the results obtained from them were found to be quite convincing when compared with the experimental statistics. In this chapter, hybridization of these techniques is applied to the current machining problem with the objective of improving the abilities of the techniques through mutual assistance, beyond the prediction and optimization ability of the exclusive techniques. An exposition of the adapted synergies is illustrated in brief: in the first section the combinations of Neuro-Evolutionary and Neuro-Swarm techniques are implemented, in the second section the combinations of Evolutionary-Neuro-fuzzy and Swarm-Neuro-fuzzy are exercised, and in the third segment a comparison is made between the predicted results obtained from the synergies and the experimental statistics. The objective of this chapter is depicted through the flow chart.

Fig.5.1 Chapter flow chart


5.2 EA-NN Synergism

Various types of EA and NN synergies are possible, which can be broadly classified into three combinations: supportive, collaborative and amalgamated.

An EA uses a population covering the entire solution space of the optimization problem; in contrast, the NN uses these optimized results as exemplars for training, and the learning then converges depending on the learning parameters and the topology of the NN. The performance of both the EA and the NN can be improved by accelerating the convergence if an appropriate population of data sets and strategic learning parameters are applied.

In a supportive combination, EAs and NNs are used sequentially, where one is the primary problem solver and the other is secondary.

In a collaborative combination they are used simultaneously, where both EAs and NNs solve the problem together, while in an amalgamated combination the EA acts as the search technique and the NN as the pattern model.

Collaborative combination: In collaborative learning, both EAs and NNs are used simultaneously, using the result of one to prepare the data set for the other. In other words, one technique plays the primary role of solving the problem and the other is supportive.

Finding an appropriate topology of an NN for a given problem is a trial-and-error task. Synergies between EAs and NNs assist in determining the optimal network architecture, which is then used to evaluate the neural network.

A typical procedure for an EA-NN synergy can be outlined as below (a schematic sketch follows these steps):

1. Create an initial population of individuals, feed it to the evolutionary algorithm, and generate an optimal population, which becomes the data set for the neural network architecture.


2. Set up the training data for the neural network as received from the EA results, permuting the train and target sets and dividing them for training, testing and validation.
3. Apply the learning criteria for the network and evaluate the weights and biases for the targets. Test the targets against the expected outputs.
4. Evaluate the training errors and the fitness of the network for the current learning, and transfer the weights and biases along with the targets back to the EA. Repeat steps 2-4 until convergence or the maximum generation is reached.
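A schematic, self-contained sketch of this loop is given below. The fitness function, operators and network size are placeholders (not the NSGA-II or network configuration tabulated next); it only shows the hand-off from the evolved population to the network's training set.

import numpy as np

rng = np.random.default_rng(6)

def fitness(p):
    # stand-in objective over three "process parameters" in [0, 1]
    return -((p - 0.5) ** 2).sum(axis=-1)

# Step 1: evolve a population toward good parameter sets (toy mutation-only EA).
pop = rng.uniform(size=(200, 3))
for gen in range(50):
    children = np.clip(pop + rng.normal(scale=0.05, size=pop.shape), 0.0, 1.0)
    both = np.vstack([pop, children])
    pop = both[np.argsort(fitness(both))[-200:]]    # keep the 200 fittest

# Step 2: the evolved population becomes the exemplar set for a small network.
X, t = pop, fitness(pop)[:, None]
W1 = rng.normal(scale=0.5, size=(3, 10))
W2 = rng.normal(scale=0.5, size=(10, 1))

# Steps 3-4: train by back-propagation and iterate until converged.
for epoch in range(500):
    H = np.tanh(X @ W1)
    err = H @ W2 - t
    g2 = H.T @ err / len(X)
    g1 = X.T @ ((err @ W2.T) * (1.0 - H ** 2)) / len(X)
    W1 -= 0.5 * g1
    W2 -= 0.5 * g2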

5.2.1 NSGA combined Neural Network

Fig.5.2 Developed NSGA-NN architecture


Table 5.2.1 (a) Description of NSGA-NN

Population size: 1000
Generations: 100
Crossover probability: 0.8
Crossover constant: 0.1
Mutation probability: 0.1
Mutation constant: 0.2
Neural network type: Feed-forward neural network
Training function: Levenberg-Marquardt
No. of neurons in hidden layer: 10
No. of neurons in output layer: 5
Weights in hidden layer: 30 [3×10]
Weights in output layer: 50 [5×10]
Training samples: [700 3]
Testing samples: [150 3]
Validation samples: [150 3]
Transfer function: Tan-sigmoid
Training performance: 2.861*10^-4 (35 HRC), 1.861*10^-4 (45 HRC)
Testing performance: 4.147*10^-4 (35 HRC), 5.847*10^-4 (45 HRC)
Validation performance: 1.777*10^-4 (35 HRC), 8.777*10^-4 (45 HRC)

Table 5.2.1 (b) Calibrated weights and bias for the NSGA-NN architecture

Calibrated weights and bias for 35 HRC Steel
Hidden layer definition (sij): Bias(i) | W1(vc) | W2(f) | W3(d)    Output layer definition (sij): bias(i) | W1(Ra) | W2(Ft) | W3(Fa) | W4(Fr) | W5(Tf)
-0.771 0.242 -0.284 0.444 3.444 -2.004 0.010 0.068 -0.154 -0.138
-0.410 -0.040 -0.039 -0.248 1.881 0.511 -1.764 0.516 -4.511 1.559
-0.734 0.065 0.155 -0.224 1.784 -1.875 2.474 1.788 1.7166 0.607
-0.279 -0.208 0.206 0.0684 3.254 3.329 2.564 0.611 -0.616 0.637
-1.215 -0.633 0.895 0.274 3.678 0.772 0.0842 -0.040 0.0041 -0.136
0.746 0.164 0.173 -0.102 0.5492 0.377 -2.206 -0.811 -0.746
0.895 -0.051 -0.037 -0.306 -3.277 -0.713 -3.158 -2.335 -0.383
0.870 -0.0149 0.227 0.050 -1.800 -1.466 3.607 -1.017 -2.811
0.164 -0.16 0.2133 1.23 2.071 1.67 -2.687 0.373
0.55
-1.672 -0.23 -0.21 -1.002 1.107 -0.036 0.1082 -0.077 0.5352

Calibrated weights and bias for 45 HRC Steel


-0.810 -0.856 -2.308 1.099 2.240 -0.856 -2.308 1.099 1.1200 0.528
0.874 -0.651 -2.535 -0.896 0.017 -0.651 -2.535 -0.896 -1.368 -0.039
4
-1.434 0.0990 0.0157 -0.020 2.414 0.099 0.0157 -0.0209 -0.0158 -0.0248
0.731 -0.103 0.478 -0.793 3.064 -0.1033 0.4782 -0.793 1.200 -3.515
-0.135 1.867 1.823 2.756 3.370 1.867 1.823 2.7567 2.046 0.589
2.358 0.005 -0.001 0.002 0.0051 -0.001 0.0002 0.0021 0.0335
-0.733 -0.013 -0.773 0.496 -0.013 -0.773 0.496 1.115 0.933


-6.767 -0.0140 0.024 -0.020 -0.014 0.0242 -0.0207 -0.009 0.275


1.321
0.261 -0.444 -0.415 -1.545 -0.692 -1.112 -0.749 0.111
0.118
-0.109 0.164 0.697 -0.938 0.5015 1.499 -0.848 -0.583

5.2.2 Results of NSGA-NN 35 HRC Steel

Fig.5.3 Performance plot of Network    Fig.5.4 Training state of Network at each epoch

Fig.5.5 Training error in Ra Fig.5.6 Regression fit plot for Ra

Fig.5.7 Training error in Ft Fig.5.8 Regression fit plot for Ft


Fig.5.10 Training error in Fa Fig.5.11 Regression fit plot for Fa

Fig.5.12 Training error in Fr Fig.5.13 Regression fit plot for Fr

Fig.5.14 Training error in Tf Fig.5.15 Regression fit plot for Tf


5.2.3 Results of NSGA-NN for 45 HRC Steel

Fig.5.16 Performance plot of Network
Fig.5.17 Training state of Network at each epoch

Fig.5.18 Training error in Ra Fig.5.19 Regression fit plot for Ra

Fig.5.20 Training error in Ft Fig.5.21 Regression fit plot for Ft


Fig.5.22 Training error in Fa Fig.5.23 Regression fit plot for Fa

Fig.5.24 Training error in Fr


Fig.5.25 Regression fit plot for Fr

Fig.5.26 Training error in Tf Fig.5.27 Regression fit plot for Tf

5.3 SI-NN synergism

Similar to the EA-NN synergies, SI-NN has three classes of synergies, which differ in the
degree of coupling and interdependency in working towards a solution. The collaborative
combination strategy is analogous to the EA-NN strategy. The figure below illustrates the
collaborative combination of SI and NN.

Fig.5.28. SI-NN collaborative combination
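A compact sketch of the same idea with a swarm in place of a genetic population is given below; the coefficients mirror the description table of section 5.3.1 (inertia 0.5, damping 0.99, a1 = 1, a2 = 2), while the function name and the cost callback are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)

def pso_train(cost, dim, n_particles=50, iters=200,
              w=0.5, w_damp=0.99, a1=1.0, a2=2.0):
    """PSO over a flat NN weight vector; `cost` maps a vector to the training MSE."""
    x = rng.normal(0.0, 1.0, (n_particles, dim))   # positions = candidate weight vectors
    v = np.zeros_like(x)                           # pseudo velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()        # global best weight vector
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + a1 * r1 * (pbest - x) + a2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost                    # update personal bests
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)].copy()    # update the swarm leader
        w *= w_damp                                # damp the inertia each iteration
    return g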

5.3.1 PSO combined Neural Network

Fig.5.29 Applied SI-NN synergy architecture


Table 5.3.1 (a) Description of PSO-NN


Total particle population 1000
Max. no. of repository elements 500
Inertia weight (w) 0.5
Inertia weight damping rate (w.damp) 0.99
Personal learning coefficient (a1) 1
Global learning coefficient (a2) 2
No. of grids in each dimension 7
Inflation rate 0.1
Leader selection pressure 2
Deletion selection pressure 2
Mutation rate (mu) 0.1
Neural network type Feed-forward neural network
Training function Levenberg-Marquardt
No. of neurons in hidden layer 10
No. of neurons in output layer 5
Weights in hidden layer 30 [3 × 10]
Weights in output layer 50 [5 × 10]
Training samples [700 × 3]
Testing samples [150 × 3]
Validation samples [150 × 3]
Transfer function Tan-sigmoid function
Training performance 1.5224e-05
Testing performance 2.089e-05
Validation performance 2.03e-05

Table 5.3.1 (b) Calibrated weights and biases for SI-NN (35 HRC)


Hidden layer definition (sij)   Output layer definition (sij)
Bias(i) W1(vc) W2(f) W3(d)   bias(i) W1(Ra) W2(Ft) W3(Fa) W4(Fr) W5(Tf)
-0.904 1.656 1.671 -0.155 1.100 -0.027 -0.001 -0.003 0.0011 0.004
-3.013 2.294 -1.05 0.092 2.257 -0.121 -0.006 0.006 -0.005 -0.03
0.829 -0.131 0.344 0.001 1.922 -0.200 -0.453 0.908 -0.629 -1.07
-3.295 1.724 1.051 0.0081 0.426 0.111 0.061 0.057 0.049 -0.01
-1.695 0.330 0.163 -0.869 0.227 -0.292 0.017 0.243 0.167 0.2031
-0.369 -0.369 0.143 0.166 3.723 2.250 0.860 -1.183 0.1944
2.0164 -2.643 3.987 -0.614 0.001 0.0006 -0.0001 0.00007 0.0015
-1.248 2.022 -0.81 3.412 0.001 -0.001 -0.003 -0.0002 0.0024
-0.629 0.403 0.346 -0.012 0.44 0.556 0.616 0.436 0.122
-0.705 3.124 -1.47 1.981 -0.018 -0.002 0.0009 0.0004 -0.001
-0.633 -0.304 -0.02 -0.185 0.833 -2.606 1.609 0.649 2.208
0.3125 0.390 -0.23 0.075 0.375 -0.425 1.281 -1.364 0.150
0.896 2.234 -1.43 1.056 0.087 0.010 -0.005 -0.008 0.007
-0.362 -0.010 -0.09 -0.21 0.540 -0.184 -1.76 -3.427 0.285
1.834 -0.298 -0.63 -1.286 -0.166 0.0019 -0.0005 -0.0023 -0.002
-4.050 -4.123 2.584 1.262 -1.35 0.284 0.0736 -0.914 -2.361
-0.983 -0.091 -0.05 0.440 -0.911 0.0689 1.9151 1.440 -0.087
1.326 2.797 -1.62 -0.057 0.107 0.0142 -0.0097 -0.029 -0.024
1.975 -0.015 0.364 0.611 -1.441 -0.074 0.12064 0.341 -0.802
3.971 1.211 -1.51 -0.028 0.4813 -1.663 -0.3358 -0.469 0.126

5.3.2 Results of PSO-NN for 35 HRC Steel

Fig.5.30 Performance plot of Network
Fig.5.31 Training state of Network at each epoch

Fig.5.32 Training error in Ra Fig.5.33 Regression fit plot for Ra

Fig.5.34 Training error in Ft Fig.5.35 Regression fit plot for Ft


Fig.5.36 Training error in Fa Fig.5.37 Regression fit plot for Fa

Fig.5.38 Training error in Fr Fig.5.39 Regression fit plot for Fr

Fig.5.40 Training error in Tf Fig.5.41 Regression fit plot for Tf

5.4 Synergies of EA and ANFIS

The synergism between EA and ANFIS is strongly coupled: the EA technique is applied to
the membership functions, which optimizes the fitness value of the fuzzy output. The flow
chart below outlines the applied synergism strategy. Membership functions are clustered
through Fuzzy C-means clustering, and the fuzzy structure utilized is the same as
previously applied (refer to chapter 4, table 4.3.27 (a)). A detailed description is given in
the appendix.
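To make the coupling concrete, the sketch below shows what the EA chromosome encodes under the assumption of Gaussian membership functions and a zero-order Sugeno structure; the names sugeno_predict and mf_fitness and the rule count of 5 are illustrative, not the exact FCM structure of table 4.3.27 (a):

import numpy as np

def sugeno_predict(params, X, n_rules=5, n_in=3):
    """Zero-order Sugeno FIS with Gaussian MFs.
    params = [centres (n_rules*n_in), sigmas (n_rules*n_in), consequents (n_rules)]."""
    c = params[:n_rules * n_in].reshape(n_rules, n_in)
    s = np.abs(params[n_rules * n_in:2 * n_rules * n_in]).reshape(n_rules, n_in) + 1e-6
    z = params[2 * n_rules * n_in:]
    # firing strength of each rule = product of Gaussian memberships over the inputs
    w = np.exp(-0.5 * ((X[:, None, :] - c) / s) ** 2).prod(axis=2)
    return (w * z).sum(axis=1) / (w.sum(axis=1) + 1e-12)   # weighted-average defuzzification

def mf_fitness(params, X, t):
    """RMSE of the fuzzy output: the quantity the EA drives down."""
    return np.sqrt(np.mean((sugeno_predict(params, X) - t) ** 2))

The GA then minimizes mf_fitness over the concatenated parameter vector, just as it minimized the network training error in the EA-NN synergy.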

5.4.1 ANFIS GA

Fig.5.42 GA based ANFIS (FCM) applied Strategy [72]

5.4.1 (a) Statistical Error Analysis of GA based ANFIS (FCM) for 35 HRC and 45 HRC
Results of GA based ANFIS (FCM) for 35 HRC (left) and 45 HRC (right)

               Error mean  Error STD  MSE  RMSE  |  Error mean  Error STD  MSE  RMSE
Surface roughness (Ra)
Train Ra (μm)  -1.5×10^-15 0.1016 0.0103 0.1015  |  3.14×10^-16 0.0682 0.0046 0.0681
Test Ra (μm)   -0.0020 0.1004 0.0100 0.1002  |  0.0014 0.069 0.0048 0.0692
Tangential force (Ft)
Train Ft (N)   9.22×10^-3 23.7 561.42 23.69  |  -2.69×10^-13 18.31 335.08 18.30
Test Ft (N)    -0.3057 25.68 657.6 25.64  |  1.3611 19.58 384 19.59
Axial force (Fa)
Train Fa (N)   1.21×10^-13 11.57 133.7 11.56  |  -1.69×10^-13 8.42 70.8 8.4176
Test Fa (N)    0.0194 10.79 116.09 10.77  |  1.681 9.79 98.45 9.92
Radial force (Fr)
Train Fr (N)   4.037×10^-13 7.568 57.2 7.56  |  -4.3×10^-13 6.266 38.70 6.22
Test Fr (N)    -0.0449 7.90 62.314 7.894  |  1.082 7.41 55.97 7.481
Tool life (Tf)
Train Tf (min) -3.06×10^-14 0.7850 0.6153 0.7844  |  2.69×10^-14 0.6304 0.39 0.62
Test Tf (min)  0.034 0.80 0.6431 0.0802  |  -0.0157 0.16 0.38 0.618

Results of PSO based ANFIS (FCM) for 35 HRC (left) and 45 HRC (right)
Surface roughness (Ra)
Train Ra (μm)  -8.43×10^-17 0.0988 0.2168 0.4656  |  1.4×10^-15 1.4×10^-15 0.0042 0.0681
Test Ra (μm)   -4.79×10^-4 0.1010 0.0102 0.1008  |  1.4×10^-15 1.4×10^-15 0.0048 0.0692
Tangential force (Ft)
Train Ft (N)   -3.3×10^-13 23.87 589.7 23.86  |  -4.98×10^-13 18.57 344.4 12.55
Test Ft (N)    -2.214 23.79 569.38 23.86  |  1.178 18.90 357.6
Axial force (Fa)
Train Fa (N)   1.02×10^-13 10.98 120.46 10.96  |  -5.59×10^-13 8.78 77.08 8.78
Test Fa (N)    -0.2655 11.98 143.22 11.96  |  0.080 9.045 81.54 9.03
Radial force (Fr)
Train Fr (N)   -2.36×10^-13 7.57 57.22 7.56  |  -3.23×10^-13 6.57 42.79 6.54
Test Fr (N)    0.221 7.71 59.42 7.708  |  -0.1446 6.73 45.25 6.72
Tool life (Tf)
Train Tf (min) 3.03×10^-14 0.796 0.6335 0.795  |  2.90×10^-14 0.6186 45.25 6.72
Test Tf (min)  0.0042 0.88 0.774 0.8817  |  -0.0036 0.64 0.416 0.645
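For reference, the statistics reported here (and in the comparison tables of chapter 6) follow the standard definitions for targets t_i, predictions y_i, errors e_i = t_i - y_i and n samples (assuming the sample convention for the standard deviation):

\bar{e} = \frac{1}{n}\sum_{i=1}^{n} e_i ,\qquad
\sigma_e = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(e_i-\bar{e}\right)^{2}} ,\qquad
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^{2} ,\qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}}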

5.4.2 GA based ANFIS (FCM) Plots For Ra 35 HRC

Fig.5.43 Training Error Plots for Ra (Target vs Output)
Fig.5.44 Testing Error Plots for Ra (Target vs Output)


Fig.5.45 Regression Plots for Ra (Train /Test/Validate)

Fig.5.46 Response Surface Plot for Ra


5.4.3 GA based ANFIS (FCM) Plots For Ft 35 HRC

Fig.5.47 Training Error Plots for Ft (Target vs Output)
Fig.5.48 Testing Error Plots for Ft (Target vs Output)


Fig.5.49 Regression Plots for Ft (Train /Test/Validate)

Fig.5.50 Response Surface Plot for Ft

5.4.4 GA based ANFIS (FCM) Plots For Fa 35 HRC


Fig.5.51 Training Error Plots for Fa (Target vs Output)
Fig.5.52 Testing Error Plots for Fa (Target vs Output)

Fig.5.53 Regression Plots for Fa (Train /Test)

Fig.5.54 Response Surface Plot for Fa

5.4.5 GA based ANFIS (FCM) Plots For Fr 35 HRC


Fig.5.55 Training Error Plots for Fr (Target vs Output)
Fig.5.56 Testing Error Plots for Fr (Target vs Output)

Fig.5.57 Regression Plots for Fr (Train /Test/Validate)

Fig.5.58 Response Surface Plot for Fr

5.4.6 GA based ANFIS (FCM) Plots For Tf 35 HRC


Fig.5.59 Training Error Plots for Tf (Target vs Output)
Fig.5.60 Testing Error Plots for Tf (Target vs Output)

Fig.5.61 Regression Plots for Tf (Train /Test)

Fig.5.62 Response Surface Plot for Tf


5.4.7 GA based ANFIS (FCM) Plots For Ra 45 HRC

Fig.5.63 Training Error Plots for Ra (Target vs Output)
Fig.5.64 Testing Error Plots for Ra (Target vs Output)

Fig.5.65 Regression Plots for Ra (Train /Test)

Fig.5.66 Response Surface Plot for Ra


5.4.8 GA based ANFIS (FCM) Plots For Ft 45 HRC

Fig.5.67 Training Error Plots for Ft (Target vs Output)
Fig.5.68 Testing Error Plots for Ft (Target vs Output)

Fig.5.69 Regression Plots for Ft (Train /Test)

Fig.5.70 Response Surface Plot for Ft


5.4.9 GA based ANFIS (FCM) Plots For Fa 45 HRC

Fig.5.71 Training Error Plots for Fa (Target vs Output)
Fig.5.72 Testing Error Plots for Fa (Target vs Output)

Fig.5.73 Regression Plots for Fa (Train /Test)

Fig.5.74 Response Surface Plot for Fa


5.4.10 GA based ANFIS (FCM) Plots For Fr 45 HRC

Fig.5.75 Training Error Plots for Fr (Target vs Output)
Fig.5.76 Testing Error Plots for Fr (Target vs Output)

Fig.5.77 Regression Plots for Fr (Train /Test/Validate)

Fig.5.78 Response Surface Plot for Fr


5.4.11 GA based ANFIS (FCM) Plots For Tf 45 HRC

Fig.5.79 Training Error Plots for Tf (Target vs Output)
Fig.5.80 Testing Error Plots for Tf (Target vs Output)

Fig.5.81 Regression Plots for Tf (Train /Test)

Fig.5.82 Response Surface Plot for Tf


5.5 PSO based ANFIS (FCM) 35 HRC and 45HRC Steel

The synergism between SI and ANFIS is also strongly coupled: the SI technique is applied to
the membership functions to optimize the fitness value of the fuzzy output. The flow chart
below outlines the applied synergism strategy. Membership functions are clustered through
Fuzzy C-means clustering, and the fuzzy structure utilized is the same as previously applied
(refer to chapter 4, table 4.3.27 (a)). A detailed description is given in the appendix.

5.5.1 PSO based ANFIS (FCM)

Fig.5.83 PSO-ANFIS applied strategy [72]


5.5.2 PSO based ANFIS (FCM) Plots For Ra 35 HRC

Fig.5.84 Training Error Plots for Ra (Target vs Output)
Fig.5.85 Testing Error Plots for Ra (Target vs Output)

Fig.5.86 Regression Plots for Ra (Train /Test)

Fig.5.87 Response Surface Plot for Ra


5.5.3 PSO based ANFIS (FCM) Plots For Ft 35 HRC

Fig.5.88 Training Error Plots for Ft (Target vs Output)
Fig.5.89 Testing Error Plots for Ft (Target vs Output)

Fig.5.90 Regression Plots for Ft (Train /Test)

Fig.5.91 Response Surface Plot for Ft


5.5.4 PSO based ANFIS (FCM) Plots For Fa 35 HRC

Fig.5.92 Training Error Plots for Fa (Target vs Output)
Fig.5.93 Testing Error Plots for Fa (Target vs Output)

Fig.5.94 Regression Plots for Fa (Train /Test)

Fig.5.95 Response Surface Plot for Fa


5.5.5 PSO based ANFIS (FCM) Plots For Fr 35 HRC

Fig.5.96 Training Error Plots for Fr (Target vs Output)
Fig.5.97 Testing Error Plots for Fr (Target vs Output)

Fig.5.98 Regression Plots for Fr (Train /Test)

Fig.5.99 Response Surface Plot for Fr


5.5.6 PSO based ANFIS (FCM) Plots For Tf 35 HRC

Fig.5.100 Training Error Plots for Tf (Target vs Output)
Fig.5.101 Testing Error Plots for Tf (Target vs Output)

Fig.5.102 Regression Plots for Tf (Train /Test)

Fig.5.103 Response Surface Plot for Tf


5.5.7 PSO based ANFIS (FCM) Plots For Ra 45 HRC

Fig.5.104 Training Error Plots for Ra (Target vs Output)
Fig.5.105 Testing Error Plots for Ra (Target vs Output)

Fig.5.106 Regression Plots for Ra (Train /Test)

Fig.5.107 Response Surface Plot for Ra


5.5.8 PSO based ANFIS (FCM) Plots For Ft 45 HRC

Fig.5.108 Training Error Plots for Ft (Target vs Output)
Fig.5.109 Testing Error Plots for Ft (Target vs Output)

Fig.5.110 Regression Plots for Ft (Train /Test)

Fig.5.111 Response Surface Plot for Ft


5.5.9 PSO based ANFIS (FCM) Plots For Fa 45 HRC

Fig.5.112 Training Error Plots for Fa (Target vs Output)
Fig.5.113 Testing Error Plots for Fa (Target vs Output)

Fig.5.114 Regression Plots for Fa (Train /Test)

Fig.5.115 Response Surface Plot for Fa


5.5.10 PSO based ANFIS (FCM) Plots For Fr 45 HRC

Fig.5.116 Training Error Plots for Fr (Target vs Output)
Fig.5.117 Testing Error Plots for Fr (Target vs Output)

Fig.5.118 Regression Plots for Fr (Train /Test/Validate)

Fig.5.119 Response Surface Plot for Fr


5.5.11 PSO based ANFIS (FCM) Plots For Tf 45 HRC

Fig.5.120 Training Error Plots for Tf (Target vs Output)
Fig.5.121 Testing Error Plots for Tf (Target vs Output)

Fig.5.122 Regression Plots for Tf (Train /Test)

Fig.5.123 Response Surface Plot for Tf


5.6 Comparison of Prediction Results with Experimental Statistics

5.6.1 NSGA-NN

5.6.1 (a) Statistical Comparison of Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC and 45 HRC
           35 HRC: Error mean  Error STD  MSE  RMSE  |  45 HRC: Error mean  Error STD  MSE  RMSE

NSGA-NN vs Experimental
Ra (μm)  -0.221 0.979 0.960 0.980  |  -0.355 0.378 0.262 0.512
Ft (N)   42.67 86.21 8.88×10^3 94.24  |  -57.23 128.36 1.89×10^4 137.58
Fa (N)   156.2 49.56 2.675×10^4 114.49  |  -13.571 38.046 1.55×10^3 39.488
Fr (N)   -0.723 14.77 206.55 14.37  |  -11.74 35.99 1.36×10^3 36.99

PSO-NN vs Experimental (35 HRC only)
Ra (μm)  1.749 0.732 3.56 1.88
Ft (N)   -53.25 104.6 1.32×10^4 115.04
Fa (N)   138.22 40.44 2.06×10^4 43.73
Fr (N)   40.08 64.63 5.57×10^3 74.66

ANFIS-GA vs Experimental
Ra (μm)  -0.1572 0.4507 0.2177 0.466  |  0.0371 0.2796 0.075 0.2751
Ft (N)   17.6 57.06 3.4×10^3 58.3  |  -21.84 107.08 1.64×10^4 107.76
Fa (N)   136.152 51.87 2.109×10^4 145.2  |  -45.95 41.46 3.74×10^3 61.19
Fr (N)   11.065 20.54 523 22.87  |  -0.91 36.86 1.29×10^3 35.93

ANFIS-PSO vs Experimental
Ra (μm)  0.1578 0.4494 0.2168 0.4656  |  0.0372 0.2798 0.075 0.275
Ft (N)   17.13 57.16 3.398×10^3 58.29  |  -27.34 106.19 1.14×10^4 107.05
Fa (N)   135.9 52 2.11×10^4 145.4  |  -46.83 41.55 3.83×10^3 61.91
Fr (N)   10.89 70.79 517 22.75  |  -0.747 36.77 1.28×10^3 35.84

5.6.2 Error Plots of NSGA-NN Prediction Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.5.124 Error Estimation Plots for Ra
Fig.5.125 Error Estimation Plots for Ft


Fig.5.126 Error Estimation Plots for Fa
Fig.5.127 Error Estimation Plots for Fr

5.6.3 Error Plots of NSGA-NN Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.5.128 Error Estimation Plots for Ra
Fig.5.129 Error Estimation Plots for Ft
Fig.5.130 Error Estimation Plots for Fa
Fig.5.131 Error Estimation Plots for Fr


5.6.4 Error Plots of PSO-NN Results with Experimental Statistics for AISI 4340
Steel 35 HRC

Fig.5.132 Error Estimation Plots for Ra
Fig.5.133 Error Estimation Plots for Ft
Fig.5.134 Error Estimation Plots for Fa
Fig.5.135 Error Estimation Plots for Fr

5.6.5 Error Plots of GA based ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.5.136 Error Estimation Plots for Ra
Fig.5.137 Error Estimation Plots for Ft


Fig.5.138 Error Estimation Plots for Fa
Fig.5.139 Error Estimation Plots for Fr

5.6.6 Error Plots of GA based ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.5.140 Error Estimation Plots for Ra
Fig.5.141 Error Estimation Plots for Ft

Fig.5.142 Error Estimation Plots for Fa
Fig.5.143 Error Estimation Plots for Fr


5.6.7 Error Plots of PSO based ANFIS (Fuzzy C-Mean Clustering) Results with Experimental Statistics for AISI 4340 Steel 35 HRC

Fig.5.144 Error Estimation Plots for Ra
Fig.5.145 Error Estimation Plots for Ft

Fig.5.146 Error Estimation Plots for Fa
Fig.5.147 Error Estimation Plots for Fr

5.6.8 Error Plots of PSO based ANFIS Prediction Results with Experimental Statistics for AISI 4340 Steel 45 HRC

Fig.5.148 Error Estimation Plots for Ra
Fig.5.149 Error Estimation Plots for Ft


Fig.5.150 Error Estimation Plots for Fa
Fig.5.151 Error Estimation Plots for Fr

5.7 Conclusion

An extensive statistical error analysis of the developed prediction models through the
possible synergies was performed, and the errors were monitored at all three learning
stages, i.e., while training, testing and validating. The learning converged well for all
applied techniques.

Furthermore, the predicted results were tested against experimental statistics to evaluate
the prediction accuracy of each learning model. The prediction models demonstrated
relatively varying accuracy, and the results are tabulated in each section.

The following crucial observations can be made from the statistical error analysis.

1. The results of EA-NN were more accurate than those of SI-NN and less accurate when
compared to the synergies of ANFIS.
2. EA-NN accuracy was convincing, but the accuracy of ANFIS-EA and ANFIS-SI was
better than that of the EA-NN and SI-NN prediction models, as is clear from the
statistical error tables.
3. Between ANFIS-EA and ANFIS-SI the relative difference in accuracy is negligible, as
both techniques demonstrated almost identical prediction results.
4. The adaptive neuro-fuzzy combination proved to be better than the neuro-computing
combination; this difference is possible due to the adaptive layers introduced in the
neuro-fuzzy inference. Though in both techniques the back-propagation learning
algorithm is applied for error minimization in learning, the adaptive layers and the
clustering technique in the membership functions of the fuzzy structure improved the
learning ability of the prediction model.
5. However, the synergism of the neural network gave better accuracy than the exclusive
techniques, exhibiting an improvement in pattern learnability.


CHAPTER 6

RESULTS AND DISCUSSION


The results from the optimization and predictive techniques are discussed in detail, and a
comparison with experimental statistics is made.

6.1 Global Search Optimization

Global search optimization techniques are explained extensively in chapter 3, and the
implemented strategy is shown in Fig.3.1. Techniques from both EA and SI were utilized to
optimize the machining performance.

6.1.1 NSGAII

NSGA-II utilizes a non-domination technique for its population selection and mating; the
fitness of individuals is determined by two criteria: (a) the Pareto individual rank and (b) a
pseudo-Euclidean distance.

According to these criteria, an individual with the least rank and the maximum distance is
the fittest, and such individuals represent globally optimized solutions. The degree of
optimality depends on the relative function of rank and distance, which represents the
diversity of individuals in each generation. The runs for both steels, AISI 4340 35 HRC and
45 HRC, had an initial population of 1000, and elitism was applied at each generation.
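The pseudo-Euclidean distance used as the second criterion is commonly implemented as NSGA-II's crowding distance; a minimal sketch (illustrative, not the exact code used in this work) is:

import numpy as np

def crowding_distance(F):
    """Crowding distance for one non-dominated front.
    F: (n, m) array of objective values; a larger distance marks a more
    isolated (more diverse) individual, which selection prefers."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf       # boundary solutions are always kept
        span = F[order[-1], j] - F[order[0], j]
        if span > 0:                              # normalized gap to the two neighbours
            d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

Selection then prefers lower rank and, within a rank, larger distance, which is the ordering used to list the solutions in Tables 3.3 and 3.4.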

Tables 3.3 and 3.4 record the families of optimized individuals for both steels after one
hundred generations, arranged in descending order of fronts and fitness levels, which are
calculated from rank and distance. Only the first 20 solutions out of the population of 1000
are listed in the tables.

6.1.1 (a) For AISI 4340 35HRC

From Table 3.3, the first front has the maximum distance and minimum rank, and these
solutions show good tradeoffs between surface roughness and tool life. Among them, the
fifth individual gives both a good surface finish and the maximum tool life. This individual
corresponds to process parameters (cutting speed, feed rate and depth of cut) of 170 m/min,
0.15 mm/rev and 1 mm respectively, with machining objectives (Ra, Ft, Fa, Fr, Tf) of
4.0 μm, 412 N, 152 N, 231 N and 50 minutes. The subsequent individual, with process
parameters 231 m/min, 0.15 mm/rev and 1 mm corresponding to machining performance
(Ra, Ft, Fa, Fr, Tf) of 2.5 μm, 442 N, 115 N, 199 N and 42.49 minutes, can also be picked as
an optimal solution. Though the first front is the fittest, individuals from subsequent fronts
can also be chosen from the solution space with permitted tradeoffs.

Fig.3.3-3.5 constitute the NSGA-II plots for 35 HRC. In Fig.3.3, the first subplot shows the
rank of individuals at each generation; it can be inferred that the rank converges to one for
most of the population, as the non-domination technique is applied for sorting. In the
second subplot of Fig.3.3, Pareto plots between the objectives are drawn to evaluate the
relative tradeoffs between two objectives.

A Pareto front depicts the non-inferior elite members and can be drawn between two or
three objectives. Subplots 2, 3 and 4 in Fig.3.3 are Pareto plots between force and surface
roughness: the second subplot represents the Pareto front between Ra-Ft, and the third and
fourth subplots the Pareto fronts between Ra-Fa and Ra-Fr.

Fig.3.4 contains the Pareto fronts between tool life and forces; the first subplot is the Pareto
front between Tf-Ft, the second between Tf-Fa and the third between Tf-Fr. From the Pareto
fronts, for minimum surface roughness and maximum tool life the following ranges of
forces were found to be favorable (approximately) for an optimal surface profile.

Table 6.1 Tradeoffs among forces for surface roughness and tool life
Forces               Surface Roughness (1-2.5 μm)   Tool life (Tf > 40 min)
Tangential forces    800-600 N                      <470 N
Axial forces         300-200 N                      <160 N
Radial forces        400-300 N                      <250 N

In Fig.3.5, the average Pareto diversity over consecutive generations is plotted, determined
by the average Pareto (Euclidean) distance among individuals. The diversity of the
population varied from 1 to 0.001, and the population over the generations exhibited vivid
diversity. Initially the average distance between generations was at its maximum; after 20
generations a shift in distance was observed, with the spread dropping to 0.6 in the mid
generations, and as the final generations were approached the distance converged to 0.001,
showing a good migration of individuals across generations.

6.1.1 (b) For AISI 4340 45HRC

For AISI 4340 45 HRC steel, the family of solutions is listed in Table 3.4; out of the 1000
elite solutions, only 20 are tabulated. The best individual in the first front corresponds to
process parameters (cutting speed, feed rate and depth of cut) of 130.8 m/min, 0.15 mm/rev
and 1 mm, with objective fitness (Ra, Ft, Fa, Fr, Tf) of 4.0 μm, 542 N, 331 N, 282 N and 30
minutes. Similarly, the next individual, with process parameters 135 m/min, 0.15 mm/rev
and 1 mm corresponding to objective fitness (Ra, Ft, Fa, Fr, Tf) of 3.98 μm, 519 N, 331 N,
282 N and 28 minutes, can be picked as a best solution. Likewise, favorable solutions can
be chosen with the degree of tradeoff permitted among the objectives.

Fig.3.6-3.8 are the NSGA-II plots for 45 HRC. In Fig.3.6, the first subplot depicts the rank
of individuals at each generation, and the rank converges to one for most of the population;
in Fig.3.7, Pareto plots between the objectives are graphed, and in Fig.3.8 the diversity plot
at each generation is drawn.

Subplots 2, 3 and 4 in Fig.3.6 are Pareto plots between surface roughness and force: the
second subplot represents the Pareto front between Ra-Ft, and the third and fourth subplots
show the Pareto fronts between Ra-Fa and Ra-Fr.

Fig.3.7 contains the Pareto fronts between tool life and forces; the first subplot is the Pareto
front between Tf-Ft, the second between Tf-Fa and the third between Tf-Fr. From the Pareto
fronts, for minimum surface roughness and maximum tool life the following ranges of
forces were found to be favorable (approximately) for an optimal solution in 45 HRC
machining.

VISHWAKARMA INSTITUTE OF INFORMATION TECHNOLOGY, PUNE


M.E. (Mechanical) (Design Engineering)
191
Evolutionary Algorithms for Multi-Objective Optimization: Modelling and Comparative Evaluation

Table 6.2 Tradeoffs among forces for surface roughness and tool life
Forces               Surface Roughness (3.5-4 μm)   Tool life (Tf > 10 min)
Tangential forces    600-400 N                      <500 N
Axial forces         360-300 N                      <160 N
Radial forces        300-330 N                      <350 N

In Fig.3.8 the diversity of the generations is plotted. At the initial generations the average
Pareto distance was at its maximum; later, in the mid generations, a sharp drop is observed
between generations 25-35, and at the final generations the distance converged to as low as
0.001, showing good migration ability and diversity.

6.1.2 Results for SPEA 2

6.1.2 (a) For AISI 4340 35 HRC

Since SPEA2 is a combination of the Pareto-envelope and niched-Pareto approaches, the
rank and fitness strength are calculated, through dominance level, for both the best elements
in the archive and the individuals in the population. The rule for determining the best
solution is to calculate an effective fitness value, which is the sum of the raw fitness and the
density distribution about each individual. Table 3.6 holds the optimal solutions for AISI
4340 steel 35 HRC, containing 300 archive elements. The matrix F contains the fitness
strength of each individual, calculated by summation of the R (rank) and D
(distance/density function) matrices (in Table 3.6); the distance D is calculated by the
k-nearest-cluster algorithm. The matrix S is the strength value, which is evaluated by
dominance count. The position matrix represents the process parameters and the cost
matrix represents the machining objectives.
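This fitness assignment can be sketched as follows, using the standard SPEA2 rule with the k-th-nearest-neighbour density described above (k = √N is the usual choice and an assumption here, as are the names):

import numpy as np

def spea2_fitness(F_obj):
    """SPEA2 effective fitness for minimization: F = R (raw, dominance-based) + D (density).
    F_obj: (n, m) objective values of the combined population and archive."""
    n = len(F_obj)
    # dom[i, j] is True when solution i dominates solution j
    dom = ((F_obj[:, None, :] <= F_obj[None, :, :]).all(axis=2)
           & (F_obj[:, None, :] < F_obj[None, :, :]).any(axis=2))
    S = dom.sum(axis=1)                                     # strength: how many i dominates
    R = np.array([S[dom[:, j]].sum() for j in range(n)])    # raw fitness: strengths of dominators
    dist = np.linalg.norm(F_obj[:, None, :] - F_obj[None, :, :], axis=2)
    dist.sort(axis=1)
    k = int(np.sqrt(n))
    D = 1.0 / (dist[:, k] + 2.0)                            # density from k-th nearest neighbour
    return R + D                                            # lower effective fitness is fitter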

The individual with the least strength value represents the fittest individual; i.e., the
individual in the first front with strength fitness S = 0.32 can be picked as the best
individual, with position (process parameters: cutting speed, feed rate and depth of cut)
259 m/min, 0.17 mm/rev and 1.07 mm, corresponding to machining objectives (Ra, Ft, Fa,
Fr, Tf) of 1.88 μm, 514 N, 136 N, 225 N and 35.51 minutes.


In Fig.3.10 the subplots show the individual ranks and the Pareto fronts between surface
roughness and forces; subplots 2, 3 and 4 depict the Pareto fronts between Ra-Ft, Ra-Fa and
Ra-Fr, which are archive elements containing only the fittest individuals. Similarly, the
Pareto fronts between tool life and forces are plotted in Fig.3.11, where subplots 1, 2 and 3
represent the Pareto fronts for Tf-Ft, Tf-Fa and Tf-Fr. Though the solutions suggested by
SPEA2 are different individuals from those of NSGA-II, the force constraints obtained for
minimum surface roughness and maximum tool life are the same as for NSGA-II.

Table 6.3 Tradeoffs among forces for surface roughness and tool life
Forces               Surface Roughness (1-2.5 μm)   Tool life (Tf > 35 min)
Tangential forces    780-590 N                      <500 N
Axial forces         300-200 N                      <160 N
Radial forces        300-330 N                      <250 N

In Fig.3.12 the diversity among generations is plotted. The average Pareto distance differs
from that of NSGA-II, with the spread limited between 0.37-0.27; the spread is random,
representing a good mix of individuals among generations, but the combinations are
restricted within a limited distance.

6.1.2 (b) For AISI 4340 45 HRC

The results for the 45 HRC steel are recorded in Table 3.7 with the same definition of
elements. From the table, the strength fitness is least for the third individual, with F = 0.31;
the position (process parameters: cutting speed, feed rate and depth of cut) of this individual
is 174 m/min, 0.23 mm/rev and 1.7 mm, with machining performance (Ra, Ft, Fa, Fr, Tf) of
3.93 μm, 674 N, 389 N, 355 N and 14.19 minutes. The next fittest individual is in the
subsequent front, with strength fitness S = 0.33 and process parameters 168 m/min,
0.16 mm/rev and 1.02 mm for a machining performance of 3.67 μm, 519.42 N, 384 N,
310 N and 19 minutes.

The Pareto fronts for the 45 HRC objectives are in Fig.3.13 and Fig.3.14. In Fig.3.13,
subplots 2, 3 and 4 represent the Pareto fronts between Ra-Ft, Ra-Fa and Ra-Fr, and the
subplots in Fig.3.14 show Tf-Ft, Tf-Fa and Tf-Fr respectively. For minimum surface
roughness and maximum tool life the force constraints should be as follows.


Table 6.4 Tradeoffs among forces for surface roughness and tool life
Forces               Surface Roughness (3.5-4 μm)   Tool life (Tf > 10 min)
Tangential forces    600-400 N                      <500 N
Axial forces         360-300 N                      <160 N
Radial forces        300-330 N                      <350 N

In Fig.3.15 the diversity among generations is determined. The average Pareto distance is
limited between 0.4-0.26; the spread is random, representing good combinations of
individuals among generations, but this trend is limited.

6.1.3 Results for PSO

6.1.3 (a) For AISI 4340 35HRC

The PSO algorithm utilizes swarm movement to find the optimal solution. The applied
swarm consists of a particle structure with a position, a cost, the particle's best position and
best cost, and a pseudo velocity and acceleration associated with each particle. Table 3.9
comprises the optimal solutions for AISI 4340 steel, where the position matrix holds the
process parameters, the cost matrix holds the machining performance for each position, and
a best cost and best position are defined for each position. The velocity matrix, which
corresponds to the position matrix, is utilized to move the swarm in the search space and
changes at each transition. The grid index matrix contains the topology of the swarm at
each transition, and the grid sub-index contains the neighborhood topology of the grid
index. The table consists of the 500 swarm particles that are in the repository, out of a
swarm of 1000.
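The particle record and the grid-based leader selection described here can be sketched as below; the field names are illustrative, and the selection pressure of 2 echoes the description table of section 5.3.1:

import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(2)

@dataclass
class Particle:
    position: np.ndarray            # process parameters (vc, f, d)
    velocity: np.ndarray            # pseudo velocity moving the particle
    cost: np.ndarray = None         # machining objectives (Ra, Ft, Fa, Fr, Tf)
    best_position: np.ndarray = None
    best_cost: np.ndarray = None
    grid_index: int = -1            # cell of the objective-space hypergrid

def select_leader(repository, beta=2.0):
    """Pick a repository leader, favouring sparsely populated grid cells
    (beta is the leader selection pressure)."""
    idx = np.array([p.grid_index for p in repository])
    cells, counts = np.unique(idx, return_counts=True)
    prob = np.exp(-beta * counts)
    prob = prob / prob.sum()
    cell = rng.choice(cells, p=prob)
    members = [p for p in repository if p.grid_index == cell]
    return members[rng.integers(len(members))]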

In Fig.3.17 the Pareto front between surface roughness and tangential force is graphed,
which gives results similar to the Pareto fronts of the evolutionary techniques.

Fig.3.18-3.19 are swarm surfaces for surface roughness and tool life respectively, which
show the potential of the swarm. Each particle on the swarm surface is defined by its
particle position. In Fig.3.18 the particles at the declination of the surface are the fittest
particles, which converged to minimum surface roughness values, and in Fig.3.19 the
particles at the projection of the swarm surface represent the optimal tool life, showing
convergence at the foot of the swarm surface.

VISHWAKARMA INSTITUTE OF INFORMATION TECHNOLOGY, PUNE


M.E. (Mechanical) (Design Engineering)
194
Evolutionary Algorithms for Multi-Objective Optimization: Modelling and Comparative Evaluation

In Fig.3.20 the influence of depth of cut on the force components for the optimal particles is
plotted. Subplots 1, 2 and 3 show the force gradients with respect to depth of cut for the
best particles. The change in force about the mean line shows that the best particles lie
about the mean line, presenting favorable solutions for surface roughness and tool life.
From Table 3.9 the optimal solution is determined by the leaders among the swarm: the first
leader corresponds to grid index 40282, with best position 193.76 m/min, 0.15 mm/rev and
1 mm and machining performance (Ra, Ft, Fa, Fr, Tf) of 3.43 μm, 417 N, 134 N, 215 N and
46.12 minutes; the subsequent solution corresponds to the particle with grid index 46844.
Similarly, other solutions are equally fit, and solutions can be chosen depending on the
permitted tradeoffs.

6.1.3 (b) For AISI 4340 45HRC

The solutions for AISI 4340 45 HRC steel have the same entities and are listed in Table
3.10. The Pareto fronts and swarm surfaces are plotted in Fig.3.21 and Fig.3.22-3.23
respectively. The swarm surface of the 45 HRC steel showed trends similar to those of the
35 HRC steel. In Fig.3.21 the Pareto front between surface roughness and tangential force
is drawn, which gives force constraints similar to the NSGA-II 45 HRC fronts. In Fig.3.22
and Fig.3.23 the swarm surfaces for surface roughness and tool life are drawn.

In Fig.3.24 the influence of depth of cut on the force components for the optimal particles is
plotted. Subplots 1, 2 and 3 show the force gradients with respect to depth of cut for the
best particles. From Table 3.10 the optimal solution is determined by the swarm leaders:
among them, the first leader corresponds to grid index 20510, with best position 136 m/min,
0.15 mm/rev and 1 mm and machining performance (Ra, Ft, Fa, Fr, Tf) of 4.04 μm, 522 N,
134 N, 331 N and 29.32 minutes; the subsequent solution corresponds to the particle with
grid index 31070. Similarly, other solutions are equally fit, and solutions can be chosen
depending on the allowed tradeoffs.


6.1.4 Comparison between EA and SI

Fig.3.25-Fig.3.30 are plots of the solution spectra obtained from the applied global search
optimization techniques. Each plot has solutions for five objectives, i.e., surface roughness,
tool life, radial force, axial force and tangential force. In Fig.3.25 and Fig.3.26 the solution
spaces obtained from NSGA-II for the 35 HRC and 45 HRC steels are drawn. A clear
distinction can be made from the graphs: in the 35 HRC steel the radial forces dominate the
axial forces, while in the 45 HRC steel the axial forces dominate the radial forces. A similar
trend can be inferred from the solutions of SPEA2 and PSO.

Fig.3.27 and Fig.3.28 are the solution spaces obtained from the PSO technique for the
35 HRC and 45 HRC steels respectively, and Fig.3.29 and Fig.3.30 are the solution spaces
obtained from SPEA2 for 35 HRC and 45 HRC respectively.

For 35 HRC the solutions for the objectives are represented in the order surface roughness,
tool life, axial force, radial force and tangential force, while for 45 HRC the solutions are in
the order surface roughness, tool life, radial force, axial force and tangential force.

6.1.4 (a) Comparison among the Solution spectrum

The solution space of NSGA-II (Fig.3.25 and 3.26) has a population of one thousand, which
is a combination of parents and offspring formed by tournament selection. The solutions
are distributed with constant amplitude across the solution space, while the solution spaces
of PSO and SPEA2 (Fig.3.27-Fig.3.30) vary in amplitude at each generation.

The solution space of PSO (Fig.3.27 and 3.28) has five hundred best (elite) solutions. The
solution trend is random when compared to NSGA-II, showing varying amplitude across
the spectrum with immediate changes in crests and troughs at the local and global minima
and maxima.

The solution space of SPEA2, however, has three hundred best elite solutions with highly
disturbed amplitude when compared to NSGA-II and PSO; the jumps in maxima and
minima are uneven, exhibiting unsaturation in the local and global minima.

VISHWAKARMA INSTITUTE OF INFORMATION TECHNOLOGY, PUNE


M.E. (Mechanical) (Design Engineering)
196
Evolutionary Algorithms for Multi-Objective Optimization: Modelling and Comparative Evaluation

From the solution space analysis, NSGA-II showed a better saturation in the local and
global maxima and minima, while PSO showed a moderate saturation and SPEA2 exhibited
lower saturation levels. The saturation levels also depict the tradeoffs in the objectives,
from which the following inference can be made.

The solutions obtained from NSGA-II gave better tradeoffs among the objectives, which
can be observed from each front in the solution space. PSO gave a moderate tradeoff, with
the solution space favoring lower surface roughness by reducing tool life for a few swarm
particles, though the overall tradeoff was found to be good enough.

The solution space of SPEA2 gave an unbalanced tradeoff between the objectives, favoring
surface roughness and reducing tool life for many individuals. From the above discussion it
can be concluded that the solution space provided by NSGA-II is better than the other two,
while the solution space provided by PSO is better than that of SPEA2.

6.1.5 Evidence from the literature

The solutions obtained for the fittest individuals in NSGA-II and PSO keep the flank wear
of the tool under the working limit. The tool used for the current hard turning was a
multi-layer coated carbide insert, and both optimization techniques suggested a cutting
speed below 200 m/min with feed rate and depth of cut between LFLD (low feed rate, low
depth of cut) and HFHD (high feed, high depth of cut); at this condition the flank wear is
less than 0.15 mm for a tool life greater than 40 minutes [2]. This cutting condition keeps
flank wear at an appreciable level, restricting the sharp rise in cutting forces due to a high
flank wear rate [3]. If the flank wear is inhibited from rising above this level, machining
chatter due to excessive forces is controlled and a better surface finish is obtained.

In contrast, SPEA2 suggests cutting speeds close to 260 m/min (250-260 m/min) and feed
rate and depth of cut near the HFHD condition, for which the flank wear is about 0.2 mm
for a cutting time greater than 35 minutes [2]. At this flank wear the forces tend to increase
sharply, which may increase machining vibrations, resulting in a poor surface finish. The
tradeoff between surface roughness and tool life should therefore be picked wisely for
successful hard turning.


6.2 Intelligent Learning Techniques

Intelligent learning systems were applied to recognize the machining pattern sequence that
is discernible in the machining statistics. Fig.4.1 illustrates the strategy adopted for chapter
4, and the results are discussed accordingly. The learned networks were utilized to predict
the machining performance on experimental runs, and the obtained results were analyzed
for statistical error. An extensive mathematical framework is discussed for all the applied
learning techniques.

6.2.1 Neural Network

The applied network architecture for both steels is illustrated in Fig.4.4. A multi-layer
feed-forward perceptron-type network was used for both steels, which requires many
exemplars for mapping multiple vectors. From the regression model, one thousand
machining data sets were generated for each vector model for better interpolation of the
target vectors in the learning space. These data sets were randomly permutated and split
into training, testing and validation samples. Out of the thousand sets, 700 samples were
utilized for training, 150 samples for testing and 150 for validation.
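The split described here is a plain random permutation of the generated samples; a minimal sketch with a hypothetical helper name:

import numpy as np

rng = np.random.default_rng(3)

def split_samples(X, T, n_train=700, n_test=150):
    """Randomly permute the 1000 RSM-generated samples and split them into
    training / testing / validation sets in the 700 / 150 / 150 proportion."""
    perm = rng.permutation(len(X))
    tr, te, va = (perm[:n_train],
                  perm[n_train:n_train + n_test],
                  perm[n_train + n_test:])
    return (X[tr], T[tr]), (X[te], T[te]), (X[va], T[va])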

The description of the applied neural network for AISI 4340 steel 35 HRC is listed in Table
4.2.4 (a), and the calibrated weights and biases for the targets are given in Table 4.2.4 (b).
The network performance plot in Fig.4.5 shows a drop in mean square error from 10^4 to
10^-3 within 1000 epochs of learning (training, testing and validating stages). The error
gradient (Fig.4.6) between the targets and outputs converged from 10^5 to 10^-2 at 1000
epochs. For each objective an error histogram and regression plots were drawn
(Fig.4.7-Fig.4.16); in each error plot the mean error was close enough to the zero-error line,
depicting that the error was minimized to its least possible value. A linear regression fit
was obtained between targets and outputs while learning each objective, with a regression
coefficient close to 1.

The same network was utilized for learning the machining statistics of the 45 HRC steel.
The network description and the calibrated weights and biases are listed in Tables 4.2.5 (c)
and 4.2.5 (d). The mean square error while learning converged to a similar order (Fig.4.17)
to that of 35 HRC. In the training state, the error gradient gave sharp slopes, though at the
final epoch the gradient converged to 1.125 with zero validation failures. The error
histograms and regression plots for each vector are in Fig.4.19-Fig.4.28; the error for each
machining vector was of the order of 10^-2.

6.2.2 Adaptive Neuro-Fuzzy Inference Technique

In this network two adaptive layers with learnable parameters were used, viz., the
antecedent and consequent layers. Three different clustering techniques were applied to the
membership functions, i.e., grid partitioning, subtractive clustering and Fuzzy C-means
clustering. Each clustering technique had different weights (connections), and the fuzzy
structure of each technique varied. The analytical description of the hybrid neuro-fuzzy
techniques is discussed in brief. For each machining vector, the fuzzy structure learns the
machining data sequentially. From the RSM model, one thousand machining data sets were
generated for learning. These vectors were randomly permutated, split into training (850
samples) and testing (150 samples) sets, and validated on the complete machining vector.

6.2.2 (a) ANFIS Grid partition

The applied structure is illustrated in Fig.4.30 and the implemented fuzzy structure is
tabulated in Table 4.3.4 (a). For each machining vector the following plots were drawn.

(i) Simultaneous plots between the expected targets and outputs.
(ii) Errors between the targets and outputs.
(iii) Normal density fit between the mean error and the standard deviation.

At each epoch the error between targets and outputs while learning was calculated, and the
error gradients were minimized through the hybrid learning and back-propagation
algorithms.

In Table 4.3.4 (b) the statistical errors in training, testing and validating for each objective
in 35 HRC are listed; the mean error for each vector was as low as the order of 10^-5.
These statistical results are plotted in Fig.4.36-Fig.4.60.


The same architecture and fuzzy structure were utilized for learning the machining pattern
for 45 HRC. The statistical errors in training, testing and validation for each objective are
tabulated in Table 4.3.4 (c); the error was as low as 10^-5, and the plots for each objective
are illustrated in Fig.4.61-Fig.4.85.

6.2.2 (b) ANFIS Subtractive Clustering

The developed structure for subtractive clustering is illustrated in Fig.4.86, the implemented
fuzzy structure is tabulated in Table 4.3.16 (a), and Tables 4.3.16 (b) and 4.3.16 (c) list the
statistical errors in training, testing and validating for each objective in 35 HRC and
45 HRC respectively; these statistical results are plotted for each objective.

In comparison to grid-partition clustering, the mean error in subtractive clustering is
reduced to the order of 10^-7 for 35 HRC and 10^-6 for 45 HRC, showing an improvement
in learning ability. The plots for each objective and their corresponding figures for the
35 HRC and 45 HRC steels acquired from ANFIS subtractive clustering are illustrated in
Fig.4.87-Fig.4.136.

6.2.2 (c) ANFIS Fuzzy C-mean Clustering

The developed structure for FCM clustering is shown in Fig.4.137 and the implemented
fuzzy structure is tabulated in Table 4.3.27 (a). Tables 4.3.27 (b) and 4.3.27 (c) list the
statistical errors in training, testing and validating for both steels. The learning error in
FCM was further reduced to the order of 10^-8 in both steels, though not for all vectors; the
mean error over all vectors was of a similar order (10^-6), showing results similar to those
of subtractive clustering. The plots for each objective and their corresponding figures for
the 35 HRC and 45 HRC steels obtained from ANFIS FCM are shown in
Fig.4.138-Fig.4.187.
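The FCM alternation that places these membership-function clusters can be sketched as below; the fuzzifier m = 2 and the cluster count are illustrative assumptions (the exact fuzzy structure is that of table 4.3.27 (a)):

import numpy as np

rng = np.random.default_rng(4)

def fcm(X, n_clusters=5, m=2.0, iters=100):
    """Fuzzy C-means: alternate the membership and centre updates."""
    U = rng.random((len(X), n_clusters))
    U = U / U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy-weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)                       # standard FCM membership update
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return C, U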

6.2.3 Comparative Evaluation of the predictive technique on Experimental statistics

The developed predictive models were tested on the experimental statistics, and a statistical
analysis of the prediction errors was made for each model. Comparison graphs for each
objective in each technique were plotted (Fig.4.188-Fig.4.219). The mean errors and their
standard deviations for each objective in both steels are listed in Table 6.5, and the mean
squared error (MSE) and root mean squared error (RMSE) are tabulated in Table 6.6.

Table 6.5 Mean error and standard deviation between experimental and predicted statistics

Machining   Error Mean: NN, ANFIS Grid, ANFIS Sub, ANFIS FCM  |  Error STD: NN, ANFIS Grid, ANFIS Sub, ANFIS FCM
Objective

Comparison errors for 35 HRC AISI 4340 steel
Ra (μm)  0.0442 -0.2308 -0.226 -0.2339  |  1.0814 0.5964 0.5964 0.5964
Ft (N)   0.186.6 17.49 15.29 18.19  |  126.4 106.9 105.36 104.9
Fa (N)   103.98 157 157.79 157  |  33.77 80 82.3 79.42
Fr (N)   1.1318 32.57 27.89 32.23  |  93.95 41.29 40.81 42.20

Comparison errors for 45 HRC AISI 4340 steel
Ra (μm)  -0.1700 -0.9278 -0.947 -0.856  |  0.2769 0.5964 0.594 0.596
Ft (N)   -57.23 -22.35 -23.106 -21.7  |  128.3 77.58 76.64 78.8
Fa (N)   54.67 -7.5709 -6.9912 -6.8003  |  33.57 40.08 39.58 38.45
Fr (N)   0.415 -7.69 -8.525 -6.63  |  14.0720 38.35 36.57 36.89

Table 6.6 MSE and RMSE between experimental and predicted statistics

Machining   MSE: NN, ANFIS Grid, ANFIS Sub, ANFIS FCM  |  RMSE: NN, ANFIS Grid, ANFIS Sub, ANFIS FCM
Objective

Comparison errors for 35 HRC AISI 4340 steel
Ra (μm)  1.1130 0.3912 0.3875 0.392  |  1.0550 0.6255 0.6225 0.6266
Ft (N)   594.81 1.116×10^4 1.07×10^4 1.08×10^4  |  24.38 105.66 103 103.9
Fa (N)   1.89×10^4 3.09×10^4 3.13×10^4 3.07×10^4  |  109.07 176 177.016 175.32
Fr (N)   93.95 2.6×10^3 2.36×10^3 2.73×10^3  |  9.69 51.78 48.58 52.25

Comparison errors for 45 HRC AISI 4340 steel
Ra (μm)  0.1018 1.198 1.23 1.0714  |  0.3190 1.0949 1.109 1.0351
Ft (N)   1.89×10^4 6.128×10^3 6.11×10^3 6.38×10^3  |  137.5 78.85 78.19 79.8
Fa (N)   4.06×10^3 1.58×10^3 1.53×10^3 6.36×10^-6  |  63.72 39.8 39.213 0.4664
Fr (N)   188.29 1.45×10^3 1.33×10^3 1.33×10^3  |  13.72 38.17 36.65 36.56

Though the statistical errors of the neural network were relatively low in comparison to the
ANFIS models, the relative change in error at each prediction point in the neural network is
higher than in the ANFIS models, and the curve traced by the neural network for both steels
did not match well when compared to the curves traced by the ANFIS models. This
observation can be inferred by analyzing the corresponding comparison graphs for each
objective in the specific techniques. The error plots for each technique are drawn in
Fig.4.188-Fig.4.219.

The results obtained from the prediction models were accurate enough to predict around the
experimental statistics, though the relative degree of accuracy varied for the different
learning techniques. The mean error of the neural network was less than that of the
neuro-fuzzy models, but its prediction curve could not trace the experimental curve well;
conversely, the mean error of ANFIS was comparatively larger, but its prediction curve
traced the experimental curve well.

6.3 Synergies of CI

Combinations of the optimization techniques were applied to the prediction models to
further improve the results in pattern learning of the current machining statistics. The
combinations of synergies applied are shown in Fig.5.1, and the results of each combination
are discussed individually.


6.3.1 EA-NN

For the EA-NN combination, the NSGA-II optimization technique was applied to optimize
the weights. The applied architecture is shown in Fig.5.2, and the network description and
calibrated weights of the combined network are listed in Tables 5.2.1 (a) and 5.2.1 (b). The
optimized populations from NSGA-II were given as input to the neural network at each
epoch, and depending on the drop in the error gradients the weights were adjusted and fed
back to NSGA-II for optimization. The optimized solution space (population of 1000) of
NSGA-II was permutated randomly and split into training (700), testing (150) and
validation (150) samples. These samples adjust the weights accordingly at every epoch.

The mean square error of the network converged to the order of 10^-3 in just 300 epochs,
exhibiting good convergence of the network error. The network performance plot is in
Fig.5.3 and the training state plots are illustrated in Fig.5.4; the error gradient dropped to
0.194, which is lower than that of the plain neural network. Likewise, the error and
regression plots for each vector are in Fig.5.5-Fig.5.15.

The same architecture was utilized for the 45 HRC steel; the learning performance was
better than the exclusive neural network, with mean square errors as low as 10^-4 in the
training, testing and validation stages. The performance plot is drawn in Fig.5.16 and the
training state plots are in Fig.5.17; the error gradients were also found to be minimized to a
gradient as low as 10-, and the error and regression plots for each vector are in
Fig.5.18-Fig.5.27.

6.3.2 SI-NN

The applied architecture for the PSO-NN combination is illustrated in Fig.5.29. The best
solutions in the archive elements of the PSO were utilized for network learning. The
archive had 500 elements, out of which 350 samples were utilized for training, 75 samples
for testing and 75 samples for validation. The network description and calibrated weights
are tabulated in Tables 5.3.1 (a) and 5.3.1 (b). Two hidden layers were defined for the
network to work accurately, since the exemplars provided by PSO were fewer compared to
NSGA-II. For each input and output vector 20 weights were utilized for mapping, and 10
biases were added at each hidden layer.


The network performance of the combined SI-NN was better than that of the plain NN, with
the mean square error dropping as low as 10^-4 in all the learning states and the error
gradient going as low as 10^-4 at the final epoch. The performance plot of the network is
illustrated in Fig.5.30 and the training state plots are in Fig.5.31. The overall network
performance was better than the exclusive neural network.

6.3.3 ANFIS Synergies

To enhance ANFIS, GA and PSO were applied to optimize the membership functions; the FCM structure from the previously developed model was utilized for the combination. The strategy utilized for ANFIS-GA is explained in Fig. 5.42; the architecture utilized is the same as that of ANFIS-FCM. One thousand learning samples were generated from the RSM model; these exemplars were randomly permuted and used for learning. Of the thousand exemplars, 700 were split into the training set and 300 into the testing set. Since the fuzzy system maps only one vector per training, the multi-objective NSGA was reduced to a single-objective GA with no change in the genetic and selection operators.
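A minimal sketch of what the GA actually manipulates in this combination: the Gaussian membership-function parameters are flattened into one chromosome, and the fitness is the training error of the rebuilt fuzzy model. The helper names and the toy evaluation below are illustrative assumptions; the real ANFIS evaluation replaces evaluate_fis:

    import numpy as np

    def get_fis_params(mfs):
        # mfs: list of (sigma, centre) pairs, one Gaussian MF per fuzzy set
        return np.array([p for mf in mfs for p in mf], dtype=float)

    def set_fis_params(flat):
        # flat chromosome -> list of (sigma, centre) pairs
        return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]

    def evaluate_fis(mfs, x):
        # placeholder evaluation: firing-strength-weighted average of MF centres
        w = np.array([np.exp(-0.5 * ((x - c) / s) ** 2) for s, c in mfs])
        c = np.array([c for _, c in mfs])
        return float((w * c).sum() / (w.sum() + 1e-12))

    def cost(chrom, xs, ts):
        # fitness used by the GA: training MSE of the rebuilt fuzzy model
        mfs = set_fis_params(chrom)
        e = np.array([t - evaluate_fis(mfs, x) for x, t in zip(xs, ts)])
        return float(np.mean(e ** 2))

    xs = np.linspace(0.0, 1.0, 50)
    ts = xs ** 2                                   # toy single-output target
    chrom = get_fis_params([(0.2, 0.1), (0.2, 0.5), (0.2, 0.9)])
    print("initial MSE:", cost(chrom, xs, ts))     # the GA minimizes this cost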

From the error Table 5.3.1 (a), the mean error for each vector of the 35 HRC steel dropped to the order of 10^-13, which is far better than exclusively applied ANFIS (10^-6), demonstrating excellent improvement in pattern learning ability. The error plots for each vector are drawn in Fig. 5.44–Fig. 5.64. Similarly, for the 45 HRC steel, from Table 5.3.7 the mean error for each vector dropped to the order of 10^-13, exhibiting better learning than exclusive ANFIS.

The developed strategy for the ANFIS-PSO technique is shown in Fig. 5.84; the thousand samples employed in ANFIS-GA were used for ANFIS-PSO learning. From Table 5.4.7, the learning error was reduced to the order of 10^-14, exhibiting better learning ability. The error plots for both techniques are represented in Fig. 5.44–Fig. 5.125.

6.3.4 Comparative Evaluation of the Predictive Techniques on Experimental Statistics

The developed predictive models were tested on the experimental statistics, and a statistical analysis of the prediction errors was made for each model. Comparison graphs for each objective in each technique were plotted (Fig. 4.188–Fig. 4.219). The mean errors and


their standard deviations for each objective in both steels are listed in Table 6.7, and the mean squared error (MSE) and root mean squared error (RMSE) are tabulated in Table 6.8.
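For reference, the error measures reported in Tables 6.7 and 6.8 follow the standard definitions over the n experimental points, with e_i the difference between the experimental and predicted value (the sample standard deviation is assumed here):

\[
\bar{e} = \frac{1}{n}\sum_{i=1}^{n} e_i, \qquad
\sigma_e = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(e_i - \bar{e}\right)^2}, \qquad
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^{2}, \qquad
\mathrm{RMSE} = \sqrt{\mathrm{MSE}}.
\]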

Table 6.7 Mean error and standard deviation between experimental and predicted statistics

Machining        Error mean                                    Error standard deviation
objective    NSGA-NN   PSO-NN   ANFIS-GA   ANFIS-PSO     NSGA-NN   PSO-NN   ANFIS-GA   ANFIS-PSO

Comparison errors for 35 HRC AISI 4340 steel
Ra (µm)      -0.221    1.749    -0.1572    0.1578        0.979     0.732    0.4507     0.4494
Ft (N)       42.67     -53.25   17.6       17.13         86.21     104.6    57.06      57.16
Fa (N)       156.2     138.22   136.152    135.9         49.56     40.44    51.87      52
Fr (N)       -0.723    40.08    11.065     10.89         14.77     64.63    20.54      70.79

Comparison errors for 45 HRC AISI 4340 steel
Ra (µm)      -0.355    --       0.0371     0.0372        0.378     0.4507   0.2796     0.2798
Ft (N)       -57.23    --       -21.84     -27.34        128.36    57.06    107.08     106.19
Fa (N)       -13.57    --       -45.95     -46.83        38.046    51.87    41.46      41.55
Fr (N)       -11.74    --       -0.91      -0.747        35.99     20.54    36.86      36.77

Table 6.8 Mean square error and root mean square error between experimental and predicted statistics

Machining        Mean square error                                     Root mean square error
objective    NSGA-NN    PSO-NN     ANFIS-GA    ANFIS-PSO       NSGA-NN   PSO-NN   ANFIS-GA   ANFIS-PSO

Comparison errors for 35 HRC AISI 4340 steel
Ra (µm)      0.960      3.56       0.2177      0.2168          0.980     1.88     0.466      0.4656
Ft (N)       8.88x10^3  1.32x10^4  3.4x10^3    3.398x10^3      94.24     115.04   58.3       58.29
Fa (N)       2.67x10^4  2.06x10^4  2.109x10^4  2.11x10^4       114.49    43.73    145.2      145.4
Fr (N)       206.55     5.57x10^3  523         517             14.37     74.66    22.87      22.75

Comparison errors for 45 HRC AISI 4340 steel
Ra (µm)      0.262      --         0.075       0.075           0.512     --       0.2751     0.275
Ft (N)       1.89x10^4  --         1.64x10^4   1.14x10^4       137.58    --       107.76     107.05
Fa (N)       1.55x10^3  --         3.74x10^3   3.83x10^3       39.488    --       61.19      61.91
Fr (N)       1.36x10^3  --         1.29x10^3   1.28x10^3       36.99     --       35.93      35.84


From Table 6.7, the values of mean error and standard deviation for the ANFIS combinations were lower than those of the neural network synergies. Similarly (Table 6.8), the mean square error and root mean square error for the ANFIS synergies were lower than for the neural network synergies. The amplitude of the error between experimental and predicted statistics was also smaller than that of the neural network.

The ANFIS combinations performed better than the NN combinations; this enhancement is possibly due to the optimization of the adaptive layers. Though the statistical errors do not differ greatly in magnitude, the curves traced by the ANFIS synergies were found to follow the experimental data better than the curves traced by the NN synergies. The error plots for each technique are illustrated in Fig. 5.126–Fig. 5.153.

In brief, the combinations of ANFIS performed better than the combinations of the neural network. Nonetheless, all the developed prediction techniques were accurate enough to learn the experimental machining statistics.


CHAPTER 7

CONCLUSION

In overview, this work dealt with optimizing the machining performance in hard turning of AISI 4340 steel and with building accurate predictive models by applying computational intelligence learning techniques.

7.1 Optimization Trends

From the results of optimization, the best trade-offs among machining objectives were obtained by the NSGA-II solution space, followed by PSO, while the solution space of SPEA2 was biased toward surface roughness, recommending lower tool life in most of its best solutions.
The difference in the trade-offs recommended by the algorithms is due to the degree of elitism they demonstrate. Elitism is similar to a pseudo-memory associated with the algorithm by which it recognizes the best individual, so that it does not search for the same individual again in consecutive generations, reducing the exploration time spent finding the best individual.
The elitism in NSGA-II was implemented by tournament selection, where the consecutive best individuals selected at each tournament are preserved by replacement of chromosomes in the intermediate solution. Elitism was performed on a single set of population, i.e., both the fittest individuals and the intermediate chromosomes were members of the same chromosome set. Achieving elitism with a single set of chromosomes, however, slowed down convergence during recursion. This drawback was overcome by providing randomness in the genetic operators: the mutation operator acted as an agent of entropy in the solution space whenever the elite individuals settled into local minima.
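A minimal sketch of the mechanism described above, with binary tournament selection, arithmetic crossover, mutation as the entropy agent, and the elite carried into the next generation; the population size, rates, and toy cost function are illustrative assumptions, not the thesis settings:

    import random

    def fitness(x):
        return (x - 0.3) ** 2          # toy single-objective cost to minimize

    def tournament(pop):
        a, b = random.sample(pop, 2)   # binary tournament: better of two random picks
        return a if fitness(a) < fitness(b) else b

    random.seed(0)
    pop = [random.uniform(-1.0, 1.0) for _ in range(20)]

    for gen in range(50):
        elite = min(pop, key=fitness)                  # elitism: remember the best
        children = []
        while len(children) < len(pop) - 1:
            p1, p2 = tournament(pop), tournament(pop)
            child = 0.5 * (p1 + p2)                    # simple arithmetic crossover
            if random.random() < 0.2:                  # mutation as "agent of entropy"
                child += random.gauss(0.0, 0.1)
            children.append(child)
        pop = children + [elite]                       # elite re-enters the new set

    print("best individual:", min(pop, key=fitness))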
In PSO, elitism was carried out by introducing the pbest and gbest population sets, where the local minima are stored in pbest and the global minima in gbest. The repository elements in PSO were the elite members, which contained the gbest populations. The elite mechanism in PSO was dynamic compared to NSGA-II: in


PSO, the gbest and pbest attractors in the neighborhood topology acted as initiators for gathering elite members. The problem of local minima was overcome by the swarm movement equation. Unlike NSGA, PSO does not have genetic operators, so when the size of the elite members grew too large, the inertia damping coefficient controlled it; for better convergence, a pseudo-mutation of the particles was introduced.
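For reference, the swarm movement referred to here is the standard particle swarm update of Kennedy and Eberhart [76]; with the coefficient values used in the appendix pseudocode (w = 1 with damping factor 0.99, c1 = 1, c2 = 2) it reads

\[
v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \left(p_i^{\mathrm{best}} - x_i^{t}\right) + c_2 r_2 \left(g^{\mathrm{best}} - x_i^{t}\right), \qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
\]

where r_1 and r_2 are uniform random numbers in [0, 1] and w is multiplied by the damping factor after every iteration.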
The similarity of the solution spaces of NSGA-II and PSO can be justified by finding analogies between the two algorithms. The best parents in NSGA acted as pseudo particle attractors in recognizing elite members, which is similar to the gbest and pbest attractors in the grid topology, and the genetic operator is similar to the swarm movement operator. The elite members in both algorithms were the same in number (i.e., 500), which further supports the analogy.
When the solution space of SPEA2 is compared, however, its fittest individuals are quite different from those of the NSGA and PSO algorithms: it shows an inclination towards one objective while weakening the other, even though elitism is fairly applied. This behavior stems from the niche behavior of the fit individuals in the archive (the elite members).

7.2 Prediction Trends

Two variants of learning techniques were applied for recognizing patterns in the machining statistics. Initially, the neural network and ANFIS were applied exclusively for learning the machining exemplars. The accuracy of the learnt networks was tested on the experimental statistics; both techniques gave similar results in prediction, with the neural network giving relatively lower errors in comparison to ANFIS.
Among the different clustering techniques used in ANFIS, the FCM clustering technique gave relatively lower learning errors, hence demonstrating the best learning ability of the three techniques. To further improve the learning ability, combinations of optimization and learning techniques were utilized; both EA-coupled and SI-coupled learning techniques were applied.
For both learning techniques (NN and ANFIS), a collaborative combination was utilized, with the optimization technique secondary and the learning technique primary. In the neural network, NSGA-II and PSO were

used to optimize the weights in the hidden layer and reduce the error between targets and outputs; the prediction results obtained from NSGA-NN were better in comparison to PSO-NN.
The ANFIS combination techniques exhibited better learning trends than ANFIS applied exclusively. Both combinations, ANFIS-GA and ANFIS-PSO, gave better results in comparison to the experimental statistics; the accuracy of the two techniques was similar, with only a minor difference in learning error. In comparison to the neural network synergies, the ANFIS synergies gave better results on the experimental statistics and illustrated enhanced learning.
To summarize, the combined predictive models performed better than the exclusive techniques, and among the optimization techniques the NSGA-II and PSO algorithms gave relatively good trade-offs in MOOPs when compared to the SPEA2 algorithm.

Future Scope

Unexplored metaheuristic techniques can be applied for even better trade-offs in MOOPs, and further enhancement of the learning techniques is possible by introducing stronger coupling between the prediction and optimization techniques.

The strategy applied to the current machining system can be generalized to other conventional and non-conventional machining systems.


REFERENCES
[1] Andries P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons Ltd, ISBN 978-0-470-03561-0.
[2] Satish Chinchanikar, S.K. Choudhury, Effect of work material hardness and cutting parameters on performance of coated carbide tool when turning hardened steel: an optimization approach, Measurement 46 (2013), pp. 1572–1584.
[3] Wear behaviours of single-layer and multi-layer coated carbide inserts in high speed machining of hardened AISI 4340 steel, Journal of Mechanical Science and Technology 27 (5) (2013), pp. 1451–1459.
[4] Satish Chinchanikar, S.K. Choudhury, Hard turning using HiPIMS-coated carbide tools: wear behaviour under dry and minimum quantity lubrication (MQL), Measurement 55 (2014), pp. 536–548.
[5] Satish Chinchanikar, S.K. Choudhury, Investigations on machinability aspects of hardened AISI 4340 steel at different levels of hardness using coated carbide tools, Int. Journal of Refractory Metals and Hard Materials 38 (2013), pp. 124–133.
[6] Gaurav Bartarya, S.K. Choudhury, State of the art in hard turning, International Journal of Machine Tools & Manufacture 53 (2012), pp. 1–14.
[7] Satish Chinchanikar, S.K. Choudhury, Machining of hardened steel: experimental investigations, performance modeling and cooling techniques: a review, International Journal of Machine Tools & Manufacture 89 (2015), pp. 95–109.
[8] Satish Chinchanikar, S.K. Choudhury, Cutting force modeling considering tool wear effect during turning of hardened AISI 4340 alloy steel using multi-layer TiCN/Al2O3/TiN-coated carbide tools, Int. J. Adv. Manuf. Technol. (2016) 83:1749–1762, DOI 10.1007/s00170-015-7662-5.
[9] Predictive modeling for flank wear progression of coated carbide tool in turning hardened steel under practical machining conditions, Int. J. Adv. Manuf. Technol. (2015) 76:1185–1201, DOI 10.1007/s00170-014-6285-6.
[10] T.G. Ansalam Raj & V.N. Narayanan Namboothiri, An improved genetic algorithm for the prediction of surface finish in dry turning of SS 420 materials, Int J Adv Manuf Technol (2010) 47:313–324, DOI 10.1007/s00170-009-2197-2.


[11] Hesam Shahali, M. Reza Soleymani Yazdi, Aminollah Mohammadi and Ehsan Iimanian, Optimization of surface roughness and thickness of white layer in wire electrical discharge machining of DIN 1.4542 stainless steel using micro genetic algorithm and signal to noise ratio techniques, Proc IMechE Part B: J Engineering Manufacture 226(5), pp. 803–812, IMechE 2012.
[12] A. Garg, L. Rachmawati & K. Tai, Classification-driven model selection approach of genetic programming in modelling of turning process, Int J Adv Manuf Technol (2013) 69, pp. 1137–1151.
[13] Khaider Bouacha, Mohamed Athmane Yallese, Samir Khamel, Salim Belhadi, Analysis and optimization of hard turning operation using cubic boron nitride tool, Int. Journal of Refractory Metals and Hard Materials 45 (2014), pp. 160–178.
[14] Yiğit Karpat & Tuğrul Özel, Multi-objective optimization for turning processes using neural network modelling and dynamic-neighbourhood particle swarm optimization, Int J Adv Manuf Technol (2007) 35, pp. 234–247.
[15] M. Al-Ahmari, Prediction and optimisation models for turning operations, ISSN 0020-7543.
[16] Adel T. Abbas, Karim Hamza, Mohamed F. Aly and Essam A. Al-Bahkali, Multiobjective optimization of turning cutting parameters for J-steel material, Advances in Materials Science and Engineering, Volume 2016, Article ID 6429160, 8 pages, Hindawi Publishing.
[17] Zhenhua Wang, Juntang Yuan, Zengbin Yin and Chao Li, Study on high-speed cutting parameters optimization of AlMn1Cu based on neural network and genetic algorithm, Advances in Mechanical Engineering 2016, Vol. 8(4), pp. 1–12.
[18] Yunguang Zhou, Yadong Gong, Zongxiao Zhu, Qi Gao & Xuelong Wen, Modelling and optimisation of surface roughness from micro grinding of nickel-based single crystal super alloy using the response surface methodology and genetic algorithm, Int J Adv Manuf Technol (2016) 85:2607–2622.
[19] Shahram Saeidi, Maghsud Solimanpur, Iraj Mahdavi & Nikbakhsh Javadian, A multi-objective genetic algorithm for solving cell formation problem using a fuzzy goal programming approach, Int J Adv Manuf Technol (2014) 70, pp. 1635–1652.


[20] N. Alberti and G. Perrone, Multi-pass machining optimization by using fuzzy possibilistic programming and genetic algorithms, Proc Instn Mech Engrs Vol 213 Part B.
[21] Mohinder P. Garg, Ajai Jain and Gian Bhushan, Modelling and multi-objective optimization of process parameters of wire electrical discharge machining using non-dominated sorting genetic algorithm-II, Proc IMechE Part B: J Engineering Manufacture 226(12), pp. 1986–2001.
[22] A. Pramanick, N. Saha, P.P. Dey and P.K. Das, Wire EDM process modelling with artificial neural network and optimization by grey entropy-based Taguchi technique for machining pure zirconium di-boride, Journal of Manufacturing Technology Research, ISSN 1943-8095.
[23] M.A. Sahali, I. Belaidi & R. Serra, New approach for robust multi-objective optimization of turning parameters using probabilistic genetic algorithm, Int J Adv Manuf Technol (2016) 83:1265–1279.
[24] J.S. Dureja, V.K. Gupta, Vishal S. Sharma, Manu Dogra and Manpreet S. Bhatti, A review of empirical modelling techniques to optimize machining parameters for hard turning applications, Proc IMechE Part B: J Engineering Manufacture 2016, Vol. 230(3), pp. 389–404.
[25] H. Ganesan, G. Mohankumar, Optimization of machining techniques in CNC turning centre using genetic algorithm, Arab J Sci. Eng. (2013) 38, pp. 1529–1538.
[26] I.S. Jawahir, X. Wang, Development of hybrid predictive models and optimization techniques for machining operations, Journal of Materials Processing Technology 185 (2007), pp. 46–59.
[27] K.A. Sundararaman, K.P. Padmanaban and M. Sabareeswaran, Optimization of machining fixture layout using integrated response surface methodology and evolutionary techniques, Proc IMechE Part C: J Mechanical Engineering Science 2016, Vol. 230(13), pp. 2245–2259.
[28] Antonio Costa, Giovanni Celano & Sergio Fichera, Optimization of multi-pass turning economies through a hybrid particle swarm optimization technique, Int. J Adv. Manuf. Technol. (2011) 53:421–433, DOI 10.1007/s00170-010-2861-6.


[29] S. Bharathi Raja, N. Baskar, Application of particle swarm optimization technique for achieving desired milled surface roughness in minimum machining time, Expert Systems with Applications 39 (2012) 5982.
[30] S. Bharathi Raja, N. Baskar, Optimization techniques for machining operations: a retrospective research based on various mathematical models, Int. J. Adv. Manuf. Technol. (2010) 48:1075–1090.
[31] S. Bharathi Raja, N. Baskar, Particle swarm optimization technique for determining optimal machining parameters of different work piece materials in turning operation, Int. J Adv. Manuf. Technol. (2011) 54:445–463, DOI 10.1007/s00170-010-2958.
[32] M. Chandrasekaran, M. Muralidhar, C. Murali Krishna & U.S. Dixit, Application of soft computing techniques in machining performance prediction and optimization: a literature review, Int. J. Adv. Manuf. Technol. (2010) 46:445–464, DOI 10.1007/s00170-009-2104.
[33] G. Prabhaharan, K.P. Padmanaban, R. Krishnakumar, Machining fixture layout optimization using FEM and evolutionary techniques, Int. J Adv. Manuf. Technol. (2007) 32:1090–1103, DOI 10.1007/s00170-006.
[34] Farahnakian, M., Razfar, M.R., Moghri, M., & Asadnia, M. (2011), The selection of milling parameters by the PSO-based neural network modeling method, International Journal of Advanced Manufacturing Technology, 1–12.
[35] Yang, W., Guo, Y., & Liao, W. (2011a), Optimization of multi-pass face milling using a fuzzy particle swarm optimization algorithm, International Journal of Advanced Manufacturing Technology, 54(1-4), 45.
[36] Escamilla, I., Perez, P., Torres, L., Zambrano, P., & Gonzalez, B. (2009), Optimization using neural network modeling and swarm intelligence in the machining of titanium (Ti-6Al-4V) alloy, International Conference on Artificial Intelligence, MICAI 2009, 33–38.
[37] Li, J.G., Yao, Y.X., Gao, D., Liu, C.Q., & Yuan, Z.J. (2008), Cutting parameters optimization by using particle swarm optimization (PSO), Applied Mechanics and Materials Vols. 10–12 (2008), pp. 879–883.


[38] Chen, Z. and Li, Y. (2008), An improved particle swarm algorithm and its application in grinding process optimization, Proceedings of the 27th Chinese Control Conference, 2–5.
[39] Rajkamal Shukla, Dinesh Singh, Experimentation investigation of abrasive water jet machining parameters using Taguchi and evolutionary optimization techniques, Swarm and Evolutionary Computation journal.
[40] İlhan Asiltürk, Mehmet Çunkaş, Modeling and prediction of surface roughness in turning operations using artificial neural network and multiple regression method, Expert Systems with Applications 38 (2011) 5826–5832.
[41] N. Senthilkumar, T. Tamizharasan, Flank wear and surface roughness prediction in hard turning via artificial neural network and multiple regressions.
[42] Miron Zapciu, Jean-Yves K'nevez, Alain Gérard, Olivier Cahuc, Claudiu Florinel Bisu, Dynamic characterization of machining systems, Int. J. Adv. Manuf. Technol. (2011) 57:73–83, DOI 10.1007/s00170-011-3277-7.
[43] Dilbag Singh & P. Venkateswara Rao, Flank wear prediction of ceramic tools in hard turning, Int J Adv Manuf Technol (2010) 50:479–493, DOI 10.1007/s00170-010-2550-5.
[44] Yahya Isik, Investigating the machinability of tool steels in turning operations, Materials and Design 28 (2007), pp. 1417–1424.
[45] Hamdi Aouici, Mohamed Athmane Yallese, Kamel Chaoui, Tarek Mabrouki, Jean-François Rigal, Analysis of surface roughness and cutting force components in hard turning with CBN tool: prediction model and cutting conditions optimization, Measurement 45 (2012), pp. 344–353.
[46] Suha K. Shihab, Zahid A. Khan, Aas Mohammad and Arshad Noor Siddiquee, Investigations on the effect of CNC dry hard turning process parameters on surface
[47] Waleed Bin Rashid, Saurav Goel, J. Paulo Davim & Shrikrishna N. Joshi, Parametric design optimization of hard turning of AISI 4340 steel (69 HRC), Int J Adv Manuf Technol (2016) 82, pp. 451–462.


[48] Abhijit Saha, Subhas Chandra Mondal, Multi-objective optimization in WEDM process of nanostructured hard-facing materials through hybrid techniques, Measurement 94 (2016), pp. 46–59.
[49] Emre Yücel and Mustafa Günay, Modelling and optimization of the cutting conditions in hard turning of high-alloy white cast iron (Ni-Hard), Proc IMechE Part C: J Mechanical Engineering Science 227(10), pp. 2280–2290.
[50] İlhan Asiltürk, Süleyman Neşeli, Multi-response optimization of CNC turning parameters via Taguchi method-based response surface analysis, Measurement 45 (2012), pp. 785–794.
[51] Gaurav Bartarya, S.K. Choudhury, State of the art in hard turning, International Journal of Machine Tools & Manufacture 53 (2012), pp. 1–14.
[52] Ashvin J. Makadia, J.I. Nanavati, Optimisation of machining parameters for turning operations based on response surface methodology, Measurement 46 (2013), pp. 1521–1529.
[53] İlhan Asiltürk, Harun Akkuş, Determining the effect of cutting parameters on surface roughness in hard turning using the Taguchi method, Measurement 44 (2011), pp. 1697–1704.
[54] Aman Aggarwal and Hari Singh, Optimization of machining techniques: a retrospective and literature review, Sadhana Vol. 30, Part 6, December 2005, pp. 699–711.
[55] Chinmaya R. Dandekar, Yung C. Shin, John Barnes, Machinability improvement of titanium alloy (Ti-6Al-4V) via LAM and hybrid machining, International Journal of Machine Tools & Manufacture 50 (2010), pp. 174–182.
[56] X. Wang, Z.J. Da, A.K. Balaji, I.S. Jawahir, Performance-based predictive models and optimization methods for turning operations and applications: Part 3: optimum cutting conditions and selection of cutting tools, Journal of Manufacturing Processes Vol. 9/No. 1, 2007.
[57] Devinder Priyadarshi and Rajesh Kumar Sharma, Effect of type and percentage of reinforcement for optimization of the cutting force in turning of aluminum matrix nano-composites using response surface methodologies, Journal of Mechanical Science and Technology 30 (3) (2016), pp. 1095–1101.
[58] Ersan Aslan, Necip Çamuşcu, Burak Birgören, Design optimization of cutting parameters when turning hardened AISI 4140 steel (63 HRC) with Al2O3 + TiCN mixed ceramic tool, Materials and Design 28 (2007), pp. 1618–1622.
[59] Hashimoto, Y.B. Guo, A.W. Warren, Surface integrity difference between hard turned and ground surfaces and its impact on fatigue life, The Timken Company, Canton, Ohio 44706, USA.
[60] T. Ozel, Y. Karpat, A. Srivastava, Hard turning with variable micro-geometry PcBN tools, CIRP Annals - Manufacturing Technology 57 (2008), pp. 73–76.
[61] Ravinder Kumar, Santram Chauhan, Study on surface roughness measurement for turning of Al 7075/10/SiCp and Al 7075 hybrid composites by using response surface methodology (RSM) and artificial neural networking (ANN).
[62] Mozammel Mia, Nikhil Ranjan Dhar, Prediction of surface roughness in hard turning under high pressure coolant using artificial neural network, Measurement 92 (2016), pp. 464–474.
[63] Fabrício José Pontes, Anderson Paulo de Paiva, Pedro Paulo Balestrassi, João Roberto Ferreira, Messias Borges da Silva, Optimization of radial basis function neural network employed for prediction of surface roughness in hard turning process using Taguchi's orthogonal arrays, Expert Systems with Applications 39 (2012), pp. 7776–7787.
[64] Vinayak Neelakanth Gaitonde, S.R. Karnik, Luis Figueira & J. Paulo Davim, Performance comparison of conventional and wiper ceramic inserts in hard turning through artificial neural network modeling, Int J Adv Manuf Technol (2011) 52:101–114, DOI 10.1007/s00170-010-2714-3.
[65] Xiaoyu Wang, Wen Wang, Yong Huang, Nhan Nguyen, Design of neural network-based estimator for tool wear modeling in hard turning, J Intell Manuf (2008) 19:383–396, DOI 10.1007/s10845-008-0090-8.


[66] D. Umbrello, G. Ambrogio, L. Filice, R. Shivpuri, An ANN approach for predicting subsurface residual stresses and the desired cutting conditions during hard turning, Journal of Materials Processing Technology 189 (2007) 14315.
[67] Ravinder Kumar, Santram Chauhan, Study on surface roughness measurement for turning of Al 7075/10/SiCp and Al 7075 hybrid composites by using response surface methodology (RSM) and artificial neural networking (ANN).
[68] Ali R. Yildiz, Hybrid Taguchi-differential evolution algorithm for optimization of multi-pass turning operations, Applied Soft Computing 13 (2013), pp. 1433–1439.
[69] F. Kara, K. Aslantaş, A. Çiçek, ANN and multiple regression method-based modelling of cutting forces in orthogonal machining of AISI 316L stainless steel, Neural Comput & Applic (2015) 26:237–250, DOI 10.1007/s00521-014-1721-.
[70] Şener Karabulut, Optimization of surface roughness and cutting force during AA7039/Al2O3 metal matrix composites milling using neural networks and Taguchi method, Measurement 66 (2015), pp. 139–149.
[71] Deb, K. (2008), Multi-objective Optimisation using Evolutionary Algorithms, 2nd ed., John Wiley & Sons, Chichester.
[72] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002), A fast and elitist multi-objective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, 6(2), 182–197.
[73] Zitzler, E., Laumanns, M. and Thiele, L. (2001), SPEA2: improving the strength Pareto evolutionary algorithm, Proceedings of the Evolutionary Methods for Design, Optimization, and Control with Applications to Industrial Problems, EUROGEN 2001, Athens, Greece, pp. 95–100.
[74] S. Milad Nayyer Sabeti and M.R. Deevband, Hybrid evolutionary algorithms based on PSO-GA for training ANFIS structure, International Journal of Computer Science Issues, Volume 12, Issue 5, September 2015.


[75] Yosra Jarraya, Souhir Bouaziz, Adel M. Alimi, Ajith Abraham, Fuzzy modeling system based on hybrid evolutionary approach, Machine Intelligence Research Labs, WA, USA.
[76] J. Kennedy, R. Eberhart, Particle swarm optimization, Proc. IEEE Intl. Conf. on Neural Networks (Perth, Australia), IEEE Service Centre, Piscataway, NJ, IV:1942–1948, 1995.

APPENDIX A: CONFERENCES AND PUBLICATIONS

Conferences

[1] Presented paper in VISHWACON 2016-2017 at Vishwakarma Institute of Information Technology, Pune, on 17th Feb. 2017.

[2] Presented paper in ICMMM-2017 at VIT University, Vellore, on 10th March 2017.

[3] Presented paper in MECHPGCON-2017 held at Zeal College of Engineering and Research, Pune, on 20th June 2017.

[4] Presented paper in ICMTS-2017 held at Indian Institute of Technology Madras, Chennai, on 7th July 2017.

[5] Presented paper in IconAMMA-2017 held at Amrita Vishwa Vidyapeetham University in Aug. 2017.


Selected paper for conferences and publication

[1] A research paper selected at IconAMMA-2017, held at Amrita Vishwa Vidyapeetham University in Aug. 2017, will be published in Materials Today: Proceedings.


APPENDIX B: CERTIFICATES

[1] Certificate of presented paper in MECHPGCON-2017 held at Zeal College of Engineering and Research, Pune, on 20th June 2017.


Reviewer & Evaluation Report MECHPGCON-2017


[2] Certificate of presented paper in VISHWACON 2016-2017 at Vishwakarma Institute of Information Technology, Pune, on 17th Feb. 2017.


[3] Certificate of presented paper in ICMMM-2017 at VIT University, Vellore, on 10th March 2017.


[4] Certificate of presented paper in ICMTS-2017 held at Indian Institute of Technology Madras, Chennai, on 7th July 2017.


[5] Certificate of poster presentation in AVISHKAR-2016 held at Vishwakarma Institute of Information Technology, Pune, on 2nd December 2016.


APPENDIX C: ALGORITHMS

ANFIS Algorithm

// Load data
[Inputs, Targets] = LoadMachiningData()

// Shuffle data
S = RandomPermutation(size(data, 1))
[Inputs, Targets] = [Inputs(S,:), Targets(S,:)]

// Split the shuffled samples into training and testing sets
[TrainInputs, TrainTargets, TestInputs, TestTargets] = Split(Inputs, Targets)

// Generate the ANFIS structure for each training objective
for each training objective:
    fis = CreateInitialFis(data)

CreateInitialFis(data):
    switch case:
        Case 1: Grid-partitioning ANFIS
            Params:
                number of MFs (nmfs): 5
                input MF type: Gaussian
                output MF type: linear
            FisStructure = genfis(TrainInputs, TrainTargets, nmfs, gauss, linear)
            DataSize = size(data, 1)


            In_n = size(data, 2) - 1
            InputMfsType = In_mfs_type
            Rule_n = prod(nmfs)
            Fis.name = 'anfis'
            Fis.andMethod = 'prod'
            Fis.orMethod = 'max'
            Fis.defuzzificationMethod = 'weighted average'
            Fis.implicationMethod = 'prod'
            Fis.aggregationMethod = 'max'

        Case 2: Subtractive clustering method
            Param: influence radius
            Fis = genfis(TrainInputs, TrainTargets, Radius)

        Case 3: Fuzzy C-means (FCM) clustering method
            FCM option structure:
                number of clusters (ncluster)
                partitioning matrix exponent
                maximum number of iterations
            Fis = genfis(TrainInputs, TrainTargets, ncluster, FCMoptions, OptimizationMethod)

TrainingParams = [max epochs, error goal, initial step size, step-size decrease rate, step-size increase rate]

Output = EvaluateFis(Inputs, fis)


TrainOutput = Output(TrainInputs)
TestOutput = Output(TestInputs)

// Statistical analysis of the training error
TrainError = TrainTargets - TrainOutputs
TrainMSE = Mean(TrainError.^2)       // mean of the squared errors
TrainRMSE = Sqrt(TrainMSE)
TrainMeanError = Mean(TrainError)
TrainErrorSTD = STD(TrainError)

// Statistical analysis of the testing error
TestError = TestTargets - TestOutputs
TestMSE = Mean(TestError.^2)
TestRMSE = Sqrt(TestMSE)
TestMeanError = Mean(TestError)
TestErrorSTD = STD(TestError)

ANFIS-GA/PSO Algorithm

// Load data
[Inputs, Targets] = LoadMachiningData()

// Shuffle data
S = RandomPermutation(size(data, 1))
[Inputs, Targets] = [Inputs(S,:), Targets(S,:)]


// Split the shuffled samples into training and testing sets
[TrainInputs, TrainTargets, TestInputs, TestTargets] = Split(Inputs, Targets)

// Generate ANFIS structure
fis = CreateInitialFis(data)

switch case:
    Case 1: TrainAnfisUsingGA(fis, data)
    Case 2: TrainAnfisUsingPSO(fis, data)

TrainOutput = EvaluateFis(data.TrainInputs, fis)
TestOutput = EvaluateFis(data.TestInputs, fis)

CreateInitialFis(data):
    switch case:
        Case 1: Grid-partitioning ANFIS
            Params:
                number of MFs (nmfs): 5
                input MF type: Gaussian
                output MF type: linear
            FisStructure = genfis(TrainInputs, TrainTargets, nmfs, gauss, linear)
            DataSize = size(data, 1)
            In_n = size(data, 2) - 1
            InputMfsType = In_mfs_type
            Rule_n = prod(nmfs)


            Fis.name = 'anfis'
            Fis.andMethod = 'prod'
            Fis.orMethod = 'max'
            Fis.defuzzificationMethod = 'weighted average'
            Fis.implicationMethod = 'prod'
            Fis.aggregationMethod = 'max'

        Case 2: Subtractive clustering method
            Param: influence radius
            Fis = genfis(TrainInputs, TrainTargets, Radius)

        Case 3: Fuzzy C-means (FCM) clustering method
            FCM option structure:
                number of clusters (ncluster)
                partitioning matrix exponent
                maximum number of iterations
            Fis = genfis(TrainInputs, TrainTargets, ncluster, FCMoptions)

TrainFisCost(x, fis, data):
    p0 = GetFisParams(fis)
    p = x .* p0                      // scale the nominal parameters by the candidate vector x
    fis = SetFisParams(fis, p)
    X = data.TrainInputs
    t = data.TrainTargets


    y = EvaluateFis(X, fis)

    // Calculate statistical error
    Error = t - y
    MSE = Mean(Error.^2)

GetFisParams(fis):
    p = []
    N = size(fis.input)
    for i in [1..N]:
        nmfs = size(fis.input[i].mf)
        for j in [1..nmfs]:
            p = [p, fis.input[i].mf[j].params]
    Noutputs = size(fis.output)
    for i in [1..Noutputs]:
        nmfs = size(fis.output[i].mf)
        for j in [1..nmfs]:
            p = [p, fis.output[i].mf[j].params]

SetFisParams(fis, p):
    // write the flat parameter vector p back into the input and output MFs of fis
    p0 = GetFisParams(fis)
    X = data.TrainInputs
    t = data.TrainTargets
    y = EvaluateFis(X, fis)

TrainAnfisUsingGA(fis, data):


    p0 = GetFisParams(fis)
    // Fitness function to minimize
    EvaluateFitnessFunction = TrainFisCost(x, fis, data)
    nvar = size(p0)
    Range = [min, max]
    MaxIt = 50
    N = 25                          // population size
    pc = [0.4-0.8]                  // crossover percentage
    nc = 2 * round(pc * N / 2)      // number of offspring
    pm = [0.2-0.4]                  // mutation percentage
    nm = round(pm * N)              // number of mutants
    gamma = 0.7                     // crossover spread parameter (see Crossover below)
    mu = 0.15                       // mutation rate

    // Initialize population
    pop.position = []
    pop.cost = []
    for i in [1..N]:
        pop[i].position = random(Range, N)
        // Evaluate
        pop[i].cost = TrainFisCost(pop[i].position, fis, data)

    // Sort population
    [cost, sortOrder] = sort(pop.cost)
    pop = pop(sortOrder)


    // Best and worst individuals
    BestPop = pop[1]
    WorstPop = pop[end]

    // Main loop
    do until MaxIt:

        // Parent-selection probabilities (Boltzmann weighting of the costs)
        p[i] = exp(-beta * pop[i].cost / WorstCost)
        P[i] = p[i] / sum(p[n] for n = 1..N)

        [p1, p2] = RouletteWheelSelection(P)
        [child1, child2] = Crossover(p1, p2, crossover params, N)

        // Evaluate offspring
        popc = TrainFisCost(child1, fis, data)

        // Mutate
        p3 = RouletteWheelSelection(P)
        child3 = Mutate(p3, mutation params, N)
        popm = TrainFisCost(child3, fis, data)

        pop = [pop, popc, popm]

Crossover(p1, p2, crossover parameters):
    parameters: (gamma, Range)
    alpha = random(-gamma, 1 + gamma, N)
    y1 = alpha .* p1 + (1 - alpha) .* p2
    y2 = alpha .* p2 + (1 - alpha) .* p1


    y1 = min(max(y1, Rmin), Rmax)      // clamp offspring to the variable range
    y2 = min(max(y2, Rmin), Rmax)

Mutate(p3, mutation parameters, Range):
    parameters: (mu, Range)
    Rmin = min(Range)
    Rmax = max(Range)
    dr = Rmax - Rmin
    sigma = stepFactor * dr            // mutation step size as a fraction of the range
    y = p3 + sigma * randn(nm)
    y = min(max(y, Rmin), Rmax)

TrainAnfisUsingPSO(fis, data):
    p0 = GetFisParams(fis)
    // Fitness function to minimize
    EvaluateFitnessFunction = TrainFisCost(x, fis, data)
    nvar = size(p0, 1)
    Range = (min, max)
    MaxIt = 50
    N = 25                             // swarm size
    w = 1, wdamp = 0.99, c1 = 1, c2 = 2

    // Initialize particle structure
    {Particle.Position, Particle.Velocity, Particle.Cost, Particle.BestPosition,
     Particle.BestCost, Particle.IsDominated, Particle.GridIndex, Particle.GridSubindex}

    // Evaluate particle positions and costs
    for i in [1..N]:


        pop[i].Position = random(Range, N)
        pop[i].Velocity = zeros(N)
        pop[i].Cost = TrainFisCost(pop[i].Position)

        // Update personal best
        pop[i].BestPosition = pop[i].Position
        pop[i].BestCost = pop[i].Cost

    // Determine domination level
    pop = Domination(pop, N)
    for i in [1..N]:
        for j in [i+1..N]:
            if Dominates(pop[i], pop[j]):
                pop[j].IsDominated = true
            else if Dominates(pop[j], pop[i]):
                pop[i].IsDominated = true

    Dominates(x, y):
        // Pareto dominance: x is no worse in every objective and strictly better in at least one
        b = all(x <= y) && any(x < y)

    do until MaxIt:
        for i in [1..N]:
            // Select leader from the repository
            leader = SelectLeader(rep, beta)


            // Swarm movement: velocity and position update
            pop[i].Velocity = w * pop[i].Velocity
                + c1 * rand(VarSize) .* (pop[i].BestPosition - pop[i].Position)
                + c2 * rand(VarSize) .* (leader.Position - pop[i].Position)
            pop[i].Position = pop[i].Position + pop[i].Velocity
            pop[i].Position = max(pop[i].Position, VarMin)
            pop[i].Position = min(pop[i].Position, VarMax)
            pop[i].Cost = TrainFisCost(pop[i].Position)

        // Pseudo-mutation on the particles
        NewPop = Mutate(pop, pm, Range)
        NewPop.Cost = TrainFisCost(NewPop.Position)

        // Determine domination
        if Dominates(NewPop.Position, pop.Position):
            pop.IsDominated = true
        else if Dominates(pop.Position, NewPop.Position):
            NewPop.IsDominated = true

        pop[i].Position = max(pop[i].Position, Rmin)
        pop[i].Position = min(pop[i].Position, Rmax)

        if pop[i].Cost < pop[i].BestCost:
            pop[i].BestPosition = pop[i].Position
            pop[i].BestCost = pop[i].Cost
            if pop[i].BestCost < BestSol.Cost:
                BestSol = pop[i].Best

Algorithm for Neural Network

// Load data
[TrainInputs, TrainTargets] = LoadData()


// Define network structure
Define training function type: TrainFunc
Define hidden layers

// Network type
net = fitnet(hiddenLayerSize, TrainFunc)

fitnet(hiddenLayerSize, TrainFunc):
    nnetParamsInfo.hiddenSize
    nnetParamsInfo.hiddenLayers
    nnetParamsInfo.nnType
    nnetParamsInfo.TrainFunc

net.inputProcessFcns = {map input vectors}
net.outputProcessFcns = {map output vectors}
net.divideParam.trainRatio = 0.70
net.divideParam.valRatio = 0.15
net.divideParam.testRatio = 0.15

// Performance function
net.performFcn = MSE

// Train network
[net, tr] = train(net, x, t)

// Test network
y = net(x)
e = t - y
performance = perform(net, t, y)

EANNHRC(x):      // generated simulation function of the trained network


    // Input layer 1
    x1_step1_xoffset = [input offsets]
    x1_step1_gain = [input gains]
    x1_step1_ymin = -1

    // Layer 1
    b1 = [bias for each neuron]
    IW1 = [weights for each neuron]

    // Layer 2
    b2 = [bias for each neuron]
    LW2 = [weights for each neuron]

    // Output layer
    y1_step1_ymin = -1
    y1_step1_gain = [output gains]
    y1_step1_xoffset = [output offsets]

    // Simulation
    for i in [1..size(output)]:

        // Input 1
        xp1 = mapminmax_apply(x1[i], x1_step1_gain, x1_step1_xoffset, x1_step1_ymin)

        // Layer 1
        a1 = tansig_apply(repmat(b1, 1, Q) + IW1 * xp1)

        // Layer 2
        a2 = repmat(b2, 1, Q) + LW2 * a1

        // Output 1


        y(i) = mapminmax_reverse(a2, y1_step1_gain, y1_step1_xoffset, y1_step1_ymin)

mapminmax_apply(x, settings_gain, settings_xoffset, settings_ymin):
    y = bsxfun(@minus, x, settings_xoffset)
    y = bsxfun(@times, y, settings_gain)
    y = bsxfun(@plus, y, settings_ymin)

// Sigmoid symmetric transfer function
tansig_apply(n):
    a = 2 ./ (1 + exp(-2 * n)) - 1

// Map the normalized output back to the original minimum-maximum range
mapminmax_reverse(y, settings_gain, settings_xoffset, settings_ymin):
    x = bsxfun(@minus, y, settings_ymin)
    x = bsxfun(@rdivide, x, settings_gain)
    x = bsxfun(@plus, x, settings_xoffset)
