References

1. Guo, J., He, H., He, T., Lausen, L., Li, M., Lin, H., Zhang, A., GluonCV and GluonNLP: Deep learning in computer vision and natural language processing. J. Mach. Learn. Res., 21, 23, 1–7, 2020.

2. Abas, Z.A., Rahman, A.F.N.A., Pramudya, G., Wee, S.Y., Kasmin, F., Yusof, N., Abidin, Z.Z., Analytics: A review of current trends, future application and challenges. Compusoft, 9, 1, 3560–3565, 2020.

3. Géron, A., Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, O’Reilly Media, United States of America, 2019.

4. Alshemali, B. and Kalita, J., Improving the reliability of deep neural networks in NLP: A review. Knowl.-Based Syst., 191, 105210, 2020.

5. Klaine, P.V., Imran, M.A., Onireti, O., Souza, R.D., A survey of machine learning techniques applied to self-organizing cellular networks. IEEE Commun. Surv. Tut., 19, 4, 2392–2431, 2017.

6. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Kudlur, M., Tensorflow: A system for large-scale machine learning, in: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.

7. Alpaydin, E., Introduction to Machine Learning, MIT Press, United Kingdom, 2020.

8. Larranaga, P., Calvo, B., Santana, R., Bielza, C., Galdiano, J., Inza, I., Robles, V., Machine learning in bioinformatics. Briefings Bioinf., 7, 1, 86–112, 2006.

9. Almomani, A., Gupta, B.B., Atawneh, S., Meulenberg, A., Almomani, E., A survey of phishing email filtering techniques. IEEE Commun. Surv. Tut., 15, 4, 2070–2090, 2013.

10. Kononenko, I., Machine learning for medical diagnosis: History, state of the art and perspective. Artif. Intell. Med., 23, 1, 89–109, 2001.

11. Kotsiantis, S.B., Zaharakis, I., Pintelas, P., Supervised machine learning: A review of classification techniques, in: Emerging Artificial Intelligence Applications in Computer Engineering, vol. 160, pp. 3–24, 2007.

12. Freitag, D., Machine learning for information extraction in informal domains. Mach. Learn., 39, 2–3, 169–202, 2000.

13. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., Improving language understanding by generative pre-training, URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.

14. Garcia, V. and Bruna, J., Few-shot learning with graph neural networks, in: Proceedings of the International Conference on Learning Representations (ICLR), 3, 1–13, 2018.

15. Miyato, T., Maeda, S.I., Koyama, M., Ishii, S., Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41, 8, 1979–1993, 2018.

16. Tarvainen, A. and Valpola, H., Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, in: Advances in Neural Information Processing Systems, pp. 1195–1204, 2017.

17. Baldi, P., Autoencoders, unsupervised learning, and deep architectures, in: Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, June, pp. 37–49.

18. Srivastava, N., Mansimov, E., Salakhudinov, R., Unsupervised learning of video representations using LSTMs, in: International Conference on Machine Learning, 2015, June, pp. 843–852.

19. Niebles, J.C., Wang, H., Fei-Fei, L., Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vision, 79, 3, 299–318, 2008.

20. Lee, H., Grosse, R., Ranganath, R., Ng, A.Y., Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM, 54, 10, 95–103, 2011.

21. Memisevic, R. and Hinton, G., Unsupervised learning of image transformations, in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007, June, IEEE, pp. 1–8.

22. Dy, J.G. and Brodley, C.E., Feature selection for unsupervised learning. J. Mach. Learn. Res., 5, Aug, 845–889, 2004.

23. Kim, Y., Street, W.N., Menczer, F., Feature selection in unsupervised learning via evolutionary search, in: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2000, August, pp. 365–369.

24. Shi, Y. and Sha, F., Information-theoretical learning of discriminative clusters for unsupervised domain adaptation, in: Proceedings of the International Conference on Machine Learning, 1, pp. 1079–1086, 2012.

25. Balakrishnan, P.S., Cooper, M.C., Jacob, V.S., Lewis, P.A., A study of the classification capabilities of neural networks using unsupervised learning: A comparison with K-means clustering. Psychometrika, 59, 4, 509–525, 1994.

26. Pedrycz, W. and Waletzky, J., Fuzzy clustering with partial supervision. IEEE Trans. Syst. Man Cybern. Part B (Cybern.), 27, 5, 787–795, 1997.

27. Andreae, J.H., The future of associative learning, in: Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 1995, November, IEEE, pp. 194–197.

28. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M., Playing Atari with deep reinforcement learning, in: Neural Information Processing Systems (NIPS) ’13 Workshop on Deep Learning, 1, pp. 1–9, 2013.

29. Abbeel, P. and Ng, A.Y., Apprenticeship learning via inverse reinforcement learning, in: Proceedings of the Twenty-First International Conference on Machine learning, 2004, July, p. 1.

30. Wiering, M. and Van Otterlo, M., Reinforcement learning. Adapt. Learn. Optim., 12, 3, 2012.

31. Ziebart, B.D., Maas, A.L., Bagnell, J.A., Dey, A.K., Maximum entropy inverse reinforcement learning, in: AAAI, vol. 8, pp. 1433–1438, 2008.

32. Rothkopf, C.A. and Dimitrakakis, C., Preference elicitation and inverse reinforcement learning, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2011, September, Springer, Berlin, Heidelberg, pp. 34–48.

33. Anderson, M.J., Carl Linnaeus: Father of Classification, Enslow Publishing, LLC, New York, 2009.

34. Becker, H.S., Problems of inference and proof in participant observation. Am. Sociol. Rev., 23, 6, 652–660, 1958.

35. Zaffalon, M. and Miranda, E., Conservative inference rule for uncertain reasoning under incompleteness. J. Artif. Intell. Res., 34, 757–821, 2009.

36. Sathya, R. and Abraham, A., Comparison of supervised and unsupervised learning algorithms for pattern classification. Int. J. Adv. Res. Artif. Intell., 2, 2, 34–38, 2013.

37. Tao, D., Li, X., Hu, W., Maybank, S., Wu, X., Supervised tensor learning, in: Fifth IEEE International Conference on Data Mining (ICDM’05), 2005, November, IEEE, p. 8.

38. Krawczyk, B., Woźniak, M., Schaefer, G., Cost-sensitive decision tree ensembles for effective imbalanced classification. Appl. Soft Comput., 14, 554–562, 2014.

39. Wang, B., Tu, Z., Tsotsos, J.K., Dynamic label propagation for semi-supervised multi-class multi-label classification, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 425–432, 2013.

40. Valizadegan, H., Jin, R., Jain, A.K., Semi-supervised boosting for multi-class classification, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2008, September, Springer, Berlin, Heidelberg, pp. 522–537.

41. Lapp, D., Heart Disease Dataset, retrieved from https://www.kaggle.com/johnsmith88/heart-disease-dataset.

42. Yildiz, B., Bilbao, J.I., Sproul, A.B., A review and analysis of regression and machine learning models on commercial building electricity load forecasting. Renewable Sustainable Energy Rev., 73, 1104–1122, 2017.

43. Singh, Y., Kaur, A., Malhotra, R., Comparative analysis of regression and machine learning methods for predicting fault proneness models. Int. J. Comput. Appl. Technol., 35, 2–4, 183–193, 2009.

44. Verrelst, J., Muñoz, J., Alonso, L., Delegido, J., Rivera, J.P., Camps-Valls, G., Moreno, J., Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and -3. Remote Sens. Environ., 118, 127–139, 2012.

45. Razi, M.A. and Athappilly, K., A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models. Expert Syst. Appl., 29, 1, 65–74, 2005.

46. Lu, Q. and Lund, R.B., Simple linear regression with multiple level shifts. Can. J. Stat., 35, 3, 447–458, 2007.

47. Tranmer, M. and Elliot, M., Multiple linear regression, The Cathie Marsh Centre for Census and Survey Research (CCSR), vol. 5, pp. 30–35, 2008.

48. Kutner, M.H., Nachtsheim, C.J., Neter, J., Li, W., Applied Linear Statistical Models, vol. 5, McGraw-Hill Irwin, New York, 2005.

49. Noorossana, R., Eyvazian, M., Amiri, A., Mahmoud, M.A., Statistical monitoring of multivariate multiple linear regression profiles in phase I with calibration application. Qual. Reliab. Eng. Int., 26, 3, 291–303, 2010.

50. Ngo, T.H.D. and La Puente, C.A., The steps to follow in a multiple regression analysis, in: SAS Global Forum, 2012, April, vol. 2012, pp. 1–12.

51. Hargreaves, B.R. and McWilliams, T.P., Polynomial trendline function flaws in Microsoft Excel. Comput. Stat. Data Anal., 54, 4, 1190–1196, 2010.

52. Dethlefsen, C. and Lundbye-Christensen, S., Formulating State Space Models in R With Focus on Longitudinal Regression Models, Department of Mathematical Sciences, Aalborg University, Denmark, 2005.

53. Brown, A.M., A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet. Comput. Methods Programs Biomed., 65, 3, 191–200, 2001.

54. Tripepi, G., Jager, K.J., Dekker, F.W., Zoccali, C., Linear and logistic regression analysis. Kidney Int., 73, 7, 806–810, 2008.

55. Press, S.J. and Wilson, S., Choosing between logistic regression and discriminant analysis. J. Am. Stat. Assoc., 73, 364, 699–705, 1978.

56. Menard, S., Applied Logistic Regression Analysis, vol. 106, Sage, United States of America, 2002.

57. Demartines, P. and Hérault, J., Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets. IEEE Trans. Neural Networks, 8, 1, 148–154, 1997.

58. Max, T.A. and Burkhart, H.E., Segmented polynomial regression applied to taper equations. For. Sci., 22, 3, 283–289, 1976.

59. Bendel, R.B. and Afifi, A.A., Comparison of stopping rules in forward “stepwise” regression. J. Am. Stat. Assoc., 72, 357, 46–53, 1977.

60. Mahmood, Z. and Khan, S., On the use of k-fold cross-validation to choose cutoff values and assess the performance of predictive models in stepwise regression. Int. J. Biostat., 5, 1, 1–21, 2009.

61. Hoerl, A.E., Kannard, R.W., Baldwin, K.F., Ridge regression: Some simulations. Commun. Stat.-Theory Methods, 4, 2, 105–123, 1975.

62. Fearn, T., A misuse of ridge regression in the calibration of a near infrared reflectance instrument. J. R. Stat. Soc.: Ser. C (Appl. Stat.), 32, 1, 73–79, 1983.

63. Hans, C., Bayesian lasso regression. Biometrika, 96, 4, 835–845, 2009.

64. Zou, H. and Hastie, T., Regularization and variable selection via the elastic net. J. R. Stat. Soc.: Series B (Stat. Methodol.), 67, 2, 301–320, 2005.

65. Ogutu, J.O., Schulz-Streeck, T., Piepho, H.P., Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions, in: BMC Proceedings, 2012, December, BioMed Central, vol. 6, no. S2, p. S10.

66. Brieuc, M.S., Waters, C.D., Drinan, D.P., Naish, K.A., A practical introduction to Random Forest for genetic association studies in ecology and evolution. Mol. Ecol. Resour., 18, 4, 755–766, 2018.

67. Jurka, T.P., Collingwood, L., Boydstun, A.E., Grossman, E., van Atteveldt, W., RTextTools: A Supervised Learning Package for Text Classification. R J., 5, 1, 6–12, 2013.

68. Criminisi, A., Shotton, J., Konukoglu, E., Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends® Comput. Graph. Vis., 7, 2–3, 81–227, 2012.

69. Shi, T. and Horvath, S., Unsupervised learning with random forest predictors. J. Comput. Graph. Stat., 15, 1, 118–138, 2006.

70. Settouti, N., Daho, M.E.H., Lazouni, M.E.A., Chikh, M.A., Random forest in semi-supervised learning (Co-Forest), in: 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), 2013, May, IEEE, pp. 326–329.

71. Gu, L., Zheng, Y., Bise, R., Sato, I., Imanishi, N., Aiso, S., Semi-supervised learning for biomedical image segmentation via forest oriented super pixels (voxels), in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2017, September, Springer, Cham, pp. 702–710.

72. Fiaschi, L., Köthe, U., Nair, R., Hamprecht, F.A., Learning to count with regression forest and structured labels, in: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012, November, IEEE, pp. 2685–2688.

73. Welinder, P., Branson, S., Perona, P., Belongie, S.J., The multidimensional wisdom of crowds, in: Advances in Neural Information Processing Systems, pp. 2424–2432, 2010.

74. Oza, N.C., Online bagging and boosting, in: 2005 IEEE International Conference on Systems, Man and Cybernetics, 2005, October, vol. 3, IEEE, pp. 2340–2345.

75. Wang, G., Hao, J., Ma, J., Jiang, H., A comparative assessment of ensemble learning for credit scoring. Expert Syst. Appl., 38, 1, 223–230, 2011.

76. Yerima, S.Y., Sezer, S., Muttik, I., High accuracy android malware detection using ensemble learning. IET Inf. Secur., 9, 6, 313–320, 2015.

77. Weinberger, K.Q. and Saul, L.K., Distance metric learning for large margin nearest neighbor classification. J. Mach. Learn. Res., 10, Feb, 207–244, 2009.

78. Keller, J.M., Gray, M.R., Givens, J.A., A fuzzy k-nearest neighbor algorithm. IEEE Trans. Syst. Man Cybern., 15, 4, 580–585, 1985.

79. Jain, R., Camarillo, M.K., Stringfellow, W.T., Drinking Water Security for Engineers, Planners, and Managers, Elsevier, Oxford, United Kingdom, 2014.

80. Meyer-Baese, A. and Schmid, V.J., Pattern Recognition and Signal Analysis in Medical Imaging, Elsevier, Netherlands, 2014.

81. Staelens, S. and Buvat, I., Monte Carlo simulations in nuclear medicine imaging, in: Advances in Biomedical Engineering, pp. 177–209, Elsevier, Netherlands, 2009.

82. Murthy, S.K., Automatic construction of decision trees from data: A multi-disciplinary survey. Data Min. Knowl. Discovery, 2, 4, 345–389, 1998.

83. Criminisi, A., Shotton, J., Konukoglu, E., Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Found. Trends® Comput. Graph. Vis., 7, 2–3, 81–227, 2012.

84. Tanha, J., van Someren, M., Afsarmanesh, H., Semi-supervised self-training for decision tree classifiers. Int. J. Mach. Learn. Cybern., 8, 1, 355–370, 2017.

85. Zahir, N. and Mahdi, H., Snow Depth Estimation Using Time Series Passive Microwave Imagery via Genetically Support Vector Regression (Case Study: Urmia Lake Basin). The Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., 40, 1, 555, 2015.

86. Gualtieri, J.A. and Cromp, R.F., Support vector machines for hyperspectral remote sensing classification, in: 27th AIPR Workshop: Advances in Computer-Assisted Recognition, 1999, January, vol. 3584, International Society for Optics and Photonics, pp. 221–232.

87. Brefeld, U. and Scheffer, T., Co-EM support vector learning, in: Proceedings of the Twenty-First International Conference on Machine Learning, 2004, July, p. 16.

88. Chang, C.C. and Lin, C.J., LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST), 2, 3, 1–27, 2011.

89. Hsu, C.W. and Lin, C.J., A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Networks, 13, 2, 415–425, 2002.

90. Shawe-Taylor, J. and Cristianini, N., Support Vector Machines, vol. 2, Cambridge University Press, Cambridge, 2000.

91. Marto, A., Hajihassani, M., Jahed Armaghani, D., Tonnizam Mohamad, E., Makhtar, A.M., A novel approach for blast-induced flyrock prediction based on imperialist competitive algorithm and artificial neural network. Sci. World J., 2014, 1–11, 2014.

92. Zhang, Z. and Friedrich, K., Artificial neural networks applied to polymer composites: A review. Compos. Sci. Technol., 63, 14, 2029–2044, 2003.

93. Maind, S.B. and Wankar, P., Research paper on basic of artificial neural network. Int. J. Recent Innovation Trends Comput. Commun., 2, 1, 96–100, 2014.

94. Dahl, G.E., Yu, D., Deng, L., Acero, A., Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Processing, 20, 1, 30–42, 2011.

95. Baltzakis, H. and Papamarkos, N., A new signature verification technique based on a two-stage neural network classifier. Eng. Appl. Artif. Intell., 14, 1, 95–103, 2001.

96. Zhao, Z.Q., Huang, D.S., Sun, B.Y., Human face recognition based on multi-features using neural networks committee. Pattern Recognit. Lett., 25, 12, 1351–1358, 2004.

97. Patil, V. and Shimpi, S., Handwritten English character recognition using neural network. Elixir Comput. Sci. Eng., 41, 5587–5591, 2011.

98. Davydova, 10 Applications of Artificial Neural Networks in Natural Language Processing, retrieved from https://medium.com/@datamonsters/artificial-neural-networks-in-natural-language-processing-bcf62aa9151a.

99. Murakawa, M., Yoshizawa, S., Kajitani, I., Yao, X., Kajihara, N., Iwata, M., Higuchi, T., The GRD chip: Genetic reconfiguration of DSPs for neural network processing. IEEE Trans. Comput., 48, 6, 628–639, 1999.

100. Mozolin, M., Thill, J.C., Usery, E.L., Trip distribution forecasting with multi-layer perceptron neural networks: A critical evaluation. Transport. Res. Part B: Meth., 34, 1, 53–73, 2000.

101. Kalchbrenner, N., Grefenstette, E., Blunsom, P., A Convolutional Neural Network for Modelling Sentences, in: Proceedings of ACL, vol. 1, pp. 655–665, 2014.

102. Setiono, R., Baesens, B., Mues, C., Recursive neural network rule extraction for data with mixed attributes. IEEE Trans. Neural Networks, 19, 2, 299–307, 2008.

103. Gregor, K., Danihelka, I., Graves, A., Rezende, D.J., Wierstra, D., DRAW: A recurrent neural network for image generation, in: Proceedings of the 32nd International Conference on Machine Learning, PMLR, vol. 37, pp. 1462–1471, 2015.

104. Zen, H. and Sak, H., Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis, in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, April, IEEE, pp. 4470–4474.

105. Sutskever, I., Vinyals, O., Le, Q.V., Sequence to sequence learning with neural networks, in: Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

106. Oymak, S. and Soltanolkotabi, M., Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. IEEE J. Sel. Areas Inf. Theory, 1, 84–105, 2020.

*Corresponding author: satyasundara123@gmail.com
