Machine learning applied to the performance analysis of software developments

Authors

  • Victor Daniel Gil-Vera, Systems Engineer, Teaching and Research, Grupo de Investigación SISCO, Universidad Católica Luis Amigó. ORCID: https://orcid.org/0000-0003-3895-4822
  • Cristian Seguro-Gallego, Systems Engineer, Teaching and Research, Grupo de Investigación SISCO, Universidad Católica Luis Amigó

DOI:

https://doi.org/10.33571/rpolitec.v18n35a9

Keywords:

Results Analysis, Artificial Intelligence, Predictive Modeling, Performance Testing

Abstract

Performance testing is crucial for measuring the quality of software developments, since it identifies the aspects that must be improved to achieve customer satisfaction. The objective of this research was to identify the optimal Machine Learning technique for predicting whether or not a software development meets the customer's acceptance criteria. A dataset of information obtained from performance tests of web services was used, and candidate models were compared with the F1-score quality metric. The paper concludes that, although the Random Forest technique obtained the best score, it cannot be declared the best Machine Learning technique in general: the quantity and quality of the data used for training play a very important role, as does adequate processing of the information.
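As a minimal illustration of the evaluation pipeline the abstract describes, the sketch below trains a scikit-learn Random Forest classifier on synthetic web-service performance features and scores it with the F1 metric, F1 = 2 * (precision * recall) / (precision + recall). The feature names, the acceptance rule, and all numeric parameters are hypothetical and are not taken from the paper's dataset.

# Illustrative sketch (assumptions noted above): Random Forest + F1-score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-run web-service performance metrics.
X = np.column_stack([
    rng.normal(300, 80, n),   # mean response time (ms)
    rng.normal(50, 15, n),    # throughput (requests/s)
    rng.uniform(0, 5, n),     # error rate (%)
])
# Hypothetical acceptance criterion: fast responses and few errors.
y = ((X[:, 0] < 350) & (X[:, 2] < 2.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("F1-score:", f1_score(y_test, model.predict(X_test)))

In a comparison like the one the paper reports, the same train-and-score loop would be repeated for each candidate technique (e.g., logistic regression, SVM, gradient boosting) and the held-out F1 scores compared.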

Published

2022-04-29

How to Cite

Gil-Vera, V. D., & Seguro-Gallego, C. (2022). Machine learning aplicado al análisis del rendimiento de desarrollos de software. Revista Politécnica, 18(35), 128–139. https://doi.org/10.33571/rpolitec.v18n35a9