Using Artificial Neural Networks to Derive Process Model Activity Labels from Process Descriptions

BibTeX

@article{pyrtek2020,
  author   = "Mirco Pyrtek and Philip Hake and Peter Loos",
  title    = "Using Artificial Neural Networks to Derive Process Model Activity Labels from Process Descriptions",
  journal  = "Band-1",
  year     = "2020",
  doi      = "10.30844/wi_2020_r11-pyrtek",
  keywords = "Business Process Modeling, Deep Learning, Sentence Compression, Artificial Neural Network, Natural Language Processing",
  abstract = "Recently, Artificial Neural Networks (ANN) have shown high potential in the area of Natural Language Processing (NLP). In sentence compression, ANN-based approaches have proven to outperform existing rule-based approaches. However, these approaches require a substantial amount of training data to achieve high accuracy. In this work, we employ ANNs to derive process model labels from process descriptions. Since publicly available pairs of process descriptions and process models are scarce, we adopt a transfer learning approach: after training a compression model on a large corpus of sentence-compression pairs, we transfer the model to the problem of deriving label descriptions. We implement our approach and conduct an experimental evaluation using pairs of process descriptions and models. We find that our transfer learning model maintains high recall while losing precision and compression rate.",
}
Cite as text

Mirco Pyrtek, Philip Hake, and Peter Loos: Using Artificial Neural Networks to Derive Process Model Activity Labels from Process Descriptions. Online: https://doi.org/10.30844/wi_2020_r11-pyrtek (retrieved 23.11.24)

Abstract

Recently, Artificial Neural Networks (ANN) have shown high potential in the area of Natural Language Processing (NLP). In sentence compression, ANN-based approaches have proven to outperform existing rule-based approaches. However, these approaches require a substantial amount of training data to achieve high accuracy. In this work, we employ ANNs to derive process model labels from process descriptions. Since publicly available pairs of process descriptions and process models are scarce, we adopt a transfer learning approach: after training a compression model on a large corpus of sentence-compression pairs, we transfer the model to the problem of deriving label descriptions. We implement our approach and conduct an experimental evaluation using pairs of process descriptions and models. We find that our transfer learning model maintains high recall while losing precision and compression rate.
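
The abstract describes a two-stage transfer learning setup: pretrain a deletion-based sentence-compression model on a large corpus of sentence-compression pairs, then fine-tune it on the much smaller corpus of process-description/label pairs. The following is a minimal sketch of that idea in Python/PyTorch; the BiLSTM keep/delete tagger, the layer sizes, the lower fine-tuning learning rate, and the toy tensors are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class DeletionCompressor(nn.Module):
    """BiLSTM tagger: for each input token, predict keep (1) or delete (0)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 2)  # keep/delete logits

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.head(hidden)  # shape: (batch, seq_len, 2)

def train(model, batches, epochs, lr):
    """One training stage; called once for pretraining, once for fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens, keep_labels in batches:
            logits = model(tokens)
            loss = loss_fn(logits.view(-1, 2), keep_labels.view(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Toy stand-ins for the two corpora: batches of token ids plus per-token
# keep/delete tags. Real data would be the large sentence-compression corpus
# and the small corpus of process descriptions aligned with activity labels.
vocab_size = 1000
pretrain_batches = [(torch.randint(0, vocab_size, (8, 20)),
                     torch.randint(0, 2, (8, 20)))]
finetune_batches = [(torch.randint(0, vocab_size, (8, 20)),
                     torch.randint(0, 2, (8, 20)))]

model = DeletionCompressor(vocab_size)
train(model, pretrain_batches, epochs=3, lr=1e-3)  # stage 1: compression corpus
train(model, finetune_batches, epochs=3, lr=1e-4)  # stage 2: transfer to labels

In the paper's setting, the second stage would use the pairs of process descriptions and activity labels, and the resulting keep/delete decisions would be scored for precision, recall, and compression rate against the reference labels.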

Keywords

Business Process Modeling, Deep Learning, Sentence Compression, Artificial Neural Network, Natural Language Processing

