This thesis explores the comparative performance of specialized and non-specialized classifiers in predicting Critical Audit Matters (CAMs) from auditors' reports. FinBERT, a domain-specific pre-trained NLP model, is evaluated against four traditional non-specialized classifiers: Naive Bayes, Random Forest, Support Vector Machine, and K-Nearest Neighbors. The thesis assesses their effectiveness in identifying CAM topics from textual data and accounting variables. The results show that FinBERT outperforms the non-specialized classifiers on most metrics, demonstrating the benefits of advanced NLP techniques for financial text analysis. Combining textual and numerical data was initially expected to improve performance, but the experiments showed that the numerical features in fact degraded it. The findings highlight the importance of domain-specific models and suggest further exploration with diverse data sources and additional NLP models to improve audit report analysis.
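For illustration, the sketch below shows one way the non-specialized baselines could be trained and compared on TF-IDF features of the report text. The function name, variable names, and hyperparameters are assumptions made for the example, not the thesis's actual pipeline; the FinBERT side would instead fine-tune a pre-trained checkpoint (for instance, one of the publicly available FinBERT models on Hugging Face) on the same labeled reports.

```python
# Hedged sketch of a baseline comparison: fit each non-specialized
# classifier on TF-IDF features of audit-report text and report macro-F1.
# All names and settings here are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def compare_baselines(texts, labels):
    """Train each non-specialized classifier and return its macro-F1 score.

    `texts` is a list of auditor's-report strings, `labels` the CAM-topic
    labels; both are placeholders for the thesis's actual dataset.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=42
    )
    models = {
        "Naive Bayes": MultinomialNB(),
        "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
        "SVM": LinearSVC(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
    }
    scores = {}
    for name, model in models.items():
        # TF-IDF vectorization and the classifier are chained into one pipeline
        # so the same preprocessing is applied to every baseline.
        pipe = make_pipeline(TfidfVectorizer(max_features=20_000), model)
        pipe.fit(X_train, y_train)
        scores[name] = f1_score(y_test, pipe.predict(X_test), average="macro")
    return scores
```

Because all four baselines share the same vectorizer settings, any performance differences reflect the classifiers themselves rather than the feature extraction, which mirrors the controlled comparison the thesis describes.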