-
Multi-criteria ABC classification is a useful model for automatic inventory management and optimization. It rapidly sorts inventory items into three groups that receive different levels of managerial attention. Several methods, based on different criteria and principles, have been proposed to build the ABC classes. However, existing ABC classification methods operate as black-box AI processes: they assign items to the different ABC classes without providing further managerial explanation. The multi-criteria nature of the inventory classification problem makes the resulting item classes difficult to use and interpret without additional information. Decision makers typically need to know which characteristics were decisive in determining an item's managerial class, because such information helps managers better understand the inventory groups and makes inventory management decisions more transparent. To address this issue, we propose a two-phase explainable approach based on eXplainable Artificial Intelligence (XAI) capabilities. The proposed approach provides both local and global explanations of the built ABC classes, at the item and class levels respectively. Applying the approach to the inventory of a firm specialized in retail sales demonstrated its effectiveness in generating accurate and interpretable ABC classes: item assignments to the different ABC classes were well explained by the items' criteria, and in this application sales, profit, and customer priority emerged as the criteria with the greatest influence on the item classes.
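As a rough illustration of the local/global explanation idea (not the authors' implementation), the sketch below fits an ordinary tree-ensemble classifier on toy item criteria and uses SHAP to produce a per-item explanation and a class-level ranking of criteria. The criteria names, synthetic data, labeling rule, and choice of model are all illustrative assumptions.

```python
# Hypothetical sketch: local and global SHAP explanations for ABC classes.
# Criteria names, data, labels, and model choice are illustrative assumptions,
# not the approach described in the abstract.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy inventory criteria for 300 items.
X = pd.DataFrame({
    "annual_sales": rng.gamma(2.0, 500.0, 300),
    "unit_profit": rng.normal(20.0, 5.0, 300),
    "customer_priority": rng.integers(1, 4, 300),
})
# Pretend ABC labels (0 = A, 1 = B, 2 = C) produced by some multi-criteria rule.
score = X["annual_sales"] * X["unit_profit"] * X["customer_priority"]
y = pd.qcut(score, 3, labels=[2, 1, 0]).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
sv = explainer(X)  # values shaped (items, criteria, classes) in recent shap versions

# Local explanation: contribution of each criterion to item 0's predicted class.
item = 0
pred = int(clf.predict(X.iloc[[item]])[0])
print("item", item, "-> class", pred)
print(dict(zip(X.columns, sv.values[item, :, pred])))

# Global explanation: mean |contribution| of each criterion to class A (label 0).
print(dict(zip(X.columns, np.abs(sv.values[:, :, 0]).mean(axis=0))))
```

The local output explains a single item's assignment in terms of its own criteria values, while the global aggregation summarizes which criteria dominate a whole class, mirroring the item-level and class-level views described in the abstract.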
-
Despite advances in deep learning methods for song recommendation, most existing methods do not take advantage of the sequential nature of song content. In addition, few methods can explain their predictions in terms of the content of the recommended songs, and only a few approaches can handle the item cold-start problem. In this work, we propose a hybrid deep learning model that combines collaborative filtering (CF) with deep sequence models applied to the Musical Instrument Digital Interface (MIDI) content of songs to provide accurate recommendations, while also generating a relevant, personalized explanation for each recommended song. Our validation experiments showed that, in addition to generating explainable recommendations, our model stood out among the top performers relative to state-of-the-art methods in recommendation accuracy and in handling the item cold-start problem. Moreover, validation showed that our personalized explanations capture properties that are in accordance with the user's preferences.
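A toy sketch of one way such a hybrid could be wired together (an assumed architecture, not the paper's model): a CF-style user embedding is combined with a recurrent encoding of a tokenized MIDI note sequence to score a user-song pair. The token vocabulary, dimensions, and fusion by concatenation are illustrative assumptions.

```python
# Hypothetical sketch of a hybrid CF + sequence recommender; the architecture,
# sizes, and MIDI tokenization are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class HybridMidiRecommender(nn.Module):
    def __init__(self, n_users, midi_vocab, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)         # CF-style user factors
        self.note_emb = nn.Embedding(midi_vocab, dim)      # MIDI event tokens
        self.encoder = nn.GRU(dim, dim, batch_first=True)  # sequence model over song content
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, user_ids, midi_tokens):
        u = self.user_emb(user_ids)                          # (B, dim)
        _, h = self.encoder(self.note_emb(midi_tokens))      # h: (1, B, dim)
        song = h.squeeze(0)                                  # (B, dim) content representation
        return self.score(torch.cat([u, song], dim=-1)).squeeze(-1)  # predicted affinity

# Toy usage: 3 users, songs as sequences of 16 hypothetical MIDI event tokens.
model = HybridMidiRecommender(n_users=3, midi_vocab=128)
users = torch.tensor([0, 1, 2])
songs = torch.randint(0, 128, (3, 16))
print(model(users, songs))  # one affinity score per (user, song) pair
```

Because the song side of such a model is computed purely from content, a brand-new song with no interaction history can still be scored, which is one way a content-driven hybrid can mitigate the item cold-start problem mentioned above.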
-
The ability to determine that a robot's grasp has a high chance of failing, before it actually does, can save significant time and avoid failures by planning to re-grasp or change strategy for that case. Machine learning (ML) offers one way to learn to predict grasp failure from historical data consisting of a robot's attempted grasps alongside labels of success or failure. Unfortunately, the most powerful ML models are black-box models that do not explain the reasons behind their predictions. In this paper, we investigate how ML can be used to predict robot grasp failure and study the tradeoff between accuracy and interpretability by comparing interpretable (white-box) ML models, which are inherently explainable, with more accurate black-box ML models, which are inherently opaque. Our results show that one does not necessarily have to trade accuracy for interpretability if an explanation-generation method such as SHapley Additive exPlanations (SHAP) is used to add explainability to the accurate predictions made by black-box models. An explanation of a predicted failure can guide an efficient choice of corrective action in the robot's design to avoid future failures.
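As a small, hypothetical illustration of the accuracy/interpretability comparison described above, the sketch below fits a white-box model (logistic regression) and a black-box model (gradient boosting) on synthetic grasp features, then attaches SHAP explanations to the black-box predictions. The feature names, data-generating rule, and model choices are invented for illustration.

```python
# Hypothetical sketch: white-box vs. black-box grasp-failure prediction, with
# SHAP added to explain the black-box model. Features and data are invented.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Toy grasp features: gripper force, contact area, object slip velocity.
X = rng.normal(size=(n, 3))
y = ((0.8 * X[:, 0] - 1.2 * X[:, 2] + 0.3 * rng.normal(size=n)) < 0).astype(int)  # 1 = failure
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

white_box = LogisticRegression().fit(X_tr, y_tr)          # inherently interpretable
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)  # more flexible, opaque

print("white-box accuracy:", accuracy_score(y_te, white_box.predict(X_te)))
print("black-box accuracy:", accuracy_score(y_te, black_box.predict(X_te)))

# SHAP explanation for one grasp: per-feature contributions to the failure prediction.
explainer = shap.TreeExplainer(black_box)
contrib = explainer.shap_values(X_te[:1])
print("feature contributions for first test grasp:", contrib[0])
```

The white-box coefficients are readable directly, while the black-box model's per-grasp SHAP contributions indicate which feature (e.g., the hypothetical slip-velocity column) pushed a particular grasp toward a failure prediction, which is the kind of signal a designer could act on.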