Full Text

Abstract

The Segment Anything Model (SAM) has made breakthroughs in the domain of image segmentation, attaining high-quality segmentation results from input prompts such as points and bounding boxes. However, applying a pretrained SAM directly to iris segmentation has not achieved the desired results, mainly because of the substantial disparity between natural images and iris images. To address this issue, we developed SAM-Iris. First, we designed an innovative plug-and-play adapter called IrisAdapter. This adapter effectively learns features from iris images without requiring a full update of the model parameters, thereby avoiding the problem of knowledge forgetting. Second, to overcome the shortcomings of the pretrained Vision Transformer (ViT) encoder in capturing local detail, we introduced a Convolutional Neural Network (CNN) branch that works in parallel with it, enabling the model to capture fine local features of iris images. Furthermore, we adopted a Cross-Branch Attention module that not only promotes information exchange between the ViT and CNN branches but also enables the ViT branch to integrate and exploit local information more effectively. Finally, we adapted SAM for iris image segmentation by supporting a broader set of input prompts, including bounding boxes, points, and masks. On the CASIA.v4-distance dataset, the E1, F1, mIoU, and Acc of our model are 0.34, 95.15%, 90.88%, and 96.49%, respectively; on the UBIRIS.v2 dataset, they are 0.79, 94.08%, 88.94%, and 94.97%; on the MICHE dataset, they are 0.67, 93.62%, 88.66%, and 95.03%. In summary, this study improves the accuracy of iris segmentation through a series of innovative methods and strategies, opening up new directions for large-model-based iris-segmentation algorithms.
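The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch-style sketch of what a cross-branch attention block and a bottleneck adapter of this kind could look like. The class names (CrossBranchAttention, IrisAdapter), tensor dimensions, and the residual fusion strategy are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CrossBranchAttention(nn.Module):
    """Hypothetical sketch: ViT patch tokens (queries) attend to CNN feature
    tokens (keys/values) so the transformer branch can pull in local detail."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, vit_tokens: torch.Tensor, cnn_feat: torch.Tensor) -> torch.Tensor:
        # vit_tokens: (B, N, C) tokens from the frozen ViT encoder
        # cnn_feat:   (B, C, H, W) feature map from the parallel CNN branch
        kv = cnn_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        q = self.norm_q(vit_tokens)
        k = v = self.norm_kv(kv)
        fused, _ = self.attn(q, k, v)              # cross-attention to CNN tokens
        x = vit_tokens + fused                     # residual fusion keeps the ViT path intact
        return x + self.mlp(x)

class IrisAdapter(nn.Module):
    """Hypothetical bottleneck adapter: a small trainable module added beside
    frozen SAM encoder blocks so only adapter weights need to be updated."""

    def __init__(self, dim: int = 256, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen pretrained representation
        return x + self.up(self.act(self.down(x)))

if __name__ == "__main__":
    vit_tokens = torch.randn(2, 64 * 64, 256)   # e.g. a 64x64 patch grid of 256-dim tokens
    cnn_feat = torch.randn(2, 256, 64, 64)
    out = IrisAdapter()(CrossBranchAttention()(vit_tokens, cnn_feat))
    print(out.shape)                            # torch.Size([2, 4096, 256])

The design intent mirrored here is the one the abstract states: the pretrained encoder stays frozen, a lightweight adapter carries the iris-specific learning, and cross-branch attention lets the ViT branch use the CNN branch's local features.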

Details

Title
SAM-Iris: A SAM-Based Iris Segmentation Algorithm
Author
Jiang, Jian 1; Zhang, Qi 1; Wang, Caiyong 2

1 School of Information and Cyber Security, People’s Public Security University of China, Beijing 100038, China; [email protected]
2 School of Intelligence Science and Technology, Beijing University of Civil Engineering and Architecture, Beijing 100044, China; [email protected]
First page
246
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3159489308
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.