Abstract
Accurate diagnosis in medical imaging depends heavily on image quality, often degraded by illumination artifacts such as overexposure, underexposure, and specular reflections. This paper presents a novel prompt-assisted enhancement system for attenuating such artifacts in endoscopic imagery. Leveraging a BERT-based model’s semantic capabilities, our system interprets user prompts to dynamically select and apply targeted enhancement techniques. Unlike general-purpose prompt-based editors like InstructIR or InstructPix2Pix, our method is tailored to the spatially varying, clinically critical distortions specific to endoscopy.
By enabling localized correction of under- and over-exposed regions, our system improves downstream tasks such as 3D colon surface reconstruction. We show that this reprocessing enhances deep learning-based SLAM performance, yielding clearer visualizations and improved diagnostic accuracy. Furthermore, by integrating natural language prompts into the imaging pipeline, our system enables interactive, clinician-driven enhancements—potentially via voice commands—during live procedures. This introduces a new paradigm in human-AI collaboration for surgery and establishes a foundation for real-time, user-centered AI in clinical endoscopy.
1 Introduction
Colonoscopy is the gold standard for examining the inner lining of the colon and rectum, offering details of natural color and texture essential for detecting abnormalities such as polyps, inflammation, bleeding, and cancerous lesions. It remains unmatched in visualizing hollow organs like the colon, stomach, and esophagus. However, the procedure poses challenges due to the complexities in controlling the trajectory and viewpoint of the endoscope, including its distance and orientation relative to tissue. Suboptimal viewpoints can hinder lesion detection and diagnosis, while obstructions from anatomical structures or imaging artifacts—such as overexposure, underexposure, and specular highlights—further complicate the examination.
Medical imaging plays a pivotal role in modern healthcare by providing essential visual information for diagnosis and treatment planning. However, illumination artifacts—such as overexposure, underexposure, and specular reflections—can obscure critical details, increasing the risk of misdiagnosis. Traditional enhancement techniques, including histogram equalization, wavelet-based methods, and deep learning approaches, have been developed to address issues like low contrast and poor illumination [6, 13, 22]. While effective, these methods often require expertise in computer science and image processing, limiting their accessibility to clinicians. Moreover, they can be time-consuming and impractical in fast-paced clinical environments where efficiency and accuracy are vital.
To address this, we propose a novel enhancement system that allows clinicians to specify desired image changes via natural language prompts. This simplifies the enhancement process, enabling high-quality imaging without requiring complex software or technical knowledge. To the best of our knowledge, this is the first model specifically tailored for prompt-assisted enhancement in endoscopic imaging, bridging technical innovation and clinical usability.
This work presents the development and implementation of our prompt-assisted enhancement system, emphasizing its capacity to improve image quality and support more accurate diagnoses. Experimental results demonstrate the system’s effectiveness across diverse clinical scenarios, underscoring its value in medical imaging.
The paper is organized as follows: Sect. 2 outlines the motivation, Sect. 3 reviews related work on prompt-assisted image models, Sect. 4 details the dataset, training, and exposure correction, and Sect. 5 reports enhancement results and confusion matrices for classification using the trained BERT model.
2 Motivation
Endoscopic imaging is critical in modern diagnostics, particularly for minimally invasive procedures. Visualizing and navigating within hollow organs like the colon and esophagus is essential for effective diagnosis and treatment planning. A major advancement in this field is 3D reconstruction, which offers detailed maps of internal structures to support more precise interventions. However, applying traditional 3D reconstruction techniques to endoscopy faces challenges due to the unique lighting conditions inside the human body.
In domains like robotics and autonomous driving, methods such as Simultaneous Localization and Mapping (SLAM) [18], Density-Based Spatial Clustering (DBSCAN) [12], optical flow [21], and Structure from Motion (SfM) [20] have been used for 3D surface reconstruction. While effective in controlled settings, these techniques struggle under the unpredictable illumination typical of endoscopic environments.
Many 3D reconstruction algorithms, especially in autonomous systems, rely on the brightness constancy assumption—that pixel intensities remain stable across frames. For example, CNN-SLAM uses convolutional neural networks (CNNs) to predict depth maps based on this principle. However, in endoscopy, lighting conditions vary due to factors like camera angle, tissue curvature, and direct reflections, invalidating this assumption. Luo et al. [17] showed that depth predictions can substitute stereo measurements in systems like Stereo Direct Sparse Odometry (DSO), but these approaches still falter under the extreme photometric variations found in endoscopy.
These challenges are particularly evident in colonoscopy, where endoscope movement, tissue curvature, and lighting variability lead to overexposed, underexposed, or specular artifacts, as seen in Fig. 1. Such inconsistencies undermine traditional methods that assume stable illumination. Additional complications include moisture, surgical tools, and occlusions, which further reduce the reliability of SLAM-based 3D reconstruction.
To address these limitations, image enhancement has become essential for pre-processing. Our approach employs a prompt-assisted enhancement system powered by a BERT-based model, which interprets natural language prompts like ‘Remove the white dots’ for specularity removal or ‘Fix the underexposed areas’ for shadow correction. The model identifies and enhances problematic regions, correcting major illumination artifacts that would otherwise hinder 3D reconstruction. It targets common issues like underexposure, overexposure, and specular highlights, improving image consistency for SLAM and other computer vision methods.
In parallel, detecting unusable frames is equally important. Severe lighting changes, specular reflections, or distortions may render certain frames unfit for processing. Our system integrates a corrupted frame detection algorithm, such as the one by Espinosa et al. [7], to flag or discard such frames based on a defined corruption threshold. This ensures that only high-quality data is used, maintaining pipeline integrity—especially critical in clinical scenarios where image quality varies.
Variable lighting and photometric inconsistencies in endoscopy hinder traditional 3D reconstruction methods, which often fail under clinical conditions despite success in controlled environments. Our prompt-assisted enhancement system improves input quality, enabling more accurate and robust reconstructions. Combined with corrupted frame detection, it ensures only reliable frames are processed. This approach addresses multiple image artifacts while maintaining real-time applicability in endoscopic procedures.
3 State-of-the-Art
Prompt-assisted image enhancement is a relatively recent development in computer vision, drawing inspiration from advances in natural language processing (NLP). Foundational work by Li and Liang [15] and Lester et al. [14] demonstrated how prompts can guide model behavior. In image enhancement, typical methods employ text embeddings or text-to-image generative models. For instance, Kawar et al. [11] introduced the Imagic model, using diffusion for image generation, while Brooks et al. [4] developed InstructPix2Pix, which combines GPT-3 with Stable Diffusion to perform image transformations based on natural language instructions.
Other notable approaches improve neural architectures for specialized tasks, particularly in medical imaging. Fischer, Bartler, and Yang [8] proposed a parameter-efficient segmentation method where continuous prompt tokens, prepended to the input and optimized via gradient descent, guide a frozen backbone to perform specific tasks without modifying the core model—enabling flexibility in adapting to new segmentation problems.
To our knowledge, no current models directly address prompt-based enhancement for endoscopic images. The most relevant work is InstructIR by Conde et al. [5], which serves as a primary reference for this study. InstructIR uses a GPT-4 language model to interpret prompts and perform image enhancement across seven degradation types: denoising, deblurring, dehazing, deraining, super-resolution, low-light enhancement, and general image improvement, using a dataset of 10,000 unique prompts.
The model employs NAFNet as the image backbone, featuring a 4-level encoder-decoder design. The encoder includes block configurations (2, 2, 4, 8) across levels 1 to 4, while the decoder uses (2, 2, 2, 2). Four middle blocks are inserted between encoder and decoder to enhance feature extraction. Skip connections use addition instead of concatenation. Task routing is managed by the Instruction Condition Block (ICB) embedded in both encoder and decoder components.
InstructIR shows limitations in medical imaging. As seen in Fig. 1, it struggles with specularity removal and causes data loss in bright regions despite brightness correction. To address this, we implemented Endo-LMSPEC [9] and Endo-STTN [6]. The lack of a prompt dataset for exposure and specularity hindered GPT-4 fine-tuning, so we used a BERT model, better suited for these tasks.
Prompt-driven generative models are increasingly relevant in medical imaging. González-González et al. [19] proposed a diffusion-based method for biomedical image translation guided by natural language. Although centered on microscopy, their work highlights the expanding use of language-guided diffusion in clinical imaging and supports developing prompt-based enhancement methods for specific modalities, as done here for colonoscopy.
4 Materials and Methods
The complete model is implemented in Python, integrating the pre-trained models (Endo-LMSPEC and Endo-STTN) dynamically based on user prompts. As seen in Fig. 2, the system uses BERT to process the prompts that guide the enhancement, whereas Endo-LMSPEC and Endo-STTN operate on image data only; the pipeline is designed for real-time use in endoscopic procedures. The workflow is as follows:
1. Upload and Preprocessing: the user uploads images; the system clears previous files, resizes the images, and generates specular masks.
2. Prompt Interpretation: the user provides a prompt, which a BERT model classifies.
3. Enhancement Selection: based on the predicted class, the system selects the enhancement methods (specularity removal, exposure correction, or both).
4. Enhancement Application: the selected models are applied to the images.
5. Comparison and Validation: original and enhanced images are displayed for user validation.
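The dispatch logic in steps 2 and 3 can be sketched as follows. This is a minimal illustration with hypothetical function names (`classify_prompt`, `select_models`); a keyword stub stands in for the fine-tuned BERT classifier described later in this section.

```python
# The six enhancement classes used throughout the paper.
CLASSES = [
    "specularity",
    "underexposure",
    "overexposure",
    "specularity+underexposure",
    "specularity+overexposure",
    "overexposure+underexposure",
]

def classify_prompt(prompt: str) -> str:
    """Stand-in for the BERT classifier: map a prompt to one of the six classes."""
    p = prompt.lower()
    has_spec = "white dots" in p or "specular" in p or "reflection" in p
    has_under = "underexposed" in p or "dark" in p or "shadow" in p
    has_over = "overexposed" in p or "bright" in p
    if has_spec and has_under:
        return "specularity+underexposure"
    if has_spec and has_over:
        return "specularity+overexposure"
    if has_over and has_under:
        return "overexposure+underexposure"
    if has_spec:
        return "specularity"
    if has_under:
        return "underexposure"
    return "overexposure"

def select_models(label: str) -> list[str]:
    """Map the predicted class to the enhancement models to run."""
    models = []
    if "specularity" in label:
        models.append("Endo-STTN")    # specularity inpainting
    if "exposure" in label:
        models.append("Endo-LMSPEC")  # exposure correction
    return models
```

In the full system, `classify_prompt` is replaced by a forward pass through the fine-tuned BERT model; the keyword rules above only illustrate the class-to-model routing.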
4.1 Exposure Correction
Endo-LMSPEC corrects exposure artifacts using a modified LMSPEC model that accepts different exposure errors as input: underexposed frames, overexposed frames, or both (see Fig. 3). It integrates Laplacian pyramid decomposition with U-Net sub-networks and GANs for multi-scale illumination correction. The general flow can be seen in Fig. 4, and the key steps include:
1. Patch Extraction: images are divided into patches based on intensity and gradient thresholds:
$$ P_i = \{ I(x, y) \mid \text{threshold}(I, \nabla I) \}. $$
2. Multi-Scale Decomposition: patches are decomposed using a Laplacian pyramid:
$$ L(P_i) = \{ P_i^1, P_i^2, \dots, P_i^L \}. $$
3. U-Net Sub-Networks: four U-Net-like sub-networks process the pyramid levels:
$$ O_i^l = \text{U-Net}(P_i^l). $$
4. Discriminator (GAN): a discriminator classifies patches as real or fake:
$$ D(O_i) \in \{ 0, 1 \}. $$
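As an illustration of the multi-scale decomposition step, a minimal Laplacian pyramid and its inverse can be sketched in plain NumPy. This uses simple 2x2 averaging and nearest-neighbour resampling instead of the Gaussian filtering of the actual LMSPEC implementation, and all function names here are ours:

```python
import numpy as np

def gaussian_down(img: np.ndarray) -> np.ndarray:
    """2x downsampling by averaging 2x2 blocks (simplified Gaussian step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img: np.ndarray, shape) -> np.ndarray:
    """Nearest-neighbour 2x upsampling, cropped to `shape`."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(patch: np.ndarray, levels: int):
    """Decompose a patch P_i into band-pass levels plus a low-frequency residual."""
    pyramid, current = [], patch.astype(float)
    for _ in range(levels - 1):
        down = gaussian_down(current)
        pyramid.append(current - upsample(down, current.shape))  # band-pass level
        current = down
    pyramid.append(current)  # low-frequency residual
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition: upsample the residual and add each band back."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = upsample(current, band.shape) + band
    return current
```

With even-sized inputs this toy decomposition is exactly invertible, which mirrors the property that lets each U-Net sub-network correct one frequency band without losing information.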
Loss Functions. The total loss is a weighted combination of a multi-scale L1 reconstruction term, an SSIM-based structural term, and an adversarial term:
$$ L_{\text{total}} = \sum_{l=1}^{L} \lambda_{l} \, \Vert O_i^l - P_{\text{GT}}^l \Vert_1 + \lambda_{\text{SSIM}} \, \big( 1 - \text{SSIM}(O_i, P_{\text{GT}}) \big) + \lambda_{\text{adv}} \, L_{\text{adv}}, $$
where \(O_i\) denotes the output generated by the model for the i-th sample and \(O_i^l\) its l-th pyramid level, \(P_{\text{GT}}\) represents the corresponding ground truth target (e.g., the reference image, frame, or patch of the dataset), and \(P_{\text{GT}}^l\) is its down-sampled version at the l-th level of the image pyramid. The SSIM term, \(\text{SSIM}(O_i, P_{\text{GT}})\), measures the structural similarity between the generated output and the ground truth.
4.2 Specularity Removal
To adapt STTN for specularity removal in endoscopy, we modified Zeng et al.’s method [23]. As shown in Fig. 5, following Daher et al. [6], the system segments specularities, relocates them to create pseudo ground truth, and trains STTN using Hyper-Kvasir videos with embedding, matching, and attending stages. A temporal GAN, initialized with random mask training, generates a discriminator loss to enhance the frames.
1. Pseudo Ground Truth Generation: specularity masks are generated using the Dichromatic Reflection Model (DRM) and translated to create pseudo ground truth.
2. Training: the model is trained using a temporal GAN with the loss function
$$ L = \lambda_{\text{hole}} \cdot L_{\text{hole}} + \lambda_{\text{valid}} \cdot L_{\text{valid}} + \lambda_{\text{adv}} \cdot L_{\text{adv}}, \qquad (6) $$
where
$$ L_{\text{hole}} = \frac{\Vert M^T \odot (Y^T - \hat{Y}^T) \Vert_1}{\Vert M^T \Vert_1}, \qquad (7) $$
$$ L_{\text{valid}} = \frac{\Vert (1 - M^T) \odot (Y^T - \hat{Y}^T) \Vert_1}{\Vert (1 - M^T) \Vert_1}, \qquad (8) $$
$$ L_{\text{adv}} = -\mathbb{E}_{z \sim P_{Y_1}(z)}[D(z)]. \qquad (9) $$
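Equations (7) and (8) are masked, area-normalised L1 losses. A direct NumPy transcription for a single frame (dropping the temporal superscript T; the weights shown are illustrative placeholders, not the values used in the paper) might look like:

```python
import numpy as np

def hole_loss(M, Y, Y_hat):
    """L_hole: L1 error inside the specularity mask M, normalised by mask area."""
    return np.abs(M * (Y - Y_hat)).sum() / np.abs(M).sum()

def valid_loss(M, Y, Y_hat):
    """L_valid: L1 error outside the mask, normalised by the unmasked area."""
    return np.abs((1 - M) * (Y - Y_hat)).sum() / np.abs(1 - M).sum()

def total_loss(M, Y, Y_hat, adv, lam_hole=1.0, lam_valid=1.0, lam_adv=0.01):
    """Weighted combination of Eq. (6); `adv` is the discriminator term L_adv."""
    return (lam_hole * hole_loss(M, Y, Y_hat)
            + lam_valid * valid_loss(M, Y, Y_hat)
            + lam_adv * adv)
```

The normalisation by mask area keeps the hole term comparable in magnitude across frames, whether a specular highlight covers a few pixels or a large region.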
Fig. 5. General pipeline of the Endo-STTN methodology [6].
4.3 BERT-Based Sequence Classification
To automatically categorize user instructions into corresponding enhancement operations, we fine-tuned a BERT-based sequence classification model. The task was framed as a six-class classification problem, where each prompt is mapped to one of the defined enhancement actions: specularity removal, underexposure correction, overexposure correction, or one of their pairwise combinations.
The model is initialized with pre-trained weights from the bert-base-uncased checkpoint and fine-tuned end-to-end using a linear classification head appended to the [CLS] token representation. The final layer outputs a six-dimensional logit vector, corresponding to the probability distribution over the enhancement classes.
The model is optimized using the categorical cross-entropy loss:
$$ L_{\text{CE}} = -\sum_{i=1}^{6} y_i \log \hat{y}_i, $$
where \( y_i \) is the ground truth one-hot encoded label, and \( \hat{y}_i \) is the softmax probability assigned to class \( i \). This loss is appropriate for multi-class classification tasks where each input belongs to exactly one category.
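For concreteness, the per-prompt loss can be computed from the six-dimensional logit vector as follows (a plain NumPy sketch; the function names are ours):

```python
import numpy as np

def softmax(logits):
    """Convert a logit vector into a probability distribution."""
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, true_class, n_classes=6):
    """Categorical cross-entropy for one prompt: -log p(true class)."""
    probs = softmax(logits)
    y = np.zeros(n_classes)
    y[true_class] = 1.0        # one-hot ground truth label
    return -np.sum(y * np.log(probs))
```

With uniform logits the loss equals log 6 (about 1.79), the chance-level value for six classes; a confident correct prediction drives it toward zero.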
Training was conducted using the Adam optimizer with a learning rate of \(1 \times 10^{-5}\), batch size of 32, and for a total of 15 epochs. Early stopping was applied based on validation F1-score to avoid overfitting. The dataset was split into 70% training, 15% validation, and 15% test partitions. Input prompts were truncated or padded to a maximum sequence length of 64 tokens.
Performance was evaluated using precision, recall, and F1-score for each class, as well as overall accuracy. The model achieved a macro-averaged F1-score of 0.89, confirming its ability to generalize across different phrasings of user intent. Detailed results per class are shown in Table 3.
4.4 Used Datasets
We use three datasets to train our system: (1) Endo4IE [10] for the image enhancement block, (2) a Structure from Motion (SfM)-based dataset for depth prediction using real colonoscopic videos, and (3) a custom prompt dataset for training the BERT model to classify and interpret enhancement prompts. Each dataset is essential to developing and evaluating its corresponding model component.
1) Endo4IE. To train the Endo-LMSPEC model, we used the Endo4IE dataset by Garcia-Vega et al. [10], created from frames extracted from EAD [2], EDD [1], and HyperKvasir [3]. Using CycleGAN [16], synthetic overexposed and underexposed versions of original endoscopic images were generated. The dataset includes: (1) 2,216 unmodified ground truth frames, (2) 1,231 overexposed frames, and (3) 985 underexposed frames. It was split into 70% training, 27% validation, and 3% test sets.
2) Prompt-Based Dataset for BERT Training. To train our BERT model for prompt classification, we curated a custom dataset of 1,150 prompts, each assigned to one of six key image enhancement classes. As no existing dataset addressed prompt-based enhancement, we ensured a balanced distribution across all classes (see Table 1). The most frequent classes, 'specularity removal with underexposure correction' and 'specularity removal with overexposure correction', contain 227 and 207 prompts, respectively, allowing the model to distinguish between single and combined enhancement types. Table 2 lists example prompts together with the target enhancement each one triggers.
Each prompt was carefully mapped to its corresponding enhancement class, allowing BERT to predict the correct technique from textual input. The dataset was divided into training and test sets, ensuring balanced representation across all six classes.
5 Results and Discussion
5.1 BERT Classification Results
Table 3 summarizes BERT’s performance on various image enhancement tasks prompted via natural language. The model achieved an overall accuracy of 0.88, demonstrating strong classification capabilities, though performance varied by class.
On the test set, the model performed best on specularity removal (F1-score: 0.93, precision: 0.94, recall: 0.91) and overexposure correction (F1-score: 0.92, precision: 0.90, recall: 0.95). Underexposure was more challenging (F1-score: 0.84), with high precision (0.95) but lower recall (0.75), indicating missed cases.
In the combined classes, 'Specularity + Underexposure' achieved an F1-score of 0.85 (precision: 0.76, recall: 0.96), while 'Specularity + Overexposure' scored 0.83 (precision: 0.94, recall: 0.74), suggesting that class overlap affected recall. Additional training data may improve these cases.
The best results were obtained for 'Overexposure + Underexposure', with an F1-score of 0.96, perfect recall (1.00), and precision of 0.92, likely due to the clear visual characteristics of this combination.
The model’s overall weighted precision, recall, and F1-score are 0.89, 0.88, and 0.88, respectively, indicating consistent performance across classes. Similar macro average values confirm that the model handles each class reliably, without major imbalances.
5.2 Enhancement Results
We evaluated the performance of the proposed enhancement models using three standard image quality metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM). These metrics were computed over the test split of the dataset, and results are reported separately for three types of photometric corruption: exposure errors, specularities, and frames affected by both simultaneously.
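For reference, MSE and PSNR can be computed directly, and SSIM approximated with a single global window. The reported results use the standard sliding-window SSIM, so the simplified `ssim_global` below is only illustrative:

```python
import numpy as np

def mse(ref, out):
    """Mean squared pixel error between reference and output."""
    return np.mean((ref.astype(float) - out.astype(float)) ** 2)

def psnr(ref, out, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    err = mse(ref, out)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

def ssim_global(ref, out, max_val=255.0):
    """SSIM over one global window (simplification of the sliding-window form)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = ref.astype(float), out.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give MSE 0, infinite PSNR, and SSIM 1.0; in practice a windowed SSIM (e.g., scikit-image's `structural_similarity`) is preferred because it captures local structural distortions.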
Table 4 presents the MSE and SSIM scores. Lower MSE values indicate smaller pixel-level differences from the ground truth, while higher SSIM scores reflect improved perceptual quality and structural preservation. Among all methods, Endo-LMSPEC achieved the lowest MSE for exposure correction (266.16) and the highest SSIM (0.811), significantly outperforming classical LMSPEC (MSE: 900.43, SSIM: 0.700). For frames corrupted by specularities, the best performance was achieved by the combined model (Endo-LMSPEC + Endo-STTN), which obtained the highest SSIM (0.834) and the lowest MSE (401.57). A similar trend is observed in the mixed artifact scenario, where the combined model again yielded the best SSIM (0.817) and lowest error.
Fig. 6. Qualitative comparison of enhancement methods across various photometric artifacts. Each row represents a different type of degradation: overexposure with specularities (top), isolated specularities (middle), and underexposure (bottom). From left to right: reference (regular) frame, corrupted input, Endo-LMSPEC output, Endo-STTN output, and the combined Endo-LMSPEC + Endo-STTN output. The combined method shows improved robustness in restoring contrast, reducing artifacts, and preserving anatomical detail across all scenarios.
Table 5 summarizes the PSNR results, where higher values correspond to better reconstruction quality. Endo-LMSPEC performed best on pure exposure correction (23.88 dB), whereas Endo-STTN was particularly effective at handling isolated specularities (22.64 dB). Notably, the combined model achieved the best overall PSNR for frames exhibiting both types of artifacts (23.60 dB), confirming its capacity to generalize across complex photometric conditions.
These quantitative findings are supported by the qualitative comparisons shown in Fig. 6. The top row depicts frames with overexposure and specular reflections; the middle row shows isolated specularities; and the bottom row displays underexposed scenes. In each case, the combined model demonstrates improved contrast restoration, reduced artifact visibility, and better anatomical continuity compared to individual methods. Notably, it mitigates over-smoothing and retains realistic color tones across all scenarios, illustrating the complementary strengths of Endo-LMSPEC and Endo-STTN when integrated.
Together, these results confirm that combining structural and temporal enhancement strategies yields a more robust and generalizable solution for photometric artifact correction in endoscopic imaging.
6 Conclusion and Future Work
This study highlights the effectiveness of customized image pre-processing to improve camera trajectory reconstruction in 3D colonoscopy using RNN-SLAM. By applying localized corrections to under- and overexposed regions, rather than global gamma adjustments, the Endo-LMSPEC model significantly enhances trajectory estimation and reconstruction quality. It proves more effective in mitigating the illumination artifacts and specular reflections common in colonoscopic imaging.
A key contribution is the introduction of a prompt-driven, spatially-aware enhancement framework. Leveraging a BERT-based model, clinicians can guide enhancement through natural language prompts. Unlike systems such as InstructIR, which apply global changes, our method enables fine-grained, context-aware corrections aligned with clinical needs. This human-in-the-loop approach supports real-time, intuitive interaction without manual annotations, advancing AI integration in clinical workflows.
Future work will focus on incorporating outlier rejection into 3D point cloud generation to improve mesh fidelity, and evaluating robustness under motion blur and camera-induced distortions. We also plan to expand comparisons with emerging prompt-based models—such as MedSegDiff, Med-PaLM, and InstructIR—to further contextualize our approach within current advancements in medical image enhancement.
References
Ali, S.: A deep learning framework for quality assessment and restoration in video endoscopy. Med. Image Anal. 68, 101900 (2021)
Ali, S., Zhou, F., Braden, B., et al.: An objective comparison of detection and segmentation algorithms for artifacts in clinical endoscopy. Sci. Rep. 10, 2748 (2020)
Borgli, H., Thambawita, V., Smedsrud, P.H., et al.: HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Sci. Data 7, 283 (2020)
Brooks, T., Holynski, A., Efros, A.A.: InstructPix2Pix: learning to follow image editing instructions (2023). https://arxiv.org/abs/2211.09800
Conde, M.V., Geigle, G., Timofte, R.: InstructIR: high-quality image restoration following human instructions (2024). https://arxiv.org/abs/2401.16468
Daher, R., Vasconcelos, F., Stoyanov, D.: A temporal learning approach to inpainting endoscopic specularities and its effect on image correspondence. Med. Image Anal. 90, 102994 (2023). https://doi.org/10.1016/j.media.2023.102994. https://www.sciencedirect.com/science/article/pii/S1361841523002542
Espinosa, R., Garcia-Vega, A., Ochoa-Ruiz, G., Lamarque, D., Daul, C.: Deep learning-based image exposure enhancement as a pre-processing for an accurate 3D colon surface reconstruction. In: XXIXème Colloque Francophone de Traitement du Signal et des Images (GRETSI 2023), Grenoble, France (2023). https://arxiv.org/abs/2304.03171. https://hal.archives-ouvertes.fr/hal-04170085
Fischer, M., Bartler, A., Yang, B.: Prompt tuning for parameter-efficient medical image segmentation (2022). https://arxiv.org/abs/2211.09233
Garcia-Vega, A., et al.: Multi-scale structural-aware exposure correction for endoscopic imaging. In: IEEE International Symposium on Biomedical Imaging, Cartagena de Indias, Colombia (2023)
Garcia-Vega, A., Ochoa, G., Espinosa, R.: Endoscopic real-synthetic over- and underexposed frames for image enhancement (2022). https://data.mendeley.com/datasets/3j3tmghw33/1
Kawar, B., et al.: Imagic: text-based real image editing with diffusion models (2023). https://arxiv.org/abs/2210.09276
Kim, B.S., et al.: Density clustering-based automatic anatomical section recognition in colonoscopy video using deep learning. Sci. Rep. 14(1), 872 (2024)
Lee, W., Nam, H.S., Seok, J.Y., et al.: Deep learning-based image enhancement in optical coherence tomography by exploiting interference fringe. Commun. Biol. 6(1), 464 (2023). https://doi.org/10.1038/s42003-023-04846-7
Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient prompt tuning (2021). https://arxiv.org/abs/2104.08691
Li, X.L., Liang, P.: Prefix-tuning: optimizing continuous prompts for generation (2021). https://arxiv.org/abs/2101.00190
Liu, Y., et al.: CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy, vol. 91, p. 101953. Elsevier (2021)
Luo, H., Gao, Y., Wu, Y., Liao, C., Yang, X., Cheng, K.T.: Real-time dense monocular SLAM with online adapted depth prediction network. IEEE Trans. Multimedia 21(2), 470–483 (2019). https://doi.org/10.1109/TMM.2018.2859034
Ma, R., Wang, R., Pizer, S., Rosenman, J., McGill, S.K., Frahm, J.-M.: Real-Time 3D reconstruction of colonoscopic surfaces for determining missing regions. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11768, pp. 573–582. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32254-0_64
Neha, F., Bhati, D., Shukla, D.K.: Generative AI models (2018–2024): advancements and applications in kidney care. BioMedInformatics 5(2) (2025). https://doi.org/10.3390/biomedinformatics5020018. https://www.mdpi.com/2673-7426/5/2/18
Phan, T.B., Trinh, D.H., Wolf, D., Daul, C.: Optical flow-based structure-from-motion for the reconstruction of epithelial surfaces. Pattern Recogn. 105, 107391 (2020)
Trinh, D.H., Daul, C.: On illumination-invariant variational optical flow for weakly textured scenes. Comput. Vis. Image Underst. 179, 1–18 (2019)
Yang, Y., Su, Z., Sun, L.: Medical image enhancement algorithm based on wavelet transform. Electron. Lett. 46, 120 – 121 (2010). https://doi.org/10.1049/el.2010.2063
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting (2020). https://arxiv.org/abs/2007.10247
Acknowledgments
The authors wish to acknowledge the Mexican Secretariat of Science, Humanities, Technology and Innovation (SECIHTI) for the support in terms of postgraduate scholarships in this project, and the Data Science Hub at Tecnologico de Monterrey for their support on this project. This work has been supported by Azure Sponsorship credits granted by Microsoft’s AI for Good Research Lab through the AI for Health program. The project was also supported by the French-Mexican ANUIES CONAHCYT Ecos Nord grant (MX 322537/FR M022M01).
© 2026 The Author(s), under exclusive license to Springer Nature Switzerland AG
Espinosa, R., Hernández, E., Cerriteño Magaña, J., Ochoa, G., Daul, C. (2026). Prompt Assisted Enhancement for Correcting Illumination Artifacts in Endoscopic Images. In: Martínez-Villaseñor, L., Vázquez, R.A., Ochoa-Ruiz, G. (eds) Advances in Soft Computing. MICAI 2025. Lecture Notes in Computer Science(), vol 16221. Springer, Cham. https://doi.org/10.1007/978-3-032-09037-9_4