A case study on medical doctors using AI tools in diagnostics and treatment

From The Embassy of Good Science
What is this about?

The use case describes how AI decision-support systems assist medical professionals such as radiologists and vascular surgeons in diagnostics and treatment. AI tools developed by companies like Oxipit and Afliant analyze medical images, detect abnormalities, support surgical planning, and predict possible complications. For example, Oxipit’s chest X-ray system can automatically identify common radiological findings and even autonomously report high-confidence normal images. Afliant’s AI agent supports the treatment of abdominal aortic aneurysms by extracting data from CT scans, suggesting surgical options, and predicting post-operative risks. These systems reshape clinical workflows by supporting decision-making while keeping clinicians responsible for supervision and final judgment.
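The confidence-gated workflow described above, where only high-confidence normal images are reported autonomously and everything else goes to a clinician, can be sketched roughly as follows. This is a minimal illustration only: the threshold value, field names, and labels are assumptions, not taken from Oxipit's or Afliant's actual systems.

```python
from dataclasses import dataclass

# Assumed cut-off for autonomous reporting; a real system's threshold
# would be set and validated by the vendor and the clinical site.
NORMAL_AUTOREPORT_THRESHOLD = 0.99

@dataclass
class AIFinding:
    label: str         # e.g. "normal" or "nodule" (illustrative labels)
    confidence: float  # model's probability for the label, 0.0-1.0

def triage(finding: AIFinding) -> str:
    """Route a chest X-ray result: auto-report only high-confidence
    normal studies; everything else is sent for radiologist review."""
    if finding.label == "normal" and finding.confidence >= NORMAL_AUTOREPORT_THRESHOLD:
        return "auto-report"
    return "radiologist-review"
```

Note that any abnormal finding is routed to a human regardless of confidence; only the low-risk "confident normal" path is automated, which is what keeps the clinician in the loop for every uncertain or positive case.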

Why is this important?

This use case is important because it demonstrates how AI can improve efficiency, accuracy, and safety in healthcare. By analyzing large volumes of medical images and providing decision support, AI can reduce clinicians’ workload and help detect conditions that might otherwise be missed. In surgery, AI can improve planning and reduce risks by predicting complications. At the same time, it highlights the need for human oversight, ensuring that clinicians remain accountable and capable of interpreting, validating, and explaining AI-generated recommendations.

For whom is this important?

What are the best practices?

Best practices include maintaining strong human-in-the-loop oversight, where clinicians supervise and validate AI outputs rather than relying on them blindly. Medical professionals should develop algorithmic literacy to understand AI confidence scores, heatmaps, and potential errors. AI recommendations should be documented, including when they are accepted or overridden, to ensure accountability.
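The documentation practice above, recording each AI recommendation together with whether the clinician accepted or overrode it, could be implemented as a simple append-only audit log. The sketch below is a hypothetical illustration; the record fields, file name, and action labels are assumptions, not part of any described system.

```python
import datetime
import json

def log_ai_decision(case_id, ai_suggestion, confidence,
                    clinician_action, rationale=""):
    """Append one audit record capturing what the AI suggested and how
    the clinician responded ("accepted" or "overridden")."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "ai_suggestion": ai_suggestion,
        "confidence": confidence,
        "clinician_action": clinician_action,
        "rationale": rationale,  # especially important when overriding
    }
    # One JSON object per line keeps the log append-only and easy to audit.
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only log like this supports accountability in both directions: it shows when clinicians deferred to the AI and when, and why, they exercised independent judgment.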

Systems should be regularly monitored and validated to detect failures or biases. Transparency with patients and colleagues about AI-supported decisions is also essential. Finally, integrating AI with training tools such as surgical simulators can help clinicians evaluate AI recommendations, improve their judgment, and safely adapt workflows to AI-assisted healthcare environments.
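One practical monitoring signal for the regular validation mentioned above is the clinician override rate: if clinicians start overriding the AI much more often than the historical baseline, that can flag drift, bias, or a data-quality problem. The sketch below assumes such an audit trail exists; the tolerance value is an illustrative assumption, not an established standard.

```python
def override_rate(decisions):
    """Fraction of logged clinician actions that overrode the AI
    (each element is "accepted" or "overridden")."""
    if not decisions:
        return 0.0
    return sum(1 for d in decisions if d == "overridden") / len(decisions)

def needs_review(recent, baseline, tolerance=0.10):
    """Flag the system for revalidation when the recent override rate
    exceeds the historical baseline by more than `tolerance`
    (the 0.10 default is an assumed, illustrative threshold)."""
    return override_rate(recent) > override_rate(baseline) + tolerance
```

A check like this does not replace formal clinical validation, but it gives teams an early, cheap warning that the AI's recommendations and clinicians' judgment are diverging.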