Radiation Oncology/Artificial neural network

From Wikibooks, open books for an open world
  • A recent survey of AI applications in health care reported uses in major disease areas such as cancer or cardiology and artificial neural networks (ANN) as a common machine learning technique
  • Applications of ANN in health care include clinical diagnosis, prediction of cancer, speech recognition, prediction of length of stay, image analysis and interpretation (e.g. automated electrocardiographic (ECG) interpretation used to diagnose myocardial infarction), and drug development
  • Non-clinical applications have included improvement of health care organizational management, prediction of key indicators such as cost or facility utilization
  • ANN has been used as part of decision support models to provide health care providers and the health care system with cost-effective solutions to time and resource management
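As a minimal illustration of the kind of model these points describe, the sketch below trains a small feedforward ANN (one hidden layer, backpropagation) in plain NumPy. The XOR data is a toy stand-in for real clinical features and outcomes, not taken from any of the cited studies:

```python
import numpy as np

# Minimal feedforward ANN (2 inputs -> 4 hidden units -> 1 output),
# trained by backpropagation with gradient descent.
# Toy XOR data stands in for a real clinical dataset.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 1.0
losses = []
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activations
    p = sigmoid(h @ W2 + b2)       # predicted probabilities
    loss = np.mean((p - y) ** 2)   # mean squared error
    losses.append(loss)

    # Backward pass: gradients of the MSE through both sigmoid layers
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0, keepdims=True)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
```

Real clinical ANNs differ mainly in scale (more features, layers, and regularization), not in this basic forward/backward structure.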

The most important types of neural networks, which form the basis for most pre-trained models in deep learning, are covered in the sections below:

Deep Learning

  • Jürgen Schmidhuber. Neural Netw. 2015 Jan. Deep learning in neural networks: an overview (PMID: 25462637)

Machine Learning

  • Michael Rowe. Acad Med. 2019 Oct. An Introduction to Machine Learning for Clinicians (PMID: 31094727)
  • Yalin Baştanlar et al. Methods Mol Biol. 2014. Introduction to machine learning (PMID: 24272434)
  • Jenni A. M. Sidey-Gibbons and Chris J. Sidey-Gibbons. Machine learning in medicine: a practical introduction (PMID: 30890124)

Artificial Intelligence

  • Rene Y Choi et al. Transl Vis Sci Technol. 2020. Introduction to Machine Learning, Neural Networks, and Deep Learning (PMID: 32704420)

Computational Intelligence


Large Language Models


Examination Performance

  • Harvard, 2023 PMID 37356806 -- "Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 Versus GPT-3.5 in a Breast Imaging Pilot" (Rao A, J Am Coll Radiol. 2023 Jun 21;S1546-1440(23)00394-0.)
    • GPT3.5 and GPT4 responses for breast cancer screening and breast pain scenarios, compared against ACR Appropriateness Criteria
    • Outcome: For breast cancer screening, open-ended score 1.83/2.00 and select-all-that-apply scores of 89% (GPT3.5) vs 98% (GPT4). For breast pain, 1.12/2.00 and 58% vs 78%
    • Conclusion: Feasibility of using LLMs for radiologic decision making
  • Brown University; 2023 PMID 37541614 -- "Performance of Three Large Language Models on Dermatology Board Examinations" (Mirza FN, J Invest Dermatol. 2023 Aug 2;S0022-202X(23)02486-7.)
    • [No text available via PubMed]
  • Rothschild Foundation Hospital, France; 2023 PMID 37537126 -- "Success of ChatGPT, an AI language model, in taking the French language version of the European Board of Ophthalmology examination: A novel approach to medical knowledge assessment" (Panthier C, J Fr Ophtalmol. 2023 Aug 1;S0181-5512(23)00305-4.)
    • Performance of ChatGPT on French language version of European Board of Ophthalmology examination
    • Outcome: Success rate of 91% across all categories, with answers generated rapidly
    • Conclusion: ChatGPT could be a valuable tool in medical education and knowledge assessment
  • Mayo Clinic; 2023 PMID 37529688 -- "Evaluating large language models on a highly-specialized topic, radiation oncology physics" (Holmes J, Front Oncol. 2023 Jul 17;13:1219326.)
    • Custom exam of 100 radiation oncology physics questions. GPT3.5, GPT4, Bard, and BLOOMZ compared against medical physicists and non-experts
    • Outcome: GPT4 outperformed other LLMs. High level of consistency, whether correct or incorrect
    • Conclusion: Potential for LLM to work alongside radiation oncology experts as knowledgeable assistants
  • Multi-institutional, 2023 PMID 36929393 -- "Using ChatGPT to evaluate cancer myths and misconceptions: artificial intelligence and cancer information" (Johnson SB)
    • Following expert review, the percentage of overall agreement for accuracy was 100% for NCI answers and 96.9% for ChatGPT outputs for questions 1 through 13 (κ = −0.03, standard error = 0.08)
    • There were few noticeable differences in the number of words or the readability of the answers from NCI or ChatGPT
    • Overall, the results suggest that ChatGPT provides accurate information about common cancer myths and misconceptions
  • Erlangen, 2023 arXiv -- "Benchmarking ChatGPT-4 on ACR Radiation Oncology In-Training (TXIT) Exam and Red Journal Gray Zone Cases: Potentials and Challenges for AI-Assisted Medical Education and Decision Making in Radiation Oncology" (Huang Y)
    • For the TXIT exam, ChatGPT-3.5 and ChatGPT-4 achieved scores of 63.65% and 74.57%, respectively
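A note on the agreement statistics quoted above: Cohen's kappa corrects observed agreement for the agreement expected by chance, κ = (p_o − p_e)/(1 − p_e). When raters label almost everything the same way (as in the cancer-myths study, where nearly all answers were rated accurate), expected agreement p_e is already near 1, so κ can sit near zero or go negative even with very high percent agreement. The sketch below shows the computation on small hypothetical rating lists (not the study's data):

```python
# Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance agreement from the
# raters' marginal label frequencies. Ratings below are hypothetical.

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(a) == len(b)
    n = len(a)
    cats = sorted(set(a) | set(b))
    # Observed agreement: fraction of items both raters labelled the same
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independence, from each rater's marginals
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

# Moderate agreement: 4 of 5 items match
print(round(cohens_kappa([1, 1, 1, 1, 0], [1, 1, 1, 0, 0]), 3))  # → 0.545

# Skewed marginals: 75% agreement, yet kappa is negative, because both
# raters label almost everything "accurate" and chance agreement is high
print(round(cohens_kappa([1] * 7 + [0], [1] * 6 + [0, 1]), 3))   # → -0.143
```

This marginal-skew effect is one plausible reading of why the study could report ~97–100% accuracy agreement alongside κ ≈ −0.03.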