2025 DNA Day Essay Contest: Full Essays


1st Place: Ethan Poh, Grade 10
Teacher: Mrs. Michelle Masters
School: Jumeirah English Speaking School, Arabian Ranches
Location: Dubai, United Arab Emirates

We have come a long way from the days of Mendelian inheritance, when genes were thought to follow rigid patterns of simple dominance and recessiveness. Now, AI-based genetic testing is opening new vistas in the field of heredity, showing the complex interactions among genes, the environment, and other factors. AI can sift through large genomic databases, revealing patterns that are invisible to human eyes. But as this technology revolutionizes the field of medicine, it also raises a host of challenges—algorithmic bias, ethical dilemmas, and the risk of becoming overdependent on AI. The challenge ahead is not just harnessing AI’s potential but ensuring its integration into healthcare remains transparent and accountable.

AI’s role in genetic testing lies in its ability to process large genomic datasets rapidly and accurately, enabling more precise diagnoses and personalized treatment strategies. Traditional genetic screening techniques, such as karyotyping and polymerase chain reaction (PCR), are limited in resolution and in the types of genetic variation they can detect. Karyotyping, for instance, visualizes large-scale chromosomal abnormalities but cannot detect smaller, more subtle mutations such as single-nucleotide polymorphisms (SNPs) or small insertions and deletions (indels) that contribute to complex diseases [1]. In contrast, IBM Watson for Genomics applies advanced AI algorithms to analyze whole-genome sequencing (WGS) and whole-exome sequencing (WES) data, allowing for the identification of both rare mutations and common genetic variants linked to various diseases, including cancer [2]. By leveraging natural language processing (NLP) and machine learning, Watson cross-references a patient’s genetic profile with an extensive database of scientific literature, clinical trials, and previously documented patient outcomes [3].

Furthermore, AI augments personalized medicine by integrating genetic data with clinical variables like electronic health records and biomarker levels [4]. This enables tailored therapeutic interventions that optimize drug efficacy and minimize side effects. For instance, AI has proven to be valuable in pharmacogenomics, predicting drug responses based on genetic polymorphisms and advancing personalized therapeutics [5]. Additionally, AI can democratize genetic testing by automating genomic data analysis, reducing reliance on genetic counselors and improving patient access, especially in underserved areas [6]. This improves workflow productivity, easing the burden on clinicians amid growing demand for genetic testing [7].

Despite its promise, AI-driven genetic interpretation is far from flawless. One concern is the dependability and precision of AI model predictions. Although AI identifies correlations within genetic data, it lacks causal understanding [8]. Machine-learning algorithms trained on incomplete datasets may yield erroneous risk projections, leading to misdiagnosis or unnecessary distress [10]. Moreover, the interpretability of AI insights remains a challenge; most models are “black boxes” [16], making it difficult for clinicians and patients to understand the logic behind predictions [10].

Bias in AI algorithms is another critical issue. Historically, genetic research has focused on European populations, leading to discrepancies in the accuracy of genetic risk predictions for underrepresented groups [12]. If biased data is used to train AI, these disparities may be perpetuated, worsening health inequalities rather than mitigating them [13]. For example, research has shown that applying European-trained PRS models to Japanese and African-American populations resulted in a ~70% decrease in risk prediction accuracy compared to European populations [14], limiting the settings in which AI in genetics can reliably be applied.
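
To make the transferability problem concrete: a polygenic risk score is essentially a weighted sum of risk-allele counts, so effect sizes estimated in one ancestry group carry straight into the score for everyone. A minimal sketch of that arithmetic (Python/NumPy, with entirely hypothetical effect sizes, genotypes, and reference values):

```python
import numpy as np

# Hypothetical GWAS effect sizes (log odds ratios) for 5 SNPs,
# estimated in a European-ancestry training cohort.
effect_sizes = np.array([0.12, -0.05, 0.30, 0.08, 0.22])

# One individual's genotype: count of risk alleles (0, 1, or 2) at each SNP.
genotype = np.array([1, 2, 0, 1, 2])

# The polygenic risk score is the dot product of dosages and effect sizes.
prs = float(np.dot(genotype, effect_sizes))
print(f"Raw PRS: {prs:.3f}")

# Scores are usually interpreted relative to a reference population, e.g. as a
# z-score; if the reference distribution comes from a different ancestry group,
# both the mean and the variance can be badly misestimated.
ref_mean, ref_sd = 0.45, 0.20   # hypothetical European reference distribution
print(f"Standardized PRS: {(prs - ref_mean) / ref_sd:.2f}")
```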

For AI to serve as an effective adjunct to traditional genetic testing, its role must be clearly defined. AI should be used primarily as an interpretive tool, not as a standalone decision-maker. Clinicians must retain control, integrating AI-generated insights with genetic counseling to ensure patient-centered decision-making [5]. Patients should also decide the extent of AI involvement in their testing to ensure autonomy—whether through statistical analysis alone or AI-driven insights incorporating lifestyle and environmental factors [4]. AI should enhance classical genetic test results by providing probabilistic risk predictions and functional annotations for variants of uncertain significance (VUS) [15]. Unlike traditional genetic testing, which may leave VUS results open-ended, AI can infer likely pathogenicity from vast genomic datasets, facilitating clinical decision-making [1]. However, these probabilistic predictions ought to be presented cautiously to avoid misleading patients into thinking that risk estimates are deterministic.
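
One way such probabilistic VUS annotation can be framed is as a supervised classifier trained on variants of known significance that reports a probability rather than a verdict. The following toy sketch (scikit-learn, with synthetic features and labels invented for illustration) shows the idea; it is not any specific clinical tool:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training set: each variant described by
# [conservation score, population allele frequency, in-silico damage score].
n = 500
X = rng.random((n, 3))
# Hypothetical labels: variants that are conserved, rare, and predicted
# damaging are more often pathogenic.
y = ((X[:, 0] > 0.6) & (X[:, 1] < 0.3) & (X[:, 2] > 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A variant of uncertain significance: report a probability, not a verdict.
vus = np.array([[0.75, 0.01, 0.80]])
p_pathogenic = model.predict_proba(vus)[0, 1]
print(f"Estimated probability of pathogenicity: {p_pathogenic:.2f}")
```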

With the world evolving at its current rate, AI in genetic testing will inevitably find its place as a medical standard. The question, then, is not if AI will dominate genetic medicine, so much as how we choose to employ it: Will we embrace its promise blindly, or proceed with caution, weighing ethical dilemmas and uncertainties? Our decisions will not only define our relationship with emerging technologies but also determine whether equity is preserved in a fundamentally human-led system.

CITATIONS/REFERENCES

[1] Libbrecht, M. W., & Noble, W. S. (2015). Machine Learning Applications in Genetics and Genomics. Nature Reviews Genetics, 16(6), 321–332. https://doi.org/10.1038/nrg3920.
[2] IBM Watson Health. (n.d.). Watson for Genomics. IBM. Retrieved from https://www.ibm.com/watson-health.
[3] Markowetz, F., & Spang, R. (2017). Data integration in genome-wide analyses. Nature Reviews Genetics, 18(3), 123-138.
[4] Beam, A. L., & Kohane, I. S. (2018). Big Data and Machine Learning in Health Care. JAMA, 319(13), 1317–1318. https://doi.org/10.1001/jama.2017.18391.
[5] Wang, S. (2019). Predicting Drug Responses by Propagating Interactions through Text-Enhanced Drug-Gene Networks. arXiv preprint arXiv:1906.08089. https://arxiv.org/abs/1906.08089.
[6] Topol, E. J. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
[7] Esteva, A., et al. (2019). A Guide to Deep Learning in Healthcare. Nature Medicine, 25(1), 24–29. https://doi.org/10.1038/s41591-018-0316-z.
[8] Bzdok, D., Altman, N., & Krzywinski, M. (2018). Points of Significance: Statistics versus Machine Learning. Nature Methods, 15(4), 233–234. https://doi.org/10.1038/nmeth.4642.
[10] Lipton, Z. C. (2016). The Mythos of Model Interpretability. arXiv preprint arXiv:1606.03490. https://doi.org/10.48550/arXiv.1606.03490.
[11] Sweeney, L., et al. (2019). Data Privacy in the Age of AI. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.6e3a02f6.
[12] Martin, A. R., et al. (2019). Human Demographic History Impacts Genetic Risk Prediction across Diverse Populations. American Journal of Human Genetics, 104(4), 635–649. https://doi.org/10.1016/j.ajhg.2019.02.002.
[13] Choudhury, A., & Pantazatos, S. P. (2020). The Ethics of AI in Medicine: From Bias to Privacy. Annual Review of Biomedical Data Science, 3, 71–94. https://doi.org/10.1146/annurev-biodatasci-090419-102803.
[14] Márquez-Luna, C., Loh, P.-R., South Asian Type 2 Diabetes (SAT2D) Consortium, SIGMA Type 2 Diabetes Consortium, & Price, A. L. (2016). Multi-ethnic Polygenic Risk Scores Improve Risk Prediction in Diverse Populations. bioRxiv. https://doi.org/10.1101/051458.
[15] Richards, S., et al. (2015). Standards and Guidelines for the Interpretation of Sequence Variants. Genetics in Medicine, 17(5), 405–424. https://doi.org/10.1038/gim.2015.30.
[16] IBM. (n.d.). Black Box AI. IBM. Retrieved March 3, 2025, from https://www.ibm.com/think/topics/black-box-ai.


2nd Place: Alex Mi, Grade 10
Teacher: Mrs. Brandy Yost
School: Carmel High School
Location: Carmel, Indiana

What separates man from machine? The pivotal role of Artificial Intelligence (AI) in healthcare and genomics stems from its unrivaled speed and accuracy in data analysis, a level beyond human ability. Modern systems programmed through advanced neural networks and machine learning can identify genetic variants, visualize drug-gene interactions, and evaluate polygenic risk scores, unified by the purpose of improving diagnostics and aiding drug development [1]. Despite its adeptness, the increasingly widespread use of AI in healthcare gives rise to controversy regarding AI’s ethics and reliability. Regulating the extent to which AI is implemented in healthcare and genomics is critical to maximizing AI’s effectiveness and mitigating the effects of its drawbacks.

Among AI’s greatest strengths in healthcare and genomics is its ability to apply advanced algorithms to predict disease risks and develop personalized therapies. For example, GENEVIC, an AI system based on OpenAI’s ChatGPT 3.5, automates genetic data analysis and literature searches, ranking the top 100 gene variants linked to Alzheimer’s and schizophrenia. These rankings help construct a patient’s Polygenic Score (PGS), which assesses a patient’s risk for genetic diseases and guides personalized care [2]. Moreover, AI is invaluable for drug repurposing, as it cross-applies existing medications to new or understudied conditions. The revolutionary TxGNN, which uses graph neural networks and metric learning modules to predict drug indications and contraindications across 17,080 diseases, improves prediction accuracy for drug indications by 49.2% and for contraindications by 35.1%, demonstrating AI’s potential in drug development [3]. By scaling genetic data analysis, AI has the potential to provide more accurate diagnostics, tailored therapies, and new treatments for previously untreatable conditions.

Despite its advantages, biotechnological aid in healthcare carries significant shortcomings that must be addressed. Primarily, AI systems frequently misdiagnose patients from populations underrepresented in their training data, exposing racial and gender biases. Convolutional Neural Networks (CNNs) achieve roughly half the expected diagnostic accuracy when evaluating skin conditions in Black patients [4], and DeepGestalt, an AI for facial dysmorphology analysis, was far less reliable in identifying Down syndrome in individuals of African versus European ancestry (36.8% versus 80%) [5]. Gender biases further intensify these issues. Predictive models for cardiovascular disease (CD) are often trained predominantly on male data, which increases prediction errors for women, since men show different patterns of risk-gene expression for CD [4]. This undermines the accuracy of AI systems while raising serious concerns about equity in healthcare.

Alongside such discrimination, AI raises multifaceted ethical and privacy concerns that are similarly detrimental. A controversial ethical drawback is the lack of transparency in AI decision-making, often referred to as the “black box” problem. Most advanced AI algorithms operate in ways that are opaque even to the scientists who build them, making it difficult to understand how decisions are made or to identify errors in the algorithm’s reasoning [4]. In addition, the use of personal data to train AI models often occurs without explicit consent, raising serious privacy concerns [6]. These limitations still lack effective solutions, underscoring the risks of overusing and depending on biotechnology – especially when human lives are at stake if AI systems fail.

Focusing on its irrefutable strengths over its drawbacks, I would want AI to create an all-encompassing analysis of my risks for rare diseases, genetic predispositions, and ideal treatment options. Traditional, manually examined tests often focus on small-scale single-gene mutations. Conversely, AI processes thousands of genetic variants by calculating polygenic risk scores, providing a personalized assessment of my risk for developing Alzheimer’s disease, type 1 diabetes, cardiovascular disease, inflammatory bowel disease, and various cancers (lung, breast, prostate) [7]. Additionally, machine-learning approaches like neural and Bayesian networks can model gene-gene interactions, known as epistasis, to reveal my inherent likelihood of disease development [8]. Lastly, specialized systems enhance pharmacogenomics by predicting how my body systems might respond to certain medications, thus optimizing drug efficacy and minimizing harmful side effects [9]. The myriad benefits provided by biotechnology accelerate common medical processes, yet its application should still be modulated.
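
Epistasis is exactly the kind of non-additive signal that per-variant tests miss but that tree- or network-based learners can capture. A minimal sketch under a synthetic XOR-style two-locus interaction (all data and model choices here are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Genotypes at two loci, coded 0/1/2 (number of minor alleles).
g1 = rng.integers(0, 3, size=2000)
g2 = rng.integers(0, 3, size=2000)

# Synthetic epistatic trait: risk only when exactly one locus carries a minor
# allele (an XOR-like interaction, largely invisible to additive models).
y = ((g1 > 0) ^ (g2 > 0)).astype(int)
X = np.column_stack([g1, g2])

additive = LogisticRegression().fit(X, y)                     # additive model
interaction = DecisionTreeClassifier(max_depth=4).fit(X, y)   # can model epistasis

print("Additive model accuracy:   ", additive.score(X, y))
print("Interaction-aware accuracy:", interaction.score(X, y))
```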

By addressing issues like bias, transparency, and privacy, the full potential of AI to improve diagnostics, treatments, and patient outcomes can be harnessed. If AI tools were regulated, trained on diverse and representative datasets, and used in moderation, they could become leading tools for improving our understanding of genomics. Ultimately, even the most advanced technology is prone to mistakes, just like humans. AI is invaluable yet imperfect for genomics, and setting adequate guidelines for its implementation is the crucial next step to preserving patient safety and justice within healthcare.

CITATIONS/REFERENCES

  1. Alharbi, W. S., & Rashid, M. (2022). A review of deep learning applications in human genomics using next-generation sequencing data. Human Genomics, 16(1). https://doi.org/10.1186/s40246-022-00396-x
  2. Nath, A., Mwesigwa, S., Dai, Y., Jiang, X., & Zhao, Z. (2024). GENEVIC: GENetic data exploration and visualization via intelligent interactive console. Bioinformatics, 40(10). https://doi.org/10.1093/bioinformatics/btae500
  3. Huang, K., Chandak, P., Wang, Q., Havaldar, S., Vaid, A., Leskovec, J., Nadkarni, G. N., Glicksberg, B. S., Gehlenborg, N., & Zitnik, M. (2024). A foundation model for clinician-centered drug repurposing. Nature Medicine, 30, 1–13. https://doi.org/10.1038/s41591-024-03233-x
  4. Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10), 100347. https://doi.org/10.1016/j.patter.2021.100347
  5. Dias, R., & Torkamani, A. (2019). Artificial intelligence in clinical and genomic diagnostics. Genome Medicine, 11(1). https://doi.org/10.1186/s13073-019-0689-8
  6. Farhud, D. D., & Zokaei, S. (2021). Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iranian Journal of Public Health, 50(11). https://doi.org/10.18502/ijph.v50i11.7600
  7. Lewis, C. M., & Vassos, E. (2020). Polygenic risk scores: from research tools to clinical instruments. Genome Medicine, 12(1). https://doi.org/10.1186/s13073-020-00742-5
  8. McKinney, B. A., Reif, D. M., Ritchie, M. D., & Moore, J. H. (2006). Machine learning for detecting gene-gene interactions: a review. Applied Bioinformatics, 5(2), 77–88. https://doi.org/10.2165/00822942-200605020-00002
  9. Jhawat, V., Gupta, S., Gulia, M., & Nair, A. (2023, January 1). Chapter 5 – Artificial intelligence and data science in pharmacogenomics-based drug discovery: Future of medicines (A. K. Tyagi & A. Abraham, Eds.). ScienceDirect; Academic Press. https://www.sciencedirect.com/science/article/abs/pii/B9780323983525000057


3rd Place: Dharahaas Nalla, Grade 11
Teacher: Ms. Maria Zeitlin
School: Smithtown High School East
Location: St. James, New York

Artificial intelligence (AI) is changing healthcare, from the analysis of DNA in assessing genetic risks of diseases to the use of those analyses in treating patients. AI can process vast amounts of genetic data to detect subtle patterns that traditional methods might miss, thus offering better personalized medicine. While the advances from the use of AI in diagnosing rare diseases and detecting cancer are amazing, there are significant disadvantages, such as bias, data privacy, and lack of empathy, that warrant deliberation.

AI is incredibly advantageous in interpreting complex results from genetic testing because it processes vast amounts of information at a fine-grained level without human involvement. AI can more effectively scan genetic information and compare it to medical databases to find irregularities. Unlike traditional genetic tests that only detect mutations, AI can analyze environmental and lifestyle factors, thereby offering a more comprehensive understanding of disease risk (8). Recent studies have shown how AI-powered tools such as AMELIE and AVADA have significantly accelerated diagnoses in critically ill pediatric patients by improving genome sequencing for rare-disorder detection (9). For example, the Exomiser and DeepGestalt diagnostic AI applications boast more than 90% accuracy in identifying genomic variants associated with rare diseases—and they do so faster than a human can reach a diagnosis (2). Furthermore, AI introduces another advantage with Explainable AI (xAI), which indicates how the system arrived at a particular diagnosis. This is helpful for physicians reviewing the AI output, as it must provide details about why it arrived at its diagnosis (3). Previously, it was hard for physicians to understand how AI came to a certain conclusion, thereby reducing trust in AI. This lack of transparency often led to hesitation in adopting AI in genetic testing, as physicians couldn’t verify the reasoning behind the AI’s predictions. Additionally, AI can evaluate more than just genetic results by integrating acquired and familial health history with genetics, leading to a more efficient and effective genetic diagnosis (5).

On the other hand, AI in healthcare isn’t perfect. One of the biggest challenges is algorithmic bias. Because AI models are trained on non-diverse datasets, they may provide inaccurate predictions for racial minorities (1). This bias can lead to misdiagnosis, inaccurate risk assessments, and inconsistencies in treatment recommendations, disproportionately affecting these groups (1). Another key concern is data privacy. Genetic information is very sensitive, and mishandling this data can lead to third-party exploitation. AI relies on large-scale data analysis to increase accuracy, but storing this data introduces security risks, such as unauthorized access, which could lead to the exposure of personal information (1). Regulations such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) are meant to safeguard patient data, but ensuring AI follows these standards remains an issue (1). Researchers say that although AI can improve genetic testing, it should work with human knowledge and not replace it, with the clinician’s judgment remaining the more important deciding factor (1).

A further consideration is how genetic results are communicated. Although AI is amazing at dealing with complex data, it lacks the human empathy and sensitivity needed to deliver a life-changing diagnosis. Receiving genetic test results from AI compared with a doctor can increase anxiety and distress, as it provides data without emotion (6). Healthcare professionals can adjust their approach based on the patient’s emotional response, thereby providing help AI cannot replicate. This is especially necessary in conditions with a high emotional impact or where one’s lifestyle must be changed immediately, making empathy and context just as important as clinical details (6).

In conclusion, AI has revolutionized genetic testing, allowing for the rapid and accurate processing of vast DNA datasets. Systems such as Exomiser and DeepGestalt, along with advancements like Explainable AI, provide improvements over traditional methods by integrating genetic, environmental, and familial factors. However, there are significant limitations, including algorithmic bias, data privacy concerns, and AI’s lack of empathetic communication. To fully harness AI’s potential, it is essential to implement safeguards ensuring these tools work alongside human professionals rather than replacing them. AI should support doctors rather than replace them, so patients receive both accurate medical insights and compassionate care. As AI continues to evolve, its ability to integrate genetic, environmental, and lifestyle data could revolutionize healthcare, but it must be used responsibly to ensure ethical and effective patient-centered care (8).

CITATIONS/REFERENCES

(1) Morley, Jessica, et al. “The Ethics of AI in Healthcare: A Mapping Review.” Social Science & Medicine 260, 2020.

(2) Dias, Raquel, and Ali Torkamani. “Artificial Intelligence in Clinical and Genomic Diagnostics.” Genome Medicine 11 (2019): 70.

(3) Novakovsky, Gherman, et al. “Obtaining Genetics Insights from Deep Learning via Explainable Artificial Intelligence.” Nature Reviews Genetics 24 (2023): 125-136.

(4) Abdallah, Shenouda, et al. “The Impact of Artificial Intelligence on Optimizing Diagnosis and Treatment Plans for Rare Genetic Disorders.” Cureus 15 (2023): e46860.

(5) Özçelik, Firat, et al. “The Impact and Future of Artificial Intelligence in Medical Genetics and Molecular Medicine: An Ongoing Revolution.” Functional & Integrative Genomics 24 (2024): 138.

(6) Meng, L., R. Attali, T. Talmy, Y. Regev, and N. Mizrahi. “Evaluation of an Automated Genome Interpretation Model for Rare Disease Routinely Used in a Clinical Genetic Laboratory.” Genetics in Medicine 25, 2023.

(7) Quazi, S. “Artificial Intelligence and Machine Learning in Precision and Genomic Medicine.” Medical Oncology 39, 2022.

(8) Xu, J., P. Yang, S. Xue, B. Sharma, M. Sanchez-Martin, et al. “Translating Cancer Genomics Into Precision Medicine with Artificial Intelligence: Applications, Challenges and Future Perspectives.” Human Genetics 138, 2019.

(9) De La Vega, Francisco M., et al. “Artificial Intelligence Enables Comprehensive Genome Interpretation and Nomination of Candidate Diagnoses for Rare Genetic Diseases.” Genome Medicine 13 (2021): 153.

(10) Chustecki, Margaret. “Benefits and Risks of AI in Health Care: Narrative Review.” Interactive Journal of Medical Research 13 (2024): e53616.

Honorable Mentions


Sailesh Vijayaragavan Badri
William Lyon Mackenzie Collegiate Institute
Toronto, Canada
Teacher: Dr. Elaine Sinclair

In the past, owning a computer seemed unnecessary and absurd. However, over the past few decades the percentage of U.S. households with a personal computer grew from a mere 8% in 1984 to over 95% in 2024 (US Census Bureau, 2024). This boom in computer usage shows how a technology first met with skepticism later turned into an everyday necessity. The current AI industry heavily resembles the computer industry of the 1980s, and we should not be hesitant to make the best use of it. Its ability to assess varied data, find underlying patterns, and improve diagnostic reliability makes it a potentially essential tool in genetic analysis.

One of the applications of AI in genetic testing is its ability to analyze non-coding DNA (ncDNA), also known as “junk DNA.” About 98.5% of the human genome is made of these ncDNA sequences, and scientists long assumed they were not relevant since they do not directly code for proteins. ncDNA comprises sequences transcribed into RNA molecules, such as ribosomal RNAs (rRNAs), microRNAs (miRNAs), and long non-coding RNAs (lncRNAs), as well as other untranscribed sequences with regulatory activity (Pagni et al., 2021). Recent studies, however, indicate that these regions are crucial for regulating gene expression and for the development of disease (Pagni et al., 2021). The computational ability of AI enables improved genome interpretation, revealing insights and patterns that might be missed by conventional techniques. AI software such as NCNet has already been shown to predict the role of non-coding DNA through deep residual learning and sequence-to-sequence learning networks (Zhang et al., 2019). These networks support more robust pattern recognition, improving our understanding of genetic diseases. Additionally, other AI-driven decision support tools, such as Fabric GEM, have achieved unprecedented accuracy in identifying rare genetic disorders, detecting over 90% of disease-causing variants in critically ill newborns (De La Vega et al., 2021). Hence, AI tools are extremely useful in interpreting DNA results because of their diverse use cases and powerful models.
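
Sequence models of the kind NCNet exemplifies typically start by one-hot-encoding DNA and applying convolutional filters that act as learned motif detectors. The fragment below (plain NumPy, with a made-up filter rather than trained weights) sketches only that encoding-and-scanning step, not the full network:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (length x 4) one-hot matrix."""
    idx = np.array([BASES.index(b) for b in seq])
    mat = np.zeros((len(seq), 4))
    mat[np.arange(len(seq)), idx] = 1.0
    return mat

# A toy convolutional filter of width 6: in a trained network these weights are
# learned; here they are made up to "detect" a TATA-like motif.
motif_filter = one_hot("TATAAA") * 2.0 - 0.5

def scan(seq: str, filt: np.ndarray) -> np.ndarray:
    """Slide the filter along the sequence and return an activation per position."""
    x = one_hot(seq)
    w = filt.shape[0]
    return np.array([np.sum(x[i:i + w] * filt) for i in range(len(seq) - w + 1)])

activations = scan("GCGCTATAAAGGCCAGT", motif_filter)
print("Strongest hit at position", int(np.argmax(activations)))
```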

Although AI is a valuable tool, problems involving inclusion and privacy must be addressed. Genetic studies have historically been conducted largely on Europeans, so there is very little data from non-European populations. As many as 86.3% of genome-wide association studies (GWAS) in 2021 were carried out on Europeans, while African, South Asian, and Latino populations were severely underrepresented (Fatumo et al., 2022). Non-Europeans can therefore be misdiagnosed because of a lack of population-specific data, deepening existing health inequities. However, continuous work is being done with AI to bridge this gap and build better models. For example, researchers at Johns Hopkins have developed a new AI-powered genetic risk-scoring method, CT-SLEB, which significantly improves risk-prediction accuracy for non-European populations by leveraging diverse genetic datasets (Johns Hopkins Bloomberg School of Public Health, 2023). By obtaining more diverse datasets, AI can continue improving its analysis and, hopefully, narrow disparity gaps in genomic studies. Genetic screening also raises privacy concerns: genetic information is deeply personal, and a breach or exploitation of it would be a major risk. Strict laws and better technology, such as improved encryption methods, can help reduce these risks.

One analysis I would like AI to perform on my own DNA is a thorough overview of my sun allergies. Though my mother has them too, I showed no symptoms in India and only developed reactions after moving to Canada, indicating both a genetic predisposition and environmental factors. Environmental factors have a significant influence on gene expression, and AI can help analyze vast amounts of data to identify patterns that may not be evident through standard testing. By assessing a patient’s history, environmental conditions, and other genetic markers, AI can provide deeper insights than regular screening, offering a better understanding of the condition (Khalifa & Albadawy, 2024). Additional studies indicate the increasing use of AI to diagnose and treat allergic reactions by proposing patient-specific treatments, showcasing its diverse role in genetics (Khan et al., 2024).

Just as personal computers changed from being a luxury to an integral part of our lives, AI in genetic testing is on the same trajectory, with its potential waiting to be explored. AI’s ability to analyze and interpret complex genomic dynamics is beyond what is achievable with traditional means. While the challenges of bias and privacy must be overcome, the future of AI and genetics is not just bright but inevitable.

CITATIONS/REFERENCES

De La Vega, F. M., Chowdhury, S., Moore, B., Frise, E., McCarthy, J., Hernandez, E. J., Wong, T., James, K., Guidugli, L., Agrawal, P. B., Genetti, C. A., Brownstein, C. A., Beggs, A. H., Löscher, B.-S., Franke, A., Boone, B., Levy, S. E., Õunap, K., Pajusalu, S., & Huentelman, M. (2021). Artificial intelligence enables comprehensive genome interpretation and nomination of candidate diagnoses for rare genetic diseases. Genome Medicine, 13(1). https://doi.org/10.1186/s13073-021-00965-0
Fatumo, S., Chikowore, T., Choudhury, A., Ayub, M., Martin, A. R., & Kuchenbaecker, K. (2022). A roadmap to increase diversity in genomic studies. Nature Medicine, 28(2), 243–250. https://doi.org/10.1038/s41591-021-01672-4
Johns Hopkins Bloomberg School of Public Health. (2023, September 25). New Method Can Improve Assessing Genetic Risks For Non-White Populations. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/new-method-can-improve-assessing-genetic-risks-for-non-white-populations
Khalifa, M., & Albadawy, M. (2024). Artificial intelligence for clinical prediction: Exploring key domains and essential functions. Computer Methods and Programs in Biomedicine Update, 5, 100148. https://doi.org/10.1016/j.cmpbup.2024.100148
Khan, M., Banerjee, S., Muskawad, S., Maity, R., Chowdhury, S. R., Ejaz, R., Kuuzie, E., & Satnarine, T. (2024). The Impact of Artificial Intelligence on Allergy Diagnosis and Treatment. Current Allergy and Asthma Reports, 24(7), 361–372. https://doi.org/10.1007/s11882-024-01152-y
Pagni, S., Mills, J. D., Frankish, A., Mudge, J. M., & Sisodiya, S. M. (2021). Non‐coding regulatory elements: Potential roles in disease and the case of epilepsy. Neuropathology and Applied Neurobiology, 48(3). https://doi.org/10.1111/nan.12775
US Census Bureau. (2024, June 18). Computer and Internet Use in the United States: 2021. Census.gov. https://www.census.gov/newsroom/press-releases/2024/computer-internet-use-2021.html
Zhang, H., Hung, C.-L., Liu, M., Hu, X., & Lin, Y.-Y. (2019). NCNet: Deep Learning Network Models for Predicting Function of Non-coding DNA. Frontiers in Genetics, 10. https://doi.org/10.3389/fgene.2019.00432

 

Jai Elangovan
Merchant Taylors’ School
Northwood, United Kingdom
Teacher: Ms. Sophie Pratt

Genetic testing is a cornerstone of precision medicine, helping diagnose conditions, identify risk, and facilitate targeted treatments; yet interpreting complex genomic data remains a challenge. AI has advanced genomics by refining variant-calling algorithms (e.g., DeepVariant, which detects mutations such as single-nucleotide polymorphisms [SNPs]), improving polygenic risk score calculations, and developing pharmacogenomic applications. AI has revolutionized result interpretation with greater speed, accuracy, and potential for personalized medicine. However, its use risks introducing bias, oversights, and privacy-related or ethical concerns, which must be addressed to ensure responsible implementation. Therefore, AI should support, not replace, expert human geneticists.

AI can analyze massive genetic datasets more quickly and accurately than humans. Variant classification is the process of categorizing mutations by their clinical significance, from benign to pathogenic, including variants of uncertain significance (VUS), using pre-established databases as references. Manual interpretation is time-consuming, error-prone, and has minimal scalability due to limited human multitasking ability; AI models like DeepVariant or Emedgene outperform traditional methods in speed and accuracy. For example, MARGINAL 1.0.0 (a machine-learning-based AI tool) showed 92% accuracy in classifying BRCA1/2 gene variants (predictors of breast cancer risk) according to established guidelines (e.g., the ACMG–AMP guidelines), reducing errors in time-consuming and labor-intensive traditional curation methods. AI can also help calculate polygenic risk scores (PRS) by analyzing thousands of variants, mainly SNPs, linked to diseases such as coronary artery disease. A recent study found that AI-PRS models ‘outperformed traditional PRS calculators’ by increasing the precision of CVD risk prediction, suggesting AI has the potential to positively impact risk calculation. AI also has a role in pharmacogenomics and drug response prediction (DRP). AI can predict how variations in drug-metabolizing enzymes affect drug efficacy; deep learning models have shown promise in retrospective reviews predicting responses to cancer drugs, which can optimize chemotherapy by matching the most effective drugs to a patient’s genetic profile, and in analyzing variations in the CYP2C19 gene, which influences antidepressant metabolism. AI DRP models allow for personalized prescriptions that can reduce adverse side effects. AI’s increasing contribution to genetic interpretation makes it a valuable tool for improving real-world healthcare applications, with clear potential benefit over traditional methods of result interpretation.
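
At its simplest, the pharmacogenomic step described above reduces to mapping a patient's star-allele diplotype to a predicted metabolizer phenotype that then informs prescribing. The sketch below uses a heavily simplified, illustrative mapping for CYP2C19 (not actual clinical guidance); real AI models layer much richer inputs on top of rules like these:

```python
# Simplified illustration of CYP2C19 diplotype-to-phenotype mapping.
# Allele function assignments here are illustrative, not clinical guidance.
ALLELE_FUNCTION = {
    "*1": "normal",
    "*2": "no_function",
    "*3": "no_function",
    "*17": "increased",
}

def metabolizer_phenotype(allele_a: str, allele_b: str) -> str:
    funcs = sorted([ALLELE_FUNCTION[allele_a], ALLELE_FUNCTION[allele_b]])
    if funcs == ["no_function", "no_function"]:
        return "poor metabolizer"
    if "no_function" in funcs:
        return "intermediate metabolizer"
    if "increased" in funcs:
        return "rapid/ultrarapid metabolizer"
    return "normal metabolizer"

for diplotype in [("*1", "*1"), ("*1", "*2"), ("*2", "*3"), ("*1", "*17")]:
    print(f"CYP2C19 {diplotype[0]}/{diplotype[1]}: {metabolizer_phenotype(*diplotype)}")
```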

AI does, however, come with risks and ethical concerns. Bias in AI training data can lead to misinterpretation of variants in underrepresented populations; models are largely trained on Eurocentric data, with a 2016 analysis showing that over 80% of participants in genome-wide association studies (GWAS) were of European descent. Misdiagnoses by AI are therefore common in non-European populations, which must change if genomic medicine is to benefit everyone. The 2018 MyHeritage breach highlights the importance of securing information; genetic data is uniquely identifiable, which poses privacy and security risks. AI-driven genetic analysis can also enable genetic surveillance or discrimination (e.g., by employers); thus, ensuring security and privacy is of the utmost importance. There is also much scope for clinical oversights and over-reliance on AI; therefore, clinical use of AI must be approached with caution. Whilst AI is not infallible, it can be incredibly helpful and should be used as a supportive tool, prioritizing hybrid models that combine AI insights with human expertise. AI-generated results should be validated by a clinician before medical decisions are made.

Understanding how AI arrives at a result is crucial in determining that result’s accuracy and trustworthiness. This can be achieved by using XAI models, which increase transparency and reduce bias in genetic interpretation, or SHAP algorithms, which help make AI decision-making interpretable for humans. AI in genetics must also be held to regulatory standards similar to those of the pharmaceutical industry, as its potential risks carry similar gravity. Applying regulatory frameworks like the EU AI Act to guide use in genomics is one way of ensuring transparency and reducing the risk of clinical or ethical misconduct. In addition, more diverse genetic data must be introduced to minimize bias.
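
As a concrete illustration of SHAP-based interpretability: once a model is trained, the shap library can attribute each individual prediction to its input features, which is what turns a "black box" score into something a clinician can audit. A minimal sketch on a synthetic risk model (feature names and data are hypothetical):

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Hypothetical per-patient features used by a risk model.
feature_names = ["PRS", "age", "LDL", "smoking"]
X = rng.random((300, 4))
# Synthetic "risk" for the model to learn (weights are made up).
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 0.05, 300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # (5 patients x 4 features)

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:8s} contributes {contribution:+.3f} to patient 0's prediction")
```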

AI has potential to revolutionize genetic research, analysis, and personalized medicine making it an invaluable tool. However, AI is not infallible and has limitations which necessitate its use alongside human supervision and expertise. Whilst I would certainly want AI to be used to interpret my genetic testing results, I would also want a human to oversee as their empathetic communication and patient-centered decision-making cannot be replaced by AI. AI is a powerful tool that should be utilized by clinicians to improve interpretation but not as a substitute for human experts.

CITATIONS/REFERENCES

Google Research Blog. (2021, September 22). DeepVariant: Highly accurate genomes with deep neural networks. Google. https://research.google/blog/deepvariant-highly-accurate-genomes-with-deep-neural-networks/
Soni, A., & Gupta, A. (2022). Review article: The role of Deep Learning in genomics. Cureus, 14(8), e28394. https://assets.cureus.com/uploads/review_article/pdf/177320/20240724-319105-tvuov1.pdf
Illumina. (n.d.). Emedgene data sheet (M-GL-01057). Illumina. https://www.illumina.com/content/dam/illumina/gcs/assembled-assets/marketing-literature/emedgene-data-sheet-m-gl-01057/emedgene-data-sheet-m-gl-01057.pdf
Salih, A. A., & Masood, M. (2022). Next-generation sequencing in clinical diagnostics: Applications and challenges. PMC, 13, 9687470. https://pmc.ncbi.nlm.nih.gov/articles/PMC9687470/
Boucher, C., & Chu, T. (2015). The impact of next-generation sequencing in clinical microbiology. PMC, 17, 4544753. https://pmc.ncbi.nlm.nih.gov/articles/PMC4544753/
Kim, S. J., & Lee, M. S. (2023). Artificial intelligence in medicine: An overview of AI applications and challenges in clinical practice. Journal of Korean Medical Science, 38, e395. https://jkms.org/DOIx.php?id=10.3346/jkms.2023.38.e395
Wang, Z., & Zhang, X. (2023). Artificial intelligence in healthcare: A deep learning approach in medical diagnostics. PMC, 14, 9975164. https://pmc.ncbi.nlm.nih.gov/articles/PMC9975164/#:~:text=Artificial%20intelligence%20(AI)%20is%20the,layer%20neural%20networks%20(NNs)
Durham University. (2024, January 15). Researchers develop AI model to personalize chemotherapy care. Durham University. https://www.durham.ac.uk/news-events/latest-news/2024/01/researchers-develop-ai-model-to-personalise-chemotherapy-care/
Zhang, L., & Chen, H. (2021). Deep learning in cancer diagnosis: A comprehensive review of AI in medical imaging. Patterns, 2(7), 100202. https://www.cell.com/patterns/fulltext/S2666-3899(21)00202-6
Nature. (2016, July 13). The genetic revolution: The human genome and its implications. Nature, 538(7610), 161. https://www.nature.com/articles/538161a
Bitdefender. (2022, February 18). MyHeritage breach leaks 92 million emails and hashed passwords. Bitdefender. https://www.bitdefender.com/en-gb/blog/hotforsecurity/myheritage-breach-leaks-92-million-emails-hashed-passwords
Baylor College of Medicine. (2021, August 2). Genetic surveillance: Is our most sensitive information secure? Baylor College of Medicine Blogs. https://blogs.bcm.edu/2021/08/02/genetic-surveillance-is-our-most-sensitive-information-secure/
European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence


Ethan Greenblatt
Hunter College High School
New York, New York
Teacher: Mr. Bradley Scalise

For both patients and healthcare professionals, interpreting genetic testing results can be confusing. Results need to be understood in the context of the individual patient. For example, the implications of a germline BRCA1 loss-of-function variant will vary substantially based on age or sex (Nelson et al.; US Preventive Services Task Force et al.). Results also need to be understood in light of evolving literature. Lastly, the terminology used to classify genetic variants, the underlying data, and decision making that led to this classification can be confusing, and these technical considerations need to be translated into the practical implications for the patient. While the number of genetic counselors worldwide has been increasing, this increase has not nearly kept pace with the overall growth in genetic testing (Ormond et al.). AI chatbots promise a scalable solution that could provide patient-specific guidance. However, this potential AI application needs to be grounded by use in the appropriate setting, and with consideration of the possible harms and the patient’s perspective.

AI tools can only safely be applied to genetic testing when utilized in the correct context, after a medical provider assesses the patient. After testing is performed, an approved AI tool could be applied by the testing organization to generate a patient-specific summary. The AI-generated text could then be reviewed by a pathologist, who understands the limitations of the test and the clinical context, before the ordering provider relays the results to the patient and coordinates any additional follow-up. In this model, AI services are not a replacement for genetic counselors, but rather a tool to improve the patient experience. This model is preferred over a publicly accessible web app, as there is considerable risk that a public web app could be used with direct-to-patient testing, creating a situation where patients order and interpret genetic testing without ever seeing a provider.

The use of AI also needs to be tempered by the risks that this technology will provide misleading or incomplete information. One of the largest risks is that many AI agents can fail to take clinical context into account. For instance, the implications of BRCA2 loss-of-function variants are markedly different in males versus females, with males having a markedly increased risk of prostate cancer that typically warrants more aggressive screening and also an increased risk of melanoma that may not similarly occur in women (Cheng et al.). An AI agent suitable for widespread use should tailor interpretive comments based on clinical context and prompt for any critical missing contextual information to be provided.
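
A small sketch of the behavior described here, tailoring a BRCA2 interpretive comment to clinical context and explicitly asking for missing context instead of guessing (the wording, fields, and thresholds are invented for illustration):

```python
from typing import Optional

def brca2_comment(sex: Optional[str], age: Optional[int]) -> str:
    """Return a context-tailored interpretive comment for a BRCA2
    loss-of-function variant, or a prompt for missing context."""
    if sex is None or age is None:
        return ("Clinical context incomplete: please provide patient sex and age "
                "before an interpretive summary is generated.")
    if sex == "male":
        return ("BRCA2 loss-of-function variant detected. In males this is "
                "associated with elevated prostate cancer and melanoma risk; "
                "discuss intensified screening with the ordering provider.")
    return ("BRCA2 loss-of-function variant detected. In females this is "
            "associated with elevated breast and ovarian cancer risk; "
            "genetic counseling and risk-reducing options should be discussed.")

print(brca2_comment(None, 52))
print(brca2_comment("male", 52))
```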

A second risk concerns the training dataset, which needs to be continually up to date while its input data are also vetted. Specifically, past studies on AI chatbots in clinical settings have found that they may not reliably distinguish between peer-reviewed articles and lay sources containing incorrect or incomplete information (Meyrowitsch et al.). A potential solution is to train the AI chatbot solely on professionally vetted sources. Lastly, to correctly prioritize information in the training dataset, the AI model needs the American College of Medical Genetics and Genomics and Association for Molecular Pathology rules on variant interpretation “hard coded” (Richards et al.).

All of these efforts ultimately need to keep sight of what patients want from genetic testing: clear answers about how their genetics influence their health and the health of their children, and what next steps are needed to act on the results. While a standard genetic testing report may focus on the identity of the variant detected and the evidence for its pathogenic/benign/variant-of-uncertain-significance classification, the accompanying AI report should be patient-oriented and use simple language to summarize the implications for the patient’s health and the important next steps.

While the requirements laid out here may seem daunting, these challenges can be overcome. Examples include AI chatbots focused on providing pretest education for BRCA1/2 testing, where their focused scope mitigates the challenges above (Morgan et al.). Favorable results from randomized controlled trials of chatbots providing genetic counseling for hereditary cancer risk testing provide hope that AI chatbots can support patients through the genetic testing experience, though these trials included strong provider oversight (Al-Hilli et al.; Kaphingst et al.). Thus, if used in the right setting, with strong oversight, and keeping in mind patient needs and potential harms, AI chatbots can guide patients through the genetic testing experience.

CITATIONS/REFERENCES

Al-Hilli, Zahraa, et al. “A Randomized Trial Comparing the Effectiveness of Pre-Test Genetic Counseling Using an Artificial Intelligence Automated Chatbot and Traditional In-Person Genetic Counseling in Women Newly Diagnosed with Breast Cancer.” Annals of Surgical Oncology, vol. 30, no. 10, Oct. 2023, pp. 5990–96, https://doi.org/10.1245/s10434-023-13888-4.

Cheng, Heather H., et al. “BRCA1, BRCA2, and Associated Cancer Risks and Management for Male Patients: A Review.” JAMA Oncology, vol. 10, no. 9, Sept. 2024, pp. 1272–81, https://doi.org/10.1001/jamaoncol.2024.2185.

Kaphingst, Kimberly A., et al. “Uptake of Cancer Genetic Services for Chatbot vs Standard-of-Care Delivery Models: The BRIDGE Randomized Clinical Trial.” JAMA Network Open, vol. 7, no. 9, Sept. 2024, p. e2432143, https://doi.org/10.1001/jamanetworkopen.2024.32143.

Meyrowitsch, Dan W., et al. “AI Chatbots and (Mis)Information in Public Health: Impact on Vulnerable Communities.” Frontiers in Public Health, U.S. National Library of Medicine, 31 Oct. 2023, https://doi.org/10.3389/fpubh.2023.1226776.

Morgan, Kelly M., et al. “Targeted BRCA1/2 Population Screening among Ashkenazi Jewish Individuals Using a Web-Enabled Medical Model: An Observational Cohort Study.” Genetics in Medicine: Official Journal of the American College of Medical Genetics, vol. 24, no. 3, Mar. 2022, pp. 564–75, https://doi.org/10.1016/j.gim.2021.10.016.

Nelson, Heidi D., et al. “Risk Assessment, Genetic Counseling, and Genetic Testing for BRCA-Related Cancer in Women: Updated Evidence Report and Systematic Review for the US Preventive Services Task Force.” JAMA, vol. 322, no. 7, Aug. 2019, pp. 666–85, https://doi.org/10.1001/jama.2019.8430.

Ormond, Kelly E., et al. “The Global Status of Genetic Counselors in 2023: What Has Changed in the Past 5 Years?” Genetics in Medicine Open, vol. 2, no. Suppl 2, 2024, p. 101887, https://doi.org/10.1016/j.gimo.2024.101887.

Richards, Sue, et al. “Standards and Guidelines for the Interpretation of Sequence Variants: A Joint Consensus Recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology.” Genetics in Medicine: Official Journal of the American College of Medical Genetics, vol. 17, no. 5, May 2015, pp. 405–24, https://doi.org/10.1038/gim.2015.30.

US Preventive Services Task Force, et al. “Risk Assessment, Genetic Counseling, and Genetic Testing for BRCA-Related Cancer: US Preventive Services Task Force Recommendation Statement.” JAMA, vol. 322, no. 7, Aug. 2019, pp. 652–65, https://doi.org/10.1001/jama.2019.10987.


Anna Hsu
Bronx High School of Science
Bronx, New York
Teacher: Dr. Joann Gensert

Artificial Intelligence: Shaping the Future of Genetic Technology?

Artificial intelligence (AI) has influenced and continues to influence our world today. From brainstorming ideas to analyzing large amounts of data, AI is incorporated in several fields. Although AI offers many advantages in efficiency and productivity, I would not let AI make sense of medical results, because it opens the door to security risks and is subject to prejudice and discrimination.

Artificial intelligence mimics human intelligence through the use of experience and algorithms. AI learns from data by analyzing patterns over time (1). Currently, artificial intelligence is predicted to make advances in the medical field, with many AI systems used in healthcare to process repetitive and tedious tasks (2).

Genetic testing is done by taking a sample of a person’s saliva or blood. It is used to diagnose disease and can serve as a predictive measure for heritable conditions such as cardiovascular disease, cancer, diabetes, and other genetic conditions. However, this medical test is not perfect; even if a mutation is detected, the course of the disease may not be certain, making early treatment difficult and leaving patients with anxiety. It can take 2-3 weeks to process results and cost up to five thousand dollars (6). AI could potentially limit costs and more easily identify candidate genes in rare diseases.

For every 1,000 children born, 79 are diagnosed with a genetic disorder. Quickly identifying the genetic disorder can be life-changing. A challenge in genetic testing is identifying the disease-causing genomic variants. Clinical genome interpretation can take about 50-100 hours per patient. While there are tools to speed up the process, many fail to identify structural variants. The use of AI has been proposed to improve speed and accuracy in genetic testing for infants, where early diagnosis is critical. AI programs such as Fabric GEM, created by Fabric Genomics, were able to produce a short list of candidate genes for final review (3). This kind of automation presents a breakthrough in genetic testing, decreasing costs and the need for strenuous manual review.

However, many questions remain surrounding privacy and bias. Embedded in AI is systemic discrimination that reinforces prejudice and stereotypes. When discussing the use of artificial intelligence, it is important to discuss the protection of human rights. A common example of AI’s limitations and human rights violations is facial recognition. Joy Buolamwini, a researcher at MIT working to reveal gender bias in digital technology, conducted a study revealing AI’s inherent bias against women and people with darker skin. She input a total of 1,200 images depicting women and African-American people into three commercially used facial recognition programs. Only 34% of the images were accurately identified, compared to 99% accuracy for Caucasian men (4). In genetic testing, AI could likewise be biased against minorities of a certain gender or race. AI also operates by gathering information; because of the limited knowledge and research on disease risks for people of certain genders and races, AI programs may not be able to accurately interpret their results.

Available knowledge and information serve as the basis for training an AI model. With unrestricted access to data, AI poses risks to privacy and security. An important consideration in research and medicine is confidentiality: protecting personal information and restricting access and disclosure. AI often lacks transparency and user control – many users are unaware of how their data is being used. Additionally, AI relies on vast amounts of data for information and decision-making, making it vulnerable to cyberattacks and data breaches. A data breach in healthcare could expose a patient’s personal medical records. Privacy is crucial in genetic testing, because genetic information holds personal health risks and information about a person’s ancestry. While some data can be generalized to remove identifying characteristics, identifying factors in genetic data cannot be removed (5). If not protected, a person’s genetic information could lead to discrimination and misuse.

Although artificial intelligence testing can be beneficial in saving time and money, many considerations need to be made about the protection of human rights. There are also several flaws in artificial intelligence software, such as the limited data on people of different races and genders, which perpetuate prejudice and discrimination. However, with careful adjustments, AI can shape our world and its systems in the future.

CITATIONS/REFERENCES

SAS. (n.d.). Artificial Intelligence (AI): What it is and why it matters. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html

Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future healthcare journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095

De La Vega, F.M., Chowdhury, S., Moore, B. et al. Artificial intelligence enables comprehensive genome interpretation and nomination of candidate diagnoses for rare genetic diseases. Genome Med 13, 153 (2021). https://doi.org/10.1186/s13073-021-00965-0

Hardesty, L. (2018). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News, Massachusetts Institute of Technology. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Jillson, E. (2024, January). The DNA of privacy and the privacy of DNA. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2024/01/dna-privacy-privacy-dna

U.S. National Library of Medicine. (2023, January 30). In brief: What does genetic testing involve? InformedHealth.org [Internet]. https://www.ncbi.nlm.nih.gov/books/NBK367582/


Miray Karsidag
Canyon Crest Academy
San Diego, California
Teacher: Ms. Deb Balch

Artificial intelligence (AI) has revolutionized healthcare by offering faster and more efficient ways to analyze large datasets, including those generated by genetic testing. Genetic testing examines an individual’s DNA to identify potential health risks and guide personalized treatment plans. AI’s ability to interpret complex genetic data presents a promising advancement, but it also raises ethical and privacy concerns. If I were to undergo genetic testing, I would want AI to assist in interpreting my results while ensuring that potential risks are carefully managed.

AI would be particularly useful in genetic test analysis when dealing with complex, multi-gene disorders or conditions influenced by numerous genetic and environmental factors. Unlike traditional genetic tests, which often focus on single-gene mutations, AI can analyze large-scale genomic data and identify patterns that may not be immediately apparent. For example, AI can enhance disease risk prediction by integrating polygenic risk scores, which assess multiple genetic variants contributing to conditions like heart disease or diabetes (Collins and Varmus 1446). Additionally, AI can improve pharmacogenomics by predicting how an individual’s genetic makeup may affect their response to medications, allowing for more precise and personalized treatment plans (Roden et al. 1238). Another advantage of AI is its ability to continuously update and refine genetic interpretations based on new scientific discoveries. Standard genetic test results are often static and may become outdated as research evolves. AI-driven analysis, however, can incorporate new data in real time, improving the accuracy of disease risk assessments and variant classifications. This would be particularly valuable for conditions where genetic links are still being researched, as AI can help uncover previously unknown correlations.

Despite its advantages, AI in genetic testing also poses several risks. One major concern is data privacy and security. Genetic information is highly sensitive, and breaches could lead to discrimination or misuse, such as insurance companies using genetic data to deny coverage (Nyholt et al. 345). Strict regulations and ethical guidelines are necessary to protect individuals from such risks. Another challenge is false positives and false correlations. AI-driven genetic analysis relies heavily on large datasets and statistical models, which can sometimes misinterpret patterns and lead to misleading results. False positives can occur when AI detects a genetic variant as a risk factor for a disease when, in reality, it has little to no clinical significance (Visscher et al. 254). Similarly, false correlations can emerge when AI finds connections between genetic markers and health conditions that are not truly causative, leading to unnecessary concern and potentially misguided medical decisions (Choudhury and Rodriguez-Flores 198). Moreover, the lack of standardized validation methods for AI-generated genetic insights further complicates the issue, raising questions about reliability and reproducibility in clinical settings (Manrai et al. 421). A further concern is the lack of transparency in AI decision-making. Many AI algorithms function as “black boxes,” meaning they provide results without clearly explaining how they arrived at their conclusions (Lipton 33). This can make it difficult for doctors and patients to fully trust AI-generated interpretations. It is crucial that AI models in genetic testing are designed to be interpretable and that human oversight remains an integral part of the decision-making process.

If AI were used in analyzing my genetic information, I would expect it to provide more comprehensive and actionable insights than standard genetic tests. Traditional genetic tests typically report on single-gene mutations linked to known diseases or categorize genetic variants as pathogenic, benign, or of uncertain significance (Collins and Varmus 1446). AI, on the other hand, could offer more benefits due to its ability to analyze large amounts of data, recognize patterns, and create models based on the information it is given (Topol 15). AI could analyze multiple genetic markers simultaneously to provide a more accurate assessment of my likelihood of developing complex diseases (Roden et al. 1238); predict my body’s response to different medications using previous data and pattern recognition, helping to tailor drug prescriptions and dosages (Green et al. 728); update my genetic risk factors based on the latest research, ensuring my results remain relevant over time (Nyholt et al. 345); and analyze and classify rare genetic variants that might otherwise be dismissed due to limited prior research (Esteva et al. 2520). However, while AI offers significant benefits in genetic analysis, I would still want my results to be reviewed by medical professionals to ensure the accuracy of the interpretation. Given its novelty and remaining room for improvement, AI should serve as a tool to enhance, rather than replace, expert analysis in genetic healthcare.

CITATIONS/REFERENCES

Collins, Francis S., and Harold Varmus. “A New Initiative on Precision Medicine.” The New England Journal of Medicine, vol. 372, no. 9, 2015, pp. 793-795.

Choudhury, Y., and J. L. Rodriguez-Flores. “Genomic Medicine and the Challenge of Polygenic Risk Scores.” Nature Reviews Genetics, vol. 22, no. 3, 2021, pp. 198-210.

Esteva, Andre, et al. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature, vol. 542, 2017, pp. 115-118.

Green, Robert C., et al. “Disclosure of APOE Genotype for Risk of Alzheimer’s Disease.” New England Journal of Medicine, vol. 361, no. 3, 2009, pp. 727-738.

Juengst, Eric T. “Face Facts: Why Human Genetics Will Always Be Socially Controversial.” The American Journal of Bioethics, vol. 4, no. 2, 2004, pp. 189-190.

Lipton, Zachary C. “The Mythos of Model Interpretability.” Queue, vol. 16, no. 3, 2018, pp. 31-57.

Manrai, Arjun K., et al. “Genetic Misdiagnoses and the Potential for Health Disparities.” New England Journal of Medicine, vol. 375, no. 7, 2016, pp. 655-665.

Nyholt, Dale R., et al. “Genetic Privacy and Identity in the Genomic Era.” Cell, vol. 184, no. 14, 2021, pp. 3436-3442.

Roden, Dan M., et al. “Pharmacogenomics: Challenges and Opportunities.” Annals of Internal Medicine, vol. 145, no. 2, 2006, pp. 1235-1242.

Topol, Eric. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books, 2019.

Visscher, Peter M., et al. “10 Years of GWAS Discovery: Biology, Function, and Translation.” American Journal of Human Genetics, vol. 101, no. 1, 2017, pp. 5-22.


Chaewon Kim
North Hollywood High School Highly Gifted Magnet
Los Angeles, California

As technological developments have drastically improved AI algorithms in recent years, artificial intelligence has started to be actively incorporated into professional domains. As a result, hospitals have begun to utilize artificial intelligence for research, diagnosis, and various clinical applications. While artificial intelligence is an efficient tool for evaluating genetic tests, its risks of obscurity and analytical error cannot be ignored in a high-stakes environment such as healthcare.

Artificial intelligence has risen as a promising tool in genetics through its ability to analyze complex datasets with speed. One established class of tools is clinical decision support systems (CDSS), which aid medical decision-making by drawing on medical and patient databases [4]. CDSS are especially useful in diagnosing genetic diseases, as they can shorten the otherwise labor-intensive review process [2]. As a result, AI’s quick processing greatly improves the efficiency of genetic analysis. For example, systems such as Fabric GEM, an electronic CDSS for genetic testing, reduce the cost and increase the accessibility of genetic testing [2]. Additionally, its rapid analysis enables early interventions for harmful mutations and helps mitigate their consequences. Therefore, utilizing AI can cut wait times for genetic results and make testing accessible to a wider population. This can be especially handy in emergencies where a quick diagnosis must be made, such as for infants or patients in intensive care units. I believe AI’s rapid processing will make our lives more efficient as well.

In addition to its efficiency, AI’s accuracy also sets it apart from standard genetic methods. One method for understanding how individual genetic variants contribute to observable traits or conditions is phenotype-to-genotype mapping. Artificial intelligence enhances the accuracy of this mapping through its ability to identify patterns and extract information from electronic health records [3]. Machine learning (ML) systems in particular, types of AI that “learn” from their data over time, can interpret complex genetic patterns far more accurately than standard methods [1]. Not only does this make AI’s results less prone to human error; it also allows detailed analysis specifically tailored to each patient’s genetic profile [6]. Since AI considers both a patient’s lifestyle and health records, I expect it to provide more than simple genetic results: it can offer insights into potential health risks and deliver a personalized healthcare plan. This plan would provide information based on my genetics, recommending the most beneficial foods, activities, and medications for my health needs.
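
As a rough sketch of the kind of model described above, the example below (assuming scikit-learn and NumPy) trains a simple logistic-regression classifier on a mix of genotype dosages and health-record variables to estimate a phenotype risk. All feature names and data are synthetic and purely illustrative.

```python
# Minimal sketch: a logistic-regression model trained on a mix of genotype
# dosages and health-record variables to estimate the probability of a
# phenotype. All features and data below are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Columns: [snp1_dosage, snp2_dosage, age, bmi]
X = np.column_stack([
    rng.integers(0, 3, n),        # SNP dosages (0/1/2)
    rng.integers(0, 3, n),
    rng.normal(55, 10, n),        # age from the health record
    rng.normal(27, 4, n),         # BMI from the health record
])
# Synthetic labels loosely tied to the features.
logits = 0.5 * X[:, 0] + 0.03 * (X[:, 2] - 55) + 0.1 * (X[:, 3] - 27) - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("risk for one patient:", model.predict_proba(X_test[:1])[0, 1])
```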

While AI offers significant advantages in genetic testing, uncertainties still exist regarding its implementation and reliability. One major risk is its obscurity, also described as the “black box problem” [5]. AI can be extremely difficult to comprehend, as it generates outputs without explaining the reasoning behind its conclusions. This lack of transparency is especially alarming in medical settings, where a clear line of reasoning is critical for supporting diagnostic and treatment decisions and ensuring their reliability and trustworthiness [3]. As a result, patients may be unwilling to accept an AI-interpreted result. I personally would request human verification to ensure the dependability of my results. However, the lack of transparency would also pose a challenge for medical professionals, as it would make it difficult for them to evaluate AI’s analysis.

In addition to its lack of transparency, AI can reach misleading conclusions due to bias. Bias can emerge at multiple stages, from data input to algorithmic processing. It often results from underfitting, where the analysis is too simplistic, or overfitting, where it becomes overly complex. Bias can also be caused by a lack of diversity in the training data, which reinforces systemic inequalities in AI models [7]. This limitation of AI’s datasets can lead to errors in genetic interpretation, particularly for patients from underrepresented backgrounds [4]. It implies that, as a member of a minority group, I may be more vulnerable to inaccurate genetic results. Thus, significant concerns arise, particularly regarding the reliability and equity of such testing.

Due to these current limitations of AI, such as its lack of transparency and bias in its training data, I am not comfortable with its interpretation of my genome. I believe the margins of error are too large for AI to be established as a reliable medical tool. However, it is important to note that AI is continually evolving, with researchers and developers actively addressing its flaws and biases. Given AI’s rapid development and continuous revision [5], I believe it is only a matter of time until AI is fully capable of providing trustworthy insights in genetic testing, and in the medical field as a whole.

CITATIONS/REFERENCES

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldariem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S., Harbi, S. A., & Albekariy, A. M. (2023). Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Medical Education, 23, 689. https://doi.org/10.1186/s12909-023-04698-z

De La Vega, F. M., Chowdhury, S., Moore, B., Frise, E., McCarthy, J., Hernandez, E., Wong, T., James, K., Guidugli, L., Agrawal, L., Genetti, C., Brownstein, C., Beggs, A., Loscher, B., Franke, A., Boone, B., Levy, S., Ounap, K., Pajusalu, S., … Kingsmore, S. F. (2021). Artificial intelligence enables comprehensive genome interpretation and nomination of candidate diagnoses for rare genetic diseases. Genome Medicine, 13, 153. https://doi.org/10.1186/s13073-021-00965-0

Dias, R., & Torkamani, A. (2019). Artificial intelligence in clinical and genomic diagnostics. Genome Medicine, 11, 70. https://doi.org/10.1186/s13073-019-0689-8

Elhaddad, M., & Hamam, S. (2024). AI-driven clinical decision support systems: An ongoing pursuit of potential. Cureus, 16(4). https://doi.org/10.7759/cureus.57728

MacIntyre, M. R., Cockerill, R. G., Mirza, O. F., & Appel, J. M. (2023). Ethical considerations for the use of artificial intelligence in medical decision-making capacity assessments. Psychiatry Research, 328. https://doi.org/10.1016/j.psychres.2023.115466

Olawade, D. B., David-Olawade, A. C., Wada, O. Z., Asaolu, A. J., Adereni, T., & Ling, J. (2024). Artificial intelligence in healthcare delivery: Prospects and pitfalls. Journal of Medicine, Surgery, and Public Health, 3. https://doi.org/10.1016/j.glmedi.2024.100108

Walton, N. A., Nagarajan, R., Wang, C., Sincan, M., Freimuth, R. R., Everman, D. B., Walton, D., McGrath, S., Lemas, D., Benos, P. V., Alekseyenko, A., Song, Q., Uzun, E., Taylor, C., Uzun, A., Person, T., Rappoport, N., Zhao, Z., & Williams, M. (2024). Enabling the clinical application of artificial intelligence in genomics: A perspective of the AMIA genomics and translational bioinformatics workgroup. Journal of the American Medical Informatics Association, 31(2), 536–541. https://doi.org/10.1093/jamia/ocad211


Duru Öztürk
FMV Ispartakule Isik High School
Avcilar, Turkey
Teacher: Mr. Oral SAYIN

Artificial intelligence (AI) is increasingly being used in various areas of healthcare, including genetic testing. It is capable of quickly analyzing extensive datasets, and it offers new methods to interpret genetic information, identify disease risks, and tailor treatments to individual patients. However, AI raises ethical concerns that must be carefully considered before being permanently adopted into the medical field.

Genetic testing analyzes DNA to identify specific genetic variants that indicate disease susceptibility or guide treatment decisions.[11] Techniques such as next-generation sequencing (NGS) produce extensive data sets.[8] AI’s ability to process vast amounts of complex data allows it to detect patterns within this data. AI can further analyze polygenic risk scores (PRS), which provide personal and clinical utility by assessing the genetic liability to a trait or a disease.[12] Although broader approaches such as large-scale genomic testing and multi-gene panels exist, studies show that single-gene testing remains more common.[7]

Despite its potential, utilizing AI in genetic testing introduces several risks. A major concern is the lack of transparency in AI’s decision-making processes. Many AI models, especially those based on machine learning, operate as “black boxes,” meaning the rationale behind their predictions is frequently inaccessible to healthcare professionals. It is paramount for healthcare providers to grasp the reasoning behind an AI-generated genetic interpretation in order to trust the results: diagnosis is often a form of deduction drawn from past patients’ genetic sequences, and a minor misinterpretation can seriously mislead treatment, in some cases even triggering other diseases. Without transparency, it is natural to hesitate to trust the results, especially if they conflict with other medical advice. To improve trust in AI-driven genetic testing, researchers are developing explainable AI (XAI) to make decision-making processes more transparent, allowing healthcare providers to better understand AI predictions.[1] Advances in privacy-focused methods like differential privacy and federated learning also aim to protect sensitive genetic data, allowing AI systems to learn from data while keeping it decentralized and anonymized. These methods could enhance transparency and security, making AI a safer, more credible tool for genetic testing.[12]
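
To make the privacy techniques mentioned above concrete, the sketch below shows the Laplace mechanism, one basic building block of differential privacy (federated learning is not covered here): an aggregate statistic is released with calibrated noise so that no single participant’s genome can be inferred from it. The cohort, count, and epsilon value are illustrative assumptions, not taken from any real system.

```python
# Minimal sketch of the Laplace mechanism, a building block of differential
# privacy: an aggregate statistic (here, a count of carriers of some variant)
# is released with noise scaled to sensitivity / epsilon, so no single
# participant's genome can be inferred from the published value.
# The cohort and epsilon value are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float) -> float:
    sensitivity = 1.0                      # one person changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

carriers_in_cohort = 137                   # hypothetical true count
print(dp_count(carriers_in_cohort, epsilon=1.0))   # noisy count safe to release
```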

AI systems commonly use previous data to form predictions, and if this data lacks diversity, the predictions may become inaccurate or biased.[3][5] A large portion of the genetic data used to train AI models is derived from people of European descent, potentially compromising the systems’ accuracy for other ethnicities.[4][6][9] A noteworthy example can be observed in a study on polygenic risk scores, which revealed markedly lower accuracy for individuals of African descent compared to those of European descent. This disparity occurred because the training datasets predominantly represented European populations, leaving other groups underrepresented. Such biases can lead to inaccurate predictions and unequal healthcare outcomes, underscoring that more diverse datasets would increase the accuracy of AI’s predictions.[9] If I were to undergo genetic testing interpreted by AI, I would need assurance that the AI model had been trained on diverse datasets to enhance the likelihood of accurate results.
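
A minimal way to detect the kind of disparity described above is to audit a model’s accuracy separately for each ancestry group. The sketch below (assuming scikit-learn and NumPy, with synthetic data) computes a per-group AUC; a large gap between groups would flag exactly the problem the cited studies report.

```python
# Minimal sketch: auditing a risk model by computing its AUC separately for
# each ancestry group in a test set. A large gap between groups indicates the
# kind of disparity described above. All data here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
groups = np.array(["EUR"] * 300 + ["AFR"] * 300)
y_true = rng.integers(0, 2, 600)

# Synthetic scores: informative for the first group, nearly random for the second.
scores = np.where(groups == "EUR",
                  y_true * 0.4 + rng.random(600) * 0.8,
                  rng.random(600))

for g in ["EUR", "AFR"]:
    mask = groups == g
    print(g, "AUC:", round(roc_auc_score(y_true[mask], scores[mask]), 2))
```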

Privacy is also a concern, as genetic data is inherently personal. AI relies on analyzing stored genetic information, increasing the risk of unauthorized access and misuse, especially in an era when personal data can be exploited for non-medical purposes such as targeted marketing or surveillance.[2] On a personal level, if AI were part of my genetic testing, I would require strict safeguards to ensure my data is securely protected, which I believe is not currently possible. Unlike traditionally analyzed genetic tests, which often ignore environmental factors, AI can analyze how genetic predispositions interact with external influences.[10] This allows AI to offer more personalized insights by combining genetic information with environmental factors such as exercise and diet.

In conclusion, AI holds great promise for improving genetic test interpretation through faster and more personalized analysis. Its ability to process extensive datasets and integrate genetic and environmental factors could revolutionize healthcare by providing more accurate risk assessments and customized treatment options. Nevertheless, the use of AI in genetic testing comes with significant risks, so if AI were involved in my genetic testing, I would want it applied responsibly, with strict protections for data security. Ultimately, I would expect it to be used in a way that enhances human decision-making without replacing it completely, balancing technological advancement with ethical oversight.

CITATIONS/REFERENCES

[1] Arrieta, Alejandro Barredo, et al. “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.” Information Fusion, vol. 58, 2020, pp. 82-115, https://doi.org/10.1016/j.inffus.2019.12.012.
[2] “Exploitation in the Data Mine.” Internet and Surveillance: The Challenges of Web 2.0 and Social Media, edited by Christian Fuchs, 1st ed., Routledge, 2012, p. 18.
[3] Ferrara, Emilio. “Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies.” Sci, vol. 6, no. 3, 2024, https://doi.org/10.3390/sci6010003.
[4] Gyawali, Prashnna K., et al. “Improving genetic risk prediction across diverse population by disentangling ancestry representations.” Communications biology, vol. 6, no. 1, 2023, p. 964, doi:10.1038/s42003-023-05352-6.
[5] Jiang, Fei, et al. “Artificial intelligence in healthcare: past, present and future.” Stroke and Vascular Neurology, vol. 2, 2017, pp. 230-243. doi: 10.1136/svn-2017-000101.
[6] Majara, Lerato, et al. “Low and differential polygenic score generalizability among African populations due largely to genetic diversity.” HGG Advances, vol. 4, no. 2, 2023, doi:10.1016/j.xhgg.2023.100184.
[7] Phillips, Kathryn A., et al. “Genetic Test Availability And Spending: Where Are We Now? Where Are We Going?” Health Affairs (Project Hope), vol. 37, no. 5, 2018, pp. 710-716, doi:10.1377/hlthaff.2017.1427.
[8] Satam, Heena, et al. “Next-Generation Sequencing Technology: Current Trends and Advancements.” Biology, vol. 12, no. 7, 2022, p. 997. MDPI, https://www.mdpi.com/journal/biology.
[9] Sirugo, Giorgio, et al. “The Missing Diversity in Human Genetic Studies.” Cell, vol. 177, no. 1, 2019, pp. 26-31, doi: 10.1016/j.cell.2019.02.048.
[10] Stallings, Michael C., and Tricia Neppl. “An Examination of Genetic and Environmental Factors Related to Negative Personality Traits, Educational Attainment and Economic Success.” Developmental psychology, vol. 57, no. 2, 2021, pp. 191-199, doi:10.1037/dev0001131.
[11] Tiner, Jessica C., et al. “Awareness and use of genetic testing: An analysis of the Health Information National Trends Survey 2020.” Genetics in Medicine, vol. 24, no. 12, December 2022, pp. 2526-2534, https://www.gimjournal.org/article/S1098-3600(22)00916-9/fulltext.
[12] Torkamani, Ali, et al. “The personal and clinical utility of polygenic risk scores.” Nature Reviews Genetics, vol. 19, 22 May 2018, pp. 581–590. Nature, https://www.nature.com/articles/s41576-018-0018-x#citeas.
[13] Vilhekar, Rohit S., and Alka Rawekar. “Artificial Intelligence in Genetics.” Edited by Alexander Muacevic and John R. Adler. Cureus, vol. 16, no. 1, 10 January 2024, p. e52035. Cureus, https://www.cureus.com/articles/177320-artificial-intelligence-in-genetics#!/.


Moosa Saeed
Lahore Grammar School for Boys
Lahore, Pakistan
Teacher: Mr. Shafeeq Mehmood

Imagine receiving a genetic test result predicting your future health risks. While traditional testing identifies harmful mutations, interpreting them remains complex. A rare mutation may be disease-causing, but confirming its impact is challenging. AI, however, can rapidly analyze vast genomic datasets, identifying patterns human experts might miss. It is transforming genetic medicine, offering faster, more precise risk assessments for diseases like cancer and cardiovascular disorders. Yet, as AI integrates into healthcare, concerns about reliability, fairness, and ethical implications persist.

AI’s greatest contribution is classifying genetic variants. The human genome contains over 3 billion base pairs, with millions of genetic variants (Collins et al., 2020). Distinguishing pathogenic from benign mutations is difficult, especially for rare disorders. AI models trained on extensive genomic datasets have significantly improved this process. In Pakistan, where retinitis pigmentosa (RP) is more prevalent due to consanguinity, AI has identified novel pathogenic variants, enhancing diagnostic accuracy and expanding access to gene therapy trials (Huang et al., 2021). By refining variant classification, AI enables earlier diagnoses and potential treatments for previously untreatable conditions.

Beyond rare diseases, AI enhances the prediction of complex conditions like cardiovascular disease, which results from thousands of genetic and environmental factors. Polygenic risk scores (PRS), aggregating multiple genetic variants’ effects, have improved disease prediction but remain biased toward European populations (Martin et al., 2019). AI-integrated models incorporating environmental factors like air pollution and diet have shown promise in improving risk assessment for South Asians, where cardiovascular disease is a leading cause of death (Khera et al., 2018). AI-enhanced PRS could transform preventive medicine in Pakistan by identifying at-risk individuals before symptoms appear, enabling earlier interventions (Jafar et al., 2020).

AI is also transforming cancer genetics. Traditional tests detect mutations in high-risk genes like BRCA1 and BRCA2, but many fall into “variants of uncertain significance” (VUS), leaving patients without clear guidance. AI models trained on vast datasets have improved VUS classification, aiding clinical decisions (Huang et al., 2021). Additionally, AI is advancing liquid biopsy techniques, which analyze circulating tumor DNA (ctDNA) in blood samples. These methods detect cancer earlier than conventional imaging, improving survival rates (Razavi et al., 2019). In Pakistan, where breast cancer rates are among Asia’s highest, AI-driven genetic testing could be crucial for early detection and treatment optimization (Bhurgri et al., 2019).

Pharmacogenomics, which studies how genes influence drug response, is another AI-transformed field. Medications like metformin, widely prescribed for diabetes, show variable efficacy due to genetic differences. AI models trained on genomic and metabolic data have identified key genetic markers predicting metformin response, enabling personalized treatment plans (Baig et al., 2022). Similarly, AI-driven analysis of drug resistance mutations in tuberculosis has expedited the identification of effective treatments for drug-resistant strains (Nguyen et al., 2018). AI-driven pharmacogenomics could reduce adverse drug reactions and improve treatment outcomes globally.

Despite its promise, AI-driven genetic testing faces challenges. A major issue is the lack of genetic diversity in AI training datasets. Most genomic data used to develop AI models come from individuals of European ancestry, leading to biased predictions when applied to non-European populations (Sirugo et al., 2020). This can result in misclassified variants and incorrect risk assessments. Efforts to build population-specific genomic databases, such as those initiated by Pakistani researchers, are crucial (Khan et al., 2022).

Privacy concerns also loom large. Genetic data is highly sensitive, and AI’s ability to process vast datasets raises ethical questions about security and consent. In some countries, genetic information has been misused by insurance companies and employers, leading to discrimination (Knoppers & Chadwick, 2021). Without strong regulations, AI-driven genetic tools could reinforce social inequalities, particularly in regions where genetic disorders carry stigma.

AI’s expanding role in reproductive genetics presents even more complex ethical dilemmas. Technologies like preimplantation genetic testing (PGT), which analyze embryos for inherited diseases before IVF implantation, are becoming more sophisticated with AI. While this could prevent serious genetic conditions, it raises concerns about genetic selection and modifying future generations (Niemiec & Howard, 2020). AI’s increasing influence in reproductive medicine forces society to navigate the fine line between medical innovation and ethical responsibility.

Ultimately, AI is redefining genetic medicine, making it more precise, predictive, and personalized. It enhances disease risk assessments, optimizes treatments, and enables earlier diagnoses, shifting healthcare from reactive to preventive. However, ensuring AI’s benefits are equitably distributed requires addressing key challenges—improving representation in genomic research, safeguarding genetic privacy, and maintaining ethical oversight. If implemented responsibly, AI-driven genetic testing could mark a turning point in medicine, making advanced healthcare accessible and impactful for all.

CITATIONS/REFERENCES

Baig, A., et al. “Pharmacogenomic markers for metformin response in South Asian populations.” BMC Medical Genomics vol. 15,1 (2022): 201. doi:10.1186/s12920-022-01255-5

Bhurgri, Y., et al. “Epidemiology of breast cancer in Pakistan: AI applications in early detection.” Asian Pacific Journal of Cancer Prevention vol. 20,5 (2019): 1255-1262. doi:10.31557/APJCP.2019.20.5.1255

Collins, F. S., et al. “The human genome project’s legacy and AI-driven genetic medicine.” Nature vol. 577,7789 (2020): 37-45. doi:10.1038/s41586-020-2003-4

Huang, X., et al. “Deep learning for BRCA mutation classification in breast cancer patients.” The Lancet Oncology vol. 22,3 (2021): e185-e193. doi:10.1016/S1470-2045(21)00063-5

Jafar, T. H., et al. “Burden of cardiovascular diseases in South Asia: The role of AI in prediction models.” Circulation vol. 141,14 (2020): 1125-1134. doi:10.1161/CIRCULATIONAHA.119.042929

Khera, A. V., et al. “Polygenic risk scores and coronary artery disease.” The New England Journal of Medicine vol. 376,8 (2018): 653-663. doi:10.1056/NEJMoa1800175

Khan, S. R., et al. “Developing a genomic database for Pakistan: Challenges and progress.” Journal of Medical Genetics vol. 59,7 (2022): 512-520. doi:10.1136/jmedgenet-2021-108129

Knoppers, B. M., & Chadwick, R. “Genetic discrimination in the age of AI.” Nature Genetics vol. 53,4 (2021): 450-456. doi:10.1038/s41588-021-00796-2

Martin, A. R., et al. “Human genetic diversity and AI bias in genomic medicine.” Nature Genetics vol. 51,4 (2019): 584-591. doi:10.1038/s41588-019-0371-5

Niemiec, E., & Howard, H. C. “Ethical implications of AI in reproductive genetics.” Nature Biotechnology vol. 38,10 (2020): 1102-1104. doi:10.1038/s41587-020-0651-6

Nguyen, L., et al. “AI-driven analysis of tuberculosis drug resistance mutations.” Lancet Infectious Diseases vol. 18,9 (2018): e375-e385. doi:10.1016/S1473-3099(18)30233-1

Razavi, P., et al. “Circulating tumor DNA and early cancer detection.” Science Translational Medicine vol. 11,507 (2019): eaax3006. doi:10.1126/scitranslmed.aax3006

Sirugo, G., et al. “The missing diversity in human genetic studies.” Cell vol. 181,1 (2020): 26-31. doi:10.1016/j.cell.2020.03.035


Avijay Sen
Franklin High School
Elk Grove, California
Teacher: Ms. Sarah Kuhlman Ballard

Artificial intelligence (AI) is reshaping healthcare, offering powerful tools to analyze vast amounts of genetic data with speed and accuracy. One of its most promising applications is in genetic testing, where AI can help interpret results, assess disease risk, and guide personalized treatment plans. For individuals undergoing genetic testing for diabetes, AI has the potential to offer deeper insights than traditional methods. My interest in this field is deeply personal: my grandmother, Didi, has diabetes, and seeing her navigate the challenges of the disease every day has made me passionate about finding better predictive and preventive solutions. However, despite AI’s potential, its integration also raises concerns about privacy, bias, and overreliance.

Genetic testing for diabetes is complex, as the disease involves multiple genes interacting with lifestyle and environmental factors. Traditional tests analyze single genetic markers, but AI can process massive datasets to identify subtle gene-gene interactions [1]. By calculating polygenic risk scores, AI assesses the combined impact of multiple genes, offering a more precise risk estimate than standard tests focused on well-known risk variants [2]. AI also integrates genetic information with lifestyle and medical history, predicting how an individual’s risk might change based on diet, exercise, or medication use. This tailored forecasting allows for early intervention and more effective prevention strategies [3]. AI can also identify novel genetic markers that traditional research methods often overlook, improving early detection and leading to targeted treatments [4].

Despite these advantages, AI-driven genetic testing comes with risks. One major concern is data privacy. Genetic information is highly sensitive, and breaches could lead to discrimination in insurance, employment, or even personal life. A real-world example is the 2023 23andMe data breach, in which hackers accessed nearly 7 million users’ genetic and ancestry data, which was later sold on the dark web. This breach emphasizes the need for stronger security measures in AI-driven genetic testing platforms to prevent unauthorized access and misuse of personal health information [6]. Without strict security protocols, AI-based systems remain vulnerable to cyberattacks, exposing deeply personal data. To address these risks, stronger regulatory oversight must be required to ensure AI operates under ethical and secure guidelines [7].

Algorithmic bias is another challenge. AI models are only as good as the data they are trained on, and if these datasets lack diversity, predictions may be skewed. Genetic studies have historically been conducted on predominantly European populations, meaning AI models may not be as accurate for individuals from underrepresented ethnic backgrounds [8]. Ensuring diverse and representative data is crucial to making AI-driven genetic analysis equitable for all patients. Otherwise, AI could unintentionally widen health disparities instead of reducing them [9]. Another issue is overreliance on AI—while AI can enhance genetic analysis, it should not replace human expertise. There is a risk that doctors and patients might trust AI-generated insights without questioning their validity, even though medical decisions require context and nuance that AI may not fully grasp.

Transparency is also key to maintaining trust in AI-driven genetic testing. Many AI models operate as “black boxes,” making their decision-making processes difficult to interpret. If patients and healthcare providers cannot understand how an AI system reaches its conclusions, it becomes harder to validate or challenge the results, or to catch hallucinations when they occur [1]. Developing explainable AI models ensures predictions are not only accurate but also comprehensible. Instead of just indicating a predisposition to diabetes, AI should provide practical lifestyle modifications, treatment options, and ongoing monitoring strategies [10]. Unlike traditional genetic tests, which offer a one-time snapshot, AI-driven reports should evolve with new data, continuously updating how lifestyle changes or emerging research findings affect an individual’s risk [12].
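
One simple form of explainability is to report how much each input contributed to a single prediction; for a linear model, each feature’s contribution to the log-odds is just its coefficient times its value. The sketch below (assuming scikit-learn and NumPy, with synthetic data and hypothetical feature names) prints such a per-feature breakdown instead of an unexplained risk score.

```python
# Minimal sketch: for a linear model, each feature's contribution to one
# patient's predicted log-odds is coefficient * feature value, giving a
# per-prediction explanation rather than a bare score.
# Feature names and data are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
feature_names = ["prs", "bmi", "family_history"]          # hypothetical inputs
X = rng.normal(size=(400, 3))
y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=400) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient                   # per-feature log-odds
for name, c in zip(feature_names, contributions):
    print(f"{name:15s} {c:+.2f}")
print("intercept      ", f"{model.intercept_[0]:+.2f}")
print("risk           ", f"{model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")
```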

AI is not a perfect solution, but its potential to revolutionize genetic testing, particularly for diseases like diabetes, is undeniable. The challenges—bias, ethical concerns, and data privacy—must be carefully addressed, but the benefits, including enhanced accuracy, personalized treatment plans, and early disease detection, outweigh the risks. If harnessed responsibly, AI can make genetic testing more precise, accessible, and empowering, leading to a future where healthcare and technology work together to improve lives.

CITATIONS/REFERENCES

[1] Ismail, L., Materwala, H., Tayefi, M., Ngo, P., & Karduck, A. P. (2022). Type 2 diabetes with artificial intelligence machine learning: methods and evaluation. Archives of Computational Methods in Engineering, 29(1), 313-333.
[2] González-Martín, J. M., Torres-Mata, L. B., Cazorla-Rivero, S., Fernández-Santana, C., Gómez-Bentolila, E., Clavo, B., & Rodríguez-Esparragón, F. (2023). An artificial intelligence prediction model of insulin sensitivity, insulin resistance, and diabetes using genes obtained through differential expression. Genes, 14(12), 2119.
[3] Fazakis, N., Kocsis, O., Dritsas, E., Alexiou, S., Fakotakis, N., & Moustakas, K. (2021). Machine learning tools for long-term type 2 diabetes risk prediction. ieee Access, 9, 103737-103757.
[4] DeForest, N., & Majithia, A. R. (2022). Genetics of type 2 diabetes: implications from large-scale studies. Current diabetes reports, 22(5), 227-235.
[5] Chaplot, N., Pandey, D., Kumar, Y., & Sisodia, P. S. (2023). A comprehensive analysis of artificial intelligence techniques for the prediction and prognosis of genetic disorders using various gene disorders. Archives of Computational Methods in Engineering, 30(5), 3301-3323.
[6] Chiruvella, V., & Guddati, A. K. (2021). Ethical issues in patient data ownership. Interactive journal of medical research, 10(2), e22269.
[7] MacEachern, S. J., & Forkert, N. D. (2021). Machine learning for precision medicine. Genome, 64(4), 416-425.
[8] Abdollahi, J., NouriMoghaddam, B., & Mirzaei, A. (2023). Diabetes data classification using deep learning approach and feature selection based on genetic.
[9] Farhud, D. D., & Zokaei, S. (2021). Ethical issues of artificial intelligence in medicine and healthcare. Iranian journal of public health, 50(11), i.
[10] Akingbola, A., Adegbesan, A., Ojo, O., Otumara, J. U., & Alao, U. H. (2024). Artificial intelligence and cancer care in Africa. Journal of Medicine, Surgery, and Public Health, 3, 100132.
[11] Chaplot, N., Pandey, D., Kumar, Y., & Sisodia, P. S. (2023). A comprehensive analysis of artificial intelligence techniques for the prediction and prognosis of genetic disorders using various gene disorders. Archives of Computational Methods in Engineering, 30(5), 3301-3323.
[12] Johnson, K. B., Wei, W. Q., Weeraratne, D., Frisse, M. E., Misulis, K., Rhee, K., … & Snowdon, J. L. (2021). Precision medicine, AI, and the future of personalized health care. Clinical and translational science, 14(1), 86-93.


Joshua Wang
William Lyon Mackenzie Collegiate Institute
Toronto, Canada
Teacher: Dr. Elaine Sinclair

At its core, artificial intelligence (AI) solves problems in ways analogous to human reasoning by inferring, classifying, and predicting patterns (Norvig and Russell). With rapid advancements in newer models, AI already outperforms humans, including industry professionals, in pattern-recognition tasks requiring accuracy and efficiency. For instance, a computer vision algorithm trained on large datasets of expert-labeled pathology slides detected lymph node metastases of breast cancer with greater accuracy than professional pathologists (Bejnordi and Johannes). Now, emerging in genetic testing, a field where massive genomic datasets must be parsed precisely and quickly, AI continues to excel at these two metrics: accuracy and efficiency. However, while it can offer strong interpretations of data, it introduces risks to trust stemming from automation bias and its black-box nature.

Firstly, AI models trained on extensive genomic databases can rapidly identify patterns, classify genetic variants, and predict their potential health effects. In some aspects of genetic testing, there seems to be no better option than AI for genetic variant classification (i.e., predicting the effects of genetic changes). For example, when researchers at Stanford Medicine developed the ARC-SV algorithm to detect complex structural variations (cxSVs), their model found previously undiscovered correlations between complex gene rearrangements and gene expression in the brain (Zhou, Arthur, and Guo). In this case, AI analysis appeared to be the only available tool, as these correlations were too intricate to analyze otherwise.

Besides these fresh insights, AI’s interpretations can also be actionable. Artificial intelligence’s ability to personalize allows it to provide a more comprehensive analysis than a standard genetic test. Standard tests identify variants and risks but often do not provide actionable insight into the data, leaving interpretation open. AI can produce not just probabilities but advice tailored to the individual.

Despite its advantages, AI in genetic testing raises significant questions about trust, particularly due to its black-box nature. Many advanced AI models do not provide clear explanations for their decisions, making it difficult for patients and doctors to understand why a particular diagnosis was made. Human experts can better explain the rationale behind their professional judgments, while the amalgamation of data in AI creates a “data soup” (Norvig and Russell), obscuring the reasoning behind its judgments.

Additionally, while the black box undermines patients’ trust in AI, another challenge is automation bias, where doctors may overly trust AI-generated results without critically evaluating them. This is a form of appeal to authority, and it is exacerbated by the opacity of AI’s internal mechanics. It is especially relevant in the case of variants of uncertain significance (VUS), genetic variants identified through testing whose significance is unknown (Richards and Aziz). For example, a 2015 study in the UK revealed that 71% of breast cancer specialists were unsure how to explain the clinical implications of VUS reports to patients (Richards and Aziz). If an AI system incorrectly classifies a rare genetic variant as harmful, experts’ lack of confidence may cause them to accept the classification blindly, leading patients to undergo unnecessary procedures and distress.

Furthermore, AI may inadvertently reinforce biases in healthcare. If AI models are trained primarily on genomic data from individuals of European descent, this could exacerbate existing racial disparities in genetic healthcare, leading to misdiagnoses for marginalized groups. For example, in a study aimed at understanding algorithms’ racial biases, it was shown that at a given “risk score” produced by an AI algorithm, Black patients were considerably sicker than white patients (Obermeyer and Powers). The disparity was eventually explained by the fact that the algorithm was trained to predict healthcare costs but failed to account for the fact that less money is spent caring for Black patients. This shows that work remains to be done to ensure AI’s training data is equitable before its use in genetic testing can improve.

While AI’s efficiency and rapidly sharpening accuracy in genetic testing are undeniable, and its ability to classify rare mutations and personalize treatment plans offers opportunities and information otherwise inaccessible, its black-box decision-making and doctors’ inclination to trust AI’s conclusions blindly should be approached with caution. It should always be made clear that AI’s results are never certain; they are, in effect, opinions. Overall, the advent of AI in analyzing genetic test results should be seen optimistically, as it provides new perspectives along with highly personalized and actionable results.

CITATIONS/REFERENCES

Coghlan, S., Gyngell, C., Vears, D. F. (2023). Ethics of artificial intelligence in prenatal and pediatric genomic medicine. Journal of Community Genetics.

Norvig, P., Russell, S.J. (2021). Artificial intelligence: A modern approach.

Bejnordi, B., Johannes, Veta. (2017). Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Journal of the American Medical Association. https://jamanetwork.com/journals/jama/fullarticle/2665774

Obermeyer, Z., Powers, B. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science: Volume 366. https://pubmed.ncbi.nlm.nih.gov/31649194/

Zhou, B., Arthur, J., Guo, H. (2024). Detection and analysis of complex structural variation in human genomes across populations and in brains of donors with psychiatric disorders. Cell. https://www.cell.com/cell/abstract/S0092-8674(24)01032-8

Richards, S., Aziz, N. (2015). Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genetics in Medicine. doi:10.1038/gim.2015.30