
AIXBIO Weekly #17 - Oct/30/2023

Regulation & Landscape

UN’s Strategic Move:

Steering AI for Global Good & Bridging Digital Divides

UN's new AI Advisory Body aims to guide global AI use for the common good, addressing risks & ensuring equitable access. Potential to boost the Sustainable Development Goals, yet challenges around concentrated expertise & potential harms remain. Recommendations expected by year-end.

The UN has inaugurated an AI Advisory Body, pooling expertise from various sectors to navigate AI’s global governance, risks, and opportunities. Secretary-General António Guterres underscored AI’s phenomenal growth and its capacity to propel global goals, emphasizing the urgency to harness AI responsibly. He pointed out the potential of AI to revolutionize public services and crisis management, especially in developing nations. However, he cautioned against the concentration of AI expertise, advocating for equitable access to prevent widening global disparities. The Advisory Body is tasked with delivering governance recommendations and strategies to utilize AI for Sustainable Development Goals (SDGs), contributing to the upcoming Summit of the Future and the Global Digital Compact discussions.

Frontier Model Forum Ushers in New Era of AI Safety & Governance

With Appointment of Chris Meserole and $10M AI Safety Fund

Chris Meserole has been named the first Executive Director of the Frontier Model Forum, a collaborative initiative between Google, Microsoft, OpenAI, and Anthropic, aiming to ensure the safe and responsible development of frontier AI models globally. The Forum has also launched a $10 million AI Safety Fund, in partnership with philanthropic entities, to advance research and develop tools for society to effectively test and evaluate advanced AI models. The fund will support independent researchers affiliated with various institutions worldwide.

The AI Safety Fund is a response to the rapid advancements in AI capabilities, recognizing the need for additional academic research in AI safety. The Forum and its partners are committed to facilitating third-party discovery and reporting of vulnerabilities in AI systems, viewing the AI Safety Fund as a crucial part of this commitment. The fund will primarily focus on supporting the development of new model evaluations and red teaming techniques for AI models, aiming to raise safety and security standards across the industry.

In the coming months, the Forum plans to establish an Advisory Board, issue its first call for proposals from the AI Safety Fund, and release additional technical findings.

MLCommons Unveils AI Safety Working Group to Establish Robust Benchmarks for Large Language Models

MLCommons forms AIS working group, focusing on safety benchmarks for large language models, addressing AI risks & aiming for industry-standard testing. Involves tech giants & academics. Aligns with responsible AI development policies.

MLCommons, a leading AI benchmarking organization, has initiated the AI Safety (AIS) working group, targeting the development of robust safety benchmarks, particularly for large language models (LLMs). Utilizing Stanford's innovative HELM framework, the group aims to address prevalent AI risks including toxicity, misinformation, and bias, emphasizing the necessity of industry-standard safety testing. The AIS working group aspires to create a comprehensive testing platform, offering a variety of tests for diverse applications, and summarizing results into easily interpretable scores, akin to safety ratings in other industries.
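To make the idea of roll-up safety scores concrete, here is a minimal sketch of how per-test results might be aggregated into a single interpretable rating. The test names, pass rates, and thresholds are hypothetical illustrations, not MLCommons' actual methodology:

```python
# Hypothetical roll-up of per-category safety test results into one rating,
# in the spirit of the AIS working group's goal of interpretable scores.
# Test names, pass rates, and thresholds are illustrative only.
from statistics import mean

# Hypothetical pass rates (0.0-1.0) from individual safety test suites.
results = {
    "toxicity": 0.97,
    "misinformation": 0.91,
    "bias": 0.88,
}

def overall_rating(scores: dict) -> str:
    """Rate on the weakest category, so one bad area cannot be
    hidden behind strong averages elsewhere."""
    worst = min(scores.values())
    if worst >= 0.95:
        return "excellent"
    if worst >= 0.85:
        return "adequate"
    return "needs review"

print(f"mean pass rate: {mean(results.values()):.2f}")  # 0.92
print(f"overall rating: {overall_rating(results)}")     # adequate
```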

The immediate priority is to enhance AI safety testing technology, drawing from the technical and operational expertise of its members, which include major tech companies like Google, Meta, and Microsoft, as well as academic professionals. The group encourages open participation, allowing anyone to propose new tests to tackle unresolved safety issues. The collaborative effort is expected to yield clear insights on which AI models best address safety concerns, fostering innovation and setting common goals for AI safety.

The AIS working group's formation aligns with various policy frameworks and commitments to responsible AI development and safety, including the voluntary commitments made to the White House in July 2023, NIST’s AI Risk Management Framework, and the EU’s forthcoming AI Act.

Navigating the Intellectual Waters: The Complex Landscape of Patenting AI in Drug Discovery

Billions invested in AI-driven drug discovery, yet IP issues linger.

The biopharma industry is witnessing a surge in investments, with billions of dollars annually directed towards AI-enabled drug discovery, resulting in over 70 AI-derived medical solutions currently in clinical trials. Despite this, the industry faces a significant dilemma regarding the patenting of drug discovery algorithms. 

Companies are torn between protecting their innovations and navigating the fluid and fast-paced nature of AI technology. AI plays a crucial role in various phases of drug discovery, offering unparalleled speed and efficiency in target identification, molecule screening, and the design of new drug agents. However, patenting algorithms presents unique challenges, even as roughly 3,000 patents have been filed for AI-related drug development and delivery. The patenting process is intricate, heavily influenced by legal interpretations and the discretion of the patent examiner. Companies are now forced to make strategic decisions, weighing the benefits of patenting against the costs and potential legal hurdles.

CLC & OpenSecrets Take on FEC Over Disclosure Rules Delay

CLC & OpenSecrets sue FEC over delayed transparency rules for special-purpose accounts 

The Campaign Legal Center (CLC) and OpenSecrets have initiated legal action against the Federal Election Commission (FEC), challenging the agency’s prolonged silence on a critical transparency matter. In 2019, both organizations jointly petitioned the FEC, seeking the establishment of new disclosure rules for “special-purpose” accounts held by national political party committees.

These accounts, pivotal in managing party headquarters, organizing key events, and funding legal proceedings, have been under scrutiny due to their ability to accept funds significantly above the general contribution limits.

The lawsuit holds substantial implications, particularly for AI regulation within political campaigns. This legal action could heighten scrutiny of AI’s role in ad targeting, voter profiling, and disinformation, pressing for transparency and accountability while guarding against attempts at regulatory capture.

A favorable outcome for CLC and OpenSecrets might set a precedent, possibly catalyzing new rules to ensure both ethical AI use in political processes and ethical regulation of AI.

Ensuring transparency in AI practices is crucial for maintaining public trust, and this lawsuit underscores this necessity.

Funding

PhaseV Secures $15M to Optimize Clinical Trials with Advanced ML Technology

PhaseV, an Israel-based firm, has raised $15M to advance its ML tech for optimizing clinical trials. The tech aims to enhance trial design & retrospective analysis, potentially speeding up drug development.

PhaseV, based in Israel, specializes in the development of machine learning (ML) technology tailored for optimizing clinical trial designs and conducting retrospective analyses. The company recently announced a successful funding round, securing $15 million. This funding round witnessed participation from Viola Ventures, Exor Ventures, LionBird, and several notable angel investors.

The secured funds are earmarked for the further enhancement of PhaseV’s proprietary ML technology, with the overarching goal of accelerating the drug development process.

Australian AI Health Platform Heidi Health Secures $6.3M for GP Expansion and AI Integration

Australian AI health platform Heidi Health secures $6.3M in Series A, aiming to expand GP capacity & introduce new AI offerings, addressing a crucial GP shortage in Australia.

Heidi Health, an Australian AI health platform, has successfully raised US$6.3m in a Series A funding round. The capital injection is earmarked for expanding the company’s team, increasing the adoption of its software among general practitioners (GPs) and clinics across Australia, and eventually breaking into global markets. This strategic move is timely, as Australia is grappling with a predicted shortage of 10,600 GPs over the next decade, amidst a 58% surge in demand for healthcare services.

Innovation

Diagnosis

Paige's AI-Driven Breakthrough: Transforming Breast Cancer Detection & Pathology

Paige's AI application, Paige Lymph Node, has been granted the Breakthrough Device Designation by the U.S. FDA, recognizing its potential in enhancing the diagnosis of breast cancer metastases in lymph node tissue. This designation is reserved for technologies that promise more effective treatment or diagnosis of life-threatening diseases, especially when no approved alternatives exist or when the new technology presents significant advantages over existing options.

Paige Lymph Node stands out as the first AI application in its category to receive this recognition. The technology aids pathologists in quickly and accurately identifying small lymph node metastases, ensuring that patients receive optimal disease management.

Developed through deep learning and trained with over 32,000 digitized slides, Paige Lymph Node demonstrates near-perfect sensitivity in detecting breast cancer metastases.
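For readers less familiar with the metric: sensitivity is the fraction of truly positive cases a model flags. A minimal sketch of the computation, using made-up labels rather than Paige's evaluation data:

```python
# Sensitivity (recall) on a binary detection task: of all slides that truly
# contain metastases, what fraction did the model flag? Labels below are
# illustrative only; these are not Paige's evaluation data.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # 1 = metastasis present on the slide
y_pred = [1, 1, 1, 0, 1, 1, 0, 1]  # model's calls for the same slides

true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
false_neg = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

sensitivity = true_pos / (true_pos + false_neg)
print(f"sensitivity: {sensitivity:.2f}")  # 1.00 here; every positive caught
```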

EmpowerSuite: Revolutionizing Radiology with AI-Driven Insights and Gamification

NewVue's latest innovation, EmpowerSuite, stands out as a cloud-hosted, PACS-integrated radiology dashboard that centralizes radiology operations to streamline workflows. Its integration of gamification elements is a novel approach, aiming to boost job satisfaction and productivity and to encourage collaboration among peers.

EmpowerSuite provides radiologists with AI-generated clinical synopses, offering a comprehensive chronological record of patient interactions, and highlighting pertinent medical details. It goes a step further by presenting AI-generated clinical suggestions, recommended follow-up actions, and case citations, ensuring that radiologists have all the necessary information at their fingertips.

Enhancing Patient Safety: AI's Role in Preventing Surgical Complications

The article sheds light on the grave issue of Retained Surgical Items (RSIs) in surgeries, emphasizing the severe complications and the urgent need for innovative solutions. In 2019, RSIs topped the list of sentinel events, pushing the healthcare sector to actively seek and implement transformative solutions. The integration of AI-powered cameras and video analytics in operating rooms has proven to be a game-changer, significantly reducing RSI incidents, enhancing patient outcomes, and mitigating legal liabilities.

The technology enables real-time monitoring of surgical procedures, ensuring all surgical instruments are accounted for before the completion of surgery. Automated anomaly alerts and predictive analytics further contribute to preventing RSIs, providing the surgical team with immediate notifications in case of any irregularities. This not only enhances patient safety but also improves operational efficiency and reduces healthcare costs associated with prolonged recovery periods and additional surgeries. For healthcare providers, the integration of AI in surgical procedures translates to reduced legal liabilities, protection of reputation, and, most importantly, the preservation of patient trust.
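The core reconciliation logic such systems perform can be sketched simply. The item counts below stand in for output from a hypothetical vision-tracking pipeline, not any vendor's actual API:

```python
# Toy reconciliation check in the spirit of RSI prevention: every item
# detected entering the surgical field must also be detected leaving it
# before closing. Counts here stand in for output from a hypothetical
# vision-tracking pipeline; no real vendor API is used.
from collections import Counter

items_in = Counter({"sponge": 10, "clamp": 4, "needle": 12})
items_out = Counter({"sponge": 9, "clamp": 4, "needle": 12})

discrepancies = items_in - items_out  # items seen entering but not leaving
if discrepancies:
    for item, count in discrepancies.items():
        print(f"ALERT: {count} {item}(s) unaccounted for before closing")
else:
    print("All items reconciled; closing count complete")
```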

Global Health Equity: A Leap Forward in Genetic Prediction Models

MIT's new genetic model includes diverse ancestries, improving prediction accuracy & aiming for global health equity. Significant gains in accuracy, especially for underrepresented populations. A step towards inclusive genomics.

MIT researchers have developed a groundbreaking genetic model that incorporates data from a broad spectrum of genetic ancestries, marking a significant stride towards inclusivity in genetic prediction models. Traditional polygenic scores, which estimate an individual’s genetic risk for diseases or likelihood of certain traits, have predominantly been based on European descent data, resulting in inaccurate predictions for non-European ancestries.

The new model, however, has demonstrated a dramatic increase in prediction accuracy, particularly for traditionally underrepresented populations. For instance, the model’s accuracy for individuals of African ancestry improved by 60%, and by 18% for those with admixed genetic backgrounds. This inclusive approach is poised to enhance health outcomes worldwide and promote health equity, ensuring the benefits of genomic sequencing are accessible across diverse global populations.
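For context, a polygenic score is at heart a weighted sum over genotyped variants, which is also why ancestry matters: effect-size weights estimated in one population transfer poorly to others. A minimal sketch with toy numbers, not the MIT model's weights:

```python
# A polygenic score is a weighted sum: for each variant, the number of risk
# alleles carried (0, 1, or 2) times that variant's estimated effect size.
# Effect sizes below are toy numbers, not the MIT model's weights; the
# ancestry gap arises because weights fit on one population generalize
# poorly to others.
import numpy as np

effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])  # per-variant weights (toy)
genotype = np.array([2, 0, 1, 1])  # risk-allele counts for one individual

polygenic_score = float(np.dot(genotype, effect_sizes))
print(f"polygenic score: {polygenic_score:.2f}")  # 2*0.12 + 0.30 + 0.08 = 0.62
```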

Opinions & Trends

AI & Biomarkers: The Future of Precision Medicine

Precision medicine is evolving with AI & biomarkers leading the way. From CAR T-cell treatments to CRISPR, the field is expanding. AI aids in trial efficiency & biomarkers offer targeted treatments. The future hinges on tech & data.

Precision medicine is witnessing rapid advancements, with technology playing a pivotal role. Over a decade ago, the success of the first CAR T-cell treatment marked a significant milestone. Since then, CAR-T therapies have multiplied, with six now approved. The field is also exploring the potential of CRISPR therapies, especially for rare diseases. While cell therapies dominate clinical trials, there's a noticeable rise in gene therapies. By the end of H1 2023, only 39 gene therapies were on the market, but projections suggest substantial growth in the next decade.

Precision medicine primarily targets oncology, but its scope is broadening to other areas, including autoimmune diseases. AI is becoming instrumental in clinical trials, especially in reducing dropout rates. One of the major advancements in the field is the utilization of biomarkers, with oncology being a primary area for biomarker-driven therapy. The introduction of digital biomarkers, like NeuraLight's AI-driven platform, is also noteworthy. As technology and AI continue to progress, their impact on precision medicine is undeniable. AI is now involved in all stages of drug development, from R&D to market. The future of precision medicine is promising, with experts emphasizing the need for standardization, technology integration, and data utilization.

Transforming Patient Care: Japanese Doctors Embrace Ubie's Innovative AI Tool

Japanese doctors now have access to Ubie's generative AI tool, streamlining patient interviews & boosting efficiency.

The tool is a game-changer in patient care, available to 1.5K+ healthcare professionals across Japan.

Ubie, a Japanese startup, has revolutionized patient care by introducing a generative AI tool, the Medical Interview Summary Function, on its Ubie Medical Navi platform. This tool transforms lengthy and detailed preliminary patient interviews into concise summaries, aiding doctors in quickly understanding patient concerns. The platform replaces traditional paper medical questionnaires with digital forms, allowing for tailored preliminary interviews based on patient symptoms. After patients answer 20-30 questions related to their symptoms and lifestyle habits, the LLM-based feature generates a summary for the doctors, ensuring efficiency and clarity in patient care.

This innovative feature is now accessible to over 1,500 healthcare professionals across 47 prefectures in Japan, provided at no additional cost. Developed in collaboration with doctors through Ubie Lab, the company's research arm, the tool addresses the pressing needs of busy medical professionals, enhancing both patient communication and operational efficiency. Trials have demonstrated its effectiveness, with positive feedback from users indicating its potential for continued use in the medical field.
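The underlying pattern, structured intake answers condensed by an LLM into a clinician-facing note, can be sketched roughly as follows. The prompt wording and the summarize() stub are hypothetical, since Ubie has not published its implementation details:

```python
# Rough sketch of the questionnaire-to-summary pattern described above.
# The prompt wording and the summarize() stub are hypothetical illustrations;
# Ubie has not published its actual prompts or model details.

def build_prompt(answers: dict) -> str:
    """Fold structured intake answers into a single summarization prompt."""
    lines = [f"- {question}: {answer}" for question, answer in answers.items()]
    return (
        "Summarize the following pre-visit patient interview into a short "
        "note for the treating physician, highlighting chief complaint, "
        "duration, and relevant lifestyle factors:\n" + "\n".join(lines)
    )

def summarize(prompt: str) -> str:
    """Placeholder for an LLM call; swap in any completion API here."""
    return "(LLM-generated summary would appear here)"

intake = {
    "Chief complaint": "persistent cough",
    "Duration": "about two weeks",
    "Smoking": "non-smoker",
}
print(summarize(build_prompt(intake)))
```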

Responsible Development and Regulation of Open Foundation Models in AI

Experts at a Princeton-Stanford workshop discuss open foundation models in AI, highlighting innovation, risks, & need for responsible regulation. Emphasis on transparency & collaboration. Challenges like liability & security addressed. Future of AI regulation in focus.

At a Princeton-Stanford workshop, experts from various sectors gathered to delve into the complexities of open foundation models in AI, discussing their potential, risks, and the path forward for responsible development and regulation. Foundation models have become central to AI, driving innovation and adoption. However, their widespread use introduces risks, necessitating a careful approach to regulation. The workshop featured discussions on Meta’s commitment to open science, the balance between innovation and responsibility, and the challenges posed by the rapid advancement of these technologies. The need for transparency, collaboration, and a nuanced regulatory framework was emphasized, highlighting the role of open foundation models in fostering scientific discovery while acknowledging the potential for harm. The discussions underscored the importance of addressing environmental impacts, ensuring security, and navigating the complexities of liability. The workshop concluded with a call for a balanced approach to regulation, ensuring the responsible development of AI technologies while fostering innovation.


Essence of AI, Beyond Just Algorithms

How do we define AI's intelligence?

Exploring AI's true intelligence, this article delves into its historical context, philosophical debates, & societal implications.

The article from Duke University Press delves deeply into the intricate topic of artificial intelligence and its true essence. It begins by addressing the challenges of defining what intelligence means in the context of AI. Historically, the development of AI has been influenced by various factors, and the article provides a glimpse into this context.

Furthermore, it touches upon the philosophical debates that have surrounded AI's capabilities and potential. These debates often revolve around whether AI can truly replicate human intelligence or if it's just a sophisticated set of algorithms. The societal implications of AI's intelligence are also discussed, highlighting the broader impact of AI on our world. The article raises pertinent questions about the nature of AI and challenges readers to think critically about what it means for a machine to be "intelligent".

David and Google's AI expert, Danu, demystify Artificial Intelligence, delving into the nuances and evolution of AI, Machine Learning, and Deep Learning.

The discussion highlights the transformative impact of deep learning and the Transformer architecture, addressing computational challenges and forecasting AI's promising future across various sectors.

Papers

AlphaFold Protein Structure Database Update: 200M+ Predicted Structures & New Search Features

The AlphaFold Protein Structure Database, a collaborative effort between EMBL-EBI and Google DeepMind, has undergone a significant update to enhance its functionality and user experience. Now hosting over 200 million predicted 3D protein structures, the database has integrated new features such as sequence similarity-based search and the display of structurally similar predictions.

These advancements aim to support the scientific community by making data more accessible and navigable. The sequence similarity-based search, powered by BLAST, allows users to discover relevant predicted structures based on their input protein sequences, while the integration of Foldseek Cluster aids in handling the extensive dataset of predicted protein structures. This update marks a substantial step forward in facilitating structural biology research and demonstrates the commitment to enriching user experience in the field.
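As a practical starting point, predicted structures can also be retrieved programmatically. A minimal sketch using the database's public prediction endpoint (field names reflect the API at the time of writing; consult the site's API documentation for the current schema):

```python
# Minimal sketch of fetching a predicted structure from the AlphaFold DB
# public API for a UniProt accession (P69905, human hemoglobin alpha).
# Endpoint and field names reflect the API at the time of writing; check
# the AlphaFold DB API docs for the current schema.
import requests

accession = "P69905"
resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{accession}")
resp.raise_for_status()

entry = resp.json()[0]  # the API returns a list of prediction entries
print(entry.get("entryId"))  # e.g. the AlphaFold entry identifier

pdb_url = entry.get("pdbUrl")
if pdb_url:
    pdb_text = requests.get(pdb_url).text  # predicted structure, PDB format
    print(pdb_text.splitlines()[0])
```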