AIXBIO Weekly #21 - Nov/27/2023

Regulatory Landscape
EU's AI Act: France, Germany, Italy Advocate for Self-Regulation in Foundation Models
The European Union's AI Act, a landmark piece of legislation designed to regulate artificial intelligence, is undergoing crucial changes. The Act focuses on mitigating potential harm caused by AI technologies. However, the advent of ChatGPT, a versatile AI system built on OpenAI's GPT family of models, has significantly influenced the ongoing negotiations.
France, Germany, and Italy are at the forefront of this shift, advocating for a different approach to regulating foundation models within the AI Act. They propose mandatory self-regulation through codes of conduct, emphasizing that the risks associated with AI lie more in its application than in the technology itself. This perspective aligns with a technology-neutral and risk-based approach, aiming to balance innovation with safety.
A key element of their proposal is the introduction of model cards for foundation models. These cards are intended to provide a comprehensive summary of information about trained AI models, accessible to a wide audience. They would include details like the number of parameters, intended uses, potential limitations, and results of bias and security assessments.
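The proposal does not prescribe a concrete format, but a model card along those lines can be sketched as a simple structured record. All field names and values below are hypothetical illustrations, not part of the actual proposal:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative model card for a foundation model (fields are hypothetical)."""
    model_name: str
    parameter_count: int        # number of trainable parameters
    intended_uses: list         # documented intended applications
    known_limitations: list     # documented failure modes and limits
    bias_assessment: str        # summary of bias-evaluation results
    security_assessment: str    # summary of security / red-teaming results

card = ModelCard(
    model_name="example-foundation-model",
    parameter_count=7_000_000_000,
    intended_uses=["text summarization", "question answering"],
    known_limitations=["may produce factual errors", "limited non-English coverage"],
    bias_assessment="evaluated on standard bias benchmarks; results published",
    security_assessment="red-teamed for prompt-injection and misuse scenarios",
)
print(card.model_name, card.parameter_count)
```
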
The proposed changes have sparked intense discussions within the EU, with some officials expressing strong reservations. The approach to foundation models is set to be a central topic in upcoming discussions among EU member states and legislative bodies.
Big Tech's Influence on the AI Act: Navigating Corporate Interests and Regulatory Challenges
The report "How Big Tech undermined the AI Act," from Corporate Europe Observatory, explores the influence of major tech companies on the European Union's legislative process for the AI Act. It highlights how companies like OpenAI, backed by Microsoft, have gained significant access to policymakers, shaping the discourse around AI regulation. The AI Act, aimed at a risk-based regulatory approach, has been a subject of intense debate, with Big Tech pushing to limit regulations on foundation models, such as ChatGPT, and high-risk AI applications.
The debate around AI regulation has also touched on issues like the amplification of social prejudice, monopoly power, and environmental impact. The article notes that Big Tech's lobbying efforts have been successful in reducing external oversight of potentially harmful AI within the EU. This has led to a situation where the development of general AI systems, on which other AI is built, would largely go unregulated, leaving Big Tech off the hook.
Tracking the Progress: Analyzing the AI Executive Order's Impact on Government Policy and Priorities
From Stanford HAI, "By the Numbers: Tracking The AI Executive Order" provides an insightful analysis of President Biden's Executive Order 14110 on the development and use of artificial intelligence. This order, signed on October 30, 2023, is a significant step in AI governance, comprising 150 requirements for federal entities. The Stanford Institute for Human-Centered AI (HAI) and the RegLab have developed an AI EO Tracker to monitor the implementation of these policies.
The EO's scope is broad, covering various AI-related issues, with a particular emphasis on AI safety, security, reliability, and attracting AI talent to the federal government. It involves over 50 federal agencies, with the Executive Office of the President and the Department of Commerce playing key roles. The order sets ambitious deadlines, with most tasks to be completed within a year, reflecting the urgency of AI policy implementation.
Over 35 venture capital firms and 15 companies have signed their Responsible AI commitments
These commitments focus on responsible AI practices, including internal governance, transparency, risk and benefit forecasting, auditing, testing, and continuous improvement. Alongside these commitments, the signatories have introduced a comprehensive 15-page Responsible AI Protocol, designed as a practical guide for investors and startups. This initiative, developed with input from AI experts, aims to balance innovation, policy, and capital, advocating for collaboration with elected leaders and responsible investment practices in the rapidly evolving AI landscape.
Partnerships & Funding
AI at the Forefront: SRW and Insilico's Joint Venture in Longevity Science
SRW Laboratories and Insilico Medicine have announced a collaboration to develop nutraceuticals for longevity using AI. The partnership is a response to the growing field of geroscience, which addresses aging by targeting cellular degradation. Insilico's Pharma.AI platform plays a crucial role, screening natural compounds for their potential to extend healthspan. The initiative is seen as a significant step in integrating AI with natural supplement development, a sector known for its safety and efficacy.
The economic implications of this partnership are noteworthy, with projections suggesting substantial global gains from extending human lifespan. However, the practical application of AI in this field is met with cautious optimism, considering the challenges in predicting the efficacy of natural compounds and translating these findings into marketable products.
The first suite of AI-assisted products is expected in 2024, marking a milestone in the convergence of technology and natural health products. This venture represents a blend of high-tech solutions with traditional nutraceutical practices, aiming to reshape our approach to healthspan and aging.
Recursion's Strategic Leap in TechBio: Collaborations with Tempus and Bayer, Supercomputer Expansion
Recursion Pharmaceuticals announced significant strides in its operations. The company has formed a collaboration with Tempus Labs, gaining access to one of the largest oncology datasets. This dataset, encompassing over 20 petabytes of DNA, RNA, and health-record data, will feed Recursion's AI/ML models for therapeutic discovery.
In parallel, Recursion is amplifying its computational capabilities by expanding its supercomputer, BioHive-1. This expansion, supported by NVIDIA, involves adding over 500 NVIDIA H100 Tensor Core GPUs to the existing 300 NVIDIA A100 GPUs. This upgrade is expected to quadruple Recursion's computational capacity, propelling BioHive-1 into the top 50 global supercomputers and making it the most potent in the biopharma sector.
Furthermore, Recursion has updated its collaboration with Bayer, focusing on precision oncology programs. This partnership leverages Recursion's advanced capabilities in identifying novel targets and chemistry for oncology indications. The agreement includes the potential initiation of up to seven oncology programs, with Recursion eligible for future payments and royalties.
Introducing Layer Health: Pioneering AI Solutions for Streamlining Healthcare Data
Layer Health, a new venture from MIT, has launched with a focus on revolutionizing healthcare through AI. Backed by $4 million from GV, General Catalyst, and Inception Health, the company addresses the critical issue of unstructured, inaccessible patient data in healthcare institutions. Their debut product, Distill, is designed to streamline the processing of clinical notes for various administrative and clinical tasks, such as quality measurement and real-world evidence curation. This AI-driven tool integrates seamlessly into existing healthcare systems, utilizing large language models to analyze data efficiently without the need for labeled data.
The platform has already seen adoption in the healthcare sector. xCures, a health technology firm, is utilizing Distill to better organize clinical data for cancer treatment and clinical trial matching. Similarly, the Froedtert & the Medical College of Wisconsin health network is leveraging Distill for enhancing their quality improvement processes and clinical registry submissions. These applications highlight Distill's potential in improving healthcare workflows and patient care.
Innovation
Chroma: a generative model developed by Generate Biomedicines for protein design
Chroma, a generative model developed by Generate Biomedicines for protein design, uses geometric and functional programming instructions to create new protein molecules. The goal is to tap into the vast potential of protein molecules, which have evolved over billions of years but remain largely unexplored because of computational and experimental limitations.
To achieve this, Chroma introduces several key components. It employs a diffusion process that respects the statistical behavior of polymer ensembles, allowing novel protein structures and sequences to be sampled. Its neural architecture, based on random graph neural networks, enables efficient reasoning over molecular systems with sub-quadratic scaling while still capturing long-range interactions between distant parts of the protein. To synthesize 3D protein structures from predicted inter-residue geometries, Chroma incorporates equivariant layers that efficiently generate the spatial arrangement of atoms. Finally, a general low-temperature sampling algorithm for diffusion models ensures that generated structures are sampled in a way that aligns with the desired properties and functions.

By combining these components, Chroma casts protein design as Bayesian inference under external constraints. These constraints can include symmetries, substructures, shape, semantics, and even natural-language prompts.
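Chroma's actual sampler is far more elaborate, but the idea behind low-temperature sampling can be illustrated with a toy example: Langevin dynamics on a 1-D Gaussian target, where an inverse-temperature factor sharpens the distribution so samples concentrate on high-probability states. This is a minimal sketch of the general principle, not Chroma's implementation:

```python
import math
import random
import statistics

MU, SIGMA = 2.0, 0.5  # toy 1-D Gaussian target distribution

def score(x):
    # Analytic score (gradient of the log-density) of the Gaussian target.
    return (MU - x) / SIGMA**2

def langevin_sample(n_steps=2000, step=1e-3, inv_temp=1.0, seed=0):
    # Unadjusted Langevin dynamics. Scaling the score by inv_temp > 1
    # samples from p(x)**inv_temp, i.e. a sharpened ("colder") distribution.
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)
    for _ in range(n_steps):
        x += step * inv_temp * score(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
    return x

warm = [langevin_sample(inv_temp=1.0, seed=s) for s in range(200)]
cold = [langevin_sample(inv_temp=10.0, seed=s) for s in range(200)]
# Lower temperature concentrates samples near the mode: smaller spread.
print(statistics.stdev(warm), statistics.stdev(cold))
```

At inverse temperature 10 the stationary spread shrinks by a factor of about sqrt(10), which is the sense in which low-temperature sampling trades diversity for quality.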
Experimental characterization of 310 proteins designed using Chroma has shown that the generated proteins express, fold, and exhibit favorable biophysical properties. In fact, crystal structures of two designed proteins have demonstrated atomistic agreement with Chroma samples, with a backbone root-mean-square deviation (RMSD) of approximately 1.0 Å.
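Backbone RMSD, the agreement metric quoted above, is straightforward to compute once the two structures have been superposed. A minimal sketch (omitting the rigid-body superposition step that precedes it in practice):

```python
import math

def rmsd(coords_a, coords_b):
    # Root-mean-square deviation between two pre-superposed coordinate
    # sets, given as equal-length lists of (x, y, z) tuples in Angstroms.
    assert len(coords_a) == len(coords_b)
    sq_sum = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq_sum / len(coords_a))

# Toy example: three atoms, each displaced by 1.0 A along the x-axis.
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
print(rmsd(a, b))  # prints 1.0
```

A backbone RMSD near 1.0 Å, as reported for the two crystal structures, means each backbone atom sits on average about one Angstrom from its position in the Chroma sample.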
Overall, Chroma offers a unified approach to protein design that holds promise for accelerating advancements in human health, materials science, and synthetic biology by enabling the programming of protein matter.
Deciphering Thoughts!
AI's Leap in Reading and Recreating Human Brain Waves
This breakthrough suggests potential future applications in medical fields and communication technologies. However, the technology is still nascent and requires extensive individual training, making it impractical for widespread use at present. Each participant's brain activity must be meticulously recorded over approximately 20 hours before the AI can accurately interpret and recreate images from their brain data.
The research also opens up a discussion on the ethical and legal implications of such technology. Concerns have been raised about the potential misuse of AI in surveillance or interrogation, as well as the commodification of personal brain data. Experts emphasize the need for governance and rights to be established to prevent oppressive uses of this technology.
Trends
AI's Emerging Role in Scientific Discovery: From Hypothesis Generation to Nobel-Worthy Contributions
The Nobel Turing Challenge envisions AI systems making groundbreaking discoveries by 2050. AI's current applications in science are diverse, ranging from data analysis to drafting research papers. A significant aspect of AI's contribution lies in hypothesis generation, a complex task traditionally requiring human creativity.
AI's role extends to drug discovery and gene function assignment, utilizing knowledge graphs to propose undiscovered links. The article also highlights AI-generated hypotheses across various scientific fields, including particle physics and biology.
However, the article points out challenges in AI-driven hypothesis generation. It emphasizes the need for AI systems to incorporate reasoning about the physical world, not just pattern matching. This integration of scientific knowledge into AI systems is crucial for their effective application in research.
The article suggests that AI's potential in science is vast, with the ability to automate hypothesis generation becoming increasingly important as data collection scales up. AI's capability to generate novel, 'alien' hypotheses could lead to unexpected scientific advancements and discoveries.
AI's Influence on Medical Prescriptions: A Study of Clinician Decision-Making
The Nature study investigates the impact of AI on clinicians' prescription decisions. Conducted with intensive care doctors, the experiment used four scenarios: a control group, peer human clinician advice, AI suggestions, and explainable AI (XAI) suggestions.
The key finding is that both AI and XAI significantly influence clinicians' decisions. However, contrary to expectations, XAI did not demonstrate a higher impact than AI alone. This challenges the assumption that explainability in AI systems necessarily enhances their effectiveness in clinical settings. The study also explored the correlation between clinicians' attitudes towards AI, their experience, and the influence of AI on their decisions. Interestingly, no significant correlation was found, indicating a uniform response to AI recommendations regardless of individual attitudes or experience.
This highlights a broader acceptance or skepticism of AI in healthcare, irrespective of personal or professional backgrounds. The research underscores the potential of AI in healthcare, particularly in critical care medicine, while also pointing out the complexities in its integration and the need for further exploration in the field of explainable AI.
AI's Emerging Role in Healthcare: Enhancing Diagnosis, Treatment, and Administration
AI is poised to play a crucial role in medicine, assisting physicians with both clinical and administrative tasks. This advancement aims to improve the quality, affordability, and accessibility of healthcare. A notable example is ChatGPT, which has demonstrated its capability by passing parts of the USMLE, a medical licensing exam, without additional medical training. Another significant development is Med-PaLM by Google Research and DeepMind, a large language model designed for medical domains, although it currently does not surpass clinician performance.
AI technologies are rapidly advancing, offering significant support in decision-making by processing vast data volumes. They provide comprehensive medical insights, enhancing diagnostic accuracy and treatment plans. AI's ability to detect minute abnormalities in medical imaging exemplifies its potential as a valuable decision-support tool. Additionally, AI can alleviate the burden of repetitive administrative tasks in healthcare, allowing physicians to focus more on patient-centered activities. For instance, ChatGPT could expedite the process of responding to insurance claims, thereby saving time and resources.
In treatment, AI's role is becoming increasingly personalized. It can analyze patient data to tailor treatment plans, as seen in fertility treatments where AI assists in developing precise dosing protocols. The article suggests that within the next five years, AI will offer far more accurate and efficient diagnostic, treatment, and administrative support than currently possible. This evolution in healthcare technology is creating a path towards a more efficient and effective healthcare system, indicating a significant shift in how medical care is delivered and managed.
Optimism and Challenges: The American Perspective on AI in Healthcare for 2024
A recent survey conducted by Medtronic and Morning Consult has revealed a significant optimism among Americans regarding the role of AI in healthcare for the year 2024. Over half of the respondents (51%) believe that AI will bring major advancements and breakthroughs in healthcare. The survey highlights a strong belief in AI's ability to enable earlier diagnosis of health conditions, with 61% of adults agreeing on this point. Additionally, about two-thirds (65%) recognize AI's potential in breaking down barriers to healthcare access.
Despite the optimism, there are notable barriers to AI adoption. A considerable number of respondents express concerns about AI making mistakes and a general lack of understanding of the technology. These concerns suggest that increasing consumer confidence in AI requires addressing these perceived barriers. Interestingly, 83% of consumers view the potential for AI errors as a significant barrier, and 80% are concerned about the lack of evidence showing AI's improvement in health outcomes.
The survey also sheds light on the public's view of AI in physician practices. While there is a favorable opinion towards AI-powered symptom trackers and health apps, only about one-third of adults prefer working with physicians who use AI. However, there is more openness to specific applications of AI in healthcare, such as analyzing tests and detecting cancer.
A thorough exploration into the accuracy and reliability of Large Language Models
The "factuality issue" refers to instances where these models produce information that is inaccurate or contradicts known facts. The paper discusses the potential consequences of such inaccuracies, especially as LLMs find applications in domains like healthcare, law, and finance, where accuracy is crucial. The authors explore how LLMs handle facts, the primary causes of factual errors, and methodologies for evaluating and enhancing accuracy. The survey aims to guide researchers in improving the factual reliability of LLMs, ensuring they are both powerful and trustworthy.
AI expert Melanie Mitchell of the Santa Fe Institute demystifies the workings of current-day AI and explores the future of artificial intelligence
How is AI already integrated into our daily lives?
What is AI's potential for further advancement in the near and long term?
What is the true level of its intelligence?
ResearchGPT
ResearchGPT is a custom GPT designed specifically for researchers.
It draws on a database of 282 million research articles and answers your questions with references to published articles.
No fake citations.
Here's how to use it: twitter.com/i/web/status/1…
— Mushtaq Bilal, PhD (@MushtaqBilalPhD)
8:06 PM • Nov 19, 2023