
AIXBIO Weekly #11 - Sep/18/2023

Regulatory Landscape


The dawn of the AI era has seen significant strides globally. In recent years, recognition of AI's potential has built momentum for more comprehensive AI regulation among stakeholders across the ecosystem, including government, industry, academia, civil society, open-source community activists, and think tanks.

The US has traditionally taken a light-touch approach to regulation, and this is generally true of AI regulation as well. The US government has not enacted any comprehensive AI legislation to date.

Artificial Intelligence for the American People

In 2019, the National AI Commission released a report that made a number of recommendations for AI regulation, including the creation of a new federal agency to oversee AI development and deployment.

The National Artificial Intelligence Initiative (NAII)

Enacted in January 2021, the NAIIA directs the President, the NAII Office, the interagency Select Committee on AI, and agency heads to:

- Sustain consistent support for AI R&D.
- Support AI education and workforce training programs.
- Support interdisciplinary AI research and education programs.
- Conduct outreach to diverse stakeholders.
- Support a network of interdisciplinary AI research institutes.
- Support opportunities for international cooperation on R&D and resources for trustworthy AI systems.

Blueprint for an AI Bill of Rights

In October 2022, the White House released the Blueprint for an AI Bill of Rights, outlining a framework of guidelines for the rights and regulations surrounding artificial intelligence.

The five principles of the Blueprint for an AI Bill of Rights are:

- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback

CREATE AI Act of 2023

In July 2023, U.S. Senators Martin Heinrich, Todd Young, Cory Booker, and Mike Rounds introduced the bipartisan CREATE AI Act. The act authorizes the construction of the National Artificial Intelligence Research Resource (NAIRR).

The act is designed to reshape the future trajectory of artificial intelligence by democratizing AI development through the resources the NAIRR provides. The initiative seeks to bridge the investment gap between tech giants and other entities, emphasizing broader participation in AI research and boosting American innovation across sectors including science, engineering, medicine, and agriculture. The NAIRR will also serve as a platform for developing and implementing trustworthy AI practices.

The act highlights the significant investment gap between tech giants like Google and Meta and other entities in AI research. It emphasizes the centralization of AI direction due to the vast data and computation requirements of modern AI. The act also references the National AI Initiative Act (NAIIA), enacted as part of the FY2021 National Defense Authorization Act, and the recommendations of the NAIRR Task Force.

FDA Seeks Feedback on Regulating AI & ML in Drug Development & Manufacturing

The U.S. Food and Drug Administration (FDA) released a discussion paper requesting feedback from stakeholders in the pharmaceutical industry on how to regulate artificial intelligence (AI) and machine learning (ML) in drug development and manufacturing. AI and ML have already made an impact in these areas, but there are unique regulatory challenges that need to be addressed. The FDA is seeking input on issues such as ensuring compliance with current Good Manufacturing Practice (CGMP) when AI algorithms change manufacturing processes, addressing biases in AI decision-making, and protecting patient data privacy.

The FDA highlights potential benefits of AI/ML in drug development and manufacturing, such as creating predictive models to assess a patient's reaction to a drug, optimizing manufacturing processes, monitoring product quality, and detecting adverse events. The agency also outlines three overarching principles for using AI/ML in these fields: human-led governance and transparency; quality and reliability of data; and model development, performance, monitoring, and validation.

The FDA is particularly interested in feedback on ensuring the accountability and trustworthiness of complex AI/ML systems, preventing bias and errors in data sources, securing cloud-based manufacturing data, storing data for regulatory compliance, and complying with regulations when ML algorithms adapt processes based on real-time data. Stakeholders are encouraged to provide feedback to the FDA by August 9, 2023, to help shape the agency's regulatory framework for AI and ML in drug development and manufacturing.

Tech Giants

Tech giants Google and IBM have responded to the U.S. government's call for comments on artificial intelligence (AI) and high-performance computing.

They voiced their support for flexible, risk-based AI regulatory frameworks, such as the National Institute of Standards and Technology (NIST)'s AI Risk Management Framework.

They oppose the creation of a new single AI “super-regulator,” advocating instead for the federal government to take a more active role in promoting AI innovation and transparency. IBM has urged the administration to adopt a “precision regulation” posture towards AI, establishing rules to govern the technology's deployment in specific use cases, not the technology itself. Google has reiterated the importance of NIST taking the lead on trustworthy AI policies, standards, and best practices in the U.S., and highlighted the need to reform government acquisition policies to harness the power of AI.

The Frontier Model Forum

Formed by Anthropic, Google, Microsoft, and OpenAI, the Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem. It will work on developing a public library of solutions to support industry best practices and standards. The forum also plans to establish an Advisory Board to guide its strategy and priorities. It welcomes participation from other organizations developing frontier AI models that are willing to collaborate towards the safe advancement of these models.

Biden-Harris Administration Initiates AI Cyber Challenge to Fortify U.S. Software Security

The Biden-Harris Administration has unveiled a significant two-year competition, the "AI Cyber Challenge" (AIxCC), aiming to harness artificial intelligence (AI) to safeguard America's most vital software, including the code that powers the internet and critical infrastructure. Led by the Defense Advanced Research Projects Agency (DARPA), this challenge seeks to identify and rectify software vulnerabilities using AI. The competition will see collaboration with several leading AI companies, including Anthropic, Google, Microsoft, and OpenAI, which will provide their expertise and cutting-edge technology for the challenge. With almost $20 million in prizes on offer, the competition aims to stimulate the development of innovative technologies to enhance the security of computer code, addressing one of the most pressing challenges in cybersecurity.

The announcement was made at the Black Hat USA Conference in Las Vegas, emphasizing the potential of AI in securing software used across the internet and in various societal sectors. DARPA will host an open competition, with the top competitors standing a chance to make a significant difference in America's cybersecurity landscape. The Open Source Security Foundation (OpenSSF) will act as a challenge advisor and will ensure the winning software code's immediate implementation to protect America's most crucial software.

White House Outlines R&D Priorities for FY 2025

The White House released a memorandum detailing the Administration's research and development (R&D) priorities for the fiscal year 2025. The memo underscores the role of public science, technology, and innovation in achieving national aspirations such as health, environmental justice, global security, and economic growth. One of the primary focuses is on the responsible advancement of trustworthy artificial intelligence (AI) technology. The government aims to harness AI to improve public services, mitigate risks, and tackle societal challenges.

Governor Newsom's Executive Order for Ethical AI Development in California

California, recognized as the global epicenter for generative artificial intelligence, is taking significant steps to ensure the ethical and responsible use of this transformative technology. Governor Gavin Newsom has recently signed an executive order that lays out the state's approach towards AI. This order emphasizes the vast potential of GenAI, comparing its transformative power to the advent of the internet. However, it also acknowledges the inherent risks associated with such tools. To strike a balance, the state is adopting a measured approach, seeking expert opinions, and focusing on shaping a future where AI is ethical, transparent, and trustworthy.

The executive order encompasses various provisions to ensure the state remains a leader in AI while safeguarding its citizens. These include conducting a risk-analysis of potential threats posed by GenAI, establishing procurement guidelines, and developing a report on the beneficial uses of GenAI. Additionally, the state aims to provide training for government workers, fostering a workforce ready for the GenAI economy. A significant highlight is the establishment of a formal partnership with UC Berkeley and Stanford University to evaluate the impacts of GenAI on California.

Sens. Blumenthal & Hawley introduce a bipartisan AI framework

Senators Richard Blumenthal and Josh Hawley have unveiled a bipartisan framework aimed at regulating artificial intelligence (AI). This proposal comes as Congress intensifies its efforts to oversee this emerging technology. The senators' framework requires AI companies to apply for licensing, emphasizing that the tech liability shield, known as Section 230 of the Communications Decency Act, will not protect these companies from potential lawsuits. Blumenthal highlighted this framework as the first stringent legislative blueprint for enforceable AI protections. He expressed optimism about the path it sets for addressing both the potential and challenges of AI. Hawley, on the other hand, believes that the principles in this framework should guide Congressional action on AI regulation.

The framework proposes the establishment of a licensing system, managed by an independent oversight body. Companies that develop AI models would be required to register with this authority, which would possess the power to audit those applying for licenses. Furthermore, the proposal calls for clarity that Section 230, which currently shields tech companies from legal consequences of third-party content, should not be applicable to AI.

Algorithmic Accountability Act of 2022

While there is still no comprehensive AI legislation in the US, there are a number of bills pending in Congress. One of the most notable bills is the Algorithmic Accountability Act, which would require companies to conduct impact assessments on their AI systems to identify and mitigate potential risks.

Japan Embraces Light Touch AI Regulation Amid Global Debate on Approaches

Japan is proposing a light-touch approach to regulating artificial intelligence (AI) in order to quickly take advantage of the potential benefits of the technology. The Japanese government aims to address the challenges caused by its declining population through the use of AI. While Japan joins the US and the UK in favoring a hands-off stance on AI development, businesses are likely to follow the stricter rules proposed by the EU to ensure access to the lucrative European market. The EU has developed a comprehensive set of regulations through the EU AI Act, which includes requirements for AI developers to declare training data and minimize illegal or harmful content generation.

China's 'Heavy-Handed' Regulation Jeopardizes Progress in AI Race, Falling Further Behind US

China risks falling further behind the United States in the race to develop artificial intelligence (AI) due to "heavy-handed" regulation, according to experts. Despite China's impressive display of innovation at the World Artificial Intelligence Conference in Shanghai, it still lags behind its biggest competitor, the US. China has made significant strides in certain areas, such as military and cybersecurity, but overall, experts agree that it is slightly behind the US in AI development.

There are three key ingredients needed for successful AI development: high-quality data, top-notch expertise, and advanced software and hardware. While China has more data than Western countries in certain fields, it has limitations due to less information being available in Chinese. In terms of expertise, there is brilliant Chinese talent, but recent developments like Microsoft moving its AI centers away from China may hinder access to the best people in the future. China's hardware and software development is hampered by export regulations imposed by the US, which restrict China's access to high-quality chips and technology. However, China does have advantages in areas like data labeling, where better access to affordable labor gives it an edge.

The potential drawback for China is heavy regulation, as experts believe that China is likely to be more heavily regulated than other countries. This may hold the industry back and create uncertainty for tech companies. Overall, China is considered a key player in the AI race, but it needs to address regulatory issues to further its development and remain competitive with the US.

EU AI ACT

The EU has taken a more proactive approach to AI regulation than the US. In 2021, the European Commission proposed the AI Act, which would be the first comprehensive AI regulation in the world. The AI Act would classify AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

The AI Act is expected to be finalized in 2023 and take effect in 2025.

The EU introduced its first-ever law on AI, aiming to foster innovation while building trust. However, European Commission President Ursula von der Leyen believes that a more comprehensive approach is needed at the global level. She suggests that this new framework should protect against systemic societal risks while promoting investments in safe and responsible AI systems. Drawing parallels with the IPCC for climate, she emphasized the need for a similar body for AI. This body would engage scientists, entrepreneurs, and innovators to understand AI's risks and benefits.

Spain launches AI regulation agency in bid to become industry leader

Spain is taking significant steps to position itself at the forefront of the AI industry by launching a dedicated AI regulation agency. This agency is the result of collaborative efforts between the Spanish Ministry of Finance and Civil Service and the Ministry of Economic Affairs and Digital Transformation. The establishment of such an agency underscores Spain's commitment to nurturing a thriving AI ecosystem within its borders. By introducing a regulatory body, Spain aims to ensure that AI innovations and applications align with established standards, promoting both growth and ethical considerations in the AI sector.

The EU’s latest amendments to the AI Act include new rules that cover manipulation by AI systems

The EU's recent amendments to the Artificial Intelligence Act introduce rules that address manipulation by AI systems, aiming to safeguard EU citizens from AI-related risks. However, these proposed laws are criticized for their vagueness and lack of scientific backing. Key concerns include the unclear definition of terms like "personality traits" and the need for a more comprehensive approach to defining and regulating manipulative AI techniques. The article emphasizes the importance of clear definitions and continuous multistakeholder review to ensure the Act's effectiveness in protecting individual rights, autonomy, and well-being.

Recently

European Commission President Ursula von der Leyen

Speaking at the "One Future" session of the G20 Summit in New Delhi, she emphasized the need for Europe and its partners to develop a new global framework addressing artificial intelligence (AI) risks, and highlighted the inevitability of the digital future.

Von der Leyen stressed the dual nature of AI, presenting both risks and opportunities. She mentioned that even AI's creators are urging political leaders to regulate it.

Key opinion leaders (KOLs) testified before the U.S. Committee on Homeland Security & Governmental Affairs

The hearing focused on the implications and governance of artificial intelligence, particularly in the context of acquisition and procurement. Key members, including Chairman Gary Peters and Ranking Member Rand Paul, provided statements. The hearing featured testimonies from experts in the field of AI, representing both academia and industry. The discussions revolved around the responsible and ethical development and deployment of AI technologies, with insights from leading AI researchers and practitioners.

Critics

European Companies Express Concerns over EU's AI Regulations in Open Letter

In June 2023, over 150 executives from European companies, including Renault, Heineken, Airbus, and Siemens, signed an open letter criticizing the European Union's (EU) recently approved artificial intelligence (AI) regulations. The executives argue that the regulations outlined in the AI Act could "jeopardize Europe's competitiveness and technological sovereignty." The AI Act, which was greenlit by the European Parliament on June 14th, imposes strict rules on generative AI models, requiring providers of such models to register their products with the EU, undergo risk assessments, and meet transparency requirements.

The signatories of the letter claim that these rules could lead to disproportionate compliance costs and liability risks, potentially driving AI providers out of the European market. They argue that the regulations are too extreme and risk hindering Europe's technological ambitions. The companies are calling for EU lawmakers to adopt a more flexible and risk-based approach to regulating AI. They also suggest the establishment of a regulatory body of experts within the AI industry to monitor the application of the AI Act.

Some critics of the open letter, including Dragoș Tudorache, a Member of the European Parliament involved in developing the AI Act, claim that the complaints are driven by a few companies and emphasize that the legislation provides industry-led processes, transparency requirements, and a light regulatory regime.

Open-Source AI Leaders Rally to Safeguard Innovation in Upcoming EU AI Legislation

A coalition comprising prominent open-source AI stakeholders such as Hugging Face, GitHub, EleutherAI, and others, is making a concerted effort to influence EU policymakers. Their goal is to ensure that the forthcoming EU AI Act, poised to be the world's inaugural comprehensive AI law, champions the cause of open-source innovation. In a recently unveiled policy paper, these AI frontrunners have put forth recommendations designed to make the AI Act more amenable to open-source AI. They caution against the potential pitfalls of "overbroad obligations" that might inadvertently favor closed and proprietary AI development, sidelining open-source initiatives. Such a bias could potentially disadvantage the open AI ecosystem, especially when juxtaposed against AI behemoths like OpenAI and Google.

“Regulation often benefits incumbents”

Bill Gurley, a VC at Benchmark, delves deep in his talk at the ALL-IN Summit into the concept of regulatory capture, where industries manipulate regulations in their favor, often at the public's expense.

Gurley introduces George Stigler, the father of regulatory capture theory, emphasizing that regulation often benefits incumbents. He cites the Telecommunications Act of 1996, which, instead of promoting competition and innovation, led to market consolidation and a decline in venture capital investment in telecom equipment.

Gurley warns that Silicon Valley is now in the crosshairs of regulators. However, he urges a cautious approach, emphasizing that while some regulation is necessary, it should not stifle innovation or be manipulated by a few powerful players.