President Joe Biden has issued a landmark executive order on artificial intelligence (AI) that seeks to balance the interests of cutting-edge technology companies with national security and consumer rights. The historic and ambitious order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world.
Why is AI safety and security important?
AI is a powerful technology that can drive change at “warp speed” and has tremendous potential as well as perils. AI can create new opportunities for economic growth, social welfare, scientific discovery, and human creativity. But AI can also pose serious risks to national security, public health and safety, privacy, civil rights, consumer protection, and worker rights. For example, AI can be used to engineer dangerous biological materials, generate deceptive or harmful content, manipulate people’s behavior or emotions, disrupt critical infrastructure, or launch cyberattacks.
Therefore, it is essential to ensure that AI is developed and deployed in a safe, secure, and trustworthy manner. This means that AI systems should be aligned with human values and interests, respect human dignity and autonomy, prevent harm and discrimination, uphold fairness and accountability, protect privacy and security, foster innovation and competition, and comply with the law.
What are the main features of Biden’s executive order?
Biden’s executive order is among the most comprehensive actions any government has taken to advance AI safety and security. The order directs the following actions:
- Require that developers of the most powerful AI systems share their safety test results and other critical information with the US government. This will ensure that AI systems are safe, secure, and trustworthy before companies make them public.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
- Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
- Protect Americans’ privacy by directing federal agencies to strengthen privacy protections for AI systems that collect or process personal data. The Department of Commerce will also develop guidance for labeling and watermarking AI-generated content, helping Americans distinguish authentic communications from content generated by software.
- Advance equity and civil rights by requiring that federal agencies assess the potential impacts of AI systems on historically disadvantaged groups and communities. The Department of Justice will also issue guidance to prevent bias and discrimination in the use of facial recognition technology by law enforcement agencies.
- Stand up for consumers and workers by requiring that federal agencies monitor the effects of AI systems on consumer protection, competition, labor rights, and occupational safety. The Department of Labor will also issue guidance to ensure that workers are not unfairly displaced or discriminated against by AI systems.
- Promote innovation and competition by investing in research and development of AI technologies that can benefit society. The National Science Foundation will also launch a new initiative to support interdisciplinary research on AI ethics, governance, and social impact.
- Advance American leadership around the world by engaging with international partners to promote common values and standards for responsible AI development and use. The State Department will also lead efforts to establish multilateral agreements on AI safety and security.
The order is part of a broader strategy that also involves legislative and diplomatic efforts to promote responsible AI innovation, and its effects are expected to reach well beyond the technology industry to society at large.
How will Biden’s executive order affect the technology industry?
Biden’s executive order is expected to have significant implications for the technology industry, especially for companies that develop or use advanced AI systems. On one hand, the order could create new challenges for companies that have to comply with stricter regulations, share sensitive information with the government, or face increased scrutiny from consumers or watchdogs. On the other hand, the order could also create new opportunities for companies that can demonstrate their commitment to safe, secure, and trustworthy AI practices, gain competitive advantage in the global market, or benefit from public funding or partnerships for research or innovation.
Some technology companies have already expressed their support for Biden’s executive order. For example, Microsoft’s president Brad Smith said that the order “will help ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence”. Google’s CEO Sundar Pichai said that the order “will help foster a culture of trust in AI and its potential to improve lives”. Meta’s vice president of AI Jerome Pesenti said that the order “will help accelerate the development and adoption of responsible AI across industries and sectors”.
However, some technology experts have also raised some concerns or questions about Biden’s executive order. For example, Kate Crawford, a co-founder of the AI Now Institute, said that the order “is a good start, but it needs to be backed up by concrete actions and resources”. Stuart Russell, a professor of computer science at the University of California, Berkeley, said that the order “is a welcome step, but it does not address the fundamental issue of ensuring that AI systems are aligned with human values and preferences”. Oren Etzioni, the CEO of the Allen Institute for AI, said that the order “is a positive move, but it does not go far enough in addressing the global challenges and opportunities of AI”.
The order reflects Biden’s vision of making America a global leader in seizing the promise and managing the risks of AI.