As the march of progress continues, artificial intelligence (AI) stands out as one of the most transformative forces of our time. Its profound influence on every facet of society invites us to consider critical questions regarding individual rights, as well as the need for transparency and accountability in AI systems. States and municipalities in the United States are at the forefront of this challenge, pioneering various actions to address algorithmic harm—from the formation of task forces to the implementation of privacy standards.
Recognizing Areas for Action
One distinguishing factor among current efforts is the difference between advancing AI research alone and taking meaningful steps to protect individuals from algorithmic harm. Protection of individual interests is our touchstone, reflected in the initiatives that have emerged across various jurisdictions.
Federal Progress on AI Principles and Oversight
At the federal level, the U.S. signaled its commitment to ethical AI by endorsing the Organisation for Economic Co-operation and Development (OECD) AI Principles in 2019, which advocate inclusive growth, sustainable development, and societal well-being. In the same spirit, the U.S. government established bodies such as the Defense Innovation Board and the National Security Commission on Artificial Intelligence, which have spent years studying AI and have strongly influenced national policy-making.
Furthermore, the Office of Management and Budget directed federal agencies to devise regulatory plans for AI in the industries they oversee. Although the details of those plans have yet to be made public, the directive signals a commitment to responsible AI governance.
In pursuit of ethical stewardship, the guidance laid out ten guiding principles, including public trust, scientific integrity, and non-discrimination. This framework is designed to promote transparency, safety, and security in AI applications, establishing a benchmark for future development.
Legislative Response to AI Challenges
The U.S. Congress is equally engaged, notably through the National AI Initiative Act of 2020, which promotes AI research and development and funds AI challenges. The act also directed the National Institute of Standards and Technology (NIST) to create an AI risk management framework, a blueprint for AI safety and reliability.
Several proposed bills underscore the focus on accountability and transparency in algorithmic processes. The Algorithmic Accountability Act and the Algorithmic Justice and Online Platform Transparency Act, for example, aim to study and regulate the accuracy, bias, fairness, and security in AI systems, providing greater protection for consumers.
Moreover, the Facial Recognition and Biometric Technology Moratorium Act and the No Biometric Barriers to Housing Act reflect the growing concern over biometric surveillance. These acts propose prohibitions and guidelines to safeguard against the misuse of facial recognition and other biometric data, especially in sensitive settings such as federal establishments and public housing.
State and Local Efforts in Protecting Privacy
At the state and local level, proactive measures are also in place. Cities like San Francisco, Boston, and Oakland, as well as states such as Massachusetts, have introduced legislation banning or limiting the use of biometric technologies. These measures emphasize the importance of upholding individual privacy and autonomy in the face of increasingly sophisticated AI capabilities.
The push towards transparency, accountability, and the defense of human rights in AI is both vital and timely. As AI technologies evolve, policies and regulations must be crafted with a clear focus on upholding societal values and morals to realize the promise of AI while safeguarding against its inherent risks.
Looking Ahead: The Future of AI Policy
In forthcoming discussions, we will continue to explore the intricate intersection of AI with human rights, both within the United States and on the global stage. Policymakers, organizations, and individuals are contributing to a nuanced dialogue on how AI can be guided to work in the best interests of humanity. The subsequent part of this series will delve deeper into the initiatives and regulations shaping the responsible use of AI, highlighting global endeavors to navigate this complex environment.
As AI deepens its influence in our lives, ongoing vigilance and informed policy-making will be our compass to ensure that the technology enriches society while respecting the rights and dignity of each individual. Stay engaged as we follow the unfolding narrative of AI and its role within the broader canvas of human rights and societal development.