CHAIRMAN HURD AND RANKING MEMBER KELLY RELEASE NEW REPORT ON ARTIFICIAL INTELLIGENCE
September 25, 2018
WASHINGTON – Today, House Oversight and Government Reform Subcommittee on Information Technology Chairman Will Hurd (TX-23) and Ranking Member Robin Kelly (IL-02) released a white paper on artificial intelligence (AI). The paper presents lessons learned from the Subcommittee’s oversight and hearings on AI and sets forth key recommendations for moving forward.
Full text of the white paper can be found here.
- Beginning in February of 2018, the Subcommittee on Information Technology held a series of hearings on artificial intelligence (AI). In connection with those hearings, Committee staff met with leading experts from academia, industry, and government, and reviewed multiple reports on the subject.
Findings and Recommendations
While the Subcommittee’s work examined a number of challenges facing AI, the paper specifically focuses on the following four issue areas, and provides concrete recommendations for addressing each:
Workforce:
- The paper details how short-term advances in AI could lead to job losses from AI-driven automation.
- To address this issue, the paper recommends federal, state, and local agencies “engage more with stakeholders on the development of effective strategies for improving the education, training, and reskilling of American workers to be more competitive in an AI-driven economy.”
- The federal government is also encouraged to “lead by example by investing more in education and training programs that would allow for its current and future workforce to gain the necessary AI skills.”
Privacy:
- The paper finds AI technologies rely heavily on computer algorithms that often require vast amounts of personal data and raise legitimate privacy concerns.
- To address this challenge, the paper recommends federal agencies “review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction, and – where necessary – update existing regulations to account for the addition of AI.”
Bias:
- The paper describes how AI systems are increasingly being used to make consequential decisions about people and the harm that can occur when AI systems rely on biased data sets.
- To better account for this problem, the paper recommends that when federal, state, and local agencies use AI-type systems to make decisions about people, these agencies “should ensure the algorithms supporting these systems are accountable and inspectable.”
Malicious Use of AI:
- The paper highlights how AI’s computing power increases the risk of cyberattacks capable of exploiting vulnerabilities in the computer networks of public and private sector entities.
- The paper recommends the government address this challenge by taking more active steps to “consider the ways in which [AI] could be used to harm individuals and society, and prepare for how to mitigate these harms.”