House Bipartisan Task Force on Artificial Intelligence Report
In February 2024, the House of Representatives launched a bipartisan Task Force on Artificial Intelligence (AI). The group was tasked with studying and providing guidance on ways the United States can continue to lead in AI and fully capitalize on its benefits while mitigating the risks associated with this emerging technology. On 17 December 2024, after nearly a year of holding hearings and meeting with industry leaders and experts, the group released the long-awaited Bipartisan House Task Force Report on Artificial Intelligence. This robust report touches on how the technology impacts almost every industry, from rural agricultural communities to energy and the financial sector, to name just a few. It is clear that the AI policy and regulatory space will continue to evolve while remaining front and center for both Congress and the new administration as lawmakers, regulators, and businesses continue to grapple with this rapidly developing technology.
The 274-page report highlights “America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.” Specifically, it outlines the Task Force’s key findings and recommendations for Congress to legislate in over a dozen different sectors. The Task Force co-chairs, Representative Jay Obernolte (R-CA) and Representative Ted Lieu (D-CA), called the report a “roadmap for Congress to follow to both safeguard consumers and foster continued US investment and innovation in AI,” and a “starting point to tackle pressing issues involving artificial intelligence.”
There was a high level of bipartisan work on AI in the 118th Congress, and although most of the legislation in this area did not end up becoming law, the working group report provides insight into what legislators may do this year and which industries may be of particular focus. Our team continues to monitor legislation, Congressional hearings, and the latest developments writ large in these industries as we transition into the 119th Congress. See below for a sector-by-sector breakdown of a number of findings and recommendations from the report.
Data Privacy
The report’s section on data privacy discusses advanced AI systems’ need to collect huge amounts of data, the significant risks this creates for the unauthorized use of consumers’ personal data, the current state of US consumer privacy protection laws, and recommendations to address these issues.
It begins with a discussion of AI systems’ need for “large quantities of data from multiple diverse sources” to perform at an optimal level. Companies collect and license this data in a variety of ways, including collecting data from their own users, scraping data from the internet, or some combination of these and other methods. Further, some companies collect, package, and sell scraped data, “while others release open-source data sets.” These collection methods raise their own set of issues. For example, according to the report, many websites, following “a voluntary standard,” state that they should not be scraped, but these requests are ignored and litigation ensues. It also notes that some companies “are updating their privacy policies in order to permit the use of user data to train AI models” without otherwise informing users that their data is being used for this purpose, a practice that the European Union and the Federal Trade Commission have challenged. In response, the report notes, “some companies are turning to privacy-enhanced technologies, which seek to protect the privacy and confidentiality of data when sharing it.” Companies are also exploring the use of “synthetic data.”
The report then discusses the types of harms that consumers frequently experience when their personal and sensitive data is shared, intentionally or unintentionally, without their authorization. These include physical, economic, emotional, and reputational harms, as well as discrimination and autonomy harms.
The report follows with a discussion of the current state of US consumer privacy protection laws. It kicks off with a familiar tune: “Currently, there is no comprehensive US federal data privacy and security law.” It notes that there are several sector-specific federal privacy laws, such as those intended to protect health, financial, and children’s data, but, as has become clear from this year’s Congressional debate, even these laws need to be updated. It also observes that 19 states have adopted state privacy laws, but their standards vary. This suggests that, as in the case of state data breach laws, the result is that they have “created a patchwork of rules and regulations with many drawbacks.” This has caused confusion among consumers and resulted in increased costs and lawsuits for businesses. It concludes with the statement that “Federal legislation that preempts state data privacy laws has advantages and disadvantages.” The report outlines three Key Findings: (1) “AI has the potential to exacerbate privacy harms;” (2) “Americans have limited recourse for many privacy harms;” and (3) “Federal privacy laws could potentially augment state laws.”
Based on its findings, the report recommends that Congress should: (1) help “in facilitating access to representative data sets in privacy-enhanced ways” and “support partnerships to improve the design of AI systems” and (2) ensure that US privacy laws are “technology neutral” and “can address the most salient privacy concerns with respect to the training and use of advanced AI systems.”
National Security
The report highlights both the potential benefits of emerging technologies to US defense capabilities and the risks, especially if the United States is outpaced by its adversaries in development. The report discusses the status and successes of current AI programs at the Department of Defense (DOD), the Army, and the Navy. The report categorizes the issues facing development of AI in the national security arena into technical and nontechnical impediments. The technical impediments include increased data usage, infrastructure/compute power, attacks on algorithms and models, and talent acquisition, especially when competing with the private sector for workers. The report also identifies perceived institutional challenges facing DOD, saying “acquisition professionals, senior leaders, and warfighters often hesitate to adopt new, innovative technologies and their associated risk of failure. DOD must shift this mindset to one more accepting of failure when testing and integrating AI and other innovative technologies.” The nontechnical challenges identified in the report revolve around third-party development of AI and the inability of the United States to control systems it does not create. The report notes that advancements in AI are driven primarily by the private sector and encourages DOD to capitalize on that innovation, including through more timely procurement of AI solutions at scale with nontraditional defense contractors.
Chief among the report’s findings and recommendations is a call to Congress to explore ways that the US national security apparatus can “safely adopt and harness the benefits of AI” and to use its oversight powers to home in on AI activities for national security. Other findings focus on the need for advanced cloud access, the value of AI in contested environments, and the ability of AI to manage DOD business processes. The additional recommendations are to expand AI training at DOD, continue oversight of autonomous weapons policies, and support international cooperation on AI through the Political Declaration on Responsible Military Use of AI. The report indicates that Congress will be paying much more attention to the development and deployment of AI in the national security arena going forward, and now is the time for impacted stakeholders to engage on this issue.
Education and the Workforce
The report also highlights the role of AI technologies in education and the promise and challenges they pose for the workforce. The report recognizes that despite the worldwide demand for science, technology, engineering, and mathematics (STEM) workers, the United States has a significant gap in the talent needed to research, develop, and deploy AI technologies. As a result, the report found that training and educating US learners on AI topics will be critical to continuing US leadership in AI technology. The report notes that training the future generations of talent in AI-related fields needs to start with AI and STEM education. Digital literacy has extended to new literacies, such as media, computer, data, and now AI literacy. Challenges include providing adequate resources for AI literacy.
US leadership in AI will require growing the pool of trained AI practitioners, including people with skills in researching, developing, and incorporating AI techniques. The report notes that this will likely require expanding workforce pathways beyond traditional educational routes and a new understanding of the AI workforce, including its demographic makeup, changes in the workforce over time, employment gaps, and the penetration of AI-related jobs across sectors. A critical aspect of understanding the AI workforce will be having good data. US leadership in AI will also require public-private partnerships as a means to bolster the AI workforce, including collaborations among educational institutions, government, and industries with market needs and emerging technologies.
While the automation of human jobs is not new, using AI to automate tasks across industries has the potential to displace jobs that involve repetitive or predictable tasks. In this regard, the report notes that while AI may displace some jobs, it will augment existing jobs and create new ones. Such new jobs will inevitably require more advanced skills, such as AI system design, maintenance, and oversight. Other jobs, however, may require less advanced skills. The report adds that harnessing the benefits of AI systems will require a workforce capable of integrating these systems into their daily jobs. It also highlights several existing programs for workforce development, which could be updated to address some of these challenges.
Overall, the report found that AI is increasingly used in the workplace by both employers and employees. US AI leadership would be strengthened by a more skilled technical workforce. Fostering domestic AI talent and continued US leadership will require significant improvements in basic STEM education and training. AI adoption requires AI literacy and resources for educators.
Based on the above, the report recommends the following:
- Invest in K-12 STEM and AI education and broaden participation.
- Bolster US AI skills by providing needed AI resources.
- Develop a full understanding of the AI workforce in the United States.
- Facilitate public-private partnerships to bolster the AI workforce.
- Develop regional expertise when supporting government-university-industry partnerships.
- Broaden pathways to the AI workforce for all Americans.
- Support the standardization of work roles, job categories, tasks, skill sets, and competencies for AI-related jobs.
- Evaluate existing workforce development programs.
- Promote AI literacy across the United States.
- Empower US educators with AI training and resources.
- Support National Science Foundation curricula development.
- Monitor the interaction of labor laws and worker protections with AI adoption.
Energy Usage and Data Centers
AI has the power to modernize our energy sector, strengthen our economy, and bolster our national security, but only if the grid can support it. As the report details, electrical demand is predicted to grow over the next five years as data centers, among other major energy users, continue to come online. These technologies’ outpacing of new power capacity can “cause supply constraints and raise energy prices, creating challenges for electrical grid reliability and affordable electricity.” While data centers take only a few years to construct, new sources of power, such as power plants and transmission infrastructure, can take a decade or more to complete. To meet growing electrical demand and support US leadership in AI, the report recommends the following:
- Support and increase federal investments in scientific research that enables innovations in AI hardware, algorithmic efficiency, energy technology development, and energy infrastructure.
- Strengthen efforts to track and project AI data center power usage.
- Create new standards, metrics, and a taxonomy of definitions for communicating relevant energy use and efficiency metrics.
- Ensure that AI and the energy grid are a part of broader discussions about grid modernization and security.
- Ensure that the costs of new infrastructure are borne primarily by those customers who receive the associated benefits.
- Promote broader adoption of AI to enhance energy infrastructure, energy production, and energy efficiency.
Health Care
The report highlights that AI technologies have the potential to improve multiple aspects of health care research, diagnosis, and care delivery. The report provides an overview of AI’s use to date and its promise in the health care system, including with regard to drug, medical device, and software development, as well as in diagnostics and biomedical research, clinical decision-making, population health management, and health care administration. The report also highlights the role of AI among payers of health care services, both in the coverage of AI-provided services and devices and in the deployment of AI tools within the health insurance industry.
The report notes that the evolution of AI in health care has raised new policy issues and challenges. These include issues involving data availability, utility, and quality, as the data required to train AI systems must exist, be of high quality, and be able to be transferred and combined. They also include issues concerning interoperability and transparency: AI-enabled tools must be able to integrate with health care systems, including electronic health record (EHR) systems, and they need to be transparent enough for providers and other users to understand how an AI model makes decisions. Data-related risks also include the potential for bias, which can arise during development or as the system is deployed. Finally, there is a lack of legal and ethical guidance regarding accountability when AI produces incorrect diagnoses or recommendations.
Overall, the report found that AI’s use in health care can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. When used appropriately, these uses of AI could lead to increased efficiency, better patient care, and improved health outcomes. The report also found that the lack of standards for medical data and algorithms impedes system interoperability and data sharing. The report notes that if AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.
Based on the above, the report recommends the following:
- Encourage the practices needed to ensure AI in health care is safe, transparent, and effective.
- Maintain robust support for health care research related to AI.
- Create incentives and guidance to encourage risk management of AI technologies in health care across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.
- Support the development of standards for liability related to AI issues.
- Support appropriate payment mechanisms without stifling innovation.
Financial Services
With respect to financial services, the report emphasizes that AI has been used for decades within the financial services system, by industry and financial regulators alike. Key use cases have included fraud detection, underwriting, debt collection, customer onboarding, real estate, investment research, property management, customer service, and regulatory compliance, among others. The report also notes that AI presents both significant risks and opportunities to the financial system, so it is critical to be thoughtful when considering and crafting regulatory and legislative frameworks in order to protect consumers and the integrity of the financial system while not stifling technological innovation. As such, the report states that lawmakers should adopt a principles-based approach that is agnostic to technological advances, rather than a technology-based approach, in order to preserve the longevity of the regulatory ecosystem as technology evolves over time, particularly given the rapid rate at which AI technology is advancing.
Importantly, the report notes that small financial institutions may be at a significant disadvantage with respect to adoption of AI, given a lack of sufficient resources to leverage AI at scale, and states that regulators and lawmakers must ensure that larger financial institutions are not inadvertently favored in policies so as not to limit the ability of smaller institutions to compete or enter the market. Moreover, the report stresses the need to maintain relevant consumer and investor protections with AI utilization, particularly with respect to data privacy, discrimination, and predatory practices.
A Multi-Branch Approach to AI/Next Steps
The Task Force recognizes that AI policy will not fall strictly under the purview of Congress. Co-chair Obernolte shared that he has met with David Sacks, President Trump’s “AI Czar,” as well as members of the transition team to discuss what is in the report.
We will be closely following how both the administration and Congress act on AI in 2025, and we are confident that no industry will be left untouched. We are here to assist stakeholders in all of the areas outlined above.
This publication/newsletter is for informational purposes and does not contain or convey legal advice. The information herein should not be used or relied upon in regard to any particular facts or circumstances without first consulting a lawyer. Any views expressed herein are those of the author(s) and not necessarily those of the law firm's clients.