The Department of Veterans Affairs has used artificial intelligence (AI) to analyze millions of voluntary survey responses, flagging indicators of self-harm or homelessness for immediate follow-up. A voice assistant named AUDREY, currently being piloted by the Department of Homeland Security, helps provide first responders with the information they need to rapidly make critical decisions. The Food and Drug Administration (FDA) is exploring how predictive modeling can help combat the national opioid epidemic by more efficiently identifying contributing trends and better directing interventions.
These examples—and scores of others—illustrate how the federal government, one of the nation’s foremost users of AI, is already using the technology to serve citizens. To better understand the state of AI adoption in government, Booz Allen Hamilton partnered with Ipsos on a nationwide survey of federal decision-makers and specialists involved in AI projects. Notably, 74 percent of respondents are AI project managers, and nearly half identified as project creators; altogether, they represent almost every executive department and all three branches of government. The survey asked not only what challenges they face and what outcomes they are pursuing, but also about decision-making related to AI investment and integration.
The results strongly indicate that AI is widely embraced across the U.S. government, but progress is nascent: most respondents are focused on short-term priorities such as gaining efficiency through automation and making better-informed decisions. Projects are being piloted and AI strategies published, yet AI's potential to make better use of taxpayers' dollars may not be getting the practical consideration it needs to percolate and scale. A multifaceted plan of action for utilizing AI can break down barriers to imagination and drive strategic thinking about using AI to build human-machine teams.
The Stakes Are Different This Time
The U.S. government’s AI adoption journey is one shared by many legacy organizations. While sometimes considered a late adopter, the government is no stranger to leading-edge technology: Its research and development efforts, which have included AI funding since the 1960s, helped create today’s digital world. But AI creates new interdependencies and effects as few other technologies do.
Many AI initiatives address core operational tasks. Because of this, and because AI raises significant legal and policy considerations, the technology comes with an imperative to educate the workforce on what it can and cannot do, legally and ethically speaking. Reflecting—and almost certainly compounding—this complexity, only 35 percent of survey respondents perceived that their senior leaders are committed to AI, a gap between perception and reality that cannot be ignored. As Henry Kissinger noted in The Atlantic last year, AI requires active involvement and ownership far beyond its developers.
In operationalizing AI, project managers and federal leaders now face two challenges. The first, at the micro level, is deploying AI from the lab to end users in the operations center, maintenance depot, or battlefield. The second, at the macro level, is executing the strategy and building enduring capacity to develop and adopt AI in everyday business. It’s critical to build a robust pipeline to operationalize AI that enables leaders, project managers, and the entire workforce to envision AI’s full potential now and in the future.
Clearly, operationalizing AI is an interdisciplinary effort. Technical infrastructure and data are key prerequisites to machine learning. Have decision-makers, who comprise groups of individuals in nearly 80 percent of the cases surveyed, considered open architectures to reduce the risk of obsolescence? Is there a data strategy establishing unified policies and procedures, such as data quality standards, data use, and ownership reviews? Is there enough appropriate data to train a machine learning model in the first place? Is there a chief data officer, and are they a referee or decision authority in these and other processes? Has cybersecurity—one of the top concerns of respondents—been considered from the start?
A Question of Capacity, Not Will
Workforces must be technically and culturally prepared to realize AI's full potential. Humans remain irreplaceable, but our roles change with the elevation of human-machine teaming, as repetitive or onerous tasks are offloaded to machines. Has knowledge and skills planning accounted for not just the developers, data scientists, executives, and end users, but also the broader workforce? The potential disruption of their day-to-day efforts can be unsettling, as some of our respondents perceived, so AI must be demystified and understood as part of an organizational goal. Are employees' concerns being proactively addressed with attentive empathy, and do they participate in setting parameters for how and when AI is used? Who will evaluate the disruptions? By familiarizing a representative portion of the workforce with AI's capabilities, organizations also gain opportunities to crowdsource understanding of its friction points and possible data goldmines.
Operationalizing AI is fundamentally a process comprising people, policy, and technology—and one that must examine impacts across the organizations and citizens it serves. Do planning, development, and integration consider ethical issues—such as privacy, biased data, or unintended consequences—and incorporate the right stakeholders to manage risk? What level of transparency must the AI model achieve in order to establish a threshold for trust, another top concern for our respondents? What level of uncertainty regarding its results can be tolerated, and who decides? Is development agile enough to overcome frictions such as data availability, data quality, changing requirements, or budget uncertainty? End users—including the government's customers—will have keen and candid perspectives on deploying AI in its intended environment. Are feedback mechanisms in place to help evaluate the technology, including whether it strays from its trained purpose? Can we learn from the model's exploration, even if the model is not deployable?
AI technology continues to advance at a rate that challenges even the tech titans. NVIDIA, the leading supplier of graphics processing units (GPUs) used in deep learning and supercomputers, now manufactures credit-card-sized AI computers that operate at nearly half a teraflop (a teraflop is one trillion, or one million million, floating-point operations per second) for less than $100 each. As government AI strategies consider the possibilities, leaders seem to want their workforce to be able to say, "I know what AI is and what it can do for me, and I know that we are capable of doing it." This collective aspiration must be buttressed by a capability and culture for operationalizing AI—and one which includes everyone in the journey. The stakes could not be greater for our nation.
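To put the half-teraflop figure in perspective, here is a back-of-envelope sketch. The 2n³ matrix-multiply cost model is a standard textbook approximation, not drawn from the report, and the numbers are illustrative only:

```python
# Half a teraflop: a teraflop is 10**12 floating-point operations per second.
HALF_TERAFLOP = 0.5e12  # operations per second


def matmul_seconds(n: int, flops: float = HALF_TERAFLOP) -> float:
    """Estimated seconds to multiply two n x n matrices at the given rate.

    A naive multiply of two n x n matrices costs roughly 2 * n**3
    floating-point operations (n multiplies plus n adds per output cell).
    """
    return (2 * n ** 3) / flops


# Multiplying two 1000 x 1000 matrices (~2 billion operations) takes
# only a few milliseconds at half a teraflop.
print(f"{matmul_seconds(1000) * 1000:.1f} ms")
```

At that rate, a workload that once required a data-center server fits on a sub-$100 device, which is the pace of change the passage above describes.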
About Booz Allen
For more than 100 years, military, government, and business leaders have turned to Booz Allen Hamilton to solve their most complex problems. As a consulting firm with experts in analytics, digital, engineering, and cyber, we are a key partner on some of the most innovative programs for governments worldwide and trusted by their most sensitive agencies. We work shoulder to shoulder with clients, using a mission-first approach to choose the right strategy and technology to help them realize their vision. To learn more, visit BoozAllen.com.
About Our Research Partner Ipsos
Ipsos is a leading global independent market research company that is passionately curious about people, markets, brands, and society. They make our changing world easier and faster to navigate and inspire clients to make smarter decisions. They deliver research with security, speed, simplicity, and substance. For more information, please visit www.ipsos.com.