Written by Dave Nyczepir
Artificial intelligence research groups are urging the National AI Research Resource (NAIRR) task force to reconsider investing in shared computing and data infrastructure, which they say would subsidize the tech giants that control it rather than democratize access.
The AI Now Institute at New York University and the Data & Society Research Institute submitted a joint response to the task force's request for information (RFI), encouraging it to suspend its efforts to establish the NAIRR until it explores alternative investments in AI research and puts in place controls to ensure the responsible and ethical use of government data.
Despite the insistence of the White House Office of Science and Technology Policy and the National Science Foundation that NAIRR will democratize access to AI infrastructure for the benefit of academics and startups, the researchers say that goal is jeopardized by the fact that the government would license this infrastructure from the tech giants.
“What we’re looking at with the National AI Research Resource, unless it’s fundamentally transformed, is a significant grant going directly to Big Tech in a way that will expand and strengthen its power by relying on its infrastructure and making it even more essential to a national research agenda,” Meredith Whittaker, co-founder of the AI Now Institute, told FedScoop. “At exactly the same time, we are seeing increased pressure on these industry players due to their concentrated power, regulatory arbitrage and fundamental questions about their compatibility with democratic government.”
Only the tech giants can and will spend billions of dollars over the next decade employing hundreds of site reliability engineers and data center operators to maintain the AI infrastructure, all while creating the software, tools and application programming interfaces that make up the AI research environment. This is why the CIA contracts with Amazon for its AI infrastructure instead of building its own, Whittaker said.
This same infrastructure gives tech giants the ability to aggregate records of personal information about global populations and use them to increase their profits, while refusing to reveal how these systems work on the grounds of corporate secrecy.
“Why do we celebrate AI systems as hyper-capable and the future of everything from governance to war, when there are so many problems with these systems that have been documented over and over again?” Whittaker added.
Facebook whistleblower Frances Haugen testified before Congress earlier this month, accusing the company of failing to take appropriate steps to tackle disinformation and other harmful content on its platforms, favoring profits instead. Around the same time, a global outage of Facebook and its platforms disrupted communications worldwide, exposing an over-reliance on apps like WhatsApp.
AI systems remain fragile, fallible, and encode patterns of bias that can harm vulnerable communities when deployed at scale, but the Department of Defense continues to spend billions on the technology, which often amounts to a handful of statistical techniques useful for analyzing data but marketed under the name AI.
“The right decision then is to take a break from the rapid development of these technologies and develop the democratic infrastructure for meaningful oversight — focusing in particular on those subject to AI,” said Whittaker.
That means the people on whom agencies want to use facial recognition, or whose eligibility for government benefits agencies want to determine with AI.
The AI Now Institute and Data & Society do not suggest in their response to the RFI that AI research should be scuttled entirely, but rather propose that NSF expand its national AI research institutes by funding underfunded areas of research, scholarships for under-represented students, fellowships placing researchers in agencies, and forums where communities harmed by AI systems can influence their design and deployment. NSF should also preserve the independence of its research by ending the practice of having companies co-fund institutes, according to the response.
Federal resources should also be allocated to auditing enterprise AI systems, as they primarily depend on mass surveillance for the large amounts of data they need to train their models, the groups say.
“Yes, we should do audits, we should have oversight, we should know where these systems are; that’s just the floor,” Whittaker said. “But we also need to ask ourselves more probing questions about whether we are comfortable with the level of surveillance and the level of concentrated power required to create these systems and, in particular, whether we are comfortable with it being in the hands of a handful of for-profit corporations.”