Harvard-MIT initiative grants $750K to projects looking to keep tech accountable
Artificial intelligence, or what passes for it, can be found in practically every major tech company and, increasingly, in government programs. A joint Harvard-MIT program has just awarded $750,000 to projects working to keep such AI developments well understood and well reported.
The Ethics and Governance in AI Initiative is a combined research program and grant fund operated by MIT’s Media Lab and Harvard’s Berkman Klein Center. The projects it selected are, generally speaking, aimed at using technology to keep people informed, or at informing people about technology.
AI is an enabler of both good and ill in the world of news and information gathering, as the initiative’s director, Tim Hwang, said in a news release:
“On one hand, the technology offers a tremendous opportunity to improve the way we work — including helping journalists find key information buried in mountains of public records. Yet we are also seeing a range of negative consequences as AI becomes intertwined with the spread of misinformation and disinformation online.”
These grants are not the first the initiative has given out, but they are the first in response to an open call for ideas, Hwang noted.
The largest sum of the bunch, a $150K grant, went to the MuckRock Foundation’s Sidekick project, which uses machine learning tools to help journalists scour thousands of pages of documents for interesting data. That’s critical at a time when government and corporate records are so voluminous (millions of emails leaked or revealed via FOIA, for example) that it’s basically impossible for a reporter, or even a team of them, to analyze them without help.
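MuckRock hasn’t published Sidekick’s internals, but the basic triage idea, ranking pages by similarity to whatever the reporter is hunting for, can be sketched in a few lines. Here’s a minimal, hypothetical version using TF-IDF scoring; the function name and sample data are illustrations, not MuckRock’s code:

    # Hypothetical sketch of document triage, not MuckRock's implementation:
    # rank OCR'd pages by TF-IDF similarity to a reporter's query.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_pages(pages, query, top_n=10):
        """Return (page index, relevance score) pairs, most relevant first."""
        vectorizer = TfidfVectorizer(stop_words="english")
        page_vectors = vectorizer.fit_transform(pages)
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, page_vectors).ravel()
        ranked = scores.argsort()[::-1][:top_n]
        return [(i, scores[i]) for i in ranked]

    pages = [
        "Minutes of the zoning board, routine approvals.",
        "Wire transfer to offshore shell company, campaign contributions noted.",
        "Parking enforcement statistics for March.",
    ]
    for index, score in rank_pages(pages, "campaign contributions shell company"):
        print(f"page {index}: relevance {score:.3f}")

A reporter facing a million-email dump doesn’t need a perfect model, just one good enough to put the interesting pages at the top of the pile.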
Along the same lines is Legal Robot, which was awarded $100K for its plan to mass-request government contracts, then extract and organize the information within. This makes a lot of sense: People I’ve talked to in this sector have told me that the problem isn’t a lack of data but a surfeit of it, and poorly kept at that. Cleaning up messy data is going to be one of the first tasks any investigator or auditor of government systems will want to do.
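That cleanup work is mundane but unavoidable. As a toy illustration of the kind of normalization involved, here’s a sketch that standardizes vendor names, dollar amounts and dates in a messy record; the field names are invented for illustration, not Legal Robot’s actual schema:

    # Toy example of normalizing a messy contract record; the fields are
    # hypothetical, not Legal Robot's actual schema.
    import re
    from datetime import datetime

    def clean_record(raw):
        return {
            # Collapse stray whitespace and standardize capitalization.
            "vendor": " ".join(raw["vendor"].split()).title(),
            # Strip currency symbols and thousands separators.
            "amount": float(re.sub(r"[^0-9.]", "", raw["amount"])),
            # Parse a padded date string into a real date object.
            "signed": datetime.strptime(raw["signed"].strip(), "%m/%d/%Y").date(),
        }

    messy = {"vendor": "  ACME   consulting ", "amount": "$1,250,000.00", "signed": " 03/14/2018 "}
    print(clean_record(messy))
    # {'vendor': 'Acme Consulting', 'amount': 1250000.0, 'signed': datetime.date(2018, 3, 14)}

Only once records look like this can an auditor start asking real questions, like which vendors keep winning contracts.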
Tattle is a project aiming to combat disinformation and false news spreading on WhatsApp, which, as we’ve seen, has been a major vector for both. It plans to use its $100K to establish channels for sourcing data from users, since of course much of WhatsApp is encrypted. Connecting this data with existing fact-checking efforts could help researchers understand and mitigate harmful information going viral.
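One plausible way to connect user-sourced messages to prior fact-checks, offered purely as an illustration since Tattle’s actual pipeline isn’t described here, is to fingerprint normalized text so that shouty, reformatted forwards of the same claim hash to the same entry:

    # Hypothetical sketch: match user-submitted messages against already
    # fact-checked claims by hashing normalized text.
    import hashlib
    import re

    def fingerprint(text):
        # Lowercase and strip punctuation so trivially reworded forwards
        # of the same claim produce the same hash.
        normalized = re.sub(r"\W+", " ", text.lower()).strip()
        return hashlib.sha256(normalized.encode()).hexdigest()

    fact_checks = {fingerprint("Claim X is false"): "debunked, see fact-check entry"}
    incoming = "CLAIM X IS FALSE!!!"
    print(fact_checks.get(fingerprint(incoming), "not yet checked"))
    # -> "debunked, see fact-check entry"

Exact hashing only catches near-identical copies, of course; matching paraphrased claims is a much harder problem.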
The Rochester Institute of Technology will use its grant (also $100K) to look into detecting manipulated video, both designing its own techniques and evaluating existing ones. Close inspection of a given piece of media will yield a confidence score that can be displayed via a browser extension.
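To give a sense of what a single confidence score means in practice, here’s a hedged sketch of how per-frame detector outputs might be aggregated; the detector itself is a stand-in, since RIT’s techniques are still being designed and evaluated:

    # Hypothetical sketch: collapse per-frame manipulation scores into the
    # single confidence value a browser extension could display.
    # score_frame stands in for whichever detector is being evaluated.
    def video_confidence(frames, score_frame):
        scores = [score_frame(frame) for frame in frames]
        # Report the worst frame as well as the mean: a briefly spliced
        # segment should raise a flag even if the average stays low.
        return {"mean": sum(scores) / len(scores), "peak": max(scores)}

    # Dummy detector for demonstration; a real one would analyze pixels.
    frames = [0.02, 0.03, 0.91, 0.04]  # stand-ins for frame data
    print(video_confidence(frames, lambda f: f))
    # {'mean': 0.25, 'peak': 0.91}

How to summarize those numbers honestly for an ordinary viewer is as much a design question as a technical one.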
Other grants are going to AI-focused reporting work by the Seattle Times and by newsrooms in Latin America, and to workshops that train local media in reporting on AI and how it affects their communities.
To be clear, the initiative isn’t investing in these projects — just funding them with a handful of stipulations, Hwang explained to TechCrunch over email.
“Generally, our approach is to give grantees the freedom to experiment and run with the support that we give them,” he wrote. “We do not take any ownership stake but the products of these grants are released under open licenses to ensure the widest possible distribution to the public.”
He characterized the initiative’s grants as a way to pick up the slack that larger companies seem to be leaving behind as they focus on consumer-first applications like virtual assistants.
“It’s naive to believe that the big corporate leaders in AI will ensure that these technologies are being leveraged in the public interest,” wrote Hwang. “Philanthropic funding has an important role to play in filling in the gaps and supporting initiatives that envision the possibilities for AI outside the for-profit context.”
You can read more about the initiative and its grantees here.