
2020 has been an eventful year, not least due to a pandemic that shows few signs of slowing down. The AI research community has had its own tribulations, capped by Google's dismissal of ethicist Timnit Gebru and an argument over AI ethics and "cancel culture" involving retired University of Washington professor Pedro Domingos. Facebook's chief AI scientist Yann LeCun left (and rejoined) Twitter after an acrimonious debate over the origins of bias in AI models. And companies like Clearview AI, Palantir, and Rekor have broadened the reach of their facial recognition and license plate reading technologies to curry favor – and business – with law enforcement. All the while, AI has failed to avoid disadvantaging (and in some cases has actively disadvantaged) certain groups, whether moderating content, predicting UK students' grades, or cropping images in Twitter timelines.

With 2020 in the rearview mirror and New Year's resolutions in mind, I think the AI community would do well to consider the proposal that Zachary Lipton, assistant professor at Carnegie Mellon University, put forward earlier this year. He argued for a one-year, industry-wide moratorium on publishing studies to encourage "thinking" as opposed to "sprinting/scrambling/spamming" toward deadlines. "Greater rigor in exposition, science, and theory is essential both for scientific progress and for fostering productive discourse with the general public," Lipton wrote in a meta-analysis with Jacob Steinhardt of the University of California, Berkeley. "Moreover, as practitioners apply [machine learning] in critical areas such as health, law, and autonomous driving, a calibrated awareness of the abilities and limitations of [machine learning] systems will help us deploy [machine learning] responsibly."

Lipton and Steinhardt's advice was not heeded by researchers at the Massachusetts Institute of Technology, the California Institute of Technology, and Amazon Web Services, who, in an article published in July, proposed a method for measuring algorithmic bias in facial analysis algorithms that one reviewer described as "high-tech blackface." Another study this year, co-authored by scientists affiliated with Harvard and Autodesk, sought to create a "racially balanced" database capturing a subset of LGBTQ people, but conceived of gender in a way that was not only contradictory but dangerous, according to University of Washington AI researcher Os Keyes. More alarming was the August announcement of an Indiana parole study that seeks to predict recidivism with AI, even in the face of evidence that recidivism prediction algorithms reinforce racial and gender biases.

In conversations with VentureBeat's Khari Johnson last year, Anima Anandkumar, director of machine learning research at Nvidia; Soumith Chintala of Facebook (who created the AI framework PyTorch); and IBM research director Dario Gil predicted that finding ways for AI to better reflect the kind of society people want to build would become a central issue in 2020. They also expected the AI community to tackle issues of representation, equity, and data integrity head-on, while ensuring that the datasets used to train models take the people they represent into account.

That did not happen. With researchers criticizing Google for its opaque (and censorious) research practices, companies commercializing models whose training contributes to carbon emissions, and problematic language systems making their way into production, 2020 was in many ways a year of regression rather than progression for the AI community. But at the regulatory level, there is hope of righting the ship.

In the wake of the Black Lives Matter movement, a growing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco in California and Somerville, Massachusetts are among the metro areas where law enforcement is prohibited from using facial recognition. In Illinois, businesses must obtain consent before collecting biometric information of any kind, including facial images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts have put forward a suspension of government use of any biometric surveillance system within the Commonwealth. Most recently, Portland, Maine approved a ballot initiative banning the use of facial recognition by police and city agencies.

On a related note, the European Commission earlier this year proposed the Digital Services Act, which, if passed, would require companies to reveal information about how their algorithms work. Platforms with more than 45 million users in the European Union would have to offer at least one "non-profiling" content recommendation option, and fines for non-compliance could reach up to 6% of a company's annual turnover.

These examples do not suggest that the entire AI research community is ignoring ethics and therefore needs to hit the brakes. This year, for example, will mark the Association for Computing Machinery's fourth annual conference on Fairness, Accountability, and Transparency, which will feature, among other work, Gebru's research on the impacts of large language models. On the contrary, recent history has shown that AI work guided by regulations such as moratoriums and transparency legislation yields more equitable applications of AI than might otherwise have been pursued. In one telling case, faced with pressure from lawmakers, Amazon, IBM, and Microsoft agreed to pause or end the sale of facial recognition technology to police.

The interests of shareholders – and even of academia – will often conflict with the well-being of disenfranchised people. But the rise of legal remedies to curb the abuse and misuse of AI signals weariness with the status quo. In 2021, it is not unreasonable to expect the trend to continue and the AI community to be forced to fall in line (or to do so preemptively). For all the failures of 2020, with any luck, it laid the groundwork for a shift in thinking about AI and its effects.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Writer

VentureBeat
