AI Systems Have Challenges to Overcome
Crawford and Paglen's main objective is to show how poorly curated data used to train AI algorithms can produce strange and undesirable outcomes. ImageNet Roulette is currently on display as part of an exhibition in Milan. The creators highlight how, simply by introducing a number of offensive categories, such as misogynistic or racist terminology, the AI model will start delivering results in kind. On the one hand, this sheds light on how problematic AI systems can be when built on fundamentally bad data. On the other, it also means that AI systems could be manipulated by feeding them altered data to produce a desired outcome. Crawford tweeted that, although ImageNet was a major achievement for artificial intelligence as such a huge database, the project revealed fundamental problems with bias when it comes to classifying human beings. While ImageNet Roulette is a fun, gimmicky tool, it should raise questions about how effectively these systems can be used in real-world scenarios. Did you try ImageNet Roulette? Let us know what you think of the experiment and share your results!

Want to see how an AI trained on ImageNet will classify you? Try ImageNet Roulette, based on ImageNet's Person classes. It's part of the 'Training Humans' exhibition by @trevorpaglen & me – on the history & politics of training sets. Full project out soon https://t.co/XWaVxx8DMC pic.twitter.com/paAywgpEo4
— Kate Crawford (@katecrawford) September 16, 2019
Image courtesy of Twitter/@katecrawford, Shutterstock.
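The "garbage in, garbage out" dynamic the article describes can be illustrated with a minimal sketch. This toy classifier (not ImageNet Roulette's actual code; the feature names and labels are hypothetical) simply learns the most common label for each input in its training data, so if annotators attached an offensive label to a group of images, the model faithfully echoes it:

```python
# Toy illustration: a model trained on biased labels reproduces that bias.
from collections import Counter

def train(examples):
    """Learn the most common training label for each feature value."""
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, []).append(label)
    return {feature: Counter(labels).most_common(1)[0][0]
            for feature, labels in by_feature.items()}

# Hypothetical training set: the images are equally neutral, but
# annotators attached a loaded category to one group of portraits.
training_data = [
    ("portrait_group_a", "person"),
    ("portrait_group_a", "person"),
    ("portrait_group_b", "offensive_label"),  # biased annotation
    ("portrait_group_b", "offensive_label"),  # biased annotation
]

model = train(training_data)
print(model["portrait_group_b"])  # the model echoes the biased label
```

The sketch also hints at the manipulation risk the article mentions: whoever controls the training labels controls the predictions, so altering the data steers the model toward a desired outcome.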