Blogging Team 12: Spencer Cook, Qiaojing Huang, Sonika Modur, Witt Smith, Kayla Sprincis
News: AI & Looksmaxxing
Presented by Team 2: Slides
For the news presentation for Class 18, Team 2 discussed the impact of AI on beauty standards and its role in the social media trend known as “looksmaxxing”, which refers to taking extreme measures to improve one's perceived physical appearance.
Using AI to judge appearance and rate attractiveness can be detrimental. Its ease of access and deep integration into modern life allow vulnerable individuals, such as children, to develop a negative perception of themselves and others, perpetuating the cycle of objectification and reinforcing harmful beauty standards.

One of the main issues lies in the data used to train these facial-recognition models. A study on the impact of racial distribution in facial-recognition training data examined one widely used dataset, MS1MV3, and found that it was 76.3% Caucasian, 14.5% African, 6.6% Asian, and 2.6% Indian. This skew in representation biases the AI toward Eurocentric features and facial proportions. Because standards of attractiveness are often tied to perceived normativity (the closer a face is to the group average, the more attractive it reads), an AI trained predominantly on one racial group will treat that group's features, proportions, and symmetry as the average for human faces in general. When people outside that racial category use such a model to rate their pictures, it will likely evaluate them by their closeness to that standard, imposing Eurocentric beauty standards on non-European faces and reinforcing harmful ideas linked to eugenics and white supremacy.
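The averageness argument above can be illustrated with a toy numeric sketch. This is not a real attractiveness model: the one-dimensional "feature" values per group are invented for illustration, and only the group proportions mirror the MS1MV3 skew cited above. The sketch scores a face by its distance to the training-set mean, and that mean ends up dominated by the over-represented group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "facial feature" distributions per group.
# (feature_mean, dataset_fraction) -- fractions follow the MS1MV3 skew;
# the feature values themselves are made up for illustration.
groups = {
    "Caucasian": (0.0, 0.763),
    "African":   (2.0, 0.145),
    "Asian":     (3.0, 0.066),
    "Indian":    (4.0, 0.026),
}

# Sample a training set weighted by the dataset's racial proportions.
samples = np.concatenate([
    rng.normal(loc=mean, scale=0.5, size=int(10_000 * frac))
    for mean, frac in groups.values()
])

train_mean = samples.mean()  # dominated by the over-represented group

def score(feature):
    """Higher score = closer to the learned 'average' face."""
    return -abs(feature - train_mean)

# A typical face from the majority group scores higher than equally
# typical faces from under-represented groups.
for name, (mean, _) in groups.items():
    print(f"{name:10s} typical face score: {score(mean):.2f}")
```

Because 76.3% of the samples cluster near the majority group's feature value, the learned "average" sits close to it, so faces typical of that group automatically score highest, exactly the dynamic the discussion describes.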
Figure 1: Cats from Team 2 Presentation
As part of their presentation, Team 2 ran a poll asking which cat is the most “attractive” and instructed an AI to make the same choice. The poll involved several images of very similar cats, as seen in Figure 1, and showed how arbitrary the selection process was. The AI attempted to justify its pick with reasons based on symmetry and aesthetic cohesion.
The class then moved into discussing whether there is an appropriate or ethical way to use AI to rate attractiveness. The majority of the class agreed there is not. Classmates pointed out that there are cultural differences in what is considered attractive, and AI will not understand or account for that. Many felt it is not psychologically healthy for people to hyper-obsess over their appearance and compare themselves to these beauty standards. The discussion touched on how younger generations are using AI for this purpose, with many students expressing disappointment at that fact. Some students held a more neutral view, arguing that banning the technology would be unnecessary or somewhat in vain, especially since many other outlets already enforce the beauty standard, such as movies, magazines, and social media.
Team 2 concluded their presentation by discussing potential solutions. This included introducing social media regulations, which is already being done in some countries (including age restrictions in Australia).
Lead: AI and Creativity
Presented by Team 8: Slides
Reading and Media
- Musical DNA (RadioLab, 19 August 2010)
- David Cope, Facing the Music: Perspectives on Machine-Composed Music. Leonardo Music Journal, 1999.
- Ted Chiang, Why A.I. Isn’t Going To Make Art, The New Yorker, 31 August 2024. [PDF] [Web Link]
Discussion
Team 8 started their presentation by discussing Musical DNA and Facing the Music. Both pieces of media discuss David Cope and his program EMI, which generates music in the style of a selected composer by identifying patterns in that composer's pieces.
EMI and its creator faced backlash both when the program first came out and in more recent years. Many complaints centered on a feeling of betrayal at discovering the music was computer generated. Cope argued that the emotion involved with music comes from the listener, not the composer.
The last reading discussed was Why A.I. Isn’t Going To Make Art by Ted Chiang. This piece argued that art is subjective and rooted in one's own experiences, and it maintained that AI is not capable of having ideas and therefore cannot create art.
Team 8 introduced a live quiz to determine whether the class could tell the difference between human- and AI-generated music. There were ten questions, each involving two short clips: one human-made and the other AI-generated. The clips can be found here.
The overall class accuracy was 56% (there was one pair where 100% of the class correctly identified which clip was AI, but performance on the others was essentially random guessing or worse), which shows that people mostly could not identify the origin of the samples. During the game, many commented that all the samples sounded average, a “mush” of all music. Those who were more successful said they looked for which piece sounded more generic.
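The "random guessing" reading follows from a quick back-of-the-envelope check (assuming the ten questions were equally weighted in the reported average): setting aside the one pair everyone got right, the remaining nine pairs come out to about 51% accuracy, which is indistinguishable from a coin flip.

```python
# Quick check of the quiz numbers: one pair at 100% and an overall
# average of 56% across 10 equally weighted questions implies the
# other nine pairs averaged roughly chance level.
overall = 0.56
perfect_pair = 1.00
num_questions = 10

rest_avg = (overall * num_questions - perfect_pair) / (num_questions - 1)
print(f"Average on the other nine pairs: {rest_avg:.1%}")
# → Average on the other nine pairs: 51.1%
```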
To start the discussion, the class was prompted to each submit a word capturing how they felt art is defined, which was then displayed in a word cloud. Common words included connection, human, expression, process, emotion, creativity, and electric. During the discussion, students argued that we should keep creative tasks as human endeavors and leave menial ones to AI.
The main argument against AI art was its lack of sentience and its need for training data scraped from the work of others. Those with a more positive outlook acknowledged that AI art has been misused as a shortcut for financial gain, but argued there is a more genuine way to pursue it. Some questioned whether it is even productive to try to define art, since it is so heavily based on the experience of the artist and the audience.
Team 8 then asked whether EMI’s output is different from asking a human composer to write a piece in another composer’s style. The core of the question lies in the fact that humans are influenced by their surroundings and are not truly original when creating. Those in opposition felt that humans interpret their influences with emotion, while AI reads its input as data. Those more positive on AI agreed that the process of creating the piece may differ, but argued the end product would be mostly the same; they encouraged peers to stop centering the process and focus on the music, and noted that AI would make the process faster. The opposition responded by asking whether we want our art to be technically correct or human.
The final question presented to the class was whether complete originality is possible, for either humans or computers. The class generally agreed that humans are constantly building on previous foundations, and that the same applies to computers. Some pointed out that people do not always value originality and often just want consistency.
Professor Evans concluded the class by giving his own take on the content. He noted that this topic gets people emotional because art is core to the human experience. Even if AI can produce outputs indistinguishable from human-produced art, we do (and should) care about who is behind the art we experience.