The Ocean is Dying and No One Cares

As the world moves forward, it becomes easier to forget about certain parts of it. One part that is becoming increasingly neglected is the ocean, despite its major impact on us. A specific part of the ocean that is being ruined, and that seemingly no one is paying attention to, is the coral reefs.

What’s happening to the reefs is something called coral bleaching: when the symbiotic algae called zooxanthellae that live in the coral’s tissues are expelled, the coral loses both its major food source and its vibrant color. The algae leave the reef when water temperatures rise due to climate change, as well as when polluted water enters the reef (National Oceanic and Atmospheric Administration).

Now, a reef that becomes bleached isn’t dead, and can actually be restored, but bleached corals are far more susceptible to mortality. As a result of bleaching, the habitats of multiple species of fish are destroyed, and the overfishing crisis is exacerbated even further. It also hurts communities who depend on those fish for their income or nutrition, as the fish will not reproduce as much. Additionally, reefs serve as natural walls that protect life on land from waves and storm surges, and reefs themselves produce billions of dollars yearly in tourism (World Wildlife Fund). Speaking more on human dependence on the reefs, it is estimated that around half a billion people depend on healthy reefs for their livelihood, and reefs are also useful for developing new medicines (National Oceanic and Atmospheric Administration).

This is all tragic, obviously, but if the coral reefs are so important, why isn’t any major action being taken to support them? Well, the main reason coral bleaching occurs at all is climate change and water pollution: decades-old topics that a lot of people are simply tired of hearing about. According to the Earth Observatory, “every 3 to 7 years a wide swath of the Pacific Ocean along the equator warms by 2 to 3 degrees Celsius.” Climate change is hitting our ocean hard, no doubt, and the severe storms that have been occurring damage the reefs as well. Frankly, I think people don’t care about coral bleaching because, why should they? Its social media and news presence is practically non-existent, so people generally aren’t exposed to what is going on. Additionally, if the issue is climate change and pollution, people are inherently going to think about it selfishly. When people think of how climate change and pollution will affect our world, they think of other people, the weather, food, water, and so on, not marine life. To be completely honest, why would they? If no one is around to educate people on the current issues in the world, why would they ever expand their mindsets? How could they ever care?

It’s easy to shift the blame onto the media, news outlets, world governments, and practically any major institution in the world, but what can you do to stop this? On an individual scale, the solutions are pretty straightforward: the standard climate change prevention measures, reducing pollution by recycling and disposing of trash properly, reducing stormwater runoff, things like that (United States Environmental Protection Agency). These things can help prevent more coral bleaching from taking place, but there are also organizations and programs that assist in repairing reefs that have been bleached. One of these is the NOAA Coral Reef Conservation Program, which puts effort towards the research, conservation, and protection of coral reefs. The program’s conservation efforts range from growing and planting corals and removing invasive species to responding to emergencies concerning the reefs (National Oceanic and Atmospheric Administration).

Day by day, our reefs are getting bleached and left for dead, hurting ecosystems across both water and land. Despite the real dangers and consequences that coral bleaching presents to so much life, the lack of concern for it is not only disappointing but infuriating. To completely ignore something so significant yet so prevalent is a betrayal of mankind against itself and the other inhabitants of the planet. Were the situation to continue to worsen, the survival of around 25% of marine life would come into question (National Oceanic and Atmospheric Administration), and the well-being of countless people would be damaged as well. Since no one wants to talk about it, it’s up to everyone as individuals to remedy the situation. Through small actions from each person to help prevent coral bleaching, such as improving their own disposal habits, the reefs can still be maintained and restored at a healthy rate. Without action, however, those reefs, along with so much marine life and human life, will die.


Does AI image generation exhibit stereotypes and bias?

By: Adrian Vargas, Abiri Franklyn, Ahnaf Habib, Luis Ocampo


In a world where technology and AI are ever evolving, it is important to understand that AI can perpetuate harmful stereotypes and biases towards certain groups. This study investigates whether AI-generated images exhibit stereotypes related to gender, age, and race. Based on previous studies, we hypothesized that AI would be biased towards young white males. We used Bing’s AI image generator to generate 100 images per profession, based on gender-neutral profession terms such as Firefighter, Lawyer, Computer Engineer, and Secretary. Our research design involved manually categorizing individuals in the generated images using set criteria for age, race, and gender, and comparing them to real-life demographics provided by Zippia datasets. Statistical analysis and visual representation of the data exhibited patterns of bias in the AI-generated images, such as the overrepresentation of white individuals and gender disparities. Overall, the results underscore the need for algorithmic fairness and inclusivity in AI systems in order to mitigate harmful effects on individuals in society.

Keywords: Artificial Intelligence, Bias, Stereotypes


This study investigates bias in AI image generation. Biased image generation may reinforce harmful stereotypes and discrimination, and can misrepresent or exclude certain individuals from the generated images. For this study, we used Bing’s Image Generation AI, which uses an advanced version of OpenAI’s DALL-E. The prompts used were gender-neutral terms such as Firefighter, Lawyer, Computer Engineer, and Secretary.

The main idea presented in Caterina Antonopoulou’s study is that the datasets used to train certain AI systems have underrepresented and overrepresented certain social groups, which leads to false representation in these systems. AI generators were also found to have racial and gender biases, such as being unable to recognize a Black woman among her white colleagues.

In another study, Marcelo O. R. Prates researched bias in machine translation, specifically in Google Translate. The study found that certain occupations were associated with specific genders: occupations traditionally dominated by males, such as scholar, engineer, and CEO, were consistently interpreted as male, while roles perceived as feminine, like nurse, baker, and wedding organizer, were interpreted as female. Racial bias has also been found within the Google Photos image-labeling algorithm, where dark-skinned individuals were inaccurately labeled as “gorillas.” The findings revealed both racial and gender bias in these tools.

In a different study, Mi Zhou and colleagues show the presence of gender, racial, and facial-expression and appearance bias within different kinds of AI image generators, such as Midjourney, Stable Diffusion, DALL-E 2, BLS, and Google. The study revealed gender and racial biases in AI image generators when it comes to certain occupations and facial expressions, and it emphasized that AI has a role in shaping perceptions and stereotypes and therefore needs to be unbiased.

In the last study, by Blessing Mbalaka, DALL-E 2 was found to underperform when generating detailed images of “An African Family” compared to just “Family,” showing bias in accurately capturing cultural diversity. Starry AI had poor accuracy in depicting cultural elements, even though it was better at rendering visible faces. The report expressed the importance of diversity in training data for AI to become less biased.

Our primary hypothesis is that AI-generated images will predominantly depict young white males. To test this hypothesis, our research design uses Bing’s image generation AI to create images from the gender-neutral career terms we provide. We then analyze the representation of different races, genders, and ages. We aim to show the bias AI has toward different demographic categories and to promote algorithmic fairness and inclusivity.

Materials and Methods

Data Collection 

To gather background information, we searched the City College of New York library database for articles related to the topic of gender, age, or race bias in Artificial Intelligence (AI). Articles had to be primary sources published in peer-reviewed journals.

For each of four gender-neutral profession terms (“Secretary,” “Lawyer,” “Computer Engineer,” and “Firefighter”), we generated 100 images using the Bing AI image generator.

To see whether the data produced by the AI exhibits bias, we categorized the images based on age, race, and gender, then compared the results to datasets for the chosen profession provided by Zippia (e.g., lawyer demographics).
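The tallying step described above can be sketched in code. This is a minimal illustrative sketch, not part of the original study’s workflow; the example labels below are hypothetical placeholders standing in for our manual categorizations:

```python
from collections import Counter

def demographic_percentages(labels):
    """Turn a list of manually assigned category labels into percentages."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * count / total, 1) for label, count in counts.items()}

# Hypothetical manual race categorizations for 100 generated images
race_labels = ["White"] * 80 + ["Black"] * 12 + ["Hispanic/Latino"] * 8

print(demographic_percentages(race_labels))
# {'White': 80.0, 'Black': 12.0, 'Hispanic/Latino': 8.0}
```

The same helper works for the age and gender label lists, giving one percentage table per demographic dimension and per profession.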

Research Design

This study is an observational study in which we gather information on the biases present in images produced by artificial intelligence. The independent variable was the job profession we chose. The dependent variables we measured were the ages, races, and genders of the people depicted in the AI-generated images; this categorization had to be done manually. Lastly, the control variables were the image generation website used (in this case, Bing’s AI image generator) and the number of images produced (100).


To collect the data from the images, we manually categorized them to the best of our ability based on specific criteria for race, age, and gender. We carefully examined each image and estimated the age of the individuals based on their facial features, wrinkles, hair color, posture, and clothing. To categorize race, we looked at skin tone, facial features, and hair type. For gender, we looked at hairstyle, clothing, facial features, and body posture. The reliability of this method is not perfect, since it is subjective and open to human bias.


The sample size was set from the beginning of the experiment at 100 images per profession. A sample size of 100 images reduces random variability and the effect of outliers, which leads to a more stable estimate of the demographic distribution of the profession. Furthermore, a larger sample size makes it easier to detect patterns across the images and draw conclusions about bias in AI-generated images compared to real life. The data was collected manually as mentioned above, and the real-world demographics of each profession were gathered from Zippia.

Statistical Analysis

For this study, it is crucial to do a comparative analysis in which we compare the demographics of the chosen profession obtained from Zippia with the demographics represented in the AI-generated images. We can show the presence of bias in the AI-generated images by examining whether certain age groups, racial groups, or genders are overrepresented or underrepresented compared to the Zippia demographics. Visual representations of the data will be created using graphs, charts, or tables to illustrate the distribution of age, race, and gender in AI and real life.
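As one way to carry out this comparison, the percentage-point gap between the AI-generated distribution and the Zippia reference can be computed directly. This is an illustrative sketch only; the numbers below are hypothetical placeholders, not our actual results or Zippia’s figures:

```python
def compare_to_reference(ai_pct, reference_pct):
    """Percentage-point gap between AI-generated and real-world demographics.
    Positive values mean the group is overrepresented in the AI images."""
    return {group: round(ai_pct.get(group, 0.0) - ref, 1)
            for group, ref in reference_pct.items()}

# Hypothetical percentages; real reference figures would come from Zippia
ai = {"White": 80.0, "Black": 12.0, "Hispanic/Latino": 8.0}
zippia = {"White": 65.7, "Black": 14.0, "Hispanic/Latino": 20.3}

print(compare_to_reference(ai, zippia))
# {'White': 14.3, 'Black': -2.0, 'Hispanic/Latino': -12.3}
```

The resulting gap table is what the bar charts and tables in the Results section visualize: a large positive gap for a group signals overrepresentation by the AI, and a large negative gap signals underrepresentation.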



Results

Secretaries:

AI Demographics:

Official Demographics:

Generating images of secretaries showed a bias toward white women. Of the 948 secretaries generated, 66.1% were women, and more than 80% were Caucasian. This is much greater than the real-world statistic, where the demographic is 65.7% Caucasian. However, the percentage of women was lower than the real-world figure: the real-world statistic is 89.9% women, a whole 23.8% higher than what the AI generated.

Computer Engineering:

AI Demographics:

Official Demographics:

When directed to generate images of Computer Engineers, the AI showed a bias toward white people in their early 20s to mid-30s. The gender ratio leaned toward women: of the 232 people generated in the images, more than fifty percent were women.


Lawyers:

AI Demographics:

Official Demographics:

Generating images of lawyers showed a disproportionate number of men compared to women. The real-world ratio of female to male lawyers is very close to fifty-fifty. Across the one hundred images, 449 lawyers were generated, of whom 40.98% were women, unlike the real world, where 51.5% of lawyers are women. This displays a bias toward showing men as lawyers. There is also a bias toward white people in their late 20s and early 30s, as almost all of the people displayed as lawyers were Caucasian.

Firefighters:

AI Demographics:

Official Demographics:

When directed to generate one hundred images of firefighters, the AI for the most part produced images of white men. The images show 119 white men and only 1 white woman. The only other results that were not white people were 7 images of Black men and 8 images of unidentifiable firefighters; these unidentifiable photos depict the first responder on the scene turned away from the camera. All of the images show men displaying various wrinkles on their faces, as though they are tired or middle-aged. Compared to the usual demographics of firefighters, this data is very similar, showing that the occupation is dominated overall by white men.


Discussion

Going into the research phase and generating images, we held the expectation that the AI software would exhibit certain biases; more specifically, that it would generate more images of white men than it should. The results both supported and contradicted this expectation.

While collecting and analyzing the images, we had a few struggles along the way. During collection, the site we used slowed our progress, since only a few images can be generated at a time without a subscription. As for analysis, issues arose concerning how the data should be organized. Miscommunication within the group slowed down the overall process, and future groups interested in conducting this experiment should be mindful of these things.

Looking at the data itself and our hypothesis, we were mostly correct. Across all four of the chosen careers, the generated images contained far more white people than they should have. Computer Engineers and Lawyers had percentages close to the field percentages, differing by only about 2%, while for Firefighters and Secretaries, the share of white people generated was around 20% higher than what is found in the field. Additionally, when it came to people of color, the images did not hold to the real-world standard. Gender was a bit different, however: none of the data sets accurately matched the field averages, and the biggest outlier was the Computer Engineering data, which yielded a much higher percentage of women than the industry.

There were other ways in which bias was demonstrated through the images, notably with African American and Hispanic/Latino women. Looking at the images directly, almost everyone in that group shared a very similar hairstyle of undefined, thicker coil patterns, which was a strange choice by the AI. White men and women, for comparison, also shared many of the same hairstyles from person to person, but there were still slight differences between individuals. It got to the point where it was easier to differentiate between a white woman and a white-passing Hispanic/Latino woman just by looking at their hair. Additionally, the AI struggles to properly represent age: repeatedly, the people generated are simply too young given the fields they work in and those fields’ average ages.

So what’s the issue? As mentioned earlier in the report, the AI seems to lack the ability to be unbiased because of its training datasets (Antonopoulou, C.). The datasets are not vast enough to give these systems the ability to generate unbiased images. While mistakes are acceptable when creating new systems, allowing them to persist for so long without any correction is an issue. The developers’ continued ignorance of these flaws enables the biases of real-world people, and those biases can cause harm to others. At their core, these companies are businesses trying to make money, and going out of their way to solve these issues would cut into their profits and likely not be “worth the investment,” but there is an ethical obligation to be fulfilled here.

Altogether, our findings in this study reflected both our own expectations coming in and the findings of other studies. While there were some surprises in our data, the fact remains that AI image generators still have much work to do.


References

Antonopoulou, C. (2023). Algorithmic bias in anthropomorphic artificial intelligence: Critical perspectives through the practice of women media artists and designers. Technoetic Arts, 21(2), 157–174.

Mbalaka, B. (n.d.). Epistemically violent biases in artificial intelligence design: The case of DALL-E 2 and Starry AI. Digital Transformation and Society.

Prates, M. O. R., Avelar, P. H., & Lamb, L. C. (2020). Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, 32, 6363–6381.

Zhou, M., Abhishek, V., Derdenger, T., Kim, J., & Srinivasan, K. (2024). Bias in generative AI. arXiv preprint arXiv:2403.02726.





Zippia. (2023, July 21). Firefighter demographics and statistics [2024]: Number of firefighters in the US.

Zippia. (2024, April 5). Lawyer demographics and statistics [2024]: Number of lawyers in the US.


Central Park Conservancy PSA

For our PSA, we focused on the Central Park Conservancy, an organization that works towards cleaning and maintaining not only Central Park, but multiple other parks in New York City as well. Throughout our PSA, we wanted to shine a light not only on the efforts of the Conservancy, but also on the results of those efforts, through our visuals. Central Park has many great sightlines, and we made sure to show some of them off. It is because of the Conservancy that Central Park has kept its beauty and remained attractive to tourists, and we hope it continues to fulfill its responsibility of preserving the Park.