DTC475 Final Portfolio by Joshua Jackson


Selected Blogs


Annotation blog 1.1 - June 28th 2020

As I worked my way through the reading for this week, the thing that stood out to me most was in the book Digital Media Ethics by Charles Ess. One question I found in the text is worth elaborating on: Ess asks whether cyberbullying is a "serious problem for those of us living in a digital age" (Ess, 2014). With so many social media platforms readily available in the palm of one's hand, one's privacy can easily become an issue. Cyberbullying and cyberstalking are new forms of picking on someone, but with an audience that can be infinite, and one can see how this type of bullying presents serious problems.

As I moved further into this reading, I couldn't help but think about how Ess used Amanda Todd's case as the basis for this part of the book. I remember her case because my oldest son was born in June of 2012. I used Facebook, Twitter, and Instagram to showcase my newborn and show my friends and family how happy I was to have this little guy in my life. As I lay on the couch with my newborn, October came by fast, and my wife and I started hearing about this girl who was stalked and terrorized online through social media and ultimately committed suicide.

I think Ess shows us that no matter what this girl did, people could find her on social media, even after she changed her accounts. Since the majority of people nowadays communicate through social media, was she supposed to just cut herself off from the world? After Amanda changed schools and moved, she was still targeted once she became active online again. How can we combat cyberbullies or stalkers if our digital footprint can be tracked anywhere we move? I fear this issue will never see any progress unless these individuals are punished by law.

Combatting Cyberbullies

Work Cited

Ess, Charles. Digital Media Ethics. Polity Press, 2014.

Explanation of choice

My first blog choice came to me when I was reading Digital Media Ethics by Charles Ess. I chose to blog about cyberbullying because I have had a family member affected by this type of behavior online. Fortunately for us, she was able to defend herself instead of succumbing to the pressure of this act of hatred toward her. My niece was a popular cheerleader at Skyview High School, and some took it upon themselves to make her look trashy and slutty, but what they didn't expect was a young woman with a ton of confidence who would stand up and fight back against them.

Annotation blog 1.2 - July 5th 2020

For the blog this week, I have chosen to continue the discussion of cyberbullying, but first I want to touch on the topic of journalism. In chapter four of Digital Media Ethics, Charles Ess focuses on how the virtue ethics framework is most helpful for thinking about friendship and treating people more respectfully. Ess also touches on the rise of "citizen journalism," which has become more widespread in today's culture with the spread of smart devices and the speed of information sharing. Anyone with a smartphone can now share the news of what's happening around them at the touch of a button (Ess, 142).

Another consequence of being able to get close without actual physical contact is online "friendships," which are sometimes used as a medium for numerous forms of cyberbullying (Ess, 122). I find it disturbing that individuals such as "Josh Evans," in the case of Megan Meier, or the tormentors in the Amanda Todd case, would create these caring avatar-like personas and intentionally flirt to hook someone in, only to stalk them and push the victim into ultimately ending their life.

As I have said before, I feel strongly that more monitoring and harsher punishments need to be introduced to bring justice to the victims of this disgusting behavior. These types of cases make me wonder whether people have good intentions when socializing on the internet. I would think most do, but as with a news report, it would be smart to check all sources for credibility.

Are online relationships healthy

Work Cited

Ess, Charles. Digital Media Ethics. Polity Press, 2014.

Explanation of choice

For my second choice, I stayed with the topic of cyberbullying and cyberstalking because I have a personal problem with people who attack others merely for some kind of disgusting kick. I believe that's why I joined the military: I have always wanted to fight for those who couldn't fight for themselves. I feel it is my personal responsibility to step in and say something when someone is being hurt or bullied, as some of the elementary kids at my son's school have found out. Awareness of this type of bullying needs to be spread, and I hope I was able to capture that in my blog.

Annotation blog 3.2 - July 27th 2020

This week I took on The Black Box Society by Frank Pasquale for my blog choice. I found this reading very interesting, as the author sounds like a bit of a conspiracy theorist, and I am totally into that. Pasquale gives us a look into how major corporations use secret algorithmic mechanisms to aid in their daily decisions. Through the collection of personal data via the internet and social media, tech companies can see the decisions we make and write algorithms that influence our daily lives. Moreover, information gathered by major companies such as Netflix, Facebook, and Google is rumored to be sold to the government, as Pasquale states in chapter 2: "Laws prevent government itself from collecting certain types of information, but data brokers are not so constrained. And little stops the government from buying that information once it's been collected" (Pasquale, 2016).

These multibillion-dollar companies build profiles of their users to sell their computer-spawned opinions to people worldwide. Algorithms give us scores that are used to determine whether we are worthy of receiving credit, renting or buying a home, and even employment. As Pasquale notes, these scores are prevalent, yet how they are calculated is kept secret, suggesting that entire societies play along by "black box" rules. This Big Data only benefits financiers and a handful of tech entrepreneurs, while everyday people live oblivious to the rules by which their lives are played.

The Data Big Tech Companies Have On You

Work Cited

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2016.

Explanation of choice

For my third choice, I selected my blog about information gathering and the bias it enables against people. I genuinely have a hard time believing that I am not being watched every day and that my life is not being shaped by the data I give to my online personas. I feel that this violation of privacy by big tech firms is no different from the government keeping tabs on us. If big tech sells our information secretly anyway, what is the point of the government having regulations against it? Wouldn't it be fairer to say we need regulations on big tech so they cannot sell our personal info? I feel this is a strong topic that needs further examination, but it is pretty hypocritical to have regulations on government spying and not on tech companies.

Multimodal Redesign



Please Click This Link To Be Taken To A Multimodal Redesign

Synthesis



One would think that in 2020, facial recognition technology would be flawless, yet this isn't the case. We use it to log into our phones and tablets as well as to make payments. We use algorithms for just about anything today, and the thought of this technology having a racial bias or flaw doesn't seem plausible, but it does. As Joy Buolamwini presented in this unit, facial recognition bias against women, and especially women of color, is a real problem. These algorithms end up targeting specific classes and races of people, detecting women as men, or simply failing to determine whether a person is male or female at all.

The main problem with class and racial bias is that if a machine were not programmed by a human, it would not know whether you are black or white, rich or poor. With these algorithms, we start to see how the personal opinions, or even racially charged agendas, of the humans who coded the software can get into the program's code. For this reason, big tech firms need to step up their quality control processes to ensure that racially motivated bias and class separations stay out of the end product. Because so little is being done about this, people are being segregated online by race, class, and credit score, which creates a barrier in life. Law enforcement agencies have started using social media monitoring and facial recognition software, but the problem is that the software targets people of color. This has been tested and proven by Joy Buolamwini and other software engineers. Some facial recognition software has gone as far as labeling a subject as black or Latino when the person was white.

We cannot afford to use software that is supposed to aid us but instead targets an innocent person, who may spend time in jail for a crime they did not commit. So how do we fix this? The answer is that big tech doesn't have a plan to deal with it. We know that people such as Cathy O'Neil and Joy Buolamwini are tackling these issues of algorithmic bias, but it's not so simple. Coding AI-based algorithms is very time-consuming and arduous. These programs contain enormous amounts of code written by thousands of people over many years, and it's simply not cost-effective for a company to redo their AI programs. This is where people like Joy and Cathy come into play. There is no better time to introduce industry-wide standards for bias and facial detection accuracy. These tests should include measurements of each algorithm's performance across different categories, like race, age, and gender. Many companies, such as Google, Facebook, Amazon, and IBM, have said they would be more than willing to support overhauls and regulation. So why haven't we seen any change?
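The per-category measurements described above can be sketched in a few lines of code. This is only a minimal illustration of the idea, not any company's actual audit tool; the function name, data, and group labels here are all hypothetical.

```python
# Minimal sketch of a disaggregated audit: compute an algorithm's
# accuracy separately for each demographic subgroup instead of one
# overall number, so gaps between groups become visible.
from collections import defaultdict

def audit_by_group(predictions, labels, groups):
    """Return accuracy per subgroup.

    predictions, labels, and groups are parallel lists: the model's
    output, the ground truth, and each subject's annotated subgroup.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, truth, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical gender classifier that works for one subgroup but not another.
preds = ["M", "M", "F", "M", "M", "F"]
truth = ["M", "M", "F", "F", "F", "F"]
group = ["lighter", "lighter", "lighter", "darker", "darker", "darker"]
scores = audit_by_group(preds, truth, group)
print(scores)  # accuracy gap between the two subgroups
```

A single overall accuracy score would hide exactly the kind of gap this per-group breakdown exposes, which is why the industry-wide tests would need to report every category separately.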

I feel one of the things that answers this question is politics, and also money. I believe major companies have politicians in their pockets, and I understand this is significant speculation, but is it really that far from the truth? If the poor stay poor and racial bias still exists everywhere we look, then we remain dependent on the people elected to represent us. However, if we fight big tech and the politically charged machine to create legislation, then how can they keep us separated by class, gender, and race? It takes a revolution to create change; our forefathers saw this in the late 1700s. I feel this country is on the verge of social and class wars if we continue to allow racially biased machines and greedy politicians to shape our everyday lives in a way they see fit to keep the population under control.

People like Cathy O'Neil and Joy Buolamwini are at the forefront of trying to get companies to understand the issues at hand and to help create change. I feel it only takes a couple of people, such as Joy and Cathy, to ignite a spark in a nation for changes in technology and development to happen. I feel it is my personal responsibility as a future web designer to ensure my code is without bias and sexism. Along with some tech warriors and noble politicians, we can create algorithms free from prejudice and sexism, so that when my kids are my age, they will never even know these types of bias existed.

Back to top