From: Nathan J. Nelson

To: Dr. Tucker, Shin-Ping

Subject: Homework 9

                                    12 April 2019

 

            What would you do?

You’re thirty minutes into your dream job interview and the interviewer’s final question catches you off guard: “If I were to examine your social media accounts, what would I see?” What would you say?

Social media monitoring and evaluation by employers has been around for quite some time, so failing to consider it is careless on an applicant’s part. I personally have nothing to hide, as I regularly clean my social media accounts: I go through all posts, likes, and so on, and remove any material that could be questionable. For instance, a like on a page in 2010 may have been viewed one way, but a decade later it may be taboo. From my point of view, I am an open book and would allow them access, as I have nothing to hide.

You’re surprised to get social media requests from an individual who just joined your company. Your first thought is to ignore the request; however, you are concerned about things being awkward at work. What do you do?

I would take the opportunity to review what they find fascinating: are their posts socially or politically charged? If their social media accounts are tame, I would approve the request but keep an eye on their content and how they conduct themselves at work. If I reached a point where I felt uncomfortable with what they were posting or with the way they carried themselves at work, I would end the online friendship and keep things strictly professional. If I were worried about them noticing the removed friendship, I could simply choose not to follow them.

 

CDA Protects Social Media Companies

Do you believe that social media companies are doing enough to shut off the communication of terrorist groups? Do you have any ideas for actions they could take that would help solve the problem?

Today, software developers write code that tracks user likes, friends, shopping habits, and more. Perhaps an algorithm could track known or suspected criminals, or individuals with known associations to them, and categorize them according to a threat level. While such code already exists, is it actually being deployed on social media platforms to help guard against aiding the enemy?
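As a rough illustration of that idea, here is a minimal sketch that assumes a hypothetical friend/follow graph and a made-up scoring rule based on how many hops separate a user from a flagged account; it is not any platform’s actual system.

```python
# Hypothetical sketch: score accounts by how closely they are connected
# to a set of flagged accounts in a follow/friend graph.
from collections import deque

def threat_level(graph, flagged, user, max_hops=3):
    """Return 'high', 'medium', 'low', or 'none' based on the shortest
    number of hops from `user` to any flagged account."""
    if user in flagged:
        return "high"
    seen, queue = {user}, deque([(user, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor in flagged:
                return "medium" if hops == 0 else "low"
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return "none"

# Example: "dave" follows "carol", who follows a flagged account "eve".
graph = {"dave": ["carol"], "carol": ["eve"], "alice": ["bob"]}
print(threat_level(graph, flagged={"eve"}, user="carol"))  # medium
print(threat_level(graph, flagged={"eve"}, user="dave"))   # low
```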

Should U.S. anti-terrorism laws take precedence over the “safe harbor” provisions of the CDA? Why or why not?

Yes, they should. If your social media site is going to host content, stricter regulations and oversight need to be placed on it. Claiming “I didn’t know” wouldn’t be allowed as a defense in other instances, so why should this be any different? Ignorance is not an excuse.

Do research to learn the current status of the Gonzalez and Pulse lawsuits and whether they were settled. Write a brief summary of your findings.

According to courthousenews.com, the majority of the Gonzalez case has been thrown out; however, the presiding judge left open the portion of the case asking whether “YouTube’s ad revenue sharing contributed to the November 2015 attacks that claimed 130 lives, including that of 23-year-old California native Nohemi Gonzalez” (courthousenews.com). The family’s lawyer has yet to decide whether they will pursue the judge’s offer.

In the Pulse case against the city of Orlando, a federal judge dismissed the suit, stating that the city’s actions did not meet a “shock the conscience” threshold. The judge ruled that the officer in question had two choices: go inside and attempt to subdue the attacker, whereby more people could have been caught in the crossfire, or call for backup before entering. However, the case is now being pursued a different way, in the hope of bringing charges against the security firm that placed the shooter in a position of trust and confidence even though his social media accounts showed he had declared allegiance to the Islamic State.

 

Google AdWords

Should Google take a more active approach in censoring its content providers? If it does, is it possible that Google could run afoul of Title II of the Digital Millennium Copyright Act and lose its legal immunity for the actions of its users?

Censorship is a slippery slope; however, one could argue that because Google owns the platform, it has the right to restrict the content hosted on it. Title II deals with protecting an ISP when a user posts content that may be copyrighted, shielding the provider under certain safe harbor provisions.

How might Google deploy advanced technologies to identify content that is objectionable?

Through the use of AI, Google could train models on examples of content that violates its policies and automatically flag similar material for review.
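As a very rough sketch of what automated flagging might look like (assuming a hypothetical keyword-weight filter with made-up terms and threshold, not Google’s actual approach, which would rely on trained classifiers):

```python
# Hypothetical sketch: flag text for human review based on a simple
# keyword score; real systems would use trained ML models instead.
FLAGGED_TERMS = {"violence": 3, "extremist": 5, "weapon": 2}  # made-up weights

def review_score(text, threshold=4):
    """Return (score, needs_review) for a piece of user content."""
    words = text.lower().split()
    score = sum(FLAGGED_TERMS.get(word, 0) for word in words)
    return score, score >= threshold

score, flagged = review_score("extremist propaganda promoting violence")
print(score, flagged)  # 8 True
```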

Can/should Google provide advertisers with guarantees about what type of content their ads will appear next to? How could such guarantees be written so that they are enforceable?

Until Google can reliably identify, prevent, and remove content that is deemed offensive or that violates its terms of service, it should make no such guarantees. If such a guarantee were to be written, it would have to be crafted so that it protects Google and its parent company from breach-of-contract claims.