
Google’s cable investments

There was an article recently in The New York Times covering Google's undersea projects. It includes a nice map showing the history of undersea cables and which of them Facebook, Google, Microsoft or Amazon "partly own, solely own or are a major capacity buyer of a cable owned by another company".

Map published in The New York Times. Graphics by Karl Russell, Troy Griggs and Blacki Migliozzi.

It looks like the share of these major content providers among all internet cables is growing quite rapidly, and Google in particular is taking the lead in building its own cable infrastructure.

The article includes an interview with Jayne Stowell, who oversees construction of Google's undersea cable projects. A couple of nice comments:

“People think that data is in the cloud, but it’s not,”
“It’s in the ocean.”

“It really is management of a very complex multidimensional chess board,” said Ms. Stowell of Google, who wears an undersea cable as a necklace.

There are also interviews with and pictures of the crew working on the cable ship Durable, which Google uses for its laying operations.

“I still get seasick,” said Walt Oswald, a technician who has been laying cables on ships for 20 years. He sticks a small patch behind his ear to hold back the nausea. “It’s not for everybody.”

Recommended reading!

Here are a couple more images of what Google is planning, from the company's blog post.

About captchas and trackers

I was excited about the large scale of media infrastructures mentioned in the introduction chapter and how things are connected yet hard to see. The connection between the web and the biophysical world is easy to forget. Where do all the small pieces of regular websites, such as cookies, come from? Who provides them, and why do they exist? Are they hyperobjects?

In the autumn, I studied CAPTCHA systems for a project, especially Google's reCAPTCHA, which is one of the most commonly used CAPTCHA services on the internet. reCAPTCHA is a fully automated web security service that developers can use for free to protect their sites. Its primary function is to determine whether a visitor to a page is a human (good) or a "robot" such as a spamming bot (bad).
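To make this concrete (this is my own illustration, not part of the project): a site typically embeds the reCAPTCHA widget in the browser and then verifies the token it produces against Google's documented siteverify endpoint on the server. A minimal server-side sketch in TypeScript, assuming Node 18+ and a hypothetical RECAPTCHA_SECRET environment variable, might look roughly like this:

```typescript
// Minimal sketch of server-side reCAPTCHA verification (illustrative only).
// Assumes Node 18+ (global fetch) and a hypothetical RECAPTCHA_SECRET env variable.

interface SiteVerifyResponse {
  success: boolean;          // did Google accept the token?
  score?: number;            // v3 only: 0.0 (bot-like) .. 1.0 (human-like)
  "error-codes"?: string[];  // present when verification fails
}

// Verify the token that the reCAPTCHA widget produced in the visitor's browser.
async function verifyCaptcha(token: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET ?? "",
    response: token,
  });

  // Google's documented verification endpoint.
  const res = await fetch("https://www.google.com/recaptcha/api/siteverify", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: params.toString(),
  });

  const data = (await res.json()) as SiteVerifyResponse;
  return data.success;
}
```

The point of the sketch is simply that every such check is a round trip to Google's infrastructure: the decision about who counts as human is made on Google's servers, not on the site itself.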

But reCAPTCHA – unlike earlier CAPTCHAs – creates secondary value too.

reCAPTCHA challenges are designed so that they put visitors to work creating useful data for Google. reCAPTCHA has provided useful information to digitize old books, improve Google Maps and develop machine learning algorithms.

Different captchas. (Source: back40design.com.)

Some critics have seen a connection between reCAPTCHA and Google's deal with the U.S. Department of Defense to analyze drone footage. Manuel Beltran argues that while solving Google's CAPTCHA challenges, unwitting people become labour creating data that helps the U.S. Army.

Another example of a blind spot in the web is trackers that collect information about users. Probably the most commonly used web service relying on trackers is Google Analytics. When its tracker is placed on a website, the site sends visitor data to Google's servers. Those servers are located somewhere – maybe in Hamina, maybe in the United States.
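Those "few lines of code" are worth seeing. Embedding Google Analytics usually amounts to loading Google's gtag.js script and registering a measurement ID, after which page views are reported to Google's servers. A rough browser-side sketch in TypeScript, with a placeholder measurement ID, might look like this:

```typescript
// Rough sketch of how a Google Analytics (gtag.js) tracker ends up on a page.
// "G-XXXXXXX" is a placeholder measurement ID, not a real one.

declare global {
  interface Window {
    dataLayer: unknown[];
  }
}

const MEASUREMENT_ID = "G-XXXXXXX";

// Load Google's tracking script from googletagmanager.com.
const script = document.createElement("script");
script.async = true;
script.src = `https://www.googletagmanager.com/gtag/js?id=${MEASUREMENT_ID}`;
document.head.appendChild(script);

// Queue commands for the tracker; from here on, page views and other
// events are sent to Google's servers, wherever those happen to be.
window.dataLayer = window.dataLayer || [];
function gtag(..._args: unknown[]) {
  // gtag.js expects the arguments object itself to be pushed onto dataLayer.
  window.dataLayer.push(arguments);
}
gtag("js", new Date());
gtag("config", MEASUREMENT_ID);

export {};
```

From the webmaster's point of view this is a copy-paste snippet; from the infrastructure's point of view it wires every visit into Google's data centres.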

Screenshot of the project website Algorithms Allowed.

Artist Joana Moll investigated the use of trackers on websites of countries against which the US enforces embargoes and sanctions, including Cuba, Iran, North Korea, Sudan, Syria, and the Ukrainian region of Crimea. She scraped websites under the country-code domains of these countries.

She found some interesting uses of trackers owned by US companies: the official website of the President of Iran uses Google Analytics, the Ministry of Defense of Iran uses Google Analytics, the Ministry of Finance of Syria uses Google Analytics, and so on.

It is amusing that the webmasters of these governmental websites let US corporations read data about their visitors. I would argue that this happens because it is hard for people to grasp that a few lines of code on a website can also mean something in the physical world.

Link to Joana Moll’s project Algorithms Allowed (2017).