AI, a consumer product not free of the sins of capitalism

The article “Anatomy of an AI System” by Kate Crawford and Vladan Joler delves deeply into the lesser-known, physical and resource-hungry side of creating AI products and services. They discuss at length the labour and ecological costs that go into creating a single product, such as the Amazon Echo.

It’s easy to agree that the most blatant exploitation in the mines and factories is unethical. When it comes to intellectual property, however, the exploitation is harder to see clearly. I’m sure we all feel uncomfortable about our personal information being collected. But the harnessing of the mass of online text and images that I have also contributed to, yet which can’t be connected to me, doesn’t make me feel exploited. Although I must admit, a relevant question is whether this kind of data exists at all, or whether all data is collected in processes where it can be traced back to me. That I don’t feel particularly exploited is also due to the fact that Google and Facebook offer their services for free. Because the process of harvesting information is hidden, I tend to see only the free, functional product as the face of the company. Although the distribution of profits is clearly skewed and unfair, I think the free services provided can operate as a kind of redistribution of wealth. Access to daily digital tools is definitely something that should be considered when we think about evening out the gap in life opportunities.

When it comes to the profits ending up in the hands of a few, the article, in my opinion, makes it seem as if the phenomenon were particularly connected to AI. I feel that the skewed distribution of wealth and the exploitation of natural resources are problems caused by business in general. The fairly easy possibility of hiding money in tax havens and operating in countries with weak workers’ rights and weak currencies is too tempting for any company operating at a global scale. Why would the AI and tech industries be any different? That said, in internet tech there are especially few global players, and giants like Google, Facebook and Amazon have the upper hand in developing AI and therefore in controlling its use. Whatever possibilities a normal consumer has for making a political impact by choosing ethical companies over unethical ones, those possibilities are lacking in web tech. But would people really move to a more ethical Facebook if one appeared on the market?

The physical form of the internet

How Lisa Parks and Nicole Starosielski, in “Signal Traffic: Critical Studies of Media Infrastructures”, explain the big server companies’ need to present their data centers in physical form reveals something interesting about people. We need something to take a physical form in order to believe it is real. This has caused the companies we may use daily to seem less personal, part of a world that is not truly present.

Google and Facebook seem like very distant companies whose presence only matters somewhere far away from Finland. I speculate that in many ways the invisibility of the actions and actual work of the big tech companies has saved them from ethical review and responsibility. You don’t really think of them as companies consisting of people; the people are rendered out of the picture (with the exception of a single face, perhaps Mark Zuckerberg’s). Even now that we have started to talk about social responsibility for the effects of algorithms, the fact that running a data center consumes a ridiculous amount of power is not widely known and is hardly ever brought up in ethical discussions of sustainability. Nobody remembers, either, how new tech has changed how the whole country and its crucial infrastructure operate – instead we talk about more familiar topics everyone has experienced first-hand, like the school system, taxes or health care.

On top of the illusion that the internet lives in a space disconnected from the physical world around us, the technicality that goes into internet infrastructure is intimidatingly complex. For many, seeing code causes an immediate adverse reaction, and in discussions of the complexity of data politics and deals, one gets so drawn into the details that it’s difficult to see the bigger picture. It’s a scary thought that the world around us may be turning too complex for people to make rational, well-informed political decisions about the topics that truly make changes under the surface, especially as the political game grows more emotionally provocative along with populism.