Questioning intelligence

By Marina Lermant, Sunday, 12 December 2021

In the second week of Extended Intelligences, led by Ramon Sanguesa and Lucas Lorenzo Pena, we began talking about what intelligence actually means. I find this curious because it was the main thing I was left thinking about after the previous week: what intelligence means at its core. I think it’s important that we questioned the very name of the module, challenging the idea of intelligence and what it can mean, whether human or machine driven. Lucas raised an interesting point: if we can’t define intelligence, how do we expect to make it? Although we have an idea of where our own intelligence lies, it becomes impossible to understand or follow the path of that intelligence as it is shifted and expanded by the machines we program. It seems that truly “intelligent” AI is capable of self-improvement just as humans are, using the past as building blocks to teach itself to improve over time, increasing optimization without further need for human input. For an AI to be intelligent in the way humans think about intelligence, there is a need for randomness and creativity. This imaginative quality leads into the topic of animistic design and how it can be beneficial in AI.

Animistic Design and ethics

Animistic design is an “investigative strategy that exploits degrees of collaborativity, curated uncertainty and unpredictability to imagine forms of digital interaction”. It acts as a milieu of human and nonhuman. The problem with anticipatory design is that humans are not rational, so designing in this way will inherently carry bias. To me, designing with an animistic framework leaves more room for creativity, rather than following monotonous programs. One can argue that AI will never compare to the way humans think, but by using animistic design we can avoid standardization and increase the chances of spontaneity. Currently, my relationship with technology is often very deterministic: I am the thinking, living, breathing human, and the technology I use is simply a machine that assists me with tasks. If I imagine how that relationship might change with animistic design, I believe it would allow for a more “personal” interaction with my devices. While I know my devices aren’t sentient, and I don’t want them to be, it would be interesting if they were more heavily embedded with a framework based on randomness, one closer to a human’s scattered, curious, and sometimes irrational way of thinking. Giving machines more perceived freedom in this way might allow us to feel less distant or disconnected from them, since we are the ones who made them.

Because machines so often end up biased, even when we try not to embed them with biased data and inputs, we must actively fight against it. It is not enough to try to remain neutral with machines, because they will undoubtedly become biased. They should be programmed in a way that works against bias, reflecting not how society might actually think but, perhaps, an objectively more moral way of thinking, if that is at all possible. I took MIT’s Moral Machine test online to see what my answers would be. The questions all involve a self-driving car whose brakes have failed, and you decide the outcome of the scenario, sacrificing the lives of either option A or option B. While taking the test, I realized all of my answers were completely biased by my own individual beliefs and lived experiences. At the end, the website shows how your answers compare to those of others. It was interesting to see my moral choices versus the average answers of others, where mine aligned or differed, and what reasons I might have had for that. My answers show my bias, whether good or bad. Tests like these show how pivotal the people in charge of creating AI systems are: if they do not draw on large samples from the general population when deciding what kinds of morals machines should have, they determine actions for many based on the mindsets of a few, which could lead to incredibly dangerous and skewed outcomes.

Using AI in a conceptual artifact

In groups, we were tasked with imagining an object that people can use in their daily lives and that uses AI. This differed from the previous week, whose project asked us to use AI as a speculative tool to answer a question; this task asked us to embed it in a more tangible artifact. My group’s ideas were far-fetched, which I liked because it forced us to think about the extent and kinds of topics AI can handle. Our topic revolved around dreams, something complex and not entirely understood in and of itself. While working out the logistics of the device, we were constantly questioning ourselves and weighing the pros and cons of the device as we planned for it to function.